Continuing with the work to refine and improve how we build Ubuntu in an open, transparent, and collaborative way, I want to take a few minutes to discuss some work going on to improve the regularity of our planning and the benefits this brings.
Traditionally planning for Ubuntu has worked like this.
While this has served us well, there are a few problems with this approach. The most notable is that we work in software, and a lot changes in software over six months. We define a set of work items and prepare the burndown, and then, if requirements or direction change, it can be difficult to reflect those changes across our community: we have to go and postpone a bunch of work items and rebuild our burndowns. Even though the changes are made to open blueprints, this can leave folks across our community out of sync. It also creates the misconception that everything at UDS is locked in for the duration of the six month cycle. If something changes in our strategy or a new opportunity opens up, it can be difficult to change course with everyone on the same page.
Solving this is part of our theme of making Ubuntu engineering as transparent and agile as possible.
One approach we are experimenting with in the Ubuntu Engineering Management team at Canonical is to increase the regularity and transparency of how we plan. Instead of locking in every six months we will do it like this:
Now, to set expectations clearly: this is just an idea for how to improve this workflow, and we are trying it for the first time this week. The hope is that it will dramatically increase the transparency of which teams are working on what, making it easier for others to (a) know what is going on and (b) participate in areas of interest.
My team is currently preparing the work items for April and you will be able to see the final burndown here when it is complete. From there you will be able to see all the blueprints.
I will provide plenty of feedback on what is working well and less well, and your feedback is welcomed, as ever, in the comments.

Building Re-usable Processes
As I mentioned in my previous blog entry, we want to make virtual UDS an event that is repeatable and useful for not just UDS but also for domain-specific events too (such as a LoCo themed UDS). The goal is that this event format is repeatable for our wider community.
Likewise, the monthly planning process is designed to be repeatable for our wider community too, making it simple to get everyone on the same page for planning and executing on awesome projects.
As ever, feedback is always welcome, but I think this combo of a wider planning event every three months combined with monthly work item sync-ups and planning will result in a pretty effective formula for helping Ubuntu to be as effective, transparent, and collaborative as possible.
Of course, the one thing you don’t want to cause a problem is your integration with social networks, because you end up spamming everyone as you try to fix things.
Guess what plugin seems to be causing the problem?
I was previously trying to fix server errors with the blog. Having disabled and re-enabled all the plugins, things seem to be working, but it could be a problem that only affects new posts, so this is another test. Keep your fingers crossed.
Update 1: seems like it affects new posts only.
Update 2: de-activating one by one to find out which one is the killer plugin.
I’m getting server 500 errors whenever I post to WordPress. Common wisdom says the solution to debugging is disable all your plugins and then re-enable one by one until you find the cause. That means a bunch of test posts, of which this is one. Apologies in advance for the noise.
Update 1: I disabled all plugins and posting worked.
Update 2: I re-enabled some of the essentials (Akismet, Google Analytics, etc). This is a test to see if posting still works.
Update 3: On to the “frivolous” plugins (iframe, youtube, etc). All’s well?
Update 4: And re-enabling the suspected troublemakers. How now?
Update 5: The final suspect, WP Super Cache…
It’s been quite a week. But in a really good way. It started off last Sunday with a trip to see Richard Herring‘s latest show, “Talking Cock.” The subject matter should be obvious from the title, and it says something about the topics he has covered in the past that this is probably the lightest and fluffiest of the four shows that I’ve seen. It’s very enjoyable and not particularly crude.
I spent the middle of the week at the Photography Farm, a three day residential workshop run by the award-winning Lisa Devlin. I’ll write more about it and share some of my photographs in a couple of weeks, but for now suffice it to say that it was a challenging, fun and exhausting time. It will take me a while to fully absorb it all, but I know it will have a huge impact on my wedding photography. But most importantly I met some fantastic new friends.
This weekend was Big Finish Day 3 in Barking, where I was proud to be representing The Doctor Who Podcast, wearing one of their rather snazzy t-shirts. Laura and I recorded lots of interviews with contributors to Doctor Who and Big Finish. I won’t list them all here to preserve some element of surprise, but I’m grateful to so many people for giving up their time to talk to us.
Then back home, via a whistle-stop visit to Emma Jane and James Westby’s wedding reception in Nottingham, where I managed to fall over spectacularly on sheet ice less than two seconds after warning others not to do the same. Ow.
The last thing I wanted on Sunday evening was to go back out into the cold, but I’m glad I made the effort to see Mark Thomas’ show “Bravo Figaro.” It’s a performance piece rather than stand up, and is in turns funny, dark and touching.
And now, I’m off to bed.
I use get-iplayer to download TV programs so I can watch them on the devices that suit me, when it suits me. What follows is how I install get-iplayer on a headless Debian 6.0 server I have at home. The server in question is really low-powered, so building from source was not an option.
To install the latest version of get-iplayer (currently 2.82) on Debian Squeeze, a couple of additional package repositories need to be enabled.
Enable the Debian Backports repository by adding the following line to /etc/apt/sources.list.d/backports.list:

deb http://backports.debian.org/debian-backports squeeze-backports main
Enable the Debian Multimedia repository by adding the following lines to /etc/apt/sources.list.d/multimedia.list:

deb http://www.deb-multimedia.org squeeze main non-free
deb http://www.deb-multimedia.org squeeze-backports main
Update the repositories:

sudo apt-get update
Install the deb-multimedia-keyring package:

sudo apt-get --allow-unauthenticated install deb-multimedia-keyring
Install get-iplayer (currently v2.78) from the official Debian repositories; this will also install its dependencies:

sudo apt-get install get-iplayer
Install the get-iplayer suggested packages:

sudo apt-get install ffmpeg rtmpdump libdata-dump-perl libid3-tools libcrypt-ssleay-perl libio-socket-ssl-perl
I have seen it suggested that mplayer should also be installed. I've not determined whether that is an absolute requirement, but this is how to install it on a headless Debian computer:

sudo apt-get --no-install-recommends install mplayer
Finally, upgrade get-iplayer to v2.82:

sudo apt-get install libmp3-tag-perl libxml-simple-perl
wget http://ftp.uk.debian.org/debian/pool/main/g/get-iplayer/get-iplayer_2.82-2_all.deb
sudo dpkg -i get-iplayer_2.82-2_all.deb
At this point get-iplayer should be good to go, and the get-iplayer website and man get-iplayer will assist you.
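For recurring use, get-iplayer can read defaults from an options file instead of needing flags on every run. The snippet below is a minimal sketch; the file path, the "option value" syntax, and the option names are assumptions from memory of get-iplayer's documentation, so verify them against get-iplayer --help before relying on them:

```
# ~/.get_iplayer/options — one "option value" per line (assumed syntax)
output /srv/media/iplayer
type tv
```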
I enjoyed the book; I must have, considering I bought the second edition! The material has been updated where needed, but it's still lacking a section on ACLs, so I'll stick to my score of 8/10 for people purchasing this book for the first time and look forward to another refresh in a couple of years' time. If you already own the first edition then your choice is a little harder: this book is still an excellent starting point for the cost, but don't expect much beyond a refresh of the same content.
Disclaimer: Part of my previous review is quoted in the marketing blurb at the front of the book. I did however pay for this book myself.
Just … wow.
Mail.app lost my default signature.
Nice one, Tesco.
Thank you, Firefox. I love that ⌘← takes me back a page, rather than to the start of the line in my textarea.
45 minutes of work: lost.
Sometimes NFS breaks, and gives helpful messages like:
mount.nfs: connection timed out
Stale NFS handle on clients.
While I’m confident that my /etc/exports and other configuration files are correct, it still insists on misbehaving.
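For context, the sort of /etc/exports entry in play here looks like the following; the path and client subnet are placeholders for illustration, not my actual configuration:

```
# /etc/exports — example entry (placeholder path and subnet)
/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)
```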
Below is a random shell script I seem to have created to fix the NFS server:

#!/bin/bash
set -e
/etc/init.d/nfs-kernel-server stop
/etc/init.d/nfs-common stop
/etc/init.d/rpcbind stop
rm -Rf /var/lib/nfs
mkdir /var/lib/nfs
mkdir /var/lib/nfs/v4recovery /var/lib/nfs/rpc_pipefs
for f in /var/lib/nfs/etab \
         /var/lib/nfs/rmtab \
         /var/lib/nfs/xtab; do
    [ -e $f ] || touch $f
done
/etc/init.d/rpcbind start
sleep 2
/etc/init.d/nfs-common start
sleep 2
/etc/init.d/nfs-kernel-server start
echo "NFS may now work"
exportfs -f
Yes… “NFS may now work” … that sums it up about right.
Microsoft .Net framework unhandled exception. sdrv.ms/16aXyJ2
Otherwise I've done little coding recently:
I'm pondering libpcap a little, for work purposes. There is a plan to write a daemon which will count incoming SYN packets, per IP, and drop access for clients that make "too many" requests "too quickly".
This plan is a simple anti-DoS tool which might or might not work in the real world. We do have a couple of clients that seem to be DoS magnets and this is better than using grep + sort against apache access logs.
For cases where a small number of source IPs make very many requests it will help. For the class of attacks where a huge botnet has members making only a couple of requests each, it won't do anything useful.
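Before reaching for libpcap, the same per-IP SYN counting can be prototyped by post-processing tcpdump output in the shell. This is only a sketch of the idea, not the planned daemon: the field positions assume tcpdump's default one-line IPv4/TCP output format, and the threshold is an arbitrary placeholder.

```shell
# Count SYN packets per source IP and print sources exceeding a threshold.
# Assumes tcpdump's default line format:
#   <time> IP <src-ip>.<src-port> > <dst-ip>.<dst-port>: Flags [S], ...
count_syns() {
  awk -v limit="$1" '
    / Flags \[S\],/ {
      # Field 3 is the source address with the port as a trailing
      # dot-component; strip ".<port>" to get the bare IP.
      src = $3
      sub(/\.[0-9]+$/, "", src)
      count[src]++
    }
    END {
      for (ip in count)
        if (count[ip] > limit)
          print ip, count[ip]
    }'
}

# Example with canned input; real use would pipe from something like:
#   tcpdump -lni eth0 "tcp[tcpflags] & tcp-syn != 0" | count_syns 100
count_syns 1 <<'EOF'
10:00:00.000001 IP 192.0.2.10.50000 > 203.0.113.5.80: Flags [S], seq 1, win 64240, length 0
10:00:00.000002 IP 192.0.2.10.50001 > 203.0.113.5.80: Flags [S], seq 2, win 64240, length 0
10:00:00.000003 IP 198.51.100.7.40000 > 203.0.113.5.80: Flags [S], seq 3, win 64240, length 0
EOF
# -> 192.0.2.10 2
```

A real daemon would of course do this over a sliding time window rather than a whole capture, which is where libpcap comes in.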
We'll see how it turns out.
This is an excellent paper classifying bugs in Linux filesystems. The results seem to be generally applicable to bugs in open source kernel code.
Waffle with hot cherries sdrv.ms/10wfBYD