Recently there was yet another storm in a teacup that distracted us from creating and sharing Ubuntu and our flavors with others. I am not going to dive into the details of this particular incident…it has been exhaustively documented elsewhere…but at the heart of this case was a concern about the conduct some folks engaged in around something they disagreed with. This is not the first time we have seen disappointing conduct in a debate, and I wanted to share some thoughts on this too.
In every community I have worked in I have tried to build an environment in which all viewpoints that challenge decisions or decision makers are welcome, with the requirement that they are built on a platform of respectful discourse; this is the essence of our Code of Conduct. Within the context of an Open Source community we also encourage this engagement around differences to be expressed as solutions, with a focus on solving problems; this helps us to be productive and move the project forward. This is why we have such a strong emphasis on blueprints, specs, bugs, and other ways of expressing issues and exploring solutions.
Within the context of this most recent issue I saw three problems (problems I have seen present in other similar arguments too):
Ubuntu is a community filled with passionate people, and I love that we have folks who are critical of our direction and decisions. If everyone simply agreed with everything we are doing, we would not always make the right decisions; our diversity is what makes Ubuntu and our flavors such a great place to participate.
As I said at the beginning of this post, it is important that all viewpoints are welcome, but we have to get the tone and conduct of some of these debates under control. The sheer level of sensationalist and confrontational language that often surfaces in these disagreements doesn’t serve anyone but hungry journalists looking for page hits.
Now, I am not suggesting here that anyone should change any of their viewpoints. If you vehemently disagree with an aspect of what we are doing in Ubuntu or at Canonical, that is fine and, of course, welcome. What I am appealing to everyone to do, though, is to treat others as you wish to be treated, with respect and dignity, and let’s keep the sensationalism out of our community and focus on what we do best…building a world-class Free Software platform and its rich ecosystem of flavors.
Yesterday was the (almost) annual OpenTech conference. For various reasons, the conference didn’t happen last year, so it was good to see it back this year.
OpenTech is the conference where I most wish I could clone myself. There are three streams of talks and in pretty much every slot there are talks I’d like to see in more than one stream. These are the talks that I saw.
Electromagnetic Field: Tales From the UK’s First Large-Scale Hacker Camp (Russ Garrett)
Last August, Russ was involved in getting 500 hackers together in a field near Milton Keynes for a weekend of hacking. The field apparently had better connectivity than some data centres. Russ talked about some of the challenges of organising an event like this and asked for help organising the next one which will hopefully take place in 2014.
Prescribing Analytics (Bruce Durling)
Bruce is the CTO of Mastodon C, a company that helps people extract value from large amounts of data. He talked about a project that crunched NHS prescription data and identified areas where GPs seem to have a tendency to prescribe proprietary drugs rather than cheaper generic alternatives.
GOV.UK (Tom Loosemore)
Tom is Deputy Director at the Government Digital Service. In less than a year, the GDS has made a huge difference to the way that the government uses the internet. It’s inspirational to see an OpenTech stalwart like Tom having such an effect at the heart of government.
How We Didn’t Break the Web (Jordan Hatch)
Jordan works in Tom Loosemore’s team. He talked in a little more detail about one aspect of the GDS’s work. When they turned off the old DirectGov and Business Link web sites in October 2012, they worked hard to ensure that tens of thousands of old URLs didn’t break. Jordan explained some of the tools they used to do that.
The ‘State of the Intersection’ address (Bill Thompson)
Bill’s talk was couched as a warning. For years, talks at OpenTech have been about the importance of Open Data and it’s obvious that this is starting to have an effect. Bill is worried that this data can be used in ways that are antithetical to the OpenTech movement and warned us that we need to be vigilant against this.
Beyond Open Data (Gavin Starks)
Gavin has been speaking at OpenTech since the first one in 2004 (even before it was called OpenTech) and, as with Tom Loosemore, it’s great to see his ideas bearing fruit. He is now the CEO of the Open Data Institute, an organisation founded by Tim Berners-Lee to promote the production and use of Open Data. Gavin talked about how the new organisation has been doing in its first six months of existence.
Silence and Thunderclaps (Emma Mulqueeny)
Emma has two contradictory-sounding ideas. The Silent Club is about taking time out in our busy lives to sit and be still and silent for an hour or so; and then sending her a postcard about what you thought or did during that time. The Thunderclap is a way to get a good effect out of that stack of business cards that we all seem to acquire.
Thinking Pictures (Paul Clarke)
Paul takes very good photographs and used some of them to illustrate his talk which covered some of the ethical, moral and legal questions that go through his mind when deciding which pictures to take, share and sell.
1080s – the 300seconds project (300seconds)
The 300 seconds project wants to get more women talking at conferences. And they think that one good way to achieve that is for new speakers to only have to talk for five minutes instead of the full 20 or 40 minutes (or more) that many conferences expect. The Perl community has been using Lightning Talks to do this with great success for over ten years, so I can’t see why they shouldn’t succeed.
Politics, Programming, Data and the Drogulus (Nicholas Tollervey)
Nicholas is building a global federated, decentralized and openly writable data storage mechanism. It’s a huge task and it’s just him working on the project on his commutes. Sounds like he needs a community. Which is handy as the very next talk was…
Scaling the ZeroMQ Community (Pieter Hintjens)
Pieter talked about how the ZeroMQ community runs itself. Speaking as someone who has run a couple of open source project communities, some of his rules seemed a little harsh to me (“you can only expect to be listened to if you bring a patch or money”) but his underlying principles are sound. All projects should aim to reach a stage where the project founders are completely replaceable.
The Cleanweb Movement (James Smith)
I admit that I knew nothing about the Cleanweb Movement. Turns out it’s a group of people who are building web tools which make it easier for people to use less energy. Which sounds like a fine idea to me.
Repair, don’t despair! Towards a better relationship with electronics (Janet Gunter and David Mery)
Janet and David started the Restart Project, which is all about encouraging people to fix electrical and electronic devices rather than throwing them out and buying replacements. They are looking for more volunteers to help people to fix stuff (and to teach people how to teach stuff).
CheapSynth (Dave Green)
Dave Green has been missing from OpenTech for a few years, but this was a triumphant return. He told us how you can build a cheap synth from a repurposed Rock Band game controller. He ended his talk (and the day) by leading the room in a rendition of Blue Monday.
As always, OpenTech was a great way to spend a Saturday. Thank you to all of the organisers and the speakers for creating such an interesting day. As I tweeted during the day:
Being at @opentechuk always makes me embarrassed that I’m not getting more done. Which is, I suppose, the point of it :/
— Dave Cross (@davorg) May 18, 2013
But I spent yesterday hacking on something. More on that later.
'The Electric Club'
St Marks Road,
Wolverhampton WV3 0QH.
A chance for the LUG to present the benefits of Linux and Open Source Software
Entry Fee - £ by way of donation,
Make use of the bar & refreshments
Special interest groups (That's us!)
As many of you will know, our goal is to get the Ubuntu phone in a state where it can be used on a daily basis for testing, and importantly, finding bugs, UI issues, and other details that help us to refine the overall Ubuntu Touch experience. Progress is on track for the end of May.
I decided to start dogfooding a little early (please remember, we are shooting for the end of May to be broadly in shape for dogfooding, so if you try, don’t expect things to be ready right now), so today I put my SIM card in my Galaxy Nexus with Ubuntu Touch and things are working pretty well so far. It seems that my data is no longer getting wiped on image updates, which helps testing significantly, so I am regularly upgrading with the daily images.
As ever, if you decide to test, you are doing so at your own risk…don’t be surprised to see bugs, crashes, and potential data loss (although I have not seen any data loss so far).
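If you want to follow along, flashing a device went roughly like this at the time of writing (a best-effort sketch from memory; the phablet-tools package name and the phablet-flash flags may well have changed, so check the Touch install documentation first):

# install the flashing tools (PPA name from memory)
sudo add-apt-repository ppa:phablet-team/tools
sudo apt-get update
sudo apt-get install phablet-tools

# with a supported device attached over USB, bootstrap it
# onto the latest daily image (this wipes the device!)
phablet-flash -b

After the initial bootstrap, subsequent runs of phablet-flash pick up the newer daily images.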
Some notes about my experience dogfooding:
The primary blockers in my way right now for normal use out and about are:
All of these are on the TODO list for completion by the end of the month.
I have been filing bugs for a bunch of the issues I am seeing on a day to day basis and the team are working hard to hit the end of May goal. Overall progress is looking good.
Although I have been using the daily images for quite some time on a phone without a SIM card, using it as an actual phone is even more motivating than before. I can feel the phone coming together, and when we get many of these issues fixed it is going to deliver a far superior experience to the Android phone I was using before.
Put it in your calendars: May 28th is Fedora 19 virtualization test day.
Every day is libguestfs test day. Just follow the instructions here.
Obviously it would be good to use this to scan guests, especially in a cloud scenario where you want to help naive users not to deploy guests that are just going to get pwned the minute they go online.
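To give a flavour of the building blocks involved, here is a minimal sketch of poking at a guest image read-only with the standard libguestfs tools (the disk image name is a placeholder):

# report the OS, packages and other metadata inside the guest
virt-inspector -a disk.img

# or open an interactive read-only shell into the guest filesystem
guestfish --ro -a disk.img -i
><fs> cat /etc/redhat-release
><fs> ll /etc/ssh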
(Thanks Daniel Kopecek and Peter Vrabec)
A while back I started a project called the Ubuntu Advocacy Kit. The goal is simple: create a single downloadable kit that provides all the information and materials you need to go out and help advocate Ubuntu and our flavors to others. The project lives here on Launchpad and is available in this daily PPA. If you want to see the kit in action just run:

sudo add-apt-repository ppa:uak-admins/uak
sudo apt-get update
sudo apt-get install uak-en
Now open the dash and search for “advocacy”. Click the icon to see the kit load in your browser.
We discussed the UAK this week at UDS and I want to get the kit to 1.0 level of completeness. This doesn’t require a huge amount of work, just getting a core set of content written up in a concise, simple, but detailed fashion. I want to complete this work and then get the kit up on loco.ubuntu.com as something people can download to get started advocating Ubuntu and our flavors.
I have created a blueprint to track this work and I am stubbing out a bunch of pages in the kit for pages that I think we will need as part of a 1.0 release.

And why are you telling me this?
Well, I am looking for help.
If you enjoy writing and have a knowledge of good quality advocacy, I would like to invite you to write some content. If you can just reply to this post in the comments (or anywhere else I tend to look, such as email or IRC), we can coordinate who works on what and I will update the blueprint where appropriate.
Thanks for reading!
Recently the Technical Board made a decision to sunset Brainstorm, the site we have been using for some time to capture a list of what folks would like to see fixed and improved in Ubuntu. Although the site has been in operation for quite some time, it had fallen into something of a state of disrepair. Not only was it looking rather decrepit and old, but the ideas highlighted there were not being curated and fed into the Ubuntu development process. Some time ago the Technical Board took a work item to try to solve this problem by regularly curating the most popular items in brainstorm with a commentary around technical feasibility, but the members of the TB unfortunately didn’t have time to fulfill this. As such, brainstorm turned into a big list of random ideas, ranging from the sublime to the ridiculous, and largely ignored by the Ubuntu development process.
Now, some folks have mused on the decision to sunset brainstorm and wondered if this is somehow a reflection on our community and our openness to ideas. I don’t think this is the case. While it is always important to build an environment where ideas are openly discussed and debated, ideas are free and relatively simple to come by; the real challenge is converting that awesome vision in your head into something we can see and touch and deliver to others, and this is not quite so free and simple. While Brainstorm provided a great place to capture the ideas, and we had no shortage of them, the challenge was connecting brainstorm to the people who were happy and willing to perform the work, and it didn’t really serve this purpose very well.
There were two problems with this. Firstly, picking up other people’s popular ideas is not how Open Source traditionally works. Open Source is built on a philosophy of scratching your own itch, traditionally fueled by programmers fixing their annoyances and building the features and applications they want. Now, this is not to say a non-programmer can’t rally the community around their idea and build momentum around an implementation, but doing this requires significantly more effort than a fire-and-forget submission into brainstorm. In other words, just because an idea is popular doesn’t necessarily mean it is interesting enough for a developer to want to implement it. Secondly, brainstorm started to garner an unrealistic social expectation that popular ideas would be automatically added to the TODO list of prominent Ubuntu developers, which was never the case.
Today at UDS we had a discussion about these deficiencies in brainstorm in traversing the chasm between idea and implementation, and Randall Ross had an interesting idea: with brainstorm retired we should re-focus the brainstorm URL to provide guidance, tips, and tricks for how to take an idea and rally support around it to develop an implementation. As an example, over the years I have discovered that taking an idea and building a well-formed spec with detailed UI mock-ups and architectural diagrams, a detailed blueprint, regular meetings, and burndown charts all significantly helps to take ideas from fiction to fandom. Equipping our community with the skills and tools to bring these ideas to fruition is a better use of our time.
So, the TL;DR of all of this is…brainstorm was a great idea at the time, but it didn’t effectively drive the most popular ideas in our community to fruition and delivery in Ubuntu. We want to help provide guidance and best practice to help our community be more successful in converting their ideas into development plans and getting people interested in participating.
These two improvements come under the banner of 'External Facts'. The first allows you to surface your own facts from a static file, either plain text key value pairs or a specific YAML / JSON format. These static files should be placed under /etc/facter/facts.d:

$ sudo mkdir -p /etc/facter/facts.d

# note - the .txt file extension
$ echo 'external_fact=worked' | sudo tee /etc/facter/facts.d/external_test.txt
external_fact=worked

$ facter external_fact
worked
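The YAML variant reads naturally for anything slightly more structured; a minimal sketch (the file and fact names here are made up for illustration):

$ cat /etc/facter/facts.d/location.yaml
---
datacentre: london
rack: a14

$ facter datacentre
london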
At its simplest this is a way to surface basic, static details from system provisioning and other similar large events, but it's also an easy way to include details from other daemons and cronjobs. One of my first use cases for this was to create 'last_backup_time' and 'last_backup_status' facts that are written at the conclusion of my backup cronjob. Having the values inserted from out of band is a much nicer prospect than writing a custom fact that parses the cron logs.
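As a rough sketch of that cronjob idea (the backup command and file names are hypothetical), the wrapper just drops a plain text fact file at the end of each run:

#!/bin/bash
# hypothetical backup wrapper - run the real backup, then record
# the outcome as external facts for facter to pick up
if /usr/local/bin/run-backup; then
    status=ok
else
    status=failed
fi
cat > /etc/facter/facts.d/backup.txt <<EOF
last_backup_time=$(date +%s)
last_backup_status=$status
EOF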
If that's a little too static for you then the second usage might be what you're looking for. Any executable scripts dropped in the same directory that produce the same output formats as allowed above will be executed by facter when it's invoked:

# scripts must be executable!
$ sudo chmod a+rx /etc/facter/facts.d/process_count

$ cat /etc/facter/facts.d/process_count
#!/bin/bash
count=$(ps -efwww | wc -l | tr -s ' ')
echo "process_count=$count"

$ facter process_count
209
The ability to run scripts that provide facts and values makes customisation easier in situations where ruby isn't the best language for the job. It's also a nice way to reuse existing tools or for including information from further afield - such as the current binary log in use by MySQL or Postgres, or the host's current state in the load balancer.
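For instance, the MySQL case could look something like this sketch (it assumes the mysql client can authenticate non-interactively, e.g. via ~/.my.cnf):

$ cat /etc/facter/facts.d/mysql_binlog
#!/bin/bash
# surface the current MySQL binary log file as a fact
log=$(mysql -NBe 'SHOW MASTER STATUS' | awk '{print $1}')
echo "mysql_binlog=$log"

$ facter mysql_binlog
mysql-bin.000042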
While there have been third party extensions that provided this functionality for a while it's great to see these enhancements get included in core facter.
Hot on the heels of my last post showing Unity 8 running on Mir on a Macbook Pro Retina, there were some folks who were curious about how well Unity and Mir work on a phone.
Well, thanks to your friend and mine, Kevin Gunn, you can see a video of Unity 8 on Mir running on a Galaxy Nexus (which is by no means a super-powerful smartphone these days):
Can’t see the video? See it here!
Again, just to emphasize, this has not been through a round of performance optimizations, so you can expect additional performance improvements in the future, but I think this demonstrates that we are heading in the right direction.
Today my main machine was down for about 8 hours. Oops.
That meant when I got home, after a long and dull train journey, I received a bunch of mails from various hosts each saying:
Slaughter is my sysadmin utility which pulls policies/recipes from a central location and applies them to the local host.
Slaughter has a bunch of different transports, which are the means by which policies and files are transferred from the remote "central host" to the local machine. Since git is supported I've now switched my policies to be fetched from the master github repository.
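From memory the transport is chosen in slaughter's configuration file; the option names below are best-effort recollections, so verify them against the slaughter documentation before relying on them:

# /etc/slaughter/slaughter.conf (option names from memory)
transport = git
prefix    = git://github.com/example/slaughter-policies.git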
In other news I've fettled with lumail a bit this week, but I'm basically doing nothing until I've pondered my way out of the hole I've dug myself into.
Like mutt lumail has the notion of "limiting" the display of things:
Unfortunately the latter has caused an annoying, and anticipated, failure case. If you open a folder and cause it to only show unread messages, all looks good. Until you read a message. At which point it is no longer allowed to be displayed, so it disappears. Since you were reading a message the next one is opened instead. Which then becomes marked as read, and should no longer be displayed, because we've said "show me new/unread-only messages please".
The net result is that if you show only unread messages and make the mistake of reading one .. you quickly cycle through reading all of them, and are left with an empty display, as each message in turn is opened, read, and marked as non-new.
There are solutions, one of which I documented on the issue. But this has a bad side-effect that message navigation is suddenly complicated in ways that are annoying.
For the moment I'm mulling the problem over and I will only make trivial cleanup changes until I've got my head back in the game and a good solution that won't cause me more pain.
Recently the Mir and Unity Next teams got Unity 8 up and running on Mir. Now, this work is still very early in development and neither Mir nor Unity Next are finished yet, but I reached out to Michael Zanetti, who is on the team, and asked him to put together a short video demo to show the progress of this work. This demo shows the phone/tablet part of the Unity 8 codebase; the final desktop version will come later.
Here it is:
Can’t see the video? Click here!
As you can see, impressive progress is being made; this demo is running on a MacBook Pro Retina utilizing the full resolution of 2880×1800 pixels and using Intel HD 4400 graphics. The performance is already looking great, even though the team haven’t yet done a deep dive into performance optimization.