If you use a Linux or Unix box with bash or zsh and you haven’t come across Liquid Prompt, then I suggest you head there right now to install it. I’m loving having more info on the status line, especially the version control details, but even having CPU load and temperature along with battery life right under where I’m typing is really useful.
As I slowly upgrade all my machines to Debian 8.0 (jessie) they’re all ending up with systemd. That’s fine; my laptop has been running it since it went into testing, whenever that was. Mostly I haven’t had to care, but I’m dimly aware that it has a lot of bits I should learn about to make best use of it.
Today I discovered systemctl is-system-running. I’m not sure why I’d use it, but when I ran it, it responded with degraded. That’s not right, thought I. How do I figure out what’s wrong? systemctl --state=failed turned out to be the answer:

# systemctl --state=failed
UNIT                           LOAD   ACTIVE SUB    DESCRIPTION
● systemd-modules-load.service loaded failed failed Load Kernel Modules

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
OK, so it’s failed to load some kernel modules. What’s it trying to load? systemctl status -l systemd-modules-load.service led me to /lib/systemd/systemd-modules-load, which complained about various printer modules not being able to be loaded. It turned out this was because CUPS had dropped them into /etc/modules-load.d/cups-filters.conf on upgrade, and as I don’t have a parallel printer I hadn’t compiled up those modules. One of my other machines had also had an issue with starting up filesystem quotas (I think because some filesystems hadn’t mounted properly on boot - my fault rather than systemd’s). Fixed that up and then systemctl is-system-running started returning a nice clean running.
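If you hit the same thing, the fix is roughly this (lp, ppdev and parport_pc are illustrative module names - the usual parallel printer suspects - so check what your own cups-filters.conf actually lists):

# See exactly which modules the unit is trying to load
systemctl status -l systemd-modules-load.service
cat /etc/modules-load.d/cups-filters.conf

# Comment out the modules you don't have
sed -i -e 's/^lp$/#lp/' -e 's/^ppdev$/#ppdev/' -e 's/^parport_pc$/#parport_pc/' \
    /etc/modules-load.d/cups-filters.conf

# Re-run the unit and check the overall state again
systemctl restart systemd-modules-load.service
systemctl is-system-running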
Now this is probably something that was silently failing back under sysvinit, but of course nothing was tracking that other than some output on boot up. So I feel that I’ve learnt something minor about systemd that actually helped me clean up my system, and sets me in better stead for when something important fails.
I was first elected to the Software in the Public Interest board back in 2009. I was re-elected in 2012. This July I am up for re-election again. For a variety of reasons I’ve decided not to stand; mostly a combination of the fact that I think 2 terms (6 years) is enough in a single stretch and an inability to devote as much time to the organization as I’d like. I mentioned this at the May board meeting. I’m planning to stay involved where I can.
My main reason for posting this here is to cause people to think about whether they might want to stand for the board. Nominations open on July 1st and run until July 13th. The main thing you need to absolutely commit to is being able to attend the monthly board meeting, which is held on IRC at 20:30 UTC on the second Thursday of the month. They tend to last at most 30 minutes. Of course there’s a variety of tasks that happen in the background, such as answering queries from prospective associated projects or discussing ongoing matters on the membership or board lists depending on circumstances.
It’s my firm belief that SPI do some very important work for the Free software community. Few people realise the wide variety of associated projects. SPI offload the boring admin bits around accepting donations and managing project assets (be those machines, domains, trademarks or whatever), leaving those projects able to concentrate on the actual technical side of things. Most project members don’t realise the involvement of SPI, and that’s largely a good thing as it indicates the system is working. However it also means that there can sometimes be a lack of people wanting to stand at election time, and an absence of diversity amongst the candidates.
I’m happy to answer questions from anyone who might consider standing for the board; #spi on irc.oftc.net is a good place to ask them - I am there as Noodles.
Time has come once again to announce WYLUG’s monthly meeting. As usual it will be held in the Lord Darcy. As usual it will start at around 7pm. There will be no obligation to book in advance. However, the man with the red hat lanyard will not be there to guide your steps this month.
It turns out that a Raspberry Pi does a very good job of being a print server for Google Cloud Print. Thanks to https://matthew.mceachen.us/blog/add-google-cloudprint-wifi-access-to-your-older-printer-with-a-raspberry-pi-1342.html I can now print at home directly from my phone!
Update: Replacing the battery and retraining the receiver fixed it. I suppose it must have had enough juice to flash the LED but not transmit.
A few days ago my CurrentCost started reading just dashes. There’s also no transmitter icon, so I think it’s not receiving anything from the transmitter. It looks like this:
I went and fished the transmitter box out of the meter closet expecting its batteries to be dead, but it still has its red LED flashing periodically, so I don’t think it’s that.
I did the thing where you hold down the button on the transmitter for 9 seconds and also hold down the V button on the display to make them pair. The display showed its “searching” screen for a while but then it went back to how it looks above.
Anyone had that happen before? It’s otherwise worked fine for 4 years or so (batteries replaced once).
Most of the time, when I've got some software I want to write, I do it in python or sometimes bash. Occasionally though, I like to slip into something with a few more brackets. I've written a bit of C in the past and love it but recently I've been learning Go and what's really struck me is how clever it is. I'm not just talking about the technical merits of the language itself; it's clever in several areas:
You don't need to install anything to run Go binaries.
At first - I'm sure like many others - I felt a little revulsion when I heard that Go compiles to statically-linked binaries, but after having used and played with Go a bit over the past few weeks, I think it's rather clever and was somewhat ahead of the game. In the current climate where DevOps folks (and developers) are getting excited about containers and componentised services, being able to simply curl a binary and have it usable in your container without needing to install a stack of dependencies is actually pretty powerful. It seems there's a general trend towards preferring readiness of use over efficiency of space used, both in RAM and on disk. And it makes sense; storage is cheap these days. A 10MiB binary is no concern - even if you need several of them - when you have a 1TiB drive. The extravagance of large binaries is no longer so relevant when you're comparing it with your collection of 2GiB bluray rips. The days of needing to count the bytes are gone.
Go has the feeling of C but without all that tedious mucking about in hyperspace memory
Sometimes you just feel you need to write something fairly low level and you want more direct control than you have whilst you're working from the comfort blanket of python or ruby. Go gives you the ability to have well-defined data structures and to care about how much memory you're eating when you know your application needs to process tebibytes of data. What Go doesn't give you is the freedom to muck about in memory, fall off the end of arrays, leave pointers dangling around all over the place, and generally make tiny, tiny mistakes that take years for anyone to discover.
The build system is designed around how we (as developers) use code hosting facilities
Go has a fairly impressive set of features built in but if you need something that's not already included, there's a good chance that someone out there has written what you need. Go provides a package search tool that makes it very easy to find what you're looking for. And when you've found it, using it is stupidly simple. You add an import declaration in your code:

import "github.com/codegangsta/cli"
which makes it very clear where the code has come from and where you'd need to go to check the source code and/or documentation. Next, pulling the code down and compiling it ready for linking into your own binary takes a simple:

go get github.com/codegangsta/cli
Go implicitly understands git and the various methods of retrieving code so you just need to tell it where to look and it'll figure the rest out.
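To make the workflow concrete, here's roughly what the whole round trip looks like (the package path is the one above; myapp is a hypothetical program of your own that imports it):

# Fetch the package and its dependencies into your GOPATH
go get github.com/codegangsta/cli

# Build your program; the output is a single self-contained binary
go build -o myapp

# Nothing to install on the target machine - a pure-Go binary
# (no cgo) is statically linked
ldd myapp    # reports "not a dynamic executable"

That binary can then be dropped onto any Linux box of the same architecture and run as-is.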
In summary, I'm starting to wonder if Google have a time machine. Go seems to have nicely predicted several worries and trends since its announcement: Docker, Heartbleed, and social coding.
Yesterday the new Government published a press release about the forthcoming first meeting of the new National Security Council (NSC). That meeting was due to discuss the Tory administration’s plans for a new Counter-Extremism Bill. The press release includes the following extraordinary statement, which is attributed to the Prime Minister:
“For too long, we have been a passively tolerant society, saying to our citizens: as long as you obey the law, we will leave you alone.”
Forgive me, but what exactly is wrong with that view? Personally I think it admirable that we live in a tolerant society (“passive” or not). Certainly I believe that tolerance of difference, tolerance of free speech, tolerance of the right to hold divergent opinion, and to voice that opinion is to be cherished and lauded. And is it not right and proper that a Government should indeed “leave alone” any and all of its citizens who are obeying the law?
Clearly, however, our Prime Minister disagrees with me and believes that a tolerant society is not what we really need in the UK, because the press release continues:
“This government will conclusively turn the page on this failed approach.”
If tolerance is a “failed approach”, what are we likely to see in its place?
A while ago, I switched from tritium to herbstluftwm. In general, it’s been a good move, benefitting from active development and greater stability, even if I do slightly mourn the move from python scripting to a shell client.
One thing that was annoying me was that throwing the pointer into an edge didn’t find anything clickable. Window borders may be pretty, but they’re a pretty poor choice as the thing that you can locate most easily, the thing that is on the screen edge.
It finally annoyed me enough to find the culprit. The .config/herbstluftwm/autostart file said "hc pad 0 26" (to keep enough space for the panel at the top edge) and changing that to "hc pad 0 -8 -7 26 -7" and reconfiguring the panel to be on the bottom (where fewer windows have useful controls) means that throwing the pointer at the top or the sides now usually finds something useful like a scrollbar or a menu.
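For the record, the autostart change amounts to this (hc is the herbstclient alias from the default autostart; as I read the herbstclient docs, the arguments after the monitor number are the top, right, bottom and left pads):

# before: reserve 26px at the top edge for the panel
hc pad 0 26

# after: panel at the bottom, slightly negative pads elsewhere so a
# thrown pointer lands on window content rather than a border
hc pad 0 -8 -7 26 -7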
I wonder if this is a useful enough improvement that I should report it as an enhancement bug.
Without going into any of the details, it's a web application with a front end written using Ember and various services that it calls out to, written using whatever seems appropriate per service.
At the outset of the project, we decided we would bite the bullet and build for Docker from day one. This meant we would get to avoid the usual dependency and developer environment setup nightmares.

The problem
What we quickly realised as we started to put the bare bones of a few of the services in place was that we had three seemingly conflicting goals for each component and for the application as a whole.
Build images that can be deployed in production.
Allow developers to run services locally.
Provide a means for running unit tests (both by developers and our CI server).
So here's what we've ended up with:

The solution
Or: docker-compose to the rescue

Folder structure
Here's what the project layout looks like:

Project
|
+-docker-compose.yml
|
+-Service 1
| |
| +-Dockerfile
| |
| +-docker-compose.yml
| |
| +-<other files>
|
+-Service 2
  |
  +-Dockerfile
  |
  +-docker-compose.yml
  |
  +-<other files>

Building for production
This is the easy bit and is where we started first. The Dockerfile for each service was designed to run everything with the defaults. Usually, this is something simple like:

FROM python:3-onbuild
CMD ["python", "main.py"]
Our CI server can easily take these, produce images, and push them to the registry.
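In practice that amounts to little more than the following (the registry and image names here are hypothetical):

# Build the image from the service's folder and push it
docker build -t registry.example.com/project/service1 "Service 1"
docker push registry.example.com/project/service1

Allowing developers to run services locally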
This is slightly harder. In general, each service wants to do something slightly different when being run for development; e.g. automatically restarting when code changes. Additionally, we don't want to have to rebuild an image every time we make a code change. This is where docker-compose comes in handy.
The docker-compose.yml at the root of the project folder looks like this:

service1:
  build: Service 1
  environment:
    ENV: dev
  volumes:
    - Service 1:/usr/src/app
  links:
    - service2
    - db
  ports:
    - 8001:8000
service2:
  build: Service 2
  environment:
    ENV: dev
  volumes:
    - Service 2:/usr/src/app
  links:
    - service1
    - db
  ports:
    - 8002:8000
db:
  image: mongo
This gives us several features right away:
We can locally run all of the services together with docker-compose up (see the sketch after this list).
The ENV environment variable is set to dev in each service so that the service can configure itself when it starts to run things in "dev" mode where needed.
The source folder for each service is mounted inside the container. This means you don't need to rebuild the image to try out new code.
Each service is bound to a different port so you can connect to each part directly where needed.
Each service defines links to the other services it needs.
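A quick sketch of that local workflow (the ports are the ones published in the compose file above; the /status endpoint is purely hypothetical):

# Bring the whole stack up in the background
docker-compose up -d

# Hit each service directly on its published port
curl http://localhost:8001/status    # service1
curl http://localhost:8002/status    # service2

# Tail one service's logs while you work
docker-compose logs service1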
Running the unit tests was the trickiest part to get right. Some services have dependencies on other things even just to get unit tests running. For example, Eve is a huge pain to get running with a fake database, so it's much easier to just link it to a temporary "real" database.
Additionally, we didn't want to mess with the idea that the images should run production services by default, but we also didn't want to require folks to churn out complicated docker invocations like docker run --rm -v $(pwd):/usr/src/app --link db:db service1 python -m unittest just to run the test suite after coding up some new features.
So, it was docker-compose to the rescue again :)
Each service has a docker-compose.yml that looks something like:

tests:
  build: .
  command: python -m unittest
  volumes:
    - .:/usr/src/app
  links:
    - db
db:
  image: mongo
Which sets up any dependencies needed just for the tests, mounts the local source in the container, and runs the desired command for running the tests.
So, a developer (or the CI box) can run the unit tests with:

docker-compose run tests

Summary
Each Dockerfile builds an image that can go straight into production without further configuration required.
Each image runs in "developer mode" if the ENV environment variable is set to dev.
Running docker-compose up from the root of the project gets you a full stack running locally in developer mode.
Running docker-compose run tests in each service's own folder will run the unit tests for that service - starting any dependencies as needed.
Arbitrary tweets made by TheGingerDog up to 16 June 2014
Attempt to pass homeopathy off as credible by combining it with empirically valid medicine.
We woke to continual thunder.
I think it is time to leave the country.
It seems Mozilla is targeting emerging markets and developing nations with $25 cell phones. This is tremendous news, and an admirable focus for Mozilla, but it is not without risk.
Bringing simple, accessible technology to these markets can have a profound impact. As an example, in 2001, 134 million Nigerians shared 500,000 land-lines (as covered by Jack Ewing in Businessweek back in 2007). That year the government started encouraging wireless market competition and by 2007 Nigeria had 30 million cellular subscribers.
This generated market competition and better products, but more importantly, we have seen time and time again that access to technology such as cell phones improves education, provides opportunities for people to start small businesses, and in many cases is a contributing factor for bringing people out of poverty.
So, cell phones are having a profound impact in these nations, but the question is, will it work with FirefoxOS?
I am not sure.
In Mozilla’s defence, they have done an admirable job with FirefoxOS. They have built a powerful platform, based on open web technology, and they lined up a raft of carriers to launch with. They have a strong brand, an active and passionate community, and like so many other success stories, they already have a popular existing product (their browser) to get them into meetings and headlines.
Success though is judged by many different factors, and having a raft of carriers and products on the market is not enough. If they ship in volume but get high return rates, it could kill them, as is common for many new product launches.
What I don’t know is whether this volume/return-rate balance plays such a critical role in developing markets. I would imagine that return rates could be higher (such as someone who has never used a cell phone before taking it back because it is just too alien to them). On the other hand, I wonder if consumers there are willing to put up with more quirks just to get access to the cell network and potentially the Internet.
What seems clear to me is that success here has little to do with the elegance or design of FirefoxOS (or any other product for that matter). It is instead about delivering incredibly dependable hardware. In developing nations people have less access to energy (for charging devices) and have to work harder to obtain it, and they have less access to support resources for learning how to use new technology. As such, it really needs to just work. This factor, I imagine, is going to be largely outside of Mozilla’s hands.
So, in a nutshell, if the $25 phones fail to meet expectations, it may not be Mozilla’s fault. Likewise, if they are successful, it may not be to their credit.
I am a firm believer in building strong and empowered communities. We are in an age of a community management renaissance in which we are defining repeatable best practice that can be applied to many different types of communities, whether internal to companies, external to volunteers, or a mix of both.
I have been working to further this growth in community management via my books, The Art of Community and Dealing With Disrespect, the Community Leadership Summit, the Community Leadership Forum, and delivering training to our next generation of community managers and leaders.
Last year I ran my first community management training course, and it was very positively received. I am delighted to announce that I will be running an updated training course at three events over the coming months.

OSCON
On Sunday 20th July 2014 I will be presenting the course at the OSCON conference in Portland, Oregon. This is a tutorial, so you will need to purchase a tutorial ticket to attend. Attendance is limited, so be sure to get to the class early on the day to reserve a seat!
Next, on Friday 22nd August 2014 I will be presenting the course at LinuxCon North America in Chicago, Illinois, and then on Thursday 16th October 2014 I will deliver the training at LinuxCon Europe in Düsseldorf, Germany.
Tickets are $300 for the day’s training. This is a steal; I usually charge $2500+/day when delivering the training as part of a consultancy arrangement. Thanks to the Linux Foundation for making this available at an affordable rate.
Space is limited, so go and register ASAP:
So what is in the training course?
My goal with each training day is to discuss how to build and grow a community, including building collaborative workflows, defining a governance structure, planning, marketing, and evaluating effectiveness. The day is packed with Q&A and discussion, and I encourage my students to raise questions, challenge me, and explore ways of optimizing their communities. This is not a sit-down-and-listen-to-a-teacher-drone-on kind of session; it is interactive and designed to spark discussion.
The day is mapped out like this:
I will warn you; it is an exhausting day, but ultimately rewarding. It covers a lot of ground in a short period of time, and then you can follow with further discussion of these and other topics on our Community Leadership discussion forum.
I hope to see you there!
I just want to say how touched I have been by the response. The comments, social media posts, emails, and calls from you have been so kind and supportive. You are all good people, and I am going to miss every single one of you.
The reason why I have devoted my life to understanding communities is that I believe communities bring out the best in people, and all of you are a perfect example of that. I cannot express just how much I appreciate it.
Over the course of the next few weeks my replacement will be sourced and announced, and in the interim my team (Daniel Holbach, Michael Hall, David Planella, Nicholas Skaggs, Alan Pope) will take over my duties. Everything has been transitioned over, and remember, the weekly Q&As will continue at 6pm UTC every Tuesday on Ubuntu On Air with my team filling in for me. As ever, any and all Ubuntu questions are welcome!
Of course, I will still be around. I am going to continue to be a member of the Ubuntu community and an avid Ubuntu user, tester, and supporter. I will continue to be on IRC, you can email me at email@example.com, I will continue to do Bad Voltage, and I have a busy schedule at the Community Leadership Summit, OSCON, and more. I am also going to continue to have my own Q&A session every week where you can ask questions about my perspectives on Ubuntu, Canonical, community management, XPRIZE, and more; I will announce this soon.
Ubuntu has a tremendous future ahead of it, built on the hard work and passion of a global community. We are only just getting started with a new era of Ubuntu convergence and cloud orchestration, and while I will miss being there in an official capacity, I am just thankful that I can continue to be along for the ride in the very community I played a part in building.
I now have a few weeks off and then my new adventure begins. Stay tuned.