Time has come once again to announce WYLUG's monthly meeting. As usual it will be held in the Lord Darcy. As usual it will start at around 7pm. There is no obligation to book in advance. However, the man with the red hat lanyard will not be there to guide your steps this month.
It turns out that a Raspberry Pi does a very good job of being a print server for Google Cloud Print. Thanks to https://matthew.mceachen.us/blog/add-google-cloudprint-wifi-access-to-your-older-printer-with-a-raspberry-pi-1342.html I can now print at home directly from my phone!
Update: Replacing the battery and retraining the receiver fixed it. I suppose it must have had enough juice to flash the LED but not transmit.
A few days ago my CurrentCost started reading just dashes. There’s also no transmitter icon, so I think it’s not receiving anything from the transmitter. It looks like this:
I went and fished the transmitter box out of the meter closet expecting its batteries to be dead, but it still has its red LED flashing periodically, so I don’t think it’s that.
I did the thing where you hold down the button on the transmitter for 9 seconds and also hold down the V button on the display to make them pair. The display showed its “searching” screen for a while but then it went back to how it looks above.
Anyone had that happen before? It’s otherwise worked fine for 4 years or so (batteries replaced once).
Most of the time, when I've got some software I want to write, I do it in python or sometimes bash. Occasionally though, I like to slip into something with a few more brackets. I've written a bit of C in the past and love it but recently I've been learning Go and what's really struck me is how clever it is. I'm not just talking about the technical merits of the language itself; it's clever in several areas:
You don't need to install anything to run Go binaries.
At first - I'm sure like many others - I felt a little revulsion when I heard that Go compiles to statically-linked binaries, but after having used and played with Go a bit over the past few weeks, I think it's rather clever and was somewhat ahead of the game. In the current climate where DevOps folks (and developers) are getting excited about containers and componentised services, being able to simply curl a binary and have it usable in your container without needing to install a stack of dependencies is actually pretty powerful. There seems to be a general trend towards preferring readiness of use over efficiency of space, both in RAM and on disk. And it makes sense; storage is cheap these days. A 10MiB binary is no concern - even if you need several of them - when you have a 1TiB drive. The extravagance of large binaries is no longer so relevant when you're comparing it with your collection of 2GiB Blu-ray rips. The days of needing to count the bytes are gone.
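As a minimal sketch of why this matters (the file names and Dockerfile are my own illustration, not from the post), a pure-Go program built with CGO_ENABLED=0 produces one self-contained binary that will run in a completely empty container:

```go
package main

import "fmt"

// message is split out into a function so the behaviour is trivially testable.
func message() string {
	return "hello from a static binary"
}

func main() {
	// Build a fully static binary (no libc required) with:
	//   CGO_ENABLED=0 go build -o hello .
	// A Dockerfile of just:
	//   FROM scratch
	//   COPY hello /hello
	//   ENTRYPOINT ["/hello"]
	// is then enough - there is nothing else to install.
	fmt.Println(message())
}
```

There is no interpreter, no package manager, and no shared-library hunt inside the container; the binary is the whole deployment artifact.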
Go has the feeling of C but without all that tedious mucking about in hyperspace memory
Sometimes you just feel you need to write something fairly low level and you want more direct control than you have whilst you're working from the comfort blanket of python or ruby. Go gives you the ability to have well-defined data structures and to care about how much memory you're eating when you know your application needs to process tebibytes of data. What Go doesn't give you is the freedom to muck about in memory, fall off the end of arrays, leave pointers dangling around all over the place, and generally make tiny, tiny mistakes that take years for anyone to discover.
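To illustrate with a toy example of my own (not from the post): Go lets you define records with a known, fixed layout, but an index past the end of a slice is caught immediately at run time rather than silently reading a neighbour's memory as C would:

```go
package main

import "fmt"

// Reading is a well-defined record: each field has a fixed size, so you
// know exactly how much memory a few million of them will cost you.
type Reading struct {
	Sensor uint16
	Watts  uint32
}

// lastWatts returns the final reading's wattage, or 0 for an empty slice.
func lastWatts(rs []Reading) uint32 {
	if len(rs) == 0 {
		return 0
	}
	return rs[len(rs)-1].Watts
}

func main() {
	rs := []Reading{{Sensor: 1, Watts: 250}}
	fmt.Println(lastWatts(rs))

	// Unlike C, walking off the end of the slice panics at run time
	// instead of quietly corrupting memory for years.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("caught:", r)
		}
	}()
	_ = rs[5] // panics: index out of range
}
```

The panic is loud and immediate; the "tiny, tiny mistake" surfaces on the first test run rather than in a security advisory years later.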
The build system is designed around how we (as developers) use code hosting facilities
Go has a fairly impressive set of features built in, but if you need something that's not already included, there's a good chance that someone out there has written what you need. Go provides a package search tool that makes it very easy to find what you're looking for. And when you've found it, using it is stupidly simple. You add an import declaration in your code:

    import "github.com/codegangsta/cli"
which makes it very clear where the code has come from and where you'd need to go to check the source code and/or documentation. Next, pulling the code down and compiling it ready for linking into your own binary takes a simple:

    go get github.com/codegangsta/cli
Go implicitly understands git and the various methods of retrieving code so you just need to tell it where to look and it'll figure the rest out.
In summary, I'm starting to wonder if Google have a time machine. Go seems to have nicely predicted several worries and trends since its announcement: Docker, Heartbleed, and social coding.
Yesterday the new Government published a press release about the forthcoming first meeting of the new National Security Council (NSC). That meeting was due to discuss the Tory administration’s plans for a new Counter-Extremism Bill. The press release includes the following extraordinary statement, which is attributed to the Prime Minister:
“For too long, we have been a passively tolerant society, saying to our citizens: as long as you obey the law, we will leave you alone. “
Forgive me, but what exactly is wrong with that view? Personally I think it admirable that we live in a tolerant society (“passive” or not). Certainly I believe that tolerance of difference, tolerance of free speech, tolerance of the right to hold divergent opinion, and to voice that opinion is to be cherished and lauded. And is it not right and proper that a Government should indeed “leave alone” any and all of its citizens who are obeying the law?
Clearly, however, our Prime Minister disagrees with me and believes that a tolerant society is not what we really need in the UK, because the press release continues:
“This government will conclusively turn the page on this failed approach. “
If tolerance is a “failed approach”, what are we likely to see in its place?
A while ago, I switched from tritium to herbstluftwm. In general, it’s been a good move, benefitting from active development and greater stability, even if I do slightly mourn the move from python scripting to a shell client.
One thing that was annoying me was that throwing the pointer into an edge didn’t find anything clickable. Window borders may be pretty, but they’re a pretty poor choice as the thing that you can locate most easily, the thing that is on the screen edge.
It finally annoyed me enough to find the culprit. The .config/herbstluftwm/autostart file said “hc pad 0 26” (to keep enough space for the panel at the top edge). Changing that to “hc pad 0 -8 -7 26 -7” and reconfiguring the panel to be on the bottom (where fewer windows have useful controls) means that throwing the pointer at the top or the sides now usually finds something useful like a scrollbar or a menu.
I wonder if this is a useful enough improvement that I should report it as an enhancement bug.
Without going into any of the details, it's a web application with a front end written using Ember and various services that it calls out to, written using whatever seems appropriate per service.
At the outset of the project, we decided to bite the bullet and build for Docker from day one. This meant we would avoid the usual dependency and developer-environment setup nightmares.

The problem
What we quickly realised as we started to put the bare bones of a few of the services in place, was that we had three seemingly conflicting goals for each component and for the application as a whole.
Build images that can be deployed in production.
Allow developers to run services locally.
Provide a means for running unit tests (both by developers and our CI server).
So here's what we've ended up with:

The solution
Or: docker-compose to the rescue

Folder structure
Here's what the project layout looks like:

    Project
    |
    +-docker-compose.yml
    |
    +-Service 1
    |  |
    |  +-Dockerfile
    |  |
    |  +-docker-compose.yml
    |  |
    |  +-<other files>
    |
    +-Service 2
       |
       +-Dockerfile
       |
       +-docker-compose.yml
       |
       +-<other files>

Building for production
This is the easy bit, and is where we started. The Dockerfile for each service is designed to run everything with the defaults. Usually, this is something simple like:

    FROM python:3-onbuild
    CMD ["python", "main.py"]
Our CI server can easily take these, produce images, and push them to the registry.

Allowing developers to run services locally
This is slightly harder. In general, each service wants to do something slightly different when being run for development; e.g. automatically restarting when code changes. Additionally, we don't want to have to rebuild an image every time we make a code change. This is where docker-compose comes in handy.
The docker-compose.yml at the root of the project folder looks like this:

    service1:
      build: Service 1
      environment:
        ENV: dev
      volumes:
        - Service 1:/usr/src/app
      links:
        - service2
        - db
      ports:
        - "8001:8000"
    service2:
      build: Service 2
      environment:
        ENV: dev
      volumes:
        - Service 2:/usr/src/app
      links:
        - service1
        - db
      ports:
        - "8002:8000"
    db:
      image: mongo
This gives us several features right away:
We can locally run all of the services together with docker-compose up
The ENV environment variable is set to dev in each service so that the service can configure itself when it starts to run things in "dev" mode where needed.
The source folder for each service is mounted inside the container. This means you don't need to rebuild the image to try out new code.
Each service is bound to a different port so you can connect to each part directly where needed.
Each service defines links to the other services it needs.
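The ENV switch is just an environment variable; each service decides for itself what "dev" mode means when it starts up. As a minimal sketch in Python (the helper names and settings keys here are my own illustration, not from the project):

```python
import os


def is_dev(env):
    """True when the service should run in dev mode."""
    return env == "dev"


def settings(env):
    """Per-environment settings (the keys here are hypothetical examples)."""
    if is_dev(env):
        # e.g. auto-reload on code changes, verbose logging
        return {"reload": True, "log_level": "debug"}
    return {"reload": False, "log_level": "info"}


# At startup, each service reads the variable that docker-compose sets;
# a production image never sets ENV, so the defaults apply there.
CONFIG = settings(os.environ.get("ENV"))
```

Because the production images never set ENV, the safe defaults apply unless docker-compose explicitly opts a container into dev mode.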
Running the unit tests

This was the trickiest part to get right. Some services have dependencies on other things even just to get their unit tests running. For example, Eve is a huge pain to get running with a fake database, so it's much easier to just link it to a temporary "real" database.
Additionally, we didn't want to mess with the idea that the images should run production services by default but also didn't want to require folks to need to churn out complicated docker invocations like docker run --rm -v $(pwd):/usr/src/app --link db:db service1 python -m unittest just to run the test suite after coding up some new features.
So, it was docker-compose to the rescue again :)
Each service has a docker-compose.yml that looks something like:

    tests:
      build: .
      command: python -m unittest
      volumes:
        - .:/usr/src/app
      links:
        - db
    db:
      image: mongo
Which sets up any dependencies needed just for the tests, mounts the local source in the container, and runs the desired command for running the tests.
So, a developer (or the CI box) can run the unit tests with:

    docker-compose run tests

Summary
Each Dockerfile builds an image that can go straight into production without further configuration required.
Each image runs in "developer mode" if the ENV environment variable is set.
Running docker-compose up from the root of the project gets you a full stack running locally in developer mode.
Running docker-compose run tests in each service's own folder will run the unit tests for that service - starting any dependencies as needed.
I've been meaning to blog about the podcasts I listen to and the setup I use for consuming them, as both have evolved a little over the past few months.

The podcasts
I use syncthing to have those replicated to my laptop and home media server.
When I'm cycling to work or in the car, I use the mp3 player to listen to them. (No, when I'm in the car, I plug it in to the stereo, I don't drive with headphones on :P)
When I'm sitting at a computer or at home, I use Plex to serve up podcasts from my home media box.
I keep on top of everything by making sure that I move (rather than copy) when putting things on the mp3 player and rely on Syncthing to remove listened-to podcasts from everywhere else.
It's not the most elegant setup I've heard of, but it's simple and works for me :)

What next?
I find I have a lot of things I want to listen to and not really enough time to listen to them in. I've heard that some people speed podcasts up (I've heard as much as 50%). Does anyone do this? Does it make things any less enjoyable to listen to? I really enjoy the quality of what I listen to; I don't want to feel like I'm just consuming information for the sake of it.
The Debian Ruby team held its first sprint in 2014. The experience was very positive, and it was decided to do it again in 2015. Last April, the team once more met at the IRILL offices in Paris, France.
The participants worked to improve the quality of Ruby packages in Debian, including fixing release-critical and security bugs, improving metadata and packaging code, and triaging test failures on the Debian Continuous Integration service.
The sprint also served to prepare the team infrastructure for the future Debian 9 release:
work continued on the gem2deb packaging helper to improve the semi-automated generation of Debian source packages from existing standard-compliant Ruby packages on Rubygems;
there was also an effort to prepare the switch to Ruby 2.2, the latest stable release of the Ruby language, which came out after the Debian testing suite had already been frozen for the Debian 8 release.
Left to right: Christian Hofstaedtler, Tomasz Nitecki, Sebastien Badia and Antonio Terceiro.
A full report with technical details has been posted to the relevant Debian mailing lists.
The UK has just had its General Election. Labour failed miserably to increase their vote. The SNP picked up loads of votes and seats, mostly because Scots felt betrayed by the failure to deliver anything after they agreed to remain in the union. The Liberal Democrats lost votes and seats aplenty, as expected. The result is that we now have a weak Conservative government with a slim majority, one that will no doubt destroy itself as the swivel-eyed loons on the far right of the party start to make increasingly unrealistic demands on the rest of the party.
The nutters in the Home Office, with the Liberal Democrat "sanity" checks removed, will now demand ever-increasing powers to snoop on everything we do, so that they can protect us from whatever problem they have invented to scare us with next...
I now feel compelled to support the Open Rights Group with my money as well as my moral support. If the lunatics aren't stopped then we'll have no civil liberties left.
Arbitrary tweets made by TheGingerDog up to 16 June 2014
Attempt to pass homeopathy off as credible by combining it with empirically valid medicine.
We woke to continual thunder.
I think it is time to leave the country.
It seems Mozilla is targeting emerging markets and developing nations with $25 cell phones. This is tremendous news, and an admirable focus for Mozilla, but it is not without risk.
Bringing simple, accessible technology to these markets can have a profound impact. As an example, in 2001, 134 million Nigerians shared 500,000 land-lines (as covered by Jack Ewing in Businessweek back in 2007). That year the government started encouraging wireless market competition and by 2007 Nigeria had 30 million cellular subscribers.
This generated market competition and better products, but more importantly, we have seen time and time again that access to technology such as cell phones improves education, provides opportunities for people to start small businesses, and in many cases is a contributing factor for bringing people out of poverty.
So, cell phones are having a profound impact in these nations, but the question is, will it work with FirefoxOS?
I am not sure.
In Mozilla’s defence, they have done an admirable job with FirefoxOS. They have built a powerful platform, based on open web technology, and have lined up a raft of carriers to launch with. They have a strong brand, an active and passionate community, and like so many other success stories, they already have a popular existing product (their browser) to get them into meetings and headlines.
Success though is judged by many different factors, and having a raft of carriers and products on the market is not enough. If they ship in volume but get high return rates, it could kill them, as is common for many new product launches.
What I don’t know is whether this volume/return-rate balance plays such a critical role in developing markets. I would imagine that return rates could be higher (such as someone who has never used a cell phone before taking it back because it is just too alien to them). On the other hand, I wonder whether consumers there are willing to put up with more quirks just to get access to the cell network and, potentially, the Internet.
What seems clear to me is that success here has little to do with the elegance or design of FirefoxOS (or any other product for that matter). It is instead about delivering incredibly dependable hardware. In developing nations people have less access to energy for charging devices (and have to work harder to obtain it), and less access to support resources for learning how to use new technology. As such, it really needs to just work. This factor, I imagine, is largely outside of Mozilla’s hands.
So, in a nutshell, if the $25 phones fail to meet expectations, it may not be Mozilla’s fault. Likewise, if they are successful, it may not be to their credit.
I am a firm believer in building strong and empowered communities. We are in an age of a community management renaissance in which we are defining repeatable best practice that can be applied to many different types of communities, whether internal to companies, external to volunteers, or a mix of both.
I have been working to further this growth in community management via my books, The Art of Community and Dealing With Disrespect, the Community Leadership Summit, the Community Leadership Forum, and delivering training to our next generation of community managers and leaders.
Last year I ran my first community management training course, and it was very positively received. I am delighted to announce that I will be running an updated training course at three events over the coming months.

OSCON
On Sunday 20th July 2014 I will be presenting the course at the OSCON conference in Portland, Oregon. This is a tutorial, so you will need to purchase a tutorial ticket to attend. Attendance is limited, so be sure to get to the class early on the day to reserve a seat!
LinuxCon

Firstly, on Fri 22nd August 2014 I will be presenting the course at LinuxCon North America in Chicago, Illinois, and then on Thurs Oct 16th 2014 I will deliver the training at LinuxCon Europe in Düsseldorf, Germany.
Tickets are $300 for the day’s training. This is a steal; I usually charge $2500+/day when delivering the training as part of a consultancy arrangement. Thanks to the Linux Foundation for making this available at an affordable rate.
Space is limited, so go and register ASAP:
So what is in the training course?
My goal with each training day is to discuss how to build and grow a community, including building collaborative workflows, defining a governance structure, planning, marketing, and evaluating effectiveness. The day is packed with Q&A and discussion, and I encourage my students to raise questions, challenge me, and explore ways of optimizing their communities. This is not a sit-down-and-listen-to-a-teacher-drone-on kind of session; it is interactive and designed to spark discussion.
The day is mapped out like this:
I will warn you; it is an exhausting day, but ultimately rewarding. It covers a lot of ground in a short period of time, and then you can follow with further discussion of these and other topics on our Community Leadership discussion forum.
I hope to see you there!