So recently I got into trouble running Redis on a host, because the data no longer fits into RAM.
As an interim measure I fixed this by bumping the RAM allocated to the guest, but a real solution was needed. I figure there are three real alternatives:
Looking around I found a couple of Redis alternatives, but I was curious to see how hard it would be to hack something useful myself, as a creative solution.
This evening I spotted Protocol::Redis, a Perl module for decoding/encoding data to/from a Redis server.
It's a limited implementation which stores data in an SQLite database, and currently has support for:
It isn't hugely fast, but it is fast enough, and it should be possible to use alternative backends in the future.
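To illustrate the idea – and this is only a sketch using the sqlite3 command-line tool, not the actual implementation; the table name and layout are invented – the basic key/value operations map naturally onto a single SQLite table:

# Hypothetical mapping of SET/GET/DEL onto SQLite.
sqlite3 redis.db 'CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value BLOB);'
# SET foo bar
sqlite3 redis.db "INSERT OR REPLACE INTO kv (key, value) VALUES ('foo', 'bar');"
# GET foo
sqlite3 redis.db "SELECT value FROM kv WHERE key = 'foo';"
# DEL foo
sqlite3 redis.db "DELETE FROM kv WHERE key = 'foo';"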
I suspect I'll not add sets/hashes, but it could be done if somebody was keen.
Is it annoying or not that everyone says SSL Certs and SSL when they really mean TLS?
Does anyone actually mean SSL? Have there been any accidents through people confusing the two?
So it’s been a few years since I’ve posted, because it’s been so much hard work, and we’ve been pushing really hard on some projects which I just can’t talk about – annoyingly. Anyway, on March 20th, 2011 I talked about Continual Integration and Continual Deployment and the Cloud, and discussed two main methods – having what we now call ‘Gold Standards’ vs continually updating.
The interesting thing is that as we’ve grown as a company, and as we’ve become more ‘Enterprise’, we’ve brought in more systems administrators and begun to really separate the deployments from the development. The other thing is that we have separated our services out into multiple vertical strands, which have different roles. This means we have slightly different processes for Banking or Payment based modules than we do for marketing modules. We’re able to segregate operational and content data from personally identifiable information – PII has much stricter regulation on who can access it (and auditing of who does).
Several other key things had to change: for instance, things like the SSL keys of the servers shouldn’t be kept in the development repo. Now, of course not, I hear you yell, but it’s a very blurry line. For instance, should the Django configuration be kept in the repo? Well, yes, because that defines the modules and things like URLs. Should the nginx config be kept in the repo? Well, oh – if you keep *that* in, then you would keep your SSL certs in…
So the answer becomes having lots of repos: one repo per application (Django-wise), and one repo per deployment, containing configurations. And then you start looking at build tools to bring a particular server, or cluster of servers, up and running.
The process (for our more secure, audited services) is looking like a tool to bring an AMI up, get everything installed and configured, and then take a snapshot; and then a second tool that takes that AMI (and all the others needed) and builds the VPC inside of AWS. It’s a step away from the continual deployment strategy, but it is mostly automated.
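As a rough illustration of the snapshot step – a sketch only, with placeholder IDs and names rather than our actual tooling – baking a configured instance into a reusable AMI with the AWS CLI looks something like this:

# Snapshot a configured, running instance into an AMI (instance ID and names are placeholders).
aws ec2 create-image --instance-id i-12345678 --name "gold-standard-2014-07" --description "Configured base image for one vertical"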
The cliché is that lotteries are a tax on the mathematically illiterate.
It's easy to have some sympathy for this position. Did you know trying to get rich by playing the lottery is like trying to commit suicide by flying on commercial airlines? These comparisons are superficially amusing, but to look at lotteries in this rational way seems to be in itself irrational, ignoring the real motivations of the participants.
Even defined as a tax they are problematic – far from being progressive or redistributive, it has always seemed suspect when lottery money is spent proudly on high-brow projects such as concert hall restorations and theatre lighting rigs when – with no risk of exaggeration – there is zero overlap between the people who would benefit from the project and those who funded it.
But no, what rankles me more about our lotteries isn't the unsound economics of buying a ticket or even that it's a state-run monopoly, but rather the faux philanthropic way it manages to evade all criticism by talking about the "good causes" it is helping.
Has our discourse become so relative and non-judgemental that when we are told that the lottery does some good, however slight, we are willing to forgive all of the bad? Isn't there something fundamentally dishonest about disguising the avarice, cupidity, escapism and being part of some shared cultural event—that are surely the only incentives to play this game—with some shallow feel-good fluff about good causes? And where are the people doing real good in communities complaining about this corrupting lucre, or are they just happy to take the money and don't want to ask too many awkward questions..?
"Vices are not crimes" claims Lysander Spooner, and I would not want to legislate that citizens cannot make dubious investments in any market, let alone a "lottery market", but we should at least be able to agree that this nasty regressive tax should enjoy no protection nor special privileges from the state, and it should be incapable of getting away with deflecting criticism with a bunch of photogenic children from an inner-city estate clutching a dozen branded footballs.
As my previous post suggested I'd been running a service for a few years, using Redis as a key-value store.
Redis is lovely. If your dataset will fit in RAM. Otherwise it dies hard.
Inspired by Memcached, which is a simple key=value store, Redis allows for more operations: using sets, using hashes, etc., etc.
As it transpires I mostly set keys to values, so it crossed my mind last night that an alternative to rewriting the service to use a non-RAM-constrained store might be to juggle Redis out and replace it with something else.
If it were possible to have a Redis-compatible API which secretly stored the data in LevelDB, SQLite, or even Berkeley DB, then that would solve my problem of RAM constraints, and also be useful.
Anyway, the short version is that this might be a way forward; the real solution might be to use SQLite or Postgres, but that would take a few days' work. For the moment the service has been moved to a donated guest and has 2GB of RAM instead of the paltry 512MB it was running on previously.
Happily the server is installed/maintained by my slaughter tool, so reinstalling took about ten minutes - the only hard part was migrating the Redis contents, and that's trivial thanks to the integrated "slave of" support. (I should write that up regardless though.)
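In the meantime, the rough recipe looks like this (a sketch, assuming the default port and no authentication):

# On the new host: replicate everything from the old instance.
redis-cli slaveof old-host 6379
# Wait until master_link_status:up and the keyspace matches.
redis-cli info replication
# Then promote the new instance and repoint the application at it.
redis-cli slaveof no one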
I’ve finally worked out how to create self-signed SSL certificates for multiple domain names with openssl.
These notes relate to Debian GNU/Linux, but the principles will apply to other operating systems.
The first step to make the process easier and repeatable in the future is to copy the default configuration file from /etc/ssl/openssl.cnf to a working directory where you can adapt it.
Let’s assume that you’ve copied /etc/ssl/openssl.cnf to ~/project-openssl.cnf. Edit the new file and set the various default values to the ones that you need — that’s better than having to respond to openssl’s prompts every time you run it.
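For example, the defaults for the prompted fields live in the [req_distinguished_name] section; something along these lines (the values here are obviously made up) saves answering the prompts each time:

[ req_distinguished_name ]
countryName_default            = GB
stateOrProvinceName_default    = Cambridgeshire
localityName_default           = Cambridge
0.organizationName_default     = Example Ltd
commonName_default             = example.com
emailAddress_default           = hostmaster@example.com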
For real non-self-signed certificates, you would generate a certificate signing request (.csr) file, ready for a certificate authority to sign it for you. In that case, you need to follow the instructions at http://wiki.cacert.org/FAQ/subjectAltName.
But for a self-signed certificate, the subjectAltName has to go in a different place. Make sure you’ve got this line present and un-commented in the [req] section of the config file:

[req]
...
x509_extensions = v3_ca
and then this goes at the end of the [v3_ca] section:

[v3_ca]
...
subjectAltName = @alt_names

[alt_names]
DNS.1 = example.com
DNS.2 = www.example.com
DNS.3 = example.co.uk
DNS.4 = www.example.co.uk
There is (apparently) a limit to the number (or total length) of the alternate names, but I didn’t reach it with 11 domain names.
It’s also possible to add IP addresses to the alt_names section like this:

IP.1 = 192.168.1.1
IP.2 = 192.168.69.14
Then to create the key and self-signed certificate, run commands similar to these:

openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout project.key -out project.crt -config ~/project-openssl.cnf
cp project.crt /etc/ssl/localcerts/
mv project.key /etc/ssl/private/
Note that I move (rather than copy) the key to the private directory to avoid leaving a copy of it lying around unprotected.
You can check that the certificate contains all the domains that you added by running this:

openssl x509 -in project.crt -text -noout | less

Alternative approach
I haven’t tried this, but according to http://apetec.com/support/GenerateSAN-CSR.htm it’s also possible to create a CSR and then self-sign it like this:

openssl x509 -req -days 3650 -in project.csr -signkey project.key -out project.crt -extensions v3_req -extfile project-openssl.cnf
Once upon a time I set up a centralized service for spam-testing blog/forum comments in real time; that service is BlogSpam.net.
This was created because the Debian Administration site was getting hammered with bogus comments, as was my personal blog.
Today the unfortunate thing happened: the virtual machine this service was running on ran out of RAM and died. The Redis store that holds all the state has now exceeded the paltry 512MB allocated to the guest, so the OOM killer took it down.
So I'm at an impasse - I either recode it to use MySQL instead of Redis (or something similar that allows the backing store to exceed the RAM size), or I shut the thing down.
There seems to be virtually no likelihood that somebody would sponsor a host to run the service, because people just don't pay for this kind of service.
I've temporarily given the guest 1GB of RAM, but that comes at a cost: I've had to shut down my "builder" host, which is used to build Debian packages via pbuilder.
Offering an API, for free, which has become increasingly popular and yet gets almost zero feedback or "thanks", is a bit of a double-edged sword. Because it has so many users it provides a better service - but equally it costs more to run in terms of time, effort, and attention.
(And I just realized over the weekend that my Flattr account is full of money (~50 euro) that I can't withdraw - since I deleted my paypal account last year. Ooops.)
Happy news? I'm avoiding the issue of free service indefinitely with the git-based DNS product which was covering costs and now is .. doing better. (20% off your first months bill with coupon "20PERCENT".)
I put out the call for nominations for the 2014 Software in the Public Interest (SPI) Board election last week. At this point I haven't yet received any nominations, so I'm mentioning it here in the hope of a slightly wider audience. Possibly not the most helpful as I would hope readers who are interested in SPI are already reading spi-announce. There are 3 positions open this election and it would be good to see a bit more diversity in candidates this year. Nominations are open until the end of Tuesday July 13th.
The primary hard and fast time commitment a board member needs to make is to attend the monthly IRC board meetings, which are conducted publicly via IRC (#spi on the OFTC network). These take place at 20:00 UTC on the second Thursday of every month. More details, including all past agendas and minutes, can be found at http://spi-inc.org/meetings/. Most of the rest of the board communication is carried out via various mailing lists.
The ideal candidate will have an existing involvement in the Free and Open Source community, though this need not be with a project affiliated with SPI.
Software in the Public Interest (SPI, http://www.spi-inc.org/) is a non-profit organization which was founded to help organizations develop and distribute open hardware and software. We see it as our role to handle things like holding domain names and/or trademarks, and processing donations for free and open source projects, allowing them to concentrate on actual development.
At work I needed a cheap laptop for a computer-illiterate user. Giving them Windows would have meant that they would have had to keep up to date with Windows Updates, with all the potential issues that would cause, along with the need for malware protection. It would also have pushed up the cost: a laptop capable of pushing Windows along reasonably decently would have cost a few hundred pounds at least.
Generally I would just have purchased a low-end Lenovo laptop and installed Ubuntu onto it, but I was aware that Ebuyer had recently launched an HP 255 G1 laptop with Ubuntu pre-installed for £219.99 inc. VAT (just £183 if you can reclaim the VAT).
Buying pre-installed with Ubuntu afforded me the comfort of knowing that everything would work. Whilst Ubuntu generally does install very easily, there are sometimes hassles in getting some of the function buttons working, for brightness, volume etc. Knowing that these issues would all be sorted, along with saving me the time in having to install Ubuntu, seemed an attractive proposition.

Unboxing
My first impressions were good, the laptop comes with a laptop case and the laptop itself looks smart enough for a budget machine. An Ubuntu sticker, instead of the usual Windows sticker, was welcome, although the two sticky marks where previous stickers had been removed were less so. Still, at least they had been removed.
Whilst we are on the subject of Windows’ remnants – the Getting Started leaflet was for Windows 8 rather than Ubuntu. Most Ubuntu users won’t care, but this is poor attention to detail and, if this laptop is to appeal to the mass market, it may cause confusion.

First Boot
Booting up the laptop for the first time gave me an “Essential First Boot Set-up is being performed. Please wait.” message. I did wait and for quite a considerable time – probably a not dissimilar time to installing Ubuntu from scratch; I couldn’t help but suspect that was precisely what was happening. Eventually I was presented with a EULA from HP, which I had no choice but to accept or choose to re-install from scratch. Finally I was presented with an Ubuntu introduction, which I confess I skipped; suffice to say the new user was welcomed to Ubuntu with spinny things.
The first thing to note is that this is Ubuntu 12.04, the previous LTS (Long Term Support release). This will be supported until 2017, but it is a shame that it didn’t have the latest LTS release – Ubuntu 14.04. Users may of course choose to upgrade.
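(For what it's worth, moving to the newer LTS is a single command, though LTS-to-LTS upgrades are only offered once the first point release is available:)

sudo do-release-upgrade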
Secondly, the wireless was slow to detect the wireless access points on the network. Eventually I decided to restart network-manager, but just as I was about to do so, it suddenly sprang into life and displayed all the local access points. Once connected, it will re-connect quickly enough, but it does seem to take a while to scan new networks. Or perhaps I am just too impatient.
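(For reference, the restart I was about to reach for would have been nothing more than something like this:)

sudo service network-manager restart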
Ubuntu then prompted to run some updates, but the updates failed, as “22.214.171.124” was said to be unreachable, even though it was ping-able. The address is owned by Canonical, but whether this was a momentary server error, or some misconfiguration on the laptop, I have no idea.
This would have been a major stumbling block for a new Ubuntu user. Running apt-get update and apt-get dist-upgrade worked fine, typing Ctrl+Alt+T to bring up the terminal and then typing:

$ sudo apt-get update
$ sudo apt-get dist-upgrade
I notice that this referenced an HP-specific repository, doubtless equipped with hardware specific packages:

http://oem.archive.canonical.com/updates/ precise-oem.sp1 public
http://hp.archive.canonical.com/updates precise-stella-anaheim public
I assume that adding this latter repository would be a good idea if purchasing a Windows version of this laptop and installing Ubuntu.

Hardware
This is a typical chunky laptop. But, if you were expecting a sleek Air-like laptop for £220, then you need to take a reality shower. What it is, is a good-looking, well-made, traditional laptop from a quality manufacturer. At this price, that really should be enough.
Ubuntu “System Details” reveals that this is running an “AMD E1-1500 APU with Radeon HD Graphics x 2”, running 64-bit with a 724GB drive and 3.5GiB RAM. That would appear to be a lower-spec processor than is typically available online for an HP 255 G1 running Windows, which generally seems to come with a 1.75GHz processor (albeit at twice the price).
The great news was that, as expected, all the buttons worked. So what? Well, it may seem like a trivial matter whether, for example, pressing Fn10 increases the volume or not, but I think many of us have the experience of spending inordinate amounts of time trying to get such things to work properly. And buttons that don’t work continue to irritate until the day you say goodbye to that machine. The fact that everything works as it should is enormously important, and is the primary reason why buying Ubuntu pre-installed is such a big deal.
The keyboard and trackpad seem perfectly good enough to me, certainly much better than on my Novatech ultrabook; although not everyone seems to like them. In particular, it is good to have a light on the caps lock key.
I have not tested battery life, but, as this is usually the first thing to suffer in an entry-level machine, I would not hope for much beyond a couple of hours.
Booting up takes around 45 seconds and a further 20 seconds to reach the desktop. That is quite a long time these days for Ubuntu, but fast enough I would imagine for most users and considerably faster than it takes Windows to reach a usable state, at least in my experience.
Being that bit slower to boot, Suspend becomes more important: Closing the lid suspended the laptop and opening it again brought up the lock screen password prompt almost immediately. Repeated use showed this to work reliably.
As to system performance, well frankly this is not a fast laptop. Click on Chromium, post boot, and it takes about 9 seconds to load; LibreOffice takes about 6 seconds to load. Even loading System Settings takes a second or two. Once you’ve run them once, after each boot, they will load again in less than half the time. Despite the slow performance, it is always perfectly usable, and is absolutely fine for email and web-browsing applications.
The other thing to remember is that this will be the performance you should be able to expect throughout its life – i.e. it will not slow down even more as it gets older. Windows users typically expect their computers to slow down over time, largely because of the different way in which system and application settings are stored in Windows. Ubuntu does not suffer from this problem, meaning that a 5-year-old Ubuntu installation should be working as fast as it did when it was first installed.

Conclusions
I struggle to think of what else you could buy that provides a full desktop experience for £220. And it isn’t even some cheap unbranded laptop from the developing world. Sure, it isn’t the fastest laptop around, but it is perfectly fast enough for web, email and office documents. And the fact that you can expect it to continue working, with few, if any, worries about viruses, makes it ideal for many users. It certainly deserves to be a success for HP, Ubuntu and Ebuyer.
But, whilst this low-price, low-power combination was ideal for me on this occasion, it is a shame that there are no other choices available pre-installed with Ubuntu. I wonder how many newcomers to Ubuntu will come with the belief that Ubuntu is slow, when in reality it is the low-end hardware that is to blame?
Please HP, Ubuntu and Ebuyer – give us more choice.
And Lenovo, please take note – you just lost a sale.
For more reviews please visit Reevo.
It now supports:
Needless to say, this software is not endorsed by Strava. Suggestions, feedback and contributions welcome.
I have a Brother MFC-7360N printer at home and there is also one at work. I wanted to get Cloudprint working with Android devices rather than use the Android app Brother provide, which is great when it works but deeply frustrating (for my wife) when it doesn't.
What I describe below is how to Cloudprint-enable "Classic printers" using Debian Wheezy.

Install CUPS
Install CUPS and the Cloudprint requirements.

sudo apt-get install cups python-cups python-daemon python-pkg-resources

Install the MFC-7360N Drivers
I used the URL below to access the .deb files required.
If you're running a 64-bit Debian, then install ia32-libs first.

sudo apt-get install ia32-libs
Download and install the MFC-7360N drivers.

wget -c http://download.brother.com/welcome/dlf006237/mfc7360nlpr-2.1.0-1.i386.deb
wget -c http://download.brother.com/welcome/dlf006239/cupswrapperMFC7360N-2.0.4-2.i386.deb
sudo dpkg -i --force-all mfc7360nlpr-2.1.0-1.i386.deb
sudo dpkg -i --force-all cupswrapperMFC7360N-2.0.4-2.i386.deb

Configure CUPS
Edit the CUPS configuration file, commonly located at /etc/cups/cupsd.conf, and make the section that looks like this:

# Only listen for connections from the local machine.
Listen localhost:631
Listen /var/run/cups/cups.sock
look like this:

# Listen on all interfaces
Port 631
Listen /var/run/cups/cups.sock
Modify the Apache-style access directives to allow connections from everywhere as well. Find the following sections in /etc/cups/cupsd.conf:

<Location />
# Restrict access to the server...
Order allow,deny
</Location>

# Restrict access to the admin pages...
<Location /admin>
Order allow,deny
</Location>

# Restrict access to the configuration files...
<Location /admin/conf>
AuthType Default
Require user @SYSTEM
Order allow,deny
</Location>
Add Allow All after each Order allow,deny so it looks like this:

<Location />
# Restrict access to the server...
Order allow,deny
Allow All
</Location>

# Restrict access to the admin pages...
<Location /admin>
Order allow,deny
Allow All
</Location>

# Restrict access to the configuration files...
<Location /admin/conf>
AuthType Default
Require user @SYSTEM
Order allow,deny
Allow All
</Location>
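CUPS won't pick up these changes until it is restarted (assuming the standard Debian init script name):

sudo service cups restart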
Add the MFC-7360N to CUPS

If your MFC-7360N is connected to your server via USB then you should be all set. Log in to the CUPS administration interface at http://yourserver:631 and modify the MFC7360N printer (if one was created when the drivers were installed), then make sure you can print a test page via CUPS before proceeding.

Install Cloudprint and Cloudprint service

wget -c http://davesteele.github.io/cloudprint-service/deb/cloudprint_0.11-5.1_all.deb
wget -c http://davesteele.github.io/cloudprint-service/deb/cloudprint-service_0.11-5.1_all.deb
sudo dpkg -i cloudprint_0.11-5.1_all.deb
sudo dpkg -i cloudprint-service_0.11-5.1_all.deb

Authenticate
Google accounts with 2 step verification enabled need to use an application-specific password.
Authenticate cloudprintd.

sudo service cloudprintd login
You should see something like this:

Accounts with 2 factor authentication require an application-specific password
Google username: email@example.com
Password:
Added Printer MFC7360N
Start the Cloudprint daemon.

sudo service cloudprintd start
If everything is working correctly you should see your printer listed on the following page:
Add the Google Cloud Print app to Android devices and you'll be able to configure your printer preferences and print from Android.

Chrome and Chromium
When printing from within Google Chrome and Chromium you can now select Cloudprint as the destination and choose your printer.
The Tour de France visits Yorkshire
Location: Yorkshire
1. This weekend I will apply to rejoin the Debian project, as a developer.
3. This is the end of my list.
4. I lied. This is the end of my list. Powers of two, baby.
I’ve previously blogged about how I sometimes set up a webcam to take pictures and turn them into videos. I thought I’d update that here with something new I’ve done, fully automated time lapse videos on Ubuntu. Here’s what I came up with:-
(apologies for the terrible music, I added that from a pre-defined set of options on YouTube)
(I quite like the cloud that pops into existence at ~27 seconds in)
Over the next few weeks there’s an Air Show where I live and the skies fill with all manner of strange aircraft. I’m usually working so I don’t always see them as they fly over, but usually hear them! I wanted a way to capture the skies above my house and make it easily available for me to view later.
So my requirements were basically this:-
I’ve already covered this really, but for this job I have tweaked the .webcamrc file to take a picture every second, only save images locally & not to upload them. Here’s the basics of my .webcamrc:-

[ftp]
dir = /home/alan/Pictures/webcam/current
file = webcam.jpg
tmp = uploading.jpeg
debug = 1
local = 1

[grab]
device = /dev/video0
text = popeycam %Y-%m-%d %H:%M:%S
fg_red = 255
fg_green = 0
fg_blue = 0
width = 1280
height = 720
delay = 1
brightness = 50
rotate = 0
top = 0
left = 0
bottom = -1
right = -1
quality = 100
once = 0
archive = /home/alan/Pictures/webcam/archive/%Y/%m/%d/%H/snap%Y-%m-%d-%H-%M-%S.jpg
Key things to note: “delay = 1” gives us an image every second. The archive directory is where the images will be stored, in sub-folders for easy management and later deletion. That’s it – put that in the home directory of the user taking pictures and then run webcam. Watch your disk space get eaten up.

Making the video
This is pretty straightforward and can be done in various ways. I chose to do two-pass x264 encoding with mencoder. In this snippet we take the images from one hour – in this case midnight to 1AM on 2nd July 2014 – from /home/alan/Pictures/webcam/archive/2014/07/02/00 and make a video in /home/alan/Pictures/webcam/2014070200.avi and a final output in /home/alan/Videos/webcam/2014070200.avi which is the one I upload.

mencoder "mf:///home/alan/Pictures/webcam/archive/2014/07/02/00/*.jpg" -mf fps=60 -o /home/alan/Pictures/webcam/2014070200.avi -ovc x264 -x264encopts direct=auto:pass=1:turbo:bitrate=9600:bframes=1:me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 -vf scale=-1:-10,harddup

mencoder "mf:///home/alan/Pictures/webcam/archive/2014/07/02/00/*.jpg" -mf fps=60 -o /home/alan/Pictures/webcam/2014070200.avi -ovc x264 -x264encopts direct=auto:pass=2:bitrate=9600:frameref=5:bframes=1:me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 -vf scale=-1:-10,harddup -o /home/alan/Videos/webcam/2014070200.avi

Upload videos to YouTube
The project youtube-upload came in handy here. It’s pretty simple with a bunch of command line parameters – most of which should be pretty obvious – to upload to YouTube from the command line. Here’s a snippet with some credentials redacted.

python youtube_upload/youtube_upload.py --email=########## --password=########## --private --title="2014070200" --description="Time lapse of Farnborough sky at 00 on 02 07 2014" --category="Entertainment" --keywords="timelapse" /home/alan/Videos/webcam/2014070200.avi
I have set the videos all to be private for now, because I don’t want to spam any subscriber with a boring video of clouds every hour. If I find an interesting one I can make it public. I did consider making a second channel, but the youtube-upload script (or rather the YouTube API) doesn’t seem to support specifying a different channel from the default one. So I’d have to switch to a different channel by default to work around this, and then make them all public by default, maybe.
In addition YouTube sends me a patronising “Well done Alan” email whenever a video is uploaded, so I know when it breaks – I stop getting those mails.
This is easy: I just rm the /home/alan/Pictures/webcam/archive/2014/07/02/00 directory once the upload is done. I don’t bother to check if the video uploaded okay first, because if it fails to upload I still want to delete the pictures, or my disk will fill up. I already have the videos archived, so I can upload those later if the script breaks.

Automate it all
webcam is running constantly in a ‘screen’ window, so that part is easy. I could detect when it dies and re-spawn it, maybe – it has been known to crash now and then. I’ll get to that when it happens.
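(For the curious, “running constantly in a screen window” is nothing fancier than something like this – the session name is arbitrary:)

# Start webcam detached in a named screen session.
screen -dmS webcam webcam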
I created a cron job which runs at 10 mins past the hour, and collects all the images from the previous hour.

10 * * * * /home/alan/bin/encode_upload.sh
I learned the useful “1 hour ago” option to the GNU date command. This lets me pick up the images from the previous hour and deals with all the silly calculation to figure out what the previous hour was.
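In case it's useful, the incantation looks like this (matching the archive layout from the .webcamrc above):

# Path fragment for the previous hour, e.g. 2014/07/02/00
date -d "1 hour ago" +%Y/%m/%d/%H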
Here (on github) is the final script. Don’t laugh.