Planet HantsLUG

Steve Kemp: A transient home-directory?

Wed, 25/11/2015 - 14:00

For the past few years all my important work has been stored in git repositories. Thanks to the mr tool I have a single configuration file that allows me to pull/maintain a bunch of repositories with ease.
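
For anyone unfamiliar with mr, the configuration is an INI-style file of per-repository stanzas; a minimal sketch, with illustrative paths and repository URLs rather than Steve's actual setup:

# ~/.mrconfig
[dotfiles]
checkout = git clone dotfiles

[Repos/lumail]
checkout = git clone lumail

With that in place "mr checkout" clones anything that's missing, and "mr update" pulls every repository in one pass.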

Having recently wiped & reinstalled a pair of desktop systems I'm now wondering if I can switch to using a totally transient home-directory.

The basic intention is that:

  • Every time I login "rm -rf $HOME/*" will be executed.

I see only three problems with this:

  • Every time I login I'll have to reclone my "dotfiles", passwords, bookmarks, etc.
  • Some programs will need their configuration updated, post-login.
  • SSH key management will be a pain.

My dotfiles contain my bookmarks, passwords, etc. But they don't contain setup for GNOME, etc.

So there might be some configuration that will become annoying - for example I like "Ctrl-Alt-t" to open a new gnome-terminal window, and that has to be configured the first time I login to each new system.

My images/videos/books are all stored beneath /srv and not in my home directory - so the only thing I'll be losing is program configuration, caches, and similar.

Ideally I'd be using a smartcard for my SSH keys - but I don't have one - so for the moment I might just have to rsync them into place, but that's grossly bad.
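
A login hook for this might look something like the following - entirely hypothetical, since the post doesn't show an implementation, and the repository URL, backup host, and setup script are placeholders:

#!/bin/sh
# wipe the home-directory, then restore the essentials
rm -rf "$HOME"/*
git clone "$HOME/dotfiles"
"$HOME/dotfiles/install"   # hypothetical setup script
# grossly bad, as noted, but workable until a smartcard arrives:
rsync -a backuphost:.ssh/ "$HOME/.ssh/"
chmod 700 "$HOME/.ssh"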

It'll be interesting to see how well this works out, but I see a potential gain in portability and discipline at the very least.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Wombat Upgrade

Wed, 18/11/2015 - 22:21

Today the order went in for a major rebuild of Wombat. Some parts will remain from the original, but overall most of the system will be replaced with more modern parts:

  • The new CPU has double the core count, higher clock speed and better features. It should be faster under both single and multi-threaded use. It should also use less electricity and be cooler.
  • The new GPU should be much faster, it's on a faster bus, and it has proprietary driver support (if required).
  • The new SATA controller is more modern and should be much faster; the current controller is an older generation than the hard disk itself, so it holds the disk back.
  • The RAM is much faster - two generations faster and there is four times as much of it.

Overall it should be faster, use less electricity, and be thermally cooler. It won't be as fast as my desktop, but it should be noticeably faster and my better half should be happy enough - especially as I shouldn't have to touch the data on the hard disk, which was only recently reinstalled.

Categories: LUG Community Blogs

Steve Kemp: lumail2 nears another release

Mon, 16/11/2015 - 18:35

I'm pleased with the way that Lumail2 development is proceeding, and it is reaching a point where there will be a second source-release.

I've made a lot of changes to the repository recently, and most of them boil down to moving code from the C++ side of the application, over to the Lua side.

This morning, for example, I updated the handling of index.limit to be entirely Lua based.

When you open a Maildir folder you see the list of messages it contains, as you would expect.

The notion of the index.limit is that you can limit the messages displayed, for example:

  • See all messages: Config:set( "index.limit", "all")
  • See only new/unread messages: Config:set( "index.limit", "new")
  • See only messages which arrived today: Config:set( "index.limit", "today")
  • See only messages which contain "Steve" in their formatted version: Config:set( "index.limit", "steve")

These are just examples that are present as defaults, but they give an idea of how things can work. I guess it isn't so different to Mutt's "limit" facilities - but thanks to the dynamic Lua nature of the application you can add your own with relative ease.

One of the biggest changes, recently, was the ability to display coloured text! That was always possible before, but a single line could only be one colour. Now colours can be mixed within a line, so this works as you might imagine:

Panel:append( "$[RED]This is red, $[GREEN]green, $[WHITE]white, and $[CYAN]cyan!" )

Other changes include a persistent cache of "stuff", which is Lua-based, the inclusion of at least one luarocks library to parse Date: headers, and a simple API for all our objects.

All good stuff. Perhaps time for a break in the next few weeks, but right now I think I'm making useful updates every other evening or so.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Cold

Sat, 14/11/2015 - 14:58

I'm starting to get close to my target weight - only 3 kg to go according to BMI or a few centimetres according to height to waist ratio. Sadly for the past month I've not been able to get as much exercise as normal and I've hit another weight plateau.

Another strange phenomenon I'm experiencing is being cold. Normally even in mid winter I'm fine at work or home with short sleeved shirts and only on the coldest days do I need to wear a T-shirt under a regular shirt. In fact I've not worn or owned a proper vest for decades.

Last weekend I felt cold at home and put a jumper on. Something I wouldn't normally do in autumn. On Monday at work I felt cold but it was such a strange sensation that I didn't even recognise it. On Tuesday I went into M&S and bought a vest! Human fat doesn't provide the same level of insulation that blubber does for seals and whales, but it does provide some, and more importantly on a diet the body down-regulates thermogenesis to save energy. As I've been starving myself for 9 months to lose weight, it's hardly surprising that my body has decided that keeping warm isn't that important when I've shed over 21 kg.

Today isn't that cold - but it is pretty miserable. I've now got a vest, shirt, thin fleece and thick fleece gilet on. I even raised the temperature on the central heating thermostat up by 2°C...!

Categories: LUG Community Blogs

Andy Smith: Supermicro IPMI remote console on Ubuntu 14.04 through SSH tunnel

Fri, 13/11/2015 - 05:04

I normally don’t like using the web interface of Supermicro IPMI because it’s extremely clunky, unintuitive and uses Java in some places.

The other day however I needed to look at the console of a machine which had been left running Memtest86+. You can make Memtest86+ output to serial which is generally preferred for looking at it remotely, but this wasn’t run in that mode so was outputting nothing to the IPMI serial-over-LAN. I would have to use the Java remote console viewer.

As an added wrinkle, the IPMI network interfaces are on a network that I can’t access except through an SSH jump host.

So, I just gave it a go without doing anything special other than launching an SSH tunnel:

$ ssh me@jumphost -L127.0.0.1:1443: -N

This tunnels my localhost port 1443 to port 443 of the IPMI as available from the jump host. Local port 1443 is used because binding low ports requires root privileges.

This allowed me to log in to the web interface of the IPMI at https://localhost:1443/, though it kept putting up a dialog which said I needed a newer JDK. Going to “Remote Control / Console Redirection” attempted to download a JNLP file and then said it failed to download.

This was with openjdk-7-jre and icedtea-7-plugin installed.

I decided maybe it would work better if I installed the Oracle Java 8 stuff (ugh). That was made easy by following these simple instructions. That’s an Ubuntu PPA which does everything for you, after you ~~agree that you are a bad person who should feel bad~~ accept the license.

This time things got a little further, but still failed saying it couldn’t download a JAR file. I noticed that it was trying to download the JAR from the IPMI’s address on port 443 even though my tunnel was on port 1443.

I eventually did get the remote console viewer to work but I’m not 100% convinced it was because I switched to Oracle Java.

So, basic networking issue here. Maybe it really needs port 443?

Okay, ran SSH as root so it could bind port 443. Got a bit further but now says “connection failed” with no diagnostics as to exactly what connection had failed. Still, gut instinct was that this was the remote console app having started but not having access to some port it needed.

Okay, ran SSH as a SOCKS proxy instead, set the SOCKS proxy in my browser. Same problem.
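
For reference, the SOCKS variant needs only one flag - the local port here is arbitrary - with the browser then pointed at localhost:1080 as a SOCKS proxy:

$ ssh -D 1080 me@jumphost -N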

Did a search to see what ports the Supermicro remote console needs. Tried a new SSH command:

$ sudo ssh me@jumphost \
    -L127.0.0.1:443: \
    -L127.0.0.1:5900: \
    -L127.0.0.1:5901: \
    -L127.0.0.1:5120: \
    -L127.0.0.1:5123: -N

Apart from a few popup dialogs complaining about “MalformedURLException: unknown protocol: socket” (wtf?), this now appears to work.

Categories: LUG Community Blogs

Debian Bits: New Debian Developers and Maintainers (September and October 2015)

Wed, 11/11/2015 - 21:35

The following contributors got their Debian Developer accounts in the last two months:

  • ChangZhuo Chen (czchen)
  • Eugene Zhukov (eugene)
  • Hugo Lefeuvre (hle)
  • Milan Kupcevic (milan)
  • Timo Weingärtner (tiwe)
  • Uwe Kleine-König (ukleinek)
  • Bernhard Schmidt (berni)
  • Stein Magnus Jodal (jodal)
  • Prach Pongpanich (prach)
  • Markus Koschany (apo)
  • Andy Simpkins (rattustrattus)

The following contributors were added as Debian Maintainers in the last two months:

  • Miguel A. Colón Vélez
  • Afif Elghraoui
  • Bastien Roucariès
  • Carsten Schoenert
  • Tomasz Nitecki
  • Christoph Ulrich Scholler
  • Mechtilde Stehmann
  • Alexandre Viau
  • Daniele Tricoli
  • Russell Sim
  • Benda Xu
  • Andrew Kelley
  • Ivan Udovichenko
  • Shih-Yuan Lee
  • Edward Betts
  • Punit Agrawal
  • Andreas Boll
  • Dave Hibberd
  • Alexandre Detiste
  • Marcio de Souza Oliveira
  • Andrew Ayer
  • Alf Gaida


Categories: LUG Community Blogs

Andy Smith: Linux Software RAID and drive timeouts

Mon, 09/11/2015 - 09:06
All the RAIDs are breaking

I feel like I’ve been seeing a lot more threads on the linux-raid mailing list recently where people’s arrays have broken, they need help putting them back together (because they aren’t familiar with what to do in that situation), and it turns out that there’s nothing much wrong with the devices in question other than device timeouts.

When I say “a lot”, I mean, “more than I used to.”

I think the reason for the increase in failures may be that HDD vendors have been busy segregating their products into “desktop” and “RAID” editions in a somewhat arbitrary fashion, by removing features from the “desktop” editions in the drive firmware. One of the features that today’s consumer desktop drives tend to entirely lack is configurable error timeouts, also known as SCTERC, also known as TLER.


If you use redundant storage but may be using non-RAID drives, you absolutely must check them for configurable timeout support. If they don’t have it then you must increase your storage driver’s timeout to compensate, otherwise you risk data loss.

How do storage timeouts work, and when are they a factor?

When the operating system asks a drive to read from or write to a particular sector and the drive fails to do so, the drive keeps trying, and does nothing else while it is trying. An HDD that either does not have configurable timeouts or that has them disabled will keep doing this for quite a long time—minutes—and won’t be responding to any other command while it does that.

At some point Linux’s own timeouts will be exceeded and the Linux kernel will decide that there is something really wrong with the drive in question. It will try to reset it, and that will probably fail, because the drive will not be responding to the reset command. Linux will probably then reset the entire SATA or SCSI link and fail the IO request.

In a single drive situation (no RAID redundancy) it is probably a good thing that the drive tries really hard to get/set the data. If it really persists it just may work, and so there’s no data loss, and you are left under no illusion that your drive is really unwell and should be replaced soon.

In a multiple drive software RAID situation it’s a really bad thing. Linux MD will kick the drive out because as far as it is concerned it’s a drive that stopped responding to anything for several minutes. But why do you need to care? RAID is resilient, right? So a drive gets kicked out and added back again, it should be no problem.

Well, a lot of the time that’s true, but if you happen to hit another unreadable sector on some other drive while the array is degraded then you’ve got two drives kicked out, and so on. A bus / controller reset can also kick multiple drives out. It’s really easy to end up with an array that thinks it’s too damaged to function because of a relatively minor amount of unreadable sectors. RAID6 can’t help you here.

If you know what you’re doing you can still coerce such an array to assemble itself again and begin rebuilding, but if its component drives have long timeouts set then you may never be able to get it to rebuild fully!

What should happen in a RAID setup is that the drives give up quickly. In the case of a failed read, RAID just reads it from elsewhere and writes it back (causing a sector reallocation in the drive). The monthly scrub that Linux MD does catches these bad sectors before you have a bad time. You can monitor your reallocated sector count and know when a drive is going bad.
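
Both of those checks are easy to do by hand. A quick sketch - substitute your own md array and drive names:

# kick off a scrub manually (Debian's mdadm package does this monthly via cron):
echo check > /sys/block/md0/md/sync_action

# watch a drive's reallocated-sector count (SMART attribute 5):
smartctl -A /dev/sda | grep -i reallocated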

How to check/set drive timeouts

You can query the current timeout setting with smartctl like so:

# for drive in /sys/block/sd*; do drive="/dev/$(basename $drive)"; echo "$drive:"; smartctl -l scterc $drive; done

You hopefully end up with something like this:

/dev/sda:
smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke,

SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)

/dev/sdb:
smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke,

SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)

That’s a good result because it shows that configurable error timeouts (scterc) are supported, and the timeout is set to 70 all over. That’s in centiseconds, so it’s 7 seconds.

Consumer desktop drives from a few years ago might come back with something like this though:

SCT Error Recovery Control: Read: Disabled Write: Disabled

That would mean that the drive supports scterc, but does not enable it on power up. You will need to enable it yourself with smartctl again. Here’s how:

# smartctl -q errorsonly -l scterc,70,70 /dev/sda

That will be silent unless there is some error.

More modern consumer desktop drives probably won’t support scterc at all. They’ll look like this:

Warning: device does not support SCT Error Recovery Control command

Here you have no alternative but to tell Linux itself to expect this drive to take several minutes to recover from an error and please not aggressively reset it or its controller until at least that time has passed. 180 seconds has been found to be longer than any observed desktop drive will try for.

# echo 180 > /sys/block/sda/device/timeout

I’ve got a mix of drives that support scterc, some that have it disabled, and some that don’t support it. What now?

It’s not difficult to come up with a script that puts your drives into their optimal error-timeout state on each boot. Here’s a trivial example:

#!/bin/sh

for disk in `find /sys/block -maxdepth 1 -name 'sd*' | xargs -n 1 basename`
do
    smartctl -q errorsonly -l scterc,70,70 /dev/$disk

    if test $? -eq 4
    then
        echo "/dev/$disk doesn't support scterc, setting timeout to 180s" '/o\'
        echo 180 > /sys/block/$disk/device/timeout
    else
        echo "/dev/$disk supports scterc " '\o/'
    fi
done

If you call that from your system’s startup scripts (e.g. /etc/rc.local on Debian/Ubuntu) then it will try to set scterc to 7 seconds on every /dev/sd* block device. If it works, great. If it gets an error then it sets the device driver timeout to 180 seconds instead.

There are a couple of shortcomings with this approach, but I offer it here because it’s simple to understand.

It may do odd things if you have a /dev/sd* device that isn’t a real SATA/SCSI disk, for example if it’s iSCSI, or maybe some types of USB enclosure. If the drive is something that can be unplugged and plugged in again (like a USB or eSATA dock) then the drive may reset its scterc setting while unpowered and not get it back when re-plugged: the above script only runs at system boot time.

A more complete but more complex approach may be to get udev to do the work whenever it sees a new drive. That covers both boot time and any time one is plugged in. The smartctl project has had one of these scripts contributed. It looks very clever—for example it works out which devices are part of MD RAIDs—but I don’t use it yet myself as a simpler thing like the script above works for me.
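
For a flavour of the udev route, a rule can match each new sd* device and hand it to a helper - this is a hypothetical sketch, not the contributed script itself, and the paths are illustrative:

# /etc/udev/rules.d/60-scterc.rules
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/usr/local/bin/set-scterc %k"

Here /usr/local/bin/set-scterc would contain much the same logic as the boot-time script above, with %k expanding to the kernel device name (e.g. sda).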

What about hardware RAIDs?

A hardware RAID controller is going to set low timeouts on the drives itself, so as long as they support the feature you don’t have to worry about that.

If the support isn’t there in the drive then you may or may not be screwed: chances are that the RAID controller is going to be smarter about how it handles slow requests and just ignore the drive for a while. If you are unlucky though you will end up in a position where some of your drives need the setting changed but you can’t directly address them with smartctl. Some brands, e.g. 3ware/LSI, do allow smartctl interaction through a control device.

When using hardware RAID it would be a good idea to only buy drives that support scterc.

What about ZFS?

I don’t know anything about ZFS, and a quick look gives some conflicting advice.

Drives with scterc support don’t cost that much more, so I’d probably want to buy them and check it’s enabled if it were me.

What about btrfs?

As far as I can see btrfs does not kick drives out itself - it leaves that to Linux - so you’re probably not at risk of losing data.

If your drives do support scterc though then you’re still best off making sure it’s set as otherwise things will crawl to a halt at the first sign of trouble.

What about NAS devices?

The thing about these is, they’re quite often just low-end hardware running Linux and doing Linux software RAID under the covers - with the disadvantage that you maybe can’t log in to them and change their timeout settings. This post claims that a few NAS vendors say they have their own timeouts and ignore scterc.

So which drives support SCTERC/TLER and how much more do they cost?

I’m not going to list any here because the list will become out of date too quickly. It’s just something to bear in mind, check for, and take action over.

Fart fart fart

Comments along the lines of “Always use hardware RAID” or “always use $filesystem” will be replaced with “fart fart fart,” so if that’s what you feel the need to say you should probably just do so on Twitter instead, where I will only have the choice to read them in my head as “fart fart fart.”

Categories: LUG Community Blogs

Steve Kemp: lumail2 approaches readiness

Thu, 05/11/2015 - 21:52

So the work on lumail2 is going well, and already I can see that it is a good idea. The main reason for (re)writing it is to unify a lot of the previous ad-hoc primitives (i.e. lua functions) and to try and push as much of the code into Lua, and out of C++, as possible. This work is already paying off with the introduction of new display-modes and simpler implementation.

View modes are an important part of lumail, because it is a modal mail-client. You're always in one mode:

  • maildir-mode
    • Shows you lists of Maildir-folders.
  • index-mode
    • Shows you lists of messages inside the maildir you selected.
  • message-mode
    • Shows you a single message.

This is nothing new, but there are two new modes:

  • attachment-mode
    • Shows you the attachments associated with the current message.
  • lua-mode
    • Shows you your configuration-settings and trivia.

Each of these modes draws lines of text on the screen, and those lines consist of things that Lua generated. So there is a direct mapping:

Mode        Lua function
maildir     function maildir_view()
index       function index_view()
message     function message_view()
lua         function lua_view()

With that in mind it is possible to write a function to scroll to the next line containing a pattern like so:

function find()
    local pattern = Screen:get_line( "Search for:" )

    -- Get the global mode.
    local mode = Config:get("global.mode")

    -- Use that to get the lines we're currently displaying
    loadstring( "out = " .. mode .. "_view()" )()

    -- At this point "out" is a table containing lines that
    -- the current mode wishes to display.

    -- .. do searching here.
end

Thus the whole thing is dynamic and mode-agnostic.

The other big change is pushing things to Lua. So replying to an email - populating the new message, appending your ~/.signature - is handled by Lua, as is forwarding a message or composing a new mail.

The downside is that the configuration-file is now almost 1000 lines long, thanks to the many little function definitions, and key-binding setup.

At this rate the first test-release will be out at the weekend, but the API documentation and sample configuration file might make interesting reading until then.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: New Wombat

Tue, 03/11/2015 - 09:35

I realised that one of my desktop systems is now over a decade old. I bought it in 2005 and it's starting to show its age. The case and DVD drives are fine, the PSU and the hard disk have been upgraded mid-life, and there is nothing wrong with the display, keyboard and mouse. The problem is that the CPU is under-powered, there isn't enough RAM, and the graphics subsystem is slow and no longer up to the job.

I've considered several options, but I think the least disruptive is a modern motherboard and faster current-generation CPU. I can keep the case and most of the rest of the parts and just do a heart transplant.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Blood pressure and weight

Sat, 31/10/2015 - 16:48

Since discovering I have elevated blood pressure and average but unhealthy levels of blood lipids at the start of this year I've made some small changes to my lifestyle. My diet was fundamentally sound but I have tweaked it a little and reduced the amount of it.

To date I've lost just over 21 kg at an average of 83 g per day, or in medieval units: 3 stones and 5 pounds at a rate of 3 ounces per day. It's important that the rate is slow so you don't stress your body and the weight loss should be sustainable as a result.

The last blood tests showed that my blood lipids had shifted drastically and are now very healthy, so my diet is worth sticking with. Yesterday I spoke to my GP, and after analysing a 5-day set of home blood pressure readings he's decided to stop the blood pressure medication.

I probably do have inherited high blood pressure, but my excess weight brought it on early, and as long as I monitor it weekly we should be able to start medication again if it starts to rise.

My hybrid diet seems to have worked very well. To be honest I've only really tweaked what I was already eating, but the tweaks were worth the effort and I'll be sticking to the diet for ever now. I will be allowed a few extra calories per day - I do want to stop losing weight eventually!

The only real inconvenience has been having to replace most of my clothes - and, as I've now shrunk down so much, finding local shops that sell men's clothing small enough not to look funny on me!

Categories: LUG Community Blogs

Steve Kemp: It begins - a new mail client, with lua scripting

Mon, 26/10/2015 - 22:01

Once upon a time I wrote a mail-client, which worked in the console directly via Maildir manipulation.

My mail client was written in C++, and used Lua for scripting, unlike clients such as mutt, alpine, and similar alternatives which don't have complete scripting support.

I've pondered several times whether to restart this project, and now I think it is the right thing to do.

The original lumail client has a rich API, but it is very ad-hoc and random. Functions were added where they seemed like a good idea, but with no real planning, and although there are grouped functions that operate similarly there isn't a lot of consistency. The implementation is clean in places, elegant in others, and horrid in yet more parts.

This time round everything is an object, accessible to Lua, with Lua, and for Lua. This time round all the drawing-magic is written in Lua.

So to display a list of Maildirs I create a bunch of objects, one for each Maildir, and then the Lua function Maildir.to_string is called. That function looks like this:

--
-- This method returns the text which is displayed when a maildir is
-- to be shown in maildir-mode.
--
function Maildir.to_string(self)
    local total  = self:total_messages()
    local unread = self:unread_messages()
    local path   = self:path()

    local output = string.format( "[%05d / %05d] - %s", unread, total, path );

    if ( unread > 0 ) then
        output = "$[RED]" .. output
    end

    if ( string.find( output, "Automated." ) ) then
        output = strip_colour( output )
        output = "$[YELLOW]" .. output
    end

    return output
end

The end result is something that looks like this:

[00001 / 00010 ] -
[00000 / 00023 ] - Automated.root

The formatting can thus be adjusted clearly, easily, and without hacking the core of the client - providing I implement the appropriate methods on the Maildir object.

It's still work in progress. You can view maildirs, view indexes, and view messages. You cannot reply, forward, or scroll properly. That said the hard part is done now, and I'm reasonably happy with it.

The sample configuration file is a bit verbose, but a good demonstration regardless.

See the code, if you wish, online here:

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Lipids

Fri, 23/10/2015 - 08:05

Early this year I went to see my GP. I had a strange pain on my left side that I had been aware of for a few months and had become concerned about as it hadn't gone away. My GP asked the obvious questions but could make no sense of it, so he ordered a battery of tests. Two things came out of the tests: elevated low-density lipoprotein (LDL) levels in my blood - not high enough for medication, but best reduced - and high blood pressure, which did need urgent medication...

I spent most of the summer having more tests: being prodded, irradiated, magnetised, and having more blood drawn (I think doctors still like their leeches...). I also had some pretty horrible alpha/beta blockers to lower my blood pressure.

No one mentioned it but I decided that I should lose some weight - I knew I was overweight, so removed extras from my diet, getting rid of excess salt, fat and sugar. I didn't actually have a bad diet, but I clearly did have too much of it.

Lowering my weight and improving the quality of my diet has reduced my systolic blood pressure by over 1 mmHg per kilo, or about 22 mmHg in total, and my diastolic blood pressure by slightly less, about 18 mmHg. That's pretty good and has drastically fewer side-effects than any medication.

Yesterday I had the result of my lipid tests. My total lipoprotein levels have fallen from over 5 mmol/l to just over 3 mmol/l. They weren't dangerously high, but they were not good, now they are very healthy. At the same time the level of my high-density lipoprotein (HDL) have risen from around 1 mmol/l to 1.8 mmol/l, changing my total cholesterol to HDL ratio from just over 5 (not so good) to less than 2 (very good).

As I've previously said my diet wasn't too bad when I started this change, and I already had plenty of exercise. What I did was trim the extras and snacks, curbed the salt and caffeine, added nuts and more soluble fibre, substituted some of the milk with soya milk and some of my yoghurt with stanol-containing yoghurt. Finally I just watched my portion sizes. Nothing radical overall.

The pain? It went away on its own, still no explanation...! I'm still on calcium channel blockers for my blood pressure, but my GP thinks if I can stick to my diet/activity levels I may be able to come off them too!

Categories: LUG Community Blogs

Steve Kemp: Robbing Peter to pay Paul, or location spoofing via DNS

Sat, 17/10/2015 - 08:57

I rarely watched TV online when I was located in the UK, but now that I've moved to Finland, with its appalling local TV choices, it has become more common.

The biggest problem with trying to watch BBC's iPlayer, and similar services, is the location restrictions.

Not a huge problem though:

  • Rent a virtual machine.
  • Configure an OpenVPN server on it.
  • Connect from $current-country to it.

The next part is the harder one - making your traffic pass over the VPN. If you were simple you'd just say "Send everything over the VPN". But that would slow down local traffic, so instead you have to use trickery.

My approach was just to run a series of routing additions, similar to this (except I did it in the openvpn configuration, via pushed-routes):

ip -4 route add .... dev tun0
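
In OpenVPN server terms each of those becomes a pushed route; a sketch, with a made-up netblock rather than a real iPlayer address range:

push "route"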

This works, but it is a pain as you have to add more and more routes. The simpler solution which I switched to after a while was just configuring mitmproxy on the remote OpenVPN end-point, and then configuring that in the browser. With that in use all your traffic goes over the VPN link, if you enable the proxy in your browser, but nothing else will.

I've got a network device on-order, which will let me watch netflix, etc, from my TV, and I'm led to believe this won't let you set up proxies, or similar, for region-bypass.

It occurs to me that I can configure my router to give out bogus DNS responses - when the device looks up a streaming service's hostname it can be handed the address of the remote host running the proxy instead.

I imagine this will be nice and simple, and thought I was being clever:

  • Remote OpenVPN server.
  • MITM proxy on remote VPN-host
    • Which is basically a transparent HTTP/HTTPS proxy.
  • Route traffic to it via DNS.
    • e.g. For any DNS request, if it ends in the target service's domain, return the address of the proxy host.

Because I can handle DNS-magic on the router I can essentially spoof my location for all the devices on the internal LAN, which is a good thing.
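
If the router runs dnsmasq - an assumption on my part, the post doesn't say - the whole trick is a single line per domain; the address here is a documentation-range placeholder for the VPN endpoint running the transparent proxy:

# /etc/dnsmasq.conf: answer for and everything beneath it

Every device that uses the router for DNS then gets the proxied route for just those domains, with no per-device configuration.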

Anyway I was reasonably pleased with the idea of using DNS to route traffic over the VPN, in combination with a transparent proxy. I was even going to blog about it, and say "Hey! This is a cool idea I've never heard of before".

Instead I did a quick google(.fi) and discovered that there are companies offering this as a service. They don't mention the proxying bit, but it's clearly what they're doing - for example OverPlay's SmartDNS.

So in conclusion I can keep my current setup, or I can use the income I receive from DNS hosting to pay for SmartDNS, or other DNS-based location-fakers.

Regardless. DNS. VPN. Good combination. Try it if you get bored.

Categories: LUG Community Blogs

Steve Kemp: So about that idea of using ssh-keygen on untrusted input?

Mon, 12/10/2015 - 10:40

My previous blog post related to using ssh-keygen to generate fingerprints from SSH public keys.

At the back of my mind was the fear that running the command against untrusted, user-supplied, keys might be a bad plan. So I figured I'd do some fuzzing to reassure myself.

The most excellent LWN recently published a piece on Fuzzing with american fuzzy lop, so with that to guide me I generated a pair of SSH public keys, and set to work.
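
The setup for this kind of fuzzing is pleasingly small. A sketch of the sort of invocation involved - assuming ssh-keygen has been rebuilt with afl's instrumenting compiler, and with illustrative directory names:

# corpus/ holds the two freshly-generated public keys
afl-fuzz -i corpus -o findings -- ./ssh-keygen -l -f @@

afl-fuzz mutates the corpus, substitutes each mutated file for @@, and saves anything that crashes the target under findings/crashes/.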

Two days later I found an SSH public key that would make ssh-keygen segfault, and equally the SSH client (same parser), so that was a shock.

The good news is that my Perl module to fingerprint keys is used like so:

my $helper = SSHKey::Fingerprint->new( key => "ssh ...." );
if ( $helper->valid() ) {
    my $fingerprint = $helper->fingerprint();
    ...
}

The validity-test catches my bogus key, so in my personal use-cases this is OK. That said it's a surprise to see this:

skx@shelob ~ $ ssh -i
Segmentation fault

Similarly running "ssh-keygen -l -f ~/" results in an identical segfault.

In practice this is a low-risk issue, hence mentioning it and filing the bug-report publicly, even if code execution is possible. Because in practice how many times do people fingerprint keys from unknown sources, except for things like github's key-management page?

Some people probably do it, but I assume they do it infrequently and only after some minimal checking.

Anyway we'll say this is my first security issue of 2015, we'll call it #roadhouse, and we'll get right on trademarking the term, designing the logo, and selling out for all the filthy filthy lucre ;)

Categories: LUG Community Blogs

Alan Pope: Troubleshooting as a Choose Your Own Adventure

Sun, 11/10/2015 - 07:00

We have a lot of documentation and help in the Ubuntu project, and much of it is quite hostile to new users. We have IRC channels, mailing lists, dense & out of date wiki pages, lengthy and hard to consume forum posts & lengthy out of date tutorial videos. We also have some more modern tools such as AskUbuntu and Discourse.

Most are good for asking one specific question, but aren’t well suited to guiding a user through diagnosing a specific problem. If you know what questions to ask, then a search engine might find part of the problem, or hopefully part of the solution.

However one aspect we don’t cover very well is guided self-support. This is very apparent in distribution upgrades. People often give up completely when an upgrade breaks down, rather than work through the problem as they would with anything else. There can be many reasons for this of course, but from what I’ve seen in the community, fixing a broken upgraded system is hard, and made harder when you only have a black screen or a tty to work with and you’re not an expert.


I’m thinking very specifically about a target audience of non-technical users. Someone who uses Ubuntu but wants to feel empowered to fix a problem themselves, and not have to call their wife or daughter or other ‘expert’ to fix it. I want someone to find a guide which gets them out of the upgrade issue. Obviously I’d like us to fix the problem which cause the issues, but I want to start here, because frankly it’s easier for me.


While I can hear some of my co-workers shouting “But popey, Snappy Core fixes the issue of broken updates!”, no it doesn’t, not today, and not while there are people running the traditional Debian based desktop – which I suspect will be for a good few years yet.


I can also hear those of you saying “I never do upgrades, I always clean install”, and I’m happy for you, but we have an upgrade system, people should be confident that it works, and when it doesn’t it should be fixable without going nuclear. Just the same as “Buy a new TV” isn’t usually the solution to “My TV Remote stopped working”.


In my mind a user might be more inclined to fix their system if there’s an easy-to-use ‘expert system’ which they can walk through to get them working again. I wondered about this some time back and considered the idea of a Choose Your Own Adventure style of troubleshooting issues, rather than just punting people to IRC when their system breaks. We have been helping people with broken upgrades for years; surely we can amass that technical knowledge into a self-service system to help future users.

One assumption I am making with this is that the person going through this is able to access the Internet (and thus this guide) via another machine or their phone. This seems semi-reasonable as we often get people in IRC or on other support resources asking for help with a broken upgrade. So the guide should work well on any browser and ideally on mobile too.

There are of course unfortunate people who have only one computer and no smartphone, and so have no way to access this resource easily. This system doesn’t cater well for them.


I made a little prototype using Twine, which is “an open-source tool for telling interactive, nonlinear stories” – typically text adventure or “Interactive Fiction” games, but it can be used for other things too. I chose Twine because it’s open source, easy to use, cross platform and simple. You can even create Twine ‘stories’ directly in your browser. Neat!

The output generated by Twine is HTML and can be customised and styled with a stylesheet. I did one simple bit of styling in that I used the Ubuntu font :).


Here’s what the map of the ‘world’ looks like inside Twine (after I carefully moved the blocks around so none criss-cross):-

Editing the pages within the ‘story’ is super easy, linking to other pages is as simple as typing the name of a new page in square brackets:-

You can try out the very early prototype I made at Obviously this isn’t finished, won’t actually help anyone fix their system, and is hosted somewhere obscure, all “TODO” items.


There’s a few known issues with the above prototype twine:-

  • Browser back button exits the story, it should go back a step
  • Twine back button is hard to see, some CSS will fix that
  • Many of the pages are quite wordy, or technical, probably could be simplified

I wanted to share what I’d done to see if it seemed like a good idea, and whether people might be interested in helping create and curate content for it. The idea is to make a prototype and get some content rather than make the thing look especially pretty right now. The way I see it there’s some important steps to do:-

  • Confirm this isn’t a stupid idea (this blog post)
  • Figure out the best way to distribute this and make it accessible
  • Find people interested in helping
  • Identify a bunch of specific breakages that happen in Ubuntu upgrades
  • Craft diagnostic steps to identify one breakage scenario from another
  • Come up with robust solutions for each scenario
  • Test the scenarios
  • Publish and publicise the work

Things I haven’t yet considered, but probably should:-

  • Translations
  • Accessibility
  • Co-ordinating work
  • How far back I’ll roll my eyes at people telling me rolling distributions are better

Comments, suggestions or flames are welcome here or in my inbox.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Why you are heavier than you think...

Thu, 08/10/2015 - 21:51

Today I went into my local M&S to try on trousers. In the casual section the smallest available size is marked as 32" (81 cm in real units). They are way too large for me and would fall down without a belt or braces. They apparently make some ranges in 28" and 30", but only stock them at the larger stores. As I've mentioned before they are all about 2" larger than they say, so I was actually trying on a pair of 34" trousers - which I already know are too large.

The point is: if you think you haven't put on any weight because you are wearing the same size of trousers that you did a decade or more ago, then you are sadly mistaken - in the interval your trousers have got bigger, like you have, but the sizes have lied to accommodate the change.

I used to wear 34" trousers at university and went to 36" shortly after. My most recent trousers were 36" and so I assumed (in error) that I had not put on weight over the years. As it happens I had, and the vanity sizing had misled me. In fact if it were not for vanity sizing I would have noticed this earlier and might have done something about it earlier...

Vanity sizing doesn't seem to apply to French clothing from Decathlon yet, but seems to be widespread in the UK market.

Categories: LUG Community Blogs

Steve Kemp: Generating fingerprints from SSH keys

Wed, 07/10/2015 - 10:56

I've been allowing users to upload SSH public-keys, and displaying them online in a form. Displaying an SSH public key is a pain, because such keys are typically long. That means you need to wrap them, or truncate them, or you introduce a horizontal scroll-bar.

So rather than displaying them I figured I'd generate a fingerprint when the key was uploaded and show that instead - this is exactly how github shows your ssh-keys.

Annoyingly there is only one reasonable way to get a fingerprint from a key (sketched in shell below):

  • Write it to a temporary file.
  • Run "ssh-keygen -lf temporary/file/name".

You can sometimes calculate the fingerprint via more direct, but less obvious, methods:

awk '{print $2}' ~/.ssh/ | base64 -d | md5sum

But that won't work for all key-types.

It is interesting to look at the various key-types which are available these days:

mkdir ~/ssh/
cd ~/ssh/
for i in dsa ecdsa ed25519 rsa rsa1 ; do
    ssh-keygen -P "" -t $i -f ${i}-key
done

I've never seen an ed25519 key in the wild. It looks like this:

$ cat ~/ssh/
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMcT04t6UpewqQHWI4gfyBpP/ueSjbcGEze22vdlq0mW skx@shelob

Similarly curve-based keys are short too, but not as short:

ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLTJ5+ \ rWoq5cNcjXdhzRiEK3Yq6tFSYr4DBsqkRI0ZqJdb+7RxbhJYUOq5jsBlHUzktYhOahEDlc9Lezz3ZUqXg= skx@shelob

Remember what I said about wrapping? Ha!

Anyway for the moment I've hacked up a simple perl module SSH::Key::Fingerprint which will accept a public key and return the fingerprint, as well as validating the key is well-formed and of a known-type. I might make it public in the future, but I think the name is all wrong.

The only code I could easily find to do a similar job is this node.js package, but it doesn't work on all key-types. Shame.

And that concludes this week's super-happy fun-time TODO-list item.

Categories: LUG Community Blogs

Philip Stubbs: Gear profile generator

Mon, 05/10/2015 - 10:32
Having been inspired by the gear generator found at I decided to have a go at doing this myself.

Some time ago, I had tried to do this in Java as a learning exercise. I only got so far and gave up before I managed to generate any involute curves required for the tooth profile. Trying to learn Java and the math required at the same time was probably too much and it got put aside.

Recently I had a look at the Go programming language. Then Matthias Wandel produced the page mentioned above, and I decided to have another crack at drawing gears.

The results so far can be seen on Github, and an example is shown here.

What I have learnt
  • Math makes my head hurt.
  • The Go programming language fits the way my brain works better than most other languages. I much prefer it to Java, and will try and see if I can tackle other problems with it, just for fun.
Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Green gage chutney

Sat, 03/10/2015 - 09:20

Yesterday we made two batches of chutney: one was green gage and apple plus lots of spice, and the second was just green gage based and less spicy. We have now used up all our green gages from this year - at long last...

Categories: LUG Community Blogs

Alan Pope: DevRelCon 2015 Trip Report

Fri, 02/10/2015 - 14:00

Huh, this turned out to be longer than I expected. Don’t feel obliged to read it, it’s more notes for myself, and to remind me of why I liked the event.


On Wednesday I went to DevRelCon in London. DevRelCon is “a one day single track conference for technical evangelists, developer advocates and anyone interested in developer relations” set up by Matthew Revell. I don’t think there’s a lot of difference between my role (defined as Community Manager) at Canonical and Developer Relations, so figured it would probably have appropriate content for my role. Boy was I right!

DevRelCon was easily the single most valuable short conference I’ve ever attended. The speakers were knowledgeable, friendly and accessible, and easy to understand. I took a ton of notes, and will distil some of them down here, but will almost certainly keep referring back to them over the coming months as I look to implement some of the suggestions I heard.


The event took place at The Trampery Old Street, in Shoreditch, the trendy/hipster part of London. We had access to a bright and airy ‘ballroom’ and were served with regular drinks, snacks and a light lunch. Free WiFi was also available, which worked well, but I didn’t use it much as we had little time away from the talks.


The day consisted of a mix of long (40 minute) talks, some shorter (20 min) ones, and a few ‘lightning’ talks. Having a mix of durations worked well I think. We started a little late, but Matthew massaged the timetable to claw back some time, and as it was a single track day there was no real issue if things didn’t run to time, as you weren’t likely to run off to another talk, and miss something.

All the talks were great, but I took considerably more notes in some than others, so this is represented below in that I haven’t listed every talk.

Morning Talks

Rob Spectre, Twilio – Scaling Developer Evangelism.

This started off well as Rob plugged in his laptop and we were greeted with an Ubuntu desktop! He started off detailing some interesting stats to focus our minds on who we’re evangelising to. Starting with the 18.2m developers worldwide, given ~3Bn smartphone users, and ~4Bn Internet users that means ~0.08% have the capability to write code. There’s a 6% year on year increase in developers, mostly in developing nations, the ratio is less in the western world. So for example India could overtake every other countries’ developer count by ~2017.

Rob talked at length about the structure of Developer Evangelists, Developer Educators and the Community Team at Twilio. The talk continued to outline how valuable developers are, how at Twilio their Developer Evangelists are the ‘Red Carpet’ to their community. I was struck by how very differently we (Open Source projects) and Ubuntu specifically treat contributors to the project.

There was also a section on running developer events, and Rob spent some time talking about strategies for successful events, and how those can feed back to improve your product. He also talked a little about measurement, which was also going to be covered in later talks that day.

Another useful anecdote Rob detailed was regarding conversion of talks into blog posts. While a talk at an event can catalyse the 20-100 people in the room, converting that into a detailed tutorial blog post can bring in hundreds or thousands more.

The final slide in Rob’s talk was “Would you recommend this talk?” with a phone number attendees could send a score to. I thought this was a particularly cunning strategy. There was also talk of using the external IP address of the venue WiFi as one factor to determine the effectiveness / conversion rate of attendees.

Cristiano Betta, Braintree – Tooling your way to a great devrel team

Cristiano started off talking about BattleHack which I’d not heard of. These are in person events where teams of developers get 24 hours to work on a project fuelled by coffee, cake and Red Bull to be in with a chance of winning a cash prize and an amusing axe.

He then went on to talk about a personal project to manage event sign-ups. This replaces tools like Eventbrite and MailChimp and enables Cristiano to get a better handle on the success of his events.

Laura Cowen, IBM – Building a developer community in an enterprise world

Laura started off giving some history of the products and groups inside IBM who are responsible for WAS, the public-facing developer sites, and the struggles she’s had updating them.

The interesting parts for me came when Laura was detailing the pain she had getting developer time to update documentation and engage with users and communities outside their own four walls. Laura also talked about the difficulty when interfacing developers and marketing, their differing goals and some strategies for coping.

I recognised for example the frustration in people wanting to publish everything on a developer site, whether it’s appropriate to the target audience or not. Sometimes we (in Ubuntu) fail to deeply consider the target audience before we publish articles, guides or documentation. I think we can do better here. Pushing back on content creators, and finding the right place for a published article is worth it, if the target audience is to be defended.

Lightning Talks

Shaunak Kashyap, elastic – Getting the measure of DevRel

In this short talk Shaunak gave some interesting snippets on how elastic measure community engagement. I found a couple interesting enough that I felt we might use them in Ubuntu. One is measuring “time to first response” for questions and issues by looking for responses from someone other than the original poster. While I don’t think they were actively using this data yet, getting an initial base line would be useful.

Shaunak also detailed one factor in measuring meet-up effectiveness. Typically elastic have 3-4 meet-ups a week, globally. For each meet-up group they measured “time since last meet-up”. For those where there was a long delta between one meet-up and the next they would consider taking action. This could be contacting the group to see if there are issues, offering assistance, swag & ‘meet up in a box’ kits, and finally disbanding the group if there wasn’t sufficient critical mass.

I took away a few good ideas from this talk, especially given recent conversations in Ubuntu about sparking up more meet-ups.

Phil Leggetter, Pusher – ROI on DevRel

Phil kicked off his short talk on the ROI of DevRel by explaining Acquisition vs Activation, where Acquisition of new developers might be them signing up for an account or downloading a product/sdk/library, and Activation would be the conversion, which might be measured differently per product. So perhaps “purchased paid API key” or “submitted app with N downloads”.

Phil then moved on to talk a bit about how they can measure the effectiveness of online tutorials or blog articles by correlating sign ups with traffic coming from those online articles. There was some more discussion on this later on including the effectiveness of giving away vouchers/codes to incentivise downloads, with some disagreement on the results of doing so.

Afternoon Talks

Brandon West, SendGrid – Burnout

I’ve been to many talks and discussions about burnout in developer communities over the years. This talk from Brandon was easily the most useful, factual and actionable of them. I also enjoyed Brandon’s attempts to inject Britishness into his talk, which lightened the mood on a potentially very dark topic.

Brandon kicked off with a bit of a ‘woe is me’ #firstworldproblems introduction to his own current life issues - the usual things that affect a lot of people, but all happening at once, becoming overwhelming. We then moved on to defining burnout clearly, what types of people are likely to suffer (clue: anyone), and some strategies for recognising and preventing burnout.

A few key assertions / take-aways:-

“Burnout & depression are pathologically indistinguishable”

“Burnout and work engagement are not exclusive or correlatable”

“Those most likely to burnout believe they are least at risk”

“Learn a skill on holiday – the holiday will be more rewarding”

Tim Fogarty, Major League Hacking – Hackathons as a part of your DevRel strategy

Another great talk which built upon what Cristiano talked about earlier in the day – hackathons. Tim introduced different types of hackathons and which in his experience were more popular with developers and why.

Tim started by breaking down the types of hackathon – ‘hacking’, ‘corporate’ and ‘civic’ with the second being least popular as it’s seen as free labour by developers, and so they’re distrustful. He went on to reasons why people might run hackathons including evangelism, gathering (+ve and -ve) feedback, recruiting and mindshare (marketing).

He then moved on to strategies for making an impact, measuring the effect, sponsoring and how to craft the perfect demo to kick off the event.

Having never been to an in-person hackathon I found this another fascinating talk and will be following up with Tim later.

Jessica Rose – Stop talking about diversity and just do it

Well. This was enlightening. This talk was excellent, and covered two main topics. First the focus was on getting a more diverse set of people running / attending / talking at your event. Some strategies were discussed and Jessica highlighted where many people go terribly wrong, assumptions people make and excuses people give.

The second part was a conversation about the ways in which an event can cater for as many people as possible. Here’s a highlight of some of the ways we discussed, but this obviously doesn’t cover everything:-

  • Attendees and speakers should be able to get in under their own power
  • Meal choices should be available – possibly beyond vegetarian/vegan
  • Code of Conduct
  • Sign language for talks
  • Well-lit and safe-feeling route from venue to accommodation
  • Space for breastfeeding / pumping, with snacks / drinks nearby
  • Non boozy spaces
  • Prayer room
  • After party with low noise level – and covered by Code of Conduct
  • Childcare
  • Professional chaperones (for under-18s)
  • Diversity tickets & travel grants
  • Scale inclusivity to budget (be realistic about what you can achieve)

Lots to think about!

Joe Nash, Braintree – Engaging Students

Joe kicked off his fast-paced talk with an introduction to things which influenced how he got where he is, including “Twilio Heroes”. The talk was focussed on the UK University system, how to engage with students and some tips for running events which engage effectively with both CS and non-CS students.

James Milner, esri UK – So you want to run a meet-up

James talked about his personal experience running GeoDev Meet-Ups. I found this information quite valuable as the subject is under discussion in Ubuntu. James gave some great tips for running good meet-ups, and had a number of things he’s clearly learned the hard way. I hope to put some of his tips into action in the UK.

Dawn Foster – Lessons about community from science fiction.

This was a great uplifting talk to end the day. Dawn drew inspiration from her prolific science fiction reading to come up with some tips for people running community projects. I’ll give you a flavour with a few of them. Each was accompanied by an appropriate picture.

Picture: Star Trek Red Shirt
Lesson: “Participate and contribute in a way that people will notice and value your work”

Picture: Doctor Who TARDIS
Lesson: “Communities look different from the inside than they do to an outsider”

Picture: Ender’s Game
Lesson: “Age is often unknown, encourage young people to contribute”

Dawn is a thoughtful, entertaining and engaging speaker. I’d certainly like to see more of her talks.

After Party

We all left the venue after the last talk and went to a nearby trendy bar for a pint, then headed home, pretty exhausted. A great event, and I look forward to the next one.

Categories: LUG Community Blogs