Planet HantsLUG


Adam Trickett: Bog Roll: French Clothing Sizes

Sun, 27/09/2015 - 11:24

On Friday we spent several hours and quite a bit of money buying stuff in Decathlon. We needed some specific things and some things were on sale after the summer. Owing to my body shape change as a result of losing nearly 20 kg this year, a lot of my clothes - regular and sport - don't fit properly. Some I've been able to alter and some I can get away with, but some are now uncomfortable to wear or look absurd...

Decathlon design a lot of their own kit and then have it made all over the world. In that respect they are no different from many other companies, both British and foreign. What is striking, though, is that unlike British and American brands, the stated size is more often the actual size, rather than a vanity size. For example, to buy M&S or Next trousers I need to buy one size smaller than the quoted size or they fall down, but at Decathlon I just need to buy the correct size and they fit...

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Vanity Sizing

Wed, 16/09/2015 - 12:32

Owing to a reduction in my body mass I've been forced to buy new clothing. It's been an expensive process and annoying to replace otherwise perfectly usable clothes...

I happened to have some older clothes that were not used much as they had previously been a little on the small side. The good news is that I've been able to wear them more regularly, so I've avoided buying a few things!

One thing does annoy me. Unlike women's clothing, which uses strange numbers, most men's clothing uses simple actual measurements: a waist of x inches / y centimetres. It's a fairly straightforward system; you know your waist size and should be able to buy off-the-peg clothing without too much worry. However, it's not true any more! My older clothes, when measured with a tape, are the size the label says. The modern ones vary wildly - though they all seem to be much larger than the label says. For example:

  • 20-year-old pairs of M&S St. Michael Jeans: say 34", are 34"
  • 20-year-old pairs of BHS chinos: say 34", are 34"
  • Modern M&S relaxed fit chinos: say 36", are at least 38", more as they age
  • Modern Craghoppers Kiwi walking trousers: say 30", more like 33"

Some clothes, like jeans and those with elasticated bits, do get larger with wear and washing, but even so modern clothes seem to be at least one size (2"/5 cm) larger than the label says they should be. Older clothing from the same company is the size it says...

At the moment my real size is about a 33"/84 cm waist, which is a pain as British men's trousers don't come in odd sizes, so it's either a real size of 32" (tight) or 34" (loose) - but in vanity sizing that could be anything from 30"...

Categories: LUG Community Blogs

Steve Kemp: All about sharing files easily

Sun, 13/09/2015 - 12:39

Although I've been writing a bit recently about file-storage, this post is about something much simpler: just making a random file or two available on an ad-hoc basis.

In the past I used to have my email and website(s) hosted on the same machine, and that machine was well connected. Making a file visible just involved running ~/bin/publish, which used scp to write a file beneath an apache document-root.
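The original ~/bin/publish isn't shown here, but the idea is only a few lines; purely as an illustration, here's a rough Go sketch of that kind of helper, where the host, document root and URL prefix are made-up examples rather than the real setup:

    // publish.go - a sketch of a "scp this file to my web server and print
    // the URL" helper. Host, document root and URL prefix are hypothetical.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: publish <file>")
            os.Exit(1)
        }
        file := os.Args[1]

        // Copy the file beneath the web server's document root via scp.
        dest := "www.example.com:/var/www/html/files/"
        cmd := exec.Command("scp", file, dest)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "scp failed:", err)
            os.Exit(1)
        }

        // Print the resulting public URL.
        fmt.Println("http://www.example.com/files/" + filepath.Base(file))
    }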

These days I use "my computer", "my work computer", and "my work laptop", amongst other hosts. The SSH-keys required to access my personal boxes are not necessarily available on all of these hosts. Add in firewall constraints and suddenly there isn't an obvious way for me to say "Publish this file online, and show me the root".

I asked on Twitter but nothing useful jumped out, so I ended up writing a simple server, via sinatra, which would allow:

  • Login via the site, and a browser. The login-form looks sexy via bootstrap.
  • Upload via a web-form, once logged in. The upload-form looks sexy via bootstrap.
  • Or, entirely separately, with HTTP basic auth and an HTTP POST (i.e. curl)

This worked, and was even secure-enough, given that I run SSL if you import my CA file.

But using basic auth felt like cheating. I've been learning more Go recently and figured I should start taking it more seriously, so I created a small repository of learning-programs. The learning programs started out simply, but I did wire up a simple TOTP authenticator.

Having TOTP available made me rethink things - suddenly even if you're not using SSL having an eavesdropper doesn't compromise future uploads.
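As an illustration of the idea - not the actual publishr code - a minimal TOTP-gated handler in Go might look like the following, using the third-party github.com/pquerna/otp library; the secret and header name here are made up:

    // A sketch of TOTP-protected access: reject requests whose token doesn't
    // match the current time-based code for a shared secret.
    package main

    import (
        "fmt"
        "net/http"

        "github.com/pquerna/otp/totp"
    )

    // In a real tool the secret would be generated once and stored, not hard-coded.
    const secret = "JBSWY3DPEHPK3PXP" // hypothetical base32 secret

    func upload(w http.ResponseWriter, r *http.Request) {
        code := r.Header.Get("X-Auth-Token") // hypothetical header name
        if !totp.Validate(code, secret) {
            http.Error(w, "bad TOTP code", http.StatusUnauthorized)
            return
        }
        fmt.Fprintln(w, "authenticated - the upload would be handled here")
    }

    func main() {
        http.HandleFunc("/upload", upload)
        http.ListenAndServe(":8080", nil)
    }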

I'd also spent a few hours working out how to make extensible commands in Go, the kind of thing that lets you run:

cmd sub-command1 arg1 arg2
cmd sub-command2 arg1 .. argN

The solution I came up with wasn't perfect, but it did work, and allowed the separation of different sub-command logic.
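One common way to get that shape in Go is a flag.FlagSet per sub-command; the following is a generic sketch of the pattern, not my actual implementation:

    // Dispatch "cmd <sub-command> [flags]" to per-sub-command flag sets.
    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: cmd <init|serve> [flags]")
            os.Exit(1)
        }

        switch os.Args[1] {
        case "init":
            initCmd := flag.NewFlagSet("init", flag.ExitOnError)
            force := initCmd.Bool("force", false, "overwrite an existing secret")
            initCmd.Parse(os.Args[2:])
            fmt.Println("init called, force =", *force)
        case "serve":
            serveCmd := flag.NewFlagSet("serve", flag.ExitOnError)
            addr := serveCmd.String("addr", ":8080", "listen address")
            serveCmd.Parse(os.Args[2:])
            fmt.Println("serve called, addr =", *addr)
        default:
            fmt.Fprintln(os.Stderr, "unknown sub-command:", os.Args[1])
            os.Exit(1)
        }
    }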

So suddenly I have the ability to run "subcommands", and the ability to authenticate against a time-based secret. What is next? Well the hard part with golang is that there are so many things to choose from - I went with gorilla/mux as my HTTP-router, then I spent several hours filling in the blanks.
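The routing part is pleasantly small; as a flavour of gorilla/mux (the routes here are hypothetical, not publishr's):

    // Two routes with gorilla/mux: POST /upload to accept files,
    // GET /{id} to serve them back.
    package main

    import (
        "fmt"
        "net/http"

        "github.com/gorilla/mux"
    )

    func main() {
        r := mux.NewRouter()

        r.HandleFunc("/upload", func(w http.ResponseWriter, req *http.Request) {
            fmt.Fprintln(w, "upload handler")
        }).Methods("POST")

        r.HandleFunc("/{id}", func(w http.ResponseWriter, req *http.Request) {
            fmt.Fprintln(w, "serving", mux.Vars(req)["id"])
        }).Methods("GET")

        http.ListenAndServe(":8080", r)
    }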

The upshot is now that I have a TOTP-protected file upload site:

publishr init - Generates the secret
publishr secret - Shows you the secret for import to your authenticator
publishr serve - Starts the HTTP daemon

Other than a lack of comments, and test-cases, it is complete. And stand-alone. Uploads get dropped into ./public, and short-links are generated for free.

If you want to take a peek the code is here:

The only annoyance is the handling of dependencies - which need to be "go got ..". I guess I need to look at godep or similar, for my next learning project.

I guess there's a minor gain in making this service available via golang. I've gained protection against replay attacks, assuming a non-SSL environment, and I've simplified deployment. The downside is I can no longer log in over the web, and I must use curl, or similar, to upload. Acceptable trade-off.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Weight Reduction Rate

Sun, 13/09/2015 - 11:21

Over the weekend I recalculated my weight reduction rate target. As you lose weight your BMR falls; mine has come down by about 700 kJ. That means to lose weight at the same rate as before I have to eat even less than I was previously. That means eating such a low-energy diet that I'll probably miss important stuff out, and I could start to lose muscle mass rather than fat.

Since hitting my weight plateau a few weeks ago I've been careful to not over indulge and to push harder on the bike. Re-plotting my weight against target on the new weekly reduction rate of 550 g per week rather than 750 g per week has resulted in a more realistic trajectory that I'm sticking to. Even after my holiday I'm still on target and should hit a healthy weight at the end of November this year.

One problem I do face is clothing. Lots of my clothes now fit me like a tent. Trousers fall down and shirts flap about in the wind... I've bought some smaller clothing, men's size small or medium rather than large or extra-large as previously, but I'm waiting until I reach my healthy target weight so I don't end up with new clothes that are too large. One problem I will face is that, in Basingstoke at least, none of the local department stores stock men's casual trousers in a small enough size! Jeans I can get, as they sell them to teenagers who should be smaller than full-grown men, but they aren't really allowed for work...

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Cambridge Gage Jam

Wed, 09/09/2015 - 22:11

Last year our gage tree (probably a Cambridge Gage) had plenty of fruit but they were all inedible. This year it had plenty of fruit, so much so that as fast as we collect it there is even more ready to collect....

It's been a while since we had gages to jam. I used the same method as previously, though I added a fraction more sugar as the fruit wasn't fully ripe. Today's batch was 1.7 kg of fruit (cleaned and destoned), 350 g water and 1.2 kg of sugar, plus the usual juice of a frozen and defrosted lemon. The yield was pretty good and as we have loads of fruit left, even after we give some of them away I'll do another batch later this week.

Categories: LUG Community Blogs

Steve Kemp: The Jessie 8.2 point-release broke for me

Mon, 07/09/2015 - 09:37

I have about 18 personal hosts, all running the Jessie release of Debian GNU/Linux. To keep up with security updates I use unattended-upgrades.

The intention is that every day, via cron, the system will look for updates and apply them. Although I mostly expect it to handle security updates I also have it configured such that point-releases will be applied by magic too.

Unfortunately this weekend, with the 8.2 release, things broke in a significant way - the cron daemon was left in a broken state, such that all cronjobs failed to execute.

I was amazed that nobody had reported a bug, as several people on twitter had the same experience as me, but today I read through a lot of bug-reports and discovered that #783683 is to blame:

  • Old-cron runs.
  • Scheduled unattended-upgrades runs.
  • This causes cron to restart.
  • When cron restarts the jobs it was running are killed.
  • The system is in a broken state.

The solution:

# dpkg --configure -a
# apt-get upgrade

I guess the good news is I spotted it promptly. With the benefit of hindsight the bug report does warn of this as being a concern, but I guess there wasn't a great solution.

Anyway I hope others see this, or otherwise spot the problem themselves.


In unrelated news the seaweedfs file-store I previously introduced is looking more and more attractive to me.

I reported a documentation-related bug which was promptly handled, even though it turned out I was wrong, and I contributed CIDR support for whitelisting hosts, which was merged in well.
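For a flavour of what CIDR whitelisting amounts to in Go - this is a generic sketch, not the actual patch - it boils down to net.ParseCIDR plus Contains:

    // Check client IPs against a whitelist of CIDR ranges.
    package main

    import (
        "fmt"
        "net"
    )

    func allowed(remote string, nets []*net.IPNet) bool {
        ip := net.ParseIP(remote)
        if ip == nil {
            return false
        }
        for _, n := range nets {
            if n.Contains(ip) {
                return true
            }
        }
        return false
    }

    func main() {
        // Example ranges and clients only.
        var whitelist []*net.IPNet
        for _, cidr := range []string{"10.0.0.0/8", "192.168.1.0/24"} {
            _, n, err := net.ParseCIDR(cidr)
            if err != nil {
                panic(err)
            }
            whitelist = append(whitelist, n)
        }

        for _, client := range []string{"192.168.1.42", "8.8.8.8"} {
            fmt.Println(client, "allowed:", allowed(client, whitelist))
        }
    }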

I've got a two-node "cluster" setup at the moment, and will be expanding that shortly.

I've been doing a lot of little toy-projects in Go recently. This weekend I was mostly playing with the message-bus, and tying it together with sinatra.

Categories: LUG Community Blogs

Debian Bits: New Debian Developers and Maintainers (July and August 2015)

Tue, 01/09/2015 - 11:45

The following contributors got their Debian Developer accounts in the last two months:

  • Gianfranco Costamagna (locutusofborg)
  • Graham Inggs (ginggs)
  • Ximin Luo (infinity0)
  • Christian Kastner (ckk)
  • Tianon Gravi (tianon)
  • Iain R. Learmonth (irl)
  • Laura Arjona Reina (larjona)

The following contributors were added as Debian Maintainers in the last two months:

  • Senthil Kumaran
  • Riley Baird
  • Robie Basak
  • Alex Muntada
  • Johan Van de Wauw
  • Benjamin Barenblat
  • Paul Novotny
  • Jose Luis Rivero
  • Chris Knadle
  • Lennart Weller


Categories: LUG Community Blogs

Andy Smith: Scrobbling to Last.fm from D-Bus

Sun, 23/08/2015 - 11:50

Yesterday afternoon I noticed that my music player, Banshee, had not been scrobbling to my Last.fm account for a few weeks. Last.fm seem to be in the middle of reorganising their site, but that shouldn't affect their API (at least not for scrobbling). However, it seems that it has upset Banshee, so no more scrobbling for me.

Banshee has a number of deficiencies but there are a few things about it that I really do like, so I wasn't relishing changing to a different player. It's also written in Mono, which doesn't look like something I could learn very quickly.

I then noticed that Banshee has some sort of D-Bus interface where it writes things about what it is doing, such as the metadata for the currently-playing track… and so a hackish idea was formed.

Here’s a thing that listens to what Banshee is saying over D-Bus and submits the relevant “now playing” and scrobble to The first time you run it it asks you to authorise it and then it remembers that forever.

I’ve never looked at D-Bus before so I’m probably doing it all very wrong, but it appears to work. Look, I have scrobbles again! And after all it would not be Linux on the desktop if it didn’t require some sort of lash-up that would make Heath Robinson cry his way to the nearest Apple store to beg a Genius to install iTunes, right?

Anyway it turns out that there is a standard for this remote control and introspection of media players, called MPRIS, and quite a few of them support it. Even Spotify, apparently. So it probably wouldn’t be hard to adapt this script to scrobble from loads of different things even if they don’t have scrobbling extensions themselves.
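As a taste of how little is involved, here's a rough sketch in Go that reads the currently-playing track over MPRIS, using the third-party github.com/godbus/dbus/v5 package. The Banshee bus name and metadata keys follow the MPRIS spec, and this isn't my actual script:

    // Read the currently-playing track's metadata from an MPRIS player.
    package main

    import (
        "fmt"

        "github.com/godbus/dbus/v5"
    )

    func main() {
        conn, err := dbus.SessionBus()
        if err != nil {
            panic(err)
        }

        // MPRIS players register as org.mpris.MediaPlayer2.<name>.
        player := conn.Object("org.mpris.MediaPlayer2.banshee", "/org/mpris/MediaPlayer2")

        // Metadata is a dict of string -> variant (a{sv}).
        v, err := player.GetProperty("org.mpris.MediaPlayer2.Player.Metadata")
        if err != nil {
            panic(err)
        }
        md := v.Value().(map[string]dbus.Variant)

        fmt.Println("artist:", md["xesam:artist"].Value())
        fmt.Println("title: ", md["xesam:title"].Value())
    }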

Categories: LUG Community Blogs

Debian Bits: Debian turns 22!

Sun, 16/08/2015 - 21:59

Sorry for posting so late, we're very busy at DebConf15!

Happy 22nd birthday Debian!

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Weight Plateau

Thu, 13/08/2015 - 21:09

After nearly 22 weeks of continuous and even weight loss I've hit my weight plateau and not changed my weight for over three weeks now.

There are three basic reasons for this:

  1. My energy intake now equals my energy use. As you lose weight your total metabolic demand falls, so the deliberate energy deficit gradually shrinks to nothing. This is normal.
  2. Calorie creep, over time it's easy to add a little extra to your diet, which means the energy deficit isn't as big as it should be. This is common.
  3. Laziness creep, over time it's easy to slow down and not stick to the exercise plan. This is also common.

The closer you are to your target weight, the more likely this is and the easier it is to give in and stay put. In my case all three are probably happening: my BMR has probably fallen by 168 kcal / 702 kJ, which is 400 g of milk or 30 g of almonds - which isn't much, but if you eat a few extra nuts or an extra glass of milk it adds up...

To correct this, I've made sure I don't eat too many nuts (they are good for me in moderation) and I've cut down on the milk in my porridge, substituting water. I've also trimmed my bread; good though it is, wheat has ~360 kcal per 100 g. I'll also try to push harder on the bike and walk faster...

I'm currently stuck under 74 kg, with about 8 kg to go...

Categories: LUG Community Blogs

Steve Kemp: Making an old android phone useful again

Thu, 13/08/2015 - 14:44

I've got an HTC Desire, running Android 2.2. It is old enough that installing applications such as those from my bank, etc., fails.

The process of upgrading the stock ROM/firmware seems to be:

  • Download an unsigned zip file, from a shady website/forum.
  • Boot the phone in recovery mode.
  • Wipe the phone / reset to default state.
  • Install the update, and hope it works.
  • Assume you're not running trojaned binaries.
  • Hope the thing still works.
  • Reboot into the new O/S.

All in all .. not ideal .. in any sense.

I wish there were a more "official" way to go. For the moment I guess I'll ignore the problem for another year. My Nokia phone does look pretty good ..

Categories: LUG Community Blogs

Steve Kemp: A brief look at the weed file store

Mon, 10/08/2015 - 13:29

Now that I've got a citizen-ID, a pair of Finnish bank accounts, and have enrolled in a Finnish language-course (due to start next month) I guess I can go back to looking at object stores, and replicated filesystems.

To recap, my current favourite, despite the lack of documentation, is the Camlistore project, which is written in Go.

Looking around there are lots of interesting projects being written in Go, and so is my next one: seaweedfs, which despite its name is not a filesystem at all, but a file store which is accessed via HTTP.

Installation is simple, if you have a working go-lang environment:

go get

Once that completes you'll find you have the executable bin/weed placed beneath your $GOPATH. This single binary is used for everything, though it is worth noting that there are distinct roles:

  • A key concept in weed is "volumes". Volumes are areas to which files are written. Volumes may be replicated, and this replication is decided on a per-volume basis, rather than a per-upload one.
  • Clients talk to a master. The master notices when volumes spring into existence, or go away. For high-availability you can run multiple masters, and they elect the real master (via RAFT).

In our demo we'll have three hosts: one, the master, and two and three, which are storage nodes. First of all we start the master:

root@one:~# mkdir /
root@one:~# weed master -mdir / -defaultReplication=001

Then on the storage nodes we start them up:

root@two:~# mkdir /data
root@two:~# weed volume -dir=/data -max=1 -mserver=one.our.domain:9333

Then the second storage-node:

root@three:~# mkdir /data
root@three:~# weed volume -dir=/data -max=1 -mserver=one.our.domain:9333

At this point we have a master to which we'll talk (on port :9333), and a pair of storage-nodes which will accept commands over :8080. We've configured replication such that all uploads will go to both volumes. (The -max=1 configuration ensures that each volume-store will only create one volume each. This is in the interest of simplicity.)

Uploading content works in two phases:

  • First tell the master you wish to upload something, to gain an ID in response.
  • Then using the upload-ID actually upload the object.

We'll do that like so:

laptop ~ $ curl -X POST http://one.our.domain:9333/dir/assign
{"fid":"1,06c3add5c3","url":"","publicUrl":"","count":1}

client ~ $ curl -X PUT -F file=@/etc/passwd,06c3add5c3
{"name":"passwd","size":2137}

In the first command we call /dir/assign, and receive a JSON response which contains the IPs/ports of the storage-nodes, along with a "file ID", or fid. In the second command we pick one of the hosts at random (which are the IPs of our storage nodes) and make the upload using the given ID.
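The same two-phase upload is only a little longer in Go; here is a sketch of it (the JSON field names match the response shown above, and the file and master address are the demo's):

    // Two-phase seaweedfs upload: ask the master to assign a fid, then PUT
    // the file to the volume server it names.
    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "io"
        "mime/multipart"
        "net/http"
        "os"
    )

    type assignResp struct {
        Fid       string `json:"fid"`
        URL       string `json:"url"`
        PublicURL string `json:"publicUrl"`
    }

    func main() {
        // Phase one: claim a file ID and learn which volume server to use.
        resp, err := http.Post("http://one.our.domain:9333/dir/assign", "text/plain", nil)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var a assignResp
        if err := json.NewDecoder(resp.Body).Decode(&a); err != nil {
            panic(err)
        }

        // Phase two: PUT the file to the volume server as a multipart form,
        // mirroring the curl -F upload above.
        var buf bytes.Buffer
        w := multipart.NewWriter(&buf)
        part, _ := w.CreateFormFile("file", "passwd")
        f, err := os.Open("/etc/passwd")
        if err != nil {
            panic(err)
        }
        io.Copy(part, f)
        f.Close()
        w.Close()

        req, _ := http.NewRequest("PUT", "http://"+a.URL+"/"+a.Fid, &buf)
        req.Header.Set("Content-Type", w.FormDataContentType())
        up, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer up.Body.Close()
        fmt.Println("uploaded as", a.Fid, "- status", up.Status)
    }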

If the upload succeeds it will be written to both volumes, which we can see directly by running strings on the files beneath /data on the two nodes.

The next part is retrieving a file by ID, and we can do that by asking the master server where that ID lives:

client ~ $ curl http://one.our.domain:9333/dir/lookup?volumeId=1,06c3add5c3
{"volumeId":"1","locations":[ {"url":"","publicUrl":""}, {"url":"","publicUrl":""} ]}

Or, if we prefer we could just fetch via the master - it will issue a redirect to one of the volumes that contains the file:

client ~$ curl http://one.our.domain:9333/1,06c3add5c3
<a href=",06c3add5c3">Moved Permanently</a>

If you follow redirections then it'll download, as you'd expect:

client ~ $ curl -L http://one.our.domain:9333/1,06c3add5c3
root:x:0:0:root:/root:/bin/bash
..

That's about all you need to know to decide if this is for you - in short uploads require two requests, one to claim an identifier, and one to use it. Downloads require that your storage-volumes be publicly accessible, and will probably require a proxy of some kind to make them visible on :80, or :443.
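That proxy can be as small as Go's net/http/httputil if you don't want to reach for nginx; a sketch, using the demo's volume-server address as the upstream:

    // Expose a volume server on port 80 via a single-host reverse proxy.
    package main

    import (
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        upstream, err := url.Parse("http://two.our.domain:8080")
        if err != nil {
            panic(err)
        }

        // Everything arriving on :80 is forwarded to the volume server.
        proxy := httputil.NewSingleHostReverseProxy(upstream)
        http.ListenAndServe(":80", proxy)
    }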

A single "weed volume .." process, which runs as a volume-server, can support multiple volumes, which are created on-demand, but I've explicitly preferred to limit them here. I'm not 100% sure yet whether it's a good idea to allow creation of multiple volumes or not. There are space implications, and you need to read about replication before you go too far down the rabbit-hole. There is the notion of "data centres" and "racks", such that you can pretend different IPs are different locations and ensure that data is replicated across them, or only within them, but these choices will depend on your needs.

Writing a thin middleware/shim to allow uploads to be atomic seems simple enough, and there are options to allow exporting the data from the volumes as .tar files, so I have no undue worries about data-storage.

This system seems reliable, and it seems well designed, but people keep saying "I'm not using it in production because .. nobody else is" which is an unfortunate problem to have.

Anyway, I like it. The biggest omission is really authentication. All files are public if you know their IDs, but at least they're not sequential ..

Categories: LUG Community Blogs

Andy Smith: SSDs and Linux Native Command Queuing

Sun, 09/08/2015 - 07:10
Native Command Queueing

Native Command Queuing (NCQ) is an extension of the Serial ATA protocol that allows multiple requests to be sent to a drive, allowing the drive to order them in a way it considers optimal.

This is very handy for rotational media like conventional hard drives, because they have to move the head all over to do random IO, so in theory if they are allowed to optimise ordering then they may be able to do a better job of it. If the drive supports NCQ then it will advertise this fact to the operating system and Linux by default will enable it.

Queue depth

The maximum depth of the queue in SATA is 31 for practical purposes, and so if the drive supports NCQ then Linux will usually set the depth to 31. You can change the depth by writing a number between 1 and 31 to /sys/block/<device>/device/queue_depth. Writing 1 to the file effectively disables NCQ for that device.
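The usual way to change it is a simple echo of the new value as root; purely for illustration, here's the same thing as a tiny Go snippet, where the device name and the depth of 8 are just examples:

    // Read the current NCQ queue depth for a device, then set a new one.
    // Must be run as root; writing 1 effectively disables NCQ.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        path := "/sys/block/sda/device/queue_depth"

        cur, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        fmt.Println("current queue depth:", strings.TrimSpace(string(cur)))

        if err := os.WriteFile(path, []byte("8\n"), 0644); err != nil {
            panic(err)
        }
        fmt.Println("queue depth set to 8")
    }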

NCQ and SSDs

So what about SSDs? They aren’t rotational media; any access is in theory the same as any other access, so no need to optimally order the commands, right?

The sad fact is, many SSDs even today have incompatibilities with SATA drivers and chipsets such that NCQ does not reliably work. There’s advice all over the place that NCQ can be disabled with no ill effect, because supposedly SSDs do not benefit from it. Some posts even go as far as to suggest that NCQ might be detrimental to performance with SSDs.

Well, let’s see what fio has to say about that.

The setup
  • Two Intel DC s3610 1.6TB SSDs in an MD RAID-10 on Debian 8.1.
  • noop IO scheduler.
  • fio operating on a 4GiB test file that is on an ext4 filesystem backed by LVM.
  • fio set to do a 70/30% mix of read vs write operations with 128 simultaneous IO operations in flight.

The goal of this is to simulate a busy highly parallel server load, such as you might see with a database.

The fio command line looks like this:

fio --randrepeat=1 \
    --ioengine=libaio \
    --direct=1 \
    --gtod_reduce=1 \
    --name=ncq \
    --filename=test \
    --bs=4k \
    --iodepth=128 \
    --size=4G \
    --readwrite=randrw \
    --rwmixread=70

Expected output will be something like this:

ncq: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=128
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [m(1)] [100.0% done] [50805KB/21546KB/0KB /s] [12.8K/5386/0 iops] [eta 00m:00s]
ncq1: (groupid=0, jobs=1): err= 0: pid=11272: Sun Aug 9 06:29:33 2015
  read : io=2867.6MB, bw=44949KB/s, iops=11237, runt= 65327msec
  write: io=1228.5MB, bw=19256KB/s, iops=4813, runt= 65327msec
  cpu : usr=4.39%, sys=25.20%, ctx=732814, majf=0, minf=6
  IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued : total=r=734099/w=314477/d=0, short=r=0/w=0/d=0
     latency : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
   READ: io=2867.6MB, aggrb=44949KB/s, minb=44949KB/s, maxb=44949KB/s, mint=65327msec, maxt=65327msec
  WRITE: io=1228.5MB, aggrb=19255KB/s, minb=19255KB/s, maxb=19255KB/s, mint=65327msec, maxt=65327msec

Disk stats (read/write):
  dm-0: ios=732755/313937, merge=0/0, ticks=4865644/3457248, in_queue=8323636, util=99.97%, aggrios=734101/314673, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
  md4: ios=734101/314673, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=364562/313849, aggrmerge=2519/1670, aggrticks=2422422/2049132, aggrin_queue=4471730, aggrutil=94.37%
  sda: ios=364664/313901, merge=2526/1618, ticks=2627716/2223944, in_queue=4852092, util=94.37%
  sdb: ios=364461/313797, merge=2513/1722, ticks=2217128/1874320, in_queue=4091368, util=91.68%

The figures we’re interested in are the iops= ones, in this case 11237 and 4813 for read and write respectively.


Here’s how different NCQ queue depths affected things. Click the graph image for the full size version.


On this setup anything below a queue depth of about 8 is disastrous to performance. The aberration at a queue depth of 19 is interesting. This is actually repeatable. I have no explanation for it.

Don’t believe anyone who tells you that NCQ is unimportant for SSDs unless you’ve benchmarked that and proven it to yourself. Disabling NCQ on an Intel DC s3610 appears to reduce its performance to around 25% of what it would be with even a queue depth of 8. Modern SSDs, especially enterprise ones, have a parallel architecture that allows them to get multiple things done at once. They expect NCQ to be enabled.

It’s easy to guess why 8 might be the magic number for the DC s3610:

The top of the PCB has eight NAND emplacements and Intel’s proprietary eight-channel PC29AS21CB0 controller.

The newer NVMe devices are even more aggressive with this; while the SATA spec stops at one queue with a depth of 32, NVMe specifies up to 65k queues with a depth of up to 65k each! Modern SSDs are designed with this in mind.

Categories: LUG Community Blogs