LUG Community Blogs

Steve Kemp: All about sharing files easily

Planet HantsLUG - Sun, 13/09/2015 - 13:39

Although I've been writing a bit recently about file-storage, this post is about something much simpler: just making a random file or two available on an ad-hoc basis.

In the past I used to have my email and website(s) hosted on the same machine, and that machine was well connected. Making a file visible just involved running ~/bin/publish, which used scp to write a file beneath an apache document-root.
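Something along these lines, say (a hypothetical reconstruction rather than the original script; the hostname and document-root are invented):

#!/bin/sh
# publish: copy a file beneath a remote apache document-root,
# then print the URL it will be served from.
# (host and paths here are illustrative only)
FILE="$1"
NAME="${FILE##*/}"
scp "${FILE}" "${NAME}" \
    && echo "${NAME}"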

These days I use "my computer", "my work computer", and "my work laptop", amongst other hosts. The SSH-keys required to access my personal boxes are not necessarily available on all of these hosts. Add in firewall constraints and suddenly there isn't an obvious way for me to say "Publish this file online, and show me the root".

I asked on twitter, but nothing useful jumped out. So I ended up writing a simple server via sinatra, which would allow:

  • Login via the site, and a browser. The login-form looks sexy via bootstrap.
  • Upload via a web-form, once logged in. The upload-form looks sexy via bootstrap.
  • Or, entirely separately, with HTTP basic-auth and an HTTP POST (i.e. curl), as sketched below
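For the curl route, the upload boiled down to something like this (a sketch; the hostname, credentials and form-field name are invented, and the real service may have differed):

# HTTP basic-auth plus a simple multipart POST
curl --user steve:s3kr1t \
     -F "file=@./screenshot.png" \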

This worked, and was even secure enough, given that I run SSL (if you import my CA file).

But using basic auth felt like cheating. I've been learning more Go recently and figured I should start taking it more seriously, so I created a small repository of learning-programs. These started out simply, but I did wire up a simple TOTP authenticator.

Having TOTP available made me rethink things - suddenly, even if you're not using SSL, an eavesdropper doesn't compromise future uploads.
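As an aside: the oathtool utility (from oath-toolkit) generates the same codes as a phone authenticator, which is handy when testing a scheme like this. The base32 secret below is just an example value:

# Print the current 30-second TOTP code for a shared base32 secret
oathtool --totp -b JBSWY3DPEHPK3PXP

Because each code is only valid for a short window, capturing one request doesn't let an eavesdropper authenticate future uploads, which is exactly the property described above.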

I'd also spent a few hours working out how to make extensible commands in go, the kind of thing that lets you run:

cmd sub-command1 arg1 arg2
cmd sub-command2 arg1 .. argN

The solution I came up with wasn't perfect, but it did work, and allowed the separation of different sub-command logic.
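For comparison, the same pattern is easy to sketch in shell: one function per sub-command, plus a dispatcher that routes to it (illustrative only, not the Go solution from the repository):

#!/bin/sh
# Dispatch "cmd sub-command args..." to a matching shell function
cmd_init()  { echo "init called with: $*" ; }
cmd_serve() { echo "serve called with: $*" ; }

[ $# -gt 0 ] || { echo "usage: cmd sub-command [args]" >&2 ; exit 1 ; }
SUB="$1"
shift
case "${SUB}" in
    init|serve) "cmd_${SUB}" "$@" ;;
    *) echo "Unknown sub-command: ${SUB}" >&2 ; exit 1 ;;
esac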

So suddenly I have the ability to run "subcommands", and the ability to authenticate against a time-based secret. What next? Well, the hard part with golang is that there are so many things to choose from - I went with gorilla/mux as my HTTP-router, then I spent several hours filling in the blanks.

The upshot is that I now have a TOTP-protected file upload site:

publishr init - Generates the secret
publishr secret - Shows you the secret for import to your authenticator
publishr serve - Starts the HTTP daemon

Other than a lack of comments and test-cases, it is complete. And stand-alone. Uploads get dropped into ./public, and short-links are generated for free.

If you want to take a peek, the code is here:

The only annoyance is the handling of dependencies - which need to be "go got ..". I guess I need to look at godep or similar, for my next learning project.

I guess there's a minor gain in making this service available via golang. I've gained protection against replay attacks, assuming a non-SSL environment, and I've simplified deployment. The downside is I can no longer login over the web, and I must use curl, or similar, to upload. An acceptable tradeoff.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Weight Reduction Rate

Planet HantsLUG - Sun, 13/09/2015 - 12:21

Over the weekend I recalculated my weight reduction rate target. As you lose weight your BMR falls; mine has come down by about 700 kJ. That means to lose weight at the same rate as before I have to eat even less than I was previously. That means eating such a low-energy diet that I'll probably miss important stuff out, and I could start to lose muscle mass rather than fat.

Since hitting my weight plateau a few weeks ago I've been careful to not over indulge and to push harder on the bike. Re-plotting my weight against target on the new weekly reduction rate of 550 g per week rather than 750 g per week has resulted in a more realistic trajectory that I'm sticking to. Even after my holiday I'm still on target and should hit a healthy weight at the end of November this year.

One problem I do face is clothing. Lots of my clothes now fit me like a tent. Trousers fall down and shirts flap about in the wind... I've bought some smaller clothing, men's size small or medium rather than large or extra-large as previously, but I'm waiting until I reach my healthy target weight so I don't end up with new clothes that are too large. One problem I will face is that, in Basingstoke at least, I can't buy men's casual trousers in a small enough size in any of the local department stores; they simply don't stock anything small enough! Jeans I can get, as they sell them to teenagers who should be smaller than full-grown men, but they aren't really allowed for work...

Categories: LUG Community Blogs

Chris Lamb: Joining strings in POSIX shell

Planet ALUG - Thu, 10/09/2015 - 22:18

A common programming task is to glue (or "join") items together to create a single string. For example:

>>> ', '.join(['foo', 'bar', 'baz'])
'foo, bar, baz'

Notice that we have three items but only two commas — this can be important if the tool we're passing the result to doesn't support trailing delimiters, or we simply want the result to be human-readable.

Unfortunately, this can be inconvenient in POSIX shell where we construct strings via explicit concatenation. A naïve solution of:

RESULT="" for X in foo bar baz do RESULT="${RESULT}, ${X}" done

... incorrectly returns ", foo, bar, baz". We can solve this with a (cumbersome) counter or flag to only attach the delimiter when we need it:

COUNT=0
RESULT=""
for X in foo bar baz
do
    if [ "${COUNT}" = 0 ]
    then
        RESULT="${X}"
    else
        RESULT="${RESULT}, ${X}"
    fi
    COUNT=$((COUNT + 1))
done

One alternative is to use the little-known ":+" expansion modifier. Many people are familiar with ":-" for returning default values:

$ echo ${VAR:-fallback}

By contrast, the ":+" modifier inverts this logic, returning the fallback if the specified variable is actually set. This results in the elegant:

RESULT="" for X in foo bar baz do RESULT="${RESULT:+${RESULT}, }${X}" done
Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Cambridge Gage Jam

Planet HantsLUG - Wed, 09/09/2015 - 23:11

Last year our gage tree (probably a Cambridge Gage) had plenty of fruit but they were all inedible. This year it had plenty of fruit, so much so that as fast as we collect it there is even more ready to collect....

It's been a while since we had gages to jam. I used the same method as previously, though I added a fraction more sugar as the fruit wasn't fully ripe. Today's batch was 1.7 kg of fruit (cleaned and destoned), 350 g water and 1.2 kg of sugar, plus the usual juice of a frozen and defrosted lemon. The yield was pretty good and as we have loads of fruit left, even after we give some of them away I'll do another batch later this week.

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): Orchestration, a cry for help

Planet ALUG - Tue, 08/09/2015 - 16:02

Over the past few years, a plethora of orchestration frameworks have been exploding onto the scene. Many have been around for quite a while, but not all have the same sort of community behind them. For example, there's a very interesting option in Joey Hess' Propellor, but that is hurt by needing to be able to build Propellor on all the hosts you manage. On the other hand, Ansible is able to operate without installing extra software on your target hosts, but instead it ends up very latency-bound, which can cause problems when your managed hosts are "far away".

I have considered CFEngine, Chef, Puppet and Salt in addition to the above mentioned options, but none of them feel quite right to me. I am looking for a way to manage a small number of hosts, at least one of which is not always online (my laptop) and all of which are essentially snowflakes whose sparkleybits I want some reasonable control over.

I have a few basic requirements which I worry would be hard to meet -- I want to be able to make changes to my hosts by editing a file and committing/pushing it to a git server. I want to be able to manage a host entirely over SSH from one or more systems, ideally without having to install the orchestration software on the target host, but where if the software is present it will get used to accelerate matters. I don't want to have to install Ruby or PHP on any system in order to have orchestration, and some of the systems I wish to manage simply can't compile Haskell stuff sanely. I'm not desperately interested in learning yet more DSLs; I appreciate that some of that will be necessary, but I really don't want to have to learn more than one DSL simply to run one framework.

I don't want to have to learn strange and confusing combinations of file formats. For example, Ansible quite sensibly uses YAML for its structured data, except for its host/group lists. It uses Jinja2 for its templating and looping, except for some things for which it generates its own looping constructs inside its YAML. I also personally find Ansible's sportsball-oriented terminology confusing, but that might just be me.

So what I'm hoping is that someone will be able to point me at a project which combines all the wonderful features of the above, with a need to learn only one DSL and which doesn't require to be installed on the managed host but which can benefit from being so installed, is driven from git, and won't hurt my already overly burdened brain.

Dear Lazyweb, pls. kthxbye.

Categories: LUG Community Blogs

Steve Kemp: The Jessie 8.2 point-release broke for me

Planet HantsLUG - Mon, 07/09/2015 - 10:37

I have about 18 personal hosts, all running the Jessie release of Debian GNU/Linux. To keep up with security updates I use unattended-upgrades.

The intention is that every day, via cron, the system will look for updates and apply them. Although I mostly expect it to handle security updates I also have it configured such that point-releases will be applied by magic too.
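For reference, the behaviour described here is driven by /etc/apt/apt.conf.d/50unattended-upgrades; an origins list that accepts point-releases as well as security updates looks roughly like this (an illustrative excerpt for jessie, worth checking against your own file):

// Excerpt: accept both security updates and point-releases
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=jessie,label=Debian-Security";
        // the plain archive label is what pulls in point-releases
        "origin=Debian,codename=jessie,label=Debian";
};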

Unfortunately this weekend, with the 8.2 release, things broke in a significant way: the cron daemon was left in a broken state, such that all cronjobs failed to execute.

I was amazed that nobody had reported a bug, as several people on twitter had the same experience as me, but today I read through a lot of bug-reports and discovered that #783683 is to blame:

  • Old-cron runs.
  • Scheduled unattended-upgrades runs.
  • This causes cron to restart.
  • When cron restarts the jobs it was running are killed.
  • The system is in a broken state.

The solution:

# dpkg --configure -a
# apt-get upgrade

I guess the good news is that I spotted it promptly. With the benefit of hindsight the bug report does warn of this as a concern, but I guess there wasn't a great solution.

Anyway I hope others see this, or otherwise spot the problem themselves.


In unrelated news the seaweedfs file-store I previously introduced is looking more and more attractive to me.

I reported a documentation-related bug which was promptly handled, even though it turned out I was wrong, and I contributed CIDR support to whitelisting hosts, which was merged in well.

I've got a two-node "cluster" setup at the moment, and will be expanding that shortly.

I've been doing a lot of little toy-projects in Go recently. This weekend I was mostly playing with the message-bus, and tying it together with sinatra.

Categories: LUG Community Blogs

Linux Microsoft Skype for Business / Lync 2013 Client

Planet SurreyLUG - Wed, 02/09/2015 - 10:14

I was surprised to learn that Ubuntu 14.04 can talk to Skype for Business AKA Lync 2013 using the Pidgin Instant Messaging client. The general steps were:
# apt-get install pidgin pidgin-sipe

And then restart Pidgin and add a new Account. The Office Communicator is the relevant plugin, with the following parameters:

  • Protocol: Office Communicator
  • Username: Your Office 365 or Skype for Business username – probably your email address
  • Password: Your password is obviously required – and will be stored unencrypted in the config file, so you may wish to leave this blank and enter it at each login
  • Server[:Port]: Leave empty if your set-up has autodiscovery
  • Connection type: Auto
  • User Agent: UCCAPI/15.0.4420.1017 OC/15.0.4420.1017
  • Authentication scheme: TLS-DSK

I am unclear why the user agent is required, and whether that will need to change from time to time or not. So far it has worked fine here.

Unfortunately a few days ago the above set-up stopped working, with “Failed to authenticate with server”. It seems that you must now use version 1.20 of the Sipe plugin, which fixes “Office365 rejects RC4 in TLS-DSK”. As this version was only completed three days ago, it is not yet available in any of the Ubuntu repositories that I have been able to find, so you will probably have to compile it yourself.

Broadly speaking I followed these key stages:

  1. Install build tools if you don’t already have them (apt-get install build-essential).
  2. Install checkinstall if you don’t already have it (apt-get install checkinstall).
  3. Download source files.
  4. Extract source.
  5. Change into source directory.
  6. Read carefully the README file in the source directory.
  7. Install the dependencies listed in the README:

# apt-get install libpurple-dev libtool intltool pkg-config libglib2.0-dev \
libxml2-dev libnss3-dev libssl-dev libkrb5-dev libnice-dev libgstreamer0.10-dev

These dependencies may change over time, and your particular requirements may be different from mine, so please read the README and that information should take precedence.

Lastly, as an ordinary user, you should now be able to compile. If it fails at any stage, simply read the error and install the missing dependency.

$ ./configure --prefix=/usr
$ make
$ sudo checkinstall

I found checkinstall was pre-populated with sensible settings, and I was able to continue without making any changes. Once complete, a Debian package will have been created in the current directory, but it will have already been installed for you.

For some reason I found that at this stage Pidgin would no longer run, as it was now named /usr/bin/pidgin.orig instead of /usr/bin/pidgin. I tried removing and reinstalling Pidgin, but to no avail. In the end I created a symlink (ln -s /usr/bin/pidgin.orig /usr/bin/pidgin), but you should not do this unless you experience the same issue. If you know the reason for this I would be delighted to receive your feedback, as this isn’t a problem that I have come across before.

Restarting Pidgin and the Office Communicator sprang into life once more. Sadly I would imagine that this won’t be the last time this plugin will break, such are the vagaries of connecting to closed proprietary networks.

Categories: LUG Community Blogs

Debian Bits: New Debian Developers and Maintainers (July and August 2015)

Planet HantsLUG - Tue, 01/09/2015 - 12:45

The following contributors got their Debian Developer accounts in the last two months:

  • Gianfranco Costamagna (locutusofborg)
  • Graham Inggs (ginggs)
  • Ximin Luo (infinity0)
  • Christian Kastner (ckk)
  • Tianon Gravi (tianon)
  • Iain R. Learmonth (irl)
  • Laura Arjona Reina (larjona)

The following contributors were added as Debian Maintainers in the last two months:

  • Senthil Kumaran
  • Riley Baird
  • Robie Basak
  • Alex Muntada
  • Johan Van de Wauw
  • Benjamin Barenblat
  • Paul Novotny
  • Jose Luis Rivero
  • Chris Knadle
  • Lennart Weller


Categories: LUG Community Blogs

Bring-A-Box 12th September 2015, Red Hat

Surrey LUG - Mon, 31/08/2015 - 20:12
Start: 2015-09-12 11:00 End: 2015-09-12 17:00

We have regular sessions each month. Bring a 'box', bring a notebook, bring anything that might run Linux, or just bring yourself and enjoy socialising/learning/teaching or simply chilling out!

Back to the excellent Red Hat offices in Farnborough, Hampshire on Saturday 12th September - thanks to Dominic Cleal for hosting us.

Categories: LUG Community Blogs

Jonathan McDowell: Random post-DebConf 15 thoughts

Planet ALUG - Mon, 24/08/2015 - 16:18

There are a bunch of things I mean to blog about, but as I have just got fully home from Heidelberg and DebConf15 this afternoon that seems most appropriate to start with. It’s a bit of a set of disjoint thoughts, but I figure I should write them down while they’re in my head.

DebConf is an interesting conference. It’s the best opportunity the Debian project has every year to come together and actually spend a decent amount of time with each other. As a result it’s a fairly full on experience, with lots of planned talks as a basis and a wide range of technical discussions and general social interaction filling in whatever gaps are available. I always find it a thoroughly enjoyable experience, but equally I’m glad to be home and doing delightfully dull things like washing my clothes and buying fresh milk.

I have always been of the opinion that the key aspect of DebConf is the face time. It was thus great to see so many people there - we were told several times that this was the largest DebConf so far (~ 570 people IIRC). That’s good in the sense that it meant I got to speak to a lot of people (both old friends and new), but does mean that there are various people I know I didn’t spend enough, or in some cases any, time with. My apologies, but I think many of us were in the same situation. I don’t feel it made the conference any less productive for me - I managed to get a bunch of hacking done, discuss a number of open questions in person with various people and get pulled into various interesting discussions I hadn’t expected. In short, a typical DebConf.

Also I’d like to say that the venue worked out really well. I’ll admit I was dubious when I heard it was in a hostel, but it was well located (about a 30 minute walk into town, and a reasonable bus service available from just outside the door), self-contained with decent facilities (I’m a big believer in having DebConf talks + accommodation be as close as possible to each other) and the room was much better than expected (well, aside from the snoring but I can’t blame the DebConf organisers for that).

One of the surprising and interesting things for me that was different from previous DebConfs was the opportunity to have more conversations with a legal leaning. I expect to go to DebConf and do OpenPGP/general crypto related bits. I wasn’t expecting affirmation about the things I have learnt on my course over the past year, in terms of feeling that I could use that knowledge in the process of helping Debian. It provided me with some hope that I’ll be able to tie my technology and law skills together in a way that I will find suitably entertaining (as did various conversations where people expressed significant interest in the crossover).

Next year is in Cape Town, South Africa. It’s a long way (though I suppose no worse than Portland and I get to stay in the same time zone), and a quick look at flights indicates they’re quite expensive at the moment. The bid presentation did look pretty good though so as soon as the dates are confirmed (I believe this will happen as soon as there are signed contracts in place) I’ll take another look at flights.

In short, excellent DebConf, thanks to the organisers, lovely to see everyone I managed to speak to, apologies to those of you I didn’t manage to speak to. Hopefully see you in Cape Town next year.

Categories: LUG Community Blogs

Andy Smith: Scrobbling to from D-Bus

Planet HantsLUG - Sun, 23/08/2015 - 12:50

Yesterday afternoon I noticed that my music player, Banshee, had not been scrobbling to my for a few weeks. seem to be in the middle of reorganising their site, but that shouldn’t affect their API (at least not for scrobbling). However, it seems that it has upset Banshee, so no more scrobbling for me.

Banshee has a number of deficiencies but there’s a few things about it that I really do like, so I wasn’t relishing changing to a different player. It’s also written in Mono which doesn’t look like something I could learn very quickly.

I then noticed that Banshee has some sort of D-Bus interface where it writes things about what it is doing, such as the metadata for the currently-playing track… and so a hackish idea was formed.

Here’s a thing that listens to what Banshee is saying over D-Bus and submits the relevant “now playing” and scrobble to The first time you run it, it asks you to authorise it, and then it remembers that forever.

I’ve never looked at D-Bus before so I’m probably doing it all very wrong, but it appears to work. Look, I have scrobbles again! And after all it would not be Linux on the desktop if it didn’t require some sort of lash-up that would make Heath Robinson cry his way to the nearest Apple store to beg a Genius to install iTunes, right?

Anyway it turns out that there is a standard for this remote control and introspection of media players, called MPRIS, and quite a few of them support it. Even Spotify, apparently. So it probably wouldn’t be hard to adapt this script to scrobble from loads of different things even if they don’t have scrobbling extensions themselves.
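For anyone wanting to poke at the same thing, the MPRIS metadata is just a D-Bus property, so it can be inspected from a shell with no code at all; the bus name after org.mpris.MediaPlayer2. varies per player (banshee shown here):

# Ask a running MPRIS-capable player for its current track metadata
dbus-send --session --print-reply \
    --dest=org.mpris.MediaPlayer2.banshee \
    /org/mpris/MediaPlayer2 \
    org.freedesktop.DBus.Properties.Get \
    string:org.mpris.MediaPlayer2.Player string:Metadata

The reply is a dictionary with fields such as xesam:artist, xesam:title and xesam:album, which is all a scrobbler really needs.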

Categories: LUG Community Blogs

Mick Morgan: update to domain privacy

Planet ALUG - Thu, 20/08/2015 - 19:55

At the end of last month I noted that I had been receiving multiple emails to each of the proxy addresses listed for my newly registered “private” domains. Intriguingly, whilst I was receiving at least three or four such emails a week before I wrote about it, I have had precisely zero since.

Probably coincidence, but a conspiracy theorist would have a field day with that.

Categories: LUG Community Blogs

Mick Morgan: why privacy matters

Planet ALUG - Wed, 19/08/2015 - 18:53

Last month my wife and I shared a holiday with a couple of old friends. We have known this couple since before we got married, indeed, they attended our wedding. We consider them close friends and enjoy their company. One evening in a pub in Yorkshire, we got to discussing privacy, the Snowden revelations, and the implications of a global surveillance mechanism such as is used by both the UK and its Five Eyes partners (the US NSA in particular). To my complete surprise, Al expressed the view that he was fairly relaxed about the possibility that GCHQ should be capable of almost complete surveillance of his on-line activity since, in his view, “nothing I do can be of any interest to them, so why should I worry.”

I have met this view before, but oddly I had never heard Al express himself in quite this way in all the time I have known him. It bothers me that someone I love and trust, someone whose opinions I value, someone I consider to be intelligent and articulate and caring, should be so relaxed about so pernicious an activity as dragnet surveillance. It is not only the fact that Al himself is so relaxed that bothers me so much as the fact that if he does not care, then many, possibly most, people like him will not care either. That attitude plays into the hands of those, like Eric Schmidt, who purport to believe that “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”

Back in October last year, Glenn Greenwald gave a TED talk on the topic, “Why privacy matters”. I recommended it to Al and I commend it to anyone who thinks, as he does, that dragnet surveillance doesn’t impact on them because they “are not doing anything wrong”.

Categories: LUG Community Blogs

Welcome to the Wolves LUG!

Wolverhampton LUG News - Mon, 17/08/2015 - 17:42

Welcome to our website. The Wolves LUG is a group of Linux users from the Wolverhampton area who meet every two weeks to discuss Linux, drink lots, eat lots and generally hang out and be social. A very friendly and open atmosphere is encouraged at LUG meetings and on the online LUG resources.

Many people who have come to the Wolves LUG have commented on how we are different to many LUGs that prefer a more formal setting for their meetings. We are a lively, social group that prefer informal meetings that are rich in debate, jokes, sarcasm and just generally fun. Although the group discusses a wide variety of technical and political subjects, don't expect every meeting to be full of dry technical conversation.

Who can join the LUG?

Anyone and everyone is welcome to the Wolves LUG. We encourage new Linux users and experienced users alike. We also encourage those who are considering Linux and would like to pop along to find out more first. Everyone is welcome and there are no pre-requisites for joining and attending meetings. There is also no age limit for joining the LUG and meetings. All ages are welcome. :)

What does it cost?

Nothing. As with the software that we promote, there is no charge to join the group and no subscription. Just join in.

OK, you have convinced me. How do I join up?

The first thing you should do is join our mailing list. This is an email discussion list that all members are subscribed to. Joining the list is free and can be done by visiting this page. When you have joined the list you can introduce yourself and get to know the LUG members.

Where do you have meetings and how often?

Meetings take place every two weeks and you can check the latest details on this site. We meet every other Wednesday and the meeting generally kicks off at 7.30pm, although a few of us get there a little earlier. In general meetings end around 11.00pm, but many members leave earlier and some stay later.

I would like to contribute to the website. What do I need to do?

If you are a member of the mailing list (if not, why not?) or are known to the site administrators, please sign up for an account here. All account requests are subject to administrator approval, which normally occurs within 24 hours.

Categories: LUG Community Blogs

Debian Bits: Debian turns 22!

Planet HantsLUG - Sun, 16/08/2015 - 22:59

Sorry for posting so late, we're very busy at DebConf15!

Happy 22nd birthday Debian!

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Weight Plateau

Planet HantsLUG - Thu, 13/08/2015 - 22:09

After nearly 22 weeks of continuous and even weight loss I've hit my weight plateau, having not changed my weight for over three weeks now.

There are three basic reasons for this:

  1. My energy intake now equals my energy use. As you lose weight your total metabolic demand falls, so the deliberate energy deficit gradually shrinks to nothing. This is normal.
  2. Calorie creep: over time it's easy to add a little extra to your diet, which means the energy deficit isn't as big as it should be. This is common.
  3. Laziness creep: over time it's easy to slow down and not stick to the exercise plan. This is also common.

The closer you are to your target weight, the more likely these are, and the easier it is to give in and stay put. In my case all three are probably happening: my BMR has probably fallen by 168 kcal / 702 kJ, which is 400 g of milk or 30 g of almonds - which isn't much, but if you eat a few extra nuts or an extra glass of milk, it adds up...

To correct this, I've made sure I don't eat too many nuts (they are good for me in moderation) and I've cut down on the milk in my porridge, substituting water. I've also trimmed my bread; good though it is, wheat has ~360 kcal per 100 g. I'll also try to push harder on the bike and walk faster...

I'm currently stuck under 74 kg, with about 8 kg to go...

Categories: LUG Community Blogs

Steve Kemp: Making an old android phone useful again

Planet HantsLUG - Thu, 13/08/2015 - 15:44

I've got an HTC Desire, running Android 2.2. It is old enough that installing applications, such as those from my bank, fails.

The process of upgrading the stock ROM/firmware seems to be:

  • Download an unsigned zip file, from a shady website/forum.
  • Boot the phone in recovery mode.
  • Wipe the phone / reset to default state.
  • Install the update, and hope it works.
  • Assume you're not running trojaned binaries.
  • Hope the thing still works.
  • Reboot into the new O/S.

All in all .. not ideal .. in any sense.

I wish there were a more "official" way to go. For the moment I guess I'll ignore the problem for another year. My nokia phone does look pretty good ..

Categories: LUG Community Blogs

Jonathan McDowell: Programming the FST-01 (gnuk) with a Bus Pirate + OpenOCD

Planet ALUG - Tue, 11/08/2015 - 15:29

Last year at DebConf14 Lucas authorized the purchase of a handful of gnuk devices, one of which I obtained. At the time it only supported 2048 bit RSA keys. I took a look at what might be involved in adding 4096 bit support during DebConf and managed to brick my device several times in doing so. Thankfully gniibe was on hand with his STLinkV2 to help me recover. However subsequently I was loathe to experiment further at home until I had a suitable programmer.

As it is, this year has been busy and the 1.1.x release train is supposed to have 4K RSA (as well as ECC) support. DebConf15 is coming up and I felt I should finally sort out playing with the device properly. I still didn’t have a suitable programmer. Or did I? Could my trusty Bus Pirate help?

The FST-01 has an STM32F103TB on it. There is an exposed SWD port. I found a few projects that claimed to do SWD with a Bus Pirate - Will Donnelly has a much-cloned Python project, the MC HCK project have a programmer in Ruby, and there’s LibSWD, though that’s targeted at smarter programmers. None of them worked for me; I could get the Python bits as far as correctly doing the ID of the device, but not reading the option bytes or successfully flashing (though I did manage an erase).

Enter the old favourite, OpenOCD. This already has SWD support and there’s an outstanding commit request to add Bus Pirate support. NodoNogard has a post on using the ST-Link/V2 with OpenOCD and the FST-01 which provided some useful pointers. I grabbed the patch from Gerrit, applied it to OpenOCD git and built an openocd.cfg that contained:

source [find interface/buspirate.cfg]
buspirate_port /dev/ttyUSB0
buspirate_vreg 1
buspirate_mode normal
transport select swd
source [find target/stm32f1x.cfg]

My BP has the Seeed Studio probe cable, so my hookups look like this:

That’s BP MOSI (grey) to SWD IO, BP CLK (purple) to SWD CLK, BP 3.3V (red) to FST-01 PWR and BP GND (brown) to FST-01 GND. Once that was done I fired up OpenOCD in one terminal and did the following in another:

$ telnet localhost 4444
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Open On-Chip Debugger
> reset halt
target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0xfffffffe msp: 0xfffffffc
Info : device id = 0x20036410
Info : SWD IDCODE 0x1ba01477
Error: Failed to read memory at 0x1ffff7e2
Warn : STM32 flash size failed, probe inaccurate - assuming 128k flash
Info : flash size = 128kbytes
> stm32f1x unlock 0
Device Security Bit Set
stm32x unlocked.
INFO: a reset or power cycle is required for the new settings to take effect.
> reset halt
target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0xfffffffe msp: 0xfffffffc
> flash write_image erase /home/noodles/checkouts/gnuk/src/build/gnuk.elf
auto erase enabled
wrote 109568 bytes from file /home/noodles/checkouts/gnuk/src/build/gnuk.elf in 95.055603s (1.126 KiB/s)
> stm32f1x lock 0
stm32x locked
> reset halt
target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x08000280 msp: 0x20005000

Then it was a matter of disconnecting the gnuk from the BP, plugging it into my USB port and seeing it come up successfully:

usb 1-2: new full-speed USB device number 11 using xhci_hcd
usb 1-2: New USB device found, idVendor=234b, idProduct=0000
usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 1-2: Product: Gnuk Token
usb 1-2: Manufacturer: Free Software Initiative of Japan
usb 1-2: SerialNumber: FSIJ-1.1.7-87063020
usb 1-2: ep 0x82 - rounding interval to 1024 microframes, ep desc says 2040 microframes

More once I actually have a 4K key loaded on it.

Categories: LUG Community Blogs

Steve Kemp: A brief look at the weed file store

Planet HantsLUG - Mon, 10/08/2015 - 14:29

Now that I've got a citizen-ID, a pair of Finnish bank accounts, and have enrolled in a Finnish language-course (due to start next month) I guess I can go back to looking at object stores, and replicated filesystems.

To recap, my current favourite, despite the lack of documentation, is the Camlistore project, which is written in Go.

Looking around there are lots of interesting projects being written in Go, and so is my next one: seaweedfs, which despite its name is not a filesystem at all, but a store accessed via HTTP.

Installation is simple, if you have a working go-lang environment:

go get

Once that completes you'll find you have the executable bin/weed placed beneath your $GOPATH. This single binary is used for everything, though it is worth noting that there are distinct roles:

  • A key concept in weed is "volumes". Volumes are areas to which files are written. Volumes may be replicated, and this replication is decided on a per-volume basis, rather than a per-upload one.
  • Clients talk to a master. The master notices when volumes spring into existence, or go away. For high-availability you can run multiple masters, and they elect the real master (via Raft).

In our demo we'll have three hosts one, the master, two and three which are storage nodes. First of all we start the master:

root@one:~# mkdir /
root@one:~# weed master -mdir / -defaultReplication=001

Then on the storage nodes we start them up:

root@two:~# mkdir /data
root@two:~# weed volume -dir=/data -max=1 -mserver=one.our.domain:9333

Then the second storage-node:

root@three:~# mkdir /data
root@three:~# weed volume -dir=/data -max=1 -mserver=one.our.domain:9333

At this point we have a master to which we'll talk (on port :9333), and a pair of storage-nodes which will accept commands over :8080. We've configured replication such that all uploads will go to both volumes. (The -max=1 configuration ensures that each volume-store will only create one volume; this is in the interest of simplicity.)

Uploading content works in two phases:

  • First tell the master you wish to upload something, to gain an ID in response.
  • Then using the upload-ID actually upload the object.

We'll do that like so:

laptop ~ $ curl -X POST http://one.our.domain:9333/dir/assign
{"fid":"1,06c3add5c3","url":"two.our.domain:8080","publicUrl":"two.our.domain:8080","count":1}
client ~ $ curl -X PUT -F file=@/etc/passwd http://two.our.domain:8080/1,06c3add5c3
{"name":"passwd","size":2137}

In the first command we call /dir/assign, and receive a JSON response which contains the addresses of the storage-nodes, along with a "file ID", or fid. In the second command we pick one of those hosts at random and make the upload using the given ID.

If the upload succeeds it will be written to both volumes, which we can see directly by running strings on the files beneath /data on the two nodes.

The next part is retrieving a file by ID, and we can do that by asking the master server where that ID lives:

client ~ $ curl http://one.our.domain:9333/dir/lookup?volumeId=1,06c3add5c3
{"volumeId":"1","locations":[
 {"url":"two.our.domain:8080","publicUrl":"two.our.domain:8080"},
 {"url":"three.our.domain:8080","publicUrl":"three.our.domain:8080"}
]}

Or, if we prefer we could just fetch via the master - it will issue a redirect to one of the volumes that contains the file:

client ~$ curl http://one.our.domain:9333/1,06c3add5c3
<a href="http://two.our.domain:8080/1,06c3add5c3">Moved Permanently</a>

If you follow redirections then it'll download, as you'd expect:

client ~ $ curl -L http://one.our.domain:9333/1,06c3add5c3
root:x:0:0:root:/root:/bin/bash
..

That's about all you need to know to decide if this is for you - in short, uploads require two requests: one to claim an identifier, and one to use it. Downloads require that your storage-volumes be publicly accessible, and will probably require a proxy of some kind to make them visible on :80, or :443.

A single "weed volume .." process, which runs as a volume-server can support multiple volumes, which are created on-demand, but I've explicitly preferred to limit them here. I'm not 100% sure yet whether it's a good idea to allow creation of multiple volumes or not. There are space implications, and you need to read about replication before you go too far down the rabbit-hole. There is the notion of "data centres", and "racks", such that you can pretend different IPs are different locations and ensure that data is replicated across them, or only within-them, but these choices will depend on your needs.

Writing a thin middleware/shim to allow uploads to be atomic seems simple enough, and there are options to allow exporting the data from the volumes as .tar files, so I have no undue worries about data-storage.
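A first cut of such a shim needs little more than the two curl calls from earlier glued together; a sketch (assuming the demo master on one.our.domain:9333, and crude sed-based JSON parsing where jq would be nicer):

# Assign a fid, upload to the returned volume server, print the fid
weed_upload() {
    ASSIGN=$(curl -s -X POST http://one.our.domain:9333/dir/assign)
    FID=$(printf '%s' "${ASSIGN}" | sed 's/.*"fid":"\([^"]*\)".*/\1/')
    URL=$(printf '%s' "${ASSIGN}" | sed 's/.*"url":"\([^"]*\)".*/\1/')
    curl -s -X PUT -F "file=@$1" "http://${URL}/${FID}" >/dev/null \
        && echo "${FID}"
}

weed_upload /etc/motd    # prints the fid to use for later retrieval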

This system seems reliable, and it seems well designed, but people keep saying "I'm not using it in production because .. nobody else is", which is an unfortunate problem to have.

Anyway, I like it. The biggest omission is really authentication. All files are public if you know their IDs, but at least they're not sequential ..

Categories: LUG Community Blogs

Andy Smith: SSDs and Linux Native Command Queuing

Planet HantsLUG - Sun, 09/08/2015 - 08:10
Native Command Queuing

Native Command Queuing (NCQ) is an extension of the Serial ATA protocol that allows multiple requests to be sent to a drive, so the drive can order them in whatever way it considers optimal.

This is very handy for rotational media like conventional hard drives, because they have to move the head all over to do random IO, so in theory if they are allowed to optimise ordering then they may be able to do a better job of it. If the drive supports NCQ then it will advertise this fact to the operating system and Linux by default will enable it.

Queue depth

The maximum depth of the queue in SATA is 31 for practical purposes, and so if the drive supports NCQ then Linux will usually set the depth to 31. You can change the depth by writing a number between 1 and 31 to /sys/block/<device>/device/queue_depth. Writing 1 to the file effectively disables NCQ for that device.
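For example, to inspect the current depth on sda and then effectively disable NCQ (as root):

cat /sys/block/sda/device/queue_depth
echo 1 > /sys/block/sda/device/queue_depth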

NCQ and SSDs

So what about SSDs? They aren’t rotational media; any access is in theory the same as any other access, so no need to optimally order the commands, right?

The sad fact is, many SSDs even today have incompatibilities with SATA drivers and chipsets such that NCQ does not reliably work. There’s advice all over the place that NCQ can be disabled with no ill effect, because supposedly SSDs do not benefit from it. Some posts even go as far as to suggest that NCQ might be detrimental to performance with SSDs.

Well, let’s see what fio has to say about that.

The setup
  • Two Intel DC s3610 1.6TB SSDs in an MD RAID-10 on Debian 8.1.
  • noop IO scheduler.
  • fio operating on a 4GiB test file that is on an ext4 filesystem backed by LVM.
  • fio set to do a 70/30% mix of read vs write operations with 128 simultaneous IO operations in flight.

The goal of this is to simulate a busy highly parallel server load, such as you might see with a database.

The fio command line looks like this:

fio --randrepeat=1 \
    --ioengine=libaio \
    --direct=1 \
    --gtod_reduce=1 \
    --name=ncq \
    --filename=test \
    --bs=4k \
    --iodepth=128 \
    --size=4G \
    --readwrite=randrw \
    --rwmixread=70

Expected output will be something like this:

ncq: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=128
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [m(1)] [100.0% done] [50805KB/21546KB/0KB /s] [12.8K/5386/0 iops] [eta 00m:00s]
ncq1: (groupid=0, jobs=1): err= 0: pid=11272: Sun Aug  9 06:29:33 2015
  read : io=2867.6MB, bw=44949KB/s, iops=11237, runt= 65327msec
  write: io=1228.5MB, bw=19256KB/s, iops=4813, runt= 65327msec
  cpu          : usr=4.39%, sys=25.20%, ctx=732814, majf=0, minf=6
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued    : total=r=734099/w=314477/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
   READ: io=2867.6MB, aggrb=44949KB/s, minb=44949KB/s, maxb=44949KB/s, mint=65327msec, maxt=65327msec
  WRITE: io=1228.5MB, aggrb=19255KB/s, minb=19255KB/s, maxb=19255KB/s, mint=65327msec, maxt=65327msec

Disk stats (read/write):
    dm-0: ios=732755/313937, merge=0/0, ticks=4865644/3457248, in_queue=8323636, util=99.97%, aggrios=734101/314673, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
  md4: ios=734101/314673, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=364562/313849, aggrmerge=2519/1670, aggrticks=2422422/2049132, aggrin_queue=4471730, aggrutil=94.37%
  sda: ios=364664/313901, merge=2526/1618, ticks=2627716/2223944, in_queue=4852092, util=94.37%
  sdb: ios=364461/313797, merge=2513/1722, ticks=2217128/1874320, in_queue=4091368, util=91.68%

The figures we’re interested in are the iops= ones, in this case 11237 and 4813 for read and write respectively.


Here’s how different NCQ queue depths affected things:

[Graph: read and write IOPS against NCQ queue depth]


On this setup anything below a queue depth of about 8 is disastrous to performance. The aberration at a queue depth of 19 is interesting. This is actually repeatable. I have no explanation for it.

Don’t believe anyone who tells you that NCQ is unimportant for SSDs unless you’ve benchmarked that and proven it to yourself. Disabling NCQ on an Intel DC s3610 appears to reduce its performance to around 25% of what it would be with even a queue depth of 8. Modern SSDs, especially enterprise ones, have a parallel architecture that allows them to get multiple things done at once. They expect NCQ to be enabled.
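Reproducing the graph is just a loop over the depths, setting both members of the array before each run of the same fio job as above (run as root; sda and sdb as in this setup):

for DEPTH in 1 2 4 8 16 19 24 31
do
    echo "${DEPTH}" > /sys/block/sda/device/queue_depth
    echo "${DEPTH}" > /sys/block/sdb/device/queue_depth
    fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
        --name=ncq --filename=test --bs=4k --iodepth=128 --size=4G \
        --readwrite=randrw --rwmixread=70
done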

It’s easy to guess why 8 might be the magic number for the DC s3610:

The top of the PCB has eight NAND emplacements and Intel’s proprietary eight-channel PC29AS21CB0 controller.

The newer NVMe devices are even more aggressive with this; while the SATA spec stops at one queue with a depth of 32, NVMe specifies up to 65k queues with a depth of up to 65k each! Modern SSDs are designed with this in mind.

Categories: LUG Community Blogs