Yesterday morning we awoke at silly o'clock to take the train to catch the ferry from Poole to Guernsey. The ferry was rather busy with people going to the Island Games in Jersey, but we got off at St. Peter Port. We walked up the hill to our B&B to discover there had been a booking error and they were actually full - so they took us to another hotel (an extra star) where we stayed instead.
The glorious weather we had for our crossing had mostly deserted us and it had become rather dull and flat. However the predicted rain didn't turn up so we were able to explore the town without getting wet and were able to find some food for dinner.
This morning was great: after breakfast we went into town to explore further. Once we had bought lunch bits we took the bus towards the airport, getting off one stop shy, then walked all along the southern coast back to St. Peter Port. The walking was easy and the views were beautiful - very reminiscent of the Brittany coast or Cornwall. More like the UK and less like France, they were a bit stingy with signs and it was a bit confusing in places - the French GR paint marking system is very simple and much easier to navigate with than the occasional sign!
When we made it back to town we had a look at La Valette Underground Military Museum, which was most fascinating, and packed with more stuff than you would imagine could fit in such a small place.
For dinner we decided to try eating out. La Creperie was strange: the staff appeared to be of Slavic origin and half the menu was neither crepe nor galette, but the galette was actually quite good, though the crepe was only average. Definitely fusion food!
The Core Infrastructure Initiative announced today that they will support two Debian Developers, Holger Levsen and Jérémy Bobbio, with $200,000 to advance their Debian work in reproducible builds and to collaborate more closely with other distributions such as Fedora, Ubuntu, and OpenWrt so that they too benefit from this effort.
The Core Infrastructure Initiative (CII) was established in 2014 to fortify the security of key open source projects. This initiative is funded by more than 20 companies and managed by The Linux Foundation.
The reproducible builds initiative aims to enable anyone to reproduce bit-by-bit identical binary packages from a given source, thus enabling anyone to independently verify that a binary matches the source code from which it is said to have been derived. For example, this allows Debian users to rebuild packages and obtain binaries exactly identical to the ones provided by the Debian repositories.
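As a rough sketch of what that verification looks like in practice (package name and paths here are purely illustrative, and in reality a carefully matched build environment is needed for the comparison to be meaningful):

# Fetch the source, rebuild the binary package, and compare checksums;
# an identical checksum means the build was reproduced bit for bit.
apt-get source hello
cd hello-*
dpkg-buildpackage -us -uc -b
sha256sum ../hello_*.deb
# ...then compare that against the checksum of the .deb the mirror serves.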
After leaving IBM I’ve joined Pace at their Belfast office. It is quite a change of IT sectors, though still the same sort of job. Software development seems to have a lot in common no matter which industry it is for.
The job is still software development, and there should be some fun challenges, for instance allowing a TV set top box to offer on-demand video content when all you have is a one-way data stream from a satellite, which makes for some interesting solutions. I'm working in the Cobalt team, which deals with delivering data from the TV provider onto set top boxes: things like settings, software updates, programme guides, on-demand content and even apps. Other teams in the office work with the actual video content encryption and playback, and the UI the set top box shows.
The local office seems to be all running Fedora, so I’m saying goodbye to Ubuntu at work. I already miss it, but hopefully will find Fedora enjoyable in the long term.
The office is on the other side of Belfast so is a marginally longer commute, but it’s still reasonable to get to. Stranmillis seems a nice area of Belfast, and it’s a 10 minute walk to the Botanical gardens so I intend to make some time to see it over lunch, which will be nice as I really miss getting out as I could in Hursley and its surrounding fields.
Also, web services generally spit out one of a fairly common set of formats (json, xml, html) and I often just want to grab a value from the response and use it in a script - maybe to make the next call in a workflow.
So I made please, which makes it super simple to do things like making a web request and grabbing a particular value from the response.
For example, here's how you'd get the page title from this site:

please get http://offend.me.uk/ | please parse html.head.title.#text
Or getting a value out of the json returned by jsontest.com's IP address API:

please get http://ip.jsontest.com/ | please parse ip
The parse part of please is the most fun; it can convert between a few different formats. Something I do quite often is grabbing a json response from an API and spitting it out as yaml so I can read it easily. For example:

please get http://date.jsontest.com/ | please parse -o yaml
(alright so that's a poor example but the difference is huge when it's a complicated bit of json)
Also handy for turning an unreadable mess of xml into yaml (I love yaml for its readability):

echo '<docroot type="messydoc"><a><b dir="up">A tree</b><b dir="down">The ground</b></a></docroot>' | please parse -o yaml
As an example of the kinds of things you can play with, I made this tool for generating graphs from json.
I'm still working on please; there will be bugs; let me know about them.
Recently I've been experimenting with camlistore, which is yet another object storage system.
Camlistore is designed exactly the way I'd like an object-storage system to be - each server allows you to:
It should be noted that more is possible; there's a pretty web UI, for example, but I'm simplifying. Do your own homework :)
With those primitives you can allow a client-library to upload a file once, then in the background a bunch of dumb servers can decide amongst themselves "Hey I have data with ID:33333 - Do you?". If nobody else does they can upload a second copy.
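Camlistore's chunk IDs are derived from the chunk contents (a hash), which is what makes that conversation possible without any central coordination. A very rough shell illustration of the content-addressing idea (not camlistore's actual chunking or hash scheme):

# Cut a file into fixed-size chunks and name each chunk after the hash of
# its bytes; two servers can then compare names to spot missing copies.
split -b 65536 big-file chunk.
for c in chunk.*; do
    mv "$c" "$(sha256sum "$c" | cut -d' ' -f1)"
done
ls   # these content-derived IDs are what the servers would gossip about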
In short this kind of system allows the replication to be decoupled from the storage. The obvious risk, though: if you upload a file, the chunks might live on a host that dies 20 minutes later, just before the content has been replicated. That risk is minimal, but valid.
There is also the risk that sudden rashes of uploads leave the system consuming all its internal bandwidth constantly comparing chunk IDs, re-checking data that has already been copied numerous times in the past, or trying to play "catch-up" if new content arrives faster than the replication bandwidth can handle. I guess it should be possible to detect those conditions, but they're things to be concerned about.
Anyway the biggest downside with camlistore is the lack of documentation about rebalancing, replication, or anything beyond simple single-server setups. Some people have blogged about it, and I got it working between two nodes, but I didn't feel confident it was as robust as I wanted it to be.
I have a strong belief that Camlistore will become a project of joy and wonder, but it isn't quite there yet. I certainly don't want to stop watching it :)
On to the more personal .. I'm all about the object storage these days. Right now most of my objects are packed in a collection of boxes. On the 6th of next month a shipping container will come pick them up and take them to Finland.
For pretty much 20 days in a row we've been taking things to the skip, or the local charity-shops. I expect that by the time we've relocated the amount of possessions we'll maintain will be at least a fifth of our current levels.
We're working on the general rule of thumb: "If it is possible to replace an item we will not take it". That means chess-sets, mirrors, etc, will not be carried. DVDs, for example, have been slashed brutally such that we're only transferring 40 out of a starting collection of 500+.
Only personal, one-off, unique, or "significant" items will be transported. This includes things like personal photographs, family items, and similar. Clothes? Well I need to take one jacket, but more can be bought. The only place I put my foot down was books. Yes I'm a kindle-user these days, but I spent many years tracking down some rare volumes, and though it would be possible to repeat that effort I just don't want to.
I've also decided that I'm carrying my complete toolbox. Some of the tools I took with me when I left home at 18 have stayed with me for the past 20+ years. I don't need this specific crowbar, or axe, but I'm damned if I'm going to lose them now. So they stay. Object storage - some objects are more important than they should be!
I always seem to forget this one.
To pass F11 or F12 over a serial connection (either real serial or Serial-over-LAN IPMI), it’s Escape followed by ! (Shift+1) or @ (Shift+') respectively.
Note that on a US keyboard ! and @ would be next to each other above the 1 and 2 keys, so that would make some vague kind of sense as alternatives to F11 and F12. But it's literally the @ character that matters, and since I'm using a UK keyboard it is Shift+'.
TL;DR: Most motherboards have a serial header in an IDC-10 (5×2 pins) arrangement with the pins as a row of even numbered pins (2,4,6,8,X) followed by a row of odd numbered pins (1,3,5,7,9). Supermicro ones appear to have the pins in sequential order (6,7,8,9,X and then 1,2,3,4,5). As a result a standard IDC-10 to DB-9 cable will not work and you'll need to either hack one about or buy the Supermicro one.

Are we sitting comfortably?
I bought a Supermicro motherboard. It doesn’t have a serial port exposed at the back. I like to use serial ports for a serial console even though I am aware that IPMI exists. IPMI on this board works okay but I like knowing I can always get to the “real” serial port as well.
The motherboard has a COM1 serial header, and I wasn’t using the PCI expansion slot on the back of the chassis, so I decided to put a serial port there. I bought a typical IDC-10 / DB-9 cable and plate:
Didn’t work. Serial-over-LAN (IPMI) worked alright. On COM1 I would get either nothing or a run of garbage characters from time to time. I wasted a good number of hours messing with BIOS settings, baud rates, checking if my USB serial adaptor actually worked with another device (of which I only have one in my home), before I decided to sit down and check the pin numbering for both the header and the cable.
Looking at the motherboard manual we see this:
And the cable?
Notice anything amiss?
The cable's pins go in a row of even numbers and then a row of odd numbers:

2 4 6 8 X
1 3 5 7 9 -
The X is the missing pin (serial uses 9 pins) and the - indicates where the notch for the connector would be: next to pin 5 in this case.
The header's pins go in sequential order:

6 7 8 9 X
1 2 3 4 5 -
As a result all but pin 1 are incorrect.
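Working purely from the two layouts above, this is which header pin ends up in the position where the cable expects each DB-9 pin:

cable expects pin:  2  4  6  8  X  1  3  5  7  9
header provides:    6  7  8  9  X  1  2  3  4  5

Only the pin 1 position lines up, which is why the port gave either nothing or garbage.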
You actually need a Supermicro cable for this. CBL-0010L is the part number in my case. CBL-0010LP would be the low profile version. Good luck finding it mentioned on Supermicro's site, but your favourite reseller will probably know of it. As it was I found one on eBay for £1.58+VAT, and it works now.
After knowing what to search for I also found someone else having a similar issue with a Supermicro board.
You could of course instead hack any existing cable’s pins about or fit an adaptor in between (as the person in the above link did).
Thanks Supermicro. Thupermicro.
Previously I'd mentioned that we were moving from Edinburgh to Newcastle, such that my wife could accept a position in a training-program, and become a more specialized (medical) doctor.
Now the inevitable update: We're still moving, but we're no longer moving to Newcastle, instead we're moving to Helsinki, Finland.
Me? I care very little about where I end up. I love Edinburgh, I always have, and I never expected to leave here, but once the decision was made that we needed to be elsewhere the actual destination didn't matter too much to me.
Given the alternative - my wife moves to Finland, and I do not - moving to Helsinki is a no-brainer.
I'm working on the assumption that I can keep my job and work more remotely. If that turns out not to be the case, that'll be a real shame given the way the past two years have worked out.
So .. 60 days or so left in the UK. Fun.
Long ago, in days of yore, we assumed that any software worth having would be packaged by the operating system we used. Debian, with its enormous pile of software (over 20,000 source packages last time I looked), seemed to contain basically every piece of free software ever. However, as more and more people have come to Linux-based and BSD-based systems, and as the proliferation of *NIX-based systems has become ever more diverse, it has become harder and harder to ensure that everyone has access to all of the software they might choose to use.
Couple that with the rapid development of new projects, which clearly want to get users involved well before the next release cycle of a Linux-based distribution such as Debian, and you end up with this recommendation to bypass the operating system's packaging system and simply curl | sudo bash -.
We, the OS-development literati, have come out in droves to say "eww, nasty, don't do that please" and yet we have brought this upon ourselves. Our tendency to invent, and reinvent, at the very basic levels of distributions has resulted in so many operating systems and so many ways to package software (if not in underlying package format then in policy and process) that third party application authors simply cannot keep up. Couple that with the desire of the consumers to not have their chosen platform discounted, and if you provide Debian packages, you end up needing to provide for Fedora, RHEL, SuSE, SLES, CentOS, Mint, Gentoo, Arch, etc., etc.; let alone supporting all the various BSDs. This leads to the simple expedience of curl | sudo bash -.
Nobody, not even those who are most vehemently against this mechanism of installing software, can claim that it is not quick, simple for users, easy to copy/paste out of a web-page, and leaves all the icky complexity of sorting things out up to a script which the computer can run, rather than the nascent user of the software in question. As a result, many varieties of software have ended up using this as a simple installation mechanism, from games to orchestration frameworks - everyone can acknowledge how easy it is to use.
Now, some providers are wising up a little and ensuring that the url you are curling is at least an https:// one. Some even omit the sudo from the copy/paste space and have it in the script, allowing them to display some basic information and prompting the user that this will occur as root before going ahead and elevating. All of these myriad little tweaks to the fundamental idea improve matters but are ultimately just putting lipstick on a fairly sad looking pig.
So, what can be done? Well we (again the OS-development literati) got ourselves into this horrendous mess, so it's up to us to get ourselves back out. We're all too entrenched in our chosen packaging methodologies, processes, and policies, to back out of those; yet we're clearly not properly servicing a non-trivial segment of our userbase. We need to do better. Not everyone who currently honours a curl | sudo bash - is capable of understanding why it's such a bad idea to do so. Some education may reduce that number but it will never eliminate it.
For a long time I advocated a switch to a wget && review && sudo ./script approach instead (sketched below), but the above point, about people who don't understand why it might be a bad idea, really shows how few of those users would even be capable of starting to review a script they downloaded, let alone of usefully judging for themselves whether it is really safe to run. Instead we need something better, something collaborative, something capable of solving the accessibility issues which led to the curl | sudo bash - revolt in the first place.
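For reference, that pattern looks something like this (URL purely hypothetical):

# Fetch the installer, read it, and only then run it as root.
wget https://example.com/install.sh
less install.sh        # actually review what you are about to run
sudo sh ./install.sh   # only once you are happy with it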
I don't pretend to know what that solution might be, and I don't pretend to think I might be the one to come up with it, but I can highlight a few things I think we'll need to solve to get there:
Given the above, no solution can be "just get all the app developers to learn how to package software for all the OS distributions they want their app to run on" since that way madness lies.
I'm sure there are other minor, and major, requirements on any useful solution, but the simple fact of the matter is that until and unless we have something which at least meets the above, we will never be rid of curl | sudo bash -: just like we can never seem to be rid of that one odd person at the party whom no one remembers inviting, whom no one wants to ask to leave because they do fill a needed role, but whom no one really seems to like.
Until then, let's suck it up and while we might not like it, let's just let people keep on curl | sudo bash -ing until someone gets hurt.
P.S. I hate curl | sudo bash - for the record.
http://baldric.net/2015/06/05/why-pay-twice/ asks why the government hires civilians to monitor social media instead of just giving GCHQ the keywords. Us cripples aren't allowed to comment there (physical ability test) so I reply here:
It’s pretty obvious that they have probably done both, isn’t it?
This way, they're verifying each other. Politicians probably trust neither civilians nor spies completely, and that makes it worth paying twice for this.
Unlike lots of things that they seem to want not to pay for at all…
Indeed my most recent tweet regarding Java could hardly be construed as positive towards it.
My OneRNG kickstarter arrived today. I had five units, so I chose three external models and two internal ones. The finish of the external model isn’t really up to the quality of an Entropy Key. Here’s a picture of them together.
Given that the external model looks rather flimsy — I could imagine it getting snapped in half if someone bumped into it — I think I’d probably prefer the internal model in practice. Here’s what that looks like:
The three different connectors are to try to ensure you can find a useful connection angle no matter how your motherboard’s internal USB headers are laid out.
I haven’t yet plugged them in to check out how they work. This is probably going to have to wait a few weeks as I have quite a lot on.
Assuming they work about as well as the Entropy Keys then I only need to keep two of these for myself, so if anyone wants one I would be willing to sell it on to you at cost plus postage.
Yesterday's Independent newspaper reports that HMG has let a contract with five companies to monitor social media such as Twitter, Facebook, and blogs for commentary on Government activity. The report says:
“Under the terms of the deal five companies have been approved to keep an eye on Facebook, Twitter and blogs and provide daily reports to Whitehall on what’s being said in “real time”.
Ministers, their advisers and officials will provide the firms with “keywords and topics” to monitor. They will also be able to opt in to an Orwellian-sounding Human-Driven Evaluation and Analysis system that will allow them to see “favourability of coverage” across old and new media.”
This seems to me to be a modern spin on the old press cuttings system which was in widespread use in HMG throughout my career. The article goes on to say:
“The Government has always paid for a clippings service which collated press coverage of departments and campaigns across the national, regional and specialist media. They have also monitored digital news on an ad hoc basis for several years. But this is believed to be the first time that the Government has signed up to a cross-Whitehall contract that includes “social” as a specific media for monitoring.”
Apart from the mainstream social media sites noted above, I’d be intrigued to know what criteria are to be applied for including blogs in the monitoring exercise. Some blogs (the “vox populi” types such as Guido Fawkes at order-order) will be obvious candidates. Others in the traditional media, such as journalistic or political blogs will also be included, but I wonder who chooses others, and by what yardsticks. Would trivia be included? And should I care?
According to the Independent, the Cabinet Office, which negotiated the deal, claims that even with the extended range of monitoring by bringing individual departmental contracts together it will be able to save £2.4m over four years whilst “maximising the quality of innovative work offered by suppliers”.
Now since the Cabinet Office is reportedly itself facing a budget cut of £13 million in this FY alone, it strikes me that it would have been much more cost effective to simply use GCHQ’s pre-existing monitoring system rather than paying a separate bunch of relative amateurs to search the same sources.
Just give GCHQ the “keywords” or “topics of interest”. Go on Dave, you know it makes sense.
After nearly 10 years with IBM, I am moving on… Today is my last day with IBM.
I suppose my career with IBM really started with a pre-university placement, which makes my time with the company closer to 11 years. I worked with some of the WebSphere technical sales and pre-sales teams in Basingstoke, doing desktop support and Lotus Domino administration and application design, though I don't like to remind people that I hold qualifications on Domino :p
I then joined as a graduate in 2005, and spent most of my time working on Integration Bus (aka Message Broker, and several more names) and enjoyed working with some great people over the years. The last 8 months or so have been with the QRadar team in Belfast, and I really enjoyed my time working with such a great team.
I have done test roles, development roles, performance work, some time in level 3 support, and enjoyed all of it. Even the late nights the day before release were usually good fun (the huge pizzas helped!).
I got very involved with IBM Hursley’s Blue Fusion events, which were incredible fun and a rather unique opportunity to interact with secondary school children.
Creating an Ubuntu-based linux desktop for IBM, with over 6500 installs, has been very rewarding and something I will remember fondly.
I’ve enjoyed my time in IBM and made some great friends. Thanks to everyone that helped make my time so much fun.
Well, that didn’t last long.
When I decided to force SSL as the default connection to trivia I had forgotten that it is syndicated via RSS on sites like planet alug. And of course as Brett Parker helpfully pointed out to me, self-signed certificates don’t always go down too well with RSS readers. He also pointed out that some spiders (notably google) would barf on my certificate and thus leave the site unindexed.
So I have taken off the forced redirect to port 443. Nevertheless, I would encourage readers to connect to https://baldric.net in order to protect their browsing of this horribly seditious site.
You never know who is watching……..
In my post of 8 May I said it was now time to encrypt much, much more of my everyday activity. One big, and obvious, hole in this policy decision was the fact that the public face of this blog itself has remained unencrypted since I first created it way back in 2006.
Back in September 2013 I mentioned that I had for some time protected all my own connections to trivia with an SSL connection. Given that my own access to trivia has always been encrypted, any of my readers could easily have used the same mechanism to connect (just by using the "https" prefix). However, my logs tell me that very, very few connections other than my own come in over SSL. There are a couple of probable reasons for this, not least the fact that an unencrypted plain http connection is the obvious (default) way to connect. But another reason may be the fact that I use a self signed (and self generated) X509 certificate. I do this because, like Michael Orlitzky I see no reason why I should pay an extortionist organisation such as a CA good money to produce a certificate which says nothing about me or the trustworthiness of my blog when I can produce a perfectly good certificate of my own.
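For what it's worth, producing such a certificate is a single openssl invocation; a minimal sketch (key size, lifetime and file names are only examples, not necessarily what trivia uses):

# Generate a private key and a self-signed certificate in one step.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
    -keyout trivia.key -out trivia.crt -subj '/CN=baldric.net'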
I particularly like Orlitzky’s description of CAs as “terrorists”. He says:
I oppose CA-signed certificates because it's bad policy, in the long run, to negotiate with terrorists. I use that word literally — the CAs and browser vendors use fear to achieve their goal: to get your money. The CAs collect a ransom every year to "renew" your certificate (i.e. to disarm the time bomb that they set the previous year) and if you don't pay up, they'll scare away your customers. 'Be a shame if sometin' like that wos to happens to yous…
Unfortunately, however, web browsers get really upset when they encounter self-signed certificates and throw up all sorts of ludicrously overblown warnings. Firefox, for example, gives the error below when first connecting to trivia over SSL.
Any naive reader encountering that sort of error message is likely to press the “get me out of here” button and then bang goes my readership. But that is just daft. If you are happy to connect to my blog in clear, why should you be afraid to connect to it over an encrypted channel just because the browser says it can’t verify my identity? If I wanted to attack you, the reader, then I could just as easily do so over a plain http connection as over SSL. And in any event, I did not create my self signed certificate to provide identity verification, I created it to provide an encrypted channel to the blog. That encryption works, and, I would argue, it is better than the encryption provided by many commercially produced certificates because I have specifically chosen to use only the stronger cyphers available to me.
Encrypting the connection to trivia feels to me like the right thing to do. I personally always feel better about a web connection that is encrypted. Indeed, I use the “https everywhere” plugin as a matter of course. Given that I already have an SSL connection available to offer on trivia, and that I believe that everyone has the right to browse the web free from intrusive gratuitous snooping I think it is now way past time that I provided that protection to my readers. So, as of yesterday I have shifted the whole of trivia to an encrypted channel by default. Any connection to port 80 is now automatically redirected to the SSL protected connection on port 443.
Let’s see what happens to my readership.
We have regular sessions on the second Saturday of each month. Bring a 'box', bring a notebook, bring anything that might run Linux, or just bring yourself and enjoy socialising/learning/teaching or simply chilling out!
This month's meeting is at The Feathers Pub, Merstham
42 High St, Merstham, Redhill, Surrey, RH1 3EA
01737 645643 · http://www.thefeathersmerstham.co.uk
NOTE the pub opens at 12 Noon.
Continuing the theme from the last post I made, I've recently started working my way down the list of existing object-storage implementations.
tahoe-LAFS is a well-established project which looked like a good fit for my needs:
Getting the system up and running on four nodes was very simple: set up a single, simple "introducer", which is a well-known node that all hosts can use to find each other, and then set up four daemons for storage.
When files are uploaded they are split into chunks, and these chunks are then distributed amongst the various nodes. There are some configuration settings which determine how many chunks files are split into (10 by default), how many chunks are required to rebuild the file (3 by default) and how many copies of the chunks will be created.
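Those settings live in each node's tahoe.cfg; a sketch of the relevant section, assuming the current option names, with the defaults described above:

# excerpt from ~/.tahoe/tahoe.cfg
[client]
shares.needed = 3    # chunks required to rebuild a file
shares.happy = 7     # distinct servers an upload should reach to be considered healthy
shares.total = 10    # chunks created for each file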
The biggest problem I have with tahoe is that there is no rebalancing support: set up four nodes and the space becomes full? You can add more nodes, but new uploads go to the new nodes while the old chunks stay on the old ones. Similarly, if you change your replication counts because you're suddenly more (or less) paranoid, this doesn't affect files that have already been uploaded.
In my perfect world you'd distribute blocks around pretty optimistically, and I'd probably run more services:
The storage nodes would have the primitives "List all blocks", "Get block", "Put block", and using that you could ensure that each node had sent its data to at least N other nodes. This could be done in the background.
The indexer would be responsible for keeping track of which blocks live where, and which blocks are needed to reassemble upload N. There's probably more that it could do.
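To make that concrete, here is a rough sketch of the background job each storage node could run, assuming an entirely made-up HTTP interface exposing those three primitives:

# For every block this node holds, check whether each peer also holds it,
# and push a copy wherever it is missing (node names and API are hypothetical).
for block in $(curl -s http://node-a.example/blocks); do
    for peer in node-b.example node-c.example; do
        if ! curl -sfI "http://$peer/blocks/$block" >/dev/null; then
            curl -s "http://node-a.example/blocks/$block" |
                curl -sf -X PUT --data-binary @- "http://$peer/blocks/$block"
        fi
    done
done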