Planet ALUG

Planet ALUG - http://planet.alug.org.uk/

Mick Morgan: get your porn here

Thu, 30/07/2015 - 16:20

Dear Dave is at it again. Sometimes I worry about our PM’s priorities. Not content with his earlier insistence that UK ISPs must introduce “family friendly (read “porn”) filters”, our man in No 10 now wants to “see age restrictions put into place or these (i.e. “porn”) websites will face being shut down”.

El Reg today runs a nice article about Dave’s latest delusion. That article begins:

Prime Minister David Cameron has declared himself “determined to introduce age verification mechanisms to restrict under 18s’ access to pornographic websites” and he is “prepared to legislate to do so if the industry fails to self-regulate.”

It continues in classic El Reg style:

The government will hold a consultation in the autumn, meaning it will be standing on the proverbial street corner and soliciting views on how to stop 17-year-olds running a web search for the phrase “tits”.

and further notes that Baroness Shields (who is apparently our “Minister for internet safety and security”) said:

“Whilst great progress has been made, we remain acutely aware of the risks and dangers that young people face online. This is why we are committed to taking action to protect children from harmful content. Companies delivering adult content in the UK must take steps to make sure these sites are behind age verification controls.”

To which two members of the El Reg commentariat respond:

I give it 5 minutes after the “blockade” is put in place before someone puts a blog post up explaining how to bypass said blockade.

and

Re: 5 minutes

“I give it 5 minutes after the “blockade” is put in place before someone puts a blog post up explaining how to bypass said blockade.”

I can do that now & don’t need a blog.

Q: Are you over 18?

A: Yes

Someone, somewhere, in Government must be able to explain to this bunch of idiots how the internet works. Short of actually pulling the plug on the entire net, any attempt to block access to porn is doomed to failure. China has a well documented and massive censorship mechanism in place (the Great Firewall) in order to control what its populace can watch or read or listen to. That mechanism fails to prevent determined access to censored material. If a Marxist State cannot effectively block free access to the ‘net, then Dear Dave has no chance.

Unless of course he knows that, wants to fail, and plans his own Great Firewall in “reluctant” response.

Categories: LUG Community Blogs

Mick Morgan: domain privacy?

Tue, 28/07/2015 - 20:01

Over the past few months I have bought myself a bunch of new domain names (I collect ’em….). On some of those names I have chosen the option of “domain privacy” so that the whois record for the domain in question will show limited information to the world at large. I don’t often do this, for a couple of reasons. Firstly, I usually don’t much care whether or not the world at large knows that I own and manage a particular domain (I have over a dozen of these). Secondly, the privacy provided is largely illusory anyway. Law Enforcement Agencies, determined companies with pushy lawyers and network level adversaries will always be able to link any domain with the real owner should they so choose. In fact, faced with a simple DMCA request, some ISPs have in the past simply rolled over and exposed their customers’ details.

But, I get spam to all the email addresses I advertise in my whois records, and I also expose other personal details required by ICANN rules. I don’t much like that, but I put up with it as a necessary evil. However, for one or two of the new domains I don’t want the world and his dog attributing the name directly to me – at least not without some effort anyway.

Because the whois record must contain contact details, domain privacy systems tend to mask the genuine registrant email address with a proxy address of the form “some-random-alphanumeric-string@dummy.domain” which simply redirects to the genuine registrant email address. Herein lies one obvious flaw in the process: a network level adversary can simply send an email to the proxy address and then watch where it goes (so domain privacy is pointless if your adversary is GCHQ or NSA – but then if they are your adversaries you have a bigger problem than just maintaining privacy on your domain).
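
To see exactly what the world gets, you can run the query yourself. The domain and grep pattern below are purely illustrative, and the field names in whois output vary from registrar to registrar, but a privacy-protected record shows something like a randomised proxy address in place of the real one:

# illustrative only: the domain is a placeholder and whois output fields differ between registrars
whois example.com | grep -iE 'registrant|admin|email'
#   Registrant Email: abc123xyz@someprivacyservice.example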

Interestingly, I have received multiple emails to each of the proxy addresses listed for my “private” domains purporting to come from marketing companies offering me the chance to sign up to various special offers. Each of those emails also offers me the chance to “unsubscribe” from their marketing list if I am not interested in their wares.

I’ll leave the task of spotting the obvious flaw in that as an exercise for the class.

Categories: LUG Community Blogs

Jonathan McDowell: Recovering a DGN3500 via JTAG

Tue, 21/07/2015 - 11:34

Back in 2010 when I needed an ADSL2 router in the US I bought a Netgear DGN3500. It did what I wanted out of the box and being based on a MIPS AR9 (ARX100) it seemed likely OpenWRT support might happen. Long story short, I managed to overwrite u-boot (the bootloader) while flashing a test image I’d built. I ended up buying a new router (same model) to get my internet connection back ASAP and never getting around to fully fixing the broken one. Until yesterday. Below is how I fixed it, both for my own future reference and in case it’s of use to any other unfortunate soul.

The device has clear points for serial and JTAG and it was easy enough (even with my basic soldering skills) to put a proper header on. The tricky bit is that the flash is connected via SPI, so it’s not just a matter of attaching JTAG, doing a scan and reflashing from the JTAG tool. I ended up doing RAM initialisation, then copying a RAM copy of u-boot in and then using that to reflash. There may well have been a better way, but this worked for me. For reference the failure mode I saw was an infinitely repeating:

ROM VER: 1.1.3 CFG 05

My JTAG device is a Bus Pirate v3b which is much better than the parallel port JTAG device I built the first time I wanted to do something similar. I put the latest firmware (6.1) on it.

All of this was done from my laptop, which runs Debian testing (stretch). I used the OpenOCD 0.9.0-1+b1 package from there.

Daniel Schwierzeck has some OpenOCD scripts which include a target definition for the ARX100. I added a board definition for the DGN3500 (I’ve also sent Daniel a patch to add this to his repo).

I tied all of this together with an openocd.cfg that contained:

source [find interface/buspirate.cfg]
buspirate_port /dev/ttyUSB1
buspirate_vreg 0
buspirate_mode normal
buspirate_pullup 0
reset_config trst_only
source [find openocd-scripts/target/arx100.cfg]
source [find openocd-scripts/board/dgn3500.cfg]
gdb_flash_program enable
gdb_memory_map enable
gdb_breakpoint_override hard

I was then able to power on the router and type dgn3500_ramboot into the OpenOCD session. This fetched my RAM copy of u-boot from dgn3500_ram/u-boot.bin, copied it into the router’s memory and started it running. From there I had a u-boot environment with access to the flash commands and was able to restore the original Netgear image (and once I was sure that was working ok I subsequently upgraded to the Barrier Breaker OpenWRT image).
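
For anyone attempting the same rescue, the reflash from the RAM-booted u-boot goes roughly as below. This is a sketch only: it assumes the RAM build has the sf and tftpboot commands enabled, and the load address, flash offset and size are placeholders rather than the DGN3500’s real layout, so check your own device before erasing anything.

tftpboot 0x81000000 dgn3500-original.img    # placeholder address/filename: pull the image into RAM over the network
sf probe 0                                  # attach to the SPI flash
sf erase 0x0 0x400000                       # placeholder offset/length: erase the region being rewritten
sf write 0x81000000 0x0 0x400000            # copy the image from RAM into flash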

Categories: LUG Community Blogs

Chris Lamb: Where's the principled opposition to the "WhatsApp ban"?

Fri, 10/07/2015 - 19:23

The Independent reports that David Cameron wishes to ban the instant messaging application WhatsApp due to its use of end-to-end encryption.

That we might merely be pawns in manoeuvring for some future political compromise (or merely susceptible to cheap clickbait) should be cause for some concern, but what should worry us more is that if it takes scare stories about WhatsApp for our culture to awaken on the issues of privacy and civil liberties, then the central argument against surveillance was lost a long time ago.

However, the situation worsens once you analyse the disapproval in more detail. One is immediately struck by a predominant narrative of technical considerations; a ban would be "unworkable" or "impractical". A robust defence of personal liberty or a warning about the insidious nature of chilling effects? Perhaps a prescient John Locke quote to underscore the case? No. An encryption ban would "cause security problems."

The argument proceeds in a tediously predictable fashion: it was already difficult to keep track of whether one should ipso facto be in favour of measures that benefit the economy, but we are suddenly co-opted as technocrats to consider the "damage" it could do to the recovery or the impact on a now-victimised financial sector. The coup de grâce finally appeals to our already inflated self-regard and narcissism: someone could "steal your identity."

Perhaps even more disappointing is the reaction from more technically-minded circles who, frankly, should know better. Here, they give the outward impression of metaphorically stockpiling copies of the GnuPG source code in their bunkers, perhaps believing the shallow techno-utopianist worldview that all social and cultural problems can probably be solved with Twitter and a JavaScript interpreter.

The tragedy here is that I suspect that this isn't what the vast majority of people really believe. Given a hypothetical ban that could, somehow, bypass all of the stated concerns, I'm pretty upbeat and confident that most people would remain uncomfortable with it on some level.

So what, exactly, does it take for us to oppose this kind of intervention on enduring principled grounds instead of transient and circumventable practical ones? Is the problem just a lack of vocabulary to discuss these issues on a social scale? A lack of courage?

Whilst it's certainly easier to dissect illiberal measures on technical merit than to make an impassioned case for abstract freedoms, every time we gleefully cackle "it won't work" we are, in essence, conceding the central argument to the authoritarian and the censorious. If one is right but for the wrong reasons, were we even right to begin with?

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): Be careful what you ask for

Wed, 01/07/2015 - 14:28

Date: Wed, 01 Jul 2015 06:13:16 -0000
From: 123-reg <noreply@123-reg.co.uk>
To: dsilvers@digital-scurf.org
Subject: Tell us what you think for your chance to win
X-Mailer: MIME::Lite 3.027 (F2.74; T1.28; A2.04; B3.13; Q3.13)

Tell us what you think of 123-reg! <!-- .style1 {color: #1996d8} -->

Well 123-reg mostly I think you don't know how to do email.

Categories: LUG Community Blogs

Jonathan McDowell: What Jonathan Did Next

Mon, 29/06/2015 - 23:22

While I mentioned last September that I had failed to be selected for an H-1B and had been having discussions at DebConf about alternative employment, I never got around to elaborating on what I’d ended up doing.

Short answer: I ended up becoming a law student, studying for a Masters in Legal Science at Queen’s University Belfast. I’ve just completed my first year of the 2 year course and have managed to do well enough in the 6 modules so far to convince myself it wasn’t a crazy choice.

Longer answer: After Vello went under in June I decided to take a couple of months before fully investigating what to do next, largely because I figured I’d either find something that wanted me to start ASAP or fail to find anything and stress about it. During this period a friend happened to mention to me that the applications for the Queen’s law course were still open. He happened to know that it was something I’d considered before a few times. Various discussions (some of them over gin, I’ll admit) ensued and I eventually decided to submit an application. This was towards the end of August, and I figured I’d also talk to people at DebConf to see if there was anything out there tech-wise that I could get excited about.

It turned out that I was feeling a bit jaded about the whole tech scene. Another friend is of the strong opinion that you should take a break at least every 10 years. Heeding her advice I decided to go ahead with the law course. I haven’t regretted it at all. My initial interest was largely driven by a belief that there are too few people who understand both tech and law. I started with interests around intellectual property and contract law as well as issues that arise from trying to legislate for the global nature of most tech these days. However the course is a complete UK qualifying degree (I can go on to do the professional qualification in NI or England & Wales) and the first year has been about public law. Which has been much more interesting than I was expecting (even, would you believe it, EU law). Especially given the potential changing constitutional landscape of the UK after the recent general election, with regard to talk of repeal of the Human Rights Act and a referendum on exit from the EU.

Next year will concentrate more on private law, and I’m hoping to be able to tie that in better to what initially drove me to pursue this path. I’m still not exactly sure which direction I’ll go once I complete the course, but whatever happens I want to keep a linkage between my skill sets. That could be either leaning towards the legal side but with the appreciation of tech, returning to tech but with the appreciation of the legal side of things or perhaps specialising further down an academic path that links both. I guess I’ll see what the next year brings. :)

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Pretty please

Mon, 22/06/2015 - 15:06

I've been making a thing to solve some problems I always face while building web APIs. Curl is lovely but it's a bit too flexible.

Also, web services generally spit out one of a fairly common set of formats (json, xml, html) and I often just want to grab a value from the response and use it in a script - maybe to make the next call in a workflow.

So I made please which makes it super simple to do things like making a web request and grabbing a particular value from the response.

For example, here's how you'd get the page title from this site:

please get http://offend.me.uk/ | please parse html.head.title.#text

Or getting a value out of the json returned by jsontest.com's IP address API:

please get http://ip.jsontest.com/ | please parse ip

The parse part of please is the most fun; it can convert between a few different formats. Something I do quite often is grabbing a json response from an API and spitting it out as yaml so I can read it easily. For example:

please get http://date.jsontest.com/ | please parse -o yaml

(alright so that's a poor example but the difference is huge when it's a complicated bit of json)

Also handy for turning an unreadable mess of xml into yaml (I love yaml for its readability):

echo '<docroot type="messydoc"><a><b dir="up">A tree</b><b dir="down">The ground</b></a></docroot>' | please parse -o yaml

As an example of the kinds of things you can play with, I made this tool for generating graphs from json.

I'm still working on please; there will be bugs; let me know about them.

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): In defence of curl | sudo bash -

Thu, 11/06/2015 - 13:32

Long ago, in days of yore, we assumed that any software worth having would be packaged by the operating system we used. Debian with its enormous pile of software (over 20,000 sources last time I looked) looked to basically contain every piece of free software ever. However as more and more people have come to Linux-based and BSD-based systems, and the proliferation of *NIX-based systems has become even more diverse, it has become harder and harder to ensure that everyone has access to all of the software they might choose to use.

Couple that with the rapid development of new projects, who clearly want to get users involved well before the next release cycle of a Linux-based distribution such as Debian, and you end up with this recommendation to bypass the operating system's packaging system and simply curl | sudo bash -.

We, the OS-development literati, have come out in droves to say "eww, nasty, don't do that please" and yet we have brought this upon ourselves. Our tendency to invent, and reinvent, at the very basic levels of distributions has resulted in so many operating systems and so many ways to package software (if not in underlying package format then in policy and process) that third party application authors simply cannot keep up. Couple that with the desire of the consumers to not have their chosen platform discounted, and if you provide Debian packages, you end up needing to provide for Fedora, RHEL, SuSE, SLES, CentOS, Mint, Gentoo, Arch, etc.etc; let alone supporting all the various BSDs. This leads to the simple expedience of curl | sudo bash -.

Nobody, not even those who are most vehemently against this mechanism of installing software, can claim that it is not quick, simple for users, easy to copy/paste out of a web-page, and leaves all the icky complexity of sorting things out up to a script which the computer can run, rather than the nascent user of the software in question. As a result, many varieties of software have ended up using this as a simple installation mechanism, from games to orchestration frameworks - everyone can acknowledge how easy it is to use.

Now, some providers are wising up a little and ensuring that the url you are curling is at least an https:// one. Some even omit the sudo from the copy/paste space and have it in the script, allowing them to display some basic information and prompting the user that this will occur as root before going ahead and elevating. All of these myriad little tweaks to the fundamental idea improve matters but are ultimately just putting lipstick on a fairly sad looking pig.

So, what can be done? Well we (again the OS-development literati) got ourselves into this horrendous mess, so it's up to us to get ourselves back out. We're all too entrenched in our chosen packaging methodologies, processes, and policies, to back out of those; yet we're clearly not properly servicing a non-trivial segment of our userbase. We need to do better. Not everyone who currently honours a curl | sudo bash - is capable of understanding why it's such a bad idea to do so. Some education may reduce that number but it will never eliminate it.

For a long time I advocated a switch to a wget && review && sudo ./script approach instead, but the above comment, about people who don't understand why it might be a bad idea, shows how few of those users would even be capable of starting to review a script they downloaded, let alone usefully judge for themselves whether it is really safe to run. Instead we need something better, something collaborative, something capable of solving the accessibility issues which led to the curl | sudo bash - revolt in the first place.
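
Spelled out, that pattern is only three or four commands (the URL here is a placeholder); the trouble, as above, is that the middle step is precisely the one most users cannot meaningfully perform:

wget https://example.com/install.sh
less install.sh       # the review step: does the script only do what it claims to?
chmod +x install.sh
sudo ./install.sh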

I don't pretend to know what that solution might be, and I don't pretend to think I might be the one to come up with it, but I can highlight a few things I think we'll need to solve to get there:

  1. Any solution to this problem must be as easy as curl | sudo bash - or easier. This might mean a particular URI format which can have os-specific ways to handle standardised inputs, or it might mean a pervasive tool which does something like that.
  2. Any solution must do its best to securely acquire the content the user actually wanted. This means things like validating SSL certificates, presenting information to the user which a layman stands a chance of evaluating to decide if the content is likely to be what they wanted, and then acting smoothly and cleanly to get that content onto the user's system.
  3. Any solution should not introduce complex file formats or reliance on any particular implementation of a tool. Ideally it would be as easy to implement the solution on FreeBSD in shell, or on Ubuntu as whizzy 3D GUIs written in Haskell. (modulo the pain of working in shell of course)
  4. The solution must be arrived at in a multi-partisan way. For such a mechanism to be as usefully pervasive as curl | sudo bash - as many platforms as possible need to get involved. This means not only Debian, Ubuntu, Fedora and SuSE; but also Arch, FreeBSD, NetBSD, CentOS etc. Maybe even the OpenSolaris/Illumos people need to get involved.

Given the above, no solution can be "just get all the apps developers to learn how to package software for all the OS distributions they want their app to run on" since that way madness lies.

I'm sure there are other minor, and major, requirements on any useful solution but the simple fact of the matter is that until and unless we have something which at least meets the above, we will never be rid of curl | sudo bash -, just like we can never seem to be rid of that one odd person at the party: no one knows who invited them, and no one wants to tell them to leave because they do fill a needed role, but no one really seems to like them.

Until then, let's suck it up and while we might not like it, let's just let people keep on curl | sudo bash -ing until someone gets hurt.

P.S. I hate curl | sudo bash - for the record.

Categories: LUG Community Blogs

MJ Ray: Mick Morgan: here’s why pay twice?

Thu, 11/06/2015 - 04:49

http://baldric.net/2015/06/05/why-pay-twice/ asks why the government hires civilians to monitor social media instead of just giving GCHQ the keywords. Us cripples aren’t allowed to comment there (physical ability test), so I reply here:

It’s pretty obvious that they have probably done both, isn’t it?

This way, they’re verifying each other. Politicians probably trust neither civilians nor spies completely and that makes it worth paying twice for this.

Unlike lots of things that they seem to want not to pay for at all…

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): Sometimes recruiters really miss the point...

Tue, 09/06/2015 - 16:11

I get quite a bit of recruitment spam, especially via my LinkedIn profile, but today's Twitter-madness (recruiter scraped my twitter and then contacted me) really took the biscuit. I include my response (stripped of identifying marks) for your amusement:

On Tue, Jun 09, 2015 at 10:30:35 +0000, Silly Recruiter wrote:
> I have come across your profile on various social media platforms today and
> after looking through them I feel you are a good fit for a permanent Java
> Developer Role I have available.

Given that you followed me on Twitter I'm assuming you found a tweet or two
in which I mention how much I hate Java?

> I can see you are currently working at Codethink and was wondering if you
> were considering a change of role?

I am not.

> The role on offer is working as a Java Developer for a company based in
> Manchester. You will be maintaining and enhancing the company's core websites
> whilst using the technologies Java, JavaScript, JSP, Struts, Hibernate XML
> and Grails.

This sounds like one of my worst nightmares.

> Are you interested in hearing more about the role? Please feel free to call
> or email me to discuss it further.

Thanks, but no.

> If not, do you know someone that is interested? We offer a £500 referral fee
> for any candidate that is successful.

I wouldn't inflict that kind of Lovecraftian nightmare of a software stack on
anyone I cared about, sorry.

D.

I then decided to take a look back over my Twitter and see if I could find what might have tripped this. There's some discussion of Minecraft modding but nothing which would suggest JavaScript, JSP, Struts, Hibernate XML or Grails.

Indeed my most recent tweet regarding Java could hardly be construed as positive towards it.

Sigh.

Categories: LUG Community Blogs

Mick Morgan: why pay twice?

Fri, 05/06/2015 - 21:09

Yesterday’s Independent newspaper reports that HMG has let a contract with five companies to monitor social media such as twitter, facebook, and blogs for commentary on Government activity. The report says:

“Under the terms of the deal five companies have been approved to keep an eye on Facebook, Twitter and blogs and provide daily reports to Whitehall on what’s being said in “real time”.

Ministers, their advisers and officials will provide the firms with “keywords and topics” to monitor. They will also be able to opt in to an Orwellian-sounding Human-Driven Evaluation and Analysis system that will allow them to see “favourability of coverage” across old and new media.”

This seems to me to be a modern spin on the old press cuttings system which was in widespread use in HMG throughout my career. The article goes on to say:

“The Government has always paid for a clippings service which collated press coverage of departments and campaigns across the national, regional and specialist media. They have also monitored digital news on an ad hoc basis for several years. But this is believed to be the first time that the Government has signed up to a cross-Whitehall contract that includes “social” as a specific media for monitoring.”

Apart from the mainstream social media sites noted above, I’d be intrigued to know what criteria are to be applied for including blogs in the monitoring exercise. Some blogs (the “vox populi” types such as Guido Fawkes at order-order) will be obvious candidates. Others in the traditional media, such as journalistic or political blogs, will also be included, but I wonder who chooses others, and by what yardsticks. Would trivia be included? And should I care?

According to the Independent, the Cabinet Office, which negotiated the deal, claims that, even with the extended range of monitoring, by bringing individual departmental contracts together it will be able to save £2.4m over four years whilst “maximising the quality of innovative work offered by suppliers”.

Now since the Cabinet Office is reportedly itself facing a budget cut of £13 million in this FY alone, it strikes me that it would have been much more cost effective to simply use GCHQ’s pre-existing monitoring system rather than paying a separate bunch of relative amateurs to search the same sources.

Just give GCHQ the “keywords” or “topics of interest”. Go on Dave, you know it makes sense.

Categories: LUG Community Blogs

Mick Morgan: de-encrypting trivia

Tue, 02/06/2015 - 17:03

Well, that didn’t last long.

When I decided to force SSL as the default connection to trivia I had forgotten that it is syndicated via RSS on sites like planet alug. And of course as Brett Parker helpfully pointed out to me, self-signed certificates don’t always go down too well with RSS readers. He also pointed out that some spiders (notably google) would barf on my certificate and thus leave the site unindexed.

So I have taken off the forced redirect to port 443. Nevertheless, I would encourage readers to connect to https://baldric.net in order to protect their browsing of this horribly seditious site.

You never know who is watching……..

Categories: LUG Community Blogs

Mick Morgan: encrypting trivia

Mon, 01/06/2015 - 20:26

In my post of 8 May I said it was now time to encrypt much, much more of my everyday activity. One big, and obvious, hole in this policy decision was the fact that the public face of this blog itself has remained unencrypted since I first created it way back in 2006.

Back in September 2013 I mentioned that I had for some time protected all my own connections to trivia with an SSL connection. Given that my own access to trivia has always been encrypted, any of my readers could easily have used the same mechanism to connect (just by using the “https” prefix). However, my logs tell me that very, very few connections other than my own come in over SSL. There are a couple of probable reasons for this, not least the fact that an unencrypted plain http connection is the obvious (default) way to connect. But another reason may be the fact that I use a self signed (and self generated) X509 certificate. I do this because, like Michael Orlitzky, I see no reason why I should pay an extortionist organisation such as a CA good money to produce a certificate which says nothing about me or the trustworthiness of my blog when I can produce a perfectly good certificate of my own.
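
Producing such a certificate is a one-liner with openssl. The key size, lifetime and filenames below are illustrative rather than what trivia actually uses:

# key size, lifetime and filenames are illustrative, not necessarily what trivia uses
openssl req -x509 -newkey rsa:4096 -sha256 -nodes -days 365 \
    -keyout trivia.key -out trivia.crt -subj "/CN=baldric.net"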

I particularly like Orlitzky’s description of CAs as “terrorists”. He says:

I oppose CA-signed certificates because it’s bad policy, in the long run, to negotiate with terrorists. I use that word literally — the CAs and browser vendors use fear to achieve their goal: to get your money. The CAs collect a ransom every year to ”renew“ your certificate (i.e. to disarm the time bomb that they set the previous year) and if you don’t pay up, they’ll scare away your customers. ‘Be a shame if sometin’ like that wos to happens to yous…

Unfortunately, however, web browsers get really upset when they encounter self-signed certificates and throw up all sorts of ludicrously overblown warnings. Firefox, for example, gives the error below when first connecting to trivia over SSL.

Any naive reader encountering that sort of error message is likely to press the “get me out of here” button and then bang goes my readership. But that is just daft. If you are happy to connect to my blog in clear, why should you be afraid to connect to it over an encrypted channel just because the browser says it can’t verify my identity? If I wanted to attack you, the reader, then I could just as easily do so over a plain http connection as over SSL. And in any event, I did not create my self signed certificate to provide identity verification, I created it to provide an encrypted channel to the blog. That encryption works, and, I would argue, it is better than the encryption provided by many commercially produced certificates because I have specifically chosen to use only the stronger cyphers available to me.

Encrypting the connection to trivia feels to me like the right thing to do. I personally always feel better about a web connection that is encrypted. Indeed, I use the “https everywhere” plugin as a matter of course. Given that I already have an SSL connection available to offer on trivia, and that I believe that everyone has the right to browse the web free from intrusive gratuitous snooping I think it is now way past time that I provided that protection to my readers. So, as of yesterday I have shifted the whole of trivia to an encrypted channel by default. Any connection to port 80 is now automatically redirected to the SSL protected connection on port 443.
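
The redirect itself is only a few lines of web server configuration. Assuming an nginx front end (the post does not say what actually serves trivia), it would look something like this:

# assumption: an nginx front end; adapt for whatever actually serves the site
server {
    listen 80;
    server_name baldric.net;
    return 301 https://$host$request_uri;
}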

Let’s see what happens to my readership.

Categories: LUG Community Blogs

Jonathan McDowell: I should really learn systemd

Thu, 21/05/2015 - 18:20

As I slowly upgrade all my machines to Debian 8.0 (jessie) they’re all ending up with systemd. That’s fine; my laptop has been running it since it went into testing whenever it was. Mostly I haven’t had to care, but I’m dimly aware that it has a lot of bits I should learn about to make best use of it.

Today I discovered systemctl is-system-running. I’m not sure why I’d use it, but when I ran it, it responded with degraded. That’s not right, thought I. How do I figure out what’s wrong? systemctl --state=failed turned out to be the answer.

# systemctl --state=failed
  UNIT                          LOAD   ACTIVE SUB    DESCRIPTION
● systemd-modules-load.service  loaded failed failed Load Kernel Modules

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

Ok, so it’s failed to load some kernel modules. What’s it trying to load? systemctl status -l systemd-modules-load.service led me to /lib/systemd/systemd-modules-load which complained about various printer modules not being able to be loaded. Turned out this was because CUPS had dropped them into /etc/modules-load.d/cups-filters.conf on upgrade, and as I don’t have a parallel printer I hadn’t compiled up those modules. One of my other machines had also had an issue with starting up filesystem quotas (I think because there’d been some filesystems that hadn’t mounted properly on boot - my fault rather than systemd). Fixed that up and then systemctl is-system-running started returning a nice clean running.
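
Pulled together, the whole diagnostic trail is just a handful of commands (the cups-filters.conf path is the one from this particular machine):

systemctl is-system-running                        # reports "degraded" when any unit has failed
systemctl --state=failed                           # list the failed units
systemctl status -l systemd-modules-load.service   # full detail for the failing unit
cat /etc/modules-load.d/cups-filters.conf          # the file requesting the missing printer modules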

Now this is probably something that was silently failing back under sysvinit, but of course nothing was tracking that other than some output on boot up. So I feel that I’ve learnt something minor about systemd that actually helped me cleanup my system, and sets me in better stead for when something important fails.

Categories: LUG Community Blogs

Jonathan McDowell: Stepping down from SPI

Mon, 18/05/2015 - 23:38

I was first elected to the Software in the Public Interest board back in 2009. I was re-elected in 2012. This July I am up for re-election again. For a variety of reasons I’ve decided not to stand; mostly a combination of the fact that I think 2 terms (6 years) is enough in a single stretch and an inability to devote as much time to the organization as I’d like. I mentioned this at the May board meeting. I’m planning to stay involved where I can.

My main reason for posting this here is to cause people to think about whether they might want to stand for the board. Nominations open on July 1st and run until July 13th. The main thing you need to absolutely commit to is being able to attend the monthly board meeting, which is held on IRC at 20:30 UTC on the second Thursday of the month. They tend to last at most 30 minutes. Of course there’s a variety of tasks that happen in the background, such as answering queries from prospective associated projects or discussing ongoing matters on the membership or board lists depending on circumstances.

It’s my firm belief that SPI do some very important work for the Free software community. Few people realise the wide variety of associated projects. SPI offload the boring admin bits around accepting donations and managing project assets (be those machines, domains, trademarks or whatever), leaving those projects able to concentrate on the actual technical side of things. Most project members don’t realise the involvement of SPI, and that’s largely a good thing as it indicates the system is working. However it also means that there can sometimes be a lack of people wanting to stand at election time, and an absence of diversity amongst the candidates.

I’m happy to answer questions of anyone who might consider standing for the board; #spi on irc.oftc.net is a good place to ask them - I am there as Noodles.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Andy and Teddy are waving goodbye

Fri, 15/05/2015 - 00:28

Most of the time, when I've got some software I want to write, I do it in python or sometimes bash. Occasionally though, I like to slip into something with a few more brackets. I've written a bit of C in the past and love it but recently I've been learning Go and what's really struck me is how clever it is. I'm not just talking about the technical merits of the language itself; it's clever in several areas:

  • You don't need to install anything to run Go binaries.

    At first - I'm sure like many others - I felt a little revulsion when I heard that Go compiles to statically-linked binaries but after having used and played with Go a bit over the past few weeks, I think it's rather clever and was somewhat ahead of the game. In the current climate where DevOps folks (and developers) are getting excited about containers and componentised services, being able to simply curl a binary and have it usable in your container without needing to install a stack of dependencies is actually pretty powerful (there's a short sketch of this just after the list). It seems there's a general trend towards preferring readiness of use over efficiency of space used both in RAM and disk space. And it makes sense; storage is cheap these days. A 10MiB binary is no concern - even if you need several of them - when you have a 1TiB drive. The extravagance of large binaries is no longer so relevant when you're comparing it with your collection of 2GiB bluray rips. The days of needing to count the bytes are gone.

  • Go has the feeling of C but without all that tedious mucking about in hyperspace memory

    Sometimes you just feel you need to write something fairly low level and you want more direct control than you have whilst you're working from the comfort blanket of python or ruby. Go gives you the ability to have well-defined data structures and to care about how much memory you're eating when you know your application needs to process tebibytes of data. What Go doesn't give you is the freedom to muck about in memory, fall off the end of arrays, leave pointers dangling around all over the place, and generally make tiny, tiny mistakes that take years for anyone to discover.

  • The build system is designed around how we (as developers) use code hosting facilities

    Go has a fairly impressive set of features built in but if you need something that's not already included, there's a good chance that someone out there has written what you need. Go provides a package search tool that makes it very easy to find what you're looking for. And when you've found it, using it is stupidly simple. You add an import declaration in your code:

    import "github.com/codegangsta/cli"

    which makes it very clear where the code has come from and where you'd need to go to check the source code and/or documentation. Next, pulling the code down and compiling it ready for linking into your own binary takes a simple:

    go get github.com/codegangsta/cli

    Go implicitly understands git and the various methods of retrieving code so you just need to tell it where to look and it'll figure the rest out.
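
To make the first point above concrete, here is the sort of thing it enables; the tool name, package path and download URL are made up for illustration:

go build -o mytool ./cmd/mytool     # made-up package path; yields a single statically-linked binary (for pure-Go code)
file mytool                         # should report "statically linked" for a pure-Go build
# and on a fresh container or server there is nothing to install first:
curl -Lo /usr/local/bin/mytool https://example.com/mytool-linux-amd64
chmod +x /usr/local/bin/mytool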

In summary, I'm starting to wonder if Google have a time machine. Go seems to have nicely predicted several worries and trends since its announcement: Docker, Heartbleed, and social coding.

Categories: LUG Community Blogs

Mick Morgan: what is wrong with this sentence?

Thu, 14/05/2015 - 18:41

Yesterday the new Government published a press release about the forthcoming first meeting of the new National Security Council (NSC). That meeting was due to discuss the Tory administration’s plans for a new Counter-Extremism Bill. The press release includes the following extraordinary statement which is attributed to the Prime Minister:

“For too long, we have been a passively tolerant society, saying to our citizens: as long as you obey the law, we will leave you alone. “

Forgive me, but what exactly is wrong with that view? Personally I think it admirable that we live in a tolerant society (“passive” or not). Certainly I believe that tolerance of difference, tolerance of free speech, tolerance of the right to hold divergent opinion, and to voice that opinion is to be cherished and lauded. And is it not right and proper that a Government should indeed “leave alone” any and all of its citizens who are obeying the law?

Clearly, however, our Prime Minister disagrees with me and believes that a tolerant society is not what we really need in the UK, because the press release continues:

“This government will conclusively turn the page on this failed approach. “

If tolerance is a “failed approach”, what are we likely to see in its place?

Categories: LUG Community Blogs

MJ Ray: Recorrecting Past Mistakes: Window Borders and Edges

Thu, 14/05/2015 - 05:58

A while ago, I switched from tritium to herbstluftwm. In general, it’s been a good move, benefitting from active development and greater stability, even if I do slightly mourn the move from python scripting to a shell client.

One thing that was annoying me was that throwing the pointer into an edge didn’t find anything clickable. Window borders may be pretty, but they’re a pretty poor choice for the thing you can locate most easily: the thing that sits on the screen edge.

It finally annoyed me enough to find the culprit. The .config/herbstluftwm/autostart file said “hc pad 0 26” (to keep enough space for the panel at the top edge). Changing that to “hc pad 0 -8 -7 26 -7” and reconfiguring the panel to be on the bottom (where fewer windows have useful controls) means that throwing the pointer at the top or the sides now usually finds something useful like a scrollbar or a menu.
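
For reference, herbstclient’s pad command takes the monitor index followed by the up, right, down and left pads, so the relevant autostart lines look something like this (hc being the herbstclient wrapper the default autostart defines):

# in ~/.config/herbstluftwm/autostart; arguments are monitor, pad-up, pad-right, pad-down, pad-left
# before: reserve 26px at the top for the panel
# hc pad 0 26
# after: panel moved to the bottom, slightly negative pads elsewhere so windows reach the edges
hc pad 0 -8 -7 26 -7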

I wonder if this is a useful enough improvement that I should report it as an enhancement bug.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Building a componentised application

Thu, 14/05/2015 - 00:14

Without going into any of the details, it's a web application with a front end written using Ember and various services that it calls out to, written using whatever seems appropriate per service.

At the outset of the project, we decided we would bite the bullet and build for Docker from the outset. This meant we would get to avoid the usual dependency and developer environment setup nightmares.

The problem

What we quickly realised as we started to put the bare bones of a few of the services in place was that we had three seemingly conflicting goals for each component and for the application as a whole.

  1. Build images that can be deployed in production.

  2. Allow developers to run services locally.

  3. Provide a means for running unit tests (both by developers and our CI server).

So here's what we've ended up with:

The solution

Or: docker-compose to the rescue

Folder structure

Here's what the project layout looks like:

Project
|
+-docker-compose.yml
|
+-Service 1
| |
| +-Dockerfile
| |
| +-docker-compose.yml
| |
| +-<other files>
|
+-Service 2
  |
  +-Dockerfile
  |
  +-docker-compose.yml
  |
  +-<other files>

Building for production

This is the easy bit and is where we started first. The Dockerfile for each service was designed to run everything with the defaults. Usually, this is something simple like:

FROM python:3-onbuild
CMD ["python", "main.py"]

Our CI server can easily take these, produce images, and push them to the registry.
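
That step amounts to a build and a push per service; the registry host and tag below are placeholders rather than our real ones:

# registry hostname and tag are placeholders
docker build -t registry.example.com/project/service1:latest "Service 1"
docker push registry.example.com/project/service1:latest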

Allowing developers to run services locally

This is slightly harder. In general, each service wants to do something slightly different when being run for development; e.g. automatically restarting when code changes. Additionally, we don't want to have to rebuild an image every time we make a code change. This is where docker-compose comes in handy.

The docker-compose.yml at the root of the project folder looks like this:

service1:
  build: Service 1
  environment:
    ENV: dev
  volumes:
    - Service 1:/usr/src/app
  links:
    - service2
    - db
  ports:
    - 8001:8000

service2:
  build: Service 2
  environment:
    ENV: dev
  volumes:
    - Service 2:/usr/src/app
  links:
    - service1
    - db
  ports:
    - 8002:8000

db:
  image: mongo

This gives us several features right away:

  • We can locally run all of the services together with docker-compose up

  • The ENV environment variable is set to dev in each service so that the service can configure itself when it starts to run things in "dev" mode where needed.

  • The source folder for each service is mounted inside the container. This means you don't need to rebuild the image to try out new code.

  • Each service is bound to a different port so you can connect to each part directly where needed.

  • Each service defines links to the other services it needs.

Running the tests

This was the trickiest part to get right. Some services have dependencies on other things even just to get unit tests running. For example, Eve is a huge pain to get running with a fake database so it's much easier to just link it to a temporary "real" database.

Additionally, we didn't want to mess with the idea that the images should run production services by default but also didn't want to require folks to need to churn out complicated docker invocations like docker run --rm -v $(pwd):/usr/src/app --link db:db service1 python -m unittest just to run the test suite after coding up some new features.

So, it was docker-compose to the rescue again :)

Each service has a docker-compose.yml that looks something like:

tests:
  build: .
  command: python -m unittest
  volumes:
    - .:/usr/src/app
  links:
    - db

db:
  image: mongo

Which sets up any dependencies needed just for the tests, mounts the local source in the container, and runs the desired command for running the tests.

So, a developer (or the CI box) can run the unit tests with:

docker-compose run tests

Summary
  • Each Dockerfile builds an image that can go straight into production without further configuration required.

  • Each image runs in "developer mode" if the ENV environment variable is set.

  • Running docker-compose up from the root of the project gets you a full stack running locally in developer mode.

  • Running docker-compose run tests in each service's own folder will run the unit tests for that service - starting any dependencies as needed.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Podgot

Tue, 12/05/2015 - 23:37

I've been meaning to blog about the podcasts I listen to and the setup I use for consuming them as both have evolved a little over the past few months.

The podcasts

The setup

First, I use podget running in a cron job to pull the podcasts down to my VPS.
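
As a sketch, the cron side is a single line; the schedule shown and the assumption that podget’s configuration (normally under ~/.podget) is already set up are mine, not from the post:

# hypothetical crontab entry on the VPS; assumes podget has already been configured
0 6 * * * /usr/bin/podget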

I use syncthing to have those replicated to my laptop and home media server.

From my laptop, I move files that I'm going to listen to on to my mp3 player (see http://offend.me.uk/blog/59/).

When I'm cycling to work or in the car, I use the mp3 player to listen to them. (No, when I'm in the car, I plug it in to the stereo, I don't drive with headphones on :P)

When I'm sitting at a computer or at home, I use Plex to serve up podcasts from my home media box.

I keep on top of everything by making sure that I move (rather than copy) when putting things on the mp3 player and relying on Syncthing to remove listened-to podcasts from everywhere else.

It's not the most elegant setup I've heard of but it's simple and works for me :)

What next?

I find I have a lot of things I want to listen to and not really enough time to listen to them in. I've heard that some people speed podcasts up (I've heard as much as 50%). Does anyone do this? Does it make things any less enjoyable to listen to? I really enjoy the quality of what I listen to; I don't want to feel like I'm just consuming information for the sake of it.

Categories: LUG Community Blogs