Planet ALUG


Mick Morgan: christmas present

Mon, 23/11/2015 - 19:12

Like most people in the UK at this time of the year I’ve been doing some on-line shopping lately. Consequently I’m waiting for several deliveries. Some delivery companies (DHL are a good example) actually allow you to track your parcels on-line. In order to do this they usually send out text or email messages giving the tracking ID. Today I received an email purporting to come from UKMail. That email message said:

UKMail Info!
Your parcel has not been delivered to your address November 23, 2015, because nobody was at home.
Please view the information about your parcel, print it and go to the post office to receive your package.

UKMail expressly disclaims all conditions, guarantees and warranties, express or implied, in respect of the Service. Where the law prevents such exclusion and implies conditions and warranties into this contract, where legally permissible the liability of UKMail for breach of such condition,
guarantee or warranty is limited at the option of UKMail to either supplying the Service again or paying the cost of having the service supplied again. If you don’t receive a package within 30 working days UKMail will charge you for it’s keeping. You can find any information about the procedure and conditions of parcel keeping in the nearest post office.

Best regards,

I /very/ nearly opened the attached file. That is probably the closest I have come to reacting incorrectly to a phishing attack. Nice try guys. And a very good piece of social engineering given the time of year.

Virustotal suggests that the attached file is a malicious word macro container. Interestingly though, only 7 of the 55 AV products that Virustotal uses identified the attachment as malicious. And even they couldn’t agree on the identity of the malware. I suspect that it may be a relatively new piece of code.

Categories: LUG Community Blogs

Jonathan McDowell: Updating a Brother HL-3040CN firmware from Linux

Sat, 21/11/2015 - 13:27

I have a Brother HL-3040CN networked colour laser printer. I bought it 5 years ago and I kinda wish I hadn’t. I’d done the appropriate research to confirm it worked with Linux, but I didn’t realise it only worked via a 32-bit binary driver. It’s the only reason I have 32-bit enabled on my house server and I really wish I’d either bought a GDI printer that had an open driver (Samsung were great for this in the past) or something that did PCL or Postscript (my parents have a Xerox Phaser that Just Works). However I don’t print much (still just on my first set of toner) and once set up the driver hasn’t needed much kicking.

A more major problem comes with firmware updates. Brother only ship update software for Windows and OS X. I have a Windows VM but the updater wants the full printer driver setup installed and that seems like overkill. I did a bit of poking around and found reference in the service manual to the ability to do an update via USB and a firmware file. Further digging led me to a page on resurrecting a Brother HL-2250DN, which discusses recovering from a failed firmware flash. It provided a way of asking the Brother site for the firmware information.

First I queried my printer details:

$ snmpwalk -v 2c -c public hl3040cn.local iso.
iso. = STRING: "MODEL=\"HL-3040CN series\""
iso. = STRING: "SERIAL=\"G0JXXXXXX\""
iso. = STRING: "SPEC=\"0001\""
iso. = STRING: "FIRMID=\"MAIN\""
iso. = STRING: "FIRMVER=\"1.11\""
iso. = STRING: "FIRMID=\"PCLPS\""
iso. = STRING: "FIRMVER=\"1.02\""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""

I used that to craft an update file which I sent to Brother via curl:

curl -X POST -d @hl3040cn-update.xml -H "Content-Type:text/xml" --sslv3
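The update request file itself is a small XML document built from the FIRMID/FIRMVER values returned by the SNMP walk. Treat the following as a rough sketch only, since I can’t vouch for the exact schema: the element names below (including the odd SELIALNO spelling) are those used by other people’s Brother update scripts, with the serial and version filled in from the query above:

cat > hl3040cn-update.xml <<'EOF'
<REQUESTINFO>
  <FIRMUPDATETOOLINFO>
    <FIRMCATEGORY>MAIN</FIRMCATEGORY>
    <OS>LINUX</OS>
    <INSPECTMODE>1</INSPECTMODE>
  </FIRMUPDATETOOLINFO>
  <FIRMUPDATEINFO>
    <MODELINFO>
      <SELIALNO>G0JXXXXXX</SELIALNO>
      <NAME>HL-3040CN series</NAME>
      <SPEC>0001</SPEC>
      <DRIVER></DRIVER>
      <FIRMINFO>
        <FIRM>
          <ID>MAIN</ID>
          <VERSION>1.11</VERSION>
        </FIRM>
      </FIRMINFO>
    </MODELINFO>
  </FIRMUPDATEINFO>
</REQUESTINFO>
EOF

POSTing that file is what the curl command above does.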

This gave me back some XML with a URL for the latest main firmware, version 1.19, filename LZ2599_N.djf. I downloaded that and took a look at it, discovering it looked like a PJL file. I figured I’d see what happened if I sent it to the printer:

cat LZ2599_N.djf | nc hl3040cn.local 9100

The LCD on the front of the printer proceeded to display something like “Updating Program” and eventually the printer re-DHCPed and indicated the main firmware had gone from 1.11 to 1.19. Great! However the PCLPS firmware was still at 1.02 and I’d got the impression that 1.04 was out. I didn’t manage to figure out how to get the Brother update website to give me the 1.04 firmware, but I did manage to find a copy of LZ2600_D.djf which I was then able to send to the printer in the same way. This led to:

$ snmpwalk -v 2c -c public hl3040cn.local iso.
iso. = STRING: "MODEL=\"HL-3040CN series\""
iso. = STRING: "SERIAL=\"G0JXXXXXX\""
iso. = STRING: "SPEC=\"0001\""
iso. = STRING: "FIRMID=\"MAIN\""
iso. = STRING: "FIRMVER=\"1.19\""
iso. = STRING: "FIRMID=\"PCLPS\""
iso. = STRING: "FIRMVER=\"1.04\""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""
iso. = STRING: ""

Cool, eh?

[Disclaimer: This worked for me. I’ve no idea if it’ll work for anyone else. Don’t come running to me if you brick your printer.]

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Newstalgia

Tue, 17/11/2015 - 23:57

Well, well, after talking about my time at university only yesterday, tonight I saw Christian Death who were more or less the soundtrack to my degree with their compilation album The Bible.

I'm pleased to say they seemed like thoroughly nice folks and played a good gig. Proving my lack of musical snobbery (which, to be honest, generally goes with the goth scene), I only knew one song but enjoyed everything they played, from new stuff to old.

The support band was a local act called Painted Heathers who (for me, at least) set the scene nicely and represented an innovative, modern take on the musical theme behind the act they were supporting. Essentially an indie band with goth leanings (I wonder if they agree), they suit my current musical bent (I play keyboards in an indie band) and the mood I was in. They're very new and young, and I shall spread their word for them if I can :)

Categories: LUG Community Blogs

Steve Engledow (stilvoid): More ale

Tue, 17/11/2015 - 00:35

There are several reasons I took the degree course I did - joint honours in Philosophy and Linguistics - but the prime one is that I really felt I wanted to study something I would enjoy rather than something that would guarantee me a better career. To be honest, my degree subject versus the line of work I'm in - software development - is usually a good talking point in interviews and has probably landed me more jobs than if I'd had a degree in Computer Science.

The recent events in Paris, leaving politics aside, were an undeniably bad thing and the recent news coverage and various (sometimes depressingly moronic) Facebook posts on the subject got me thinking about moral philosophy again. Specifically, given we see conflicts between groups of people ostensibly because they live according to different moral codes (let's ignore the fact that their motivations are clearly not based on this at all) and those codes are complex and ambiguous (some might say intentionally so), can there be a moral code that's simple, unambiguous, and agreeable?

My 6th form philosophy teacher, Dr. John Beresford-Fry (Dr. Fry - I can't find him online; if anyone knows how to contact him, I'd love to speak to him again) believed he had a simple code that worked:

Do nothing gratuitous.

To my mind, that doesn't quite cut it; I don't think it's actually possible to do anything completely gratuitously; there's always some reason or reasoning behind an action. Maybe he meant something more subtle and it's been lost on me.

Some years ago, I thought I had a nice simple formulation:

Act as though everyone has the right to do whatever they wish.

Or, to put it another way:

You may do whatever you want so long as it doesn't restrict anybody's right to do the same.

Today though, I was going round in very big circles trying to think that one through. It works ok for simple, extreme cases (murder, rape, theft) and even plays nicely (I think) in some grey areas (streaming movies illegally) but I really couldn't figure out how to apply it to anyone in a position of power. How could an MP apply that rule when voting on bringing in a law to raise the minimum wage?

Come to think of it, how could an MP apply any rule when voting on any law?

Then I remembered the conclusion I came to when I was nearing the end of my philosophy course: the sentimentalists or the nihilists probably have it right.

Oh well, it kept me busy for a bit, eh ;)

Note to self: I had an idea for a game around moral philosophy, don't forget it!

Categories: LUG Community Blogs

Mick Morgan: torflow

Tue, 10/11/2015 - 12:19

Yesterday, Kenneth Freeman posted a note to the tor-relays list drawing attention to a new resource called TorFlow. TorFlow is a beautiful visualisation of Tor network traffic around the world. It enables you to see where traffic is concentrated (Europe) and where there is almost none (Australasia). Having the data overlaid on a world map gives a startling picture of the unfortunate concentration of Tor nodes in particular locations.

I recently moved my own relay from Amsterdam (190 relays) to London (133) but the network needs much more geo-diversity. Unfortunately, international bandwidth costs are lowest in the areas where relays are currently located. Given that the relays are all (well, nearly all…) run by volunteers like me and funded out of their own pockets it is perhaps not surprising that this concentration should occur. But it is not healthy for the network.

There appears to be a particularly intriguing concentration of 16 relays on a tiny island in the Gulf of Guinea. Apparently this is an artifact, though: those relays are all at (0, 0), which I am told GeoIP uses as a placeholder for “unknown” (in fact, GeoIP location is a somewhat imprecise art, so there may be other anomalies in the data).

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Dear diary...

Tue, 10/11/2015 - 01:17

It's been quite some time since I last got round to writing anything here; almost two months. Life has been fairly eventful in that short time. At least, work has.

During every performance review I've had since I joined Proxama, there's one goal I've consistently brought up: that I wanted to have more of an influence over the way we write and deliver software and the tools we use. That's the sort of thing I'm really interested in.

Having made it to head of the server delivery team, I had a good taste of the sort of oversight that I was looking for, but a few weeks ago I got the opportunity to take on a role that encompasses both server and mobile, development and QA, so of course I jumped at the chance... and got it!

Naïvely, when I took on the role, I thought I'd be doing more of the same as I was before (a bit of line management, code reviews, shaping upcoming work, architecture, occasionally writing code), just with a larger team. This is turning out not to be the case but in quite a positive way - so far, at least. I feel as though I now have the opportunity to sit a little further back, get some thinking time, and right a few wrongs that have built up over the years. Whether I'm achieving that remains to be seen ;)

Another thought that occurred to me the other day is that way back when I was at school, I never really imagined I'd end up in a technical role. I always imagined I'd either be a maths teacher or that I'd be a writer or editor for a newspaper or magazine. I'm finding out that my new job at Proxama requires me to write quite a lot of papers on various technical subjects. Double win.

In short, I'm enjoying some of my days more, trying very hard (and sometimes failing) not to worry about the details, to focus on the bigger picture, and to trust that the other things will fall into place (and to sort them out where they don't). Is this what it's like going "post technical"? I'm slightly worried I'll forget how to code if I don't do a bit more of it.

Today, I spent a very, very long time fighting Jira. That wasn't fun.

Note to self: book some time in to write some code.

Categories: LUG Community Blogs

Jonathan McDowell: The Joy of Recruiters

Mon, 09/11/2015 - 17:45

Last week Simon retweeted a link to Don’t Feed the Beast – the Great Tech Recruiter Infestation. Which reminded me I’d been meaning to comment on my own experiences from earlier in the year.

I don’t entertain the same level of bile as displayed in the post, but I do have a significant level of disappointment in the recruitment industry. I had conversations with 3 different agencies, all of whom were geographically relevant. One contacted me, the other 2 (one I’d dealt with before, one that was recommended to me) I contacted myself. All managed to fail to communicate with any level of acceptability.

The agency that contacted me eventually went quiet, after having asked if they could put my CV forward for a role and pushing very hard about when I could interview. The contact in the agency I’d dealt with before replied to say I was being passed to someone else who would get in contact. Who of course didn’t. And the final agency, who had been recommended, passed me between 3 different people, said they were confident they could find me something, and then went dark except for signing me up to their generic jobs list, which failed to have anything of relevance on it.

As it happens my availability and skill set were not conducive to results at that point in time, so my beef isn’t with the inability to find a role. Instead it’s with the poor levels of communication presented by an industry which seems, to me, to have communication as part of the core value it should be offering. If anyone had said at the start “Look, it’s going to be tricky, we’ll see what we can do” or “Look, that’s not what we really deal in, we can’t help”, that would have been fine. I’m fine with explanations. I get really miffed when I’m just left hanging.

I’d love to be able to say I’ll never deal with a recruiter again, but the fact of the matter is they do serve a purpose. There’s only so far a company can get with word of mouth recruitment; eventually that network of personal connections from existing employees who are considering moving dries up. Advertising might get you some more people, but it can also result in people who are hugely inappropriate for the role. From the company point of view recruiters nominally fulfil 2 roles. Firstly they connect the prospective employer with a potentially wider base of candidates. Secondly they should be able to do some sort of, at least basic, filtering of whether a candidate is appropriate for a role. From the candidate point of view the recruiter hopefully has a better knowledge of what roles are out there.

However the incentives to please each side are hugely unbalanced. The candidate isn’t paying the recruiter. “If you’re not paying for it, you’re the product” may be bandied around too often, but I believe this is one of the instances where it’s very applicable. A recruiter is paid by their ability to deliver viable candidates to prospective employers. The delivery of these candidates is the service. Whether or not the candidate is happy with the job is irrelevant beyond them staying long enough that the placement fee can be claimed. The lengthy commercial relationship is ideally between the company and the recruitment agency, not the candidate and the agency. A recruiter wants to be able to say “Look at the fine candidate I provided last time, you should always come to me first in future”. There’s a certain element of wanting the candidate to come back if/when they are looking for a new role, but it’s not a primary concern.

It is notable that the recommendations I’d received were from people who had been on the hiring side of things. The recruiter has a vested interest in keeping the employer happy, in the hope of a sustained relationship. There is little motivation for keeping the candidate happy, as long as you don’t manage to scare them off. And, in fact, if you scare some off, who cares? A recruiter doesn’t get paid for providing the best possible candidate. Or indeed a candidate who will fully engage with the role. All they’re required to provide is a hire-able candidate who takes the role.

I’m not sure what the resolution is to this. Word of mouth only scales so far for both employer and candidate. Many of the big job websites seem to be full of recruiters rather than real employers. And I’m sure there are some decent recruiters out there doing a good job, keeping both sides happy and earning their significant cut. I’m sad to say I can’t foresee any big change any time soon.

[Note I’m not currently looking for employment.]

[No recruitment agencies were harmed in the writing of this post. I have deliberately tried to avoid outing anyone in particular.]

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): A haiku about Haiku

Sun, 01/11/2015 - 14:59

I know I don't mention a season, and I'm a few hours late for hallowe'en, but here's a haiku about Haiku:

A death, once again,
The master sighs, and fixes,
It rises up, undead.

Categories: LUG Community Blogs

Chris Lamb: Free software activities in October 2015

Sat, 31/10/2015 - 21:32

Here is my monthly update covering a large part of what I have been doing in the free software world (previously):


My work in the Reproducible Builds project was also covered in more depth in Lunar's weekly reports (#23, #24, #25, #26).


This month I have been paid to work 11 hours on Debian Long Term Support (LTS). In that time I did the following:

  • DLA 326-1 for zendframework fixing an SQL injection vulnerability.
  • DLA 332-1 for optipng correcting a use-after-free issue.
  • DLA 333-1 for cakephp preventing a remote Denial of Service attack.
  • DLA 337-1 for busybox fixing a vulnerability when unzipping a specially crafted zip file.
  • DLA 338-1 for xscreensaver preventing a crash when hot-swapping monitors.
  • redis — New upstream release, as well as changing the default UNIX socket location, correctly supporting "cluster" mode, config file hardening, and redis-sentinel's runtime directory handling under systemd. An update for jessie was also uploaded.
  • python-redis — Attempting to get the autopkgtest tests to finally pass.
  • debian-timeline — Making the build reproducible.
  • gunicorn — New upstream release.

I also filed FTBFS bugs against arora, barry, django-ajax-selects, django-polymorphic, django-sitetree, flask-autoindex, flask-babel, genparse, golang-github-jacobsa-ogletest, healpy, jarisplayer, jsurf-alggeo, kmidimon, libmapper, libpreludedb, mathgl, metview, miaviewit, moksha.common, monster-masher, node-connect, node-postgres, opensurgsim, php-xml-rss, pokerth, pylint-django, python-django-contact-form, python-pyqtgraph, python-pyramid, qlipper, r-bioc-cummerbund, r-bioc-genomicalignments, rawdns, ruby-haml-rails, ruby-omniauth-ldap, scute, stellarium, step, synfigstudio, tulip, xdot, & yelp.

Categories: LUG Community Blogs

Jonathan McDowell: Thoughts on the LG G Watch R Android smartwatch

Sat, 31/10/2015 - 15:06

Back in March I was given an LG G Watch R, the first Android Wear smartwatch to have a full round display (the Moto 360 was earlier, but has a bit cut off the bottom of the actual display). I’d promised I’d get round to making some comments about it once I’d had it for a while and have failed to do so until now. Note that this is very much comments on the watch from a user point of view; I haven’t got to the point of trying to do any development or other hacking of it.

Firstly, it’s important to note I was already wearing a watch and have been doing so for all of my adult life. Just a basic, unobtrusive analogue watch (I’ve had a couple since I was 18; before that it was pretty much every type of calculator watch available at some point), but I can’t remember a period where I didn’t. The G Watch R is bulkier than what I was previously wearing, but I haven’t found it obtrusive. And I love the way it looks; if you don’t look closely it doesn’t look like a smart watch (and really it’s only the screen that gives it away).

Secondly, I would already have taken my watch off at night and when I was showering. So while the fact that the battery on the G Watch R will really only last a day and a half is by far and away its most annoying problem, it’s not as bad as it could be for me. The supplied charging dock is magnetic, so it lives on my bedside table and I just drop the watch in it when I go to bed.

With those details out of the way, what have I thought of it? It’s certainly a neat gadget. Being able to see my notifications without having to take my phone out of my pocket is more convenient than I expected - especially when it’s something like an unimportant email that I can then easily dismiss by swiping the watch face. My agenda being just a flick away is very convenient, particularly when I’m still at the stage of trying to remember where my next lecture is. Having walking directions from Google Maps show up on the watch (and be accompanied by a gentle vibration when I need to change direction) is pretty handy too. The ability to take pictures via the phone camera, not so much. Perhaps if it showed me roughly what I was about to photograph, but without that it’s no easier than using the phone interface. It’s mostly an interface for consuming information - I’ve tried the text to SMS interface a few times, but it’s just not reliable enough that I’d choose to use it.

I’ve also been pleased to see it get several updates from LG in the time I’ve had it. First the upgrade from Wear 4.4 to Wear 5.1 (probably via 5.0 but I forget), but also the enabling of wifi support. The hardware could always support this, but initially Android Wear didn’t and then there was some uncertainty about the FCC certification for the G Watch R. I can’t say I use it much (mostly the phone is within range) but it’s nice to see the improvements in support when they’re available.

What about the downsides? Battery life, as mentioned above, is definitely the major one. Mostly a day is fine, but the problem comes if I’m ever away. There’s no way to charge without the charging dock, so that becomes another thing I have to pack. And it’s really annoying to have your watch go dead on you midday when you forget to do so. I also had a period where I’d frequently (at least once a week) find an “Android Wear isn’t responding. Wait/Restart?” error on the watch screen. Not cool, but thankfully that seems to have stopped happening. Finally there are the additional resource requirements it puts on the phone. I have a fairly basic Moto G 4G that already struggles with Ingress and Chrome at the same time, so adding yet another thing running all the time doesn’t help. I’m sure I could make use of a few more apps if I was more comfortable with loading the phone.

The notable point for me with the watch was DebConf. I’d decided not to bring it, not wanting the hassle of dealing with the daily charging. I switched back to my old analogue watch (a Timex, if you care). And proceeded to spend 3 days looking at it every time my phone vibrated before realising that I couldn’t do that. That marked the point where I accepted that I was definitely seeing benefits from having a smart watch. So when I was away for a week at the start of September, I brought the charger with me (at some point I’ll get round to a second base for travel). I was glad I’d done so. I’m not sure I’m yet at the point I’d replace it in the event it died, but on the whole I’m more of a convert than I was expecting to be.

Categories: LUG Community Blogs

Mick Morgan: lancashire police fail

Thu, 29/10/2015 - 14:18

This is simply depressing. Today I received a classic phishing attack email – the sort I normally bin without thought. According to virustotal, the attachment, which purported to be an MS Word document called “Invoice 7500005791.doc”, was a copy of W97M/Downloader, a word macro trojan which Symantec says is a downloader for additional malware. So far so annoying, but not unusual.

However, the email came from an address given as “” (so it looked as if it came from a Police National Network address allocated to Lancashire Police). Intriguingly, the “From:”, “Return-Path:” and “Return-Receipt-To:” headers all contained the same (legitimate looking) address at that domain. Only one header, “Disposition-Notification-To:”, was slightly different. It gave the email address as “”. Now that header is used to request a “Read Receipt” and most email clients will obey it and display a message of the form “This message asks for a return receipt” along with a “send” button. Had I pressed that button, a message /might/ have gone to the “” domain address. I say “might” because there is no such domain, so this could simply be a mistake on the part of the attacker. All the “Received:” headers (i.e. the addresses of mail servers the message went through en route to me) were shown as network 77.75.88.xx – whois records this as belonging to an entity called “Farahnet” registered in Beirut. Unfortunately the whois record does not give an abuse or admin contact email address.

Most phishing emails simply have a forged “From:” address and all other headers are obviously wrong. This one looked distinctly odd and a little more professional than most. I therefore decided it might be a good idea to tip off the Lancashire Police to the misuse and misrepresentation of their domain name. This is where it got depressing.

Nowhere could I find a simple email address or other electronic contact mechanism to enable me to say to Lancashire Police “Hi guys, see attached, you may have a problem”. The Lancs Police website has a “Contact Us” page giving pointers to various means of providing feedback – but no immediately obvious one for reporting email attacks. Here the banks are way ahead of the Police. All banks I have ever dealt with have an email address (usually of the form “”) to which you can send details of the latest scam. However, the bottom of the contact page on the Lancs Police site shows a link to “online fraud” under the heading “popular pages”. This link takes you to their on-line safety advice page, which then has a further link to “Action Fraud”, the National Fraud & Cyber Crime Reporting Centre; that site in turn does actually give you a means of reporting phishing attacks. But it takes too long. I had to click through four pages of feedback with radio buttons asking what I wanted to report, how the attack arrived, where it purported to come from etc. before I was given a page with the email address and an instruction to email them giving the details I should have been able to provide on the damned form I had just spent ages finding and filling in.

Having obtained this email address, I was given a “Fraud Report Summary” (see below) which is precisely useless for anything other than simple statistics. My guess is that this information is collated simply to be used to provide the sort of banal analysis beloved of senior management everywhere.

Not good enough guys, not nearly good enough.

But it gets worse. In my attempts to find what should be an obvious contact point, I plugged “lancashire police cyber crime” (I know, I know) as search terms into my search engine. The first likely entry listed in response (after rubbish like facebook pages or comments on non-existent fora) was a supposedly secure (https) link to the very same page I later found on the Lancs Police site. Try clicking that link. If you use Firefox, this is what you will get (Chrome will give you something similar):

So – the site is not trusted because it uses an X509 certificate which is only valid for the commercial domains of the service on which the Police site is presumably hosted. Idiotic. If I got that sort of response from a bank I’d be deeply worried. As it is, I’m just depressed.

Categories: LUG Community Blogs

Chris Lamb: ImportError: cannot import name add_to_builtins under Django 1.9

Tue, 27/10/2015 - 09:20

Whilst upgrading various projects to Django 1.9, I found myself repeatedly searching for the following code snippet so I offer it below as a more permanent note for myself and to aid others.

If you used django.template.base.add_to_builtins to avoid tedious and unsightly {% load module %} blocks in your template files, under Django 1.9 you will get the following traceback:

Traceback (most recent call last):
  File "django/core/management/", line 324, in execute
    django.setup()
  File "django/", line 18, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "django/apps/", line 108, in populate
    app_config.import_models(all_models)
  File "django/apps/", line 202, in import_models
    self.models_module = import_module(models_module_name)
  File "/usr/lib/python2.7/importlib/", line 37, in import_module
    __import__(name)
  File "myproject/myproject/utils/", line 1, in <module>
    from django.template.base import add_to_builtins
ImportError: cannot import name add_to_builtins

The solution is to move to defining settings.TEMPLATES instead of calling add_to_builtins. This replaces a number of your existing settings, including TEMPLATE_CONTEXT_PROCESSORS, TEMPLATE_DIRS, TEMPLATE_LOADERS, etc.

For example:

TEMPLATES = [{
    'BACKEND': 'django.template.backends.django.DjangoTemplates',
    'DIRS': [
        os.path.join(BASE_DIR, 'templates'),
    ],
    'APP_DIRS': True,
    'OPTIONS': {
        'context_processors': [
            'django.template.context_processors.debug',
            'django.template.context_processors.request',
            'django.contrib.auth.context_processors.auth',
            'django.contrib.messages.context_processors.messages',
            'myproject.utils.context_processors.settings_context',
        ],
        'builtins': [
            'django.contrib.staticfiles.templatetags.staticfiles',
        ],
    },
}]

Simply add the modules you previously loaded with add_to_builtins to the builtins key under OPTIONS.

(You can read more in the release notes for Django 1.9, as well as read about settings.TEMPLATES generally.)

Categories: LUG Community Blogs

Mick Morgan: update to privacy policy

Thu, 15/10/2015 - 17:55

As promised in my privacy policy, this post draws attention to the fact that I have amended that policy (very slightly).

The amendments refer to the fact that I no longer use Counterize to collect run time statistics. I prefer instead to use Awstats which runs over my log file on a weekly basis. I have also deleted reference to personal information collected in my feedback form because I no longer use such a form.

Categories: LUG Community Blogs

Jonathan McDowell: The sad state of home entertainment software platforms

Thu, 08/10/2015 - 15:42

I got home from a law lecture[1] today and took apart my TV. As you do.

I’d a couple of motivations for this. The TV is an LG 55LM8600-UC. That -UC is important - it means it’s the US model. I shipped it back to the UK along with the rest of my belongings at the start of the year. It’s a lovely TV, and I’m very happy with the picture quality. Up until now I’ve been running it through a 240v->120v transformer, because the back clearly states 120v 50/60Hz. Having had success converting my Roomba PSU to UK voltage I was wondering if it might be easy enough to do the same for the TV. So the first reason for taking it apart was to look at the PSU board and work out if I could replace it with a UK version, or fix up a handful of components myself. Turns out it’s rated for 100-240v 50/60Hz so it looks like it should be fine to plug into UK mains directly. And, having done so, it all seems fine. However I don’t take any responsibility if you try this yourself and it all goes up in smoke.

The other reason was to have a general poke at trying to get access to the onboard CPU - the firmware updates are signed and encrypted so I’m a bit stuck working from just those. My TV is from the US; it therefore has an ATSC tuner, and is no good for picking up the DVB-T/T2 broadcast in the UK. It also has a range of “smart” apps built in. Of course these are also tailored to the US, so the Amazon Instant Video app connects to and is no use with my Amazon UK account. Netflix is smarter about this and detects where you’re located rather than requiring logging in to the appropriate geographic site. However the Netflix app is old and doesn’t support user profiles. Which causes a problem when I spend an afternoon watching Unbreakable Kimmy Schmidt and it screws up the recommendations for my girlfriend.

In an ideal world I’d easily be able to update the apps on my TV to recent versions appropriate for my geographic location. Instead, the current state of home entertainment software platforms is at least 10 years behind mobile phones. Some devices allow you to get hold of SDKs even as an end user, but you’re stuck with multiple ecosystems even between different models from the same manufacturer. LG has no interest in updating my 2012-era TV. They’d rather I buy a nice new 2015 TV. Which means at present my TV is mostly a really nice monitor for my XBMC/Kodi box. I can get away with using my Blu-ray player for Netflix, as it does user profiles, but it’s also a US model and again Amazon Instant Video doesn’t work with a UK account. I should point out /that/ box is from Panasonic, just in case anyone thinks I’m needlessly picking on LG. My amp is an Onkyo and it would be nice if it could stream music from Amazon/Google as well (it supports Spotify and a couple of others already), but again no joy there.

I understand there are DRM issues around video and audio content, but these problems seem to have largely been solved in the Android world while still providing a fairly standard development environment and allowing end users to install their own software. It would seem to make more sense if there was a standard “TV”/”Blu-ray” platform that the likes of Netflix and Amazon could write apps once for, reducing the effort required on both sides to get the latest and greatest services on end user equipment. I guess this is some of what Google was trying to achieve with Google TV, but it’s probably less interesting to them than mobile devices as I certainly don’t tend to do things like search on my TV; I just want to view content and would be mighty pissed off if it started advertising things at me unexpectedly.

While I’m not planning to buy a new TV any time soon, I could do with a UK spec 3D-capable Blu-ray player. Is there anything out there with some commitment to open apps / proper updates? I haven’t found anything, but I’d love to be proved wrong.

[1] It was on the duty of care and negligence, if you care.

Categories: LUG Community Blogs

Chris Lamb: Free software activities in September 2015

Wed, 30/09/2015 - 22:23

Inspired by Raphaël Hertzog, here is a monthly update covering a large part of what I have been doing in the free software world:


The Reproducible Builds project was also covered in depth on LWN as well as in Lunar's weekly reports (#18, #19, #20, #21, #22).

  • redis — A new upstream release, as well as overhauling the systemd configuration, maintaining feature parity with sysvinit and adding various security hardening features.
  • python-redis — Attempting to get its Debian Continuous Integration tests to pass successfully.
  • libfiu — Ensuring we do not FTBFS under exotic locales.
  • gunicorn — Dropping a dependency on python-tox now that tests are disabled.

I also filed FTBFS bugs against actdiag, actdiag, bangarang, bmon, bppphyview, cervisia, choqok, cinnamon-control-center, clasp, composer, cpl-plugin-naco, dirspec, django-countries, dmapi, dolphin-plugins, dulwich, elki, eqonomize, eztrace, fontmatrix, freedink, galera-3, golang-git2go, golang-github-golang-leveldb, gopher, gst-plugins-bad0.10, jbofihe, k3b, kalgebra, kbibtex, kde-baseapps, kde-dev-utils, kdesdk-kioslaves, kdesvn, kdevelop-php-docs, kdewebdev, kftpgrabber, kile, kmess, kmix, kmldonkey, knights, konsole4, kpartsplugin, kplayer, kraft, krecipes, krusader, ktp-auth-handler, ktp-common-internals, ktp-text-ui, libdevice-cdio-perl, libdr-tarantool-perl, libevent-rpc-perl, libmime-util-java, libmoosex-app-cmd-perl, libmoosex-app-cmd-perl, librdkafka, libxml-easyobj-perl, maven-dependency-plugin, mmtk, murano-dashboard, node-expat, node-iconv, node-raw-body, node-srs, node-websocket, ocaml-estring, ocaml-estring, oce, odb, oslo-config, oslo.messaging, ovirt-guest-agent, packagesearch, php-svn, php5-midgard2, phpunit-story, pike8.0, plasma-widget-adjustableclock, plowshare4, procps, pygpgme, pylibmc, pyroma, python-admesh, python-bleach, python-dmidecode, python-libdiscid, python-mne, python-mne, python-nmap, python-nmap, python-oslo.middleware, python-riemann-client, python-traceback2, qdjango, qsapecng, ruby-em-synchrony, ruby-ffi-rzmq, ruby-nokogiri, ruby-opengraph-parser, ruby-thread-safe, shortuuid, skrooge, smb4k, snp-sites, soprano, stopmotion, subtitlecomposer, svgpart, thin-provisioning-tools, umbrello, validator.js, vdr-plugin-prefermenu, vdr-plugin-vnsiserver, vdr-plugin-weather, webkitkde, xbmc-pvr-addons, xfsdump & zanshin.

Categories: LUG Community Blogs

Jonathan McDowell: New GPG key

Thu, 24/09/2015 - 14:45

Just before I went to DebConf15 I got around to setting up my gnuk with the latest build (1.1.7), which supports 4K RSA keys. As a result I decided to generate a new certification-only primary key, using a live CD on a non-networked host and ensuring the raw key was only ever used in this configuration. The intention is that in general I will use the key via the gnuk, ensuring no danger of leaking the key material.
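Generating a certify-only primary key needs GnuPG's expert mode so that the Sign and Encrypt capabilities can be switched off. A rough sketch of the interaction (the exact prompts vary by GnuPG version, and this is a general recipe rather than necessarily the precise invocation used here):

$ gpg --expert --gen-key
# Choose "(8) RSA (set your own capabilities)", then toggle off
# Sign and Encrypt so that only Certify remains.
# Set the keysize to 4096 and a suitable expiry (e.g. 3y).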

I took part in various key signings at DebConf and the subsequent UK Debian BBQ, and finally today got round to dealing with the key slips I had accumulated. I’m sure I’ve missed some people off my signing list, but at least now the key should be embedded into the strong set of keys. Feel free to poke me next time you see me if you didn’t get mail from me with fresh signatures and you think you should have.

Key details are:

pub   4096R/0x21E278A66C28DBC0 2015-08-04 [expires: 2018-08-03]
      Key fingerprint = 3E0C FCDB 05A7 F665 AA18 CEFA 21E2 78A6 6C28 DBC0
uid                   [ full ] Jonathan McDowell <>

I have no reason to assume my old key (0x94FA372B2DA8B985) has been compromised, and for now I continue to use that key. Also, for the new key I have not generated any subkeys as yet, which caff handles ok but emits a warning about unencrypted mail. Thanks to those of you who sent me signatures despite this.

[Update: I was asked about my setup for the key generation, in particular how I ensured enough entropy, given that it was a fresh boot and without networking there were limited entropy sources available to the machine. I made the decision that the machine’s TPM and the use of tpm-rng and rng-tools was sufficient (i.e. I didn’t worry overly about the TPM being compromised for the purposes of feeding additional information into the random pool). Alternative options would have been flashing the gnuk with the NeuG firmware or using my Entropy Key.]

Categories: LUG Community Blogs

Jonathan McDowell: Getting a Dell E7240 working with a dock + a monitor

Mon, 21/09/2015 - 20:29

I have a Dell E7240. I’m pretty happy with it - my main complaint is that it has a very shiny screen, and that seems to be because it’s the touchscreen variant. While I don’t care about that feature I do care about the fact it means I get FullHD in 12.5”.

Anyway. I’ve had issues with using a dock and an external monitor with the laptop for some time, including getting so far as mentioning the problems on the appropriate bug tracker. I’ve also had discussions with a friend who has the same laptop with the same issues, and who has spent some time trying to get it to work reliably. However up until this week I haven’t had a desk I’m sitting at for any length of time to use the laptop, so it’s always been low priority for me. Today I sat down to try and figure out if there had been any improvement.

Firstly I knew the dock wasn’t at fault. A Dell E6330 works just fine with multiple monitors on the same dock. The E6330 is Ivybridge, while the E7240 is Haswell, so I thought potentially there might be an issue going on there. Further digging revealed another wrinkle I hadn’t previously been aware of; there is a DisplayPort Multi-Stream Transport (MST) hub in play, in particular a Synaptics VMM2320. Dell have a knowledge base article about Multiple external display issues when docked with a Latitude E7440/E7240 which suggests a BIOS update (I was already on A15) and a firmware update for the MST HUB. Sadly the firmware update is Windows only, so I had to do a suitable dance to be able to try and run it. I then discovered that the A05 update refused to work, complaining I had an invalid product ID. The A04 update did the same. The A01 update thankfully succeeded and told me it was upgrading from 2.00.002 to 2.15.000. After that had completed (and I’d power cycled to switch to the new firmware) I tried A05 again and this time it worked and upgraded me to 2.22.000.

Booting up Linux again I got further than before; it was definitely detecting that there was a monitor but it was very unhappy with lots of [drm:intel_dp_start_link_train] *ERROR* too many full retries, give up errors being logged. This was with 4.2, and as I’d been meaning to try 4.3-rc2 I thought this was a good time to give it a try. Lo and behold, it worked! Even docking and undocking does what I’d expect, with the extra monitor appearing / disappearing as you’d expect.

Now, I’m not going to assume this means it’s all happy, as I’ve seen this sort-of work in the past, but the clue about MST, the upgrade of that firmware (and noticing that it made things better under Windows as well) and the fact that there have been improvements in the kernel’s MST support according to the post-4.2 changelog give me some hope that things will be better from here on.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Twofer

Wed, 16/09/2015 - 23:33

After toying with the idea for some time, I decided I'd try setting up 2FA on my laptop. As usual, the arch wiki had a nicely written article on setting up 2FA with the PAM module for Google Authenticator.
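For reference, the per-user secret that the PAM module checks is created with the google-authenticator command-line tool that ships alongside the module; a minimal sketch (the prompts offer sensible defaults):

$ google-authenticator
# Writes the TOTP secret to ~/.google_authenticator and prints a QR
# code plus emergency scratch codes for enrolling a phone.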

I followed the instructions for setting up 2FA for ssh and that worked seamlessly, so I decided I'd then go the whole hog and enable the module in /etc/pam.d/system-auth, which would mean I'd need it any time I had to log in at all.

Adding the line:

auth sufficient pam_google_authenticator.so

had the expected effect that I could log in with just the verification code, but that seemed to defeat the point a little, so I bit my lip and changed sufficient to required, which would mean I'd need both my password and the code on login.

I switched to another VT and went for it. It worked!

So then I rebooted.

And I couldn't log in.

After a couple of minutes spent downloading an ISO to boot from using another machine, putting it on a USB stick, booting from it, and editing my system-auth file, I realised why:

auth required pam_google_authenticator.so
auth required pam_unix.so try_first_pass nullok
auth required pam_ecryptfs.so unwrap

My home partition is encrypted and so the Google authenticator module obviously couldn't load my secret file until I'd already logged in.

I tried moving the line to the bottom of the auth group but that didn't work either.

How could this possibly go wrong...

So, the solution I came up with was to put the 2FA module into the session group. My understanding is that this will mean PAM will ask me to supply a verification code once per session, which is fine by me; I don't want to have to put a code in every time I sudo anyway.

My question is, will my minor abuse of PAM bite me in the arse at any point? It seems to do what I expected, even if I log in through GDM.

Here's my current system-auth file:

#%PAM-1.0
auth      required  pam_unix.so     try_first_pass nullok
auth      required  pam_ecryptfs.so unwrap
auth      optional  pam_permit.so
auth      required  pam_env.so
account   required  pam_unix.so
account   optional  pam_permit.so
account   required  pam_time.so
password  optional  pam_ecryptfs.so
password  required  pam_unix.so     try_first_pass nullok sha512 shadow
password  optional  pam_permit.so
session   required  pam_limits.so
session   required  pam_unix.so
session   optional  pam_ecryptfs.so unwrap
session   optional  pam_permit.so
session   required  pam_google_authenticator.so
Categories: LUG Community Blogs

Chris Lamb: Joining strings in POSIX shell

Thu, 10/09/2015 - 21:18

A common programming task is to glue (or "join") items together to create a single string. For example:

>>> ', '.join(['foo', 'bar', 'baz'])
'foo, bar, baz'

Notice that we have three items but only two commas — this can be important if the tool we're passing the result to doesn't support trailing delimiters or we simply want the result to be human-readable.

Unfortunately, this can be inconvenient in POSIX shell where we construct strings via explicit concatenation. A naïve solution of:

RESULT=""
for X in foo bar baz
do
    RESULT="${RESULT}, ${X}"
done

... incorrectly returns ", foo, bar, baz". We can solve this with a (cumbersome) counter or flag to only attach the delimiter when we need it:

COUNT=0
RESULT=""
for X in foo bar baz
do
    if [ "${COUNT}" = 0 ]
    then
        RESULT="${X}"
    else
        RESULT="${RESULT}, ${X}"
    fi
    COUNT=$((COUNT + 1))
done

One alternative is to use the little-known ":+" expansion modifier. Many people are familiar with ":-" for returning default values:

$ echo ${VAR:-fallback}
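If VAR is unset or empty this prints "fallback", otherwise the variable's own value; a quick check:

$ unset VAR; echo "${VAR:-fallback}"   # prints: fallback
$ VAR=hello; echo "${VAR:-fallback}"   # prints: hello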

By contrast, the ":+" modifier inverts this logic, expanding to the given text only if the specified variable is set and non-empty. This results in the elegant:

RESULT=""
for X in foo bar baz
do
    RESULT="${RESULT:+${RESULT}, }${X}"
done
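On the first iteration RESULT is empty, so the ":+" expansion contributes nothing and no stray leading delimiter appears; on every later iteration it re-emits the accumulated string followed by ", ". A quick interactive check:

$ RESULT=""
$ for X in foo bar baz; do RESULT="${RESULT:+${RESULT}, }${X}"; done
$ echo "${RESULT}"
foo, bar, baz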
Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): Orchestration, a cry for help

Tue, 08/09/2015 - 15:02

Over the past few years, a plethora of orchestration frameworks have been exploding onto the scene. Many have been around for quite a while but not all have the same sort of community behind them. For example there's a very interesting option in Joey Hess' Propellor but that is hurt by needing to be able to build Propellor on all the hosts you manage. On the other hand, Ansible is able to operate without installing extra software on your target hosts, but instead it ends up very latency-bound which can cause problems when your managed hosts are "far away".

I have considered CFEngine, Chef, Puppet and Salt in addition to the above mentioned options, but none of them feel quite right to me. I am looking for a way to manage a small number of hosts, at least one of which is not always online (my laptop) and all of which are essentially snowflakes whose sparkleybits I want some reasonable control over.

I have a few basic requirements which I worry would be hard to meet -- I want to be able to make changes to my hosts by editing a file and committing/pushing it to a git server. I want to be able to manage a host entirely over SSH from one or more systems, ideally without having to install the orchestration software on the target host, but where if the software is present it will get used to accelerate matters. I don't want to have to install Ruby or PHP on any system in order to have orchestration, and some of the systems I wish to manage simply can't compile Haskell stuff sanely. I'm not desperately interested in learning yet more DSLs, but I appreciate that it will be necessary; I just really don't want to have to learn more than one DSL simply to run one framework.

I don't want to have to learn strange and confusing combinations of file formats. For example, Ansible quite sensibly uses YAML for its structured data, except for its host/group lists. It uses Jinja2 for its templating and looping, except for some things, for which it generates its own looping constructs inside its YAML. I also personally find Ansible's sportsball-oriented terminology to be confusing, but that might just be me.

So what I'm hoping is that someone will be able to point me at a project which combines all the wonderful features of the above, with a need to learn only one DSL, which doesn't have to be installed on the managed host but can benefit from being so installed, is driven from git, and won't hurt my already overly burdened brain.

Dear Lazyweb, pls. kthxbye.

Categories: LUG Community Blogs