Planet ALUG

Planet ALUG - http://planet.alug.org.uk/

Jonathan McDowell: Random post-DebConf 15 thoughts

Mon, 24/08/2015 - 16:18

There are a bunch of things I mean to blog about, but as I have only just got fully home from Heidelberg and DebConf15 this afternoon, that seems the most appropriate place to start. It’s a bit of a set of disjoint thoughts, but I figure I should write them down while they’re in my head.

DebConf is an interesting conference. It’s the best opportunity the Debian project has every year to come together and actually spend a decent amount of time with each other. As a result it’s a fairly full on experience, with lots of planned talks as a basis and a wide range of technical discussions and general social interaction filling in whatever gaps are available. I always find it a thoroughly enjoyable experience, but equally I’m glad to be home and doing delightfully dull things like washing my clothes and buying fresh milk.

I have always been of the opinion that the key aspect of DebConf is the face time. It was thus great to see so many people there - we were told several times that this was the largest DebConf so far (~ 570 people IIRC). That’s good in the sense that it meant I got to speak to a lot of people (both old friends and new), but does mean that there are various people I know I didn’t spend enough, or in some cases any, time with. My apologies, but I think many of us were in the same situation. I don’t feel it made the conference any less productive for me - I managed to get a bunch of hacking done, discuss a number of open questions in person with various people and get pulled into various interesting discussions I hadn’t expected. In short, a typical DebConf.

Also I’d like to say that the venue worked out really well. I’ll admit I was dubious when I heard it was in a hostel, but it was well located (about a 30 minute walk into town, and a reasonable bus service available from just outside the door), self-contained with decent facilities (I’m a big believer in having DebConf talks + accommodation be as close as possible to each other) and the room was much better than expected (well, aside from the snoring but I can’t blame the DebConf organisers for that).

One of the surprising and interesting things for me that was different from previous DebConfs was the opportunity to have more conversations with a legal leaning. I expect to go to DebConf and do OpenPGP/general crypto related bits. I wasn’t expecting affirmation about the things I have learnt on my course over the past year, in terms of feeling that I could use that knowledge in the process of helping Debian. It provided me with some hope that I’ll be able to tie my technology and law skills together in a way that I will find suitably entertaining (as did various conversations where people expressed significant interest in the crossover).

Next year is in Cape Town, South Africa. It’s a long way (though I suppose no worse than Portland and I get to stay in the same time zone), and a quick look at flights indicates they’re quite expensive at the moment. The bid presentation did look pretty good though so as soon as the dates are confirmed (I believe this will happen as soon as there are signed contracts in place) I’ll take another look at flights.

In short, excellent DebConf, thanks to the organisers, lovely to see everyone I managed to speak to, apologies to those of you I didn’t manage to speak to. Hopefully see you in Cape Town next year.

Categories: LUG Community Blogs

Mick Morgan: update to domain privacy

Thu, 20/08/2015 - 19:55

At the end of last month I noted that I had been receiving multiple emails to each of the proxy addresses listed for my newly registered “private” domains. Intriguingly, whilst I was receiving at least three or four such emails a week before I wrote about it, I have had precisely zero since.

Probably coincidence, but a conspiracy theorist would have a field day with that.

Categories: LUG Community Blogs

Mick Morgan: why privacy matters

Wed, 19/08/2015 - 18:53

Last month my wife and I shared a holiday with a couple of old friends. We have known this couple since before we got married, indeed, they attended our wedding. We consider them close friends and enjoy their company. One evening in a pub in Yorkshire, we got to discussing privacy, the Snowden revelations, and the implications of a global surveillance mechanism such as is used by both the UK and its Five Eyes partners (the US NSA in particular). To my complete surprise, Al expressed the view that he was fairly relaxed about the possibility that GCHQ should be capable of almost complete surveillance of his on-line activity since, in his view, “nothing I do can be of any interest to them, so why should I worry.”

I have met this view before, but oddly I had never heard Al express himself in quite this way in all the time I have known him. It bothers me that someone I love and trust, someone whose opinions I value, someone I consider to be intelligent and articulate and caring, should be so relaxed about so pernicious an activity as dragnet surveillance. It is not only the fact that Al himself is so relaxed that bothers me so much as the fact that if he does not care, then many, possibly most, people like him will not care either. That attitude plays into the hands of those, like Eric Schmidt, who purport to believe that “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”

Back in October last year, Glenn Greenwald gave a TED talk on the topic, “Why privacy matters”. I recommended it to Al and I commend it to anyone who thinks, as he does, that dragnet surveillance doesn’t impact on them because they “are not doing anything wrong”.

Categories: LUG Community Blogs

Jonathan McDowell: Programming the FST-01 (gnuk) with a Bus Pirate + OpenOCD

Tue, 11/08/2015 - 15:29

Last year at DebConf14 Lucas authorized the purchase of a handful of gnuk devices, one of which I obtained. At the time it only supported 2048 bit RSA keys. I took a look at what might be involved in adding 4096 bit support during DebConf and managed to brick my device several times in doing so. Thankfully gniibe was on hand with his STLinkV2 to help me recover. However, I was subsequently loath to experiment further at home until I had a suitable programmer.

As it is, this year has been busy and the 1.1.x release train is supposed to have 4K RSA (as well as ECC) support. DebConf15 is coming up and I felt I should finally sort out playing with the device properly. I still didn’t have a suitable programmer. Or did I? Could my trusty Bus Pirate help?

The FST-01 has an STM32F103TB on it. There is an exposed SWD port. I found a few projects that claimed to do SWD with a Bus Pirate - Will Donnelly has a much-cloned Python project, the MC HCK project has a programmer in Ruby, and there’s LibSWD, though that’s targeted at smarter programmers. None of them worked for me; I could get the Python bits as far as correctly reading the ID of the device, but not reading the option bytes or successfully flashing (though I did manage an erase).

Enter the old favourite, OpenOCD. This already has SWD support and there’s an outstanding commit request to add Bus Pirate support. NodoNogard has a post on using the ST-Link/V2 with OpenOCD and the FST-01 which provided some useful pointers. I grabbed the patch from Gerrit, applied it to OpenOCD git and built an openocd.cfg that contained:

source [find interface/buspirate.cfg]
buspirate_port /dev/ttyUSB0
buspirate_vreg 1
buspirate_mode normal
transport select swd
source [find target/stm32f1x.cfg]
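
For reference, once that config is saved as openocd.cfg in the current directory, starting the debugger is just a matter of pointing it at the file (a minimal sketch, assuming the Bus Pirate patched openocd binary is on your PATH):

openocd -f openocd.cfg

It initialises the Bus Pirate and then sits listening for telnet (port 4444) and GDB (port 3333) connections.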

My BP has the Seeed Studio probe cable, so my hookup was as follows:

That’s BP MOSI (grey) to SWD IO, BP CLK (purple) to SWD CLK, BP 3.3V (red) to FST-01 PWR and BP GND (brown) to FST-01 GND. Once that was done I fired up OpenOCD in one terminal and did the following in another:

$ telnet localhost 4444
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Open On-Chip Debugger
> reset halt
target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0xfffffffe msp: 0xfffffffc
Info : device id = 0x20036410
Info : SWD IDCODE 0x1ba01477
Error: Failed to read memory at 0x1ffff7e2
Warn : STM32 flash size failed, probe inaccurate - assuming 128k flash
Info : flash size = 128kbytes
> stm32f1x unlock 0
Device Security Bit Set
stm32x unlocked.
INFO: a reset or power cycle is required for the new settings to take effect.
> reset halt
target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0xfffffffe msp: 0xfffffffc
> flash write_image erase /home/noodles/checkouts/gnuk/src/build/gnuk.elf
auto erase enabled
wrote 109568 bytes from file /home/noodles/checkouts/gnuk/src/build/gnuk.elf in 95.055603s (1.126 KiB/s)
> stm32f1x lock 0
stm32x locked
> reset halt
target state: halted
target halted due to debug-request, current mode: Thread
xPSR: 0x01000000 pc: 0x08000280 msp: 0x20005000

Then it was a matter of disconnecting the gnuk from the BP, plugging it into my USB port and seeing it come up successfully:

usb 1-2: new full-speed USB device number 11 using xhci_hcd
usb 1-2: New USB device found, idVendor=234b, idProduct=0000
usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 1-2: Product: Gnuk Token
usb 1-2: Manufacturer: Free Software Initiative of Japan
usb 1-2: SerialNumber: FSIJ-1.1.7-87063020
usb 1-2: ep 0x82 - rounding interval to 1024 microframes, ep desc says 2040 microframes

More once I actually have a 4K key loaded on it.

Categories: LUG Community Blogs

Mick Morgan: get your porn here

Thu, 30/07/2015 - 16:20

Dear Dave is at it again. Sometimes I worry about our PM’s priorities. Not content with his earlier insistence that UK ISPs must introduce “family friendly (read “porn”) filters”, our man in No 10 now wants to “see age restrictions put into place or these (i.e. “porn”) websites will face being shut down”.

El Reg today runs a nice article about Dave’s latest delusion. That article begins:

Prime Minister David Cameron has declared himself “determined to introduce age verification mechanisms to restrict under 18s’ access to pornographic websites” and he is “prepared to legislate to do so if the industry fails to self-regulate.”

It continues in classic El Reg style:

The government will hold a consultation in the autumn, meaning it will be standing on the proverbial street corner and soliciting views on how to stop 17-year-olds running a web search for the phrase “tits”.

and further notes that Baroness Shields (who is apparently our “Minister for internet safety and security”) said:

“Whilst great progress has been made, we remain acutely aware of the risks and dangers that young people face online. This is why we are committed to taking action to protect children from harmful content. Companies delivering adult content in the UK must take steps to make sure these sites are behind age verification controls.”

To which two members of the El Reg commentariat respond:

I give it 5 minutes after the “blockade” is put in place before someone puts a blog post up explaining how to bypass said blockade.

and

Re: 5 minutes

“I give it 5 minutes after the “blockade” is put in place before someone puts a blog post up explaining how to bypass said blockade.”

I can do that now & don’t need a blog.

Q: Are you over 18?

A: Yes

Someone, somewhere, in Government must be able to explain to this bunch of idiots how the internet works. Short of actually pulling the plug on the entire net, any attempt to block access to porn is doomed to failure. China has a well documented and massive censorship mechanism in place (the Great Firewall) in order to control what its populace can watch or read or listen to. That mechanism fails to prevent determined access to censored material. If a Marxist State cannot effectively block free access to the ‘net, then Dear Dave has no chance.

Unless of course he knows that, wants to fail, and plans his own Great Firewall in “reluctant” response.

Categories: LUG Community Blogs

Mick Morgan: domain privacy?

Tue, 28/07/2015 - 20:01

Over the past few months or so I have bought myself a bunch of new domain names (I collect ’em….). On some of those names I have chosen the option of “domain privacy” so that the whois record for the domain in question will show limited information to the world at large. I don’t often do this, for a couple of reasons. Firstly, I usually don’t much care whether or not the world at large knows that I own and manage a particular domain (I have over a dozen of these). Secondly, the privacy provided is largely illusory anyway. Law Enforcement Agencies, determined companies with pushy lawyers and network level adversaries will always be able to link any domain with the real owner should they so choose. In fact, faced with a simple DMCA request, some ISPs have in the past simply rolled over and exposed their customer’s details.

But, I get spam to all the email addresses I advertise in my whois records, and I also expose other personal details required by ICANN rules. I don’t much like that, but I put up with it as a necessary evil. However, for one or two of the new domains I don’t want the world and his dog attributing the name directly to me – at least not without some effort anyway.

Because the whois record must contain contact details, domain privacy systems tend to mask the genuine registrant email address with a proxy address of the form “some-random-alphanumeric-string@dummy.domain” which simply redirects to the genuine registrant email address. Herein lies one obvious flaw in the process: a network level adversary can simply post an email to the proxy address and then watch where it goes (so domain privacy is pointless if your adversary is GCHQ or NSA – but then if they are your adversaries you have a bigger problem than just maintaining privacy on your domain).

Interestingly, I have received multiple emails to each of the proxy addresses listed for my “private” domains purporting to come from marketing companies offering me the chance to sign up to various special offers. Each of those emails also offers me the chance to “unsubscribe” from their marketing list if I am not interested in their wares.

I’ll leave the task of spotting the obvious flaw in that as an exercise for the class.

Categories: LUG Community Blogs

Jonathan McDowell: Recovering a DGN3500 via JTAG

Tue, 21/07/2015 - 11:34

Back in 2010 when I needed an ADSL2 router in the US I bought a Netgear DGN3500. It did what I wanted out of the box and being based on a MIPS AR9 (ARX100) it seemed likely OpenWRT support might happen. Long story short I managed to overwrite u-boot (the bootloader) while flashing a test image I’d built. I ended up buying a new router (same model) to get my internet connection back ASAP and never getting around to fully fixing the broken one. Until yesterday. Below is how I fixed it; both for my own future reference and in case it’s of use to any other unfortunate soul.

The device has clear points for serial and JTAG and it was easy enough (even with my basic soldering skills) to put a proper header on. The tricky bit is that the flash is connected via SPI, so it’s not just a matter of attaching JTAG, doing a scan and reflashing from the JTAG tool. I ended up doing RAM initialisation, then copying a RAM copy of u-boot in and then using that to reflash. There may well have been a better way, but this worked for me. For reference the failure mode I saw was an infinitely repeating:

ROM VER: 1.1.3 CFG 05

My JTAG device is a Bus Pirate v3b which is much better than the parallel port JTAG device I built the first time I wanted to do something similar. I put the latest firmware (6.1) on it.

All of this was done from my laptop, which runs Debian testing (stretch). I used the OpenOCD 0.9.0-1+b1 package from there.

Daniel Schwierzeck has some OpenOCD scripts which include a target definition for the ARX100. I added a board definition for the DGN3500 (I’ve also sent Daniel a patch to add this to his repo).

I tied all of this together with an openocd.cfg that contained:

source [find interface/buspirate.cfg]
buspirate_port /dev/ttyUSB1
buspirate_vreg 0
buspirate_mode normal
buspirate_pullup 0
reset_config trst_only
source [find openocd-scripts/target/arx100.cfg]
source [find openocd-scripts/board/dgn3500.cfg]
gdb_flash_program enable
gdb_memory_map enable
gdb_breakpoint_override hard

I was then able to power on the router and type dgn3500_ramboot into the OpenOCD session. This fetched my RAM copy of u-boot from dgn3500_ram/u-boot.bin, copied it into the router’s memory and started it running. From there I had a u-boot environment with access to the flash commands and was able to restore the original Netgear image (and once I was sure that was working ok I subsequently upgraded to the Barrier Breaker OpenWRT image).
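
For reference, the interaction is just the standard OpenOCD telnet session (a sketch, assuming OpenOCD’s default telnet port; dgn3500_ramboot is the helper command that presumably comes from the board/target scripts sourced above):

$ telnet localhost 4444
> dgn3500_ramboot

with the resulting u-boot prompt then appearing on the serial header rather than in the OpenOCD session.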

Categories: LUG Community Blogs

Chris Lamb: Where's the principled opposition to the "WhatsApp ban"?

Fri, 10/07/2015 - 19:23

The Independent reports that David Cameron wishes to ban the instant messaging application WhatsApp due to its use of end-to-end encryption.

That we might merely be pawns in manoeuvring for some future political compromise (or merely susceptible to cheap clickbait) should be cause for some concern, but what should worry us more is that if it takes scare stories about WhatsApp for our culture to awaken on the issues of privacy and civil liberties, then the central argument against surveillance was lost a long time ago.

However, the situation worsens once you analyse the disapproval in more detail. One is immediately struck by a predominant narrative of technical considerations; a ban would be "unworkable" or "impractical". A robust defence of personal liberty or a warning about the insidious nature of chilling effects? Perhaps a prescient John Locke quote to underscore the case? No. An encryption ban would "cause security problems."

The argument proceeds in a tediously predictable fashion: it was already difficult to keep track of whether one should ipso facto be in favour of measures that benefit the economy, but we are suddenly co-opted as technocrats to consider the "damage" it could do to the recovery or the impact on a now-victimised financial sector. The «coup-de-grâce» finally appeals to our already inflated self-regard and narcissism: someone could "steal your identity."

Perhaps even more disappointing is the reaction from more technically-minded circles who, frankly, should know better. Here, they give the outward impression of metaphorically stockpiling copies of the GnuPG source code in their bunkers, perhaps believing the shallow techno-utopianist worldview that all social and cultural problems can probably be solved with Twitter and a JavaScript interpreter.

The tragedy here is that I suspect that this isn't what the vast majority of people really believe. Given a hypothetical ban that could, somehow, bypass all of the stated concerns, I'm pretty upbeat and confident that most people would remain uncomfortable with it on some level.

So what, exactly, does it take for us to oppose this kind of intervention on enduring principled grounds instead of transient and circumventable practical ones? Is the problem just a lack of vocabulary to discuss these issues on a social scale? A lack of courage?

Whilst it's certainly easier to dissect illiberal measures on technical merit than to make an impassioned case for abstract freedoms, every time we gleefully cackle "it won't work" we are, in essence, conceding the central argument to the authoritarian and the censorious. If one is right but for the wrong reasons, were we even right to begin with?

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): Be careful what you ask for

Wed, 01/07/2015 - 14:28

Date: Wed, 01 Jul 2015 06:13:16 -0000
From: 123-reg <noreply@123-reg.co.uk>
To: dsilvers@digital-scurf.org
Subject: Tell us what you think for your chance to win
X-Mailer: MIME::Lite 3.027 (F2.74; T1.28; A2.04; B3.13; Q3.13)

Tell us what you think of 123-reg! <!-- .style1 {color: #1996d8} -->

Well 123-reg mostly I think you don't know how to do email.

Categories: LUG Community Blogs

Jonathan McDowell: What Jonathan Did Next

Mon, 29/06/2015 - 23:22

While I mentioned last September that I had failed to be selected for an H-1B and had been having discussions at DebConf about alternative employment, I never got around to elaborating on what I’d ended up doing.

Short answer: I ended up becoming a law student, studying for a Masters in Legal Science at Queen’s University Belfast. I’ve just completed my first year of the 2 year course and have managed to do well enough in the 6 modules so far to convince myself it wasn’t a crazy choice.

Longer answer: After Vello went under in June I decided to take a couple of months before fully investigating what to do next, largely because I figured I’d either find something that wanted me to start ASAP or fail to find anything and stress about it. During this period a friend happened to mention to me that the applications for the Queen’s law course were still open. He happened to know that it was something I’d considered a few times before. Various discussions (some of them over gin, I’ll admit) ensued and I eventually decided to submit an application. This was towards the end of August, and I figured I’d also talk to people at DebConf to see if there was anything out there tech-wise that I could get excited about.

It turned out that I was feeling a bit jaded about the whole tech scene. Another friend is of the strong opinion that you should take a break at least every 10 years. Heeding her advice I decided to go ahead with the law course. I haven’t regretted it at all. My initial interest was largely driven by a belief that there are too few people who understand both tech and law. I started with interests around intellectual property and contract law as well as issues that arise from trying to legislate for the global nature of most tech these days. However the course is a complete UK qualifying degree (I can go on to do the professional qualification in NI or England & Wales) and the first year has been about public law. Which has been much more interesting than I was expecting (even, would you believe it, EU law). Especially given the potential changing constitutional landscape of the UK after the recent general election, with regard to talk of repeal of the Human Rights Act and a referendum on exit from the EU.

Next year will concentrate more on private law, and I’m hoping to be able to tie that in better to what initially drove me to pursue this path. I’m still not exactly sure which direction I’ll go once I complete the course, but whatever happens I want to keep a linkage between my skill sets. That could be either leaning towards the legal side but with the appreciation of tech, returning to tech but with the appreciation of the legal side of things or perhaps specialising further down an academic path that links both. I guess I’ll see what the next year brings. :)

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Pretty please

Mon, 22/06/2015 - 15:06

I've been making a thing to solve some problems I always face while building web APIs. Curl is lovely but it's a bit too flexible.

Also, web services generally spit out one of a fairly common set of formats (json, xml, html) and I often just want to grab a value from the response and use it in a script - maybe to make the next call in a workflow.

So I made please which makes it super simple to do things like making a web request and grabbing a particular value from the response.

For example, here's how you'd get the page title from this site:

please get http://offend.me.uk/ | please parse html.head.title.#text

Or getting a value out of the json returned by jsontest.com's IP address API:

please get http://ip.jsontest.com/ | please parse ip

The parse part of please is the most fun; it can convert between a few different formats. Something I do quite often is grabbing a json response from an API and spitting it out as yaml so I can read it easily. For example:

please get http://date.jsontest.com/ | please parse -o yaml

(alright so that's a poor example but the difference is huge when it's a complicated bit of json)

Also handy for turning an unreadable mess of xml into yaml (I love yaml for its readability):

echo '<docroot type="messydoc"><a><b dir="up">A tree</b><b dir="down">The ground</b></a></docroot>' | please parse -o yaml

As an example of the kinds of things you can play with, I made this tool for generating graphs from json.

I'm still working on please; there will be bugs; let me know about them.

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): In defence of curl | sudo bash -

Thu, 11/06/2015 - 13:32

Long ago, in days of yore, we assumed that any software worth having would be packaged by the operating system we used. Debian with its enormous pile of software (over 20,000 sources last time I looked) looked to basically contain every piece of free software ever. However as more and more people have come to Linux-based and BSD-based systems, and the proliferation of *NIX-based systems has become even more diverse, it has become harder and harder to ensure that everyone has access to all of the software they might choose to use.

Couple that with the rapid development of new projects, who clearly want to get users involved well before the next release cycle of a Linux-based distribution such as Debian, and you end up with this recommendation to bypass the operating system's packaging system and simply curl | sudo bash -.

We, the OS-development literati, have come out in droves to say "eww, nasty, don't do that please" and yet we have brought this upon ourselves. Our tendency to invent, and reinvent, at the very basic levels of distributions has resulted in so many operating systems and so many ways to package software (if not in underlying package format then in policy and process) that third party application authors simply cannot keep up. Couple that with the desire of the consumers to not have their chosen platform discounted, and if you provide Debian packages, you end up needing to provide for Fedora, RHEL, SuSE, SLES, CentOS, Mint, Gentoo, Arch, etc.etc; let alone supporting all the various BSDs. This leads to the simple expedience of curl | sudo bash -.

Nobody, not even those who are most vehemently against this mechanism of installing software, can claim that it is not quick, simple for users, easy to copy/paste out of a web-page, and leaves all the icky complexity of sorting things out up to a script which the computer can run, rather than the nascent user of the software in question. As a result, many varieties of software have ended up using this as a simple installation mechanism, from games to orchestration frameworks - everyone can acknowledge how easy it is to use.

Now, some providers are wising up a little and ensuring that the url you are curling is at least an https:// one. Some even omit the sudo from the copy/paste space and have it in the script, allowing them to display some basic information and prompting the user that this will occur as root before going ahead and elevating. All of these myriad little tweaks to the fundamental idea improve matters but are ultimately just putting lipstick on a fairly sad looking pig.
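
To make that pattern concrete, here's a rough sketch - entirely hypothetical names and URLs, and only the shape of the idea rather than any real project's installer. The advertised one-liner becomes curl -fsSL https://example.com/install.sh | sh, and the script itself handles the elevation:

#!/bin/sh
# Hypothetical install.sh sketch: explain what is about to happen, then elevate explicitly.
set -e
echo "This will install 'exampletool' into /usr/local/bin."
echo "Root privileges are needed for that step; sudo will prompt for your password."
# Read the confirmation from the terminal, not stdin, since stdin is the pipe from curl.
printf "Proceed? [y/N] " > /dev/tty
read answer < /dev/tty
[ "$answer" = "y" ] || { echo "Aborted."; exit 1; }
# Fetch the binary over HTTPS and install it as root.
curl -fsSL -o /tmp/exampletool https://example.com/exampletool
sudo install -m 0755 /tmp/exampletool /usr/local/bin/exampletool

It is still lipstick on the same pig, but at least the user is told what is about to happen before anything runs as root.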

So, what can be done? Well we (again the OS-development literati) got ourselves into this horrendous mess, so it's up to us to get ourselves back out. We're all too entrenched in our chosen packaging methodologies, processes, and policies, to back out of those; yet we're clearly not properly servicing a non-trivial segment of our userbase. We need to do better. Not everyone who currently honours a curl | sudo bash - is capable of understanding why it's such a bad idea to do so. Some education may reduce that number but it will never eliminate it.

For a long time I advocated a switch to wget && review && sudo ./script approach instead, but the above comment, about people who don't understand why it might be a bad idea, really applies to show how few of those users would even be capable of starting to review a script they downloaded, let alone able to usefully judge for themselves if it is really safe to run. Instead we need something better, something collaborative, something capable of solving the accessibility issues which led to the curl | sudo bash - revolt in the first place.

I don't pretend to know what that solution might be, and I don't pretend to think I might be the one to come up with it, but I can highlight a few things I think we'll need to solve to get there:

  1. Any solution to this problem must be as easy as curl | sudo bash - or easier. This might mean a particular URI format which can have os-specific ways to handle standardised inputs, or it might mean a pervasive tool which does something like that.
  2. Any solution must do its best to securely acquire the content the user actually wanted. This means things like validating SSL certificates, presenting information to the user which a layman stands a chance of evaluating to decide if the content is likely to be what they wanted, and then acting smoothly and cleanly to get that content onto the user's system.
  3. Any solution should not introduce complex file formats or reliance on any particular implementation of a tool. Ideally it would be as easy to implement the solution on FreeBSD in shell, or on Ubuntu as whizzy 3D GUIs written in Haskell. (modulo the pain of working in shell of course)
  4. The solution must be arrived at in a multi-partisan way. For such a mechanism to be as usefully pervasive as curl | sudo bash - as many platforms as possible need to get involved. This means not only Debian, Ubuntu, Fedora and SuSE; but also Arch, FreeBSD, NetBSD, CentOS etc. Maybe even the OpenSolaris/Illumos people need to get involved.

Given the above, no solution can be "just get all the apps developers to learn how to package software for all the OS distributions they want their app to run on" since that way madness lies.

I'm sure there are other minor, and major, requirements on any useful solution but the simple fact of the matter is that until and unless we have something which at least meets the above, we will never be rid of curl | sudo bash -, just like we can never seem to be rid of that one odd person at the party: no one knows who invited them, and no one wants to tell them to leave because they do fill a needed role, but no one really seems to like them.

Until then, let's suck it up and while we might not like it, let's just let people keep on curl | sudo bash -ing until someone gets hurt.

P.S. I hate curl | sudo bash - for the record.

Categories: LUG Community Blogs

MJ Ray: Mick Morgan: here’s why pay twice?

Thu, 11/06/2015 - 04:49

http://baldric.net/2015/06/05/why-pay-twice/ asks why the government hires civilians to monitor social media instead of just giving GCHQ the keywords. Us cripples aren’t allowed to comment there (physical ability test) so I reply here:

It’s pretty obvious that they have probably done both, isn’t it?

This way, they’re verifying each other. Politicians probably trust neither civilians nor spies completely and that makes it worth paying twice for this.

Unlike lots of things that they seem to want not to pay for at all…

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): Sometimes recruiters really miss the point...

Tue, 09/06/2015 - 16:11

I get quite a bit of recruitment spam, especially via my LinkedIn profile, but today's Twitter-madness (recruiter scraped my twitter and then contacted me) really took the biscuit. I include my response (stripped of identifying marks) for your amusement:

On Tue, Jun 09, 2015 at 10:30:35 +0000, Silly Recruiter wrote:
> I have come across your profile on various social media platforms today and
> after looking through them I feel you are a good fit for a permanent Java
> Developer Role I have available.

Given that you followed me on Twitter I'm assuming you found a tweet or two in which I mention how much I hate Java?

> I can see you are currently working at Codethink and was wondering if you
> were considering a change of role?

I am not.

> The role on offer is working as a Java Developer for a company based in
> Manchester. You will be maintaining and enhancing the company's core websites
> whilst using the technologies Java, JavaScript, JSP, Struts, Hibernate XML
> and Grails.

This sounds like one of my worst nightmares.

> Are you interested in hearing more about the role? Please feel free to call
> or email me to discuss it further.

Thanks, but no.

> If not, do you know someone that is interested? We offer a £500 referral fee
> for any candidate that is successful.

I wouldn't inflict that kind of Lovecraftian nightmare of a software stack on anyone I cared about, sorry.

D.

I then decided to take a look back over my Twitter and see if I could find what might have tripped this. There's some discussion of Minecraft modding but nothing which would suggest JavaScript, JSP, Struts, Hibernate XML or Grails.

Indeed my most recent tweet regarding Java could hardly be construed as positive towards it.

Sigh.

Categories: LUG Community Blogs

Mick Morgan: why pay twice?

Fri, 05/06/2015 - 21:09

Yesterday’s Independent newspaper reports that HMG has let a contract with five companies to monitor social media such as twitter, facebook, and blogs for commentary on Government activity. The report says:

“Under the terms of the deal five companies have been approved to keep an eye on Facebook, Twitter and blogs and provide daily reports to Whitehall on what’s being said in “real time”.

Ministers, their advisers and officials will provide the firms with “keywords and topics” to monitor. They will also be able to opt in to an Orwellian-sounding Human-Driven Evaluation and Analysis system that will allow them to see “favourability of coverage” across old and new media.”

This seems to me to be a modern spin on the old press cuttings system which was in widespread use in HMG throughout my career. The article goes on to say:

“The Government has always paid for a clippings service which collated press coverage of departments and campaigns across the national, regional and specialist media. They have also monitored digital news on an ad hoc basis for several years. But this is believed to be the first time that the Government has signed up to a cross-Whitehall contract that includes “social” as a specific media for monitoring.”

Apart from the mainstream social media sites noted above, I’d be intrigued to know what criteria are to be applied for including blogs in the monitoring exercise. Some blogs (the “vox populi” types such as Guido Fawkes at order-order) will be obvious candidates. Others in the traditional media, such as journalistic or political blogs will also be included, but I wonder who chooses others, and by what yardsticks. Would trivia be included? And should I care?

According to the Independent, the Cabinet Office, which negotiated the deal, claims that even with the extended range of monitoring by bringing individual departmental contracts together it will be able to save £2.4m over four years whilst “maximising the quality of innovative work offered by suppliers”.

Now since the Cabinet Office is reportedly itself facing a budget cut of £13 million in this FY alone, it strikes me that it would have been much more cost effective to simply use GCHQ’s pre-existing monitoring system rather than paying a separate bunch of relative amateurs to search the same sources.

Just give GCHQ the “keywords” or “topics of interest”. Go on Dave, you know it makes sense.

Categories: LUG Community Blogs

Mick Morgan: de-encrypting trivia

Tue, 02/06/2015 - 17:03

Well, that didn’t last long.

When I decided to force SSL as the default connection to trivia I had forgotten that it is syndicated via RSS on sites like planet alug. And of course as Brett Parker helpfully pointed out to me, self-signed certificates don’t always go down too well with RSS readers. He also pointed out that some spiders (notably google) would barf on my certificate and thus leave the site unindexed.

So I have taken off the forced redirect to port 443. Nevertheless, I would encourage readers to connect to https://baldric.net in order to protect their browsing of this horribly seditious site.

You never know who is watching……..

Categories: LUG Community Blogs

Mick Morgan: encrypting trivia

Mon, 01/06/2015 - 20:26

In my post of 8 May I said it was now time to encrypt much, much more of my everyday activity. One big, and obvious, hole in this policy decision was the fact that the public face of this blog itself has remained unencrypted since I first created it way back in 2006.

Back in September 2013 I mentioned that I had for some time protected all my own connections to trivia with an SSL connection. Given that my own access to trivia has always been encrypted, any of my readers could easily have used the same mechanism to connect (just by using the “https” prefix). However, my logs tell me that very, very few connections other than my own come in over SSL. There are a couple of probable reasons for this, not least the fact that an unencrypted plain http connection is the obvious (default) way to connect. But another reason may be the fact that I use a self signed (and self generated) X509 certificate. I do this because, like Michael Orlitzky, I see no reason why I should pay an extortionist organisation such as a CA good money to produce a certificate which says nothing about me or the trustworthiness of my blog when I can produce a perfectly good certificate of my own.
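
For anyone curious, generating such a certificate takes nothing more than a single openssl invocation. A minimal sketch (the key size, lifetime and subject here are purely illustrative, not necessarily what trivia actually uses):

openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
    -keyout baldric.net.key -out baldric.net.crt -subj "/CN=baldric.net"

The resulting key and certificate are then simply referenced from the web server’s SSL configuration, with the cipher list tightened up to taste.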

I particularly like Orlitzky’s description of CAs as “terrorists”. He says:

I oppose CA-signed certificates because it’s bad policy, in the long run, to negotiate with terrorists. I use that word literally — the CAs and browser vendors use fear to achieve their goal: to get your money. The CAs collect a ransom every year to ”renew“ your certificate (i.e. to disarm the time bomb that they set the previous year) and if you don’t pay up, they’ll scare away your customers. ‘Be a shame if sometin’ like that wos to happens to yous…

Unfortunately, however, web browsers get really upset when they encounter self-signed certificates and throw up all sorts of ludicrously overblown warnings. Firefox, for example, presents its full-page “untrusted connection” warning when first connecting to trivia over SSL.

Any naive reader encountering that sort of error message is likely to press the “get me out of here” button and then bang goes my readership. But that is just daft. If you are happy to connect to my blog in clear, why should you be afraid to connect to it over an encrypted channel just because the browser says it can’t verify my identity? If I wanted to attack you, the reader, then I could just as easily do so over a plain http connection as over SSL. And in any event, I did not create my self signed certificate to provide identity verification, I created it to provide an encrypted channel to the blog. That encryption works, and, I would argue, it is better than the encryption provided by many commercially produced certificates because I have specifically chosen to use only the stronger cyphers available to me.

Encrypting the connection to trivia feels to me like the right thing to do. I personally always feel better about a web connection that is encrypted. Indeed, I use the “https everywhere” plugin as a matter of course. Given that I already have an SSL connection available to offer on trivia, and that I believe that everyone has the right to browse the web free from intrusive gratuitous snooping I think it is now way past time that I provided that protection to my readers. So, as of yesterday I have shifted the whole of trivia to an encrypted channel by default. Any connection to port 80 is now automatically redirected to the SSL protected connection on port 443.

Let’s see what happens to my readership.

Categories: LUG Community Blogs

Jonathan McDowell: I should really learn systemd

Thu, 21/05/2015 - 18:20

As I slowly upgrade all my machines to Debian 8.0 (jessie) they’re all ending up with systemd. That’s fine; my laptop has been running it since it went into testing whenever it was. Mostly I haven’t had to care, but I’m dimly aware that it has a lot of bits I should learn about to make best use of it.

Today I discovered systemctl is-system-running. I’m not sure when I’d actually use it, but when I ran it, it responded with degraded. That’s not right, thought I. How do I figure out what’s wrong? systemctl --state=failed turned out to be the answer.

# systemctl --state=failed
UNIT                          LOAD   ACTIVE SUB    DESCRIPTION
● systemd-modules-load.service loaded failed failed Load Kernel Modules

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

Ok, so it’s failed to load some kernel modules. What’s it trying to load? systemctl status -l systemd-modules-load.service led me to /lib/systemd/systemd-modules-load which complained about various printer modules not being able to be loaded. Turned out this was because CUPS had dropped them into /etc/modules-load.d/cups-filters.conf on upgrade, and as I don’t have a parallel printer I hadn’t compiled up those modules. One of my other machines had also had an issue with starting up filesystem quotas (I think because there’d been some filesystems that hadn’t mounted properly on boot - my fault rather than systemd). Fixed that up and then systemctl is-system-running started returning a nice clean running.
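
One way to quiet that particular failure is to stop asking for the modules at all (a sketch; the exact module names depend on what the CUPS upgrade dropped into the file, and building the modules is the other obvious fix):

# see what systemd is being asked to load
cat /etc/modules-load.d/cups-filters.conf
# comment out the parallel port printer modules that aren't built for this kernel
sed -i 's/^\(lp\|ppdev\|parport_pc\)/#\1/' /etc/modules-load.d/cups-filters.conf
# re-run the unit and re-check the overall state
systemctl restart systemd-modules-load.service
systemctl is-system-running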

Now this is probably something that was silently failing back under sysvinit, but of course nothing was tracking that other than some output on boot up. So I feel that I’ve learnt something minor about systemd that actually helped me cleanup my system, and sets me in better stead for when something important fails.

Categories: LUG Community Blogs

Jonathan McDowell: Stepping down from SPI

Mon, 18/05/2015 - 23:38

I was first elected to the Software in the Public Interest board back in 2009. I was re-elected in 2012. This July I am up for re-election again. For a variety of reasons I’ve decided not to stand; mostly a combination of the fact that I think 2 terms (6 years) is enough in a single stretch and an inability to devote as much time to the organization as I’d like. I mentioned this at the May board meeting. I’m planning to stay involved where I can.

My main reason for posting this here is to cause people to think about whether they might want to stand for the board. Nominations open on July 1st and run until July 13th. The main thing you need to absolutely commit to is being able to attend the monthly board meeting, which is held on IRC at 20:30 UTC on the second Thursday of the month. They tend to last at most 30 minutes. Of course there’s a variety of tasks that happen in the background, such as answering queries from prospective associated projects or discussing ongoing matters on the membership or board lists depending on circumstances.

It’s my firm belief that SPI do some very important work for the Free software community. Few people realise the wide variety of associated projects. SPI offload the boring admin bits around accepting donations and managing project assets (be those machines, domains, trademarks or whatever), leaving those projects able to concentrate on the actual technical side of things. Most project members don’t realise the involvement of SPI, and that’s largely a good thing as it indicates the system is working. However it also means that there can sometimes be a lack of people wanting to stand at election time, and an absence of diversity amongst the candidates.

I’m happy to answer questions of anyone who might consider standing for the board; #spi on irc.oftc.net is a good place to ask them - I am there as Noodles.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Andy and Teddy are waving goodbye

Fri, 15/05/2015 - 00:28

Most of the time, when I've got some software I want to write, I do it in python or sometimes bash. Occasionally though, I like to slip into something with a few more brackets. I've written a bit of C in the past and love it but recently I've been learning Go and what's really struck me is how clever it is. I'm not just talking about the technical merits of the language itself; it's clever in several areas:

  • You don't need to install anything to run Go binaries.

    At first - I'm sure like many others - I felt a little revulsion when I heard that Go compiles to statically-linked binaries but after having used and played with Go a bit over the past few weeks, I think it's rather clever and was somewhat ahead of the game (there's a quick way to see the static linking for yourself, sketched after this list). In the current climate where DevOps folks (and developers) are getting excited about containers and componentised services, being able to simply curl a binary and have it usable in your container without needing to install a stack of dependencies is actually pretty powerful. It seems there's a general trend towards preferring readiness of use over efficiency of space used both in RAM and on disk. And it makes sense; storage is cheap these days. A 10MiB binary is no concern - even if you need several of them - when you have a 1TiB drive. The extravagance of large binaries is no longer so relevant when you're comparing it with your collection of 2GiB bluray rips. The days of needing to count the bytes are gone.

  • Go has the feeling of C but without all that tedious mucking about in hyperspace memory

    Sometimes you just feel you need to write something fairly low level and you want more direct control than you have whilst you're working from the comfort blanket of python or ruby. Go gives you the ability to have well-defined data structures and to care about how much memory you're eating when you know your application needs to process tebibytes of data. What Go doesn't give you is the freedom to muck about in memory, fall off the end of arrays, leave pointers dangling around all over the place, and generally make tiny, tiny mistakes that take years for anyone to discover.

  • The build system is designed around how we (as developers) use code hosting facilities

    Go has a fairly impressive set of features built in but if you need something that's not already included, there's a good chance that someone out there has written what you need. Go provides a package search tool that makes it very easy to find what you're looking for. And when you've found it, using it is stupidly simple. You add an import declaration in your code:

    import "github.com/codegangsta/cli"

    which makes it very clear where the code has come from and where you'd need to go to check the source code and/or documentation. Next, pulling the code down and compiling it ready for linking into your own binary takes a simple:

    go get github.com/codegangsta/cli

    Go implicitly understands git and the various methods of retrieving code so you just need to tell it where to look and it'll figure the rest out.
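
As a quick illustration of the static-linking point from the first bullet: for a pure-Go program it's easy to verify (a sketch, assuming a trivial main package in the current directory on a Linux box):

go build -o hello .   # a single, self-contained binary pops out
file hello            # reports a statically linked ELF executable
ldd hello             # "not a dynamic executable"

Copy that one file to another machine (or into a container) and it just runs; no runtime and no dependency stack required.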

In summary, I'm starting to wonder if Google have a time machine. Go seems to have nicely predicted several worries and trends since its announcement: Docker, Heartbleed, and social coding.

Categories: LUG Community Blogs