Planet ALUG


Mick Morgan: using a VPN to take back your privacy

Fri, 12/05/2017 - 21:35

With the passage into law of the iniquitous Investigatory Powers (IP) Bill in the UK at the end of November last year, it is way past time for all those who care about civil liberties in this country to exercise their right to privacy.

The new IP Act permits HMG and its various agencies to surveil the entire online population. The Act actually formalises (or in reality, legalises) activity which has long gone on in this country (as in others) in that it gives LEAs and others a blanket right of surveillance.

The Act (PDF) itself states that it is:

“An Act to make provision about the interception of communications, equipment interference and the acquisition and retention of communications data, bulk personal datasets and other information; to make provision about the treatment of material held as a result of such interception, equipment interference or acquisition or retention; to establish the Investigatory Powers Commissioner and other Judicial Commissioners and make provision about them and other oversight arrangements; to make further provision about investigatory powers and national security; to amend sections 3 and 5 of the Intelligence Services Act 1994; and for connected purposes.”

(Don’t you just love the “connected purposes” bit?)

The Open Rights Group says the Act:

“is one of the most extreme surveillance laws ever passed in a democracy. Its impact will be felt beyond the UK as other countries, including authoritarian regimes with poor human rights records, will use this law to justify their own intrusive surveillance regimes.”

Liberty, which believes the Act breaches the public’s rights under the Human Rights Act, is challenging the Act through the Courts. That organisation says:

“Liberty will seek to challenge the lawfulness of the following powers, which it believes breach the public’s rights:

– Bulk hacking – the Act lets police and agencies access, control and alter electronic devices like computers, phones and tablets on an industrial scale, regardless of whether their owners are suspected of involvement in crime – leaving them vulnerable to further attack by hackers.

– Bulk interception – the Act allows the state to read texts, online messages and emails and listen in on calls en masse, without requiring suspicion of criminal activity.

– Bulk acquisition of everybody’s communications data and internet history – the Act forces communications companies and service providers to hand over records of everybody’s emails, phone calls and texts and entire web browsing history to state agencies to store, data-mine and profile at its will.

This provides a goldmine of valuable personal information for criminal hackers and foreign spies.

– “Bulk personal datasets” – the Act lets agencies acquire and link vast databases held by the public or private sector. These contain details on religion, ethnic origin, sexuality, political leanings and health problems, potentially on the entire population – and are ripe for abuse and discrimination.”

ProtonMail, a mail provider designed and built by “scientists, engineers, and developers drawn together by a shared vision of protecting civil liberties online”, announced on Thursday 19 January that they will be providing access to their email service via a Tor onion site, accessible only over the Tor anonymising network. The ProtonMail blog entry announcing the new service says:

“As ProtonMail has evolved, the world has also been changing around us. Civil liberties have been increasingly restricted in all corners of the globe. Even Western democracies such as the US have not been immune to this trend, which is most starkly illustrated by the forced enlistment of US tech companies into the US surveillance apparatus. In fact, we have reached the point where it [is] simply not possible to run a privacy and security focused service in the US or in the UK.

At the same time, the stakes are also higher than ever before. As ProtonMail has grown, we have become increasingly aware of our role as a tool for freedom of speech, and in particular for investigative journalism. Last fall, we were invited to the 2nd Asian Investigative Journalism Conference and were able to get a firsthand look at the importance of tools like ProtonMail in the field.

Recently, more and more countries have begun to take active measures to surveil or restrict access to privacy services, cutting off access to these vital tools. We realize that censorship of ProtonMail in certain countries is not a matter of if, but a matter of when. That’s why we have created a Tor hidden service (also known as an onion site) for ProtonMail to provide an alternative access to ProtonMail that is more secure, private, and resistant to censorship.”

So, somewhat depressingly, the UK is now widely seen as a repressive state, willing to subject its citizens to a frighteningly totalitarian level of surveillance. Personally I am not prepared to put up with this without resistance.

Snowden hype notwithstanding, HMG does not have the resources to directly monitor all electronic communications traffic within the UK or to/from the UK, so it effectively outsources that task to “communications providers” (telcos for telephony and ISPs for internet traffic). Indeed, the IP Act is intended, in part, to force UK ISPs to retain internet connection records (ICRs) when required to do so by the Home Secretary. In reality, this means that all the major ISPs, who already have relationships of various kinds with HMG, will be expected to log all their customers’ internet connectivity and to retain such logs for as long as is deemed necessary under the Act. The Act then gives various parts of HMG the right to request those logs for investigatory purposes.

Given that most of us now routinely use the internet for a vast range of activity, not limited just to browsing websites but actually transacting in the real world, this is akin to requiring that every single library records the book requests of its users, every single media outlet (newsagents, bookshops, record shops etc.) records every purchase in a form traceable back to the purchaser, and every single professional service provider (solicitors, lawyers, doctors, dentists, architects, plumbers, builders etc.) records all activity by name and address of visitor. All this on top of the already existing capability of HMG to track and record every single person, social media site or organisation we contact by email or other form of messaging.

Can you imagine how you would feel if on every occasion you left your home a Police Officer (or in fact officials from any one of 48 separate agencies, including such oddities as the Food Standards Agency, the NHS Business Services Authority or the Gambling Commission) had the right, without a warrant or justifiable cause, to stop you and search you so that (s)he could read every piece of documentation you were carrying? How do you feel about submitting to a fishing trip through your handbag, briefcase, wallet or pockets?

I have no problem whatsoever with targeted surveillance, but forgive me if I find the blanket unwarranted surveillance of the whole populace, on the off-chance it might be useful, completely unacceptable. What happened to the right to privacy and the presumption of innocence in the eyes of the law? The data collected by ISPs and telcos under the IP Act gives a treasure trove of information on UK citizens that the former East German Stasi could only have dreamed about.

Now regardless of whether or not you trust HMG to use this information wisely, and only for the reasons laid out under the Act, and only in the strict circumstances laid out in the Act, and only with the effective scrutiny of “independent” oversight, how confident are you that any future administration would be similarly wise and circumspect? What is to stop a future, let us suppose, less enlightened or liberal administration, misusing that data? What happens if in future some act which is currently perfectly legal and permissible, if of somewhat dubious taste, morality and good sense (such as, say, reading the Daily Mail online) were to become illegal? What constraint would there be to prevent a retrospective search for past consumers of such dubious material in order to flag them as “persons of interest”?

And even if you are comfortable with all of that, how comfortable are you with the idea that organised crime could have access to all your personal details? Given the aggregation of data inherent in the requirement for bulk data collection by ISPs, those datasets become massive and juicy targets for data theft (by criminals as well as foreign nation states). And if you think that could not happen because ISPs and telcos take really, really, really good care of their customers’ data, then think about TalkTalk or Plusnet or Three or Yahoo.

And they are just a few of the recent ones that we /know/ about.

So long as I use a UK landline or mobile provider for telephony, there is little I can do about the aggregation of metadata about my contacts (and if you think metadata aggregation doesn’t matter, take a look at this EFF note). I can, of course, and do, keep a couple of (cash) pre-paid SIM-only mobile ‘phones handy – after all, you never know when you may need one (such as, perhaps, in future when they become “difficult” to purchase). And the very fact that I say that probably flags me as suspicious in some people’s minds. (As an aside, ask yourself what comes to mind when you think about someone using a cash-paid, anonymous, second hand mobile ‘phone. See? I must be guilty of something. Notice how pernicious suspicion becomes? Tricky isn’t it?) Nor can I do much about protecting my email (unless I use GPG, but that is problematic and in any case does not hide the all-important metadata in the to/from/date/subject headers). Given that, I have long treated email just as if it were correspondence by postcard, though somewhat less private. For some long time I used to routinely GPG sign all my email. I have stopped doing that because the signatures meant, of course, that I had no deniability. Nowadays I only sign (and/or encrypt) when I want my correspondents to be sure I am who I say I am (or they want that reassurance).

But that does not mean I think I should just roll over and give up. There is plenty I can do to protect both myself and my immediate family from unnecessary, intrusive, unwarranted and unwanted snooping. For over a year now I have been using my own XMPP server in place of text messaging. I have had my own email server for well over a decade, and so long as I am conversing there with others on one of my domains served by that system, then that email is pretty private too (protected in transit by TLS using my own X509 certificates). My web browsing has also long been protected by Tor. But all that still leaves trails I don’t like leaving. I might, for example, not want my ISP to even know that I am using Tor, and in the case of my browsing activity it becomes problematic to protect others in my household or to cover all the multiple devices we now have which are network connected (I’ve actually lost count and would have to sit down and list them carefully to be sure I had everything covered).

What to do? The obvious solution is to wrap all my network activity in a VPN tunnel through my ISP’s routers before I hit the wider internet. That way my ISP can’t log anything beyond the fact that I am using a VPN. But which VPN to use? And should I go for a commercial service or roll my own? Bear in mind that not all VPNs are created equal, nor are they all necessarily really private or secure. The “P” in VPN refers to the ability to interconnect two separate (probably RFC 1918) private networks across a public untrusted network. It does not actually imply anything about the end user’s privacy. And depending upon the provider chosen and the protocols used, end user privacy may be largely illusory. In the worst case scenario, depending upon the jurisdiction in which you live and your personal threat model, a badly chosen VPN provider may actually reduce privacy by drawing attention to the fact that you value that privacy. (As an aside, using Tor can also have much the same effect. Indeed, there is plenty of anecdotal evidence to suggest that Tor usage lights you up like a Christmas tree in the eyes of the main global passive adversaries (GPAs).)

Back in 2015, a team of researchers from the Sapienza University of Rome and Queen Mary University of London published a paper (PDF) entitled “A Glance through the VPN Looking Glass: IPv6 Leakage and DNS Hijacking in Commercial VPN clients”. That paper described the researchers’ findings from a survey of 14 of the better known commercial VPN providers. The team chose the providers in much the same way you or I might do so – they searched on-line for “best VPN” or “anonymous VPN” and chose the providers which came highest or most frequently in the search results. The paper is worth reading. It describes how a poor choice of provider could lead to significant traffic leakage, typically through IPV6 or DNS. The table below is taken from their paper.

The paper describes some countermeasures which may mitigate some of the problems. In my case I disable IPV6 at the router and apply firewall rules at both the desktop and VPS end of the tunnel to deny IPV6. My local DNS resolver files point to the OpenVPN endpoint (where I run a DNS resolver stub) for resolution and both that server and my local DNS resolvers (dnsmasq) point only to opennic DNS servers. It may help.
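By way of illustration, the IPv6-blocking and DNS-pinning parts of a setup like that might look something like the sketch below (in ip6tables-restore and dnsmasq.conf form respectively; the 10.8.0.1 tunnel address is a made-up example, not my real endpoint):

```text
# /etc/iptables/rules.v6 -- deny all IPv6 (loaded with ip6tables-restore).
# Apply the same idea at both the desktop and the VPS end of the tunnel.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
COMMIT

# /etc/dnsmasq.conf fragment -- ignore /etc/resolv.conf and send all
# queries to the resolver stub listening at the OpenVPN endpoint
# (10.8.0.1 here is purely illustrative)
no-resolv
server=10.8.0.1
```

With rules like these in place, any IPv6 traffic that would otherwise bypass the (IPv4) tunnel is simply dropped, and local name resolution can only go where the tunnel goes.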

There are reports that usage of commercial VPN providers has gone up since the passage of the IP Act. Many commercial VPN providers will be using the passage of the Act as a potential booster for their services. And there are plenty of VPN providers about – just do what the Sapienza and Queen Mary researchers did and search for “VPN Provider” or “VPN services” to get lots of different lists, or take a look at lists provided by such sites as PrivacyTools or BestVPN. One useful point about the better commercial providers is that they usually have substantial infrastructure in place offering VPN exit points in various geographic locations. This can be particularly useful if you want to appear to be based in a particular country. Our own dear old BBC for example will block access to some services if you are not UK based (or if you are UK based and try to access services designed for overseas users). This can be problematic for UK citizens travelling overseas who wish to view UK services. A VPN with a UK exit gets around that problem. VPN users can also use local exits when they wish to access similarly (stupidly) protected services in foreign locales (the idiots in the media companies who are insistent on DRM in all its manifest forms are becoming more than just tiresome).

Some of the commercial services look better than others to me, but they all have one simple flaw as far as I am concerned. I don’t control the service. And no matter what the provider may say about “complete anonymity” (difficult if you want to pay by credit card) or “no logs”, the reality is that either there will be logs or the provider may be forced to divulge information by law. And don’t forget the problem of traffic leakage through IPV6 or DNS noted above. One further problem for me in using a commercial VPN provider rather than my own endpoint(s) is that I cannot then predict my apparent source IP address. This matters to me because my firewall rules limit ssh access to my various servers by source IP address. If I don’t know the IP address I am going to pop out on, then I’m going to have to relax that rule. I choose not to. I have simply amended my iptables rules to permit access from all my VPN endpoints.
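For what it’s worth, the sort of iptables rule change involved is trivial – something like this sketch (the addresses are documentation examples, not my actual VPN endpoints):

```text
# iptables-restore fragment: allow ssh only from known VPN endpoint
# addresses (examples only) and drop it from everywhere else
*filter
-A INPUT -p tcp --dport 22 -s 192.0.2.10 -j ACCEPT
-A INPUT -p tcp --dport 22 -s 198.51.100.20 -j ACCEPT
-A INPUT -p tcp --dport 22 -j DROP
COMMIT
```

One ACCEPT line per endpoint keeps the allowlist explicit, and the final DROP preserves the default-deny stance for everyone else.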

The Goldenfrog site has an interesting take on VPN anonymity. (Note that Goldenfrog market their own VPN service called “VyprVPN”, so they are not entirely disinterested observers, but the post is still worth reading nevertheless.) If you are simply concerned with protecting your privacy whilst browsing the net, and you are not concerned about anonymity, then there may be a case for you to consider using a commercial provider – just don’t pick a UK company, because they will be subject to lawful intercept requests under the IP Act. Personally I’d shy away from US based companies too (a view that is shared by others, so it’s not just me). I would also only pick a provider which supports OpenVPN (or possibly SoftEther) in preference to less secure protocols such as PPTP or L2TP. (For a comparison of the options, see this BestVPN blog post.)

If you wish to use a commercial VPN provider, then I would strongly recommend that you pay for it – and check the contractual arrangements carefully to ensure that they match your requirements. I suggest this for the same reasons I recommend that you pay for an email service. You get a contract. In my view, using a free VPN service might be worse than using no VPN. Think carefully about the business model for free provision of services on the ‘net. Google is a good example of the sort of free service provider which I find problematic. Using a commercial, paid for, VPN service has the distinct advantage that the provider has a vested interest in keeping his clients’ details, and activity, private. After all, his business depends upon that. Trust is fragile and easily lost. If your business is predicated on trustworthiness then I would argue that you will (or should) work hard to maintain that trust. PrivacyTools has a good set of recommendations for VPN providers.

But what if, like me, you are still unsure about using a commercial VPN? Should you use your own setup (as I do)? Here are some things to think about.

Using a commercial VPN


For: Probably easier than setting up OpenVPN on a self-managed VPS for most people. The service provider will usually offer configuration files aimed at all the most popular operating systems. In many cases you will get a “point and click” application interface which will allow you to select the country you wish to pop out in.
Against: “Easier” does not mean “safer”. For example, the VPN provider may provide multiple users with the same private key wrapped up in its configuration files. Or the provider may not actually use OpenVPN. The provider may not offer support for YOUR chosen OS, or YOUR router. Beware in particular of “binary blob” installation of VPN software or configuration files (this applies particularly to Windows users). Unless you are technically competent (which you may not be if you are relying on this sort of installation) then you have no idea what is in that binary installation.

For: You get a contract (if you pay!)
Against: That contract may not be as strong as you might wish, or it might specifically exclude some things you might wish to see covered. Check the AUP before you select your provider. You get what you pay for.

For: Management and maintenance of the service (e.g. software patching) is handled by the provider.
Against: You rely on the provider to maintain a secure, up to date, fully patched service. Again, you get what you pay for.

For: The provider (should) take your security and privacy seriously. Their business depends on it.
Against: The provider may hold logs, or be forced to log activity if local LEAs require that. They may also make simple mistakes which leak evidence of your activity (is their DNS secure?). The VPN service is also a large, attractive, juicy target for hostile activity by organised crime and/or Global Passive Adversaries such as GCHQ and the NSA. Consider your threat model and act accordingly.

For: Your network activity is “lost” in the noise of the activity of others.
Against: Your legal and legitimate activity could provide “cover” for the criminal activity of others. If this results in LEA seizure (or otherwise surveillance) of the VPN endpoint then your activity is swept up in the investigation. Are you prepared for the possible consequences of that?

For: You should get “unlimited” bandwidth (if you pay for it).
Against: You may have to trade that off for reduced access speed, particularly if you are in contention for network usage with a large number of other users.

For: You (may) be able to set up the account completely anonymously using bitcoin.
Against: Using a VPN provider cannot guarantee you are anonymous. All it can do is enhance your privacy. Do not rely on a VPN to hide illegal activity. (And don’t rely on Tor for that either!)

For: You may be able to select from a wide range of exit locations depending upon need.
Against: “Most VPN providers are terrible”.


Using your own VPN


For: You get full control over the protocol you use, the DNS servers you use, the ciphers you choose and the location(s) you pop up in.
Against: You have to know what you are doing and you have to be comfortable configuring the VPN software. Moreover, you need to be sure that you can actually secure the server on which you install the VPN server software as well as the client end. There is no point in having a “secure” tunnel if the end server leaks like a sieve or is subject to surveillance by the server provider – you have just shifted surveillance from the UK ISP to someone else.

For: It can be cheaper than using a commercial service.
Against: It may not be. If you want to be able to pop out in different countries you will have to pay for multiple VPSs in multiple datacentres. You will also be responsible for maintaining those servers.

For: You can be confident that your network activity is actually private because you can enforce your own no-logging policy.
Against: No you can’t be sure. The VPS provider may log all activity. Check the privacy policy carefully. And be aware that the provider of a 3 euro a month VPS is very likely to dump you in the lap of any LEA who comes knocking on the door should you be stupid enough to use the VPN for illegal activity (or even any activity which breaches their AUP). Also bear in mind that you have no plausible deniability through hiding in the traffic of lots of others if you are the only user of the VPN – which you paid for with your credit card.


I’ve used OpenVPN quite a lot in the past. I like it, it has a good record for privacy and security, it is relatively easy to set up, and it is well supported on a range of different devices. I have an OpenVPN endpoint on a server on the outer screened subnet which forms part of my home network so that I can connect privately to systems when I am out and about and wish my source IP to appear to be that at my home address. This can be useful when I am stuck in such places as airport lounges, internet cafes, foreign (or even domestic) hotels etc. So when the IP Act was still but a gleam in the eyes of some of our more manic lords and masters, I set up one or two more OpenVPN servers on various VPSs I have dotted about the world. In testing, I’ve found that using a standard OpenVPN setup (using UDP as the transport) has only a negligible impact on my network usage – certainly much less than using Tor.
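For anyone wanting to try the same thing, a minimal server-side configuration on such a VPS might look something like the sketch below (the file names, tunnel subnet and pushed DNS address are illustrative assumptions, not my actual values):

```text
# Minimal OpenVPN server config sketch (UDP transport, default port)
port 1194
proto udp
dev tun
# certificates and keys generated with, e.g., easy-rsa (names illustrative)
ca ca.crt
cert server.crt
key server.key
dh dh.pem
tls-auth ta.key 0
# hand out addresses on a private tunnel subnet
server 10.8.0.0 255.255.255.0
# send all client traffic through the tunnel and point clients at a
# resolver reachable over it
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 10.8.0.1"
keepalive 10 120
user nobody
group nogroup
persist-key
persist-tun
```

The “redirect-gateway” push is what makes the tunnel the default route for clients, rather than just a route to the VPS itself.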

Apart from the privacy offered by OpenVPN, particularly when properly configured to use forward secrecy as provided by TLS (see gr3t for some tips on improving security in your configuration), we can also make the tunnel difficult to block. We don’t (yet) see many blanket attempts to block VPN usage in the UK, but in some other parts of the world, notably China or reportedly the UAE for example, such activity can be common. By default OpenVPN uses UDP as the transport protocol and the server listens on port 1194. This well known port and/or protocol combination could easily be blocked at the network level. Indeed, some hotels, internet cafes and airport lounges routinely (and annoyingly) block all traffic to ports other than 80 and 443. If, however, we reconfigure OpenVPN to use TCP as the transport and listen on port 443, then its traffic becomes indistinguishable from HTTPS which makes blocking it much more difficult. There is a downside to this though. The overhead of running TCP over TCP can degrade your network experience. That said however, in my view a slightly slower connection is infinitely preferable to no connection or an unprotected connection.

In my testing, even using Tor over the OpenVPN tunnel (so that my Tor entry point appears to the Tor network to be the OpenVPN endpoint) didn’t degrade my network usage too much. This sort of Tor usage is made easier by the fact that I run my Tor client (either Tails, or Whonix) from within a virtual server instance running on one of my desktops. Thus if the desktop is connected to an OpenVPN tunnel then the Tor client is forced to use that tunnel to connect to Tor and thence the outside world.

However, this set up has a few disadvantages, not least the fact that I might forget to fire up the OpenVPN tunnel on my desktop before starting to use Tor. But the biggest problem I face in running a tunnel from my desktop is that it only protects activity /from/ that desktop. Any network connections from any of my mobile devices, my laptops, my various servers, or other network connected devices (as I said, I have lost count) or, most importantly, my family’s devices, are perforce unprotected unless I can set up OpenVPN clients on them. In some cases this may be possible (my wife’s laptop for example) but it certainly isn’t ideal and in many cases (think my kids’ ‘phones for example) it is going to be completely impractical. So the obvious solution is to move the VPN tunnel entry point to my domestic router. That way, /all/ traffic to the net will be forced over the tunnel.

When thinking about this, I initially considered using a raspberry pi as the router, but my own experience of the pi’s networking capability left me wondering whether it would cope with my intended use case. The problem with the pi is that it only has one ethernet port and its broadcom chip only supports USB 2.0 connection. Internally the pi converts ethernet to USB. Since the chip is connected to four external USB ports, and I would need to add a USB to ethernet converter externally as well as a USB wifi dongle in order to get the kind of connectivity I want (which includes streaming video), I fear that I might overwhelm the pi – certainly I suspect the device would become a bottleneck. However, I have /not/ tested this (yet) so I have no empirical evidence either way.

My network is already segmented in that I have a domestic ADSL router connected to my ISP and a separate, internal ethernet/WiFi only router connecting to that external router. It looks (something) like this:



Since all the devices I care most about are inbound of the internal router (and wired rather than wifi where I really care) I can treat the network between the two devices as a sacrificial screened subnet. I consider that subnet to be almost as hostile as the outside world. I could therefore add the pi to the external screened net and thus create another separate internal network which is wifi only. That wouldn’t help with my wired devices (which tend to be the ones I really worry about) but it would give me a good test network which I could use as “guest only” access to the outside world. I have commented in the past about the etiquette of allowing guests access to my network. I currently force such access over my external router so that the guests don’t get to see my internal systems. However, that means that in future they won’t get the protection offered by my VPN. That doesn’t strike me as fair so I might yet set up a pi as described (or in fact add another router, they are cheap enough).

Having discounted the pi as a possibility, another obvious solution would be to re-purpose an old linux box (I have plenty), but that would consume way more power than I need to waste and looks to be overkill, so the obvious solution is to stick with the purpose-built router option. Now both OpenWrt (or its fork LEDE) and the more controversial DD-WRT offer the possibility of custom-built routers with OpenVPN client capability built in. The OpenWrt wiki has a good description of how to set up OpenVPN. The DD-WRT wiki entry is somewhat less good, but then OpenWrt/LEDE would probably be a better choice in my view anyway. I’ve used OpenWrt in the past (on an Asus WL-500g) but found it a bit flaky. Possibly that is a reflection of the router I used (fairly old, bought cheap off ebay) and I should probably try again with a more modern device.

But right now it is possible to buy new, capable SOHO routers with OpenVPN capability off the shelf. A quick search for “openvpn routers” will give you devices by Asus, Linksys, Netgear, Cisco or some really interesting little devices by GL Innovations. The GL devices actually come with OpenWrt baked in, and both the GL-MT300N and the slightly better specced GL-AR300M look to be particularly useful. I leave the choice of router to you, but you should be aware that many SOHO routers have lamentably poor security out of the box and even worse security update histories. You also need to bear in mind that VPN capability is resource intensive, so you should choose the device with the fastest CPU and most RAM you can afford. I personally chose an Asus device as my VPN router (and yes, it is patched to the latest level….) simply because they are being actively audited at the moment and seem to be taking security a little more seriously than some of their competitors. I may yet experiment with one of the GL devices though.

Note here that I do /not/ use the OpenVPN router as the external router connected directly to my ISP, my new router replaced my old “inside net” router. This means that whilst all the connections I really care about are tunnelled over the OpenVPN route to my endpoint (which may be in one of several European datacentres depending upon how I feel) I can still retain a connection to the outside world which is /not/ tunnelled. There are a couple of reasons for this. Firstly some devices I use actually sometimes need a UK IP presence (think streaming video from catch-up TV or BBC news for example). Secondly, I also wish to retain a separate screened sub-net to house my internal OpenVPN server (to allow me to appear to be using my home network should I so choose when I’m out and about). And of course I may occasionally just like to use an unprotected connection simply to give my ISP some “noise” for his logs….

So, having chosen the router, we now need to configure it to use OpenVPN in client mode. My router can also be configured as a server, so that it would allow incoming tunnelled connections from the outside to my network, but I don’t want that, and nor probably do you. In my case such inbound connections would in any event fail because my external router is so configured as to only allow inbound connections to a webserver and my (separate) OpenVPN server on the screened subnet. It does not permit any other inbound connections, nor does my internal router accept connections from either the outside world or the screened subnet. My internal screened OpenVPN server is configured to route traffic back out to the outside world because it is intended only for such usage.

My new internal router expects its OpenVPN configuration file to follow a specific format. I found this to be poorly documented (but that is not unusual). Here’s how mine looks (well, not exactly for obvious reasons, in particular the (empty) keys are not real, but the format is correct).


# config file for router to VPN endpoint 1
# MBM 09/12/16

dev tun
proto udp
remote 1194
resolv-retry infinite
user nobody

# Asus router can’t cope with group change so:
# group nogroup

<ca>
-----BEGIN CERTIFICATE-----
(CA certificate not shown)
-----END CERTIFICATE-----
</ca>

<cert>
-----BEGIN CERTIFICATE-----
(client certificate not shown)
-----END CERTIFICATE-----
</cert>

<key>
-----BEGIN PRIVATE KEY-----
(client key not shown)
-----END PRIVATE KEY-----
</key>

<tls-auth>
-----BEGIN OpenVPN Static key V1-----
(key material not shown)
-----END OpenVPN Static key V1-----
</tls-auth>

key-direction 1
auth SHA512
remote-cert-tls server
cipher AES-256-CBC

# end configuration

If you are using a commercial VPN service rather than your own OpenVPN endpoint, then your provider should give you configuration files much like those above. As I mentioned earlier, beware of “binary blob” non-text configurations.

If your router is anything like mine, you will need to upload the configuration file using the administrative web interface and then activate it. My router allows several different configurations to be stored so that I can vary my VPN endpoints depending on where I wish to pop up on the net. Of course this means that I have to pay for several different VPSs to run OpenVPN on, but at about 3 euros a month for a suitable server, that is not a problem. I choose providers who:

  • are not UK based or owned;
  • have AUPs which allow VPN usage (it helps if they are also Tor friendly);
  • have datacentre presences in more than one location (say Germany, as well as the Ukraine);
  • allow installation of my choice of OS;
  • have decent reputations for connectivity and uptime; and
  • are cheap.

Whilst this may appear at first sight to be problematic, there are in fact a large number of such providers dotted around Europe. Be aware, however, that many small providers are simply resellers of services provided by other, larger, companies. This can mean that whilst you appear to be using ISP “X” in, say, Bulgaria, you are actually using servers owned and managed by a major German company or at least are on networks so owned. Be careful and do your homework before signing up to a service. I have found the lowendtalk site very useful for getting leads and for researching providers. The lowendbox website is also a good starting point for finding cheap deals when you want to test your setup.

Now go take back your privacy.


Some of the sites I found useful when considering my options are listed below.

Check your IP address and the DNS servers you are using at

Also check whether you are leaking DNS requests outside the tunnel at

You can also check for DNS leakage at dnsleaktest.

is a very useful resource – and not just for VPN comparisons.

and look to be two of the better paid-for commercial services.

TheBestVPN site offers a VPN Comparison and some reviews of 20 providers.

A very thorough comparison of 180 different commercial VPN providers is given by “that one privacy guy“. The rest of his (or her) site is also well worth exploring.

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): Yarn architecture discussion

Fri, 05/05/2017 - 16:45

Recently Rob and I visited Soile and Lars. We had a lovely time wandering around Helsinki with them, and I also spent a good chunk of time with Lars working on some design and planning for the Yarn test specification and tooling. You see, I wrote a Rust implementation of Yarn called rsyarn "for fun" and in doing so I noted a bunch of missing bits in the understanding Lars and I shared about how Yarn should work. Lars and I filled, and re-filled, a whiteboard with discussion about what the 'Yarn specification' should be, about various language extensions and changes, and also about what functionality a normative implementation of Yarn should have.

This article is meant to be a write-up of all of that discussion, but before I start on that, I should probably summarise what Yarn is.

Yarn is a mechanism for specifying tests in a form which is more like documentation than code. Yarn follows the concept of BDD story based design/testing and has a very Cucumberish scenario language in which to write tests. Yarn takes, as input, Markdown documents which contain code blocks with Yarn tests in them; and it then runs those tests and reports on the scenario failures/successes.
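The input side of that can be sketched in a few lines of Python (my own illustration, not code from any Yarn implementation): find the fenced code blocks in a Markdown document and hand their contents on to the scenario runner.

```python
def extract_code_blocks(markdown):
    """Return the contents of fenced ``` code blocks, in document order.

    A deliberately naive sketch: real CommonMark parsing also has to
    handle indented code blocks, tilde fences and info strings.
    """
    blocks, current = [], None
    for line in markdown.splitlines():
        if line.startswith("```"):
            if current is None:
                current = []                     # opening fence
            else:
                blocks.append("\n".join(current))  # closing fence
                current = None
        elif current is not None:
            current.append(line)
    return blocks
```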

As an example of a poorly written but still fairly effective Yarn suite, you could look at Gitano's tests or perhaps at Obnam's tests (rendered as HTML). Yarn is not trying to replace unit testing, nor other forms of testing, but rather seeks to be one of a suite of test tools used to help validate software and to verify integrations. Lars writes Yarns which test his server setups for example.

As an example, let's look at what a simple test might be for the behaviour of the /bin/true tool:

SCENARIO true should exit with code zero
WHEN /bin/true is run with no arguments
THEN the exit code is 0
AND stdout is empty
AND stderr is empty

Anyone ought to be able to understand exactly what that test is doing, even though there's no obvious code to run. Yarn statements are meant to be easily grokked by both developers and managers. This should be so that managers can understand the tests which verify that requirements are being met, without needing to grok python, shell, C, or whatever else is needed to implement the test where the Yarns meet the metal.

Obviously, there needs to be a way to join the dots, and Yarn calls those things IMPLEMENTS, for example:

IMPLEMENTS WHEN (\S+) is run with no arguments
set +e
"${MATCH_1}" > "${DATADIR}/stdout" 2> "${DATADIR}/stderr"
echo $? > "${DATADIR}/exitcode"

As you can see from the example, Yarn IMPLEMENTS can use regular expressions to capture parts of their invocation, allowing the test implementer to handle many different scenario statements with one implementation block. For the rest of the implementation, whatever you assume about things will probably be okay for now.
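A minimal Python sketch of that matching step (again my own illustration, not either implementation's code; the MATCH_1-style names just follow the shell convention in the example above) might look like:

```python
import re

def bind_implements(statement, patterns):
    """Match a scenario statement (keyword already stripped) against a
    list of IMPLEMENTS regexps; return the matching pattern plus an
    environment exposing the captures as MATCH_1, MATCH_2, ..."""
    for pattern in patterns:
        m = re.fullmatch(pattern, statement)
        if m:
            env = {"MATCH_%d" % (i + 1): group
                   for i, group in enumerate(m.groups())}
            return pattern, env
    raise LookupError("no IMPLEMENTS found for: " + statement)
```

For the WHEN statement above, matching "/bin/true is run with no arguments" against the pattern (\S+) is run with no arguments would hand the shell snippet MATCH_1=/bin/true.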

Given all of the above, we (Lars and I) decided that it would make a lot of sense if there was a set of Yarn scenarios which could validate a Yarn implementation. Such a document could also form the basis of a Yarn specification and also a manual for writing reasonable Yarn scenarios. As such, we wrote up a three-column approach to what we'd need in that test suite.

Firstly we considered what the core features of the Yarn language are:

  • Scenario statements themselves (SCENARIO, GIVEN, WHEN, THEN, ASSUMING, FINALLY, AND, IMPLEMENTS, EXAMPLE, ...)
  • Whitespace normalisation of statements
  • Regexp language and behaviour
  • IMPLEMENTS current directory, data directory, home directory, and also environment.
  • Error handling for the statements, or for missing IMPLEMENTS
  • File (and filename) encoding
  • Labelled code blocks (since commonmark includes the backtick code block kind)
  • Exactly one IMPLEMENTS per statement

We considered unusual (or corner) cases and which of them needed defining in the short to medium term:

  • Statements before any SCENARIO or IMPLEMENTS
  • Meaning of split code blocks (concatenation?)
  • Meaning of code blocks not at the top level of a file (ignore?)
  • Meaning of HTML style comments in markdown files
  • Odd scenario ordering (e.g. ASSUMING at the end, or FINALLY at the start)
  • Meaning of empty lines in code blocks or between them.

All of this comes down to how to interpret input to a Yarn implementation. In addition there were a number of things we felt any "normative" Yarn implementation would have to handle or provide in order to be considered useful. It's worth noting that we don't specify anything about an implementation being a command line tool though...

  • Interpreter for IMPLEMENTS (and arguments for them)
  • "Library" for those implementations
  • Ability to require that failed ASSUMING statements lead to an error
  • A way to 'stop on first failure'
  • A way to select a specific scenario to run, from a large suite.
  • Generation of timing reports (per scenario and also per statement)
  • A way to 'skip' missing IMPLEMENTS
  • A clear way to identify the failing step in a scenario.
  • Able to treat multiple input files as a single suite.

There's bound to be more, but right now with the above, we believe we have two roughly conformant Yarn implementations. Lars' Python based implementation which lives in cmdtest (and which I shall refer to as pyyarn for now) and my Rust based one (rsyarn).

One thing which rsyarn supports, but pyyarn does not, is running multiple scenarios in parallel. However when I wrote that support into rsyarn I noticed that there were plenty of issues with running stuff in parallel. (A problem I'm sure any of you who know about threads will appreciate).

One particular issue was that scenarios often need to share resources which are not easily sandboxed into the ${DATADIR} provided by Yarn. For example databases or access to limited online services. Lars and I had a good chat about that, and decided that a reasonable language extension could be:

USING database foo

with its counterpart

RESOURCE database (\S+)
LABEL database-$1
GIVEN a database called $1
FINALLY database $1 is torn down

The USING statement should be reasonably clear in its pairing to a RESOURCE statement. The LABEL statement I'll get to in a moment (though it's only relevant in a RESOURCE block); the rest of the statements are essentially substituted into the calling scenario at the point of the USING.

This is nowhere near ready to consider adding to the specification though. Both Lars and I are uncomfortable with the $1 syntax though we can't think of anything nicer right now; and the USING/RESOURCE/LABEL vocabulary isn't set in stone either.

The idea of the LABEL is that we'd also require that a normative Yarn implementation be capable of specifying resource limits by name. E.g. if a RESOURCE used a LABEL foo then the caller of a Yarn scenario suite could specify that there were 5 foos available. The Yarn implementation would then schedule a maximum of 5 scenarios which are using that label to happen simultaneously. At bare minimum it'd gate new users, but at best it would intelligently schedule them.

In addition, since this introduces the concept of parallelism into Yarn proper, we also wanted to add a maximum parallelism setting to the Yarn implementation requirements; and to specify that any resource label which was not explicitly set had a usage limit of 1.

Once we'd discussed the parallelism, we decided that once we had a nice syntax for expanding these sets of statements anyway, we may as well have a syntax for specifying scenario language expansions which could be used to provide something akin to macros for Yarn scenarios. What we came up with as a starter-for-ten was:

CALLING write foo

paired with

EXPANDING write (\S+)
GIVEN bar
WHEN $1 is written to
THEN success was had by all

Again, the CALLING/EXPANDING keywords are not fixed yet, nor is the $1 type syntax, though whatever is used here should match the other places where we might want it.

Finally we discussed multi-line inputs in Yarn. We currently have a syntax akin to:

GIVEN foo
... bar
... baz

which is directly equivalent to:

GIVEN foo bar baz

and this is achieved by collapsing the multiple lines and using the whitespace normalisation functionality of Yarn to replace all whitespace sequences with single space characters. However this means that, for example, injecting chunks of YAML into a Yarn scenario is a pain, as would be including any amount of another whitespace-sensitive input language.
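The current collapsing behaviour amounts to something like this (my sketch, not code from either implementation):

```python
import re

def join_continuations(lines):
    """Fold '...' continuation lines into the preceding statement."""
    statements = []
    for line in lines:
        stripped = line.strip()
        if stripped.startswith("...") and statements:
            statements[-1] += " " + stripped[3:].strip()
        else:
            statements.append(stripped)
    return statements

def normalise(statement):
    """Collapse every whitespace run to a single space."""
    return re.sub(r"\s+", " ", statement).strip()
```

which is exactly why embedded YAML comes out mangled: every newline and indent is folded away before the statement reaches its IMPLEMENTS.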

After a lot of to-ing and fro-ing, we decided that the right thing to do would be to redefine the ... Yarn statement to be whitespace preserving and to then pass that whitespace through to be matched by the IMPLEMENTS or whatever. In order for that to work, the regexp matching would have to be defined to treat the input as a single line, allowing . to match \n etc.

Of course, this would mean that the old functionality wouldn't be possible, so we considered allowing a \ at the end of a line to provide the current kind of behaviour, rewriting the above example as:

GIVEN foo \
bar \
baz

It's not as nice, but since we couldn't find any real uses of ... in any of our Yarn suites where having the whitespace preserved would be an issue, we decided it was worth the pain.

None of the above is, as of yet, set in stone. This blog posting is about me recording the information so that it can be referred to; and also to hopefully spark a little bit of discussion about Yarn. We'd welcome emails to our usual addresses, being poked on Twitter, or on IRC in the common spots we can be found. If you're honestly unsure of how to get hold of us, just comment on this blog post and I'll find your message eventually.

Hopefully soon we can start writing that Yarn suite which can be used to validate the behaviour of pyyarn and rsyarn and from there we can implement our new proposals for extending Yarn to be even more useful.

Categories: LUG Community Blogs

Chris Lamb: Free software activities in April 2017

Sun, 30/04/2017 - 17:35

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • I was elected Debian Project Leader for 2017. I'd like to sincerely thank everyone who voted for me as well as everyone who took part in the election in general, especially Mehdi Dogguy for being a worthy opponent. The result was covered on LWN, Phoronix, DistroWatch, iTWire, etc.
  • Added support for the Monzo banking API in social-core, a Python library to allow web applications to authenticate using third-parties. (#68)
  • Fixed a HTML injection attack in a demo of Russell Keith-Magee's BeeWare presentation library. (#3)
  • Updated systemd's documentation to explain why we suggest explicitly calling make all despite the Makefile's "check" target calling it. (#5830)
  • Updated the documentation of a breadth-first version of find(1) called bfs to refer to the newly-uploaded Debian package. (#23)
  • Updated the configuration for the ticketbot IRC bot (zwiebelbot on OFTC) to identify #reproducible-builds as a Debian-related channel. This is so that Debian bug numbers are automatically expanded by the bot. (#7)
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

I also made the following changes to diffoscope, our recursive and content-aware diff utility used to locate and diagnose reproducibility issues:

  • New features:
    • Add support for comparing Ogg Vorbis files. (0436f9b)
  • Bug fixes:
    • Prevent a traceback when using --new-file with containers. (#861286)
    • Don't crash on invalid archives; print a useful error instead. (#833697).
    • Don't print error output from bzip2 call. (21180c4)
  • Cleanups:
    • Prevent abstraction-level violations by defining visual diff support on Presenter classes. (7b68309)
    • Show Debian packages installed in test output. (c86a9e1)

Debian Patches contributed Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 882-1 for the tryton-server general application platform to fix a path suffix injection attack.
  • Issued DLA 883-1 for curl preventing a buffer read overrun vulnerability.
  • Issued DLA 884-1 for collectd (a statistics collection daemon) to close a potential infinite loop vulnerability.
  • Issued DLA 885-1 for the python-django web development framework patching two open redirect & XSS attack issues.
  • Issued DLA 890-1 for ming, a library to create Flash files, closing multiple heap-based buffer overflows.
  • Issued DLA 892-1 and DLA 891-1 for the libnl3/libnl Netlink protocol libraries, fixing integer overflow issues which could have allowed arbitrary code execution.
  • redis (4:4.0-rc3-1) — New upstream RC release.
  • adminer:
    • 4.3.0-2 — Fix debian/watch file.
    • 4.3.1-1 — New upstream release.
  • bfs:
    • 1.0-1 — Initial release.
    • 1.0-2 — Drop fstype tests as they rely on /etc/mtab being available. (#861471)
  • python-django:
    • 1:1.10.7-1 — New upstream security release.
    • 1:1.11-1 — New upstream stable release to experimental.

I sponsored the following uploads:

I also performed the following QA uploads:

  • gtkglext (1.2.0-7) — Correct installation location of gdkglext-config.h after "Multi-Archification" in 1.2.0-5. (#860007)

Finally, I made the following non-maintainer uploads (NMUs):

  • python-formencode (1.3.0-2) — Don't ship files in /usr/lib/python{2.7,3}/dist-packages/docs. (#860146)
  • django-assets (0.12-2) — Patch pytest plugin to check whether we are running in a Django context, otherwise we can break unrelated testsuites. (#859916)
RC bugs filed

I also filed 2 bugs for packages that access the internet during build (against fail2ban & ruby-rack-proxy). I also filed 11 FTBFS bugs against bup, golang-github-lunny-nodb, hunspell-dict-ko, icinga-web, nanoc, oggvideotools, polygen, python-dogpile.cache, reapr, tendermint-go-merkle & z88.

FTP Team

As a Debian FTP assistant I ACCEPTed 155 packages: aiohttp-cors, bear, colorize, erlang-p1-xmpp, fenrir, firejail, fizmo-console, flask-ldapconn, flask-socketio, fonts-blankenburg, fortune-zh, fw4spl, fzy, gajim-antispam, gdal, getdns, gfal2, gmime, golang-github-go-macaron-captcha, golang-github-go-macaron-i18n, golang-github-gogits-chardet, golang-github-gopherjs-gopherjs, golang-github-jroimartin-gocui, golang-github-lunny-nodb, golang-github-markbates-goth, golang-github-neowaylabs-wabbit, golang-github-pkg-xattr, golang-github-siddontang-goredis, golang-github-unknwon-cae, golang-github-unknwon-i18n, golang-github-unknwon-paginater, grpc, grr-client-templates, gst-omx, hddemux, highwayhash, icedove, indexed-gzip, jawn, khal, kytos-utils, libbloom, libdrilbo, libhtml-gumbo-perl, libmonospaceif, libpsortb, libundead, llvm-toolchain-4.0, minetest-mod-homedecor, mini-buildd, mrboom, mumps, nnn, node-anymatch, node-asn1.js, node-assert-plus, node-binary-extensions, node-bn.js, node-boom, node-brfs, node-browser-resolve, node-browserify-des, node-browserify-zlib, node-cipher-base, node-console-browserify, node-constants-browserify, node-delegates, node-diffie-hellman, node-errno, node-falafel, node-hash-base, node-hash-test-vectors, node-hash.js, node-hmac-drbg, node-https-browserify, node-jsbn, node-json-loader, node-json-schema, node-loader-runner, node-miller-rabin, node-minimalistic-crypto-utils, node-p-limit, node-prr, node-sha.js, node-sntp, node-static-module, node-tapable, node-tough-cookie, node-tunein, node-umd, open-infrastructure-storage-tools, opensvc, openvas, pgaudit, php-cassandra, protracker, pygame, pypng, python-ase, python-bip32utils, python-ltfatpy, python-pyqrcode, python-rpaths, python-statistics, python-xarray, qtcharts-opensource-src, r-cran-cellranger, r-cran-lexrankr, r-cran-pwt9, r-cran-rematch, r-cran-shinyjs, r-cran-snowballc, ruby-ddplugin, ruby-google-protobuf, ruby-rack-proxy, ruby-rails-assets-underscore, rustc, sbt,
sbt-launcher-interface, sbt-serialization, sbt-template-resolver, scopt, seqsero, shim-signed, sniproxy, sortedcollections, starjava-array, starjava-connect, starjava-datanode, starjava-fits, starjava-registry, starjava-table, starjava-task, starjava-topcat, starjava-ttools, starjava-util, starjava-vo, starjava-votable, switcheroo-control, systemd, tilix, tslib, tt-rss-notifier-chrome, u-boot, unittest++, vc, vim-ledger, vis, wesnoth-1.13, wolfssl, wuzz, xandikos, xtensor-python & xwallpaper.

I additionally filed 14 RC bugs against packages that had incomplete debian/copyright files against getdns, gfal2, grpc, mrboom, mumps, opensvc, python-ase, sniproxy, starjava-topcat, starjava-ttools, unittest++, wolfssl, xandikos & xtensor-python.

Categories: LUG Community Blogs

Mick Morgan: free Dmitry Bogatov

Thu, 27/04/2017 - 16:11

Dmitry Bogatov, aka KAction, is a Russian free software activist and mathematics teacher at Moscow’s Finance and Law University. He was arrested in Russia on 6 April of this year and charged with extremism. He is currently held in a pre-trial detention centre, and is apparently likely to remain there until early June at least, while investigations continue. The Russian authorities claim that Bogatov published messages on a Russian website, “”, inciting violent action at the opposition protest demonstration held in Moscow on 2 April.

Bogatov is well known in the free software community as a contributor to debian. As a privacy activist he runs a Tor exit node in Russia and it is this latter point which would appear to have caused his difficulty. Apparently, Bogatov’s Tor exit node was logged as the source address for the inflammatory posts in question. The debian project have taken the precaution of revoking Bogatov’s keys which allow him to post material to the project. They see those keys as compromised following his arrest and the seizure of his computing equipment.

Bogatov claims (with some justification it would appear) that he had nothing to do with the posts of which he is accused. Indeed, at the time of the post from his Tor node he claims that he was at a gym with his wife and visited a supermarket immediately afterwards. CCTV footage from the store supports this claim.

Operating a Tor node is not illegal in Russia, nor is it illegal in many other jurisdictions around the world. However, the act of doing so can draw attention to yourself as a possible “dissident” wherever you may live.

I am a passionate fan of free software, I use debian (and its derivatives) as my preferred operating system. I am an advocate of privacy enhancing tools such as GPG, Tor and OpenVPN, and I run a Tor node.

I hope that Dmitry Bogatov is treated fairly and in due course is proved innocent of the charges he faces. I post this message in support.

Categories: LUG Community Blogs

Chris Lamb: Elected Debian Project Leader

Sun, 16/04/2017 - 13:52

I'd like to thank the entire Debian community for choosing me to represent them as the next Debian Project Leader.

I would also like to thank Mehdi for his tireless service and wish him all the best for the future. It is an honour to be elected as the DPL and I am humbled that you would place your faith and trust in me.

You can read my platform here.

Categories: LUG Community Blogs

Chris Lamb: Free software activities in March 2017

Fri, 31/03/2017 - 23:01

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • Fixed two issues in, a web-based version of the diffoscope in-depth and content-aware diff utility:
    • Fix command-line API breakage. (commit)
    • Use over (commit)
  • Made a number of improvements to, my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds on every code change, including:
    • Correctly detecting the distribution to build with for some tags. (commit)
    • Use Lintian from the backports repository where appropriate. (#44)
    • Don't build upstream/ branches even if they contain .travis.yml files. (commit)
  • Fixed an issue in django-staticfiles-dotd, my Django staticfiles adaptor to concatenate .d-style directories, where some .d directories were being skipped. This was caused by modifying the contents of a Python list during iteration. (#3)
  • Performed some miscellaneous cleanups in django12factor, a Django utility to make projects adhere better to the 12-factor web-application philosophy. (#58)
  • Submitted a pull request for Doomsday-Engine, a portable, enhanced source port of Doom, Heretic and Hexen, to make the build reproducible (#16)
  • Created a pull request for gdata-python-client (a Python client library for Google APIs) to make the build reproducible. (#56)
  • Authored a pull request for the MochaJS JavaScript test framework to make the build reproducible. (#2727)
  • Filed a pull request against vine, a Python promises library, to avoid a non-deterministic default keyword argument appearing in the documentation. (#12)
  • Filed an issue for the Redis key-value database addressing build failures on the MIPS architecture. (#3874)
  • Submitted a bug report against xdotool — a tool to automate window and keyboard interactions — reporting a crash when searching after binding an action with behave. (#169)
  • Reviewed a pull request from Dan Palmer for django-email-from-template, a library to send emails in Django generated entirely from the templating system, which intends to add an option to send mails upon transaction commit.
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

I also made the following changes to our tooling:


diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • New features/optimisations:
    • Extract squashfs archive in one go rather than per-file, speeding up ISO comparison by ~10x.
    • Add support for .docx and .odt files via docx2txt & odt2txt. (#859056).
    • Add support for PGP files via pgpdump. (#859034).
    • Add support for comparing Pcap files. (#858867).
    • Compare GIF images using gifbuild. (#857610).
  • Bug fixes:
    • Ensure that we really are using ImageMagick and not the GraphicsMagick compatibility layer. (#857940).
    • Fix and add test for meaningless 1234-content metadata when introspecting archives. (#858223).
    • Fix detection of ISO9660 images processed with isohybrid.
    • Skip icc tests if the Debian-specific patch is not present. (#856447).
    • Support newer versions of cbfstool to avoid test failures. (#856446).
    • Update the progress bar prior to working to ensure filename is in sync.
  • Cleanups:
    • Use /usr/share/dpkg/ over manual calls to dpkg-parsechangelog in debian/rules.
    • Ensure tests and the runtime environment can locate binaries in /usr/sbin (eg. tcpdump).


strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Fix a possible endless loop while stripping .ar files due to trusting the file's own file size data. (#857975).
  • Add support for testing files we should reject and include the filename when evaluating fixtures.

is my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.

  • Add support for Format: 1.0. (#20).
  • Don't parse Format: header as the source package version. (#21).
  • Show the reproducible status of packages.


I submitted my platform for the 2017 Debian Project Leader Elections. This was subsequently covered on LWN and I have been participating in the discussions on the debian-vote mailing list since then.

Patches contributed Debian LTS

This month I have been paid to work 14.75 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 848-1 for the freetype font library fixing a denial of service vulnerability.
  • Issued DLA 851-1 for wget preventing a header injection attack.
  • Issued DLA 863-1 for the deluge BitTorrent client correcting a cross-site request forgery vulnerability.
  • Issued DLA 864-1 for jhead (an EXIF metadata tool) patching an arbitrary code execution vulnerability.
  • Issued DLA 865-1 for the suricata intrusion detection system, fixing an IP protocol matching error.
  • Issued DLA 871-1 for python3.2 fixing a TLS stripping vulnerability in the smtplib library.
  • Issued DLA 873-1 for apt-cacher preventing a HTTP response splitting vulnerability.
  • Issued DLA 876-1 for eject to prevent an issue regarding the checking of setuid(2) and setgid(2) return values.
  • python-django:
    • 1:1.10.6-1 — New upstream bugfix release.
    • 1:1.11~rc1-1 — New upstream release candidate.
  • redis:
    • 3:3.2.8-2 — Avoid conflict between RuntimeDirectory and tmpfiles.d(5) both attempting to create /run/redis with differing permissions. (#856116)
    • 3:3.2.8-3 — Revert the creation of a /usr/bin/redis-check-rdb to /usr/bin/redis-server symlink to avoid a dangling symlink if only the redis-tools package is installed. (#858519)
  • gunicorn 19.7.0-1 & 19.7.1-1 — New upstream releases.
  • adminer 4.3.0-1 — New upstream release.

Finally, I also made the following non-maintainer uploads (NMUs):

Debian bugs filed

I additionally filed 5 bugs for packages that access the internet during build against golang-github-mesos-mesos-go, ipywidgets, ruby-bunny, ruby-http & sorl-thumbnail.

I also filed 13 FTBFS bugs against android-platform-frameworks-base, ariba, calendar-exchange-provider, cylc, git, golang-github-grpc-ecosystem-go-grpc-prometheus, node-dateformat, python-eventlet, python-tz, sogo-connector, spyder-memory-profiler, sushi & tendermint-go-rpc.

FTP Team

As a Debian FTP assistant I ACCEPTed 121 packages: 4pane, adql, android-platform-system-core, android-sdk-helper, braillegraph, deepnano, dh-runit, django-auth-ldap, django-dirtyfields, drf-extensions, gammaray, gcc-7, gnome-keysign, golang-code.gitea-sdk, golang-github-bluebreezecf-opentsdb-goclient, golang-github-bsm-redeo, golang-github-cupcake-rdb, golang-github-denisenkom-go-mssqldb, golang-github-exponent-io-jsonpath, golang-github-facebookgo-ensure, golang-github-facebookgo-freeport, golang-github-facebookgo-grace, golang-github-facebookgo-httpdown, golang-github-facebookgo-stack, golang-github-facebookgo-subset, golang-github-go-openapi-loads, golang-github-go-openapi-runtime, golang-github-go-openapi-strfmt, golang-github-go-openapi-validate, golang-github-golang-geo, golang-github-gorilla-pat, golang-github-gorilla-securecookie, golang-github-issue9-assert, golang-github-issue9-identicon, golang-github-jaytaylor-html2text, golang-github-joho-godotenv, golang-github-juju-errors, golang-github-kisielk-gotool, golang-github-kubernetes-gengo, golang-github-lpabon-godbc, golang-github-lunny-log, golang-github-makenowjust-heredoc, golang-github-mrjones-oauth, golang-github-nbutton23-zxcvbn-go, golang-github-neelance-sourcemap, golang-github-ngaut-deadline, golang-github-ngaut-go-zookeeper, golang-github-ngaut-log, golang-github-ngaut-pools, golang-github-ngaut-sync2, golang-github-optiopay-kafka, golang-github-quobyte-api, golang-github-renstrom-dedent, golang-github-sergi-go-diff, golang-github-siddontang-go, golang-github-smartystreets-go-aws-auth, golang-github-xanzy-go-cloudstack, golang-github-xtaci-kcp, golang-github-yohcop-openid-go, graywolf, haskell-raaz, hfst-ospell, hikaricp, iptraf-ng, kanboard-cli, kcptun, kreport, libbluray, libcatmandu-store-elasticsearch-perl, libcsfml, libnet-prometheus-perl, libosmocore, libpandoc-wrapper-perl, libseqlib, matrix-synapse, mockldap, nfs-ganesha, node-buffer, node-pako, nose-el, nvptx-tools, nx-libs, 
open-ath9k-htc-firmware, pagein, paleomix, pgsql-ogr-fdw, profanity, pyosmium, python-biotools, python-django-extra-views, python-django-otp, python-django-push-notifications, python-dnslib, python-gmpy, python-gmpy2, python-holidays, python-kanboard, python-line-profiler, python-pgpy, python-pweave, python-raven, python-xapian-haystack, python-xopen, r-cran-v8, repetier-host, ruby-jar-dependencies, ruby-maven-libs, ruby-psych, ruby-retriable, seafile-client, spyder-unittest, stressant, systray-mdstat, telegram-desktop, thawab, tigris, tnseq-transit, typesafe-config, vibe.d, x2goserver & xmlrpc-c.

I additionally filed 14 RC bugs against packages that had incomplete debian/copyright files against: golang-github-cupcake-rdb, golang-github-sergi-go-diff, graywolf, hfst-ospell, libbluray, pgsql-ogr-fdw, python-gmpy, python-gmpy2, python-pgpy, python-xapian-haystack, repetier-host, telegram-desktop, tigris & xmlrpc-c.

Categories: LUG Community Blogs

Mick Morgan: pwned

Sat, 18/03/2017 - 13:55

I recently received a spam email to one of my email addresses. In itself this is annoying, but not particularly interesting or that unusual (despite my efforts to avoid such nuisances). What was unusual was the form of the address because it contained a username I have not used in a long time, and only on one specific site.

The address took the form “username” <realaddress@realdomain> and the email invited me to hook up with a “hot girl” who “was missing me”. The return address was at a Russian domain.

Intrigued as to how this specific UID and address had appeared in my inbox I checked Troy Hunt’s haveibeenpwned database and found that, sure enough, the site I had signed up to with that UID had been compromised. I have since both changed the password on that site (too late of course because it would seem that the password database was stored insecurely) and deleted the account (which I haven’t used in years anyway). I don’t /think/ that I have used that particular UID/password combination anywhere else, but I’m checking nonetheless.
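(For anyone wanting to run a similar check on a password rather than an address, the Pwned Passwords side of Troy Hunt's service exposes a k-anonymity range API: you send only the first five hex characters of the password's SHA-1 and do the matching locally. A rough sketch — the example password is deliberately a terrible stand-in, and the lookup just reports a failure if you're offline:)

```shell
# Query the Pwned Passwords range API: only the first five hex characters
# of the SHA-1 ever leave this machine (k-anonymity).
password='password'                      # stand-in example, obviously
hash=$(printf '%s' "$password" | sha1sum | awk '{print toupper($1)}')
prefix=${hash:0:5}
suffix=${hash:5}
count=$(curl -sf "$prefix" | tr -d '\r' \
        | awk -F: -v s="$suffix" 'toupper($1) == s { print $2 }')
echo "times seen in known breaches: ${count:-0 (or lookup failed)}"
```

If the count comes back non-zero, that password is burned: treat it as public knowledge.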

The obvious lesson here is that a) password re-use is a /very/ bad idea and b) even old unused accounts can later cause you difficulty if you don’t manage them actively.

But you knew that anyway. Didn’t you?

Categories: LUG Community Blogs

Jonathan McDowell: Rational thoughts on the GitHub ToS change

Thu, 02/03/2017 - 19:13

I woke this morning to Thorsten claiming the new GitHub Terms of Service could require the removal of Free software projects from it. This was followed by joeyh removing everything from github. I hadn’t actually been paying attention, so I went looking for some sort of summary of whether I should be worried and ended up reading the actual ToS instead. TL;DR version: No, I’m not worried and I don’t think you should be either.

First, a disclaimer. I’m not a lawyer. I have some legal training, but none of what I’m about to say is legal advice. If you’re really worried about the changes then you should engage the services of a professional.

The gist of the concerns around GitHub’s changes are that they potentially circumvent any license you have applied to your code, either converting GPL licensed software to BSD style (and thus permitting redistribution of binary forms without source) or making it illegal to host software under certain Free software licenses on GitHub due to being unable to meet the requirements of those licenses as a result of GitHub’s ToS.

My reading of the GitHub changes is that they are driven by a desire to ensure that GitHub are legally covered for the things they need to do with your code in order to run their service. There are sadly too many people who upload code there without a license, meaning that technically no one can do anything with it. Don’t do this people; make sure that any project you put on GitHub has some sort of license attached to it (don’t write your own - it’s highly likely one of Apache/BSD/GPL will suit your needs) so people know whether they can make use of it or not. “I don’t care” is not a valid reason not to do this.

Section D, relating to user generated content, is the one causing the problems. It’s possibly easiest to walk through each subsection in order.

D1 says GitHub don’t take any responsibility for your content; you make it, you’re responsible for it, they’re not accepting any blame for harm your content does nor for anything any member of the public might do with content you’ve put on GitHub. This seems uncontentious.

D2 reaffirms your ownership of any content you create, and requires you to only post 3rd party content to GitHub that you have appropriate rights to. So I can’t, for example, upload a copy of ‘Friday’ by Rebecca Black.

Thorsten has some problems with D3, where GitHub reserve the right to remove content that violates their terms or policies. He argues this could cause issues with licenses that require unmodified source code. This seems to be alarmist, and also applies to any random software mirror. The intent of such licenses is in general to ensure that the pristine source code is clearly separate from 3rd party modifications. Removal of content that infringes GitHub’s T&Cs is not going to cause an issue.

D4 is a license grant to GitHub, and I think forms part of joeyh’s problems with the changes. It affirms the content belongs to the user, but grants rights to GitHub to store and display the content, as well as make copies such as necessary to provide the GitHub service. They explicitly state that no right is granted to sell the content at all or to distribute the content outside of providing the GitHub service.

This term would seem to be the minimum necessary for GitHub to ensure they are allowed to provide code uploaded to them for download, and provide their web interface. If you’ve actually put a Free license on your code then this isn’t necessary, but from GitHub’s point of view I can understand wanting to make it explicit that they need these rights to be granted. I don’t believe it provides a method of subverting the licensing intent of Free software authors.

D5 provides more concern to Thorsten. It seems he believes that the ability to fork code on GitHub provides a mechanism to circumvent copyleft licenses. I don’t agree. The second paragraph of this subsection limits the license granted to the user to be the ability to reproduce the content on GitHub - it does not grant them additional rights to reproduce outside of GitHub. These rights, to my eye, enable the forking and viewing of content within GitHub but say nothing about my rights to check code out and ignore the author’s upstream license.

D6 clarifies that if you submit content to a GitHub repo that features a license you are licensing your contribution under these terms, assuming you have no other agreement in place. This looks to be something that benefits projects on GitHub receiving contributions from users there; it’s an explicit statement that such contributions are under the project license.

D7 confirms the retention of moral rights by the content owner, but states they are waived purely for the purposes of enabling GitHub to provide service, as stated under D4. In particular, the waiver is revocable, so in the event they do something you don't like you can instantly remove all of their rights. Thorsten is more worried about the ability to remove attribution and thus breach CC-BY or some BSD licenses, but GitHub's whole model is providing attribution for changesets and tracking such changes over time, so it's hard to understand exactly where the service falls down on ensuring the provenance of content is clear.

There are reasons to be wary of GitHub (they’ve taken a decentralised revision control system and made a business model around being a centralised implementation of it, and they store additional metadata such as PRs that aren’t as easily extracted), but I don’t see any indication that the most recent changes to their Terms of Service are something to worry about. The intent is clearly to provide GitHub with the legal basis they need to provide their service, rather than to provide a means for them to subvert the license intent of any Free software uploaded.

Categories: LUG Community Blogs

Brett Parker (iDunno): Using the Mythic Beasts IPv4 -> IPv6 Proxy for Websites on a v6 only Pi and getting the right REMOTE_ADDR

Wed, 01/03/2017 - 19:35

So, more because I was intrigued than anything else, I've got a pi3 from Mythic Beasts; they're supplied with IPv6-only connectivity, and the file storage is NFS over a private v4 network. The proxy will happily forward either http or https requests to the Pi, but without turning on the Proxy Protocol this results in the remote addresses in your logs being those of the proxy servers, which is not entirely useful.

I've cheated a bit, because turning on the Proxy Protocol for the addresses is currently not exposed to customers (it's on the list!); to do it without access to Mythic's backends, use your own domain name (I've also got one mapped to this Pi).

So, first things first, we get our RPi and we make sure that we can log in to it via ssh (I'm nearly always on a v6 connection anyway, so this was a simple case of sshing to the v6 address of the Pi). I then installed haproxy and apache2 on the Pi and went about configuring them. With apache2, I changed it to listen on localhost only, on ports 8080 and 4443; I hadn't at this point enabled the ssl module so, really, the change for 4443 didn't kick in. Here's my /etc/apache2/ports.conf file:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen [::1]:8080

<IfModule ssl_module>
	Listen [::1]:4443
</IfModule>

<IfModule mod_gnutls.c>
	Listen [::1]:4443
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

I then edited /etc/apache2/sites-available/000-default.conf to change the VirtualHost line to [::1]:8080.

So, with that in place, we now deploy haproxy in front of it. The basic /etc/haproxy/haproxy.cfg config is:

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend any_http
    option httplog
    option forwardfor
    acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
    tcp-request connection expect-proxy layer4 if is_from_proxy
    bind :::80
    default_backend any_http

backend any_http
    server apache2 ::1:8080

Obviously after that you then do:

systemctl restart apache2
systemctl restart haproxy

Now you have a proxy protocol'd setup from the proxy servers, and you can still talk directly to the Pi over IPv6. You're not yet logging the right remote IPs, but we're a step closer. Next, enable mod_remoteip in apache2:

a2enmod remoteip

And add a file, /etc/apache2/conf-available/remoteip-logformats.conf containing:

LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" remoteip_vhost_combined

And edit the /etc/apache2/sites-available/000-default.conf to change the CustomLog line to use remoteip_vhost_combined rather than combined as the LogFormat and add the relevant RemoteIP settings:

RemoteIPHeader X-Forwarded-For
RemoteIPTrustedProxy ::1
CustomLog ${APACHE_LOG_DIR}/access.log remoteip_vhost_combined

Now, enable the config and restart apache2:

a2enconf remoteip-logformats
systemctl restart apache2

Now you'll get the right remote ip in the logs (cool, huh!), and, better still, the environment that gets pushed through to cgi scripts/php/whatever is now also correct.
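If you want to convince yourself that the corrected address really is what reaches CGI scripts, a disposable test script does the trick (the temp directory here is purely for illustration — in practice you'd drop it into your cgi-bin):

```shell
# a tiny CGI script that echoes back the client address apache hands it
cgidir=$(mktemp -d)        # illustration only; normally /usr/lib/cgi-bin
cat > "$cgidir/addr.cgi" <<'EOF'
#!/bin/sh
printf 'Content-Type: text/plain\r\n\r\n'
printf 'REMOTE_ADDR=%s\n' "$REMOTE_ADDR"
EOF
chmod +x "$cgidir/addr.cgi"
# simulate the web server invoking it with a proxied client address
REMOTE_ADDR='2001:db8::42' "$cgidir/addr.cgi"
```

With mod_remoteip in place, hitting that script through the proxy should show the real client address rather than ::1.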

So, you can now happily visit http://www.<your-pi-name>.

Next up, you'll want something like dehydrated - I grabbed the packaged version from debian's jessie-backports repository - so that you can make yourself some nice shiny SSL certificates (why wouldn't you, after all!). Once you've got dehydrated installed, you'll probably want to tweak it a bit; I have some magic extra files that I use, and I also suggest getting the dehydrated-apache2 package, which just makes it all much easier too.

The hook script, /etc/dehydrated/hooks/srwpi, contains:
#!/bin/sh

action="$1"
domain="$2"

case $action in
    deploy_cert)
        privkey="$3"
        cert="$4"
        fullchain="$5"
        chain="$6"
        cat "$privkey" "$fullchain" > /etc/ssl/private/srwpi.pem
        chmod 640 /etc/ssl/private/srwpi.pem
        ;;
    *)
        ;;
esac

/etc/dehydrated/hooks/srwpi has the execute bit set (chmod +x /etc/dehydrated/hooks/srwpi), and is really only there so that the certificate can be used easily in haproxy.

And finally the file /etc/dehydrated/domains.txt:

Obviously, use your own pi name in there, or better yet, one of your own domain names that you've mapped to the proxies.

Run dehydrated in cron mode (it's noisy, but meh...):

dehydrated -c
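(Rather than remembering to run that by hand, a cron entry does the job; this is a hypothetical /etc/cron.d line, assuming the Debian-packaged path of /usr/bin/dehydrated:)

```shell
# /etc/cron.d/dehydrated (hypothetical): attempt a renewal run weekly;
# dehydrated itself skips any certificate that isn't close to expiry
30 3 * * 1 root /usr/bin/dehydrated -c > /dev/null 2>&1
```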

That should then have generated you some shiny certificates (hopefully). For now, I'll just tell you how to do it through the /etc/apache2/sites-available/default-ssl.conf file: edit that file and change the SSLCertificateFile and SSLCertificateKeyFile to point to the fullchain and privkey files under /var/lib/dehydrated/certs/, do the edit for the CustomLog as you did for the other default site, change the VirtualHost to be [::1]:4443, and enable the site:

a2ensite default-ssl
a2enmod ssl

And restart apache2:

systemctl restart apache2

Now time to add some bits to haproxy.cfg, usefully this is only a tiny tiny bit of extra config:

frontend any_https
    option httplog
    option forwardfor
    acl is_from_proxy src 2a00:1098:0:82:1000:3b:1:1 2a00:1098:0:80:1000:3b:1:1
    tcp-request connection expect-proxy layer4 if is_from_proxy
    bind :::443 ssl crt /etc/ssl/private/srwpi.pem
    default_backend any_https

backend any_https
    server apache2 ::1:4443 ssl ca-file /etc/ssl/certs/ca-certificates.crt

Restart haproxy:

systemctl restart haproxy

And we're all done! REMOTE_ADDR will appear as the correct remote address in the logs, and in the environment.

Categories: LUG Community Blogs

Brett Parker (iDunno): Ooooooh! Shiny!

Wed, 01/03/2017 - 16:12

Yay! So, it's a year and a bit on from the last post (eeep!), and we get the news of the Psion Gemini - I wants one, that looks nice and shiny and just the right size to not be inconvenient to lug around all the time, and far better for ssh usage than the onscreen keyboard on my phone!

Categories: LUG Community Blogs

Chris Lamb: Free software activities in February 2017

Tue, 28/02/2017 - 23:09

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • Submitted a number of pull requests to the Django web development framework:
    • Add a --mode=unified option to the "diffsettings" management command. (#8113)
    • Fix a crash in setup_test_environment() if ALLOWED_HOSTS is a tuple. (#8101)
    • Use Python 3 "shebangs" now that the master branch is Python 3 only. (#8105)
    • URL namespacing warning should consider nested namespaces. (#8102)
  • Created an experimental patch against the Python interpreter in order to find reproducibility-related assumptions in dict handling in arbitrary Python code. (#29431)
  • Filed two issues against dh-virtualenv, a tool to package Python virtualenv environments in Debian packages:
    • Fix "upgrage-pip" typo in usage documentation. (#195)
    • Missing DH_UPGRADE_SETUPTOOLS equivalent for dh_virtualenv (#196)
  • Fixed a large number of spelling corrections in Samba, a free-software re-implementation of the Windows networking protocols.
  • Reviewed and merged a pull request by @jheld for django-slack (my library to easily post messages to the Slack group-messaging utility) to support per-message backends and channels. (#63)
  • Created a pull request for django-two-factor-auth, a complete Two-Factor Authentication (2FA) framework for projects using the Django web development framework to drop use of the @lazy_property decorator to ensure compatibility with Django 1.11. (#195)
  • Filed, triaged and eventually merged a change from @evgeni to fix an autopkgtest-related issue in my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds on every code change. (#41)
  • Submitted a pull request against social-core — a library to allow Python applications to authenticate against third-party web services such as Facebook, Twitter, etc. — to use the more-readable X if Y else Z construction over Y and X or Z. (#44)
  • Filed an issue against freezegun (a tool to make it easier to write Python tests involving times) to report that dateutils was missing from requirements.txt. (#173)
  • Submitted a pull request against the Hypothesis "QuickCheck"-like testing framework to make the build reproducible. (#440)
  • Fixed an issue reported by @davidak in trydiffoscope (a web-based version of the diffoscope in-depth and content-aware diff utility) where the maximum upload size was incorrectly calculated. (#22)
  • Created a pull request for the Mars Simulation Project to remove some embedded timestamps from the changelog.gz and mars-sim.1.gz files in order to make the build reproducible. (#24)
  • Filed a bug against the cpio archiving utility to report that the testsuite fails when run in the UTC +1300 timezone. (Thread)
  • Submitted a pull request against the "pnmixer" system-tray volume mixer in order to make the build reproducible. (#153)
  • Sent a patch to Testfixtures (a collection of helpers and mock objects that are useful when writing Python unit tests or doctests) to make the build reproducible. (#56)
  • Created a pull request for the "Cloud" Sphinx documentation theme in order to make the output reproducible. (#22)
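One of the items above concerns hunting out reproducibility-related assumptions in Python's dict handling; the interpreter's hash randomisation is the easiest way to watch that kind of nondeterminism in action. A quick demonstration (none of this is part of the patch itself):

```shell
# set iteration order in Python depends on the interpreter's hash seed;
# pinning PYTHONHASHSEED makes separate runs agree, which is the essence
# of build reproducibility
run1=$(PYTHONHASHSEED=1 python3 -c 'print(list({"ab", "cd", "ef"}))')
run2=$(PYTHONHASHSEED=2 python3 -c 'print(list({"ab", "cd", "ef"}))')
run3=$(PYTHONHASHSEED=1 python3 -c 'print(list({"ab", "cd", "ef"}))')
echo "seed 1:       $run1"
echo "seed 2:       $run2"
echo "seed 1 again: $run3"   # always identical to the first run
```

Any build step that iterates over an unordered container and writes the result into its output is therefore a reproducibility bug waiting to happen.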
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
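To make that concrete, here is a toy "build" that embeds a timestamp in its output: it differs run-to-run until the timestamp is pinned via SOURCE_DATE_EPOCH, the environment-variable convention the Reproducible Builds project promotes for exactly this class of problem. The build function is made up; only the convention is real:

```shell
workdir=$(mktemp -d) && cd "$workdir"
# a pretend build step that stamps "now" into its output
build() { printf 'built at %s\n' "${SOURCE_DATE_EPOCH:-$(date +%s.%N)}" > "$1"; }
build first.bin
build second.bin
cmp -s first.bin second.bin || echo "unpinned: outputs differ"
# pin the embedded timestamp: independent rebuilds now agree byte-for-byte
export SOURCE_DATE_EPOCH=1488326400
build third.bin
build fourth.bin
cmp -s third.bin fourth.bin && echo "pinned: outputs identical"
```

Multiple parties rebuilding with the same pinned inputs can then compare checksums to detect a compromised build.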

(I have been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.)

This month I:

I also made the following changes to our tooling:


diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • New features:
    • Add a machine-readable JSON output format. (Closes: #850791).
    • Add an --exclude option. (Closes: #854783).
    • Show results from debugging packages last. (Closes: #820427).
    • Extract archive members using an auto-incrementing integer avoiding the need to sanitise filenames. (Closes: #854723).
    • Apply --max-report-size to --text output. (Closes: #851147).
    • Specify <html lang="en"> in the HTML output. (re. #849411).
  • Bug fixes:
    • Fix errors when comparing directories with non-directories. (Closes: #835641).
    • Device and RPM fallback comparisons require xxd. (Closes: #854593).
    • Fix tests that call xxd on Debian Jessie due to change of output format. (Closes: #855239).
    • Add missing Recommends for comparators. (Closes: #854655).
    • Importing submodules (ie. parent.child) will attempt to import parent. (Closes: #854670).
    • Correct logic of module_exists ensuring we correctly skip the debian.deb822 tests when python3-debian is not installed. (Closes: #854745).
    • Clean all temporary files in the signal handler thread instead of attempting to pass the exception back to the main thread. (Closes: #852013).
    • Fix behaviour of setting report maximums to zero (ie. no limit).
  • Optimisations:
    • Don't uselessly run xxd(1) on non-directories.
    • No need to track libarchive directory locations.
    • Optimise create_limited_print_func.
  • Tests:
    • When comparing two empty directories, ensure that the mtime of the directory is consistent to avoid non-deterministic failures.
    • Ensure we can at least import the "deb_fallback" and "rpm_fallback" modules.
    • Add test for symlink differing in destination.
    • Add tests for --progress, --status-fd and profiling output options as well as the Deb{Changes,Buildinfo,Dsc} and RPM fallback comparisons.
    • Add get_data and @skip_unless_module_exists test helpers.
    • Mark impossible-to-reach code to improve test coverage.

My separate experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them:

  • Drop raw_text fields now as we've moved these to Amazon S3.
  • Drop storage of Installed-Build-Depends and subsequently-orphaned Binary package instances to recover diskspace.


strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Print log entry when fixing a file. (Closes: #777239).
  • Run our entire testsuite in autopkgtests, not just the first test. (Closes: #852517).
  • Don't test for stat(2)'s blksize and block attributes. (Closes: #854937).
  • Use error() rather than a "manual" die().

Debian

Patches contributed

Debian LTS

This month I have been paid to work 13 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 817-1 for libphp-phpmailer, correcting a local file disclosure vulnerability where insufficient parsing of HTML messages could potentially be used by an attacker to read a local file.
  • Issued DLA 826-1 for wireshark, which fixes a denial of service vulnerability where a malformed NATO Ground Moving Target Indicator Format ("STANAG 4607") capture file could cause memory exhaustion or an infinite loop.

Uploads

  • python-django (1:1.11~beta1-1) — New upstream beta release.
  • redis (3:3.2.8-1) — New upstream release.
  • gunicorn (19.6.0-11) — Use ${misc:Pre-Depends} to populate Pre-Depends for dpkg-maintscript-helper.
  • dh-virtualenv (1.0-1~bpo8+1) — Upload to jessie-backports.

I sponsored the following uploads:

I also performed the following QA uploads:

  • dh-kpatches (0.99.36+nmu4) — Make kernel builds reproducible.

Finally, I made the following non-maintainer uploads:

  • cpio (2.12+dfsg-3) — Remove rmt.8.gz to prevent a piuparts error.
  • dot-forward (1:0.71-2.2) — Correct a FTBFS; we don't install anything to /usr/sbin, so use GNU Make's $(wildcard ..) over the shell's own * expansion.
Debian bugs filed

I also filed 15 FTBFS bugs against binaryornot, chaussette, examl, ftpcopy, golang-codegangsta-cli, hiro, jarisplayer, libchado-perl, python-irc, python-stopit, python-stopit, python-stopit, python-websockets, rubocop & yash.

FTP Team

As a Debian FTP assistant I ACCEPTed 116 packages: autobahn-cpp, automat, bglibs, bitlbee, bmusb, bullet, case, certspotter, checkit-tiff, dash-el, dash-functional-el, debian-reference, el-x, elisp-bug-hunter, emacs-git-messenger, emacs-which-key, examl, genwqe-user, giac, golang-github-cloudflare-cfssl, golang-github-docker-goamz, golang-github-docker-libnetwork, golang-github-go-openapi-spec, golang-github-google-certificate-transparency, golang-github-karlseguin-ccache, golang-github-karlseguin-expect, golang-github-nebulouslabs-bolt, gpiozero, gsequencer, jel, libconfig-mvp-slicer-perl, libcrush, libdist-zilla-config-slicer-perl, libdist-zilla-role-pluginbundle-pluginremover-perl, libevent, libfunction-parameters-perl, libopenshot, libpod-weaver-section-generatesection-perl, libpodofo, libprelude, libprotocol-http2-perl, libscout, libsmali-1-java, libtest-abortable-perl, linux, linux-grsec, linux-signed, lockdown, lrslib, lua-curses, lua-torch-cutorch, mariadb-10.1, mini-buildd, mkchromecast, mocker-el, node-arr-exclude, node-brorand, node-buffer-xor, node-caller, node-duplexer3, node-ieee754, node-is-finite, node-lowercase-keys, node-minimalistic-assert, node-os-browserify, node-p-finally, node-parse-ms, node-plur, node-prepend-http, node-safe-buffer, node-text-table, node-time-zone, node-tty-browserify, node-widest-line, npd6, openoverlayrouter, pandoc-citeproc-preamble, pydenticon, pyicloud, pyroute2, pytest-qt, pytest-xvfb, python-biomaj3, python-canonicaljson, python-cgcloud, python-gffutils, python-h5netcdf, python-imageio, python-kaptan, python-libtmux, python-pybedtools, python-pyflow, python-scrapy, python-scrapy-djangoitem, python-signedjson, python-unpaddedbase64, python-xarray, qcumber, r-cran-urltools, radiant, repo, rmlint, ruby-googleauth, ruby-os, shutilwhich, sia, six, slimit, sphinx-celery, subuser, swarmkit, tmuxp, tpm2-tools, vine, wala & x265.

I additionally filed 8 RC bugs against packages that had incomplete debian/copyright files: checkit-tiff, dash-el, dash-functional-el, libcrush, libopenshot, mkchromecast, pytest-qt & x265.

Categories: LUG Community Blogs

Mick Morgan: this is what a scary man looks like

Thu, 09/02/2017 - 16:23

No, I mean the one on the right – the one Trump is pointing at.

General John Kelly is just one of Trump’s controversial appointments (and not necessarily the worst) and I guess that by writing this now, I have finally nailed down the lid on the coffin of my ever returning to the US. Pity. I had promised my wife that I would take her to San Francisco in the near future so that she could see for herself why I like it. I’ve visited the USA several times in the past, but only on business and never with my lady. Now it would seem that I cannot go, because I will not submit her, nor myself, to the indignity of being treated like a criminal simply because I wish to enter the country.

Today, El Reg reports that General Kelly has said that he wants the right to demand passwords for social media and financial accounts from some visa applicants so that immigration and homeland security officers can vet Twitter, Facebook or online banking accounts.

Kelly is reported to have said:

“We want to say ‘what kind of sites do you visit and give us your passwords,’ so we can see what they do. We want to get on their social media with passwords – what do you do, what do you say. If they don’t want to cooperate then they don’t come in. If they truly want to come to America they’ll cooperate, if not then ‘next in line’.”

Now as El Reg points out:

“By “they”, Kelly was referring to refugees and visa applicants from the seven Muslim countries subject to President Trump’s anti-immigration executive order, which was signed last month.”

But it goes on:

“Given the White House’s tough stance on immigration, we can imagine the scope of this “enhanced vetting” creeping from that initial subset to cover visitors of other nationalities. Just simply wait for the president to fall out with another country.”

Or for individuals to draw attention to themselves by being publicly critical of some of the more worrying developments in the USA…..

My own experience of US immigration, even whilst travelling under an A2 Visa, is such that I would most certainly not wish to enter the country if I were to be treated with anything like the hostility I know could be possible. Unfortunately that also means that I might have a problem should I ever wish to fly anywhere else in the world which necessitates a stopover in the US.

The reason I think Kelly may be truly scary? He is reported to have told Representative Kathleen Rice under questioning that:

“I work for one man, his name is Donald Trump, and he told me ‘Kelly, secure the border,’ and that’s what I’m going to do,”

In typical El Reg commentard style, some responders have been less than subtle about this response, evoking obvious references to Godwin’s Law, but one poster, called Jim-234 notes:

“This is a truly stupid plan that is bound to fail on so many levels and will do nothing but upset decent people and open them up to hacking & identity theft while doing nothing to actually stop people who want to cause harm. It reeks of lazy ignorant fools who want to be seen to do something rather than actually do something that works…..

“This is just going to be security theater and bothering everyone and invading their privacy for no net effect at all. As soon as it goes live, all the bad guys will know they need a clean profile online, there will probably even be special paid services to make your online profile all nice and minty fresh, probably even with posting and messaging “good” stuff to make sure you look nice online.”

Jim-234 concludes:

“They want to start demanding your passwords for your phones & laptops?

.. well pretty soon all they will find is factory reset phones, laptops with a never used OS and a new booming business for Chinese, Russian and European data centers of “whole system data backups”.

The only good news is that if this goes live, everyone will probably start scrubbing their Facebook profiles to be about as informative as Zuckerberg’s page… so maybe then Facebook will finally go the way of MySpace.”

Depressingly, I see the same tendency in the UK for security theatre because politicians think “we must be seen to be doing something” in order to make the people feel safer. As the saying goes, “the road to hell is paved with good intentions”.

And what about when the intentions themselves are not good?

Categories: LUG Community Blogs

Jonathan McDowell: GnuK on the Maple Mini

Tue, 07/02/2017 - 19:34

Last weekend, as a result of my addiction to buying random microcontrollers to play with, I received some Maple Minis. I bought the Baite clone direct from AliExpress - so just under £3 each including delivery. Not bad for something that’s USB capable, is based on an ARM and has plenty of IO pins.

I’m not entirely sure what my plan is for the devices, but as a first step I thought I’d look at getting GnuK up and running on it. Only to discover that chopstx already has support for the Maple Mini and it was just a matter of doing a ./configure --vidpid=234b:0000 --target=MAPLE_MINI --enable-factory-reset ; make. I’d hoped to install via the DFU bootloader already on the Mini but ended up making it unhappy so used SWD by following the same steps with OpenOCD as for the FST-01/BusPirate. (SWCLK is D21 and SWDIO is D22 on the Mini). Reset after flashing and the device is detected just fine:

usb 1-1.1: new full-speed USB device number 73 using xhci_hcd
usb 1-1.1: New USB device found, idVendor=234b, idProduct=0000
usb 1-1.1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 1-1.1: Product: Gnuk Token
usb 1-1.1: Manufacturer: Free Software Initiative of Japan
usb 1-1.1: SerialNumber: FSIJ-1.2.3-87155426

And GPG is happy:

$ gpg --card-status
Reader ...........: 234B:0000:FSIJ-1.2.3-87155426:0
Application ID ...: D276000124010200FFFE871554260000
Version ..........: 2.0
Manufacturer .....: unmanaged S/N range
Serial number ....: 87155426
Name of cardholder: [not set]
Language prefs ...: [not set]
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]

While GnuK isn’t the fastest OpenPGP smart card implementation this certainly seems to be one of the cheapest ways to get it up and running. (Plus the fact that chopstx already runs on the Mini provides me with a useful basis for other experimentation.)

Categories: LUG Community Blogs

Chris Lamb: The ChangeLog #237: Reproducible Builds and Secure Software

Sat, 04/02/2017 - 21:39

I recently appeared on the Changelog podcast to talk about the Reproducible Builds project:

Whilst I am an avid podcast listener, this was actually my first appearance on one. It was a curious and somewhat disconcerting feeling to be "just" talking to Adam and Jerod in the moment, yet knowing all the time that anything and everything I said would be distributed more widely in the future.

Categories: LUG Community Blogs

Chris Lamb: Free software activities in January 2017

Tue, 31/01/2017 - 09:54

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • Created github-sync, a tool to mirror arbitrary repositories onto GitHub.
  • Submitted two pull requests to the word-wrap Chrome browser extension that adds the ability to wrap text via the right-click context menu:
    • Support dynamically-added <textarea> elements in "rich" Javascript applications such as mail clients, etc. (#2)
    • Avoid an error message if no "editable" has been selected yet. (#1)
  • Submitted a pull request to wordwarvi (a "retro-styled old school side-scrolling shooter") to ensure the build is reproducible. (#5)
  • Filed a pull request with the yard Ruby documentation tool to ensure the generated output is reproducible. (#1048)
  • Made some improvements to my hosted service that allows projects hosting their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds on every code change:
    • Merged a pull request from Evgeni Golov to allow for skipped tests. (#39)
    • Add logging when running autopkgtests. (commit)
  • Merged a pull request from jwilk for python-fadvise, my Python interface to posix_fadvise(2), used to predeclare an access pattern for data. (#6)
  • Filed an issue against the redis key-value database regarding build failures on non-x86 architectures. (#3768)
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
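In practice the consensus step is as simple as comparing checksums of the artifacts that independent builders produced; a minimal sketch (the file names and contents are hypothetical stand-ins for real build outputs):

```shell
# Simulate two independent rebuilds producing the same artifact, then
# compare their SHA-256 digests; matching digests mean the build verified.
workdir=$(mktemp -d)
printf 'identical build output' > "$workdir/builder-a.deb"
printf 'identical build output' > "$workdir/builder-b.deb"

a=$(sha256sum "$workdir/builder-a.deb" | cut -d' ' -f1)
b=$(sha256sum "$workdir/builder-b.deb" | cut -d' ' -f1)

if [ "$a" = "$b" ]; then
    echo "builds agree"
else
    echo "builds differ"
fi
```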

(I have previously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.)

This month I:

I also made the following changes to our tooling:


diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Comparators:
    • Display magic file type when we know the file format but can't find file-specific details. (Closes: #850850).
    • Ensure our "APK metadata" file appears first, fixing non-deterministic tests. (998b288)
    • Fix APK extraction with absolute filenames. (Closes: #850485).
    • Don't error if directory containing ELF debug symbols already exists. (Closes: #850807).
    • Support comparing .ico files (Closes: #850730).
    • If we don't have a tool (eg. apktool), don't blow up when attempting to unpack it.
  • Output formats:
    • Add Markdown output format. (Closes: #848141).
    • Add RestructuredText output format.
    • Use an optimised indentation routine throughout all presenters.
    • Move text presenter to use the Visitor pattern.
    • Correctly escape value of href="" elements (re. #849411).
  • Tests:
    • Prevent FTBFS by loading fixtures as UTF-8 in case surrounding terminal is not Unicode-aware. (Closes: #852926).
    • Skip tests if binutils can't handle the object file format. (Closes: #851588).
    • Actually compare the output of text/ReST/Markdown formats to fixtures.
    • Add tests for: Comparing two empty directories, HTML output, image.ICOImageFile, --html-dir, --text-color & no arguments (beyond the filenames) emits the text output.
  • Profiling:
    • Count the number of calls, not just the total time.
    • Skip as much of the profiling overhead as possible when not enabled, for a ~2% speedup.
  • Misc:
    • Alias an expensive Config() lookup for a 10% optimisation.
    • Avoid expensive regex creation until we actually need it, speeding up diff parsing by 2X.
    • Use Pythonic logging functions based on __name__, etc.
    • Drop milliseconds from logging output.

I also worked on my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them:

  • Store files directly onto S3.
  • Drop big unique_together index to save disk space.
  • Show SHA256 checksums where space permits.

Debian LTS

This month I have been paid to work 12.75 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 773-1 for python-crypto, fixing a vulnerability where a call with an invalid parameter could crash the Python interpreter.
  • Issued DLA 777-1 for libvncserver addressing two heap-based buffer overflow attacks based on invalid FramebufferUpdate data.
  • Issued DLA 778-1 for pcsc-lite correcting a use-after-free vulnerability.
  • Issued DLA 795-1 for hesiod which fixed a weak SUID check as well as removed the hard-coding of a fallback domain if the configuration file could not be found.
  • Issued DLA 810-1 for libarchive fixing a heap buffer overflow.
  • python-django:
    • 1:1.10.5-1 — New upstream stable release.
    • 1:1.11~alpha1-1 — New upstream experimental release.
  • gunicorn (19.6.0-10) — Moved debian/README.Debian to debian/NEWS so that the recent important changes will be displayed to users when upgrading to stretch.
  • redis:
    • 3:3.2.6-2 & 4:4.0-rc2-2 — Tidy patches and rename RunTimeDirectory to RuntimeDirectory in .service files. (Closes: #850534)
    • 3:3.2.6-3 — Remove a duplicate redis-server binary by symlinking /usr/bin/redis-check-rdb. This was found by the dedup service.
    • 3:3.2.6-4 — Expand the documentation in redis-server.service and redis-sentinel.service regarding the default hardening options and how, in most installations, they can be increased.
    • 3:3.2.6-5, 3:3.2.6-6, 4:4.0-rc2-3 & 4:4.0-rc2-4 — Add taskset calls to try and avoid build failures due to parallelism in upstream test suite.
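The kind of systemd hardening options being documented in those .service files looks like the following fragment (purely illustrative, not the exact contents of the Debian units):

```ini
# Illustrative hardening directives for a redis-server.service unit.
[Service]
RuntimeDirectory=redis
NoNewPrivileges=true
PrivateTmp=true
PrivateDevices=true
ProtectHome=true
ProtectSystem=full
ReadWriteDirectories=-/var/lib/redis
```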

I also made the following non-maintainer uploads:

  • cpio:
    • 2.12+dfsg-1 — New upstream release (to experimental), refreshing all patches, etc.
    • 2.12+dfsg-2 — Add missing autoconf to Build-Depends.
  • xjump (2.7.5-6.2) — Make the build reproducible by passing -n to gzip calls in debian/rules. (Closes: #777354)
  • magicfilter (1.2-64.1) — Make the build reproducible by passing -n to gzip calls in debian/rules. (Closes: #777478)
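The gzip fix in those two uploads works because gzip embeds the input file's modification time (and name) in its header by default, so identical inputs compressed at different times produce different bytes; -n omits both. A quick demonstration with a throwaway file:

```shell
# Show that gzip -n output does not depend on the input file's mtime.
workdir=$(mktemp -d)
printf 'changelog contents\n' > "$workdir/changelog"

# First compression run.
gzip -nc "$workdir/changelog" > "$workdir/first.gz"

# Change the mtime; without -n this would alter the gzip header.
touch -t 200001010000 "$workdir/changelog"
gzip -nc "$workdir/changelog" > "$workdir/second.gz"

cmp -s "$workdir/first.gz" "$workdir/second.gz" && echo "reproducible"
```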
Debian bugs filed

RC bugs

I also filed 16 FTBFS bugs against bzr-git, coq-highschoolgeometry, eclipse-anyedit, eclipse-gef, libmojolicious-plugin-assetpack-perl, lua-curl, node-liftoff, node-liftoff, octave-msh, pcb2gcode, qtile, rt-authen-externalauth, ruby-hamster, ruby-sshkit, tika & txfixtures.

FTP Team

As a Debian FTP assistant I ACCEPTed 35 packages: chromium-browser, debichem, flask-limiter, golang-github-golang-leveldb, golang-github-nebulouslabs-demotemutex, golang-github-nwidger-jsoncolor, libatteanx-endpoint-perl, libproc-guard-perl, libsub-quote-perl, libtest-mojibake-perl, libytnef, linux, lua-sql, node-graceful-readlink, node-invariant, node-rollup,, node-timed-out, olefile, packaging-tutorial, pgrouting, pyparallel, python-coards, python-django-tagging, python-graphviz, python-irc, python-mechanicalsoup, python-persistent, python-scandir, python-stopit, r-cran-zelig, ruby-ast, ruby-whitequark-parser, sagetex & u-boot-menu.

Categories: LUG Community Blogs