LUG Community Blogs

Jonathan McDowell: Using collectd for Exim stats

Planet ALUG - Wed, 25/04/2018 - 19:16

I like graphing things; I find it’s a good way to look for abnormal patterns or try to track down the source of problems. For monitoring systems I started out with MRTG. It’s great for monitoring things via SNMP, but everything else needs some custom scripts. So at one point I moved my home network over to Munin, which is much better at graphing random bits and pieces, and coping with collecting data from remote hosts. Unfortunately it was quite heavyweight on the Thecus N2100 I was running as the central collection point at the time; data collection resulted in a lot of forking and general sluggishness. So I moved to collectd, which is written in C, relies much more on compiled plugins and doesn’t do a load of forks. It also supports a UDP based network protocol with authentication + encryption, which makes it great for running on hosts that aren’t always up - the collection point doesn’t hang around waiting for them when they’re not around.

The problem is that when it comes to things collectd doesn’t support out of the box it’s not quite so easy to get the stats - things a simple script would sort in MRTG need a bit more thought. You can go the full blown Python module route as I did for my Virgin Super Hub scripts, but that requires a bit of work. One of the things I particularly wanted to graph was the stats for my mail servers, and having to write a chunk of Python to do that seemed like overkill. Searching around found the Tail plugin, which follows a log file and applies regexes to look for stats. There are some examples for Exim on that page, but none were quite what I wanted. In case it’s of interest/use to anyone else, here’s what I ended up with (on Debian, of course, but I can’t see why it wouldn’t work elsewhere with minimal changes).

First I needed a new data set specification for email counts. I added this to /usr/share/collectd/types.db:

mail_count value:COUNTER:0:65535

Note if you’re logging to a remote collectd host this needs to be on both the host where the stats are collected and the one receiving the stats.
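An alternative to editing the shipped types.db directly is to point collectd at an additional types file with the TypesDB option; a minimal sketch, assuming a custom file at /etc/collectd/exim_types.db (once you set TypesDB you need to list the default file too, or its definitions aren’t read):

TypesDB "/usr/share/collectd/types.db" "/etc/collectd/exim_types.db"

The custom file then just needs the single mail_count line from above.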

I then dropped a file in /etc/collectd/collectd.conf.d/ called exim.conf containing the following. It’ll need tweaked depending on exactly what you log, but the first 4 <Match> stanzas should be generally useful. I have some additional logging (via log_message entries in the exim.conf deny statements) that helps me track mails that get greylisted, rejected due to ClamAV or rejected due to being listed in a DNSRBL. Tailor as appropriate for your setup:

LoadPlugin tail
<Plugin tail>
    <File "/var/log/exim4/mainlog">
        Instance "exim"
        Interval 60
        <Match>
            Regex "S=([1-9][0-9]*)"
            DSType "CounterAdd"
            Type "ipt_bytes"
            Instance "total"
        </Match>
        <Match>
            Regex "<="
            DSType "CounterInc"
            Type "mail_count"
            Instance "incoming"
        </Match>
        <Match>
            Regex "=>"
            DSType "CounterInc"
            Type "mail_count"
            Instance "outgoing"
        </Match>
        <Match>
            Regex "=="
            DSType "CounterInc"
            Type "mail_count"
            Instance "defer"
        </Match>
        <Match>
            Regex ": greylisted.$"
            DSType "CounterInc"
            Type "mail_count"
            Instance "greylisted"
        </Match>
        <Match>
            Regex "rejected after DATA: Malware:"
            DSType "CounterInc"
            Type "mail_count"
            Instance "malware"
        </Match>
        <Match>
            Regex "> rejected RCPT <.* is listed at"
            DSType "CounterInc"
            Type "mail_count"
            Instance "dnsrbl"
        </Match>
    </File>
</Plugin>

Finally, because my mail servers are low volume these days, I added a scaling filter to give me emails/minute rather than emails/second. This went in /etc/collectd/collectd.conf.d/filters.conf:

PreCacheChain "PreCache"
LoadPlugin match_regex
LoadPlugin target_scale

<Chain "PreCache">
    <Rule>
        <Match "regex">
            Plugin "^tail$"
            PluginInstance "^exim$"
            Type "^mail_count$"
            Invert false
        </Match>
        <Target "scale">
            Factor 60
        </Target>
    </Rule>
</Chain>
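To check the values are actually arriving you can query the collectd daemon directly; a sketch, assuming the unixsock plugin is enabled on the collection host and using "mailhost" as a placeholder hostname:

$ collectdctl listval | grep mail_count
$ collectdctl getval mailhost/tail-exim/mail_count-incoming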

Update: Some examples…

Categories: LUG Community Blogs

Andy Smith: Disabling edge tiling on GNOME 3.28 / Debian testing (buster)

Planet HantsLUG - Tue, 24/04/2018 - 00:07
We’ve been here before

In an earlier post I mentioned how to disable edge tiling. That was for my desktop machine which at the time was running Ubuntu 17.10 and GNOME 3.26.

My laptop, however, currently runs Debian testing (buster) with GNOME 3.28, and this method does not work.

Things that work

In fact, one of the ways the Internet suggested that didn’t work for Ubuntu does work on my Debian laptop. That is:

$ gsettings set org.gnome.shell.overrides edge-tiling false

I have no idea why, sorry.
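If you want to confirm the change actually stuck, you can read the key back; for example:

$ gsettings get org.gnome.shell.overrides edge-tiling
false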

Things that don’t work

So, for my Debian buster laptop running GNOME 3.28 under Xorg, the things that don’t work are:

$ dconf write /org/gnome/shell/extensions/classic-overrides/edge-tiling false
$ dconf write /org/gnome/mutter/edge-tiling false
$ dconf write /org/gnome/shell/overrides/edge-tiling false
Categories: LUG Community Blogs

Chris Lamb: Re-elected as Debian Project Leader

Planet ALUG - Wed, 18/04/2018 - 19:24

I have been extremely proud to have served as the Debian Project Leader since my election in early 2017. During this time I've learned a great deal about the inner workings of the Project as well as about myself. I have grown as a person thanks to all manner of new interactions and fresh experiences.

I believe it is a privilege simply to be a Debian Developer, let alone to be selected as their representative. It was therefore an even greater honour to learn that I have been re-elected by the community for another year. I profoundly and wholeheartedly thank everyone for placing their trust in me for another term.



Being the "DPL" is a hard job. It is even difficult to even communicate exactly how and any statistics somehow fail to capture it. However, I now understand the look in previous Leaders' eyes when they congratulated me on my appointment and future candidates should not nominate themselves lightly.

Indeed, I was unsure whether I would stand for re-appointment and I might not have done had it not been for some touching and encouraging words from some close confidants. They underlined to me that a year is not a long time, further counselling that I should consider myself just getting started and only now prepared to start to take on the bigger picture.



Debian itself will always face challenges but I sincerely believe that the Project remains as healthy as ever. We are uniquely cherished and remain remarkably poised to improve the free software ecosystem as a whole. Moreover, our stellar reputation for technical excellence, stability and software freedom remains highly respected and without peer. It is truly an achievement to be proud of.



I thank everyone who had the original confidence, belief and faith in me, but I offer my further sincere and humble thanks to all those who have felt they could extend this to a second term, especially with such a high turnout. I am truly excited and looking forward to the year ahead.


Categories: LUG Community Blogs

Chris Lamb: Free software activities in March 2018

Planet ALUG - Sat, 31/03/2018 - 20:10

Here is my monthly update covering what I have been doing in the free software world during March 2018 (previous month):


Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:



Debian Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, reviewing other maintainers' packages, etc.
  • Issued DLA 1299-1 fixing an XML External Entity (XXE) attack in the JGraphX diagramming library.
  • Issued DLA 1304-1 for zsh to fix four vulnerabilities, including a privilege-elevation issue.
  • Issued DLA 1306-1 for the libvips image processing library where attackers could cause a remote denial of service.
  • Issued DLA 1311-1 for the adminer web-based database administration tool. I also updated jessie-backports and proposed updates for stretch (#893803) and jessie (#893804).
  • Issued DLA 1317-1 for the net-snmp server management framework to correct a heap corruption vulnerability.
  • Issued DLA 1318-1 for irssi closing an issue where nicknames could result in out-of-bounds access.
Uploads
  • python-django (2.0.3-1 & 1.11.11-1) — New upstream security releases.
  • libfiu:
    • 0.95-5 — Add support for cross-compilation. (#892946)
    • 0.95-6 & 0.95-7 — Incorrect attempts to fix a build failure. (#893049)
    • 0.96-1 — New upstream release, fixing the aforementioned build failure caused by Make parallelism.
  • redisearch (1.0.9-1) — New upstream release.
Debian bugs filed
  • postgresql-10: Please update NEWS entry for 10.3-1. (#891898)
  • python-meshio: Installs test files to /usr/lib/python3/dist-packages. (#892019)
  • clang-tidy-7: Missing depends on libclang-common-7-dev. (#891999)
  • lintian: Warn about "old" X-Python3-Version fields? (#892304)
  • todoman: Bogus description in manpage. (#892381)
FTP Team

As a Debian FTP assistant I ACCEPTed 53 packages: akonadi, akonadi-calendar, arch-install-scripts, beets, calligraplan, cenon.app, cross-toolchain-base-ports, dcontainers, debiman, deepin-movie-reborn, deepin-screenshot, deepin-terminal, dput-ng, dump1090-mutability, fonts-ubuntu, gcc-7-cross, gcc-8-cross, gnome-themes-extra, iotjs, isl, isl-0.18, ksmtp, ldc, ledger-wallets-udev, lexicon, libmath-random-secure-perl, libtgvoip, linux, magic-wormhole-transit-relay, mailman-suite, mailman3, mustache-d, node-split-string, nvidia-cuda-toolkit, nvidia-graphics-drivers, python-memoize, python-orderedattrdict, python-requests-ntlm, python-test-server, python-wsproto, pyzabbix, r-cran-pbmcapply, r-cran-spdata, r-cran-squarem, r-cran-zeligchoice, ruby-batch-loader, ruby-commonmarker, ruby-enum, ruby-fast-blank, social-auth-core, stdx-allocator, tldextract & xorgproto.

I additionally filed 6 RC bugs against packages that had incomplete debian/copyright files against: cenon.app, gnome-themes-extra, isl, isl-0.18, libtgvoip & python-wsproto.

Categories: LUG Community Blogs

Andy Smith: Stewart Lee interviewed on The Comedian’s Comedian Podcast

Planet HantsLUG - Wed, 28/03/2018 - 11:51

Really good interview with Stewart Lee on The Comedian’s Comedian Podcast with Stuart Goldsmith. It’s lengthy—over 2 hours when Goldsmith’s extras are tacked on—and not a work of comedy in itself, so don’t listen if you’re expecting a laugh-fest. It is an actual interview with the person, not the character, inasmuch as they can ever be fully separated.

Also don’t read the description and be put off by how much Goldsmith talks him up in it; it’s a good humble interview that goes into the craft of it.

Some of the incidents he mentions in the interview were either filmed professionally or caught on camera phone and if you are a Stewart Lee fangirl like me then they’re interesting to watch in light of his comments about them.

1. Glaswegian audience member gets hung up on Lee’s Caffe Nero card during DVD recording.

2. “Stewart Lee destroys a heckler” which Lee complains is misnamed because he doesn’t set out to destroy hecklers, but rather to simply address their concerns, in character.

3. “I think he’d feel flattered to be misquoted by me“. Out of context on YouTube it would be easy to mistake this for a genuine (but very silly) debate, but it was entirely written by Lee and is presented in-character as part of a show, as a justification by the character of Stewart Lee as to why he can mock and misquote Russell Brand.

4. “There’s just so much now it’s unmanageable … it had a positive effect on the act in that I just decided to become more like the thing they hate.” Lee’s now-frozen file of abusive online critiques is also worth linking to.

Categories: LUG Community Blogs

Andy Smith: Using a different theme for Mediawiki’s SyntaxHighlight extension

Planet HantsLUG - Tue, 20/03/2018 - 02:51

Probably the best syntax highlighting plugin for Mediawiki at the moment is the one simply called SyntaxHighlight. It uses Pygments to do the heavy lifting. What sets it apart from the other extensions is that it supports line numbers and picking out highlighted lines.
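For anyone who hasn’t used it, the wikitext looks something like this illustrative snippet — the lang, line and highlight attributes are the ones I mean, turning on line numbering and picking out the second line:

<syntaxhighlight lang="bash" line highlight="2">
ssh wiki.example.com
sudo -i
</syntaxhighlight>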

Unfortunately the default style (theme) is dark-on-light whereas for most of my syntax highlighting I am giving examples of either shell sessions or code. All my shell sessions and code are viewed as light-on-dark, so I would prefer that the wiki’s syntax highlighting followed suit.

I spent quite a while messing about with editing the extension itself but to little effect, until Robert pointed out that I just needed to edit the Common.css file inside the wiki itself. Then you get some decent results.

I used something like this to generate the correct CSS for the “native” style:

$ ./extensions/SyntaxHighlight_GeSHi/pygments/pygmentize -S native -f html | sed -e 's/^/.mw-highlight > pre /'
.mw-highlight > pre .hll { background-color: #404040 }
.mw-highlight > pre .c { color: #999999; font-style: italic } /* Comment */
.mw-highlight > pre .err { color: #a61717; background-color: #e3d2d2 } /* Error */
.mw-highlight > pre .esc { color: #d0d0d0 } /* Escape */
.mw-highlight > pre .g { color: #d0d0d0 } /* Generic */
.mw-highlight > pre .k { color: #6ab825; font-weight: bold } /* Keyword */
.mw-highlight > pre .l { color: #d0d0d0 } /* Literal */
.mw-highlight > pre .n { color: #d0d0d0 } /* Name */
.mw-highlight > pre .o { color: #d0d0d0 } /* Operator */
.mw-highlight > pre .x { color: #d0d0d0 } /* Other */
.mw-highlight > pre .p { color: #d0d0d0 } /* Punctuation */
.mw-highlight > pre .ch { color: #999999; font-style: italic } /* Comment.Hashbang */
.mw-highlight > pre .cm { color: #999999; font-style: italic } /* Comment.Multiline */
.mw-highlight > pre .cp { color: #cd2828; font-weight: bold } /* Comment.Preproc */
.mw-highlight > pre .cpf { color: #999999; font-style: italic } /* Comment.PreprocFile */
.mw-highlight > pre .c1 { color: #999999; font-style: italic } /* Comment.Single */
.mw-highlight > pre .cs { color: #e50808; font-weight: bold; background-color: #520000 } /* Comment.Special */
.mw-highlight > pre .gd { color: #d22323 } /* Generic.Deleted */
.mw-highlight > pre .ge { color: #d0d0d0; font-style: italic } /* Generic.Emph */
.mw-highlight > pre .gr { color: #d22323 } /* Generic.Error */
.mw-highlight > pre .gh { color: #ffffff; font-weight: bold } /* Generic.Heading */
.mw-highlight > pre .gi { color: #589819 } /* Generic.Inserted */
.mw-highlight > pre .go { color: #cccccc } /* Generic.Output */
.mw-highlight > pre .gp { color: #aaaaaa } /* Generic.Prompt */
.mw-highlight > pre .gs { color: #d0d0d0; font-weight: bold } /* Generic.Strong */
.mw-highlight > pre .gu { color: #ffffff; text-decoration: underline } /* Generic.Subheading */
.mw-highlight > pre .gt { color: #d22323 } /* Generic.Traceback */
.mw-highlight > pre .kc { color: #6ab825; font-weight: bold } /* Keyword.Constant */
.mw-highlight > pre .kd { color: #6ab825; font-weight: bold } /* Keyword.Declaration */
.mw-highlight > pre .kn { color: #6ab825; font-weight: bold } /* Keyword.Namespace */
.mw-highlight > pre .kp { color: #6ab825 } /* Keyword.Pseudo */
.mw-highlight > pre .kr { color: #6ab825; font-weight: bold } /* Keyword.Reserved */
.mw-highlight > pre .kt { color: #6ab825; font-weight: bold } /* Keyword.Type */
.mw-highlight > pre .ld { color: #d0d0d0 } /* Literal.Date */
.mw-highlight > pre .m { color: #3677a9 } /* Literal.Number */
.mw-highlight > pre .s { color: #ed9d13 } /* Literal.String */
.mw-highlight > pre .na { color: #bbbbbb } /* Name.Attribute */
.mw-highlight > pre .nb { color: #24909d } /* Name.Builtin */
.mw-highlight > pre .nc { color: #447fcf; text-decoration: underline } /* Name.Class */
.mw-highlight > pre .no { color: #40ffff } /* Name.Constant */
.mw-highlight > pre .nd { color: #ffa500 } /* Name.Decorator */
.mw-highlight > pre .ni { color: #d0d0d0 } /* Name.Entity */
.mw-highlight > pre .ne { color: #bbbbbb } /* Name.Exception */
.mw-highlight > pre .nf { color: #447fcf } /* Name.Function */
.mw-highlight > pre .nl { color: #d0d0d0 } /* Name.Label */
.mw-highlight > pre .nn { color: #447fcf; text-decoration: underline } /* Name.Namespace */
.mw-highlight > pre .nx { color: #d0d0d0 } /* Name.Other */
.mw-highlight > pre .py { color: #d0d0d0 } /* Name.Property */
.mw-highlight > pre .nt { color: #6ab825; font-weight: bold } /* Name.Tag */
.mw-highlight > pre .nv { color: #40ffff } /* Name.Variable */
.mw-highlight > pre .ow { color: #6ab825; font-weight: bold } /* Operator.Word */
.mw-highlight > pre .w { color: #666666 } /* Text.Whitespace */
.mw-highlight > pre .mb { color: #3677a9 } /* Literal.Number.Bin */
.mw-highlight > pre .mf { color: #3677a9 } /* Literal.Number.Float */
.mw-highlight > pre .mh { color: #3677a9 } /* Literal.Number.Hex */
.mw-highlight > pre .mi { color: #3677a9 } /* Literal.Number.Integer */
.mw-highlight > pre .mo { color: #3677a9 } /* Literal.Number.Oct */
.mw-highlight > pre .sa { color: #ed9d13 } /* Literal.String.Affix */
.mw-highlight > pre .sb { color: #ed9d13 } /* Literal.String.Backtick */
.mw-highlight > pre .sc { color: #ed9d13 } /* Literal.String.Char */
.mw-highlight > pre .dl { color: #ed9d13 } /* Literal.String.Delimiter */
.mw-highlight > pre .sd { color: #ed9d13 } /* Literal.String.Doc */
.mw-highlight > pre .s2 { color: #ed9d13 } /* Literal.String.Double */
.mw-highlight > pre .se { color: #ed9d13 } /* Literal.String.Escape */
.mw-highlight > pre .sh { color: #ed9d13 } /* Literal.String.Heredoc */
.mw-highlight > pre .si { color: #ed9d13 } /* Literal.String.Interpol */
.mw-highlight > pre .sx { color: #ffa500 } /* Literal.String.Other */
.mw-highlight > pre .sr { color: #ed9d13 } /* Literal.String.Regex */
.mw-highlight > pre .s1 { color: #ed9d13 } /* Literal.String.Single */
.mw-highlight > pre .ss { color: #ed9d13 } /* Literal.String.Symbol */
.mw-highlight > pre .bp { color: #24909d } /* Name.Builtin.Pseudo */
.mw-highlight > pre .fm { color: #447fcf } /* Name.Function.Magic */
.mw-highlight > pre .vc { color: #40ffff } /* Name.Variable.Class */
.mw-highlight > pre .vg { color: #40ffff } /* Name.Variable.Global */
.mw-highlight > pre .vi { color: #40ffff } /* Name.Variable.Instance */
.mw-highlight > pre .vm { color: #40ffff } /* Name.Variable.Magic */
.mw-highlight > pre .il { color: #3677a9 } /* Literal.Number.Integer.Long */

(Yes, I also need to do the light-on-dark thing here in this blog)

To get a list of available styles:

$ ./extensions/SyntaxHighlight_GeSHi/pygments/pygmentize -L styles
Pygments version 2.2.0, (c) 2006-2017 by Georg Brandl.

Styles:
~~~~~~~
* manni:
    A colorful style, inspired by the terminal highlighting style.
* igor:
    Pygments version of the official colors for Igor Pro procedures.
* lovelace:
    The style used in Lovelace interactive learning environment. Tries to avoid the "angry fruit salad" effect with desaturated and dim colours.
* xcode:
    Style similar to the Xcode default colouring theme.
* vim:
    Styles somewhat like vim 7.0
* autumn:
    A colorful style, inspired by the terminal highlighting style.
* abap:
* vs:
* rrt:
    Minimalistic "rrt" theme, based on Zap and Emacs defaults.
* native:
    Pygments version of the "native" vim theme.
* perldoc:
    Style similar to the style used in the perldoc code blocks.
* borland:
    Style similar to the style used in the borland IDEs.
* arduino:
    The Arduino® language style. This style is designed to highlight the Arduino source code, so exepect the best results with it.
* tango:
    The Crunchy default Style inspired from the color palette from the Tango Icon Theme Guidelines.
* emacs:
    The default style (inspired by Emacs 22).
* friendly:
    A modern style based on the VIM pyte theme.
* monokai:
    This style mimics the Monokai color scheme.
* paraiso-dark:
* colorful:
    A colorful style, inspired by CodeRay.
* murphy:
    Murphy's style from CodeRay.
* bw:
* pastie:
    Style similar to the pastie default style.
* rainbow_dash:
    A bright and colorful syntax highlighting theme.
* algol_nu:
* paraiso-light:
* trac:
    Port of the default trac highlighter design.
* default:
    The default style (inspired by Emacs 22).
* algol:
* fruity:
    Pygments version of the "native" vim theme.

Although you may find it easier looking at the Pygments style gallery.

Categories: LUG Community Blogs

Jonathan McDowell: First impressions of the Gemini PDA

Planet ALUG - Mon, 19/03/2018 - 21:41

Last March I discovered the IndieGoGo campaign for the Gemini PDA, a plan to produce a modern PDA with a decent keyboard inspired by the Psion 5. At that point in time the estimated delivery date was November 2017, and it wasn’t clear they were going to meet their goals. As someone who has owned a variety of phones with keyboards, from a Nokia 9000i to a T-Mobile G1, I’ve been disappointed about the lack of mobile devices with keyboards. The Gemini seemed like a potential option, so I backed it, paying a total of $369 including delivery. And then I waited. And waited. And waited.

Finally, one year and a day after I backed the project, I received my Gemini PDA. Now, I don’t get as much use out of such a device as I would have in the past. The Gemini is definitely not a primary phone replacement. It’s not much bigger than my aging Honor 7 but there’s no external display to indicate who’s calling and it’s a bit clunky to have to open it to dial (I don’t trust Google Assistant to cope with my accent enough to have it ring random people). The 9000i did this well with an external keypad and LCD screen, but then it was a brick so it had the real estate to do such things. Anyway. I have a laptop at home, a laptop at work and I cycle between the 2. So I’m mostly either in close proximity to something portable enough to move around the building, or travelling in a way that doesn’t mean I could use one.

My first opportunity to actually use the Gemini in anger therefore came last Friday, when I attended BelFOSS. I’d normally bring a laptop to a conference, but instead I decided to just bring the Gemini (in addition to my normal phone). I have the LTE version, so I put my FreedomPop SIM into it - this did limit the amount I could do with it due to the low data cap, but for a single day was plenty for SSH, email + web use. I already have the Pro version of the excellent JuiceSSH, am a happy user of K-9 Mail and tend to use Chrome these days as well. All 3 were obviously perfectly happy on the Android 7.1.1 install.

Aside: Why am I not running Debian on the device? Planet do have an image available from their Linux Support page, but it’s running on top of the crufty 3.18 Android kernel and isn’t yet a first class citizen - it’s not clear the LTE will work outside Android easily and I’ve no hope of ARM opening up the Mali-T880 drivers. I’ve got plans to play around with improving the support, but for the moment I want to actually use the device a bit until I find sufficient time to be able to make progress.

So how did the day go? On the whole, a success. Battery life was great - I’d brought a USB battery pack expecting to need to boost the charge at some point, but I last charged it on Thursday night and at the time of writing it’s still claiming 25% battery left. LTE worked just fine; I had a 4G signal for most of the day with occasional drops down to 3G but no noticeable issues. The keyboard worked just fine; much better than my usual combo of a Nexus 7 + foldable Bluetooth keyboard. Some of the symbols aren’t where you’d expect, but that’s understandable on a scaled down keyboard. Screen resolution is great. I haven’t used the USB-C ports other than to charge and backup so far, but I like the fact there are 2 provided (even if you need a custom cable to get HDMI rather than it following the proper standard). The device feels nice and solid in your hand - the case is mostly metal plates that remove to give access to the SIM slot and (non-removable but user replaceable) battery. The hinge mechanism seems robust; I haven’t been worried about breaking it at any point since I got the device.

What about problems? I can’t deny there are a few. I ended up with a Mediatek X25 instead of an X27 - that matches what was initially promised, but there had been claims of an upgrade. Unfortunately issues at the factory meant that the initial production run got the older CPU. Later backers are supposed to get the upgrade. As someone who took the early risk this does leave a slightly bitter taste but I doubt I’ll actually notice any significant performance difference. The keys on the keyboard are a little lopsided in places. This seems to be just a cosmetic thing and I haven’t noticed any issues in typing. The lack of first class Debian support is disappointing, but I believe will be resolved in time (by the community if not Planet). The camera isn’t as good as my phone, but then it’s a front facing webcam style thing and it’s at least as good as my laptop at that.

Bottom line: Would I buy it again? At $369, absolutely. At the current $599? Probably not - I’m simply not on the move enough to need this on a regular basis, so I’d find it hard to justify. Maybe the 2nd gen, assuming it gets a bit more polish on the execution and proper mainline Linux support. Don’t get me wrong, I think the 1st gen is lovely and I’ve had lots of envious people admiring it, I just think it’s ended up priced a bit high for what it is. For the same money I’d be tempted by the GPD Pocket instead.

Categories: LUG Community Blogs

Andy Smith: Let’s Encrypt wildcard certificates, acme.sh and automated DNS verification

Planet HantsLUG - Mon, 19/03/2018 - 11:59
Let’s Encrypt’s wildcard certificates

Now that Let’s Encrypt can issue wildcard TLS certificates I found some time to look into that.

I already use a Lua script with haproxy which takes care of automatically answering http-01 ACME challenges, but to issue/renew a wildcard certificate you need to answer a dns-01 challenge. A different client/setup would be needed.

dns-01 ACME challenges

Most of the clients that support ACME v2 offer a range of integrations for DNS providers, plus a manual mode that prints out the DNS record that you need to add and then waits for you to indicate that you’ve done it. I run my own DNS infrastructure so the thing to do would be RFC2136 dynamic DNS updates.

One wrinkle here is that currently none of my DNS zones have dynamic updates enabled. At the moment I manage them as zone files (some are automatically generated by scripts though). After looking at a few of the client options I found that acme.sh supports an “alias zone”.

Basically, in your main zone you create a CNAME for the challenge record that points at another zone, and then enable dynamic updates in that other zone. The other zone is dedicated for this purpose, so the only updates which will be happening will be for the purpose of answering dns-01 ACME challenges. I made my dynamic zone a sub-zone of my main one:

strugglers.net zone file content

These records need to be added to the main zone for this to work.

. . .
; sub-zone purely used for dns-01 ACME challenges.
acmesh          NS      a.authns.bitfolk.co.uk.
                NS      b.authns.bitfolk.com.
                NS      c.authns.bitfolk.com.
; Alias the dns-01 challenge record into the dedicated zone.
_acme-challenge CNAME   _acme-challenge.acmesh.strugglers.net.
. . .

acmesh.strugglers.net zone file content

Initially this just needs to be an empty zone with only SOA and NS records, so this is the entire content of the file.

$ORIGIN .
$TTL 86400      ; 1 day
acmesh.strugglers.net   IN SOA  a.authns.bitfolk.co.uk. hostmaster.bitfolk.com. (
                                2018031905 ; serial
                                14400      ; refresh (4 hours)
                                7200       ; retry (2 hours)
                                1209600    ; expire (2 weeks)
                                43200      ; minimum (12 hours)
                                )
                        NS      a.authns.bitfolk.co.uk.
                        NS      b.authns.bitfolk.com.
                        NS      c.authns.bitfolk.com.

DNS server configuration

The DNS server needs to know a key by which it will authenticate acme.sh‘s updates, and also needs to be told that the new zone is a dynamic zone. I use BIND, so it goes as follows.

Generate a key for dynamic DNS updates

Use the dnssec-keygen command to generate a key suitable for authenticating DNS updates.

$ dnssec-keygen -r /dev/urandom -a HMAC-SHA512 -b 512 -n HOST DDNS_UPDATE

This creates two files named like Kddns_update.+165+14059.key and Kddns_update.+165+14059.private.

Put the key in the BIND config

Look in the private file and take the key from the line that starts “Key:”. Put that in some config file that you will load into your BIND like this:

key "strugglers" { algorithm hmac-sha512; secret "Sb8nvwpO8bhiU4haPB+NiJKoMO6vVJumrr29Bj3daSuB8hBoTKoqPKMBKTYLRUv12pbKPwJATgdsU6BtL4Hmcw=="; };

The thing in quotes after “key” is a symbolic name for this key and can be anything that makes sense to you. The “secret” is the key from the private file. You can delete the two Kddns_update.+165+14059.* files now.
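If you keep that in its own file, pulling it into the main configuration is just an include; a sketch, assuming a Debian-style layout where local additions live alongside named.conf.local:

include "/etc/bind/strugglers.key";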

Put the new zone into the BIND config

The config for the zone itself looks something like this:

zone "acmesh.strugglers.net" { type master; file "/path/to/acmesh.strugglers.net"; allow-update { key "strugglers"; }; }; Reload the DNS server

Once BIND has been reloaded the log file should indicate that the acmesh.strugglers.net zone was loaded correctly, and in my case that triggers DNS NOTIFY to my secondary servers which automatically begin zone transfers.

Check things out with nsupdate

At this point it might be worth using the nsupdate command to check that you can do dynamic DNS updates.

Just type the nsupdate line in the shell; the > is a prompt at which you will type the updates you wish to send. We’ll add a trivial TXT record. The -k argument is the path to the file containing the key.

$ nsupdate -k /path/to/strugglers.key -v
> server a.authns.bitfolk.co.uk
> debug yes
> zone acmesh.strugglers.net.
> update add foo.acmesh.strugglers.net. 86400 TXT "bar"
> show
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 0
;; flags:; ZONE: 0, PREREQ: 0, UPDATE: 0, ADDITIONAL: 0
;; ZONE SECTION:
;acmesh.strugglers.net.         IN      SOA

;; UPDATE SECTION:
foo.acmesh.strugglers.net. 86400 IN     TXT     "bar"

> send
Sending update to 85.119.80.222#53
Outgoing update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 19987
;; flags:; ZONE: 1, PREREQ: 0, UPDATE: 1, ADDITIONAL: 1
;; ZONE SECTION:
;acmesh.strugglers.net.         IN      SOA

;; UPDATE SECTION:
foo.acmesh.strugglers.net. 86400 IN     TXT     "bar"

;; TSIG PSEUDOSECTION:
strugglers.     0       ANY     TSIG    hmac-sha512. 1521454639 300 64 dPndp1/ZyqzmSEn0AKIsGR62HrsplJBhntWioM4oBdPlNXUIAwg7Jwpg DGSM2S3kY+5hfGTleNqwXZrMvnBhUQ== 19987 NOERROR 0

Reply from update query:
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 19987
;; flags: qr; ZONE: 1, PREREQ: 0, UPDATE: 0, ADDITIONAL: 1
;; ZONE SECTION:
;acmesh.strugglers.net.         IN      SOA

;; TSIG PSEUDOSECTION:
strugglers.     0       ANY     TSIG    hmac-sha512. 1521454639 300 64 NfH/78kvq6f+59RXnyJwC6kfFRLGjG6Rh9jdYRId7UjH0jwIbtRVpqCu xx4HToGmlJrDTUqpgbYZq2orUOZlkQ== 19987 NOERROR 0

> [Ctrl-D]

And to verify it really got added (though the status of NOERROR should be confirmation enough):

$ dig +short -t txt foo.acmesh.strugglers.net
"bar"

That’s it; you can do dynamic DNS updates.
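Before moving on you can remove the test record in the same way, for example:

$ nsupdate -k /path/to/strugglers.key
> server a.authns.bitfolk.co.uk
> zone acmesh.strugglers.net.
> update delete foo.acmesh.strugglers.net. TXT
> send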

acme.sh

I’m going to assume you’ve installed acme.sh according to one of its supported installation methods. Personally I am not into curl | sh so I:

  • Create a system user that can’t log in.
  • git clone the source.
  • acme.sh --install it as that user.

acme.sh doesn’t have to be run on the primary DNS server, because it’s going to use a dynamic DNS update to do all the DNS things. It just needs access to the dynamic DNS update key file. Either you can install acme.sh on each host that will need to generate/renew certificates and copy the DNS key there, or else do all the certificate generation/renewal in one place and copy the certificate files around.

However you manage it, make sure that the user you’re going to run acme.sh as can read the dynamic DNS update key file.
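Roughly, that setup looks like the following sketch. The user name, paths and clone URL are assumptions (check upstream for the current repository location), so adjust to suit:

$ sudo adduser --system --group --home /opt/acmesh --shell /usr/sbin/nologin acmesh
$ sudo -u acmesh git clone https://github.com/acmesh-official/acme.sh.git /opt/acmesh/src
$ cd /opt/acmesh/src
$ sudo -u acmesh ./acme.sh --install --home /opt/acmesh/.acme.sh
$ sudo chown acmesh: /path/to/strugglers.key
$ sudo chmod 400 /path/to/strugglers.key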

Issuing the first wildcard certificate

The first time you issue the certificate you need to set NSUPDATE_KEY and NSUPDATE_SERVER in your environment. After the first successful issuance acme.sh will store these variables in its configuration for use in the automated renewals.

$ NSUPDATE_SERVER=a.authns.bitfolk.co.uk NSUPDATE_KEY=/path/to/strugglers.key ./acme.sh --issue -d strugglers.net -d '*.strugglers.net' --challenge-alias acmesh.strugglers.net --dns dns_nsupdate
[Mon 19 Mar 09:19:00 UTC 2018] Multi domain='DNS:strugglers.net,DNS:*.strugglers.net'
[Mon 19 Mar 09:19:00 UTC 2018] Getting domain auth token for each domain
[Mon 19 Mar 09:19:03 UTC 2018] Getting webroot for domain='strugglers.net'
[Mon 19 Mar 09:19:03 UTC 2018] Getting webroot for domain='*.strugglers.net'
[Mon 19 Mar 09:19:04 UTC 2018] Found domain api file: /path/to/acmesh/dnsapi/dns_nsupdate.sh
[Mon 19 Mar 09:19:04 UTC 2018] adding _acme-challenge.acmesh.strugglers.net. 60 in txt "WmenhbXRtenhpNLYLOBjznyHcVvFk-jjxurCVTrhWc8"
[Mon 19 Mar 09:19:04 UTC 2018] Found domain api file: /path/to/acmesh/dnsapi/dns_nsupdate.sh
[Mon 19 Mar 09:19:04 UTC 2018] adding _acme-challenge.acmesh.strugglers.net. 60 in txt "fwZPUBHijOQkJJaoOF_nIn3Z_FtuVU9R635NDVz_hPA"
[Mon 19 Mar 09:19:04 UTC 2018] Sleep 120 seconds for the txt records to take effect

At this point a DNS update has been crafted and sent so you should see your zone update and zone transfer happen to any secondary servers. If that doesn’t happen within 120 seconds then when Let’s Encrypt tries to verify the challenge it might query a DNS server that doesn’t yet have the record. Your zone transfers need to be reliable.
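If you want to check propagation while it sleeps, you can query each of the advertised nameservers for the challenge record directly (the names here are the NS records from the zone above):

$ dig +short -t txt _acme-challenge.acmesh.strugglers.net @b.authns.bitfolk.com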

[Mon 19 Mar 09:21:08 UTC 2018] Verifying:strugglers.net
[Mon 19 Mar 09:21:12 UTC 2018] Success
[Mon 19 Mar 09:21:12 UTC 2018] Verifying:*.strugglers.net
[Mon 19 Mar 09:21:15 UTC 2018] Success
[Mon 19 Mar 09:21:15 UTC 2018] Removing DNS records.
[Mon 19 Mar 09:21:15 UTC 2018] removing _acme-challenge.acmesh.strugglers.net. txt
[Mon 19 Mar 09:21:16 UTC 2018] removing _acme-challenge.acmesh.strugglers.net. txt
[Mon 19 Mar 09:21:16 UTC 2018] Verify finished, start to sign.
[Mon 19 Mar 09:21:18 UTC 2018] Cert success.
-----BEGIN CERTIFICATE-----
MIIFETCCA/mgAwIBAgISAz4ZQV27n1FgemVAEhIqiUZnMA0GCSqGSIb3DQEBCwUA
MEoxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MSMwIQYDVQQD
.
.
.
NeAmr5I=
-----END CERTIFICATE-----
[Mon 19 Mar 09:21:18 UTC 2018] Your cert is in /path/to/acmesh/.acme.sh/strugglers.net/strugglers.net.cer
[Mon 19 Mar 09:21:18 UTC 2018] Your cert key is in /path/to/acmesh/.acme.sh/strugglers.net/strugglers.net.key
[Mon 19 Mar 09:21:18 UTC 2018] The intermediate CA cert is in /path/to/acmesh/.acme.sh/strugglers.net/ca.cer
[Mon 19 Mar 09:21:18 UTC 2018] And the full chain certs is there: /path/to/acmesh/.acme.sh/strugglers.net/fullchain.cer

Examining a certificate

Just for peace of mind…

$ openssl x509 -text -noout -certopt no_subject,no_header,no_version,no_serial,no_signame,no_subject,no_issuer,no_pubkey,no_sigdump,no_aux -in /path/to/acmesh/.acme.sh/strugglers.net/strugglers.net.cer
        Validity
            Not Before: Mar 19 08:21:17 2018 GMT
            Not After : Jun 17 08:21:17 2018 GMT
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier:
                BF:C7:8E:F5:87:05:D0:6E:15:AC:7B:37:9F:82:05:C3:E3:11:B7:32
            X509v3 Authority Key Identifier:
                keyid:A8:4A:6A:63:04:7D:DD:BA:E6:D1:39:B7:A6:45:65:EF:F3:A8:EC:A1
            Authority Information Access:
                OCSP - URI:http://ocsp.int-x3.letsencrypt.org
                CA Issuers - URI:http://cert.int-x3.letsencrypt.org/
            X509v3 Subject Alternative Name:
                DNS:*.strugglers.net, DNS:strugglers.net
            X509v3 Certificate Policies:
                Policy: 2.23.140.1.2.1
                Policy: 1.3.6.1.4.1.44947.1.1.1
                  CPS: http://cps.letsencrypt.org
                  User Notice:
                    Explicit Text: This Certificate may only be relied upon by Relying Parties and only in accordance with the Certificate Policy found at https://letsencrypt.org/repository/

From the Subject Alternative Name we can see it is a wildcard certificate.
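Renewals are handled by the cron job that acme.sh --install sets up, but you can also kick one off by hand; a sketch, assuming the same certificate as above (add --force to renew before it is actually due):

$ ./acme.sh --renew -d strugglers.net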

Categories: LUG Community Blogs

Chris Lamb: Free software activities in February 2018

Planet ALUG - Wed, 28/02/2018 - 19:36

Here is my monthly update covering what I have been doing in the free software world in February 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:



I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues:

  • Add support for comparing Berkeley DB files. (Unfortunately this is currently incomplete because the libraries do not report metadata reliably!) (#890528)
  • Add support for comparing "XMLBeans" binary schemas. [...]
  • Drop spurious debugging code in Android tests. [...]


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

Patches contributed
  • debian-policy: Replace dh_systemd_install with dh_installsystemd. (#889167)
  • juce: Missing build-depends on graphviz. (#890035)
  • roffit: debian/rules does not override targets as intended. (#889975)
  • bugs.debian.org: Please add rel="canonical" to bug pages. (#890338)
Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

Uploads
  • redis:
    • 4.0.8-1 — New upstream release and fix a potential hardlink vulnerability.
    • 4.0.8-2 — Also listen on ::1 (IPv6) by default. (#891432)
  • python-django:
    • 1.11.10-1 — New upstream security release.
    • 2.0.2-1 — New upstream security release.
  • redisearch:
    • 1.0.6-1 — New upstream release.
    • 1.0.7-1 — New upstream release & add Lintian overrides for package-does-not-install-examples.
    • 1.0.8-1 — New upstream release, which includes my reproducibility-related change improvement.
  • adminer:
    • 4.6.1-1 — New upstream release and override debian-watch-does-not-check-gpg-signature as upstream do not release signatures.
    • 4.6.2-1 — New upstream release.
  • process-cpp:
    • 3.0.1-3 — Make the documentation reproducible.
    • 3.0.1-4 — Correct Vcs-Bzr to Vcs-Git.
  • sleekxmpp (1.3.3-3) — Make the build reproducible. (#890193)
  • python-redis (2.10.6-2) — Correct autopkgtest dependencies and misc packaging updates.
  • bfs (1.2.1-1) — New upstream release.

I also made misc packaging updates for docbook-to-man (1:2.0.0-41), gunicorn (19.7.1-4), installation-birthday (8) & python-daiquiri (1.3.0-3).

Finally, I performed the following sponsored uploads: check-manifest (0.36-2), django-ipware (2.0.1-1), nose2 (0.7.3-3) & python-keyczar (0.716+ds-2).

Debian bugs filed
  • zsh: Please make apt install completion work on "local" files. (#891140)
  • git-gui: Ignores git hooks. (#891552)
  • python-coverage:
    • Installs pyfile.html into wrong directory breaking HTML report generation. (#890560)
    • Document copyright information for bundled JavaScript source. (#890578)
FTP Team

As a Debian FTP assistant I ACCEPTed 123 packages: apticron, aseba, atf-allwinner, bart-view, binutils, browserpass, bulk-media-downloader, ceph-deploy, colmap, core-specs-alpha-clojure, ctdconverter, debos, designate, editorconfig-core-py, essays1743, fis-gtm, flameshot, flex, fontmake, fonts-league-spartan, fonts-ubuntu, gcc-8, getdns, glyphslib, gnome-keyring, gnome-themes-extra, gnome-usage, golang-github-containerd-cgroups, golang-github-go-debos-fakemachine, golang-github-mattn-go-zglob, haskell-regex-tdfa-text, https-everywhere, ibm-3270, ignition-fuel-tools, impass, inetsim, jboss-bridger, jboss-threads, jsonrpc-glib, knot-resolver, libctl, liblouisutdml, libopenraw, libosmo-sccp, libtest-postgresql-perl, libtickit, linux, live-tasks, minidb, mithril, mutter, neuron, node-acorn-object-spread, node-babel, node-call-limit, node-color, node-colormin, node-console-group, node-consolidate, node-cosmiconfig, node-css-color-names, node-date-time, node-err-code, node-gulp-load-plugins, node-html-comment-regex, node-icss-utils, node-is-directory, node-mdn-data, node-mississippi, node-mutate-fs, node-node-localstorage, node-normalize-range, node-postcss-filter-plugins, node-postcss-load-options, node-postcss-load-plugins, node-postcss-minify-font-values, node-promise-retry, node-promzard, node-require-from-string, node-rollup, node-rollup-plugin-buble, node-ssri, node-validate-npm-package-name, node-vue-resource, ntpsec, nvidia-cuda-toolkit, nyx, pipsi, plasma-discover, pokemmo, pokemmo-installer, polymake, privacybadger, proxy-switcher, psautohint, purple-discord, pytest-astropy, pytest-doctestplus, pytest-openfiles, python-aiomeasures, python-coverage, python-fitbit, python-molotov, python-networkmanager, python-os-service-types, python-pluggy, python-stringtemplate3, python3-antlr3, qpack, quintuple, r-cran-animation, r-cran-clustergeneration, r-cran-phytools, re2, sat-templates, sfnt2woff-zopfli, sndio, thunar, uhd, undertime, usbauth-notifier, vmdb2 & xymonq.

I additionally filed 15 RC bugs against packages that had incomplete debian/copyright files against: browserpass, designate, fis-gtm, flex, gnome-keyring, ibm-3270, knot-resolver, libopenraw, libtest-postgresql-perl, mithril, mutter, ntpsec, plasma-discover, pytest-arraydiff & r-cran-animation.

Categories: LUG Community Blogs

Jonathan McDowell: Getting Debian booting on a Lenovo Yoga 720

Planet ALUG - Wed, 21/02/2018 - 22:46

I recently got a new work laptop, a 13” Yoga 720. It proved difficult to install Debian on; pressing F12 would get a boot menu allowing me to select a USB stick I have EFI GRUB on, but after GRUB loaded the kernel and the initrd it would just sit there never outputting anything else that indicated the kernel was even starting. I found instructions about Ubuntu 17.10 which helped but weren’t the complete picture. What seems to be the situation is that the kernel won’t happily boot if “Legacy Support” is not enabled - enabling this (and still booting as EFI) results in a happier experience. However in order to be able to enable legacy boot you have to switch the SATA controller from RAID to AHCI, which can cause Windows to get unhappy about its boot device going away unless you warn it first.

  • Fire up an admin shell in Windows (right click on the start menu)
  • bcdedit /set safeboot minimal
  • Reboot into the BIOS
  • Change the SATA Controller mode from RAID to AHCI (dire warnings about “All data will be erased”. It’s not true, but you’ve backed up first, right?)
  • Set “Boot Mode” to “Legacy Support”
  • Save changes and let Windows boot to Safe Mode
  • Fire up an admin shell in Windows (right click on the start menu again)
  • bcdedit /deletevalue safeboot
  • Reboot again and Windows will load in normal mode with the AHCI drivers

Additionally I had problems getting the GRUB entry added to the BIOS; efibootmgr shows it fine but it never appears in the BIOS boot list. I ended up using Windows to add it as the primary boot option using the following (<guid> gets replaced with whatever the new “Debian” section guid is):

bcdedit /enum firmware
bcdedit /copy "{bootmgr}" /d "Debian"
bcdedit /set "{<guid>}" path \EFI\Debian\grubx64.efi
bcdedit /set "{fwbootmgr}" displayorder "{<guid>}" /addfirst

Even with that at one point the BIOS managed to “forget” about the GRUB entry and require me to re-do the final “displayorder” command.
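From the Linux side efibootmgr will at least show what the firmware claims to know about, and can recreate the entry if it vanishes - though, as noted above, this firmware didn’t always honour entries added that way. A sketch only; the disk and ESP partition number are assumptions for a typical NVMe install:

$ efibootmgr -v
$ sudo efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Debian" -l '\EFI\Debian\grubx64.efi'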

Once you actually have the thing installed and booting it seems fine - I’m running Buster due to the fact it’s a Skylake machine with lots of bits that seem to want a newer kernel, but claimed battery life is impressive, the screen is very shiny (though sometimes a little too shiny and reflective) and the NVMe SSD seems pretty nippy as you’d expect.

Categories: LUG Community Blogs

MJ Ray: How hard can typing æ, ø and å be?

Planet ALUG - Wed, 21/02/2018 - 17:14

Petter Reinholdtsen: How hard can æ, ø and å be? comments on the rubbish state of till printers and their mishandling of foreign characters.

Last week, I was trying to type an email, on a tablet, in Dutch. The tablet was running something close to Android and I was using a Bluetooth keyboard, which seemed to be configured correctly for my location in England.

Dutch doesn’t even have many accents. I wanted an e acute (é). If you use the on screen keyboard, this is actually pretty easy, just press and hold e and slide to choose the accented one… but holding e on a Bluetooth keyboard? eeeeeeeeeee!

Some guides suggest Alt and e, then e. Apparently that works, but not on keyboards set to Great British… because, I guess, we don’t want any of that foreign muck since the Brexit vote, or something(!)

Even once you figure out that madness and switch the keyboard back to international, which also enables alt i, u, n and so on to do other accents, I can’t find grave, check, breve or several other accents. I managed to send the emails in Dutch but I’d struggle with various other languages.

Have I missed a trick or what are the Android developers thinking? Why isn’t there a Compose key by default? Is there any way to get one?

Categories: LUG Community Blogs

Mick Morgan: database failure

Planet ALUG - Sun, 18/02/2018 - 16:31

In 1909, Franz Kafka wrote the “Inclusion of Private Automobile Firms in the Compulsory Insurance Program” as part of “The Office Writings”. His experience of tortuous bureaucracy in Insurance and elsewhere was later reflected in one of his most famous novels “Der Process” (known in English translation as “The Trial”).

Back in October last year I bought another motorcycle to go with my GSX 1250. I’d just sold three other older bikes and felt the need to fill up the resultant hole in my garage. Besides, a man can never have too many motorcycles. At the time I bought the new Yamaha I spoke to my insurers about getting it added to my existing policy. Unfortunately they had recently changed their systems and I could no longer have one policy covering both bikes. So I took out a new separate policy. Oddly enough, that policy cost me twice as much as I paid for cover on the GSX, a bike with over twice the power and a lot more grunt than my new Yamaha. I was told that whilst /I/ was still the same risk, the underwriters assumed that my Yamaha was a riskier vehicle to insure. The ways of insurers are odd indeed and beyond the ken of mortal man.

For the past few months, both my bikes have been wrapped up warm and dry in my garage awaiting a change in the weather so that I no longer have to use the car for everything. This turns out to be a very good thing indeed.

A couple of days ago I received a letter from the Motor Insurer’s Bureau and DVLA. That letter, headed “Stay Insured, Stay Legal” gave the registration number of my Yamaha and stated, in red, “Do not ignore this letter” and went on to say “To avoid a penalty, you will need to take action immediately”. “The record of insurance for your vehicle [REG NO] does not appear on the Motor Insurance Database (MID) and this means if you take no action, you will get a fine.”

The letter also explained that it was my responsibility, as registered keeper, to ensure that my bike was insured. If I was certain that my bike was insured, I was instructed to “contact [my] Insurance provider” since “MIB and DVLA cannot update your records on the MID”.

Pretty worrying and very specific about what I needed to do. So, firstly I checked the MID at “askmid.com” and sure enough, my bike did not appear.

I then ‘phoned my Insurers who confirmed that I was insured and had been since October of last year when I took out the policy. I explained that I knew that was the case because I had the policy in front of me. But that didn’t help me because both DVLA and the MIB believed otherwise. Worse, the MID is used by the Police who will therefore similarly believe otherwise. Worse even than that, is the fact that an extract of the MIB database is supplied for use by ANPR cameras across the UK (See www.mib.org.uk). This means that I only have to pass an ANPR (which I do – a lot) whilst riding that particular bike to almost guarantee a police stop. I therefore asked my insurers to do what the MIB suggested and update my records. No can do, say my insurers. According to their systems I /am/ already on the MIB. After several, rather fruitless conversations (they called me back, I called them again) they suggested that I call the MIB. I explained again that the MIB had clearly stated that /they/ could do nothing, it was down to my insurer and them alone to ensure that my records were correct. Furthermore, the askmid website reinforces the message that “askMID and MIB do not sell insurance nor can we update the Motor Insurance Database (MID). These services are provided by your chosen insurer or broker”.

Nevertheless, since I was getting nowhere with my insurer, I agreed to try to speak to the MIB and, if necessary, get them to talk to my insurer. Here, dear reader, is where the situation spirals further into the absurd. The letter from the MIB gives a contact telephone number which is completely automated. That advice line (you know the type, “press 1 for this option, 2 for that” etc.) eventually gave me the advice I had already received from the MIB letter and the askmid website – viz: “We cannot do anything, you must talk to your insurer”. So I went back to my insurer. You will not be surprised to read that my insurer, whilst sympathetic and understanding, felt that they had done their bit and the fault lay elsewhere.

Now, as a paying customer of a (compulsory) service I don’t care where the fault lies. My only point of leverage is with my insurer. I pay them for a service which does not simply stop with them issuing cover. They must also ensure that the relevant databases are kept up to date. This requirement is laid upon them by Statutory Instrument no 37 of 2003 – “The Motor Vehicles (Compulsory Insurance) (Information Centre and Compensation Body) Regulations 2003”.

The person I spoke to on my third, or possibly fourth, conversation with my Insurer suggested that in order to show that I /was/ fully insured I should carry a copy of my policy with me at all times when riding my bike.

This completely misses the point. It is a legal requirement for my bike’s records on the MIB database to be correct. Only my Insurer can do that. If those records are not correct, I face the almost certain chance of being stopped by the police. Now whilst I can (if I remember to “carry my papers” in the correct Orwellian manner) show the Officers stopping me that I /am/ insured, that will have wasted my time and the Police Officers’ time.

Not good. Not good at all. I’m sure Kafka would have understood my frustration.

And guess what may happen when the time comes for me to renew my insurance – on all my vehicles.

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): Epic Journey in my Ioniq

Planet ALUG - Wed, 14/02/2018 - 22:18

This weekend just-gone was my father's 90th birthday, so since we don't go to Wales very often, we figured we should head down to visit. As this would be our first major journey in the Ioniq (I've done Manchester to Cambridge a few times now, but this is almost 3 times further) we took an additional day off (Friday) so that we could easily get from our home in southern Manchester to my parents' house in St Davids, Pembrokeshire.

I am not someone to enter into these experiences lightly. I spent several hours consulting with zap-map and also Google maps, looking at chargers en-route. In the UK there's a significant number of chargers on the motorway system provided by Ecotricity but this infrastructure is not pervasive and doesn't really extend beyond the motorway service stations (and some IKEAs). I made my plan for the journey to Wales, ensuring that each planned stop was simply the first in a line of possible stops in order that if something went wrong, I'd have enough charge to move forwards from there.

First leg took us from our home to the Ecotricity charger at Hilton Park Southbound services. My good and dear friend Tim very kindly offered to charge us for free and he used one of his fifty-two free charges to top us up. This went flawlessly and set us in a very good mood for the journey to come. Since we would then have a very long jump from the M5 to the M4, we decided that our second charge would be to top-up at Chateau Impney which has a Polar charger. Unfortunately by this point the wind and rain were up and the charger failed to work properly, eventually telling us that its input voltages were unbalanced and then powering itself off entirely. We decided to head to the other Polar charger at Webbs of Wychbold. That charger started up fine so we headed in, had a loo visit, grabbed some lunch, watched the terrapins swimming around, and when a sufficient time had passed for the car to charge, headed back only to discover that it had emergency-stopped mere moments after we'd left the car, so we had no charge for the entire time we were there. No matter we thought - we'd sit in the car while it charged, and eat our lunch. Sadly we were defeated, the charger repeatedly e-stopped, so we gave up.

Our fallback position was to charge at the Strensham services at the M5/M50 junction. Sadly the southbound services have no chargers at all (they're under a lot of building work right now, so perhaps that's part of it) so we had to get to the northbound services and charge there. That charge went fine, and with a £2.85 bill from Ecotricity automatically paid, we snuck our way along back-roads and secret junctions to the southbound services, and headed off down the M50. Sadly we're now a lot later than we should have been, having lost about ninety minutes in total to the wasted time at the two Polar chargers, which meant that we hit a lot of congestion at Monmouth and around Newport on the M4.

We made it to Cardiff Gate where we plugged in, set it charging, and then headed into the service area where we happened to meet my younger brother who was heading home too. He went off, and I looked at the Ecotricity app on my phone which had decided at that point that I wasn't charging at all. I went out to check, the charger was still delivering current, so, chalking it up to a bit of a de-sync, we went in, had a coffee and a relax, and then headed out to the car to wait for it to finish charging. It finished, we unplugged, and headed out. But to this day I've not been charged by Ecotricity for that so "yay".

Our final stop along the M4 was Swansea West. Unfortunately the Pont Abraham services don't have a rapid charger compatible with my car so we have to stop earlier. Fortunately there are three chargers at Swansea West. Unfortunately the CCS was plugged into an i3 which wasn't charging but was set to keep the connector locked in so I couldn't snarf it. I plugged into a slower (AC) charger to get a bit of juice while we went in to wait for the i3 owner to leave. I nipped out after 10 minutes and conveniently they'd gone, so I swapped the car over to the CCS charger and set it going. 37 minutes later and that charger had actually worked, charged me up, and charged me a princely £5.52 for the privilege.

From here we nipped along the A48/A40, dropped in on my sister-in-law to collect a gift for my father, and then got to St Davids at around nine pm. A mere eleven hours after we left Manchester. By comparison, when I drove a Passat, I would leave Manchester at 3pm, drive 100 fewer miles, and arrive at around 9pm, having had a few nice stops for loo breaks and dinner.

Saturday it had been raining quite hard overnight, St Davids has one (count it, ONE) charger compatible with my car (type 2 in this instance) but fortunately it's free to use (please make donation in the tourist-information-office). Unfortunately after the rain, the parking space next to the charger was under a non-trivial amount of water, so poor Rob had to mountaineer next to the charger to plug in without drowning. We set the car charging and went to have a nice breakfast in St Davids. A few hours later, I wandered back up to the car park with Rob and we unplugged and retrieved the car. Top marks for the charger, but a pity the space was a swimming pool.

Sunday morning dawned bright and early, and we headed out to Llandewi Velfrey to visit my brother, who runs Silverstone Green Energy. We topped up there and then headed to Sarn Parc at his suggestion. It's a nice service area; unfortunately the AC/CHAdeMO charger was giving 'Remote Start Error', so the Leaf there was on the CHAdeMO/CCS charger. However, as luck would have it, that charger was on free-vend, so once we got on it (30 minutes or so later) we got to charge for free. Thanks Ecotricity.

From Sarn Parc, we decided that since we'd had such a good experience at Strensham North, we'd go directly there. We arrived with 18 miles to spare in the "tank", but unfortunately the CCS/CHAdeMO charger was broken (with an error along the lines of PWB1 is 0x0008) and there was an eGolf there which had also wanted to use CCS but had to charge slowly in order to get enough range to reach another charger. As a result we had to sit there for an hour, waiting for him to have enough in his 'tank' that he was prepared to let us charge. We then got a "full" 45-minute charge (£1.56, 5.2kWh), which gave us enough to get north again to Chateau Impney (which had been marked working again on Zap-map).

The charge there worked fine (yay), so we drove on north to Keele services. We arrived in the snow/hail/rain (yay, northern weather), found the charger, plugged in, tried to set it going using the app, and we were told "Unable to contact charger". So I went through the process again and we were told "Charger in use". It bloody well wasn't in use, because I was plugged into it and it definitely wasn't charging my car. We waited for the rain to die down again and looked at the charger, which at that moment said "Connect vehicle" and then started up charging the car (yay). We headed in for a loo and dinner break. Unfortunately the app couldn't report on progress, but it had started charging, so we were confident we'd be fine. More fool us. It had stopped charging moments after we'd left the car, and once again we wasted time because it wasn't charging when we thought it was. We returned and discovered the car hadn't charged, but also that the charger had switched to free-vend, so we charged up again for free - though that was another 40-minute wait.

Finally we got home (via a short stop at the pub) and on Monday I popped along to a GMEV rapid charger, and it worked perfectly as it has every single time I've used it.

So, in conclusion, the journey was reasonably cheap, which is nice, but we had two failed charge attempts on Polar and several Ecotricity cockups (though they did mostly end up in our favour in terms of money), which cost us around 90 to 120 minutes in each direction. The driving itself (in the Ioniq) was fine and actually meant I wasn't frazzled and unhappy the whole time, but the charging infrastructure is simply not good enough. It's unreliable, Ecotricity don't have support lines at the weekend (or evenings/early mornings), and coverage is far too sparse to be useful when one wishes to travel somewhere not on the motorway network. If I'd tried to drive my usual route, I'd have had to spend four hours in Aberystwyth using my granny charger to put about 40 miles in the tank from a public 3-pin socket.

Categories: LUG Community Blogs

Jonathan McDowell: collectd scripts for the Virgin Media Super Hub

Planet ALUG - Tue, 06/02/2018 - 19:33

As I’ve previously stated I’m no longer using Virgin Media, but when I was I had written a script to scrape statistics from the cable modem and import them into collectd. Primarily I was recording the upstream/downstream line speed and the per-channel signal figures, but the scripts could easily be extended to do more if you wanted. They’re useful for seeing when Virgin increase your line speed, or whether your line quality has deteriorated. I’ve shoved the versions I had for the Super Hub v1 and v3 in GitHub in the hope they’ll be of use to someone. Note that I posted my Super Hub 3 back to Virgin yesterday, so I no longer have any hardware that needs these scripts.
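For anyone curious about the general shape of such a script before digging into the repository, here's a rough sketch of the approach using collectd's exec plugin. To be clear, this is an illustration rather than the code in the repository: the modem URL, the JSON layout and the field names below are all made up, and the real Super Hub status pages need proper scraping (which differs between hardware versions).

#!/usr/bin/env python3
# Illustrative sketch only - not the script from the repository. The URL,
# JSON layout and field names are assumptions; real Super Hub firmware
# exposes its stats differently (and differently again between versions).
import json
import os
import socket
import time
import urllib.request

MODEM_URL = "http://192.168.100.1/status.json"  # hypothetical endpoint
HOST = os.environ.get("COLLECTD_HOSTNAME", socket.getfqdn())
INTERVAL = int(float(os.environ.get("COLLECTD_INTERVAL", "300")))

def putval(plugin_instance, type_name, type_instance, value):
    # PUTVAL is the plain-text protocol collectd's exec plugin reads on stdout.
    print('PUTVAL "{}/superhub-{}/{}-{}" interval={} N:{}'.format(
        HOST, plugin_instance, type_name, type_instance, INTERVAL, value),
        flush=True)

while True:
    with urllib.request.urlopen(MODEM_URL, timeout=10) as response:
        stats = json.load(response)
    for channel in stats.get("downstream", []):
        putval("downstream", "signal_power", channel["id"], channel["power"])
    for channel in stats.get("upstream", []):
        putval("upstream", "signal_power", channel["id"], channel["power"])
    time.sleep(INTERVAL)

Something along those lines gets wired in with an Exec line in a <Plugin exec> block, running as an unprivileged user; collectd supplies COLLECTD_HOSTNAME and COLLECTD_INTERVAL in the environment and consumes the PUTVAL lines from stdout.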

Categories: LUG Community Blogs

Chris Lamb: Free software activities in January 2018

Planet ALUG - Wed, 31/01/2018 - 23:20

Here is my monthly update covering what I have been doing in the free software world in January 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
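The verification step itself is conceptually tiny. As a minimal sketch (my own illustration here, not part of the project's tooling), comparing two independent rebuilds boils down to hashing the artifacts and looking for mismatches:

#!/usr/bin/env python3
# Minimal sketch of the verification idea: given two directories holding
# the artifacts from independent rebuilds of the same source, report any
# files whose contents differ. The directory arguments are illustrative.
import hashlib
import sys
from pathlib import Path

def checksums(root):
    return {p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in Path(root).rglob("*") if p.is_file()}

def main(build_a, build_b):
    a, b = checksums(build_a), checksums(build_b)
    differing = [f for f in sorted(set(a) | set(b)) if a.get(f) != b.get(f)]
    for f in differing:
        print("NOT REPRODUCIBLE:", f)
    return 1 if differing else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1], sys.argv[2]))

The hard part, of course, is not the comparison but making the builds identical in the first place - which is where the tooling below comes in.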

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:



I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.
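If you just want a quick yes/no from a script, the command-line tool can also be driven directly. The sketch below is mine rather than anything shipped with diffoscope; it assumes the diffoscope binary is on $PATH, that it exits with zero only when the inputs are identical, and that --text writes the human-readable report to a file - check diffoscope --help if any of that has changed.

#!/usr/bin/env python3
# Rough sketch of scripting diffoscope. Relies on its exit status being 0
# when the inputs are identical and non-zero when differences (or errors)
# are found; the detailed report goes to the file named by --text.
import subprocess
import sys

def identical(path_a, path_b, report="diffoscope-report.txt"):
    result = subprocess.run(["diffoscope", "--text", report, path_a, path_b])
    return result.returncode == 0

if __name__ == "__main__":
    if identical(sys.argv[1], sys.argv[2]):
        print("inputs are identical")
        sys.exit(0)
    print("differences found - see diffoscope-report.txt")
    sys.exit(1)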

  • New features:
    • Compare JSON files using the jsondiff module. (#888112)
    • Report differences in extended file attributes when comparing files. (#888401)
    • Show extended filesystem metadata when directly comparing two files, not just when we specify two directories. (#888402)
    • Do some fuzzy parsing to detect JSON files not named .json. [...]
  • Bug fixes:
    • Return unknown if we can't parse the readelf version number for (e.g.) FreeBSD. (#886963)
    • If the LLVM disassembler does not work, try the internal one. (#886736)
  • Misc:
    • Explicitly depend on e2fsprogs. (#887180)
    • Clarify the "Unidentified file" log message, as we did try a lookup via the comparators first. [...]

I also fixed an issue in the "trydiffoscope" command-line client that was preventing installation on non-Debian systems (#888882).


disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.
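As a quick way of seeing what that means in practice, here's a small sketch of my own (it assumes the disorderfs and fusermount binaries are installed; the option name is from memory, so double-check disorderfs(1)):

#!/usr/bin/env python3
# Sketch: mount a directory through disorderfs and show that two listings
# of the same directory can come back in different orders - exactly the
# non-determinism a reproducible build must not depend on.
import os
import subprocess
import sys

def demo(srcdir, mountpoint):
    os.makedirs(mountpoint, exist_ok=True)
    # --shuffle-dirents=yes randomises directory entry order on each listing.
    subprocess.run(["disorderfs", "--shuffle-dirents=yes", srcdir, mountpoint],
                   check=True)
    try:
        print("first listing: ", os.listdir(mountpoint))
        print("second listing:", os.listdir(mountpoint))
    finally:
        subprocess.run(["fusermount", "-u", mountpoint], check=True)

if __name__ == "__main__":
    demo(sys.argv[1], sys.argv[2])

Running a package build under such a mount is a cheap way of flushing out anything that depends on readdir() ordering.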

  • Correct "explicitly" typo in disorderfs.1.txt. [...]
  • Bump Standards-Version to 4.1.3. [...]
  • Drop trailing whitespace in debian/control. [...]


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

In addition to this, I:

  • Published whydoesaptnotusehttps.com, an overview of why APT does not rely solely on SSL for validation of downloaded packages, as I noticed the question coming up a lot on support forums.
  • Reported a number of issues for the mentors.debian.net review service.
Patches contributed
  • dput: Suggest --force if package has already been uploaded. (#886829)
  • linux: Add link to the Firmware page on the wiki to failed to load log messages. (#888405)
  • markdown: Make markdown exit with a non-zero exit code if it cannot open the input file. (#886032)
  • spectre-meltdown-checker: Return a sensible exit code. (#887077)
Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • Initial draft of a script to automatically detect when CVEs should be assigned to multiple source packages in the case of legacy renames, duplicates or embedded code copies.
  • Issued DLA 1228-1 for the poppler PDF library to fix an overflow vulnerability.
  • Issued DLA 1229-1 for imagemagick correcting two potential denial-of-service attacks.
  • Issued DLA 1233-1 for gifsicle — a command-line tool for manipulating GIF images — to fix a use-after-free vulnerability.
  • Issued DLA 1234-1 to fix multiple integer overflows in the GTK gdk-pixbuf graphics library.
  • Issued DLA 1247-1 for rsync, fixing a command-injection vulnerability.
  • Issued DLA 1248-1 for libgd2 to prevent a potential infinite loop caused by signedness confusion.
  • Issued DLA 1249-1 for smarty3 fixing an arbitrary code execution vulnerability.
  • "Frontdesk" duties, triaging CVEs, etc.
Uploads
  • adminer (4.5.0-1) — New upstream release.
  • bfs (1.2-1) — New upstream release.
  • dbus-cpp (5.0.0+18.04.20171031-1) — Initial upload to Debian.
  • installation-birthday (7) — Add e2fsprogs to Depends so it can drop Essential: yes. (#887275)
  • process-cpp:
    • 3.0.1-1 — Initial upload to Debian.
    • 3.0.1-2 — Fix FTBFS due to symbol versioning.
  • python-django (1:1.11.9-1 & 2:2.0.1-1) — New upstream releases.
  • python-gflags (1.5.1-4) — Always use SOURCE_DATE_EPOCH from the environment (see the short illustration of the convention after this list).
  • redis:
    • 5:4.0.6-3 — Use --clients argument to runtest to force single-threaded operation over using taskset.
    • 5:4.0.6-4 — Re-add procps to Build-Depends. (#887075)
    • 5:4.0.6-5 — Fix a dangling symlink (and thus a broken package). (#884321)
    • 5:4.0.7-1 — New upstream release.
  • redisearch (1.0.3-1, 1.0.4-1 & 1.0.5-1) — New upstream releases.
  • trydiffoscope (67.0.0) — New upstream release.
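For anyone unfamiliar with it, SOURCE_DATE_EPOCH is the Reproducible Builds convention for pinning "the current time" during a build. Honouring it generally looks something like the following (an illustration of the convention only, not the actual python-gflags change):

# Illustration of the SOURCE_DATE_EPOCH convention (not the actual
# python-gflags patch): when the variable is set, use it instead of the
# wall clock so any timestamps baked into the build output are the same
# on every rebuild.
import os
import time

def build_timestamp():
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    if epoch is not None:
        return int(epoch)      # deterministic, supplied by the build system
    return int(time.time())    # fall back to the current time

print(time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(build_timestamp())))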

I also sponsored the following uploads:

Debian bugs filed
  • gdebi: Invalid gnome-mime-application-x-deb icon in AppStream metadata. (#887056)
  • git-buildpackage: Please make gbp clone not quieten the output by default. (#886992)
  • git-buildpackage: Please word-wrap generated changelog lines. (#887055)
  • isort: Don't install test_isort.py to global Python namespace. (#887816)
  • restrictedpython: Please add Homepage. (#888759)
  • xcal: Missing patches due to 00List != 00list. (#888542)

I also filed 4 bugs against packages missing patches due to incomplete quilt conversions: cernlib, geant321, mclibs & paw.

RC bugs
  • gnome-shell-extension-tilix-shortcut: Invalid date in debian/changelog. (#886950)
  • python-qrencode: Missing PIL dependencies due to use of Python 2 substvars in Python 3 package. (#887811)


I also filed 7 FTBFS bugs against lintian, netsniff-ng, node-coveralls, node-macaddress, node-timed-out, python-pyocr & sleepyhead.

FTP Team

As a Debian FTP assistant I ACCEPTed 173 packages: appmenu-gtk-module, atlas-cpp, canid, check-manifest, cider, citation-style-language-locales, citation-style-language-styles, cloudkitty, coreapi, coreschema, cypari2, dablin, dconf, debian-dad, deepin-icon-theme, dh-dlang, django-js-reverse, flask-security, fpylll, gcc-8, gcc-8-cross, gdbm, gitlint, gnome-tweaks, gnupg-pkcs11-scd, gnustep-back, golang-github-juju-ansiterm, golang-github-juju-httprequest, golang-github-juju-schema, golang-github-juju-testing, golang-github-juju-webbrowser, golang-github-posener-complete, golang-gopkg-juju-environschema.v1, golang-gopkg-macaroon-bakery.v2, golang-gopkg-macaroon.v2, harmony, hellfire, hoel, iem-plugin-suite, ignore-me, itypes, json-tricks, jstimezonedetect.js, libcdio, libfuture-asyncawait-perl, libgig, libjs-cssrelpreload, liblxi, libmail-box-imap4-perl, libmail-box-pop3-perl, libmail-message-perl, libmatekbd, libmoosex-traitfor-meta-class-betteranonclassnames-perl, libmoosex-util-perl, libpath-iter-perl, libplacebo, librecaptcha, libsyntax-keyword-try-perl, libt3highlight, libt3key, libt3widget, libtree-r-perl, liburcu, linux, mali-midgard-driver, mate-panel, memleax, movit, mpfr4, mstch, multitime, mwclient, network-manager-fortisslvpn, node-babel-preset-airbnb, node-babel-preset-env, node-boxen, node-browserslist, node-caniuse-lite, node-cli-boxes, node-clone-deep, node-d3-axis, node-d3-brush, node-d3-dsv, node-d3-force, node-d3-hierarchy, node-d3-request, node-d3-scale, node-d3-transition, node-d3-zoom, node-fbjs, node-fetch, node-grunt-webpack, node-gulp-flatten, node-gulp-rename, node-handlebars, node-ip, node-is-npm, node-isomorphic-fetch, node-js-beautify, node-js-cookie, node-jschardet, node-json-buffer, node-json3, node-latest-version, node-npm-bundled, node-plugin-error, node-postcss, node-postcss-value-parser, node-preact, node-prop-types, node-qw, node-sellside-emitter, node-stream-to-observable, node-strict-uri-encode, node-vue-template-compiler, ntl, olivetti-mode, org-mode-doc, otb, othman, papirus-icon-theme, pgq-node, php7.2, piu-piu, prometheus-sql-exporter, py-radix, pyparted, pytest-salt, pytest-tempdir, python-backports.tempfile, python-backports.weakref, python-certbot, python-certbot-apache, python-certbot-nginx, python-cloudkittyclient, python-josepy, python-jsondiff, python-magic, python-nose-random, python-pygerrit2, python-static3, r-cran-broom, r-cran-cli, r-cran-dbplyr, r-cran-devtools, r-cran-dt, r-cran-ggvis, r-cran-git2r, r-cran-pillar, r-cran-plotly, r-cran-psych, r-cran-rhandsontable, r-cran-rlist, r-cran-shinydashboard, r-cran-utf8, r-cran-whisker, r-cran-wordcloud, recoll, restrictedpython, rkt, rtklib, ruby-handlebars-assets, sasmodels, spectre-meltdown-checker, sphinx-gallery, stepic, tilde, togl, ums2net, vala-panel, vprerex, wafw00f & wireguard.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files: fpylll, gnome-tweaks, org-mode-doc & py-radix.

Categories: LUG Community Blogs

Jonathan McDowell: Going to FOSDEM 2018

Planet ALUG - Tue, 23/01/2018 - 21:13

Laura comments that she has no idea who is going to FOSDEM. I’m slightly embarrassed to admit I’ve only been once before, way back in 2005. A mixture of good excuses and disorganisation about arranging to go has meant I haven’t been back since. So a few months ago I made the decision to attend and sorted out the appropriate travel and hotel bookings and I’m pleased to say I’m attending FOSDEM 2018. I get in late Friday evening and fly out on Sunday evening, so I’ll miss the Friday beering but otherwise be around for the whole event. Hope to catch up with a bunch of people there!

Categories: LUG Community Blogs

Jonathan McDowell: How Virgin Media lost me as a supporter

Planet ALUG - Tue, 09/01/2018 - 09:39

For a long time I’ve been a supporter of Virgin Media (from a broadband perspective, though their triple play TV/Phone/Broadband offering has seemed decent too). I know they have a bad reputation amongst some people, but I’ve always found their engineers to be capable, their service in general reliable, and they can manage much faster speeds than any UK ADSL/VDSL service at cheaper prices. I’ve used their services everywhere I’ve lived that they were available, starting back in 2001 when I lived in Norwich. The customer support experience with my most recent move has been so bad that I am no longer of the opinion it is a good idea to use their service.

Part of me wonders if the customer support has got worse recently, or if I’ve just been lucky. We had a problem about 6 months ago which was clearly a loss of signal on the line (the modem failed to see anything, and I could clearly pinpoint when this had happened as I have collectd monitoring things). Support were insistent they could do a reset and fix things, then said my problem was the modem and I needed a new one (I was on an original v1 hub and the v3 was the current model). I was extremely dubious but they insisted. It didn’t help, and we ended up with an engineer visit. The engineer was immediately able to say they’d been disconnecting noisy lines that should have been unused at the time my signal went down, confirmed my line had been unhooked at the cabinet, and then, when it was obvious the line was noisy and would have caused problems if hooked back up, patched me into the adjacent connection next door. Great service from the engineer, but support should have been aware of work in the area and been able to figure out that it might have been a problem, rather than me having a 4-day outage and numerous phone calls when the “resets” didn’t fix things.

Anyway. I moved house recently, and got keys before moving out of the old place, so decided to be organised and get broadband set up before moving in - there was no existing BT or Virgin line in the new place, so I knew it might take a bit longer than usual to get connected. It would also have been useful to have a connection while getting things sorted out, so I could work while waiting in for workmen. As stated at the start, I’ve been pro-Virgin in the past; I had their service at the old place and there was a CableTel (the Belfast cable company NTL acquired) access hatch at the property border, so it was clear it had had service in the past. So on October 31st I placed an order on their website and was able to select an installation date of November 11th (earlier dates were available, but this was a Saturday and more convenient).

This all seemed fine; Virgin contacted me to let me know there was some external work that needed to be done but told me it would happen in time. This ended up scheduled for November 9th, when I happened to be present. The engineers turned up, had a look around and then told me there was an issue with access to their equipment - they needed to do a new cable pull to the house and although the ducting was all there the access hatch for the other end was blocked by some construction work happening across the road. I’d had a call about this saying they’d be granted access from November 16th, so the November 11th install date was pushed out to November 25th. Unfortunate, but understandable. The engineers also told me that what would happen is the external team would get a cable hooked up to a box on the exterior of the house ready for the internal install, and that I didn’t need to be around for that.

November 25th arrived. There was no external box, so I was dubious things were actually going to go ahead, but I figured there was a chance the external + internal teams would turn up together and get it sorted. No such luck. The guy who was supposed to do the internal setup turned up, noticed the lack of an external box and informed me he couldn’t do anything without that. As I’d expected. I waited a few days to hear from Virgin and heard nothing, so I rang them and was told the installation had moved to December 6th and the external bit would be done before that - I can’t remember the exact date quoted but I rang a couple of times before the 6th and was told it would happen that day “for sure” each time.

December 5th arrives and I get an email telling me the installation has moved to December 21st. This is after the planned move date and dangerously close to Christmas - I’m aware that in the event of any more delays I’m unlikely to get service until the New Year. Lo and behold on December 7th I’m informed my install is on hold and someone will contact me within 5 working days to give me an update.

Suffice to say I do not get called. I ring towards the end of the following week and am told they are continuing to have trouble carrying out work on the access hatch. So I email the housing company doing the work across the road, asking if Virgin have actually been in touch and when the building contractors plan to give them the access they require. I get a polite response saying Virgin have been on-site but did not ask for anything to be moved or make it clear they were trying to connect a customer. And providing an email address for the appropriate person in the construction company to arrange access.

I contact Virgin to ask about this on December 20th. There’s no update but this time I manage to get someone who actually seems to want to help, rather than just telling me it’s being done today or soon. I get email confirmation that the matter is being escalated to the Area Field Manager (I’d been told this by phone on December 16th as well but obviously nothing had happened), and provide the contact details for the construction company.

And then I wait. I’m aware things wind down over the Christmas period, so I’m not expecting an install before the New Year, but I did think I might at least get a call or email with an update. Nothing. My wife rang to finally cancel our old connection last week (it’s been handy to still have my backup host online and be able to go and do updates in the old place) and they were aware of the fact we were trying to get a new connection and that there had been issues, but had no update, and then proceeded to charge a disconnection fee - even though Virgin state there is no disconnection fee if you move and continue with Virgin Media.

So last week I rang and cancelled the order. And got the usual story of difficulty with access and asked to give them 48 hours to get back to me. I said no, that the customer service so far had been appalling and to cancel anyway. Which I’m informed has now been done.

Let’s be clear on what I have issue with here. While the long delay is annoying, I don’t hold Virgin entirely responsible - there is construction work going on and things slow down over Christmas (though the order was placed long enough beforehand that this really shouldn’t have impacted things). The problem is the customer service and the complete lack of any indication that Virgin are managing this process well internally - the fact the interior install team turned up when the exterior work hadn’t been completed is shocking! If Virgin had told me at the start (or once they’d done the first actual physical visit to the site and seen the situation) that there was going to be a delay, and had then been able to provide a firm date, I’d have been much more accepting. Instead, the numerous reschedules, the inability to call back and provide updates when promised and the empty assurances that exterior work would be carried out on certain dates all leave me lacking any faith in what Virgin tell me. Equally, the fact they have charged a disconnection fee when their terms state they wouldn’t is ridiculous (a complaint has been raised but, at the time of writing, the complaints team has, surprise, surprise, not been in contact). If they’re so poor when I’m trying to sign up as a new customer, why should I have any faith in their ability to provide decent support when I actually have their service?

It’s also useful to contrast my Virgin experience with 2 others. Firstly, EE who I used for 4G MiFi access. Worked out of the box, and when I rang to cancel (because I no longer need it) were quick and efficient about processing the cancellation and understood that I’d been pleased with the service but no longer needed it, so didn’t do a hard retention sell.

Secondly, I’ve ended up with FTTC over a BT Openreach line from a local Gamma reseller, MCL Services. I placed this order on December 8th, after Virgin had put my install on hold. At the point of order I had an install date of December 19th, but within 3 hours it was delayed until January 3rd. At this point I thought I was going to have similar issues, so decided to leave both orders open to see which managed to complete first. I double-checked with MCL on January 2nd that there’d been no updates, and they confirmed it was all still on and very unlikely to change. And, sure enough, on the morning of January 3rd a BT engineer turned up, after having called to give a rough ETA. He had a look around, saw the job was bigger than expected and then, instead of fobbing me off, got the job done. Which involved needing a colleague to turn up to help, running a new cable from a pole around the corner to an adjacent property and then along the wall, and installing the master socket exactly where suited me best. In miserable weather.

What did these have in common that Virgin does not? First, communication. EE were able to sort out my cancellation easily, at a time that suited me (after 7pm, when I’d got home from work and put dinner on). MCL provided all the installation details for my FTTC after I’d ordered, and let me know about the change in date as soon as BT had informed them (it helps I can email them and actually get someone who can help, rather than having to phone and wait on hold for someone who can’t). BT turned up and discovered problems and worked out how to solve them, rather than abandoning the work - while I’ve had nothing but good experiences with Virgin’s engineers up to this point there’s something wrong if they can’t sort out access to their network in 2 months. What if I’d been an existing customer with broken service?

This is a longer post than normal, and no one probably cares, but I like to think that someone in Virgin might read it and understand where my frustrations throughout this process have come from. And perhaps improve things, though I suspect that’s expecting too much and the loss of a single customer doesn’t really mean a lot to them.

Categories: LUG Community Blogs

Bring-A-Box, Saturday 11 June 2016, All Saints, Mitcham

Surrey LUG - Fri, 15/04/2016 - 19:54
Start: 2016-06-11 12:00 End: 2016-06-11 12:00

We have regular sessions on the second Saturday of each month. Bring a 'box', bring a notebook, bring anything that might run Linux, or just bring yourself and enjoy socialising/learning/teaching or simply chilling out!

This month's meeting is at the All Saints Centre, Mitcham, Surrey, CR4 4JN.

New members are very welcome. We're not a cliquey bunch, so you won't feel out of place! Usually between 15 and 30 people come along.

Categories: LUG Community Blogs

Bring-A-Box, Saturday 14th May 2016

Surrey LUG - Fri, 15/04/2016 - 19:50
Start: 2016-05-14 12:00 End: 2016-05-14 12:00

Venue to be found.  Watch this space!  No!  Better still, find a venue and discuss it on the mailing list!

Categories: LUG Community Blogs

Bring-A-Box, Saturday 9th April 2016, Station pub, W Byfleet

Surrey LUG - Thu, 07/04/2016 - 16:04
Start: 2016-04-09 12:00 End: 2016-04-09 12:00

We have regular sessions on the second Saturday of each month. Bring a 'box', bring a notebook, bring anything that might run Linux, or just bring yourself and enjoy socialising/learning/teaching or simply chilling out!

This month's meeting is at the Station Pub in West Byfleet, Surrey.

New members are very welcome. We're not a cliquey bunch, so you won't feel out of place! Usually between 15 and 30 people come along.

Categories: LUG Community Blogs