News aggregator

Chris Lamb: Free software activities in September 2015

Planet ALUG - Wed, 30/09/2015 - 22:23

Inspired by Raphaël Hertzog, here is a monthly update covering a large part of what I have been doing in the free software world:


The Reproducible Builds project was also covered in depth on LWN as well as in Lunar's weekly reports (#18, #19, #20, #21, #22).

  • redis — A new upstream release, as well as overhauling the systemd configuration, maintaining feature parity with sysvinit and adding various security hardening features.
  • python-redis — Attempting to get its Debian Continuous Integration tests to pass successfully.
  • libfiu — Ensuring we do not FTBFS under exotic locales.
  • gunicorn — Dropping a dependency on python-tox now that tests are disabled.
RC bugs

I also filed FTBFS bugs against actdiag, actdiag, bangarang, bmon, bppphyview, cervisia, choqok, cinnamon-control-center, clasp, composer, cpl-plugin-naco, dirspec, django-countries, dmapi, dolphin-plugins, dulwich, elki, eqonomize, eztrace, fontmatrix, freedink, galera-3, golang-git2go, golang-github-golang-leveldb, gopher, gst-plugins-bad0.10, jbofihe, k3b, kalgebra, kbibtex, kde-baseapps, kde-dev-utils, kdesdk-kioslaves, kdesvn, kdevelop-php-docs, kdewebdev, kftpgrabber, kile, kmess, kmix, kmldonkey, knights, konsole4, kpartsplugin, kplayer, kraft, krecipes, krusader, ktp-auth-handler, ktp-common-internals, ktp-text-ui, libdevice-cdio-perl, libdr-tarantool-perl, libevent-rpc-perl, libmime-util-java, libmoosex-app-cmd-perl, libmoosex-app-cmd-perl, librdkafka, libxml-easyobj-perl, maven-dependency-plugin, mmtk, murano-dashboard, node-expat, node-iconv, node-raw-body, node-srs, node-websocket, ocaml-estring, ocaml-estring, oce, odb, oslo-config, oslo.messaging, ovirt-guest-agent, packagesearch, php-svn, php5-midgard2, phpunit-story, pike8.0, plasma-widget-adjustableclock, plowshare4, procps, pygpgme, pylibmc, pyroma, python-admesh, python-bleach, python-dmidecode, python-libdiscid, python-mne, python-mne, python-nmap, python-nmap, python-oslo.middleware, python-riemann-client, python-traceback2, qdjango, qsapecng, ruby-em-synchrony, ruby-ffi-rzmq, ruby-nokogiri, ruby-opengraph-parser, ruby-thread-safe, shortuuid, skrooge, smb4k, snp-sites, soprano, stopmotion, subtitlecomposer, svgpart, thin-provisioning-tools, umbrello, validator.js, vdr-plugin-prefermenu, vdr-plugin-vnsiserver, vdr-plugin-weather, webkitkde, xbmc-pvr-addons, xfsdump & zanshin.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: French Clothing Sizes

Planet HantsLUG - Sun, 27/09/2015 - 11:24

On Friday we spent several hours and quite a bit of money buying stuff in Decathlon. We needed some specific things and some things were on sale after the summer. Owing to my body shape change as a result of losing nearly 20 kg this year, a lot of my clothes - regular and sport - don't fit properly. Some I've been able to alter and some I can get away with, but some are now uncomfortable to wear or look absurd...

Decathlon design a lot of their own kit and then have it made all over the world. In that respect they are no different from many other companies, both British and foreign. What is striking, though, is that unlike British and American brands, the stated size is usually the actual size rather than a vanity size. For example, to buy M&S or Next trousers I need to buy one size smaller than the quoted size or they fall down, but at Decathlon I just need to buy the correct size and they fit...

Categories: LUG Community Blogs

Jonathan McDowell: New GPG key

Planet ALUG - Thu, 24/09/2015 - 14:45

Just before I went to DebConf15 I got around to setting up my gnuk with the latest build (1.1.7), which supports 4K RSA keys. As a result I decided to generate a new certification-only primary key, using a live CD on a non-networked host and ensuring the raw key was only ever used in this configuration. The intention is that in general I will use the key via the gnuk, ensuring no danger of leaking the key material.

I took part in various key signings at DebConf and the subsequent UK Debian BBQ, and finally today got round to dealing with the key slips I had accumulated. I’m sure I’ve missed some people off my signing list, but at least now the key should be embedded into the strong set of keys. Feel free to poke me next time you see me if you didn’t get mail from me with fresh signatures and you think you should have.

Key details are:

pub   4096R/0x21E278A66C28DBC0 2015-08-04 [expires: 2018-08-03]
      Key fingerprint = 3E0C FCDB 05A7 F665 AA18 CEFA 21E2 78A6 6C28 DBC0
uid                 [ full ] Jonathan McDowell <>

I have no reason to assume my old key (0x94FA372B2DA8B985) has been compromised, and will for now continue to use that key. Also, for the new key I have not generated any subkeys as yet, which caff handles OK but emits a warning about unencrypted mail. Thanks to those of you who sent me signatures despite this.

[Update: I was asked about my setup for the key generation, in particular how I ensured enough entropy, given that it was a fresh boot and without networking there were limited entropy sources available to the machine. I made the decision that the machine’s TPM and the use of tpm-rng and rng-tools was sufficient (i.e. I didn’t worry overly about the TPM being compromised for the purposes of feeding additional information into the random pool). Alternative options would have been flashing the gnuk with the NeuG firmware or using my Entropy Key.]

Categories: LUG Community Blogs

Feelings of an imposter

Planet SurreyLUG - Tue, 22/09/2015 - 10:00
I recently came across a question that was asked: do I know of any women who consider themselves to be an imposter? For the life of me I cannot find the thread. But I did comment, and here are my thoughts: I do think sometimes women do feel like an imposter, a phony, someone who shouldn’t […]
Categories: LUG Community Blogs

Home Working – Co Working with Friends

Planet SurreyLUG - Tue, 22/09/2015 - 00:22
I’ve been working from home (WFH) for the last few years and I love it! No more commuting, no more dealing with trains not working and delays. Each day I get up and go to my designated working space in my home. It suits me and I’m rather used to it. I do know many […]
Categories: LUG Community Blogs

Jonathan McDowell: Getting a Dell E7240 working with a dock + a monitor

Planet ALUG - Mon, 21/09/2015 - 20:29

I have a Dell E7240. I’m pretty happy with it - my main complaint is that it has a very shiny screen, and that seems to be because it’s the touchscreen variant. While I don’t care about that feature, I do care about the fact that it means I get FullHD in 12.5”.

Anyway. I’ve had issues with using a dock and an external monitor with the laptop for some time, including getting so far as mentioning the problems on the appropriate bug tracker. I’ve also had discussions with a friend who has the same laptop with the same issues, and has spent some time trying to get it to work reliably. However, up until this week I haven’t had a desk I’m sitting at for any length of time to use the laptop, so it’s always been low priority for me. Today I sat down to try and figure out if there had been any improvement.

Firstly I knew the dock wasn’t at fault. A Dell E6330 works just fine with multiple monitors on the same dock. The E6330 is Ivybridge, while the E7240 is Haswell, so I thought potentially there might be an issue going on there. Further digging revealed another wrinkle I hadn’t previously been aware of; there is a DisplayPort Multi-Stream Transport (MST) hub in play, in particular a Synaptics VMM2320. Dell have a knowledge base article about Multiple external display issues when docked with a Latitude E7440/E7240 which suggests a BIOS update (I was already on A15) and a firmware update for the MST HUB. Sadly the firmware update is Windows only, so I had to do a suitable dance to be able to try and run it. I then discovered that the A05 update refused to work, complaining I had an invalid product ID. The A04 update did the same. The A01 update thankfully succeeded and told me it was upgrading from 2.00.002 to 2.15.000. After that had completed (and I’d power cycled to switch to the new firmware) I tried A05 again and this time it worked and upgraded me to 2.22.000.

Booting up Linux again I got further than before; it was definitely detecting that there was a monitor but it was very unhappy with lots of [drm:intel_dp_start_link_train] *ERROR* too many full retries, give up errors being logged. This was with 4.2, and as I’d been meaning to try 4.3-rc2 I thought this was a good time to give it a try. Lo and behold, it worked! Even docking and undocking does what I’d expect, with the extra monitor appearing / disappearing as you’d expect.

Now, I’m not going to assume this means it’s all happy, as I’ve seen this sort-of work in the past, but the clue about MST, the upgrade of that firmware (and noticing that it made things better under Windows as well) and the fact that there have been improvements in the kernel’s MST support according to the post 4.2 log gives me some hope that things will be better from here on.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Twofer

Planet ALUG - Wed, 16/09/2015 - 23:33

After toying with the idea for some time, I decided I'd try setting up 2FA on my laptop. As usual, the arch wiki had a nicely written article on setting up 2FA with the PAM module for Google Authenticator.

I followed the instructions for setting up 2FA for ssh and that worked seamlessly so I decided I'd then go the whole hog and enable the module in /etc/pam.d/system-auth which would mean I'd need it any time I had to login at all.

Adding the line:

auth sufficient pam_google_authenticator.so

had the expected effect that I could log in with just the verification code, but that seems to defeat the point a little, so I bit my lip and changed sufficient to required, which would mean I'd need both my password and the code on login.
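For reference, the difference between the two behaviours is a single PAM control keyword; a sketch, assuming the module in question is the standard pam_google_authenticator.so:

```
# A valid verification code alone is sufficient to log in:
auth sufficient pam_google_authenticator.so

# The code is required in addition to the rest of the auth stack (i.e. the password):
auth required   pam_google_authenticator.so
```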

I switched to another VT and went for it. It worked!

So then I rebooted.

And I couldn't log in.

After a couple of minutes spent downloading an ISO to boot from using another machine, putting it on a USB stick, booting from it, and editing my system-auth file, I realised why:

auth required pam_google_authenticator.so
auth required pam_unix.so try_first_pass nullok
auth required pam_ecryptfs.so unwrap

My home partition is encrypted and so the Google authenticator module obviously couldn't load my secret file until I'd already logged in.

I tried moving the line to the bottom of the auth group but that didn't work either.

How could this possibly go wrong...

So, the solution I came up with was to put the 2fa module into the session group. My understanding is that this will mean PAM will ask me to supply a verification code once per session which is fine by me; I don't want to have to put a code in every time I sudo anyway.

My question is, will my minor abuse of PAM bite me in the arse at any point? It seems to do what I expected, even if I log in through GDM.

Here's my current system-auth file:

#%PAM-1.0
auth      required   pam_unix.so      try_first_pass nullok
auth      required   pam_ecryptfs.so  unwrap
auth      optional   pam_permit.so
auth      required   pam_env.so
account   required   pam_unix.so
account   optional   pam_permit.so
account   required   pam_time.so
password  optional   pam_ecryptfs.so
password  required   pam_unix.so      try_first_pass nullok sha512 shadow
password  optional   pam_permit.so
session   required   pam_limits.so
session   required   pam_unix.so
session   optional   pam_ecryptfs.so  unwrap
session   optional   pam_permit.so
session   required   pam_google_authenticator.so
Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Vanity Sizing

Planet HantsLUG - Wed, 16/09/2015 - 12:32

Owing to a reduction in my body mass I've been forced to buy new clothing. It's been an expensive process and annoying to replace otherwise perfectly usable clothes...

I happened to have some older clothes that were not used much as they had previously been a little on the small side. The good news is that I've been able to wear them more regularly, so I've avoided buying a few things!

One thing does annoy me. Unlike women's clothing, which uses strange numbers, most men's clothing uses simple actual measurements: a waist of x inches / y centimetres. It's a fairly straightforward system; you know your waist size and should be able to buy off-the-peg clothing without too much of a worry. However, it's not true anymore! My older clothes, when measured with a tape, are the size the label says. The modern ones vary wildly - though they all seem to be much larger than the label says. For example:

  • 20 year old pairs of M&S St. Michael Jeans, says 34", are 34"
  • 20 year old pairs of BHS chinos, says 34", are 34"
  • Modern M&S relaxed fit chinos, says 36", at least 38", more as they age
  • Modern Craghoppers Kiwi walking trousers, says 30", more like 33"

Some clothes, like jeans and those with elasticated bits, do get larger with wear and washing, but even so modern clothes seem to be at least one size (2"/5 cm) larger than the label says they should be. Older clothing from the same company is the size it says...

At the moment my real size is about a 33"/84 cm waist, which is a pain as British men's trousers don't come in odd sizes, so it's either a real size of 32" (tight) or 34" (loose), but in vanity sizing that could be anything from 30"...

Categories: LUG Community Blogs

Steve Kemp: All about sharing files easily

Planet HantsLUG - Sun, 13/09/2015 - 12:39

Although I've been writing a bit recently about file-storage, this post is about something much more simple: Just making a random file or two available on an ad-hoc basis.

In the past I used to have my email and website(s) hosted on the same machine, and that machine was well connected. Making a file visible just involved running ~/bin/publish, which used scp to write a file beneath an apache document-root.

These days I use "my computer", "my work computer", and "my work laptop", amongst other hosts. The SSH-keys required to access my personal boxes are not necessarily available on all of these hosts. Add in firewall constraints and suddenly there isn't an obvious way for me to say "Publish this file online, and show me the root".

I asked on twitter but nothing useful jumped out. So I ended up writing a simple server, via sinatra which would allow:

  • Login via the site, and a browser. The login-form looks sexy via bootstrap.
  • Upload via a web-form, once logged in. The upload-form looks sexy via bootstrap.
  • Or, entirely separately, with HTTP basic auth and an HTTP POST (i.e. curl)

This worked, and was even secure enough, given that it runs over SSL if you import my CA file.

But using basic auth felt like cheating, and I've been learning more Go recently, and I figured I should start taking it more seriously, so I created a small repository of learning-programs. The learning programs started out simply, but I did wire up a simple TOTP authenticator.

Having TOTP available made me rethink things - suddenly, even if you're not using SSL, an eavesdropper doesn't compromise future uploads.

I'd also spent a few hours working out how to make extensible commands in go, the kind of thing that lets you run:

cmd sub-command1 arg1 arg2
cmd sub-command2 arg1 .. argN

The solution I came up with wasn't perfect, but it did work and allowed the separation of the different sub-command logic.

So suddenly I have the ability to run "subcommands", and the ability to authenticate against a time-based secret. What is next? Well, the hard part with golang is that there are so many things to choose from - I went with gorilla/mux as my HTTP-router, then I spent several hours filling in the blanks.

The upshot is now that I have a TOTP-protected file upload site:

publishr init   - Generates the secret
publishr secret - Shows you the secret for import to your authenticator
publishr serve  - Starts the HTTP daemon

Other than a lack of comments, and test-cases, it is complete. And stand-alone. Uploads get dropped into ./public, and short-links are generated for free.

If you want to take a peek the code is here:

The only annoyance is the handling of dependencies - which need to be "go get ..". I guess I need to look at godep or similar for my next learning project.

I guess there's a minor gain in making this service available via golang. I've gained protection against replay attacks (assuming a non-SSL environment), and I've simplified deployment. The downside is I can no longer log in over the web, and I must use curl, or similar, to upload. An acceptable trade-off.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Weight Reduction Rate

Planet HantsLUG - Sun, 13/09/2015 - 11:21

Over the weekend I recalculated my weight reduction rate target. As you lose weight your BMR falls; mine has come down by about 700 kJ. That means that to lose weight at the same rate as before I have to eat even less than I was previously. That means eating such a low energy diet that I'd probably miss important stuff out, and I could start to lose muscle mass rather than fat.

Since hitting my weight plateau a few weeks ago I've been careful to not over indulge and to push harder on the bike. Re-plotting my weight against target on the new weekly reduction rate of 550 g per week rather than 750 g per week has resulted in a more realistic trajectory that I'm sticking to. Even after my holiday I'm still on target and should hit a healthy weight at the end of November this year.

One problem I do face is clothing. Lots of my clothes now fit me like a tent. Trousers fall down and shirts flap about in the wind... I've bought some smaller clothing, men's size small or medium rather than large or extra-large as previously, but I'm waiting until I reach my healthy target weight so I don't end up with new clothes that are too large. One problem I will face is that, in Basingstoke at least, I can't buy men's casual trousers in a small enough size in any of the local department stores; they don't stock anything small enough! Jeans I can get, as they sell them to teenagers, who should be smaller than full-grown men, but they aren't really allowed for work...

Categories: LUG Community Blogs

Chris Lamb: Joining strings in POSIX shell

Planet ALUG - Thu, 10/09/2015 - 21:18

A common programming task is to glue (or "join") items together to create a single string. For example:

>>> ', '.join(['foo', 'bar', 'baz'])
'foo, bar, baz'

Notice that we have three items but only two commas — this can be important if the tool we're passing the result to doesn't support trailing delimiters, or if we simply want the result to be human-readable.

Unfortunately, this can be inconvenient in POSIX shell where we construct strings via explicit concatenation. A naïve solution of:

RESULT=""
for X in foo bar baz
do
    RESULT="${RESULT}, ${X}"
done

... incorrectly returns ", foo, bar, baz". We can solve this with a (cumbersome) counter or flag to only attach the delimiter when we need it:

COUNT=0
RESULT=""
for X in foo bar baz
do
    if [ "${COUNT}" = 0 ]
    then
        RESULT="${X}"
    else
        RESULT="${RESULT}, ${X}"
    fi
    COUNT=$((COUNT + 1))
done

One alternative is to use the little-known ":+" expansion modifier. Many people are familiar with ":-" for returning default values:

$ echo ${VAR:-fallback}

By contrast, the ":+" modifier inverts this logic, returning the fallback if the specified variable is actually set. This results in the elegant:

RESULT=""
for X in foo bar baz
do
    RESULT="${RESULT:+${RESULT}, }${X}"
done
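The same idiom drops neatly into a reusable helper; a minimal sketch (not from the original post; join_by is an illustrative name):

```shell
#!/bin/sh
# join_by DELIM ITEM... - join the remaining arguments with DELIM,
# using the ":+" modifier so no leading delimiter is ever produced.
join_by() {
    DELIM="$1"
    shift
    RESULT=""
    for X in "$@"
    do
        RESULT="${RESULT:+${RESULT}${DELIM}}${X}"
    done
    printf '%s\n' "${RESULT}"
}

join_by ", " foo bar baz    # prints "foo, bar, baz"
```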
Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Cambridge Gage Jam

Planet HantsLUG - Wed, 09/09/2015 - 22:11

Last year our gage tree (probably a Cambridge Gage) had plenty of fruit but they were all inedible. This year it had plenty of fruit, so much so that as fast as we collect it there is even more ready to collect...

It's been a while since we had gages to jam. I used the same method as previously, though I added a fraction more sugar as the fruit wasn't fully ripe. Today's batch was 1.7 kg of fruit (cleaned and destoned), 350 g water and 1.2 kg of sugar, plus the usual juice of a frozen and defrosted lemon. The yield was pretty good and as we have loads of fruit left, even after we give some of them away I'll do another batch later this week.

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): Orchestration, a cry for help

Planet ALUG - Tue, 08/09/2015 - 15:02

Over the past few years, a plethora of orchestration frameworks have been exploding onto the scene. Many have been around for quite a while but not all have the same sort of community behind them. For example there's a very interesting option in Joey Hess' Propellor but that is hurt by needing to be able to build Propellor on all the hosts you manage. On the other hand, Ansible is able to operate without installing extra software on your target hosts, but instead it ends up very latency-bound which can cause problems when your managed hosts are "far away".

I have considered CFEngine, Chef, Puppet and Salt in addition to the above mentioned options, but none of them feel quite right to me. I am looking for a way to manage a small number of hosts, at least one of which is not always online (my laptop) and all of which are essentially snowflakes whose sparkleybits I want some reasonable control over.

I have a few basic requirements which I worry would be hard to meet -- I want to be able to make changes to my hosts by editing a file and committing/pushing it to a git server. I want to be able to manage a host entirely over SSH from one or more systems, ideally without having to install the orchestration software on the target host, but where if the software is present it will get used to accelerate matters. I don't want to have to install Ruby or PHP on any system in order to have orchestration, and some of the systems I wish to manage simply can't compile Haskell stuff sanely. I'm not desperately interested in learning yet more DSLs; I appreciate that learning one will be necessary, but I really don't want to have to learn more than one DSL simply to run one framework.

I don't want to have to learn strange and confusing combinations of file formats. For example, Ansible quite sensibly uses YAML for its structured data except for its host/group lists. It uses Jinja2 for its templating and looping, except for some things, for which it generates its own looping constructs inside its YAML. I also personally find Ansible's sportsball-oriented terminology confusing, but that might just be me.

So what I'm hoping is that someone will be able to point me at a project which combines all the wonderful features of the above, with a need to learn only one DSL, which doesn't need to be installed on the managed host but can benefit from being so installed, is driven from git, and won't hurt my already overly burdened brain.

Dear Lazyweb, pls. kthxbye.

Categories: LUG Community Blogs

Steve Kemp: The Jessie 8.2 point-release broke for me

Planet HantsLUG - Mon, 07/09/2015 - 09:37

I have about 18 personal hosts, all running the Jessie release of Debian GNU/Linux. To keep up with security updates I use unattended-upgrades.

The intention is that every day, via cron, the system will look for updates and apply them. Although I mostly expect it to handle security updates I also have it configured such that point-releases will be applied by magic too.
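For reference, that behaviour comes down to the origins patterns in the unattended-upgrades configuration. A minimal sketch of the relevant fragment, assuming the stock Debian file (/etc/apt/apt.conf.d/50unattended-upgrades); the exact patterns are an assumption and vary between releases:

```
Unattended-Upgrade::Origins-Pattern {
        // Security updates:
        "origin=Debian,codename=${distro_codename},label=Debian-Security";
        // Stable point-release updates as well:
        "origin=Debian,codename=${distro_codename},label=Debian";
};
```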

Unfortunately this weekend, with the 8.2 release, things broke in a significant way - the cron daemon was left in a broken state, such that all cron jobs failed to execute.

I was amazed that nobody had reported a bug, as several people on twitter had the same experience as me, but today I read through a lot of bug-reports and discovered that #783683 is to blame:

  • Old-cron runs.
  • Scheduled unattended-upgrades runs.
  • This causes cron to restart.
  • When cron restarts the jobs it was running are killed.
  • The system is in a broken state.

The solution:

# dpkg --configure -a
# apt-get upgrade

I guess the good news is I spotted it promptly. With the benefit of hindsight the bug report does warn of this as a concern, but I guess there wasn't a great solution.

Anyway I hope others see this, or otherwise spot the problem themselves.


In unrelated news the seaweedfs file-store I previously introduced is looking more and more attractive to me.

I reported a documentation-related bug, which was promptly handled (even though it turned out I was wrong), and I contributed CIDR support for whitelisting hosts, which was merged in well.

I've got a two-node "cluster" setup at the moment, and will be expanding that shortly.

I've been doing a lot of little toy-projects in Go recently. This weekend I was mostly playing with the message-bus, and tying it together with sinatra.

Categories: LUG Community Blogs

Back online

West Yorkshire LUG News - Fri, 04/09/2015 - 15:19

To those of you who may have been worried that WYLUG had slipped beneath the waves: I can confirm that I am able to use email again and can hand out usernames etc.

An Apology

West Yorkshire LUG News - Thu, 03/09/2015 - 13:20

To those of you who were expecting to receive email from me with passwords and other details: unfortunately, due to a botched update on my machine (not the one that runs this web site), I have been unable to send email for some time. I will try updating my own machine again and try to sort things out.

Linux Microsoft Skype for Business / Lync 2013 Client

Planet SurreyLUG - Wed, 02/09/2015 - 09:14

I was surprised to learn that Ubuntu 14.04 can talk to Skype for Business AKA Lync 2013 using the Pidgin Instant Messaging client. The general steps were:
# apt-get install pidgin pidgin-sipe

And then restart Pidgin and add a new Account. The Office Communicator is the relevant plugin, with the following parameters:

  • Protocol: Office Communicator
  • Username: Your Office 365 or Skype for Business username – probably your email address
  • Password: Your password is obviously required – and will be stored unencrypted in the config file, so you may wish to leave this blank and enter at each login
  • Server[:Port]: Leave empty if your set-up has autodiscovery
  • Connection type: Auto
  • User Agent: UCCAPI/15.0.4420.1017 OC/15.0.4420.1017
  • Authentication scheme: TLS-DSK

I am unclear why the user agent is required, and whether that will need to change from time to time or not. So far it has worked fine here.

Unfortunately a few days ago the above set-up stopped working, with “Failed to authenticate with server”. It seems that you must now use version 1.20 of the Sipe plugin, which fixes “Office365 rejects RC4 in TLS-DSK”. As this version was only completed three days ago, it is not yet available in any of the Ubuntu repositories that I have been able to find, so you will probably have to compile it yourself.

Broadly speaking I followed these key stages:

  1. Install build tools if you don’t already have them (apt-get install build-essential).
  2. Install checkinstall if you don’t already have it (apt-get install checkinstall).
  3. Download source files.
  4. Extract source.
  5. Change into source directory.
  6. Read carefully the README file in the source directory.
  7. Install the dependencies listed in the README:

# apt-get install libpurple-dev libtool intltool pkg-config libglib2.0-dev \
libxml2-dev libnss3-dev libssl-dev libkrb5-dev libnice-dev libgstreamer0.10-dev

These dependencies may change over time, and your particular requirements may be different from mine, so please read the README and that information should take precedence.

Lastly, as an ordinary user, you should now be able to compile. If it fails at any stage, simply read the error and install the missing dependency.

$ ./configure --prefix=/usr
$ make
$ sudo checkinstall

I found checkinstall was pre-populated with sensible settings, and I was able to continue without making any changes. Once complete a Debian package will have been created in the current directory, but it will have already been installed for you.

For some reason I found that at this stage Pidgin would no longer run, as it was now named /usr/bin/pidgin.orig instead of /usr/bin/pidgin, I tried removing and reinstalling pidgin but to no avail. In the end I created a symlink (ln -s /usr/bin/pidgin.orig /usr/bin/pidgin), but you should not do this unless you experience the same issue. If you know the reason for this I would be delighted to receive your feedback, as this isn’t a problem that I have come across before.

Restarting Pidgin and the Office Communicator sprung into life once more. Sadly I would imagine that this won’t be the last time this plugin will break, such are the vagaries of connecting to closed proprietary networks.

Categories: LUG Community Blogs

Debian Bits: New Debian Developers and Maintainers (July and August 2015)

Planet HantsLUG - Tue, 01/09/2015 - 11:45

The following contributors got their Debian Developer accounts in the last two months:

  • Gianfranco Costamagna (locutusofborg)
  • Graham Inggs (ginggs)
  • Ximin Luo (infinity0)
  • Christian Kastner (ckk)
  • Tianon Gravi (tianon)
  • Iain R. Learmonth (irl)
  • Laura Arjona Reina (larjona)

The following contributors were added as Debian Maintainers in the last two months:

  • Senthil Kumaran
  • Riley Baird
  • Robie Basak
  • Alex Muntada
  • Johan Van de Wauw
  • Benjamin Barenblat
  • Paul Novotny
  • Jose Luis Rivero
  • Chris Knadle
  • Lennart Weller


Categories: LUG Community Blogs

Bring-A-Box 12th September 2015, Red Hat

Surrey LUG - Mon, 31/08/2015 - 19:12
Start: 2015-09-12 11:00 End: 2015-09-12 17:00

We have regular sessions each month. Bring a 'box', bring a notebook, bring anything that might run Linux, or just bring yourself and enjoy socialising/learning/teaching or simply chilling out!

Back to the excellent Red Hat offices in Farnborough, Hampshire on Saturday 12th September - thanks to Dominic Cleal for hosting us.

Categories: LUG Community Blogs

Jonathan McDowell: Random post-DebConf 15 thoughts

Planet ALUG - Mon, 24/08/2015 - 15:18

There are a bunch of things I mean to blog about, but as I have just got fully home from Heidelberg and DebConf15 this afternoon that seems most appropriate to start with. It’s a bit of a set of disjoint thoughts, but I figure I should write them down while they’re in my head.

DebConf is an interesting conference. It’s the best opportunity the Debian project has every year to come together and actually spend a decent amount of time with each other. As a result it’s a fairly full on experience, with lots of planned talks as a basis and a wide range of technical discussions and general social interaction filling in whatever gaps are available. I always find it a thoroughly enjoyable experience, but equally I’m glad to be home and doing delightfully dull things like washing my clothes and buying fresh milk.

I have always been of the opinion that the key aspect of DebConf is the face time. It was thus great to see so many people there - we were told several times that this was the largest DebConf so far (~ 570 people IIRC). That’s good in the sense that it meant I got to speak to a lot of people (both old friends and new), but does mean that there are various people I know I didn’t spend enough, or in some cases any, time with. My apologies, but I think many of us were in the same situation. I don’t feel it made the conference any less productive for me - I managed to get a bunch of hacking done, discuss a number of open questions in person with various people and get pulled into various interesting discussions I hadn’t expected. In short, a typical DebConf.

Also I’d like to say that the venue worked out really well. I’ll admit I was dubious when I heard it was in a hostel, but it was well located (about a 30 minute walk into town, and a reasonable bus service available from just outside the door), self-contained with decent facilities (I’m a big believer in having DebConf talks + accommodation be as close as possible to each other) and the room was much better than expected (well, aside from the snoring but I can’t blame the DebConf organisers for that).

One of the surprising and interesting things for me that was different from previous DebConfs was the opportunity to have more conversations with a legal leaning. I expect to go to DebConf and do OpenPGP/general crypto related bits. I wasn’t expecting affirmation about the things I have learnt on my course over the past year, in terms of feeling that I could use that knowledge in the process of helping Debian. It provided me with some hope that I’ll be able to tie my technology and law skills together in a way that I will find suitably entertaining (as did various conversations where people expressed significant interest in the crossover).

Next year is in Cape Town, South Africa. It’s a long way (though I suppose no worse than Portland and I get to stay in the same time zone), and a quick look at flights indicates they’re quite expensive at the moment. The bid presentation did look pretty good though so as soon as the dates are confirmed (I believe this will happen as soon as there are signed contracts in place) I’ll take another look at flights.

In short, excellent DebConf, thanks to the organisers, lovely to see everyone I managed to speak to, apologies to those of you I didn’t manage to speak to. Hopefully see you in Cape Town next year.

Categories: LUG Community Blogs