Planet HantsLUG


Debian Bits: Tails installer is now in Debian

Thu, 11/02/2016 - 13:30

Tails (The Amnesic Incognito Live System) is a live OS based on Debian GNU/Linux which aims to preserve the user's privacy and anonymity: it helps you use the Internet anonymously and circumvent censorship. Installed on a USB device, it is configured to leave no trace on the computer you are using unless asked to explicitly.

As of today, the people most in need of digital security are not computer experts. Being able to get started easily with a new tool is critical to its adoption, even more so in high-risk and stressful environments. That's why we wanted to make it faster, simpler, and more secure for new users to install Tails.

One of the components of Tails, the Tails Installer, is now in Debian thanks to the Debian Privacy Tools Maintainers Team.

Tails Installer is a graphical tool to install or upgrade Tails on a USB stick from an ISO image. It aims to make it easier and faster to get Tails up and running.

The previous process for getting started with Tails was very complex and was problematic for less tech-savvy users. It required starting Tails three times, and copying the full ISO image onto a USB stick twice before having a fully functional Tails USB stick with persistence enabled.

This can now be done simply by installing Tails Installer on your existing Debian system (from sid, stretch, or jessie-backports), plugging in a USB stick, and choosing whether to upgrade the stick or to install Tails from a previously downloaded ISO image.
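As a rough sketch of the jessie route (the mirror URL below is an assumption, and on sid or stretch a plain install without `-t` should be enough), you would first enable backports:

```
# /etc/apt/sources.list.d/backports.list  (mirror URL is an assumption)
deb http://httpredir.debian.org/debian jessie-backports main
```

followed by `apt-get update` and `apt-get install -t jessie-backports tails-installer`.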

Tails Installer also helps Tails users to create an encrypted persistent storage for personal files and settings in the rest of the available space.

Categories: LUG Community Blogs

Steve Kemp: Redesigning my clustered website

Sun, 07/02/2016 - 10:28

I'm slowly planning the redesign of the cluster which powers the Debian Administration website.

Currently the design is simple, and looks like this:

In brief there is a load-balancer that handles SSL-termination and then proxies to one of four Apache servers. These talk back and forth to a MySQL database. Nothing too shocking, or unusual.

(In truth there are two database servers, and rather than a single installation of HAProxy, one runs upon each of the webservers, with the master chosen via ucarp. Logically, though, traffic routes through HAProxy to a number of Apache instances. I can lose half of the servers and things still keep running.)

When I set up the site it all ran on one host; that was simpler, but less highly available. It also struggled to cope with the load.

Half the reason for writing/hosting the site in the first place was to document learning experiences, though, so when the time came to make it scale I figured why not learn something and do it neatly? Having it run on cheap and reliable virtual hosts was a good excuse to bump the server-count, and the design has been stable for the past few years.

Recently though I've begun planning how it will be deployed in the future and I have a new design:

Rather than having the Apache instances talk to the database I'll indirect through an API-server. The API server will handle requests like these:

  • POST /users/login
    • POST a username/password; return 200 if valid, 403 if the details are bogus, and 404 if the user doesn't exist.
  • GET /users/Steve
    • Return a JSON hash of user-information.
    • Return 404 on invalid user.

I expect to have four API handler endpoints: /articles, /comments, /users & /weblogs. Again we'll use a floating IP and an HAProxy instance to route to multiple API-servers, each of which will use local caching for articles, etc.
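As a toy sketch of the /users/login status rules described above (the username and password here are invented for illustration; the real API will consult the database):

```shell
# Toy model of the planned login endpoint's status codes:
# 200 = valid credentials, 403 = bad password, 404 = unknown user.
login_status() {
    local user=$1 pass=$2
    case $user in
        Steve) if [ "$pass" = "s3cret" ]; then echo 200; else echo 403; fi ;;
        *)     echo 404 ;;
    esac
}

login_status Steve s3cret   # → 200
login_status Steve wrong    # → 403
login_status Nobody x       # → 404
```

The point is that the Apache layer only ever sees small, cacheable status/JSON responses rather than raw database rows.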

This should turn the middle layer, running on Apache, into something simpler, and increase throughput. I suspect, but haven't confirmed, that making a single HTTP request to fetch a (formatted) article body will be cheaper than making N database queries.

Anyway that's what I'm slowly pondering and working on at the moment. I wrote a proof of concept API-server based CMS two years ago, and my recollection of that time is that it was fast to develop, and easy to scale.

Categories: LUG Community Blogs

Andy Smith: Your Debian netboot suddenly can’t do Ext4?

Fri, 05/02/2016 - 09:50

If, like me, you’ve just done a Debian netboot install over PXE and discovered that the partitioner suddenly seems to have no option for Ext4 filesystem (leaving only btrfs and XFS), despite the fact that it worked fine a couple of weeks ago, do not be alarmed. You aren’t losing your mind. It seems to be a bug.

As the comment says, downloading netboot.tar.gz version 20150422+deb8u3 fixes it. You can find your version in the debian-installer/amd64/boot-screens/f1.txt file. I was previously using 20150422+deb8u1 and the commenter was using 20150422+deb8u2.
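Checking which build you are serving can be scripted; a small sketch (the f1.txt path comes from the post, but the TFTP-root argument you pass in is whatever your own setup uses):

```shell
# Print the netboot build stamp (e.g. 20150422+deb8u3) found in the
# boot-screens/f1.txt file under the given TFTP root.
netboot_version() {
    grep -o '20150422+deb8u[0-9]*' \
        "$1/debian-installer/amd64/boot-screens/f1.txt"
}
```

For example `netboot_version /srv/tftp` (the `/srv/tftp` path is an assumption).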

Looking at the dates on the files I'm guessing this broke on 23rd January 2016. There was a Debian point release around then, so possibly you are supposed to download a new netboot.tar.gz with each one – I'm not sure. If that is the case, though, it would still be nice to be told you're doing something wrong, as opposed to having the installer appear to proceed normally except for denying the existence of any filesystems other than XFS and btrfs.

Oh, and don’t forget to restart your TFTP daemon. tftpd-hpa at least seems to cache things (or maybe hold the tftp directory open – I had just moved the old directory out of the way), so I was left even more confused when it still seemed to be serving 20150422+deb8u1.

Categories: LUG Community Blogs

Steve Kemp: Best practice - Don't serve writeable PHP files

Tue, 02/02/2016 - 19:10

I deal with compromises of PHP-based websites often enough that I want to improve their hardening.

One obvious way to improve things is to not serve PHP files which are writeable by the webserver-user. This would ensure that things like wp-content/uploads didn't get served as PHP if a compromise wrote valid PHP there.

In the past using php5-suhosin would have allowed this via the suhosin.executor.include.allow_writable_files flag.

Since suhosin is no longer supported under Debian Jessie I wonder if there is a simple way to achieve this?

I've written a toy-module which allows me to call stat on every request, and return a 403 on access to writeable files/directories. But it seems like I shouldn't need to write my own code for this functionality.
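In the meantime a periodic audit can at least spot the dangerous files. A sketch, not the missing Apache module: list PHP files under a docroot that are group- or world-writeable (catching the webserver-owner `u+w` case as well would additionally need something like `-user www-data`, which is an assumption about your setup):

```shell
# List PHP files under the given directory that are group- or
# world-writeable - prime candidates for a compromise to overwrite.
find_writable_php() {
    find "$1" -type f -name '*.php' -perm /g+w,o+w
}
```

For example `find_writable_php /var/www` (the docroot path is an assumption).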

Any pointers welcome; I'm happy to post my code if that would be useful, but I suspect not - it just shouldn't need to exist.

Categories: LUG Community Blogs

Steve Kemp: So life in Finland goes on

Wed, 20/01/2016 - 15:50

So after living here in Finland for 6 months I've now bought a flat.

We have a few days to sort out mortgage paperwork, and assuming there are no problems we'll be moving into the new place on/around the 1st of March.

Finally I'll be living in Finland, with a sauna of my very own.

Interesting times.

In more developer-friendly news I made a new release of Lumail with the integrated support for IMAP. Let us hope people like it.

Categories: LUG Community Blogs

Steve Kemp: Lumail has IMAP .. almost

Sat, 16/01/2016 - 12:16

A couple of years ago I was dissatisfied with mutt, mostly because the mutt-sidebar patch was dropped from the Debian package. That led me to think "How hard can it be to write a modal, console-based mail-client?"

It turns out writing a client is pretty simple if you limit yourself solely to Maildirs, and as I typically read my mail over SSH on the mailhost itself that suited me pretty well.

Recently I restarted the mail-client, putting it together from scratch to simplify the implementation and to unify a lot of the ad-hoc scripting, which is provided by Lua. People seem to like the client, but the single largest complaint was "Can't use it - no IMAP."

This week I've mostly been adding IMAP support, and today I'll commit the last few bits that mean it is roughly-functional:

  • Connecting to a mail-server works.
  • Getting the folders works.
  • Getting the messages works.

The outstanding niggles relate to getting/setting the new/read/seen/unseen flags, and similar. But I'm pleased that the job wasn't insurmountable.

I've used libcurl to provide the IMAP functionality because most of the IMAP libraries I looked at were big, scary, and complex. Using curl to access IMAP is pretty neat, simple, and straightforward. The downside is that you're making a lot of separate HTTP-style requests, so I might need to revisit things.
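The one-request-per-operation model looks roughly like this sketch: each IMAP operation is expressed as an imaps:// URL handed to curl (the server name and credentials below are invented for illustration):

```shell
# Build an imaps:// URL for a given server and mailbox path.
imap_url() { printf 'imaps://%s/%s' "$1" "$2"; }

imap_url mail.example.com INBOX   # → imaps://mail.example.com/INBOX
# which would then be fetched with something along the lines of:
#   curl --url "$(imap_url mail.example.com INBOX)" --user steve:secret
```

Every folder list, message fetch, and flag change becomes one such URL fetch, which is why the request count adds up.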

Happily my IMAP wrapper doesn't need much functionality, so if I can find a better library, swapping it out will be simple.

In conclusion: Lumail almost has IMAP support, and that might mean it'll be more useful to others.

Categories: LUG Community Blogs

Debian Bits: New Debian Developers and Maintainers (November and December 2015)

Tue, 12/01/2016 - 11:30

The following contributors got their Debian Developer accounts in the last two months:

  • Stein Magnus Jodal (jodal)
  • Prach Pongpanich (prach)
  • Markus Koschany (apo)
  • Bernhard Schmidt (berni)
  • Uwe Kleine-König (ukleinek)
  • Timo Weingärtner (tiwe)
  • Sebastian Andrzej Siewior (bigeasy)
  • Mattia Rizzolo (mattia)
  • Alexandre Viau (aviau)
  • Lev Lamberov (dogsleg)
  • Adam Borowski (kilobyte)
  • Chris Boot (bootc)

The following contributors were added as Debian Maintainers in the last two months:

  • Alf Gaida
  • Andrew Ayer
  • Marcio de Souza Oliveira
  • Alexandre Detiste
  • Dave Hibberd
  • Andreas Boll
  • Punit Agrawal
  • Edward Betts
  • Shih-Yuan Lee
  • Ivan Udovichenko
  • Andrew Kelley
  • Benda Xu
  • Russell Sim
  • Paulo Roberto Alves de Oliveira
  • Marc Fournier
  • Scott Talbert
  • Sergio Durigan Junior
  • Guillaume Turri
  • Michael Lustfield


Categories: LUG Community Blogs

Steve Kemp: Restoring my system .. worked

Sat, 02/01/2016 - 08:52

A while back I wrote about some issues with converting a two-disk RAID system to a one-disk system, but just to recap:

  • We knew we were moving to Finland.
  • The shared/main computer we used in the UK was old and slow.
  • A new computer in Finland would be more expensive than it should be.
  • Equally transporting a big computer from the UK would also be silly.

In the end we bought a small form-factor PC with only a single drive, and I moved one of the two drives from the old machine into it. Then I converted it to run happily with only a single drive, and not email me every day to say "device missing".

So there things stood, we had a desktop with a single drive, and I ensured that I took full daily backup via attic.

Over Christmas the two-year-old drive failed, to the extent that I couldn't even get it recognized by the BIOS, and thus couldn't pull data off it. Time to test my backups in anger! I bought a new drive, installed a minimal installation of the Jessie release of Debian onto the system, and then ran:

cd /
.. restore latest backup ..

Two days later I'd pulled 1.3TB over the network, and once I'd fixed up grub, /etc/fstab, and a couple of niggles it all just worked. I rebooted to make sure the temporary.home hostname, etc., was all gone, and life was good.

Restored backup! No errors! No data-loss! Perfect!

The backup-script I use every day was very very good at making sure nothing was missed:

attic create --stats --checkpoint-interval=7200 \
  attic@${remote}:/attic/storage::${host}-$(date +%Y-%m-%d-%H) \
  --exclude=/proc \
  --exclude=/sys \
  --exclude=/run \
  --exclude=/dev \
  --exclude=/tmp \
  --exclude=/var/tmp \
  --exclude=/var/log \
  /

In other news, I published my module for controlling the new smart lights I've bought.

Categories: LUG Community Blogs

Debian Bits: Debian mourns the passing of Ian Murdock

Wed, 30/12/2015 - 19:15

With a heavy heart Debian mourns the passing of Ian Murdock, stalwart proponent of Free Open Source Software, Father, Son, and the 'ian' in Debian.

Ian started the Debian project in August of 1993, releasing the first versions of Debian later that same year. Debian would go on to become the world's Universal Operating System, running on everything from embedded devices to the space station.

Ian's sharp focus was on creating a Distribution and community culture that did the right thing, be it ethically, or technically. Releases went out when they were ready, and the project's staunch stance on Software Freedom are the gold standards in the Free and Open Source world.

Ian's devotion to the right thing guided his work, both in Debian and in the subsequent years, always working towards the best possible future.

Ian's dream has lived on, the Debian community remains incredibly active, with thousands of developers working untold hours to bring the world a reliable and secure operating system.

The thoughts of the Debian Community are with Ian's family in this hard time.

His family has asked for privacy during this difficult time and we very much wish to respect that. Within our Debian and the larger Linux community condolences may be sent to where they will be kept and archived.

Categories: LUG Community Blogs

Steve Kemp: I joined the internet of things.

Wed, 30/12/2015 - 07:03

In my old flat I had a couple of simple radio-controlled switches, which allowed me to toggle power to a pair of standing lamps - one at each side of the bed. This was very lazy, but also really handy, and I've always been curious about automation.

When it comes to automation there seem to be three main flavours:

X10

The original standard, with stuff produced by many vendors and good Linux support. X10 supports two ways of sending/receiving commands - over the electrical wiring, and over RF.

Z-Wave

This is the newcomer, which despite that seems to be well-supported and extensible. It allows "measurements" to be sent/received in addition to the broadcast of events like "switch on" and "switch off".

Other systems - often lighting-centric

There are toys like the radio-controlled switches noted previously, and there are also stand-alone devices from companies like Philips with their Philips Hue system - but given how Philips recently crippled their devices to disable third-party bulbs I've no desire to use them.

One company caught my eye, though: Osram make a smart lightbulb and a mini-hub to work with it.

So I bought one of the Osram Lightify systems, consisting of a magic box and a pair of lightbulbs. The box connects to your wifi and gets an IP address, which is then used by the application on your mobile phone (i.e. the magic box does the magic, not the bulbs). The phone application can be used to trigger "on", "off", "dim", "brighter", and the various colour-changing commands, as you would expect.

You absolutely must use the phone-based application to do the setup, but after that the whole point was that I could automate things: I wanted my desktop computer to be able to schedule events, so I started hacking.

I've written a simple Perl module to let me discover bulbs, and turn them off and on. No doubt it'll be on CPAN in the near future, once I can pick a suitable name for it:

$ ol --bridge= --list
hall     MAC:8418260000d9c70c RGBW:255,255,255,255 STATE:On
kitchen  MAC:8418260000cb433b RGBW:255,255,255,255 STATE:On
$ ol --bridge= --off=kitchen
$ ol --bridge= --list
hall     MAC:8418260000d9c70c RGBW:255,255,255,255 STATE:On
kitchen  MAC:8418260000cb433b RGBW:255,255,255,255 STATE:Off

The only niggles were the fiddly pairing and the lack of any decent documentation. The code I wrote was loosely based on the python project python-lightify, written by Mikael Magnusson. It's also worth noting that the bridge/magic-box only exposes a single port, so you can find the device on your VLAN by nmapping for port 4000:

$ nmap -v -p 4000

The device doesn't seem to allow any network setup at all - it only uses DHCP. So you might want to make sure it gets assigned a stable IP.
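One way to do that is a static lease on your own DHCP server. A hypothetical ISC dhcpd stanza (the MAC and IP below are placeholders, not values from the post):

```
# Pin the Lightify bridge to a fixed address via a DHCP reservation.
host lightify-bridge {
    hardware ethernet 84:18:26:00:00:01;
    fixed-address 192.168.1.50;
}
```

With a reservation in place the bridge keeps "using DHCP" but always lands on the same IP, so scripts can hard-code it.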

Anyway I'm going to bed. When I do so I'll turn the lights off with my mobile phone. Neat.

In the future I will look at more complex automation, and I think Z-wave is the way I'll go. Right now I'm in a rented flat so replacing wall-switches, etc, is something I can't do. But the systems I've looked at seem neat, and this current setup will keep me amused for several months!

Categories: LUG Community Blogs

Steve Kemp: Some things are universal?

Sun, 27/12/2015 - 07:03

I don't often do retrospectives, but this year has been an unusual one for me, as I moved to Finland almost six months ago.

The topic has come up in conversation a lot over the past few months, so when people ask me what I think I can give some simple answers without too much thought. Here's a brief summary.

There are some obvious changes:

The Traffic

Traffic drives on the right-hand side of the road, which took a bit of getting used to, but isn't a huge surprise as I've travelled in Europe in the past. There aren't so many countries that drive on the left, after all, so most people probably wouldn't even notice this as odd.

When it comes to traffic, one nice thing about Helsinki is that most junctions have zebra crossings. Sure, they don't have flashing lights, but they have shaded areas, and pedestrians have right of way.

As for transport the city of Helsinki has local trains, trams, buses and taxis. The trams and buses all use the same card for payment so transport is integrated very well. I buy a time-based card, spending about €50 for a month of unlimited travel. If you prefer you may add euros to your card and pay for distinct journeys - but that works out more expensive if you travel twice, or more, a day.

The Money

Finland uses the Euro these days, having switched from the Finnish markka in 2002.

Enough said.

Costs are largely in line with what I'd expect: Cigarettes are cheap, beer is expensive. Some things are very expensive, some things are very cheap. Largely the expensive things are those that are imported. It is a very small country after all.

The Language

Finnish is .. complex.

But I've not struggled too much. Mostly I can buy what I want without difficulty. There are weird exceptions, though: for example, I went out to buy soup one day and had to return carrying only shame and disappointment - I can't read the language on the tins, and what I thought was soup turned out to be a can of chopped tomatoes.

Food is good, though, and easily available. The only significant surprise when it comes to shopping is that you must weigh loose goods yourself. You pick up a bunch of bananas, take it to the scales, press the button that has a picture of a banana on it, and it prints out a label you attach to them - at the till the cashier will scan the label and charge you. If you forget, or don't know how to do it, they'll tut and complain.

In daily life I use two phrases frequently, and they are sufficient for communication:

  • "minua haluan ... kahvi|kakku|olut"
    • "I want ... coffee|cake|beer".
  • "kiitos"
    • "Thanks"

Usually people speak to me in English, which is a little annoying as it means I'm not learning as much as I could. But that said over the past few months I've had proper conversations entirely in Finnish with shop-keepers, and similar. So I'm getting better.

The Culture

Finnish people are friendly, but terse. That's the reputation.

The Finnish people are alcoholics, and have high rates of suicide. Also the reputation.

Finally we know that the Finnish people consume more coffee per person than the rest of the world.

All those things are true, but they're not nearly enough to describe the people. Obviously they're all different, and we have a lot of people from other parts of the world here too - Russians, Asians, Somalis. So culture is complex .. but markedly different from that in the UK.

I could write more about this, but I think for the moment I'll just draw a line under culture and say that I'm enjoying the interactions with people here, and while many things are slightly "off", it's not bad. Just different.

Also saunas are fun. I've never had any qualms about being naked with strangers, so I don't really understand why Americans, and others, find this so difficult/surprising. But yeah, saunas are great.

Things that Finland is known for internationally: The invention of the molotov cocktail, rally-driving, hockey, world's strongest man, Moomins, Tom of Finland, Salmiakki.

The Weather

Not too hot. Not too cold. But that's largely because I'm one of those "hot" people who doesn't really get cold even at the best of times.

My ideal temperatures are about 13°C. My wife prefers 15°C, or more. We don't fight any more. Mostly.

Winter is apparently full of snow, but this year has been poor. We had the first snowfall yesterday, here in Helsinki, and we woke this morning to a blanket of snow about two inches deep. It looks pretty.

The biggest thing about the weather in Finland is the constant darkness in winter and the constant sun in summer. In summer there were something like 22 hours of sunlight a day, which made sleeping hard when we moved into our flat - with no curtains.

In winter it feels like there are 20 minutes of sunlight a day. It's not that bad here in the south, although I think it's something like five hours, and less in the north. I've never had any real issues with depression or similar: people have good days and bad days, and I'd generally be "OK" or "great". In the darkness? I've been grumpy at colleagues, I've made bad choices, I've lapsed in attention. I'm not sure I can blame it on the weather, or my reaction to the weather, but I know I've not been as "happy" as I "should".

It requires effort to be enthusiastic in a way I've never experienced before. Thankfully once I (slowly) realized this I took action and I think I'm good now.

Unlike the UK, the buildings here are relatively modern. I think that's the biggest contributing factor to how warm the houses are. They have all been built in the last 50-100 years, so they have proper insulation. Even though it might be very, very cold outdoors, indoors you can be naked without heating. Try that in the UK and you might freeze in some of the older, leakier houses!

You do have to laugh, though, when people point out "the oldest pub" in the city. Where I come from, if a pub isn't 500+ years old you wouldn't give it a second's thought - places like The Golden Fleece, etc.

I could write more. I probably should. But it has been an interesting year, and although there are things I miss about the UK, and Edinburgh specifically, I have no regrets. I'm glad I came.

What triggered this post? I said "Some things are universal" to my wife when I saw a child riding a bicycle they'd obviously just received for Christmas. Her reaction: "No Finnish person would buy a bicycle at Christmas - they'd expect too much snow!" So perhaps it was another immigrant family.

Christmas bicycles, universal or not - it doesn't really matter.

Categories: LUG Community Blogs

Steve Kemp: Finding and reporting trivial security issues

Tue, 22/12/2015 - 15:20

This week I'll be mostly doing drive-by bug-reporting.

As with last year we start by using the Debian Code Search to look for obviously broken patterns such as "system.*>.*/tmp/.*"

Once we find a fun match we examine the code and then report the bugs we find. Today that was stalin, which runs some fantastic things on startup:

(system "uname -m >/tmp/QobiScheme.tmp")
(system "rm -f /tmp/QobiScheme.tmp"))

We can exploit this like so:

$ ln -s /home/steve/HACK /tmp/QobiScheme.tmp
$ ls -l /home/steve/HACK
ls: cannot access /home/steve/HACK: No such file or directory

Now we run the script:

$ cd /tmp/stalin-0.11/benchmarks
$ ./make-hello

And we see this:

$ ls -l /home/steve/HACK
-rw-r--r-- 1 steve steve 6 Dec 22 08:30 /home/steve/HACK
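The underlying mechanism can be reproduced without stalin at all. A generic sketch (using a scratch directory as a stand-in for /tmp): because the temporary filename is predictable, an attacker's pre-placed symlink silently redirects the victim's write to a file of the attacker's choosing.

```shell
# Generic symlink attack: the "victim" writes to a well-known /tmp name,
# but an attacker has already made that name a symlink elsewhere.
play=$(mktemp -d)                                   # stand-in for /tmp
ln -s "$play/attacker-file" "$play/QobiScheme.tmp"  # attacker, in advance
echo "x86_64" > "$play/QobiScheme.tmp"              # victim's "uname -m >" step
cat "$play/attacker-file"                           # → x86_64
```

In the real attack the attacker points the symlink at a file the victim can write but the attacker values - which is why fixed names in /tmp are reportable bugs.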

For future reference, the lsat package looks horrifically bad - it writes multiple times to /tmp/lsat1.lsat, and although it tries to detect races I'm not convinced it succeeds. Something to look at in the future.

Categories: LUG Community Blogs

Alan Pope: Testing Ubuntu Apps As A Service

Wed, 16/12/2015 - 12:20

tl;dr. Stuart Langridge and I made a simple, easy-to-use, experimental app tester called Marvin for Ubuntu click packages, which emails you screenshots and logs of your app while it runs on a real device you may not own.

I frequently get asked by new developers in the community to help test their apps on Ubuntu Phone. Typically they don’t want extensive testing of all features, just a simple “does it start and what does it look like on the device you have?”.

Often they don’t have a physical device when developing on the desktop with our SDK, but want an on-device sanity check before they upload to the store. Sometimes they have one device such as a phone, but want to see what their app looks like on a different one, perhaps a tablet.

I’ve been happy to help developers test their apps on various devices, but this doesn’t scale well, is time consuming and relies on me being online and having a phone which I’m happy to install random click packages on.

Meanwhile, at OggCamp I gave a short talk about our recent security incident on Ubuntu phone. During the Q&A and in the bar afterwards a couple of people suggested that we should have some system which enables automated testing of devices. They were coming at it from the security point of view, suggesting heavy instrumentation to find these kinds of issues before they hit the store.

While we (Canonical) already have tools which review apps before they go in the store, we currently don’t actually install and execute the apps on devices, and have no plan to implement such a service (that I know of).

I’m aware that other platforms have implemented automated systems for testing and instrumenting apps and wondered how hard it would be to setup something really basic to cover at least one of the two use cases above. So I took to Telegram to brainstorm with my good friend Stuart Langridge.

We thrashed out what was needed for a 'minimum viable product' and some nice-to-have future enhancements. Pretty soon after, with a bit of Python and some hacked-together shell scripts, 'Marvin' was born. I then approached Daniel McGuire, who kindly provided some CSS to make it look prettier.

A developer can upload a click package to the site, and specify their email address & one or more of the available devices. Some time later they’ll get an email showing a few screenshots of the app or scope running on a device and pertinent logs extracted after it ran. While the developer waits, the website shows the current status as ‘pending’ (you’re in a queue), ‘claimed’ (by a device) and ‘finished’ (check your inbox).

This fulfills the simplest of use cases, making sure the app starts, and extracting the log if it didn’t. Clearly there’s plenty more it could potentially do, but this was our first target met.

Under the covers, there's a device attached to a computer which checks periodically for uploaded clicks and processes them in sequence. In between each run the phone is cleaned up, so each test is done on a blank device. Currently it tests traditional apps/games and scopes; webapps are rejected, but may be supported later.

The reason we reject webapps is because currently the devices have no network access at all – no wifi or cellular data. So running webapps would just result in this:-

It’s experimental so not completely robust, being a prototype hacked together over a couple of weekends/evenings, but it works (for the most part). There’s no guarantees of availability of the service or indeed the devices. It could go offline at any time. Did I mention it’s experimental?

Significantly, I’ve disabled network access completely on the device, with no SIM inside, so any app which requires external network access is going to have a bad day. Locally installed apps however, will work fine.

We currently don’t do any interaction with the uploaded applications, but simply run them and wait a few seconds (to give it time to quiesce) then take a screenshot. The image at the top of this post shows what a typical email from Marvin looks like.

It contains:-

  • click-review.txt – The output from running the click review tools
    • Note: Apps which fail the click review process (the same one run by the click store) will not be installed or tested.
  • install.txt – Output from the commands used to install the click on the device – good for debugging install failures
  • Screenshot-0 – What the “home” app scope looks like with the click installed – useful for showing the icon and description
  • Screenshot-1 – What happens immediately after starting the app, showing the splash screen
  • Screenshot-2 – The app after 5 seconds
  • Screenshot-3 – The app after 10 seconds
    • Note: We attempt to de-duplicate the screenshots so you may not get all four if any are identical
  • application-log.txt – The actual output (stdout) from the application, pulled from ~/.cache/upstart
  • dmesg.txt – Any kernel logging generated from the app during the app run
  • device-version.txt – The output of ‘system-image-cli --info’ run on the device, so the developer knows what OTA level, channel and device it ran on

There’s clearly a ton of other things that could be added to the mail, or extra items which could be instrumented or monitored, and features we could add. Off the top of my head we could potentially add:-

  • Scripted touch/gestures
  • Networking
  • VPN endpoints (so the phone looks like it’s in a particular region)
  • Orientation changes
  • Faked GPS location
  • Video/screencast recording during runtime
  • Input from microphone / camera(s)
  • Specify which release / channel to flash on the device prior to testing

Clearly all of these need some careful thought and planning, especially those enabling network access from the device.

We’re interested in feedback from developers who might use Marvin, and suggestions for improvements we might make. There are a limited number of devices in the pool, and not all supported devices are currently available. In the future we may have more devices connected to Marvin as they become available.

So go and test your apps at!

Categories: LUG Community Blogs

Andy Smith: Disabling the default IPMI credentials on a Supermicro server

Sat, 12/12/2015 - 00:34

In an earlier post I mentioned that you should disable the default ADMIN / ADMIN credentials on the IPMI controller. Here’s how.

Install ipmitool

ipmitool is the utility that you will use from the command line of another machine in order to interact with the IPMI controllers on your servers.

# apt-get install ipmitool

List the current users

$ ipmitool -I lanplus -H -U ADMIN -a user list
Password:
ID  Name     Callin  Link Auth  IPMI Msg  Channel Priv Limit
2   ADMIN    false   false      true      ADMINISTRATOR

Here you are specifying the IP address of the server’s IPMI controller. ADMIN is the IPMI user name you will use to log in, and it’s prompting you for the password which is also ADMIN by default.

Add a new user

You should add a new user with a name other than ADMIN.

I suppose it would be safe to just change the password of the existing ADMIN user, but there is no need for it to be named that, so you may as well pick a new name.

$ ipmitool -I lanplus -H -U ADMIN -a user set name 3 somename
Password:
$ ipmitool -I lanplus -H -U ADMIN -a user set password 3
Password:
Password for user 3:
Password for user 3:
$ ipmitool -I lanplus -H -U ADMIN -a channel setaccess 1 3 link=on ipmi=on callin=on privilege=4
Password:
$ ipmitool -I lanplus -H -U ADMIN -a user enable 3
Password:

From this point on you can switch to using the new user instead.

$ ipmitool -I lanplus -H -U somename -a user list
Password:
ID  Name      Callin  Link Auth  IPMI Msg  Channel Priv Limit
2   ADMIN     false   false      true      ADMINISTRATOR
3   somename  true    true       true      ADMINISTRATOR

Disable ADMIN user

Before doing this bit you may wish to check that the new user you added works for everything you need it to. Those things might include:

  • ssh to somename@
  • Log in on web interface at
  • Various ipmitool commands like querying power status:

    $ ipmitool -I lanplus -H -U somename -a power status
    Password:
    Chassis power is on

If all of that is okay then you can disable ADMIN:

$ ipmitool -I lanplus -H -U somename -a user disable 2
Password:

If you are paranoid (or this is just the first time you’ve done this) you could now check to see that none of the above things now work when you try to use ADMIN / ADMIN.

Specifying the password

I have not done so in these examples but if you get bored of typing the password every time then you could put it in the IPMI_PASSWORD environment variable and use -E instead of -a on the ipmitool command line.

When setting the IPMI_PASSWORD environment variable you probably don’t want it logged in your shell’s history file. Depending on which shell you use there may be different ways to achieve that.

With bash, if you have ignorespace in the HISTCONTROL environment variable then commands prefixed by one or more spaces won’t be logged. Alternatively you could temporarily disable history logging with:

$ set +o history
$ sensitive command goes here
$ set -o history # re-enable history logging

So anyway…

$ echo $HISTCONTROL
ignoredups:ignorespace
$   export IPMI_PASSWORD=letmein
$ # ^ note the leading spaces here
$ # to prevent the shell logging it
$ ipmitool -I lanplus -H -U somename -E power status
Chassis Power is on
Categories: LUG Community Blogs

Andy Smith: Installing Debian by PXE using Supermicro IPMI Serial over LAN

Fri, 11/12/2015 - 18:50

Here’s how to install Debian jessie on a Supermicro server using PXE boot and the IPMI serial-over-LAN.

Using these instructions you will be able to complete an install of a remote machine, although you will initially need access to the BIOS to configure the IPMI part.

BIOS settings

This bit needs you to be in the same location as the machine, or else have someone who is make the required changes.

Press DEL to go into the BIOS configuration.

Under Advanced > PCIe/PCI/PnP Configuration make sure that the network interface through which you’ll reach your PXE server has the “PXE” option ROM set:

Under Advanced > Serial Port Console Redirection you’ll want to enable SOL Console Redirection.

(Pictured here is also COM1 Console Redirection. This is for the physical serial port on the machine, not the one in the IPMI.)

Under SOL Console Redirection Settings you may as well set the Bits per second to 115200.

Now it’s time to configure the IPMI so you can interact with it over the network. Under IPMI > BMC Network Configuration, put the IPMI on your management network:

Connecting to the IPMI serial

With the above BIOS settings in place you should be able to save and reboot and then connect to the IPMI serial console. The default credentials are ADMIN / ADMIN which you should of course change with ipmitool, but that is for a different post.

There are two ways to connect to the serial-over-LAN: you can ssh to the IPMI controller, or you can use ipmitool. Personally I prefer ssh, but the ipmitool way is like this:

$ ipmitool -I lanplus -H -U ADMIN -a sol activate

The ssh way:

$ ssh ADMIN@
The authenticity of host ' (' can't be established.
RSA key fingerprint is b7:e1:12:94:37:81:fc:f7:db:6f:1c:00:e4:e0:e1:c4.
Are you sure you want to continue connecting (yes/no)?
Warning: Permanently added ',' (RSA) to the list of known hosts.
ADMIN@'s password:

ATEN SMASH-CLP System Management Shell, version 1.05
Copyright (c) 2008-2009 by ATEN International CO., Ltd. All Rights Reserved

-> cd /system1/sol1
/system1/sol1

-> start /system1/sol1
press <Enter>, <Esc>, and then <T> to terminate session
(press the keys in sequence, one after the other)

They both end up displaying basically the same thing.

The serial console should just be displaying the boot process, which won’t go anywhere.

DHCP and TFTP server

You will need to configure a DHCP and TFTP server on an already-existing machine on the same LAN as your new server. They can both run on the same host.

The DHCP server responds to the initial requests for IP address configuration and passes along where to get the boot environment from. The TFTP server serves up that boot environment. The boot environment here consists of a kernel, initramfs and some configuration for passing arguments to the bootloader/kernel. The boot environment is provided by the Debian project.


I’m using isc-dhcp-server. Its configuration file is at /etc/dhcp/dhcpd.conf.

You’ll need to know the MAC address of the server, which can be obtained either from the front page of the IPMI controller’s web interface (i.e. in this case) or else it is displayed on the serial console when it attempts to do a PXE boot. So, add a section for that:

subnet netmask {
}

host foo {
    hardware ethernet 0C:C4:7A:7C:28:40;
    fixed-address;
    filename "pxelinux.0";
    next-server;
    option subnet-mask;
    option routers;
}

Here we set the network configuration of the new server with fixed-address, option subnet-mask and option routers. The IP address in next-server refers to the IP address of the TFTP server, and pxelinux.0 is what the new server will download from it.

Make sure that the DHCP server is running:

# service isc-dhcp-server start

DHCP uses UDP port 67, so make sure that is allowed through your firewall.


A number of different TFTP servers are available. I use tftpd-hpa, which is mostly configured by variables in /etc/default/tftp-hpa:


TFTP_DIRECTORY is where you’ll put the files for the PXE environment.

Make sure that the TFTP server is running:

# service tftpd-hpa start

TFTP uses UDP port 69, so make sure that is allowed through your firewall.

Download the netboot files from your local Debian mirror:

$ cd /srv/tftp
$ curl -s | sudo tar zxvf -
./
./
./ldlinux.c32
./pxelinux.0
./pxelinux.cfg
./debian-installer/
./debian-installer/amd64/
./debian-installer/amd64/bootnetx64.efi
./debian-installer/amd64/grub/
./debian-installer/amd64/grub/grub.cfg
./debian-installer/amd64/grub/font.pf2
…

(This assumes you are installing a device with architecture amd64.)

At this point your TFTP server root should contain a debian-installer subdirectory and a couple of links into it:

$ ls -l .
total 8
drwxrwxr-x 3 root root 4096 Jun  4  2015 debian-installer
lrwxrwxrwx 1 root root   47 Jun  4  2015 ldlinux.c32 -> debian-installer/amd64/boot-screens/ldlinux.c32
lrwxrwxrwx 1 root root   33 Jun  4  2015 pxelinux.0 -> debian-installer/amd64/pxelinux.0
lrwxrwxrwx 1 root root   35 Jun  4  2015 pxelinux.cfg -> debian-installer/amd64/pxelinux.cfg
-rw-rw-r-- 1 root root   61 Jun  4  2015

You could now boot your server and it would call out to PXE to do its netboot, but would be displaying the installer process on the VGA output. If you intend to carry it out using the Remote Console facility of the IPMI interface then that may be good enough. If you want to do it over the serial-over-LAN though, you’ll need to edit some of the files that came out of the netboot.tar.gz to configure that.

Here’s a list of the files you need to edit. All you are doing in each one is telling it to use serial console. The changes are quite mechanical so you can easily come up with a script to do it, but here I will show the changes verbosely. All the files live in the debian-installer/amd64/boot-screens/ directory.

ttyS1 is used here because this system has a real serial port on ttyS0. 115200 is the baud rate of ttyS1 as configured in the BIOS earlier.



label expert
    menu label ^Expert install
    kernel debian-installer/amd64/linux
    append priority=low vga=788 initrd=debian-installer/amd64/initrd.gz ---
include debian-installer/amd64/boot-screens/rqtxt.cfg
label auto
    menu label ^Automated install
    kernel debian-installer/amd64/linux
    append auto=true priority=critical vga=788 initrd=debian-installer/amd64/initrd.gz --- quiet


label expert
    menu label ^Expert install
    kernel debian-installer/amd64/linux
    append priority=low console=ttyS1,115200n8 initrd=debian-installer/amd64/initrd.gz ---
include debian-installer/amd64/boot-screens/rqtxt.cfg
label auto
    menu label ^Automated install
    kernel debian-installer/amd64/linux
    append auto=true priority=critical console=ttyS1,115200n8 initrd=debian-installer/amd64/initrd.gz --- quiet

rqtxt.cfg


label rescue
    menu label ^Rescue mode
    kernel debian-installer/amd64/linux
    append vga=788 initrd=debian-installer/amd64/initrd.gz rescue/enable=true --- quiet


label rescue
    menu label ^Rescue mode
    kernel debian-installer/amd64/linux
    append console=ttyS1,115200n8 initrd=debian-installer/amd64/initrd.gz rescue/enable=true --- quiet

syslinux.cfg


# D-I config version 2.0
# search path for the c32 support libraries (libcom32, libutil etc.)
path debian-installer/amd64/boot-screens/
include debian-installer/amd64/boot-screens/menu.cfg
default debian-installer/amd64/boot-screens/vesamenu.c32
prompt 0
timeout 0


serial 1 115200
console 1
# D-I config version 2.0
# search path for the c32 support libraries (libcom32, libutil etc.)
path debian-installer/amd64/boot-screens/
include debian-installer/amd64/boot-screens/menu.cfg
default debian-installer/amd64/boot-screens/vesamenu.c32
prompt 0
timeout 0

txt.cfg


default install
label install
    menu label ^Install
    menu default
    kernel debian-installer/amd64/linux
    append vga=788 initrd=debian-installer/amd64/initrd.gz --- quiet


default install
label install
    menu label ^Install
    menu default
    kernel debian-installer/amd64/linux
    append console=ttyS1,115200n8 initrd=debian-installer/amd64/initrd.gz --- quiet

Perform the install

Connect to the serial-over-LAN and get started. If the server doesn’t have anything currently installed then it should go straight to trying PXE boot. If it does have something on the storage that it would boot then you will have to use F12 at the BIOS screen to convince it to jump straight to PXE boot.

$ ssh ADMIN@
ADMIN@'s password:

ATEN SMASH-CLP System Management Shell, version 1.05
Copyright (c) 2008-2009 by ATEN International CO., Ltd. All Rights Reserved

-> cd /system1/sol1
/system1/sol1

-> start /system1/sol1
press <Enter>, <Esc>, and then <T> to terminate session
(press the keys in sequence, one after the other)

Intel(R) Boot Agent GE v1.5.13
Copyright (C) 1997-2013, Intel Corporation

CLIENT MAC ADDR: 0C C4 7A 7C 28 40  GUID: 00000000 0000 0000 0000 0CC47A7C2840
CLIENT IP:  MASK:  DHCP IP:  GATEWAY IP:

PXELINUX 6.03 PXE 20150107  Copyright (C) 1994-2014 H. Peter Anvin et al

┌───────────────────────────────────────┐
│ Debian GNU/Linux installer boot menu  │
├───────────────────────────────────────┤
│ Install                               │
│ Advanced options                    > │
│ Help                                  │
│ Install with speech synthesis         │
│                                       │
└───────────────────────────────────────┘

Press ENTER to boot or TAB to edit a menu entry

┌───────────────────────┤ [!!] Select a language ├────────────────────────┐
│                                                                         │
│ Choose the language to be used for the installation process. The        │
│ selected language will also be the default language for the installed   │
│ system.                                                                 │
│                                                                         │
│ Language:                                                               │
│                                                                         │
│   C                                                                     │
│   English                                                               │
│                                                                         │
│     <Go Back>                                                           │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

<Tab> moves; <Space> selects; <Enter> activates buttons

…and now the installation proceeds as normal.

At the end of this you should be left with a system that uses ttyS1 for its console. You may need to tweak that depending on whether you want the VGA console also.
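As a sketch of that tweak (assuming a jessie system booting with GRUB; the unit and speed here simply match the BIOS SOL settings from earlier, so adjust to taste), the kernel console can be set in /etc/default/grub:

```shell
# /etc/default/grub — keep the VGA console and add the IPMI serial port.
# ttyS1 and 115200 match the BIOS SOL settings used above; run update-grub
# afterwards. On jessie, systemd starts a serial-getty on ttyS1
# automatically when console=ttyS1 appears on the kernel command line.
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS1,115200n8"
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --unit=1 --speed=115200"
```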

Categories: LUG Community Blogs

Andy Smith: Audience tickets for Stewart Lee’s Comedy Vehicle

Thu, 10/12/2015 - 06:59

Last night Jenny and I got the chance to be in the audience for a recording of what will become (some percentage of) four episodes of Stewart Lee’s Comedy Vehicle season 4. Once we actually got in it was a really enjoyable experience, although as usual SRO Audiences were somewhat chaotic with their ticketing procedures.

I’d heard about the chance to get priority audience tickets from the Stewart Lee mailing list, so I applied, but the tickets I got were just their standard ones. From past experience I knew this would mean having to get there really early and queue for ages and still not be sure of getting in, so for most shows on the SRO Audiences site I don’t normally bother. As I particularly like Stewart Lee I decided to persevere this time.

The instructions said they’d be greeting us from 6.20pm, so I decided getting there about an hour early would be a good idea. I know from past experience that they massively over-subscribe their tickets in order to never ever have empty seats. That makes it very difficult to guess how early to be, and I hadn’t been to a Comedy Vehicle recording before either.

The venue was The Mildmay Club in Stoke Newington which was also the venue for all previous recordings of Comedy Vehicle. A bit of a trek from Feltham – train to Richmond then most of the way along the Overground towards Stratford; a good 90 minutes door to door. Nearest station Canonbury but we decided to go early and get some food at Nando’s Dalston first.

We got to the Mildmay Club about 5.25pm and there were already about 15 people queuing outside. Pretty soon the doorman let us in, but only as far as a table just inside the doors where a guy gave us numbered wristbands and told us to come back at 7pm.

This was a bit confusing as we weren’t sure whether that meant we were definitely getting in or if we’d still have to queue (and thus should actually come back before 7). So I asked,

“does the wristband mean we’re definitely getting in?”

“We’ll do our best to get as many people in as we can. We won’t know until 7pm,”

was the non-answer. People piling up behind us and they wanted us out of the way, so off we went.

Having already eaten we didn’t really have anything else to do, so we had a bit of an aimless wander around Newington Green for half an hour or so before arriving back outside the club again, where the queue was now a crowd bustling around the entrance and trailing off in both directions along the street. We decided to get back in the queue going to the right of the club, which was slowly shrinking, with the idea of asking if we were in the right place once we got to the front. All of the people in this queue were yet to collect their wristbands.

Having got to the front of this queue it was confirmed that we should wait around outside until 7pm, though still no idea whether we would get in or by what process this would be decided. We shuffled into the other queue to the left of the club which consisted of people like us who already had wristbands.

While in this queue, we heard calls for various colours of wristband that weren’t ours (white), and eventually all people in front of us had been called into the club. By about quarter past 6 we’d watched quite a large number of people with colourful wristbands get into the building, and we were starting to seriously consider that we might not be getting into this thing, despite the fact that we were amongst the first 15 people to arrive.

At this point a different member of staff came out and told us off for queuing to the left of the club, because

“you’re not allowed to queue past the shops”

and told us to queue to the right with all the other people who still hadn’t got wristbands yet. Various grumblings on the subject of the queue being really long and how will we know what is going on were heard, to which the response was,

“it doesn’t matter where you are, your wristbands are numbered and we’ll call people in number order anyway. You can go away and come back at 7pm if you like. Nothing is happening before 7pm.”

Well, we didn’t have anything else to do for the next 45 minutes anyway, and there was a lack of trust that everyone involved was giving us the same/correct information, so we decided to remain in this mostly-linear-collection-of-people-which-was-not-a-queue-because-it-would-be-called-in-number-order.

About 6.55pm a staff member popped their head out the door and shouted,

“we’re delayed by about ten minutes but we do love you and we’ll start getting you inside soon.”

And then just a minute or two later he’s back and shouting out,

“wristband numbers below 510, come this way!”

We were 506 and 507.

The exterior of the Mildmay Club isn’t in the best condition. It looks pretty shabby. Inside though it’s quite nice. We were ushered into the bar area which is pretty much the same as the bar of every working men’s club or British Legion club that you have ever seen.

Even though we were amongst the first few white wristband people in, the room was really full already. These must have been all the priority ticket people we saw going in ahead of us. Nowhere for us to sit except the edge of a low stage directly in front of a speaker pumping out blues and Hendrix. Again we started to worry that we would not be getting in to the recording.

It must have been about 7.20pm when they started calling the colourful wristband people out of the bar and in to the theatre. The room slowly drained until it seemed like there were only about ten of us left. And then,

“white wristbands numbered 508 and below please!”

We rushed into the theatre to be confronted with mostly full seating.

“You want to be sat together don’t you?”


“Oh, just take those reserved seats, they’ve blown it now, they’re too late.”

Score! I prodded Jenny in the direction of a set of four previously reserved seats that were in a great position. We were amongst the last twenty or so people to get in. I think if we had shown up even ten minutes later to get the wristbands then we wouldn’t have made it.

In contrast to the outside of the building the theatre itself was really quite nice, very interesting decor, and surprisingly large compared to the impression you get from seeing it on television.

Stewart did two sets of 28 minute pieces, then a short interval and then another 2×28 minutes, so almost two hours. I believe there were recordings on three nights so that’s potentially 12 episodes worth of material, but given that

  1. All the previous series had 6 episodes.
  2. Stewart made a comment at one point about moving something on stage for continuity with the previous night’s recording.

then I assume there’s two recordings of each episode’s material from which they’ll edit together the best bits.

The material itself was great, so fans of Comedy Vehicle have definitely got something to look forward to. If you have previously attempted to consume Stewart Lee’s comedy and found the experience unpalatable then I don’t think anything is going to change for you – in fact it might upset you even more, to be honest. Other than that I’m not going to say anything about it as that would spoil it and I couldn’t do it justice anyway.

Oh, apart from that it’s really endearing to see Stew make himself laugh in the middle of one of his own rants and have to take a moment to compose himself.

As for SRO Audiences, I possibly shouldn’t moan as I have no actual experience of trying to cram hundreds of people into a free event and their first concern has got to be having the audience side of things run smoothly for the production, not for the audience. I get that. All I would say is that:

  • Being very clear with people at wristband issuing time that they will be called in by number, and giving a realistic time for when the numbers would be called, would be helpful. This wasn’t clear for us so on the one hand we hung around being in the way a bit, but on the other hand I’m glad that we didn’t leave it until 7pm to come back because our numbers were called before 7pm and we did only just get in.
  • Doing your best to turn people away early when they have no realistic chance of getting in would be good. There were loads of people with higher number wristbands than us that we did not see in the theatre later. Unsure if they got eventually sent home or if they ended up watching the recording on TV in the bar. At previous SRO Audiences recordings I’ve waited right up until show start time to be told to go home though.
Categories: LUG Community Blogs

Steve Kemp: I jumped on the SSL-bandwagon

Fri, 04/12/2015 - 18:03

Like everybody else on the internet, today was the day I started rolling out SSL certificates, via let's encrypt.

The process wasn't too difficult, but I did have to make some changes. Pretty much every website I have runs under its own UID, and I use a proxy to pass content through to the right back-end.

Running 15+ webservers feels like overkill, but it means that the code running cannot read/modify/break the code that is running this blog - because they run as different UIDs.

To start with I made sure that all requests to the top-level /.well-known directory were shunted to a local directory - via this in /etc/apache2/conf-enabled/well-known.conf:

Alias /.well-known/ /srv/well-known/

<Directory "/srv/well-known/">
    ForceType text/plain
    Options Indexes FollowSymLinks MultiViews
    AllowOverride all
    AuthType None
    Require all granted
</Directory>

Then configured each proxy to avoid forwarding that path to the back-ends, by adding this to each of the individual virtual-hosts that run proxying:

<Proxy *>
    Order allow,deny
    Allow from all
</Proxy>

ProxyPass /.well-known !
ProxyPass / http://localhost:$port/
..

Then it came to be time to actually generate the certificates. Rather than using the official client I used a simpler one that allowed me to generate requests easily:

CSR=/etc/apache2/ssl/csr/
KEYS=/etc/apache2/ssl/keys/
CERTS=/etc/apache2/ssl/certs/

# generate a key
openssl genrsa 4096 > $KEYS/lumail.key

# make a CSR
openssl req -new -sha256 -key $KEYS/lumail.key -subj "/" -reqexts SAN \
    -config <(cat /etc/ssl/openssl.cnf \
        <(printf "[SAN]\,")) \
    > $CSR/lumail.csr

# Do the validation
 --account-key ./account.key --csr $CSR/lumail.csr \
    --acme-dir /srv/well-known/acme-challenge/ > $CERTS/

And then I was done. Along the way I found some niggles:

  • If you have a host that listens on IPv6 only you cannot validate your request - this seems like a clear failure.
  • It is assumed that you generate all your certificates in their live-location. e.g. You cannot generate a certificate for on the host
  • If you forward HTTP -> HTTPS the validation fails. I had to setup rewrite rules to avoid this, for example contains this:

    RewriteEngine On
    RewriteCond %{REQUEST_URI} !^/.well-known
    RewriteRule ^/(.*)$1 [L]

The first issue is an annoyance. The second issue is a real pain. For example * listens on one machine, except for the webmail host. Since there are no wildcards I created a single certificate with Alt-Names for a bunch of names such as:

  • ..
  • ..

Then separately create a certificate for the webmail host - which I've honestly not done yet.

Still I wrote a nice little script to generate SSL for a number of domains, with different Alt-Names, wrapping around the script, and regenerating all my deployed certificates is now a two minute job.

(People talk about renewing certificates. I don't see the gain. Just replace them utterly every two months and you'll be fine.)

Categories: LUG Community Blogs

Debian Bits: Software Freedom Conservancy needs your support!

Thu, 03/12/2015 - 23:30

"Software Freedom Conservancy helps promote, improve, develop, and defend Free, Libre, and Open Source Software (FLOSS) projects. Conservancy provides a non-profit home and infrastructure for FLOSS projects.", that is how Software Freedom Conservancy defines itself. Organizations like Conservancy allow free software developers to focus on what they do the best by doing copyleft enforcement, taking care of legal aspects and provide many services to its project members.

Last August, Debian and Conservancy announced a partnership and formed the Copyright Aggregation Project where, among other things, Conservancy will be able to hold copyrights for some Debian works and ensure compliance with copyleft so that those works remain in free software.

Recently, Conservancy launched a major fundraising campaign and needs more individual supporters to gain more sustainable and independent funding. This will allow the Conservancy to continue its efforts towards convincing more companies to comply with free software licenses such as the GPL and take legal actions when dialogue turns out to be unsuccessful. Conservancy needs your support now, more than ever!

Many Debian Developers and Contributors have already become Conservancy supporters. Please consider signing up as a supporter on!

Categories: LUG Community Blogs

Steve Kemp: Spent the weekend improving the internet

Sun, 29/11/2015 - 15:58

This weekend I've mostly been tidying up some personal projects and things.

This was updated to use recaptcha on the sign-up page, which is my attempt to cut down on the 400+ spam-registrations it receives every day.

I've purged a few thousand bogus-accounts, which largely existed to point to spam-sites in their profile-pages. I go through phases where I do this, but my heuristics have always been a little weak.

This site offers free dynamic DNS for a few hundred users. I closed fresh signups due to it being abused by spammers, but it does have some users and I sometimes add new people who ask politely.

Unfortunately some users hammer it, trying to update their DNS records every 60 seconds or so. (One user has spent the past few months updating their IP address every 30 seconds, ironically their external IP hadn't changed in all that time!)

So I suspended a few users, and implemented a minimum-update threshold: nobody can update their IP address more than once every fifteen minutes now.
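The threshold check itself is trivial. Here's a sketch (the file layout and names are mine, not the site's actual code), keyed on a per-user timestamp file:

```shell
#!/bin/sh
# Reject dynamic-DNS updates that arrive less than 15 minutes after the
# user's previous one. Hypothetical sketch: stamp files record the epoch
# time of each user's last accepted update.
STAMP_DIR="${STAMP_DIR:-/tmp/dns-stamps}"
MIN_INTERVAL=900   # seconds

allow_update() {
    stamp="$STAMP_DIR/$1"
    now=$(date +%s)
    mkdir -p "$STAMP_DIR"
    if [ -f "$stamp" ]; then
        last=$(cat "$stamp")
        if [ $((now - last)) -lt "$MIN_INTERVAL" ]; then
            return 1    # too soon — reject the update
        fi
    fi
    echo "$now" > "$stamp"    # accepted — record the time
    return 0
}
```

A real deployment would hook this in before touching the zone, and return an HTTP error on rejection.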

Literate Emacs Configuration File

Working towards my stateless home-directory I've been tweaking my dotfiles, and the last thing I did today was move my Emacs configuration over to a literate fashion.

My main emacs configuration-file is now a markdown file, which contains inline-code. The inline-code is parsed at runtime, and executed when Emacs launches. The init.el file which parses/evals it is pretty simple, and I'm quite pleased with it. Over time I'll extend the documentation and move some of the small snippets into it.

Offsite backups

My home system(s) always had a local backup, maintained on an external 2Tb disk-drive, along with a remote copy of some static files which were maintained using rsync. I've now switched to having a virtual machine host the external backups with proper incrementals - via attic, which beats my previous "only one copy" setup.
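The attic workflow is roughly as follows — printed here as a dry run, with a hypothetical repository path and source directories (check your attic version's options before trusting mine; drop the echos to run for real):

```shell
#!/bin/sh
# Sketch of an incremental backup cycle with attic. The commands are
# echoed rather than executed so they can be inspected first.
REPO="backup-vm:/srv/backups/home.attic"

backup_commands() {
    echo "attic init --encryption=passphrase $REPO"            # one-off setup
    echo "attic create $REPO::$(date +%Y-%m-%d) /home /etc"    # nightly run
    echo "attic prune $REPO --keep-daily=7 --keep-weekly=4"    # expire old ones
}

backup_commands
```

The prune step is what makes it "proper incrementals" rather than an ever-growing pile of archives.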

Virtual Machine Backups

On a whim a few years ago I registered which I use to maintain backups of my personal virtual machines. That still works, though I'll probably drop the domain and use or similar in the future.

FWIW the external backups are hosted on BigV, which gives me a 2Tb "archive" disk for £40 a month. Perfect.

Categories: LUG Community Blogs

Steve Kemp: A transient home-directory?

Wed, 25/11/2015 - 14:00

For the past few years all my important work has been stored in git repositories. Thanks to the mr tool I have a single configuration file that allows me to pull/maintain a bunch of repositories with ease.
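For reference, an mr configuration file is a simple ini-style list of repositories and how to fetch them — a sketch with made-up paths and a placeholder URL:

```ini
# ~/.mrconfig (illustrative): one stanza per repository, keyed on the
# checkout path relative to $HOME.
[.dotfiles]
checkout = git clone 'git://' .dotfiles

[src/blog]
checkout = git clone 'git://' blog
```

`mr update` then iterates over every stanza and pulls each repository in turn.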

Having recently wiped & reinstalled a pair of desktop systems I'm now wondering if I can switch to using a totally transient home-directory.

The basic intention is that:

  • Every time I login "rm -rf $HOME/*" will be executed.

I see only three problems with this:

  • Every time I login I'll have to reclone my "dotfiles", passwords, bookmarks, etc.
  • Some programs will need their configuration updated, post-login.
  • SSH key management will be a pain.

My dotfiles contain my bookmarks, passwords, etc. But they don't contain setup for GNOME, etc.

So there might be some configuration that will become annoying - for example I like "Ctrl-Alt-t" to open a new gnome-terminal. That has to be configured the first time I login to each new system.

My images/videos/books are all stored beneath /srv and not in my home directory - so the only thing I'll be losing is program configuration, caches, and similar.

Ideally I'd be using a smartcard for my SSH keys - but I don't have one - so for the moment I might just have to rsync them into place, but that's grossly bad.

It'll be interesting to see how well this works out, but I see a potential gain in portability and discipline at the very least.
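The intended login hook is small. A sketch of the idea (repo URL and layout are hypothetical, and note it happily destroys whatever directory it is given):

```shell
#!/bin/sh
# Wipe a home directory and restore it from version control. Not
# battle-tested: pass the directory and (optionally) a dotfiles repo URL.
wipe_and_restore() {
    target="$1"
    repo="$2"

    rm -rf "$target"/*          # the transient bit

    if [ -n "$repo" ]; then
        git clone "$repo" "$target/.dotfiles"
        mr -d "$target" checkout   # re-clone everything mr knows about
    fi
}

# At login, something like:
#   wipe_and_restore "$HOME" "git://"
```

Keeping the destructive part behind a function argument at least makes it testable against a scratch directory first.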

Categories: LUG Community Blogs