LUG Community Blogs

Steve Kemp: kvm-hosting will be ceasing, soon.

Planet HantsLUG - Wed, 10/09/2014 - 16:27

Seven years ago I wanted to move on from the small virtual machine I had to a larger one. Looking at the options available, it seemed the best approach was to rent a big host and divide it up into virtual machines myself.

Renting a machine with 8GB of RAM and 500GB of disk-space, then dividing that into eighths, would give a decent spec, and assuming I found enough users to pay for the other slots/shares it would be economically viable too.

After a few weeks I took the plunge, advertised here, and found users.

I had six users:

  • 1/8th for me.
  • 1/8th left empty/idle for the host machine.
  • 6/8th for other users.

There were some niggles; one user seemed to suffer from connectivity problems more than the others, but on the whole the experiment worked out well.

These days, thanks to BigV, Digital Ocean, and all the newcomers, there is less need for this kind of thing, so last December I announced that the service would cease - and gave all current users a year of free service to give them time to migrate away.

The service was due to terminate in December but, triggered by some upcoming downtime during which our host would have been moved, in the back of a van, from Manchester to York, I've taken the decision to stop it early.

It was a fun experiment, it provided me with low cost hosting (subsidized by the other paying users), and provided some other people with hosting of their own that was setup nicely.

The only outstanding question is what to do with the domain-names? I could let them expire, I could try to sell them, or I could donate them to other people running hosting setups.

If anybody reading this has a use for kvm-hosting.org, kvm-hosting.net, or kvm-hosting.com, then do feel free to get in touch. No promises, obviously, but it'd be a shame for them to end up hosting adverts in a year or two's time.

Categories: LUG Community Blogs

Adam Trickett: Picasa Web: Summer Holiday 2014

Planet HantsLUG - Mon, 08/09/2014 - 07:00

Our summer holiday in Denmark

Location: Denmark
Date: 8 Sep 2014
Number of Photos in Album: 117

View Album

Categories: LUG Community Blogs

Jonathan McDowell: Breaking up with America

Planet ALUG - Sat, 06/09/2014 - 22:38

Back in January I changed jobs. This took me longer to decide to do than it should have. My US visa (an L-1B) was tied to the old job, and not transferable, so leaving the old job also meant leaving the US. That was hard to do; I'd had a mostly fun 3 and a half years in the SF Bay Area.

The new job had an office in Belfast, and HQ in the Bay Area. I went to work in Belfast, and got sent out to the US to meet coworkers and generally get up to speed. During that visit the company applied for an H-1B visa for me. This would have let me return to the US in October 2014 and start working in the US office; up until that point I'd have continued to work from Belfast. Unfortunately there were 172,500 applications for 85,000 available visas and mine was not selected for processing.

I'm disappointed by this. I've enjoyed my time in the US. I had a green card application in process, but after nearly 2 years it still hadn't completed the initial hurdle of the labor certification stage (a combination of a number of factors, human, organizational and governmental). However the effort of returning to live here seems too great for the benefits gained. I can work for a US company with a non-US office and return on an L-1B after a year. And once again have to leave should I grow out of the job, or the job change in some way that doesn't suit me, or the company hit problems and have to lay me off. Or I can try again for an H-1B next year, aiming for an October 2015 return, and hope that this time my application gets selected for processing.

Neither really appeals. Both involve putting things on hold in the hope that the longer term pans out as I'd like. And to be honest I'm bored of that. I've loved living in America, but I ended up spending at least 6 months longer in the job I left in January than I'd have done if I'd been freely able to change employer without having to change continent. So it seems the time has come to accept that America and I must part ways, sad as that is. Which is why I'm currently sitting in SFO waiting for a flight back to Belfast and for the first time in 5 years not having any idea when I might be back in the US.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Lessons learned

Planet ALUG - Fri, 05/09/2014 - 08:51
  • Apparently I am unable to summarise.

  • When going on holiday somewhere, research things we might do once there rather than rely on local knowledge.

  • I am mildly allergic to raw tomatoes and need to stop bloody eating them.

  • Fork out for the TomTom map wherever we go. My aged TomTom One is still far better than anything I've found on Android so far.

    • Google Maps does not do navigation in Turkey.

    • Not all road signs in Turkey are reliable. Some rely on local knowledge.

  • Whatever the heat, keep feet covered at night; the mosquitos love them. Ouch.

  • Lost luggage will only turn up after you've given up hope and have bought replacements for the important stuff.

  • Turkey has inherited several things from French immigrants of yore. Notably, quite a bit of vocabulary and their driving style.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): O Baggage Where Art Thou

Planet ALUG - Fri, 05/09/2014 - 08:32

Owing to various factors, I'm finding it difficult to recall the things that have happened, and in what order, over the last few days so, for my own purposes, I'm going to note them down here.

Edit: Those were not notes. I'm a waffler.

tl;dr: We got tired, the airline lost one of our bags, we did stuff, the airline found the bag.

Monday

Woke up around 9, considered the fact that we had until around 5pm to tidy the house, tackle the Everest of dishes, wash all the clothes, pack, and then leave for our holiday.

Farted around a fair bit and eventually resigned ourselves to coming back to a less-than-perfectly-tidy house. I scaled Mount Crockery at least.

Around 18:30 we eventually left for Stansted. We made good time and arrived plenty early enough for our 23:35 flight to Istanbul. On check-in, we were told the flight was delayed and was expected to depart between 01:00 and 01:30. Just what we needed with our already over-tired 2 year old.

We decided we would try to take it easy; we had a pint and I walked around the airport with the little man until he had calmed down a bit.

Tuesday

Eventually, the plane was ready for us to board at 01:15; we did so.

The flight passed easily enough. We were served a hot meal as soon as we hit cruising altitude and then we all slept through until descent. The landing was smooth and early morning Istanbul seemed warm and inviting.

Until we found ourselves still waiting for our luggage ninety minutes later.

Two hours later, we learned that one of our bags had been lost. After some half-hearted arguing (we were just too tired), we filled in a form and left the arrivals hall with our remaining luggage. Unfortunately, the missing one contained most of our clothes and, frustratingly, toys and clothes for my sister-in-law's newborn.

Brother-in-law was waiting patiently outside for us. I guess he'd been there a while because he looked very relieved to see us. We made our way to the car hire place to find that, because we were so much later than we'd told them (by this time, 3 hours later than we had booked the car for), they had decided we'd cancelled and had given our car to someone else. After some more arguing (half-hearted again), they found us another car of "equivalent size" and told us to wait round the front.

The car was a Ford Fiesta. I'm not one of those blokey types who knows about cars, but I can say with certainty that I will never buy a Ford Fiesta and hope never to have to drive one again. It was tiny and weird. If we'd had our missing bag, I don't think we could have fitted everything in the car. mumble mumble small mercies or summat

With the help of b-i-l, we found our way to his house - driving on the "wrong" side of the road in a "wrong"-hand-drive car after a long and stressful night with not much sleep was fun - and greeted s-i-l and her new baby and then had breakfast.

Then we slept. Then we went to the park. Then we slept.

Wednesday

The oddity of travelling at night then sleeping in the day, but still being tired enough to sleep again at night, was a new experience for me and I still feel quite confused, but I think I've managed to convince myself that everything above under "Tuesday" is correct.

On Wednesday, we decided on the strength of internet reviews to visit Polonezköy. Don't bother; it's rubbish. We pressed on then to the "nearby" beach. It turned out to be a 45 minute drive, and a storm broke out along the way. When we arrived at the little seaside town (I don't remember its name) there was nowhere to park. Being already in a grump, we decided to head home and call the day a complete loss. Halfway home, we decided we would visit Kartal instead; a town near s-i-l's.

Kartal was nice :)

Thursday

Shopping in Kadıköy, ferry to Beşiktaş, more shopping, ferry back, home. In all, a nice day. Rounded off by some quality time with a beer on the balcony. It is way too hot indoors, even at night.

Just after midnight, the airline called us to say that they had found our missing luggage and would be sending it to us tomorrow.

Fresh pants!

Categories: LUG Community Blogs

Steve Kemp: If you signed my old key, please consider repeating the process

Planet HantsLUG - Thu, 04/09/2014 - 17:08

I'm in the process of rejoining the Debian project. When I was previously a member I had a 1024-bit key, which is considered to be a poor size these days.

Happily I've already generated a new key, which is much bigger.

If you've signed my old key, and thus trust my identity was confirmed at some point in time, then please do consider repeating the process with the new one.

As I've signed the new key with the old one there should be no concern that it is random/spurious/malicious.

Obviously the ideal scenario is that I meet local-people to perform signing rites, in exchange for cake, beer, or other bribery.

Old key:

pub   1024D/CD4C0D9D 2002-05-29
      Key fingerprint = DB1F F3FB 1D08 FC01 ED22 2243 C0CF C6B3 CD4C 0D9D
uid   Steve Kemp <steve@steve.org.uk>
sub   2048g/AC995563 2002-05-29

New key:

pub   4096R/0C626242 2014-03-24
      Key fingerprint = D516 C42B 1D0E 3F85 4CAB 9723 1909 D408 0C62 6242
uid   Steve Kemp (Edinburgh, Scotland) <steve@steve.org.uk>
sub   4096R/229A4066 2014-03-24
Categories: LUG Community Blogs

Steve Kemp: systemd, a brave new world

Planet HantsLUG - Thu, 04/09/2014 - 01:47

After spending a while fighting with upstart, at work, I decided that systemd couldn't be any worse and yesterday morning upgraded one of my servers to run it.

I have two classes of servers:

  • Those that run standard daemons, with nothing special.
  • Those that run different services under runit
    • For example docker guests, node.js applications, and similar.

I thought it would be a fair test to upgrade one of each kind of system, to see how it worked.

The Debian wiki has instructions for installing Systemd, and both systems came up just fine.

Although I realize I should replace my current runit jobs with systemd units, I didn't want to do that yet. So I wrote a systemd .service file to launch runit against /etc/service, as expected, and that was fine.
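A unit file for that kind of job might look roughly like the following. This is a sketch of my own, not the author's actual file; the unit name and the assumption that runsvdir lives in /usr/bin are mine:

```ini
# /etc/systemd/system/runsvdir.service (hypothetical name and path)
[Unit]
Description=Launch runit's runsvdir against /etc/service
After=network.target

[Service]
# -P makes runsvdir adjust its process name to show status.
ExecStart=/usr/bin/runsvdir -P /etc/service
Restart=always

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload` and `systemctl enable --now runsvdir`, the services under /etc/service would be supervised as before, just with runsvdir itself now under systemd's control.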

Docker was a special case. I wrote a docker.service + docker.socket file to launch the daemon, but when I wrote a graphite.service file to start a docker instance it kept on restarting, or failing to stop.

In short I couldn't use systemd to manage running a docker guest, but that was probably user-error. For the moment the docker-host has a shell script in root's home directory to launch the guest:

#!/bin/sh
#
# Run Graphite in a detached state.
#
/usr/bin/docker run -d -t -i -p 8080:80 -p 2003:2003 skxskx/graphite

Without getting into politics (ha), systemd installation seemed simple, resulted in a faster boot, and didn't cause me horrific problems. Yet.

ObRandom: Not sure how systemd is controlling prosody, for example. If I run the status command I can see it is using the legacy system:

root@chat ~ # systemctl status prosody.service
prosody.service - LSB: Prosody XMPP Server
      Loaded: loaded (/etc/init.d/prosody)
      Active: active (running) since Wed, 03 Sep 2014 07:59:44 +0100; 18h ago
      CGroup: name=systemd:/system/prosody.service
              └ 942 lua5.1 /usr/bin/prosody

I've installed systemd and systemd-sysv, so I thought /etc/init.d was obsolete. I guess it is generating pretend services for things it doesn't know about (because obviously not all packages ship /lib/systemd/system entries), but I'm unsure how that works.

Categories: LUG Community Blogs

Meeting at "The Moon Under Water"

Wolverhampton LUG News - Mon, 01/09/2014 - 07:00
Event-Date: Wednesday, 3 September, 2014 - 19:30 to 23:00
Body: 53-55 Lichfield St, Wolverhampton, West Midlands, WV1 1EQ
Eat, Drink and talk Linux
Categories: LUG Community Blogs

Steve Kemp: A diversion - The National Health Service

Planet HantsLUG - Sun, 31/08/2014 - 11:51

Today we have a little diversion to talk about the National Health Service. The NHS is the publicly funded healthcare system in the UK.

Actually there are four such services in the UK, only one of which has this name:

  • The national health service (England)
  • Health and Social Care in Northern Ireland.
  • NHS Scotland.
  • NHS Wales.

In theory this doesn't matter, if you're in the UK and you break your leg you get carried to a hospital and you get treated. There are differences in policies because different rules apply, but the basic stuff "free health care" applies to all locations.

(Differences? In Scotland you get eye-tests for free, in England you pay.)

My wife works as an accident & emergency doctor, and has recently changed jobs. Hearing her talk about her work is fascinating.

The hospitals she's worked in (Dundee, Perth, Kirkcaldy, Edinburgh, Livingstone) are interesting places. During the week things are usually reasonably quiet, and during the weekend things get significantly more busy. (This might mean there are 20 doctors to hand, versus three at quieter times.)

Weekends are busy largely because people fall down hills, get drunk and fight, and are at home rather than at work - where 90% of accidents occur.

Of course even a "quiet" week can be busy, because folk will have heart-attacks round the clock, and somebody somewhere will always be playing with a power tool, a ladder, or both!

So what was the point of this post? Well she's recently transferred to working for a children's hospital (still in A&E) and the patients are so very different.

I expected the injuries/patients she'd see to differ. Few 10 year olds will arrive drunk (though it does happen), and few adults fall out of trees or eat washing-machine detergent, but talking to her about her day when she returns home, it's fascinating how many things are completely different from what I expected.

Adults come to hospital mostly because they're sick, injured, or drunk.

Children come to hospital mostly because their parents are paranoid.

A child has a rash? Doctors are closed? Let's go to the emergency ward!

A child has fallen out of a tree and has a bruise, a lump, or complains of pain? Doctors are closed? Let's go to the emergency ward!

I've not kept statistics, though I wish I could, but it seems that she can go 3-5 days between seeing a genuinely injured or chronically-sick child. It's the first-time parents who bring kids in when they don't need to.

Understandable, completely understandable, but at the same time I'm sure it is more than a little frustrating for all involved.

Finally one thing I've learned, which seems completely stupid, is the NHS-Scotland approach to recruitment. You apply for a role, such as "A&E doctor" and after an interview, etc, you get told "You've been accepted - you will now work in Glasgow".

In short you apply for a post, and then get told where it will be based afterward. There's no ability to say "I'd like to be a Doctor in city X - where I live", you apply, and get told where it is post-acceptance. If it is 100+ miles away you either choose to commute, or decline and go through the process again.

This has led to Kirsi working in hospitals within a radius of about 100km from the city we live in, and has meant she's had to turn down several posts.

And that is all I have to say about the NHS for the moment, except for the implicit pity for people who have to pay (inflated and life-changing) prices for things in other countries.

Categories: LUG Community Blogs

Steve Kemp: Migration of services and hosts

Planet HantsLUG - Fri, 29/08/2014 - 13:28

Yesterday I carried out the upgrade of a Debian host from Squeeze to Wheezy for a friend. I like doing odd-jobs like this as they're generally painless, and when there are problems it is a fun learning experience.

I accidentally forgot to check on the status of the MySQL server on that particular host, which was a little embarrassing, but I later put together a reasonably thorough serverspec recipe to describe how the machine should be set up, which will avoid that problem in the future - Introduction/tutorial here.

The more I use serverspec the more I like it. My own personal servers have good rules now:

shelob ~/Repos/git.steve.org.uk/server/testing $ make
..
Finished in 1 minute 6.53 seconds
362 examples, 0 failures

Slow, but comprehensive.

In other news I've now migrated every single one of my personal mercurial repositories over to git. I didn't have a particular reason for doing that, but I've started using git more and more for collaboration with others and using two systems felt like an annoyance.

That means I no longer have to host two different kinds of repositories, and I can use the excellent gitbucket software on my git repository host.

Needless to say I wrote a policy for this host too:

#
# The host should be wheezy.
#
describe command("lsb_release -d") do
  its(:stdout) { should match /wheezy/ }
end

#
# Our gitbucket instance should be running, under runit.
#
describe supervise('gitbucket') do
  its(:status) { should eq 'run' }
end

#
# nginx will proxy to our back-end
#
describe service('nginx') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end

#
# Host should resolve
#
describe host("git.steve.org.uk") do
  it { should be_resolvable.by('dns') }
end

Simple stuff, but being able to trigger all these kind of tests, on all my hosts, with one command, is very reassuring.

Categories: LUG Community Blogs

Steve Kemp: Updates on git-hosting and load-balancing

Planet HantsLUG - Mon, 25/08/2014 - 12:32

To round up the discussion of the Debian Administration site yesterday I flipped the switch on the load-balancing. Rather than this:

https -> pound \
                \
http -------------> varnish --> apache

We now have the simpler route for all requests:

http  -> haproxy -> apache
https -> haproxy -> apache

This means we have one less internal HTTP request for all incoming secure connections, and these days secure connections are preferred since a Strict-Transport-Security header is set.
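For the curious, a haproxy configuration for that layout might look roughly like this. It is a sketch of my own under assumed names; the frontend/backend names, addresses, and certificate path are invented, not the site's actual config:

```
frontend www
    bind :80
    bind :443 ssl crt /etc/haproxy/site.pem
    # Add the HSTS header to responses (haproxy 1.5 syntax).
    rspadd Strict-Transport-Security:\ max-age=31536000
    default_backend apache

backend apache
    balance roundrobin
    server web1 127.0.0.1:8080 check
```

One process terminates SSL and balances both schemes, which is what removes the extra pound-to-varnish hop from the old setup.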

In other news I've been juggling git repositories; I've setup an installation of GitBucket on my git-host. My personal git repository used to contain some private repositories and some mirrors.

Now it contains mirrors of most things on github, as well as many more private repositories.

The main reason for the switch was to get a prettier interface and bug-tracker support.

A side-benefit is that I can use "groups" to organize repositories.

Most of those groups contain mirrors of the github repositories, but some are new. When signed in I see more sources, for example the source to http://steve.org.uk.

I've been pleased with the setup and performance, though I had to add some caching and some other magic at the nginx level to provide /robots.txt, etc, which are not otherwise present.

I'm not abandoning github, but I will no longer be using it for private repositories (I was gifted a free subscription a year or three ago), and nor will I post things there exclusively.

If a single canonical source location is required for a repository it will be one that I control, maintain, and host.

I don't expect I'll give people commit access on this mirror, but it is certainly possible. In the past I've certainly given people access to private repositories for collaboration, etc.

Categories: LUG Community Blogs

Steve Kemp: Updating Debian Administration, the code

Planet HantsLUG - Sat, 23/08/2014 - 08:04

So I previously talked about the setup behind Debian Administration, and my complaints about the slowness.

The previous post talked about the logical setup, and the hardware. This post talks about the more interesting thing. The code.

The code behind the site was originally written by Denny De La Haye. I found it and reworked it a lot, most obviously adding structure and test cases.

Once I did that the early version of the site was born.

Later my version became the official version, as when Denny set up Police State UK he used my codebase rather than his.

So the code huh? Well as you might expect it is written in Perl. There used to be this layout:

yawns/cgi-bin/index.cgi
yawns/cgi-bin/Pages.pl
yawns/lib/...
yawns/htdocs/

Almost every request would hit the index.cgi script, which would parse the request and return the appropriate output via the standard CGI interface.

How did it know what you wanted? Well sometimes there would be a parameter set which would be looked up in a dispatch-table:

/cgi-bin/index.cgi?article=40         - Show article 40
/cgi-bin/index.cgi?view_user=Steve    - Show the user Steve
/cgi-bin/index.cgi?recent_comments=10 - Show the most recent comments.

Over time the code became hard to update because there was no consistency, and over time the site became slow because this is not a quick setup. Spiders, bots, and just average users would cause a lot of perl processes to run.

So? What did I do? I moved the thing to using FastCGI, which avoids the cost of forking Perl and loading the (100k+) code on every request.

Unfortunately this required a bit of work because all the parameter handling was messy and caused issues if I just renamed index.cgi -> index.fcgi. The most obvious solution was to use one parameter, globally, to specify the requested mode of operation.

Hang on? One parameter to control the page requested? A persistent environment? What does that remind me of? Yes. CGI::Application.
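As a rough illustration of the pattern - not the site's actual code; the class name, mode names, and handlers here are all invented - a CGI::Application subclass maps that single global parameter onto run-modes like so:

```perl
package Application::Example;    # hypothetical class name

use strict;
use warnings;
use base 'CGI::Application';

sub setup {
    my $self = shift;

    # The single, global parameter which selects the page.
    $self->mode_param('mode');
    $self->start_mode('index');

    # The dispatch-table: mode name -> handler method.
    $self->run_modes(
        'index'     => 'show_index',
        'article'   => 'show_article',
        'view_user' => 'show_user',
    );
}

# Each handler returns the page body for its mode.
sub show_index   { return "front page" }
sub show_article { return "article"    }
sub show_user    { return "user"       }

1;
```

Under FastCGI the module stays loaded, so each request only costs a method dispatch rather than a fresh fork and compile.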

I started small, and pulled some of the code out of index.cgi + Pages.pl, and over into a dedicated CGI::Application class:

  • Application::Feeds - Called via /cgi-bin/f.fcgi.
  • Application::Ajax - Called via /cgi-bin/a.fcgi.

So now every part of the site that is called by Ajax has one persistent handler, and every part of the site which returns RSS feeds has another.

I had some fun setting up the sessions to match those created by the old code, but I quickly made it work.

The final job was the biggest: moving all the other (non-feed, non-ajax) modes over to a similar CGI::Application structure. There were 53 modes that had to be ported, and I did them methodically, first porting all the Poll-related requests, then all the article-related ones, and so on. I think I did about 15 a day for three days, then the rest in a sudden rush.

In conclusion the code is now fast because we don't use CGI, and instead use FastCGI.

This allowed minor changes to be carried out, such as compiling the HTML::Template templates which determine the look and feel, etc. Those things don't make sense in the CGI environment, but with persistence they are essentially free.

The site got a little more of a speed boost when I updated DNS, and a lot more when I blacklisted a bunch of IP-space.

As I was wrapping this up I realized that the code had accidentally become closed - because the old repository no longer exists. That is not deliberate, or intentional, and will be rectified soon.

The site would never have been started if I'd not seen Denny's original project, and although I don't think others would use the code it should be possible. I remember at the time I was searching for things like "Perl CMS" and finding Slashcode, and Scoop, which I knew were too heavyweight for my little toy blog.

In conclusion Debian Administration website is 10 years old now. It might not have changed the world, it might have become less relevant, but I'm glad I tried, and I'm glad there were years when it really was the best place to be.

These days there are HowtoForges, blogs, spam posts titled "How to install SSH on Trusty", "How to install SSH on Wheezy", "How to install SSH on Precise", and all that. No shortage of content, just finding the good from the bad is the challenge.

Me? The single best resource I read these days is probably LWN.net.

Starting to ramble now.

Go look at my quick hack for remote command execution: https://github.com/skx/nanoexec

Categories: LUG Community Blogs

Steve Engledow (stilvoid): All fired up

Planet ALUG - Thu, 21/08/2014 - 22:52

After putting it off for various reasons for at least a couple of years, I've finally switched back from Chromium to Firefox and I'm very glad I did so.

The recent UI change seems to have upset a lot of Firefox users but it was instrumental in prompting my return and I'm sure others will have felt the same; Firefox once again looks and feels like a modern browser.

I have to say also that it feels an imperial bucketload snappier than Chromium too. The exact opposite was one of the reasons I left in the first place.

Good job Firefolk :)

Categories: LUG Community Blogs

Steve Kemp: Updating Debian Administration

Planet HantsLUG - Thu, 21/08/2014 - 08:50

Recently I've been getting annoyed with the Debian Administration website; too often it would be slower than it should be considering the resources behind it.

As a brief recap I have six nodes:

  • 1 x MySQL Database - The only MySQL database I personally manage these days.
  • 4 x Web Nodes.
  • 1 x Misc server.

The misc server is designed to display events. There is a node.js listener which receives UDP messages and stores them in a rotating buffer. The messages might contain things like "User bob logged in", "Slaughter ran", etc. It's a neat hack which gives a good feeling of what is going on cluster-wide.

I need to rationalize that code - but there's a very simple predecessor posted on github for the curious.

Anyway, enough diversions. The database is tuned, and "small". The misc server is almost entirely irrelevant, non-public, and not explicitly advertised.

So what do the web nodes run? Well they run a lot. Potentially.

Each web node has four services configured:

  • Apache 2.x - All nodes.
  • uCarp - All nodes.
  • Pound - Master node.
  • Varnish - Master node.

Apache runs the main site, listening on *:8080.

One of the nodes will be special and will claim a virtual IP provided via ucarp. The virtual IP is actually the end-point visitors hit, meaning we have:
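A typical ucarp invocation for claiming such a virtual IP looks something like the following - illustrative values only; the interface, addresses, password, and script paths are my guesses, not the cluster's real configuration:

```
# Claim 192.0.2.10 as a shared virtual IP (vhid 42) on eth0.
ucarp --interface=eth0 --srcip=192.0.2.11 --vhid=42 --pass=secret \
      --addr=192.0.2.10 \
      --upscript=/etc/ucarp/vip-up.sh \
      --downscript=/etc/ucarp/vip-down.sh
```

Every node runs the same command; ucarp's election decides which one actually holds the address, and the up/down scripts add or remove it from the interface as mastership moves.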

Master host, running:

  • Apache.
  • Pound.
  • Varnish.

Other hosts, running:

  • Apache.

Pound is configured to listen on the virtual IP and perform SSL termination. That means that incoming requests get proxied from "vip:443 -> vip:80". Varnish listens on "vip:80" and proxies to the back-end apache instances.

The end result should be high availability. In the typical case all four servers are alive, and all is well.

If one server dies, and it is not the master, then it will simply be dropped as a valid back-end. If a single server dies and it is the master then a new master will appear, thanks to the magic of ucarp, and the remaining three will be used as expected.

I'm sure there is a pathological case when all four hosts die, and at that point the site will be down, but that's something that should be atypical.

Yes, I am prone to over-engineering. The site doesn't have any availability requirements that justify this setup, but it is good to experiment and learn things.

So, with this setup in mind, with incoming requests (on average) being divided at random onto one of four hosts, why is the damn thing so slow?

We'll come back to that in the next post.

(Good news though; I fixed it ;)

Categories: LUG Community Blogs
Syndicate content