News aggregator

Andy Smith: OneRNG kickstarter arrived!

Planet HantsLUG - Sat, 06/06/2015 - 12:43

My OneRNG kickstarter arrived today. I had five units, so I chose three external models and two internal ones. The finish of the external model isn’t really up to the quality of an Entropy Key. Here’s a picture of them together.

Given that the external model looks rather flimsy — I could imagine it getting snapped in half if someone bumped into it — I think I’d probably prefer the internal model in practice. Here’s what that looks like:

The three different connectors are to try to ensure you can find a useful connection angle no matter how your motherboard’s internal USB headers are laid out.

I haven’t yet plugged them in to check out how they work. This is probably going to have to wait a few weeks as I have quite a lot on.
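
When I do get to them, the first smoke test will presumably be something like the following (a sketch only: the OneRNG presents as a USB serial device, and the device path here is illustrative):

    # feed the raw output through the FIPS 140-2 tests from rng-tools
    sudo apt-get install rng-tools
    sudo cat /dev/ttyACM0 | rngtest -c 1000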

Assuming they work about as well as the Entropy Keys, I only need to keep two of these for myself, so if anyone wants one I would be willing to sell it on to you at cost plus postage.

Categories: LUG Community Blogs

Mick Morgan: why pay twice?

Planet ALUG - Fri, 05/06/2015 - 21:09

Yesterday’s Independent newspaper reports that HMG has let a contract with five companies to monitor social media such as Twitter, Facebook, and blogs for commentary on Government activity. The report says:

“Under the terms of the deal five companies have been approved to keep an eye on Facebook, Twitter and blogs and provide daily reports to Whitehall on what’s being said in “real time”.

Ministers, their advisers and officials will provide the firms with “keywords and topics” to monitor. They will also be able to opt in to an Orwellian-sounding Human-Driven Evaluation and Analysis system that will allow them to see “favourability of coverage” across old and new media.”

This seems to me to be a modern spin on the old press cuttings system which was in widespread use in HMG throughout my career. The article goes on to say:

“The Government has always paid for a clippings service which collated press coverage of departments and campaigns across the national, regional and specialist media. They have also monitored digital news on an ad hoc basis for several years. But this is believed to be the first time that the Government has signed up to a cross-Whitehall contract that includes “social” as a specific media for monitoring.”

Apart from the mainstream social media sites noted above, I’d be intrigued to know what criteria are to be applied for including blogs in the monitoring exercise. Some blogs (the “vox populi” types such as Guido Fawkes at order-order) will be obvious candidates. Others in the traditional media, such as journalistic or political blogs, will also be included, but I wonder who chooses the rest, and by what yardsticks. Would trivia be included? And should I care?

According to the Independent, the Cabinet Office, which negotiated the deal, claims that by bringing individual departmental contracts together it will be able to save £2.4m over four years, even with the extended range of monitoring, whilst “maximising the quality of innovative work offered by suppliers”.

Now since the Cabinet Office is reportedly itself facing a budget cut of £13 million in this FY alone, it strikes me that it would have been much more cost effective to simply use GCHQ’s pre-existing monitoring system rather than paying a separate bunch of relative amateurs to search the same sources.

Just give GCHQ the “keywords” or “topics of interest”. Go on Dave, you know it makes sense.

Categories: LUG Community Blogs

Anton Piatek: Bye bye big blue

Planet HantsLUG - Thu, 04/06/2015 - 11:00

After nearly 10 years with IBM, I am moving on… Today is my last day with IBM.

I suppose my career with IBM really started as a pre-university placement at IBM, which makes my time in IBM closer to 11 years.  I worked with some of the WebSphere technical sales and pre-sales teams in Basingstoke, doing desktop support and Lotus Domino administration and application design, though I don’t like to remind people that I hold qualifications on Domino :p

I then joined as a graduate in 2005, and spent most of my time working on Integration Bus (aka Message Broker, and several more names) and enjoyed working with some great people over the years. The last 8 months or so have been with the QRadar team in Belfast, and I really enjoyed my time working with such a great team.

I have done test roles, development roles, performance work, some time in level 3 support, and enjoyed all of it. Even the late nights the day before release were usually good fun (the huge pizzas helped!).

I got very involved with IBM Hursley’s Blue Fusion events, which were incredible fun and a rather unique opportunity to interact with secondary school children.

Creating an Ubuntu-based linux desktop for IBM, with over 6500 installs, has been very rewarding and something I will remember fondly.

I’ve enjoyed my time in IBM and made some great friends. Thanks to everyone that helped make my time so much fun.

 

Categories: LUG Community Blogs

Mick Morgan: de-encrypting trivia

Planet ALUG - Tue, 02/06/2015 - 17:03

Well, that didn’t last long.

When I decided to force SSL as the default connection to trivia I had forgotten that it is syndicated via RSS on sites like planet alug. And of course as Brett Parker helpfully pointed out to me, self-signed certificates don’t always go down too well with RSS readers. He also pointed out that some spiders (notably google) would barf on my certificate and thus leave the site unindexed.

So I have taken off the forced redirect to port 443. Nevertheless, I would encourage readers to connect to https://baldric.net in order to protect their browsing of this horribly seditious site.
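
For the curious, the new behaviour is easy to check from outside (the -k flag simply tells curl to accept my self-signed certificate):

    # plain HTTP now answers directly instead of redirecting to port 443
    curl -sI http://baldric.net/ | head -n 1
    # HTTPS remains available for those who want it
    curl -skI https://baldric.net/ | head -n 1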

You never know who is watching…

Categories: LUG Community Blogs

Mick Morgan: encrypting trivia

Planet ALUG - Mon, 01/06/2015 - 20:26

In my post of 8 May I said it was now time to encrypt much, much more of my everyday activity. One big, and obvious, hole in this policy decision was the fact that the public face of this blog itself has remained unencrypted since I first created it way back in 2006.

Back in September 2013 I mentioned that I had for some time protected all my own connections to trivia with an SSL connection. Given that my own access to trivia has always been encrypted, any of my readers could easily have used the same mechanism to connect (just by using the “https” prefix). However, my logs tell me that very, very few connections other than my own come in over SSL. There are a couple of probable reasons for this, not least the fact that an unencrypted plain http connection is the obvious (default) way to connect. But another reason may be the fact that I use a self-signed (and self-generated) X509 certificate. I do this because, like Michael Orlitzky, I see no reason why I should pay an extortionist organisation such as a CA good money to produce a certificate which says nothing about me or the trustworthiness of my blog when I can produce a perfectly good certificate of my own.

I particularly like Orlitzky’s description of CAs as “terrorists”. He says:

I oppose CA-signed certificates because it’s bad policy, in the long run, to negotiate with terrorists. I use that word literally — the CAs and browser vendors use fear to achieve their goal: to get your money. The CAs collect a ransom every year to ”renew“ your certificate (i.e. to disarm the time bomb that they set the previous year) and if you don’t pay up, they’ll scare away your customers. ‘Be a shame if sometin’ like that wos to happens to yous…

Unfortunately, however, web browsers get really upset when they encounter self-signed certificates and throw up all sorts of ludicrously overblown warnings. Firefox, for example, gives the error below when first connecting to trivia over SSL.

Any naive reader encountering that sort of error message is likely to press the “get me out of here” button and then bang goes my readership. But that is just daft. If you are happy to connect to my blog in clear, why should you be afraid to connect to it over an encrypted channel just because the browser says it can’t verify my identity? If I wanted to attack you, the reader, then I could just as easily do so over a plain http connection as over SSL. And in any event, I did not create my self signed certificate to provide identity verification, I created it to provide an encrypted channel to the blog. That encryption works, and, I would argue, it is better than the encryption provided by many commercially produced certificates because I have specifically chosen to use only the stronger cyphers available to me.
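
For anyone minded to do the same, a perfectly serviceable self-signed certificate takes a single openssl invocation (a generic sketch, not necessarily the exact parameters I used for trivia):

    # 4096-bit RSA key plus a ten-year self-signed certificate, no passphrase
    openssl req -x509 -newkey rsa:4096 -sha256 -nodes \
        -keyout trivia.key -out trivia.crt -days 3650 \
        -subj "/CN=baldric.net"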

Encrypting the connection to trivia feels to me like the right thing to do. I personally always feel better about a web connection that is encrypted. Indeed, I use the “https everywhere” plugin as a matter of course. Given that I already have an SSL connection available to offer on trivia, and that I believe that everyone has the right to browse the web free from intrusive gratuitous snooping, I think it is now way past time that I provided that protection to my readers. So, as of yesterday I have shifted the whole of trivia to an encrypted channel by default. Any connection to port 80 is now automatically redirected to the SSL protected connection on port 443.

Let’s see what happens to my readership.

Categories: LUG Community Blogs

Bring-A-Box, Saturday 13th June 2015, Merstham

Surrey LUG - Fri, 29/05/2015 - 22:52
Start: 2015-06-13 12:00
End: 2015-06-13 12:00

We have regular sessions on the second Saturday of each month. Bring a 'box', bring a notebook, bring anything that might run Linux, or just bring yourself and enjoy socialising/learning/teaching or simply chilling out!

This month's meeting is at The Feathers Pub, Merstham

42 High St, Merstham, Redhill, Surrey, RH1 3EA ‎
01737 645643 ‎ · http://www.thefeathersmerstham.co.uk

NOTE the pub opens at 12 Noon.

Categories: LUG Community Blogs

Steve Kemp: A brief examination of tahoe-lafs

Planet HantsLUG - Thu, 28/05/2015 - 01:00

Continuing the theme from the last post I made, I've recently started working my way down the list of existing object-storage implementations.

tahoe-LAFS is a well-established project which looked like a good fit for my needs:

  • Simple API.
  • Good handling of redundancy.

Getting the system up and running, on four nodes, was very simple. Set up a single/simple "introducer", which is a well-known node that all hosts can use to find each other, and then set up four daemons for storage.
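
Roughly, that setup amounts to the following (paths and nicknames are illustrative; the introducer prints its FURL when it first starts):

    # one well-known introducer
    tahoe create-introducer ~/introducer
    tahoe start ~/introducer
    # then, on each of the four storage hosts
    tahoe create-node --nickname=node1 --introducer=pb://<furl> ~/node1
    tahoe start ~/node1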

When files are uploaded they are split into chunks, and these chunks are then distributed amongst the various nodes. There are some configuration settings which determine how many chunks files are split into (10 by default), how many chunks are required to rebuild the file (3 by default) and how many copies of the chunks will be created.
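
Those settings live in each client's tahoe.cfg; shares.needed and shares.total are the two described above, and shares.happy additionally sets how many distinct servers must accept shares before an upload counts as a success (the values shown are the defaults):

    [client]
    shares.needed = 3
    shares.happy = 7
    shares.total = 10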

The biggest problem I have with tahoe is that there is no rebalancing support: set up four nodes, and the space becomes full? You can add more nodes, but new uploads go to the new nodes while old ones stay on the old. Similarly, if you change your replication-counts because you're suddenly more/less paranoid, this doesn't affect existing nodes.

In my perfect world you'd distribute blocks around pretty optimistically, and I'd probably run more services:

  • An introducer - To allow adding/removing storage-nodes on the fly.
  • An indexer - to store the list of "uploads", meta-data, and the corresponding block-pointers.
  • The storage-nodes - to actually store the damn data.

The storage nodes would have the primitives "List all blocks", "Get block", "Put block", and using that you could ensure that each node had sent its data to at least N other nodes. This could be done in the background.
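
To make that concrete, those primitives could be as little as three HTTP verbs. This is entirely hypothetical, sketching the design above rather than any API tahoe actually has:

    # hypothetical storage-node API
    curl http://node1:8000/blocks                  # "List all blocks"
    curl http://node1:8000/blocks/<id>             # "Get block"
    curl -X PUT --data-binary @chunk \
         http://node1:8000/blocks/<id>             # "Put block"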

The indexer would be responsible for keeping track of which blocks live where, and which blocks are needed to reassemble upload N. There's probably more that it could do.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Rhubarb Jam

Planet HantsLUG - Mon, 25/05/2015 - 19:40

Over the bank holiday weekend I made two batches of jam: rhubarb & ginger and rhubarb & orange. I made a small batch last year - which we've not yet eaten - but it's quite a while since I've made so much, and for sale at that.

This year I remembered to grade the rhubarb first, so that each batch was made from stems of similar diameter, which means that they cook evenly and you don't end up with a heterogeneous mixture - which is bad.

Categories: LUG Community Blogs

Anton Piatek: Customise your linux prompt

Planet HantsLUG - Fri, 22/05/2015 - 13:47

If you use a Linux or Unix box with bash or zsh, and you haven’t come across Liquid Prompt, then I suggest you head there right now to install it. I’m loving having more info on the status line, especially the version-control status when I’m in a repository, but even having CPU load and temperature along with battery life right under where I am typing is really useful.
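
Getting it going is a two-minute job; this is the gist of the instructions from the Liquid Prompt README (check there for the current details):

    git clone https://github.com/nojhan/liquidprompt.git
    source liquidprompt/liquidprompt        # try it out in the current shell
    # then load it in every interactive shell
    echo '[[ $- = *i* ]] && source ~/liquidprompt/liquidprompt' >> ~/.bashrc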

Categories: LUG Community Blogs

Jonathan McDowell: I should really learn systemd

Planet ALUG - Thu, 21/05/2015 - 18:20

As I slowly upgrade all my machines to Debian 8.0 (jessie) they’re all ending up with systemd. That’s fine; my laptop has been running it since it went into testing whenever it was. Mostly I haven’t had to care, but I’m dimly aware that it has a lot of bits I should learn about to make best use of it.

Today I discovered systemctl is-system-running. I’m not sure when I’d normally use it, but when I ran it, it responded with degraded. That’s not right, thought I. How do I figure out what’s wrong? systemctl --state=failed turned out to be the answer.

# systemctl --state=failed
UNIT                           LOAD   ACTIVE SUB    DESCRIPTION
● systemd-modules-load.service loaded failed failed Load Kernel Modules

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

Ok, so it’s failed to load some kernel modules. What’s it trying to load? systemctl status -l systemd-modules-load.service led me to /lib/systemd/systemd-modules-load which complained about various printer modules not being able to be loaded. Turned out this was because CUPS had dropped them into /etc/modules-load.d/cups-filters.conf on upgrade, and as I don’t have a parallel printer I hadn’t compiled up those modules. One of my other machines had also had an issue with starting up filesystem quotas (I think because there’d been some filesystems that hadn’t mounted properly on boot - my fault rather than systemd’s). Fixed that up and then systemctl is-system-running started returning a nice clean running.
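
For anyone hitting the same thing, the whole dance condenses to a few commands (the module file here is the one from my case; yours may differ):

    systemctl is-system-running                       # "degraded"
    systemctl --state=failed                          # which unit failed?
    systemctl status -l systemd-modules-load.service  # and why?
    # the culprit in this case: parallel printer modules requested by CUPS
    cat /etc/modules-load.d/cups-filters.conf
    # comment out the modules you haven't built, then re-check
    systemctl restart systemd-modules-load.service
    systemctl is-system-running                       # "running"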

Now this is probably something that was silently failing back under sysvinit, but of course nothing was tracking that other than some output on boot up. So I feel that I’ve learnt something minor about systemd that actually helped me clean up my system, and set me in better stead for when something important fails.

Categories: LUG Community Blogs

Jonathan McDowell: Stepping down from SPI

Planet ALUG - Mon, 18/05/2015 - 23:38

I was first elected to the Software in the Public Interest board back in 2009. I was re-elected in 2012. This July I am up for re-election again. For a variety of reasons I’ve decided not to stand; mostly a combination of the fact that I think 2 terms (6 years) is enough in a single stretch and an inability to devote as much time to the organization as I’d like. I mentioned this at the May board meeting. I’m planning to stay involved where I can.

My main reason for posting this here is to cause people to think about whether they might want to stand for the board. Nominations open on July 1st and run until July 13th. The main thing you need to absolutely commit to is being able to attend the monthly board meeting, which is held on IRC at 20:30 UTC on the second Thursday of the month. They tend to last at most 30 minutes. Of course there’s a variety of tasks that happen in the background, such as answering queries from prospective associated projects or discussing ongoing matters on the membership or board lists depending on circumstances.

It’s my firm belief that SPI do some very important work for the Free software community. Few people realise the wide variety of associated projects. SPI offload the boring admin bits around accepting donations and managing project assets (be those machines, domains, trademarks or whatever), leaving those projects able to concentrate on the actual technical side of things. Most project members don’t realise the involvement of SPI, and that’s largely a good thing as it indicates the system is working. However it also means that there can sometimes be a lack of people wanting to stand at election time, and an absence of diversity amongst the candidates.

I’m happy to answer questions from anyone who might consider standing for the board; #spi on irc.oftc.net is a good place to ask them - I am there as Noodles.

Categories: LUG Community Blogs

Wylug monthly meeting: 25th May

West Yorkshire LUG News - Mon, 18/05/2015 - 15:30

The time has come once again to announce WYLUG’s monthly meeting. As usual it will be held in the Lord Darcy, and as usual it will start at around 7pm. There is no obligation to book in advance. However, the man with the red hat lanyard will not be there to guide your steps this month.

Anton Piatek: Raspberry pi cloud print server

Planet HantsLUG - Mon, 18/05/2015 - 13:43

It turns out that a raspberry pi does a very good job of being a print server for a google cloud printer. Thanks to https://matthew.mceachen.us/blog/add-google-cloudprint-wifi-access-to-your-older-printer-with-a-raspberry-pi-1342.html I can now print at home directly from my phone!
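
From memory, the recipe in that post boils down to something like this, assuming the cloudprint Python package it uses (see the link for the authoritative steps):

    # set the printer up in CUPS first, then:
    sudo apt-get install cups python-pip
    sudo pip install cloudprint
    cloudprint    # prompts you to authorise your Google account on first run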

Categories: LUG Community Blogs

Andy Smith: Has your CurrentCost ever done this?

Planet HantsLUG - Fri, 15/05/2015 - 20:35

Update: Replacing the battery and retraining the receiver fixed it. I suppose it must have had enough juice to flash the LED but not transmit.

A few days ago my CurrentCost started reading just dashes. There’s also no transmitter icon, so I think it’s not receiving anything from the transmitter. It looks like this:

I went and fished the transmitter box out of the meter closet expecting its batteries to be dead, but it still has its red LED flashing periodically, so I don’t think it’s that.

I did the thing where you hold down the button on the transmitter for 9 seconds and also hold down the V button on the display to make them pair. The display showed its “searching” screen for a while but then it went back to how it looks above.

Anyone had that happen before? It’s otherwise worked fine for 4 years or so (batteries replaced once).

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Andy and Teddy are waving goodbye

Planet ALUG - Fri, 15/05/2015 - 00:28

Most of the time, when I've got some software I want to write, I do it in python or sometimes bash. Occasionally though, I like to slip into something with a few more brackets. I've written a bit of C in the past and love it but recently I've been learning Go and what's really struck me is how clever it is. I'm not just talking about the technical merits of the language itself; it's clever in several areas:

  • You don't need to install anything to run Go binaries.

    At first - I'm sure like many others - I felt a little revulsion when I heard that Go compiles to statically-linked binaries but after having used and played with Go a bit over the past few weeks, I think it's rather clever and was somewhat ahead of the game. In the current climate where DevOps folks (and developers) are getting excited about containers and componentised services, being able to simply curl a binary and have it usable in your container without needing to install a stack of dependencies is actually pretty powerful. It seems there's a general trend towards preferring readiness of use over efficiency of space used, both in RAM and on disk. And it makes sense; storage is cheap these days. A 10MiB binary is no concern - even if you need several of them - when you have a 1TiB drive. The extravagance of large binaries is no longer so relevant when you're comparing it with your collection of 2GiB bluray rips. The days of needing to count the bytes are gone. (There's a small demonstration of just how self-contained these binaries are after this list.)

  • Go has the feeling of C but without all that tedious mucking about in hyperspace memory

    Sometimes you just feel you need to write something fairly low level and you want more direct control than you have whilst you're working from the comfort blanket of python or ruby. Go gives you the ability to have well-defined data structures and to care about how much memory you're eating when you know your application needs to process tebibytes of data. What Go doesn't give you is the freedom to muck about in memory, fall off the end of arrays, leave pointers dangling around all over the place, and generally make tiny, tiny mistakes that take years for anyone to discover.

  • The build system is designed around how we (as developers) use code hosting facilities

    Go has a fairly impressive set of features built in but if you need something that's not already included, there's a good chance that someone out there has written what you need. Go provides a package search tool that makes it very easy to find what you're looking for. And when you've found it, using it is stupidly simple. You add an import declaration in your code:

    import "github.com/codegangsta/cli"

    which makes it very clear where the code has come from and where you'd need to go to check the source code and/or documentation. Next, pulling the code down and compiling it ready for linking into your own binary takes a simple:

    go get github.com/codegangsta/cli

    Go implicitly understands git and the various methods of retrieving code so you just need to tell it where to look and it'll figure the rest out.
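
As a small demonstration of the static-linking point from earlier in this list (any pure-Go program will do):

    # hello.go is the classic three-liner: package main; import "fmt"; ...
    go build hello.go
    ldd ./hello     # reports "not a dynamic executable"
    # deployment is therefore just a copy
    scp hello server:/usr/local/bin/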

In summary, I'm starting to wonder if Google have a time machine. Go seems to have nicely predicted several worries and trends since its announcement: Docker, Heartbleed, and social coding.

Categories: LUG Community Blogs

Mick Morgan: what is wrong with this sentence?

Planet ALUG - Thu, 14/05/2015 - 18:41

Yesterday the new Government published a press release about the forthcoming first meeting of the new National Security Council (NSC). That meeting was due to discuss the Tory administration’s plans for a new Counter-Extremism Bill. The press release includes the following extraordinary statement, which is attributed to the Prime Minister:

“For too long, we have been a passively tolerant society, saying to our citizens: as long as you obey the law, we will leave you alone. “

Forgive me, but what exactly is wrong with that view? Personally I think it admirable that we live in a tolerant society (“passive” or not). Certainly I believe that tolerance of difference, tolerance of free speech, tolerance of the right to hold divergent opinion, and to voice that opinion is to be cherished and lauded. And is it not right and proper that a Government should indeed “leave alone” any and all of its citizens who are obeying the law?

Clearly, however, our Prime Minister disagrees with me and believes that a tolerant society is not what we really need in the UK, because the press release continues:

“This government will conclusively turn the page on this failed approach. “

If tolerance is a “failed approach”, what are we likely to see in its place?

Categories: LUG Community Blogs

MJ Ray: Recorrecting Past Mistakes: Window Borders and Edges

Planet ALUG - Thu, 14/05/2015 - 05:58

A while ago, I switched from tritium to herbstluftwm. In general, it’s been a good move, benefitting from active development and greater stability, even if I do slightly mourn the move from python scripting to a shell client.

One thing that was annoying me was that throwing the pointer into an edge didn’t find anything clickable. Window borders may be pretty, but they’re a pretty poor choice for the thing you can locate most easily: the thing that is on the screen edge.

It finally annoyed me enough to find the culprit. The .config/herbstluftwm/autostart file said “hc pad 0 26” (to keep enough space for the panel at the top edge) and changing that to “hc pad 0 -8 -7 26 -7” and reconfiguring the panel to be on the bottom (where fewer windows have useful controls) means that throwing the pointer at the top or the sides now usually finds something useful like a scrollbar or a menu.
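
For context, the relevant autostart line now reads as follows; hc is the usual alias for herbstclient, and the pad arguments are the monitor number followed by the top/right/bottom/left padding:

    # reserve 26px at the bottom for the panel, and pull the other three
    # edges slightly off-screen so borders never own the edge pixels
    hc pad 0 -8 -7 26 -7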

I wonder if this is a useful enough improvement that I should report it as an enhancement bug.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Building a componentised application

Planet ALUG - Thu, 14/05/2015 - 00:14

Without going into any of the details, it's a web application with a front end written using Ember and various services that it calls out to, written using whatever seems appropriate per service.

At the outset of the project, we decided we would bite the bullet and build for Docker from the outset. This meant we would get to avoid the usual dependency and developer environment setup nightmares.

The problem

What we quickly realised as we started to put the bare bones of a few of the services in place was that we had three seemingly conflicting goals for each component and for the application as a whole.

  1. Build images that can be deployed in production.

  2. Allow developers to run services locally.

  3. Provide a means for running unit tests (both by developers and our CI server).

So here's what we've ended up with:

The solution

Or: docker-compose to the rescue

Folder structure

Here's what the project layout looks like:

Project
|
+-docker-compose.yml
|
+-Service 1
| |
| +-Dockerfile
| |
| +-docker-compose.yml
| |
| +-<other files>
|
+-Service 2
  |
  +-Dockerfile
  |
  +-docker-compose.yml
  |
  +-<other files>

Building for production

This is the easy bit and is where we started first. The Dockerfile for each service was designed to run everything with the defaults. Usually, this is something simple like:

FROM python:3-onbuild
CMD ["python", "main.py"]

Our CI server can easily take these, produce images, and push them to the registry.
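
That CI step is nothing more exotic than (the registry name is illustrative):

    docker build -t registry.example.com/project/service1 "Service 1"
    docker push registry.example.com/project/service1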

Allowing developers to run services locally

This is slightly harder. In general, each service wants to do something slightly different when being run for development; e.g. automatically restarting when code changes. Additionally, we don't want to have to rebuild an image every time we make a code change. This is where docker-compose comes in handy.

The docker-compose.yml at the root of the project folder looks like this:

service1:
  build: Service 1
  environment:
    ENV: dev
  volumes:
    - Service 1:/usr/src/app
  links:
    - service2
    - db
  ports:
    - 8001:8000
service2:
  build: Service 2
  environment:
    ENV: dev
  volumes:
    - Service 2:/usr/src/app
  links:
    - service1
    - db
  ports:
    - 8002:8000
db:
  image: mongo

This gives us several features right away:

  • We can locally run all of the services together with docker-compose up

  • The ENV environment variable is set to dev in each service so that the service can configure itself to run in "dev" mode where needed.

  • The source folder for each service is mounted inside the container. This means you don't need to rebuild the image to try out new code.

  • Each service is bound to a different port so you can connect to each part directly where needed.

  • Each service defines links to the other services it needs.

Running the tests

This was the trickiest part to get right. Some services have dependencies on other things even just to get unit tests running. For example, Eve is a huge pain to get running with a fake database so it's much easier to just link it to a temporary "real" database.

Additionally, we didn't want to mess with the idea that the images should run production services by default, but also didn't want to require folks to churn out complicated docker invocations like docker run --rm -v $(pwd):/usr/src/app --link db:db service1 python -m unittest just to run the test suite after coding up some new features.

So, it was docker-compose to the rescue again :)

Each service has a docker-compose.yml that looks something like:

tests:
  build: .
  command: python -m unittest
  volumes:
    - .:/usr/src/app
  links:
    - db
db:
  image: mongo

Which sets up any dependencies needed just for the tests, mounts the local source in the container, and runs the desired command for running the tests.

So, a developer (or the CI box) can run the unit tests with:

docker-compose run tests

Summary
  • Each Dockerfile builds an image that can go straight into production without further configuration required.

  • Each image runs in "developer mode" if the ENV environment variable is set to dev.

  • Running docker-compose up from the root of the project gets you a full stack running locally in developer mode.

  • Running docker-compose run tests in each service's own folder will run the unit tests for that service - starting any dependencies as needed.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Podgot

Planet ALUG - Tue, 12/05/2015 - 23:37

I've been meaning to blog about the podcasts I listen to and the setup I use for consuming them as both have evolved a little over the past few months.

The podcasts

The setup

First, I use podget running in a cron job to pull the podcasts down to my VPS.
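
The cron job itself is a one-liner; podget keeps its feed list and settings under ~/.podget, so the entry just needs to run it on a schedule:

    # fetch new episodes once an hour
    0 * * * * /usr/bin/podget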

I use syncthing to have those replicated to my laptop and home media server.

From my laptop, I move files that I'm going to listen to on to my mp3 player (see http://offend.me.uk/blog/59/).

When I'm cycling to work or in the car, I use the mp3 player to listen to them. (No, when I'm in the car, I plug it in to the stereo, I don't drive with headphones on :P)

When I'm sitting at a computer or at home, I use Plex to serve up podcasts from my home media box.

I keep on top of everything by making sure that I move (rather than copy) when putting things on the mp3 player and rely on Syncthing to remove listened-to podcasts from everywhere else.

It's not the most elegant setup I've heard of but it's simple and works for me :)

What next?

I find I have a lot of things I want to listen to and not really enough time to listen to them in. I've heard that some people speed podcasts up (I've heard as much as 50%). Does anyone do this? Does it make things any less enjoyable to listen to? I really enjoy the quality of what I listen to; I don't want to feel like I'm just consuming information for the sake of it.

Categories: LUG Community Blogs

Debian Bits: Debian Ruby team sprint 2015

Planet HantsLUG - Mon, 11/05/2015 - 23:01

The Debian Ruby team had its first sprint in 2014. The experience was very positive, and it was decided to do it again in 2015. Last April, the team once more met at the IRILL offices in Paris, France.

The participants worked to improve the quality of Ruby packages in Debian, including fixing release-critical and security bugs, improving metadata and packaging code, and triaging test failures on the Debian Continuous Integration service.

The sprint also served to prepare the team infrastructure for the future Debian 9 release:

  • the gem2deb packaging helper was improved, to streamline the semi-automated generation of Debian source packages from existing standard-compliant Ruby packages on Rubygems (a usage sketch follows this list).

  • there was also an effort to prepare for the switch to Ruby 2.2, the latest stable release of the Ruby language, which came out after the Debian testing suite was already frozen for the Debian 8 release.
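
For the curious, gem2deb's basic job looks like this (the gem name is just an example):

    # fetch a gem and convert it into a Debian source package
    gem fetch some-gem
    gem2deb some-gem-1.0.0.gem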

Left to right: Christian Hofstaedtler, Tomasz Nitecki, Sebastien Badia and Antonio Terceiro.

A full report with technical details has been posted to the relevant Debian mailing lists.

Categories: LUG Community Blogs