LUG Community Blogs

Jonathan McDowell: Automatic inline signing for mutt with RT

Planet ALUG - Thu, 18/09/2014 - 10:39

I spend a surprising amount of my time as part of keyring-maint telling people their requests are badly formed and asking them to fix them up so I can actually process them. The one that's hardest to fault anyone on is that we require requests to be inline PGP signed (i.e. the same sort of output as you get with "gpg --clearsign"). That's because RT does various pieces of unpacking[0] of MIME messages which mean that PGP/MIME signatures that have passed through it are no longer verifiable. Daniel has pointed out that inline PGP is a bad idea and got as far as filing a request that RT handle PGP/MIME correctly (you need a login for that, but there's a generic read-only one that's easy to figure out), but until that happens the requirement stands when dealing with Debian's RT instance. So today I finally added the following lines to my .muttrc rather than having to remember to switch Mutt to inline signing for this one special case:

send-hook . "unset pgp_autoinline; unset pgp_autosign" send-hook rt.debian.org "set pgp_autosign; set pgp_autoinline"

i.e. by default turn off automatic inline PGP signatures, but when emailing anything at rt.debian.org turn them on.

(Most of the other things I tell people to fix are covered by the replacing keys page; I advise anyone requesting a key replacement to read that page. There's even a helpful example request template at the bottom.)

[0] RT sticks a header on the plain-text portion of the mail, rather than adding a new plain-text part for the header when there are multiple parts (something Mailman handles better). It will also re-encode received mail into UTF-8, which I can understand, but Mutt will by default try to find an 8-bit encoding that can handle the mail, because that's more efficient, and that tends to mean it picks latin1.

Update: Apparently Mutt in Jessie and beyond doesn't have the pgp_autosign option; you want crypt_autosign instead (and maybe crypt_autopgp but that defaults to yes so unless you've configured your setup to do S/MIME by default you should be fine). Thanks to Luca Capello for pointing this out.
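For reference, on those newer Mutts the hooks above would become something like the following (a sketch based on the renaming just described; pgp_autoinline itself is unchanged):

send-hook . "unset pgp_autoinline; unset crypt_autosign"
send-hook rt.debian.org "set crypt_autosign; set pgp_autoinline"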

Categories: LUG Community Blogs

Steve Kemp: If this goes well I have a new blog engine

Planet HantsLUG - Wed, 17/09/2014 - 17:23

Assuming this post shows up then I'll have successfully migrated from Chronicle to a temporary replacement.

Chronicle is awesome, and despite a lack of activity recently it is not dead. (No activity because it continued to do everything I needed for my blog.)

Unfortunately though chronicle suffers from a performance problem which has gradually become more and more vexing as the number of entries I have has grown.

When chronicle runs it:

  • reads each post into a complex data-structure;
  • walks this structure multiple times;
  • finally outputs a whole bunch of posts.

In the general case you rebuild a blog because you've made a new entry, or received a new comment. There is some code which tries to use memcached for caching, but in general chronicle just isn't fast, and it is certainly memory-bound if you have a couple of thousand entries.

Currently my test data-set contains 2000 entries and to rebuild that from a clean start takes around 4 minutes, which is pretty horrific.

So what is the alternative? What if you could parse each post once, add it to an SQLite database, and then use that for writing your output pages? Instead of the complex in-RAM data-structure and the need to parse a zillion files, you'd have a standard/simple SQL structure you could use to build a tag-cloud, an archive, etc. If you store the contents of the parsed blog, along with the mtime of the source file, you can update an entry if it is changed in the future, as I sometimes make typos which I only spot once I've run "make steve" on my blog sources.
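To make that concrete, here's a minimal sketch of the approach in Python (illustrative only: the table layout, file names and trivial parser are mine, not chronicle2's actual code):

#!/usr/bin/env python3
# Illustrative sketch: parse each post once, cache it in SQLite,
# and re-parse only when the source file's mtime has changed.
import os
import sqlite3

db = sqlite3.connect("blog.db")
db.execute("""CREATE TABLE IF NOT EXISTS posts (
                  path  TEXT PRIMARY KEY,
                  mtime INTEGER,
                  title TEXT,
                  body  TEXT)""")

def parse_post(path):
    # Stand-in for the real parser: first line is the title,
    # the rest is the body.
    with open(path) as fh:
        return fh.readline().strip(), fh.read()

for name in sorted(os.listdir("posts")):
    path = os.path.join("posts", name)
    mtime = int(os.path.getmtime(path))
    row = db.execute("SELECT mtime FROM posts WHERE path = ?",
                     (path,)).fetchone()
    if row and row[0] == mtime:
        continue                        # cached copy is still current
    title, body = parse_post(path)
    db.execute("REPLACE INTO posts (path, mtime, title, body) "
               "VALUES (?, ?, ?, ?)", (path, mtime, title, body))

db.commit()
# The output pages (index, archive, tag-cloud) can now be generated
# from simple queries instead of re-parsing every source file.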

Not surprisingly the newer code is significantly faster if you have 2000+ posts. If you've imported the posts into SQLite the most recent entries are updated in 3 seconds. If you're starting cold, parsing each entry, inserting it into SQLite, and then generating the blog from scratch the build time is still less than 10 seconds.

The downside is that I've removed some features, though obviously nothing that I use myself. Most notably the calendar view is gone, as is the ability to use date-based URLs. Less seriously there is only a single theme, which is what is used on this site.

In conclusion, last night I wrote something which is a stepping stone between the current chronicle and the chronicle2 which will appear in due course.

PS. This entry was written in markdown, just because I wanted to be sure it worked.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Apple and Damson Jam

Planet HantsLUG - Wed, 17/09/2014 - 08:52

I made damson jam; it's something I've not made for years. It's the first year we've had fruit from our damson tree. Sadly we were on holiday when the fruit was at its prime, so we lost some to birds and age. In the end we had about 1.5 kg of fruit that was okay, to which we added a further 1.5 kg of cooking apples we collected from a box in the village.

I gave everything a wash and picked out the fruit that was past redemption, then put the cleaned damsons and roughly chopped apples in the jam pan and added 400 ml of water. I then heated the mixture and gave it a good mashing until it was all broken up and soft. Previously I've tried making jelly but the yield is dreadful, so instead I forced the mush through a colander to hold back the stones, pips and coarse material.

Next I cleaned everything up and started on the jamming process. I added 1.5 kg of sugar, 300 ml of water and the juice of a lemon to the jam pan and heated to boiling. I then poured in my 1.5 kg of apple and damson puree, and brought it quickly to the boil. I let it have a full rolling boil for about three minutes (damsons and apples have a lot of pectin, and if you boil for too long you just get rubber). I then potted it up and left it to cool.

From this batch I got 9.75 jars of jam, which isn't too bad at all. It's not as clear as jelly, but it tastes plenty good and it's a lot easier to make than proper whole-fruit jam. When I put it in the jars it was very runny and didn't look like jam at all; this morning when I inspected it, it had set well. I'll find out tonight if it's overcooked!

Categories: LUG Community Blogs

Steve Kemp: Applications updating & phoning home

Planet HantsLUG - Tue, 16/09/2014 - 19:42

Personally I believe that any application packaged for Debian should not phone home, attempt to download plugins over HTTP at run-time, or update itself.

On that basis I've filed #761828.

As a project we have guidelines for what constitutes a "serious" bug, which generally boil down to a package containing a security issue, causing data-loss, or being unusable.

I'd like to propose that these kinds of tracking "things" are equally bad. If consensus could be reached that would be a good thing for the freedom of our users.

(Oops, I slipped into "us" and "our users"; I'm just an outsider looking in. Mostly.)

Categories: LUG Community Blogs

Meeting at "The Moon Under Water"

Wolverhampton LUG News - Mon, 15/09/2014 - 14:16
Event Date: Wednesday, 17 September, 2014 - 19:30 to 23:00

53-55 Lichfield St, Wolverhampton, West Midlands, WV1 1EQ

Eat, Drink and talk Linux
Categories: LUG Community Blogs

Steve Kemp: Storing and distributing secrets.

Planet HantsLUG - Fri, 12/09/2014 - 20:10

I run a number of hosts, and they are controlled via a server automation tool I wrote called slaughter [Documentation].

The policies I use to control my hosts are public, and I don't want to make them private because they serve as good examples.

Because the roles are public I don't want to embed passwords in them, which means I need something to hold secrets securely. In my case secrets are things like plaintext-passwords. I want those secrets to be secure and unavailable from untrusted hosts.

The simplest solution I could think of was an IP-address based ACL and a simple webserver. A client requests something like:

  • http://secret.example.com/user-passwords

That returns a JSON object if the requesting host is permitted to read the data. Otherwise it returns an HTTP 403 error.

The layout is very simple:

|-- secrets
|   |-- 206.190.139.148
|   |   `-- auth.json
|   |-- 127.0.0.1
|   |   `-- example.json
|   `-- 80.68.84.109
|       `-- chat.json

Each piece of data is beneath a directory/symlink which controls the read-only access. If the request comes in from the matching IP it is granted; if not, it is denied.

For example a failing case:

skx@desktop ~ $ curl http://sss.steve.org.uk/chat
missing/permission denied

A working case:

root@chat ~ # curl http://sss.steve.org.uk/chat
{ "steve": "haha", "bot": "notreally" }

(The JSON suffix is added automatically.)
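Something like the following sketch captures the mechanism (my illustration in Python, not the actual sss code; the 403 body mirrors the failing case above):

#!/usr/bin/env python3
# Illustrative sketch of the IP-based ACL: secrets live under
# secrets/<client-ip>/<name>.json and a request succeeds only if a
# file exists beneath the requesting host's own IP directory.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

ROOT = "secrets"

class SecretHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # basename() also stops any "../" traversal attempts.
        name = os.path.basename(self.path)
        path = os.path.join(ROOT, self.client_address[0], name + ".json")
        if not os.path.isfile(path):
            self.send_response(403)
            self.end_headers()
            self.wfile.write(b"missing/permission denied\n")
            return
        with open(path, "rb") as fh:
            data = fh.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

HTTPServer(("", 8080), SecretHandler).serve_forever()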

It is hardly rocket-science, but I couldn't find anything else packaged neatly for this - only things like auth/secstore and factotum. So I'll share if it is useful.

Simple Secret Sharing, or Steve's secret storage.

Categories: LUG Community Blogs

Jonathan McDowell: Back from DebConf 14

Planet ALUG - Fri, 12/09/2014 - 14:03

I previously forgot to mention that I was planning to attend DebConf14, having missed DebConf13. This year the conference was held in Portland, OR. This is a city I've been to many times before, and enjoy, but I hadn't spent any time wandering around its city centre as a pedestrian. I have to say I really prefer DebConfs that are held in the middle of a city. It always seems a bit of a shame to travel some distance to somewhere new and spend all the time there in a conference venue. Plus these days I have the added lure of going out and playing Ingress in a new location. DebConf14 didn't disappoint in these respects; the location was super easy to get to from the airport via public transportation, all of the evening social events were within reasonable walking distance (I'll tend to default to walking when possible) and the talk venue/accommodation were close to each other and various eating and drinking options. Throw in the fact that Portland managed to produce some excellent weather (modulo my Ingress session on the last Saturday morning, when it rained on me) and it's impossible to fault the physicalities of DebConf this year.

This year the conference format was a bit different; previous years have had a week-long DebCamp before the week of the conference itself. This year went for a 9 day talk schedule (Saturday -> Sunday) with various gaps of hacking time interspersed. I've found it hard to justify a full two weeks away in the past, so this setup worked a lot better from my viewpoint. Also I rarely go to DebConf with a predetermined list of things to do; the stuff I work on naturally falls out of talks I attend and informal discussions I have. Having hack time throughout the conference helped me avoid feeling I was having to trade off hacking vs talks.

Naturally enough a lot of my involvement at DebConf was around OpenPGP. Gunnar and I spent a fair bit of time getting Daniel up to speed with the keyring-maint team (Gunnar more than I, I'll confess). We finally set a hard timeframe for freeing Debian of older 1024 bit keys. I was introduced to the Gnuk, which is a particularly interesting piece of open specification hardware with a completely Free software stack on top of it that implements the OpenPGP smartcard spec. Currently it's limited to 2K keys but it's hoped that 4K support can be added (and I ended up spending a couple of hours after the closing talk hacking on the source and seeing how much needs to change for 4K support, aided by the very patient Niibe). These are the sort of things that really benefit from the face time that DebConf offers to the Debian project. I've said it before, but I think it's worth saying again: Debian is a bit like a huge telecommuting organization, and it's my opinion that any such organization should try and ensure its members actually spend some time together on a regular basis. It improves the ability to work remotely a hell of a lot if you can actually put a face to the entity you're emailing / IRCing and have some sort of idea where they're coming from because you've spent some time with them, whether that's in talks or over dinner or just casual hallway chats.

For once I also found myself considering alternative employment while at DebConf and it was incredibly useful to be able to have various conversations with both old friends and people who were there with an eye on recruitment. Thanks to all those whose ears I bent about the subject (and more on the outcome in a future post). Thank you also to the many people involved with the organization of DebConf; I've been on the periphery a few times over the years and it's given me a glimpse into the amount of hard work all of the volunteers (be they global team, local organizing team, video team or just random volunteers) put into making DebConf one of my must-attend yearly conferences. If you're at all involved in Debian and haven't attended I strongly urge you to do so - I'll see you all next year at DebConf15 in Heidelberg!

Categories: LUG Community Blogs

Steve Kemp: A small email utility and other updates.

Planet HantsLUG - Thu, 11/09/2014 - 10:28

Last night I was looking for an image I knew a model had mailed me a few months ago, as we were talking about rescheduling a shoot at the weekend. I couldn't find it, even with my awesome mail client and filing system.

With some free time I figured I could write a little utility to dump all attachments from email folders, and find it that way.

It did cross my mind that there is a simple mail utility for dumping headers, etc., called formail, which is distributed alongside procmail, but it doesn't handle attachments.

I was tempted to write a general purpose script to dump attachments, email header values, etc, etc but given the lack of time I merely solved my own problem.

I suspect there is room for a "mail utilities" package, similar to Joey's "moreutils" and my "sysadmin utils". However I note that there is a GNU Mailutils which does things differently than I'd expect - i.e. it contains a POP3 server.

Still, if you want to dump attachments from emails, have GMIME installed, and want to filter by attachment-name or MIME-type, you might look at my trivial attachment-dump program.
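If you don't have GMIME handy, the same idea can be sketched with nothing but Python's standard library (my illustration, not the attachment-dump program itself):

#!/usr/bin/env python3
# Illustrative sketch: walk a Maildir, save every attachment, and
# optionally filter on filename or MIME-type.
import mailbox
import os
import sys

maildir = mailbox.Maildir(sys.argv[1])
os.makedirs("attachments", exist_ok=True)

for key, msg in maildir.iteritems():
    for part in msg.walk():
        name = part.get_filename()
        if not name:
            continue                    # no filename: not an attachment
        # To filter by MIME-type, test part.get_content_type() here.
        payload = part.get_payload(decode=True)
        if payload is None:
            continue
        # Prefix with the message key to avoid name collisions.
        out = os.path.join("attachments", "%s-%s" % (key, name))
        with open(out, "wb") as fh:
            fh.write(payload)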

Related to that I spent some time last night updating my photography site, so the animals & pets section has updated images at least.

During the course of that I found a bug in my static-site generator, templer, which stopped it from automatically populating image height/widths when called in a glob:

Title: Pets & Animals
Images: file_glob( "*.jpg" )
---
This is the page body, it now has access to a variable called 'images' which is a HTML::Template loop-structure containing name/height/width/etc for each image in the current directory.

That should now be resolved, and life should once again be good.

Categories: LUG Community Blogs