Much has already been written about the recent bash security problem, allocated the CVE identifier CVE-2014-6271, so I'm not even going to touch it.
It did remind me to double-check my systems to make sure I didn't have any packages installed that I didn't need, though, because obviously having fewer packages installed and fewer services running reduces the potential attack surface.
I had noticed in the past that I had python installed, and just thought "Oh, yeah, I must have python utilities running". It turns out though that on 16 out of the 19 servers I control I had python installed solely for the lsb_release script!
So I hacked up a horrible replacement for `lsb_release` in pure shell, and then became cruel:
```
~ # dpkg --purge python python-minimal python2.7 python2.7-minimal lsb-release
```
That replacement is horrible because it defers detection of all the names/numbers to /etc/os-release, which wasn't present in earlier releases of Debian. Happily all my Debian GNU/Linux hosts run Wheezy or later, so it all works out.
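For the curious, the approach looks something like this. This is a minimal sketch rather than my actual script, and it assumes an os-release file with the usual NAME, VERSION_ID, and PRETTY_NAME fields:

```sh
#!/bin/sh
# Sketch of a pure-shell lsb_release: source /etc/os-release and
# report its fields in a familiar format.
. /etc/os-release

case "$1" in
    -i) printf 'Distributor ID:\t%s\n' "$NAME"        ;;
    -r) printf 'Release:\t%s\n'        "$VERSION_ID"  ;;
    -d) printf 'Description:\t%s\n'    "$PRETTY_NAME" ;;
    *)  printf 'Usage: %s -i|-r|-d\n'  "$0" >&2; exit 1 ;;
esac
```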
So that left three hosts that had a legitimate use for Python.
So now only 1 of the 19 hosts has python installed. I'm not averse to the language, but given that I don't personally develop in it very often (read: once or twice in the past year), and that by accident I had no python scripts installed, I see no reason to keep it on the off-chance.
My biggest surprise of the day was that even now that we can use dash as our default shell, we still can't purge bash, since it is marked as Essential. Perhaps in the future.
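The flag responsible is easy to see:

```
$ dpkg -s bash | grep -i '^essential'
Essential: yes
```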
I (grudgingly) use the Calibre e-book management software to handle my collection of books, and copy them over to my kindle-toy.
One thing that has always bothered me was the fact that when books are imported their ratings are imported too. If I receive a small sample of ebooks from a friend, their ratings are added to my collection.
I've always regarded ratings as things personal to me, rather than attributes of a book itself; as my tastes might not match yours, and vice-versa.
On that basis, the last time I was importing a small number of books, and getting annoyed at having to manually reset all the imported ratings, I decided to do something about it: I started hacking and put together a simple Calibre plugin to automatically zero the rating of each book as it is imported into the collection.
Sadly this work wasn't painless, despite the small size, as an unfortunate bug in Calibre meant my plugin method wasn't called. Happily Kovid Goyal helped me work through the problem, and he committed a fix that will be in the next Calibre release. For the moment I'm using today's git-snapshot and it works well.
Similarly I've recently started using extended file attributes to store metadata on my desktop system. Unfortunately the GNU findutils package doesn't allow you to do the obvious thing:

```
$ find ~/foo -xattr user.comment
/home/skx/foo/bar/t.txt
/home/skx/foo/bar/xc.txt
/home/skx/foo/bar/x.txt
```
There are several xattr patches floating around, but I had to bundle my own in debian/patches to get support for finding files that have particular attribute names.
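If you can't rebuild findutils you can get most of the way there with getfattr, from the attr package. Something like this, assuming GNU tools:

```sh
# List files beneath ~/foo which carry a user.comment attribute.
# getfattr prints a "# file: PATH" header for each match, and
# complains on stderr about files which lack the attribute.
getfattr -R --absolute-names -n user.comment ~/foo 2>/dev/null |
    sed -n 's/^# file: //p'
```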
Maybe one day extended attributes will be taken seriously. (rsync and cp will preserve them, given the right flags; I'm hazy on the compatibility with tar, but most things seem to be working.)
Assuming this post shows up, I'll have successfully migrated from Chronicle to a temporary replacement.
Chronicle is awesome, and despite a lack of activity recently it is not dead. (No activity because it continued to do everything I needed for my blog.)
Unfortunately there is a problem with chronicle: it suffers from a performance problem which has gradually become more and more vexing as the number of entries I have has grown.
When chronicle runs it parses every entry and comment into one large in-memory structure, and then generates all the output pages from that. In the general case you rebuild a blog because you've made a new entry, or received a new comment. There is some code which tries to use memcached for caching, but in general chronicle just isn't fast, and it is certainly memory-bound if you have a couple of thousand entries.
Currently my test data-set contains 2000 entries and to rebuild that from a clean start takes around 4 minutes, which is pretty horrific.
So what is the alternative? What if you could parse each post once, add it to an SQLite database, and then use that database to write your output pages? Instead of the complex data-structure in RAM, and the need to parse a zillion files, you'd have a standard, simple SQL structure you could use to build a tag-cloud, an archive, etc.

If you store the contents of each parsed entry, along with the mtime of its source file, you can update it if the entry is changed in the future, as I sometimes make typos which I only spot once I've run `make steve` on my blog sources.
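The sort of thing I mean, sketched with the sqlite3 command-line tool. The schema here is illustrative rather than the actual layout, and it assumes entries live as text files under data/:

```sh
# Cache each parsed entry, keyed on its source file, together with
# the mtime we saw when we parsed it.
sqlite3 blog.db <<'EOF'
CREATE TABLE IF NOT EXISTS posts (
    file  TEXT PRIMARY KEY,   -- path to the source entry
    mtime INTEGER,            -- source mtime at parse-time
    title TEXT,
    date  TEXT,
    tags  TEXT,
    body  TEXT                -- the parsed entry itself
);
EOF

# Re-parse an entry only when the source file is newer than the
# cached copy; everything else comes straight from the database.
for f in data/*.txt; do
    cached=$(sqlite3 blog.db "SELECT mtime FROM posts WHERE file='$f'")
    [ "$(stat -c %Y "$f")" = "$cached" ] && continue
    # ...parse $f here, and INSERT OR REPLACE the row...
done
```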
Not surprisingly the newer code is significantly faster if you have 2000+ posts. If the posts are already imported into SQLite, the most recent entries are updated in 3 seconds. If you're starting cold, parsing each entry, inserting it into SQLite, and then generating the blog from scratch, the build time is still less than 10 seconds.
The downside is that I've removed features, though obviously nothing that I use myself. Most notably the calendar view is gone, as is the ability to use date-based URLs. Less seriously there is only a single theme, which is the one used on this site.
In conclusion, last night I wrote something which is a stepping stone between the current chronicle and the chronicle2 which will appear in due course.
PS. This entry was written in markdown, just because I wanted to be sure it worked.