I was reading "Phones need 'bed mode' to protect sleep" on the BBC, which argues that computers should have a blue light filter to help people sleep properly at night. As I frequently work on my laptop until late in the evening, I felt that I should investigate further. Currently I do turn down the brightness, but it seems that is not enough.
Getting this working on my laptop was trivial:

```shell
sudo add-apt-repository ppa:kilian/f.lux
sudo apt-get update
sudo apt-get install fluxgui
```
Finally I ran the program and set it to launch on start-up. I entered latitude as 51.2, which is close enough I believe.

Operation
The software feels fairly basic, whether this is the same for Windows and Mac I don't know. That said the applet seems to run perfectly in the Ubuntu notification area.
It is very noticeable that the display is more muted and much more comfortable to view in the evening. I don't yet know whether this will work well during the day, nor whether it will improve my sleeping. But the logic behind it seems sound and there is no reason why it shouldn't help.
Following my recent announcement, I thought I would give some of my reasons for the move and some early impressions of using Jekyll.

What is Jekyll?
I am assuming that most of my audience have at least a passing knowledge of Markdown: basically it is a very clean, virtually syntax-free way of writing text files so that they can be easily converted into different formats. The key to Markdown is the conversion utility, and I currently use Pandoc. I write the file once, and then I can convert it into whatever format I want:
I would imagine most people start using Markdown so that they can continue to use their favourite text editor - Vim or Emacs. At work I have found myself using it in preference to a word processor. I have written a simple md2pdf Perl script, so that in Vim I can simply type :md2pdf % to have my document saved as a PDF. The PDFs that Pandoc produces are beautiful: headings and sub-headings are automatically converted into PDF bookmarks, giving your documents an effortless professionalism.
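The wrapper need be nothing more than a call to Pandoc. Here is a minimal sketch in shell (illustrative only; my actual script is written in Perl, and the filename rule here is an assumption):

```shell
#!/bin/sh
# md2pdf (sketch): convert a Markdown file to PDF via Pandoc.
# Illustrative only -- the real script is a Perl wrapper.
in="$1"
out="${in%.*}.pdf"      # e.g. report.md -> report.pdf
pandoc -o "$out" "$in"
```

From within Vim, `:!md2pdf %` runs it on the file being edited.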
For complicated documents I sometimes start in Markdown and then move to LaTeX, but increasingly I am finding myself able to do virtually everything in Markdown, including simple images and hyperlinks. But you also have the option of extending plain markdown with HTML tags.
So in simplest terms Jekyll is just an automated way of creating Markdown files and converting them to HTML.

But why change from Wordpress?
Wordpress has been great for me: it's relatively simple, has great statistical tools, a built-in commenting system and much more besides. So why leave all that behind for something which is fundamentally so basic?

Benefits of Jekyll
I cannot tell you how wonderful it is to run `grep -l searchtext *.markdown | vim -` and be able to edit each matching file in Vim.

Bootpolish Blog
There was another reason too, which was that I still had an old blog at bootpolish.net, which I wanted to close down. I could have merged it into my Wordpress blog, but I thought it would be easier to transfer it to Jekyll. To be honest I can't say that it was particularly easy, but thankfully it is now done.

The Migration Process
I followed the GitHub Instructions for Jekyll. I used rsync to copy the underlying text files from Bootpolish.net into the _drafts folder, before using bash for loops to auto-process each into a Jekyll markdown post. I used the Wordpress Importer to transfer my Wordpress blog. The importer did not work particularly well, so I ended up editing each file in turn.
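The auto-processing loop amounted to stamping Jekyll front matter onto each file. It was something along these lines (the paths, date and title rule are placeholders, not the exact commands I ran):

```shell
#!/bin/sh
# Sketch: wrap each plain-text draft in minimal Jekyll front matter.
# Paths, the fixed date and the slug-as-title rule are illustrative.
mkdir -p _posts
for f in _drafts/*.txt; do
  slug=$(basename "$f" .txt)
  {
    printf -- '---\nlayout: post\ntitle: "%s"\n---\n\n' "$slug"
    cat "$f"
  } > "_posts/2015-01-01-$slug.markdown"
done
```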
I found there was some customisation required:
I have written a script to build all the tags and categories, which is working well. I would like to integrate this into the git commit command somehow, so that I don't forget to run it!
Any new categories require additional RSS feed files to be created, by simply copying the feed template into the relevant category/feed folder.

Conclusions
This has been much more work than I would have liked. That said, I now have my Markdown files in a format under my control. I can move those files to any Jekyll provider, or indeed to any web provider and have them hosted.
In short, I am hoping that this is the last time I will move platforms!
Lastly, if you're unfamiliar with Markdown, you can view the Markdown version of this page.
I'm starting to get close to my target weight - only 3 kg to go according to BMI or a few centimetres according to height to waist ratio. Sadly for the past month I've not been able to get as much exercise as normal and I've hit another weight plateau.
Another strange phenomenon I'm experiencing is being cold. Normally even in mid winter I'm fine at work or home with short sleeved shirts and only on the coldest days do I need to wear a T-shirt under a regular shirt. In fact I've not worn or owned a proper vest for decades.
Last weekend I felt cold at home and put a jumper on, something I wouldn't normally do in autumn. On Monday at work I felt cold, but it was such a strange sensation that I didn't even recognise it. On Tuesday I went into M&S and bought a vest! Human fat doesn't provide the same level of insulation that blubber does for seals and whales, but it does provide some; more importantly, on a diet the body down-regulates thermogenesis to save energy. As I've been starving myself for 9 months to lose weight, it's hardly surprising that my body has decided that keeping warm isn't that important now that I've shed over 21 kg.
Today isn't that cold - but it is pretty miserable. I've now got a vest, shirt, thin fleece and thick fleece gilet on. I even turned the central heating thermostat up by 2°C...!
I've moved this here blog thing from Joomla to Jekyll.
I've not (yet) moved the site content over to this new site, as currently I'm unsure if any of the content I had written was really worth preserving, or if it was best to start anew.
The reasons for moving to Jekyll are simple and mostly obvious:

No requirement for a server-side scripting language or database
This should be pretty clear: working with PHP and the popular PHP CMSes on a daily basis teaches you to be wary of them. This deployment mechanism means that the site itself cannot be hacked directly or used as an attack vector.
Of course, nothing is truly "hack-proof", so precautions still need to be taken, but it removes the vulnerabilities that a CMS like Wordpress would introduce.

Native Markdown Editing
Most CMSes are not designed for people like me, who use Markdown as their de facto standard for formatting prose. Many use an HTML WYSIWYG editor, which is great for most users, but ends up making editing less efficient and the output less elegant. It also means that the only format the content can be delivered in is HTML.

No dedicated editing application
Using Jekyll and a git-based deployment process means that deploying changes to the site is simple and easy, and I can do it anywhere thanks to GitHub's online editor. I only need to be logged into my GitHub account in order to make changes or write a new post.
Currently, I'm using a git hook to rebuild the site and publish the changes; this is triggered by a git push to my server.
This script clones the repo from github to a temporary directory, builds it to the public directory, then deletes the temporary copy.
My next post will probably be about this deployment mechanism.

Warp Speed
Finally, this item actually returns to my first point: the lack of server-side programming means that the site can be delivered at break-neck speeds. Even without any kind of CDN or HTTP accelerator like Varnish, a well-tuned nginx configuration and a lack of server-side scripting mean that the all-important TTFB (time to first byte) is much lower.
I hope that these items above and the transition to Jekyll will give me cause to write better, more frequent posts here.
I normally don’t like using the web interface of Supermicro IPMI because it’s extremely clunky, unintuitive and uses Java in some places.
The other day however I needed to look at the console of a machine which had been left running Memtest86+. You can make Memtest86+ output to serial which is generally preferred for looking at it remotely, but this wasn’t run in that mode so was outputting nothing to the IPMI serial-over-LAN. I would have to use the Java remote console viewer.
As an added wrinkle, the IPMI network interfaces are on a network that I can’t access except through an SSH jump host.
So, I just gave it a go without doing anything special other than launching an SSH tunnel:

```shell
$ ssh me@jumphost -L127.0.0.1:1443:192.168.1.21:443 -N
```
This tunnels my localhost port 1443 to port 443 of 192.168.1.21 as reachable from the jump host. I used local port 1443 because binding low ports requires root privileges.
This allowed me to log in to the web interface of the IPMI at https://localhost:1443/, though it kept putting up a dialog which said I needed a newer JDK. Going to “Remote Control / Console Redirection” attempted to download a JNLP file and then said it failed to download.
This was with openjdk-7-jre and icedtea-7-plugin installed.
I decided maybe it would work better if I installed the Oracle Java 8 stuff (ugh). That was made easy by following these simple instructions. That’s an Ubuntu PPA which does everything for you, after you ~~agree that you are a bad person who should feel bad~~ accept the license.
This time things got a little further, but still failed saying it couldn’t download a JAR file. I noticed that it was trying to download the JAR from https://127.0.0.1:443/ even though my tunnel was on port 1443.
I eventually did get the remote console viewer to work but I’m not 100% convinced it was because I switched to Oracle Java.
So, basic networking issue here. Maybe it really needs port 443?
Okay, ran SSH as root so it could bind port 443. Got a bit further but now says “connection failed” with no diagnostics as to exactly what connection had failed. Still, gut instinct was that this was the remote console app having started but not having access to some port it needed.
Okay, ran SSH as a SOCKS proxy instead, set the SOCKS proxy in my browser. Same problem.
Did a search to see what ports the Supermicro remote console needs. Tried a new SSH command:

```shell
$ sudo ssh me@jumphost \
    -L127.0.0.1:443:192.168.1.21:443 \
    -L127.0.0.1:5900:192.168.1.21:5900 \
    -L127.0.0.1:5901:192.168.1.21:5901 \
    -L127.0.0.1:5120:192.168.1.21:5120 \
    -L127.0.0.1:5123:192.168.1.21:5123 -N
```
Apart from a few popup dialogs complaining about “MalformedURLException: unknown protocol: socket” (wtf?), this now appears to work.
For the past few weeks I have been migrating chrisjrob.com from Wordpress to Jekyll. I have also been merging in my previous blog at bootpolish.net.
This process has proved to be much more work than I expected, but this evening it all came together and I finally pressed the button to transfer the DNS over to the Jekyll site, hosted by GitHub.
I have tried to maintain the URL structure, along with tags, categories and RSS feeds, but it can't be perfect and there will be breakage.
If you notice any problems please do comment below.
The following contributors got their Debian Developer accounts in the last two months:
The following contributors were added as Debian Maintainers in the last two months:
Yesterday, Kenneth Freeman posted a note to the tor-relays list drawing attention to a new resource called TorFlow. TorFlow is a beautiful visualisation of Tor network traffic around the world. It enables you to see where traffic is concentrated (Europe) and where there is almost none (Australasia). Having the data overlaid on a world map gives a startling picture of the unfortunate concentration of Tor nodes in particular locations.
I recently moved my own relay from Amsterdam (190 relays) to London (133), but the network needs much more geo-diversity. Unfortunately, international bandwidth costs are lowest in the areas where relays are currently concentrated. Given that the relays are all (well, nearly all…) run by volunteers like me and funded out of their own pockets, it is perhaps not surprising that this concentration should occur. But it is not healthy for the network.
There appears to be a particularly intriguing concentration of 16 relays on a tiny island in the Gulf of Guinea. Apparently this is an artifact though because those relays are all at (0, 0) which I am told GeoIP uses as a placeholder for “unknown” (in fact, GeoIP location is a somewhat imprecise art so there may be other anomalies in the data.)
It's been quite some time since I last got round to writing anything here; almost two months. Life has been fairly eventful in that short time. At least, work has.
During every performance review I've had since I joined Proxama, there's one goal I've consistently brought up: that I wanted to have more of an influence over the way we write and deliver software and the tools we use. That's the sort of thing I'm really interested in.
Having made it to head of the server delivery team, I had a good taste of the sort of oversight that I was looking for but a few weeks ago, I got the opportunity to take on a role that encompasses both server and mobile, development and QA so of course I jumped at the chance... and got it!
Naïvely, when I took on the role, I thought I'd be doing more of the same as I was before (a bit of line management, code reviews, shaping upcoming work, architecture, occasionally writing code), just with a larger team. This is turning out not to be the case but in quite a positive way - so far, at least. I feel as though I now have the opportunity to sit a little further back, get some thinking time, and right a few wrongs that have built up over the years. Whether I'm achieving that remains to be seen ;)
Another thought that occurred to me the other day is that way back when I was at school, I never really imagined I'd end up in a technical role. I always imagined I'd either be a maths teacher or that I'd be a writer or editor for a newspaper or magazine. I'm finding out that my new job at Proxama requires me to write quite a lot of papers on various technical subjects. Double win.
In short, I'm enjoying some of my days more, trying very hard (and sometimes failing) not to worry about the details, to focus on the bigger picture, and to trust that the other things will fall into place (and sort them out where they don't). Is this what it's like going "post technical"? I'm slightly worried I'll forget how to code if I don't do a bit more of it.
Today, I spent a very, very long time fighting Jira. That wasn't fun.
Note to self: book some time in to write some code.
Last week Simon retweeted a link to Don’t Feed the Beast – the Great Tech Recruiter Infestation. Which reminded me I’d been meaning to comment on my own experiences from earlier in the year.
I don’t entertain the same level of bile as displayed in the post, but I do have a significant level of disappointment in the recruitment industry. I had conversations with 3 different agencies, all of whom were geographically relevant. One contacted me, the other 2 (one I’d dealt with before, one that was recommended to me) I contacted myself. All managed to fail to communicate with any level of acceptability.
The agency that contacted me eventually went quiet, after having asked if they could put my CV forward for a role and pushing very hard about when I could interview. The contact in the agency I’d dealt with before replied to say I was being passed to someone else who would get in contact. Who of course didn’t. And the final agency, who had been recommended, passed me between 3 different people, said they were confident they could find me something, and then went dark except for signing me up to their generic jobs list, which failed to have anything of relevance on it.
As it happens my availability and skill set were not conducive to results at that point in time, so my beef isn’t with the inability to find a role. Instead it’s with the poor levels of communication presented by an industry which seems, to me, to have communication as part of the core value it should be offering. If anyone had said at the start “Look, it’s going to be tricky, we’ll see what we can do” or “Look, that’s not what we really deal in, we can’t help”, that would have been fine. I’m fine with explanations. I get really miffed when I’m just left hanging.
I’d love to be able to say I’ll never deal with a recruiter again, but the fact of the matter is they do serve a purpose. There’s only so far a company can get with word of mouth recruitment; eventually that network of personal connections from existing employees who are considering moving dries up. Advertising might get you some more people, but it can also result in people who are hugely inappropriate for the role. From the company point of view recruiters nominally fulfil 2 roles. Firstly they connect the prospective employer with a potentially wider base of candidates. Secondly they should be able to do some sort of, at least basic, filtering of whether a candidate is appropriate for a role. From the candidate point of view the recruiter hopefully has a better knowledge of what roles are out there.
However the incentives to please each side are hugely unbalanced. The candidate isn’t paying the recruiter. “If you’re not paying for it, you’re the product” may be bandied around too often, but I believe this is one of the instances where it’s very applicable. A recruiter is paid by their ability to deliver viable candidates to prospective employers. The delivery of these candidates is the service. Whether or not the candidate is happy with the job is irrelevant beyond them staying long enough that the placement fee can be claimed. The lengthy commercial relationship is ideally between the company and the recruitment agency, not the candidate and the agency. A recruiter wants to be able to say “Look at the fine candidate I provided last time, you should always come to me first in future”. There’s a certain element of wanting the candidate to come back if/when they are looking for a new role, but it’s not a primary concern.
It is notable that the recommendations I’d received were from people who had been on the hiring side of things. The recruiter has a vested interest in keeping the employer happy, in the hope of a sustained relationship. There is little motivation for keeping the candidate happy, as long as you don’t manage to scare them off. And, in fact, if you scare some off, who cares? A recruiter doesn’t get paid for providing the best possible candidate. Or indeed a candidate who will fully engage with the role. All they’re required to provide is a hire-able candidate who takes the role.
I’m not sure what the resolution is to this. Word of mouth only scales so far for both employer and candidate. Many of the big job websites seem to be full of recruiters rather than real employers. And I’m sure there are some decent recruiters out there doing a good job, keeping both sides happy and earning their significant cut. I’m sad to say I can’t foresee any big change any time soon.
[Note I’m not currently looking for employment.]
[No recruitment agencies were harmed in the writing of this post. I have deliberately tried to avoid outing anyone in particular.]
I feel like I’ve been seeing a lot more threads on the linux-raid mailing list recently where people’s arrays have broken, they need help putting them back together (because they aren’t familiar with what to do in that situation), and it turns out that there’s nothing much wrong with the devices in question other than device timeouts.
When I say “a lot”, I mean, “more than I used to.”
I think the reason for the increase in failures may be that HDD vendors have been busy segregating their products into “desktop” and “RAID” editions in a somewhat arbitrary fashion, by removing features from the “desktop” editions in the drive firmware. One of the features that today’s consumer desktop drives tend to entirely lack is configurable error timeouts, also known as SCTERC, also known as TLER.

TL;DR
If you use redundant storage but may be using non-RAID drives, you absolutely must check them for configurable timeout support. If they don’t have it then you must increase your storage driver’s timeout to compensate, otherwise you risk data loss.

How do storage timeouts work, and when are they a factor?
When the operating system asks a drive to read or write a particular sector and the drive fails to do so, the drive keeps trying, and does nothing else while it is trying. An HDD that either does not have configurable timeouts or that has them disabled will keep this up for quite a long time (minutes) and won’t respond to any other command while it does so.
At some point Linux’s own timeouts will be exceeded and the Linux kernel will decide that there is something really wrong with the drive in question. It will try to reset it, and that will probably fail, because the drive will not be responding to the reset command. Linux will probably then reset the entire SATA or SCSI link and fail the IO request.
In a single drive situation (no RAID redundancy) it is probably a good thing that the drive tries really hard to get/set the data. If it really persists it just may work, and so there’s no data loss, and you are left under no illusion that your drive is really unwell and should be replaced soon.
In a multiple drive software RAID situation it’s a really bad thing. Linux MD will kick the drive out because as far as it is concerned it’s a drive that stopped responding to anything for several minutes. But why do you need to care? RAID is resilient, right? So a drive gets kicked out and added back again, it should be no problem.
Well, a lot of the time that’s true, but if you happen to hit another unreadable sector on some other drive while the array is degraded then you’ve got two drives kicked out, and so on. A bus / controller reset can also kick multiple drives out. It’s really easy to end up with an array that thinks it’s too damaged to function because of a relatively minor amount of unreadable sectors. RAID6 can’t help you here.
If you know what you’re doing you can still coerce such an array to assemble itself again and begin rebuilding, but if its component drives have long timeouts set then you may never be able to get it to rebuild fully!
What should happen in a RAID setup is that the drives give up quickly. In the case of a failed read, RAID just reads it from elsewhere and writes it back (causing a sector reallocation in the drive). The monthly scrub that Linux MD does catches these bad sectors before you have a bad time. You can monitor your reallocated sector count and know when a drive is going bad.

How to check/set drive timeouts
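Monitoring the reallocated sector count mentioned above can itself be done with smartctl, for example (the awk field here assumes the usual column layout of `smartctl -A` output):

```shell
# Print the raw reallocated sector count (SMART attribute 5).
# $NF is the raw value column in smartctl's usual -A table layout.
smartctl -A /dev/sda | awk '/Reallocated_Sector_Ct/ { print $NF }'
```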
You can query the current timeout setting with smartctl like so:

```shell
# for drive in /sys/block/sd*; do
    drive="/dev/$(basename $drive)"
    echo "$drive:"
    smartctl -l scterc $drive
done
```
You hopefully end up with something like this:

```
/dev/sda:
smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org

SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)

/dev/sdb:
smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org

SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)
```
That’s a good result because it shows that configurable error timeouts (scterc) are supported, and the timeout is set to 70 all over. That’s in centiseconds, so it’s 7 seconds.
Consumer desktop drives from a few years ago might come back with something like this though:

```
SCT Error Recovery Control:
           Read: Disabled
          Write: Disabled
```
That would mean that the drive supports scterc, but does not enable it on power up. You will need to enable it yourself with smartctl again. Here’s how:

```shell
# smartctl -q errorsonly -l scterc,70,70 /dev/sda
```
That will be silent unless there is some error.
More modern consumer desktop drives probably won’t support scterc at all. They’ll look like this:

```
Warning: device does not support SCT Error Recovery Control command
```
Here you have no alternative but to tell Linux itself to expect this drive to take several minutes to recover from an error, and please not to aggressively reset it or its controller until at least that much time has passed. 180 seconds has been found to be longer than any observed desktop drive will try for.

```shell
# echo 180 > /sys/block/sda/device/timeout
```

I’ve got a mix of drives that support scterc, some that have it disabled, and some that don’t support it. What now?
It’s not difficult to come up with a script that leaves your drives set into their most optimal error timeout condition on each boot. Here’s a trivial example:

```shell
#!/bin/sh

for disk in `find /sys/block -maxdepth 1 -name 'sd*' | xargs -n 1 basename`
do
    smartctl -q errorsonly -l scterc,70,70 /dev/$disk
    if test $? -eq 4
    then
        echo "/dev/$disk doesn't support scterc, setting timeout to 180s" '/o\'
        echo 180 > /sys/block/$disk/device/timeout
    else
        echo "/dev/$disk supports scterc " '\o/'
    fi
done
```
If you call that from your system’s startup scripts (e.g. /etc/rc.local on Debian/Ubuntu) then it will try to set scterc to 7 seconds on every /dev/sd* block device. If it works, great. If it gets an error then it sets the device driver timeout to 180 seconds instead.
There are a couple of shortcomings with this approach, but I offer it here because it’s simple to understand.
It may do odd things if you have a /dev/sd* device that isn’t a real SATA/SCSI disk, for example if it’s iSCSI, or maybe some types of USB enclosure. If the drive is something that can be unplugged and plugged in again (like a USB or eSATA dock) then the drive may reset its scterc setting while unpowered and not get it back when re-plugged: the above script only runs at system boot time.
A more complete but more complex approach may be to get udev to do the work whenever it sees a new drive. That covers both boot time and any time one is plugged in. The smartctl project has had one of these scripts contributed. It looks very clever—for example it works out which devices are part of MD RAIDs—but I don’t use it yet myself as a simpler thing like the script above works for me.

What about hardware RAIDs?
A hardware RAID controller is going to set low timeouts on the drives itself, so as long as they support the feature you don’t have to worry about that.
If the support isn’t there in the drive then you may or may not be screwed there: chances are that the RAID controller is going to be smarter about how it handles slow requests and just ignore the drive for a while. If you are unlucky though you will end up in a position where some of your drives need the setting changed but you can’t directly address them with smartctl. Some brands e.g. 3ware/LSI do allow smartctl interaction through a control device.
When using hardware RAID it would be a good idea to only buy drives that support scterc.

What about ZFS?
I don’t know anything about ZFS and a quick look gives some conflicting advice:
Drives with scterc support don’t cost that much more, so I’d probably want to buy them and check it’s enabled if it were me.

What about btrfs?
As far as I can see, btrfs does not kick drives out itself; it leaves that to Linux, so you’re probably not at risk of losing data.
If your drives do support scterc though, then you’re still best off making sure it’s set, as otherwise things will crawl to a halt at the first sign of trouble.

What about NAS devices?
The thing about these is, they’re quite often just low-end hardware running Linux and doing Linux software RAID under the covers. With the disadvantage that you maybe can’t log in to them and change their timeout settings. This post claims that a few NAS vendors say they have their own timeouts and ignore scterc.

So which drives support SCTERC/TLER and how much more do they cost?
I’m not going to list any here because the list would become out of date too quickly. It’s just something to bear in mind, check for, and take action over.

Fart fart fart
Comments along the lines of “Always use hardware RAID” or “always use $filesystem” will be replaced with “fart fart fart,” so if that’s what you feel the need to say you should probably just do so on Twitter instead, where I will only have the choice to read them in my head as “fart fart fart.”
Let’s be clear from the outset: there’s no word that adequately defines MozFest. The Mozilla Festival is, simply, crazy. Perhaps it’s more kindly described as chaotic? Possibly. A loosely-coupled set of talks, discussion groups, workshops and hackathons, roughly organised into allocated floors, feeds the strangely complementary hemispheres of work and relaxation.

How MozFest works
Starting from the seeming calm of Ravensbourne’s smart entrance, you stroll in, unaware of the soon-experienced confusion. A bewildering and befuddling set of expectations and realisations come and go in rapid succession. From the very first thought – “ok, I’m signed in – what now?”, to the second – “perhaps I need to go upstairs?”, third – “or do I? there’s no obvious signage, just a load of small notices”…. and so on, descending quickly but briefly into self-doubt before emerging victorious from the uneasy, childlike dependency you have on others’ goodwill.
Volunteers in #MozHelp t-shirts, I’m looking at you. Thanks.
The opening evening started this year with the Science Fair, which featured – in my experience – a set of exciting hardware and software projects which were all in some way web-enabled, or web-connected, or web-controlled. Think Internet of Things, but built by enthusiasts, tinkerers and hackers – the way it should be.
“Open Hardware” projects, interactive story-telling, video games and robots being controlled by the orientation of a smartphone (by virtue of its gyroscopic capability).. the demonstration of genius and creativity is not even limited by the hardware available. If it didn’t already exist, it got designed and built.

An Open Web, for Free Society
As made clear from the opening keynotes on Saturday morning, MozFest is not a place for debate. Don’t think this as a bad thing. The intention is simply to help communicate ideas, as opposed to getting bogged down in the mire of detail. “Free” vs “Open”? Not here. The advice given was to use one’s ears much more than one’s mouth, and it’s sound advice – no pun intended. I have generally been considered a good listener, so I felt at home not having to “prove” anything by making a point. There was no point.
So the work on lumail2 is going well, and already I can see that it is a good idea. The main reason for (re)writing it is to unify a lot of the previous ad-hoc primitives (i.e. lua functions) and to try and push as much of the code into Lua, and out of C++, as possible. This work is already paying off with the introduction of new display-modes and simpler implementation.
View modes are an important part of lumail, because it is a modal mail-client. You're always in one mode:
This is nothing new, but there are two new modes:
Each of these modes draws lines of text on the screen, and those lines consist of things that Lua generated. So there is a direct mapping:

| Mode    | Lua Function            |
|---------|-------------------------|
| maildir | function maildir_view() |
| index   | function index_view()   |
| message | function message_view() |
| lua     | function lua_view()     |
With that in mind it is possible to write a function to scroll to the next line containing a pattern like so:

```lua
function find()
   local pattern = Screen:get_line( "Search for:" )

   -- Get the global mode.
   local mode = Config:get("global.mode")

   -- Use that to get the lines we're currently displaying
   loadstring( "out = " .. mode .. "_view()" )()

   -- At this point "out" is a table containing lines that
   -- the current mode wishes to display.

   -- .. do searching here.
end
```
Thus the whole thing is dynamic and mode-agnostic.
The other big change is pushing things to Lua. Replying to an email, populating the new message, and appending your ~/.signature are all handled by Lua, as is forwarding a message or composing a new mail.
The downside is that the configuration-file is now almost 1000 lines long, thanks to the many little function definitions, and key-binding setup.
I realised that one of my desktop systems is now over a decade old. I bought it in 2005 and it's starting to show its age. The case and DVD drives are fine, the PSU and the hard disk have been upgraded mid-life, and there is nothing wrong with the display, keyboard and mouse. The problem is that the CPU is under-powered, there isn't enough RAM, and the graphics subsystem is slow and no longer up to the job.
I've considered several options, but I think the least disruptive is a modern motherboard and a faster current-generation CPU. I can keep the case and most of the rest of the parts and just do a heart transplant.
I know I don't mention a season, and I'm a few hours late for hallowe'en, but here's a haiku about Haiku:
A death, once again,
The master sighs, and fixes,
It rises up, undead.
Here is my monthly update covering a large part of what I have been doing in the free software world (previously):
This month I have been paid to work 11 hours on Debian Long Term Support (LTS). In that time I did the following:
I also filed FTBFS bugs against arora, barry, django-ajax-selects, django-polymorphic, django-sitetree, flask-autoindex, flask-babel, genparse, golang-github-jacobsa-ogletest, healpy, jarisplayer, jsurf-alggeo, kmidimon, libmapper, libpreludedb, mathgl, metview, miaviewit, moksha.common, monster-masher, node-connect, node-postgres, opensurgsim, php-xml-rss, pokerth, pylint-django, python-django-contact-form, python-pyqtgraph, python-pyramid, qlipper, r-bioc-cummerbund, r-bioc-genomicalignments, rawdns, ruby-haml-rails, ruby-omniauth-ldap, scute, stellarium, step, synfigstudio, tulip, xdot, & yelp.
Since discovering I have elevated blood pressure and average but unhealthy levels of blood lipids at the start of this year I've made some small changes to my lifestyle. My diet was fundamentally sound but I have tweaked it a little and reduced the amount of it.
To date I've lost just over 21 kg at an average of 83 g per day, or in medieval units: 3 stones and 5 pounds at a rate of 3 ounces per day. It's important that the rate is slow so you don't stress your body and the weight loss should be sustainable as a result.
The last blood tests showed that my blood lipids had shifted drastically and are now very healthy, so my diet is worth sticking with. Yesterday I spoke to my GP, and after analysing a 5-day set of home blood pressure readings he's decided to stop the blood pressure medication.
I probably do have inherited high blood pressure, but my excess weight brought it on early, and as long as I monitor it weekly we should be able to restart medication if it begins to rise again.
My hybrid diet seems to have worked very well. To be honest I've only really tweaked what I was already eating, but the tweaks were worth the effort and I'll be sticking to the diet for ever now. I will be allowed a few extra calories per day - I do want to stop losing weight eventually!
The only real inconvenience has been having to replace most of my clothes and, as I've shrunk down so much, finding local shops that sell men's clothing small enough not to look funny on me!
Back in March I was given an LG G Watch R, the first Android Wear smartwatch to have a full round display (the Moto 360 was earlier, but has a bit cut off the bottom of the actual display). I’d promised I’d get round to making some comments about it once I’d had it for a while and have failed to do so until now. Note that this is very much comments on the watch from a user point of view; I haven’t got to the point of trying to do any development or other hacking of it.
Firstly, it’s important to note I already was wearing a watch and have been doing so for all of my adult life. Just a basic, unobtrusive analogue watch (I’ve had a couple since I was 18, before that it was pretty much every type of calculator watch available at some point), but I can’t remember a period where I didn’t. The G Watch R is bulkier than what I was previously wearing, but I haven’t found it obtrusive. And I love the way it looks; if you don’t look closely it doesn’t look like a smart watch (and really it’s only the screen that gives it away).
Secondly, I already would have taken my watch off at night and when I was showering. So while the fact that the battery on the G Watch R will really only last a day and a half is by far and away its most annoying problem, it’s not as bad as it could be for me. The supplied charging dock is magnetic, so it lives on my bedside table and I just drop the watch in it when I go to bed.
With those details out of the way, what have I thought of it? It’s certainly a neat gadget. Being able to see my notifications without having to take my phone out of my pocket is more convenient than I expected - especially when it’s something like an unimportant email that I can then easily dismiss by swiping the watch face. Having my agenda just a flick away is very convenient too, particularly when I’m still at the stage of trying to remember where my next lecture is. Having walking directions from Google Maps show up on the watch (and be accompanied by a gentle vibration when I need to change direction) is pretty handy as well. The ability to take pictures via the phone camera, not so much. Perhaps if it showed me roughly what I was about to photograph, but without that it’s no easier than using the phone interface. It’s mostly an interface for consuming information - I’ve tried the text-to-SMS interface a few times, but it’s just not reliable enough that I’d choose to use it.
I’ve also been pleased to see it get several updates from LG in the time I’ve had it. First the upgrade from Wear 4.4 to Wear 5.1 (probably via 5.0 but I forget), but also the enabling of wifi support. The hardware could always support this, but initially Android Wear didn’t and then there was some uncertainty about the FCC certification for the G Watch R. I can’t say I use it much (mostly the phone is within range) but it’s nice to see the improvements in support when they’re available.
What about the downsides? Battery life, as mentioned above, is definitely the major one. Mostly a day is fine, but the problem comes if I’m ever away. There’s no way to charge without the charging dock, so that becomes another thing I have to pack. And it’s really annoying to have your watch go dead on you midday when you forget to charge it. I also had a period where I’d frequently (at least once a week) find an “Android Wear isn’t responding. Wait/Restart?” error on the watch screen. Not cool, but thankfully it seems to have stopped happening. Finally there’s the additional resource requirements it puts on the phone. I have a fairly basic Moto G 4G that already struggles with Ingress and Chrome at the same time, so adding yet another thing running all the time doesn’t help. I’m sure I could make use of a few more apps if I was more comfortable with the extra load on the phone.
The notable point for me with the watch was DebConf. I’d decided not to bring it, not wanting the hassle of dealing with the daily charging. I switched back to my old analogue watch (a Timex, if you care). And proceeded to spend 3 days looking at it every time my phone vibrated before realising that I couldn’t do that. That marked the point where I accepted that I was definitely seeing benefits from having a smart watch. So when I was away for a week at the start of September, I brought the charger with me (at some point I’ll get round to a second base for travel). I was glad I’d done so. I’m not sure I’m yet at the point where I’d replace it in the event it died, but on the whole I’m more of a convert than I was expecting to be.
We have regular sessions on the second Saturday of each month. Bring a 'box', bring a notebook, bring anything that might run Linux, or just bring yourself and enjoy socialising/learning/teaching or simply chilling out!
This month's meeting is at The Feathers Pub, Merstham
42 High St, Merstham, Redhill, Surrey, RH1 3EA
01737 645643 · http://www.thefeathersmerstham.co.uk
NOTE the pub opens at 12 Noon.