My newspaper of choice, the Guardian, has for some time produced its own Android (and iOS, of course) app. I have often used the Android app on my tablet to catch up on emerging news items at the end of the day. I also read the BBC news app for the same reason. Yesterday I received an update to the Guardian app. That update was a complete rewrite and gives the user a very different experience from the original app. For example, in the old app I could tailor the home screen to show me just the news categories I wanted (i.e. no sport, no fashion, but plenty of politics, business and UK news). In the new app I can only do that if I subscribe to a paid version. Sorry, but no: I already pay for the newspaper, I just want this to give me updated headlines. I don’t want to have to buy the newspaper all over again.
In today’s paper (and on-line of course) there is an editorial comment on the new app explaining its background. The writer opens:
Today I am proud to announce the launch of our redesigned Guardian app. It’s been a ground-up reworking to bring you a new, advanced and beautiful Guardian app. For the first time you will have a seamless experience across phones and tablets, with a cleaner, responsive design that showcases the Guardian’s award-winning journalism to our readers around the world.
The article goes on to explain the history of the original app and the thinking behind the redesign. It continues:
We’re also thrilled to announce that GuardianWitness – the Guardian’s award-winning platform through which readers can contribute their own pictures, videos and text – is now integrated into the app, meaning readers can now contribute to assignments seamlessly and directly within the main app.
Other new features include:
I particularly like that last bit.
And of course the app needs access to my location.
(P.S. The app called “UK Newspapers” by Markus Reitberger gives access to all the UK news sites you could want – and all it asks for is network access.)
Today has been a fairly mixed bag. I booked today off as holiday some weeks back as work has been consistently stressful and I felt I needed a day to spend by myself with nothing in particular to do. I could probably have predicted it, but that's not quite how it turned out.
The morning started pleasantly: my wife is on a late shift today so she didn't need to leave until 11 so we had a leisurely breakfast together. Then we got to talking about what I'd do today; "Not much", said I. Naturally, the conversation went down the route of talking about things I might do. "Perhaps I'll wander into the city". "Oh, while you're there could you just..."
Before long, I had a to-do list. I didn't mind too much but mentioned in passing that days off always go by far too quickly. The ability of menstrual women to misinterpret a sentence is literally mind-blowing.
After I'd dropped the raging pile of hormones at her place of work, I wandered into the city. First on the list was getting a family photo printed onto canvas with a frame for my mum's birthday tomorrow. My SD card didn't work in the guy's PC - oh no! Luckily, I'd brought my laptop with me so I used that to copy everything off, reformat it (making sure I picked something his Windows 8 machine would be unlikely to barf on) and copy the photos back. Still no joy on his machine, but it worked fine in the Kodak printer he had; that turned out not to be useful, though, as he couldn't get it from there onto the canvas printer. Giving up, I went to Boots for some other things and noticed they did canvas printing. Cheaper. And it worked. Flawlessly. I blame Windows 8. Or summat.
I then went to the optician as I've been getting really dry eyes again recently. They told me I'd have to wait until 2:45 to see someone. Resigning myself to spending the day in the city, I had lunch at a noodle bar. After quite a lot of walking around, I eventually had my appointment to be given two things I apparently need to go and buy:
Opticrom ("more than meets the eye"?)
Some eye bags. Never heard of these. Apparently Amazon is the place to get them.
On my way back to the car, I decided to pop into a stationers and ended up buying a really nice notebook. There's something really satisfying about a good notebook - even though I don't use one very often. I suppose next time I can put my to-do list in it ;)
After that success, I decided to try something I've been meaning to try for ages: a backrub from The Rub. I've never had a professional massage before so I went for a 10 minute backrub there. It was worth every single penny - absolutely fantastic. I'll be going back again soon for a longer massage.
Now I have about 50 minutes until I have to pick up my son from the nursery so I think I'm going to go for a cheeky half.
Btw, the illness I mentioned last time culminated in some stomach-based fun-times and then we both felt fine. Weird.
All week so far, I've gone to bed before 10pm every night and have woken up feeling tired, been tired all day, had difficulty focussing during the day, and have still been tired when home from work. I went to bed at 8pm on Wednesday night. Also, my stomach has seemed unsettled. My wife appears to have some of the same symptoms so I don't think I'm about to encounter renal failure or some such mechanical failure of my parts. A very odd illness though. Also, we're both feeling fairly down.
I blame the management.

Dilos
I made a thing for parsing those stupid calendar files sent by Exchange so I can view them nicely in mutt :) I'll put it up somewhere when I've tidied it up a bit more.
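Until I do put it up, here's a toy stand-in for the core idea in a few lines of shell (not the actual tool — real invites fold long lines and often use quoted-printable, which this sketch deliberately ignores):

```shell
# A toy invite standing in for what Exchange sends (real .ics files are
# longer and fold continuation lines, which this sketch ignores).
printf 'BEGIN:VCALENDAR\r\nSUMMARY:Team meeting\r\nDTSTART:20140501T100000Z\r\nLOCATION:Room 3\r\nEND:VCALENDAR\r\n' > invite.ics

# Strip the CR line endings and keep just the headline fields.
sed -e 's/\r$//' invite.ics | grep -E '^(SUMMARY|LOCATION|DTSTART|DTEND)'
```

Hooking something like that up as a mailcap entry for text/calendar is what lets mutt display the invite inline instead of as an opaque attachment.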
Actual progress on this PhD revision has been quite slow. My current efforts are on improving the focus of the thesis. One of the criticisms the examiners made (somewhat obliquely) was that it wasn't very clear exactly what my subject was: musicology? music information retrieval? computational musicology? And the reason for this was that I had failed to make that clear to myself. It was only at the writing-up stage, when I was trying to put together a coherent argument, that I decided to try to make it a story about music information retrieval (MIR). I tried to argue that MIR's existing evaluation work (which was largely modelled on information retrieval evaluation from the text world) only took into account the music information needs of recreational users of MIR systems, and that there was very little in the way of studying the music information seeking behaviour of "serious" users. However, the examiners didn't even accept that information retrieval was an important problem for musicology, never mind that there was work to be done in examining the music information needs of music scholarship.
So I'm using this as an excuse to shift the focus away from MIR a little and towards something more like computational musicology and music informatics. I'm putting together a case study of a computational musicology toolkit called music21. Doing this allows me to focus in more detail on a smaller and more distinct community of users (rather than attempting to study musicologists in general, which was another problematic feature of the thesis); it makes it much clearer what kind of music research can be addressed using the technology (all of MIR is either far too diverse or far too generic, depending on how you want to spin it); and it also allows me to work with the actual Purcell Plus project materials using the toolkit.
So the Kelly report “of the independent review into the events leading to the Co-operative Bank’s capital shortfall” was published yesterday. During the day, I was putting odd bits from it out in 140 characters with the hashtags #coops #kellylessons. Here they are in one more permanent place. How many of these lessons has your organisation – whether a co-op or not – learned?
Are there other lessons that you would add?
There's an Italian restaurant somewhere in the Lake District in the north of England. And if you can imagine, there are hills, lots of rain, lakes, and so on and so forth. And inside this Italian restaurant is a huge picture—a wall-sized mural—of Marlon Brando.
Now, I asked my students once: «Why Marlon Brando?»
«Well he's The Godfather!»
«I don't think he was a Godfather, you know...»
«But he was in that film, wasn't he? It was called The Godfather.»
«Ah, that film from 40 years ago, virtually.. 1970-something?»
«Yes, yes, but he was The Godfather then, wasn't he?»
«You mean he was a mafioso? He was a leading figure in gangsterland in New York?»
«Well, yes, he was The Godfather,» they kept saying, «... and he was Italian.»
«Well, no, actually he was American...»
So you get this regular kind of placing and misplacing of the real, i.e. a very large American actor who played in a film called The Godfather about American gangsters, or Italian-American gangsters, nothing to do with godfathers in Christianity, in the north of England, stuck on a huge wall in a restaurant.
All of which tells us that it's an Italian restaurant.
And we don't demur, we don't say "no, that's all untrue", we just take it for granted: "Yeah, that's what it means - it's an Italian restaurant". And it's that kind of slippage between the real and the imagined—or the true and the fictional—which Baudrillard thinks is becoming increasingly characteristic of how our society goes on.
Over the last couple of days I've been setting up a few things that I've been meaning to figure out for ages. I'm going to note down here what I did so when it all goes wrong, I can find out what I did ;)

Pretties
My desktop machine is one of those Frankenstein jobs that has been gradually evolving for the past 15 years or so; every single part of it has been replaced at least 3 times but it's still the same machine in my mind. I've recently been treating myself to a little bit of gaming, partially prompted by Steam's recent push for Linux. Having bought Portal 2 a few days ago, it became apparent that my graphics card had become the next candidate for replacement so I bought myself a GeForce GT 640 which I'm sure someone will tell me was a terrible choice (I long ago stopped knowing anything about hardware) but it's doing the job just fine so far.

Wakey wakey
One of the things I've been meaning to set up for ages is wake on LAN. I always remembered seeing the option in the BIOS but I'd never played with it.
To my surprise however, the BIOS on my machine's current incarnation has no mention of wake on LAN. So for posterity, here's what I did to sort it out:
I've installed wol-systemd which basically runs ethtool <interface> wol g on every boot. Seems odd to me that this has to be set every time the machine is turned on but meh, it works.
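If your distro doesn't package something like wol-systemd, the unit it provides is small enough to sketch by hand. Something like this should do the same job — the interface name is whatever `ip link` reports, so `enp3s0` below is a placeholder, and the ethtool path may differ on your system:

```ini
# /etc/systemd/system/wol@.service — re-arm wake-on-LAN on every boot.
# Enable with: systemctl enable wol@enp3s0
[Unit]
Description=Enable Wake-on-LAN on %i
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/bin/ethtool %i wol g

[Install]
WantedBy=multi-user.target
```

The `g` mode means "wake on magic packet", which is what the phone apps send.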
I also installed an app on my phone so I can turn the PC on from anywhere in the house.

Eye sea yew
To go with the wake on LAN, I added x11vnc -display :0 -forever -many -nap to my .xprofile which means that I can also view the display from my phone as soon as it's up and running.
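One caveat worth noting: invoked like that, x11vnc accepts connections from anything on the network without a password. If that bothers you, one mitigation (a sketch — the `desktop` host alias is a placeholder) is to add `-localhost` to the x11vnc line and carry the port over ssh instead:

```
# ~/.ssh/config on the viewing machine
Host desktop
    LocalForward 5900 localhost:5900
```

With that in place, `ssh -N desktop` brings up the tunnel and the VNC client points at localhost:5900.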
Combining the three things I've written about so far, when I got in bed last night, I realised that I'd forgotten to leave the computer on - I was downloading something - so I started it from my phone and then used VNC to get things running. From bed. From my phone. Win.

Impatience
After breaking my collar bone a few weeks ago, I finally felt that I'd recovered enough to start playing squash again. Having played tonight, I'm not entirely sure I was right. My shoulder doesn't exactly hurt but it's aching like hell at the moment. I'm sure it'll be fine :S

Hubris
Ehh... I've done "laziness" and "impatience", I ought to have a "hubris"...
Oh, I've started writing a game with LÖVE that's going to be amazing.
(It probably won't be amazing)
In February of this year, Poul-Henning Kamp (a.k.a. “PHK”) gave what now looks to be a peculiarly prescient presentation as the closing keynote to 2014’s FOSDEM.
In the presentation (PDF), PHK posits an NSA operation called ORCHESTRA which is designed to undermine internet security through a series of “disinformation”, “misinformation” and “misdirection” sub-operations. ORCHESTRA is intended to be cheap, non-technical, completely deniable, but effective. One of the opening slides gives ORCHESTRA’s “operation at a glance” overview as:
- Reduce cost of COMINT collection
- All above board
- No special authorizations
- Eliminate/reduce/prevent encryption
- Enable access
- Frustrate players
PHK delivers the presentation as if he were a mid-ranking NSA staffer intending to brief NATO in Brussels. But “being American, he ends up [at FOSDEM] instead”. The truly scary part of this presentation is that it could all be completely true.
What makes the presentation so timely is his commentary on openssl. Watch it and weep.
For any readers uncertain of exactly how the heartbleed vulnerability in openssl might be exploitable, Sean Cassidy over at existential type has a good explanation.
And if you find that difficult to follow, Randall Munroe over at xkcd covers it quite nicely.
My thanks, and appreciation as always, to a great artist.
Of course, Randall foresaw this problem back in 2008 when he published his take on the debian openssl fiasco.
The Guardian and the Washington Post have been jointly awarded the Pulitzer prize for public service for their reporting of Edward Snowden’s whistleblowing on the NSA’s surveillance activities.
The Guardian reports:
The Pulitzer committee praised the Guardian for its “revelation of widespread secret surveillance by the National Security Agency, helping through aggressive reporting to spark a debate about the relationship between the government and the public over issues of security and privacy”.
Unfortunately that debate seems to be taking place in the USA rather than in the UK.
In typical Guardian style, one correspondent to today’s letters page says:
Congratulations to all. Can’t wait for the film. All the President’s Men II? Johnny Depp as Alan Rusbridger?
I’d pay to see that. But I’m not sure how it ends yet.
I was contacted recently by a guy called Andy Beverley who wrote:
Hope you don’t mind me contacting you about one of your old blog posts “what gives with dban”. Thought I’d let you know that I forked DBAN a while ago, and produced a standalone program (called nwipe) that will run on any Linux OS. That means it will work with any Live CD, meaning much better hardware support.
It’s included in PartedMagic, as well as most other popular distros.
“No I don’t mind at all” is my response. In fact, since DBAN seems to be borked permanently, it is nice to see an alternative out there.
Andy’s nwipe page says that he could do with some assistance. So if anyone feels able to help him out, give him a call.
I had occasion recently to need an entry in my ssh config such that connections to a certain host would be proxied through another connection. Several sources suggested the following snippet:

    Host myserver.net
        ProxyCommand nc -x <proxy host>:<proxy port> %h %p
In my situation, I wanted the connection to be proxied through an ssh tunnel that I already had set up in another part of the config. So my entry looked like:

    Host myserver.net
        ProxyCommand nc -x localhost:5123 %h %p
Try as I might, however, I just could not get it to work, always receiving the following message:

    Error: Couldn't resolve host "localhost:5123"
After some head scratching, and checking and double-checking that I had set up the proxy tunnel correctly, I finally figured out that it was because I had GNU netcat installed rather than BSD netcat. Apparently, most people on the internet use BSD netcat :)
Worse, -x is a valid option in both netcats but does completely different things depending on which you use; hence the less-than-specific-but-technically-correct error message.
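Incidentally, if the thing listening on localhost:5123 is itself an ssh tunnel, there's a way to sidestep the BSD-versus-GNU question entirely: OpenSSH 5.4 and later can do the proxying natively with -W, no netcat required. A sketch, where `tunnelhost` is a placeholder for whatever ssh host carries the tunnel:

```
Host myserver.net
    ProxyCommand ssh -W %h:%p tunnelhost
```

The error messages are considerably less cryptic that way, too.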
After that revelation, I thought it was worth capturing the commonalities and differences between the options taken by the two netcats.

Common options
-h: Prints out nc help.
-i interval: Specifies a delay time interval between lines of text sent and received. Also causes a delay time between connections to multiple ports.
-l: Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host. It is an error to use this option in conjunction with the -p, -s, or -z options. Additionally, any timeouts specified with the -w option are ignored.
-n: Do not do any DNS or service lookups on any specified addresses, hostnames or ports.
-p port: Specifies the source port nc should use, subject to privilege restrictions and availability.
-r: Specifies that source and/or destination ports should be chosen randomly instead of sequentially within a range or in the order that the system assigns them.
-s addr: Specifies the IP of the interface which is used to send the packets. For UNIX-domain datagram sockets, specifies the local temporary socket file to create and use so that datagrams can be received. It is an error to use this option in conjunction with the -l option.
-t (BSD netcat), -T (GNU netcat): Causes nc to send RFC 854 DON'T and WON'T responses to RFC 854 DO and WILL requests. This makes it possible to use nc to script telnet sessions.
-u: Use UDP instead of the default option of TCP. For UNIX-domain sockets, use a datagram socket instead of a stream socket. If a UNIX-domain socket is used, a temporary receiving socket is created in /tmp unless the -s flag is given.
-v: Have nc give more verbose output.
-w timeout: Connections which cannot be established or are idle time out after timeout seconds. The -w flag has no effect on the -l option, i.e. nc will listen forever for a connection, with or without the -w flag. The default is no timeout.
-z: Specifies that nc should just scan for listening daemons, without sending any data to them. It is an error to use this option in conjunction with the -l option.
BSD netcat only

-4: Forces nc to use IPv4 addresses only.
-6: Forces nc to use IPv6 addresses only.
-C: Send CRLF as line-ending.
-D: Enable debugging on the socket.
-d: Do not attempt to read from stdin.
-I length: Specifies the size of the TCP receive buffer.
-k: Forces nc to stay listening for another connection after its current connection is completed. It is an error to use this option without the -l option.
-O length: Specifies the size of the TCP send buffer.
-P proxy_username: Specifies a username to present to a proxy server that requires authentication. If no username is specified then authentication will not be attempted. Proxy authentication is only supported for HTTP CONNECT proxies at present.
-q seconds: After EOF on stdin, wait the specified number of seconds and then quit. If seconds is negative, wait forever.
-S: Enables the RFC 2385 TCP MD5 signature option.
-T toskeyword: Change IPv4 TOS value. toskeyword may be one of critical, inetcontrol, lowcost, lowdelay, netcontrol, throughput, reliability, or one of the DiffServ Code Points: ef, af11 ... af43, cs0 ... cs7; or a number in either hex or decimal.
-U: Specifies to use UNIX-domain sockets.
-V rtable: Set the routing table to be used. The default is 0.
-X proxy_protocol: Requests that nc should use the specified protocol when talking to the proxy server. Supported protocols are “4” (SOCKS v.4), “5” (SOCKS v.5) and “connect” (HTTPS proxy). If the protocol is not specified, SOCKS version 5 is used.
-x proxy_address[:port]: Requests that nc should connect to destination using a proxy at proxy_address and port. If port is not specified, the well-known port for the proxy protocol is used (1080 for SOCKS, 3128 for HTTPS).
GNU netcat only

-c: Close connection on EOF from stdin.
-e prog: Program to exec after connect.
-g gateway: Source-routing hop point[s], up to 8.
-G num: Source-routing pointer: 4, 8, 12, ...
-L addr:port: Forward local port to remote address.
-o file: Output hexdump traffic to FILE (implies -x).
-t: TCP mode (default).
-V: Output version information and exit.
-x: Hexdump incoming and outgoing traffic.
I uninstalled GNU netcat and installed BSD netcat btw ;)
(This is my first race of the 2014 season.)
I had entered this race in 2013 and found it was effective for focusing winter training. As triathlons do not typically start until May in the UK, scheduling earlier races can be motivating in the colder winter months.
I didn't have any clear goals for the race except to blow out the cobwebs and improve on my 2013 time. I couldn't set reasonable or reliable target times after considerable "long & slow" training in the off-season, but I did want to test some new equipment and strategies, especially race pacing with a power meter, but also a new wheelset, crankset and helmet.
Preparation was both accidentally and deliberately compromised: I did very little race-specific training as my season is based around an entirely different intensity of race, but compounding this I was confined to bed the weekend before.
Sleep was acceptable in the preceding days and I felt moderately fresh on race morning. Nutrition-wise, I had porridge and bread with jam for breakfast, a PowerGel before the race, 750ml of PowerBar Perform on the bike along with a "Hydro" PowerGel with caffeine at approximately 30km.

Run 1 (7.5km)
A few minutes before the start my race number belt—the only truly untested equipment that day—refused to tighten. However, I decided that once the race began I would either ignore it or even discard it, risking disqualification.
Despite letting everyone go up the road, my first km was still too fast so I dialled down the effort, settling into a "10k" pace, and began overtaking other runners. The Fen winds and the drag-strip uphill from 3km provided a bit of a pacing challenge for someone used to shelter and shorter hills, but I kept a metered effort through into transition.
Although my 2014 bike setup features a power meter, I had not yet had the chance to perform an FTP test outdoors. I was thus not able to calculate a definitive target power for the bike leg. However, data from my road bike suggested I set a power ceiling of 250W on the longer hills.
This was extremely effective in avoiding going "into the red" and compromising the second run. This lends yet more weight to the idea that a power meter in multisport events is "almost like cheating".
I was not entirely comfortable with my bike position: not only were my thin sunglasses making me raise my head more than I needed to, I found myself creeping forward onto the nose of my saddle. This is sub-optimal, even if only considering that I am not training in that position.
Overall, the bike was uneventful with the only memorable moment provided by a wasp that got stuck between my head and a helmet vent. Coming into transition I didn't feel like I had really pushed myself that hard—probably a good sign—but the time difference from last year's bike leg (1:16:11) was a little underwhelming.
After leaving transition, my legs were extremely uncooperative and I had great difficulty in pacing myself in the first kilometer. Concentrating hard on reducing my cadence as well as using my rehearsed mental cue, I managed to settle down.
The following 4 kilometers were a mental struggle rather than a physical one, modulo having to force a few burps to ease some discomfort, possibly from drinking too much or too fast on the bike.
I had planned to "unload" as soon as I reached 6km but I didn't really have it in me. Whilst I am physiologically faster compared to last year, I suspect the lack of threshold-level running over the winter meant the mental component required for digging deep will require some coaxing to return.
However, it is said that you have successfully paced a duathlon if the second run is faster than the first. On this criterion, this was a success, but it would have been a bonus to have felt completely drained at the end of the day, if only from a neo-Calvinist perspective.
A race that goes almost entirely to plan is a bit of a paradox – there's certainly satisfaction in setting goals and hitting them without issue, but this is the gratification of a slow-burning fire rather than the jubilation of a fireworks display.
However, it was nice to learn that I managed to finish 5th in my age group despite this race attracting an extremely strong field: as an indicator, the age-group athlete finishing immediately before me was seven minutes faster and the overall winner finished in 1:54:53 (!).
The race identified the following areas to work on:
«Swim 2.4 miles! Bike 112 miles! Run 26.2 miles! Brag for the rest of your life...»
After some deliberation I decided on the Ironman event in Klagenfurt, Austria, not only because the location lends a certain tone to the occasion but because the course is suited to my relative strengths within the three disciplines.
I've made the following conscious changes to my race scheduling and selection this year:
Readers may observe that despite my primary race finishing with a marathon-distance run, I am not racing a standalone marathon in preparation. This is common practice, justified by the run-specific training leading up to a marathon and the recovery period afterwards compromising training overall.
For similar reasons, I have also chosen not to race a "70.3" distance event in 2014. Whether to do so is a more contentious issue than whether to run a marathon, but it resolved itself once I could not find an event that was suitably scheduled and I could convince myself that most of the benefits could be achieved through other means.
Cambridge Duathlon
Run: 7.5km, bike: 40km, run: 7.5km
St Neots Olympic Tri
Swim: 1,500m, bike: 40km, run: 10km
ECCA 50-mile cycling time trial
50 miles. Course: E2/50C
Icknield RC 100-mile cycling time trial
100 miles. Course: F1/100
Cambridge Triathlon
Swim: 1,500m, bike: 40km, run: 10km
Ironman Austria
Swim: 3.8km, bike: 180km, run: 42.2km
This is nasty. There is a remotely exploitable bug in openssl which leads to the leak of memory contents from the server to the client and from the client to the server. In practice this means that an attacker can read 64K chunks of memory on a vulnerable service, thus potentially exposing security critical information.
At 19.00 UTC yesterday, openssl bug CVE-2014-0160 was announced at heartbleed.com. I picked it up following a flurry of emails on the tor relays list this morning. Roger Dingledine posted a blog commentary on the bug to the tor list giving details about the likely impacts on Tor and Tor users.
Dingledine’s blog entry says:
Here are our first thoughts on what Tor components are affected:
But as he also says earlier on, “this bug affects way more programs than just Tor”. The openssl security advisory is remarkably sparse on details, saying only that “A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server.” So it is left to others to explain what this means in practice. The heartbleed announcement does just that. It opens by saying:
The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).
The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users.
During their investigations, the heartbleed researchers attacked their own SSL protected services from outside and found that they were:
able steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication.
According to the heartbleed site, versions 1.0.1 through 1.0.1f (inclusive) of openssl are vulnerable. The earlier 0.9.8 branch is NOT vulnerable, nor is the 1.0.0 branch. Unfortunately, the bug was introduced to openssl in December 2011 and has been in real-world use in the 1.0.1 branch since 14 March 2012 – just over two years ago. This means that a LOT of services will be affected and will need to be patched, and quickly.
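If you're checking more than one box, the affected range is easy to encode. A quick sketch for classifying the version string that `openssl version` reports, using the range given above:

```shell
# Classify an OpenSSL version string against the vulnerable range
# (1.0.1 through 1.0.1f inclusive, per heartbleed.com).
is_heartbleed_vulnerable() {
  case "$1" in
    1.0.1|1.0.1[a-f]) return 0 ;;  # in the vulnerable range
    *)                return 1 ;;  # 0.9.8*, 1.0.0* and 1.0.1g+ are not affected
  esac
}

is_heartbleed_vulnerable "1.0.1e" && echo "1.0.1e: vulnerable"
is_heartbleed_vulnerable "1.0.1g" || echo "1.0.1g: fixed"
is_heartbleed_vulnerable "0.9.8y" || echo "0.9.8y: not affected"
```

Remember that patching the library is only half the job: anything that held private keys in a vulnerable process needs new keys and certificates too.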
Openssl, or its libraries, are used in a vast range of security critical services across the internet. As the heartbleed site notes:
OpenSSL is the most popular open source cryptographic library and TLS (transport layer security) implementation used to encrypt traffic on the Internet. Your popular social site, your company’s site, commerce site, hobby site, site you install software from or even sites run by your government might be using vulnerable OpenSSL. Many of online services use TLS to both to identify themselves to you and to protect your privacy and transactions. You might have networked appliances with logins secured by this buggy implementation of the TLS. Furthermore you might have client side software on your computer that could expose the data from your computer if you connect to compromised services.
That point about networked appliances is particularly worrying. In the last two years a lot of devices (routers, switches, firewalls, etc.) may have shipped with embedded services built against vulnerable versions of the openssl library.
In my case alone, I now have to generate new X.509 certificates for all my webservers, my mail (both SMTP and POP/IMAP) services, and my OpenVPN services. I will also need to look critically at my ssh implementation and setup. I am lucky that I only have a small network.
My guess is that most professional sysadmins are having a very bad day today.