Planet ALUG

Planet ALUG - http://planet.alug.org.uk/

Mick Morgan: backblaze back seagate

Tue, 21/01/2014 - 20:30

In October last year I noted that the Western Digital “Green” drives in my desktop and a new RAID server build looked to be in imminent danger of early failure. That conclusion was based on a worryingly high load-cycle count, which a series of posts around the net all attributed to these drives’ aggressive head-parking, a feature intended to save energy when the disk is idle. I decided at the time to replace the desktop disk and recycle it to the RAID server. I have since decided to replace the entire RAID array as soon as I can. But which disks to use?

Well, Backblaze, a company which offers “unlimited” on-line backup storage for Mac and Windows users, has just published a rather useful set of statistics on the disks that they use in their storage arrays. The most interesting point is that they use the same domestic-grade disks as would be used by you or me, rather than the commercial-grade ones used in high-end RAID systems.

According to the Backblaze blog post, at the end of 2013 they had 27,134 consumer-grade drives spinning in their “Storage Pods”. Those disks were mostly Seagate and Hitachi drives, with a (much) smaller number of Western Digital drives also in use. Backblaze said:

Why do we have the drives we have? Basically, we buy the least expensive drives that will work. When a new drive comes on the market that looks like it would work, and the price is good, we test a pod full and see how they perform. The new drives go through initial setup tests, a stress test, and then a couple weeks in production. (A couple of weeks is enough to fill the pod with data.) If things still look good, that drive goes on the buy list. When the price is right, we buy it.

They also noted:

The drives that just don’t work in our environment are Western Digital Green 3TB drives and Seagate LP (low power) 2TB drives. Both of these drives start accumulating errors as soon as they are put into production. We think this is related to vibration. The drives do somewhat better in the new low-vibration Backblaze Storage Pod, but still not well enough.

These drives are designed to be energy-efficient, and spin down aggressively when not in use. In the Backblaze environment, they spin down frequently, and then spin right back up. We think that this causes a lot of wear on the drive.

Apart from the vibration point, I’d say that conclusion was spot-on given the reporting I have seen elsewhere. And the sheer number of drives they have in use gives a good solid statistical base upon which to draw when making future purchasing decisions. Backblaze note that the most reliable drives appear to be those made by Hitachi (they get “four nines” of untroubled operation time, while the other brands just get “two nines”), but they also note that the Hitachi drive business was bought by Western Digital around 18 months ago – and WD disks do not seem to perform anywhere near as well as the others in use.

The post concludes:

We are focusing on 4TB drives for new pods. For these, our current favorite is the Seagate Desktop HDD.15 (ST4000DM000). We’ll have to keep an eye on them, though. Historically, Seagate drives have performed well at first, and then had higher failure rates later.

Our other favorite is the Western Digital 3TB Red (WD30EFRX).

We still have to buy smaller drives as replacements for older pods where drives fail. The drives we absolutely won’t buy are Western Digital 3TB Green drives and Seagate 2TB LP drives.

So I guess I’ll be buying more Seagates in future – and I was right to dump the WD Caviar Green when I did.

(As an aside, I’m not convinced the Backblaze backup model is a good idea, but that is not the point here).

Categories: LUG Community Blogs

Mick Morgan: thrust update

Mon, 20/01/2014 - 14:18

I have just run a search for further evidence of the possible compromise at thrustvps and found threads on webhostingtalk, vpsboard, freevps.us and habboxforum amongst others. All of those comments are from people (many, like me, ex-customers) who have received emails like the one I referred to below.

So, I guess thrust /do/ have a real problem.

Categories: LUG Community Blogs

Mick Morgan: rage against the machine

Sun, 19/01/2014 - 03:02

I know it is futile to rant about banks. I know also that I should not really expect anything other than crap service from an industry that treats its customers as useful idiots. But yesterday I met with such appalling and unforgivable stupidity and intransigence that I feel the need to rant here. I have left a period of 24 hours between the experience and the rant simply to allow myself time to reflect, shout at the cats and explain my frustration to my wife in the (vain) belief that it might actually be me at fault rather than the banks (I am, at heart, an eternal optimist).

Here is what happened.

Recently I was reviewing my meagre savings because the laughably small interest rate on one of my ISAs had been reduced following the ending of a “bonus period”. Great scam, this. Offer a rate say 0.5% higher than that offered by competitors but limit it to a fixed period. Thereafter, drop the rate to (say) 1.0% below your competitors’ offerings. Better still, lock your customer into the agreement for a period (say) two years longer than the initial “high rate” period. We all fall for this, yet in my view it should be outlawed because it takes advantage of those least able to care for themselves – i.e. people who do not actively manage their savings. Such people are often elderly, or infirm, and perhaps confused by the finance sector (who isn’t?). Some people are taken in simply because they naively believe that, as long-term, loyal customers, they will be treated as such by their bank.

Following my latest review I noted that one of my ISA accounts (I have several, largely as a result of the aforementioned bank policy of varying introductory rates) was paying 3.0% when the one with the now lapsed introductory rate was paying only 2.0% (now to be reduced even further to 1.5% from 1 February this year). On the face of it, it would seem a no-brainer to move the funds in the lower paying account to the one paying the higher rate. But ISA investment is complicated by the need to make 2-3 year decisions now, based on current rates, when rates may rise later this year, and by the fact that you are limited to a fixed maximum investment sum in any one financial year and in any one ISA. Decisions are further complicated by the policy of some banks to refuse transfers in from other (previous year) ISAs and the fact that some ISAs do not allow any withdrawals or indeed transfers out. Little wonder that “inertia” means that they get to keep your money at rates as low as a derisory 0.1% per annum. I kid you not. My wife was getting this from Barclays up until last year, when I found out and stood over her whilst she completed the forms to move her cash. My wife is in the “loyal customer” camp.

So, do I move funds now in order to gain 1.0%, or wait six months (when rates may have moved and we will in any case be in a new F/Y, with sparkly new ISAs and fresh introductory carrots in place)? Tough call, but the fact that the rate differential was set to rise to 1.5% in February, coupled with the fact that the Halifax (let’s name and shame) on-line banking system said that I could transfer the ISA in question into their ISA paying 3.0%, convinced me to start the process.

Big mistake.

On starting the transfer I checked, and double checked, that the account I was transferring into was the one offering the 3.0% rate. It was. The account number and sort-code matched. On completion of the process, the system even gave me a nice “Thank you for transferring into your cash ISA” page to print which summarised the action I was authorising (“from” account detail, “to” account detail, amount of transfer, National Insurance No. etc). A couple of days later I noted that the funds had disappeared from the “from” account so I knew the process must have started. I checked several times over the next week to see if the funds had been added to my Halifax account, but was not overly worried when they had not appeared, because the Halifax does explain that the process can take up to 15 days, and that they will credit interest to the account for the full sum transferred from the date of application (the “Halifax ISA promise”). Well, I wasn’t worried until I received a letter from the Halifax saying that they could not complete the process of opening my account until I had provided information to verify my identity. Note that this letter confirmed the account details as being the existing ISA I hold with them. The letter concluded by saying that I should “call into any of our branches to confirm” the identity information requested. It also said that if I had any questions I should call a telephone number provided.

I have several accounts with the Halifax, including a mortgage account, but not a current account. My first mortgage in 1977 was with the Halifax. The Halifax has confirmed my identity often, and indeed fairly recently since the ISA in question was opened only one year ago. My nearest Halifax branch is a round trip of 25 miles away. I was at the time feeling pretty grumpy and in no mood to make an unnecessary 25 mile round trip to go through yet another identity check when I could call them to sort out the issue. (I should add as background here that I suffer from both arthritis and a form of inflammatory arthritis commonly known as gout. At the time I received the letter I was suffering from a gout attack. The normal treatment for pain relief in gout is NSAIDs such as aspirin, ibuprofen or diclofenac. I can’t take any of those pain killers because I am allergic to the damned things. The main drug prescribed for relief of gout inflammation is called colchicine. Colchicine has an interesting set of side effects (look them up). Now figure out why I was grumpy and disinclined to make an unnecessary 25 mile round trip.)

I called the 0845 number only to be greeted by the monumental stupidity of one of those automated systems which requires entry of a specific type of number before it will take you any further. Such systems seem to have been designed specifically to wind you up to the stage that you give up and go away. Mistakes numbers 1 and 2 here: first, referring customers to such a number in a letter which says “call here if you wish to discuss any questions arising from this letter”; and second, assuming that all callers to this number will have current accounts and thus possess the magic number required to use the system successfully. Eventually, however, through persistence, I got to a human being. The human being in question appeared to be a nice helpful young man (NHYM) in a call centre. Unfortunately, after listening to my explanation as to why I was calling, the NHYM in question said he was “not trained to answer” my particular question. I know that such calls are recorded, but I found the precise wording he used rather odd: obviously part of a script he was forced to follow in certain cases. I guess that there are certain key phrases which must be used in all transactions for regulatory reasons. As an aside, I find it sad that even in the situation where you do eventually get to speak to a human being after going through a brain dead automated process, that human being is then forced to act and speak like an automaton.

NHYM number one then passed me back to the automated system whilst reassuring me that he was actually passing me to a colleague who would be able to respond to my particular problem. Whilst I was on hold, the automated system asked me if I was aware that I could get nearly all my questions answered by going on-line to the Halifax web site. Mistake number 3 lies in assuming that anyone who prefers to talk to a human being on a telephone is remotely interested in (or maybe even capable of) using a web service. Those people who have got to the telephone are either web users who have found that the site does /not/ meet their requirements or they are people who actively prefer not to use a website. Referring either group to the web is thus both counter productive and irritating.

Having been on hold for a short period and having ignored all requests to press button “X” or “Y” I was again eventually connected to a (different) NHYM in a call centre. I then again explained my situation: viz. I had received a letter asking me to schlepp up to my nearest Halifax in order to provide documentary proof that I was the same bloke who had opened a particular ISA one year ago. I pointed out that the Halifax knew who I was from previous encounters (I listed various accounts in evidence) and that it should be relatively easy to confirm this on the telephone – hence my call. NHYM number two then took me through various “security questions” which will be familiar to anyone who has a UK bank account (give full name, confirm first line of address and post code, give balance of account in question, give National Insurance Number (it is an ISA), state date of birth). Having done all this, NHYM number two then thanked me and said that he would arrange for me to receive an identifying number which I could use in future telephone banking interactions which would prevent me having to go through this rigmarole all over again. However, he said, I would still have to go into my nearest Halifax branch to prove my identity because this is a “legal requirement” and part of “our know your customer programme”.

I confess that by this time my patience was stretched a little thin and I may have been somewhat abrupt with the NHYM in expressing both my incredulity that this should still be necessary and my intense irritation that the Halifax should assume it OK to insist on my travelling to them when I have already been through the necessary hoops more than once. I pointed out that I had just gone through the process of proving I was who I said I was to his satisfaction in order to meet their security requirements. I also pointed out that for all he knew, I could be a disabled old man incapable of leaving his home without assistance (not actually true, but not so far from the truth) and that it was therefore a little unreasonable to expect me to do so. NHYM sympathised, said that he would pass on my complaint, but felt obliged to point out that any formal complaint would not be upheld because Halifax was merely obeying its legal obligations to identify its customers and the source of their funds. (I know for a fact that this is utter bollocks and that banks choose how they should meet their obligations under money laundering regulations, but I did not feel that this would be a fruitful line to take with the NHYM so I limited myself to forcefully asking him to lodge my complaint). Before concluding our conversation, I apologised to the NHYM for venting my frustration upon him and told him that I realised that he was following instructions in a difficult job and that he was in no way to feel himself at fault for the failings of his employer and its systems.

I then made the trip to my nearest Halifax. Before going however, I decided that I should take the opportunity of the visit to close an old branch book based savings account I hold as trustee for one of my grandsons and transfer the balance to a newer account I hold elsewhere which offers a much better rate (they play the same “bonus interest rate” trick on poor defenceless children.)

On arrival I was greeted by an NHYL (this time a nice helpful young lady) who listened to my story and then told me that what I had described could not possibly have happened because I was not allowed to transfer new funds into an existing fixed rate ISA account. What I must have done (according to her) was to open a new ISA and request transfer of funds into that. I showed her the documentary evidence refuting this statement and also pointed out that the money had obviously been requested by the Halifax because it had left my other account about a week ago. NHYL again said that this was not possible because the Halifax would not request the funds until the account had been set up. The fact that the letter I held in my hand said that the account could not be opened until they had verified my identity proved that the account had not yet been set up. I pointed to the account number and pointed out that this number was my existing ISA account which was already open, so perhaps the Halifax had requested the funds in order to place them in my existing account. Again, NHYL said this was impossible. I then asked where my money was, because it clearly was not in my other account and the Halifax appeared not to have it. She proved unable to answer this and suggested that I check with my other bankers as to what they had done with the money. So I trotted (slowly) up the road to the other bank and spoke to yet another NHYL (though this one actually /was/ helpful). Apparently, despite the Halifax NHYL’s protestations to the contrary, the Halifax had requested transfer of my ISA funds almost immediately. My bankers had simply responded as they were expected to do and passed the money on. On hearing my full story, and learning that I did not now wish to pass my funds to the Halifax (old account or not), NHYL number two spoke to her colleagues in the bank’s ISA section and confirmed that once the funds were returned to them (as they would have to be if the account opening process was not completed) my old ISA would be re-instated.

I thanked the helpful lady and made my way back to the Halifax where I updated the Halifax version of the NHYL on the position. Whilst there I also asked how I should go about closing my grandson’s account. I was directed to the counter. At the counter I entered what appeared to be yet another banking bubble divorced from reality. My request to close the account and let me have a cheque made out in my grandson’s name was greeted with the response that it would not be possible to provide a cheque for a sum less than £500.00. I was offered cash. I said that I would prefer not to take cash because (bizarre as it may sound) I knew that the bank holding my grandson’s account did not accept cash, dealing only in cheques. (Sometimes I despair). I then asked if the cashier could just transfer the balance of the account to my grandson’s other account and then close the old account. Reader, you will not be surprised to learn that this proved impossible.

I took the cash and left.

As I said at the start of this rant, I really should know better. Despite all the (supposed) huge investment in technology in our banking systems, systems which we rely upon in ways which are deeply fundamental to our society, those systems consistently fail to meet quite simple needs. Frankly, this terrifies me.

Note carefully that all of the problems relating to my ISA transfer described above could have been prevented very simply. All that was necessary was a check in the on-line system which should have popped up a warning when I attempted to transfer funds saying “Sorry, you cannot transfer funds into this account. Do you wish to open a new ISA?”

To which my answer would have been: “No”.

Categories: LUG Community Blogs

Andrew Savory: Netflix versus Blu-Ray

Sun, 19/01/2014 - 01:29

Which is better … streaming content or buying shiny plastic discs?

It’s surely an unfair comparison because the constraints of a service like Netflix (delivering uninterrupted video over a range of network qualities) are very different from those of delivering content on physical media. But I was looking to justify my purchase of Battlestar Galactica (BSG) on Blu-Ray, so here are some screen captures as a very unscientific comparison. The screen captures are only approximately the same frame on each medium, but should be close enough for a rough comparison.

The Netflix screen captures were done streaming on a fibre internet connection using a Mac with Silverlight installed, but with no other adjustments. The Battlestar Galactica Blu-Ray claims to be “High Definition Widescreen 1.78:1” (but doesn’t define high definition further … is it supposed to be 720p or 1080p?).

Here are the two sources at 100% magnification:

And at 200% magnification:

And at 300% magnification:

In this instance, Netflix does a pretty good job, although it looks a little blurry. BSG is probably a bad choice for showing off Blu-Ray in comparisons: it’s heavily processed to make it look grainy like film, and almost every single scene is very dark. I doubt it was shot on the latest HD cameras, either (the miniseries first aired in 2003, the full series began in 2004). But since more recent series like Game of Thrones aren’t available on Netflix, this is the best I’ve got for comparisons at the moment.

Of course, even if the Blu-Ray and Netflix versions were identical, a big benefit of those shiny plastic discs is they can be used off-line and don’t require a continuing subscription.

Categories: LUG Community Blogs

Mick Morgan: thrustvps compromised?

Sat, 18/01/2014 - 16:29

I have not used thrust since my last contract expired. I left them because of their appalling actions at around this time last year. However, today I received the following email from them:

From: Admin
To: xxx@yyy
Subject: Damn::VPS aka Thrust::VPS
Date: Sat, 18 Jan 2014 03:28:06 +0000

This is a notification to let you know that we need to verify for reduce fraud.

We want your data as soon as possible.

The data that we need is as follows:

Server Username (Included)
Server Password (Included)
Full Name (Included)
Address (Included)
City(Included)
State (Included)
ZIP (Included)
Phone Number (For Call To Verify)
Country (Included)
Paypal Email (If Order With Paypal)
Paypal Password (If Order With Paypal)
Credit Card Information (If Order With Credit Card)
Scan Of Credit Card Front And Back (If Order With Credit Card)

Data is sent to Email : thrustvps@yahoo.com

Thanks in advance for your patience and support.

http://damnvps.com – Damn::VPS – We give a damn

Now, apart from the fairly obvious phishing nature of this email (you want me to scan my credit card front and back and send you a picture? Right…), and the request to reply to an address other than the sender’s (“Data is sent to….”), it actually looks to me as if the email really came from Thrust. Certainly the full headers (including “Return-Path:”, “Reply-To:”, “Received:” and even “Message-Id:”) look remarkably similar to the real ones I have from earlier mails from Thrust. A normal phishing email will usually spoof the “From:” address and use the “Reply-To:” to capture return emails at the scammer’s address. The fact that this email actually asks (in grammatically poor English) that you reply to a yahoo address suggests that the scammers are not that sophisticated.

I may not have much time for Thrust, but I have even less time for spammers and scammers, so I forwarded the email to Thrust with a recommendation that they check it out and let their customers know that there appeared to be a scam going on in their name. I also checked their website to see if they had any alert thereon. The website (as at 15.00 today) appears to be unreachable (and I have tested from the UK, San Francisco, NYC and Amsterdam). With the website down and dodgy mail appearing to come from a legitimate Thrust mailserver address, it suggests to me that they may have suffered a compromise. Certainly it looks to me as if their customer email database has been compromised (the address I got the email on was not my normal address; rather, it was the one I use for contacts such as this). Whether that means any of their other account details have also been stolen I cannot be sure.

But I am glad that I am no longer a customer.

Categories: LUG Community Blogs

Jonathan McDowell: Fixing my parents' ADSL

Sat, 18/01/2014 - 07:22

I was back at my parents' over Christmas, as usual. Before I got back, my Dad had mentioned they'd been having ADSL stability issues. Previously I'd noticed some issues with keeping a connection up for more than a couple of days, but it had got so bad that he was noticing problems during the day. The eventual resolution isn't going to surprise anyone who's dealt with these things before, but I went through a number of steps to try and improve things.

Firstly, I arranged for a new router to be delivered before I got back. My old Netgear DG834G was still in use and, while it didn't seem to have been the problem, I'd been meaning for a while to get something with 802.11n instead of the 802.11g it supports. I ended up with a TP-Link TD-W8980, which has dual-band wifi, ADSL2+, a GigE switch, and looked to have some basic OpenWRT support in case I want to play with that in the future. Switching over was relatively simple and as part of that procedure I also switched the ADSL microfilter in use (I've seen these fail before with no apparent cause).

Once the new router was up I looked at trying to get some line statistics from it. Unfortunately, although it supports SNMP, I found it didn't provide the ADSL MIB, meaning I ended up doing some web scraping to get the upstream/downstream sync rates, SNR and attenuation details. Examination of these over the first day indicated an excessive amount of noise on the line. The ISP offer the ability in their web interface to change the target SNR for the line; I increased this from 6dB to 9dB in the hope of some extra stability. This resulted in a 2Mb/s drop in the sync speed for the line, but as this brought it down to 18Mb/s I wasn't too worried about that.
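For illustration, the scraping side can be as simple as a shell loop polling the router's status page. This is only a sketch: the URL, credentials and regular expressions below are assumptions, not the TD-W8980's real interface, so the actual page markup would need inspecting first.

#!/bin/bash
# Poll the router's status page once a minute and append timestamped
# figures to a log. The URL, login and regexes are placeholders.
URL="http://192.168.1.1/status/adsl.htm"
LOG="$HOME/adsl-stats.log"
while true ; do
    page=$(curl -s -u admin:password "$URL")
    snr_down=$(grep -oP 'Downstream SNR[^0-9]*\K[0-9.]+' <<< "$page")
    snr_up=$(grep -oP 'Upstream SNR[^0-9]*\K[0-9.]+' <<< "$page")
    echo "$(date +%s) $snr_down $snr_up" >> "$LOG"
    sleep 60
done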

Watching the stats for a further few days indicated that there were still regular periods of excessive noise, so I removed the faceplate from the NTE5 master BT socket, removing all extensions from the line. This resulted in regaining the 2Mb/s that had been lost from increasing the SNR target, and watching the line for a few days confirmed that it had significantly decreased the noise levels. It turned out that the old external ringer that was already present on the line when my parents moved in was still connected, although it had stopped working some time previously. There was also an unused and much-spliced extension in place. Removing both of these and replacing the NTE5 faceplate left a line that was still stable. At the time of writing the connection has been up since before the new year, significantly longer than it had managed for some time.

As I said at the start I doubt this comes as a surprise to anyone who's dealt with this sort of line issue before. It wasn't particularly surprising to me (other than the level of the noise present), but I went through each of the steps to try and be sure that I had isolated the root cause and could be sure things were actually better. It turned out that doing the screen scraping and graphing the results was a good way to verify this. Observe:

The blue/red lines indicate the SNR for the upstream and downstream links; the initial lower area is when this was set to a 6dB target, and the later area is the 9dB target. Green shows the forward error correction (FEC) errors divided by 100 (to make everything fit better on the same graph). These are correctable, but still indicate issues. Yellow shows CRC errors, indicating something that actually caused a problem. They can be clearly seen to correlate with the FEC errors, which makes sense. Notice the huge difference removing the extensions makes to both of these numbers. Also notice just how clear graphing the data makes things – it was easy to show my parents the graph and indicate how things had been improved and should thus be better.
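For completeness, here is the sort of thing that works for the plotting side. This sketch assumes the whitespace-separated log format from the polling loop above (epoch time, downstream SNR, upstream SNR); real scraped data would carry more columns.

gnuplot <<'EOF'
# epoch seconds in column 1, SNR figures in columns 2 and 3
set xdata time
set timefmt "%s"
set format x "%d/%m"
set terminal png size 1024,480
set output "adsl-snr.png"
plot "adsl-stats.log" using 1:2 with lines title "downstream SNR (dB)", \
     "adsl-stats.log" using 1:3 with lines title "upstream SNR (dB)"
EOF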

Categories: LUG Community Blogs

Chris Lamb: Captain Phillips: Pontius Pirate

Fri, 17/01/2014 - 13:27

Somalia's chief exports appear to be morally-ambiguous Salon articles about piracy and sophomoric evidence against libertarianism. However, it is the former topic that Captain Phillips concerns itself with, inspired by the hijacking of the Maersk Alabama container ship in 2009.

What is truth? In the end, Captain Phillips does not rise above Pontius Pilate in providing an answer, but it certainly tries using more refined instruments than irony or leaden sarcasm.

This motif pervades the film. Obviously, it is based on a "true story" and brings aboard that baggage, but it also permeates the plot in a much deeper sense. For example, Phillips and the US Navy lie almost compulsively to the pirates, whilst the pirates only really lie once (where they put Phillips in greater danger).

Notice further that Phillips only starts to tell the truth when he thinks all hope is lost. These telling observations become even more fascinating when you realise that they must be based on the testimony of the, well, liars. Clearly, deception is a weapon to be monopolised and there are few limits on what good guys can or should lie about if they believe they can save lives.

Even Phillips' nickname ("Irish") is a falsehood – he straight-up admits he is an American citizen.

Furthermore, there is an utterly disarming epilogue where Phillips is being treated for shock by clinically efficient medical staff. Not only will this scuttle any "blanket around the shoulders" cliché, but it is probably a highly accurate portrayal of what actually happens post-trauma. This echoes the kind of truth Werner Herzog aims for in his filmmaking, as well as his guilt-inducing duality between uncomfortable lingering and compulsive viewing.

Lastly, a starter for a meta-discussion: can a film based on real-world events even be "spoilered"? Hearing headlines on the radio before you read your newspaper hardly robs you of a literary journey...

Captain Phillips does have some quotidian problems. Firstly, the only tool for ratcheting up tension is for the Somalians to launch verbal broadsides at the Americans, with each compromise somehow escalating the situation. This technique is effective but, well before the climactic rescue scene—where it is really needed—it has been subject to the most extreme diminishing returns.

(I cannot be the first to notice the "Africans festooned with guns shouting incomprehensibly" trope – I hope it is based on a Babel-esque mechanism of disorientation from miscommunication rather than anything more unsavoury.)

The racist idea that Africans prefer an AK-47 rotated about the Z-axis is socially constructed.

Secondly, the US Navy acts like a teacher with an Ofsted inspector observing quietly from the corner of the classroom; it is far too well-behaved to be believable, with no post-kill gloating or even the tiniest of post-arrest congratulations. Whilst nobody wants to see the Navy overreact badly to other military branches getting all the glory, nobody wants to see a suspiciously bland recruitment vehicle either. Paradoxically, this hermetic treatment made me unduly fascinated by them, as if they were part of some military "uncanny valley". Two quick observations:

  • All US—Somali interactions are recorded by a naval officer. No doubt a value-for-money defense against a USS Abu Ghraib, but knowing the plot is based on factual events, it was perhaps a little too Baudrillardian to ponder how the presence of the Navy's cameras in a scene actually lent weight to the film's version of events, crucially without me even knowing whether the parallel "real-life" footage is verifiable or not.
  • The navigational computers not only seem to require lines to be drawn repeatedly between points of interest, but the Maersk Alabama's arbitrary relabelling as MOTHERSHIP seems to imply that an officer could humorously rename a radar contact to something unbecoming of a 12A classification.

The drone footage: I'd love to write an essay about how Call of Duty might have influenced (or even be) cinema.

Finally, despite the title, the film is actually about two captains; the skillful liar Phillips and ... well, that's the real problem. Whilst Captain Muse is certainly no caricatured Hook, we are offered little depth beyond a "You're not just a fisherman" faux-revelation that leads nowhere. I was left inventing reasons for his akrasia so that he made any sense whatsoever.

One could charitably argue that the film attempts to stay objective on Muse, but the film's inability to take any obvious ethical stance actually seems to confuse and then compromise the narrative. What deeper truth is actually being revealed? Is this film or documentary?

Worse still, the moral vacuum is invariably filled by the viewer's existing political outlook: are Somali pirates victims of circumstance who are forced into (alas, regrettable) swashbuckling adventures to pacify plank-threatening warlords? Or are they violent and dangerous criminals who harbour an irrational resentment against the West, flimsily represented by material goods in shipping containers?

Your improvised answer to this Rorschach test will always sit more haphazardly in the film than any pre-constructed treatment ever could.

6/10

Categories: LUG Community Blogs

Steve Engledow (stilvoid): TODO

Thu, 16/01/2014 - 17:11

New year this year passed basically the same as last year, though even more enjoyably.

I decided I'd better review my TODO list from last year so here's a diff :)

-My first new year as a dad was a very pleasant one. The Mrs and I polished off a bottle of champagne (I never used to like the stuff - I still can't get enough of it now) and put our favourite tunes on all night. Dylan woke up at precisely two minutes to midnight :)
+My second new year as a dad was a very pleasant one. The Mrs and I polished off two bottles of champagne (I never used to like the stuff - I still can't get enough of it now) and put a wide variety of tunes on all night. Dylan didn't wake up all night :)

New Year's Resolutions

I don't normally make new year's resolutions but I'll give it a go. Here's my resolution list for this year including those not done from last year.

  • Sort out NatWest

  • Sort out LoveFilm

  • Buy a house

  • Read even more (fiction and non-fiction)

  • Write at least one short story

  • Write some more games

  • Go horse riding

  • Learn some more Turkish

  • Play a lot more guitar

  • Lose at least a stone (in weight, from myself)

  • Try to be less of a pedant (except when it's funny)

  • Try to be more funny ;)

Categories: LUG Community Blogs

Andrew Savory: Home Automation: buttons on the desktop

Wed, 15/01/2014 - 19:49

I want a button on my Mac’s desktop to turn on or turn off the lights I have controlled by the Raspberry Pi. Here’s what I’ve got so far.

First, I wrote a script on the Pi to turn the lights on:

#!/bin/bash
# turn all three smart plugs on via telldus
/usr/bin/tdtool --on 1
/usr/bin/tdtool --on 2
/usr/bin/tdtool --on 3

For everything here, I also wrote the equivalent to turn the lights off.

Next, I tested running the script via SSH from my Mac:

ssh -f user@raspberrypi.local /home/user/lights_on.sh &>/dev/null

You can use the AppleScript Editor to make pseudo-apps, so I wrote an AppleScript to execute the SSH command:

-- open a Terminal window and run the remote script
tell application "Terminal"

    do script "ssh -f user@raspberrypi.local /home/user/lights_on.sh &>/dev/null"

    activate

end tell

-- give the command time to finish, then tidy up
delay 15

tell application "Terminal"

    quit

end tell

I saved this as file format "Application" in my Applications folder, then dragged it to my desktop (an alias is automatically created, leaving the original in Applications). I can now double-click the app and the script runs, or launch it from Spotlight or the wonderful Alfred.

It works, but it's ugly: a Terminal window pops up for around 15 seconds. There must be a better way.

After some searching, I came across Use automator to create X11 shortcuts in OSX. It's similar to the AppleScript trick, but uses Automator instead, so there's no need for Terminal. I put the SSH command into the "Run shell script" workflow action, and saved it as an application.

It works! Now I can turn the lights on or off without the Terminal window popping up.

There's a couple of issues:
  • The tdtool command takes a long time to execute on the Pi. It takes about three seconds for all three plugs to switch, whereas the smart plug remote has an “all on / all off” button which is instant. I need to find out why the command is so slow, and/or a way to control all three in one go.
  • I don’t really want “Lights on” and “Lights off” apps, I want a single app that toggles the state. This could be done by making the server-side script smarter (see the sketch below), but I’d really like the app icon to reflect the status of the lights too.
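A minimal sketch of that smarter server-side script, run on the Pi in place of lights_on.sh and lights_off.sh: since the plugs are transmit-only and can't be queried, it has to remember the last state itself. The state file location and device ids are my assumptions.

#!/bin/bash
# lights_toggle.sh: flip all three plugs, tracking state in a file
# because the plugs cannot report whether they are on or off.
STATE="$HOME/.lights_state"
if [ "$(cat "$STATE" 2>/dev/null)" = "on" ] ; then
    for id in 1 2 3 ; do /usr/bin/tdtool --off $id ; done
    echo off > "$STATE"
else
    for id in 1 2 3 ; do /usr/bin/tdtool --on $id ; done
    echo on > "$STATE"
fi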
Room for improvement, but this is good enough for now.

Next up: automating sunsets.
Categories: LUG Community Blogs

Andrew Savory: Home Automation: Turn it on again

Wed, 15/01/2014 - 00:55

In Pi three ways I wrote:

what happens when you combine an RF transmitter, smart sockets with RF receivers, and a Raspberry Pi?

I’m still finding out what can be done, but this is what I’ve discovered so far.

This is what I bought:

First, I set up each of the smart plugs, and paired them with the remote control.

This is already a big improvement: my home office has bookshelves filled with LED lights, but with inaccessible switches. Being able to turn them all on and off from the remote is awesome, but I’d really like them to turn on automatically, for example at sunset. So I need some compute power in the loop. Time for the Pi.

The Pi is running Raspbian. I followed the installation instructions for telldus on Raspberry Pi. See also R-Pi Tellstick core for non-Debian instructions.

Next I tried to figure out the correct on/off parameters in tellstick.conf for the smart plugs. The Tellstick documentation is a bit sparse. Tellstick on Raspberry Pi to control Maplin (UK) lights talks about physical dials on the back of the remote control; sadly the Bye Bye Standby remote doesn’t have this.

Each plug is addressed using a protocol and a number of parameters. In the case of the Bye Bye Standby, it apparently falls under the arctech protocol, which has four different models, and each model uses the parameters “house” and sometimes “unit”.

Taking a brute-force approach, I generated a configuration for every possible parameter for the arctech protocol and codeswitch model:

count=1
for house in A B C D E F G H I J K L M N O P ; do
    for unit in {1..16} ; do
        cat <<EOF
device {
   id = $count
   name = "Test"
   protocol = "arctech"
   model = "codeswitch"
   parameters {
      house = "$house"
      unit = "$unit"
   }
}
EOF
        ((count++))
    done
done

I then turned each of them on and off in turn, and waited until the tellstick spoke to the plugs:

count=1
for house in A B C D E F G H I J K L M N O P ; do
    for unit in {1..16} ; do
        echo "id = $count, house = $house, unit = $unit"
        tdtool --on $count
        tdtool --off $count
        ((count++))
    done
done

This eventually gave me house E and unit 16 (and the number of the corresponding automatically generated configuration, 80):

tdtool --on 80
Turning on device 80, Test – Success

But this only turned on or off all three plugs at the same time. I wanted control over each plug individually.

I stumbled upon How to pair Home Easy plug to Raspberry Pi with Tellstick, and that gave me enough information to reverse the process. Instead of getting the tellstick to work out what code the plugs talk, in theory I need to get the tellstick to listen to the plug for the code.

So this configuration should work, in combination with the tdlearn command:

device {
   id = 1
   name = "Study Right"
   protocol = "arctech"
   model = "selflearning-switch"
   parameters {
      house = "1"
      unit = "1"
   }
}

However, this tiny footnote on the telldus website says:

Bye Bye Standby Self learning should be configured as Code switch.

So it seems it should be:

device {
   id = 1
   name = "Study Right"
   protocol = "arctech"
   model = "codeswitch"
   parameters {
      house = "1"
      unit = "1"
   }
}

… which is exactly what I had before. Remembering of course to do service telldusd restart each time we change the config, I tried learning again:

tdtool --learn 1
Learning device: 1 Study Right - The method you tried to use is not supported by the device

Well, bother. Looking at the Tellstick FAQ:

Is it possible to receive signals with TellStick?

TellStick only contains a transmitter module and it’s therefore only possible to transmit signals, not receive signals. TellStick Duo can receive signals from the compatible devices.

So it seemed like I was stuck with all-on, all-off unless I bought a TellStick Duo. Alternatively, I could expand my script to generate every possible combination in the tellstick.conf, and see if I can work out the magic option to control each plug individually. But since there are 1 to 67108863 possible house codes, this could take some time.

Rereading Bye Bye Standby 2011 compatible? finally gave me the answer. You put the plug into learning mode, and get the Tellstick to teach the right code to the plug by sending an “off” or an “on” signal:

tdtool --off 3

So setting house to a common letter and setting units to sensible increments, I can now control each of the plugs separately.
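For illustration, the resulting tellstick.conf carries one device block per plug along these lines; the ids, names and the particular house letter and unit numbers here are my own choices, not anything the hardware dictates:

device {
   id = 1
   name = "Study Left"
   protocol = "arctech"
   model = "codeswitch"
   parameters {
      house = "E"
      unit = "1"
   }
}
device {
   id = 2
   name = "Study Right"
   protocol = "arctech"
   model = "codeswitch"
   parameters {
      house = "E"
      unit = "2"
   }
}

Each plug is then paired by putting it into learning mode and sending tdtool --on (or --off) with the matching id, remembering, as ever, the service telldusd restart after editing the config.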

Next up: some automation.

Categories: LUG Community Blogs

Mick Morgan: strip exif data

Sat, 11/01/2014 - 21:52

I have a large collection of photographs on my computer. And each Christmas the collection grows ever larger. I use digiKam to manage that collection, but as I have mentioned before, storing family photographs as a collection of jpeg files seems counter intuitive to me. Photographs should be on display, or at least stored in physical albums that family members can browse at will. At a push, even an old shoebox will do.

So when this Christmas I copied the latest batch of images from my camera to my PC, I did a quick count of the files I hold – it came to nearly 5,500. Ok, so many of these are very similar, and this is simply a reflection of the ease (and very marginal cost) of photographs today compared with the old 35mm days, but even so, that is a lot of images. Disappointingly few of these ever see the light of day because whilst both my wife and I can happily view them on-line, I don’t print enough of them to make worthwhile albums. Sure, actually /taking/ photographs is cheap these days, but printing them at home on the sort of inkjet printer most people posess is rather expensive. Which is where online print companies such as photopanda, snapfish, photobox or jessops come in.

Most of these companies will provide high quality prints in the most popular 6″ x 4″ size for around 5 pence each – so a batch of 40 Christmas pictures is not going to break the bank. But one nice innovation of the digital era is that you can get your photos pre-printed into hard back albums for very reasonable prices. Better yet, my wife pointed me to a “special offer” (70% off) being run by a site she has used in the past. That was such a bargain that I decided to go back over my entire collection and create a “year book” for each of the eleven years of digital images I currently hold (yes, I don’t get out much).

However, I don’t much like the idea of posting a large batch of photographs to a site run by a commercial company, even when that company may have a much less cavalier approach to my privacy than does say, facebook. Once the photographs have been posted, they are outside my control and could end up anywhere. And of course I am not just concerned with the actual images, but the metadata that goes with those images. All electronic images created by modern cameras (or more usually these days, smartphones) contain EXIF data which at the minimum will give date and time information alongside the technical details about the shot (exposure, flash timing etc). In the case of nearly all smartphones, and increasingly with cameras themselves, the image will also contain geo-location details derived from the camera’s GPS system. I don’t take many images with my smartphone, and in any case, its GPS system is resolutely turned off, but my camera (a Panasonic TZ40) contains not just a GPS location system but a map display capability. Sometimes too much technology gets crammed into devices which don’t necessarily need them. As digicamhelp points out, many popular photo sharing sites such as Flickr or picasa helpfully allow viewers to examine EXIF data on-line. It is exactly this sort of capability which is so scarily exploited by ilektrojohn’s creepy tool.

So, before posting my deeply personal pictures of my cats to a commercial site I thought I would scrub the EXIF data. This is made easy with Phil Harvey’s excellent exiftool. This tool is platform independent so OSX and Windows users can take advantage of its capabilities – though of course it is much more flexible when used in conjunction with an OS which offers shell scripting.

Exiftool allows you to selectively edit, remove or replace any of the EXIF data stored in an image. But in my case I simply wanted to remove /all/ EXIF data. This is easy with the "-all=" switch. Thus, having chosen (and copied) the images I wanted to post to the commercial site, it was a matter of a minute or two to recursively edit those files with find – thus:

find . -type f -iname '*.jpg' -exec exiftool -all= {} \;
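One caveat worth knowing: by default exiftool keeps each original alongside the stripped copy with an "_original" suffix. Since these are already copies, the backups can safely be skipped, and it is worth spot-checking the result afterwards. For example (the flags are standard exiftool; the filename is just an illustration):

find . -type f -iname '*.jpg' -exec exiftool -all= -overwrite_original {} \;

exiftool -a -G1 -s some-image.jpg

The second command lists whatever tags remain; after stripping there should be little beyond basic file and JFIF information.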

Highly recommended – particularly if you are in the habit of using photo sharing sites.

Categories: LUG Community Blogs

Andrew Savory: Mobile modem

Fri, 10/01/2014 - 23:01

I was trying to get a handle on how much mobile data download speeds have improved over the years, so I did some digging through my archives. (The only thing I like more than a mail archive that spans decades is a website that spans decades. Great work, ALUG!) Here’s some totally arbitrary numbers to illustrate a point.

In response to A few Questions, this is what I wrote in May 2002:

[Nokia] 7110 works fine with IR (7110 has a full modem). The 7110 supports 14.4bps
connections, but I bet your telco doesn’t

That should have been 14.4kbps (14,400bps). In 2002 the phones were ahead of the network’s ability to deliver. In 2014, not much has changed.

In GPRS on Debian, this is what I wrote in November 2002:

I finally took the plunge and went for GPRS [..]  (up to 5x the speed of a dialup connection over a GSM mobile connection)

Remember when we did dialup over mobile connections? GSM Arena on the Sony-Ericsson T68i states 24-36 kbps. I’m assuming I got the lower end of that.

In 2003 I was using a Nokia 3650 GPRS connection. GSM Arena on the Nokia 3650 states 24-36 kbps. Let’s be generous and assume reality was right in the middle, at 30 kbps.

In 2004 I got a Nokia 6600, which according to GSM Arena could also do 24 – 36 kbps. It was a great phone, so let’s assume the upper bound for the 6600.

In 2008 I upgraded to 3G with the Nokia N80, and wrote:

3G data connections are dramatically better than GPRS

… but sadly I didn’t quantify how much better. According to GSM Arena, it was 384 kbps.

That’s a pretty good and pretty dramatic speed increase.

But then in 2009 I was using the Nokia N900 (and iPhone, HTC Hero, Google Nexus One, …). GSM Arena on the Nokia N900 states a theoretical 10Mbps … quite the upgrade, except O2 were limited to 3.6Mbps.

In 2012 I was using the Samsung Galaxy SII. GSM Arena on the Samsung Galaxy SII promises 21Mbps.

And now the Sony Xperia Z Ultra supports LTE at 42Mbps and 150Mbps. Sadly, the networks don’t yet fully support those speeds, but if they did, the chart would be truly dramatic: 2003-2008 starts to look like a rounding error.

I don’t need to use a modem or infrared, either. Things have really improved over the last twelve years!

(This post is probably best read in conjunction with Tom’s analysis of Mobile phone sizes.)

Categories: LUG Community Blogs

MJ Ray: Request for West Norfolk to Complete the PCC Consultation

Thu, 09/01/2014 - 14:19

Please excuse the intrusion into your usual software and co-op news items, but Vine seems broken and, as part of my community and democratic interests, I’d like to share this short clip quoting Norfolk’s Deputy Police Commissioner Jenny McKibben on why Commissioner Stephen Bett believes it’s important to get views from the west of the county about next year’s police budget:

http://www.news.software.coop/wp-content/blogs.dir/6/files/2014/01/depNorfolkPCC-consultation.mp3

Personally, with a King’s Lynn + West Norfolk Bike Users Group hat on, I’d like it if people supported a 2% (£4/year average) tax increase to reduce the police’s funding cut (the grant from gov.uk is being cut by 4%) so that we’re less likely to have future cuts to traffic policing. The consultation details and response form are on the PCC website.

Categories: LUG Community Blogs

Andrew Savory: Multi media

Tue, 07/01/2014 - 22:39

My movie collection is a bit of a mishmash, a bunch of different file formats all sat on a Drobo. In the early days I would create AVI, MKV or MP4 rips of my DVDs depending on how and where I wanted to watch them. Sometimes the rips would be split across multiple files. More recently I just copied the DVD wholesale, for conversion later. As a result, ensuring a consistent set of files to copy onto my phone or tablet is a bit of a pain.

With the arrival of the RaspBMC media server, I decided to clean everything up. Some constraints I set:

  • I want to avoid loss of quality from source material (so no re-encoding if possible, only copying).
  • I should be able to do everything from the command line so it can be automated (manipulating video files can be a slow process even without encoding).
  • I want to combine multiple DVDs where possible for easier viewing.
  • My end goal is to have MKV files for most things.
Here’s what I’ve got working so far. Bug fixes and improvements welcome.

~

AVI files

You can glue AVI files together (concatenate them) and then run mencoder over the joined up file to fix up the indexes:

brew install mplayer

cat part1.avi part2.avi > tmp.avi && \
/usr/local/bin/mencoder -forceidx -oac copy -ovc copy tmp.avi -o whole.avi

This forces mencoder to rebuild the index of the avi, which allows players to seek through the file. It encodes with the “copy” audio and video codec, i.e. no encoding, just streamed copying.

~

MKV files

MKV is Matroska, an open-source, open-standard video container format. The process is similar to AVI files, but the mkvmerge tool does everything for you:

brew install mkvtoolnix

/usr/local/bin/mkvmerge -o whole.mkv part1.mkv +part2.mkv

This takes the two parts and joins them together. Again, no re-encoding, just copying.

~

DVD rips

I started using RipIt to back up my DVDs; it can automatically encode DVDs, but once I got my Drobo I opted to keep the originals, so I always have the option to re-encode on a case-by-case basis for the target device without losing the best quality original.

I don’t need to touch most of the DVD copies, but a number of my DVDs are split across several disks, for example Starship Troopers and The Lord of the Rings.

One option would be to encode each DVD at the highest possible quality and then merge the AVI or MKV using the mechanisms above, but I want to avoid encoding if possible.

It turns out that the VOB files on a DVD are just MPEG files (see What’s on a DVD? for more details), so there’s no need to convert to AVI or MP4. We can glue them together as we did with the AVIs, then package them as MKV. The basic method is:

cat *.VOB > movie.vob

The problem is that we need to be selective about the VOB files that are included; there’s no point including DVD menu and setup screen animations, for example. A dirty hack might be to select only the VOB files bigger than a certain threshold size, and just hope that the movie is divided into logical chunks. Something like this, run in a movie directory:

find -s . -name '*.VOB' -size +50M

There’s a catch: the first VOB (vts_XX_0.vob) always contains a menu, so we need to skip those, and we don’t want the menu/copyright message (video_ts.vob):

find -s . \( -iname '*.VOB' ! -iname 'VTS_*_0.VOB' ! -iname 'VIDEO_TS.VOB' \) -size +50M 

We can then pipe the output of that find (the concatenated contents of our VOB files, via -exec cat) into ffmpeg and copy it into an MKV file. So far we’re assuming we only want the first audio stream (usually English), and I haven’t investigated how best to handle subtitles yet. The basic command is:

ffmpeg -i - -vcodec copy -acodec copy foo.mkv

There are a couple of issues with this. Concatenated VOBs carry discontinuous timestamps, so ffmpeg needs to be told to regenerate presentation timestamps (-fflags +genpts) or the result won’t seek cleanly; and it’s safer to name the output container explicitly (-f matroska) and to copy any subtitle streams too (-c:s copy).

So our final command is:

find -s . \( -iname '*.VOB' ! -iname 'VTS_*_0.VOB' ! -iname 'VIDEO_TS.VOB' \) -size +50M -exec cat {} \; \
  | ffmpeg -fflags +genpts -i - -f matroska -vcodec copy -acodec copy -c:s copy foo.mkv

The output should be an MKV file roughly the same size as the constituent .dvdmedia directories. You can test it using mkvinfo foo.mkv, which should output information on the MKV file. For some reason, ‘file foo.mkv’ does not recognise it as an MKV file, only as data.

~

Putting it all together

Now that we know how to handle the individual file formats, we can script the whole process.

The next step is to trawl through a disk full of movies and normalise them into one format. At this point, we’re well into XKCD territory (The General Problem, and Is It Worth The Time?), so most of it is left as an exercise for the reader, but a rough sketch follows.
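
Here’s a minimal sketch of the sort of script I have in mind, assuming the layouts used above: split AVIs and MKVs named like film-part1.avi / film-part2.mkv, and untouched RipIt rips as .dvdmedia directories. The naming convention, the Drobo path and the two-part assumption for MKVs are all mine, so treat it as a starting point rather than a tested pipeline:

#!/usr/bin/env bash
# normalise.sh - one pass over a movie directory (sketch; assumptions above)
set -euo pipefail

MOVIES="${1:-/Volumes/Drobo/Movies}"

# 1. Join split AVIs, then rebuild the index so players can seek.
for part1 in "$MOVIES"/*-part1.avi; do
    [ -e "$part1" ] || continue
    base="${part1%-part1.avi}"
    cat "$base"-part*.avi > "$base.tmp.avi"
    /usr/local/bin/mencoder -forceidx -oac copy -ovc copy \
        "$base.tmp.avi" -o "$base.avi" && rm "$base.tmp.avi"
done

# 2. Join split MKVs; mkvmerge handles the container bookkeeping itself.
for part1 in "$MOVIES"/*-part1.mkv; do
    [ -e "$part1" ] || continue
    base="${part1%-part1.mkv}"
    /usr/local/bin/mkvmerge -o "$base.mkv" "$part1" +"$base-part2.mkv"
done

# 3. Repackage DVD rips as MKV, skipping menu/copyright VOBs as above
#    (BSD find's -s flag sorts entries, as in the earlier commands).
for rip in "$MOVIES"/*.dvdmedia; do
    [ -d "$rip" ] || continue
    find -s "$rip" \( -iname '*.VOB' ! -iname 'VTS_*_0.VOB' \
            ! -iname 'VIDEO_TS.VOB' \) -size +50M -exec cat {} \; \
        | ffmpeg -fflags +genpts -i - -f matroska \
            -vcodec copy -acodec copy -c:s copy "${rip%.dvdmedia}.mkv"
done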

Categories: LUG Community Blogs

Brett Parker (iDunno): Wow, I do believe Fasthosts have outdone themselves...

Sat, 04/01/2014 - 11:24

So, got a beep this morning from our work monitoring system. One of our customers' domain names is hosted with livedns.co.uk (which, as far as I can tell, is part of the Fasthosts franchise)... It appears that Fasthosts have managed to entirely break their DNS:

brettp@laptop:~$ host www.fasthosts.com
;; connection timed out; no servers could be reached
brettp@laptop:~$ whois fasthosts.com | grep -i "Name Server"
   Name Server: NS1.FASTHOSTS.NET.UK
   Name Server: NS2.FASTHOSTS.NET.UK
   Name Server: NS1.FASTHOSTS.NET.UK
   Name Server: NS2.FASTHOSTS.NET.UK
brettp@laptop:~$ whois fasthosts.net.uk | grep -A 2 "Name servers:"
    Name servers:
        ns1.fasthosts.net.uk  213.171.192.252
        ns2.fasthosts.net.uk  213.171.193.248
brettp@laptop:~$ host -t ns fasthosts.net.uk 213.171.192.252
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns fasthosts.net.uk 213.171.193.248
;; connection timed out; no servers could be reached
brettp@laptop:~$

So, that's Fasthosts' core nameservers not responding, good start! They also provide livedns.co.uk, so let's have a look at that:

brettp@laptop:~$ whois livedns.co.uk | grep -A 3 "Name servers:"
    Name servers:
        ns1.livedns.co.uk  213.171.192.250
        ns2.livedns.co.uk  213.171.193.250
        ns3.livedns.co.uk  213.171.192.254
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 213.171.192.250
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 213.171.193.250
;; connection timed out; no servers could be reached
brettp@laptop:~$ host -t ns ns1.livedns.co.uk 213.171.192.254
;; connection timed out; no servers could be reached

So, erm, apparently that's all their DNS servers "Not entirely functioning correctly"! That's quite impressive!
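
If you want to catch this sort of breakage yourself before the monitoring system beeps, a quick hand-rolled check is easy enough. A minimal sketch, using the domain and nameserver IPs from above (the 5-second timeout is an arbitrary choice):

#!/bin/sh
# Warn if any listed nameserver fails to answer an NS query for the domain.
domain=fasthosts.net.uk
for ns in 213.171.192.252 213.171.193.248; do
    if ! host -W 5 -t ns "$domain" "$ns" > /dev/null 2>&1; then
        echo "WARNING: $ns is not answering NS queries for $domain"
    fi
done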

Categories: LUG Community Blogs

John Woodard: A year in Prog!

Wed, 01/01/2014 - 20:56

It's New Year's Day 2014 and I'm reflecting on the music of the past year. Album-wise there were several okay...ish releases in the world of Progressive Rock. Steven Wilson's The Raven That Refused To Sing was not the absolute masterpiece some have eulogised, but a solid effort, though it did contain some filler. Motorpsycho entertained with Still Life With Eggplant; not as good as their previous album, but again a solid effort. Magenta, as ever, didn't disappoint with The 27 Club; wishing Tina Booth a swift recovery from her ill health.

The three stand-out albums for me, in no particular order, were Edison's Children's Final Breath Before November, which almost made it as album of the year, and Big Big Train's English Electric Full Power, which combined last year's Part One and this year's Part Two with some extra goodies to make the whole greater than the sum of the parts. Completing the three, Adrian Jones of Nine Stones Close fame pulled one out of the bag with his side project Jet Black Sea, which was very different and a challenging listen: hard going at first but surprisingly very good. This man is one superb guitarist, especially if you like emotion wrung out of the instrument like David Gilmour or Steve Rothery.

The moniker of Album of the Year goes to Fish for the incredible Feast of Consequences. A real return to form and his best work since Raingods With Zippos. The packaging of the deluxe edition, with a splendid book featuring the wonderful artwork of Mark Wilkinson, was superb. A real treat, with a very thought-provoking suite about the First World War that really hammered home the saying "Lest we forget". A fine piece that needs to be heard every November 11th.

Gig-wise, Fish at the Junction in Cambridge was again great. His voice may not be what it was in 1985 but he is the consummate performer, very at home on the stage. As a raconteur between songs he is every bit as entertaining as he is singing the songs themselves.

The March Marillion Convention in Port Zélande, Holland, where they performed their masterpiece Brave, was very special, as every performance of that incredible album is. The Marillion Conventions are always special, but Brave made this one even more special than it would normally be.

Gig of the year again goes to Marillion, at Aylesbury Friars in November. I had waited thirty years and forty-odd shows to see them perform Garden Party segued into Market Square Heroes, and that glorious night it came to pass. I am now one very happy Progger, or should that be Proggie? Never mind: Viva Progressive Rock!

Categories: LUG Community Blogs

Chris Lamb: 2013: Selected highlights

Tue, 31/12/2013 - 17:26

January

Entered monthly 10km races in Regent's Park, reducing my time from 55:07 on January 4th to 43:28 on December 1st.

February

Entered the Hell of Ashdown cyclosportive in sub-zero conditions for over 100 miles & 7,500 ft of elevation (actual photo).

March

Had my lute returned after it was damaged.

April

Had a time-trial bike built and raced my first triathlon, duathlon and aquathlon.

May

More biking, including a long ride with my brother. Also performed on the viola da gamba in Bach's St John Passion with Belsize Baroque.

June

Two big concerts: Monn's Cello concerto in G minor with the Zadok Baroque Orchestra followed by the Blackfriars Quartet performing Shostakovich's String Quartet No. 8.

July

Amongst more triathlon preparation, I performed in a Linden Baroque concert of Handel's Israel in Egypt.

August

Raced my biggest event of the year—a "Half-Ironman" triathlon—hitting my time goal.

September

Whilst procrastinating about writing some letters, I created a small service to send letters without a printer.

October

Started cooking a little more adventurously.

November

Performed Geminiani's arrangement of Corelli's La Folia in the Fitzwilliam Museum with Le Petit Orchestre.

December

Ramped up my running volume so I could go over 1000km for the year. (Strava profile)

Categories: LUG Community Blogs

Mick Morgan: http compression in lighttpd

Mon, 30/12/2013 - 22:39

Today I had occasion to test trivia’s page load times. I used the (admittedly fairly dated) website optimization test tool and was surprised to find that it reported that parts of the pages I tested were not compressed before delivery.

I have the default compression options set in my lighty configuration file as below:

compress.cache-dir = "/var/cache/lighttpd/compress/"
compress.filetype = ( "application/javascript", "text/css", "text/html", "text/plain" )

and the mod_compress server module is loaded, so I expected all the text, html and scripts loaded by my wordpress configuration to be compressed.

It turns out that for compression to work correctly in WordPress (or any other PHP-based web delivery mechanism) under lighty, you also need to enable compression in PHP itself. In all the time I have been running trivia on my own server I hadn’t done this. The fix is to set:

zlib.output_compression = On

in "/etc/php5/cgi/php.ini".
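
Once that’s set (and PHP restarted), it’s worth confirming from the command line that compressed pages are actually being served. A minimal check, with the URL as a placeholder for your own site:

curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip,deflate' \
    http://www.example.org/ | grep -i 'content-encoding'

If compression is working, this should print a Content-Encoding: gzip header; if it prints nothing, the page went out uncompressed.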

What I think I need to work on now is the number of scripts my theme and plugins load. Counterize in particular is beginning to feel a bit sluggish: generating traffic reports is now quite slow, and MySQL is chewing up a lot of CPU. I suspect that I may need to purge the database and start afresh in the new year – or find another nice traffic analysis tool.

Categories: LUG Community Blogs