Planet ALUG

Planet ALUG - http://planet.alug.org.uk/

Mick Morgan: nsa operation orchestra

Wed, 16/04/2014 - 22:30

In February of this year, Poul-Henning Kamp (a.k.a. “PHK”) gave what now looks to be a peculiarly prescient presentation as the closing keynote to 2014’s FOSDEM.

In the presentation (PDF), PHK posits an NSA operation called ORCHESTRA which is designed to undermine internet security through a series of “disinformation”, “misinformation”, or “misdirection” sub-operations. ORCHESTRA is intended to be cheap, non-technical, completely deniable, but effective. One of the opening slides gives ORCHESTRA’s “operation at a glance” overview as:

* Objective:
- Reduce cost of COMINT collection
* Scope:
- All above board
- No special authorizations
* Means:
- Eliminate/reduce/prevent encryption
- Enable access
- Frustrate players

PHK delivers the presentation as if he were a mid-ranking NSA staffer intending to brief NATO in Brussels. But “being American, he ends up [at FOSDEM] instead”. The truly scary part of this presentation is that it could all be completely true.

What makes the presentation so timely is his commentary on openssl. Watch it and weep.

Categories: LUG Community Blogs

Mick Morgan: more heartbleed

Wed, 16/04/2014 - 12:04

For any readers uncertain of exactly how the heartbleed vulnerability in openssl might be exploitable, Sean Cassidy over at existential type has a good explanation.

And if you find that difficult to follow, Randall Munroe over at xkcd covers it quite nicely.

My thanks, and appreciation as always, to a great artist.

Of course, Randall foresaw this problem back in 2008 when he published his take on the debian openssl fiasco.

Categories: LUG Community Blogs

Mick Morgan: pulitzer guardian

Wed, 16/04/2014 - 11:42

The Guardian and the Washington Post have been jointly awarded the Pulitzer prize for public service for their reporting of Edward Snowden’s whistleblowing on the NSA’s surveillance activities.

The Guardian reports:

The Pulitzer committee praised the Guardian for its “revelation of widespread secret surveillance by the National Security Agency, helping through aggressive reporting to spark a debate about the relationship between the government and the public over issues of security and privacy”.

Unfortunately that debate seems to be taking place in the USA rather than in the UK.

In typical Guardian style, one correspondent to today’s letters page says:

Congratulations to all. Can’t wait for the film. All the President’s Men II? Johnny Depp as Alan Rusbridger?

I’d pay to see that. But I’m not sure how it ends yet.

Categories: LUG Community Blogs

Mick Morgan: boot and nuke no more

Tue, 15/04/2014 - 19:54

I was contacted recently by a guy called Andy Beverley who wrote:

Hope you don’t mind me contacting you about one of your old blog posts “what gives with dban”. Thought I’d let you know that I forked DBAN a while ago, and produced a standalone program (called nwipe) that will run on any Linux OS. That means it will work with any Live CD, meaning much better hardware support.

It’s included in PartedMagic, as well as most other popular distros.

“No I don’t mind at all” is my response. In fact, since DBAN seems to be borked permanently, it is nice to see an alternative out there.

Andy’s nwipe page says that he could do with some assistance. So if anyone feels able to help him out, give him a call.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Netcat

Tue, 15/04/2014 - 01:30

I had occasion recently to need an entry in my ssh config such that connections to a certain host would be proxied through another connection. Several sources suggested the following snippet:

Host myserver.net
    ProxyCommand nc -x <proxy host>:<proxy port> %h %p

In my situation, I wanted the connection to be proxied through an ssh tunnel that I already had set up in another part of the config. So my entry looked like:

Host myserver.net
    ProxyCommand nc -x localhost:5123 %h %p
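
(For the curious: the tunnel in question comes from a dynamic SOCKS forward defined elsewhere in the config. Something along these lines, where the host name and port are purely illustrative:)

Host jumpbox
    DynamicForward 5123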

Try as I might, however, I just could not get it to work, always receiving the following message:

Error: Couldn't resolve host "localhost:5123"

After some head scratching, checking and double-checking that I had set up the proxy tunnel correctly, I finally figured out that it was because I had GNU netcat installed rather than BSD netcat. Apparently, most of the people on the internet use BSD netcat :)

Worse, -x is a valid option in both netcats but does completely different things depending on which you use; hence the less-than-specific-but-technically-correct error message.
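
To make that concrete, here is the same command line as each netcat reads it (using the options as documented below):

# BSD netcat: -x names a SOCKS proxy, so this connects to myserver.net:22
# through the SOCKS server listening on localhost:5123
nc -x localhost:5123 myserver.net 22

# GNU netcat: -x is the bare --hexdump flag and takes no argument, so
# "localhost:5123" is read as the destination host itself - hence the error
nc -x localhost:5123 myserver.net 22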

After that revelation, I thought it was worth capturing the commonalities and differences between the options taken by the two netcats.

Common options
  • -h

    Prints out nc help.

  • -i interval

    Specifies a delay time interval between lines of text sent and received. Also causes a delay time between connections to multiple ports.

  • -l

    Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host. It is an error to use this option in conjunction with the -p, -s, or -z options. Additionally, any timeouts specified with the -w option are ignored.

  • -n

    Do not do any DNS or service lookups on any specified addresses, hostnames or ports.

  • -p source_port

    Specifies the source port nc should use, subject to privilege restrictions and availability.

  • -r

    Specifies that source and/or destination ports should be chosen randomly instead of sequentially within a range or in the order that the system assigns them.

  • -s source

    Specifies the IP of the interface which is used to send the packets. For UNIX-domain datagram sockets, specifies the local temporary socket file to create and use so that datagrams can be received. It is an error to use this option in conjunction with the -l option.

  • -t in BSD Netcat, -T in GNU Netcat

    Causes nc to send RFC 854 DON'T and WON'T responses to RFC 854 DO and WILL requests. This makes it possible to use nc to script telnet sessions.

  • -u

    Use UDP instead of the default option of TCP. For UNIX-domain sockets, use a datagram socket instead of a stream socket. If a UNIX-domain socket is used, a temporary receiving socket is created in /tmp unless the -s flag is given.

  • -v

    Have nc give more verbose output.

  • -w timeout

    Connections which cannot be established or are idle time out after timeout seconds. The -w flag has no effect on the -l option, i.e. nc will listen forever for a connection, with or without the -w flag. The default is no timeout.

  • -z

    Specifies that nc should just scan for listening daemons, without sending any data to them. It is an error to use this option in conjunction with the -l option.

BSD netcat only
  • -4

    Forces nc to use IPv4 addresses only.

  • -6

    Forces nc to use IPv6 addresses only.

  • -b

    Allow broadcast.

  • -C

    Send CRLF as line-ending.

  • -D

    Enable debugging on the socket.

  • -d

    Do not attempt to read from stdin.

  • -I length

    Specifies the size of the TCP receive buffer.

  • -k

    Forces nc to stay listening for another connection after its current connection is completed. It is an error to use this option without the -l option.

  • -O length

    Specifies the size of the TCP send buffer.

  • -P proxy_username

    Specifies a username to present to a proxy server that requires authentication. If no username is specified then authentication will not be attempted. Proxy authentication is only supported for HTTP CONNECT proxies at present.

  • -q seconds

    After EOF on stdin, wait the specified number of seconds and then quit. If seconds is negative, wait forever.

  • -S

    Enables the RFC 2385 TCP MD5 signature option.

  • -T toskeyword

    Change IPv4 TOS value. toskeyword may be one of critical, inetcontrol, lowcost, lowdelay, netcontrol, throughput, reliability, or one of the DiffServ Code Points: ef, af11 ... af43, cs0 ... cs7; or a number in either hex or decimal.

  • -U

    Specifies to use UNIX-domain sockets.

  • -V rtable

    Set the routing table to be used. The default is 0.

  • -X proxy_protocol

    Requests that nc should use the specified protocol when talking to the proxy server. Supported protocols are “4” (SOCKS v.4), “5” (SOCKS v.5) and “connect” (HTTPS proxy). If the protocol is not specified, SOCKS version 5 is used.

  • -x proxy_address[:port]

    Requests that nc should connect to destination using a proxy at proxy_address and port. If port is not specified, the well-known port for the proxy protocol is used (1080 for SOCKS, 3128 for HTTPS).

  • -Z

    DCCP mode.

GNU netcat only
  • -c, --close

    Close connection on EOF from stdin.

  • -e, --exec=PROGRAM

    Program to exec after connect.

  • -g, --gateway=LIST

    Source-routing hop point[s], up to 8.

  • -G, --pointer=NUM

    Source-routing pointer: 4, 8, 12, ...

  • -L, --tunnel=ADDRESS:PORT

    Forward local port to remote address.

  • -o, --output=FILE

    Output hexdump traffic to FILE (implies -x).

  • -t, --tcp

    TCP mode (default).

  • -V, --version

    Output version information and exit.

  • -x, --hexdump

    Hexdump incoming and outgoing traffic.

I uninstalled GNU netcat and installed BSD netcat btw ;)

Categories: LUG Community Blogs

Chris Lamb: Race report: Cambridge Duathlon 2014

Mon, 14/04/2014 - 13:59

(This is my first race of the 2014 season.)


I had entered this race in 2013 and found it was effective for focusing winter training. As triathlons do not typically start until May in the UK, scheduling earlier races can be motivating in the colder winter months.

I didn't have any clear goals for the race except to blow out the cobwebs and improve on my 2013 time. I couldn't set reasonable or reliable target times after considerable "long & slow" training in the off-season, but I did want to test some new equipment and strategies, especially race pacing with a power meter, but also a new wheelset, crankset and helmet.

Preparation was both accidentally and deliberately compromised: I did very little race-specific training as my season is based around an entirely different intensity of race and, compounding this, I was confined to bed the weekend before.

Sleep was acceptable in the preceding days and I felt moderately fresh on race morning. Nutrition-wise, I had porridge and bread with jam for breakfast, a PowerGel before the race, 750ml of PowerBar Perform on the bike along with a "Hydro" PowerGel with caffeine at approximately 30km.

Run 1 (7.5km)

A few minutes before the start my race number belt—the only truly untested equipment that day—refused to tighten. However, I decided that once the race began I would either ignore it or even discard it, risking disqualification.

Despite letting everyone go up the road, my first km was still too fast so I dialed down the effort, settling into a "10k" pace and began overtaking other runners. The Fen winds and drag-strip uphill from 3km provided a bit of a pacing challenge for someone used to shelter and shorter hills, but I kept a metered effort through into transition.

Time: 33:01 (4:24/km, T1: 00:47) — Last year: 37:47 (5:02/km)

Bike (40km)

Although my 2014 bike setup features a power meter, I had not yet had the chance to perform an FTP test outdoors. I was thus not able to calculate a definitive target power for the bike leg. However, data from my road bike suggested I set a power ceiling of 250W on the longer hills.

This was extremely effective in avoiding going "into the red" and compromising the second run. This lends yet more weight to the idea that a power meter in multisport events is "almost like cheating".

I was not entirely comfortable with my bike position: not only were my thin sunglasses making me raise my head more than I needed to, I found myself creeping forward onto the nose of my saddle. This is sub-optimal, even if only considering that I am not training in that position.

Overall, the bike was uneventful with the only memorable moment provided by a wasp that got stuck between my head and a helmet vent. Coming into transition I didn't feel like I had really pushed myself that hard—probably a good sign—but the time difference from last year's bike leg (1:16:11) was a little underwhelming.

Time: 1:10:45 (T2: 00:58)

Run 2 (7.5km)

After leaving transition, my legs were extremely uncooperative and I had great difficulty in pacing myself in the first kilometer. Concentrating hard on reducing my cadence as well as using my rehearsed mental cue, I managed to settle down.

The following 4 kilometers were a mental struggle rather than a physical one, modulo having to force a few burps to ease some discomfort, possibly from drinking too much or too fast on the bike.

I had planned to "unload" as soon as I reached 6km but I didn't really have it in me. Whilst I am physiologically faster compared to last year, I suspect the lack of threshold-level running over the winter meant the mental component required for digging deep will require some coaxing to return.

However, it is said that you have successfully paced a duathlon if the second run is faster than the first. On this criterion, this was a success, but it would have been a bonus to have really felt completely drained at the end of the day, if only from a neo-Calvinist perspective.

Time: 32:46 (4:22/km) — Last year: 38:10 (5:05/km)

Overall

Total time: 2:18:19

A race that goes almost entirely to plan is a bit of a paradox – there's certainly satisfaction in setting goals and hitting them without issue, but this is a gratification of slow-burning fire rather than the jubilation of a fireworks display.

However, it was nice to learn that I managed to finish 5th in my age group despite this race attracting an extremely strong field: as an indicator, the age-group athlete finishing immediately before me was seven minutes faster and the overall winner finished in 1:54:53 (!).

The race identified the following areas to work on:

  • Perform an FTP test on my time-trial bike outdoors to develop an optimum power plan.
  • Do a few more brick runs, at least to re-acclimatise to the feeling.
  • Schedule another bike fit.

Although not strictly race-related, I also need to find techniques to ensure transporting a bike on public transport is less stressful. (Full results & full 2014 race schedule)

Categories: LUG Community Blogs

Chris Lamb: 2014 race schedule

Fri, 11/04/2014 - 23:58

«Swim 2.4 miles! Bike 112 miles! Run 26.2 miles! Brag for the rest of your life...»

In 2013, my training efforts were based around a "70.3"-distance race. In my second year in triathlon I will be targeting my first Ironman-distance event.

After some deliberation I decided on the Ironman event in Klagenfurt, Austria, not only because the location lends a certain tone to the occasion but because the course is suited to my relative strengths within the three disciplines.

I've made the following conscious changes to my race scheduling and selection this year:

  • Fewer races overall to allow for generous spacing between events, allowing more training, recovery and life.
  • No sprint-distance events as they do not provide enough enjoyment or appropriate training for the IM distance given their logistical outlay.
  • Preferring cycling over running time trials: performance in triathlon—paradoxically including your run performance—is primarily based around bike fitness.
  • Preferring smaller events over "mass-participation" ones.

Readers may observe that despite my primary race finishing with a marathon-distance run, I am not racing a standalone marathon in preparation. This is common practice, justified by the run-specific training leading up to a marathon and the recovery period afterwards compromising training overall.

For similar reasons, I have also chosen not to race a "70.3" distance event in 2014. Whether to do so is a more contentious issue than whether to run a marathon, but it resolved itself once I could not find an event that was suitably scheduled and I could convince myself that most of the benefits could be achieved through other means.

April 13th

Cambridge Duathlon

Run: 7.5km, bike: 40km, run: 7.5km

May 11th

St Neots Olympic Tri

Swim: 1,500m, bike: 40km, run: 10km

May 17th

ECCA 50-mile cycling time trial

50 miles. Course: E2/50C

June 1st

Icknield RC 100-mile cycling time trial

100 miles. Course: F1/100

June 15th

Cambridge Triathlon

Swim: 1,500m, bike: 40km, run: 10km

June 29th

Ironman Austria

Swim: 3.8km, bike: 190km, run: 42.2km

Categories: LUG Community Blogs

Mick Morgan: heartbleed

Tue, 08/04/2014 - 13:45

This is nasty. There is a remotely exploitable bug in openssl which leads to the leak of memory contents from the server to the client and from the client to the server. In practice this means that an attacker can read 64K chunks of memory on a vulnerable service, thus potentially exposing security critical information.

At 19.00 UTC yesterday, openssl bug CVE-2014-0160 was announced at heartbleed.com. I picked it up following a flurry of emails on the tor relays list this morning. Roger Dingledine posted a blog commentary on the bug to the tor list giving details about the likely impacts on Tor and Tor users.

Dingledine’s blog entry says:

Here are our first thoughts on what Tor components are affected:

  • Clients: Tor Browser shouldn’t be affected, since it uses libnss rather than openssl. But Tor clients could possibly be induced to send sensitive information like “what sites you visited in this session” to your entry guards. If you’re using TBB we’ll have new bundles out shortly; if you’re using your operating system’s Tor package you should get a new OpenSSL package and then be sure to manually restart your Tor.
  • Relays and bridges: Tor relays and bridges could maybe be made to leak their medium-term onion keys (rotated once a week), or their long-term relay identity keys. An attacker who has your relay identity key can publish a new relay descriptor indicating that you’re at a new location (not a particularly useful attack). An attacker who has your relay identity key, has your onion key, and can intercept traffic flows to your IP address can impersonate your relay (but remember that Tor’s multi-hop design means that attacking just one relay in the client’s path is not very useful). In any case, best practice would be to update your OpenSSL package, discard all the files in keys/ in your DataDirectory, and restart your Tor to generate new keys.
  • Hidden services: Tor hidden services might leak their long-term hidden service identity keys to their guard relays. Like the last big OpenSSL bug, this shouldn’t allow an attacker to identify the location of the hidden service, but an attacker who knows the hidden service identity key can impersonate the hidden service. Best practice would be to move to a new hidden-service address at your convenience.
  • Directory authorities: In addition to the keys listed in the “relays and bridges” section above, Tor directory authorities might leak their medium-term authority signing keys. Once you’ve updated your OpenSSL package, you should generate a new signing key. Long-term directory authority identity keys are offline so should not be affected (whew). More tricky is that clients have your relay identity key hard-coded, so please don’t rotate that yet. We’ll see how this unfolds and try to think of a good solution there.
  • Tails is still tracking Debian oldstable, so it should not be affected by this bug.
  • Orbot looks vulnerable; they have some new packages available for testing.
  • The webservers in the https://www.torproject.org/ rotation needed (and got) upgrades too. Maybe we’ll need to throw away our torproject SSL web cert and get a new one too.

But as he also says earlier on, “this bug affects way more programs than just Tor”. The openssl security advisory is remarkably sparse on details, saying only that “A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server.” So it is left to others to explain what this means in practice. The heartbleed announcement does just that. It opens by saying:

The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).

The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users.

During their investigations, the heartbleed researchers attacked their own SSL protected services from outside and found that they were:

able to steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication.

According to the heartbleed site, versions 1.0.1 through 1.0.1f (inclusive) of openssl are vulnerable. The earlier 0.9.8 branch is NOT vulnerable, nor is the 1.0.0 branch. Unfortunately, the bug was introduced to openssl in December 2011 and has been in real-world use in the 1.0.1 branch since 14 March 2012 – just over two years ago. This means that a LOT of services will be affected and will need to be patched, and quickly.
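
As a quick first check on any given box, the installed library version is a reasonable place to start. A version string in the vulnerable range looks something like this (only a hint, mind, since a distribution may backport the fix without changing the version number):

$ openssl version
OpenSSL 1.0.1e 11 Feb 2013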

Openssl and its libraries are used in a vast range of security-critical services across the internet. As the heartbleed site notes:

OpenSSL is the most popular open source cryptographic library and TLS (transport layer security) implementation used to encrypt traffic on the Internet. Your popular social site, your company’s site, commerce site, hobby site, site you install software from or even sites run by your government might be using vulnerable OpenSSL. Many of online services use TLS to both to identify themselves to you and to protect your privacy and transactions. You might have networked appliances with logins secured by this buggy implementation of the TLS. Furthermore you might have client side software on your computer that could expose the data from your computer if you connect to compromised services.

That point about networked appliances is particularly worrying. In the last two years a lot of devices (routers, switches, firewalls etc.) may have shipped with embedded services built against vulnerable versions of the openssl library.

In my case alone I now have to generate new X509 certificates for all my webservers, my mail (both SMTP and POP/IMAP) services, and my openVPN services. I will also need to look critically at my ssh implementation and setup. I am lucky that I only have a small network.
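
For each service the recovery drill is roughly the same (a sketch; the filenames are placeholders): patch openssl, restart the affected services, then generate a new private key and certificate signing request and have the certificate re-issued:

# generate a new RSA key and a CSR for re-issue (example filenames)
openssl genrsa -out www.example.org.key 2048
openssl req -new -key www.example.org.key -out www.example.org.csr
# ...then have the CSR signed, install the new certificate and revoke the old one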

My guess is that most professional sysadmins are having a very bad day today.

Categories: LUG Community Blogs

Mick Morgan: the netbook is not dead

Mon, 31/03/2014 - 22:03

I bought my first netbook, the Acer Aspire One, back in April 2009 – five years ago. That machine is still going strong and has seen umpteen different distros in its time. It currently runs Mint 16, and very happily too.

The little Acer has nothing on it that I value overmuch – all my important data is stored on my desktop (and backed up appropriately) – but I find it useful as a “walking about” tool simply because it is so portable, so it still gets quite a bit of use.

I am planning a few trips later this year, notably to Scotland and later (by bike) to Croatia, and I want to take something with me that will give me more flexibility in my connectivity options than simply taking my phone to collect email. In particular, it would be really useful if I could connect back to my home VPN endpoint whilst I am out and about. This is exactly the sort of thing I use the Acer for when I’m in the UK. I (briefly) considered using my Galaxy Tab, which is even lighter and more portable than the Acer, but I just don’t trust Android enough to load my openvpn certificates on to it. There is also the possibility that the Tablet could be lost or stolen (I shall be camping quite a lot). So the Acer looks a good bet. The downside of taking the Acer is the need for mains power (for recharging) of course, but I shall be staying in the odd hotel en route, and I have an apartment booked in Dubrovnik so I figure it should cope.

However, I am not the only fan of the Acer – my grandson loves to sit with it on his lap in my study and watch Pingu and Octonauts (parents and grandparents of small children will understand). Given that I recognise that I might lose the Tablet, I must also assume that the Acer might disappear. My grandson wouldn’t like that. So the obvious solution is to take another AAO with me – off to ebay then.

Since my AAO is five years old, and that model has been around for longer than that, I figured I could pick up an early ZG5 with an 8 or 16 Gig SSD (rather than the 160 Gig disk mine has) for about £20 – £30. Hell, I’ve bought better specced Dell laptops (I know, I know) for less than that fairly recently. I was wrong.

Astonishingly, the ZG5, in all its variants, appears to still be in huge demand. I bid on a few models and lost, by quite some margin. So I set up a watch list of likely looking candidates and waited a few days to get a feel for the prices being paid. The lowest price paid was £37.00 (plus £10.00 P&P) for a very early ZG5 still running Linpus, another Linpus ZG5 with only 512Mb of RAM and an 8 Gig SSD went for £50.00 (plus £9.00 P&P), and most of the later models (with 1 Gig of RAM and 160 – 250 Gig disks) went for around £70 – £90 (plus various P&P charges). In all I watched 20 different auctions (plus a few for similar devices such as the MSI Wind and the Samsung NC10). The lowest price I saw after the £37.00 device was £55.00 and the highest was £103.00, with a mean of just over £67.

That is astonishing when you consider that you can pick up a new generic 7″ Android tablet for around £70.00 and you can get a decent branded one for just over £100. And as I said, old laptops go for silly money – just take a look at ebay or gumtree and you can pick up job lots of half a dozen or more for loose change.

So – despite what all the pundits may have said, clearly the netbook as a concept still meets the requirements of enough people (like me I guess) to keep the market buoyant. Intriguingly, the most popular machines sold (in terms of numbers of bidders) were all running XP. I just hope the buyers intended to do as I did and wipe them to install a linux distro. Of course, having started the search for another ZG5, I just couldn’t let it go without buying one. I was eventually successful on a good one with the same specification as my original model.

The only drawback is that it is not blue…

Well, at least I don’t think it will be stolen.

Categories: LUG Community Blogs

Richard Lewis: Taking notes in Haskell

Fri, 28/03/2014 - 00:13

The other day we had a meeting at work with a former colleague (now at QMUL) to discuss general project progress. The topics covered included the somewhat complicated workflow that we're using for doing optical music recognition (OMR) on early printed music sources. It includes mensural notation specific OMR software called Aruspix. Aruspix itself is fairly accurate in its output, but the reason why our workflow is non-trivial is that the sources we're working with are partbooks; that is, each part (or voice) of a multi-part texture is written on its own part of the page, or even on a different page. This is very different to modern score notation in which each part is written in vertical alignment. In these sources, we don't even know where separate pieces begin and end, and they can actually begin in the middle of a line. The aim is to go from the double page scans ("openings") to distinct pieces with their complete and correctly aligned parts.

Anyway, our colleague from QMUL was very interested in this little part of the project and suggested that we spend the afternoon, after the style of good software engineering, formalising the workflow. So that's what we did. During the course of the conversation diagrams were drawn on the whiteboard. However (and this was really the point of this post) I made notes in Haskell. It occurred to me a few minutes into the conversation that laying out some types and the operations over those types that comprise our workflow is pretty much exactly the kind of formal specification we needed.

Here's what I typed:

module MusicalDocuments where

import Data.Maybe

-- A document comprises some number of openings (double page spreads)
data Document = Document [Opening]

-- An opening comprises one or two pages (usually two)
data Opening = Opening (Page, Maybe Page)

-- A page comprises multiple systems
data Page = Page [System]

-- Each part is the line for a particular voice
data Voice = Superius | Discantus | Tenor | Contratenor | Bassus

-- A part comprises a list of musical symbols, but it may span multiple systems
-- (including partial systems)
data Part = Part [MusicalSymbol]

-- A piece comprises some number of sections
data Piece = Piece [Section]

-- A system is a collection of staves
data System = System [Staff]

-- A staff is a list of atomic graphical symbols
data Staff = Staff [Glyph]

-- A section is a collection of parts
data Section = Section [Part]

-- These are the atomic components, MusicalSymbols are semantic and Glyphs are
-- syntactic (i.e. just image elements)
data MusicalSymbol = MusicalSymbol
data Glyph = Glyph

-- If this were real, Image would abstract over some kind of binary format
data Image = Image

-- One of the important properties we need in order to be able to construct
-- pieces from the scanned components is to be able to say when objects of some
-- of the types are strictly contiguous, i.e. this staff immediately follows
-- that staff
class Contiguous a where
  immediatelyFollows :: a -> a -> Bool
  immediatelyPrecedes :: a -> a -> Bool
  immediatelyPrecedes a b = b `immediatelyFollows` a

instance Contiguous Staff where
  immediatelyFollows :: Staff -> Staff -> Bool
  immediatelyFollows = undefined

-- Another interesting property of this data set is that there are a number of
-- duplicate scans of openings, but nothing in the metadata that indicates this,
-- so our workflow needs to recognise duplicates
instance Eq Opening where
  (==) :: Opening -> Opening -> Bool
  (==) a b = undefined

-- Maybe it would also be useful to have equality for staves too?
instance Eq Staff where
  (==) :: Staff -> Staff -> Bool
  (==) a b = undefined

-- The following functions actually represent the workflow

collate :: [Document]
collate = undefined

scan :: Document -> [Image]
scan = undefined

split :: Image -> Opening
split = undefined

paginate :: Opening -> [Page]
paginate = undefined

omr :: Page -> [System]
omr = undefined

segment :: System -> [Staff]
segment = undefined

tokenize :: Staff -> [Glyph]
tokenize = undefined

recogniseMusicalSymbol :: Glyph -> Maybe MusicalSymbol
recogniseMusicalSymbol = undefined

part :: [Glyph] -> Maybe Part
part gs = if null symbols then Nothing else Just $ Part symbols
  where symbols = mapMaybe recogniseMusicalSymbol gs

alignable :: Part -> Part -> Bool
alignable = undefined

piece :: [Part] -> Maybe Piece
piece = undefined

I then added the comments and implemented the part function later on. Looking at it now, I keep wondering whether the types of the functions really make sense; especially where a return type is a type that's just a label for a list or pair.

I haven't written much Haskell code before, and given that I've only implemented one function here, I still haven't written much Haskell code. But it seemed to be a nice way to formalise this procedure. Any criticisms (or function implementations!) welcome.
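
By way of example, a first stab at piece might look something like this (quite possibly wrong-headed: it assumes a piece should only be formed when all the candidate parts are mutually alignable, and that alignable is reflexive):

-- Hypothetical: wrap a set of mutually-alignable parts up as a piece with
-- a single section; an empty or inconsistent set yields Nothing
piece :: [Part] -> Maybe Piece
piece [] = Nothing
piece ps
  | and [alignable a b | a <- ps, b <- ps] = Just $ Piece [Section ps]
  | otherwise = Nothing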

Categories: LUG Community Blogs

Richard Lewis: Ph.D Progress

Thu, 27/03/2014 - 00:37

I submitted my Ph.D thesis at the end of September 2013 in time for what was believed to be the AHRC deadline. It was a rather slim submission at around 44,000 words and rejoiced under the title of Understanding Information Technology Adoption in Musicology. Here's the abstract:

Since the mid 1990s, innovations and technologies have emerged which, to varying extents, allow content-based search of music corpora. These technologies and their applications are known commonly as music information retrieval (MIR). While there are a variety of stakeholders in such technologies, the academic discipline of musicology has always played an important motivating and directional role in the development of these technologies. However, despite this involvement of a small representation of the discipline in MIR, the technologies have so far failed to make any significant impact on mainstream musicology. The present thesis, carried out under a project aiming to examine just such an impact, attempts to address the question of why this has been the case by examining the histories of musicology and MIR to find their common roots and by studying musicologists themselves to gauge their level of technological sophistication. We find that some significant changes need to be made in both music information retrieval and musicology before the benefits of technology can really make themselves felt in music scholarship.

(Incidentally, the whole thing was written using org-mode, including some graphs that get automatically generated each time the text is compiled. Unfortunately I did have to cheat a little bit and typed in LaTeX \cite commands rather than using proper org-mode links for the references.)

So the thing was then examined in January 2014 by an information science and user studies expert and a musicologist. As far as it went, the defence was actually not too bad, but after defending the defensible it eventually became clear that significant portions of the thesis were just not up to scratch; not, in fact, defensible. They weren't prepared to pass it and have asked that I revise and then re-submit it.

Two things seem necessary to address: 1) why did this happen? And 2) what do I do next?

I started work on this Ph.D with only quite a vague notion of what it was going to be about. The Purcell Plus project left open the possibility of the Ph.D student doing some e-Science-enabled musicological study. But I think I'd come out of undergraduate and masters study with a view of academic research that was very much text-based; the process of research—according to the me of ca. 2008—was to read lots of things and synthesise them, and the more obscure and dense the stuff read the better. The process is one of noticing generalisations amongst all these sources that haven't been remarked on before and remarking on them, preferably with a good balance of academic rigour and barefaced rhetoric. And I brought this pre-conception into a computing department. My first year was intended to be a training year, but I was actually already highly computer literate with considerable programming experience and quite a bit of knowledge of at least symbolic work in computational musicology. Consequently, I didn't fully engage with learning new stuff during that first year and instead embarked on a project of attempting to be rhetorical. It wasn't until later on that I really started to understand that those around me had a completely different idea as to how research can be carried out. While I was busy reading, most of my colleagues were doing experiments; they were actually finding out new stuff (or at least were attempting to) and had the potential to make an original contribution to knowledge. At this point I started to look for research methods that could be applicable to my subject matter and eventually hit upon a couple of actually quite standard social science methods. So I think that's the first thing that went wrong: I failed to take on board soon enough the new research culture that I had (or, I suppose, should have) entered.

I think I've always been someone who thrives on the acknowledgement of things I've done; I always looked forward to receiving my marks at school and as an undergraduate; and I liked finding opportunities to do clever jobs for people, especially little software development projects where there's someone to say, "that's great! Thanks for doing that." I think I quickly found that doctoral research didn't offer me this at all. My experience was very much a solitary one where no one was really aware of what I was working on. Consequently two things happened: first, I tended not to pursue things very far through lack of motivation; and second (and this was the really dangerous one), I kept finding other things to do that did give me that feedback. I always found ways to justify these—let's face it—procrastination activities; mainly that they were all Goldsmiths work, including quite a lot of undergraduate and masters level teaching (the latter actually including designing a course from scratch), some Computing Department admin work, and some development projects. Doing these kinds of activities is actually generally considered very good for doctoral students, but they're normally deliberately constrained to ensure that the student has plenty of research time still. Through my own choice, I let them take over far too much of my research time.

The final causal point to mention is the one that any experienced academic will immediately point to: supervision. I failed to take advantage of the possibilities of supervision. As my supervisor was both involved in the project of which the Ph.D was part and also worked in the same office as me, we never had the right kind of relationship to foster good progress and working habits from me. I spoke to my supervisor every day and so I didn't really push for formal supervision often enough. I can now see that it would have been better to have someone with whom I had a less familiar relationship and who had less of an interest in my work and who, as a result, would just operate and enforce the procedures of doctoral project progress. It's also possible that a more formal supervision relationship would have addressed the points above: I may have been forced to solidify ideas and to identify proper methods much sooner; I may have had more of the feedback that I needed; and I may have been more strongly discouraged from engaging in so much extra-research activity.

The purpose of all this is not to apportion blame (I have a strong sense of being responsible for everything that I do), but to state publicly something that I've been finding very hard to say: I failed my Ph.D. And (and this is the important bit) to make sure that I get on with what I need to do to pass it.

  • I need disinterested supervision; I've requested assistance from the Sociology Department which should fit well with the research methods I used;
  • I need to improve the reporting of the studies I carried out; this involves correcting and expanding the methods sections and also doing more analysis work;
  • I need to either extend the samples of the existing studies, or carry out credible follow up studies to improve my evidence base;
  • I need to focus the research questions better and (having done so) make the conclusions directly address them.

I'm going to blog this work as it goes along. So if I stop blogging, please send me harassing emails telling me to get the f*** on with it!

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Judon't

Wed, 26/03/2014 - 01:44

tl;dr: broke my collar bone, ouch.

Since my last post, I've had a second Judo session which was just as enjoyable as the first except for one thing. Towards the end of the session we went into randori (free practice) - basically one-on-one sparring. I'm pleased to say that I won my first bout but in the second I went up against a guy who'd been learning for only 6 months or so. After some grappling, he threw me quite hard and with little control and I landed similarly badly - owch.

The first thing I realised was that I'd slammed my head pretty hard against the mat and that I was feeling a little groggy. After a little while, it became apparent that I'd also injured my shoulder. The session was over though and I winced through helping to put the mats away.

By the time I got in my car to drive home, my shoulder was a fair bit more painful and I made an appointment to see the doctor. When I saw her, she said there's a slight chance I'd broken my collar bone but it didn't seem very bad so I could just go home and see how it was in a couple of days.

A couple of days later I went back to the surgery. I saw a different doctor who said that she didn't think it was broken but she'd refer me for an X-Ray. The radiologist soon pointed out a nice clean break at the tip of my collar bone! I sat in A&E forever until I eventually saw a doctor who referred me to the orthopaedic clinic the following Monday.

Finally, the orthopaedic clinic's doctor told me I'd self-managed pretty well and that I should be ok to just get on with things but definitely no falling over, lifting anything heavy, and definitely no judo for up to six weeks.

Apparently, I've been very lucky as it's easy to dislodge the broken bone when the break is where mine is. Apparently, if I fall over or anything similar, I'm likely to end up in surgery :S

I've had to tell this story so many times now that I thought I might as well just write it down. For some reason, people seem to want to know all of the details when I mention that I've injured myself. Sadists.

Not Morrison's

On an unrelated subject, I've come to realise that I've developed an unhelpful manner when dealing with doors. I'm not the most graceful of people in general but it strikes me that I have a particularly awkward way of approaching doors. When I walk up to them, I hold a hand out to grasp the handle and push or pull (somehow I usually even manage to get that bit wrong regardless of signage) without slowing down at all which means that I have to swing the door quite fast to get it out of my way by the time my body catches up. Then, as the door's opening at a hefty pace, I have to grab the edge of the door and stop it before it slams into the wall. Because I'm still moving forward, this usually means that I'm partially closing the door as I move away from it.

In all, I feel awkward when passing through doors, and anybody directly behind me is liable to receive a door in the face if I'm not aware of them :S

Categories: LUG Community Blogs

Chris Lamb: Fingerspitzengefühl

Mon, 24/03/2014 - 00:53

Loanwords can often appear more insightful than they really are. How prescient of another culture to codify such a concept into a single word! They surely must have lofty and perceptive discussions if it was necessary to coin and codify one.

But whilst there is always the danger of over-inflating the currency of the loanword—especially the compound one—there must be a few that are worth the trouble.

One such term is fingerspitzengefühl. Literally meaning "finger-tips feeling" in German, it attempts to capture the idea of an intuitive sophistication, flair or instinct. Someone exhibiting fingerspitzengefühl would be able to respond appropriately, delicately and tactfully to certain things or situations.

Oliver Reichenstein clarifies the distinction from a personal taste:

Whether I like pink or not, sugar in my coffee, red or white wine, these things are a matter of personal taste. These are personal preferences, and both designers and non-designers have them. This is the taste we shouldn't bother discussing.

Whether I set a text’s line height to 100% or 150% is not a matter of taste, it is a matter of knowing the principles of typography.

However, whether I set a text’s line height at 150% or 145% is a matter of Fingerspitzengefühl; wisdom in craft, or sophistication.

Fingerspitzengefühl is therefore not innate and is probably refined over the years via subconscious—rather than conscious—study and reflection. However, it always flows naturally in the moment, not dissimilar to Castiglione's sprezzatura.

Personally, I am particularly enamoured of how this concept of a "trained taste" appears to blur the line between an objective and subjective aesthetic, putting me somewhat at odds with those who baldly assert that taste is "obviously" entirely individual.

Categories: LUG Community Blogs

Andrew Savory: Mastering the mobile app challenge at Adobe Summit

Sun, 23/03/2014 - 18:00

I’m presenting a 2 hour Building mobile apps with PhoneGap Enterprise lab at Adobe Summit in Salt Lake City on Tuesday, with my awesome colleague John Fait. Here’s a sneak preview of the blurb which will be appearing over on the Adobe Digital Marketing blog tomorrow. I’m posting it here as it may be interesting to the wider Apache Cordova community to see what Adobe are doing with a commercial version of the project…

~

Mobile apps are the next great challenge for marketing experts. Bruce Lefebvre sets the scene perfectly in So, You Want to Build an App. In his mobile app development and content management with AEM session at Adobe Summit he’ll show you how Adobe Marketing Cloud solutions are providing amazing capabilities for delivering mobile apps. It’s a must-see session to learn about AEM and PhoneGap.

But what if you want to gain hands-on practical experience of AEM, PhoneGap, and mobile app development? If you want to roll up your sleeves and build a mobile app yourself, then we’ve got an awesome lab planned for you. In “Building mobile apps with PhoneGap Enterprise”, you’ll have the opportunity to create, build, and update a mobile application with Adobe Experience Manager. You’ll see how easy it is to deliver applications across multiple platforms. You’ll also learn how you can easily monitor app engagement through integration with Adobe Analytics and Adobe Mobile Services.

If you want to know how you can deliver more effective apps, leverage your investment in AEM, and bridge the gap between marketers and developers, then you need to attend this lab at Adobe Summit. Join us for this extended deep dive into the integration between AEM and PhoneGap. No previous experience is necessary – you don’t need to know how to code, and you don’t need to know any of the Adobe solutions, as we’ll explain it all as we go along. Some of you will also be able to leave the lab with the mobile app you wrote, so that you can show your friends and colleagues how you’re ready to continuously drive mobile app engagement and ROI, reduce your app time to market, and deliver a unified experience across channels and brands.

Are you ready to master the mobile app challenge?

~

All hyperbole aside, I think this is going to be a pretty interesting technology space to watch:

  • Being able to build a mobile app from within a CMS is incredibly powerful for certain classes of mobile app. Imagine people having the ability to build mobile apps with an easy drag-and-drop UI that requires no coding. Imagine being able to add workflows (editorial review, approvals etc) to your mobile app development.
  • No matter how we might wish we didn’t have these app content silos, you can’t argue against the utility of content-based mobile apps until the mobile web matures sufficiently so that everyone can build offline mobile websites with ease. Added together with over-the-air content updates, it’s really nice to be able to have access to important content even when the network is not available.
  • Analytics and mobile metrics are providing really useful ways to understand how people are using websites and apps. Having all the SDKs embedded in your app automatically with no extra coding required means that everyone can take advantage of these metrics. Hopefully this will lead to a corresponding leap in app quality and usability.
  • By using Apache Cordova, we’re ensuring that these mobile app silos are at least built using open standards and open technologies (HTML, CSS, JS, with temporary native shims). So when mobile web runtimes are mature enough, it will be trivial to switch front-end app to front-end website without retooling the entire back-end content management workflow.

Exciting times.

Categories: LUG Community Blogs

Chris Lamb: Two aesthetically pleasing Python snippets

Tue, 11/03/2014 - 10:25

The following Python tree structure recently resurfaced. I find it rare to see functional or recursive techniques expressed so succinctly in Python:

import collections

def tree(): return collections.defaultdict(tree)
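
The trick is that every lookup of a missing key silently creates another tree, so arbitrarily deep paths can be assigned in a single statement:

t = tree()
t['a']['b']['c'] = 1  # intermediate levels spring into existence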

Another construction I am quite fond of is:

import os

for _, dirnames, filenames in os.walk(path): break

...which leaves dirnames and filenames in the current scope, containing the immediate subdirectories and files of the specified path.
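
An equivalent spelling, arguably more explicit about what is going on, takes just the first (dirpath, dirnames, filenames) tuple that the generator yields:

_, dirnames, filenames = next(os.walk(path))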

Perhaps a little too cute, but appealing in its own way.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Things

Thu, 06/03/2014 - 00:28

I've been meaning to write something for weeks but somehow I never quite seem to get around to it.

Never say never. That old chestnut. Two in the bush gathers no broth.

Troy's Moan

It was my birthday recently and my wife and parents clubbed together and bought me something I'd often talked about before (if somewhat whimsically): a telescope.

After an evening of setting it up indoors, figuring out how all the bits fit together, and performing some initial calibration of the sighting scope, the skies proceeded to be full of cloud for several nights afterwards.

The first night of clear skies happened to tie in very nicely with what was apparently the perfect night for viewing Jupiter. I got myself warmly attired, put some wellies on, and went out into the garden with my telescope (and a bottle of scotch). After some more time getting used to the equipment and better calibrating the sighting scope now that I was looking at celestial bodies rather than the neighbours' aerials, I finally got sight of Jupiter! At first, I was just finding my way with a low magnification but once I'd got the hang of it, I stuck my best lens in and was utterly blown away by what I saw.

It's not that I'm anywhere near tinfoil-hatted scepticism but actually seeing an object in the sky - that to the naked eye is a mere white dot - so much closer with equipment that I could naively understand felt like I was properly confirming to myself that what I'd been taught was true.

I could see the colours and some shapes on the planet itself and after some steadying and patience, I realised I could also see all of the moons :)

A great experience and I think I'll head out to a meeting of my local astronomical society one of these days.

Snakes on a screen

In other news, I have been really enjoying getting stuck into Python again recently. I have a couple of smallish projects I'm working on when I get chance which I'll unveil at some point.

Take a sad song

This evening I had my first ever* Judo lesson and thoroughly enjoyed it.

* OK, technically my second; I had a taster lesson when I was at high school nearly 20 years ago.

Categories: LUG Community Blogs


Mick Morgan: the spy in your bathroom

Fri, 28/02/2014 - 21:40

Back in June 2008 I noted that Craig Wright had posted to bugtraq reporting a “remote exploitation of an information disclosure vulnerability in Oral B’s SmartGuide management system”. I found it faintly amusing that a security researcher should have been looking for vulnerabilities in a toothbrush.

I should have known better.

A report in Wednesday’s on-line Guardian points to the release of a new smart toothbrush from Oral B. Apparently that toothbrush will link via bluetooth to an app on either an iPhone or Android and report back to your dentist. Apparently Oral B “sees the connected toothbrush, launched as part of Mobile World Congress’s Connected City exhibition, as the next evolution of the smart bathroom.” Wayne Randall, global vice president of Oral Care at Procter and Gamble, reportedly said:

“It provides the highest degree of user interaction to track your oral care habits to help improve your oral health, and we believe it will have significant impact on the future of personal oral care, providing data-based solutions for oral health, and making the relationship between dental professionals and patients a more collaborative one.”

That’s just great. GCHQ have plenty of other personal data feeds already without giving them access to our bathrooms.

Categories: LUG Community Blogs

Wayne Stallwood (DrJeep): Outlook 2003, Cutting off Emails

Sat, 22/02/2014 - 09:43
I had a friend come to me with an interesting problem they were having in their office. Due to the Exchange server and Office licensing they have, they are running Outlook 2003 on Windows 7 64-bit machines.

After Internet Explorer updates to IE11, it introduces a rather annoying bug into Outlook: typed emails often get cut off mid-sentence when you click Send, so only part of the email gets sent!

What I think is happening is that Outlook is reverting to a previously autosaved copy before sending.

Removing the IE11 update would probably fix it, but perhaps the easiest way is to disable the "Autosave unsent email" option in Outlook.

Navigate to:
Tools, Options, E-Mail Options, Advanced E-Mail Options, and disable the "Autosave unsent" option.

Categories: LUG Community Blogs