Planet ALUG

Planet ALUG - http://planet.alug.org.uk/

Steve Engledow (stilvoid): Things

Thu, 16/10/2014 - 01:05
Ogg(camp|tober|sford)

OggCamp was fantastic. If I can remember all the talks I went to, I'll do a brief write-up. The event certainly left me with a few little ideas for things to write and do.

Down with dynamic things!

One small example is that I've rewritten the build script for this blog. No more scripted generation of the final HTML; I just wget -m the development server and that's everything built. Then it's all just served up as static content through nginx. Simples, and I don't know why I didn't think of just snapshotting it like that before.
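In case it's useful to anyone, the whole "build" boils down to something like this sketch (the dev server address and web root here are illustrative, not the real ones):

    #!/bin/sh
    # Mirror the dev server into a scratch directory
    # (-m mirror, -nH no hostname directory, -P output prefix)...
    wget -m -nH -P /tmp/blog-snapshot http://localhost:8000/
    # ...then publish the snapshot for nginx to serve as static files.
    rsync -a --delete /tmp/blog-snapshot/ /var/www/blog/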

Click

I've reached a point in my career now where I find myself wanting to create and present slide decks. WTF?

I'm still writing code fairly regularly but there's so much other stuff I spend my time doing that I'm not even sure I can account for. It still feels important.

Clock

I think I'm going to buy a Pebble to wean myself off the phone-checking habit that I've developed over the years.

Relatedly, I wrote this post on my phone. It wasn't nearly as painful as I'd expected.

Categories: LUG Community Blogs

Wayne Stallwood (DrJeep): Hosting Update2

Thu, 09/10/2014 - 22:31
Well, after a year the SD card on the Raspberry Pi has failed. I noticed /var was unhappy when I tried to apply the recent Bash updates; attempts at repair only made things worse and I suspect there is some physical issue. I had minimised writes by keeping logs in tmpfs, the frequently updated weather site sat in tmpfs too, and logging went to remote systems etc., so I'm not quite sure what happened. Of course this is all very inconvenient when your kit lives in another country, so at some point I guess I will have to build a new SD card and ship it out... for now we are back on Amazon EC2... yay for the elastic cloud \o/
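For anyone wanting to do the same, the tmpfs trick is just a couple of lines in /etc/fstab, along these lines (sizes are examples, not my actual values):

    tmpfs  /var/log  tmpfs  defaults,noatime,size=32m  0  0
    tmpfs  /tmp      tmpfs  defaults,noatime,size=64m  0  0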
Categories: LUG Community Blogs

Chris Lamb: London—Paris—London 2014

Thu, 09/10/2014 - 18:19

I've wanted to ride to Paris for a few months now but was put off by the hassle of taking a bicycle on the Eurostar, as well as having a somewhat philosophical and aesthetic objection to taking a bike on a train in the first place. After all, if one is already in possession of a mode of transport...

My itinerary was straightforward:

  • Friday 12h00: London → Newhaven
  • Friday 23h00: Newhaven → Dieppe (ferry)
  • Saturday 04h00: Dieppe → Paris
  • Saturday 23h00: (Sleep)
  • Sunday 07h00: Paris → Dieppe
  • Sunday 18h00: Dieppe → Newhaven (ferry)
  • Sunday 21h00: Newhaven → Peacehaven
  • Sunday 23h00: (Sleep)
  • Monday 07h00: Peacehaven → London

Packing list
  • Ferry ticket (unnecessary in the end)
  • Passport
  • Credit card
  • USB A male → mini A male (charges phone, battery pack & front light)
  • USB A male → mini B male (for charging or connecting to Edge 800)
  • USB mini A male → OTG A female (for Edge 800 uploads via phone)
  • Waterproof pocket
  • Sleeping mask for ferry (probably unnecessary)
  • Battery pack

Not pictured:

  • Castelli Gabba Windstopper short-sleeve jersey
  • Castelli Velocissimo bib shorts
  • Castelli Nanoflex arm warmers
  • Castelli Squadra rain jacket
  • Garmin Edge 800
  • Phone
  • Front light: Lezyne Macro Drive
  • Rear lights: Knog Gekko (on bike), Knog Frog (on helmet)
  • Inner tubes (X2), Lezyne multitool, tire levers, hand pump

Day 1: London → Newhaven

Tower Bridge.

Many attempt to go from Tower Bridge → Eiffel Tower (or Marble Arch → Arc de Triomphe) in less than 24 hours. This would have been quite easy if I had left a couple of hours later.

Fanny's Farm Shop, Merstham, Surrey.

Plumpton, East Sussex.

West Pier, Newhaven.

Leaving Newhaven on the 23h00 ferry.


Day 2: Dieppe → Paris

Beauvoir-en-Lyons, Haute-Normandie.

Sérifontaine, Picardie.

La tour Eiffel, Paris.

Champ de Mars, Paris.

Pont de Grenelle, Paris.


Day 3: Paris → Dieppe

Cormeilles-en-Vexin, Île-de-France.

Gisors, Haute-Normandie.

Paris-Brest, Gisors, Haute-Normandie.

Wikipedia: This pastry was created in 1910 to commemorate the Paris–Brest bicycle race begun in 1891. Its circular shape is representative of a wheel. It became popular with riders on the Paris–Brest cycle race, partly because of its energizing high caloric value, and is now found in pâtisseries all over France.

Gournay-en-Bray, Haute-Normandie.

Début de l'Avenue Verte, Forges-les-Eaux, Haute-Normandie.

Mesnières-en-Bray, Haute-Normandie.

Dieppe, Haute-Normandie.

«La Mancha».


Day 4: Peacehaven → London

Peacehaven, East Sussex.

Highbrook, West Sussex.

London weather.


Summary
  • Distance: 588.17 km
  • Pedal turns: ~105,795

My only non-obvious tips would be to buy a disposable blanket in the Newhaven Co-Op to help you sleep on the ferry and, as the food on the ferry is good enough, to arrive at the terminal only one hour before departure, avoiding time on your feet in unpicturesque Newhaven.

In terms of equipment, I would bring another light for the 4AM start on «L'Avenue Verte», if only as a backup, and I would check in advance that I could arrive at my Parisian Airbnb earlier in the day - I had to hang around for five hours in the heat before I could have a shower, properly relax, etc.

I had been warned not to rely on being able to obtain enough water en route on Sunday but whilst most shops were indeed shut, I saw a bustling tabac or boulangerie at least once every 20km, so one would never be truly stuck.

Route-wise, the suburbs of London and Paris are equally dismal and unmotivating, and there is about 50km of rather uninspiring and exposed riding on the D915.

However, «L'Avenue Verte» is fantastic even in the pitch-black and the entire trip was worth it simply for the silent and beautiful Normandy sunrise. I will be back.

Categories: LUG Community Blogs

Ben Francis: What is a Web App?

Fri, 03/10/2014 - 17:50

What is a web app? What is the difference between a web app and a web site? What is the difference between a web app and a non-web app?

In terms of User Experience there is a long continuum between “web site” and “web app” and the boundary between the two is not always clear. There are some characteristics that users perceive as being more “app like” and some as more “web like”.

The presence of web browser-like user interface elements such as a URL bar and navigation controls is likely to make a user feel like they're using a web site rather than an app, for example, whereas content which appears to run independently of the browser feels more like an app. Apps are generally assumed to have at least limited functionality without an Internet connection and tend to have the concept of residing in a self-contained way on the local device after being “installed”, rather than being navigated to somewhere on the Internet.

From a technical point of view there is in fact usually very little difference between a web site and a web app. Different platforms currently deal with the concept of “web apps” in all sorts of different, incompatible ways, but very often the main difference between a web site and web app is simply the presence of an “app manifest”. The app manifest is a file containing a collection of metadata which is used when “installing” the app to create an icon on a homescreen or launcher.

At the moment pretty much every platform has its own proprietary app manifest format, but the W3C has the beginnings of a proposed specification for a standard “Manifest for web applications” which is starting to get traction with multiple browser vendors.

Web Manifest – Describing an App

Below is an example of a web app manifest following the proposed standard format.

http://example.com/myapp/manifest.json:

    {
      "name": "My App",
      "icons": [{
        "src": "/myapp/icon.png",
        "sizes": "64x64",
        "type": "image/png"
      }],
      "start_url": "/myapp/index.html"
    }

The manifest file is referenced inside the HTML of a web page using a link relation. This is cool because with this approach a web app doesn’t have to be distributed through a centrally controlled app store, it can be discovered and installed from any web page.

http://example.com/myapp/index.html:

    <!DOCTYPE html>
    <html>
    <head>
      <title>My App - Welcome</title>
      <link rel="manifest" href="manifest.json">
      <meta name="application-name" content="My App">
      <link rel="icon" sizes="64x64" href="icon.png">
      ...

As you can see from the example, these basic pieces of metadata which describe things like a name, an icon and a start URL are not that interesting in themselves because these things can already be expressed in HTML in a web standard way. But there are some other proposed properties which could be much more interesting.

Display Modes – Breaking out of the Browser

We said above that one thing that makes a web app feel more app like is when it runs outside of the browser, without common browser UI elements like the URL bar and navigation controls. The proposed “display” property of the manifest allows authors of web content which is designed to function without the need for these UI elements to express that they want their content to run outside of the browser.

http://example.com/myapp/manifest.json:

    {
      "name": "My App",
      "icons": [{
        "src": "/myapp/icon.png",
        "sizes": "64x64",
        "type": "image/png"
      }],
      "start_url": "/myapp/index.html",
      "scope": "/myapp",
      "display": "standalone"
    }

The proposed display modes are “fullscreen”, “standalone”, “minimal-ui” and “browser”. The “browser” display mode opens the content in the user agent’s conventional method (e.g. a browser tab), but all of the other display modes open the content separate from the browser, with varying levels of browser UI.

There’s also a proposed “orientation” property which allows the content author to specify the default orientation (i.e. portrait/landscape) of their content.

App Scope – A Slice of the Web

In order for a web app to be treated separately from the rest of the web, we need to be able to define which parts of the web are part of the app, and which are not. The proposed “scope” property of the manifest defines the URL scope to which the manifest applies.

By default the scope of a web app is anything from the same origin as its manifest, but a single origin can also be sliced up into multiple apps or into app and non-app content.

Below is an example of a web app manifest with a defined scope.

http://example.com/myapp/manifest.json:

    {
      "name": "My App",
      "icons": [{
        "src": "/myapp/icon.png",
        "sizes": "64x64",
        "type": "image/png"
      }],
      "start_url": "/myapp/index.html",
      "scope": "/myapp"
    }

From the user’s point of view they can browse around the web, seamlessly navigating between web apps and web sites until they come across something they want to keep on their device and use often. They can then slice off that part of the web by “bookmarking” or “installing” it on their device to create an icon on their homescreen or launcher. From that point on, that slice of the web will be treated separately from the browser in its own “app”.

Without a defined scope, a web app is just a web page opened in a browser window which can then be navigated to any URL. If that window doesn’t have any browser-like navigation controls or a URL bar then the user can get stranded at a dead end on the web with no way to go back, or worse still can be fooled into thinking that a web page they thought was part of a web app they trust is actually from another, malicious, origin.

The web browser is like a catch-all app for browsing all of the parts of the web which the user hasn’t sliced off to use as a standalone app. Once a web app is registered with the user agent as managing a defined slice of the web, the user can seamlessly link into and out of installed web apps and the rest of the web as they please.

Service Workers – Going Offline

We said above that another characteristic users often associate with “apps” is their ability to work offline, in the absence of a connection to the Internet. This is historically something the web has done pretty badly at. AppCache was a proposed standard intended for this purpose, but there are many common problems and limitations of that technology which make it difficult or impractical to use in many cases.

A new, much more versatile, proposed standard is called Service Workers. Service Workers allow a script to be registered as managing a slice of the web, even whilst offline, by intercepting HTTP requests to URLs within a specified scope. A Service Worker can keep an offline cache of web resources and decide when to use the offline version and when to fetch a new version over the Internet.
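A minimal Service Worker along those lines might look like the following sketch (the cache name and URLs are invented for illustration):

    // sw.js: pre-cache some resources at install time, then answer
    // fetches from the cache where possible, falling back to the network.
    self.addEventListener('install', function(event) {
      event.waitUntil(caches.open('myapp-v1').then(function(cache) {
        return cache.addAll(['/myapp/index.html', '/myapp/icon.png']);
      }));
    });

    self.addEventListener('fetch', function(event) {
      event.respondWith(caches.match(event.request).then(function(cached) {
        return cached || fetch(event.request);
      }));
    });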

The programmable nature of Service Workers makes them an extremely versatile tool for adding app-like capabilities to web content and getting rid of the notion that using the web requires a persistent connection to the Internet. Service Workers have lots of support from multiple browser vendors and you can expect to see them coming to life soon.

The proposed “service_worker” property of the manifest allows a content author to define a Service Worker which should be registered with a specified URL scope when a web app is installed or bookmarked on a device. That means that in the process of installing a web app, an offline cache of web resources can be populated and other installation steps can take place.

Below is our example web app manifest with a Service Worker defined.

http://example.com/myapp/manifest.json:

    {
      "name": "My App",
      "icons": [{
        "src": "/myapp/icon.png",
        "sizes": "64x64",
        "type": "image/png"
      }],
      "start_url": "/myapp/index.html",
      "scope": "/myapp",
      "service_worker": {
        "src": "app.js",
        "scope": "/myapp"
      }
    }

Packages – The Good, the Bad and the Ugly

There’s a whole category of apps which many people refer to as “web apps” but which are delivered as a package of resources to be downloaded and installed locally on a device, separate from the web. Although these resources may use web technologies like HTML, CSS and Javascript, if those resources are not associated with real URLs on the web, then in my view they are by definition not part of a web app.

The reason this approach is commonly taken is that it allows operating system developers and content authors to side-step some of the current shortcomings of the web platform. Packaging all of the resources of an app into a single file which can be downloaded and installed on a device is the simplest way to solve the offline problem. It also has the convenience that the contents of that package can easily be reviewed and cryptographically signed by a trusted party in order to safely give the app privileged access to system functions which would currently be unsafe to expose to the web.

Unfortunately the packaged app approach misses out on many of the biggest benefits of the web, like its universal and inter-linked nature. You can’t hyperlink into a packaged app, and providing an updated version of the app requires a completely different mechanism to that of web content.

We have seen above how Service Workers hold some promise in finally solving the offline problem, but packages as a concept may still have some value on the web. The proposed “Packaging on the Web” specification is exploring ways to take advantage of some of the benefits of packages, whilst retaining all the benefits of URLs and the web.

This specification does not explore a new security model for exposing more privileged APIs to the web however, which in my view is the single biggest unsolved problem we now have left on the web as a platform.

Conclusions

In conclusion, a look at some of the latest emerging web standards tells us that the answer to the question “what is a web app?” is that a web app is simply a slice of the web which can be used separately from the browser.

With that in mind, web authors should design their content to work just as well inside and outside the browser and just as well offline as online.

Packaged apps are not web apps and are always a platform-specific solution. They should only be considered as a last resort for apps which need access to privileged functionality that can’t yet be safely exposed to the web. New web technologies will help negate the need for packages for offline functionality, but packages as a concept may still have a role on the web. A security model suitable for exposing more privileged functionality to the web is one of the last remaining unsolved challenges for the web as a platform.

The web is the biggest ecosystem of content that exists, far bigger than any proprietary walled garden of curated content. Lots of cool stuff is possible using web technologies to build experiences which users would consider “app like”, but creating a great user experience on the web doesn’t require replicating all of the other trappings of proprietary apps. The web has lots of unique benefits over other app platforms and is unrivalled in its scale, ubiquity, openness and diversity.

It’s important that as we invent cool new web technologies we remember to agree on standards for them which work cross-platform, so that we don’t miss out on these unique benefits.

The web as a platform is here to stay!

Categories: LUG Community Blogs

Mick Morgan: CVE-2014-6271 bash vulnerability

Fri, 26/09/2014 - 12:38

Guess what I found in trivia’s logs this morning?

89.207.135.125 - - [25/Sep/2014:10:48:13 +0100] "GET /cgi-sys/defaultwebpage.cgi HTTP/1.0" 404 345 "-" "() { :;}; /bin/ping -c 1 198.101.206.138"

I’ll bet a lot of cgi scripts are being poked at the moment.

Check your logs, guys. A simple grep ":;}" access.log will tell you all you need to know.
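And if you want to check whether your own bash is affected, the test doing the rounds for CVE-2014-6271 is the one-liner below; a vulnerable shell prints "vulnerable" before the test message, a patched one prints just the message (possibly with a warning):

    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"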

(Update 27 September)

Digital Ocean, the company I use to host my Tor node and tails/whonix mirrors, has posted a useful note about the vulnerability. And John Leyden at El Reg posted about the problem here. Leyden’s article references some of the more authoritative discussions so I won’t repeat the links here.

All my systems were vulnerable, but of course have now been patched. However, the vulnerability has existed in bash for so long that I can’t help but feel deeply uneasy even though, as Michal Zalewski (aka lcamtuf) notes in his blog:

PS. As for the inevitable “why hasn’t this been noticed for 15 years” / “I bet the NSA knew about it” stuff – my take is that it’s a very unusual bug in a very obscure feature of a program that researchers don’t really look at, precisely because no reasonable person would expect it to fail this way. So, life goes on.

Categories: LUG Community Blogs

Jonathan McDowell: Automatic inline signing for mutt with RT

Thu, 18/09/2014 - 11:39

I spend a surprising amount of my time as part of keyring-maint telling people their requests are badly formed and asking them to fix them up so I can actually process them. The one that's hardest to fault anyone on is that we require requests to be inline PGP signed (i.e. the same sort of output as you get with "gpg --clearsign"). That's because RT does various pieces of unpacking[0] of MIME messages that mean that PGP/MIME signatures which have passed through it are no longer verifiable. Daniel has pointed out that inline PGP is a bad idea and got as far as filing a request that RT handle PGP/MIME correctly (you need a login for that but there's a generic read-only one that's easy to figure out), but until that happens the requirement stands when dealing with Debian's RT instance. So today I finally added the following lines to my .muttrc rather than having to remember to switch Mutt to inline signing for this one special case:

    send-hook . "unset pgp_autoinline; unset pgp_autosign"
    send-hook rt.debian.org "set pgp_autosign; set pgp_autoinline"

i.e. by default turn off auto inlined PGP signatures, but when emailing anything at rt.debian.org turn them on.

(Most of the other things I tell people to fix are covered by the replacing keys page; I advise anyone requesting a key replacement to read that page. There's even a helpful example request template at the bottom.)

[0] RT sticks a header on the plain text portion of the mail, rather than adding a new plain text part for the header if there are multiple parts (this is something Mailman handles better). It will also re-encode received mail into UTF-8 which I can understand, but Mutt will by default try to find an 8 bit encoding that can handle the mail, because that's more efficient, which tends to mean it picks latin1.

Update: Apparently Mutt in Jessie and beyond doesn't have the pgp_autosign option; you want crypt_autosign instead (and maybe crypt_autopgp but that defaults to yes so unless you've configured your setup to do S/MIME by default you should be fine). Thanks to Luca Capello for pointing this out.
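Presumably, though I haven't tested it, that makes the hooks above (only the autosign option appears to have been renamed):

    send-hook . "unset pgp_autoinline; unset crypt_autosign"
    send-hook rt.debian.org "set crypt_autosign; set pgp_autoinline"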

Categories: LUG Community Blogs

Jonathan McDowell: Back from DebConf 14

Fri, 12/09/2014 - 15:03

I previously forgot to mention that I was planning to attend DebConf14, having missed DebConf13. This year the conference was held in Portland, OR. This is a city I've been to many times before, and enjoy, but I hadn't spent any time wandering around its city centre as a pedestrian. I have to say I really prefer DebConfs that are held in the middle of a city. It always seems a bit of a shame to travel some distance to somewhere new and spend all the time there in a conference venue. Plus these days I have the added lure of going out and playing Ingress in a new location. DebConf14 didn't disappoint in these respects; the location was super easy to get to from the airport via public transportation, all of the evening social events were within reasonable walking distance (I'll tend to default to walking when possible) and the talk venue/accommodation were close to each other and various eating + drinking options. Throw in the fact that Portland managed to produce some excellent weather (modulo my Ingress session on the last Saturday morning, when it rained on me) and it's impossible to fault the physicalities of DebConf this year.

This year the conference format was a bit different; previous years have had a week long DebCamp before the week of the conference itself. This year's conference went for a 9 day talk schedule (Saturday -> Sunday) with various gaps of hacking time interspersed. I've found it hard to justify a full two weeks away in the past, so this setup worked a lot better from my viewpoint. Also I rarely go to DebConf with a predetermined list of things to do; the stuff I work on naturally falls out of talks I attend and informal discussions I have. Having hack time throughout the conference helped me avoid feeling I was having to trade off hacking vs talks.

Naturally enough a lot of my involvement at DebConf was around OpenPGP. Gunnar and I spent a fair bit of time getting Daniel up to speed with the keyring-maint team (Gunnar more than I, I'll confess). We finally set a hard timeframe for freeing Debian of older 1024 bit keys. I was introduced to the Gnuk, which is a particularly interesting piece of open specification hardware with a completely Free software stack on top of it that implements the OpenPGP smartcard spec. Currently it's limited to 2K keys but it's hoped that 4K support can be added (and I ended up spending a couple of hours after the closing talk hacking on the source and seeing how much needs to change for 4K support, aided by the very patient Niibe). These are the sort of things that really benefit from the face time that DebConf offers to the Debian project. I've said it before, but I think it's worth saying again: Debian is a bit like a huge telecommuting organization and it's my opinion that any such organization should try and ensure its members actually spend some time together on a regular basis. It improves the ability to work remotely a hell of a lot if you can actually put a face to the entity you're emailing / IRCing and have some sort of idea where they're coming from because you've spent some time with them, whether that's in talks or over dinner or just casual hallway chats.

For once I also found myself considering alternative employment while at DebConf and it was incredibly useful to be able to have various conversations with both old friends and people who were there with an eye on recruitment. Thanks to all those whose ears I bent about the subject (and more on the outcome in a future post). Thank you also to the many people involved with the organization of DebConf; I've been on the periphery a few times over the years and it's given me a glimpse into the amount of hard work all of the volunteers (be they global team, local organizing team, video team or just random volunteers) put into making DebConf one of my must-attend yearly conferences. If you're at all involved in Debian and haven't attended I strongly urge you to do so - I'll see you all next year at DebConf15 in Heidelberg!

Categories: LUG Community Blogs

Jonathan McDowell: Breaking up with America

Sat, 06/09/2014 - 23:38

Back in January I changed jobs. This took me longer to decide to do than it should have. My US visa (an L-1B) was tied to the old job, and not transferable, so leaving the old job also meant leaving the US. That was hard to do; I'd had a mostly fun 3 and a half years in the SF Bay Area.

The new job had an office in Belfast, and HQ in the Bay Area. I went to work in Belfast, and got sent out to the US to meet coworkers and generally get up to speed. During that visit the company applied for an H-1B visa for me. This would have let me return to the US in October 2014 and start working in the US office; up until that point I'd have continued to work from Belfast. Unfortunately there were 172,500 applications for 85,000 available visas and mine was not selected for processing.

I'm disappointed by this. I've enjoyed my time in the US. I had a green card application in process, but after nearly 2 years it still hadn't completed the initial hurdle of the labor certification stage (a combination of a number of factors, human, organizational and governmental). However the effort of returning to live here seems too great for the benefits gained. I can work for a US company with a non-US office and return on an L-1B after a year. And once again have to leave should I grow out of the job, or the job change in some way that doesn't suit me, or the company hit problems and have to lay me off. Or I can try again for an H-1B next year, aiming for an October 2015 return, and hope that this time my application gets selected for processing.

Neither really appeals. Both involve putting things on hold in the hope the longer term pans out as I hope. And to be honest I'm bored of that. I've loved living in America, but I ended up spending at least 6 months longer in the job I left in January than I'd have done if I'd been freely able to change employer without having to change continent. So it seems the time has come to accept that America and I must part ways, sad as that is. Which is why I'm currently sitting in SFO waiting for a flight back to Belfast and for the first time in 5 years not having any idea when I might be back in the US.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Lessons learned

Fri, 05/09/2014 - 09:51
  • Apparently I am unable to summarise.

  • When going on holiday somewhere, research things we might do once there rather than rely on local knowledge.

  • I am mildly allergic to raw tomatoes and need to stop bloody eating them.

  • Fork out for the TomTom map wherever we go. My aged TomTom One is still far better than anything I've found on Android so far.

    • Google Maps does not do navigation in Turkey.

    • Not all road signs in Turkey are reliable. Some rely on local knowledge.

  • Whatever the heat, keep feet covered at night; the mosquitos love them. Ouch.

  • Lost luggage will only turn up after you've given up hope and have bought replacements for the important stuff.

  • Turkey has inherited several things from French immigrants of yore. Notably, quite a bit of vocabulary and their driving style.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): O Baggage Where Art Thou

Fri, 05/09/2014 - 09:32

Owing to various factors, I'm finding it difficult to recall the things that have happened and in what order over the last few days so, for my own purposes, I'm going to note them down here.</pointless-intro>

Edit: Those were not notes. I'm a waffler.

tl;dr: We got tired, the airline lost one of our bags, we did stuff, the airline found the bag.

Monday

Woke up around 9, considered the fact that we had until around 5pm to tidy the house, tackle the Everest of dishes, wash all clothes, pack, and then leave for our holiday.

Farted around a fair bit and eventually resigned ourselves to coming back to a less-than-perfectly-tidy house. I scaled Mount Crockery at least.

Around 18:30 we eventually left for Stansted. We made good time and arrived plenty early enough for our 23:35 flight to Istanbul. On check-in, we were told the flight was delayed and was expected to depart between 01:00 and 01:30. Just what we needed with our already over-tired 2 year old.

We decided we would try to take it easy; we had a pint and I walked around the airport with the little man until he had calmed down a bit.

Tuesday

Eventually, the plane was ready for us to board at 01:15; we did so.

The flight passed easily enough. We were served a hot meal as soon as we hit cruising altitude and then we all slept through until descent. The landing was smooth and early morning Istanbul seemed warm and inviting.

Until we had found ourselves still waiting for our luggage ninety minutes later.

2 hours later, we learned that one of our bags had been lost. After some half-hearted arguing (we were just too tired), we filled in a form and left the arrivals hall with our remaining luggage. Unfortunately, the one that was missing contained most of our clothes and, frustratingly, toys and clothes for my sister-in-law's newborn.

Brother-in-law was waiting patiently outside for us. I guess he'd been there a while because he looked very relieved to see us. We made our way to the car hire place to find that, because we were so much later than we'd told them (by this time we were 3 hours later than we had booked the car for) they had decided we'd cancelled and gave our car to someone else. After some more arguing (half-hearted again), they found us another car of "equivalent size" and told us to wait round the front.

The car was a Ford Fiesta. I'm not one of those blokey types that know about cars. But I can say with certainty that I will never buy a Ford Fiesta and hope never to have to drive one again. It was tiny and weird. If we'd had our missing bag, I don't think we could have fit everything in the car. mumble mumble small mercies or summat

With the help of b-i-l, we found our way to his house - driving on the "wrong" side of the road in a "wrong"-hand-drive car after a long and stressful night with not much sleep was fun - and greeted s-i-l and her new baby and then had breakfast.

Then we slept. Then we went to the park. Then we slept.

Wednesday

The oddity of travelling at night then sleeping in the day but still being tired enough to sleep again at night was a new experience for me and I still feel quite confused, but I think I've managed to convince myself that everything above under "Tuesday" is correct.

On Wednesday, we decided on the strength of internet reviews to visit Polonezköy. Don't bother, it's rubbish. We pressed on then to the "nearby" beach. It turned out to be a 45 minute drive and a storm broke out along the way. When we arrived at the little seaside town (I don't remember its name) there was nowhere to park. Being already in a grump, we decided to head home and call the day a complete loss. Half way home, we decided we would visit Kartal instead; a town near s-i-l's.

Kartal was nice :)

Thursday

Shopping in Kadıköy, ferry to Beşiktaş, more shopping, ferry back, home. In all, a nice day. Rounded off by some quality time with a beer on the balcony. It is way too hot indoors, even at night.

Just after midnight, the airline called us to say that they had found our missing luggage and would be sending it to us tomorrow.

Fresh pants!

Categories: LUG Community Blogs

Steve Engledow (stilvoid): All fired up

Thu, 21/08/2014 - 23:52

After putting it off for various reasons for at least a couple of years, I've finally switched back from Chromium to Firefox and I'm very glad I did so.

The recent UI change seems to have upset a lot of Firefox users but it was instrumental in prompting my return and I'm sure others will have felt the same; Firefox once again looks and feels like a modern browser.

I have to say also that it feels an imperial bucketload snappier than Chromium too. The exact opposite was one of the reasons I left in the first place.

Good job Firefolk :)

Categories: LUG Community Blogs

Mick Morgan: net neutrality

Wed, 13/08/2014 - 21:46

My apologies that this is a few weeks late – but it still bears posting. John Oliver at HBO gave the best description of the net neutrality argument I have seen so far.

Following that broadcast, the FCC servers were, rather predictably, overwhelmed by the outraged response from the trolls that Oliver set loose.

Unfortunately, as John Naughton reports in the Observer, the FCC are unlikely to be moved by that.

Categories: LUG Community Blogs

Mick Morgan: levison on dime

Mon, 11/08/2014 - 20:14

Ladar Levison and Stephen Wyatt presented the upcoming Dark Internet Mail Environment (DIME) at Defcon22 this week. According to El Reg, Levison, who shut down his previous mail service Lavabit rather than comply with FBI demands that he divulge the private SSL certificates used to encrypt traffic on that service, said:

“I’m not upset that I got railroaded and I had to shut down my business … I’m upset because we need a mil-spec cryptographic mail system for the entire planet just to be able to talk to our friends and family without any kind of fear of government surveillance”.

I think that puts the problem into perspective.

Categories: LUG Community Blogs

Ben Francis: Building the Firefox Browser for Firefox OS

Thu, 07/08/2014 - 15:41

Re-posted from Mozilla Hacks.

Boot to Gecko

As soon as the Boot to Gecko (B2G) project was announced in July 2011 I knew it was something I wanted to contribute to. I’d already been working on the idea of a browser-based OS for a while but it seemed Mozilla had the people, the technology and the influence to build something truly disruptive.

At the time Mozilla weren’t actively recruiting people to work on B2G, the team still only consisted of the four co-founders and the project was little more than an empty GitHub repository. But I got in touch the day after the announcement and after conversations with Chris, Andreas and Mike over Skype and a brief visit to Silicon Valley, I somehow managed to convince them to take me on (initially as a contractor) so I could work on the project full time.

A Web Browser Built from Web Technologies

On my first day Chris Jones told me “The next, highest-priority project is a very basic web browser, just a URL bar and back button basically.”

Chris and his bitesize browser, Taipei, December 2011

The team was creating a prototype smartphone user interface codenamed “Gaia”, built entirely with web technologies. Partly to prove it could be done, but partly to find the holes in the web platform that made it difficult and fill those holes with new Web APIs. I was asked to work on the first prototypes of a browser app, a camera app and a gallery app to help find some of those holes.

You might wonder why a browser-based OS needs a browser app at all, but the thinking for this prototype was that if other smartphone platforms had a browser app, then B2G would need one too.

The user interface of the desktop version of Firefox is written in highly privileged “chrome” code using the XUL markup language. On B2G it would need to be written in “content” using nothing but HTML, CSS and JavaScript, just like all the other apps. That would present some interesting challenges.

In the beginning, there was an <iframe>

It all started with a humble iframe, a text input for the URL bar and a go button; in fact you can see the first commit here. When you clicked the go button, it set the src attribute of the iframe to the contents of the text input, which caused the iframe to load the web page at that URL.

First commit, November 2011
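The essence of that first version was little more than the following sketch (reconstructed for illustration, not the actual commit):

    <!DOCTYPE html>
    <html>
      <body>
        <input id="url-bar" type="text">
        <button id="go-button">Go</button>
        <iframe id="browser-frame"></iframe>
        <script>
          // Load whatever is typed in the URL bar into the iframe
          document.getElementById('go-button').onclick = function() {
            document.getElementById('browser-frame').src =
                document.getElementById('url-bar').value;
          };
        </script>
      </body>
    </html>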

The first problem with trying to build a web browser using an iframe is that the same-origin policy in JavaScript prevents you accessing just about any information about what’s going on inside it if the content comes from a different origin than the browser itself. In particular, it’s not possible to access the contentWindow property and all of the information that gives access to. This policy exists for good reasons so in order to build a fully functional web browser we would have to figure out a way for a privileged web app to safely poke holes in that cross-origin boundary to get just enough information to do its job, but without creating serious security vulnerabilities or compromising the user’s privacy.

Another problem we came across quite quickly was that many web authors will go to great lengths to prevent their web site being loaded inside an iframe in order to prevent phishing attacks. A web server can send an X-Frame-Options HTTP response header instructing a user agent to simply not render the content, and there are also a variety of techniques for “framebusting” where a web site will actively try to break out of an iframe and load itself in the parent frame instead.

It was quickly obvious that we weren’t going to get very far building a web browser using web technologies without evolving the web technologies themselves.

The Browser API

I met Justin Lebar at the first B2G work week in Taipei in December 2011. He was tasked with modifying Gecko to make the browser app on Boot to Gecko possible. To me Gecko was (and largely still is) a giant black box of magic spells which take the code I write and turn it into dancing images on the screen. I needed a wizard who had a grasp on some of these spells, including a particularly strong spell called Docshell which only the most practised of wizards dare peer into.

Justin at the first B2G Work Week in Taipei, December 2011

When I told Justin what I needed he made the kinds of sounds a mechanic makes when you take your car in for what you think is a simple problem but which turns out to cost the price of a new car. Justin had a better idea than I did as to what was needed, but I don’t think either of us realised the full scale of the task at hand.

With the addition of a simple boolean “mozbrowser” attribute to the HTML iframe element in Gecko, the Browser API was born. I tried adding features to the browser app and every time I found something that wasn’t possible with current web technologies, I went back to Justin to get him to cast a new magic spell.

There were easier approaches we could have taken to build the browser app. We could have added a mechanism to allow the browser to inject scripts into the iframe and communicate freely with the content inside, but we wanted to provide a safe API which anyone could use to build their own browser app and this approach would be too risky. So instead we built an explicit privileged API into the DOM to create a new class of iframe which could one day become a new standard HTML tag.

Keeping the Web Contained

The first thing we did was to try to trick web pages loaded inside an iframe into thinking they were not in fact inside an iframe. At first we had a crude solution which just ignored X-Frame-Options headers for iframes in whitelisted domains that had the mozbrowser attribute. That’s when we discovered that some web sites are quite clever at busting out of iframes. In the end we had to take other measures like making sure window.top pointed at the iframe rather than its parent so a web site couldn’t detect that it had a parent, and eventually also run every browser tab in its own system process to completely isolate them from each other.

Once we had the animal that is the web contained, we needed to poke a few air holes to let it breathe. There’s some information we need to let out of the iframe in the form of events: when the location, title or icon of a web page changes (locationchange, titlechange and iconchange); when a page starts and finishes loading (loadstart, loadend) and when the security characteristics of the currently loaded page changes (securitychange). This all allows us to keep the address bar and title bar up to date and show a progress indicator.

The browser app needs to be able to navigate the iframe by telling it to goBack(), goForward(), stop() and reload(). We also need to be able to explicitly ask for information like characteristics of the session history (getCanGoBack(), getCanGoForward()) to determine which navigation buttons to display.
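Put together, driving a mozbrowser iframe looks roughly like this sketch (the events are delivered with a mozbrowser prefix; urlBar and backButton are assumed to be existing elements in the app):

    var frame = document.createElement('iframe');
    frame.setAttribute('mozbrowser', 'true');
    frame.src = 'http://example.com/';
    document.body.appendChild(frame);

    // Keep the address bar in sync with the loaded page
    frame.addEventListener('mozbrowserlocationchange', function(e) {
      urlBar.value = e.detail;
    });

    // Navigate backwards, enabling the button only when possible;
    // getCanGoBack() returns a DOMRequest rather than a plain value
    backButton.onclick = function() { frame.goBack(); };
    frame.getCanGoBack().onsuccess = function(e) {
      backButton.disabled = !e.target.result;
    };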

With these basics in place it was possible to build a simple functional browser app.

The Prototype

The Gaia project’s first UX designer was Josh Carpenter. At an intensive work week in Paris the week before Mobile World Congress in February 2012, Josh created UI mockups for all the basic features of a smartphone, including a simple browser, and we built a prototype to those designs.

Josh and me plotting over a beer in Paris.

 

The prototype browser app could navigate web content, keep it contained and display basic information about the content being viewed. This would be the version demonstrated at MWC in Barcelona that year.

Simple browser demo for Mobile World Congress, February 2012

Building a Team

At a work week in Qualcomm’s offices in San Diego in May 2012 I was able to give a demo of a slightly more advanced basic browser web app running inside Firefox on the desktop. But it was still very basic. We needed a team to start building something good enough that we could ship it on real devices.

“Browser Inception”, San Diego May 2012

San Diego was also where I first met Dale Harvey, a brave Scotsman who came on board to help with Gaia. His first port of call was to help out with the browser app.

Dale Getting on Board in San Diego, May 2012

One of the first things Dale worked on was creating multiple tabs in the browser and even adding a screenshotting spell to the Browser API to show thumbnails of browser tabs (I told you he was brave).

By this time we had also started to borrow Larissa Co, a brilliant designer from the Firefox team, to work on the interaction design and Patryk Adamczyk, formerly of RIM, to work on the visual design for the browser on B2G. That was when it started to look more like a Firefox browser.

Early UI Mockup, July 2012

Things that Pop Up

Web pages like to make things pop up. For a start they like to alert(), prompt() or confirm() things with you. Sometimes they like to open() a new browser window (and close() them again), open a link in a _blank window, ask you for a password, ask for your permission to do something, ask you to select an option from a menu, open a context menu or confirm re-sending the contents of a form.

An alert(), version 1.0

All of this required new events in the Browser API, which meant more spells for Justin to cast.

Scroll, Pan and Zoom

Moving around web pages on mobile devices works a little differently from on the desktop. Rather than scroll bars or a scroll wheel on a mouse it uses touch input and a system called Asynchronous Pan and Zoom to allow the user to pan around a web page by dragging it and scrolling it using “kinetic scrolling” which feels like it has some physics to it.

The first implementation of kinetic scrolling was written in JavaScript by Frenchman and Gaia leader Vivien Nicolas, specifically for Gaia, but it would later be written in a cross-platform way in Gecko to unify the code used on B2G and Android.

One of the trickier interactions to get right was that we wanted the address bar to hide as you scrolled down the page in order to make more room for content, then show again when you scroll back to the top of the page.

This required adding asyncscroll events which tapped directly into the Asynchronous Pan and Zoom code so that the browser knew not only when the user directly manipulated the page, but how much it scrolled based on physics, asynchronously from the user’s interaction.

Storing Stuff

One of the most loved features of Firefox is the “Awesomebar”, a combined address bar, search bar (and on mobile, title bar) which lets you quickly get to the content you’re looking for. You type a few characters and immediately start to see matching web pages from your browsing history, ranked by a “frecency” algorithm.

On the desktop and on Android all of this data is stored in the “Places” database as part of privileged “chrome” code. In order to implement this feature in B2G we would need to use the local storage capabilities of the web, and for that we chose IndexedDB. We built a Places database in IndexedDB which would store all of the “places” a user visits on the web including their URL, title and icon, and store all the times the user visited that page. It would also be used to store the user’s bookmarks and rank top sites by “frecency”.
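A toy version of such a store gives a flavour of the approach (the schema here is invented for illustration, not the actual Gaia code):

    var request = indexedDB.open('places', 1);

    request.onupgradeneeded = function(e) {
      // One record per URL, with an index so top sites can be ranked
      var store = e.target.result.createObjectStore('places', { keyPath: 'url' });
      store.createIndex('frecency', 'frecency', { unique: false });
    };

    request.onsuccess = function(e) {
      var db = e.target.result;
      db.transaction('places', 'readwrite').objectStore('places').put({
        url: 'http://example.com/',
        title: 'Example',
        iconUri: 'http://example.com/favicon.ico',
        visits: [Date.now()],
        frecency: 1
      });
    };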

Awesomebar, version 1.0

Clearing Stuff

As you browse around the web Gecko also stores a bunch of data about the places you’ve been. That can be cookies, offline pages, localStorage, IndexedDB databases and all sorts of other bits of data. Firefox browsers provide a way for you to clear all of this data, so methods needed to be added to the Browser API to allow this data to be cleared from the browser settings in B2G.

Browser settings, version 1.0

Handling Crashes

Sometimes web pages crash the browser. In B2G every web app and every browser tab runs in its own system process so that should the worst happen, it will only cause that one window/tab to crash. In fact, due to the memory constraints of the low-end smartphones B2G would initially target, sometimes the system will intentionally kill a background app or browser tab in order to conserve memory. The browser app needs to be informed when this happens and needs to be able to recover seamlessly so that in most cases the user doesn’t even realise a process was killed. Events were added to the Browser API for this purpose.

Crashed tab, version 1.0

Talking to Other Apps

Common use cases of a mobile browser are for the user to want to share a URL using another app like a social networking tool, or for another app to want to view a URL using the browser.

B2G implemented Web Activities for this purpose, to add a capability to the web for apps to interact with each other, but in an app-agnostic way. So for example the user can click on a share button in the browser app and B2G will fire a “share URL” Web Activity which can then be handled by any installed app which has registered to handle that type of Web Activity.

Share Web Activity, version 1.2
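Firing a “share URL” activity from the browser app looked something like this sketch (the exact payload here is invented):

    var activity = new MozActivity({
      name: 'share',
      data: { type: 'url', url: 'http://example.com/' }
    });
    activity.onsuccess = function() { console.log('URL shared'); };
    activity.onerror = function() { console.log('Share failed or was cancelled'); };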

Working Offline

Despite the fact that B2G and Gaia are built on the web, it is a requirement that all of the built-in Gaia apps should be able to function offline, when an Internet connection is unavailable or patchy, so that the user can still make phone calls, take photos and listen to music etc. At first we started to use AppCache for this purpose, which was the web’s first attempt at making web apps work offline. Unfortunately we soon ran into many of the common problems and limitations of that technology and found it didn’t fulfill all of our requirements.

In order to ship version 1.0 of B2G on time, we were forced to implement “packaged apps” to fulfill all of the offline and security requirements for built-in Gaia apps. Packaged apps solved our problems but they are not truly web apps because they don’t have a real URL on the Internet, and attempts to standardise them didn’t get much traction. Packaged apps were intended very much as a temporary solution and we are working hard at adding new capabilities like ServiceWorkers, standardised hosted packages and manifests to the web so that eventually proprietary packaged apps won’t be necessary for a full offline experience.

Offline, version 1.4

Spit and Polish

Finally we applied a good deal of spit and polish to the browser app UI to make it clean and fluid to use, making full use of hardware-accelerated CSS animations, and a sprinkling of Firefoxy interaction and visual design to make the youngest member of the Firefox browser family feel consistent with its brothers and sisters on other platforms.

Shipping 1.0

At an epic work week in Berlin in January 2013 hosted by Deutsche Telekom the whole B2G team, including engineers from multiple competing mobile networks and device manufacturers, got together with the common cause of shipping B2G 1.0, in time to demo at Mobile World Congress in Barcelona in February. The team sprinted towards this goal by fixing an incredible 200 bugs in one week.

Version 1.0 Team, Berlin Work Week, January 2013

In the last few minutes of the week Andreas Gal excitedly declared “Zarro Gaia Boogs”, signifying version 1.0 of Gaia was complete, with the rest of B2G to shortly follow over the weekend. Within around 18 months a dedicated team spanning multiple organisations had come together working entirely in the open to turn an empty GitHub repository into a fully functioning mobile operating system which would later ship on real devices as Firefox OS 1.0.1.

Zarro Gaia Boogs, January 2013

Browser app v1.0

So having attended Mobile World Congress 2012 with a prototype and a promise to deliver commercial devices into the market, we were able to return in 2013 having delivered on that promise by fully launching the “Firefox OS” brand with multiple devices on multiple mobile networks with a launch that really stole the show at the biggest mobile conference in the world. Firefox OS had arrived.

Mobile World Congress, Barcelona, February 2013

1.x

Firefox OS 1.1 quickly followed and by the time we started working on version 1.2 the project had grown significantly. We re-organised into autonomous agile teams focused on product areas, the browser app being one. That meant we now had a dedicated team with designers, engineers, a test engineer, a product manager and a project manager.

The browser team, London work week, July 2013

Firefox OS moved to a rapid release “train model” of development like Firefox, where a new version is delivered every 12 weeks. We quickly added new features and worked on improving performance to get the best out of the low end hardware we were shipping on in emerging markets.

Browser app v1.4

“Haida”

Version 1.0 of Firefox OS was very much about proving that we could build what already exists on other smartphones, but entirely using open web technologies. That included a browser app.

Once we’d proved that was possible and put real devices on shelves in the market it was time to figure out what would differentiate Firefox OS as a product going forward. We wanted to build something that doesn’t just imitate what’s already been done, but which plays to the unique strengths of the web to build something that’s true to Mozilla’s DNA, is the best way to experience the web, and is the platform that HTML5 deserves.

Below is a mockup I created right back towards the start of the project at the end of 2011, before we even had a UX team. I mentioned earlier that the Awesomebar is a core part of the Firefox experience in Firefox browsers. My proposal back then was to build a system-wide Awesomebar which could search the whole device, including your apps and their contents, and be accessible from anywhere in the OS.

Very early mockup of a system-wide Awesomebar, December 2011

At the time, this was considered a little too radical for version 1.0 and our focus really needed to be on innovating in the web technology needed to build a mobile OS, not necessarily the UX. We would instead take a more conservative approach to the user interface design and build a browser app a lot like the one we’d built for Android.

In practice that meant that we in fact built two browsers in Firefox OS. One was the browser app which managed the world of “web sites” and the other was the window manager in the system app which managed the world of “web apps”.

In reality on the web there isn’t so much of a distinction between web apps and web sites – each exists on a long continuum of user experience with a very blurry boundary in the middle.

In March 2013, with Firefox OS 1.0 out of the door, Josh Carpenter put me in touch with Gordon Brander, a member of the UX team who had been thinking along the same lines as me. In fact Gordon, being as much of an engineer as he is a designer, had gone as far as to write a basic prototype in JavaScript.

Gordon’s Rocketbar Prototype, March 2013

Gordon and I started to meet weekly to discuss the concept he had by then codenamed “Rocketbar”, but it was a bit of a side project with a few interested people.

In April 2013 the UX team had a summit in London where they got together to discuss future directions for the user experience of Firefox OS. I was lucky enough to be invited along to not only observe but participate in this process, Josh being keen to maintain a close collaboration between Design and Engineering.

We brainstormed around what was unique about the experience of the web and how we might create a unique user experience which played to those strengths. A big focus was on “flow”, the way that we can meander through the web by following hyperlinks. The web isn’t a world of monolithic apps with clear boundaries between them, it is an experience of surfing from one web site to another, flowing through content.

Brainstorming session, London, April 2013

In the coming weeks the UX team would create some early designs for a concept (eventually codenamed “Haida”) which would blur the lines between web apps and web sites and create a unique user experience which flows like the web does. This would eventually include not only the “Rocketbar”, which would be accessible across the whole OS and seamlessly adapt to different types of web content, but also “sheets”, which would split single page web apps into multiple pages which you could swipe through with intuitive edge gestures. It would also eventually include a content model based around live apps which you can surf to, use, and then bookmark if you choose to, rather than monolithic apps which you have to install from a central app store before you can use them.

In June 2013 a small group of designers and engineers met in Paris to develop a throwaway prototype of Haida, to rapidly iterate on some of the more radical concepts and put them through user testing.

Haida Prototyping, Paris, June 2013

Josh and Gordon working in a highly co-ordinated fashion, Paris, June 2013

Wizards at work, Paris, June 2013

2.x and the Future

Fast forward to the present and the browser team has been merged into the “Systems Front End” team. The results of the Haida prototyping and user testing are slowly starting to make their way into the main Firefox OS product. It won’t happen all at once, but it will happen in small pieces as we iterate and learn.

In version 2.0 of Firefox OS the homescreen search feature from 1.x will be replaced with a new search experience developed in conjunction with a new homescreen, implemented by Kevin Grandon, which will lay the foundations for “Rocketbar”. In version 2.1 our intention is to completely merge the browser app into the system app so that browser tabs become “sheets” alongside apps in the task manager and the “Rocketbar” is accessible from anywhere in the OS. The Rocketbar will adapt to different types of web content and shrink down into the status bar when not in use. Edge gestures will allow you to swipe between web apps and browser windows and eventually apps will be able to spawn multiple sheets.

UI Mockups of Rocketbar in expanded and collapsed state, July 2014

In parallel we see the evolution of web standards around manifests, packages and webviews and ongoing discussions around what defines the scope of an “app”.

Version 1.x of Firefox OS was built with web technologies but still has quite a similar user experience to other mobile platforms when it comes to installing and using apps, and browsing the web. Going forward I think you can expect to see the DNA of the web come through into the user interface with a unified experience which breaks down the barriers between web apps and web sites, allowing you to freely flow between the two.

Firefox OS is an open source project developed completely in the open. If you’re interested in contributing to Gaia, take a look at the “Developing Gaia” page on MDN. If you’re interested in creating your own HTML5 app to run on Firefox OS take a look at the “App Center”.

Categories: LUG Community Blogs

Chris Lamb: Strava Enhancement Suite update

Wed, 06/08/2014 - 09:22

I've been told I don't blog about my projects often enough, so here's a feature update on my Chrome Extension that adds various improvements to the Strava.com fitness tracker.

All the features are optional and can be individually enabled in the options panel.


Repeated segments

This adds aggregate data (fastest, slowest, average, etc.) when segments are repeated within an activity. It's particularly useful for laps or—like this Everesting attempt—hill repeats:


Leaderboard default

Changes the default leaderboard away from "Overall" when viewing a segment effort. The most rewarding training often comes from comparing your own past performances rather than those of others, so viewing your own results by default can make more sense.

You can select any of Men, Women, I'm Following or My Results:


Hide feed entries

Hides various entry types in the activity feed that can get annoying. You currently have the option of hiding challenges, route creations, goals created or club memberships:


External links

Adds links to Strava Labs Flyby, Veloviewer, Race Shape, KOM Club, etc. on activity, segment detail and Challenge pages:


Variability Index

Calculates a Variability Index (VI) from the weighted average power and the average power, giving an indication of how "smooth" a ride was. Requires a power meter. A VI of 1.0 would mean a ride was paced "perfectly", with very few surges of effort:
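
As a rough worked example (the numbers here are invented): a ride with a weighted average power of 250W and an average power of 220W would have a VI of about 1.14:

$ awk 'BEGIN { printf "%.2f\n", 250 / 220 }'
1.14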


Infinite scroll

Automatically loads more dashboard entries when reaching the bottom, saving a tedious click on the "More" button:


Running TSS

Estimates a run's Training Stress Score (TSS) from its Grade Adjusted Pace distribution, a measure of the workout's duration and intensity relative to the athlete's capability, providing an insight into correct recovery time and overall training load over time:
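
The extension's exact calculation isn't shown here, but the standard TSS formula is duration in hours × IF² × 100, where the intensity factor (IF) compares the effort to the athlete's threshold. As a hypothetical example, a one-hour run at an IF of 0.85 works out at around 72 TSS:

$ awk 'BEGIN { hours = 1; intensity = 0.85; printf "%.0f\n", hours * intensity * intensity * 100 }'
72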


Hide "find friends"

Hides social networking buttons, including prompts to invite or find further friends:


"Enter" posts comment

Immediately posts a comment when pressing the Enter/Return key in the edit box, rather than adding a newline:


Compare running

Changes the default sport for the "Side by Side comparison" module to running when viewing athlete profiles:


Running cadence

Shows running cadence by default in the elevation profile:


Running heart rate

Shows running heart rate by default in the elevation profile:


Estimated FTP

Selects "Show Estimated FTP" by default on the Power Curve page:


Standard Google map

Prefers the "Standard" Google map over the "Terrain" view:


Let me know if you have any comments, suggestions or other feedback. If you find this software useful, please consider donating via PayPal to support further development. You can also view the source and contribute directly on GitHub.

Categories: LUG Community Blogs

Mick Morgan: punctuation matters

Mon, 28/07/2014 - 15:12

There is a nice tweet over at @NSA_PR. It reads:

We take your privacy, seriously.

Beyond parody.

Categories: LUG Community Blogs

Chris Lamb: start-stop-daemon: --exec vs --startas

Mon, 28/07/2014 - 14:15

start-stop-daemon is the classic tool on Debian and derived distributions for managing system background processes. A typical invocation from an initscript is as follows:

start-stop-daemon \
    --quiet \
    --oknodo \
    --start \
    --pidfile /var/run/daemon.pid \
    --exec /usr/sbin/daemon \
    -- -c /etc/daemon.cfg -p /var/run/daemon.pid

The basic operation is that it will first check whether /usr/sbin/daemon is already running and, if it is not, execute /usr/sbin/daemon -c /etc/daemon.cfg -p /var/run/daemon.pid. This process then has the responsibility of daemonising itself and writing the resulting process ID to /var/run/daemon.pid.

start-stop-daemon then waits until /var/run/daemon.pid has been created as the test of whether the service has actually started, raising an error if that doesn't happen.

(In practice, the locations of all these files are parameterised to prevent DRY violations.)
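
For illustration, a parameterised initscript stanza might look something like the following minimal sketch (all names are illustrative):

#!/bin/sh
# Sketch only: parameterise the daemon's paths once and reuse them
# for both the start and stop actions.
NAME=daemon
DAEMON=/usr/sbin/$NAME
PIDFILE=/var/run/$NAME.pid
DAEMON_ARGS="-c /etc/$NAME.cfg -p $PIDFILE"

case "$1" in
  start)
    # $DAEMON_ARGS is deliberately unquoted so it word-splits into
    # separate arguments.
    start-stop-daemon --quiet --oknodo --start \
        --pidfile "$PIDFILE" --exec "$DAEMON" -- $DAEMON_ARGS
    ;;
  stop)
    start-stop-daemon --quiet --oknodo --stop \
        --pidfile "$PIDFILE" --exec "$DAEMON"
    ;;
esac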

Idempotency

By idempotency we mostly mean that repeated calls to /etc/init.d/daemon start should not start multiple instances of our daemon.

This might not seem a particularly big issue at first, but the increased adoption of stateless configuration management tools such as Ansible (which should be free to call start repeatedly to ensure a started state) means that one should be particularly careful of this apparent corner case.

In its usual operation, start-stop-daemon ensures only one instance of the daemon is running with the --exec parameter: if the specified pidfile exists and the PID it refers to is an "instance" of that executable, then it is assumed that the daemon is already running and another copy is not started. This is handled in the pid_is_exec method (source) - the /proc/$PID/exe symlink is resolved and checked against the value of --exec.
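
You can reproduce this check by hand. Assuming a pidfile at /var/run/daemon.pid and a compiled /usr/sbin/daemon binary, resolving the symlink yourself might look like:

$ readlink /proc/$(cat /var/run/daemon.pid)/exe
/usr/sbin/daemon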

Interpreted scripts

However, one case where this doesn't work is interpreted scripts. Let's look at what happens if /usr/sbin/daemon is such a script, e.g. a file that starts:

#!/usr/bin/env python
# [..]

The problem this introduces is that /proc/$PID/exe now points to the interpreter instead, often with an essentially non-deterministic version suffix:

$ ls -l /proc/14494/exe
lrwxrwxrwx 1 www-data www-data 0 Jul 25 15:18 /proc/14494/exe -> /usr/bin/python2.7

When this process is examined using the --exec mechanism outlined above, it will not be recognised as an instance of /usr/sbin/daemon, and therefore another copy of the daemon will be incorrectly started.

--startas

The solution is to use the --startas parameter instead. This omits the /proc/$PID/exe check and merely tests whether a PID with that number is running:

start-stop-daemon \
    --quiet \
    --oknodo \
    --start \
    --pidfile /var/run/daemon.pid \
    --startas /usr/sbin/daemon \
    -- -c /etc/daemon.cfg -p /var/run/daemon.pid

Whilst it is therefore less reliable (in that the PID found in the pidfile could belong to an entirely different process), it's probably an acceptable trade-off when weighed against the risk of running multiple instances of the daemon.

This danger can be ameliorated by using some of start-stop-daemon's other matching tests, such as --user or even --name.
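
For example (a sketch only; the daemon's user and process name are hypothetical), combining --startas with extra matching tests:

start-stop-daemon \
    --quiet \
    --oknodo \
    --start \
    --pidfile /var/run/daemon.pid \
    --startas /usr/sbin/daemon \
    --user daemon \
    --name daemon \
    -- -c /etc/daemon.cfg -p /var/run/daemon.pid

Note that --name matches the process name (comm), which for an interpreted script is typically the name of the script itself rather than that of the interpreter.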

Categories: LUG Community Blogs

Mick Morgan: department of dirty

Wed, 23/07/2014 - 13:42

Like most ‘net users I get my fair share of spam. Most of it gets binned automatically by my email system, but of course some still gets through so I am used to hitting the delete button on random email from .ru domains offering me the opportunity to “impress my girl tonight”.

Most such phishing email relies on the recipient being dumb enough, naive enough, or (possibly) drunk enough to actually click through the link to the malicious website. I was therefore more than a little astonished at an email I received today from the open rights group. That email is given below in its entirety (I have obfuscated my email address for obvious reasons).

From: Department of Dirty
To: xxxxxxxx@yyy.zzz
Subject: Cleaning up the Internet
Date: Wed, 23 Jul 2014 07:14:18 -0400 (EDT)

Dear Mick,

Ever thought the internet was just too big? Want to help clean up online filth?

*Welcome to the Department of Dirty*

Watch the Department tackling its work here: www.departmentofdirty.co.uk and share our success, as we stop one man try to get one over us with his ‘spotted dick recipe’:

Department of Dirty Video: http://www.departmentofdirty.co.uk/

The Department of Dirty is working with internet and mobile companies to stop the dirty internet. We are committed to protecting children and adults from online filth such as:

*Talk to Frank: This government website tries to educate young people about drugs. We all know what ‘education’ means, don’t we? Blocked by Three.
*Girl Guides Essex:
They say, ‘guiding is about acquiring skills for life’. We say, why would young girls need skills? Blocked by BT.
*South London Refugee Association:
This charity aims to relieve poverty and distress. Not on our watch they don’t. Blocked by BT, EE, Sky and VirginMedia

This is just the tip of the iceberg.

We need you to help us take a stand against blogs, charities and education websites, all of which are being blocked [1]. It’s time to stop this sick filth. Together, we can clean up the internet.

http://www.departmentofdirty.co.uk

Sincerely,

Your Department of Dirty representative

[1] You can find out what we’re blocking at this convenient website: https://www.blocked.org.uk/

[DISCLAIMER] This email has come from the Open Rights Group. This email was delivered to: xxxxxxxx@yyy.zzz If you wish to opt out of future emails, you can do so here.

Now, I’m an ORG supporter (i.e. I am a paying member) and I am sure that someone, somewhere in ORG thought that this email campaign was a great idea. After all, it follows up the ORG’s earlier research on the fairly obvious stupidities arising from the implementation of Dave’s anti-porn campaign, it looks “ironic”, and it uses a snappy domain name which has shades of Monty Python about it. But I’m sorry, in my view this most certainly is not a good idea and I’m sure that ORG will come to regret it.

One of the most fundamental pieces of advice any and every ‘net user is beaten up with is “do not click on links in unsolicited emails”. In particular, the advice normally goes on – “if that email is from an unknown source, or has in any way a suspicious from address, you should immediately bin it”.

This email comes from an unknown address with a wonderfully prurient domain name. Even if it is successful and gets to the intended email inbox [1], it then relies on the recipient breaking a fundamental security rule. It does this by encouraging him (this looks to be male targeted) to click on a link which the naive might believe leads to a porn video.

How exactly is that going to help?

([1] Note. It got to my email inbox because the email system at e-activist.com which sent it is allowed by my filters.)

Categories: LUG Community Blogs

MJ Ray: Three systems

Tue, 22/07/2014 - 04:59

There are three basic systems:

The first is slick and easy to use, but fiddly to set up correctly and if you want to do something that its makers don’t want you to, it’s rather difficult. If it breaks, then fixing it is also fiddly, if not impossible and requiring complete reinitialisation.

The second system is an older approach, tried and tested, but fell out of fashion with the rise of the first and very rarely comes preinstalled on new machines. Many recent installations can be switched to and from the first system at the flick of a switch if wanted. It needs a bit more thought to operate but not much and it’s still pretty obvious and intuitive. You can do all sorts of customisations and it’s usually safe to mix and match parts. It’s debatable whether it is more efficient than the first or not.

The third system is a similar approach to the other two, but simplified in some ways and all the ugly parts are hidden away inside neat packaging. These days you can maintain and customise it yourself without much more difficulty than the other systems, but the basic hardware still attracts a price premium. In theory, it’s less efficient than the other types, but in practice it’s easier to maintain so doesn’t lose much efficiency. Some support companies for the other types won’t touch it while others will only work with it.

So that’s the three types of bicycle gears: indexed, friction and hub. What did you think it was?

Categories: LUG Community Blogs

Chris Lamb: Disabling internet for specific processes with libfiu

Mon, 21/07/2014 - 19:26

My primary use case is to prevent testsuites and build systems from contacting internet-based services: at the very least this introduces an element of non-determinism, and at worst it exposes the build to malicious code.

I use Alberto Bertogli's libfiu for this, specifically the fiu-run utility, which is part of the fiu-utils package on Debian and Ubuntu.
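
On Debian or Ubuntu it can be installed in the usual way:

$ sudo apt-get install fiu-utils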

Here's a contrived example, where I prevent Curl from talking to the internet:

$ fiu-run -x -c 'enable name=posix/io/net/connect' curl google.com
curl: (6) Couldn't resolve host 'google.com'

... and here's an example of it detecting two possibly internet-connecting tests:

$ fiu-run -x -c 'enable name=posix/io/net/connect' ./manage.py test
[..]
----------------------------------------------------------------------
Ran 892 tests in 2.495s

FAILED (errors=2)
Destroying test database for alias 'default'...

Note that libfiu inherits all the drawbacks of LD_PRELOAD; in particular, we cannot limit child processes that execute setuid binaries such as /bin/ping, as the preloaded library is ignored for them:

$ fiu-run -x -c 'enable name=posix/io/net/connect' ping google.com
PING google.com (173.194.41.65) 56(84) bytes of data.
64 bytes from lhr08s01.1e100.net (173.194.41.65): icmp_req=1 ttl=57 time=21.7 ms
64 bytes from lhr08s01.1e100.net (173.194.41.65): icmp_req=2 ttl=57 time=18.9 ms
[..]

Whilst it would certainly be more robust and flexible to use iptables—such as allowing localhost and other local socket connections but disabling all others—I gravitate towards this entirely userspace solution as it requires no setup and I can quickly modify it to block other calls on an ad-hoc basis. The list of other "modules" libfiu supports is viewable here.
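
For comparison, the iptables approach might look something like this sketch, which assumes a dedicated "builder" user (hypothetical) that runs the build or testsuite:

# Allow loopback traffic for the build user; reject all of its other output.
$ sudo iptables -A OUTPUT -o lo -m owner --uid-owner builder -j ACCEPT
$ sudo iptables -A OUTPUT -m owner --uid-owner builder -j REJECT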

Categories: LUG Community Blogs