Planet ALUG

Planet ALUG - http://planet.alug.org.uk/

Mick Morgan: solidarity with the tor project

Sat, 13/12/2014 - 19:16

On Thursday 11 December, Roger Dingledine of the Tor project posted the following email to the “tor-talk” mail list (to which I am subscribed).

I’d like to draw your attention to

https://blog.torproject.org/blog/solidarity-against-online-harassment
https://twitter.com/torproject/status/543154161236586496

One of our colleagues has been the target of a sustained campaign of harassment for the past several months. We have decided to publish this statement to publicly declare our support for her, for every member of our organization, and for every member of our community who experiences this harassment. She is not alone and her experience has catalyzed us to action. This statement is a start.

Roger asked those who deplored on-line harassment (of any person, for any reason) and who supported the Tor project’s action in publicly condemning the harassment of one of the Tor developers to add their name and voice to the blog post.

I am proud to have done so.

Categories: LUG Community Blogs

Ben Francis: The Times They Are A Changin’ (Open Web Remix)

Thu, 11/12/2014 - 11:26

In the run-up to the “Mozlandia” work week in Portland, and reflecting on the last three years of the Firefox OS project, I’ve reworked a Bob Dylan song for a bit of fun, to celebrate our incredible journey so far.

Here’s a video featuring some of my memories from the last three years, with Siobhan (my fiancée) and me singing the song at you! There are even lyrics so you can sing along.

“Keep on rockin’ the free web” — Potch

Categories: LUG Community Blogs

Chris Lamb: Starting IPython automatically from zsh

Wed, 10/12/2014 - 18:07

Instead of a calculator, I tend to use IPython for those quotidian bits of "mental" arithmetic:

In [1]: 17 * 22.2
Out [1]: 377.4

However, I often forget to actually start IPython, resulting in me running the following in my shell:

$ 17 * 22.2
zsh: command not found: 17

Whilst I could learn to do this maths within zsh itself, I would prefer to dump myself into IPython instead — being able to use "_" and Python modules generally is just too useful.
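For comparison, zsh's built-in arithmetic expansion does handle floating point:

% echo $(( 17 * 22.2 ))    # prints 377.4, give or take float formatting

...but there's no "_" to carry the result forward, which is rather the point.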

After following this pattern too many times, I put together the following snippet that will detect whether I have prematurely attempted a calculation inside zsh and pretend that I ran it in IPython all along:

zmodload zsh/pcre

math_regex='^[\d\-][\d\.\s\+\*\/\-]*$'

function math_precmd() {
    if [ "${?}" = 0 ]
    then
        return
    fi

    if [ -z "${math_command}" ]
    then
        return
    fi

    if whence -- "$math_command" 2>&1 >/dev/null
    then
        return
    fi

    if [ "${math_command}" -pcre-match "${math_regex}" ]
    then
        echo
        ipython -i -c "_=${math_command}; print _"
    fi
}

function math_preexec() {
    typeset -g math_command="${1}"
}

typeset -ga precmd_functions
typeset -ga preexec_functions

precmd_functions+=math_precmd
preexec_functions+=math_preexec

For example:

lamby@seriouscat:~% 17 * 22.2
zsh: command not found: 17
377.4

In [1]: _ + 1
Out [1]: 378.4

(Canonical version from my zshrc.d)

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Testing a Django app with Docker

Tue, 09/12/2014 - 01:00

I've been playing around with Docker a fair bit and recently hit upon a configuration that works nicely for me when testing code at work.

The basic premise is that I run a docker container that pretty well emulates the exact environment the code will run in, right down to the OS. That means I don't need to care that I'm not running the same distribution as the servers we deploy to, and I can test my code at any time without having to rebuild the docker image.

Here's an annotated Dockerfile with the project-specific details removed.

# We start with ubuntu 14.04
FROM ubuntu:14.04
MAINTAINER Steve Engledow <steve@offend.me.uk>

USER root

# Install OS packages
# This list of packages is what gets installed by default
# on Amazon's Ubuntu 14.04 AMI plus python-virtualenv
RUN apt-get update \
    && apt-get -y install software-properties-common git \
    ssh python-dev python-virtualenv libmysqlclient-dev \
    libqrencode-dev swig libssl-dev curl screen

# Configure custom apt repositories
# and install project-specific packages
COPY apt-key.list apt-repo.list apt.list /tmp/

# Not as nice as this could be as docker defaults to sh rather than bash
RUN while read key; do curl --silent "$key" | apt-key add -; done < /tmp/apt-key.list
RUN while read repo; do add-apt-repository -y "$repo"; done < /tmp/apt-repo.list
RUN apt-get -qq update
RUN while read package; do apt-get -qq -y install "$package"; done < /tmp/apt.list

# Now we create a normal user and switch to it
RUN useradd -s /bin/bash -m ubuntu \
    && chown -R ubuntu:ubuntu /home/ubuntu \
    && passwd -d ubuntu

USER ubuntu
WORKDIR /home/ubuntu
ENV HOME /home/ubuntu

# Set up a virtualenv and install python packages
# from the requirements file
COPY requirements.txt /tmp/
RUN mkdir .myenv \
    && virtualenv -p /usr/bin/python2.7 ~/.myenv \
    && . ~/.myenv/bin/activate \
    && pip install -r /tmp/requirements.txt

# Set PYTHONPATH and activate the virtualenv in .bashrc
RUN echo "export PYTHONPATH=~/myapp/src" > .bashrc \
    && echo ". ~/.myenv/bin/activate" >> .bashrc

# Copy the entrypoint script
COPY entrypoint.sh /home/ubuntu/

EXPOSE 8000

ENTRYPOINT ["/bin/bash", "entrypoint.sh"]

And here's the entrypoint script that nicely wraps up running the django application:

#!/bin/bash

. ./.bashrc

cd myapp/src

./manage.py $*

You generate the base docker image from these files with docker build -t myapp ./.

Then, when you're ready to run a test suite, you need the following invocation:

docker run -ti --rm -P -v ~/code/myapp:/home/ubuntu/myapp myapp test

This mounts ~/code/myapp at /home/ubuntu/myapp within the Docker container, meaning that you're running the exact code that you're working on from inside the container :)

I have an alias that expands that for me so I only need to type docked myapp test.
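The alias itself isn't shown here; a minimal sketch of such a wrapper, assuming your checkouts live under ~/code and each image is named after its directory, could be:

# Hypothetical wrapper: "docked myapp test" expands to the docker run invocation above
docked() {
    local app="$1"
    shift
    docker run -ti --rm -P -v ~/code/"$app":/home/ubuntu/"$app" "$app" "$@"
}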

Obviously, you can substitute test for runserver, syncdb or whatever :)

This is all a bit rough and ready but it's working very well for me now and is repeatable enough that I can use more-or-less the same script for a number of different django projects.

Categories: LUG Community Blogs

Chris Lamb: Don't ask your questions in private

Thu, 04/12/2014 - 12:55

(If I've linked you to this page, it is my feeble attempt to provide a more convincing justification.)


I often receive instant messages or emails requesting help or guidance at work or on one of my various programming projects.

When asked why they asked privately, the responses vary; mostly along the lines of it simply being an accident, not knowing where else to ask, as well as not wishing to "disturb" others with their bespoke question. Some will be more candid and simply admit that they were afraid of looking unknowledgable in front of others.

It is always tempting to simply reply with the answer, especially as helping another human is inherently rewarding unless one is a psychopath. However, one can actually do more good overall by insisting that the question be re-asked in a more public forum.

This is for many reasons. Most obviously, public questions are simply far more efficient as soon as more than one person asks that question — the response can be found in a search engine or linked to in the future. These time savings soon add up, meaning that simply more stuff can be done in any given day. After all, most questions are not as unique as people think.

Secondly, a private communication cannot be corrected or elaborated on if someone else notices it is incorrect or incomplete. Even this rather banal point is more subtle than it first appears — the lack of possible corrections deprives both the person asking and the person responding of the true and correct answer.

Lastly, conversations that happen in private are depriving others of the answer as well. Perhaps someone was curious but hadn't got around to asking? Maybe the answer—or even the question!—contains a clue to solving some other issue. None of this can happen if it occurs behind closed doors.

(There are lots of subtler reasons too — in a large organisation or team, simply knowing what other people are curious about can be curiously valuable information.)

Note that this is not—as you might immediately suspect—simply a way of ensuring that one gets the public recognition or "kudos" from being seen helping others.

I wouldn't deny that technical communities work on a gift economy basis to some degree, but to dismiss all acts of assistance as "selfish" and value-extracting would be to take the argument too far in the other direction. Saying that, the lure and appeal of public recognition should not be underestimated and can certainly provide an incentive to elaborate and provide a generally superior response.


More philosophically, there's also something fundamentally "honest" about airing issues in an appropriately public and transparent manner. I feel it promotes a culture of egoless conversations, of being able to admit one's mistakes and ultimately a healthy personal mindset.

So please, take care not only in the way you phrase and frame your question, but also consider the wider context in which you are asking it. And don't take it too personally if I ask you to re-ask elsewhere...

Categories: LUG Community Blogs

MJ Ray: Autumn Statement #AS2014, the Google tax and how it relates to Free Software

Thu, 04/12/2014 - 04:34

One of the attention-grabbing measures in the Autumn Statement by Chancellor George Osborne was the google tax on profits going offshore, which may prove unworkable (The Independent). This is interesting because a common mechanism for moving the profits around is so-called transfer pricing, where the business in one country pays an inflated price to its sibling in another country for some supplies. It sounds like the intended way to deal with that is by inspecting company accounts and assessing the underlying profits.

So what’s this got to do with Free Software? Well, one thing the company might buy from itself is a licence to use some branding, paying a fee for each use. The main reason this is possible is because copyright is usually a monopoly, so there is no supplier of a replacement product, which makes it hard to assess how much the price has been inflated.

One possible method of assessing the overpayment would be to compare with how much other businesses pay for their branding licences. It would be interesting if Revenue and Customs decide that there’s lots of Royalty Free licensing out there – including Free Software – and so all licence fees paid to related companies are a tax avoidance ruse. Similarly, any premium for a particular self-branded product over a generic equivalent could be classed as profit transfer.

This could have amusing implications for proprietary software producers who sell to sister companies but I doubt that the government will be that radical, so we’ll continue to see absurdities like Starbucks buying all their coffee from famous coffee producing countries Switzerland and the Netherlands. Shouldn’t this be stopped, really?

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Just call me Anneka

Mon, 01/12/2014 - 00:16

I had an idea a few days ago to create a Pebble watchface that works like an advent calendar; you get a new Christmas-themed picture every day.

Here it is :)

The fun part however, was that I completely forgot about the idea until today. Family life and my weekly squash commitment meant that I didn't have a chance to start work on it until around 22:00 and I really wanted to get it into the Pebble store by midnight (in time for the 1st of December).

I submitted the first release at 23:55!

Enjoy :)

I'll put the source on GitHub soon. Before that, it's time for some sleep.

Categories: LUG Community Blogs

Mick Morgan: independent hit

Thu, 27/11/2014 - 11:33

On trying to reach the website of the Independent newspaper today (the Grauniad is trying my patience of late), I received the following response:

Closing the popup takes you to this page:

I haven’t checked whether this is simply a DNS redirect or an actual compromise of the Indy site, but however the graffiti was added, it indicates that the Indy has a problem.

Categories: LUG Community Blogs

Chris Lamb: Validating Django model attribute assignment

Tue, 25/11/2014 - 14:54

Ever done the following?

>>> user = User.objects.get(pk=102)
>>> user.superuser = True
>>> user.save()
# Argh, why is this user now not a superuser...

Here's a dirty hack to validate these:

import sys

from django.db import models
from django.conf import settings

FIELDS = {}

EXCEPTIONS = {
    'auth.User': ('backend',),
}

def setattr_validate(self, name, value):
    super(models.Model, self).__setattr__(name, value)

    # Real field names cannot start with underscores
    if name.startswith('_'):
        return

    # Magic
    if name == 'pk':
        return

    k = '%s.%s' % (self._meta.app_label, self._meta.object_name)

    try:
        fields = FIELDS[k]
    except KeyError:
        fields = FIELDS[k] = set(
            getattr(x, y)
            for x in self._meta.fields
            for y in ('attname', 'name')
        )

    # Field is in allowed list
    if name in fields:
        return

    # Field is in known exceptions
    if name in EXCEPTIONS.get(k, ()):
        return

    # Always allow Django internals to set values (eg. aggregates)
    if 'django/db/models' in sys._getframe().f_back.f_code.co_filename:
        return

    raise ValueError(
        "Refusing to set unknown attribute '%s' on %s instance. "
        "(Did you misspell %s?)" % (name, k, ', '.join(fields))
    )

# Let's assume we have good test coverage
if settings.DEBUG:
    models.Model.__setattr__ = setattr_validate

Now:

>>> user = User.objects.get(pk=102)
>>> user.superuser = True
...
ValueError: Refusing to set unknown attribute 'superuser' on auth.User instance. (Did you misspell 'username', 'first_name', 'last_name', 'is_active', 'email', 'is_superuser', 'is_staff', 'last_login', 'password', 'id', 'date_joined')

(Django can be a little schizophrenic on this — Model.save()'s update_fields keyword argument validates its fields, as does prefetch_related, but it's taking select_related a little while to land.)

Categories: LUG Community Blogs

Chris Lamb: Calculating the number of pedal turns on a bike ride

Mon, 17/11/2014 - 09:34

If you have a cadence sensor on your bike such as the Garmin GSC-10, you can approximate the number of pedal turns you made on the bike ride using the following script (requires GPSBabel):

#!/bin/sh

STYLE="$(mktemp)"

cat >${STYLE} <<EOF
FIELD_DELIMITER COMMA
RECORD_DELIMITER NEWLINE
OFIELD CADENCE,"","%d"
EOF

exec gpsbabel -i garmin_fit -f "${1}" -o xcsv,style=${STYLE} -F- | \
    awk '{x += $1} END {print int(x / 60)}'

... then call with:

$ sh cadence.sh ~/path/to/2014-11-16-14-46-05.fit
24344

Unfortunately the Garmin .fit format doesn't store the actual number of pedal turns, only the average cadence (in revolutions per minute) for each particular second; that is why the script divides the summed per-second samples by 60, converting RPM-seconds into whole pedal turns. It should nevertheless be reasonably accurate given that one keeps a reasonably steady cadence.

As a bonus, using a small amount of shell plumbing you can then sum an entire year's worth of riding like so:

$ for X in ~/path/to/2014-*.fit; do sh cadence.sh ${X}; done | awk '{x += $1} END { print x }'
749943
Categories: LUG Community Blogs

Steve Engledow (stilvoid): Stony Silence

Sat, 08/11/2014 - 23:27

I just bought a Pebble - it's great!

I'd been toying with the idea of buying a wristwatch for several months. My main reason was that I'd noticed I'd fallen into the following pattern:

  • Wonder what the time is
  • Take phone out of pocket
  • Get distracted by email/twitter/app updates/something
  • Put phone back in pocket
  • Realise I didn't note or don't remember the time

I have a friend who owns a Pebble and after quizzing him about it, I decided that might help me achieve an even better goal: to dramatically reduce the amount of time I spend looking at my phone altogether.

After a week with my new Pebble, I'd say it's going well. The watch receives notifications from my phone so whenever an email arrives, I can quickly glance at my watch to decide whether I need to bother reading it now (most of the time, not). Despite requiring that my phone's bluetooth be switched on all the time, I've noticed that the battery has lasted slightly longer than normal - the Android battery usage charts tell me that's because of the reduced amount of screen usage.

My favourite thing about the Pebble is how hackable it is. The SDK is pretty good and simple to use. It's only been a week and I've already written three watch faces, the latest of which has resulted in a few emails from people saying how much they liked it :)

In unrelated news - OR IS IT?! - I had a panic attack the other day for the first time in over a year :S I managed to keep on top of it as I know what to do now, but the effects lasted for much longer than on previous occasions; I didn't feel alright until 10am the following day - 17 hours after it started. I still don't feel quite right.

Does anyone reading have any experience of panic attacks and whether it's worth seeing a doctor? I get them very rarely and I can make my way through them without help now but they leave me feeling awful for hours afterwards when they do happen. I'm not interested in taking regular meds to prevent them when they occur but I would be interested to know if there's something that could halt an attack when it starts or at least reduce the after-effects.

I don't really know what brought this one on but I rarely do. Caffeine seems to be a trigger (and I'd drunk a few teas and coffees that day) but it can't be a trigger on its own as I've drunk that much caffeine on other occasions and been fine.

Bleh.

Categories: LUG Community Blogs

Chris Lamb: Generating gradiented fade images using ImageMagick

Mon, 03/11/2014 - 08:42

Whilst gradienting images is certainly possible with CSS, current browser support means that it can still make sense to do it yourself, especially if front-end performance is a concern.

However, to avoid manual work in Gimp or Photoshop, you can use ImageMagick to generate them for you:

$ wget --quiet -Obackground.jpg http://i.imgur.com/WCjlJ.jpg

$ convert background.jpg \
    -alpha set -channel A \
    -sparse-color Barycentric '%w,%[fx:h-300] opaque %w,%h transparent' \
    -background '#ffcc32' -flatten \
    background-gradiented.jpg

300 here refers to the height or "speed" of the gradient, and the target colour is specified with -background.

Before:

After:

Categories: LUG Community Blogs

Chris Lamb: Are you building an internet fridge?

Thu, 30/10/2014 - 18:00

Mikkel Rasmussen:

If you look at the idea of "The Kitchen of Tomorrow" as IKEA thought about it, the core idea is that cooking is slavery.

It's the idea that technology can free us from making food. It can do it for us. It can recognise who we are, we don't have to be tied to the kitchen all day, we don't have to think about it.

Now if you're an anthropologist, they would tell you that cooking is perhaps one of the most complicated things you can think about when it comes to the human condition. If you think about your own cooking habits they probably come from your childhood, the nation you're from, the region you're from. It takes a lot of skill to cook. It's not so easy.

And actually, it's quite fun to cook. There's also a lot of improvisation. I don't know if you've ever tried coming home to a fridge and you just look into the fridge: oh, there's a carrot and some milk and some white wine, and you figure it out. That's what cooking is like – it's a very human thing to do.

The physical version of your smart recipe site?


Therefore, if you think about it, having anything that automates this for you or decides for you or improvises for you is actually not doing anything to help you with what you want to do, which is that it's nice to cook.

More generally, if you make technology—for example—that has at its core the idea that cooking is slavery and that idea is wrong, then your technology will fail. Not because of the technology, but because it simply gets people wrong.

This happens all the time. You cannot swing a cat these days without hitting one of those refrigerator companies that make smart fridges. I don't know if you've ever seen them: the "intelligent fridge". There are so many of them that there is actually a website called "Fuck your internet fridge" by a guy who tracks failed prototypes of intelligent fridges.

Why? Because the idea is wrong. Not the technology, but the idea about who we are - that we do not want the kitchen to be automated for us.

We want to cook. We want Japanese knives. We want complicated cooking. And so what we are saying here is not that technology is wrong as such. It's just you need to base it—especially when you are innovating really big ideas—on something that's a true human insight. And cooking as slavery is not a true human insight and therefore the prototypes will fail.

(I hereby nominate "internet fridge" as the term to describe products or ideas that—whilst technologically sound—are based on fundamentally flawed anthropology.)

Hearing "I hate X" and thinking that simply removing X will provide real value to your users is short-sighted, especially when you don't really understand why humans are doing X in the first place.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): When all the things went wrong

Mon, 27/10/2014 - 23:34

The last few weeks have seen several bits of technology fail that affect my everyday life.

It started with the locks on my car beginning to seize up. To begin with, they were just a bit stiff, then one of them stopped working altogether. Then they all stopped working with any regularity. I took the car to a garage who told me it's a common problem with this exact model and charged me £100 to replace the driver's side lock. Apparently, a full set would cost around £600.

So now I'm left with two car keys; one to get in, and one for the ignition.

Second, I somehow left the cable for my bicycle light charger hanging out of the car door on a journey. When I arrived, I found it looking quite mangled.

Thirdly, last night when my wife came home, she and I both turned our keys in the lock at the same time from different sides of the front door (without realising). This somehow broke the damn lock. Now the key doesn't turn all the way and the door can be locked but the key must remain in it.

So now we have to leave by the back door.

Fourth, while at rehearsal tonight with my band, my keyboard started playing up; it complains that the battery is low whilst running off mains power. Thinking maybe the adapter was playing up, I tried another with the same result.

Bleh.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Things

Thu, 16/10/2014 - 00:05
Ogg(camp|tober|sford)

OggCamp was fantastic. If I can remember all the talks I went to I'll do a brief write up. The event certainly left me with a few little ideas for things to write and do.

Down with dynamic things!

One small example is that I've rewritten the build script for this blog. No more scripted generation of the final HTML; I just wget -m the development server and that's everything built. Then it's all just served up as static content through nginx. Simples! I don't know why I didn't think of just snapshotting it like that before.
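The gist of it, assuming the development server listens on localhost:8000 and nginx's root is /srv/www/blog (both names illustrative):

# Snapshot the running development server into a static tree for nginx
wget -m -nH -P /srv/www/blog http://localhost:8000/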

Click

I've reached a point in my career now where I find myself wanting to create and present slide decks. WTF?

I'm still writing code fairly regularly but there's so much other stuff I spend my time doing that I'm not even sure I can account for. It still feels important.

Clock

I think I'm going to buy a Pebble to wean myself off the phone-checking habit that I've developed over the years.

Relatedly, I wrote this post on my phone. It wasn't nearly as painful as I'd expected.

Categories: LUG Community Blogs

Wayne Stallwood (DrJeep): Hosting Update2

Thu, 09/10/2014 - 21:31
Well, after a year the SD card in the Raspberry Pi has failed. I noticed /var was unhappy when I tried to apply the recent Bash updates, and attempts at repair only made things worse, so I suspect there is some physical issue. I had minimised writes by keeping logs in tmpfs (the frequently updated weather site sat in tmpfs too), logging to remote systems, etc., so I'm not quite sure what happened. Of course this is all very inconvenient when your kit lives in another country, so at some point I guess I will have to build a new SD card and ship it out... for now we are back on Amazon EC2... yay for the elastic cloud \o/
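(Keeping /var/log in RAM amounts to a one-line fstab entry; the size here is illustrative:)

# Illustrative /etc/fstab line: hold logs in RAM to spare the SD card
tmpfs  /var/log  tmpfs  defaults,noatime,size=50m  0  0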
Categories: LUG Community Blogs

Chris Lamb: London—Paris—London 2014

Thu, 09/10/2014 - 17:19

I've wanted to ride to Paris for a few months now but was put off by the hassle of taking a bicycle on the Eurostar, as well as having a somewhat philosophical and aesthetic objection to taking a bike on a train in the first place. After all, if one is already in possession of a mode of transport...

My itinerary was straightforward:

  • Friday 12h00: London → Newhaven
  • Friday 23h00: Newhaven → Dieppe (ferry)
  • Saturday 04h00: Dieppe → Paris
  • Saturday 23h00: (Sleep)
  • Sunday 07h00: Paris → Dieppe
  • Sunday 18h00: Dieppe → Newhaven (ferry)
  • Sunday 21h00: Newhaven → Peacehaven
  • Sunday 23h00: (Sleep)
  • Monday 07h00: Peacehaven → London
Packing list
  • Ferry ticket (unnecessary in the end)
  • Passport
  • Credit card
  • USB A male → mini A male (charges phone, battery pack & front light)
  • USB A male → mini B male (for charging or connecting to Edge 800)
  • USB mini A male → OTG A female (for Edge 800 uploads via phone)
  • Waterproof pocket
  • Sleeping mask for ferry (probably unnecessary)
  • Battery pack

Not pictured:

  • Castelli Gabba Windstopper short-sleeve jersey
  • Castelli Velocissimo bib shorts
  • Castelli Nanoflex arm warmers
  • Castelli Squadra rain jacket
  • Garmin Edge 800
  • Phone
  • Front light: Lezyne Macro Drive
  • Rear lights: Knog Gekko (on bike), Knog Frog (on helmet)
  • Inner tubes (X2), Lezyne multitool, tire levers, hand pump

Day 1: London → Newhaven

Tower Bridge.

Many attempt to go from Tower Bridge → Eiffel Tower (or Marble Arch → Arc de Triomphe) in less than 24 hours. This would have been quite easy if I had left a couple of hours later.

Fanny's Farm Shop, Merstham, Surrey.

Plumpton, East Sussex.

West Pier, Newhaven.

Leaving Newhaven on the 23h00 ferry.


Day 2: Dieppe → Paris

Beauvoir-en-Lyons, Haute-Normandie.

Sérifontaine, Picardie.

La tour Eiffel, Paris.

Champ de Mars, Paris.

Pont de Grenelle, Paris.


Day 3: Paris → Dieppe

Cormeilles-en-Vexin, Île-de-France.

Gisors, Haute-Normandie.

Paris-Brest, Gisors, Haute-Normandie.

Wikipedia: This pastry was created in 1910 to commemorate the Paris–Brest bicycle race begun in 1891. Its circular shape is representative of a wheel. It became popular with riders on the Paris–Brest cycle race, partly because of its energizing high caloric value, and is now found in pâtisseries all over France.

Gournay-en-Bray, Haute-Normandie.

Début de l'Avenue Verte, Forges-les-Eaux, Haute-Normandie.

Mesnières-en-Bray, Haute-Normandie.

Dieppe, Haute-Normandie.

«La Mancha».


Day 4: Peacehaven → London

Peacehaven, East Sussex.

Highbrook, West Sussex.

London weather.


Summary
Distance: 588.17 km
Pedal turns: ~105,795

My only non-obvious tips would be to buy a disposable blanket in the Newhaven Co-Op to help you sleep on the ferry and, since the food on the ferry is good enough, to get to the terminal only one hour before departure, avoiding time on your feet in unpicturesque Newhaven.

In terms of equipment, I would bring another light for the 4AM start on «L'Avenue Verte», if only as a backup, and I would check in advance that I could arrive at my Parisian Airbnb earlier in the day - I had to hang around for five hours in the heat before I could have a shower, properly relax, etc.

I had been warned not to rely on being able to obtain enough water en route on Sunday, but whilst most shops were indeed shut, I saw a bustling tabac or boulangerie at least once every 20km, so one would never be truly stuck.

Route-wise, the suburbs of London and Paris are both equally dismal and unmotivating, and there is about 50km of rather uninspiring and exposed riding on the D915.

However, «L'Avenue Verte» is fantastic even in the pitch-black and the entire trip was worth it simply for the silent and beautiful Normandy sunrise. I will be back.

Categories: LUG Community Blogs

Ben Francis: What is a Web App?

Fri, 03/10/2014 - 16:50

What is a web app? What is the difference between a web app and a web site? What is the difference between a web app and a non-web app?

In terms of User Experience there is a long continuum between “web site” and “web app” and the boundary between the two is not always clear. There are some characteristics that users perceive as being more “app like” and some as more “web like”.

The presence of web browser-like user interface elements like a URL bar and navigation controls are likely to make a user feel like they’re using a web site rather than an app for example, whereas content which appears to run independently of the browser feels more like an app. Apps are generally assumed to have at least limited functionality without an Internet connection and tend to have the concept of residing in a self-contained way on the local device after being “installed”, rather than being navigated to somewhere on the Internet.

From a technical point of view there is in fact usually very little difference between a web site and a web app. Different platforms currently deal with the concept of “web apps” in all sorts of different, incompatible ways, but very often the main difference between a web site and web app is simply the presence of an “app manifest”. The app manifest is a file containing a collection of metadata which is used when “installing” the app to create an icon on a homescreen or launcher.

At the moment pretty much every platform has its own proprietary app manifest format, but the W3C has the beginnings of a proposed specification for a standard “Manifest for web applications” which is starting to get traction with multiple browser vendors.

Web Manifest – Describing an App

Below is an example of a web app manifest following the proposed standard format.

http://example.com/myapp/manifest.json:

{ "name": "My App", "icons": [{ "src": "/myapp/icon.png", "sizes": "64x64", "type": "image/png" }], "start_url": "/myapp/index.html" }

The manifest file is referenced inside the HTML of a web page using a link relation. This is cool because, with this approach, a web app doesn’t have to be distributed through a centrally controlled app store; it can be discovered and installed from any web page.

http://example.com/myapp/index.html:

<!DOCTYPE html>
<html>
<head>
  <title>My App - Welcome</title>
  <link rel="manifest" href="manifest.json">
  <meta name="application-name" content="My App">
  <link rel="icon" sizes="64x64" href="icon.png">
  ...

As you can see from the example, these basic pieces of metadata which describe things like a name, an icon and a start URL are not that interesting in themselves because these things can already be expressed in HTML in a web standard way. But there are some other proposed properties which could be much more interesting.

Display Modes – Breaking out of the Browser

We said above that one thing that makes a web app feel more app like is when it runs outside of the browser, without common browser UI elements like the URL bar and navigation controls. The proposed “display” property of the manifest allows authors of web content which is designed to function without the need for these UI elements to express that they want their content to run outside of the browser.

http://example.com/myapp/manifest.json:

{ "name": "My App", "icons": [{ "src": "/myapp/icon.png", "sizes": "64x64", "type": "image/png" }], "start_url": "/myapp/index.html" "scope": "/myapp" "display": "standalone" }

The proposed display modes are “fullscreen”, “standalone”, “minimal-ui” and “browser”. The “browser” display mode opens the content in the user agent’s conventional method (e.g. a browser tab), but all of the other display modes open the content separate from the browser, with varying levels of browser UI.

There’s also a proposed “orientation” property which allows the content author to specify the default orientation (i.e. portrait/landscape) of their content.

App Scope – A Slice of the Web

In order for a web app to be treated separately from the rest of the web, we need to be able to define which parts of the web are part of the app, and which are not. The proposed “scope” property of the manifest defines the URL scope to which the manifest applies.

By default the scope of a web app is anything from the same origin as its manifest, but a single origin can also be sliced up into multiple apps or into app and non-app content.

Below is an example of a web app manifest with a defined scope.

http://example.com/myapp/manifest.json:

{ "name": "My App", "icons": [{ "src": "/myapp/icon.png", "sizes": "64x64", "type": "image/png" }], "start_url": "/myapp/index.html" "scope": "/myapp" }

From the user’s point of view they can browse around the web, seamlessly navigating between web apps and web sites until they come across something they want to keep on their device and use often. They can then slice off that part of the web by “bookmarking” or “installing” it on their device to create an icon on their homescreen or launcher. From that point on, that slice of the web will be treated separately from the browser in its own “app”.

Without a defined scope, a web app is just a web page opened in a browser window which can then be navigated to any URL. If that window doesn’t have any browser-like navigation controls or a URL bar then the user can get stranded at a dead end on the web with no way to go back, or worse still can be fooled into thinking that a web page they thought was part of a web app they trust is actually from another, malicious, origin.

The web browser is like a catch-all app for browsing all of the parts of the web which the user hasn’t sliced off to use as a standalone app. Once a web app is registered with the user agent as managing a defined slice of the web, the user can seamlessly link into and out of installed web apps and the rest of the web as they please.

Service Workers – Going Offline

We said above that another characteristic users often associate with “apps” is their ability to work offline, in the absence of a connection to the Internet. This is historically something the web has done pretty badly at. AppCache was a proposed standard intended for this purpose, but there are many common problems and limitations of that technology which make it difficult or impractical to use in many cases.

A new, much more versatile, proposed standard is called Service Workers. Service Workers allow a script to be registered as managing a slice of the web, even whilst offline, by intercepting HTTP requests to URLs within a specified scope. A Service Worker can keep an offline cache of web resources and decide when to use the offline version and when to fetch a new version over the Internet.

The programmable nature of Service Workers makes them an extremely versatile tool for adding app-like capabilities to web content and getting rid of the notion that using the web requires a persistent connection to the Internet. Service Workers have lots of support from multiple browser vendors and you can expect to see them coming to life soon.

The proposed “service_worker” property of the manifest allows a content author to define a Service Worker which should be registered with a specified URL scope when a web app is installed or bookmarked on a device. That means that in the process of installing a web app, an offline cache of web resources can be populated and other installation steps can take place.

Below is our example web app manifest with a Service Worker defined.

http://example.com/myapp/manifest.json:

{ "name": "My App", "icons": [{ "src": "/myapp/icon.png", "sizes": "64x64", "type": "image/png" }], "start_url": "/myapp/index.html" "scope": "/myapp" "service_worker": { "src": "app.js", "scope": "/myapp" } } Packages – The Good, the Bad and the Ugly

There’s a whole category of apps which many people refer to as “web apps” but which are delivered as a package of resources to be downloaded and installed locally on a device, separate from the web. Although these resources may use web technologies like HTML, CSS and Javascript, if those resources are not associated with real URLs on the web, then in my view they are by definition not part of a web app.

The reason this approach is commonly taken is that it allows operating system developers and content authors to side-step some of the current shortcomings of the web platform. Packaging all of the resources of an app into a single file which can be downloaded and installed on a device is the simplest way to solve the offline problem. It also has the convenience that the contents of that package can easily be reviewed and cryptographically signed by a trusted party in order to safely give the app privileged access to system functions which would currently be unsafe to expose to the web.

Unfortunately the packaged app approach misses out on many of the biggest benefits of the web, like its universal and inter-linked nature. You can’t hyperlink into a packaged app, and providing an updated version of the app requires a completely different mechanism to that of web content.

We have seen above how Service Workers hold some promise in finally solving the offline problem, but packages as a concept may still have some value on the web. The proposed “Packaging on the Web” specification is exploring ways to take advantage of some of the benefits of packages, whilst retaining all the benefits of URLs and the web.

This specification does not explore a new security model for exposing more privileged APIs to the web however, which in my view is the single biggest unsolved problem we now have left on the web as a platform.

Conclusions

In conclusion, a look at some of the latest emerging web standards tells us that the answer to the question “what is a web app?” is that a web app is simply a slice of the web which can be used separately from the browser.

With that in mind, web authors should design their content to work just as well inside and outside the browser and just as well offline as online.

Packaged apps are not web apps and are always a platform-specific solution. They should only be considered as a last resort for apps which need access to privileged functionality that can’t yet be safely exposed to the web. New web technologies will help negate the need for packages for offline functionality, but packages as a concept may still have a role on the web. A security model suitable for exposing more privileged functionality to the web is one of the last remaining unsolved challenges for the web as a platform.

The web is the biggest ecosystem of content that exists, far bigger than any proprietary walled garden of curated content. Lots of cool stuff is possible using web technologies to build experiences which users would consider “app like”, but creating a great user experience on the web doesn’t require replicating all of the other trappings of proprietary apps. The web has lots of unique benefits over other app platforms and is unrivalled in its scale, ubiquity, openness and diversity.

It’s important that as we invent cool new web technologies we remember to agree on standards for them which work cross-platform, so that we don’t miss out on these unique benefits.

The web as a platform is here to stay!

Categories: LUG Community Blogs

Mick Morgan: CVE-2014-6271 bash vulnerability

Fri, 26/09/2014 - 11:38

Guess what I found in trivia’s logs this morning?

89.207.135.125 - - [25/Sep/2014:10:48:13 +0100] "GET /cgi-sys/defaultwebpage.cgi HTTP/1.0" 404 345 "-" "() { :;}; /bin/ping -c 1 198.101.206.138"

I’ll bet a lot of cgi scripts are being poked at the moment.

Check your logs, guys. A simple grep ":;}" access.log will tell you all you need to know.
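The same check extends to rotated and compressed logs; a sketch, with paths that will vary by distribution:

# Scan current and rotated access logs for Shellshock-style probes
grep '() {' /var/log/apache2/access.log
zgrep '() {' /var/log/apache2/access.log.*.gz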

(Update 27 September)

Digital Ocean, the company I use to host my Tor node and tails/whonix mirrors, has posted a useful note about the vulnerability. And John Leyden at El Reg posted about the problem here. Leyden’s article references some of the more authoritative discussions so I won’t repeat the links here.

All my systems were vulnerable, but of course have now been patched. However, the vulnerability has existed in bash for so long that I can’t help but feel deeply uneasy even though, as Michal Zalewski (aka lcamtuf) notes in his blog:

PS. As for the inevitable “why hasn’t this been noticed for 15 years” / “I bet the NSA knew about it” stuff – my take is that it’s a very unusual bug in a very obscure feature of a program that researchers don’t really look at, precisely because no reasonable person would expect it to fail this way. So, life goes on.

Categories: LUG Community Blogs

Jonathan McDowell: Automatic inline signing for mutt with RT

Thu, 18/09/2014 - 10:39

I spend a surprising amount of my time as part of keyring-maint telling people their requests are badly formed and asking them to fix them up so I can actually process them. The one that's hardest to fault anyone on is that we require requests to be inline PGP signed (i.e. the same sort of output as you get with "gpg --clearsign"). That's because RT does various pieces of unpacking[0] of MIME messages that mean that PGP/MIME signatures that have passed through it are no longer verifiable. Daniel has pointed out that inline PGP is a bad idea and got as far as filing a request that RT handle PGP/MIME correctly (you need a login for that but there's a generic read-only one that's easy to figure out), but until that happens the requirement stands when dealing with Debian's RT instance. So today I finally added the following lines to my .muttrc rather than having to remember to switch Mutt to inline signing for this one special case:

send-hook . "unset pgp_autoinline; unset pgp_autosign"
send-hook rt.debian.org "set pgp_autosign; set pgp_autoinline"

i.e. by default turn off auto inlined PGP signatures, but when emailing anything at rt.debian.org turn them on.

(Most of the other things I tell people to fix are covered by the replacing keys page; I advise anyone requesting a key replacement to read that page. There's even a helpful example request template at the bottom.)

[0] RT sticks a header on the plain text portion of the mail, rather than adding a new plain text part for the header if there are multiple parts (this is something Mailman handles better). It will also re-encode received mail into UTF-8 which I can understand, but Mutt will by default try to find an 8 bit encoding that can handle the mail, because that's more efficient, which tends to mean it picks latin1.

Update: Apparently Mutt in Jessie and beyond doesn't have the pgp_autosign option; you want crypt_autosign instead (and maybe crypt_autopgp but that defaults to yes so unless you've configured your setup to do S/MIME by default you should be fine). Thanks to Luca Capello for pointing this out.
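(On Jessie and beyond, then, the hooks above would presumably become:)

send-hook . "unset pgp_autoinline; unset crypt_autosign"
send-hook rt.debian.org "set crypt_autosign; set pgp_autoinline"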

Categories: LUG Community Blogs