After putting it off for various reasons for at least a couple of years, I've finally switched back from Chromium to Firefox and I'm very glad I did so.
The recent UI change seems to have upset a lot of Firefox users but it was instrumental in prompting my return and I'm sure others will have felt the same; Firefox once again looks and feels like a modern browser.
I have to say it also feels an imperial bucketload snappier than Chromium; the exact opposite was one of the reasons I left in the first place.
Good job Firefolk :)
My apologies that this is a few weeks late – but it still bears posting. John Oliver at HBO gave the best description of the net neutrality argument I have seen so far.
Following that broadcast, the FCC servers were, rather predictably, overwhelmed by the outraged response from the trolls that Oliver set loose.
Unfortunately, as John Naughton reports in the Observer, the FCC are unlikely to be moved by that.
Ladar Levison and Stephen Wyatt presented the upcoming Dark Internet Mail Environment (DIME) at DEF CON 22 this week. According to El Reg, Levison, who shut down Lavabit, his previous mail service, rather than comply with FBI demands that he divulge the private SSL keys used to encrypt traffic on that service, said:
“I’m not upset that I got railroaded and I had to shut down my business … I’m upset because we need a mil-spec cryptographic mail system for the entire planet just to be able to talk to our friends and family without any kind of fear of government surveillance”.
I think that puts the problem into perspective.
Re-posted from Mozilla Hacks.

Boot to Gecko
As soon as the Boot to Gecko (B2G) project was announced in July 2011 I knew it was something I wanted to contribute to. I’d already been working on the idea of a browser-based OS for a while, but it seemed Mozilla had the people, the technology and the influence to build something truly disruptive.
At the time Mozilla weren’t actively recruiting people to work on B2G; the team still consisted of only the four co-founders and the project was little more than an empty GitHub repository. But I got in touch the day after the announcement and, after conversations with Chris, Andreas and Mike over Skype and a brief visit to Silicon Valley, I somehow managed to convince them to take me on (initially as a contractor) so I could work on the project full time.

A Web Browser Built from Web Technologies
On my first day Chris Jones told me “The next, highest-priority project is a very basic web browser, just a URL bar and back button basically.”
Chris and his bitesize browser, Taipei, December 2011
The team was creating a prototype smartphone user interface codenamed “Gaia”, built entirely with web technologies. Partly to prove it could be done, but partly to find the holes in the web platform that made it difficult and fill those holes with new Web APIs. I was asked to work on the first prototypes of a browser app, a camera app and a gallery app to help find some of those holes.
You might wonder why a browser-based OS needs a browser app at all, but the thinking for this prototype was that if other smartphone platforms had a browser app, then B2G would need one too.
It all started with a humble iframe, a text input for the URL bar and a go button; in fact, you can see the first commit here. When you clicked the go button, it set the src attribute of the iframe to the contents of the text input, which caused the iframe to load the web page at that URL.
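That first prototype might be reconstructed along these lines (an illustrative sketch, not the actual commit):

```html
<!-- Sketch of the first browser prototype: an iframe, a URL bar and a go button -->
<input id="url-bar" type="text" value="http://mozilla.org/">
<button id="go-button">Go</button>
<iframe id="browser-frame" src="about:blank"></iframe>
<script>
  document.getElementById('go-button').addEventListener('click', function () {
    // Setting src causes the iframe to load the page at that URL
    document.getElementById('browser-frame').src =
      document.getElementById('url-bar').value;
  });
</script>
```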
First commit, November 2011
One problem we came across quite quickly was that many web authors go to great lengths to prevent their web site being loaded inside an iframe, in order to prevent clickjacking attacks. A web server can send an X-Frame-Options HTTP response header instructing a user agent to simply not render the content, and there are also a variety of techniques for “framebusting”, where a web site will actively try to break out of an iframe and load itself in the parent frame instead.
It was quickly obvious that we weren’t going to get very far building a web browser using web technologies without evolving the web technologies themselves.

The Browser API
I met Justin Lebar at the first B2G work week in Taipei in December 2011. He was tasked with modifying Gecko to make the browser app on Boot to Gecko possible. To me Gecko was (and largely still is) a giant black box of magic spells which take the code I write and turn it into dancing images on the screen. I needed a wizard who had a grasp on some of these spells, including a particularly strong spell called Docshell which only the most practised of wizards dare peer into.
Justin at the first B2G Work Week in Taipei, December 2011
When I told Justin what I needed, he made the kinds of sounds a mechanic makes when you take your car in for what you think is a simple problem but which turns out to cost the price of a new car. Justin had a better idea than I did of what was needed, but I don’t think either of us realised the full scale of the task at hand.
With the addition of a simple boolean “mozbrowser” attribute to the HTML iframe element in Gecko, the Browser API was born. I tried adding features to the browser app and, every time I found something that wasn’t possible with current web technologies, I went back to Justin to get him to cast a new magic spell.
There were easier approaches we could have taken to build the browser app. We could have added a mechanism to allow the browser to inject scripts into the iframe and communicate freely with the content inside, but we wanted to provide a safe API which anyone could use to build their own browser app and this approach would be too risky. So instead we built an explicit privileged API into the DOM to create a new class of iframe which could one day become a new standard HTML tag.

Keeping the Web Contained
The first thing we did was to try to trick web pages loaded inside an iframe into thinking they were not in fact inside an iframe. At first we had a crude solution which just ignored X-Frame-Options headers for iframes in whitelisted domains that had the mozbrowser attribute. That’s when we discovered that some web sites are quite clever at busting out of iframes. In the end we had to take other measures, like making sure window.top pointed at the iframe rather than its parent so a web site couldn’t detect that it had a parent, and eventually also running every browser tab in its own system process to completely isolate them from each other.
Once we had the animal that is the web contained, we needed to poke a few air holes to let it breathe. There’s some information we need to let out of the iframe in the form of events: when the location, title or icon of a web page changes (locationchange, titlechange and iconchange); when a page starts and finishes loading (loadstart, loadend); and when the security characteristics of the currently loaded page change (securitychange). This all allows us to keep the address bar and title bar up to date and show a progress indicator.
The browser app needs to be able to navigate the iframe by telling it to goBack(), goForward(), stop() and reload(). We also need to be able to explicitly ask for information like characteristics of the session history (getCanGoBack(), getCanGoForward()) to determine which navigation buttons to display.
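Put together, a browser app’s use of these events and methods might be sketched like this (a minimal reconstruction based on the Browser API as documented on MDN; it only works inside a privileged app on B2G, not in a normal web page):

```html
<!-- Browser API sketch: requires a privileged Firefox OS context -->
<iframe id="tab" mozbrowser remote src="http://mozilla.org/"></iframe>
<script>
  var tab = document.getElementById('tab');

  // Keep the address bar and title bar up to date
  tab.addEventListener('mozbrowserlocationchange', function (e) {
    console.log('Location:', e.detail);
  });
  tab.addEventListener('mozbrowsertitlechange', function (e) {
    console.log('Title:', e.detail);
  });
  tab.addEventListener('mozbrowserloadstart', function () {
    console.log('Loading\u2026');
  });
  tab.addEventListener('mozbrowserloadend', function () {
    console.log('Done.');
  });

  // Navigation, driven by the browser app's back button
  function goBack() {
    var request = tab.getCanGoBack(); // returns a DOMRequest
    request.onsuccess = function () {
      if (request.result) {
        tab.goBack();
      }
    };
  }
</script>
```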
With these basics in place it was possible to build a simple functional browser app.

The Prototype
The Gaia project’s first UX designer was Josh Carpenter. At an intensive work week in Paris the week before Mobile World Congress in February 2012, Josh created UI mockups for all the basic features of a smartphone, including a simple browser, and we built a prototype to those designs.
Josh and me plotting over a beer in Paris.
The prototype browser app could navigate web content, keep it contained and display basic information about the content being viewed. This would be the version demonstrated at MWC in Barcelona that year.
Simple browser demo for Mobile World Congress, February 2012

Building a Team
At a work week in Qualcomm’s offices in San Diego in May 2012 I was able to give a demo of a slightly more advanced basic browser web app running inside Firefox on the desktop. But it was still very basic. We needed a team to start building something good enough that we could ship it on real devices.
“Browser Inception”, San Diego May 2012
Dale Getting on Board in San Diego, May 2012
One of the first things Dale worked on was creating multiple tabs in the browser and even adding a screenshotting spell to the Browser API to show thumbnails of browser tabs (I told you he was brave).
By this time we had also started to borrow Larissa Co, a brilliant designer from the Firefox team, to work on the interaction design and Patryk Adamczyk, formerly of RIM, to work on the visual design for the browser on B2G. That was when it started to look more like a Firefox browser.
Early UI Mockup, July 2012

Things that Pop Up
Web pages like to make things pop up. For a start they like to alert(), prompt() or confirm() things with you. Sometimes they like to open() a new browser window (and close() them again), open a link in a _blank window, ask you for a password, ask for your permission to do something, ask you to select an option from a menu, open a context menu or confirm re-sending the contents of a form.
An alert(), version 1.0
All of this required new events in the Browser API, which meant more spells for Justin to cast.

Scroll, Pan and Zoom
Moving around web pages on mobile devices works a little differently from on the desktop. Rather than scroll bars or a mouse’s scroll wheel, it uses touch input and a system called Asynchronous Pan and Zoom to allow the user to pan around a web page by dragging it and scrolling it using “kinetic scrolling”, which feels like it has some physics to it.
One of the trickier interactions to get right was that we wanted the address bar to hide as you scrolled down the page in order to make more room for content, then show again when you scroll back to the top of the page.
This required adding asyncscroll events which tapped directly into the Asynchronous Pan and Zoom code, so that the browser knew not only when the user directly manipulated the page, but also how much it scrolled based on physics, asynchronously from the user’s interaction.

Storing Stuff
One of the most loved features of Firefox is the “Awesomebar”, a combined address bar, search bar (and on mobile, title bar) which lets you quickly get to the content you’re looking for. You type a few characters and immediately start to see matching web pages from your browsing history, ranked by a “frecency” algorithm.
On the desktop and on Android all of this data is stored in the “Places” database as part of privileged “chrome” code. In order to implement this feature in B2G we would need to use the local storage capabilities of the web, and for that we chose IndexedDB. We built a Places database in IndexedDB which would store all of the “places” a user visits on the web, including their URL, title and icon, and store all the times the user visited that page. It would also be used to store the user’s bookmarks and rank top sites by “frecency”.
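As an aside, the idea behind frecency ranking can be illustrated with a toy scoring function (this is purely illustrative and not the actual algorithm used by Places):

```javascript
// Toy "frecency" score: frequency weighted by recency.
// Each visit contributes more to the score the more recent it is.
function frecency(visitTimes, now) {
  const DAY_MS = 24 * 60 * 60 * 1000;
  return visitTimes.reduce(function (score, visited) {
    const ageInDays = (now - visited) / DAY_MS;
    return score + 100 / (1 + ageInDays); // recent visits score higher
  }, 0);
}

// A page visited often and recently outranks one visited once, long ago.
const now = Date.now();
const frequentRecent = frecency([now, now - 1000, now - 2000], now);
const onceLongAgo = frecency([now - 90 * DAYS_AGO_MS()], now);
function DAYS_AGO_MS() { return 24 * 60 * 60 * 1000; }
console.log(frequentRecent > onceLongAgo); // true
```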
Awesomebar, version 1.0

Clearing Stuff
As you browse around the web Gecko also stores a bunch of data about the places you’ve been. That can be cookies, offline pages, localStorage, IndexedDB databases and all sorts of other bits of data. Firefox browsers provide a way for you to clear all of this data, so methods needed to be added to the Browser API to allow this data to be cleared from the browser settings in B2G.
Browser settings, version 1.0

Handling Crashes
Sometimes web pages crash the browser. In B2G every web app and every browser tab runs in its own system process so that should the worst happen, it will only cause that one window/tab to crash. In fact, due to the memory constraints of the low-end smartphones B2G would initially target, sometimes the system will intentionally kill a background app or browser tab in order to conserve memory. The browser app needs to be informed when this happens and needs to be able to recover seamlessly so that in most cases the user doesn’t even realise a process was killed. Events were added to the Browser API for this purpose.
Crashed tab, version 1.0

Talking to Other Apps
Common use cases of a mobile browser are for the user to want to share a URL using another app like a social networking tool, or for another app to want to view a URL using the browser.
B2G implemented Web Activities for this purpose, to add a capability to the web for apps to interact with each other, but in an app-agnostic way. So for example the user can click on a share button in the browser app and B2G will fire a “share URL” Web Activity which can then be handled by any installed app which has registered to handle that type of Web Activity.
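As a sketch, the share button’s handler might have used the Web Activities API something like this (illustrative; MozActivity is only available to apps running on Firefox OS):

```html
<!-- Web Activities sketch: MozActivity only exists on Firefox OS -->
<script>
  function shareUrl(url) {
    // Fires a "share" activity; B2G shows a chooser of installed apps
    // which registered to handle this activity type in their manifest
    var activity = new MozActivity({
      name: 'share',
      data: { type: 'url', url: url }
    });
    activity.onsuccess = function () { console.log('Shared.'); };
    activity.onerror = function () { console.log('Share cancelled.'); };
  }
</script>
```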
Share Web Activity, version 1.2

Working Offline
Despite the fact that B2G and Gaia are built on the web, it is a requirement that all of the built-in Gaia apps should be able to function offline, when an Internet connection is unavailable or patchy, so that the user can still make phone calls, take photos, listen to music and so on. At first we started to use AppCache for this purpose, which was the web’s first attempt at making web apps work offline. Unfortunately we soon ran into many of the common problems and limitations of that technology and found it didn’t fulfill all of our requirements.
In order to ship version 1.0 of B2G on time, we were forced to implement “packaged apps” to fulfill all of the offline and security requirements for built-in Gaia apps. Packaged apps solved our problems but they are not truly web apps because they don’t have a real URL on the Internet, and attempts to standardise them didn’t get much traction. Packaged apps were intended very much as a temporary solution and we are working hard at adding new capabilities like ServiceWorkers, standardised hosted packages and manifests to the web so that eventually proprietary packaged apps won’t be necessary for a full offline experience.
Offline, version 1.4

Spit and Polish
Finally we applied a good deal of spit and polish to the browser app UI to make it clean and fluid to use, making full use of hardware-accelerated CSS animations, and a sprinkling of Firefoxy interaction and visual design to make the youngest member of the Firefox browser family feel consistent with its brothers and sisters on other platforms.

Shipping 1.0
At an epic work week in Berlin in January 2013 hosted by Deutsche Telekom the whole B2G team, including engineers from multiple competing mobile networks and device manufacturers, got together with the common cause of shipping B2G 1.0, in time to demo at Mobile World Congress in Barcelona in February. The team sprinted towards this goal by fixing an incredible 200 bugs in one week.
Version 1.0 Team, Berlin Work Week, January 2013
In the last few minutes of the week Andreas Gal excitedly declared “Zarro Gaia Boogs”, signifying version 1.0 of Gaia was complete, with the rest of B2G to shortly follow over the weekend. Within around 18 months a dedicated team spanning multiple organisations had come together working entirely in the open to turn an empty GitHub repository into a fully functioning mobile operating system which would later ship on real devices as Firefox OS 1.0.1.
Zarro Gaia Boogs, January 2013
Browser app v1.0
So, having attended Mobile World Congress 2012 with a prototype and a promise to deliver commercial devices into the market, we were able to return in 2013 having delivered on that promise, fully launching the “Firefox OS” brand with multiple devices on multiple mobile networks, in a launch that really stole the show at the biggest mobile conference in the world. Firefox OS had arrived.
Mobile World Congress, Barcelona, February 2013

1.x
Firefox OS 1.1 quickly followed and by the time we started working on version 1.2 the project had grown significantly. We re-organised into autonomous agile teams focused on product areas, the browser app being one. That meant we now had a dedicated team with designers, engineers, a test engineer, a product manager and a project manager.
The browser team, London work week, July 2013
Firefox OS moved to a rapid release “train model” of development like Firefox, where a new version is delivered every 12 weeks. We quickly added new features and worked on improving performance to get the best out of the low end hardware we were shipping on in emerging markets.
Browser app v1.4

“Haida”
Version 1.0 of Firefox OS was very much about proving that we could build what already exists on other smartphones, but entirely using open web technologies. That included a browser app.
Once we’d proved that was possible and put real devices on shelves in the market, it was time to figure out what would differentiate Firefox OS as a product going forward. We wanted to build something that doesn’t just imitate what’s already been done, but which plays to the unique strengths of the web: something that’s true to Mozilla’s DNA, is the best way to experience the web, and is the platform that HTML5 deserves.
Below is a mockup I created right back towards the start of the project at the end of 2011, before we even had a UX team. I mentioned earlier that the Awesomebar is a core part of the Firefox experience in Firefox browsers. My proposal back then was to build a system-wide Awesomebar which could search the whole device, including your apps and their contents, and be accessible from anywhere in the OS.
Very early mockup of a system-wide Awesomebar, December 2011
At the time, this was considered a little too radical for version 1.0 and our focus really needed to be on innovating in the web technology needed to build a mobile OS, not necessarily the UX. We would instead take a more conservative approach to the user interface design and build a browser app a lot like the one we’d built for Android.
In practice that meant we in fact built two browsers in Firefox OS. One was the browser app, which managed the world of “web sites”, and the other was the window manager in the system app, which managed the world of “web apps”.
In reality on the web there isn’t so much of a distinction between web apps and web sites – each exists on a long continuum of user experience with a very blurry boundary in the middle.
Gordon’s Rocketbar Prototype, March 2013
Gordon and I started to meet weekly to discuss the concept he had by then codenamed “Rocketbar”, but it was a bit of a side project with a few interested people.
In April 2013 the UX team had a summit in London where they got together to discuss future directions for the user experience of Firefox OS. I was lucky enough to be invited along to not only observe but participate in this process, Josh being keen to maintain a close collaboration between Design and Engineering.
We brainstormed around what was unique about the experience of the web and how we might create a unique user experience which played to those strengths. A big focus was on “flow”, the way that we can meander through the web by following hyperlinks. The web isn’t a world of monolithic apps with clear boundaries between them, it is an experience of surfing from one web site to another, flowing through content.
Brainstorming session, London, April 2013
In the coming weeks the UX team would create some early designs for a concept (eventually codenamed “Haida”) which would blur the lines between web apps and web sites and create a unique user experience which flows like the web does. This would eventually include not only the “Rocketbar”, which would be accessible across the whole OS and seamlessly adapt to different types of web content, but also “sheets”, which would split single page web apps into multiple pages which you could swipe through with intuitive edge gestures. It would also eventually include a content model based around live apps which you can surf to, use, and then bookmark if you choose to, rather than monolithic apps which you have to install from a central app store before you can use them.
In June 2013 a small group of designers and engineers met in Paris to develop a throwaway prototype of Haida, to rapidly iterate on some of the more radical concepts and put them through user testing.
Haida Prototyping, Paris, June 2013
Josh and Gordon working in a highly co-ordinated fashion, Paris, June 2013
Wizards at work, Paris, June 2013

2.x and the Future
Fast forward to the present and the browser team has been merged into the “Systems Front End” team. The results of the Haida prototyping and user testing are slowly starting to make their way into the main Firefox OS product. It won’t happen all at once, but it will happen in small pieces as we iterate and learn.
In version 2.0 of Firefox OS the homescreen search feature from 1.x will be replaced with a new search experience developed in conjunction with a new homescreen, implemented by Kevin Grandon, which will lay the foundations for “Rocketbar”. In version 2.1 our intention is to completely merge the browser app into the system app so that browser tabs become “sheets” alongside apps in the task manager and the “Rocketbar” is accessible from anywhere in the OS. The Rocketbar will adapt to different types of web content and shrink down into the status bar when not in use. Edge gestures will allow you to swipe between web apps and browser windows and eventually apps will be able to spawn multiple sheets.
UI Mockups of Rocketbar in expanded and collapsed state, July 2014
Version 1.x of Firefox OS was built with web technologies but still has quite a similar user experience to other mobile platforms when it comes to installing and using apps, and browsing the web. Going forward I think you can expect to see the DNA of the web come through into the user interface with a unified experience which breaks down the barriers between web apps and web sites, allowing you to freely flow between the two.
Firefox OS is an open source project developed completely in the open. If you’re interested in contributing to Gaia, take a look at the “Developing Gaia” page on MDN. If you’re interested in creating your own HTML5 app to run on Firefox OS, take a look at the “App Center”.
I've been told I don't blog about my projects often enough, so here's a feature update on my Chrome Extension that adds various improvements to the Strava.com fitness tracker.
All the features are optional and can be individually enabled in the options panel.
This adds aggregate data (fastest, slowest, average, etc.) when segments are repeated within an activity. It's particularly useful for laps or—like this Everesting attempt—hill repeats:
Changes the default leaderboard away from "Overall" when viewing a segment effort. The most rewarding training often comes from comparing your own past performances rather than those of others, so viewing your own results by default can make more sense.
You can select any of Men, Women, I'm Following or My Results:
Hides various entry types in the activity feed that can get annoying. You currently have the option of hiding challenges, route creations, goals created or club memberships:
Calculates a Variability Index (VI) from the weighted average power and the average power, an indication of how "smooth" a ride was. Requires a power meter. A VI of 1.0 would mean a ride was paced "perfectly", with very few surges of effort:
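The calculation itself is simple: the ratio of the weighted average power to the plain average power. A sketch (function and variable names are illustrative, not from the extension's source):

```javascript
// Variability Index: weighted (normalised) average power divided by
// plain average power. 1.0 means perfectly even pacing; higher means
// a "surgier" ride.
function variabilityIndex(weightedAvgPower, avgPower) {
  return weightedAvgPower / avgPower;
}

console.log(variabilityIndex(250, 250)); // 1 -- perfectly even pacing
console.log(variabilityIndex(280, 230).toFixed(2)); // "1.22" -- surgy ride
```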
Automatically loads more dashboard entries when reaching the bottom, saving a tedious click on the "More" button:
Estimates a run's Training Stress Score (TSS) from its Grade Adjusted Pace distribution, a measure of that workout's duration and intensity relative to the athletes's capability, providing an insight into correct recovery time and overall training load over time:
Hides social networking buttons, including prompts to invite or find further friends:
Immediately posts a comment when pressing the Enter/Return key in the edit box, rather than adding a newline:
Changes the default sport for the "Side by Side comparison" module to running when viewing athlete profiles:
Shows running cadence by default in the elevation profile:
Shows running heart rate by default in elevation profile:
Selects "Show Estimated FTP" by default on the Power Curve page:
Prefer the "Standard" Google map over the "Terrain" view:
Let me know if you have any comments, suggestions or other feedback. If you find this software useful, please consider donating via PayPal to support further development. You can also view the source and contribute directly on GitHub.
start-stop-daemon is the classic tool on Debian and derived distributions to manage system background processes. A typical invocation from an initscript is as follows:

start-stop-daemon \
    --quiet \
    --oknodo \
    --start \
    --pidfile /var/run/daemon.pid \
    --exec /usr/sbin/daemon \
    -- -c /etc/daemon.cfg -p /var/run/daemon.pid
The basic operation is that it will first check whether /usr/sbin/daemon is not running and, if not, execute /usr/sbin/daemon -c /etc/daemon.cfg -p /var/run/daemon.pid. This process then has the responsibility to daemonise itself and write the resulting process ID to /var/run/daemon.pid.
start-stop-daemon then waits until /var/run/daemon.pid has been created as the test of whether the service has actually started, raising an error if that doesn't happen.
(In practice, the locations of all these files are parameterised to prevent DRY violations.)

Idempotency
By idempotency we are mostly concerned with repeated calls to /etc/init.d/daemon start not starting multiple instances of our daemon.
This might not seem a particularly big issue at first, but the increased adoption of stateless configuration management tools such as Ansible (which should be completely free to call start to ensure a started state) means that one should be particularly careful about this apparent corner case.
In its usual operation, start-stop-daemon ensures only one instance of the daemon is running via the --exec parameter: if the specified pidfile exists and the PID it refers to is an "instance" of that executable, then it is assumed that the daemon is already running and another copy is not started. This is handled in the pid_is_exec method (source) - the /proc/$PID/exe symlink is resolved and checked against the value of --exec.

Interpreted scripts
However, one case where this doesn't work is interpreted scripts. Let's look at what happens if /usr/sbin/daemon is such a script, e.g. a file that starts:

#!/usr/bin/env python
# [..]
The problem this introduces is that /proc/$PID/exe now points to the interpreter instead, often with an essentially non-deterministic version suffix:

$ ls -l /proc/14494/exe
lrwxrwxrwx 1 www-data www-data 0 Jul 25 15:18 /proc/14494/exe -> /usr/bin/python2.7
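You can see this for yourself on any Linux machine; a process's own /proc entry resolves to the binary actually executing it, never to a script:

```shell
# /proc/$$/exe resolves to the binary executing the current process.
# For a shell script that is the shell itself (e.g. /bin/bash or
# /bin/dash), not the script's own path.
readlink /proc/$$/exe
```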
When this process is examined using the --exec mechanism outlined above it will be rejected as not being an instance of /usr/sbin/daemon, and therefore another instance of that daemon will be incorrectly started.

--startas
The solution is to use the --startas parameter instead. This omits the /proc/$PID/exe check and merely tests whether a process with that PID is running:

start-stop-daemon \
    --quiet \
    --oknodo \
    --start \
    --pidfile /var/run/daemon.pid \
    --startas /usr/sbin/daemon \
    -- -c /etc/daemon.cfg -p /var/run/daemon.pid
Whilst it is therefore less reliable (in that the PID found in the pidfile could actually be an entirely different process altogether) it's probably an acceptable trade-off against the case of running multiple instances of that daemon.
This danger can be ameliorated by using some of start-stop-daemon's other matching tests, such as --user or even --name.
Like most ‘net users I get my fair share of spam. Most of it gets binned automatically by my email system, but of course some still gets through so I am used to hitting the delete button on random email from .ru domains offering me the opportunity to “impress my girl tonight”.
Most such phishing email relies on the recipient being dumb enough, naive enough, or (possibly) drunk enough to actually click through the link to the malicious website. I was therefore more than a little astonished at an email I received today from the Open Rights Group. That email is given below in its entirety (I have obfuscated my email address for obvious reasons).
From: Department of Dirty
Subject: Cleaning up the Internet
Date: Wed, 23 Jul 2014 07:14:18 -0400 (EDT)
Ever thought the internet was just too big? Want to help clean up online filth?
*Welcome to the Department of Dirty*
Watch the Department tackling its work here: www.departmentofdirty.co.uk and share our success, as we stop one man try to get one over us with his ‘spotted dick recipe’:
Department of Dirty Video: http://www.departmentofdirty.co.uk/
The Department of Dirty is working with internet and mobile companies to stop the dirty internet. We are committed to protecting children and adults from online filth such as:
*Talk to Frank: This government website tries to educate young people about drugs. We all know what ‘education’ means, don’t we? Blocked by Three.
*Girl Guides Essex:
They say, ‘guiding is about acquiring skills for life’. We say, why would young girls need skills? Blocked by BT.
*South London Refugee Association:
This charity aims to relieve poverty and distress. Not on our watch they don’t. Blocked by BT, EE, Sky and VirginMedia
This is just the tip of the iceberg.
We need you to help us take a stand against blogs, charities and education websites, all of which are being blocked. It’s time to stop this sick filth. Together, we can clean up the internet.
Your Department of Dirty representative
 You can find out what we’re blocking at this convenient website: https://www.blocked.org.uk/
[DISCLAIMER] This email has come from the Open Rights Group. This email was delivered to: firstname.lastname@example.org If you wish to opt out of future emails, you can do so here.
Now, I’m an ORG supporter (i.e. I am a paying member) and I am sure that someone, somewhere in ORG thought that this email campaign was a great idea. After all, it follows up the ORG’s earlier research on the fairly obvious stupidities arising from the implementation of Dave’s anti-porn campaign, it looks “ironic”, and it uses a snappy domain name which has shades of Monty Python about it. But I’m sorry, in my view this most certainly is not a good idea and I’m sure that ORG will come to regret it.
One of the most fundamental pieces of advice any and every ‘net user is beaten up with is “do not click on links in unsolicited emails”. In particular, the advice normally goes on, “if that email is from an unknown source, or has in any way a suspicious from address, you should immediately bin it”.
This email comes from an unknown address with a wonderfully prurient domain name. Even if it is successful and gets to the intended email inbox, it then relies on the recipient breaking a fundamental security rule. It does this by encouraging him (this looks to be male targeted) to click on a link which the naive might believe leads to a porn video.
How exactly is that going to help?
(Note: it got to my email inbox because the email system at e-activist.com which sent it is allowed by my filters.)
There are three basic systems:
The first is slick and easy to use, but fiddly to set up correctly, and if you want to do something its makers don’t want you to, it’s rather difficult. If it breaks, then fixing it is also fiddly, if not impossible, requiring complete reinitialisation.
The second system is an older approach, tried and tested, but fell out of fashion with the rise of the first and very rarely comes preinstalled on new machines. Many recent installations can be switched to and from the first system at the flick of a switch if wanted. It needs a bit more thought to operate but not much and it’s still pretty obvious and intuitive. You can do all sorts of customisations and it’s usually safe to mix and match parts. It’s debatable whether it is more efficient than the first or not.
The third system is a similar approach to the other two, but simplified in some ways and all the ugly parts are hidden away inside neat packaging. These days you can maintain and customise it yourself without much more difficulty than the other systems, but the basic hardware still attracts a price premium. In theory, it’s less efficient than the other types, but in practice it’s easier to maintain so doesn’t lose much efficiency. Some support companies for the other types won’t touch it while others will only work with it.
So that’s the three types of bicycle gears: indexed, friction and hub. What did you think it was?
My primary use case is preventing test suites and build systems from contacting internet-based services. At best, such access introduces an element of non-determinism; at worst, it allows the execution of malicious code.
I use Alberto Bertogli's libfiu for this, specifically the fiu-run utility, which is part of the fiu-utils package on Debian and Ubuntu.
Here's a contrived example, where I prevent curl from talking to the internet:

    $ fiu-run -x -c 'enable name=posix/io/net/connect' curl google.com
    curl: (6) Couldn't resolve host 'google.com'
... and here's an example of it detecting two possibly internet-connecting tests:

    $ fiu-run -x -c 'enable name=posix/io/net/connect' ./manage.py test
    [..]
    ----------------------------------------------------------------------
    Ran 892 tests in 2.495s

    FAILED (errors=2)
    Destroying test database for alias 'default'...
Note that libfiu inherits all the drawbacks of LD_PRELOAD; in particular, it cannot intercept setuid binaries such as /bin/ping:

    $ fiu-run -x -c 'enable name=posix/io/net/connect' ping google.com
    PING google.com (18.104.22.168) 56(84) bytes of data.
    64 bytes from lhr08s01.1e100.net (22.214.171.124): icmp_req=1 ttl=57 time=21.7 ms
    64 bytes from lhr08s01.1e100.net (126.96.36.199): icmp_req=2 ttl=57 time=18.9 ms
    [..]
Whilst it would certainly be more robust and flexible to use iptables—such as allowing localhost and other local socket connections but disabling all others—I gravitate towards this entirely userspace solution as it requires no setup and I can quickly modify it to block other calls on an ad-hoc basis. The list of other "modules" libfiu supports is viewable here.
I get my domestic ADSL connectivity from the rather excellent people at Andrews and Arnold.
They also happily take (and similarly reply to) GPG encrypted support questions.
Good guys. Thoroughly recommended.
Now can you /really/ see BT doing any of that?
Every now and then I decide I'll try and sort out my VoIP setup. And then I give up. Today I tried again. I really didn't think I was aiming that high. I thought I'd start by making my email address work as a SIP address. Seems reasonable, right? I threw in the extra constraints of wanting some security (so TLS, not UDP) and a soft client that would work on my laptop (I have a Grandstream hardphone and would like an Android client as well, but I figure those are the easy cases while the "I have my laptop and I want to remain connected" case is a bit trickier). I had a suitable Internet connected VM, access to control my DNS fully (so I can do SRV records) and time to read whatever HOWTOs required. And oh my ghod the state of the art is appalling.
Let's start with getting a SIP server up and running. I went with repro which seemed to be a reasonably well recommended SIP server to register against. And mostly getting it up and running and registering against it is fine. Until you try and make a TLS SIP call through it (to a sip5060.net test address). Problem the first; the StartCom free SSL certs are not suitable because they don't advertise TLS Client. So I switch to CACert. And then I get bitten by the whole question about whether the common name on the cert should be the server name, or the domain name on the SIP address (it's the domain name on the SIP address apparently, though that might make your SIP client complain).
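For what it's worth, the SRV plumbing mentioned earlier is defined by RFC 3263: a SIP domain advertises its servers via DNS SRV records, which is what lets an address at your own domain resolve to a TLS-capable SIP server. A zone fragment along these lines (the domain and hostnames are purely illustrative) is the shape of it:

```
; SIP over TLS (SIPS) on port 5061, with plain UDP as a fallback.
; example.com and sip.example.com are placeholders.
_sips._tcp.example.com. 3600 IN SRV 10 0 5061 sip.example.com.
_sip._udp.example.com.  3600 IN SRV 20 0 5060 sip.example.com.
```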
That gets the SIP side working. Of course RTP is harder. repro looks like it's doing the right thing. The audio never happens. I capitulate at this point, and install Lumicall on my phone. That registers correctly and I can call the sip:email@example.com test number and hear the time. So the server is functioning, it's the client that's a problem. I try the following (Debian/testing):
I'm bored at this point. Can I "dial" my debian.org SIP address from Lumicall? Of course not; I get a "Codecs incompatible" (SIP 488 Not Acceptable Here) response. I have no idea what that means. I seem to have all of the options on Lumicall enabled. Is it a NAT thing? A codec thing? Did I sacrifice the wrong colour of goat?
At some point during this process I get a Skype call from some friends, which I answer. Up comes a video call with them, their newborn, perfect audio, and no hassle. I have a conversation with them that doesn't involve me cursing technology at all. And then I go back to fighting with SIP.
Gunnar makes the comment about Skype creating a VoIP solution 10 years ago when none was to be found. I believe they're still the market leader. It just works. I'm running the Linux client, and they're maintaining it (a little behind the curve, but close enough), and it works for text chat, voice chat and video calls. I've spent half a day trying to get a Free equivalent working and failing. I need something that works behind NAT, because it's highly likely when I'm on the move that's going to be the case. I want something that lets my laptop be the client, because I don't want to rely on my mobile phone. I want my email address to also be my VoIP address. I want some security (hell, I'm not even insisting on SRTP, though I'd like to). And the state of the Open VoIP stack just continues to make me embarrassed.
I haven't given up yet, but I'd appreciate some pointers. And Skype, if you're hiring, drop me a line. ;)
Docker is the new best thing ever.
The technology behind it is pretty cool. It works very well and it's incredibly easy to just make things work.
But that's not the best bit!
My favourite thing about Docker is that it's simple to explain to semi-technical folks and better yet, it's easy to get people enthusiastic about it.
As I've previously mentioned, simplicity is something I aspire to in all things and the fact that "post-technical" [cheers Goran ;)] types get excited about how Docker can be used to break your services down into small components that you thread together makes my life that much easier when I'm trying to "sell" the benefits of doing so.
I have failed at sentence construction. Maybe I need to dockerise [eww] that.
Is it annoying or not that everyone says SSL Certs and SSL when they really mean TLS?
Does anyone actually mean SSL? Have there been any accidents through people confusing the two?
So it’s been a few years since I’ve posted, because it’s been so much hard work, and we’ve been pushing really hard on some projects which I just can’t talk about – annoyingly. Anyways, on March 20th, 2011 I talked about Continuous Integration and Continuous Deployment and the Cloud and discussed two main methods – having what we now call ‘Gold Standards’ vs continually updating.
The interesting thing is that as we’ve grown as a company, and as we’ve become more ‘Enterprise’, we’ve brought in more systems administrators and begun to really separate the deployments from the development. The other thing is we have separated our services out into multiple vertical strands, which have different roles. This means we have slightly different processes for Banking or Payment based modules than we do for marketing modules. We’re able to segregate operational and content data from personally identifiable information – PII having much higher regulation on who can access it (and auditing of who does).
Several other key things had to change: for instance, things like the SSL keys of the servers shouldn’t be kept in the development repo. Now, of course not, I hear you yell, but it’s a very blurry line. For instance, should the Django configuration be kept in the repo? Well, yes, because that defines the modules and things like URLs. Should the nginx config be kept in the repo? Well, oh. If you keep *that* in then you would keep your SSL certs in…
So the answer becomes having lots of repos: one repo per application (Django-wise), and one repo per deployment containing configurations. And then you start looking at build tools to bring a particular server, or cluster of servers, up and running.
The process (for our more secure, audited services) is looking like a tool to bring an AMI up, get everything installed and configured, and then take a snapshot, and then a second tool that takes that AMI (and all the others needed) and builds the VPC inside of AWS. It’s a step away from the continuous deployment strategy, but it is mostly automated.
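As a sketch of that application/deployment split (all names and paths here are illustrative, not our actual setup), the application repo’s Django settings can read anything sensitive from the environment, so that the deployment repo, or the tool building the AMI, is the only thing that ever holds keys and certificates:

```python
# settings.py -- lives in the application repository.
# No secrets are hard-coded; the deployment repository (or the
# AMI-building tool) supplies them via the environment at deploy time.
import os

# Simulate what the deployment environment would provide, purely so
# this sketch is runnable standalone:
os.environ.setdefault("DJANGO_SECRET_KEY", "dummy-value-for-illustration")

SECRET_KEY = os.environ["DJANGO_SECRET_KEY"]
DB_PASSWORD = os.environ.get("DB_PASSWORD", "")
# TLS material stays out of the app repo entirely; only a path is known:
SSL_CERT_PATH = os.environ.get("SSL_CERT_PATH", "/etc/ssl/private/site.pem")

print("secret key loaded:", bool(SECRET_KEY))
```

The deployment repo then becomes the single place that needs tight access control and auditing, which fits neatly with the PII segregation described above.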
The cliché is that lotteries are a tax on the mathematically illiterate.
It's easy to have some sympathy for this position. Did you know trying to get rich by playing the lottery is like trying to commit suicide by flying on commercial airlines? These comparisons are superficially amusing, but looking at lotteries in this rational way seems itself irrational, ignoring the real motivations of the participants.
Even defined as a tax, lotteries are problematic – far from being progressive or redistributive, it has always seemed suspect when lottery money is spent proudly on high-brow projects such as concert hall restorations and theatre lighting rigs when—with little risk of exaggeration—there is zero overlap between the people who would benefit from the project and those who funded it.
But no, what rankles me more about our lotteries isn't the unsound economics of buying a ticket or even that it's a state-run monopoly, but rather the faux philanthropic way it manages to evade all criticism by talking about the "good causes" it is helping.
Has our discourse become so relative and non-judgemental that when we are told that the lottery does some good, however slight, we are willing to forgive all of the bad? Isn't there something fundamentally dishonest about disguising the avarice, cupidity, escapism and being part of some shared cultural event—that are surely the only incentives to play this game—with some shallow feel-good fluff about good causes? And where are the people doing real good in communities complaining about this corrupting lucre, or are they just happy to take the money and don't want to ask too many awkward questions..?
"Vices are not crimes" claims Lysander Spooner, and I would not want to legislate that citizens cannot make dubious investments in any market, let alone a "lottery market", but we should at least be able to agree that this nasty regressive tax should enjoy no protection nor special privileges from the state, and it should be incapable of getting away with deflecting criticism with a bunch of photogenic children from an inner-city estate clutching a dozen branded footballs.
I put out the call for nominations for the 2014 Software in the Public Interest (SPI) Board election last week. At this point I haven't yet received any nominations, so I'm mentioning it here in the hope of a slightly wider audience. Possibly not the most helpful as I would hope readers who are interested in SPI are already reading spi-announce. There are 3 positions open this election and it would be good to see a bit more diversity in candidates this year. Nominations are open until the end of Tuesday July 13th.
The primary hard and fast time commitment a board member needs to make is to attend the monthly IRC board meetings, which are conducted publicly via IRC (#spi on the OFTC network). These take place at 20:00 UTC on the second Thursday of every month. More details, including all past agendas and minutes, can be found at http://spi-inc.org/meetings/. Most of the rest of the board communication is carried out via various mailing lists.
The ideal candidate will have an existing involvement in the Free and Open Source community, though this need not be with a project affiliated with SPI.
Software in the Public Interest (SPI, http://www.spi-inc.org/) is a non-profit organization which was founded to help organizations develop and distribute open hardware and software. We see it as our role to handle things like holding domain names and/or trademarks, and processing donations for free and open source projects, allowing them to concentrate on actual development.
It now supports:
Needless to say, this software is not endorsed by Strava. Suggestions, feedback and contributions welcome.
I arrived in Klagenfurt early on Thursday before Sunday's race and went to register at the "Irondome" on the shores of Lake Wörthersee. I checked up on my bike at Race Force's HQ and had a brief look around the expo before it got busy.
Over the next few days I met up a number of times with my sister who had travelled—via Venice—to support and cheer me. Only the day before the race did it sincerely dawn on me how touching and meaningful this was, as well as how much it helped having someone close by.
I had planned to take part in as much of the "Ironman experience" as possible but in practice I not only wanted to stay out of the sun as much as possible, I found that there was an unhealthy pre-race tension at the various events so I kept myself at a slight distance.
Between participants the topic of discussion was invariably the weather forecast but I avoided paying much attention as I had no locus of control; I would simply make different decisions in each eventuality. However, one could not "un-learn" that it reached 40°C on the run course in 2012, landing many in hospital.
As this was my first long-distance triathlon with a corresponding investment of training I decided that conservative pacing and decisions were especially prudent in order to guarantee a finish. Ironman race intensity is quite low but this also means the perceived difference between a sustainable and a "suicide" pace is dangerously narrow.
Despite that, my goal was to finish under 11 hours, targeting a 1:20 swim, a 5:30 bike and a 4:00 marathon.

Race day
I got to sleep at around 10PM and awoke early at 3AM, fearing that I had missed my alarm. I dozed for another hour before being woken at 4AM and immediately started on two strong coffees and waited for a taxi.
Over the next 2 hours I ate two Powerbars, a banana and sipped on isotonic energy drink. I also had a gel immediately before the swim, a total of approximately 600 calories. Many consume much more pre-race, but I had not practised this and there would be plenty of time to eat on the bike.
I got to transition as it opened at 5AM and checked over my bags and bike and then made my way slowly to the lake to put on my wetsuit.

Swim
I judged my swimming ability to be just to the right of the bell curve, so I lined myself up according to the organisers' seeding suggestion. I'm quite good with nerves, so in the final ten minutes I kept to myself and remained very calm.
After some theatrics from the organisers, the gun finally went off at 7AM. It was difficult to get my technique "in" straight away but after about 5 minutes I found I could focus almost entirely on my stroke. I didn't get kicked too much and I reached the first turn buoy in good time, feeling relaxed. Between the next two buoys I had some brief hamstring cramps but they passed quickly.
After the second turn I veered off-course due to difficulties in sighting the next landmark—the entrance to the Lend canal—in the rising sun. Once I reached it, the canal itself was fast-moving but a little too congested so I became stuck behind slower swimmers.
The end of the swim came suddenly and glancing at my watch I was pretty happy with my time, especially as I didn't feel like I had exerted myself much at all.

T1
Due to the size of Ironman events there is an involved system of transition bags and changing tents; no simple container beside your bike. There was also a fair amount of running between the tents as well. Despite that (and deciding to swim in my Castelli skinsuit to save time) I was surprised at such a long transition split – I'm not sure where I really spent it all.
I also had been under the impression volunteers would be applying sunscreen so I had put my only sun spray in the bike-to-run bag, not the swim-to-bike one. However, I found a discarded bottle by my feet and borrowed some.

Bike
The bike leg consists of two hilly laps just to the south of the Wörthersee.
It felt great to be out on the bike but it soon became frustrating as I could not keep to my target wattage due to so many people on the course. There were quite a few marshals out too, which compounded this – I didn't want to exert any more than necessary in overtaking slower riders, but I also did not want to draft, let alone be caught drafting.
This also meant I had to switch my read-out from a 10-second power average to 3-seconds, a sub-optimal situation as it does not encourage consistent power output. It's likely a faster swim time would have "seeded" me within bikers more around my ability, positively compounding my overall performance.
I started eating after about 15 minutes: in total I consumed six Powerbars, a NutriGrain, half a banana, four full bottles of isotonic mix and about 500ml of Coca-Cola. I estimate I took on about 1750 calories in total.
The aid stations were very well-run, my only complaints being that the isotonic drink became extremely variable—the bottles being half-filled and/or very weak—and that a few were cruelly positioned before hills rather than after them.
I felt I paced the climbs quite well and kept almost entirely below threshold as rehearsed. Gearing-wise, I felt I had chosen wisely – I would not have liked to have been without 36x28 in places and only managed to spin out the 52x12 four or five times. Another gear around 16t would have been nice though.
There was quite heavy rain and wind on the second lap but it did not deter the locals or my sister, who was on the Rupitiberg waving a custom "Iron Lamb" banner. It was truly fantastic seeing her out on the course.
On the final descent towards transition I was happy with my time given the congestion but crucially did not feel like I had just ridden 112 miles, buoying my spirits for the upcoming marathon.

T2
Apart from the dismount line which came without warning and having a small mishap in finding my bike rack, transitioning to the run was straightforward.

Run
The run course consists of two "out-and-backs". The first leads to Krumpendorf along the Wörthersee and the railway, the second to the centre of Klagenfurt along the canal. Each of these is repeated twice.
I felt great off the bike but it is always difficult to slow oneself to an easy pace after T2, even when you are shouting at yourself to do so. I did force my cadence down—as well as scared myself with a 4:50 split for the first kilometer—and settled into the first leg to Krumpendorf.
Once the crowds thinned I took stock and decided to find a bathroom before it could become a problem. After that, I felt confident enough to start taking on fuel and the 10km marker on the return to the Irondome came around quickly.
Over the course of the run I had about three or four caffeinated gels and, in the latter stages, a few mouthfuls of Coke. I tried drinking water but switched to watermelon slices as I realised I could absorb more liquid that way while staying on the move, as well as gaining the small feeling of security that comes from simply carrying something.
The first visit to Klagenfurt was unremarkable and I was taking care to not go too hard on the downhill gradients there – whilst going uphill is relatively straightforward to pace, I find downhill running deceptively wearing on your quads and I still had 25km to go.
The halfway point came after returning from Klagenfurt and I was spotted by my sister which was uplifting. I threw her a smile, my unused running belt and told her I felt great, which I realised I actually did.
At about 23km I sensed I needed the bathroom again but the next aid station had locked theirs and for some bizarre reason I then did not stop when I saw a public WC which was clearly open. I finally found one at 28km but the episode had made for a rather uncomfortable second lap of Krumpendorf. I did run a little of the Klagenfurt canal earlier in the week, but I wished I had run the route through Krumpendorf instead – there was always "just another" unexpected turn which added to the bathroom frustration.
In a chapter in What I Talk About When I Talk About Running, Haruki Murakami writes about the moment he first ran past 26.2 miles:

    I exaggerate only a bit when I say that the moment I straddled that line a slight shiver went through me, for this was the first time I'd ever run more than a marathon. For me this was the Strait of Gibraltar, beyond which lay an unknown sea. What lay in wait beyond this, what unknown creatures were living there, I didn't have a clue. In my own small way I felt the same fear that sailors of old must have felt.
I was expecting a similar feeling at this point but I couldn't recall whether my longest run to date was 30 or 31km, a detail which somehow seemed to matter at the time. I certainly noticed how uphill the final return from Krumpendorf had suddenly become and how many people had started walking, lying down, or worse.
32km. Back near the Irondome, the crowds were insatiable but extremely draining. Having strangers call your name out in support sounds nice in principle but I was already struggling to focus, my running form somewhat shot.
In the final leg to Krumpendorf, the changes of gradient appeared to have been magnified tenfold but I was still mostly in control, keeping focused on the horizon and taking in something when offered. Once I reached Klagenfurt for the last time at 36km I decided to ignore the aid stations; they probably weren't going to be of any further help and running on fumes rather than take nutrition risks seemed more prudent.
The final stretch from Klagenfurt remains a bit of a blur. I remember briefly walking up a rather nasty section of canal towpath, this was the only part I walked outside of the aid stations which again seemed more important (and worrying) at the time. I covered a few kilometers alongside another runner where matching his pace was a welcome distraction from the pain and feelings of utter emptiness and exhaustion.
I accelerated away from him and others, but the final kilometer seemed like an extremely cruel joke, teasing you multiple times with the sights and sounds of the finish before winding you away—with yet more underpasses!—to fill out the distance.
Before entering the finishing chute I somehow zipped up my trisuit, flattened my race number and climbed the finish ramp, completely numb to any "You are an Ironman" mantra being called out.

Overall
I spent about 15 minutes in the enclave just beyond the finish line, really quite unsure about how my body was feeling. After trying lying down and soaking myself in water, Harriet took me off to the post-race tent where goulash, pizza and about a litre of Sport-Weiss made me feel human again...
I have been travelling a lot over the last few months (Czech Republic, Scotland, France, Germany, Austria, Slovenia, Croatia, Italy). That travel, plus my catching up on a load of reading is my excuse for the woeful lack of posts to trivia of late. But hey, sometimes life gets in the way of blogging – which is as it should be.
A couple of things struck me whilst I have been away though. Firstly, and most bizarrely, I noticed a significant number of tourists in popular, and hugely photogenic, locations (such as Prague and Dubrovnik) wandering around staring at their smartphones rather than looking at the reality around them. At first I thought that they were just checking photographs they had taken, or possibly that they were texting or emailing friends and relatives about their holidays, or worse, posting to facebook, but that did not appear to be the case. Then by chance I overheard one tourist telling his partner that they needed to “turn left ahead” as they walked past me, so it struck me that they might just possibly be using google maps to navigate. So I watched others more carefully. And I must conclude that many people were doing just that. I can’t help but feel a little saddened that someone should choose to stare at a google app on a small screen in their hand rather than look at the beauty of something like the Charles Bridge across the Vltava.
The second point which struck me was how much of a muppet you look if you use an iPad to take photographs.