Planet SurreyLUG



Thu, 26/11/2015 - 07:47

A couple of experiences recently have reminded me that I ought to rely more on profilers when debugging performance problems. Usually I'll start with logging/printing or inline REPLs to figure out where the time is going, but in heavily iterative functions, a profiler identifies the cause much more quickly.

1. Augeas path expression performance

Two users mentioned that Augeas's path expression performance was terrible in some circumstances. One was working on an Augeas provider for Puppet and it was slow matching entries when there were hundreds or thousands of services defined in a Nagios config, while netcf developers reported the utility was very slow with hundreds of VLAN interfaces.

Specifically it was found that filtering of nodes from the in-memory tree was slow when they had the same label, even when accessed directly by index, e.g. /files/foo[0] or /files/foo[512]. Running Valgrind's callgrind tool against a simple test run of the binary showed a huge proportion of time spent in the ns_add and memmove functions.
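Generating the profile is a single command; a minimal sketch, where the augtool invocation is just an illustrative stand-in for the real slow test case:

$ valgrind --tool=callgrind augtool print /files/etc/hosts
$ kcachegrind callgrind.out.*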

Using KCachegrind to analyse the output file from callgrind:

From the callee map it's visually very easy to spot where the time is taken up. The two candidates are the red and yellow squares shown on the left, which are being called through two different paths (hence showing up twice). The third large group of functions comes from the readline library initialisation (since I was testing the CLI, not the API), and the fourth block is Augeas's own initialisation. These have a much more regular spread of time spent in functions.

David eliminated the bulk of the time spent in the two functions on the left by:

  1. Replacing an expensive check in ns_add that attempted to build a set of unique nodes. Instead of rescanning the existing set each time to see whether the new node was already present, a flag inside the node now indicates whether it has already been added.
  2. Replacing many individual memmove calls with batched ones. Instead of removing candidate nodes from the start of a list by memmoving the rest of the list towards the start as each one is identified, the nodes to remove are counted and removed in one operation.

After the fix, the group of expensive ns_* functions has moved completely in the callee map. They're now over on the right-hand side of KCachegrind, leaving aug_init as the most expensive part, followed by readline, followed by the actual filtering.

2. JGrep filtering performance

Puppet module tests for theforeman/puppet had recently started timing out during test startup. The dependencies would install, the RSpec test framework would start up, and there would be a long pause before it began listing the tests being run. This triggered the Travis CI inactivity timeout, cancelling the builds after 10 minutes.

Julien dug into it and narrowed it down to a method call to rspec-puppet-facts, which loads hashes of system data (facts) for each supported operating system and generates sets of tests for each OS. This in turn calls facterdb, a simple library that loads JSON files into memory and provides a filtering API using jgrep. On my system this was taking about seven or eight seconds per call and per test, so a total of about two minutes on this one method call.

I was rather prejudiced when I started looking at this, as I knew from chasing a Heisenbug into facterdb a few weeks earlier that it loaded all of its JSON files into one large string before passing it into JGrep to be deserialised and filtered. I figured that either it was taking a long time to parse or deserialise the huge blob of JSON, or the blob was too large to filter quickly. Putting some basic logging into the method showed that I was completely wrong about the JSON: it was extremely quick to parse and was a non-issue.

Next up, with some basic logging and by inserting pry (a REPL) into jgrep, I could see that parsing the filter string was taking about two thirds of the time, and applying it the other third. This surprised me because, although the filter string was reasonably complex (it lists all of the supported OSes and versions), it was only a few lines long. Putting ruby-prof around the JGrep method call quickly showed where the issues lay.
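Wrapping the suspect call is almost a one-liner. Here is a minimal sketch, assuming jgrep's JGrep.jgrep(json, expression) entry point and using trivial stand-in data rather than the real facterdb facts:

$ cat > profile_jgrep.rb <<'EOF'
require 'ruby-prof'
require 'jgrep'

result = RubyProf.profile do
  # stand-in for the slow call under investigation
  JGrep.jgrep('[{"operatingsystem": "Debian"}]', 'operatingsystem=Debian')
end

# print a flat report; RubyProf::CallTreePrinter can write
# callgrind-compatible output for KCachegrind instead
RubyProf::FlatPrinter.new(result).print(STDOUT)
EOF
$ ruby profile_jgrep.rb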

ruby-prof has a variety of output formats, including writing callgrind-compatible files which can be imported straight into KCachegrind. These quickly showed an alarming amount of time splitting strings and matching regular expressions:

The string splitting existed only to access individual characters in an array during parsing of the grep filter, so it could easily be replaced by String#[] for an instant performance boost. The regex matching was mostly checking whether certain characters were in strings, or whether a string started with a certain substring, both of which can be done much more cheaply with String#include? and String#start_with?.
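The difference is easy to try from a shell; these one-liners (with illustrative strings, not jgrep's actual code) show the cheaper replacements:

$ ruby -e 'puts "osfamily".split("")[0]'        # old: builds a throwaway array just to index it
$ ruby -e 'puts "osfamily"[0]'                  # new: String#[] reads the character directly
$ ruby -e 'puts "a=1".include?("=")'            # replaces a regex match for a literal character
$ ruby -e 'puts "and x=1".start_with?("and")'   # replaces an anchored regex match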

After these two small changes, the callee map looks a lot healthier, with a more even distribution of runtime between methods. Actual runtime has dropped from 7-8 seconds to a fraction of a second, and tests no longer time out during startup.

Both callgrind and ruby-prof are extremely easy to use - it's just a matter of running a binary under Valgrind for the former, and wrapping a code block in a RubyProf.profile call for the latter - and the results from fixing some low-hanging performance problems are very satisfying!

Categories: LUG Community Blogs

Facebook experts: how do I stop all re-shared posts appearing in my feed?

Wed, 25/11/2015 - 10:29

I’d like to stop all re-shared posts appearing in my Facebook feed, and only
see newly created content from my contacts. Is this possible?

The post Facebook experts: how do I stop all re-shared posts appearing in my feed? appeared first on life at warp.

Categories: LUG Community Blogs

Engine stalled yesterday. #ill

Wed, 25/11/2015 - 10:20

The problem with being the engine of your business is that, sometimes, engines stall.

Recovering now.

The post Engine stalled yesterday. #ill appeared first on life at warp.

Categories: LUG Community Blogs

Self-hosting the IndieWeb

Tue, 24/11/2015 - 10:15

Discovering the IndieWeb movement was a 2015 highlight for me. It addressed many of my concerns about the direction of the modern internet, especially regarding ownership of and control over personal data. But to truly own your own data, self-hosting is a must!

Background: Self-hosting your own stuff

I’m an ideas person. I have a number of projects – or, rather, project ideas – lined up, which I need to record and review. My blog provides me with the ideal space for that, as some ideas may attract the attention of others who are also interested. But why does this matter?

As someone who naturally likes to share experiences and knowledge, I see no benefit in not sharing my ideas too. After all, the web is all about sharing ideas. This matters to me, because the web is widely regarded as the most valuable asset civilised society has today (aside from the usual – like natural resources, power, warmth and sustenance)!

Owning your own data

As a small business owner, I sometimes benefit from various common business practices. For example, under the standard accounting principle of straight-line depreciation, capital assets purchased by the business are written down over several years to little or no book value, at which point they become potential liabilities (in both the financial and the risk-management sense). This means I am able to get hold of used, good-condition computing hardware, four to five years old, at very little cost.


Even 10 year old servers still make for good general purpose machines. I’ll be using one of these for this blog, soon.  Expect plenty of caching!


This is useful for me, as a blogger and an IndieWeb advocate, as I can not only publish and manage all my own data, but physically host it too. As I have fibre broadband running to my house, it's now feasible to serve my blog at reasonable speeds, with 10-20 Mbit/s upstream ("download speed" to you as a visitor), which is sufficient for my likely traffic and audience.

This ties in nicely with one of my core beliefs: that people should be able to manage all their own data if they choose. I am technically competent enough, and have the means at my disposal to do it. So why not!

Another driver towards this is that I wish to permanently separate "work" and "pleasure". My business web hosting and cloud service is for my customers. Yes, we host our own web content as a business, but personal content? In the interests of security and of keeping those interests separate, I am pushing towards making personal content something that is only hosted for a paying customer.

Of course, I would encourage anyone to start their own adventure self-hosting too!

Many bridges to cross

Naturally, taking on this type of arrangement has various challenges attached. Here is a selection of the tasks still to be achieved:

  • Convert some space in house for hosting
    • Create a level screed
    • Sort out wiring
    • Fire detection/resistance considerations
    • Power supply (e.g. UPS)
    • Physical security
  • Get server cabinet & rack it up
  • Configure firewall(s)/routing accordingly
  • Implement back-up – and possibly failover – processes

Step one: documentation

Whilst I am progressing these endeavours, it would be remiss of me not to document them. There is a lot to be said for the benefits (to a devop, anyway) of hosting one's own sites and data, but naturally my blog must carry on while I am in the process of building its new home.

A quick jiggle around of my site's menu structure will hopefully clarify where you can see this work going forwards (hint: check the projects menu).

Taking it from here

If you are interested in hosting your own servers and being in direct control over your content/data, why not subscribe to this blog’s RSS feed or subscribe by email (form towards footer). Or if you have comments, just “Leave a Reply” beneath!

The post Self-hosting the IndieWeb appeared first on life at warp.

Categories: LUG Community Blogs

Blogroll-Firefox sync: Wondering if possible sync #wordpress blogroll with #firefox bookmarks.

Mon, 23/11/2015 - 12:01

Wondering if this exists, to sync #wordpress blogroll with #firefox bookmarks.

The post Blogroll-Firefox sync: Wondering if possible sync #wordpress blogroll with #firefox bookmarks. appeared first on life at warp.

Categories: LUG Community Blogs

Varidesk Pro Plus 48 Standing Desk

Sun, 22/11/2015 - 18:25

In my recent post on the Bac Posture Stand I mentioned that I had purchased the Varidesk Pro Plus 48. I promised to review this "in due course" and that time has now come!

The Varidesk Pro Plus 48 is a "desk riser" that enables you to choose your working height. You can start the day standing up and then, after your executive lunch, you can spend your afternoon sitting down.

N.B. This post includes Amazon Associates links. I have never actually received anything from them, but I live in hope!


At £437 (including delivery) this is a very expensive piece of equipment indeed. The alternative is to buy an actual standing desk and in many cases this will be a better option. The beauty of the Varidesk is that you are able to retain your existing furniture.

You do also need the Varidesk Standing Mat, which is another £67.50 (including delivery).

There are cheaper solutions, such as the LIFT Standing Desk mentioned on Bad Voltage, but at the time I was unable to locate this in the UK. You can even build your own.


In the end I purchased the Varidesk Pro Plus 48 from Amazon*. Not only was it almost the only option that I could find, but it also looked like a professional unit and the reviews were very positive.

*Please note that the photo on the Amazon item is actually incorrect: the Pro Plus 48 looks like the photo above; visit the Varidesk website for details. I did contact Varidesk about this anomaly and they explained that they had informed Amazon on multiple occasions but were unable to get the photo corrected.

To enhance my new standing desk I also purchased a number of other items:


Typically the Varidesk arrived when I was on a day off. It turned up on a pallet, resulting in much consternation at work. In fact, once the packaging was removed, the item was the desired size and all was well.

There is no assembly required: simply unpackage it and lift it onto your desk. Be warned though, the Varidesk does weigh a lot and will require two fairly strong people to manoeuvre it into place.

I was a little concerned as to how this Pro Plus 48 would fit on my corner desk. I did originally order the Varidesk Cube Corner 48, but I received an email from Varidesk explaining that this unit had been discontinued and cancelling and refunding my order.

In the event the Pro Plus 48 fitted my corner desk beautifully.

First Impressions

The Varidesk Pro Plus 48 appears to be extremely well constructed and solidly made. It has what I assume are bolt holes for those scared of the desk falling over, but in practice the weight of the metal base is sufficient.

The desk can be raised and lowered without physical effort, although sometimes cables will get under the desk, requiring their removal before it will lower fully. I cannot blame Varidesk for that, but you do need to give some thought to cable routing to minimise the likelihood of this happening.

The First Month

After a month of using my Varidesk I am very happy with it. I would love to be able to report a stunning transformation of my painful back, but sadly my back is still very sore. That said, I certainly feel healthier from standing up and being more active.

Standing up all day is not without its issues: my feet and legs get tired, so I tend to sit down at lunchtime and late afternoon. But I have also found that sitting down for too long has started feeling very uncomfortable. Perhaps it always did? But now I can do something about it, raising my desk and enjoying standing up again, if only for a while.

In short, for me at least, being able to choose to sit or stand feels very natural and it would feel very strange now to be forced to spend the entire day sitting down.

I believe I now have the closest thing possible to the perfect desk.

Categories: LUG Community Blogs

Howto | Add ActiveDirectory Addressbook to Sylpheed Email

Thu, 19/11/2015 - 21:00

Where we require a lightweight mail client, we tend to use Sylpheed, the project from which Claws Mail was originally forked.

It seems unlikely that you would be able to add an ActiveDirectory Address Book into such a lightweight email client, and indeed the manual states:

### FIXME: write this part.

But in fact it was trivially easy:


Whilst these instructions worked for us, do be aware that we are using Samba4 ActiveDirectory. In theory this is a drop-in replacement for Windows ActiveDirectory and these instructions should work unchanged.

Add LDAP Addressbook

Firstly run Sylpheed and go to Tools, Addressbook. Within the Sylpheed Addressbook go to File, New LDAP Server. You should now see a screen like this:

Having entered the Name, Hostname and Port, you are able to "Check Server" to ensure connectivity. Next either enter your Search Base, or click on the … button to select from the detected Search Bases.

Item         Explanation                   Example
Name         Addressbook or server name    example
Hostname     ActiveDirectory host name     ads.example.lan
Port         LDAP port number*             389 or 636
Search Base  Your AD domain in LDAP form   DC=example,DC=lan

*You should probably choose 636 when connecting via a public network, and you may need to open ports on your router.

Now select the Extended tab and you should see the following screen:

Item             Explanation                        Example
Search Criteria  This simple example worked for us  (objectclass=*)
Bind DN          Your ActiveDirectory username      chris@example.lan
Bind Password    Your ActiveDirectory password      -

Now click on OK to finish.


You should now have a Search field available. Enter a colleague's first name, click Search, and you should be presented with their email addresses.
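If the search returns nothing, the same query can be tested from a terminal with OpenLDAP's ldapsearch (in the ldap-utils package). This sketch reuses the example values above; the filter and attribute are illustrative:

$ ldapsearch -H ldap://ads.example.lan:389 -D "chris@example.lan" -W \
    -b "DC=example,DC=lan" "(objectclass=user)" mail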


As far as I can tell the addressbook lookup is not automatic: you have to click on the addressbook icon in the Compose window and search for the person in order to add them to the To: field. A bit clunky perhaps, but arguably not so very different from the need in Outlook to press Check Names to look up new addresses. Needless to say, once the address is in the recent address list, it is auto-completed in the future.

Categories: LUG Community Blogs

Howto | Install ESET Remote Administrator on Ubuntu

Tue, 17/11/2015 - 20:12

After much research I decided to purchase ESET Antivirus for our Windows clients. But rather than install ESET Antivirus on each client in turn, I decided to install ESET Remote Administrator on an Ubuntu VPS. ESET Remote Administrator is a server program for controlling ESET Antivirus (and other ESET programs) on clients; you can use it to deploy the ESET Agent, which in turn can install and monitor the ESET Antivirus software.

Our Windows PCs are controlled by an ActiveDirectory server (actually a Samba4 ActiveDirectory server, although that should not make any difference to these instructions).

I found the ESET instructions for doing so decidedly sketchy, but I eventually managed to install it and took some notes as I went. I cannot promise these instructions are complete, but used in conjunction with the ESET instructions they may be of help.

Update Hosts

Edit /etc/hosts:

127.0.0.1   localhost.example.lan localhost.localdomain localhost
127.0.1.1   eset.example.lan eset

Test by typing the following two commands and checking the output matches:

# hostname
eset
# hostname -f
eset.example.lan

Dependencies

# apt-get install mysql-server unixodbc libmyodbc cifs-utils libjcifs-java winbind libqtwebkit4 xvfb
# dpkg-reconfigure libmyodbc

Install MySQL Server

Edit /etc/mysql/my.cnf:


Restart MySQL

# service mysql restart

Configure MySQL

# mysql -u root -p

I cannot remember whether I had to create the database and/or username and password manually; I suggest you try without first, and create them if the installer complains.
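If it does complain, creating them by hand at the mysql prompt would look something like this (a sketch: the names match the installer flags below, and ??? is your chosen password):

mysql> CREATE DATABASE era_db;
mysql> GRANT ALL ON era_db.* TO 'era'@'localhost' IDENTIFIED BY '???';
mysql> FLUSH PRIVILEGES;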

Install ESET Remote Administrator Server

Replace each ??? with your actual values:

# sh --license-key=??? \
  --db-type="MySQL Server" --db-driver="MySQL" --db-hostname= --db-port=3306 \
  --server-root-password="???" --db-name="era_db" --db-admin-username="root" --db-admin-password="???" \
  --db-user-username="era" --db-user-password="???" \
  --cert-hostname="eset" --cert-auth-common-name="eset.example.lan" --cert-organizational-unit="eset.example.lan" \
  --cert-organization="example ltd" --cert-locality="UK" \
  --ad-server="ads.example.lan" --ad-user-name="era" --ad-user-password="???"

In case of error, read the following carefully:

/var/log/eset/RemoteAdministrator/EraServerInstaller.log

Install Tomcat7

# apt-get install tomcat7 tomcat7-admin

Wait 5 minutes for Tomcat to start, then visit http://localhost:8080 to check it has worked.

Configure SSL

See SSL/TLS Configuration HOW-TO.

Step 1 - Generate Keystore

$JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA -keystore /usr/share/tomcat7/.keystore

Use password 'changeit'.

At the time of writing $JAVA_HOME is /usr/lib/jvm/java-7-openjdk-amd64/jre

Step 2 - Configure Tomcat

In /etc/tomcat7/server.xml you should have two Connector sections like this:

<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" URIEncoding="UTF-8" redirectPort="8443" /> <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" keystoreFile="/usr/share/tomcat7/.keystore" keystorePass="changeit" clientAuth="false" sslProtocol="TLS" />

These will already exist, but need uncommenting and adjusting with keystore details.


You should now be able to go to https://localhost:8443.

Join Windows Domain

The ERA server must be joined to the domain.

Install samba and stop any samba services that start.

# apt-get install samba krb5-user smbclient

Edit /etc/samba/smb.conf:

[global]
    workgroup = EXAMPLE
    security = ADS
    realm = EXAMPLE.COM
    dedicated keytab file = /etc/krb5.keytab
    kerberos method = secrets and keytab
    server string = Samba 4 Client %h
    winbind enum users = yes
    winbind enum groups = yes
    winbind use default domain = yes
    winbind expand groups = 4
    winbind nss info = rfc2307
    winbind refresh tickets = Yes
    winbind normalize names = Yes
    idmap config * : backend = tdb
    idmap config * : range = 2000-9999
    idmap config EXAMPLE : backend = ad
    idmap config EXAMPLE : range = 10000-999999
    idmap config EXAMPLE:schema_mode = rfc2307
    printcap name = cups
    cups options = raw
    usershare allow guests = yes
    domain master = no
    local master = no
    preferred master = no
    os level = 20
    map to guest = bad user
    username map = /etc/samba/smbmap

Create /etc/samba/smbmap:

!root = EXAMPLE\Administrator Administrator administrator

Edit /etc/krb5.conf:

[libdefaults]
    default_realm = EXAMPLE.COM
    dns_lookup_realm = false
    dns_lookup_kdc = true
    ticket_lifetime = 24h
    forwardable = yes

Make sure that /etc/resolv.conf points to the AD DC, and that DNS is set up correctly.

Then run this command:

# net ads join -U Administrator@EXAMPLE.COM

Enter the Administrator's password when requested.

Edit /etc/nsswitch.conf and add 'winbind' to the passwd and group lines.
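The two lines should end up looking something like this (assuming Ubuntu's default compat entries):

passwd:         compat winbind
group:          compat winbind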

Start samba services.
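On an Ubuntu release of this vintage that means something like:

# service smbd start
# service nmbd start
# service winbind start

Afterwards, wbinfo -u should list the domain users, and getent passwd should include them too once nsswitch is in place.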

Update DNS

For some reason the join does not always create the DNS entry on Samba4, so you may need to add this manually.

Categories: LUG Community Blogs

Intermittent USB3 Drive Mount

Tue, 17/11/2015 - 14:38

I have a problem with a Samsung M3 Portable USB3 external hard drive only working intermittently. I use a couple of these for off-site backups for work and so I need them working reliably. In fairness a reboot does cure the problem, but I hate that as a solution.

To troubleshoot this problem, the first step was to check the system devices:

# ls -al /dev/disk/by-id

But only my main two system drives were showing, and not the USB3 drive, which would have been sdc, being the third drive (a, b, c).

System Logs

The next step was to check the system logs:

# dmesg | tail

This showed no issues at all. Then I checked syslog:

# tail /var/log/syslog

This was completely empty, and had not been written to for almost a year. I can't quite believe I haven't needed to check the logs in all that time, but there you are.

I checked that rsyslogd was running, and it was, as user syslog. The /var/log/syslog file was owned by root with group adm, and whilst syslog was a member of the adm group, the files all had user read/write permissions only (-rw-------).

This was easily fixed:

# chmod g+rw /var/log/*
# service rsyslog restart

Now the syslog file was being written to, but there was a problem writing to /dev/xconsole:

Nov 17 12:20:48 asusi5 rsyslogd: [origin software="rsyslogd" swVersion="7.4.4" x-pid="11006" x-info=""] start
Nov 17 12:20:48 asusi5 rsyslogd: rsyslogd's groupid changed to 103
Nov 17 12:20:48 asusi5 rsyslogd: rsyslogd's userid changed to 101
Nov 17 12:20:48 asusi5 rsyslogd-2039: Could no open output pipe '/dev/xconsole': No such file or directory [try ]
Nov 17 12:20:48 asusi5 rsyslogd: [origin software="rsyslogd" swVersion="7.4.4" x-pid="11006" x-info=""] exiting on signal 15.

So I duly visited the link mentioned, which gave instructions for disabling /dev/xconsole. This made me nervous, and further research suggested that it was indeed the correct fix for headless servers, but possibly not for a desktop PC. Instead I used the following fix:

# mknod -m 640 /dev/xconsole c 1 3
# chown syslog:adm /dev/xconsole
# service rsyslog restart

USB Powersaving

Now at least my syslog was working correctly. Unfortunately unplugging and plugging in the USB drive was still not writing anything to the logs! When I plugged in the drive the blue light would flash repeatedly and then switch off. I would have believed that the drive had a fault, were it not for the fact that rebooting the PC solves the problem.

Thinking that perhaps this problem was USB3-related, I decided to Google for "USB3 drive not recognised", which found this post. Except that post was only relevant when operating on battery power, whereas I am using a plugged-in desktop PC. Clearly that page could not be relevant? Except that in my notification area there is a battery icon, relating to my Uninterruptible Power Supply. But surely Ubuntu couldn't be treating my UPS as if it were a battery? Could it?

In order to find out the power status of my USB devices I followed the suggestion of typing:

# grep . /sys/bus/usb/devices/*/power/control

Most were flagged as "on", but a number of devices were flagged as "auto". I decided to try switching them all to "on" to see if that made any difference:

# for F in /sys/bus/usb/devices/*/power/control; do echo on >"${F}"; done
# grep . /sys/bus/usb/devices/*/power/control

Now all the devices were showing as on. Time to try out the drive again - I unplugged it and plugged it back in again. This time the power light flashed repeatedly and then went solid.
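As an aside, anything echoed into power/control only lasts until the next reboot; if this did turn out to be the culprit, a udev rule along these lines (a sketch, not something I have tested) would make the setting permanent:

# cat > /etc/udev/rules.d/50-usb-power.rules <<'EOF'
ACTION=="add", SUBSYSTEM=="usb", TEST=="power/control", ATTR{power/control}="on"
EOF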

Unfortunately the drive was still not mounted, but at least it was alive now. What next? Ah yes, I should check the logs to see if they have any new messages, now that the drive is powered and my logs are working:

# dmesg | tail
[15243.369812] usb 4-3: reset SuperSpeed USB device number 2 using xhci_hcd
[15243.616871] xhci_hcd 0000:00:14.0: xHCI xhci_drop_endpoint called with disabled ep ffff880213c71800

Faulty USB-SATA Bridge

At last I have something to go on. I searched for that error message, which took me to this page, which suggested:

Reason is faulthy usb-sata bridge ASM1051 in USB3 drives. It has nothing to do with motherboards. See this. Workaround was made to kernel 3.17-rc5. Now those disks works. Some perfectly and some works but data transfer is not smooth.

Could that be my problem? I checked my current Kernel:

# uname -r
3.13.0-68-generic

So yes, it could be. As I cannot read the device yet, I cannot check whether ASM1051 is present in my USB3 drive.

Having uninstalled 34 obsolete kernels and a further 8 newer kernels that weren't in use, I was in a position to install the latest Kernel from Vivid with:

# apt install linux-generic-lts-vivid

The problem of course is that in order to boot onto the newer Kernel I must reboot, thereby fixing the problem anyway.

And finally

It would be easy to portray this as clear evidence of the difficulty of running Linux; certainly Samsung will have thoroughly tested their drive on Windows and Mac operating systems. That said, such problems are not unheard of on the more popular operating systems, and debugging them there is, I would argue, far harder than the logical steps above.

Having rebooted, as expected, the drive worked. I tried running lshw to see if ASM1051 was present, but I could not find it. Of course upgrading the Kernel could have fixed the problem anyway.

For the present I will labour under the comforting belief that the problem is fixed, and will update this post either when the problem reoccurs or in a week's time if all is well!

Categories: LUG Community Blogs

Wordpress Comments Migration to Disqus

Mon, 16/11/2015 - 21:00

I assumed that, when I migrated from Wordpress to Jekyll, one of the casualties would be the old Wordpress comments. I had decided to use Disqus for comments on Jekyll, partly because it is the "new thing" and partly because it is incredibly easy to set up. But I never for a moment considered that migrating the old comments would be a possibility.

In fact, not only is it possible, but it is almost trivially easy.

I did encounter a few issues, as I had foolishly renamed some pages to more logical names, thereby not only breaking all Google and other inbound links, but also breaking the link with Disqus comments. That was a very bad idea, and as a consequence I spent quite a while renaming files back again; other than that it all worked flawlessly. If you are going to rename pages, it is probably best to do it in Wordpress before migrating to Jekyll.

Well done Disqus for making something that you would expect to be difficult easy. And thank you Wordpress for providing a decent export routine, without which my data would have been locked away.

I look forward to your comments!


Categories: LUG Community Blogs

Sleeping Better with Linux

Sun, 15/11/2015 - 19:28

I was reading Phones need 'bed mode' to protect sleep on the BBC, which argues that computer displays should have a blue-light filter to help people sleep properly at night. As I am frequently working on my laptop until late in the evening, I felt that I should investigate further. Currently I do turn down the brightness, but it seems that is not enough.

The article linked to a program called f.lux, which is free for Linux. I also came across a Linux Magazine article, Avoiding Eye Strain.


Getting this working on my laptop was trivial:

sudo add-apt-repository ppa:kilian/f.lux
sudo apt-get update
sudo apt-get install fluxgui

Finally I ran the program and set it to launch on start-up. I entered latitude as 51.2, which is close enough I believe.


The software feels fairly basic, whether this is the same for Windows and Mac I don't know. That said the applet seems to run perfectly in the Ubuntu notification area.

It is very noticeable that the display is more muted and much more comfortable to view in the evening. I don't yet know whether this will work well during the day, nor whether it will improve my sleeping. But the logic behind it seems sound and there is no reason why it shouldn't help.

Categories: LUG Community Blogs

Reasons For Migrating from Wordpress to Jekyll

Sun, 15/11/2015 - 00:00

Following my recent announcement, I thought I would give some of my reasons for the move and some early impressions of using Jekyll.

What is Jekyll?

Jekyll is a Markdown-based website and blogging platform, written in Ruby. The principle is simple: you write Markdown text files and they are automatically converted to static HTML webpages.

What is Markdown?

I am assuming that most of my audience have at least a passing knowledge of Markdown, but basically it is a very clean, virtually syntax-free way of writing text files, so that they can be easily converted into different formats. The key to Markdown is the conversion utility, and I currently use Pandoc. I write the file once, and then I can convert it into whatever format I want:

  • PDF: pandoc -o sample.pdf sample.markdown
  • Word: pandoc -o sample.docx sample.markdown
  • HTML: pandoc -o sample.html sample.markdown

I would imagine most people start using Markdown so that they can continue to use their favourite text editor - Vim or Emacs. At work I have found myself using it in preference to a word-processor: I have written a simple md2pdf perl script, so that in vim I can simply type :md2pdf % to have my document saved as a PDF. And the PDFs that Pandoc produces are beautiful: headings and sub-headings are automatically converted into PDF bookmarks, giving your documents an effortless professionalism.
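The core of such a wrapper fits in a few lines of shell (a minimal sketch with pandoc doing the work, rather than my actual Perl script):

#!/bin/sh
# md2pdf - render a Markdown file to a PDF alongside it
set -e
pandoc -o "${1%.*}.pdf" "$1"

This can then be called from Vim with :!md2pdf %, or wrapped in a user command.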

For complicated documents I sometimes start in Markdown and then move to LaTeX, but increasingly I am finding myself able to do virtually everything in Markdown, including simple images and hyperlinks. But you also have the option of extending plain markdown with HTML tags.

So in simplest terms Jekyll is just an automated way of creating Markdown files and converting them to HTML.

But why change from Wordpress?

Wordpress has been great for me: it's relatively simple, has great statistical tools, a built-in commenting system and much more besides. So why leave all that behind for something which is fundamentally so basic?

Benefits of Jekyll
  1. Text Editor: Once again the desire to use Vim was probably the key motivation.
  2. Github Pages: The fact that Jekyll could be used with the free Github Pages was another.
  3. Command Line: The ability to use grep, sed and perl and all the other command line goodies makes for an incredibly versatile system.
  4. Version Control: To have the whole site under version control.

I cannot tell you how wonderful it is to grep -l searchtext *.markdown | vim - and be able to edit each matching file in Vim.

Bootpolish Blog

There was another reason too, which was that I still had an old blog at, which I wanted to close down. I could have merged it into my Wordpress blog, but I thought it would be easier to transfer it to Jekyll. To be honest I can't say that it was particularly easy, but thankfully it is now done.

The Migration Process

I followed the GitHub Instructions for Jekyll. I used rsync to copy the underlying text files from into the _drafts folder, before using bash for loops to auto-process each into a Jekyll markdown post. I used the Wordpress Importer to transfer my Wordpress blog. The importer did not work particularly well, so I ended up editing each file in turn.

I found there was some customisation required:

  1. Categories: By default Jekyll has no Category pages, for example:
  2. Tags: By default Jekyll has no Tag pages, for example:
  3. Wordpress RSS: I wanted to maintain the existing feed locations, which required creation of various additional feeds.
  4. Tag Cloud: By default Jekyll has no tag cloud functionality, which I believe is crucial to a blog.
  5. Site Search: By default Jekyll has no site search. There are plug-ins, but these are not compatible with GitHub pages. For now I have used Google Custom Search, but it has not yet indexed the entire site.

I have written a script to build all the tags and categories, which is working well. I would like to integrate this into the git commit command somehow, so that I don't forget to run it!
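One option would be a pre-commit hook along these lines (a sketch: build-tags and the output folders are hypothetical names standing in for my actual script):

$ cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# regenerate tag and category pages; abort the commit if that fails
./build-tags || exit 1
# stage whatever the script regenerated
git add tags/ categories/
EOF
$ chmod +x .git/hooks/pre-commit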

Any new categories will require additional RSS feed files to be created, simply by copying the feed template into the relevant category/feed folder.


This has been much more work than I would have liked. That said, I now have my Markdown files in a format under my control. I can move those files to any Jekyll provider, or indeed to any web provider and have them hosted.

In short, I am hoping that this is the last time I will move platforms!

Lastly, if you're unfamiliar with Markdown, you can view the Markdown version of this page.

Categories: LUG Community Blogs

On Jekyll

Sat, 14/11/2015 - 14:18

I've moved this here blog thing from Joomla to Jekyll.

I've not (yet) moved the site content over to this new site, as currently I'm unsure if any of the content I had written was really worth preserving, or if it was best to start anew.

The reasons for moving to Jekyll are simple and mostly obvious:

No requirement for a server-side scripting language or database

This should be pretty clear, but the main reason is that working with PHP and the popular PHP CMSes on a daily basis teaches you to be wary of them. This deployment mechanism means that the site itself cannot be hacked directly or used as an attack vector.

Of course, nothing is truly "hack-proof", so precautions still need to be taken, but it removes the vulnerabilities that a CMS like Wordpress would introduce.

Native Markdown Editing

Most CMSes are not designed for people like me, who use Markdown as their de-facto standard for formatting prose text. Many use an HTML WYSIWYG editor, which is great for most users, but ends up making editing less efficient and the output less elegant. It also means that the only format the content can be delivered in is HTML.

No dedicated editing application

Using Jekyll and a git-based deployment process means that deploying changes to the site is simple and easy, and I can do it anywhere thanks to GitHub's online editor. I only need to be logged into my GitHub account in order to make changes or write a new post.

Currently, I'm using a git hook to rebuild the site and publish the changes; this is triggered by a git push to my server.

This script clones the repo from github to a temporary directory, builds it to the public directory, then deletes the temporary copy.
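A minimal sketch of that hook (the repository URL and paths are placeholders, not my actual configuration):

#!/bin/sh
# post-receive: rebuild and publish the Jekyll site on each push
set -e
TMP=$(mktemp -d)
git clone "$TMP"
jekyll build --source "$TMP" --destination /var/www/public
rm -rf "$TMP"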

My next post will probably be about this deployment mechanism.

Warp Speed

Finally, this item actually returns to my first point: the lack of server-side programming on this site means that it can be delivered at break-neck speed. Even without any kind of CDN or HTTP accelerator like Varnish, a well-tuned nginx configuration and the absence of server-side scripting mean that the all-important TTFB is much lower.

I hope that these items above and the transition to Jekyll will give me cause to write better, more frequent posts here.

Categories: LUG Community Blogs

Migration from Wordpress to Jekyll

Thu, 12/11/2015 - 22:44

For the past few weeks I have been migrating from Wordpress to Jekyll. I have also been merging in my previous blog at

This process has proved to be much more work than I expected, but this evening it all came together and I finally pressed the button to transfer the DNS over to the Jekyll site, hosted by GitHub.

I have tried to maintain the URL structure, along with tags, categories and RSS feeds, but it can't be perfect and there will be breakage.

If you notice any problems please do comment below.

Thank you.

Chris Roberts

Categories: LUG Community Blogs

A taste of #MozFest

Mon, 09/11/2015 - 09:04

The campus venue where the magic happens.


Let’s be clear from the outset: there’s no word that adequately defines MozFest. The Mozilla Festival is, simply, crazy. Perhaps it’s more kindly described as chaotic? Possibly. A loosely-coupled set of talks, discussion groups, workshops and hackathons, roughly organised into allocated floors, feeds the strangely complementary hemispheres of work and relaxation.

Nothing can prepare you for the 9 floors of intensity.

How MozFest works

Starting from the seeming calm of Ravensbourne’s smart entrance, you stroll in, unaware of the soon-experienced confusion. A bewildering and befuddling set of expectations and realisations come and go in rapid succession. From the very first thought – “ok, I’m signed in – what now?”, to the second – “perhaps I need to go upstairs?”, third – “or do I? there’s no obvious signage, just a load of small notices”…. and so on, descending quickly but briefly into self-doubt before emerging victorious from the uneasy, childlike dependency you have on others’ goodwill.

Volunteers in #MozHelp t-shirts, I’m looking at you. Thanks.

The opening evening started this year with the Science Fair, which featured – in my experience – a set of exciting hardware and software projects which were all in some way web-enabled, or web-connected, or web-controlled. Think Internet of Things, but built by enthusiasts, tinkerers and hackers – the way it should be.

“Open Hardware” projects, interactive story-telling, video games and robots being controlled by the orientation of the smartphone (by virtue of its gyroscopic capability).. the demonstration of genius and creativity is not even limited by the hardware available. If it didn’t already exist, it got designed and built.

An Open Web, for Free Society

A multitude of social and policy-driven themes permeated MozFest

As made clear from the opening keynotes on Saturday morning, MozFest is not a place for debate. Don’t take this as a bad thing. The intention is simply to help communicate ideas, as opposed to getting bogged down in the mire of detail. “Free” vs “Open”? Not here. The advice given was to use one’s ears much more than one’s mouth, and it’s sound advice – no pun intended. I have generally been considered a good listener, so I felt at home not having to “prove” anything by making a point. There was no point.

Categories: LUG Community Blogs


Jekyll and WordPress: how I learned to stop worrying

Wed, 28/10/2015 - 16:01

I had been cultivating a fascination with Jekyll for blogging for a short while. It looked oh so clean, and minimalist, and sleek. It has its fans, for sure, and I am one of them.

If I were starting my blog today, I would almost certainly consider using Jekyll for it, rather than WordPress.

WordPress: better the devil?

But, I am not. Back in 2007 (can it really be so long ago?!), when I started blogging, I didn’t give much thought to my requirements eight years down the line. And the funny thing is, they have hardly changed.

Org2Blog is everything I need from blogging. It’s quick, because I can compose my text in Emacs, and also supply my category and tag information directly too.

When saving the post in Emacs, I can save a local copy using the same date-title-based file name schema that Jekyll would expect (e.g.

Further benefits to Emacs/WordPress duality

Emacs Rocks.

As indicated by the previous filename example, blogs can be saved locally on my hard disk in Org-mode format, allowing me the option later on to convert everything for a Jekyll-based future. In other words, making the decision to hard-switch from one system or another need not be rushed and can, in fact, be assessed based on technical need.

Another “turn-off” from Jekyll is that, despite various attempts to make it easy to migrate WordPress posts, I found the process awkward and the documentation confusing. There is more than one way to skin this cat.
For me, Emacs provides such a comfortable environment using Org2Blog that it’s really hard to justify the alternative approaches of org-jekyll or Org+Jekyll.

Disadvantages to using WordPress

Well, it’s not elitist

Categories: LUG Community Blogs

Bac Posture Stand – Laptop Standing Desk

Thu, 15/10/2015 - 12:11

Being tall (6’2″), sedentary and over 40, I have succumbed to the inevitable back problems. Listening to Jono’s review of the LIFT Standing Desk on Bad Voltage reminded me that I’ve been meaning to try out working standing up, at least for part of the time. I haven’t gone for the LIFT, as I wasn’t able to find it on Amazon UK, but I have purchased the Bac Posture Stand, better known on Amazon as Portable Folding Notebook or Laptop Table – Desk – Tray – Stand – (Black). Hmm catchy.

It appears to be well made, although it does take a little setting up. There is no fan cooling, but there are holes for natural venting. One pleasant surprise was that it does include a mouse stand, which is not shown in the Amazon picture. There was a cheaper alternative, but the reviews led me to believe that this one might work out better. As I am not planning to buy both – I will never know.

It is early days and I don’t yet have a standing mat for home, which I understand from that podcast is absolutely essential, but so far I am pleased with my purchase.

I have also invested in a VARIDESK Pro Plus 48 – Height Adjustable Standing Desk for work, along with a VARIDESK – Standing Desk Floor Mat. I will review these in due course.

Categories: LUG Community Blogs
