Planet HantsLUG

Planet HantsLUG - http://hantslug.org.uk/planet/

Debian Bits: New Debian Developers and Maintainers (July and August 2015)

Tue, 01/09/2015 - 12:45

The following contributors got their Debian Developer accounts in the last two months:

  • Gianfranco Costamagna (locutusofborg)
  • Graham Inggs (ginggs)
  • Ximin Luo (infinity0)
  • Christian Kastner (ckk)
  • Tianon Gravi (tianon)
  • Iain R. Learmonth (irl)
  • Laura Arjona Reina (larjona)

The following contributors were added as Debian Maintainers in the last two months:

  • Senthil Kumaran
  • Riley Baird
  • Robie Basak
  • Alex Muntada
  • Johan Van de Wauw
  • Benjamin Barenblat
  • Paul Novotny
  • Jose Luis Rivero
  • Chris Knadle
  • Lennart Weller

Congratulations!

Categories: LUG Community Blogs

Andy Smith: Scrobbling to Last.fm from D-Bus

Sun, 23/08/2015 - 12:50

Yesterday afternoon I noticed that my music player, Banshee, had not been scrobbling to my Last.fm for a few weeks. Last.fm seem to be in the middle of reorganising their site but that shouldn’t affect their API (at least not for scrobbling). However, it seems that it has upset Banshee so no more scrobbling for me.

Banshee has a number of deficiencies but there’s a few things about it that I really do like, so I wasn’t relishing changing to a different player. It’s also written in Mono which doesn’t look like something I could learn very quickly.

I then noticed that Banshee has some sort of D-Bus interface where it writes things about what it is doing, such as the metadata for the currently-playing track… and so a hackish idea was formed.

Here’s a thing that listens to what Banshee is saying over D-Bus and submits the relevant “now playing” and scrobble to Last.fm. The first time you run it it asks you to authorise it and then it remembers that forever.

https://github.com/grifferz/dbus-scrobbler

I’ve never looked at D-Bus before so I’m probably doing it all very wrong, but it appears to work. Look, I have scrobbles again! And after all it would not be Linux on the desktop if it didn’t require some sort of lash-up that would make Heath Robinson cry his way to the nearest Apple store to beg a Genius to install iTunes, right?

Anyway it turns out that there is a standard for this remote control and introspection of media players, called MPRIS, and quite a few of them support it. Even Spotify, apparently. So it probably wouldn’t be hard to adapt this script to scrobble from loads of different things even if they don’t have scrobbling extensions themselves.
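If you're curious what listening to MPRIS over D-Bus looks like, here is a minimal Python sketch using the dbus-python and GLib bindings. It is only an illustration, not the script linked above: the Last.fm calls are left as a stub.

import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()

def on_properties_changed(interface, changed, invalidated):
    # MPRIS players emit PropertiesChanged with the new track Metadata
    metadata = changed.get("Metadata", {})
    if metadata:
        artist = ", ".join(metadata.get("xesam:artist", []))
        title = metadata.get("xesam:title", "")
        print("Now playing: %s - %s" % (artist, title))
        # ...call Last.fm track.updateNowPlaying / track.scrobble here...

bus.add_signal_receiver(on_properties_changed,
                        signal_name="PropertiesChanged",
                        dbus_interface="org.freedesktop.DBus.Properties",
                        path="/org/mpris/MediaPlayer2")

GLib.MainLoop().run()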

Categories: LUG Community Blogs

Debian Bits: Debian turns 22!

Sun, 16/08/2015 - 22:59

Sorry for posting so late, we're very busy at DebConf15!

Happy 22nd birthday Debian!

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Weight Plateau

Thu, 13/08/2015 - 22:09

After nearly 22 weeks of continuous and even weight loss I've hit my weight plateau and not changed my weight for over three weeks now.

There are three basic reasons for this:

  1. My energy intake now equals my energy use. As you lose weight your total metabolic demand falls, so the deliberate energy deficit gradually shrinks to nothing. This is normal.
  2. Calorie creep, over time it's easy to add a little extra to your diet, which means the energy deficit isn't as big as it should be. This is common.
  3. Laziness creep, over time it's easy to slow down and not stick to the exercise plan. This is also common.

The closer you are to your target weight, the more likely this is and the easier it is to give in and stay put. In my case all three are probably happening: my BMR has probably fallen by 168 kcal / 702 kJ, which is 400 g of milk or 30 g of almonds - not much, but if you eat a few extra nuts or an extra glass of milk, it adds up...

To correct this, I've made sure I don't eat too many nuts (they are good for me in moderation) and I've cut down on the milk in my porridge, substituting water. I've also trimmed my bread; good though it is, wheat has ~360 kcal per 100 g. I'll also try to push harder on the bike and walk faster...

I'm currently stuck under 74 kg, with about 8 kg to go...

Categories: LUG Community Blogs

Steve Kemp: Making an old android phone useful again

Thu, 13/08/2015 - 15:44

I've got an HTC Desire, running Android 2.2. It is old enough that installing applications such as those from my bank, etc., fails.

The process of upgrading the stock ROM/firmware seems to be:

  • Download an unsigned zip file, from a shady website/forum.
  • Boot the phone in recovery mode.
  • Wipe the phone / reset to default state.
  • Install the update, and hope it works.
  • Assume you're not running trojaned binaries.
  • Hope the thing still works.
  • Reboot into the new O/S.

All in all .. not ideal .. in any sense.

I wish there were a more "official" way to go. For the moment I guess I'll ignore the problem for another year. My Nokia phone does look pretty good ..

Categories: LUG Community Blogs

Steve Kemp: A brief look at the weed file store

Mon, 10/08/2015 - 14:29

Now that I've got a citizen-ID, a pair of Finnish bank accounts, and have enrolled in a Finnish language-course (due to start next month) I guess I can go back to looking at object stores, and replicated filesystems.

To recap: my current favourite, despite the lack of documentation, is the Camlistore project, which is written in Go.

Looking around there are lots of interesting projects being written in Go, and the next one I looked at is seaweedfs, which despite its name is not a filesystem at all, but an object store accessed via HTTP.

Installation is simple, if you have a working go-lang environment:

go get github.com/chrislusf/seaweedfs/go/weed

Once that completes you'll find you have the executable bin/weed placed beneath your $GOPATH. This single binary is used for everything, though it is worth noting that there are distinct roles:

  • A key concept in weed is "volumes". Volumes are areas to which files are written. Volumes may be replicated, and this replication is decided on a per-volume basis, rather than a per-upload one.
  • Clients talk to a master. The master notices when volumes spring into existence, or go away. For high-availability you can run multiple masters, and they elect the real master (via RAFT).

In our demo we'll have three hosts: one, the master, and two and three, which are storage nodes. First of all we start the master:

root@one:~# mkdir /node.info
root@one:~# weed master -mdir /node.info -defaultReplication=001

Then on the storage nodes we start them up:

root@two:~# mkdir /data
root@two:~# weed volume -dir=/data -max=1 -mserver=one.our.domain:9333

Then the second storage-node:

root@three:~# mkdir /data
root@three:~# weed volume -dir=/data -max=1 -mserver=one.our.domain:9333

At this point we have a master to which we'll talk (on port :9333), and a pair of storage-nodes which will accept commands over :8080. We've configured replication such that all uploads will go to both volumes. (The -max=1 configuration ensures that each volume-store will only create one volume each. This is in the interest of simplicity.)

Uploading content works in two phases:

  • First tell the master you wish to upload something, to gain an ID in response.
  • Then using the upload-ID actually upload the object.

We'll do that like so:

laptop ~ $ curl -X POST http://one.our.domain:9333/dir/assign
{"fid":"1,06c3add5c3","url":"192.168.1.100:8080","publicUrl":"192.168.1.101:8080","count":1}

client ~ $ curl -X PUT -F file=@/etc/passwd http://192.168.1.101:8080/1,06c3add5c3
{"name":"passwd","size":2137}

In the first command we call /dir/assign, and receive a JSON response which contains the IPs/ports of the storage-nodes, along with a "file ID", or fid. In the second command we pick one of the hosts at random (which are the IPs of our storage nodes) and make the upload using the given ID.

If the upload succeeds it will be written to both volumes, which we can see directly by running strings on the files beneath /data on the two nodes.

The next part is retrieving a file by ID, and we can do that by asking the master server where that ID lives:

client ~ $ curl http://one.our.domain:9333/dir/lookup?volumeId=1,06c3add5c3
{"volumeId":"1","locations":[
  {"url":"192.168.1.100:8080","publicUrl":"192.168.1.100:8080"},
  {"url":"192.168.1.101:8080","publicUrl":"192.168.1.101:8080"}
]}

Or, if we prefer we could just fetch via the master - it will issue a redirect to one of the volumes that contains the file:

client ~$ curl http://one.our.domain:9333/1,06c3add5c3
<a href="http://192.168.1.100:8080/1,06c3add5c3">Moved Permanently</a>

If you follow redirections then it'll download, as you'd expect:

client ~ $ curl -L http://one.our.domain:9333/1,06c3add5c3
root:x:0:0:root:/root:/bin/bash
..

That's about all you need to know to decide if this is for you - in short uploads require two requests, one to claim an identifier, and one to use it. Downloads require that your storage-volumes be publicly accessible, and will probably require a proxy of some kind to make them visible on :80, or :443.

A single "weed volume .." process, which runs as a volume-server can support multiple volumes, which are created on-demand, but I've explicitly preferred to limit them here. I'm not 100% sure yet whether it's a good idea to allow creation of multiple volumes or not. There are space implications, and you need to read about replication before you go too far down the rabbit-hole. There is the notion of "data centres", and "racks", such that you can pretend different IPs are different locations and ensure that data is replicated across them, or only within-them, but these choices will depend on your needs.

Writing a thin middleware/shim to allow uploads to be atomic seems simple enough, and there are options to allow exporting the data from the volumes as .tar files, so I have no undue worries about data-storage.
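To make that concrete, here's a rough Python sketch of such a shim, assuming the "requests" library and the master address from the demo above; it isn't anything seaweedfs ships, just an illustration of the two-phase upload wrapped in one call.

import requests

MASTER = "http://one.our.domain:9333"  # master address from the demo above

def upload(path):
    # Phase one: ask the master for a file ID and a volume to write to.
    assign = requests.post(MASTER + "/dir/assign").json()
    fid, volume = assign["fid"], assign["url"]
    # Phase two: PUT the file to that volume under the assigned ID.
    with open(path, "rb") as fh:
        resp = requests.put("http://%s/%s" % (volume, fid),
                            files={"file": fh}).json()
    return fid, resp.get("size")

def fetch(fid):
    # Ask the master and follow its redirect to a volume holding the file.
    return requests.get(MASTER + "/" + fid, allow_redirects=True).content

fid, size = upload("/etc/passwd")
print(fid, size, len(fetch(fid)))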

This system seems reliable, and it seems well designed, but people keep saying "I'm not using it in production because .. nobody else is" which is an unfortunate problem to have.

Anyway, I like it. The biggest omission is really authentication. All files are public if you know their IDs, but at least they're not sequential ..

Categories: LUG Community Blogs

Andy Smith: SSDs and Linux Native Command Queuing

Sun, 09/08/2015 - 08:10
Native Command Queueing

Native Command Queuing (NCQ) is an extension of the Serial ATA protocol that allows multiple requests to be sent to a drive, allowing the drive to order them in a way it considers optimal.

This is very handy for rotational media like conventional hard drives, because they have to move the head all over to do random IO, so in theory if they are allowed to optimise ordering then they may be able to do a better job of it. If the drive supports NCQ then it will advertise this fact to the operating system and Linux by default will enable it.

Queue depth

The maximum depth of the queue in SATA is 31 for practical purposes, and so if the drive supports NCQ then Linux will usually set the depth to 31. You can change the depth by writing a number between 1 and 31 to /sys/block/<device>/device/queue_depth. Writing 1 to the file effectively disables NCQ for that device.
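As a tiny illustration, here's one way to read and change the depth from a script - a Python sketch that needs root, with "sda" as a placeholder device name:

from pathlib import Path

def queue_depth_file(dev):
    # sysfs knob controlling the NCQ queue depth for a SATA device
    return Path("/sys/block/%s/device/queue_depth" % dev)

print("current depth:", queue_depth_file("sda").read_text().strip())
queue_depth_file("sda").write_text("1\n")   # a depth of 1 effectively disables NCQ
queue_depth_file("sda").write_text("31\n")  # back to the usual maximum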

NCQ and SSDs

So what about SSDs? They aren’t rotational media; any access is in theory the same as any other access, so no need to optimally order the commands, right?

The sad fact is, many SSDs even today have incompatibilities with SATA drivers and chipsets such that NCQ does not reliably work. There’s advice all over the place that NCQ can be disabled with no ill effect, because supposedly SSDs do not benefit from it. Some posts even go as far as to suggest that NCQ might be detrimental to performance with SSDs.

Well, let’s see what fio has to say about that.

The setup
  • Two Intel DC s3610 1.6TB SSDs in an MD RAID-10 on Debian 8.1.
  • noop IO scheduler.
  • fio operating on a 4GiB test file that is on an ext4 filesystem backed by LVM.
  • fio set to do a 70/30% mix of read vs write operations with 128 simultaneous IO operations in flight.

The goal of this is to simulate a busy highly parallel server load, such as you might see with a database.

The fio command line looks like this:

fio --randrepeat=1 \
    --ioengine=libaio \
    --direct=1 \
    --gtod_reduce=1 \
    --name=ncq \
    --filename=test \
    --bs=4k \
    --iodepth=128 \
    --size=4G \
    --readwrite=randrw \
    --rwmixread=70

Expected output will be something like this:

ncq: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=128
fio-2.1.11
Starting 1 process
Jobs: 1 (f=1): [m(1)] [100.0% done] [50805KB/21546KB/0KB /s] [12.8K/5386/0 iops] [eta 00m:00s]
ncq1: (groupid=0, jobs=1): err= 0: pid=11272: Sun Aug 9 06:29:33 2015
  read : io=2867.6MB, bw=44949KB/s, iops=11237, runt= 65327msec
  write: io=1228.5MB, bw=19256KB/s, iops=4813, runt= 65327msec
  cpu          : usr=4.39%, sys=25.20%, ctx=732814, majf=0, minf=6
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued    : total=r=734099/w=314477/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=128

Run status group 0 (all jobs):
   READ: io=2867.6MB, aggrb=44949KB/s, minb=44949KB/s, maxb=44949KB/s, mint=65327msec, maxt=65327msec
  WRITE: io=1228.5MB, aggrb=19255KB/s, minb=19255KB/s, maxb=19255KB/s, mint=65327msec, maxt=65327msec

Disk stats (read/write):
  dm-0: ios=732755/313937, merge=0/0, ticks=4865644/3457248, in_queue=8323636, util=99.97%, aggrios=734101/314673, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
  md4: ios=734101/314673, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=364562/313849, aggrmerge=2519/1670, aggrticks=2422422/2049132, aggrin_queue=4471730, aggrutil=94.37%
  sda: ios=364664/313901, merge=2526/1618, ticks=2627716/2223944, in_queue=4852092, util=94.37%
  sdb: ios=364461/313797, merge=2513/1722, ticks=2217128/1874320, in_queue=4091368, util=91.68%

The figures we’re interested in are the iops= ones, in this case 11237 and 4813 for read and write respectively.

Results

Here’s how different NCQ queue depths affected things. Click the graph image for the full size version.

Conclusion

On this setup anything below a queue depth of about 8 is disastrous to performance. The aberration at a queue depth of 19 is interesting. This is actually repeatable. I have no explanation for it.

Don’t believe anyone who tells you that NCQ is unimportant for SSDs unless you’ve benchmarked that and proven it to yourself. Disabling NCQ on an Intel DC s3610 appears to reduce its performance to around 25% of what it would be with even a queue depth of 8. Modern SSDs, especially enterprise ones, have a parallel architecture that allows them to get multiple things done at once. They expect NCQ to be enabled.

It’s easy to guess why 8 might be the magic number for the DC s3610:

The top of the PCB has eight NAND emplacements and Intel’s proprietary eight-channel PC29AS21CB0 controller.

The newer NVMe devices are even more aggressive with this; while the SATA spec stops at one queue with a depth of 32, NVMe specifies up to 65k queues with a depth of up to 65k each! Modern SSDs are designed with this in mind.

Categories: LUG Community Blogs

Steve Kemp: The differences in Finland start at home.

Thu, 30/07/2015 - 10:09

So we're in Finland, and the differences start out immediately.

We're renting a flat, in building ten, on a street. You'd think "10 Streetname" was a single building, but no. It is a pair of buildings: 10A, and 10B.

Both of the buildings have 12 flats in them, with 10A having 1-12, and 10B having 13-24.

There's a keypad at the main entrance, which I assumed was to let you press a button and talk to the people inside "Hello I'm the postmaster", but no. There is no intercom system, instead you type in a magic number and the door opens.

The magic number? Sounds like you want to keep that secret, since it lets people into the common-area? No. Everybody has it. The postman, the cleaners, the DHL delivery man, and all the ex-tenants. We invited somebody over recently and gave it out in advance so that they could knock on our flat-door.

Talking of cleaners: In the UK I lived in a flat and once a fortnight somebody would come and sweep the stair-well, since we didn't ever agree to do it ourselves. Here somebody turns up every day, be it to cut the grass, polish the hand-rail, clean the glass on the front-door, or mop the floors of the common area. Sounds awesome. But they cut the grass, right outside our window, at 7:30AM. On the dot. (Or use a leaf-blower, or something equally noisy.)

All this communal-care is paid for by the building-association, of which all flat-owners own shares. Sounds like something we see in England, or even like America's idea of a Home-Owners-Association. (In Scotland you own your own flat, you don't own shares of an entity which owns the complete building. I guess there are pros and cons to both approaches.)

Moving onwards, other things are often the same, but the differences when you spot them are odd. I'm struggling to think of them right now; somebody woke me up by cutting our grass for the second time this week (!)

Anyway I'm registered now with the Finnish government, and have a citizen-number, which will be useful, I've got an appointment booked to register with the police - which is something I had to do as a foreigner within the first three months - and today I've got an appointment with a local bank so that I can have a euro-bank-account.

Happily I did find a gym to join, the owner came over one Sunday to give me a tiny-tour, and then gave me a list of other gyms to try if his wasn't good enough - which was a nice touch - I joined a couple of days later, his gym is awesome.

(I'm getting paid in UK pounds, to a UK bank, so right now I'm getting local money by transferring to my wife's account here, but I want to do that to my own, and open a shared account for paying for rent, electricity, internet, water, etc.)

My flat back home is still not rented, because the nice property management company lost my keys. Yeah, you can't make that up, can you? With a bit of luck the second set of keys I mailed them will arrive soon and the damn thing can be occupied; while I'm not relying on that income I do wish to have it.

Categories: LUG Community Blogs

Alan Pope: Easily port mobile HTML5 games to Ubuntu Phone

Tue, 28/07/2015 - 13:36

Article also available in Spanish at http://thinkonbytes.blogspot.co.uk/2015/07/migrar-facilmente-juegos-moviles-en.html thanks to Marcos Costales.

I really like playing games on my phone & tablet and wanted some more games to play on Ubuntu. With a little work it turns out it’s really pretty easy to ‘port’ games over to Ubuntu phone. I put the word ‘port’ in quotes simply because in some cases it’s not a tremendous amount of effort, so calling it a ‘port’ might make people think it’s more work than it is.

Update: A few people have asked why someone would want to even do this, and why not just bookmark a game in the browser. Sorry if that's not clear. With this method the game is entirely cached offline on the customer's phone. Having fully offline games is desirable in many situations, including when travelling or in a location with spotty Internet access. Not all games are fully offline of course; this method wouldn't help with a large online multi-player game like Clash of Clans, for example. It would be great for many other titles though. This method also makes use of application confinement on Ubuntu so the app/game cannot access anything outside of the game data directory.

I worked with sturmflut from the Ubuntu Insiders on this over a few evenings and weekends. He wrote it up in his post Panda Madness.

We had some fun porting a few games and I wanted to share what we did so others can do the same. We created a simple template on github which can be used as a starting point, but I wanted to explain the process and the issues I had, so others can port apps/games.

If you have any questions feel free to leave me a comment, or if you’d rather talk privately you can get in contact in other ways.

Proof of concept

To prove that we could easily port existing games, we licensed a couple of games from Code Canyon. This is a marketplace where developers can license their games either for other developers to learn from, build upon or redistribute as-is. I started with a little game called Don’t Crash which is an HTML5 game written using Construct 2. I could have licensed other games, and other marketplaces are also available, but this seemed like a good low-cost way for me to test out this process.

Side note: Construct 2 by Scirra is a popular, powerful, point-and-click Windows-only tool for developing cross-platform HTML5 apps and games. It’s used by a lot of indie game developers to create games for desktop browsers and mobile devices alike. In development is Construct 3 which aims to be backwards compatible, and available on Linux too.

Before I licensed Don’t Crash I checked it worked satisfactorily on Ubuntu phone using the live preview feature on Code Canyon. I was happy it worked, so I paid and received a download containing the ‘source’ Construct 2 files.

If you’re a developer with your own game, then you can of course skip the above step, because you’ve already got the code to port.

Porting to Ubuntu

The absolute minimum needed to port a game is a few text files and the directory containing the game code. Sometimes a couple of tweaks are needed for things like permissions and lock rotation, but mostly it Just Works(TM).

I’m using an Ubuntu machine for all the packaging and testing, but in this instance I needed a Windows machine to export out the game runtime using Construct 2. Your requirements may vary, but for Ubuntu if you don’t have one, you could install it in a VM like VMWare or VirtualBox, then add the SDK tools as detailed at developer.ubuntu.com.

This is the entire contents of the directory, with the game itself in the www/ folder.

alan@deep-thought:~/phablet/code/popey/licensed/html5_dontcrash⟫ ls -l
total 52
-rw-rw-r-- 1 alan alan   171 Jul 25 00:51 app.desktop
-rw-rw-r-- 1 alan alan   167 Jun  9 17:19 app.json
-rw-rw-r-- 1 alan alan 32826 May 19 19:01 icon.png
-rw-rw-r-- 1 alan alan   366 Jul 25 00:51 manifest.json
drwxrwxr-x 4 alan alan  4096 Jul 24 23:55 www

Creating the metadata

Manifest

This contains the basic details about your app like name, description, author, contact email and so on. Here's mine (called manifest.json) from the latest version of Don't Crash. Most of it should be fairly self-explanatory. You can simply replace each of the fields with your app details.

{ "description": "Don't Crash!", "framework": "ubuntu-sdk-14.10-html", "hooks": { "dontcrash": { "apparmor": "app.json", "desktop": "app.desktop" } }, "maintainer": "Alan Pope ", "name": "dontcrash.popey", "title": "Don't Crash!", "version": "0.22" }

Note: “popey” is my developer namespace in the store, you’ll need to specify your namespace which you configure in your account page on the developer portal.

Security profile

Named app.json, this details what permissions my app needs in order to run:-

{ "template": "ubuntu-webapp", "policy_groups": [ "networking", "audio", "video", "webview" ], "policy_version": 1.2 } Desktop file

This defines how the app is launched, what the icon filename is, and some other details:-

[Desktop Entry]
Name=Don't Crash
Comment=Avoid the other cars
Exec=webapp-container $@ www/index.html
Terminal=false
Type=Application
X-Ubuntu-Touch=true
Icon=./icon.png

Again, change the Name and Comment fields, and you’re mostly done here.

Building a click package

With those files created, and an icon.png thrown in, I can now build my click package for uploading to the store. Here’s that process in its entirety:-

alan@deep-thought:~/phablet/code/popey/licensed⟫ click build html5_dontcrash/
Now executing: click-review ./dontcrash.popey_0.22_all.click
./dontcrash.popey_0.22_all.click: pass
Successfully built package in './dontcrash.popey_0.22_all.click'.

Which on my laptop took about a second.

Note the “pass” is output from the click-review tool which sanity checks click packages immediately after building, to make sure there’s no errors likely to cause it to be rejected from the store.

Testing on an Ubuntu device

Testing the click package on a device is pretty easy. It’s just a case of copying the click package over from my Ubuntu machine via a USB cable using adb, then installing it.

adb push dontcrash.popey_0.22_all.click /tmp
adb shell pkcon install-local --allow-untrusted /tmp/dontcrash.popey_0.22_all.click

Switch to the app scope and pull down to refresh, tap the icon and play the game.

Success!

Tweaking the app

At this point for some of the games I noticed some issues which I’ll highlight here in case others also have them:-

Local loading of files

Construct 2 moans that "Exported games won't work until you upload them. (When running on the file:/// protocol, browsers block many features from working for security reasons.)" in a javascript popup and the game doesn't start. I just removed that chunk of js which does the check from the index.html and the game works fine in our browser.

Device orientation

With the most recent Over The Air (OTA) update of Ubuntu we enabled device orientation everywhere which means some games can rotate and become unplayable. We can lock games to be portrait or landscape in the desktop file (created above) by simply adding this line:-

X-Ubuntu-Supported-Orientations=portrait

Obviously changing “portrait” to “landscape” if your game is horizontally played. For Don’t Crash I didn’t do this because the developer had coded orientation detection in the game, and tells the player to rotate the device when it’s the wrong way round.

Twitter links

Some games we ported have Twitter links in the game so players can tweet their score. Unfortunately the mobile web version of Twitter doesn’t support intents so you can’t have a link which contains the content “Check out my score in Don’t Crash” embedded in it for example. So I just removed the Twitter links for now.

Cookies

Our browser doesn’t support locally served cookies. Some games use this. For Heroine Dusk I ported from cookies to Local Storage which worked fine.

Uploading to the store

Uploading click packages to the Ubuntu store is fast and easy. Simply visit myapps.developer.ubuntu.com/dev/click-apps/, sign up/in, click “New Application” and follow the upload steps.

That’s it! I look forward to seeing some more games in the store soon. Patches also welcome to the template on github.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: GCN/Hannah Grant Energy Bars

Sat, 25/07/2015 - 12:58

Today I tried to make some GCN/Hannah Grant energy bars. I first had to convert from silly cups into sensible units*, and we were missing pumpkin seeds but we had everything else.

  • 4 ripe bananas - about 340 g. Blended to a fine mush
  • 200 g rolled oats
  • 100 g dried fruit - we used raisins
  • 60 g linseed/flax seeds - ours were golden
  • 60 g sunflower seeds - ours were kernels only
  • 60 g almonds - chopped
  • 60 g pecans - chopped, we also had some cashews in this mix
  • cinnamon - I substituted nutmeg as my better half doesn't like cinnamon
  • salt - skipped as I'm on a low salt diet

Mix together, spread in a baking tray - ours wasn't deep enough, it should be 2 - 3 cm thick - and bake at 170°C for 20 to 25 minutes until golden brown. Allow to cool for 10 minutes before cutting into energy bar shaped pieces. Store in an airtight container in the fridge.

Before baking it looks a bit like a home made lard & seed cake for garden birds, which in many respects it is, albeit with a lot less fat and a lot more expensive ingredients!

Mine is now cooling and we'll try it this afternoon!

* How do you measure a cup full of banana? Weights are far easier to use.

Categories: LUG Community Blogs

Steve Kemp: We're in Finland now.

Sat, 25/07/2015 - 03:00

So we've recently spent our first week together in Helsinki, Finland.

Mostly this has been stress-free, but there are always oddities about living in new places, and moving to Europe didn't minimize them.

For the moment I'll gloss over the differences and instead document the computer problem I had. Our previous shared-desktop system had a pair of drives configured using software RAID. I pulled one of the drives to use in a smaller-cased system (smaller so it was easier to ship).

Only one drive of a pair being present makes mdadm scream, via email, once per day, with reports of failure.

The output of cat /proc/mdstat looked like this:

md2 : active raid1 sdb6[0]      [LVM-storage-area]
      1903576896 blocks super 1.2 2 near-copies [2/1] [_U]

md1 : active raid10 sdb5[1]     [/root]
      48794112 blocks super 1.2 2 near-copies [2/1] [_U]

md0 : active raid1 sdb1[0]      [/boot]
      975296 blocks super 1.2 2 near-copies [2/1] [_U]

See the "_" there? That's the missing drive. I couldn't remove the drive as it wasn't present on-disk, so this failed:

mdadm --fail   /dev/md0 /dev/sda1
mdadm --remove /dev/md0 /dev/sda1
# repeat for md1, md2.

Similarly removing all "detached" drives failed, so the only thing to do was to mess around re-creating the arrays with a single drive:

lvchange -a n shelob-vol
mdadm --stop /dev/md2
mdadm --create /dev/md2 --level=1 --raid-devices=1 /dev/sdb6 --force
..

I did that on the LVM-storage area, and the /boot partition, but "/" is still to be updated. I'll use knoppix/similar to do it next week. That'll give me a "RAID" system which won't alert every day.

Thanks to the joys of re-creation the UUIDs of the devices changed, so /etc/mdadm/mdadm.conf needed updating. I realized that too late, when grub failed to show the menu, because it didn't find its own UUID. Handy recipe for the future:

set prefix=(md/0)/grub/
insmod linux
linux (md/0)/vmlinuz-3.16.0-0.bpo.4-amd64 root=/dev/md1
initrd (md/0)//boot/initrd.img-3.16.0-0.bpo.4-amd64
boot
Categories: LUG Community Blogs

Andy Smith: systemd on Debian, reading the persistent system logs as a user

Mon, 20/07/2015 - 11:48

All the documentation and guides I found say that to enable a persistent journal on Debian you just need to create /var/log/journal. It is true that once you create that directory you will get a persistent journal.

All the documentation and guides I found say that as long as you are in group adm (or sometimes they say group systemd-journal) it is possible to see all system logs by just typing journalctl, without having to run it as root. Having simply done mkdir /var/log/journal I can tell you that is not the case. All you will see is logs relating to your user.

The missing piece of info is contained in /usr/share/doc/systemd/README.Debian:


Enabling persistent logging in journald
=======================================

To enable persistent logging, create /var/log/journal and set up proper permissions:

install -d -g systemd-journal /var/log/journal
setfacl -R -nm g:adm:rx,d:g:adm:rx /var/log/journal

-- Tollef Fog Heen , Wed, 12 Oct 2011 08:43:50 +0200

Without the above you will not have permission to read the /var/log/journal/<machine-id>/system.journal file, and the ACL is necessary for journal files created in the future to also be readable.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Hybrid Diet

Sun, 19/07/2015 - 21:06

I'm sticking to my calorie restricted diet. Once I get to the correct target weight or waist size I'll stick to the diet but increase the calories to match my burn rate so I stay put at the right size.

My diet is a combination of three highly regarded diets: the DASH; the portfolio and the Mediterranean diet. They are basically the same for over ~75% of their components and ideas, so they are easy to combine. All three are good for reducing blood pressure, reducing serum LDL and if used in a calorie restricted manner then good for reducing body mass.

They all share the following obvious components: lots of fresh fruit and vegetables every day (5 portions of each); high fibre un-refined cereals; plenty of nuts and pulses; low levels of fat & sugar and not much processed food.

The DASH diet keeps the salt levels low or ultra low. Lower than the national RDA and either aligned with the WHO upper limit in the basic version, or lower still in the ultra low salt version. Caffeine and alcohol are also moderated to lower than normal levels.

The portfolio diet adds more plant protein in the form of soya and other legumes. It also adds known "cholesterol"-absorbing foods to the diet, like beta-glucans from wholemeal oats, sterols from fortified dairy products, and soya instead of some dairy products.

Finally from the Mediterranean diet there is oily fish, e.g. mackerel and sardines instead of beef.

I'm now less than 75 kg, and starting to fit into medium sized men's clothing rather than large, which is too loose, and XL, which fits like a tent. About 10 kg to go if you assume BMI, and about 1 trouser size if you accept waist:height ratio.

Categories: LUG Community Blogs

Debian Bits: Debian Perl Sprint 2015

Mon, 13/07/2015 - 20:00

The Debian Perl team had its first sprint in May and it was a success: 7 members met in Barcelona the weekend from May 22nd to May 24th to kick off the development around perl for Stretch and to work on QA tasks across the more than 3000 packages that the team maintains.

Even though the participants enjoyed the beautiful weather and the food very much, a good amount of work was also done:

  • 53 bugs were filed or worked on, 31 uploads were accepted.
  • The current practice of patch management (quilt) was discussed and possible alternatives were shown (git-debcherry and git-dpm).
  • Improvements were made in the Debian Perl Tools (dpt), and ways to keep track of upstream git history and tags were discussed.
  • Team's policies, documentation and recurring tasks were reviewed and updated.
  • Perl 5.22 release was prepared and src:perl plans for Stretch were discussed.
  • autopkgtest whitelists were reviewed, new packages added, and IRC notifications by KGB were discussed.
  • Outstanding migrations were reviewed.
  • Reproducibility issues with POD_MAN_DATE were commented on.

The full report was posted to the relevant Debian mailing lists.

The participants would like to thank the Computer Architecture Department of the Universitat Politècnica de Catalunya for hosting us, and all donors to the Debian project who helped to cover a large part of our expenses.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Body Mass

Thu, 02/07/2015 - 22:20

It's now been a few weeks since I've been on my new diet. Since April I've lost a further ~6 kg, currently weighing in at around 77 kg. The exception was my trip to Guernsey, which appears to have added 1 kg (all the raspberries and tomatoes...) instead of a 440 g loss, taking me about 1.5 kg off track. I've stopped using a weekly weigh-in, opting for a 7-day moving average which is less volatile and probably more meaningful.

My diet is basically what I had when I was too heavy but slightly tweaked:

  • Five servings of fruit per day
  • Five servings of vegetables per day
  • Five servings of nuts and/or beans per week
  • One serving of protein per day, lean or oily fish or vegetable based
  • Plenty of fibre, from the food rather than added as a supplement

I've had to exclude:

  • Salt - need to keep to the WHO limit, not the much higher UK limit
  • Caffeine - I don't drink coffee but I can only have very little or no tea, which I do like
  • Added sugars - no added sugars on/to anything
  • Added fat, especially saturated, trans-fats and Palm fat
  • Alcohol - it's got too much energy in it and there isn't space for it in the budget
  • No substitute foods, e.g. fat-free fat or synthetic sweeteners

The upshot is that with the limit on sugars, fats and salt most processed foods are now off limits, and will probably remain that way for ever. The occasional item is okay but it really has to be only occasionally.

The main addition to my diet is the nuts. I'm not really a fan of them, but apparently they are good for the LDL/HDL ratio and blood pressure. I've also added some xylitol based mints as they are minty (I have a sweet tooth) and apparently there is good evidence that they contribute to reducing dental decay.

I've also swapped some of my yoghurt to yoghurt with plant sterols in or yoghurt based on soya rather than milk. Both are proven to reduce your LDL levels in the blood, which is probably a good idea - though possibly not enough to make a clinically significant outcome.

Categories: LUG Community Blogs

Steve Kemp: My new fitness challenge

Thu, 02/07/2015 - 09:18

So recently I posted on twitter about a sudden gain in strength:

I have conquered pull-ups! On Saturday night I could do 1.5. Today I could do 11! (Chinups were always easy.) #fitness

— Steve Kemp (@Stolen_Souls) June 15, 2015

To put that more into context I should give a few more details. In the past I've been using an assisted pull-up machine, which offers a counterweight to make such things easier.

When I started the exercise I assumed I couldn't do it for real, so I used the machine and set it on 150lb. Over a few weeks I got as far as being able to use it with only 80lb. (Which means I was lifting my entire body-weight minus 80lb. With the assisted-pullup machine smaller numbers are best!)

One evening I was walking to the cinema with my wife and told her I thought I'd be getting close to doing one real pull-up soon, which sounds a little silly, but I guess is pretty common for random men who are 40 as I almost am. As it happens there was some climbing equipment nearby so I said "Here, see how close I am", and I proceeded to do 1.5 pull-ups. (The second one was bad, and didn't count, as I got 90% of the way "up".)

Having had that success I knew I could do "almost two", and I set a goal for the next gym visit: 3 x 3-pullups. I did that. Then I did two more for fun on the way out (couldn't quite manage a complete set.)

So that's the story of how I went from doing 1.5 pull-ups to doing 11 in less than a week. These days I can easily do 3x3, but struggle with more. It'll come, slowly.

So pull-up vs. chin-up? This just relates to which way you place your hands: palm facing you (chin-up) or palm away from you (pull-up).

There are some technical details here, but chin-ups are easier, and more bicep-centric.

Anyway too much writing. My next challenge is the one-armed pushup. However long it takes, and I think it will take a while, that's what I'm working toward.

Categories: LUG Community Blogs

Adam Trickett: Picasa Web: Guernsey 2015

Thu, 02/07/2015 - 08:00

Long weekend in Guernsey.

Location: Guernsey
Date: 2 Jul 2015
Number of Photos in Album: 82

View Album

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Guernsey Walking Holiday 2015 (Day 3+4)

Mon, 29/06/2015 - 23:55

Yesterday was our last full day on Guernsey as we return to the UK this afternoon. The forecast was good for the morning and not so good in the afternoon, so we decided to walk to the northern tip while it was nice and, if needed, take the bus back. There are more beaches and fewer crags on this section of coastline than on the south side.

The afternoon wasn't so nice, but it also wasn't too bad so we were still able to walk back to our hotel without getting cold or wet. We have now walked all the eastern seaboard of Guernsey from its southernmost point (I think) to its northernmost.

Today is our last day in Guernsey, and we have had a lovely break - I think we will come back but with our bikes and for more than just a flying visit.

As the ferry back to Blighty was in the afternoon, we had several hours to explore the castle that guards the port. It was a few quid to get in, but very interesting with several museums and lots to look at. We had a very nice lunch in the sun at the back of the castle in relative peace, with no pigeons, seagulls or tourists bothering us.

Back to work tomorrow!

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Guernsey Walking Holiday 2015 (Day 1+2)

Sat, 27/06/2015 - 21:49

Yesterday morning we awoke at silly o'clock to take the train to catch the ferry from Poole to Guernsey. The ferry was rather busy with people going to the Island Games in Jersey, but we got off at St. Peter Port. We walked up the hill to our B&B to discover there had been a booking error and they were actually full - so they took us to another hotel (an extra star) where we stayed instead.

The glorious weather we had for our crossing had mostly deserted us and it had become rather dull and flat. However the predicted rain didn't turn up so we were able to explore the town without getting wet and were able to find some food for dinner.

This morning was great; after our breakfast we went into town to explore further. Once we had bought lunch bits we took the bus towards the airport, getting off one stop shy, then we walked all along the southern coast back to St. Peter Port. The walking was easy and the views were beautiful - very reminiscent of the Brittany coast or Cornwall. More like the UK and less like France, they were a bit stingy with signs and it was a bit confusing in places - the French GR paint marking system is very simple and much easier to navigate with than the occasional sign!

When we made it back to town we had a look at La Valette Underground Military Museum, which was most fascinating, and packed with more stuff than you would imagine could fit in such a small place.

For dinner we decided to try eating out. La Creperie was strange, the staff appeared to be of Slavic origin, half the menu was not crepe or galette, but the galette was actually quite good though the crepe was only average. Definitely fusion food!

Categories: LUG Community Blogs

Debian Bits: Reproducible Builds get funded by the Core Infrastructure Initiative

Tue, 23/06/2015 - 13:00

The Core Infrastructure Initiative announced today that they will support two Debian Developers, Holger Levsen and Jérémy Bobbio, with $200,000 to advance their Debian work in reproducible builds and to collaborate more closely with other distributions such as Fedora, Ubuntu, OpenWrt to benefit from this effort.

The Core Infrastructure Initiative (CII) was established in 2014 to fortify the security of key open source projects. This initiative is funded by more than 20 companies and managed by The Linux Foundation.

The reproducible builds initiative aims to enable anyone to reproduce bit-by-bit identical binary packages from a given source, thus enabling anyone to independently verify that a binary matches the source code from which it was said to be derived. For example, this allows the users of Debian to rebuild packages and obtain exactly identical packages to the ones provided by the Debian repositories.

Categories: LUG Community Blogs