News aggregator

Meeting at "The Moon Under Water"

Wolverhampton LUG News - Mon, 14/04/2014 - 14:55


53-55 Lichfield St
Wolverhampton
West Midlands
WV1 1EQ

Eat, Drink and talk Linux

Event Date and Time:  Wed, 16/04/2014 - 19:30 - 23:00
Categories: LUG Community Blogs

Chris Lamb: Race report: Cambridge Duathlon 2014

Planet ALUG - Mon, 14/04/2014 - 13:59

(This is my first race of the 2014 season.)


I had entered this race in 2013 and found it was effective for focusing winter training. As triathlons do not typically start until May in the UK, scheduling earlier races can be motivating in the colder winter months.

I didn't have any clear goals for the race except to blow out the cobwebs and improve on my 2013 time. After considerable "long & slow" training in the off-season I couldn't set reasonable or reliable target times, but I did want to test some new equipment and strategies, especially race pacing with a power meter, but also a new wheelset, crankset and helmet.

Preparation was both accidentally and deliberately compromised: I did very little race-specific training as my season is based around an entirely different intensity of race, but compounding this I was confined to bed the weekend before.

Sleep was acceptable in the preceding days and I felt moderately fresh on race morning. Nutrition-wise, I had porridge and bread with jam for breakfast, a PowerGel before the race, 750ml of PowerBar Perform on the bike along with a "Hydro" PowerGel with caffeine at approximately 30km.

Run 1 (7.5km)

A few minutes before the start my race number belt—the only truly untested equipment that day—refused to tighten. However, I decided that once the race began I would either ignore it or even discard it, risking disqualification.

Despite letting everyone go up the road, my first km was still too fast, so I dialed down the effort, settled into a "10k" pace and began overtaking other runners. The Fen winds and the drag-strip uphill from 3km provided a bit of a pacing challenge for someone used to shelter and shorter hills, but I kept a metered effort through into transition.

Time
33:01 (4:24/km, T1: 00:47) — Last year: 37:47 (5:02/km)
Bike (40km)

Although my 2014 bike setup features a power meter, I had not yet had the chance to perform an FTP test outdoors, so I was unable to calculate a definitive target power for the bike leg. However, data from my road bike suggested I set a power ceiling of 250W on the longer hills.

This was extremely effective in avoiding going "into the red" and compromising the second run. This lends yet more weight to the idea that a power meter in multisport events is "almost like cheating".

I was not entirely comfortable with my bike position: not only were my thin sunglasses making me raise my head more than I needed to, but I also found myself creeping forward onto the nose of my saddle. This is sub-optimal, if only because I do not train in that position.

Overall, the bike was uneventful with the only memorable moment provided by a wasp that got stuck between my head and a helmet vent. Coming into transition I didn't feel like I had really pushed myself that hard—probably a good sign—but the time difference from last year's bike leg (1:16:11) was a little underwhelming.

Time
1:10:45 (T2: 00:58)
Run 2 (7.5km)

After leaving transition, my legs were extremely uncooperative and I had great difficulty in pacing myself in the first kilometer. Concentrating hard on reducing my cadence as well as using my rehearsed mental cue, I managed to settle down.

The following 4 kilometers were a mental struggle rather than a physical one, modulo having to force a few burps to ease some discomfort, possibly from drinking too much or too fast on the bike.

I had planned to "unload" as soon as I reached 6km but I didn't really have it in me. Whilst I am physiologically faster than last year, I suspect the lack of threshold-level running over the winter means the mental component required for digging deep will need some coaxing to return.

However, it is said that you have successfully paced a duathlon if the second run is faster than the first. On this criterion, this was a success, but it would have been a bonus to have felt completely drained at the end of the day, if only from a neo-Calvinist perspective.

Time
32:46 (4:22/km) / Last year: 38:10 (5:05/km)
Overall
Total time
2:18:19

A race that goes almost entirely to plan is a bit of a paradox: there's certainly satisfaction in setting goals and hitting them without issue, but it is the gratification of a slow-burning fire rather than the jubilation of a fireworks display.

However, it was nice to learn that I managed to finish 5th in my age group despite this race attracting an extremely strong field: as an indicator, the age-group athlete finishing immediately before me was seven minutes faster and the overall winner finished in 1:54:53 (!).

The race identified the following areas to work on:

  • Perform an FTP test on my time-trial bike outdoors to develop an optimum power plan.
  • Do a few more brick runs, at least to re-acclimatise to the feeling.
  • Schedule another bike fit.

Although not strictly race-related, I also need to find techniques to ensure transporting a bike on public transport is less stressful. (Full results & full 2014 race schedule)

Categories: LUG Community Blogs

Debian Bits: DPL election is over, Lucas Nussbaum re-elected

Planet HantsLUG - Mon, 14/04/2014 - 07:10

The Debian Project Leader election has concluded and the winner is Lucas Nussbaum. Of the 1003 developers eligible to vote, 401 cast votes using the Condorcet method.

More information about the result is available in the Debian Project Leader Elections 2014 page.

The new term for the project leader will start on April 17th and expire on April 17th 2015.

Categories: LUG Community Blogs

Chris Lamb: 2014 race schedule

Planet ALUG - Fri, 11/04/2014 - 23:58

«Swim 2.4 miles! Bike 112 miles! Run 26.2 miles! Brag for the rest of your life...»

In 2013, my training efforts were based around a "70.3"-distance race. In my second year in triathlon I will be targeting my first Ironman-distance event.

After some deliberation I decided on the Ironman event in Klagenfurt, Austria, not only because the location lends a certain tone to the occasion but because the course is suited to my relative strengths within the three disciplines.

I've made the following conscious changes to my race scheduling and selection this year:

  • Fewer races overall to allow for generous spacing between events, allowing more training, recovery and life.
  • No sprint-distance events as they do not provide enough enjoyment or appropriate training for the IM distance given their logistical outlay.
  • Preferring cycling over running time trials: performance in triathlon—paradoxically including your run performance—is primarily based around bike fitness.
  • Preferring smaller events over "mass-participation" ones.

Readers may observe that despite my primary race finishing with a marathon-distance run, I am not racing a standalone marathon in preparation. This is common practice, justified by the run-specific training leading up to a marathon and the recovery period afterwards compromising training overall.

For similar reasons, I have also chosen not to race a "70.3" distance event in 2014. Whether to do so is a more contentious issue than whether to run a marathon, but it resolved itself once I could not find an event that was suitably scheduled and I could convince myself that most of the benefits could be achieved through other means.

April 13th

Cambridge Duathlon (link)

Run: 7.5km, bike: 40km, run: 7.5km

May 11th

St Neots Olympic Tri (link)

Swim: 1,500m, bike: 40km, run: 10km

May 17th

ECCA 50-mile cycling time trial (link)

50 miles. Course: E2/50C

June 1st

Icknield RC 100-mile cycling time trial (link)

100 miles. Course: F1/100

June 15th

Cambridge Triathlon (link)

Swim: 1,500m, bike: 40km, run: 10km

June 29th

Ironman Austria (link)

Swim: 3.8km, bike: 180km, run: 42.2km

Categories: LUG Community Blogs

Steve Kemp: Putting the finishing touches to a nodejs library

Planet HantsLUG - Fri, 11/04/2014 - 15:14

For the past few years I've been running a simple service to block blog/comment-spam, which is (currently) implemented as a simple JSON API over HTTP, with a minimal core and all the logic in a series of plugins.

One obvious thing I wasn't doing until today was paying attention to the anchor-text used in hyperlinks, for example:

<a href="http://fdsf.example.com/">buy viagra</a>

Blocking on the anchor-text is less prone to false positives than blocking on keywords in the comment/message bodies.
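
The idea is easy to approximate from the shell; a rough sketch against the example above (requires GNU grep with PCRE support, and is of course far less robust than a real HTML parser):

# print only the anchor text of each link
$ echo '<a href="http://fdsf.example.com/">buy viagra</a>' \
    | grep -oP '(?<=>)[^<]+(?=</a>)'
buy viagra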

Unfortunately there seem to be no simple nodejs modules for extracting all the links, and their associated anchor text, from an arbitrary string. So I had to write such a module, but .. given how small it is there seems little point in sharing it. So I guess this is one of the reasons why there are often large gaps in the module ecosystem.

(Equally some modules are essentially applications; great that the authors shared, but virtually unusable, unless you 100% match their problem domain.)

I've written about this before when I had to construct, and publish, my own cidr-matching module.

Anyway, expect an upload soon; currently I "parse" HTML and BBCode. Possibly markdown to follow, since I have an interest in markdown.

Categories: LUG Community Blogs

David Goodwin: Installing Debian (Jessie) on an Intel NUC D54250WYK

Planet WolvesLUG - Fri, 11/04/2014 - 11:04

Product - D54250WYK / boxd54250wykh3 – via e.g. Ballicom or eBuyer

It has an Intel Core i5-4250U processor (a dual-core laptop part), supports up to 16GB of RAM and uses the integrated Intel HD Graphics 5000.

The box itself is really small – and silent. A laptop-sized (2.5″) hard disk fits inside it.

Issues:

  1. The BIOS needs updating before the OS can be installed (apparently); see Intel’s website – currently here – it’s just a case of downloading the .BIO file, putting it on a USB stick, pressing F7 on boot and following the prompts.
  2. Most Linux distros do not yet support the network card (Intel 559/I218-V) – I had to boot a Debian unstable netboot image (from here).

Good things:

  1. BTRFS root filesystem + booting etc just worked with Jessie.
  2. X configuration just works – even though it’s quite a new graphics chipset.
  3. Boot time is VERY fast – currently <5 seconds.
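
(For the curious, on a systemd-based install a boot-time claim like this is easy to verify; the output below is only illustrative of the format:)

$ systemd-analyze
Startup finished in 1.1s (kernel) + 3.4s (userspace) = 4.5s
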
Categories: LUG Community Blogs

Steve Kemp: A small assortment of content

Planet HantsLUG - Thu, 10/04/2014 - 16:34

Today I took down my KVM-host machine, rebooting it and restarting all of my guests. It has been a while since I'd done so and I was a little nervous; as it turned out, this nervousness was prophetic.

I'd forgotten to hardwire the use of proxy_arp so my guests were all broken when the systems came back online.
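
For reference, "hardwiring" it can be as simple as a persistent sysctl entry; a minimal sketch, assuming eth0 is the interface facing the guests:

# enable proxy_arp persistently, then apply it immediately
$ echo 'net.ipv4.conf.eth0.proxy_arp = 1' | sudo tee /etc/sysctl.d/50-proxy-arp.conf
$ sudo sysctl -p /etc/sysctl.d/50-proxy-arp.conf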

If you're curious this is what my incoming graph of email SPAM looks like:

I think it is obvious where the downtime occurred, right?

In other news I'm awaiting news from the system administration job I applied for here in Edinburgh, if that doesn't work out I'll need to hunt for another position..

Finally I've started hacking on my console based mail-client some more. It is a modal client which means you're always in one of three states/modes:

  • maildir - Viewing a list of maildir folders.
  • index - Viewing a list of messages.
  • message - Viewing a single message.

As a result of a lot of hacking there is now a fourth mode/state, "text-mode", which allows you to view arbitrary text: for example scrolling up and down a file on-disk, reading the manual, or viewing messages in interesting ways.

Support is still basic at the moment, but both of these work:

--
-- Show a single file
--
show_file_contents( "/etc/passwd" )
global_mode( "text" )

Or:

function x()
   txt = { "${colour:red}Steve",
           "${colour:blue}Kemp",
           "${bold}Has",
           "${underline}Definitely",
           "Made this work" }
   show_text( txt )
   global_mode( "text")
end

x()

There will be a new release within the week, I guess; I just need to wire up a few more primitives, write more of a manual, and close some more bugs.

Happy Thursday, or as we say in this house, Hyvää torstai!

Categories: LUG Community Blogs

School is in session

Planet SurreyLUG - Tue, 08/04/2014 - 15:59
I’ve had a lot of travel lately and I’ve had some time to think. When you take a step back and look at technology, you realise it’s always changing: always developing, updating and adding new features. It got me thinking: I should also keep updating my knowledge bank. It’s all well and good to talk […]
Categories: LUG Community Blogs

Mick Morgan: heartbleed

Planet ALUG - Tue, 08/04/2014 - 13:45

This is nasty. There is a remotely exploitable bug in openssl which leads to the leak of memory contents from the server to the client and from the client to the server. In practice this means that an attacker can read 64K chunks of memory on a vulnerable service, thus potentially exposing security critical information.

At 19.00 UTC yesterday, openssl bug CVE-2014-0160 was announced at heartbleed.com. I picked it up following a flurry of emails on the tor relays list this morning. Roger Dingledine posted a blog commentary on the bug to the tor list giving details about the likely impacts on Tor and Tor users.

Dingledine’s blog entry says:

Here are our first thoughts on what Tor components are affected:

  • Clients: Tor Browser shouldn’t be affected, since it uses libnss rather than openssl. But Tor clients could possibly be induced to send sensitive information like “what sites you visited in this session” to your entry guards. If you’re using TBB we’ll have new bundles out shortly; if you’re using your operating system’s Tor package you should get a new OpenSSL package and then be sure to manually restart your Tor.
  • Relays and bridges: Tor relays and bridges could maybe be made to leak their medium-term onion keys (rotated once a week), or their long-term relay identity keys. An attacker who has your relay identity key can publish a new relay descriptor indicating that you’re at a new location (not a particularly useful attack). An attacker who has your relay identity key, has your onion key, and can intercept traffic flows to your IP address can impersonate your relay (but remember that Tor’s multi-hop design means that attacking just one relay in the client’s path is not very useful). In any case, best practice would be to update your OpenSSL package, discard all the files in keys/ in your DataDirectory, and restart your Tor to generate new keys.
  • Hidden services: Tor hidden services might leak their long-term hidden service identity keys to their guard relays. Like the last big OpenSSL bug, this shouldn’t allow an attacker to identify the location of the hidden service, but an attacker who knows the hidden service identity key can impersonate the hidden service. Best practice would be to move to a new hidden-service address at your convenience.
  • Directory authorities: In addition to the keys listed in the “relays and bridges” section above, Tor directory authorities might leak their medium-term authority signing keys. Once you’ve updated your OpenSSL package, you should generate a new signing key. Long-term directory authority identity keys are offline so should not be affected (whew). More tricky is that clients have your relay identity key hard-coded, so please don’t rotate that yet. We’ll see how this unfolds and try to think of a good solution there.
  • Tails is still tracking Debian oldstable, so it should not be affected by this bug.
  • Orbot looks vulnerable; they have some new packages available for testing.
  • The webservers in the https://www.torproject.org/ rotation needed (and got) upgrades too. Maybe we’ll need to throw away our torproject SSL web cert and get a new one too.

But as he also says earlier on, “this bug affects way more programs than just Tor”. The openssl security advisory is remarkably sparse on details, saying only that “A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server.” So it is left to others to explain what this means in practice. The heartbleed announcement does just that. It opens by saying:

The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).

The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users, and to impersonate services and users.

During their investigations, the heartbleed researchers attacked their own SSL protected services from outside and found that they were:

able to steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication.

According to the heartbleed site, versions 1.0.1 through 1.0.1f (inclusive) of openssl are vulnerable. The earlier 0.9.8 branch is NOT vulnerable, nor is the 1.0.0 branch. Unfortunately, the bug was introduced to openssl in December 2011 and has been in real-world use in the 1.0.1 branch since 14 March 2012 – or just over two years ago. This means that a LOT of services will be affected and will need to be patched, and quickly.
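
Checking what a machine is running is straightforward, though distributions often backport fixes without changing the upstream version string, so the vendor's advisory is authoritative. A sketch (example output):

$ openssl version
OpenSSL 1.0.1e 11 Feb 2013
# on Debian/Ubuntu, also check the installed library package
$ dpkg -l libssl1.0.0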

Openssl, or its libraries, are used in a vast range of security critical services across the internet. As the heartbleed site notes:

OpenSSL is the most popular open source cryptographic library and TLS (transport layer security) implementation used to encrypt traffic on the Internet. Your popular social site, your company’s site, commerce site, hobby site, site you install software from or even sites run by your government might be using vulnerable OpenSSL. Many online services use TLS both to identify themselves to you and to protect your privacy and transactions. You might have networked appliances with logins secured by this buggy implementation of TLS. Furthermore, you might have client side software on your computer that could expose the data from your computer if you connect to compromised services.

That point about networked appliances is particularly worrying. In the last two years a lot of devices (routers, switches, firewalls etc) may have shipped with embedded services built against vulnerable versions of the openssl library.

In my case alone I now have to generate new X509 certificates for all my webservers, my mail (both SMTP and POP/IMAP) services, and my openVPN services. I will also need to look critically at my ssh implementation and setup. I am lucky that I only have a small network.
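
The re-keying itself is routine openssl usage; a sketch for one host, with illustrative file names, to be run only after the library has been upgraded (the old private key must be assumed compromised):

# generate a fresh private key and a certificate signing request
$ openssl genrsa -out www.example.com.key 2048
$ openssl req -new -key www.example.com.key -out www.example.com.csr \
    -subj "/CN=www.example.com"
# have the CSR re-signed by your CA, deploy it, and revoke the old certificate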

My guess is that most professional sysadmins are having a very bad day today.

Categories: LUG Community Blogs

Reproducible builds on Copr with tito and git-annex

Planet SurreyLUG - Mon, 07/04/2014 - 08:22

Recently, Fedora's Copr service was launched, which lets individuals create personal repos and build packages on Fedora's servers for a number of Fedora and EL versions (similar to OBS and PPAs).

I've set up a couple of repos under it, including one which contains builds of various gems as dependencies for librarian-puppet. Setting up and tracking RPM builds is made quite easy with git and tito, which lets you set up a simple directory structure in your RPM repo, track specs and source binaries (.gem files), and tag and release changes to Copr:

$ tree
.
|-- README.md
|-- rel-eng
|   |-- packages
|   |   |-- rubygem-highline
|   |   |-- rubygem-librarian
|   |   |-- rubygem-librarian-puppet
|   |   `-- rubygem-thor
|   |-- releasers.conf
|   `-- tito.props
|-- rubygem-highline
|   |-- highline-1.6.20.gem
|   `-- rubygem-highline.spec
|-- rubygem-librarian
|   |-- librarian-0.1.2.gem
|   `-- rubygem-librarian.spec
|-- rubygem-librarian-puppet
|   |-- librarian-puppet-0.9.17.gem
|   `-- rubygem-librarian-puppet.spec
|-- rubygem-thor
|   |-- rubygem-thor.spec
|   `-- thor-0.15.4.gem
`-- setup_sources.sh

6 directories, 16 files

(my librarian-puppet-copr repo)

However storing binary files in git has lots of problems, including the resulting size of the repo. To solve this, I use git-annex so the repo only stores metadata and a local cache of the binaries. The binaries can then be fetched from the web on a clean checkout using the setup_sources.sh script and git-annex on the fly.

Setting this up is easy:

  1. mkdir myrepo && cd myrepo
  2. git init
  3. git annex init
  4. tito init
Next, configure tito to release to Copr, following Mirek Suchý's blog post and my rel-eng directory. The key part of the rel-eng configuration for using the git-annex support in tito 0.5.0 is setting the builder in tito.props:

[buildconfig]
builder = tito.builder.GitAnnexBuilder

Adding new packages is now a matter of doing:

  1. mkdir rubygem-foo && cd rubygem-foo
  2. vim rubygem-foo.spec
  3. git annex addurl --file=foo-1.2.3.gem http://rubygems.org/downloads/foo-1.2.3.gem or copy the file into place and run git annex add foo-1.2.3.gem
  4. git commit -am "Add foo 1.2.3"
  5. tito tag --keep-version
  6. tito release copr-domcleal

git-annex will store the file in its local storage under .git/annex, replacing the file in git with a symlink based on the checksum of the contents. When you push the repo to a remote without git-annex support (like GitHub), then the binaries won't be transferred, keeping the size small. Other users can fetch binaries by adding a shared remote (e.g. a WebDAV or rsync share or web remotes using setup_sources.sh).
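
On such a checkout the symlinks simply dangle until the content is fetched; with any remote configured, restoring a binary is one command (paths taken from the tree above):

# show which remotes hold a copy of the file
$ git annex whereis rubygem-thor/thor-0.15.4.gem
# fetch the content from an available remote (or a recorded web URL)
$ git annex get rubygem-thor/thor-0.15.4.gem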

When tito is building SRPMs, it will "unlock" the files, fetching them from available remotes, build the RPM and then re-lock the checkout.

So the combination of git (for tracking sources and specs), git-annex (for keeping sources outside git), tito (for tagging builds) and Copr (for building and publishing) makes it easy to build and release your own RPMs, while allowing you to make the source code and build process accessible and transparent.

Categories: LUG Community Blogs

Steve Kemp: So that distribution I'm not-building?

Planet HantsLUG - Sun, 06/04/2014 - 15:35

The other week I was toying with using GNU stow to build an NFS-share, which would allow remote machines to boot from it.

It worked. It worked well. (Standard stuff, PXE booting with an NFS-root.)

Then I started wondering about distributions, since in one sense what I'd built was a minimal distribution.

On that basis yesterday I started hacking something more minimal:

  • I compiled a monolithic GNU/Linux kernel.
  • I created a minimal initrd image, using busybox.
  • I built a static version of the tcc compiler.
  • I got the thing booting, via KVM.

Unfortunately here is where I ran out of patience. Using tcc and the static C library I can compile code. But I can't link it.

$ cat > t.c <<EOF
int main ( int argc, char *argv[] )
{
    printf("OK\n" );
    return 1;
}
EOF
$ /opt/tcc/bin/tcc t.c
tcc: error: file 'crt1.o' not found
tcc: error: file 'crti.o' not found
..

Attempting to fix this up resulted in nothing much better:

$ /opt/tcc/bin/tcc t.c -I/opt/musl/include -L/opt/musl/lib/

And because I don't have a full system I cannot compile t.c to t.o and use ld to link (because I have no ld.)

I had a brief flirt with the portable c-compiler, pcc, but didn't get any further with that.

I suspect the real solution here is to install gcc onto my host system, with something like --prefix=/opt/gcc, and then rsync that into my (suddenly huge) initramfs image. Then I have all the toys.
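
That approach might look something like the following sketch (version, paths and configure options are assumptions, and gcc normally wants an out-of-tree build):

# build gcc on the host, self-contained under /opt/gcc
$ ../gcc-src/configure --prefix=/opt/gcc && make && make install
# then copy the whole tree into the initramfs staging directory
$ rsync -av /opt/gcc/ initramfs/opt/gcc/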

Categories: LUG Community Blogs