News aggregator

School is in session

Planet SurreyLUG - Tue, 08/04/2014 - 15:59
I’ve had a lot of travel lately and some time to think. When you take a step back and look at technology, you realise it’s always changing: always developing, updating and adding new features. It got me thinking: I should also keep updating my knowledge bank. It’s all well and good to talk […]
Categories: LUG Community Blogs

Mick Morgan: heartbleed

Planet ALUG - Tue, 08/04/2014 - 13:45

This is nasty. There is a remotely exploitable bug in openssl which leads to the leak of memory contents from the server to the client and from the client to the server. In practice this means that an attacker can read 64K chunks of memory on a vulnerable service, thus potentially exposing security critical information.

At 19.00 UTC yesterday, openssl bug CVE-2014-0160 was announced at heartbleed.com. I picked it up following a flurry of emails on the tor relays list this morning. Roger Dingledine posted a blog commentary on the bug to the tor list giving details about the likely impacts on Tor and Tor users.

Dingledine’s blog entry says:

Here are our first thoughts on what Tor components are affected:

  • Clients: Tor Browser shouldn’t be affected, since it uses libnss rather than openssl. But Tor clients could possibly be induced to send sensitive information like “what sites you visited in this session” to your entry guards. If you’re using TBB we’ll have new bundles out shortly; if you’re using your operating system’s Tor package you should get a new OpenSSL package and then be sure to manually restart your Tor.
  • Relays and bridges: Tor relays and bridges could maybe be made to leak their medium-term onion keys (rotated once a week), or their long-term relay identity keys. An attacker who has your relay identity key can publish a new relay descriptor indicating that you’re at a new location (not a particularly useful attack). An attacker who has your relay identity key, has your onion key, and can intercept traffic flows to your IP address can impersonate your relay (but remember that Tor’s multi-hop design means that attacking just one relay in the client’s path is not very useful). In any case, best practice would be to update your OpenSSL package, discard all the files in keys/ in your DataDirectory, and restart your Tor to generate new keys.
  • Hidden services: Tor hidden services might leak their long-term hidden service identity keys to their guard relays. Like the last big OpenSSL bug, this shouldn’t allow an attacker to identify the location of the hidden service, but an attacker who knows the hidden service identity key can impersonate the hidden service. Best practice would be to move to a new hidden-service address at your convenience.
  • Directory authorities: In addition to the keys listed in the “relays and bridges” section above, Tor directory authorities might leak their medium-term authority signing keys. Once you’ve updated your OpenSSL package, you should generate a new signing key. Long-term directory authority identity keys are offline so should not be affected (whew). More tricky is that clients have your relay identity key hard-coded, so please don’t rotate that yet. We’ll see how this unfolds and try to think of a good solution there.
  • Tails is still tracking Debian oldstable, so it should not be affected by this bug.
  • Orbot looks vulnerable; they have some new packages available for testing.
  • The webservers in the https://www.torproject.org/ rotation needed (and got) upgrades too. Maybe we’ll need to throw away our torproject SSL web cert and get a new one too.
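For relay operators, a minimal sketch of the steps Dingledine describes might look like the following, assuming a Debian wheezy style system (where OpenSSL 1.0.1 ships in the libssl1.0.0 package; names differ on other distributions) and the default DataDirectory of /var/lib/tor:

$ sudo apt-get update && sudo apt-get install openssl libssl1.0.0
$ sudo rm /var/lib/tor/keys/*        # discard the medium- and long-term keys
$ sudo service tor restart           # Tor regenerates its keys on startup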

But as he also says earlier on, “this bug affects way more programs than just Tor”. The openssl security advisory is remarkably sparse on details, saying only that “A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server.” So it is left to others to explain what this means in practice. The heartbleed announcement does just that. It opens by saying:

The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).

The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users.

During their investigations, the heartbleed researchers attacked their own SSL protected services from outside and found that they were:

able to steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication.

According to the heartbleed site, versions 1.0.1 through 1.0.1f (inclusive) of openssl are vulnerable. The earlier 0.9.8 branch is NOT vulnerable, nor is the 1.0.0 branch. Unfortunately, the bug was introduced to openssl in December 2011 and has been in real-world use in the 1.0.1 branch since 14 March 2012 – or just over two years ago. This means that a LOT of services will be affected and will need to be patched, and quickly.
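A quick way to check whether a given machine is running a vulnerable build is to ask openssl itself. Note that distributions may backport the fix without changing the version string, so treat the output as indicative only:

$ openssl version
OpenSSL 1.0.1e 11 Feb 2013

Anything reporting 1.0.1 through 1.0.1f should be assumed vulnerable until the vendor confirms a patched package.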

OpenSSL and its libraries are used in a vast range of security-critical services across the internet. As the heartbleed site notes:

OpenSSL is the most popular open source cryptographic library and TLS (transport layer security) implementation used to encrypt traffic on the Internet. Your popular social site, your company’s site, commerce site, hobby site, site you install software from or even sites run by your government might be using vulnerable OpenSSL. Many online services use TLS both to identify themselves to you and to protect your privacy and transactions. You might have networked appliances with logins secured by this buggy implementation of TLS. Furthermore you might have client-side software on your computer that could expose the data from your computer if you connect to compromised services.

That point about networked appliances is particularly worrying. In the last two years a lot of devices (routers, switches, firewalls, etc.) may have shipped with embedded services built against vulnerable versions of the openssl library.

In my case alone I now have to generate new X.509 certificates for all my webservers, my mail (both SMTP and POP/IMAP) services, and my OpenVPN services. I will also need to look critically at my ssh implementation and setup. I am lucky that I only have a small network.
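For each affected service the recovery is broadly the same once the library is patched: generate a new private key, obtain a new certificate, and revoke the old one. A minimal sketch, with hypothetical filenames:

$ openssl genrsa -out example.org.key 2048            # fresh RSA key
$ openssl req -new -key example.org.key -out example.org.csr
# send the CSR to your CA (or self-sign for private services),
# then revoke the old certificate once the replacement is deployed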

My guess is that most professional sysadmins are having a very bad day today.

Categories: LUG Community Blogs

Reproducible builds on Copr with tito and git-annex

Planet SurreyLUG - Mon, 07/04/2014 - 08:22

Recently, Fedora's Copr service was launched, which lets individuals create personal repos and build packages on Fedora's servers for a number of Fedora and EL versions (similar to OBS and PPAs).

I've set up a couple of repos under it, including one which contains builds of various gems as dependencies for librarian-puppet. Setting up and tracking RPM builds is made quite easy with git and tito, which lets you set up a simple directory structure in your RPM repo, track specs and source binaries (.gem files), and tag and release changes to Copr:

$ tree .
.
|-- README.md
|-- rel-eng
|   |-- packages
|   |   |-- rubygem-highline
|   |   |-- rubygem-librarian
|   |   |-- rubygem-librarian-puppet
|   |   `-- rubygem-thor
|   |-- releasers.conf
|   `-- tito.props
|-- rubygem-highline
|   |-- highline-1.6.20.gem
|   `-- rubygem-highline.spec
|-- rubygem-librarian
|   |-- librarian-0.1.2.gem
|   `-- rubygem-librarian.spec
|-- rubygem-librarian-puppet
|   |-- librarian-puppet-0.9.17.gem
|   `-- rubygem-librarian-puppet.spec
|-- rubygem-thor
|   |-- rubygem-thor.spec
|   `-- thor-0.15.4.gem
`-- setup_sources.sh

6 directories, 16 files

(my librarian-puppet-copr repo)

However, storing binary files in git has lots of problems, including the resulting size of the repo. To solve this, I use git-annex so the repo only stores metadata and a local cache of the binaries. On a clean checkout the binaries can then be fetched from the web on the fly, using the setup_sources.sh script and git-annex.

Setting this up is easy:

  1. mkdir myrepo && cd myrepo
  2. git init
  3. git annex init
  4. tito init
Next, configure tito to release to Copr, following Mirek Suchý's blog post and my rel-eng directory. The key part of the rel-eng configuration for using the git-annex support in tito 0.5.0 is setting the builder in tito.props:

[buildconfig]
builder = tito.builder.GitAnnexBuilder
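The matching release target lives in releasers.conf. From memory (the exact option names are best checked against tito's documentation and Mirek's post), a Copr target looks roughly like this, with a hypothetical upload host:

[copr-domcleal]
releaser = tito.release.CoprReleaser
project_name = librarian-puppet
# Copr builds from URLs, so the SRPM is uploaded somewhere public first
upload_command = scp %(srpm)s example.org:/var/www/srpms/
remote_location = http://example.org/srpms/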

Adding new packages is now a matter of doing:

  1. mkdir rubygem-foo && cd rubygem-foo
  2. vim rubygem-foo.spec
  3. git annex addurl --file=foo-1.2.3.gem http://rubygems.org/downloads/foo-1.2.3.gem or copy the file into place and run git annex add foo-1.2.3.gem
  4. git commit -am "Add foo 1.2.3"
  5. tito tag --keep-version
  6. tito release copr-domcleal

git-annex will store the file in its local storage under .git/annex, replacing the file in git with a symlink based on the checksum of the contents. When you push the repo to a remote without git-annex support (like GitHub), the binaries won't be transferred, keeping the size small. Other users can fetch the binaries by adding a shared remote (e.g. a WebDAV or rsync share, or a web remote via setup_sources.sh).
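As an illustration (hypothetical gem name; the exact object path and key will differ), the annexed file becomes a symlink, and a fresh clone can pull the content back from any remote that has it:

$ git annex add foo-1.2.3.gem
$ ls -l foo-1.2.3.gem
lrwxrwxrwx ... foo-1.2.3.gem -> .git/annex/objects/.../SHA256E-s12345--abc....gem
$ git annex whereis foo-1.2.3.gem   # which remotes have the content
$ git annex get foo-1.2.3.gem       # fetch it into this checkout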

When tito builds an SRPM, it "unlocks" the files, fetches them from available remotes, builds the RPM, and then re-locks the checkout.

So the combination of git (for tracking sources and specs), git-annex (for keeping sources outside git), tito (for tagging builds) and Copr (for building and publishing) makes it easy to build and release your own RPMs, while allowing you to make the source code and build process accessible and transparent.

Categories: LUG Community Blogs

Steve Kemp: So that distribution I'm not-building?

Planet HantsLUG - Sun, 06/04/2014 - 15:35

The other week I was toying with using GNU stow to build an NFS-share, which would allow remote machines to boot from it.

It worked. It worked well. (Standard stuff, PXE booting with an NFS-root.)
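For anyone wanting to reproduce that standard setup, the interesting part is a pxelinux entry handing the kernel an NFS root. A minimal sketch, with a hypothetical server address and export path:

# pxelinux.cfg/default
DEFAULT nfsroot
LABEL nfsroot
  KERNEL vmlinuz
  APPEND initrd=initrd.img root=/dev/nfs nfsroot=192.168.0.1:/srv/nfsroot ip=dhcp ro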

Then I started wondering about distributions, since in one sense what I'd built was a minimal distribution.

On that basis yesterday I started hacking something more minimal:

  • I compiled a monolithic GNU/Linux kernel.
  • I created a minimal initrd image, using busybox.
  • I built a static version of the tcc compiler.
  • I got the thing booting, via KVM.

Unfortunately here is where I ran out of patience. Using tcc and the static C library I can compile code. But I can't link it.

$ cat > t.c <<EOF
int main(int argc, char *argv[])
{
    printf("OK\n");
    return 1;
}
EOF
$ /opt/tcc/bin/tcc t.c
tcc: error: file 'crt1.o' not found
tcc: error: file 'crti.o' not found
..

Attempting to fix this up resulted in nothing much better:

$ /opt/tcc/bin/tcc t.c -I/opt/musl/include -L/opt/musl/lib/

And because I don't have a full system I cannot compile t.c to t.o and link it separately, because I have no ld.
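One untested workaround might be to let tcc do the linking itself by handing it musl's C runtime objects and static libc explicitly, assuming the musl build installed them under /opt/musl/lib:

$ /opt/tcc/bin/tcc -nostdlib -I/opt/musl/include \
    /opt/musl/lib/crt1.o /opt/musl/lib/crti.o \
    t.c \
    /opt/musl/lib/crtn.o /opt/musl/lib/libc.a -o t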

I had a brief flirt with the portable c-compiler, pcc, but didn't get any further with that.

I suspect the real solution here is to install gcc onto my host system, with something like --prefix=/opt/gcc, and then rsync that into my (suddenly huge) initramfs image. Then I have all the toys.

Categories: LUG Community Blogs

Dick Turpin: You rogue!

Planet WolvesLUG - Fri, 04/04/2014 - 09:43
Customer: "This laptop you sold me is well dodgy!"
Me: "What do you mean dodgy?"
Customer: "There's no Windows 7 licence and when I went to register it someone has pulled the serial number off!"
Me: "They're under the battery."

click brrrrrrrrrrrrrr
Categories: LUG Community Blogs

Jono Bacon: I Am Hiring

Planet WolvesLUG - Thu, 03/04/2014 - 17:53

I just wanted to let you folks know that I am recruiting for a community manager to join my team at Canonical.

I am looking for someone with strong technical knowledge of building Ubuntu (knowledge of how we release, how we build packages, bug management, governance etc), great community management skills, and someone who is willing to be challenged and grow in their skills and capabilities.

My goal with everyone who joins my team is not just to help them be successful in their work, but to help them be the very best at what they do in our industry. As such I am looking for someone with a passion to be successful and grow.

I think it is a great opportunity to be part of a great team. Details of the job are available here – please apply if you are interested!

Categories: LUG Community Blogs

Surrey LUG Bring-A-Box 12th April 2014

Surrey LUG - Thu, 03/04/2014 - 14:15
Start: 2014-04-12 11:00 End: 2014-04-12 17:00

This month's meeting is at The John Galsworthy Building at Kingston University. Our thanks go to Dan Patynski for organising.

Who

New members are very welcome. We're not a cliquey bunch, so you won't feel out of place! Usually between 10 and 30 people come along. 

Categories: LUG Community Blogs

Steve Kemp: Tagging images, and maintaining collections?

Planet HantsLUG - Thu, 03/04/2014 - 12:02

I'm an amateur photographer, although these days I tend to drop the amateur prefix, given that I shoot people for cash at least once a month.

(It isn't my main job, and I'd never actually want it to be, because I'm certain I'd become unhappy hustling for jobs and doing the promotion thing.)

Anyway over the years I've built up a large library of images, mostly organized in a hierarchy of directories beneath ~/Images.

Unlike most photographers I don't use aperture, lighttable, or any similar library-management tools. I shoot my images in RAW, convert to JPG via rawtherapee, and keep both versions of the images.

In short I don't want to mix the "library management" functions with the "RAW conversion" because I do regard them as two separate steps. That said I'm reaching a point where I do want to start tagging images, and finding them more quickly.

In the past I wrote a couple of simple tools to inject tags into the EXIF data of images, and then indexed them. But that didn't work so well in practice. I'm starting to think instead that I should index images into sqlite (a minimal schema sketch follows the list):

  • Size.
  • Date.
  • Content hash.
  • Tags.
  • Path.
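A minimal sketch of that index, using the sqlite3 command-line shell and hypothetical table names:

$ sqlite3 ~/Images/index.db <<'EOF'
-- one row per image; the content hash is the stable identity
CREATE TABLE IF NOT EXISTS images (
    hash TEXT PRIMARY KEY,   -- e.g. sha256 of the file contents
    path TEXT NOT NULL,      -- current location on disk
    size INTEGER,
    date TEXT
);
-- an image can carry any number of tags
CREATE TABLE IF NOT EXISTS tags (
    hash TEXT NOT NULL REFERENCES images(hash),
    tag  TEXT NOT NULL
);
EOF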

The downside is that this breaks utterly as soon as you move images around on-disk, which is something my previous EXIF manipulation was designed to avoid.

Anyway I'm thinking at the moment, but I know that existing tools such as F-Spot, Shotwell, DigiKam, and similar aren't suitable. So I either need to go standalone and use EXIF tags, accepting that the tags I enter won't be visible to other tools, or cope with the file-rename issues by attempting to update an existing sqlite database via hash/size/etc.
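Coping with renames could then be a periodic re-index pass that recomputes each file's hash and repoints the path column, sketched here under the same assumptions as the schema above (paths containing single quotes would need proper escaping):

$ find ~/Images -type f \( -iname '*.jpg' -o -iname '*.nef' \) -print0 |
  while IFS= read -r -d '' f; do
      h=$(sha256sum "$f" | awk '{print $1}')
      sqlite3 ~/Images/index.db "UPDATE images SET path = '$f' WHERE hash = '$h';"
  done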

Categories: LUG Community Blogs

Jono Bacon: Ubuntu Online Summit Dates

Planet WolvesLUG - Thu, 03/04/2014 - 00:03

At the last Ubuntu Developer Summit we discussed the idea of making our regular online summit serve more than just developers. We are interested in showcasing not just the developer-orientated discussion sessions that we currently have, but also including content such as presentations, demos, tutorials, and other topics.

I just wanted to give everyone a heads up that the first Ubuntu Online Summit will happen from 10th – 12th June 2014. The website is not yet updated (we are going to keep everything on summit.ubuntu.com, with uds.ubuntu.com pointing there; Michael is making the changes to bring over the static content).

We are really keen to get ideas for how the event can run so I am scheduling a hangout on Thurs 10th April at 5pm UTC on Ubuntu On Air where I would welcome ideas and input. I hope to see you there!

Categories: LUG Community Blogs

Tony Whitmore: Ubuntu Podcast – Season 7 starts tomorrow!

Planet HantsLUG - Tue, 01/04/2014 - 20:18

Tomorrow evening we’ll be bringing a brand new season of the Ubuntu Podcast to your ears. After an extended winter break, we’re ready to dust off our microphones and mixers, fire up our laptops and dive head first into the new season. We’ll be streaming the show live at 2030 BST so you can listen and even participate through the IRC channel. Visit the live page on the website to find out more.

As we did last year, we will be releasing new episodes for download every week. If you can’t wait for that, listen live on alternate Wednesday evenings for about an hour. You can check the recording dates on our website or add them to your Google calendar.

The show will be much as you know and hopefully love it: A mix of discussion, interviews, news, silliness and cake. It would be great if you could join us at 2030 BST tomorrow (Wednesday 2nd April) for the first live show of the season!

Categories: LUG Community Blogs

Debian Bits: Debian Project elects Javier Merino Cacho as Project Leader

Planet HantsLUG - Tue, 01/04/2014 - 11:25

This post was an April Fools' Day joke.

In accordance with its constitution, the Debian Project has just elected Javier Merino Cacho as Debian Project Leader. More than 80% of voters put him as their first choice (or equal first) on their ballot papers.

Javier's large majority over his opponents shows how his inspiring vision for the future of the Debian project is largely shared by the other developers. Lucas Nussbaum and Neil McGovern also gained a lot of support from Debian project members, both coming many votes ahead of the None of the above ballot choice.

Javier has been a Debian Developer since February 2012 and, among other packages, works on keeping the mercurial package under control, as mercury is very poisonous for trout.

After it was announced that he had won this year's election, Javier said: "I'm flattered by the trust that Debian members have put in me. One of the main points in my platform is to remove the 'Debian is old and boring' image. In order to change that, my first action as DPL is to encourage all Debian Project Members to wear a clown red nose in public."

The main points of his platform relate to improving the communication style on mailing lists through an innovative filter called aponygisator, to making Debian less "old and boring", and to solving technical disputes among developers with barehanded fights. Betting on the fights will be not only allowed but encouraged, for fundraising reasons.

Javier also contemplated the use of misleading talk titles such as "The use of cannabis in contemporary ages: a practical approach" and "Real Madrid vs Barcelona" to lure new users and contributors to Debian events.

Javier's platform was collaboratively written by a team of communication experts and high profile Debian contributors during the last DebConf. It has since evolved thanks to the help of many other contributors.

Categories: LUG Community Blogs