News aggregator

Steve Kemp: Putting the finishing touches to a nodejs library

Planet HantsLUG - Fri, 11/04/2014 - 15:14

For the past few years I've been running a simple service to block blog/comment-spam, which is (currently) implemented as a simple JSON API over HTTP, with a minimal core and all the logic in a series of plugins.
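To give a flavour of what that looks like from a caller's point of view, here is a minimal sketch of submitting a comment to such a JSON-over-HTTP API with nodejs; the host, port, field names and response format are purely illustrative, not the service's actual interface:

var http = require('http');

// A hypothetical comment submission, encoded as JSON.
var comment = JSON.stringify({
    name: "Bob",
    link: "http://spam.example.com/",
    comment: "buy cheap watches"
});

var req = http.request({
    host: 'api.example.com',   // illustrative host
    port: 8888,                // illustrative port
    path: '/',
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'Content-Length': Buffer.byteLength(comment)
    }
}, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        // Imagine a response such as {"result":"SPAM"} or {"result":"OK"}.
        console.log(body);
    });
});

req.write(comment);
req.end();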

One obvious thing I wasn't doing until today was paying attention to the anchor-text used in hyperlinks, for example:

<a href="http://fdsf.example.com/">buy viagra</a>

Blocking on the anchor-text is less prone to false positives than blocking on keywords in the comment/message bodies.

Unfortunately there seem to be no simple nodejs modules for extracting all the links, and their associated anchor text, from an arbitrary HTML string. So I had to write such a module, but given how small it is there seems little point in sharing it. I guess this is one of the reasons why there are often large gaps in the module ecosystem.
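For what it's worth, a sketch of the extraction logic might look something like the following; this is illustrative only, not the module itself, and a similar regular expression covers BBCode's [url=..]..[/url] form:

// Pull every href / anchor-text pair out of a string of HTML.
function extractLinks(html) {
    var re = /<a\s+[^>]*href=["']([^"']+)["'][^>]*>([\s\S]*?)<\/a>/gi;
    var links = [];
    var match;
    while ((match = re.exec(html)) !== null) {
        links.push({ href: match[1], anchor: match[2] });
    }
    return links;
}

// extractLinks('<a href="http://fdsf.example.com/">buy viagra</a>')
//   => [ { href: 'http://fdsf.example.com/', anchor: 'buy viagra' } ]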

(Equally some modules are essentially applications; it's great that the authors shared them, but they're virtually unusable unless you match their problem domain exactly.)

I've written about this before when I had to construct, and publish, my own cidr-matching module.

Anyway, expect an upload soon. Currently I "parse" HTML and BBCode; Markdown may follow, since I have an interest in it.

Categories: LUG Community Blogs

David Goodwin: Installing Debian (Jessie) on an Intel NUC D54250WYK

Planet WolvesLUG - Fri, 11/04/2014 - 11:04

Product - D54250WYK / boxd54250wykh3 – via e.g. Ballicom or eBuyer

It’s an Intel i5-4250U (a dual-core laptop processor). It supports up to 16GB of RAM and has Intel HD 5000 graphics.

The box itself is really small – and silent. A laptop size hard disk can fit into it (2.5″ hdd).

Issues :

  1. The BIOS needs updating before Debian can be installed (apparently); see Intel’s website – currently here – it’s just a case of downloading the .BIO file, putting it on a USB stick, pressing F7 on boot and following the prompts.
  2. Most Linux distros do not yet support the network card (Intel 559/I218-V) – I had to netboot a Debian unstable netboot ISO image (from here)

Good things -

  1. BTRFS root filesystem + booting etc just worked with Jessie.
  2. X configuration just works – even though it’s quite a new graphics chipset.
  3. Boot time is VERY fast – currently <5 seconds.

Categories: LUG Community Blogs

Steve Kemp: A small assortment of content

Planet HantsLUG - Thu, 10/04/2014 - 16:34

Today I took down my KVM-host machine, rebooting it and restarting all of my guests. It has been a while since I'd done so and I was a little nervous; as it turned out, this nervousness was prophetic.

I'd forgotten to hardwire the use of proxy_arp so my guests were all broken when the systems came back online.

If you're curious this is what my incoming graph of email SPAM looks like:

I think it is obvious where the downtime occurred, right?

In other news, I'm still waiting to hear about the system administration job I applied for here in Edinburgh; if that doesn't work out I'll need to hunt for another position.

Finally I've started hacking on my console based mail-client some more. It is a modal client which means you're always in one of three states/modes:

  • maildir - Viewing a list of maildir folders.
  • index - Viewing a list of messages.
  • message - Viewing a single message.

As a result of a lot of hacking there is now a fourth mode/state, "text-mode", which allows you to view arbitrary text: for example scrolling up and down a file on-disk, reading the manual, or viewing messages in interesting ways.

Support is still basic at the moment, but both of these work:

--
-- Show a single file
--
show_file_contents( "/etc/passwd" )
global_mode( "text" )

Or:

function x()
   txt = { "${colour:red}Steve",
           "${colour:blue}Kemp",
           "${bold}Has",
           "${underline}Definitely",
           "Made this work" }
   show_text( txt )
   global_mode( "text" )
end

x()

There will be a new release within the week, I guess; I just need to wire up a few more primitives, write more of a manual, and close some more bugs.

Happy Thursday, or as we say in this house, Hyvää torstai!

Categories: LUG Community Blogs

School is in session

Planet SurreyLUG - Tue, 08/04/2014 - 15:59
I’ve had a lot of travel lately and I’ve had some time to think. When you take a step back and look at technology you realise it’s always changing: always developing, updating and adding new features. It got me thinking that I should also keep updating my knowledge bank. It’s all well and good to talk […]
Categories: LUG Community Blogs

Mick Morgan: heartbleed

Planet ALUG - Tue, 08/04/2014 - 13:45

This is nasty. There is a remotely exploitable bug in openssl which leads to the leak of memory contents from the server to the client and from the client to the server. In practice this means that an attacker can read 64K chunks of memory on a vulnerable service, thus potentially exposing security critical information.

At 19.00 UTC yesterday, openssl bug CVE-2014-0160 was announced at heartbleed.com. I picked it up following a flurry of emails on the tor relays list this morning. Roger Dingledine posted a blog commentary on the bug to the tor list giving details about the likely impacts on Tor and Tor users.

Dingledine’s blog entry says:

Here are our first thoughts on what Tor components are affected:

  • Clients: Tor Browser shouldn’t be affected, since it uses libnss rather than openssl. But Tor clients could possibly be induced to send sensitive information like “what sites you visited in this session” to your entry guards. If you’re using TBB we’ll have new bundles out shortly; if you’re using your operating system’s Tor package you should get a new OpenSSL package and then be sure to manually restart your Tor.
  • Relays and bridges: Tor relays and bridges could maybe be made to leak their medium-term onion keys (rotated once a week), or their long-term relay identity keys. An attacker who has your relay identity key can publish a new relay descriptor indicating that you’re at a new location (not a particularly useful attack). An attacker who has your relay identity key, has your onion key, and can intercept traffic flows to your IP address can impersonate your relay (but remember that Tor’s multi-hop design means that attacking just one relay in the client’s path is not very useful). In any case, best practice would be to update your OpenSSL package, discard all the files in keys/ in your DataDirectory, and restart your Tor to generate new keys.
  • Hidden services: Tor hidden services might leak their long-term hidden service identity keys to their guard relays. Like the last big OpenSSL bug, this shouldn’t allow an attacker to identify the location of the hidden service, but an attacker who knows the hidden service identity key can impersonate the hidden service. Best practice would be to move to a new hidden-service address at your convenience.
  • Directory authorities: In addition to the keys listed in the “relays and bridges” section above, Tor directory authorities might leak their medium-term authority signing keys. Once you’ve updated your OpenSSL package, you should generate a new signing key. Long-term directory authority identity keys are offline so should not be affected (whew). More tricky is that clients have your relay identity key hard-coded, so please don’t rotate that yet. We’ll see how this unfolds and try to think of a good solution there.
  • Tails is still tracking Debian oldstable, so it should not be affected by this bug.
  • Orbot looks vulnerable; they have some new packages available for testing.
  • The webservers in the https://www.torproject.org/ rotation needed (and got) upgrades too. Maybe we’ll need to throw away our torproject SSL web cert and get a new one too.

But as he also says earlier on, “this bug affects way more programs than just Tor”. The openssl security advisory is remarkably sparse on details, saying only that “A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server.” So it is left to others to explain what this means in practice. The heartbleed announcement does just that. It opens by saying:

The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).

The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users.

During their investigations, the heartbleed researchers attacked their own SSL protected services from outside and found that they were:

able to steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication.

According to the heartbleed site, versions 1.0.1 through 1.0.1f (inclusive) of openssl are vulnerable. The earlier 0.9.8 branch is NOT vulnerable, nor is the 1.0.0 branch. Unfortunately, the bug was introduced to openssl in December 2011 and has been available in real world use in the 1.0.1 branch since 14 March 2012 – or just over 2 years ago. This means that a LOT of services will be affected and will need to be patched, and quickly.

Openssl, or its libraries, are used in a vast range of security critical services across the internet. As the heartbleed site notes:

OpenSSL is the most popular open source cryptographic library and TLS (transport layer security) implementation used to encrypt traffic on the Internet. Your popular social site, your company’s site, commerce site, hobby site, site you install software from or even sites run by your government might be using vulnerable OpenSSL. Many online services use TLS both to identify themselves to you and to protect your privacy and transactions. You might have networked appliances with logins secured by this buggy implementation of TLS. Furthermore you might have client side software on your computer that could expose the data from your computer if you connect to compromised services.

That point about networked appliances is particularly worrying. In the last two years a lot of devices (routers, switches, firewalls etc) may have shipped with embedded services built against vulnerable versions of the openssl library.

In my case alone I now have to generate new X509 certificates for all my webservers, my mail (both SMTP and POP/IMAP) services, and my openVPN services. I will also need to look critically at my ssh implementation and setup. I am lucky that I only have a small network.

My guess is that most professional sysadmins are having a very bad day today.

Categories: LUG Community Blogs

Reproducible builds on Copr with tito and git-annex

Planet SurreyLUG - Mon, 07/04/2014 - 08:22

Recently, Fedora's Copr service was launched which lets individuals create personal repos and build packages on Fedora's servers for a number of Fedora and EL versions (similar to OBS and PPAs).

I've set up a couple of repos under it, one of which contains builds of various gems as dependencies for librarian-puppet. Setting up and tracking RPM builds is made quite easy with git and tito, which lets you set up a simple directory structure in your RPM repo, track specs and source binaries (.gem files), and tag and release changes to Copr:

$ tree .
|-- README.md
|-- rel-eng
|   |-- packages
|   |   |-- rubygem-highline
|   |   |-- rubygem-librarian
|   |   |-- rubygem-librarian-puppet
|   |   `-- rubygem-thor
|   |-- releasers.conf
|   `-- tito.props
|-- rubygem-highline
|   |-- highline-1.6.20.gem
|   `-- rubygem-highline.spec
|-- rubygem-librarian
|   |-- librarian-0.1.2.gem
|   `-- rubygem-librarian.spec
|-- rubygem-librarian-puppet
|   |-- librarian-puppet-0.9.17.gem
|   `-- rubygem-librarian-puppet.spec
|-- rubygem-thor
|   |-- rubygem-thor.spec
|   `-- thor-0.15.4.gem
`-- setup_sources.sh

6 directories, 16 files

(my librarian-puppet-copr repo)

However storing binary files in git has lots of problems, including the resulting size of the repo. To solve this, I use git-annex so the repo only stores metadata and a local cache of the binaries. The binaries can then be fetched from the web on a clean checkout using the setup_sources.sh script and git-annex on the fly.

Setting this up is easy:

  1. mkdir myrepo && cd myrepo
  2. git init
  3. git annex init
  4. tito init

Next, configure Copr and tito to use Copr, following Mirek Suchý's blog post and my rel-eng directory. The key part of the rel-eng configuration for using the git-annex support in tito 0.5.0 is setting the builder in tito.props:

[buildconfig]
builder = tito.builder.GitAnnexBuilder

Adding new packages is now a matter of doing:

  1. mkdir rubygem-foo && cd rubygem-foo
  2. vim rubygem-foo.spec
  3. git annex addurl --file=foo-1.2.3.gem http://rubygems.org/downloads/foo-1.2.3.gem or copy the file into place and run git annex add foo-1.2.3.gem
  4. git commit -am "Add foo 1.2.3"
  5. tito tag --keep-version
  6. tito release copr-domcleal

git-annex will store the file in its local storage under .git/annex, replacing the file in git with a symlink based on the checksum of the contents. When you push the repo to a remote without git-annex support (like GitHub), the binaries won't be transferred, keeping the size small. Other users can fetch the binaries by adding a shared remote (e.g. a WebDAV or rsync share), or from web remotes using setup_sources.sh.

When tito is building SRPMs, it will "unlock" the files (fetching them from available remotes), build the RPM, and then re-lock the checkout.

So the combination of git (for tracking sources and specs), git-annex (for keeping sources outside git), tito (for tagging builds) and Copr (for building and publishing) makes it easy to build and release your own RPMs, while allowing you to make the source code and build process accessible and transparent.

Categories: LUG Community Blogs

Steve Kemp: So that distribution I'm not-building?

Planet HantsLUG - Sun, 06/04/2014 - 15:35

The other week I was toying with using GNU stow to build an NFS-share, which would allow remote machines to boot from it.

It worked. It worked well. (Standard stuff, PXE booting with an NFS-root.)

Then I started wondering about distributions, since in one sense what I'd built was a minimal distribution.

On that basis yesterday I started hacking something more minimal:

  • I compiled a monolithic GNU/Linux kernel.
  • I created a minimal initrd image, using busybox.
  • I built a static version of the tcc compiler.
  • I got the thing booting, via KVM.

Unfortunately here is where I ran out of patience. Using tcc and the static C library I can compile code. But I can't link it.

$ cat > t.c <<EOF
int main ( int argc, char *argv[] )
{
    printf("OK\n" );
    return 1;
}
EOF
$ /opt/tcc/bin/tcc t.c
tcc: error: file 'crt1.o' not found
tcc: error: file 'crti.o' not found
..

Attempting to fix this up resulted in nothing much better:

$ /opt/tcc/bin/tcc t.c -I/opt/musl/include -L/opt/musl/lib/

And since I don't have a full system I cannot compile t.c to t.o and use ld to link (because I have no ld).

I had a brief flirt with the portable c-compiler, pcc, but didn't get any further with that.

I suspect the real solution here is to install gcc onto my host system, with something like --prefix=/opt/gcc, and then rsync that into my (suddenly huge) initramfs image. Then I have all the toys.

Categories: LUG Community Blogs

Dick Turpin: You rogue!

Planet WolvesLUG - Fri, 04/04/2014 - 09:43
Customer: "This laptop you sold me is well dodgy!"
Me: "What do you mean dodgy?"
Customer: "There's no Windows 7 licence and when I went to register it someone has pulled the serial number off!"
Me: "They're under the battery."

click brrrrrrrrrrrrrr
Categories: LUG Community Blogs

Jono Bacon: I Am Hiring

Planet WolvesLUG - Thu, 03/04/2014 - 17:53

I just wanted to let you folks know that I am recruiting for a community manager to join my team at Canonical.

I am looking for someone with strong technical knowledge of building Ubuntu (knowledge of how we release, how we build packages, bug management, governance etc), great community management skills, and someone who is willing to be challenged and grow in their skills and capabilities.

My goal with everyone who joins my team is not just to help them be successful in their work, but to help them be the very best at what they do in our industry. As such I am looking for someone with a passion to be successful and grow.

I think it is a great opportunity, and a chance to be part of a great team. Details of the job are available here – please apply if you are interested!

Categories: LUG Community Blogs

Surrey LUG Bring-A-Box 12th April 2014

Surrey LUG - Thu, 03/04/2014 - 14:15
Start: 2014-04-12 11:00 End: 2014-04-12 17:00

This month's meeting is at The John Galsworthy Building at Kingston University. Our thanks go to Dan Patynski for organising.

Who

New members are very welcome. We're not a cliquey bunch, so you won't feel out of place! Usually between 10 and 30 people come along. 

Categories: LUG Community Blogs

Steve Kemp: Tagging images, and maintaining collections?

Planet HantsLUG - Thu, 03/04/2014 - 12:02

I'm an amateur photographer, although these days I tend to drop the amateur prefix, given that I shoot people for cash at least once a month.

(It isn't my main job, and I'd never actually want it to be, because I'm certain I'd become unhappy hustling for jobs and doing the promotion thing.)

Anyway over the years I've built up a large library of images, mostly organized in a hierarchy of directories beneath ~/Images.

Unlike most photographers I don't use aperture, lighttable, or any similar library-management tool. I shoot my images in RAW, convert to JPG via rawtherapee, and keep both versions of the images.

In short I don't want to mix the "library management" functions with the "RAW conversion" because I do regard them as two separate steps. That said I'm reaching a point where I do want to start tagging images, and finding them more quickly.

In the past I wrote a couple of simple tools to inject tags into the EXIF data of images, and then indexed them. But that didn't work so well in practice. I'm starting to think instead that I should index images into sqlite:

  • Size.
  • Date.
  • Content hash.
  • Tags.
  • Path.

The downside is that this breaks utterly as soon as you move images around on-disk, which is something my previous exif-manipulation was designed to avoid.

Anyway I'm still thinking about this at the moment, but I know that the existing tools such as F-Spot, shotwell, DigiKam, and similar aren't suitable. So I either need to go standalone and use EXIF tags, accepting the fact that the tags I enter won't be visible to other tools, or cope with the file-rename issues by attempting to update an existing sqlite database via hash/size/etc.
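Purely as a sketch of that second approach, the sqlite side might look something like this in nodejs; the schema, the better-sqlite3 module and the helper names are all assumptions of mine rather than an existing tool:

var Database = require('better-sqlite3');
var crypto   = require('crypto');
var fs       = require('fs');

var db = new Database('images.db');
db.exec('CREATE TABLE IF NOT EXISTS images (' +
        ' hash TEXT PRIMARY KEY,' +   // content hash: stable across renames
        ' path TEXT,' +
        ' size INTEGER,' +
        ' date TEXT,' +
        ' tags TEXT)');

// Content hash of a file; identifies the image regardless of where it lives.
function hashFile(path) {
    return crypto.createHash('sha1')
                 .update(fs.readFileSync(path))
                 .digest('hex');
}

// Insert a new image, or update the row of one we've seen before
// (i.e. cope with a file that has merely been renamed or moved).
function indexImage(path, tags) {
    var stat = fs.statSync(path);
    db.prepare('INSERT OR REPLACE INTO images (hash, path, size, date, tags)' +
               ' VALUES (?, ?, ?, ?, ?)')
      .run(hashFile(path), path, stat.size, stat.mtime.toISOString(), tags.join(','));
}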

Categories: LUG Community Blogs