News aggregator

MJ Ray: New comments methods

Planet ALUG - Wed, 25/06/2014 - 21:04

After years of resisting it, I’ve added the least evil Twitter/Facebook comments plugin I could find to this blog as a test and updated the comments policy a little.

Please kick the tyres and try commenting to see if it works.

Categories: LUG Community Blogs

Steve Kemp: So I accidentally ... a service.

Planet HantsLUG - Mon, 23/06/2014 - 20:44

This post is partly introspection, and partly advertising. Skip it if either annoys you.

Back in February I was thinking about what to do with myself. I had two main options: "Get a job" or "Start a service". Because I didn't have any ideas that seemed terribly interesting, I asked people what they would pay for.

There were several replies, largely based around "infrastructure hosting" (pretty much a 50/50 split between DNS hosting and project hosting with something like trac, redmine, or similar).

At the time DNS seemed hard, and later I discovered there were already at least two well-regarded people doing DNS things, with revision control.

So I shelved the idea, after reaching out to both companies to no avail. (This later led to drama, but we'll pretend it didn't.) Ultimately I sought and acquired gainful employment.

Then, during the course of my gainful employment, I was exposed to Amazon's Route53 service. It looked like I was going to be doing many things with this, so I wanted to understand it more thoroughly than I did. That led to the creation of a Dynamic-DNS service - which seemed to be about the simplest thing you could do with the ability to programmatically add/edit/delete DNS records via an API.

As this was a random hack put together over the course of a couple of nights, I didn't really expect it to be any more popular than anything else I'd deployed. With the sudden influx of users, though, I wanted to see if I could charge people. Ultimately many people pretended they'd pay, but nobody actually committed. So on that basis I released the source code and decided to ignore the two main missing features: lack of MX records, and lack of sub-sub-domains. (Isn't it amazing how people who claim they want "open source" so frequently mean they want something with zero cost, which they can run but never modify or contribute toward?)

The experience of doing that, though, and the reminder of the popularity of the original idea, made me think that I could do a useful job with Git + DNS combined. That led to DNS-API - GitHub-based DNS hosting.

It is early days, but it looks like I have a few users, and if I can get more then I'll be happy.

So if you want to store your DNS records in a (public) GitHub repository, and get them hosted on geographically diverse anycasted servers... well, you know where to go: GitHub-based DNS hosting.

Categories: LUG Community Blogs

Tony Whitmore: Tom Baker at 80

Planet HantsLUG - Mon, 23/06/2014 - 18:31

Back in March I photographed the legendary Tom Baker at the Big Finish studios in Kent. The occasion was the recording of a special extended interview with Tom, to mark his 80th birthday. The interview was conducted by Nicholas Briggs, and the recording is being released on CD and download by Big Finish.

I got to listen in to the end of the recording session and it was full of Tom’s own unique form of inventive story-telling, as well as moments of reflection. I got to photograph Tom on his own using a portable studio set up, as well as with Nick and some other special guests. All in about 7 minutes! The cover has been released now and it looks pretty good I think.

The CD is available for pre-order from the Big Finish website now. Pre-orders will be signed by Tom, so buy now!

Categories: LUG Community Blogs

Meeting at "The Moon Under Water"

Wolverhampton LUG News - Mon, 23/06/2014 - 10:15
Event-Date: Wednesday, 25 June, 2014 - 19:30 to 23:00

53-55 Lichfield St, Wolverhampton, West Midlands, WV1 1EQ

Eat, Drink and talk Linux
Categories: LUG Community Blogs

Andy Smith: How to work around lack of array support in puppetlabs-firewall?

Planet HantsLUG - Mon, 23/06/2014 - 04:28

After a couple of irritating firewalling oversights I decided to have a go at replacing my hacked-together firewall management scripts with the puppetlabs-firewall module.

It’s going okay, but one thing I’ve found quite irritating is the lack of support for arrays of things such as source IPs or ICMP types.

For example, let’s say I have a sequence of shell commands like this:

#!/bin/bash

readonly IPT=/sbin/iptables

for icmptype in redirect router-advertisement router-solicitation \
    address-mask-request address-mask-reply; do
    $IPT -A INPUT -p icmp --icmp-type ${icmptype} -j DROP
done

You’d think that with puppetlabs-firewall you could do this:

class bffirewall::prev4 {
    Firewall {
        require => undef,
    }

    firewall { '00002 Disallow possibly harmful ICMP':
        proto    => 'icmp',
        icmp     => [ 'redirect', 'router-advertisement', 'router-solicitation',
                      'address-mask-request', 'address-mask-reply' ],
        action   => 'drop',
        provider => 'iptables',
    }
}

Well, it is valid syntax and it applies fine on the client, but a closer look shows it hasn't worked. It has just applied the first firewall rule out of the array, i.e.:

iptables -A INPUT -p icmp --icmp-type redirect -j DROP
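For comparison, expanding the shell loop from earlier by hand shows the full set of rules that should have been installed (the ICMP types are copied straight from that script; only the first rule actually makes it into the running config via Puppet):

```shell
iptables -A INPUT -p icmp --icmp-type redirect -j DROP
iptables -A INPUT -p icmp --icmp-type router-advertisement -j DROP
iptables -A INPUT -p icmp --icmp-type router-solicitation -j DROP
iptables -A INPUT -p icmp --icmp-type address-mask-request -j DROP
iptables -A INPUT -p icmp --icmp-type address-mask-reply -j DROP
```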

There’s already a bug in Puppet’s JIRA about this.

Similarly, what if you need to add a similar rule for each of a set of source hosts? For example:

readonly MONITORS="192.168.0.244 192.168.0.238 192.168.4.71"
readonly CACTI="192.168.0.246"
readonly ENTROPY="192.168.0.215"

# Allow access from:
# - monitoring hosts
# - cacti
# - the entropy VIP
for host in ${MONITORS} ${CACTI} ${ENTROPY}; do
    $IPT -A INPUT -p tcp --dport 8888 -s ${host} -j ACCEPT
done

Again, your assumption about what would work…

firewall { '08888 Allow egd connections':
    proto    => 'tcp',
    dport    => '8888',
    source   => [ '192.168.0.244', '192.168.0.238', '192.168.4.71',
                  '192.168.0.246', '192.168.0.215' ],
    action   => 'accept',
    provider => 'iptables',
}

…just results in the inclusion of a rule for only the first source host, with the rest being silently discarded.

This one seems to have an existing bug too; though it has a status of closed/fixed it certainly isn’t working in the most recent release. Maybe I need to be using the head of the repository for that one.

So, what to do?

Duplicating the firewall {} blocks is one option that’s always going to work as a last resort.

Puppet’s DSL doesn’t support any kind of iteration as far as I’m aware, though it will in future — no surprise, as iteration and looping are kind of a glaring omission.

Until then, does anyone know any tricks to cut down on the repetition here?
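One trick that works with current Puppet, no iteration needed, is to wrap the rule in a defined type and pass the array as the resource title: Puppet expands an array of titles into one resource per element, and each instance sees its element as ${name}. A rough sketch, assuming the module layout from above (bffirewall::icmp_drop is a name I've made up for illustration):

```puppet
define bffirewall::icmp_drop {
    # ${name} is one ICMP type per instance of this defined type.
    firewall { "00002 Disallow possibly harmful ICMP (${name})":
        proto    => 'icmp',
        icmp     => $name,
        action   => 'drop',
        provider => 'iptables',
    }
}

# An array of titles declares one bffirewall::icmp_drop per element.
bffirewall::icmp_drop { [ 'redirect', 'router-advertisement',
                          'router-solicitation', 'address-mask-request',
                          'address-mask-reply' ]: }
```

The same shape should work for the source-host case: make the source address the title and hard-code the dport inside the defined type.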

Categories: LUG Community Blogs

Martin Wimpress: OpenMediaVault on Debian

Planet HantsLUG - Sun, 22/06/2014 - 12:00

At the time of writing, OpenMediaVault 0.6 is pre-release, but it is possible to install it on Debian Wheezy in order to get some testing done.

Install Debian Wheezy on your target VM or test server. Go with the defaults until the 'Software selection' dialogue. Make sure everything is unselected, like this:

[ ] Debian desktop environment
[ ] Web server
[ ] Print server
[ ] SQL database
[ ] DNS Server
[ ] File server
[ ] Mail server
[ ] SSH server
[ ] Laptop
[ ] Standard system utilities

After the install is complete, reboot and login to the new Debian system as root.

Update the repository sources and add the contrib and non-free repositories.

nano /etc/apt/sources.list

It should look something like this:

deb http://ftp.uk.debian.org/debian/ wheezy main contrib non-free
deb-src http://ftp.uk.debian.org/debian/ wheezy main contrib non-free

deb http://security.debian.org/ wheezy/updates main contrib non-free
deb-src http://security.debian.org/ wheezy/updates main contrib non-free

# wheezy-updates, previously known as 'volatile'
deb http://ftp.uk.debian.org/debian/ wheezy-updates main contrib non-free
deb-src http://ftp.uk.debian.org/debian/ wheezy-updates main contrib non-free

Now add the OpenMediaVault repository.

echo "deb http://packages.openmediavault.org/public kralizec main" > /etc/apt/sources.list.d/openmediavault.list

Update.

apt-get update

Install the OpenMediaVault repository key and Postfix.

apt-get install openmediavault-keyring postfix
  • When the 'Postfix Configuration' dialogue is displayed choose No configuration.

Update again and install OpenMediaVault.

apt-get update
apt-get install openmediavault
  • When the 'Configuring mdadm' dialogue is displayed enter none.
  • Do you want to start MD arrays automatically? YES
  • When the 'ProFTPD configuration' dialogue is displayed choose standalone.

Initialise OpenMediaVault and reboot.

omv-initsystem
reboot

After the reboot you should be able to connect to the OpenMediaVault WebUI and login as admin with the password of openmediavault.

That's it. Get testing.

Categories: LUG Community Blogs

Martin Wimpress: Setting up BitSync on Debian

Planet HantsLUG - Sat, 21/06/2014 - 12:00

I've replaced Dropbox with BitTorrent Sync. In order to do this I have btsync running on a VPS (2 CPU, 2GB RAM, 400GB storage), my home server and assorted Arch Linux workstations.

I had a couple of reasons for migrating away from Dropbox.

  • Dropbox was costing $100 per year.
  • Dropbox's encryption model is weak, and I have data security/privacy concerns.

The VPS I am running BitTorrent Sync on costs $50 per year and provides four times the storage. I run btsync on a VPS so that there is always a server "in the cloud" available to sync with; this way my setup emulates what Dropbox used to do for me.

All my servers are running Debian and this is how I install btsync on Debian.

sh -c "$(curl -fsSL http://debian.yeasoft.net/add-btsync-repository.sh)"
sudo apt-get install btsync

This is how I respond to the prompts:

  • Do you want to define a default BitTorrent Sync instance? : YES
  • BitTorrent Sync Daemon Credentials:
  • BitTorrent Sync Daemon Group:
  • Niceness of the BitTorrent Sync Daemon: 0
  • On which portnumber should BitTorrent Sync listen? 0
  • BitTorrent Sync Listen Port: 12345
  • Do you want BitTorrent Sync to request port mapping via UPNP? NO
  • Download Bandwith Limit: 0
  • Upload Bandwith Limit: 0
  • Web Interface Bind IP Address: 0.0.0.0
  • Web Interface Listen Port: 8888
  • The username for accessing the web interface: yourusername
  • The password for accessing the web interface: yourpassword

As you'll see, I don't use UPnP on my VPS. I pick a specific port (not actually 12345, by the way) and open that port up with ufw. I also only allow access to the WebUI port from another server I own, which reverse proxies it via nginx.
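For what it's worth, the ufw side of that looks roughly like this. The port numbers follow the prompts above, and 192.0.2.10 is a placeholder for the nginx reverse-proxy host, not my real address:

```shell
# Allow the btsync listening port (TCP and UDP) from anywhere.
ufw allow 12345/tcp
ufw allow 12345/udp

# Only the reverse-proxy host may reach the WebUI port.
ufw allow from 192.0.2.10 to any port 8888 proto tcp

ufw enable
```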

btsync works really well: I have it syncing hundreds of thousands of files that amount to several hundred gigabytes of data. On my Arch Linux workstations I use the brilliant btsync-gui, and BitTorrent Sync is also available for Android.

That said, I still use a free Dropbox account to sync photos from my wife's Android phone and my own. I have a Dropbox instance running on my home file server, and every day it runs a script to automatically import these photos into Plex.

It's such a shame that, at the time of writing, btsync is closed source :-( Maybe that will change, but if it doesn't, SyncThing may well be the answer once it has matured a little.

Categories: LUG Community Blogs

Steve Kemp: The perils of the cloud..

Planet HantsLUG - Fri, 20/06/2014 - 13:18

Recently two companies have suffered problems due to compromised AWS credentials:

  • Code Spaces
    • The company has effectively folded. Their AWS account was compromised, and all their data and backups were deleted.
  • Bonsai
    • Within two minutes all their instances were terminated.
    • This is still ongoing - watch for updates on the recovery process.

I'm just about to commit to using Amazon for hosting DNS for paying customers, so this is the kind of thing that makes me paranoid.

I'll be storing DNS-data in Git, and if the zones were nuked on the Amazon-side I could re-upload them, but users would be dead regardless - because they'd need to update the nameservers in whois before the re-uploaded data would be useful.

I suspect I need to upload to two DNS providers, to get more redundancy.

Currently I have a working system which allows me to push DNS records to a Git repository, and that seamlessly triggers a DNS update (i.e. a webhook triggered by GitHub/Bitbucket/whatever).

Before I publish anything I need to write more code, more documentation, and agree on pricing details. Then I'll set up a landing page at http://dns-api.com/.

I've been challenged to find paying customers before launching, and thus far have two, which is positive.

The DHCP.io site has now been freed. I'm no longer going to try to commercialize it; instead I will only offer the Git-based product as a commercial service. On that basis I upped the service so users can manage up to five names per account - more if you mail me privately and beg ;)

(ObRandom: Google does hosted DNS with an API. They're expensive. I'm surprised I'd not heard of them doing this.)

Categories: LUG Community Blogs

Martin Wimpress: MATE Desktop on Debian Wheezy

Planet HantsLUG - Thu, 19/06/2014 - 22:00

I'm a member of the MATE Desktop team and until recently the majority of my involvement has been focused around Arch Linux.

However, I'm working on a MATE project that is based on a Debian derivative. MATE has recently been accepted into the Debian Backports repository for Wheezy, so I decided to do a "MATE from scratch" on Debian using an old netbook to get familiar with the MATE package naming differences between Arch Linux and Debian.

Install Debian

I installed Debian Wheezy from the netinst ISO to ensure the target install was as minimal as possible. I went with the defaults until the 'Software selection' dialogue; at this point, unselect everything except "SSH server", like this:

[ ] Debian desktop environment
[ ] Web server
[ ] Print server
[ ] SQL database
[ ] DNS Server
[ ] File server
[ ] Mail server
[X] SSH server
[ ] Laptop
[ ] Standard system utilities

Debian ISO with Firmware

If you're installing on hardware that requires additional firmware in order for it to work with Linux then use the netinst ISO that includes firmware.

Configure Debian

When the install is finished, reboot and configure Debian a little.

Repositories

You'll need to install lsb-release for the following to work.

apt-get install lsb-release

This is what I put in /etc/apt/sources.list.

cat >/etc/apt/sources.list <<EOF
deb http://ftp.uk.debian.org/debian/ $(lsb_release -cs) main contrib non-free
deb-src http://ftp.uk.debian.org/debian/ $(lsb_release -cs) main contrib non-free
deb http://security.debian.org/ $(lsb_release -cs)/updates main contrib non-free
deb-src http://security.debian.org/ $(lsb_release -cs)/updates main contrib non-free
# $(lsb_release -cs)-updates, previously known as 'volatile'
deb http://ftp.uk.debian.org/debian/ $(lsb_release -cs)-updates main contrib non-free
deb-src http://ftp.uk.debian.org/debian/ $(lsb_release -cs)-updates main contrib non-free
EOF

Backports

MATE is only available in the wheezy-backports repository.

cat >/etc/apt/sources.list.d/backports.list <<EOF
deb http://ftp.uk.debian.org/debian $(lsb_release -cs)-backports main contrib non-free
deb-src http://ftp.uk.debian.org/debian $(lsb_release -cs)-backports main contrib non-free
EOF

Update.

sudo apt-get update

All backports are deactivated by default (i.e. the packages are pinned to priority 100 by the use of ButAutomaticUpgrades: yes in the Release files). If you want to install something from backports, run:

apt-get -t wheezy-backports install "package"

Install MATE Desktop

First install the LightDM display manager.

apt-get install accountsservice lightdm lightdm-gtk-greeter

Now for the MATE Desktop itself.

apt-get -t wheezy-backports install mate-desktop-environment-extras

NetworkManager

I typically use NetworkManager, so let's install that too.

apt-get install network-manager-gnome

Supplementary

Depending on your hardware you may require CPU frequency utilities or additional firmware.

apt-get install cpufreqd cpufrequtils firmware-linux firmware-linux-nonfree

And, that's it! Reboot and you'll see the LightDM greeter waiting for your login credentials.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): tmux

Planet ALUG - Tue, 17/06/2014 - 11:19

tmux is the best thing ever. That is all.

No, that is not all. Here is how I make use of tmux to make my life measurably more awesome:

First, my .tmux.conf. This changes tmux's ctrl-b magic key binding to ctrl-a as I've grown far too used to hitting that from when I used screen. I set up a few other screen-like bindings too. Finally, I set a few options that make tmux work better with urxvt.

# Set the prefix to ^A.
unbind C-b
set -g prefix ^A
bind a send-prefix

# Bind c to new-window
unbind c
bind c new-window -c $PWD

# Bind space, n to next-window
unbind " "
bind " " next-window
unbind n
bind n next-window

# Bind p to previous-window
unbind p
bind p previous-window

# A few other settings to make things funky
set -g status off
set -g aggressive-resize on
set -g mode-keys vi
set -g default-terminal screen-256color
set -g terminal-overrides 'rxvt-unicode*:sitm@'

And then here's what I have near the top of my .bashrc:

# If tmux isn't already running, run it
[ -z "$TMUX" ] && exec ~/bin/tmux

...which goes with this, the contents of ~/bin/tmux:

#!/bin/bash

# If there are any sessions that aren't attached, attach to the first one
# Otherwise, start a new session
for line in $(tmux ls -F "#{session_name},#{session_attached}"); do
    name=$(echo $line | cut -d ',' -f 1)
    attached=$(echo $line | cut -d ',' -f 2)

    if [ $attached -eq 0 ]; then
        tmux attach -t $name
        exit
    fi
done

tmux -u

Basically, whenever I start a terminal session, if I'm not already inside tmux, I find a session that doesn't have a client attached and attach to it. If there aren't any, I create a new one.

This really tidies up my workflow and means that I never forget about any old sessions I'd detached.

Oh, and one last thing: ctrl-a s is the best thing in tmux ever. It shows a list of tmux sessions, which can be expanded to show what's running in them, and you can then interactively re-attach your terminal to one of them. In short, I can start a terminal from any desktop or vt and quickly attach to something that's happening on any other. I use this feature a lot.

Categories: LUG Community Blogs

Steve Kemp: DNS is now resolved

Planet HantsLUG - Tue, 17/06/2014 - 10:13

I used to work for Bytemark, being a sysadmin and sometimes handling support requests from end-users, along with their clients.

One thing that never got old was marking DNS-related tickets as "resolved", or managing to slip that word into replies.

Similarly, being married to a Finnish woman, you'd be amazed how often Finnish and Finished become interchangeable.

Anyway that's enough pun-discussion.

Over the past few days I've, obviously, been playing with DNS. There are two public results:

DHCP.io

This is my simple Dynamic-DNS host, which has now picked up a few users.

I posted a token in a previous entry, and I've had fun seeing how people keep changing the IP address of the host skx.dhcp.io. I should revoke the token and actually claim the name - but to be honest it is more fun seeing it update.

What is most interesting is that I can see it being used for real - I see from the access logs some people have actually scheduled curl to run on an hourly basis. Neat.
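In crontab terms that presumably looks something like this. The URL path and token here are made up for illustration, not the service's real API:

```shell
# m h dom mon dow  command
0 * * * *  curl -fsS https://dhcp.io/set/EXAMPLE-TOKEN >/dev/null 2>&1
```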

DNS-API.org

This is a simple lookup utility, allowing DNS queries to be made over the web.

Of the two sites this is perhaps the most useful, but again I expect it isn't unique.

That about wraps things up for the moment. It may well be the case that in the future there is some Git + DNS + Amazon integration for DNS-hosting, but I'm going to leave it alone for the moment.

Despite writing about DNS several times in the past the only reason this flurry of activity arose is that I'm hacking some Amazon & CPanel integration at the moment - and I wanted to experiment with Amazon's API some more.

So, we'll mark this activity as resolved, and I shall go make some coffee now this entry is Finnish.

ObRandomUpdate: At least there was a productive side-effect here - I created/uploaded to CPAN CGI::Application::Plugin::Throttle.

Categories: LUG Community Blogs