Planet HantsLUG

Planet HantsLUG - http://hantslug.org.uk/planet/

Martin Wimpress: Headless cloudprint server on Debian for MFC-7360N

Sat, 05/07/2014 - 12:00

I have a Brother MFC-7360N printer at home and there is also one at work. I wanted to get Cloudprint working with Android devices rather than use the Android app Brother provide, which is great when it works but deeply frustrating (for my wife) when it doesn't.

What I describe below is how to Cloudprint enable "Classic printers" using Debian Wheezy.

Install CUPS

Install CUPS and the Cloudprint requirements.

sudo apt-get install cups python-cups python-daemon python-pkg-resources

Install the MFC-7360N Drivers

I used the URL below to access the .deb files required.

If you're running a 64-bit Debian, then install ia32-libs first.

sudo apt-get install ia32-libs

Download and install the MFC-7360N drivers.

wget -c http://download.brother.com/welcome/dlf006237/mfc7360nlpr-2.1.0-1.i386.deb
wget -c http://download.brother.com/welcome/dlf006239/cupswrapperMFC7360N-2.0.4-2.i386.deb
sudo dpkg -i --force-all mfc7360nlpr-2.1.0-1.i386.deb
sudo dpkg -i --force-all cupswrapperMFC7360N-2.0.4-2.i386.deb

Configure CUPS

Edit the CUPS configuration file commonly located in /etc/cups/cupsd.conf and make the section that looks like:

# Only listen for connections from the local machine.
Listen localhost:631
Listen /var/run/cups/cups.sock

look like this:

# Listen on all interfaces
Port 631
Listen /var/run/cups/cups.sock

Modify the Apache specific directives to allow connections from everywhere as well. Find the following section in /etc/cups/cupsd.conf:

<Location />
  # Restrict access to the server...
  Order allow,deny
</Location>

# Restrict access to the admin pages...
<Location /admin>
  Order allow,deny
</Location>

# Restrict access to the configuration files...
<Location /admin/conf>
  AuthType Default
  Require user @SYSTEM
  Order allow,deny
</Location>

Add Allow All after each Order allow,deny so it looks like this:

<Location />
  # Restrict access to the server...
  Order allow,deny
  Allow All
</Location>

# Restrict access to the admin pages...
<Location /admin>
  Order allow,deny
  Allow All
</Location>

# Restrict access to the configuration files...
<Location /admin/conf>
  AuthType Default
  Require user @SYSTEM
  Order allow,deny
  Allow All
</Location>

Add the MFC-7360N to CUPS

If your MFC-7360N is connected to your server via USB then you should be all set. Log in to the CUPS administration interface on http://yourserver:631 and modify the MFC7360N printer (if one was created when the drivers were installed), then make sure you can print a test page via CUPS before proceeding.
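
Before involving Cloudprint, it is worth confirming that CUPS itself can drive the printer from the command line. A hedged example follows; the queue name and the test page path are whatever your install created, so adjust to suit:

lpstat -p                                        # list the queues CUPS knows about and their state
lp -d MFC7360N /usr/share/cups/data/testprint    # send the standard CUPS test page to the queue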

Install Cloudprint and Cloudprint service

wget -c http://davesteele.github.io/cloudprint-service/deb/cloudprint_0.11-5.1_all.deb
wget -c http://davesteele.github.io/cloudprint-service/deb/cloudprint-service_0.11-5.1_all.deb
sudo dpkg -i cloudprint_0.11-5.1_all.deb
sudo dpkg -i cloudprint-service_0.11-5.1_all.deb

Authenticate

Google accounts with 2 step verification enabled need to use an application-specific password.

Authenticate cloudprintd.

sudo service cloudprintd login

You should see something like this.

Accounts with 2 factor authentication require an application-specific password
Google username: you@example.org
Password:
Added Printer MFC7360N

Start the Cloudprint daemon.

sudo service cloudprintd start
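
Before checking Google's side, you can confirm the daemon is actually running on the server. This quick check is my addition rather than part of the original instructions:

pgrep -fl cloudprint    # should list the cloudprintd process if it started cleanly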

If everything is working correctly your printer should now be listed in your Google Cloud Print account.

Printing from mobile devices

Android

Add the Google Cloud Print app to Android devices and you'll be able to configure your printer preferences and print from Android.

Chrome and Chromium

When printing from within Google Chrome and Chromium you can now select Cloudprint as the destination and choose your printer.

Categories: LUG Community Blogs

Adam Trickett: Picasa Web: Tour de France

Sat, 05/07/2014 - 08:00

The Tour de France visits Yorkshire

Location: Yorkshire
Date: 5 Jul 2014
Number of Photos in Album: 67

View Album

Categories: LUG Community Blogs

Steve Kemp: And so it begins ...

Thu, 03/07/2014 - 22:00

1. This weekend I will apply to rejoin the Debian project, as a developer.

2. In the meantime I've begun releasing some code which powers the git-based DNS hosting site/service.

3. This is the end of my list.

4. I lied. This is the end of my list. Powers of two, baby.

Categories: LUG Community Blogs

Alan Pope: Creating Time-lapse Videos on Ubuntu

Thu, 03/07/2014 - 16:17

I’ve previously blogged about how I sometimes set up a webcam to take pictures and turn them into videos. I thought I’d update that here with something new I’ve done: fully automated time lapse videos on Ubuntu. Here’s what I came up with:-

(apologies for the terrible music, I added that from a pre-defined set of options on YouTube)

(I quite like the cloud that pops into existence at ~27 seconds in)

Over the next few weeks there’s an Air Show where I live and the skies fill with all manner of strange aircraft. I’m usually working so I don’t always see them as they fly over, but usually hear them! I wanted a way to capture the skies above my house and make it easily available for me to view later.

So my requirements were basically this:-

  • Take pictures at fairly high frequency – one per second – the planes are sometimes quick!
  • Turn all the pictures into a time lapse video – possibly at one hour intervals
  • Upload the videos somewhere online (YouTube) so I can view them from anywhere later
  • Delete all the pictures so I don’t run out of disk space
  • Automate it all
Taking the picture

I’ve already covered this really, but for this job I have tweaked the .webcamrc file to take a picture every second, only save images locally & not to upload them. Here’s the basics of my .webcamrc:-

[ftp]
dir = /home/alan/Pictures/webcam/current
file = webcam.jpg
tmp = uploading.jpeg
debug = 1
local = 1

[grab]
device = /dev/video0
text = popeycam %Y-%m-%d %H:%M:%S
fg_red = 255
fg_green = 0
fg_blue = 0
width = 1280
height = 720
delay = 1
brightness = 50
rotate = 0
top = 0
left = 0
bottom = -1
right = -1
quality = 100
once = 0
archive = /home/alan/Pictures/webcam/archive/%Y/%m/%d/%H/snap%Y-%m-%d-%H-%M-%S.jpg

Key things to note: "delay = 1" gives us an image every second. The archive directory is where the images will be stored, in sub-folders for easy management and later deletion. That's it. Put that in the home directory of the user taking pictures and then run webcam. Watch your disk space get eaten up.

Making the video

This is pretty straightforward and can be done in various ways. I chose to do two-pass x264 encoding with mencoder. In this snippet we take the images from one hour – in this case midnight to 1AM on 2nd July 2014 – from /home/alan/Pictures/webcam/archive/2014/07/02/00 and make a video in /home/alan/Pictures/webcam/2014070200.avi and a final output in /home/alan/Videos/webcam/2014070200.avi which is the one I upload.

mencoder "mf:///home/alan/Pictures/webcam/archive/2014/07/02/00/*.jpg" -mf fps=60 -o /home/alan/Pictures/webcam/2014070200.avi -ovc x264 -x264encopts direct=auto:pass=1:turbo:bitrate=9600:bframes=1:me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 -vf scale=-1:-10,harddup mencoder "mf:///home/alan/Pictures/webcam/archive/2014/07/02/00/*.jpg" -mf fps=60 -o /home/alan/Pictures/webcam/2014070200.avi -ovc x264 -x264encopts direct=auto:pass=2:bitrate=9600:frameref=5:bframes=1:me=umh:partitions=all:trellis=1:qp_step=4:qcomp=0.7:direct_pred=auto:keyint=300 -vf scale=-1:-10,harddup -o /home/alan/Videos/webcam/2014070200.avi Upload videos to YouTube

The project youtube-upload came in handy here. It’s pretty simple with a bunch of command line parameters – most of which should be pretty obvious – to upload to youtube from the command line. Here’s a snippet with some credentials redacted.

python youtube_upload/youtube_upload.py --email=########## --password=########## --private --title="2014070200" --description="Time lapse of Farnborough sky at 00 on 02 07 2014" --category="Entertainment" --keywords="timelapse" /home/alan/Videos/webcam/2014070200.avi

I have set the videos all to be private for now, because I don’t want to spam any subscriber with a boring video of clouds every hour. If I find an interesting one I can make it public. I did consider making a second channel, but the youtube-upload script (or rather the YouTube API) doesn’t seem to support specifying a different channel from the default one. So I’d have to switch to a different channel by default to work around this, and then maybe make them all public by default.

In addition YouTube sends me a patronising “Well done Alan” email whenever a video is uploaded, so I know when it breaks: I stop getting those mails.

Delete the pictures

This is easy, I just rm the /home/alan/Pictures/webcam/archive/2014/07/02/00 directory once the upload is done. I don’t bother to check if the video uploaded okay first because if it fails to upload I still want to delete the pictures, or my disk will fill up. I already have the videos archived, so can upload those later if the script breaks.
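
A minimal sketch of that cleanup step, using the same example hour as above (the real script on GitHub is the authoritative version):

rm -rf /home/alan/Pictures/webcam/archive/2014/07/02/00    # drop the hour's images once the video exists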

Automate it all

webcam is running constantly in a ‘screen’ window, so that part is easy. I could detect when it dies and re-spawn it, maybe; it has been known to crash now and then. I’ll get to that when it happens.
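
For anyone reproducing this, a hedged example of keeping webcam running detached; the session name here is arbitrary:

screen -dmS webcam webcam    # start webcam in a detached screen session
screen -r webcam             # re-attach later to check on it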

I created a cron job which runs at 10 mins past the hour, and collects all the images from the previous hour.

10 * * * * /home/alan/bin/encode_upload.sh

I learned the useful “1 hour ago” option to the GNU date command. This lets me pick up the images from the previous hour and deals with all the silly calculation to figure out what the previous hour was.
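
To make that concrete, here is roughly how the previous hour resolves to an archive path (a sketch, not the exact lines from my script):

HOUR=$(date --date='1 hour ago' +%Y/%m/%d/%H)    # e.g. 2014/07/02/00 when run at 01:10 on 2nd July 2014
SRC=/home/alan/Pictures/webcam/archive/$HOUR     # the folder of images to encode and then delete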

Here (on github) is the final script. Don’t laugh.

Categories: LUG Community Blogs

Steve Kemp: Slowly releasing bits of code

Sun, 29/06/2014 - 14:37

As previously mentioned I've been working on git-based DNS hosting for a while now.

The site was launched for real last Sunday, and since that time I've received enough paying customers to cover all the costs, which is nice.

Today the site was "relaunched", by which I mean almost nothing has changed, except the site looks completely different - since the templates/pages/content is all wrapped up in Bootstrap now, rather than my ropy home-made table-based layout.

This week coming I'll be slowly making some of the implementation available as free-software, having made a start by publishing CGI::Application::Plugin::AB - a module designed for very simple A/B testing.

I don't think there will be too much interest in most of the code, but one piece I'm reasonably happy with is my webhook-receiver.

Webhooks are at the core of how my service is implemented:

  • You create a repository to host your DNS records.
  • You configure a webhook to be invoked when pushing to that repository.
  • The webhook will then receive updates, and magically update your DNS.

Because the webhook must respond quickly – otherwise github/bitbucket/whatever will believe you've timed out and report an error – you can't do much work on the back-end.

Instead I've written a component that listens for incoming HTTP POSTs, parses the body to determine which repository it came from, and then enqueues the data for later processing.

A different process will be constantly polling the job-queue (which in my case is Redis, but could be beanstalkd, or similar. Hell even use MySQL if you're a masochist) and actually do the necessary magic.
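
As a very rough sketch of the worker side of that split (the Redis list name "jobs" and the processing step are placeholders, not my actual implementation), the polling loop can be as simple as:

#!/bin/sh
# Block until a webhook payload is pushed onto the "jobs" list, then process it.
while true; do
    JOB=$(redis-cli --raw BLPOP jobs 0 | tail -n 1)    # BLPOP prints the list name, then the payload
    echo "rebuilding DNS for: $JOB"                    # the real work (zone regeneration) goes here
done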

Most of the webhook processor is trivial, but handling different services (github, bitbucket, etc) while pretending they're all the same is hard. So my little toy-service will be released next week and might be useful to others.

ObRandom: New pricing unveiled for users of a single zone - which is a case I never imagined I'd need to cover. Yay for A/B testing :)

Categories: LUG Community Blogs

Martin Wimpress: Humax Foxsat HDR Custom Firmware

Sun, 29/06/2014 - 12:00

I've had these notes kicking around for absolutely ages. I haven't checked to see if this stuff is still accurate because, during the last 12 months or so, our viewing habits have changed and we almost exclusively watch streamed content now.

That said, my father-in-law gave us a Humax Foxsat HDR Freesat digital recorder as a thank you for some work I did for him. It turns out the Humax Foxsat HDR is quite hackable.

Hard Disk Upgrade

I contributed to the topic below. The Humax firmware only supports disks up to 1TB, but the hackers' method works for 2TB drives, possibly bigger, though I haven't tried. I've upgraded mine and my father-in-law's to 2TB without any problems.

Custom Firmware

These are the topics that discuss the custom firmware itself and include the downloads. Once the custom firmware is installed and its advanced interface has been enabled you can enable several add-ons such as Samba, Mediatomb, ssh, ftp, etc.

Web Interface

This is the topic about the web interface. No download is required; it is bundled with the custom firmware above.

Channel Editor

The channel editor is installed and configured via the web interface and is one of my favourite add-ons. It also allows you to enable non-freesat channels in the normal guide.

Non Freesat channels

We get our broadcasts from Astra 2A / Astra 2B / Astra 2D / Eurobird 1 (28.2°E). Other free to air channels are available and KingOfSat lists them all. These channels can only be added via the Humax setup menus, but can then be presented in the normal EPG using the Channel Editor above.

Decrypt HD recordings

I never actually tried this, but what follows might be useful.

OTA updates

This is not so relevant now, since Humax haven't released an OTA update for some time.

I have not yet found a way to prevent over the air updates from being automatically applied. When an over the air update is applied the Humax still works, but the web interface and all the add-ons stop working. This can be solved by waiting for the custom firmware to be updated (which happens remarkably quickly) and then re-flashing the custom firmware. All the add-ons should start working again.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: New Time Piece

Sat, 28/06/2014 - 17:11

Personal portable time pieces are strange things. My first watch was a small red wind-up Timex that I was given when I could read the time. The strap eventually frayed but it was still in good order when I was given a Timex LCD digital watch, once they became popular and cheap. That watch lasted all of 12 months before it died, to be replaced by a Casio digital watch which lasted many years and many straps.

At University I had my grandfather's watch, a mechanical Swiss watch that was very accurate as long as I remembered to keep it wound up. I continued to wear it as my regular watch in my first job. The only problem with this watch was it wasn't shock or waterproof and it was no good when I forgot to wind it up! My better half got me a Casio hybrid analogue/digital watch which I used for many years as my outdoors/DIY watch.

I was somewhat disappointed when the resin case failed; though the watch was okay it wasn't wearable anymore, so I replaced it with two watches: another similar Casio and a titanium cased Lorus quartz analogue display watch. The Casio failed in exactly the same way as the previous ones: dead straps and then a case failure.

My most recent watch is a Seiko titanium cased quartz analogue display similar to the Lorus (they are the same company), but this one is slightly nicer and the battery is solar recharged so it should never need replacing. The strap is also made of titanium so, unlike the continually failing resin straps, it should also last quite a bit longer...

The irony of all these watches is that I sit in front of a computer for a living, and often at home - all of which have network synchronised clocks on them that are far more reliable than any of my wrist-watches. When I'm not at a computer I usually have my mobile phone with me, or another high precision time piece is available...

Categories: LUG Community Blogs

Martin Wimpress: Integrating Dropbox photo syncing with Open Media Vault and Plex

Sat, 28/06/2014 - 12:00

I've installed Open Media Vault on a HP ProLiant MicroServer G7 N54L and use it as a media server for the house. OpenMediaVault (OMV) is a network attached storage (NAS) solution based on Debian Linux. At the time of writing OMV 0.5.x is based on Debian 6.0 (Squeeze).

I use a free Dropbox account to sync photos from mine and my wife's Android phones and wanted to automate the import of these photo uploads into Plex, which is also running on Open Media Vault.

Installing Dropbox on Open Media Vault 0.5.x

I looked for a Dropbox Plugin for Open Media Vault and found this:

Sadly, at the time of writing, it is unfinished and I didn't have the time to go and learn the Open Media Vault plugin API.

The Open Media Vault forum does include a Dropbox HOW-TO which is very similar to how I've run Dropbox on headless Linux servers in the past. So, I decided to adapt my existing notes to Open Media Vault.

Create a Dropbox Share

Create a Dropbox share via the OMV WebUI.

  • Access Right Management -> Shared Folders

I gave mine the name "Dropbox". I know, very original.

Installing Dropbox on a headless server

Download the latest Dropbox stable release for 32-bit or 64-bit.

wget -O dropbox.tar.gz "http://www.dropbox.com/download/?plat=lnx.x86"
wget -O dropbox.tar.gz "http://www.dropbox.com/download/?plat=lnx.x86_64"

Extract the archive and install Dropbox in /opt.

cd
tar -xvzf dropbox.tar.gz
sudo mv ~/.dropbox-dist /opt/dropbox
sudo find /opt/dropbox/ -type f -exec chmod 644 {} \;
sudo chmod 755 /opt/dropbox/dropboxd
sudo chmod 755 /opt/dropbox/dropbox
sudo ln -s /opt/dropbox/dropboxd /usr/local/bin/dropboxd

Run dropboxd.

/usr/local/bin/dropboxd

You should see output like this:

This client is not linked to any account... Please visit https://www.dropbox.com/cli_link?host_id=d3adb33fcaf3d3adb33fcaf3d3adb33f to link this machine.

Visit the URL, login with your Dropbox account and link the account. You should see the following.

Client successfully linked, Welcome Web!

dropboxd will now create a ~/Dropbox folder and start synchronizing. Stop dropboxd with CTRL+C.

Symlink the Dropbox share

Log in to the OMV server as root and symlink the Dropbox share you created earlier to the Dropbox directory in root's home directory.

mv ~/Dropbox ~/Dropbox-old
ln -s /media/<UUID>/Dropbox Dropbox
rsync -av -W --progress ~/Dropbox-old/ ~/Dropbox/

init.d

To run Dropbox as a daemon with init.d, create /etc/init.d/dropbox with the following content.

#!/bin/sh
### BEGIN INIT INFO
# Provides:          dropbox
# Required-Start:    $local_fs $remote_fs $network $syslog $named
# Required-Stop:     $local_fs $remote_fs $network $syslog $named
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# X-Interactive:     false
# Short-Description: dropbox service
### END INIT INFO

# Space-separated list of users to run the Dropbox daemon for.
DROPBOX_USERS="root"

DAEMON=/opt/dropbox/dropbox

start() {
    echo "Starting dropbox..."
    for dbuser in $DROPBOX_USERS; do
        HOMEDIR=`getent passwd $dbuser | cut -d: -f6`
        if [ -x $DAEMON ]; then
            HOME="$HOMEDIR" start-stop-daemon -b -o -c $dbuser -S -u $dbuser -x $DAEMON
        fi
    done
}

stop() {
    echo "Stopping dropbox..."
    for dbuser in $DROPBOX_USERS; do
        if [ -x $DAEMON ]; then
            start-stop-daemon -o -c $dbuser -K -u $dbuser -x $DAEMON
        fi
    done
}

status() {
    for dbuser in $DROPBOX_USERS; do
        dbpid=`pgrep -u $dbuser dropbox`
        if [ -z "$dbpid" ]; then
            echo "dropboxd for USER $dbuser: not running."
        else
            echo "dropboxd for USER $dbuser: running (pid $dbpid)"
        fi
    done
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart|reload|force-reload)
        stop
        start
        ;;
    status)
        status
        ;;
    *)
        echo "Usage: /etc/init.d/dropbox {start|stop|reload|force-reload|restart|status}"
        exit 1
        ;;
esac

exit 0

Enable the init.d script.

sudo chmod +x /etc/init.d/dropbox
sudo update-rc.d dropbox defaults

Starting and Stopping the Dropbox daemon

Use /etc/init.d/dropbox start to start and /etc/init.d/dropbox stop to stop.

Dropbox client

It is recommended to download the official Dropbox client to configure Dropbox and get its status.

wget "http://www.dropbox.com/download?dl=packages/dropbox.py" -O dropbox chmod 755 dropbox sudo mv dropbox /usr/local/bin/

You can check on Dropbox status by running the following.

dropbox status

For usage instructions run dropbox help.
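
On a headless server you may not want every Dropbox folder synced. As a hedged example, the official client script supports selective sync via its exclude command (the folder name below is just an illustration):

dropbox exclude add ~/Dropbox/Work    # stop syncing a folder you don't need on the server
dropbox exclude list                  # show which folders are currently excluded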

Photo importing

So, the reason for doing all this is that I now have a Dropbox instance running on my home file server and every day it runs a script that I wrote to automatically import new photos into a directory that Plex monitors. I'll post details about my photo sorting script, Phort, at a later date.

Categories: LUG Community Blogs

Steve Kemp: So I accidentally ... a service.

Mon, 23/06/2014 - 20:44

This post is partly introspection, and partly advertising. Skip it if either annoys you.

Back in February I was thinking about what to do with myself. I had two main options "Get a job", and "Start a service". Because I didn't have any ideas that seemed terribly interesting I asked people what they would pay for.

There were several replies, largely based around "infrastructure hosting" (which was pretty much a 50/50 split between "DNS hosting" and project hosting with something like trac, redmine, or similar).

At the time DNS seemed hard, and later I discovered there were already at least two well-regarded people doing DNS things, with revision control.

So I shelved the idea, after reaching out to both companies to no avail. (This later lead to drama, but we'll pretend it didn't.) Ultimately I sought and acquired gainful employment.

Then, during the course of my gainful employment I was exposed to Amazon's Route53 service. It looked like I was going to be doing many things with this, so I wanted to understand it more thoroughly than I did. That led to the creation of a Dynamic-DNS service - which seemed to be about the simplest thing you could do with the ability to programmatically add/edit/delete DNS records via an API.

As this was a random hack put together over the course of a couple of nights I didn't really expect it to be any more popular than anything else I'd deployed, but with the sudden influx of users I wanted to see if I could charge people. Ultimately many people pretended they'd pay, but nobody actually committed. So on that basis I released the source code and decided to ignore the two main missing features - lack of MX records, and lack of sub-sub-domains. (Isn't it amazing how people who claim they want "open source" so frequently mean they want something with zero cost, which they can run, and never modify or contribute toward?)

The experience of doing that though, and the reminder of the popularity of the original idea, made me think that I could do a useful job with Git + DNS combined. That led to DNS-API - GitHub based DNS hosting.

It is early days, but it looks like I have a few users, and if I can get more then I'll be happy.

So if you want to store your DNS records in a (public) GitHub repository, and get them hosted on geographically diverse anycasted servers .. well you know where to go: Github-based DNS hosting.

Categories: LUG Community Blogs

Tony Whitmore: Tom Baker at 80

Mon, 23/06/2014 - 18:31

Back in March I photographed the legendary Tom Baker at the Big Finish studios in Kent. The occasion was the recording of a special extended interview with Tom, to mark his 80th birthday. The interview was conducted by Nicholas Briggs, and the recording is being released on CD and download by Big Finish.

I got to listen in to the end of the recording session and it was full of Tom’s own unique form of inventive story-telling, as well as moments of reflection. I got to photograph Tom on his own using a portable studio setup, as well as with Nick and some other special guests. All in about 7 minutes! The cover has been released now and it looks pretty good, I think.

The CD is available for pre-order from the Big Finish website now. Pre-orders will be signed by Tom, so buy now!

Categories: LUG Community Blogs