Planet HantsLUG

Planet HantsLUG - http://hantslug.org.uk/planet/

Steve Kemp: Tagging images, and maintaining collections?

Thu, 03/04/2014 - 12:02

I'm an amateur photographer, although these days I tend to drop the amateur prefix, given that I shoot people for cash at least once a month.

(It isn't my main job, and I'd never actually want it to be, because I'm certain I'd become unhappy hustling for jobs and doing the promotion thing.)

Anyway, over the years I've built up a large library of images, mostly organized in a hierarchy of directories beneath ~/Images.

Unlike most photographers I don't use Aperture, lighttable, or any similar library-management software. I shoot my images in RAW, convert to JPG via RawTherapee, and keep both versions of the images.

In short, I don't want to mix the "library management" functions with the "RAW conversion", because I do regard them as two separate steps. That said, I'm reaching a point where I do want to start tagging images, and finding them more quickly.

In the past I wrote a couple of simple tools to inject tags into the EXIF data of images, and then indexed them. But that didn't work so well in practice. I'm starting to think instead I should index images into sqlite, recording for each image (a rough sketch follows the list):

  • Size.
  • Date.
  • Content hash.
  • Tags.
  • Path.
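
Just as a rough sketch of what I have in mind, assuming sha1 as the content hash (the table layout, database path, and file extensions are illustrative, not settled):

sqlite3 ~/Images/index.db 'CREATE TABLE IF NOT EXISTS images (path TEXT PRIMARY KEY, size INTEGER, mtime TEXT, sha1 TEXT, tags TEXT);'

find ~/Images -type f \( -iname '*.jpg' -o -iname '*.nef' \) | while read -r img
do
    size=$(stat -c %s "$img")
    mtime=$(stat -c %y "$img")
    hash=$(sha1sum "$img" | awk '{print $1}')
    sqlite3 ~/Images/index.db "INSERT OR REPLACE INTO images VALUES ('$img', $size, '$mtime', '$hash', '');"
done

(No escaping of awkward filenames there, obviously; it's only meant to show the shape of the thing.)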

The downside is that this breaks utterly as soon as you move images around on-disk, which is something my previous EXIF-manipulation was designed to avoid.

Anyway, I'm still thinking it over, but I know that the existing tools such as F-Spot, Shotwell, DigiKam, and similar aren't suitable. So I either need to go standalone and use EXIF tags, accepting the fact that the tags I enter won't be visible to other tools, or I cope with the file-rename issue by attempting to update an existing sqlite database via hash/size/etc.
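
Repairing the index after a rename could then be a matter of matching on the hash; something like this, with a made-up path:

new_path=~/Images/2014/weddings/dsc_0042.jpg      # wherever the file has ended up
hash=$(sha1sum "$new_path" | awk '{print $1}')
sqlite3 ~/Images/index.db "UPDATE images SET path='$new_path' WHERE sha1='$hash';"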

Categories: LUG Community Blogs

Tony Whitmore: Ubuntu Podcast – Season 7 starts tomorrow!

Tue, 01/04/2014 - 20:18

Tomorrow evening we’ll be bringing a brand new season of the Ubuntu Podcast to your ears. After an extended winter break, we’re ready to dust off our microphones and mixers, fire up our laptops and dive head first into the new season. We’ll be streaming the show live at 2030 BST so you can listen and even participate through the IRC channel. Visit the live page on the website to find out more.

As we did last year, we will be releasing new episodes for download every week. If you can’t wait for that, listen live on alternate Wednesday evenings for about an hour. You can check the recording dates on our website or add them to your Google calendar.

The show will be much as you know and hopefully love it: A mix of discussion, interviews, news, silliness and cake. It would be great if you could join us at 2030 BST tomorrow (Wednesday 2nd April) for the first live show of the season!

Categories: LUG Community Blogs

Debian Bits: Debian Project elects Javier Merino Cacho as Project Leader

Tue, 01/04/2014 - 11:25

This post was an April Fools' Day joke.

In accordance with its constitution, the Debian Project has just elected Javier Merino Cacho as Debian Project Leader. More than 80% of voters put him as their first choice (or equal first) on their ballot papers.

Javier's large majority over his opponents shows how his inspiring vision for the future of the Debian project is largely shared by the other developers. Lucas Nussbaum and Neil McGovern also gained a lot of support from Debian project members, both coming many votes ahead of the None of the above ballot choice.

Javier has been a Debian Developer since February 2012 and, among other packages, works on keeping the mercurial package under control, as mercury is very poisonous to trout.

After it was announced that he had won this year's election, Javier said: "I'm flattered by the trust that Debian members have put in me. One of the main points in my platform is to remove the 'Debian is old and boring' image. In order to change that, my first action as DPL is to encourage all Debian Project Members to wear a red clown nose in public."

The main points of his platform relate to improving the communication style on mailing lists through an innovative filter called aponygisator, to making Debian less "old and boring", and to solving technical issues among developers with bare-handed fights. Betting on the fights will not only be allowed but encouraged, for fundraising reasons.

Javier also contemplated the use of misleading talk titles such as The use of cannabis in contemporary ages: a practical approach and Real Madrid vs Barcelona to lure new users and contributors to Debian events.

Javier's platform was collaboratively written by a team of communication experts and high profile Debian contributors during the last DebConf. It has since evolved thanks to the help of many other contributors.

Categories: LUG Community Blogs

Martin Wimpress: Installing Nikola on Debian

Mon, 31/03/2014 - 16:19

Nikola is a static site and blog generator written in Python that I've been using for a good while now. This blog post describes how to install Nikola on Debian. Now, this may look like a long-winded way to install Nikola, given that a Debian .deb package exists, but in my opinion it is the correct way to install Nikola on Debian.

Installing Python

First you'll need Python and virtualenvwrapper.

sudo apt-get install libpython2.7 python2.7 python2.7-dev python2.7-minimal

Remove any apt-installed Python packages that we are about to replace. The versions of these packages in the Debian repositories soon get stale.

sudo apt-get purge python-setuptools python-virtualenv python-pip python-profiler

Install pip.

curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py
sudo python2.7 get-pip.py

Use pip to install virtualenv and virtualenvwrapper.

sudo pip install virtualenv --upgrade
sudo pip install virtualenvwrapper

The Snakepit

Create a "Snakepit" directory for storing all the virtualenvs.

mkdir ~/Snakepit

Add the following to your ~/.bashrc to enable virtualenvwrapper.

export WORKON_HOME=${HOME}/Snakepit

if [ -f /usr/local/bin/virtualenvwrapper.sh ]; then
    source /usr/local/bin/virtualenvwrapper.sh
elif [ -f /usr/bin/virtualenvwrapper.sh ]; then
    source /usr/bin/virtualenvwrapper.sh
fi

Create a virtualenv for Nikola

Open a new shell to ensure that the virtualenvwrapper configuration is active.

The following will create a new virtualenv called nikola-640, based on Python 2.7.

mkvirtualenv -p /usr/bin/python2.7 nikola-640

Working on a virtualenv

To workon, or activate, an existing virtualenv do the following.

workon nikola-640

You can switch to another virtualenv at any time; just use workon envname. Your shell prompt changes while a virtualenv is being worked on, to indicate which one is currently active.

While working on a virtualenv you can pip install whatever you need, or manually install any Python libraries, safe in the knowledge that you will not damage any other virtualenvs or the global packages in the process. This is very useful for developing a new branch which may have different library requirements from master/HEAD.

When you are finished working in a virtualenv you can deactivate it by simply executing deactivate.
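
A typical session looks something like this (the package name is only an illustration):

workon nikola-640
pip install requests      # installed into this virtualenv only
pip freeze                # lists just this environment's packages
deactivate                # back to the system Python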

Install Nikola requirements

Nikola will be powered by Python 2.7, and some additional packages are required.

sudo apt-get install python2.7-dev libfreetype6-dev libjpeg8-dev libxslt1-dev libxml2-dev libyaml-dev

What are these requirements for?
  • python2.7-dev provides the header files for Python 2.7 so that Python modules with C extensions can be built.

The following are required to build pillow, the Python imaging library.

  • libjpeg8-dev
  • libfreetype6-dev

The following are required to build lxml, a Python XML library.

  • libxml2-dev
  • libxslt1-dev

The following are required to build python-coveralls.

  • libyaml-dev

Install Nikola

Download Nikola.

mkdir -p ${VIRTUAL_ENV}/src
cd ${VIRTUAL_ENV}/src
wget https://github.com/getnikola/nikola/archive/v6.4.0.tar.gz -O nikola-640.tar.gz
tar zxvf nikola-640.tar.gz
cd nikola-6.4.0

Install the Nikola requirements.

pip install -r requirements-full.txt

If you intend to use the Nikola planetoid (Planet generator) plugin you'll also need the following.

pip install peewee feedparser

Actually install Nikola.

python setup.py install

Nikola is now installed. nikola help and the Nikola Handbook will assist you from here on.
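
As a quick smoke test of the install, something along these lines should work (the site name is just an example):

workon nikola-640
nikola init my_site       # scaffold a new, empty site
cd my_site
nikola build              # render the site into output/
nikola serve              # preview at http://localhost:8000/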

Categories: LUG Community Blogs

Steve Kemp: Some things on DNS and caching

Sat, 29/03/2014 - 19:16

Although there weren't too many comments on my "what would you pay for?" post, I did get some mails.

I was reminded about this via Mario Lang's post, which echoed a couple of private mails I received.

Despite being something that I take for granted, perhaps because my hosting comes from Bytemark, people do seem willing to pay money for DNS hosting.

Which is odd. I mean you could do it very very very cheaply if you had just four virtual machines. You can get complex and be geo-fancy, and you could use anycast on a small AS, but really? You could just deploy four virtual machines to provide a.ns, b.ns, c.ns, d.ns, and be better than 90% of DNS hosters out there.

The thing that many people mentioned was Git-backed, or Git-based, DNS. Which would be trivial if you used tinydns, and not much harder if you used BIND.

I suspect I'm "not allowed" to do DNS-things for a while, due to my contract with Dyn, but it might be worth checking...

ObRandom: Beat me to it: register gitdns.io, or similar, and configure hooks from GitHub to compile tinydns records.
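
However the rebuild gets triggered (a post-receive hook on a mirror, or a webhook-driven pull), the deployment step could be as small as this sketch; the repository and service paths are my assumptions:

#!/bin/sh
# Check out the pushed zone data into the tinydns root and rebuild the CDB.
set -e
git --git-dir=/srv/git/dns.git --work-tree=/etc/tinydns/root checkout -f master
cd /etc/tinydns/root
tinydns-data              # compiles 'data' into 'data.cdb' atomically

The data file itself is just lines like +www.example.org:192.0.2.1 for A records, which is exactly what makes reviewing changes in Git so pleasant.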

In other news, I started documenting early thoughts about my caching reverse proxy, which has now got a name: stockpile.

I wrote some stub code using node.js, and although it was functional it soon became callback hell:

  • Is this resource cachable?
  • Does this thing exist in the cache already?
  • Should we return the server's response to the client, archive to memcached, or do both?

Expressing the rules neatly is also a challenge. I want the server core to be simple and the configuration to be something like:

is_cachable ( vhost, source, request, backend )
{
    /**
     * If the file is static, then it is cachable.
     */
    if ( request.url.match( /\.(jpg|png|txt|html?|gif)$/i ) ) {
        return true;
    }

    /**
     * If there is a cookie then the answer is false.
     */
    if ( request.has_cookie? ) {
        return false;
    }

    /**
     * If the server is alive we'll now pass the remainder through it,
     * if not then we'll serve from the cache.
     */
    if ( backend.alive? ) {
        return false;
    } else {
        return true;
    }
}

I can see there is value in judging cachability based on the server's response, but I plan to ignore that except for "Expires:", "ETag", etc.

Anyway callback hell does make me want to reexamine the existing C/C++ libraries out there. Because I think I could do better.

Categories: LUG Community Blogs

Steve Kemp: A diversion on off-site storage

Wed, 26/03/2014 - 18:53

Yesterday I took a diversion from thinking about my upcoming cache project, largely because I took some pictures inside my house, and realized my offsite backup was getting full.

I have three levels of backups:

  • Home stuff on my desktop is replicated to my wife's desktop, and vice-versa.
  • A simple server running rsync (content-free http://rsync.io/).
  • A "peering" arrangement of a small group of friends. Each of us makes available a small amount of space and we copy to-from each others shares, via rsync / scp as appropriate.

Unfortunately my rsync-based personal server is getting a little too full, and will certainly be full by next year. S3 is pricey, and I don't trust the "unlimited" storage people (Backblaze, etc.) to be sustainable and reliable long-term.

The pricing on Google Drive seems appealing, but I guess I'm loath to share more data with Google. Perhaps I could dedicate a single "backup.account@gmail.com" login to that, separate from all else.

So the diversion came along when I looked for Amazon S3-compatible, self-hosted servers. There are a few; most of them are PHP-based, or similarly icky.

So far cloudfoundry's vlob looks the most interesting, but the main project seems stalled/dead. Sadly, using s3cmd to upload files failed, but the curl-based API certainly works as expected.

I looked at Gluster, Ceph, and similar, but haven't yet come up with a decent plan for handling offsite storage. I know I have only six months or so before the need becomes pressing. I imagine the plan has to be N small servers with local storage, rather than one large server, purely because the pricing is going to be better that way.

Decisions decisions.

Categories: LUG Community Blogs

Tony Whitmore: The goose that laid the golden eggs, but never cackled

Tue, 25/03/2014 - 20:48

I’ve wanted to visit Bletchley Park for years. It is where thousands of people toiled twenty-four hours a day to decipher enemy radio messages during the second world war, in absolute secrecy. It is where some of the brightest minds of a generation put their considerable mental skills to an incredibly valuable purpose. It is also where modern computing was born, notably through the work of Alan Turing and others.

So I was very pleased to be invited by my friend James to visit Bletchley as part of his stag weekend. After years of neglect, and in the face of demolition, the park is now being extensively restored. A new visitors’ centre will be introduced, and more of the huts opened up to the public. I have no doubt that these features will improve the experience overall, but there was a feeling of Trigger’s Broom as I looked over the huts closest to the mansion house. Never open to the public before, they looked good with new roofs and walls. But perhaps a little too clean.

And it really is only the huts closest to the house that are being renovated. Others are used by the neighbouring National Museum of Computing and by small companies, and a huge number are still derelict. Whilst I hope that the remaining huts will be preserved, it would be great if visitors could see the huts in their current dilapidated state too. The neglect of Bletchley Park is part of its story, and I would love to explore the derelict huts as they are now. I would love to shoot inside them – so many ideas in my head for that!

Most of the people working there were aged between eighteen and twenty-one, so you can imagine how much buzz and life there was in the place, despite the graveness of the work being carried out. Having visited the park as it is today, I wish that I had been able to visit it during the war. To see people walking around the huts, efficiency and eccentricity hand-in-hand, to know the import and intellect of what was being carried out, and how it would produce the technology that we all rely on every day, would have been incredible.

Categories: LUG Community Blogs