News aggregator

Debian Bits: Debian Project elects Javier Merino Cacho as Project Leader

Planet HantsLUG - Tue, 01/04/2014 - 11:25

This post was an April Fools' Day joke.

In accordance with its constitution, the Debian Project has just elected Javier Merino Cacho as Debian Project Leader. More than 80% of voters put him as their first choice (or equal first) on their ballot papers.

Javier's large majority over his opponents shows how his inspiring vision for the future of the Debian project is largely shared by the other developers. Lucas Nussbaum and Neil McGovern also gained a lot of support from Debian project members, both coming many votes ahead of the None of the above ballot choice.

Javier has been a Debian Developer since February 2012 and, among other packages, works on keeping the mercurial package under control, as mercury is very poisonous for trout.

After it was announced that he had won this year's election, Javier said: "I'm flattered by the trust that Debian members have put in me. One of the main points in my platform is to remove the "Debian is old and boring" image. In order to change that, my first action as DPL is to encourage all Debian Project Members to wear a red clown nose in public."

Among others, the main points of his platform relate to improving the communication style on mailing lists through an innovative filter called aponygisator, to making Debian less "old and boring", and to solving technical issues among developers with barehanded fights. Betting on the fights will be not only allowed but encouraged, for fundraising reasons.

Javier also contemplated the use of misleading talk titles such as "The use of cannabis in contemporary ages: a practical approach" and "Real Madrid vs Barcelona" to lure new users and contributors to Debian events.

Javier's platform was collaboratively written by a team of communication experts and high profile Debian contributors during the last DebConf. It has since evolved thanks to the help of many other contributors.

Categories: LUG Community Blogs

David Goodwin: Automated twitter compilation up to 01 April 2014

Planet WolvesLUG - Tue, 01/04/2014 - 06:00

Arbitrary tweets made by TheGingerDog (i.e. David Goodwin) up to 01 April 2014

  • How to stop time: kiss.
    How to travel in time: read.
    How to escape time: music.
    How to feel time: write.
    How to waste time: social media. (2014/03/30 src)
  • I have a 100 metres swimming badge! (2014/03/30 src)
  • “I am a Unix Creationist. I believe the world was created on January 1, 1970 and as prophesized, will end on January 19, 2038” – @teropa (2014/03/27 src)
  • @JohnEber007 @Buttockus @TheMoralPolice @GSpellchecker Those militant atheists, man…. t.co/X6dSZesylN (2014/03/29 src)
  • UK video-on-demand regulator says 44 THOUSAND schoolkids view pornsites every month. That stat’s total cobblers. reg.cx/2acL (2014/03/29 src)
  • “This could be us …. if we didn’t like cake so much” t.co/S8uOYqclAk (2014/03/29 src)
  • A recent study has found that women who carry a little extra weight live longer than the men who mention it. (2014/03/29 src)
  • Ah some acceptable haribo @moreteadoctor t.co/8LLJFwWA3v (2014/03/28, Bromsgrove, Worcestershire src)
  • I CAN SEE IT! I can see comet #67P! (just!) ow.ly/v33Ep t.co/Mzv8hWTubn (2014/03/27 src)
  • Hello, outer space! (2014/03/28 src)
  • Bash #protip: put this in /etc/bashrc. Your sysadmin will thank you.
    function cat() {
      echo “=^.^=”
    }
    (2014/03/26 src)
  • Candy Crush addicts come clean: ‘Life’s too short for sliding candies around’ bit.ly/1l4Nb1p (2014/03/26 src)
  • UK Court Says Information Stored Electronically Is Not ‘Property’ – https://www.techdirt.com/articles/20140326/04380226685/uk-court-says-information-stored-electronically-does-not-constitute-property.shtml so it can’t be stolen, right? #copyright (2014/03/26 src)
  • This is brilliant : thehawkeyeinitiative.com/post/50432219744/special-guest-edition-the-hawkeye-initiative-irl (2014/03/26 src)
  • A E Housman looks like he will be going on a trip soon. #bromsgrove t.co/RVFQnu9MZS (2014/03/26, Bromsgrove, Worcestershire src)
  • Progress is being made with Bromsgrove’s new high street but there’s not much space left for shoppers. t.co/wsxueW1rvZ (2014/03/26, Bromsgrove, Worcestershire src)
  • It seems like every day I need to reboot this nexus4 to get cellular data to work. #firstWorldProblems (2014/03/26, Bromsgrove, Worcestershire src)
  • BREAKING: Yahoo buys ViewMaster t.co/dbMtDq6rTd (2014/03/25 src)
  • Facebook is really taking its name too literally. t.co/Anqf4RCRnr (2014/03/25 src)
  • Tesco forced short passwords in order to reduce the time you spend on the login page. Right… www.troyhunt.com/2014/02/the-tesco-hack-heres-how-it-probably.html#comment-1301469646 t.co/TWViZDyEcS (2014/03/25 src)
  • Needed to download a 27kb library to carry on work on commute. Watched Gravity in HD instead. @ThreeUKSupport #mad t.co/Wtij0UQg5I (2014/03/24 src)
  • @SciencePorn: Stupid Science t.co/dPSgHKeb6z (2014/03/23 src)
  • Ancient Bridge, Kolpino-Russian Federation. t.co/2UKbzFPiTK (2014/03/23 src)
  • So where is the “long bridge” in Longbridge ? (2014/03/21, Birmingham, Birmingham src)
  • Yes sir! I would love to be on the first page of Google. Your poorly written email and gmail address in no way put me off using your service (2014/03/20 src)
  • @stwater I’ve ran a bath for my baby tonight and seriously NOT impressed!!
    Bromsgrove, sort it out! t.co/WzictwDSIn (2014/03/19 src)
  • *News* BBC iPlayer is launching on Chromecast today. Get more info here: www.bbc.co.uk/blogs/internet/posts/BBC-iPlayer-apps-on-Chromecast (2014/03/19 src)
  • Wow! So resilient! So Safe! So secure!
    The all new £1 dogecoin. t.co/eyKoROAxKP (2014/03/19 src)
  • “Mongo lasted one week”. codeinsider.us/i/2.html #realmongofacts (2014/03/18 src)
  • GOG.com To Add Linux Support – beta.slashdot.org/story/199575 #linux #gaming continues to improve (2014/03/18 src)
  • Great image RT @Blakmountphoto: @VirtualAstro last full moon of the winter over the #breconbeacons t.co/IjXOPHVSqR (2014/03/18 src)
  • Threatening to “tell mum” had the desired effect of getting my sister to pay attention. #SomeThingsNeverChange (2014/03/18 src)
  • Hopefully I’ll sleep better once “the boss” returns home from her holi^h^h^h^h work trip abroad. Oh well. Today has started now. (2014/03/18, Bromsgrove, Worcestershire src)
  • A counter-argument. t.co/wqMISHifop (2014/03/18 src)
  • Fill it with foam! (Conservatory roof had a gap). Need more holes to fill now …. #diy #likeAchildWithAHammer t.co/iEUGHUpoUK (2014/03/18, Bromsgrove, Worcestershire src)
  • Here’s some goodies coming up in April @ArtrixArts #worcestershirehour t.co/Ra18PwNJTT (2014/03/17 src)
  • OH: “you’ve got to fall off the wagon to get back on” #schoolYard (2014/03/17 src)
  • A happy range rover. t.co/BTBxm8iMT1 (2014/03/17 src)
  • Feeling you must thank (in return) an unknown driver who is thanking you for letting them out / through a gap etc. #BritishProblems (2014/03/17, Bromsgrove, Worcestershire src)
  • #BritishProblems hesitant to use foreign mains sockets (generally lack switches), (2014/03/17, Bromsgrove, Worcestershire src)
  • It seems I’ve not forgotten how to make a den. t.co/L37UJskf4W (2014/03/16, Dudley, Dudley src)
  • “Can we play hide and seek?” t.co/iJc68IYwl2 (2014/03/16, Bromsgrove, Worcestershire src)
  • Blustery. But nice and sunny. t.co/lqMkiPvfa1 (2014/03/16, Bromsgrove, Worcestershire src)
  • Migrating office NFS server to use btrfs for /home etc. #Geek #Saturday #Workaholics #Debian (2014/03/15 src)
  • Except one. “@lydiadepillis: Richer countries less likely to think you need to believe in God to be moral: t.co/JTEjs6urjW (2014/03/13 src)
  • Hackers defacing a phishing site ? www.p0ison.com/ybs-bank-got-hacked-by-team-anonghost/ #doingItWrong #security #phishing (2014/03/13 src)
  • Chromecast doesn’t seem to work with wifi passwords containing non-alphanumeric characters. #chromecast #weird #fail (2014/03/11 src)
  • Love a sunset t.co/BGHZzBimAF (2014/03/09, Bromsgrove, Worcestershire src)
  • It’s all part of God’s plan. piecomic.tumblr.com/post/78680666729 (2014/03/11 src)
  • They soon learn who is boss… But if they once boss a dog they will be merciless thereafter. Floss knows this. t.co/PdyTnqig0v (2014/03/08 src)
  • Epic group fancy dress! t.co/mIq1FcHOaK (2014/03/05 src)
  • 250% funded with just 13 hours to go! Support us and help send more people to conferences! Get your elephpants now: phpwomen.org/elephpant (2014/03/08 src)
  • After not writing a hand written letter in years (10?) … I’ve written two in one night. #whereIsTheCursor #noSpellCheck #oldskool (2014/03/07, Bromsgrove, Worcestershire src)
  • @ntoll want to see widespread IPv6 adoption? Convince audiophiles a 2^128 bit address space increases sound fidelity (2014/03/06 src)
  • It appears @chordcables wants you to ignore physics and packetised buffered protocols and spend £1,600 on a £1 cable www.chord.co.uk/blog/new-chord-ethernet-cables/ (2014/03/06 src)
  • Mecha Pac-Man has risen! t.co/lrhMVbRlaw (2014/03/06 src)
  • @Joel_Hughes @tboWebDesign Thought you might appreciate this ‘real’ Yellow Pages tagline t.co/LSn8wXYeAs (2014/03/07 src)
  • Lawn mowing season appears to have started t.co/8wt89GgTd8 (2014/03/07, Bromsgrove, Worcestershire src)
  • “It is the latest fashion!” t.co/dAJLXDFOm7 (2014/03/07 src)
  • @RNLI @Yorkshireimages completed painting, being framed, local #RNLI branch taking care of it shortly t.co/iJfLwRLJY8 (2014/03/01 src)
  • “Mr Bitcoin” may have been found. Shame the journalist has pretty much published his address etc. #deception
    mag.newsweek.com/2014/03/14/bitcoin-satoshi-nakamoto.html (2014/03/06 src)
Categories: LUG Community Blogs

    Mick Morgan: the netbook is not dead

    Planet ALUG - Mon, 31/03/2014 - 22:03

    I bought my first netbook, the Acer Aspire One, back in April 2009 – five years ago. That machine is still going strong and has seen umpteen different distros in its time. It currently runs Mint 16, and very happily too.

    The little Acer has nothing on it that I value over much, all my important data is stored on my desktop (and backed up appropriately) but I find it useful as a “walking about” tool simply because it is so portable, so it still gets quite a bit of use. I am planning a few trips later this year, notably to Scotland and later (by bike) to Croatia, and I want to take something with me that will give me more flexibility in my connectivity options than simply taking my phone to collect email. In particular, it would be really useful if I could connect back to my home VPN endpoint whilst I am out and about. This is exactly the sort of thing I use the Acer for when I’m in the UK. I (briefly) considered using my Galaxy Tab, which is even lighter and more portable than the Acer, but I just don’t trust Android enough to load my openvpn certificates on to it. There is also the possibility that the Tablet could be lost or stolen (I shall be camping quite a lot). So the Acer looks a good bet. The downside of taking the Acer is the need for mains power (for recharging) of course, but I shall be staying in the odd hotel en route, and I have an apartment booked in Dubrovnik so I figure it should cope. However, I am not the only fan of the Acer – my grandson loves to sit with it on his lap in my study and watch Pingu and Octonauts (parents and grandparents of small children will understand). Given that I recognise that I might lose the Tablet, I must also assume that the Acer might disappear. My grandson wouldn’t like that. So the obvious solution is to take another AAO with me – off to ebay then.

    Since my AAO is five years old, and that model has been around for longer than that, I figured I could pick up an early ZG5 with 8 or 16 Gig SSD (rather than the 160 Gig disk mine has) for about £20 – £30. Hell, I’ve bought better specced Dell laptops (I know, I know) for less than that fairly recently. I was wrong. Astonishingly, the ZG5, in all its variants, appears to still be in huge demand. I bid on a few models and lost, by quite some margin. So I set up a watch list of likely looking candidates and waited a few days to get a feel for the prices being paid. The lowest price paid was £37.00 (plus £10.00 P&P) for a very early ZG5 still running Linpus, another Linpus ZG5 with only 512Mb of RAM and an 8 Gig SSD went for £50.00 (plus £9.00 P&P), and most of the later models (with 1 Gig of RAM and 160 – 250 Gig disks) went for around £70 – £90 (plus various P&P charges). In all I watched 20 different auctions (plus a few for similar devices such as the MSI Wind and the Samsung NC10). The lowest price I saw after the £37.00 device was £55.00 and the highest was £103.00, with a mean of just over £67. That is astonishing when you consider that you can pick up a new generic 7″ Android tablet for around £70.00 and you can get a decent branded one for just over £100. And as I said, old laptops go for silly money – just take a look at ebay or gumtree and you can pick up job lots of half a dozen or more for loose change.

    So – despite what all the pundits may have said, clearly the netbook as a concept still meets the requirements of enough people (like me I guess) to keep the market buoyant. Intriguingly, the most popular machines sold (in terms of numbers of bidders) were all running XP. I just hope that the buyers intended to do as I did and wipe them to install a Linux distro. Of course, having started the search for another ZG5, I just couldn’t let it go without buying one. I was eventually successful on a good one with the same specification as my original model.

    The only drawback is that it is not blue…..

    Well, at least I don’t think it will be stolen.

    Categories: LUG Community Blogs

    Martin Wimpress: Installing Nikola on Debian

    Planet HantsLUG - Mon, 31/03/2014 - 16:19

    Nikola is a static site and blog generator written in Python that I've been using for a good while now. This blog post describes how to install Nikola on Debian. Now, this may look like a long-winded way to install Nikola, given that a Debian .deb package exists, but in my opinion it is the correct way to install Nikola on Debian.

    Installing Python

    First you'll need Python and virtualenvwrapper

    sudo apt-get install libpython2.7 python2.7 python2.7-dev python2.7-minimal

    Remove any apt installed Python packages that we are about to replace. The versions of these packages in the Debian repositories soon get stale.

    sudo apt-get purge python-setuptools python-virtualenv python-pip python-profiler

    Install pip.

    curl -O https://raw.github.com/pypa/pip/master/contrib/get-pip.py
    sudo python2.7 get-pip.py

    Use pip to install virtualenv and virtualenvwrapper.

    sudo pip install virtualenv --upgrade
    sudo pip install virtualenvwrapper

    The Snakepit

    Create a "Snakepit" directory for storing all the virtualenvs.

    mkdir ~/Snakepit

    Add the following to your ~/.bashrc to enable virtualenvwrapper.

    export WORKON_HOME=${HOME}/Snakepit

    if [ -f /usr/local/bin/virtualenvwrapper.sh ]; then
        source /usr/local/bin/virtualenvwrapper.sh
    elif [ -f /usr/bin/virtualenvwrapper.sh ]; then
        source /usr/bin/virtualenvwrapper.sh
    fi

    Create a virtualenv for Nikola

    Open a new shell to ensure that the virtualenvwrapper configuration is active.

    The following will create a new virtualenv called nikola-640, based on Python 2.7, inside the Snakepit directory.

    mkvirtualenv -p /usr/bin/python2.7 ~/Snakepit/nikola-640

    Working on a virtualenv

    To workon, or activate, an existing virtualenv do the following.

    workon nikola-640

    You can switch to another virtualenv at any time, just use workon envname. Your shell prompt will change while a virtualenv is being worked on to indicate which virtualenv is currently active.

    While working on a virtualenv you can pip install what you need or manually install any Python libraries safe in the knowledge you will not adversely damage any other virtualenvs or the global packages in the process. Very useful for developing a new branch which may have different library requirements than the master/head.

    When you are finished working in a virtualenv you can deactivate it by simply executing deactivate.
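
    For illustration (this snippet is mine, not from the original post), a typical session inside a virtualenv looks something like this:

    $ workon nikola-640
    (nikola-640) $ pip list        # only this virtualenv's packages are listed
    (nikola-640) $ deactivate
    $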

    Install Nikola requirements

    Nikola will be powered by Python 2.7, and some additional packages will be required.

    sudo apt-get install python2.7-dev libfreetype6-dev libjpeg8-dev libxslt1-dev libxml2-dev libyaml-dev

    What are these requirements for?
    • python2.7-dev provides the header files for Python 2.7 so that Python modules with C extensions can be built.

    The following are required to build pillow, the Python imaging library.

    • libjpeg8-dev
    • libfreetype6-dev

    The following are required to build lxml, a Python XML library.

    • libxml2-dev
    • libxslt1-dev

    The following are required to build python-coveralls.

    • libyaml-dev
    Install Nikola

    Download Nikola.

    mkdir -p ${VIRTUAL_ENV}/src
    cd ${VIRTUAL_ENV}/src
    wget https://github.com/getnikola/nikola/archive/v6.4.0.tar.gz -O nikola-640.tar.gz
    tar zxvf nikola-640.tar.gz
    cd nikola-6.4.0

    Install the Nikola requirements.

    pip install -r requirements-full.txt

    If you intend to use the Nikola planetoid (Planet generator) plugin you'll also need the following.

    pip install peewee feedparser

    Actually install nikola.

    python setup.py install

    Nikola is now installed. nikola help and the Nikola Handbook will assist you from here on.
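
    From here, a first site could be created along these lines (a minimal sketch of my own; the site name is arbitrary and the exact options are best checked against nikola help):

    nikola init --demo mysite    # create a new site populated with demo content
    cd mysite
    nikola build                 # render the static site into output/
    nikola serve                 # preview it locally in a browser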

    Categories: LUG Community Blogs

    Meeting at "The Moon Under Water"

    Wolverhampton LUG News - Mon, 31/03/2014 - 10:58


    53-55 Lichfield St
    Wolverhampton
    West Midlands
    WV1 1EQ

    Eat, Drink and talk Linux

    Event Date and Time:  Wed, 02/04/2014 - 19:30 - 23:00
    Categories: LUG Community Blogs

    Steve Kemp: Some things on DNS and caching

    Planet HantsLUG - Sat, 29/03/2014 - 19:16

    Although there weren't too many comments on my what would you pay for? post, I did get some mails.

    I was reminded about this via Mario Lang's post, which echoed a couple of private mails I received.

    Despite being something that I take for granted, perhaps because my hosting comes from Bytemark, people do seem willing to pay money for DNS hosting.

    Which is odd. I mean you could do it very very very cheaply if you had just four virtual machines. You can get complex and be geo-fancy, and you could use anycast on a small AS, but really? You could just deploy four virtual machines to provide a.ns, b.ns, c.ns, d.ns, and be better than 90% of DNS hosters out there.

    The thing that many people mentioned was Git-backed, or Git-based, DNS. Which would be trivial if you used tinydns, and not much harder if you used bind.

    I suspect I'm "not allowed" to do DNS-things for a while, due to my contract with Dyn, but it might be worth checking...

    ObRandom: Beat me to it. Register gitdns.io, or similar, and configure hooks from github to compile tinydns records.
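
    As a very rough sketch of that idea (my own illustration, not something from the post), a post-receive hook on the zone repository could simply check out the pushed tinydns data file and recompile it, assuming a conventional tinydns root directory whose Makefile runs tinydns-data:

    #!/bin/sh
    # Hypothetical post-receive hook: deploy the pushed zone data and rebuild data.cdb
    GIT_WORK_TREE=/etc/tinydns/root git checkout -f
    cd /etc/tinydns/root && make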

    In other news I started documenting early thoughts about my caching reverse proxy, which now has a name: stockpile.

    I wrote some stub code using node.js, and although it was functional it soon became callback hell:

    • Is this resource cachable?
    • Does this thing exist in the cache already?
    • Should we return the server's response to the client, archive to memcached, or do both?

    Expressing the rules neatly is also a challenge. I want the server core to be simple and the configuration to be something like:

    is_cachable ( vhost, source, request, backend )
    {
        /**
         * If the file is static, then it is cachable.
         */
        if ( request.url.match( /\.(jpg|png|txt|html?|gif)$/i ) ) { return true; }

        /**
         * If there is a cookie then the answer is false.
         */
        if ( request.has_cookie? ) { return false; }

        /**
         * If the server is alive we'll now pass the remainder through it,
         * if not then we'll serve from the cache.
         */
        if ( backend.alive? ) {
            return false;
        } else {
            return true;
        }
    }

    I can see there is value in judging the cachability based on the server response, but I plan to ignore that except for headers such as "Expires:", "Etag:", etc.

    Anyway callback hell does make me want to reexamine the existing C/C++ libraries out there. Because I think I could do better.

    Categories: LUG Community Blogs

    David Goodwin: Amavis / SpamAssassin

    Planet WolvesLUG - Fri, 28/03/2014 - 15:58
    SpamAssassin

    Some random bits and pieces related to SpamAssassin and Amavis

    I’ve been looking for additional rulesets to add to SpamAssassin but haven’t found many – the SARE project appears offline (for example). Eventually I found the SOUGHT SpamAssassin ruleset, which despite its age (published in 2007) seems to still be maintained.

    See http://taint.org/2007/08/15/004348a.html

    To enable this on Debian Wheezy, I added a cron job (/etc/cron.d/sa-update-sought) like :

    10 */3 * * * debian-spamd /usr/local/sbin/sa-update-sought

    And then created /usr/local/sbin/sa-update-sought which looks a bit like :

    #!/bin/bash

    if [ $UID != 119 ]; then
        su - debian-spamd -c "/usr/local/sbin/sa-update-sought"
        exit 0
    fi

    # See http://taint.org/2007/08/15/004348a.html
    /usr/bin/sa-update -v --gpgkey 6C6191E3 --channel sought.rules.yerp.org --channel updates.spamassassin.org

    # so wow, so speed.
    /usr/bin/sa-compile

    (Don’t forget to chmod 755 the script, and perhaps run it with ‘set -x’ and/or ‘set -e’ added while testing.)

    Amavis – deal with duplicate headers

    Firstly, Amavis was complaining about duplicated headers for some emails. Typically this would be something useless like MIME-Version, which I don’t care about. So to stop Amavis moaning about duplicated headers – add to your config under /etc/amavis/conf.d/50-user (on debian) -

    $allowed_header_tests{'multiple'} = 0;

    Amavis – log SpamAssassin rulesets and generally more

    The default Amavis log file will look something like :

    Mar 23 06:48:18 my.server /usr/sbin/amavisd-new[13368]: (13368-03) Passed CLEAN {RelayedInbound}, [client.ip.addr]:37490 [client.ip.addr] -> <someone@local>, Queue-ID: 3FDEC181A06, Message-ID: <c72c5e1d26a048c0af4be75044e1e80e@bazarchic-invitations.com> , mail_id: d-dsS6ecM4vR, Hits: -9.49, size: 34124, queued_as: 80D4118089F, dkim_sd=20132014:bazarchic-invitations.com, 3203 ms

    Which isn’t all that useful – especially if you need to know WHY it did (or didn’t) score against SpamAssassin (i.e. WHY was it -9.49).

    So, to make Amavis more verbose in logging – so you can see which SpamAssassin tests triggered etc – add to /etc/amavis/conf.d/50-user (debian) -

    $log_templ = $log_verbose_templ;

    Now you’ll see something more like :

    Mar 28 14:33:49 my.server /usr/sbin/amavisd-new[9149]: (09149-05) Passed SPAMMY {RelayedTaggedInbound}, [client.ip.addr]:62696 [client.ip.addr] <some.user@whatever> -> <someone@else.example.com>, Queue-ID: EF4F4180E71, Message-ID: <C46A064E2A2B52469C092EE761AD74602BFCCC@xxxxxx-Exch.xxxxxxx.xxxx>, mail_id: dzG4JS_4jH29, Hits: 6.314, size: 46717, queued_as: BBEB71819B4, Subject: "hello world this is a subject", From: Test_Person_<test@my.domain>, helo=whatever.server, Tests: [HTML_MESSAGE=0.001,LOCAL_SEX=5,URI_HEX=1.313], shortcircuit=no, autolearn=disabled, autolearnscore=6.314, asn=AS57307_188.227.240.0/21, 4714 ms

    Now – you can clearly see why it scored 6.314 – without needing to find the mail and read its headers.

    SpamAssassin – some random rules

    Add the following into /etc/spamassassin, in a file named something like ‘local_rules.cf’.

    WhatCounts – spammy mailer?

    # X-Mailer: WhatCounts - seems spammy.
    header   LOCAL_WHATCOUNTS X-Mailer =~ /WhatCounts/
    describe LOCAL_WHATCOUNTS Spammy mailer (WhatCounts)
    score    LOCAL_WHATCOUNTS 3.0

    Sex

    Often slipped into spammy email; presumably serious email (well, for a business at least) won’t contain such stuff.

    body     LOCAL_SEX /\b(sex)\b/i
    describe LOCAL_SEX Email contains the word sex.
    score    LOCAL_SEX 5.0

    PHP Eval’ed code

    I saw quite a few spammy emails which contained a specific header – so this penalises such mail. It’s crude.

    # Saw email headers like : X-PHP-Originating-Script: 10000:sendme.php(3) : eval()'d code
    header   PHP_EVAL X-PHP-Originating-Script =~ /eval\(\)\'d code/i
    describe PHP_EVAL Eval()'ed PHP code as source
    score    PHP_EVAL 8.0

    SpamAssassin – decode short urls

    https://github.com/smfreegard/DecodeShortURLs is a useful plugin to install – allowing you to decode shortened URLs – and hopefully then score/find them in RBLs etc.

    i.e. expanding http://t.co/BLAH to http://blahblah.server.com/something/blah.html
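
    From memory (this isn't covered in the post itself, so check the plugin's README for the exact directives and rule names), the plugin is enabled with a loadplugin line in a file under /etc/spamassassin, roughly like:

    # assumes DecodeShortURLs.pm has been copied alongside this .cf file
    loadplugin Mail::SpamAssassin::Plugin::DecodeShortURLs DecodeShortURLs.pm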


    Categories: LUG Community Blogs

    Dick Turpin: When in doubt pull out.

    Planet WolvesLUG - Fri, 28/03/2014 - 12:43
    Customer: "I think our printers broken? I've tried two brand new toner cartridges and it won't print."Engineer: "Do you want us to come out and look at it? As you know it's chargeable."Customer: "You might as well get us a new printer."Engineer: "OK."
    Three days later.
    Engineer: "Hey Pete, you know that printer for xyz? I've installed the new one. I also found out what was wrong with the old one."Me: "What was it?"Engineer: "She hadn't pulled the safety tab off the side of the toner."Me: "Bwahahahaha"

    Categories: LUG Community Blogs

    Richard Lewis: Taking notes in Haskell

    Planet ALUG - Fri, 28/03/2014 - 00:13

    The other day we had a meeting at work with a former colleague (now at QMUL) to discuss general project progress. The topics covered included the somewhat complicated workflow that we're using for doing optical music recognition (OMR) on early printed music sources. It includes mensural notation specific OMR software called Aruspix. Aruspix itself is fairly accurate in its output, but the reason why our workflow is non-trivial is that the sources we're working with are partbooks; that is, each part (or voice) of a multi-part texture is written on its own part of the page, or even on a different page. This is very different to modern score notation in which each part is written in vertical alignment. In these sources, we don't even know where separate pieces begin and end, and they can actually begin in the middle of a line. The aim is to go from the double page scans ("openings") to distinct pieces with their complete and correctly aligned parts.

    Anyway, our colleague from QMUL was very interested in this little part of the project and suggested that we spend the afternoon, after the style of good software engineering, formalising the workflow. So that's what we did. During the course of the conversation diagrams were drawn on the whiteboard. However (and this was really the point of this post) I made notes in Haskell. It occurred to me a few minutes into the conversation that laying out some types and the operations over those types that comprise our workflow is pretty much exactly the kind of formal specification we needed.

    Here&aposs what I typed:

    module MusicalDocuments where

    import Data.Maybe

    -- A document comprises some number of openings (double page spreads)
    data Document = Document [Opening]

    -- An opening comprises one or two pages (usually two)
    data Opening = Opening (Page, Maybe Page)

    -- A page comprises multiple systems
    data Page = Page [System]

    -- Each part is the line for a particular voice
    data Voice = Superius | Discantus | Tenor | Contratenor | Bassus

    -- A part comprises a list of musical symbols, but it may span multiple systems
    -- (including partial systems)
    data Part = Part [MusicalSymbol]

    -- A piece comprises some number of sections
    data Piece = Piece [Section]

    -- A system is a collection of staves
    data System = System [Staff]

    -- A staff is a list of atomic graphical symbols
    data Staff = Staff [Glyph]

    -- A section is a collection of parts
    data Section = Section [Part]

    -- These are the atomic components, MusicalSymbols are semantic and Glyphs are
    -- syntactic (i.e. just image elements)
    data MusicalSymbol = MusicalSymbol
    data Glyph = Glyph

    -- If this were real, Image would abstract over some kind of binary format
    data Image = Image

    -- One of the important properties we need in order to be able to construct pieces
    -- from the scanned components is to be able to say when objects of some of the
    -- types are strictly contiguous, i.e. this staff immediately follows that staff
    class Contiguous a where
      immediatelyFollows :: a -> a -> Bool
      immediatelyPrecedes :: a -> a -> Bool
      immediatelyPrecedes a b = b `immediatelyFollows` a

    instance Contiguous Staff where
      immediatelyFollows :: Staff -> Staff -> Bool
      immediatelyFollows = undefined

    -- Another interesting property of this data set is that there are a number of
    -- duplicate scans of openings, but nothing in the metadata that indicates this,
    -- so our workflow needs to recognise duplicates
    instance Eq Opening where
      (==) :: Opening -> Opening -> Bool
      (==) a b = undefined

    -- Maybe it would also be useful to have equality for staves too?
    instance Eq Staff where
      (==) :: Staff -> Staff -> Bool
      (==) a b = undefined

    -- The following functions actually represent the workflow

    collate :: [Document]
    collate = undefined

    scan :: Document -> [Image]
    scan = undefined

    split :: Image -> Opening
    split = undefined

    paginate :: Opening -> [Page]
    paginate = undefined

    omr :: Page -> [System]
    omr = undefined

    segment :: System -> [Staff]
    segment = undefined

    tokenize :: Staff -> [Glyph]
    tokenize = undefined

    recogniseMusicalSymbol :: Glyph -> Maybe MusicalSymbol
    recogniseMusicalSymbol = undefined

    part :: [Glyph] -> Maybe Part
    part gs = if null symbols then Nothing else Just $ Part symbols
      where symbols = mapMaybe recogniseMusicalSymbol gs

    alignable :: Part -> Part -> Bool
    alignable = undefined

    piece :: [Part] -> Maybe Piece
    piece = undefined

    I then added the comments and implemented the part function later on. Looking at it now, I keep wondering whether the types of the functions really make sense; especially where a return type is a type that's just a label for a list or pair.

    I haven&apost written much Haskell code before, and given that I&aposve only implemented one function here, I still haven&apost written much Haskell code. But it seemed to be a nice way to formalise this procedure. Any criticisms (or function implementations!) welcome.

    Categories: LUG Community Blogs

    Richard Lewis: Ph.D Progress

    Planet ALUG - Thu, 27/03/2014 - 00:37

    I submitted my Ph.D thesis at the end of September 2013 in time for what was believed to be the AHRC deadline. It was a rather slim submission at around 44,000 words and rejoiced under the title of Understanding Information Technology Adoption in Musicology. Here's the abstract:

    Since the mid 1990s, innovations and technologies have emerged which, to varying extents, allow content-based search of music corpora. These technologies and their applications are known commonly as music information retrieval (MIR). While there are a variety of stakeholders in such technologies, the academic discipline of musicology has always played an important motivating and directional role in the development of these technologies. However, despite this involvement of a small representation of the discipline in MIR, the technologies have so far failed to make any significant impact on mainstream musicology. The present thesis, carried out under a project aiming to examine just such an impact, attempts to address the question of why this has been the case by examining the histories of musicology and MIR to find their common roots and by studying musicologists themselves to gauge their level of technological sophistication. We find that some significant changes need to be made in both music information retrieval and musicology before the benefits of technology can really make themselves felt in music scholarship.

    (Incidentally, the whole thing was written using org-mode, including some graphs that get automatically generated each time the text is compiled. Unfortunately I did have to cheat a little bit and typed in LaTeX \cite commands rather than using proper org-mode links for the references.)

    So the thing was then examined in January 2014 by an information science / user studies expert and a musicologist. As far as it went, the defence was actually not too bad, but after defending the defensible it eventually became clear that significant portions of the thesis were just not up to scratch; not, in fact, defensible. They weren't prepared to pass it and have asked that I revise and then re-submit it.

    Two things seem necessary to address: 1) why did this happen? And 2) what do I do next?

    I started work on this Ph.D with only quite a vague notion of what it was going to be about. The Purcell Plus project left open the possibility of the Ph.D student doing some e-Science-enabled musicological study. But I think I'd come out of undergraduate and masters study with a view of academic research that was very much text-based; the process of research---according to the me of ca. 2008---was to read lots of things and synthesise them, and the more obscure and dense the stuff read the better. The process is one of noticing generalisations amongst all these sources that haven't been remarked on before and remarking on them, preferably with a good balance of academic rigour and barefaced rhetoric. And I brought this pre-conception into a computing department. My first year was intended to be a training year, but I was actually already highly computer literate with considerable programming experience and quite a bit of knowledge of at least symbolic work in computational musicology. Consequently, I didn't fully engage with learning new stuff during that first year and instead embarked on a project of attempting to be rhetorical. It wasn't until later on that I really started to understand that those around had a completely different idea as to how research can be carried out. While I was busy reading, most of my colleagues were doing experiments; they were actually finding out new stuff (or at least were attempting to) and had the potential to make an original contribution to knowledge. At this point I started to look for research methods that could be applicable to my subject matter and eventually hit upon a couple of actually quite standard social science methods. So I think that's the first thing that went wrong: I failed to take on board soon enough the new research culture that I had (or, I suppose, should have) entered.

    I think I&aposve always been someone who thrives on the acknowledgement of things I&aposve done; I always looked forwarded to receiving my marks at school and as an undergraduate; and I liked finding opportunities to do clever jobs for people, especially little software development projects where there&aposs someone to say, "that&aposs great! Thanks for doing that." I think I quickly found that doctoral research didn&apost offer me this at all. My experience was very much a solitary one where no one was really aware of what I was working on. Consequently two things happened: first, I tended not to pursue things very far through lack of motivation; and second (and this was the really dangerous one), I kept finding other things to do that did give me that feedback. I always found ways to justify these--lets face it---procrastination activities; mainly that they were all Goldsmiths work, including quite a lot of undergraduate and masters level teaching (the latter actually including designing a course from scratch), some Computing Department admin work, and some development projects. Doing these kinds of activities is actually generally considered very good for doctoral students, but they&aposre normally deliberately constrained to ensure that the student has plenty of research time still. Through my own choice, I let them take over far too much of my research time.

    The final causal point to mention is the one that any experienced academic will immediately point to: supervision. I failed to take advantage of the possibilities of supervision. As my supervisor was both involved in the project of which the Ph.D was part and also worked in the same office as me, we never had the right kind of relationship to foster good progress and working habits from me. I spoke to my supervisor every day and so I didn't really push for formal supervision often enough. I can now see that it would have been better to have someone with whom I had a less familiar relationship and who had less of an interest in my work and who, as a result, would just operate and enforce the procedures of doctoral project progress. It's also possible that a more formal supervision relationship would have addressed the points above: I may have been forced to solidify ideas and to identify proper methods much sooner; I may have had more of the feedback that I needed; and I may have been more strongly discouraged from engaging in so much extra-research activity.

    The purpose of all this is not to apportion blame (I have a strong sense of being responsible for everything that I do), but to state publicly something that I've been finding very hard to say: I failed my Ph.D. And (and this is the important bit) to make sure that I get on with what I need to do to pass it.

    • I need disinterested supervision; I've requested assistance from the Sociology Department which should fit well with the research methods I used;
    • I need to improve the reporting of the studies I carried out; this involves correcting and expanding the methods sections and also doing more analysis work;
    • I need to either extend the samples of the existing studies, or carry out credible follow up studies to improve my evidence base;
    • I need to focus the research questions better and (having done so) make the conclusions directly address them.

    I&aposm going to blog this work as it goes along. So if I stop blogging, please send me harassing emails telling me to get the f*** on with it!

    Categories: LUG Community Blogs

    Steve Kemp: A diversion on off-site storage

    Planet HantsLUG - Wed, 26/03/2014 - 18:53

    Yesterday I took a diversion from thinking about my upcoming cache project, largely because I took some pictures inside my house, and realized my offsite backup was getting full.

    I have three levels of backups:

    • Home stuff on my desktop is replicated to my wife's desktop, and vice-versa.
    • A simple server running rsync (content-free http://rsync.io/).
    • A "peering" arrangement of a small group of friends. Each of us makes available a small amount of space and we copy to-from each others shares, via rsync / scp as appropriate.

    Unfortunately my rsync-based personal server is getting a little too full, and will certainly be full by next year. S3 is pricy, and I don't trust the "unlimited" storage people (Backblaze, etc.) to be sustainable and reliable long-term.

    The pricing on Google Drive seems appealing, but I guess I'm loath to share more data with Google. Perhaps I could dedicate a single "backup.account@gmail.com" login to that, separate from all else.

    So the diversion came along when I looked for Amazon S3-compatible, self-hosted servers. There are a few; most of them are PHP-based, or similarly icky.

    So far cloudfoundry's vlob looks the most interesting, but the main project seems stalled/dead. Sadly using s3cmd to upload files failed, but certainly the `curl` based API works as expected.

    I looked at Gluster, CEPH, and similar, but didn't yet come up with a decent plan for handling offsite storage, but I know I have only six months or so before the need becomes pressing. I imagine the plan has to be using N-small servers with local storage, rather than 1-Large server, purely because pricing is going to be better that way.

    Decisions decisions.

    Categories: LUG Community Blogs

    Steve Engledow (stilvoid): Judon't

    Planet ALUG - Wed, 26/03/2014 - 01:44

    tl;dr: broke my collar bone, ouch.

    Since my last post, I've had a second Judo session which was just as enjoyable as the first except for one thing. Towards the end of the session we went into randori (free practice) - basically one-on-one sparring. I'm pleased to say that I won my first bout but in the second I went up against a guy who'd been learning for only 6 months or so. After some grappling, he threw me quite hard and with little control and I landed similarly badly - owch.

    The first thing I realised was that I'd slammed my head pretty hard against the mat and that I was feeling a little groggy. After a little while, it became apparent that I'd also injured my shoulder. The session was over though and I winced through helping to put the mats away.

    By the time I got in my car to drive home, my shoulder was a fair bit more painful and I made an appointment to see the doctor. When I saw her, she said there's a slight chance I'd broken my collar bone but it didn't seem very bad so I could just go home and see how it was in a couple of days.

    A couple of days later I went back to the surgery. I saw a different doctor who said that she didn't think it was broken but she'd refer me for an X-Ray. The radiologist soon pointed out a nice clean break at the tip of my collar bone! I sat in A&E forever until I eventually saw a doctor who referred me to the orthopaedic clinic the following Monday.

    Finally, the orthopaedic clinic's doctor told me I'd self-managed pretty well and that I should be ok to just get on with things but definitely no falling over, lifting anything heavy, and definitely no judo for up to six weeks.

    Apparently, I've been very lucky as it's easy to dislodge the broken bone when the break is where mine is. Apparently, if I fall over or anything similar, I'm likely to end up in surgery :S

    I've had to tell this story so many times now that I thought I might as well just write it down. For some reason, people seem to want to know all of the details when I mention that I've injured myself. Sadists.

    Not Morrison's

    On an unrelated subject, I've come to realise that I've developed an unhelpful manner when dealing with doors. I'm not the most graceful of people in general but it strikes me that I have a particularly awkward way of approaching doors. When I walk up to them, I hold a hand out to grasp the handle and push or pull (somehow I usually even manage to get that bit wrong regardless of signage) without slowing down at all which means that I have to swing the door quite fast to get it out of my way by the time my body catches up. Then, as the door's opening at a hefty pace, I have to grab the edge of the door and stop it before it slams into the wall. Because I'm still moving forward, this usually means that I'm partially closing the door as I move away from it.

    In all, I feel awkward when passing through doors, and anybody directly behind me is liable to receive a door in the face if I'm not aware of them :S

    Categories: LUG Community Blogs

    Tony Whitmore: The goose that laid the golden eggs, but never cackled

    Planet HantsLUG - Tue, 25/03/2014 - 20:48

    I’ve wanted to visit Bletchley Park for years. It is where thousands of people toiled twenty-four hours a day to decipher enemy radio messages during the second world war, in absolute secrecy. It is where some of the brightest minds of a generation put their considerable mental skills to an incredibly valuable purpose. It is also where modern computing was born, notably through the work of Alan Turing and others.

    So I was very pleased to be invited by my friend James to visit Bletchley as part of his stag weekend. After years of neglect, and in the face of demolition, the park is now being extensively restored. A new visitors’ centre will be introduced, and more of the huts opened up to the public. I have no doubt that these features will improve the experience overall, but there was a feeling of Trigger’s Broom as I looked over the huts closest to the mansion house. Never open to the public before, they looked good with new roofs and walls. But perhaps a little too clean.

    And it really is only the huts closest to the house that are being renovated. Others are used by the neighbouring National Museum of Computing, small companies and a huge number are still derelict. Whilst I hope that the remaining huts will be preserved, it would be great if visitors could see the huts in their current dilapidated state too. The neglect of Bletchley Park is part of its story, and I would love to explore the derelict huts as they are now. I would love to shoot inside them – so many ideas in my head for that!

    Most of the people working there were aged between eighteen and twenty-one, so you can imagine how much buzz and life there was in the place, despite the graveness of the work being carried out. Having visited the park as it is today, I wish that I had been able to visit it during the war. To see people walking around the huts, efficiency and eccentricity hand-in-hand, to know the import and intellect of what was being carried out, and how it would produce the technology that we all rely on every day, would have been incredible.

    Categories: LUG Community Blogs