So I recently announced my intention to rejoin the Debian project, having been a member between 2002 & 2011 (inclusive).
In the past I resigned mostly due to lack of time, and what has changed is that these days I have more free time - primarily because my wife works in accident & emergency and has "funny shifts". This means we spend many days and evenings together, then she might work 8pm-8am for three nights in a row, which then becomes Steve-time, and can involve lots of time browsing reddit, coding obsessively, and watching bad TV (currently watching "Lost Girl". Shades of Buffy/Blood Ties/similar. Not bad, but not great.)
My NM-progress can be tracked here, and once accepted I have a plan for my activities.
I believe this will be useful, even though there will be limits - I've no patience for PHP and will just ignore it, along with its ecosystem, for example.
As progress today I reported #754899 / CVE-2014-4978 against Rawstudio, and discussed some issues with the ITP of tiptop (the program seems semi-expected to be installed setuid(0), but if it is then it will allow arbitrary files to be truncated/overwritten via "tiptop -W /path/to/file").
And now sleep.
So I've recently posted a few links on Twitter, and I see followers clicking them. But I also see random hits.
Within two minutes I had 15 visitors, the first few of which were:

    IP               User-Agent                                                                            Request
    184.108.40.206   Twitterbot/1.0                                                                        GET /robots.txt
    220.127.116.11   Twitterbot/1.0                                                                        GET /robots.txt
    18.104.22.168    python-requests/1.2.3 CPython/2.7.2+ Linux/3.0.0-16-virtual                           HEAD /
    22.214.171.124   Mozilla/5.0 ()                                                                        GET /
    126.96.36.199    Google-HTTP-Java-Client/1.17.0-rc (gzip)                                              HEAD /
    188.8.131.52     Google-HTTP-Java-Client/1.17.0-rc (gzip)                                              HEAD /
    184.108.40.206   Twitterbot/1.0                                                                        GET /robots.txt
    220.127.116.11   Mozilla/5.0 (compatible; TweetmemeBot/3.0; +http://tweetmeme.com/)                    GET /
    18.104.22.168    MetaURI API/2.0 +metauri.com                                                          GET /
    22.214.171.124   Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)   GET /robots.txt
So what jumps out? The Twitterbot makes several requests for /robots.txt, but never actually fetches the page itself, which is interesting because the supplied /robots.txt file does indeed prohibit it.
A surprise was that both Google and Yahoo seem to follow Twitter links in almost real-time. Though the Yahoo spider parsed and honoured /robots.txt, the Google spider seemed to only make HEAD requests, and never actually looked for the content or the robots file.
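If you want to check what a given bot should be doing, WWW::RobotRules will parse a /robots.txt for you. A minimal sketch, assuming the site is http://example.com and the bot identifies itself as Twitterbot (both placeholders):

    #!/usr/bin/perl
    # Ask whether a given User-Agent may fetch a page, according to
    # the site's /robots.txt.  Site and UA are placeholders.
    use strict;
    use warnings;
    use LWP::Simple qw( get );
    use WWW::RobotRules;

    my $agent = "Twitterbot";
    my $site  = "http://example.com";

    my $rules = WWW::RobotRules->new($agent);
    my $txt   = get("$site/robots.txt") // "";
    $rules->parse( "$site/robots.txt", $txt );

    print $rules->allowed("$site/")
        ? "$agent may fetch $site/\n"
        : "$agent is prohibited from fetching $site/\n";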
In addition to this, a bunch of hosts from the Amazon EC2 space made requests, which was perhaps not a surprise. Some automated processing and classification, no doubt.
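For the curious, the table above is just a view over the webserver's access log. Pulling out the IP, User-Agent, and request made is a few lines of Perl; this sketch assumes an Apache-style "combined" log format, and the log path is an assumption too:

    #!/usr/bin/perl
    # Summarise hits from an Apache-style "combined" access log:
    # print the IP, the User-Agent, and the request made.
    use strict;
    use warnings;

    my $log = "/var/log/apache2/access.log";
    open( my $fh, "<", $log ) or die "Failed to open $log: $!";

    while ( my $line = <$fh> )
    {
        # combined: IP ident user [time] "METHOD /path HTTP/x" status size "referer" "UA"
        if ( $line =~ m!^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+)[^"]*" \d+ \S+ "[^"]*" "([^"]*)"! )
        {
            printf "%-16s %-60s %s %s\n", $1, $4, $2, $3;
        }
    }
    close($fh);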
Anyway beer. It's been a rough weekend.
Last year I removed all my music from Google Play Music and created my own subSonic server. I really like subSonic but don't use it a huge amount, mostly for syncing some music to my phone prior to going on holiday or business. Therefore I've made a single one-time donation to the project rather than paying the ongoing monthly fee.

Installing subSonic on Debian
This is how I install subSonic on Debian Wheezy.

Install Tomcat:

    sudo apt-get install tomcat7

Install subSonic:

    sudo apt-get install ffmpeg
    sudo mkdir /var/subsonic
    sudo chown tomcat7: /var/subsonic
    sudo wget -c https://github.com/KHresearch/subsonic/releases/download/v4.9-kang/subsonic.war
    sudo cp subsonic.war /var/lib/tomcat7/webapps
Restart Tomcat:

    sudo service tomcat7 restart
Log in to subSonic by visiting http://server.example.org:8080/subsonic, using the credentials admin and admin. Make sure you change the password straight away.
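Before logging in you can sanity-check that Tomcat actually deployed the application. A quick sketch, reusing the example hostname above:

    #!/usr/bin/perl
    # Sanity-check that Tomcat has deployed the subSonic webapp.
    # The hostname is the example one used above.
    use strict;
    use warnings;
    use LWP::UserAgent;

    my $ua  = LWP::UserAgent->new( timeout => 10 );
    my $res = $ua->get("http://server.example.org:8080/subsonic/");

    print $res->is_success
        ? "subSonic is answering: " . $res->status_line . "\n"
        : "Not there yet: "         . $res->status_line . "\n";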
Right, that is it. You can stop here and start filling subSonic with your music.
So recently I got into trouble running Redis on a host, because the data no longer fits into RAM.
As an interim measure I fixed this by bumping the RAM allocated to the guest, but a real solution was needed. I figure there are three real alternatives.
Looking around I found a couple of Redis-alternatives, but I was curious to see how hard it would be to hack something useful myself, as a creative solution.
This evening I spotted Protocol::Redis, which is a Perl module for encoding/decoding the Redis wire protocol.
It's a limited implementation which stores data in an SQLite database, and currently supports only a handful of commands.
It isn't hugely fast, but it is fast enough, and it should be possible to use alternative backends in the future.
I suspect I'll not add sets/hashes, but it could be done if somebody was keen.
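For a flavour of how little code this needs, here is a minimal sketch, not my actual implementation: Protocol::Redis handles the wire protocol, DBD::SQLite holds the data, and only GET and SET are handled. The database path, table name, and blocking accept-loop are all arbitrary choices:

    #!/usr/bin/perl
    # A toy redis-alike: speaks enough of the wire protocol for
    # GET/SET, but stores the values in SQLite rather than RAM.
    # Single-threaded, blocking, and with no real error-handling.
    use strict;
    use warnings;
    use DBI;
    use IO::Socket::INET;
    use Protocol::Redis;

    my $dbh = DBI->connect( "dbi:SQLite:dbname=/tmp/redis.db", "", "",
                            { RaiseError => 1 } );
    $dbh->do("CREATE TABLE IF NOT EXISTS store (key TEXT PRIMARY KEY, val TEXT)");

    my $srv = IO::Socket::INET->new( LocalPort => 6379,
                                     Listen    => 5,
                                     ReuseAddr => 1 ) or die "listen: $!";

    while ( my $client = $srv->accept() )
    {
        my $proto = Protocol::Redis->new( api => 1 );

        while ( sysread( $client, my $buf, 4096 ) )
        {
            $proto->parse($buf);

            while ( my $msg = $proto->get_message )
            {
                # Commands arrive as arrays of bulk-strings.
                my @args = map { $_->{data} } @{ $msg->{data} };
                my $cmd  = uc shift @args;

                if ( $cmd eq "SET" )
                {
                    $dbh->do( "INSERT OR REPLACE INTO store (key,val) VALUES (?,?)",
                              undef, @args[ 0, 1 ] );
                    print $client $proto->encode( { type => '+', data => 'OK' } );
                }
                elsif ( $cmd eq "GET" )
                {
                    my ($val) = $dbh->selectrow_array(
                        "SELECT val FROM store WHERE key=?", undef, $args[0] );
                    print $client $proto->encode( { type => '$', data => $val } );
                }
                else
                {
                    print $client $proto->encode(
                        { type => '-', data => "ERR unknown command '$cmd'" } );
                }
            }
        }
        close($client);
    }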
As my previous post suggested I'd been running a service for a few years, using Redis as a key-value store.
Redis is lovely. If your dataset will fit in RAM. Otherwise it dies hard.
Inspired by memcached, which is a simple key=value store, Redis allows for more operations: sets, hashes, and so on.
As it transpires I mostly set keys to values, so it crossed my mind last night that an alternative to rewriting the service around a non-RAM-constrained store might be to juggle Redis out and replace it with something else.
If it were possible to have a Redis-compatible API which secretly stored the data in LevelDB, SQLite, or even Berkeley DB, then that would solve my RAM-constraint problem, and also be useful.
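The beauty of a protocol-compatible replacement is that existing client code shouldn't need to change at all. For example, assuming the replacement listens on the standard port, the stock Perl Redis client would carry on working unmodified (key and value here are just examples):

    #!/usr/bin/perl
    # Existing code keeps using the stock Redis client, unaware that
    # the server-side store is SQLite.  Key and value are examples.
    use strict;
    use warnings;
    use Redis;

    my $r = Redis->new( server => "127.0.0.1:6379" );
    $r->set( "example:key", "example value" );
    print $r->get("example:key"), "\n";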
Anyway the short version is that this might be a way forward; the real solution might be to use SQLite or Postgres, but that would take a few days' work. For the moment the service has been moved to a donated guest with 2GB of RAM instead of the paltry 512MB it was running on previously.
Happily the server is installed/maintained by my slaughter tool, so reinstalling took about ten minutes. The only hard part was migrating the Redis contents, and that's trivial thanks to the integrated "slaveof" support. (I should write that up regardless though.)
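For completeness, the "slaveof" trick amounts to pointing the new instance at the old one, waiting for the sync to finish, and then promoting it. A sketch of the idea, driven from Perl, with placeholder hostnames:

    #!/usr/bin/perl
    # Migrate Redis contents via replication: make the new instance
    # a replica of the old, wait for the sync, then promote it.
    # Both hostnames are placeholders.
    use strict;
    use warnings;
    use Redis;

    my $new = Redis->new( server => "new-host.example.org:6379" );

    $new->slaveof( "old-host.example.org", 6379 );    # start syncing
    sleep 1 while ( $new->info->{master_link_status} // "" ) ne "up";
    $new->slaveof( "no", "one" );                     # promote to master

    print "Migration complete\n";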