LUG Community Blogs

Steve Kemp: A brief twitter experiment

Planet HantsLUG - Sun, 13/07/2014 - 20:08

So I've recently posted a few links on Twitter, and I see followers clicking them. But I also see random hits.

Tonight I posted a link to http://transient.email/, a domain I use for "anonymous" emailing, specifically to see which bots hit the URL.

Within two minutes I had 15 visitors, the first few of which were:

IP               User-Agent                                                                              Request
199.16.156.124   Twitterbot/1.0;                                                                         GET /robots.txt
199.16.156.126   Twitterbot/1.0;                                                                         GET /robots.txt
54.246.137.243   python-requests/1.2.3 CPython/2.7.2+ Linux/3.0.0-16-virtual                             HEAD /
74.112.131.243   Mozilla/5.0 ();                                                                         GET /
50.18.102.132    Google-HTTP-Java-Client/1.17.0-rc (gzip)                                                HEAD /
50.18.102.132    Google-HTTP-Java-Client/1.17.0-rc (gzip)                                                HEAD /
199.16.156.125   Twitterbot/1.0;                                                                         GET /robots.txt
185.20.4.143     Mozilla/5.0 (compatible; TweetmemeBot/3.0; +http://tweetmeme.com/)                      GET /
23.227.176.34    MetaURI API/2.0 +metauri.com                                                            GET /
74.6.254.127     Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp);    GET /robots.txt

So what jumps out? The twitterbot makes several requests for /robots.txt, but never actually fetches the page itself, which is interesting because there is indeed a prohibition in the supplied /robots.txt file.
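For reference, a blanket prohibition of that kind looks like this:

User-agent: *
Disallow: /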

A surprise was that both Google and Yahoo seem to follow Twitter links in almost real-time. Though the Yahoo spider parsed and honoured /robots.txt, the Google spider seemed only to make HEAD requests - never actually looking for the content or the robots file.

In addition to this, a bunch of hosts from the Amazon EC2 space made requests, which was perhaps not a surprise. Some automated processing and classification, no doubt.

Anyway beer. It's been a rough weekend.

Categories: LUG Community Blogs

Martin Wimpress: subSonic on Debian

Planet HantsLUG - Sat, 12/07/2014 - 12:00

Last year I removed all my music from Google Play Music and created my own subSonic server. I really like subSonic but don't use it a huge amount - mostly for syncing some music to my phone prior to going on holiday or business. Therefore I've made a single, one-time donation to the project rather than paying the ongoing monthly usage fee.

Installing subSonic on Debian

This is how I install subSonic on Debian Wheezy.

Install Tomcat:

sudo apt-get install tomcat7

Install ffmpeg (used for transcoding) and prepare subSonic's data directory:

sudo apt-get install ffmpeg
sudo mkdir /var/subsonic
sudo chown tomcat7: /var/subsonic

Download the subSonic .war and deploy it to Tomcat:

wget -c https://github.com/KHresearch/subsonic/releases/download/v4.9-kang/subsonic.war
sudo cp subsonic.war /var/lib/tomcat7/webapps

Restart Tomcat.

sudo service tomcat7 restart
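If the webapp doesn't appear, the Tomcat log is the place to look; on a stock Debian tomcat7 install that should be:

sudo tail -f /var/log/tomcat7/catalina.out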

Log in to subSonic by visiting http://server.example.org:8080/subsonic, using the credentials admin and admin. Make sure you change the password straight away.

Right, that is it. You can stop here and start filling subSonic with your music.

subSonic clients

On the rare occasions that I listen to music via subSonic I use UltraSonic for Android and Clementine on my Arch Linux workstations.

Categories: LUG Community Blogs

Steve Kemp: A partial perl-implementation of Redis

Planet HantsLUG - Fri, 11/07/2014 - 21:36

So recently I got into trouble running Redis on a host, because the data no longer fits into RAM.

As an interim measure I fixed this by bumping the RAM allocated to the guest, but a real solution was needed. I figure there are three real alternatives:

  • Migrate to Postgres, MySQL, or similar.
  • Use an alternative Redis implementation.
  • Do something creative.

Looking around I found a couple of Redis-alternatives, but I was curious to see how hard it would be to hack something useful myself, as a creative solution.

This evening I spotted Protocol::Redis, which is a perl module for decoding/encoding data to/from a Redis server.

Thinking "Ahah" I wired this module up to AnyEvent::Socket. The end result was predis - A perl-implementation of Redis.

It's a limited implementation which stores data in an SQLite database, and currently has support for:

  • get/set
  • incr/decr
  • del/ping/info

It isn't hugely fast, but it is fast enough, and it should be possible to use alternative backends in the future.
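For a flavour of how this fits together, here is a minimal, hypothetical sketch along the same lines - it is not the actual predis source, and the table schema and command dispatch are my own simplification - which answers GET and SET from an SQLite file:

#!/usr/bin/perl
# Hypothetical sketch, not the real predis source: a tiny server that
# speaks just enough of the Redis protocol to answer GET and SET,
# persisting everything to an SQLite table instead of RAM.
use strict;
use warnings;
use AnyEvent;
use AnyEvent::Socket;
use AnyEvent::Handle;
use Protocol::Redis;
use DBI;

my $dbh = DBI->connect( "dbi:SQLite:dbname=predis.db", "", "",
                        { RaiseError => 1 } );
$dbh->do("CREATE TABLE IF NOT EXISTS store (key TEXT PRIMARY KEY, val TEXT)");

tcp_server undef, 6379, sub {
    my ($fh) = @_;

    # One protocol parser per connection, so partial reads don't interleave.
    my $parser = Protocol::Redis->new( api => 1 );

    my $handle;
    $handle = AnyEvent::Handle->new(
        fh       => $fh,
        on_error => sub { $handle->destroy },
        on_eof   => sub { $handle->destroy },
        on_read  => sub {
            my ($h) = @_;
            $parser->parse( $h->{rbuf} );
            $h->{rbuf} = '';

            # Each complete client command arrives as a multi-bulk message.
            while ( my $msg = $parser->get_message ) {
                my ( $cmd, @args ) = map { $_->{data} } @{ $msg->{data} };

                if ( uc($cmd) eq 'SET' ) {
                    $dbh->do( "INSERT OR REPLACE INTO store (key,val) VALUES (?,?)",
                              undef, $args[0], $args[1] );
                    $h->push_write( $parser->encode( { type => '+', data => 'OK' } ) );
                }
                elsif ( uc($cmd) eq 'GET' ) {
                    my ($val) = $dbh->selectrow_array(
                        "SELECT val FROM store WHERE key = ?", undef, $args[0] );
                    $h->push_write( $parser->encode( { type => '$', data => $val } ) );
                }
                else {
                    $h->push_write( $parser->encode(
                        { type => '-', data => 'ERR unimplemented' } ) );
                }
            }
        },
    );
};

AnyEvent->condvar->recv;    # run the event loop forever

Point redis-cli at localhost and get/set behave as you'd expect; the real predis additionally implements the incr/decr, del, ping, and info commands listed above.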

I suspect I'll not add sets/hashes, but it could be done if somebody was keen.

Categories: LUG Community Blogs

James Taylor: SSL / TLS

Planet ALUG - Thu, 10/07/2014 - 15:09

Is it annoying or not that everyone says SSL Certs and SSL when they really mean TLS?

Does anyone actually mean SSL? Have there been any accidents through people confusing the two?


Categories: LUG Community Blogs

James Taylor: Cloud Computing Deployments … Revisited.

Planet ALUG - Thu, 10/07/2014 - 15:09

So it’s been a few years since I’ve posted, because it’s been so much hard work, and we’ve been pushing really hard on some projects which I just can’t talk about – annoyingly. Anyway, back on March 20th, 2011, I talked about Continual Integration and Continual Deployment and the Cloud, and discussed two main methods – having what we now call ‘Gold Standards’ vs continually updating.

The interesting thing is that as we’ve grown as a company, and as we’ve become more ‘Enterprise’, we’ve brought in more systems administrators and begun to really separate the deployments from the development. The other thing is we have separated our services out into multiple vertical strands, which have different roles. This means we have slightly different processes for Banking or Payment based modules than we do for marketing modules. We’re able to segregate operational and content data from personally identifiable information – PII has much higher regulation on who can access it (and auditing of who does).

Several other key things had to change: for instance, things like the SSL keys of the servers shouldn’t be kept in the development repo. Now, of course not, I hear you yell, but it’s a very blurry line. For instance, should the Django configuration be kept in the repo? Well, yes, because that defines the modules and things like URLs. Should the nginx config be kept in the repo? Well, oh. if you keep *that* in then you would keep your SSL certs in…

So the answer becomes having lots of repos. One repo per application (Django-wise), and one repo per deployment, containing configurations. And then you start looking at build tools to bring a particular server, or cluster of servers, up and running.
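As a purely illustrative sketch (the names are hypothetical), the layout ends up something like:

payments-app/       - application repo: Django code, templates, tests
payments-deploy/    - deployment repo: nginx config, settings, certs
marketing-app/
marketing-deploy/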

The process (for our more secure, audited services) is looking like a tool to bring an AMI up, get everything installed and configured, and then take a snapshot; then a second tool takes that AMI (and all the others needed) and builds the VPC inside of AWS. It’s a step away from the continual deployment strategy, but it is mostly automated.


Categories: LUG Community Blogs

Chris Lamb: Lotteries

Planet ALUG - Thu, 10/07/2014 - 14:00

The cliché is that lotteries are a tax on the mathematically illiterate.

It's easy to have some sympathy for this position. Did you know that trying to get rich by playing the lottery is like trying to commit suicide by flying on commercial airlines? These comparisons are superficially amusing, but to look at lotteries in this rational way seems in itself irrational, ignoring the real motivations of the participants.

Even defined as a tax, lotteries are problematic – far from being progressive or redistributive, it has always seemed suspect when lottery money is spent proudly on high-brow projects such as concert hall restorations and theatre lighting rigs when—with no risk of exaggeration—there is zero overlap between the people who would benefit from the project and those who funded it.

But no, what rankles me more about our lotteries isn't the unsound economics of buying a ticket or even that it's a state-run monopoly, but rather the faux philanthropic way it manages to evade all criticism by talking about the "good causes" it is helping.

Has our discourse become so relative and non-judgemental that when we are told that the lottery does some good, however slight, we are willing to forgive all of the bad? Isn't there something fundamentally dishonest about disguising the avarice, cupidity, escapism and being part of some shared cultural event—that are surely the only incentives to play this game—with some shallow feel-good fluff about good causes? And where are the people doing real good in communities complaining about this corrupting lucre, or are they just happy to take the money and don't want to ask too many awkward questions..?

"Vices are not crimes" claims Lysander Spooner, and I would not want to legislate that citizens cannot make dubious investments in any market, let alone a "lottery market", but we should at least be able to agree that this nasty regressive tax should enjoy no protection nor special privileges from the state, and it should be incapable of getting away with deflecting criticism with a bunch of photogenic children from an inner-city estate clutching a dozen branded footballs.

Categories: LUG Community Blogs

Steve Kemp: Blogspam moved, redis alternatives being examined

Planet HantsLUG - Thu, 10/07/2014 - 09:45

As my previous post suggested I'd been running a service for a few years, using Redis as a key-value store.

Redis is lovely. If your dataset will fit in RAM. Otherwise it dies hard.

Inspired by Memcached, which is a simple key=value store, Redis allows for richer operations: sets, hashes, and so on.
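For example (standard redis-cli invocations):

redis-cli set greeting hello
redis-cli sadd tags linux perl redis     # a set of unique members
redis-cli hset user:1 name Steve         # a field inside a hash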

As it transpires I mostly set keys to values, so it crossed my mind last night that an alternative to rewriting the service against a non-RAM-constrained store might be to juggle Redis out and replace it with something else.

If it were possible to have a Redis-compatible API which secretly stored the data in leveldb, sqlite, or even Berkeley DB, then that would solve my problem of RAM-constraints, and also be useful.

Looking around, there are a few projects in this area: the nds fork of redis, ssdb, etc.

I was hoping to find a Perl Redis::Server module, but sadly nothing exists. I should look at the various node.js stub-servers which exist as they might be easy to hack too.

Anyway, the short version is that this might be a way forward; the real solution might be to use sqlite or postgres, but that would take a few days' work. For the moment the service has been moved to a donated guest and has 2Gb of RAM instead of the paltry 512Mb it was running on previously.

Happily the server is installed/maintained by my slaughter tool, so reinstalling took about ten minutes - the only hard part was migrating the Redis-contents, and that's trivial thanks to the integrated "slave of" support. (I should write that up regardless though.)
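For the curious, that migration is roughly the following, with hostnames as placeholders: on the new guest you make the local Redis a replica of the old one, wait for the sync, then promote it.

redis-cli slaveof old-host.example.com 6379
redis-cli info replication    # wait until master_link_status:up
redis-cli slaveof no one      # promote to a standalone master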

Categories: LUG Community Blogs

Chris Dennis: Self-signed multiple-domain SSL certificates

Planet HantsLUG - Wed, 09/07/2014 - 00:54

I’ve finally worked out how to create self-signed SSL certificates for multiple domain names with openssl. 

These notes relate to Debian GNU/Linux, but the principles will apply to other operating systems.

The first step, to make the process easier and repeatable in the future, is to copy the default configuration file from /etc/ssl/openssl.cnf to a working directory where you can adapt it.

Let’s assume that you’ve copied /etc/ssl/openssl.cnf to ~/project-openssl.cnf.  Edit the new file and set the various default values to the ones that you need — that’s better than having to respond to openssl’s prompts every time you run it.
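For instance, the defaults for the interactive prompts live in the [req_distinguished_name] section; the values below are examples only:

[ req_distinguished_name ]
...
countryName_default = GB
stateOrProvinceName_default = Hampshire
localityName_default = Southampton
0.organizationName_default = Example Ltd
commonName_default = example.com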

For real non-self-signed certificates, you would generate a certificate signing request (.csr) file, ready for a certificate authority to sign it for you.  In that case, you need to follow the instructions at http://wiki.cacert.org/FAQ/subjectAltName.

But for a self-signed certificate, the subjectAltName has to go in a different place.  Make sure you’ve got this line present and un-commented in the [req] section of the config file:

[req]
...
x509_extensions = v3_ca

and then this goes at the end of the [v3_ca] section:

[v3_ca]
...
subjectAltName = @alt_names

[alt_names]
DNS.1 = example.com
DNS.2 = www.example.com
DNS.3 = example.co.uk
DNS.4 = www.example.co.uk

There is (apparently) a limit to the number (or total length) of the alternate names, but I didn’t reach it with 11 domain names.

It’s also possible to add IP addresses to the alt_names section like this:

IP.1 = 192.168.1.1
IP.2 = 192.168.69.14

Then to create the key and self-signed certificate, run commands similar to these:

openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout project.key -out project.crt -config ~/project-openssl.cnf
cp project.crt /etc/ssl/localcerts/
mv project.key /etc/ssl/private/

Note that I move (rather than copy) the key to the private directory to avoid leaving a copy of it lying around unprotected.

You can check that the certificate contains all the domains that you added by running this:

openssl x509 -in project.crt -text -noout | less

Alternative approach

I haven’t tried this, but according to http://apetec.com/support/GenerateSAN-CSR.htm it’s also possible to create a CSR and then self-sign it.
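Presumably you would first generate the key and CSR, something along these lines:

openssl req -new -nodes -newkey rsa:2048 -keyout project.key -out project.csr -config ~/project-openssl.cnf

and then self-sign it, pulling the v3_req extensions in from the config file: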

openssl x509 -req -days 3650 -in project.csr -signkey project.key -out project.crt -extensions v3_req -extfile ~/project-openssl.cnf
Categories: LUG Community Blogs

Steve Kemp: What do you do when your free service is too popular?

Planet HantsLUG - Tue, 08/07/2014 - 17:47

Once upon a time I set up a centralized service for spam-testing blog/forum-comments in real time; that service is BlogSpam.net.

This was created because the Debian Administration site was getting hammered with bogus comments, as was my personal blog.

Today the unfortunate thing happened: the virtual machine this service was running on ran out of RAM and died. The redis-store that holds all the state had grown to exceed the paltry 512Mb allocated to the guest, so the OOM killer took it down.

So I'm at an impasse - either I recode it to use MySQL instead of Redis, or something similar that allows the backing store to exceed RAM, or I shut the thing down.

There seems to be virtually no likelihood that somebody would sponsor a host to run the service, because people just don't pay for this kind of service.

I've temporarily given the guest 1Gb of RAM, but that comes at a cost. I've had to shut down my "builder" host - which is used to build Debian packages via pbuilder.

Offering an API, for free, which has become increasingly popular and yet equally gets almost zero feedback or "thanks" is a bit of a double-edged sword. Because it has so many users it provides a better service - but equally costs more to run in terms of time, effort, and attention.

(And I just realized over the weekend that my Flattr account is full of money (~50 euro) that I can't withdraw - since I deleted my paypal account last year. Oops.)

Meh.

Happy news? I'm avoiding the issue of free services indefinitely with the git-based DNS product, which was covering costs and now is .. doing better. (20% off your first month's bill with coupon "20PERCENT".)

Categories: LUG Community Blogs