As I slowly upgrade all my machines to Debian 8.0 (jessie) they’re all ending up with systemd. That’s fine; my laptop has been running it since it went into testing whenever it was. Mostly I haven’t had to care, but I’m dimly aware that it has a lot of bits I should learn about to make best use of it.
Today I discovered systemctl is-system-running. I’m not sure when I’d normally have cause to use it, but when I ran it it responded with degraded. That’s not right, thought I. How do I figure out what’s wrong? systemctl --state=failed turned out to be the answer.

```
# systemctl --state=failed
UNIT                           LOAD   ACTIVE SUB    DESCRIPTION
● systemd-modules-load.service loaded failed failed Load Kernel Modules

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
```
Ok, so it’s failed to load some kernel modules. What’s it trying to load? systemctl status -l systemd-modules-load.service led me to /lib/systemd/systemd-modules-load, which complained about various printer modules it couldn’t load. It turned out this was because CUPS had dropped them into /etc/modules-load.d/cups-filters.conf on upgrade, and as I don’t have a parallel printer I hadn’t compiled those modules. One of my other machines had also had an issue starting up filesystem quotas (I think because some filesystems hadn’t mounted properly on boot - my fault rather than systemd’s). Fixed that up and then systemctl is-system-running started returning a nice clean running.
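For reference, the offending file was just a plain list of module names, one per line - from memory it looked something like the following, so treat the exact names as illustrative rather than gospel:

```
# /etc/modules-load.d/cups-filters.conf
# parallel-port printer support modules dropped in by cups-filters
lp
ppdev
parport_pc
```

Deleting (or commenting out) the entries for modules you haven't built is enough to make systemd-modules-load.service happy again.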
Now this is probably something that was silently failing back under sysvinit, but of course nothing was tracking that other than some output on boot up. So I feel that I’ve learnt something minor about systemd that actually helped me cleanup my system, and sets me in better stead for when something important fails.
I was first elected to the Software in the Public Interest board back in 2009. I was re-elected in 2012. This July I am up for re-election again. For a variety of reasons I’ve decided not to stand; mostly a combination of the fact that I think 2 terms (6 years) is enough in a single stretch and an inability to devote as much time to the organization as I’d like. I mentioned this at the May board meeting. I’m planning to stay involved where I can.
My main reason for posting this here is to cause people to think about whether they might want to stand for the board. Nominations open on July 1st and run until July 13th. The main thing you need to absolutely commit to is being able to attend the monthly board meeting, which is held on IRC at 20:30 UTC on the second Thursday of the month. They tend to last at most 30 minutes. Of course there’s a variety of tasks that happen in the background, such as answering queries from prospective associated projects or discussing ongoing matters on the membership or board lists depending on circumstances.
It’s my firm belief that SPI do some very important work for the Free software community. Few people realise the wide variety of associated projects. SPI offload the boring admin bits around accepting donations and managing project assets (be those machines, domains, trademarks or whatever), leaving those projects able to concentrate on the actual technical side of things. Most project members don’t realise the involvement of SPI, and that’s largely a good thing as it indicates the system is working. However it also means that there can sometimes be a lack of people wanting to stand at election time, and an absence of diversity amongst the candidates.
I’m happy to answer questions of anyone who might consider standing for the board; #spi on irc.oftc.net is a good place to ask them - I am there as Noodles.
You don't need to install anything to run Go binaries.
At first - I'm sure like many others - I felt a little revulsion when I heard that Go compiles to statically-linked binaries, but after having used and played with Go a bit over the past few weeks, I think it's rather clever and was somewhat ahead of the game. In the current climate where DevOps folks (and developers) are getting excited about containers and componentised services, being able to simply curl a binary and have it usable in your container without needing to install a stack of dependencies is actually pretty powerful. It seems there's a general trend towards preferring readiness of use over efficiency of space used, both in RAM and on disk. And it makes sense; storage is cheap these days. A 10MiB binary is no concern - even if you need several of them - when you have a 1TiB drive. The extravagance of large binaries is no longer so relevant when you're comparing it with your collection of 2GiB bluray rips. The days of needing to count the bytes are gone.
Go has the feeling of C but without all that tedious mucking about in hyperspace memory
Sometimes you just feel you need to write something fairly low level and you want more direct control than you have whilst you're working from the comfort blanket of python or ruby. Go gives you the ability to have well-defined data structures and to care about how much memory you're eating when you know your application needs to process tebibytes of data. What Go doesn't give you is the freedom to muck about in memory, fall off the end of arrays, leave pointers dangling around all over the place, and generally make tiny, tiny mistakes that take years for anyone to discover.
The build system is designed around how we (as developers) use code hosting facilities
Go has a fairly impressive set of features built in but if you need something that's not already included, there's a good chance that someone out there has written what you need. Go provides a package search tool that makes it very easy to find what you're looking for. And when you've found it, using it is stupidly simple. You add an import declaration in your code:

```go
import "github.com/codegangsta/cli"
```
which makes it very clear where the code has come from and where you'd need to go to check the source code and/or documentation. Next, pulling the code down and compiling it ready for linking into your own binary takes a simple:

```shell
go get github.com/codegangsta/cli
```
Go implicitly understands git and the various methods of retrieving code so you just need to tell it where to look and it'll figure the rest out.
In summary, I'm starting to wonder if Google have a time machine. Go seems to have nicely predicted several worries and trends since its announcement: Docker, Heartbleed, and social coding.
Yesterday the new Government published a press release about the forthcoming first meeting of the new National Security Council (NSC). That meeting was due to discuss the Tory administration’s plans for a new Counter-Extremism Bill. The press release includes the following extraordinary statement which is attributed to the Prime Minister:
“For too long, we have been a passively tolerant society, saying to our citizens: as long as you obey the law, we will leave you alone.”
Forgive me, but what exactly is wrong with that view? Personally I think it admirable that we live in a tolerant society (“passive” or not). Certainly I believe that tolerance of difference, tolerance of free speech, tolerance of the right to hold divergent opinion, and to voice that opinion is to be cherished and lauded. And is it not right and proper that a Government should indeed “leave alone” any and all of its citizens who are obeying the law?
Clearly, however, our Prime Minister disagrees with me and believes that a tolerant society is not what we really need in the UK, because the press release continues:
“This government will conclusively turn the page on this failed approach.”
If tolerance is a “failed approach”, what are we likely to see in its place?
A while ago, I switched from tritium to herbstluftwm. In general, it’s been a good move, benefitting from active development and greater stability, even if I do slightly mourn the move from python scripting to a shell client.
One thing that was annoying me was that throwing the pointer into an edge didn’t find anything clickable. Window borders may be pretty, but they’re a pretty poor choice as the thing that you can locate most easily, the thing that is on the screen edge.
It finally annoyed me enough to find the culprit. The .config/herbstluftwm/autostart file said "hc pad 0 26" (to keep enough space for the panel at the top edge). Changing that to "hc pad 0 -8 -7 26 -7" and reconfiguring the panel to be on the bottom (where fewer windows have useful controls) means that throwing the pointer at the top or the sides now usually finds something useful like a scrollbar or a menu.
I wonder if this is a useful enough improvement that I should report it as an enhancement bug.
Without going into any of the details, it's a web application with a front end written using Ember and various services that it calls out to, written using whatever seems appropriate per service.
At the outset of the project, we decided we would bite the bullet and build for Docker from day one. This meant we would get to avoid the usual dependency and developer environment setup nightmares.

The problem
What we quickly realised as we started to put the bare bones of a few of the services in place, was that we had three seemingly conflicting goals for each component and for the application as a whole.
Build images that can be deployed in production.
Allow developers to run services locally.
Provide a means for running unit tests (both by developers and our CI server).
So here's what we've ended up with:

The solution

Or: docker-compose to the rescue

Folder structure
Here's what the project layout looks like:

```
Project
|
+- docker-compose.yml
|
+- Service 1
|  |
|  +- Dockerfile
|  |
|  +- docker-compose.yml
|  |
|  +- <other files>
|
+- Service 2
   |
   +- Dockerfile
   |
   +- docker-compose.yml
   |
   +- <other files>
```

Building for production
This is the easy bit and is where we started first. The Dockerfile for each service was designed to run everything with the defaults. Usually, this is something simple like:

```dockerfile
FROM python:3-onbuild
CMD ["python", "main.py"]
```
Our CI server can easily take these, produce images, and push them to the registry.

Allowing developers to run services locally
This is slightly harder. In general, each service wants to do something slightly different when being run for development; e.g. automatically restarting when code changes. Additionally, we don't want to have to rebuild an image every time we make a code change. This is where docker-compose comes in handy.
The docker-compose.yml at the root of the project folder looks like this:

```yaml
service1:
  build: Service 1
  environment:
    ENV: dev
  volumes:
    - Service 1:/usr/src/app
  links:
    - service2
    - db
  ports:
    - 8001:8000
service2:
  build: Service 2
  environment:
    ENV: dev
  volumes:
    - Service 2:/usr/src/app
  links:
    - service1
    - db
  ports:
    - 8002:8000
db:
  image: mongo
```
This gives us several features right away:
We can locally run all of the services together with docker-compose up
The ENV environment variable is set to dev in each service so that the service can configure itself when it starts to run things in "dev" mode where needed.
The source folder for each service is mounted inside the container. This means you don't need to rebuild the image to try out new code.
Each service is bound to a different port so you can connect to each part directly where needed.
Each service defines links to the other services it needs.
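As an aside, the ENV flag in that compose file is just an ordinary environment variable; inside a Python service, picking it up can be as simple as the sketch below. The debug/auto_reload settings are hypothetical examples of dev-mode behaviour, not our actual config:

```python
import os

# "dev" when started via docker-compose; anything else means production
ENV = os.environ.get("ENV", "production")
DEV_MODE = ENV == "dev"

def make_config():
    """Return settings appropriate for the current environment."""
    return {
        "debug": DEV_MODE,        # e.g. verbose logging
        "auto_reload": DEV_MODE,  # e.g. restart on code changes
    }
```

The nice property is that the image itself is identical in both environments; only the variable injected by compose differs.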
This was the trickiest part to get right. Some services have dependencies on other things even just to get unit tests running. For example, Eve is a huge pain to get running with a fake database so it's much easier to just link it to a temporary "real" database.
Additionally, we didn't want to mess with the idea that the images should run production services by default, but we also didn't want to require folks to churn out complicated docker invocations like docker run --rm -v $(pwd):/usr/src/app --link db:db service1 python -m unittest just to run the test suite after coding up some new features.
So, it was docker-compose to the rescue again :)
Each service has a docker-compose.yml that looks something like:

```yaml
tests:
  build: .
  command: python -m unittest
  volumes:
    - .:/usr/src/app
  links:
    - db
db:
  image: mongo
```
Which sets up any dependencies needed just for the tests, mounts the local source in the container, and runs the desired command for running the tests.
So, a developer (or the CI box) can run the unit tests with:

```shell
docker-compose run tests
```

Summary
Each Dockerfile builds an image that can go straight into production without further configuration required.
Each image runs in "developer mode" if the ENV environment variable is set.
Running docker-compose up from the root of the project gets you a full stack running locally in developer mode.
Running docker-compose run tests in each service's own folder will run the unit tests for that service - starting any dependencies as needed.
I've been meaning to blog about the podcasts I listen to and the setup I use for consuming them, as both have evolved a little over the past few months.

The podcasts
I use syncthing to have those replicated to my laptop and home media server.
When I'm cycling to work or in the car, I use the mp3 player to listen to them. (No, when I'm in the car, I plug it in to the stereo, I don't drive with headphones on :P)
When I'm sitting at a computer or at home, I use Plex to serve up podcasts from my home media box.
I keep on top of everything by making sure that I move (rather than copy) when putting things on the mp3 player and rely on Syncthing to remove listened-to podcasts from everywhere else.
It's not the most elegant setup I've heard of but it's simple and works for me :)

What next?
I find I have a lot of things I want to listen to and not really enough time to listen to them in. I've heard that some people speed podcasts up (I've heard as much as 50%). Does anyone do this? Does it make things any less enjoyable to listen to? I really enjoy the quality of what I listen to; I don't want to feel like I'm just consuming information for the sake of it.
Earlier this week I allowed some of my colleagues at work to try the Kimchi and one of them (James) liked it enough to ask for a tub of it just for himself. Being a lovely person I obliged.
My darling Rob decided that what he really wanted was Kimchiguk or Kimchi-jjigae. Being both wonderful and lacking in ingredients, I set about achieving something.
I read a few recipes, worked out that I lacked critical ingredients and so set about simulating Kimchiguk. I grabbed two and a bit cupfuls of kimchi from the box in the fridge, chopped it up more finely, popped it into a pot along with about a pound of cubed pork belly, two teaspoons of hot pepper flakes, two teaspoons of sugar, and two teaspoons of cornflour. I then topped that off with five cups of water, mixed it up, brought it to the boil and simmered it for 30 minutes. Then I cubed some tofu, popped that in and simmered it for another 10 minutes while some rice cooked. Served in a bowl over rice, my approximation of Kimchiguk turned out pretty well. (Note it fed both of us and there was a serving left over.)
With a do-over, I'd try harder to get pork shoulder (belly is quite fatty) and given the option I'd wait until I had hot pepper paste because the malty fermented sweetness of hot pepper paste is pretty much impossible to emulate otherwise.
Last night we tried the small amount of 'fresh kimchi' from my epic kimchi making trial.
I marinaded some beef strips in a blended paste of onion, garlic, ginger, and sesame oil for a few hours before frying that up, and serving on rice with stir-fried carrots, leek and spring onion. A little fresh kimchi on the side for flavour.
I think the Kimchi was a success. Dearest Rob said that it tasted of Kimchi. The flavours were not very well combined and quite clearly underdeveloped, and the pepper flakes were still a little hard. I expect this to resolve over the next few days as it begins to ferment.
All in all, experiment going well, expect part 3 when I know how it tastes after fermentation begins.
Theresa May hasn’t wasted any time. The Independent reports today that Ms May (Home Secretary in the coalition administration) has said that the new Tory administration will bring the Draft Communications Data Bill, previously blocked by the Liberal Democrats, back to the House of Commons with the intention of getting it passed into law. As the Independent also reports, dear Dave, who is, let us say, technically challenged, has in the past expressed the view that no form of communication should be unreadable by the Government. This implies severe restrictions on all forms of encryption.
Given that the Tories now have the majority they lacked in the last administration, it is clear that they will see themselves free to attack the kind of liberties I, and millions like me, enjoy and cherish. The Open Rights Group maintain a wiki devoted to the relevant points of each political party’s manifesto concerning surveillance or other possible attacks on privacy. As they point out, the Tory party is committed to:
There are, of course, huge practical and technical difficulties in implementing much of what the Tories wish to do (consider for example the idiocy of attempting to outlaw VPN technology) but that won’t stop them trying. Indeed, some of the technical difficulties may cause the new administration to bring in mechanisms to get around those problems. An obvious example would be the requirement for key escrow for anyone wishing to use encryption.
Excuse me if I find that unacceptable. Time to encrypt much, much more of my everyday activity from now on.
The early results of yesterday’s poll are depressing beyond belief. It looks almost certain that the Tory party will have sufficient seats to form the next government.
I don’t often make party political points here (though my political leanings may sometimes be obvious) but I was reminded today of Neil Kinnock’s heart rending speech in Bridgend, Glamorgan, on Tuesday 7 June 1983, two days before the election in which Margaret Thatcher was returned as Prime Minister.
“If Margaret Thatcher wins on Thursday, I warn you not to be ordinary. I warn you not to be young. I warn you not to fall ill. I warn you not to get old.”
Those words resonate even more today than they did 32 years ago. I fear for the old, the poor, the dispossessed, the weak, the young, the sick and yes, indeed, the ordinary people of this country. David Cameron and his cronies both inside and outside Government will now return to the task of dismantling all that is good and admirable about our society. A society should be judged on the way it treats its weakest and least able members. Cameron’s Tories are, at heart, brutal and uncaring. That frightens me.
I have spent today making my first ever batch of Kimchi. I have been documenting it in photos as I go, but thought I'd write up what I did so that if anyone else fancies having a go too, we can compare results.
For a start, this recipe is nowhere near "traditional" because I don't have access to certain ingredients such as glutinous rice flour. I'm sure if I searched in many of the Asian supermarkets around the city centre I could find it, but I'm lazy so I didn't even try.
I am not writing this up as a traditional recipe because I'm kinda making it up as I go along, with hints from various sources including the great and wonderful Maangchi whose YouTube channel I follow. Observant readers or followers of Maangchi will recognise the recipe as being close to her Easy Kimchi recipe, however since I'm useless, it won't be exact. If this batch turns out okay then I'll write it up as a proper recipe for you to follow.
I started off with three Chinese Leaf cabbages which seemed to be about 1.5kg or so once I'd stripped the less nice outer leaves, cored and chopped them.
I then soaked and drained the cabbage in cold water...
...before sprinkling a total of one third of a cup of salt over the cabbage and mixing it to distribute the salt.
Then I returned to the cabbage every 30 minutes to re-mix it a total of three times. After the cabbage had been salted for perhaps 1h45m or so, I rinsed it out. Maangchi recommends washing the cabbage three times so that's what I did before setting it out to drain in a colander.
Maangchi then calls for the creation of a porridge made from sweet rice flour which it turns out is very glutinous. Since I lack the ability to get that flour easily I substituted cornflour which I hope will be okay and then continued as before. One cup of water, one third of a cup of cornflour was heated until it started to bubble and then one sixth of a cup of sugar was added. Stirred throughout, once it went translucent I turned the heat off and proceeded.
One half of a red onion, a good thumb (once peeled) of ginger, half a bulb of garlic and one third of a cup of fish sauce went into a mini-zizzer. I then diagonal-chopped about five spring onions, and one leek, before cutting a fair sized carrot into inch long pieces before halving and then thinly slicing it. Maangchi calls for julienned carrots but I am not that patient.
Into the cooled porridge I put two thirds of a cup of korean hot pepper flakes (I have the coarse, but a mix of coarse and fine would possibly be better), the zizzed onion/garlic/ginger/fish mix and the vegetables...
...before mixing that thoroughly with a spatula.
Next came the messy bit (I put latex gloves on, and thoroughly washed my gloved hands for this). Into my largest mixing bowl went a handful of the drained cabbage and a handful of the pepper mix. I mixed this thoroughly before adding another handful of each, repeating until all the cabbage and hot pepper mixed vegetables were well combined. I really got my arms into it, squishing it around, separating the leek somewhat, etc.
As a final task, I scooped the kimchi into a clicklok type box, pressing it down firmly to try and remove all the air bubbles, before sealing it in for a jolly good fermenting effort. I will have to remove a little tonight for our dinner (beef strips marinated in onion, ginger and garlic, with kimchi on rice) but the rest will then sit to ferment for a bit. Expect a part-2 with the report from tonight's dinner and a part-3 with the results after fermentation.
As an aside, I got my hot pepper flakes from Sous Chef who, it turns out, also stock glutinous rice flour -- I may have to get some more from them in the future. (#notsponsored)
I was playing shop the other day with my son as he'd recently been bought a new toy till with money and food and it was going fairly well until he got a little excited and started saying "I want more money". At this, I started telling him that he couldn't just get money for nothing and that he'd need to sell me something else from his shop. This made my wife uncomfortable and she asked me to stop as she didn't want him being indoctrinated with such capitalist ideas from so early an age.
So, what's the alternative? I'm certainly not convinced there's anything wrong with teaching my son the value of currency and trade but similarly, I sort of agree that I don't want him to grow up only aware of one way that the world can work.
Educate for what's practical in the world he's growing up in and risk indoctrination or play through other scenarios and risk him growing up not knowing how to deal with financial matters?
How would that even work? I'm not sure I know how to play socialist shop. Maybe my parents failed ;)
I'm an arch linux user and I love it; there's no other distro for me. The things that arch gets criticism for are the exact same reasons I love it and they all more or less boil down to one thing: arch does not hold your hand.
It's been a while since an update in arch caused me any problems but it did today.
It seems there's an issue with the latest version of wpa_supplicant which renders it incompatible with the way wifi is setup at boot time. The problem was caught and resolved very quickly by package maintainers who simply rolled the wpa_supplicant package back. However, I was unlucky enough to have caught the intervening upgrade shortly before turning my laptop off. I came home this evening to find I had no wifi!
This wasn't a huge challenge but I haven't written a blog post for a while and someone might find this useful:
If your wifi doesn't start at boot...
And you're using a laptop with no ethernet port...
And you know an upgrade will solve your problem...
How do you get internet so you can upgrade?
First, find the name of your wireless interface:

```shell
iw dev
```
Which will output something like:

```
phy#0
	Interface wlp2s0
		ifindex 2
		wdev 0x1
		addr e8:b1:fc:6c:bf:b5
		type managed
		channel 11 (2462 MHz), width: 20 MHz, center1: 2462 MHz
```
Where wlp2s0 is the bit we're interested in.
Now bring the interface up:

```shell
ip link set wlp2s0 up
```
Connect to the access point:

```shell
iw dev wlp2s0 connect "AP name"
```
Create a temporary configuration file for wpa_supplicant:

```shell
wpa_passphrase "AP name" "password" > /tmp/wpa.config
```
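Incidentally, the file wpa_passphrase writes is just an ordinary wpa_supplicant network block along these lines - note the hex PSK below is a made-up placeholder, not a real key:

```
network={
	ssid="AP name"
	#psk="password"
	psk=59e0d07fa4c7741797a4e394f38a5c321e3bed51d54ad5fcbd3f84bc7415d73d
}
```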
Run wpa_supplicant to authenticate with the access point:

```shell
wpa_supplicant -iwlp2s0 -c/tmp/wpa.config
```
In another terminal (or you could have backgrounded the above), run dhcpcd to get an IP address from your router:

```shell
dhcpcd wlp2s0
```
Update and reboot or whatever :)
Over the past few months, I've mentioned to a few people that I like the idea of Firefox OS and that I've used the emulator and been fairly impressed. Well I've just put my money where my mouth is and ordered a Geeksphone revolution.
I'll admit that the decision to buy the phone wasn't purely to support Firefox OS; my hand was forced and I faced a dilemma:
After dropping my Nexus 4 and cracking the screen (leaving the touchscreen only partially working), I found out that a repair with genuine parts would cost me nearly £100. My N4 being quite old now - and fairly crashy with Lollipop - I couldn't quite justify the repair cost.
To be honest, I've been enjoying living without a usable phone except for the odd rare moments when I've needed to make a call while out. For that reason I decided that a brand new top of the range phone was out of the question.
I didn't want to go as far as getting a feature phone (I'm so used to having a smart phone now) and the Revolution seems like it offers a good compromise on cost vs. features.
So the intention is to live with Firefox OS but if it turns out to be ghastly, the phone will happily run AOSP and I can switch back and forth at will - yes, it even dual boots.
I'm sure I'll waffle on about my first impressions when it arrives...
I previously wrote about tracking a ship around the world, but never followed up with the practical details involved with shipping my life from the San Francisco Bay Area back to Belfast. So here they are, in the hope they provide a useful data point for anyone considering a similar move.
Firstly, move out. I was in a one bedroom apartment in Fremont, CA. At the time I was leaving the US I didn’t have anywhere for my belongings to go - the hope was I’d be back in the Bay Area, but there was a reasonable chance I was going to end up in Belfast or somewhere in England. So on January 24th 2014 I had all of my belongings moved out and put into storage, pending some information about where I might be longer term. When I say all of my belongings I mean that; I took 2 suitcases and everything else went into storage. That means all the furniture for probably a 2 bed apartment (I’d moved out of somewhere a bit larger) - the US doesn’t really seem to go in for the concept of a furnished lease the same way as the UK does.
I had deliberately picked a moving company that could handle the move out, the storage and the (potential) shipping. They handed off to a 3rd party for the far end bit, but that was to be expected. Having only one contact to deal with throughout the process really helped.
Fast forward 8 months and on September 21st I contacted my storage company to ask about getting some sort of rough shipping quote and timescales to Belfast. The estimate came back as around a 4-6 week shipping time, which was a lot faster than I was expecting. However it turned out this was the slow option. On October 27th (delay largely due to waiting for confirmation of when I’d definitely have keys on the new place) I gave the go ahead.
Container pickup (I ended up with exclusive use of a 20ft container - not quite full, but not worth part shipment) from the storage location was originally due on November 7th. Various delays at the Port of Oakland meant this didn’t happen until November 17th. It then sat in Oakland until December 2nd. At that point the ETA into Southampton was January 8th. Various other delays, including a week off the coast of LA (yay West Coast Port Backups) meant that the ship finally arrived in Southampton on January 13th. It then had to get to Belfast and clear customs. On January 22nd 2015, 2 days shy of a year since I’d seen them, my belongings and I were reunited.
So, on the face of it, the actual time on the ship was only slightly over 6 weeks, but all of the extra bits meant that the total time from “Ship it” to “I have it” was nearly 3 months. Which to be honest is more like what I was expecting. The lesson: don’t forget to factor in delays at every stage.
The relocation cost in the region of US$8000. It was more than I’d expected, but far cheaper than the cost of buying all my furniture again (plus the fact there were various things I couldn’t easily replace that were in storage). That cost didn’t cover the initial move into storage or the storage fees - it covered taking things out, packing them up for shipment and everything after that. Including delivery to a (UK) 3rd floor apartment at the far end and insurance. It’s important to note that I’d included this detail before shipment - the quote specifically mentioned it, which was useful when the local end tried to levy an additional charge for the 3rd floor aspect. They were fine once I showed them the quote as including that detail.
Getting an entire apartment worth of things I hadn’t seen in so long really did feel a bit like a second Christmas. I’d forgotten a lot of the things I had, and it was lovely to basically get a “home in a container” delivered.
About four years ago I was getting a huge volume of backscatter email to the non-existent address email@example.com. After a month or so it started to go quiet and eventually I got hardly any hits on that (or any other) address. A couple of weeks or so ago they came back. My logs for weeks ending 15 March, 22 March and 29 March show 92%, 96% and 94% respectively of all email to my main mail server is failed connection attempts from Russian domains to dear old non-existent “info”. Out of curiosity I decided to capture some of the inbound mails. Most were in Russian, but the odd one or two were in (broken) english. Below is a typical example:
```
To:
Subject: Are you still looking for love? Look at my photos!
Date: Thu, 12 Mar 2015 15:22:08 +0300
X-Mailer: Microsoft Windows Live Mail 16.4.3528.331
```
Are you still looking for love? I will be very pleased to become your half and save you from loneliness. My name is Olga, 25 years old.
For now I live in Russia, but it’s a bad time in my country, and I think about moving to another state.
I need a safer place for life, is your country good for that?
If you are interested and want to get in touch with me, just look at this international dating site.
Hope to see you soon!
Just click here!
Sadly, I believe that many recipients of such emails will indeed, “click here”. Certainly enough to further propagate whatever malware was used to compromise the end system which actually sent the above email.
An old friend of mine has expressed some concern at the lack of activity on trivia of late. In his most recent email to me he said:
“You really should revive Baldric you know. Everyone will believe it if you just say you were kidnapped by aliens, and then you can just resume where you left off.”
So Peter, this one is just for you. Oh, and Happy Birthday too.
Or: Finding out what crud you installed that's eating all of your space in Arch Linux
I started running out of space on one of my Arch boxes and wondered (beyond what was in my home directory) what I'd installed that was eating up all the space.
A little bit of bash-fu does the job:

```shell
for pkg in $(pacman -Qq); do
    size=$(pacman -Qi $pkg | grep "Installed Size" | cut -d ":" -f 2)
    echo "$size | $pkg"
done | sed -e 's/ //g' | sort -h
```
This outputs a list of packages with those using the most disk space at the bottom:

```
25.99MiB|llvm-libs
31.68MiB|raspberrypi-firmware-examples
32.69MiB|systemd
32.86MiB|glibc
41.88MiB|perl
54.31MiB|gtk2
62.13MiB|python2
73.27MiB|gcc
77.93MiB|python
84.21MiB|linux-firmware
```
The above is from my pi; not much I can uninstall there ;)
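The only non-obvious bit of that pipeline is sort -h, which understands human-readable size suffixes (KiB < MiB < GiB) rather than sorting the strings alphabetically. You can see it in isolation with some made-up package sizes (the package names here are just examples):

```shell
# sort -h compares the numeric prefix plus its size suffix,
# so KiB entries sort before MiB entries before GiB entries
printf '%s\n' \
    '84.21MiB|linux-firmware' \
    '1.20GiB|texlive-core' \
    '612.00KiB|which' | sort -h
```

A plain sort would put 1.20GiB first (1 < 6 < 8 as characters), which is exactly the wrong answer.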
With all the tech world moving towards the idea that you have a single device that does everything, I've found myself suckered in to the convergence ideal in recent months. I was even genuinely excited by the recent video about Unity 8.
I have an Android phone that I use for a lot of purposes (nothing unusual: music, podcasts, messaging, web, phone calls) and a tablet that I use for more or less the same set of things with a bigger screen.
Yesterday, I managed to break my phone's screen leaving me with the horrifying prospect of a ride to work without being able to catch up on Linux Luddites (cha-ching) so I dug out my old mp3 player and it reminded me how clean and efficient the interface was.
Until a lot more work has happened, I've officially woken up from the convergence dream.
I have a Kindle Paperwhite. I use it to read books. It's fantastic for that.
The web browser is dire.
I have a Nexus 7. I use it to surf the information superhighway. The web browser is great.
It's not so good for reading books - the display is too bright (even on a low setting).
On-the-move Music / Podcasts
I've fallen back in love with the MuVo and I think I'm going to stick with it for a while.
It is, however, much too big to use in bed.
Gaming / home media
I don't do much gaming so that doesn't warrant its own hardware but shares a home in the PC behind the telly which runs a Plex server and Kodi and has a big hard drive attached with all of our media on it.
I have a watch. It's a Pebble so it tells me when things are going on but it's mostly there to tell me what time it is (and relatedly, when I'm about to be late for a meeting).
I'm left feeling a little unsure of what I need a phone for now. Breaking it and subsequently not missing it one iota today speaks volumes. I barely use SMS (I mostly stay in contact through various other means) and I don't make or receive enough phone calls for it to seem worth carting an expensive oblong around.
I'm sure I'll change my mind soon.