This post is not intended to start a flame/holy war or any other kind of religious conflict with regard to Linux desktop environments (DEs). What it is intended to do is simply catalogue the multitude of problems I have been encountering while using Debian Jessie and GNOME 3.14.

I LOVE GNOME (I truly do)
“Wait,” you might say, “doesn’t this conflict with the title of this blog post?”
Well yes, it does. But I want you, my learned reader, to understand that I wish that the GNOME DE was as stable and solid as it should be. As it could be. And hopefully as it will be.
You see, this is what Linux and other Unix-like operating systems have been known and reputed for – their stability. I love what the GNOME devs did when they decided to reimagine the desktop for GNOME 3: they used space sensibly, vertically, which to me feels more natural and intuitive. And I love how it’s meant to stay out of the way – another good design motif.
But in terms of stability, sadly, GNOME has been something of a disappointment to me, and I wish this were not the case. Perhaps this is just a consequence of its ambition, and that will always garner my respect. Or maybe my install went terribly wrong somewhere. But I don’t reckon so. So, without further ado…
DISCLAIMER: WRT the issues with Debian Jessie’s implementation of GNOME Shell/GNOME 3, I shall simply refer to it as GNOME. I apologise to the purists out there. I am only commenting on my experience in Debian Jessie, not anyone else’s, nor of any other GNU/Linux distribution. Finally, I intentionally do not go into detail here and am not providing numerous distro/upstream links to “validate” my own claims. I don’t need to. If you’re interested, just search for anything I have put below. I am pretty confident you will find stuff…

The 10 Problems
Have you had similar experiences to these? Do comment below.

1. Tracker
The problems with GNOME start from the very moment you log in: it’s a disk-thrashing sluggard of a desktop. And yes, I am using a spinning disk, not an SSD. Why? Because badly written software doesn’t deserve a place in my CPU, let alone being so resource-hogging as to require an SSD.
So yes, Tracker is the first problem with GNOME. From logging in, all the way through your session, to shutting down your machine, it’s there – consuming all available CPU, disk I/O and (perhaps due to a memory leak), system memory. Happily gobbling it all up like a sickly child with no manners.
Perhaps I am being unfair, implying that Tracker is “bad software”. It’s not a bad idea and its search seems to work well. But it doesn’t rein itself in. And software that doesn’t adhere to users’ choices, made through its own preferences panel, is software that needs attention.
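That said, Tracker can be reined in from a terminal. The following is a sketch from memory of the Tracker 1.x-era knobs on Jessie; treat the schema and key names as assumptions and verify them with gsettings first:

```shell
# Taming Tracker from a terminal (Tracker 1.x / GNOME 3.14-era settings).
# Schema and key names are from memory - verify them on your system with:
#   gsettings list-recursively | grep -i tracker

# Tell the filesystem miner never to crawl (-2 means "disabled")...
gsettings set org.freedesktop.Tracker.Miner.Files crawling-interval -2
# ...and stop it watching directories for changes
gsettings set org.freedesktop.Tracker.Miner.Files enable-monitors false

# Then kill the running daemons and remove the existing index
tracker-control --hard-reset
```

These are configuration commands rather than a script, so run them one at a time and check each result.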
There are too many people and posts on the web with similar experiences. But why not just disable tracking completely, you ask? Like, through the GUI, you mean..? Mmm.

2. Crashes and Freezes
Next up is something akin to heresy: crashing and freezing of the whole desktop UI. Seriously, it’s that bad.
You are in the middle of something, as you might be in a productive desktop environment, and BAM! No window response. That’s it. All gone. This single issue is by far the most perplexing and irritating, totally demolishing my productivity recently.
When you start searching on t’interweb about this, you realise that this has haunted GNOME for years, and in multiple versions. The nearest posts I have found on the web which seem related to the problem I have are here:
3. User Switching

An alternative way to make GNOME hang on you is to use live user switching. Just set up another user account, then Switch User via the user menu. Then, as your new user, switch back to your original account.
Do this a few times for maximum effect, until you get stuck looking at the frozen greeter, just after it’s accepted your password for logging back in.
Enjoy the view.
It’ll last a while.
In fact, no need to take a photo. This’ll last long enough.
Moving on…

4. GNOME Online Accounts
Ahh, GOA. Such a good idea. Implemented in such an average way.
GNOME Online Accounts is meant to centralise internet service (or “cloud”, hwk-ding) accounts through one easy GUI component, and then share the online resources of each account with the appropriate desktop software. Think of Google Calendar being visible in your desktop calendar, which is a separate desktop application from, say, your email reader (where you could read your Gmail). But there’s no need to set up each application separately; just set up the GOA and each application gets the relevant access. Get the idea?
The account set-up bit of this is, actually, great. I’m all for it too – this whole concept. It just makes so much sense.
One of the problems with it is that things don’t work properly. For example, if you use two-factor authentication in your Google account, and rely on application-specific passwords, then GOA doesn’t like that. You will be constantly prompted for your Google account password, which is never accepted.
To be fair to Jessie, I haven’t seen this happen recently, so it may have finally been plugged. Or I may just be lucky.

5. Evolution’s management of GOA’s SMTP/IMAP accounts
Another problem is SMTP/IMAP accounts. Sure, they integrate nicely with Evolution. Until you edit parts of the account in Evolution, which are more application-specific. Then, you return to your account folders list with your GOA mail account being renamed to “Untitled”. A rummage through, and edit of, the relevant ~/.config files is required to correct this error. Not so slick.
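For what it’s worth, the stray “Untitled” entry is just text in a config file, so a recursive grep will locate it. The paths below are where Evolution and GOA keep their settings on my Jessie system; treat them as assumptions for yours:

```shell
# List every config file under Evolution's and GOA's trees that mentions
# the bogus "Untitled" account name (paths as found on a Jessie home dir)
grep -rl "Untitled" ~/.config/evolution ~/.config/goa-1.0 2>/dev/null \
  || echo "no matches found"
```

Each file it lists is a candidate for careful hand-editing (back them up first).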
I still have hope though. One day this stuff will work great.

6. Evolution Hangs
Yep, another hangy-crashy thing. Sometimes, for no discernible reason, when you close Evolution it hangs, mid-termination. Forever. You have to send it a KILL signal to actually get it to close off completely. Why? Who knows. It appears to be a timeout or spinlock type of problem. Sorry for being vague, but look, just do a Google search and pick a year. It looks like this bug has been around in one incarnation or another for a very long time.

7. Nautilus Hangs
Are you seeing a pattern here? Yep, our faithful friend and file utility, Nautilus, also hangs. Quite often. Why it does this, I have not yet been able to determine. SIGKILL to the rescue. (You can do a Google search on this too…)

8. Standby and resume with a remote file system mounted
Now, I admit, this is a silly thing to do when you look at it, because you are clearly asking for trouble if you have a remote filesystem mounted into your own filesystem, and then put your machine to sleep for a while.
You can make the problem worse still if you have a laptop with a docking station. Simply put it to sleep, undock, wake the machine, then reconnect using your wireless instead of Ethernet. The outcome varies from a locked desktop (where nothing works) to a frozen Nautilus.
Again, a silly thing to do, perhaps, but also an innocent mistake at times. Like, when you’re rushing to attend a meeting, for example.
So, why not be offered a notification, when requesting to “sleep” the machine, saying that remote filesystems are mounted? I think even I might be able to knock up some code for that one (but I’d prefer to leave it to the experts, who I respect fully and who would do it far better than I).

9. Audio Output Switching
As you may have gathered from previous comments, when it comes to GNOME I am primarily a business user. My business runs and relies on GNU software & Linux. For the experience and knowledge I have gained – not to mention being able to sustain an income and lifestyle I’m happy with, I am indebted to many people for their determined efforts in the free software community.
Unfortunately, little bugs creep in here and there – that’s the rule of life. One minor annoyance with Jessie, that wasn’t present in its predecessor Wheezy, is automatic audio output switching. In Wheezy, after a small tweak to the kernel module loading (via /etc/modprobe.d), the audio output would be directed to my docking station’s analogue jack when the laptop was docked, and then automatically switch to the laptop’s speakers when undocked.
Unfortunately, in Jessie, when my laptop is docked I have to hit the Super (Windows) key, get to the Sound preferences, and then switch the output device. After undocking, the same story. This is, apparently, fixed upstream, but it’s a regression and annoying nonetheless.

10. The long pauses and (what seems like) catastrophic resource “sharing”
This is so subjective an issue that I thought it barely worth mentioning, but an issue it is nonetheless. And one that I actually feel is perhaps the worst of all.
When key processes are busy in the GNOME Desktop Environment – say Tracker, for the sake of argument – the “hit” on the rest of the system is shocking. Right now, as I type this blog entry, any mouse-based GUI interactions are extremely sluggish. This could be the reason why:

top - 16:34:34 up 2:00, 2 users, load average: 16.31, 15.97, 13.93
So what is causing such a load on my machine? It doesn’t take long to figure it out, in top:

  PID USER  PR  NI    VIRT    RES   SHR S  %CPU %MEM    TIME+ COMMAND
 9187 smd   39  19 2239548 210440 34852 R  83.7  1.3  3:50.74 tracker-extract
 9148 smd   20   0  693940  59696  8652 S   7.6  0.4  4:33.53 tracker-store
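If you’d rather not stare at interactive top, the same culprits can be listed with a one-shot ps command; this assumes the procps version of ps shipped with Debian:

```shell
# The ten most CPU-hungry processes right now, with nice level and memory
ps -eo pid,ni,pcpu,pmem,comm --sort=-pcpu | head -n 11
```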
For reference, my trusty ThinkPad T420 uses a 2nd gen Core i7 processor (dual core w/hyperthreading), 16GB DDR3 memory (dual channel), a 64GB mSATA SSD system drive and 500GB Seagate Momentus 7200.4 drive for my /home. It’s a set-up that’s still powerful enough for getting things done, and I’ve grown quite fond of this chunky, heavy laptop (by 2016 standards). Yes, it’s a bit clunky now, but it’s still got it where it counts, and has only required minimal servicing over the years (since 2011).
Back to the main issue, though. You see, I grew up on Amigas. Fully pre-emptive multitasking spoilt me, and I’ve never looked back, or sideways, since. These days, all modern operating systems provide significantly more advanced multitasking and far, far more powerful hardware, but the user’s needs should always come first in a desktop environment. So, having an unresponsive desktop for hours, because a non-GUI process is taking too much CPU and I/O, is not a productivity boon, to say the least.

And just when you thought my tirade was complete, for a special BONUS…

11. Dejadup/duplicity and the inability to restore a backup!!
I love how well integrated Dejadup is into Nautilus. It’s a neat idea, to be able to just navigate to anywhere on your file-system and then say “hey, you know what? I wonder if that file I was looking for used to live here?”, or “I really must restore the earlier version of this file, or that file…”. And so on. It even states on its website that it “hides the complexity of doing backups the Right Way (encrypted, off-site, and regular) and uses duplicity as the backend” [my link].
‘GNOME Backups’ was designed to facilitate exactly this, using the Dejadup/duplicity combo, with two main Nautilus integration actions. Firstly, you can right-click in a folder (on blank space) and select “Restore missing files”. Or, you can right-click on a specific file and select “Revert to previous version”. In either case, a dialog will appear prompting you to select a date, from a range of dates when backups occurred. Great, huh?
Except a backup is only good when you’re able to restore it. I was not able to restore mine. The “Revert” functionality simply failed every time I tried, with a “File not found in archive”-style error message. I also tried restoring the entire backup, which also failed. This issue pretty much covers it.
I was originally going to entitle this blog post, Debian’s GNOME is a broken user experience, but shied away from making such a bold, and somewhat unfair, claim. However, it’s hard not to conclude that this might actually be the case.
GNOME 2 used to be amazingly solid. In fact, in my younger years I didn’t use it because I perceived it as being a little boring, instead opting for KDE (v2, then v3) as my go-to desktop for quite a while. I would love to have the stability of GNOME 2 – at least as I experienced it – just in GNOME 3 form.
The biggest problem with GNOME 3 / GNOME Shell is that I like it so damn much. For me, despite all the wrinkles and annoyances, the occasional memory leaks of “background” indexing processes, the frequent hanging of various applications and the seemingly (at times) untested nature of the software, it’s actually brilliant. It’s fast, feature-full, yet fluid. That’s a rare combination in software.
For me, it’s faster to work in than any other DE, because it combines enough functionality with equally enough transparency. For instance, when I am editing a client’s website files and want to upload them, Nautilus is the hero – allowing me to quickly mount the remote filesystem, upload my files, and then disconnect. No need to launch additional software for that task. We’re just moving data from one filesystem to another, right? That’s what a file manager does and, in the main, Nautilus is exceptional at it.
I’ve been using Debian for some time now, migrating away from Fedora on my netbook to start with, and then later on my main work laptop. In general it’s an operating system that does so much right, it’s hard when things occasionally don’t work as expected.
I won’t say that Jessie’s innings with GNOME have been the best; far from it. But hopefully we can look forward to a smoother experience as time goes on.
The new 5.2L V12 twin turbo DB11 from Aston has appeared at Geneva, with rather delicious body styling and gorgeous paint. Why say no! (apart from price and availability)
Thanks to David Morris for the following report on February’s meeting
I’ve done a quick summary of the meeting from last night in the hope it will encourage some more people to come along. Feel free to nit pick!
I added Wylug on Meetup.com – http://www.meetup.com/West-Yorkshire-Linux-users-group/ and we always have quite a few people who say they’re coming but rarely do they all turn up. I wonder if it’s because we don’t have the meetings in the city centre. We discussed having the next meeting in the centre so if anyone has any recommendations please send them to the group.
Linux security problems
Since the last meetup we’ve had the Linux Mint website being completely owned, with the download links to the legitimate ISOs replaced by versions including a backdoor and malware. The upshot is that anyone who downloaded Linux Mint on February 20th should read the blog post at http://blog.linuxmint.com/?p=2994 to see how to check whether they have the hacked version.
There’s also been a wider problem with an implementation of one of the core C libraries built into many Linux-based devices – glibc. The issue is that a remote attacker could send a malformed response to a DNS request, overflow the memory allocated for the response on the target computer and then get additional unwanted code to run and take over the computer. The result is pretty bad – a remote exploit that doesn’t require any sort of access to the target computer – but the method of attack is a bit cumbersome and requires the attacker to send the malicious DNS response before a real DNS server can respond. This means the attacker preferably needs to be a man-in-the-middle, which could be accomplished if you connect via a public internet connection (e.g. Starbucks Wifi). There’s a sensationalist write-up about this at Ars Technica and a nice blog about how to achieve this in the real world here: https://blog.cloudflare.com/a-tale-of-a-dns-exploit-cve-2015-7547/
Home users should patch as soon as possible – the same goes for enterprise, but in the short term you could mitigate it by having a local DNS resolver which drops packets larger than 1024 bytes whilst you roll out the patch. The bigger issue is that there’ll be a lot of embedded devices with the same security problem, and not many people have the ability to update them.
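As a rough sketch of that stop-gap (and only a sketch: it needs root, the length match counts the IP/UDP headers as well, and it will also break legitimately large answers such as DNSSEC responses), the dropping could be done with a firewall rule rather than resolver configuration:

```shell
# Stop-gap only, not a fix: drop oversized UDP DNS replies until glibc
# (CVE-2015-7547) is patched everywhere. Run as root.
iptables  -A INPUT -p udp --sport 53 -m length --length 1025:65535 -j DROP
ip6tables -A INPUT -p udp --sport 53 -m length --length 1025:65535 -j DROP
```

Remove the rules again once everything behind them is patched.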
Hooray – there’s a new Pi! Recently announced at the end of February, the new Raspberry Pi 3 has:
1.2GHz Quad Core Broadcom 64 bit processor
Wifi on board – 802.11b/g/n
Bluetooth Low Energy on board
1GB RAM @ 900MHz
4 x USB 2 ports
40 pin GPIO
HDMI, RCA video output
Fully compatible with Raspberry Pi 1 and 2.
It’s around £24.99 + VAT but already sold out everywhere and available on back order.
The Pi 3 uses a bit less power when idling than the Pi 2, and the same at full load, so it’ll save a few pennies if you’re running 10 Pi 2s in a multi-room sound and media serving extravaganza.
I wondered if the Pi 3 might suit as a replacement server for my current monolithic Plex server – but I need transcoding at 1080p and the Plex recommendations are at least a Core 2 Duo 2.4GHz for each stream.
Graham suggested I look at the Intel NUC range – these look like they will manage the job of transcoding very well, the small ones are fanless (and therefore silent in the living room) and they look good so have the all important WAF (wife agreement factor).
Darren wanted to know how we go about generating and using SSH keys. Andy and Graham summarised that you create an SSH key pair using the program ssh-keygen in a terminal. Normally you would want to add a few extra flags for more security and entropy, but for testing you can run ssh-keygen with no flags.
Once you run ssh-keygen, the program will use some random inputs from your computer to generate your own unique set of keys – one public and one private. These are asymmetric keys, so one key can always decrypt something encrypted with the other key (and vice versa), but there’s no way to work out what the other key looks like if you only have one of them. For example, if I have one key and I encrypt the word “sausages”, then only the other key can be used to decrypt the enciphered text back to “sausages”. This allows one person to prove that they hold the matching key, as the only key that can decrypt the encrypted text is the opposite of the one used to encrypt it.
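You can try the “sausages” experiment yourself, using openssl as a stand-in for the SSH key pair (the file names here are arbitrary):

```shell
# All files live in a throwaway directory (names are arbitrary)
workdir=$(mktemp -d) && cd "$workdir"

# Generate an RSA key pair and extract the public half
openssl genrsa -out priv.pem 2048 2>/dev/null
openssl rsa -in priv.pem -pubout -out pub.pem 2>/dev/null

# Encrypt "sausages" with the PUBLIC key...
printf 'sausages' > msg.txt
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in msg.txt -out msg.enc

# ...and only the PRIVATE key can turn it back into the original word
openssl pkeyutl -decrypt -inkey priv.pem -in msg.enc
```

Try decrypting msg.enc with pub.pem instead – it won’t work, which is the whole point.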
The private key is a file which must always be kept safe by yourself and not given to anyone else (unless you want them to impersonate you or log in as you). The public key is a file which you can put onto a remote server so that when you want to log in you don’t necessarily have to type a password, as you can authenticate yourself to the server using your private key against the public key which you previously sent to it. The security benefit of this is that it’s computationally infeasible to brute-force a key using existing hardware in the way that someone might brute-force your SSH password. As long as you keep your SSH private key secure, where no one can steal it, then no one can log in to your servers as you.
When you want to log in, the SSH server sets up an encrypted tunnel between you and the server. You provide the user you want to authenticate as, the server checks whether that user has a public key stored on the server, finds the public key which you placed there previously, and then sends back a message encrypted with the public key. Your SSH program receives the encrypted message, decrypts it using the private (i.e. opposite) key, then re-encodes it with some extra info and sends a hash of this back to the server to prove that it has the private key.
It sounds a bit messy but in practice it’s really easy to do – ssh-keygen generates the keys for you, you need to send the public one to the server you want to automate the login to, then next time when you connect using SSH you tell your SSH client where to look for your private key. There’s a great, and much more thorough, explanation about it here
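In concrete terms, the whole dance is only a couple of commands. The paths, comment, and hostname below are illustrative stand-ins:

```shell
# Generate a key pair non-interactively into a throwaway directory
# (for real use, drop -N "" and let ssh-keygen prompt for a passphrase)
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 4096 -C "you@example.com" -N "" -f "$keydir/id_demo"

# Two files result: the private key and its shareable public half
ls "$keydir"                            # id_demo  id_demo.pub
ssh-keygen -l -f "$keydir/id_demo.pub"  # print the key's fingerprint

# Install the public key on the server, then log in without a password:
#   ssh-copy-id -i "$keydir/id_demo.pub" user@server.example.com
#   ssh -i "$keydir/id_demo" user@server.example.com
```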
Ubuntu 16.04 LTS
The newest version of Ubuntu with Long Term Support (LTS) is nearly ready. This is a major release, and the LTS badge means that Ubuntu will support the system for 5 years with updates and patches – after which it’s normally recommended to upgrade to the newest LTS.
The 16.04 LTS beta is finally available to test out, but none of us had actually tested it, so we have no news to report on what it’s like in practice.
Amazon Dash button with embedded Linux
I went to the Amazon AWS Leeds meetup a few weeks ago. It was very good, definitely recommended, and there was an Amazon representative there who gave a great demonstration and explanation of how the Amazon Dash button works and then replicated its functionality on a Raspberry Pi 2.
We don’t have the Dash button yet in the UK, but I had heard of it before. It’s a small oval-shaped button, supplied for free from Amazon, running embedded Linux, linked to your Amazon account, and it allows you to do only one thing: order one specific product without doing anything except pressing a physical button. For example, if I had an Amazon Dash button for washing powder, I would stick it on my washing machine, and when I noticed I was running out of washing powder, I’d press the button and the next day Amazon would deliver more washing powder. Here’s a picture of how it looks.
When you press the button this boots the device and starts a small LED blinking. After the device boots it connects to the local Wifi, connects to Amazon and then uses the MQTT protocol to send a packet to Amazon AWS with the device ID and the state of the button (e.g. it was pressed, or it was pressed and held down). Amazon processes the incoming packet to see if the device is allowed to order or not. If it is, then it orders the product, sets the device to not be allowed to order any more, and sends back a packet to say the order was successful. If the device isn’t allowed to order then a packet is sent back saying the order failed. If the order was successful the LED turns solid green (or maybe blue, I can’t remember which he said) to let the user know, and red if not. Then, after a few seconds, the device powers off.
When the product is confirmed as received by you (e.g. on signed delivery or a delivery note from the shipment company) then the device logic at Amazon is changed to allow it to order again. This means you can’t accidentally order a ton of product, but you can order the product again as soon as it’s delivered.
There’s a teardown here and an explanation of how to get it to run your own bare-metal code. Apparently you can order the devices from eBay and then reprogram them yourself if you feel inclined to do this.
Dropbox alternative Tresorit: uninstall script deletes everything in the directory it is run from
A few months ago I moved from Dropbox to Tresorit in a bid to provide end-to-end encryption for my backups and shared folders/files.
I tried to uninstall the Tresorit client on my Ubuntu laptop and unfortunately the uninstall script completely removed *everything* in my home folder. The result of this is that it wiped all my user settings, my pictures I had saved in my home folder, some work I had saved only at home, all the SSH keys and SSH config file for remotely connecting to servers and the desktop customisation I had spent a bit of time on to get Ubuntu’s Unity interface looking a bit nicer.
The Tresorit support team wasn’t very understanding, and confirmed that the uninstaller runs rm -rf in the current folder, which results in all your files being wiped. There’s no warning that this will happen and no way to recover the files.
This is a good example of why we should always check the source code of any script file (e.g. those ending with .sh) before running them.
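As a sketch of that habit, here’s the kind of pre-flight check that would have flagged the Tresorit uninstaller. The demo script and the pattern list are illustrative, not exhaustive – nothing replaces actually reading the script:

```shell
# Create a throwaway script that behaves like the offending uninstaller
demo=$(mktemp)
cat > "$demo" <<'EOF'
#!/bin/sh
echo "Uninstalling..."
rm -rf "$PWD"        # <- this is the line you want to spot beforehand
EOF

# Flag the dangerous lines BEFORE ever executing the script
grep -nE 'rm -rf|mkfs|dd if=|> /dev/sd' "$demo"
```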
Dropbox alternative Seafile
Andy has been running the open source Seafile file synchronisation and cloud backup software at his place of work, serving thousands of subscribers using the community edition hosted inside their network. Seafile runs on Linux (it can even run on a Raspberry Pi) and Andy recommended it – it apparently runs great, has client-side encryption, Linux clients, iPhone/Android clients, file snapshots (aka scheduled backups), and the community edition has a lot of features. I’ll certainly be looking at moving to it from Tresorit.
Darren was using a browser I hadn’t seen before, called Abrowser. It’s based on the Firefox source but removes the Firefox copyrighted logo and turns Firefox into completely free software. https://trisquel.info/en/wiki/abrowser-help
William Hill have been in contact to see if we would like any sponsorship. I raised this at the meeting and we all agreed that it might be worthwhile to ask them to provide a talk of how they use Linux and what sort of things they do at William Hill. I’ll get back to them and see what we can come up with.
I’m sure there was a lot more that we discussed but I never remember to make notes so apologies if I’ve missed something out which was important!
The next meeting is scheduled for Monday 28th March. There’s been a few suggestions that we should have the meeting in Leeds City centre as this is easier for more people to attend than the Lord Darcy pub. Any suggestions please let us know.
It’s kind of silly that smartphones commonly have as much, if not more, memory than new laptops.
It’s also kind of sucky that phone screens commonly ship as 2560×1440, whereas premium laptops are still 1920×1080.
For $655 (£470) you can get:
Here is my monthly update covering a large part of what I have been doing in the free software world (previously):
This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:
I also filed 137 FTBFS bugs against aac-tactics, angular.js, astyle, bcftools, blacs-mpi, bogofilter, boxes, caldav-tester, ccdproc, ckeditor, coq-float, cqrlog, dasher, django-recurrence, dspdfviewer, eclipse-egit, ess, etcd, felix-latin, fio, flexml, funny-manpages, gap-atlasrep, garmin-plugin, gitlab, gnome-mines, graphicsmagick, haskell-nettle, healpy, hg-git, hunspell, hwloc, ijs, ipset, janest-core-extended, jpathwatch, kcompletion, kcompletion, keyrings.alt, kodi-pvr-hts, kodi-pvr-vdr-vnsi, libcommons-compress-java, libgnome2-wnck-perl, libkate, liblrdf, libm4ri, libnet-server-mail-perl, libsis-jhdf5-java, libspectre, libteam, libwnck, libwnckmm, libxkbcommon, lombok, lombok-patcher, mako, maven-dependency-analyzer, mopidy-mpris, mricron, multcomp, netty-3.9, numexpr, ocaml-textutils, openimageio, openttd-openmsx, osmcoastline, osmium-tool, php-guzzle, php-net-smartirc, plexus-component-metadata, polari, profitbricks-client, pyentropy, pynn, pyorbital, pypuppetdb, python-aioeventlet, python-certifi, python-hglib, python-kdcproxy, python-matplotlib-venn, python-mne, python-mpop, python-multipletau, python-pbh5tools, python-positional, python-pydot-ng, python-pysam, python-snuggs, python-tasklib, r-cran-arm, r-cran-httpuv, r-cran-tm, rjava, ros-geometry-experimental, ros-image-common, ros-pluginlib, ros-ros-comm, rows, rr, ruby-albino, ruby-awesome-print, ruby-default-value-for, ruby-fast-gettext, ruby-github-linguist, ruby-gruff, ruby-hipchat, ruby-omniauth-crowd, ruby-packetfu, ruby-termios, ruby-thinking-sphinx, ruby-tinder, ruby-versionomy, ruby-zentest, sbsigntool, scikit-learn, scolasync, sdl-image1.2, signon-ui, sisu-guice, sofa-framework, spykeutils, ssreflect, sunpy, tomcat-maven-plugin, topmenu-gtk, trocla, trocla, tzdata, verbiste, wcsaxes, whitedune, wikidiff2, wmaker, xmlbeans, xserver-xorg-input-aiptek & zeroc-icee-java.

FTP Team
As a Debian FTP assistant I ACCEPTed 107 packages: androguard, android-platform-dalvik, android-platform-development, android-platform-frameworks-base, android-platform-frameworks-native, android-platform-libnativehelper, android-platform-system-core, android-platform-system-extras, android-platform-tools-base, android-sdk-meta, apktool, armci-mpi, assertj-core, bart, bind9, caja, caldav-tester, clamav, class.js, diamond, diffoscope, django-webpack-loader, djangocms-admin-style, dnsvi, esptool, fuel-astute, gcc-6-cross, gcc-6-cross-ports, gdal, giella-core, gnupg, golang-github-go-ini-ini, golang-github-tarm-serial, gplaycli, gradle-jflex-plugin, haskell-mountpoints, haskell-simple, hurd, iceweasel, insubstantial, intellij-annotations, jetty9, juce, keyrings.alt, leptonlib, libclamunrar, libdate-pregnancy-perl, libgpg-error, libhtml5parser-java, libica, libvoikko, linux, llvm-toolchain-3.8, lombok-patcher, mate-dock-applet, mate-polkit, mono-reference-assemblies, mxt-app, node-abab, node-array-equal, node-array-flatten, node-array-unique, node-bufferjs, node-cors, node-deep-extend, node-original, node-setimmediate, node-simplesmtp, node-uglify-save-license, node-unpipe, oar, openjdk-8, openjdk-9, pg8000, phantomjs, php-defaults, php-random-compat, php-symfony-polyfill, pnetcdf, postgresql-debversion, pulseaudio-dlna, pyconfigure, pyomo, pysatellites, python-fuelclient, python-m3u8, python-pbh5tools, python-qtpy, python-shellescape, python-tunigo, pyutilib, qhull, r-cran-rjsonio, r-cran-tm, reapr, ruby-fog-dynect, scummvm-tools, symfony, talloc, tesseract, twextpy, unattended-upgrades, uwsgi, vim-command-t, win-iconv, xkcdpass & xserver-xorg-video-ast.
I additionally REJECTed 4 packages.
Recently I had a conversation with a programmer who repeated the adage that programming in perl consists of writing line-noise. This isn't true but it reminded me of my love of fuzzers. Fuzzers are often used to generate random input files which are fed to tools, looking for security problems, segfaults, and similar hilarity.
To the untrained eye the output of most fuzzers is essentially line-noise, since you often start with a valid input file and start flipping bits, swapping bytes, and appending garbage.
Anyway, this made me wonder: what happens if you feed random garbage into a Perl interpreter? I wasn’t brave enough to try it, because knowing my luck the fuzzer would write a program like so:

system( "rm -rf /home/steve" );
But I figured it was still an interesting idea, and I could have a go at fuzzing something else. I picked gawk, the GNU implementation of awk because the codebase is pretty small, and I understand it reasonably well.
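For the curious, the bit-flipping approach needs surprisingly little code. Here’s a deliberately naive sketch (it assumes bash for $RANDOM, and a real fuzzer is far smarter about mutations and coverage):

```shell
# Naive mutation fuzzer: corrupt one random byte of a seed awk program,
# run it through gawk, and keep any input that kills gawk with a signal.
workdir=$(mktemp -d) && cd "$workdir"
printf 'BEGIN { for (i = 0; i < 3; i++) print "hello", i }\n' > seed.awk
mkdir -p crashes

size=$(wc -c < seed.awk)
for i in $(seq 1 50); do
    mutant=$(mktemp)
    cp seed.awk "$mutant"
    # overwrite one random byte of the mutant with a random value
    printf "\\$(printf '%03o' $((RANDOM % 256)))" \
        | dd of="$mutant" bs=1 seek=$((RANDOM % size)) conv=notrunc 2>/dev/null
    gawk -f "$mutant" </dev/null >/dev/null 2>&1
    # an exit status of 128+N means gawk died on signal N (e.g. SIGSEGV)
    if [ $? -ge 128 ]; then cp "$mutant" "crashes/crash-$i.awk"; fi
    rm -f "$mutant"
done
ls crashes | wc -l   # how many crashing inputs were found
```

Most mutants will just be syntax errors, which is why real fuzzing runs millions of iterations rather than fifty.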
Almost immediately my fuzzer found some interesting segfaults and problems. Here’s a nice simple example:

$ gawk 'for (i = ) in steve kemp rocks'
gawk: cmd. line:1: fatal error: internal error: segfault
Aborted
I look forward to seeing what happens when other people fuzz Perl…
This is a bit of an odd posting since it's about something I've done but is also here to help me explain why I did it and thus perhaps encourage some discussion around the topic within the Kicad community...
Recently (as you will know if you follow this blog anywhere it is syndicated) I have started playing with Kicad for the development of some hardware projects I've had a desire for. In addition, some of you may be aware that I used to work for a hardware/software consultancy called Simtec, and there I got to play for a while with an EDA tool called Mentor Designview. Mentor was an expensive, slow, clunky, old-school EDA tool, but I grew to understand and like the workflow.
I spent time looking at gEDA and Eagle when I wanted to get back into hardware hacking for my own ends, but I didn’t really click with either. On the other hand, a mere 10 minutes with Kicad and I knew I had found the tool I wanted to work with long-term.
I designed the beer'o'meter project (a flow meter for the pub we are somehow intimately involved with) and then started on my first personal surface-mount project -- SamDAC which is a DAC designed to work with our HiFi in our study at home.
As I worked on the SamDAC project, I realised that I was missing a very particular thing from Mentor, something which had been a low-level annoyance while looking at other EDA tools – Kicad lacks a mechanism to mark a wire as being linked to somewhere else on the same sheet. Almost all of the EDA tools I’ve looked at seem to lack this nicety, and honestly I miss it greatly, so I figured it was time to see if I could successfully hack on Kicad.
Kicad is written in C++, and it has been mumble mumble years since I last did any C++, either for personal hacking or professionally, so it took a little while for that part of my brain to kick back in enough for me to grok the codebase. Kicad is not a small project, taking around ten minutes to build on my not-inconsiderable computer. And while it beavered away building, I spent time looking around the source code, particularly the schematic editor eeschema.
To skip ahead a bit, after a couple of days of hacking around, I had a proof-of-concept for the intra-sheet links which I had been missing from my days with Mentor, and some ERC (electrical rules checking) to go alongside that to help produce schematics without unwanted "sharp corners".
In total, I added:
All of this is meant to allow schematic capture engineers to more clearly state their intentions regarding what they are drawing. The intra-sheet link could be thought of like a no-connect element, except instead of saying "this explicitly goes nowhere" we're saying "this explicitly goes somewhere else on this sheet, you can go look for it".
Obviously, people who dislike (or simply don't want to use) such intra-sheet link elements can just disable that ERC tickybox and not be bothered by them in the least (well except for the toolbar button and menu item I suppose).
Whether this work gets accepted into Kicad, or festers and dies on the vine, it was good fun developing it and I'd like to illustrate how it could help you, and why I wrote it in the first place.

A contrived story
Note: while this story is meant to be taken seriously, it is somewhat contrived and the examples are likely electrical madness, but please just think about the purpose of the checks.
To help to illustrate the feature and why it exists, I'd like to tell you a somewhat contrived story about Fred. Fred is a schematic capture engineer and his main job is to review schematics generated by his colleagues. Fred and his colleagues work with Kicad (hurrah) but of late they've been having a few issues with being able to cleanly review schematics.
Fred's colleagues are not the neatest of engineers. In particular, they tend to be quite lazy when running informal busses (not, for example, address and data busses) around their designs, and they tend to leave wires which end in mid-space and pick up somewhere else on the sheet. All this is perfectly reasonable of course, and Kicad handles it with aplomb. Sadly it seems quite error-prone at Fred's workplace.
As an example, Fred's colleague Ben has been designing the power supply for a particular board. As with most power supplies, plenty of capacitors are needed to stabilise the regulators and smooth the output. In the example below, the intent is that all of the capacitors are on the FOO net.
Sadly there's a missing junction and/or slightly misplaced label in the upper section which means that C2 and C3 simply don't join to the FOO net. This could easily be missed, but the ERC can't spot it at all since there's more than one thing on each net, so the pins of the capacitors are connected to something.
Fred is very sad that this kind of problem can sometimes escape notice by the schematic designer Ben, Fred himself, and the layout engineer, resulting in boards which simply do not work. Fred takes it upon himself to request that the strict wiring checks ERC is made mandatory for all designs, and that the design engineers be required to use intra-sheet link symbols when they have signals which wander off to other parts of the sheet like FOO does in the example. Without any further schematic changes, strict wiring checks enabled gives the following points of ERC concern for Ben to think about:
As you can see, the ERC is pointing at the wire ends and the warnings are simply that the wires are dangling and that this is not acceptable. This warning is very like the pin-not-connected warnings which can be silenced with an explicit no-connect schematic element. Ben, being a well behaved and gentle soul, obeys the design edicts from Fred and seeks out the intra-sheet link symbols, clearing off the ERC markers and then adding intra-sheet links to his design:
This silences the dangling end ERC check, which is good, however it results in another ERC warning:
This time, the warning for Ben to deal with is that the intra-sheet links are pointless. Each exists without a companion to link to because of the net name hiccough in the top section. It takes Ben a moment to realise that the mistake which has been made is that a junction is missing in the top section. He adds the junction and bingo the ERC is clean once more:
Now, this might not seem like much gain for so much effort, but Ben can now be more confident that the FOO net is properly linked across his design and Fred can know, when he looks at the top part of the design, that Ben intended for the FOO net to go somewhere else on the sheet and he can look for it.

Why do this at all?
Okay, dropping out of our story now, let's discuss why these ERC checks are worthwhile and why the intra-sheet link schematic element is needed.
Note: This bit is here to remind me of why I did the work, and to hopefully explain a little more about why I think it's worth adding to Kicad...
Designers are (one assumes) human beings. As humans we (and I count myself here too) are prone to mistakes. Sadly mistakes are often subtle and could easily be thought of as deliberate if the right thought processes are not followed carefully when reviewing. Anyone who has ever done code review, proofread a document, or performed any such activity, will be quite familiar with the problems which can be introduced by a syntactically and semantically valid construct which simply turns out to be wrong in the greater context.
When drawing designs, I often end up with bits of wire sticking out of schematic sections which are not yet complete. Sadly if I sleep between design sessions, I often lose track of whether such a dangling wire is meant to be attached to more stuff, or is simply left because the net is picked up elsewhere on the sheet. With intra-sheet link elements available, if I had intended the latter, I'd have just dropped such an element on the end of the wire before I stopped for the day.
Also, when drawing designs, I sometimes forget to label a wire, especially if it has just passed through a filter or current-limiting resistor or similar. As such, even with intra-sheet link elements to show me when I mean for a net to go bimbling off across the sheet, I can sometimes end up with unnamed nets whose capacitors end up not used for anything useful. This is where the ERC comes in.
By having the ERC complain if a wire dangles, the design engineer won't forget to add links (or will check more explicitly whether the wire is meant to be attached to something else). By having junctions which don't actually link anything warned about, the engineer can't just slap a junction blob down on the end of a wire to silence that warning, since that doesn't mean anything to a reviewer later down the line. By having the ERC warn if a net has exactly one intra-sheet link attached to it, missing net names and errors such as that shown in my contrived example above can be spotted and corrected.
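To make two of those checks concrete (dangling wire ends, and nets with exactly one intra-sheet link), here is a toy model in Python rather than Kicad's C++; the data model is invented purely for illustration and bears no resemblance to Kicad's internal structures:

```python
from collections import Counter

def erc_warnings(elements):
    """Toy ERC pass. Elements are (kind, net) pairs where kind is one of
    "pin", "wire-end" or "intra-sheet-link". Warn on dangling wire ends,
    and on nets carrying exactly one intra-sheet link (a likely
    mislabelled or disconnected net)."""
    warnings = []
    links = Counter()
    for kind, net in elements:
        if kind == "wire-end":
            warnings.append(f"dangling wire end on net {net}")
        elif kind == "intra-sheet-link":
            links[net] += 1
    for net, count in links.items():
        if count == 1:
            warnings.append(f"net {net} has exactly one intra-sheet link")
    return warnings
```

In Ben's example, the mislabelled top section would show up as a net carrying a single link, pointing straight at the missing junction.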
Ultimately this entire piece of work is about ensuring that the intent of the design engineer is captured clearly in the schematic. If the design engineer meant to leave that wire dangling because it's joining to another bit of wire elsewhere on the sheet, they can put the intra-sheet links in to show this. The associated ERC checks are there purely to ensure that the validation of this intent is not bypassed accidentally, or deliberately, in order to make the use of this more worthwhile and to increase the usefulness of the ERC on designs where signals jump around on sheets where wiring them up directly would just create a mess.
Just ordered a new PSU for a re-purposed server (demoted from front-line duty to backup), plus drive caddies for new front-line servers. Great guns!
The post New hardware ordered for @warphost. Onwards and upwards! appeared first on life at warp.
If I see just one more clichéd, top-down image of a coffee cup, notepad, laptop and pastry, I’m gonna … I’m gonna … be miffed. And maybe write a letter.
The post Please, Interweb, no more top-down coffee, notepad and pastry photos. appeared first on life at warp.
Maybe it's just me, but I reckon DSLs are the next (ok ok, they've been around for ages) big (ok, hipster) thing. I know I'm by no means the first to say so; it's just that I'm increasingly bemused at seeing things squeezed into data structures they've outgrown.
In general, as everyone's finally warming to the idea that you can use code to describe not just your application but also how it's deployed, we're reaching a state where that code needs to be newbie-friendly - by which I mean that it ought to be easily understandable by humans. If it isn't, it's prone to mistakes.
A few months ago, I experimented with creating a DSL for writing web pages and I was fairly happy with the result (though there's lots more work to be done). I'm thinking of applying the same ideas to CloudFormation:

    resources:
      db:
        type: rds
        engine: mysql
        size: c3.xlarge
      app:
        type: ec2
        ami: my-app-image
        size: t2.micro
        scale:
          min: 1
          max: 10
        expose: 80
    security:
      db: app
      app: 0.0.0.0:80
Obviously I've put little to no thought into the above but it shouldn't be too hard to come up with something useful.
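For what it's worth, lowering such a structure into a CloudFormation-shaped template is mostly mechanical. A rough sketch (the type mapping, function names and property handling are invented for illustration; real CloudFormation properties are rather more involved):

```python
# Hypothetical lowering of the toy DSL above into a CloudFormation-style
# template dict; the TYPE_MAP entries are the only real resource types here.
TYPE_MAP = {"rds": "AWS::RDS::DBInstance", "ec2": "AWS::EC2::Instance"}

def lower(dsl):
    """Turn {'resources': {name: {'type': ..., ...}}} into a
    CloudFormation-shaped template dict."""
    resources = {}
    for name, spec in dsl["resources"].items():
        # Everything except 'type' becomes a property, verbatim.
        props = {k: v for k, v in spec.items() if k != "type"}
        resources[name] = {"Type": TYPE_MAP[spec["type"]],
                           "Properties": props}
    return {"AWSTemplateFormatVersion": "2010-09-09",
            "Resources": resources}
```

The interesting design work is all in the other direction: deciding which 90% of CloudFormation's surface area the DSL can safely hide.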
Maybe some day soon ;)
What’s this? What’s this? A monthly meeting of West Yorkshire Linux Users Group. Come to the Lord Darcy on Harrogate Road for around 7:30-ish next Monday, the last day of February. Look for some people huddled round a laptop or two, trying to plumb the eternal verities and have fun at the same time.
I've been aware of Sonos as a premium wireless speaker solution for a long time, but the price always seemed excessive for what, on the face of it, offers little more than a simple Bluetooth speaker. But after Subsonic needed its database rebuilding for the third time and I was unable to play music for a dinner party, enough was enough. I was willing at last to pay the premium for something that was purported to work.

Background
My music collection is mostly comprised of purchased Audio CDs that I have ripped under Linux. Currently I have a Music folder on our MythTV system, and have installed Subsonic to share our music to our many tablets and phones, using the excellent Subsonic Android App. If I want to play from Subsonic to my music system then I have a Logitech Bluetooth Audio Receiver Adapter that receives the audio and plays it through my old-school Sony amplifier.
The main issue with this set-up is that the music only plays in the living room and not elsewhere in the house. We have bought an additional Creative D80 Bluetooth Wireless Speaker, but of course the two can only play independent streams.
I also find Bluetooth a frustrating technology where you don't have a simple 1:1 paradigm. In our case we have probably a dozen tablets and phones, each determined to pair with the Bluetooth receivers and then prevent other devices from connecting.

Choosing Sonos Speakers
The Sonos range comprises the small Play:1 at £155, the medium-sized Play:3 at £229 and the larger Play:5 at £413. On the Goldilocks principle of the middle one being "just right", I opted for two of the Play:3 - one for the living room and one for the kitchen. The plan was to move those elsewhere at a later stage and hopefully upgrade the living room system to a pair of Play:5 speakers.
The important thing to understand is that neither the Play:1 nor the Play:3 speakers have a Line-In. This means that you can only play from on-line content. If you currently subscribe to one of the supported Sonos Services, then that is fine, but if you're wanting to play content from a CD or other input source - then you can't. The Play:5 does have a Line-In, as does the Sonos Connect at £264.
A word about the Sonos Connect. A simple way to imagine it is that it is basically a Play:5, but without the speaker. In other words it has the same Sonos interface with Line-In but no speaker. If you have an existing music system then this is potentially ideal, and with hindsight I wish I had purchased one Play:3 for the kitchen and one Sonos Connect for the living room. The opposing view is that a pair of Play:5 speakers completely obsoletes an existing music system - so why not do away with the legacy equipment.

Amazon Prime Music
One great disappointment was that, whilst Sonos supports Amazon Music, it does not support Amazon Prime Music. One of the main reasons we had bought Sonos was to play Amazon Prime Music, so this was a major problem. At the time of writing it is available in the US as a Beta service and has been for a few months. One can only hope that it will trickle across to the UK in due course.

Google Play Music
Hoping that the Amazon Prime issue would be resolved, we signed up to a 30 day free trial with Google Play Music. This worked extremely well, except for the recommended playlists, which do not appear as a Sonos Queue. The main issue we found was that our children would choose a song and click "Play Next" and this would interrupt the playlist - very irritating if you were enjoying a particular song. We assumed that this was a feature of Sonos, but Spotify does not work like that (see below).

Spotify
We then subscribed to the 30 days free trial with Spotify. You only need the individual member subscription to work with Sonos, but the ongoing cost is the same as Google Play. The only advantage of Spotify is that the recommended playlists appear as a proper Sonos queue, enabling you to save it as a Sonos Playlist, or add a song into the queue.

Subsonic
One delight was that we were able to play our local music via Subsonic. This is a Beta service and I did have a small problem getting it working. Unfortunately I cannot remember the nature of the problem, other than an Internet search solved it.

Conclusions
Obviously we were disappointed at the lack of Amazon Prime Music. I was also a little disappointed at the abrupt handling of music changes - if you click "Play Now" the music stops instantly and the next track starts. I do feel that with a premium set-up like this, music transitions should be handled more smoothly.
We also have had issues with our children messing about with Sonos - as the interface is open to all. We have sufficient control of our children that this isn't a significant problem, but knowing some families this could be a serious issue. I do feel there should be some security, to enable clients to be de-authorised, or limited only to a subset of features.
Will I continue to invest in Sonos? Undoubtedly yes, but I think the next purchase will be a Sonos Connect followed by a better set of audio speakers.
Today, February 14th, the Free Software Foundation Europe (FSFE) celebrates the "I Love Free Software" day. I Love Free Software day is a day for Free Software users to appreciate and thank the contributors of their favourite software applications, projects and organisations.
We take this opportunity to say "thank you" to all the Debian upstreams and downstreams, and all the Debian developers and contributors. Thanks for your work and dedication to free software!
There are many ways to participate in this ILoveFS day and we encourage everybody to join in and celebrate. Show your love to Debian developers, contributors and teams virtually on social networks using the #ilovefs hashtag and spreading the word in your own social media circles, or by visiting the ILoveFS campaign website to find and use some of the promotional materials available such as postcards and banners.
Scientists have successfully detected gravitational waves, 100 years after Einstein predicted them.
“It would have been wonderful to watch Einstein’s face had we been able to tell him.”
The post Gravity waves detected. This will change everything. appeared first on life at warp.
Tails (The amnesic incognito live system) is a live OS based on Debian GNU/Linux which aims to preserve the user's privacy and anonymity: it lets you use the Internet anonymously and circumvent censorship. Installed on a USB device, it is configured to leave no trace on the computer you are using unless explicitly asked to.
As of today, the people most in need of digital security are not computer experts. Being able to get started easily with a new tool is critical to its adoption, even more so in high-risk and stressful environments. That's why we wanted to make it faster, simpler, and more secure for new users to install Tails.
The previous process for getting started with Tails was very complex and was problematic for less tech-savvy users. It required starting Tails three times, and copying the full ISO image onto a USB stick twice before having a fully functional Tails USB stick with persistence enabled.
This can now be done simply by installing Tails Installer on your existing Debian system (from sid, stretch or jessie-backports), plugging in a USB stick, and choosing whether to upgrade the USB stick or to install Tails from a previously downloaded ISO image.
Tails Installer also helps Tails users to create an encrypted persistent storage for personal files and settings in the rest of the available space.
Moore’s law is at an end. It was good while it lasted.
We have regular sessions on the second Saturday of each month. Bring a 'box', bring a notebook, bring anything that might run Linux, or just bring yourself and enjoy socialising/learning/teaching or simply chilling out!
This month's meeting is at The Feathers Pub, Merstham
42 High St, Merstham, Redhill, Surrey, RH1 3EA
01737 645643 · http://www.thefeathersmerstham.co.uk
NOTE the pub opens at 12 Noon.
The post Nom nom. #chocolate #hotelchocolat #spoilt #restraint #temptation appeared first on life at warp.
I'm slowly planning the redesign of the cluster which powers the Debian Administration website.
Currently the design is simple, and looks like this:
In brief there is a load-balancer that handles SSL-termination and then proxies to one of four Apache servers. These talk back and forth to a MySQL database. Nothing too shocking, or unusual.
(In truth there are two database servers, and rather than a single installation of HAProxy it runs upon each of the webservers - one of them holds the master (floating) IP, which is managed via ucarp. Logically though, traffic routes through HAProxy to a number of Apache instances. I can lose half of the servers and things still keep running.)
When I set up the site it all ran on one host; it was simpler, but less highly available. It also struggled to cope with the load.
Half the reason for writing/hosting the site in the first place was to document learning experiences, though, so when it came time to make it scale I figured why not learn something and do it neatly? Having it run on cheap and reliable virtual hosts was a good excuse to bump the server count, and the design has been stable for the past few years.
Recently though I've begun planning how it will be deployed in the future and I have a new design:
Rather than having the Apache instances talk to the database I'll indirect through an API-server. The API server will handle requests like these:
I expect to have four API handler endpoints: /articles, /comments, /users & /weblogs. Again we'll use a floating IP and a HAProxy instance to route to multiple API-servers, each of which will use local caching to cache articles, etc.
This should turn the middle layer, running on Apache, into something much simpler, and increase throughput. I suspect, but haven't confirmed, that making a single HTTP request to fetch a (formatted) article body will be cheaper than making N database queries.
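The caching side of that bet is easy to sketch: a small TTL cache in front of the "expensive" article-assembly path. This is a hypothetical model with invented names, not the site's actual code:

```python
import time

class ArticleCache:
    """Serve assembled article bodies from a local TTL cache, falling
    back to the expensive fetch (the 'N database queries') on a miss."""

    def __init__(self, fetch, ttl=60.0, clock=time.monotonic):
        self._fetch = fetch      # expensive path: hits the database
        self._ttl = ttl
        self._clock = clock
        self._entries = {}       # article_id -> (expires_at, body)

    def get(self, article_id):
        now = self._clock()
        hit = self._entries.get(article_id)
        if hit and hit[0] > now:
            return hit[1]        # fresh cache hit: no database work
        body = self._fetch(article_id)
        self._entries[article_id] = (now + self._ttl, body)
        return body
```

With each API-server holding a cache like this, most article requests never reach the database at all, at the usual price of serving content up to one TTL stale.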
Anyway that's what I'm slowly pondering and working on at the moment. I wrote a proof of concept API-server based CMS two years ago, and my recollection of that time is that it was fast to develop, and easy to scale.