News aggregator

Dick Turpin: When in doubt pull out.

Planet WolvesLUG - Fri, 28/03/2014 - 12:43
Customer: "I think our printer's broken. I've tried two brand new toner cartridges and it won't print."
Engineer: "Do you want us to come out and look at it? As you know it's chargeable."
Customer: "You might as well get us a new printer."
Engineer: "OK."
Three days later.
Engineer: "Hey Pete, you know that printer for xyz? I've installed the new one. I also found out what was wrong with the old one."
Me: "What was it?"
Engineer: "She hadn't pulled the safety tab off the side of the toner."
Me: "Bwahahahaha"

Categories: LUG Community Blogs

Richard Lewis: Taking notes in Haskell

Planet ALUG - Fri, 28/03/2014 - 00:13

The other day we had a meeting at work with a former colleague (now at QMUL) to discuss general project progress. The topics covered included the somewhat complicated workflow that we're using for doing optical music recognition (OMR) on early printed music sources. It includes mensural notation specific OMR software called Aruspix. Aruspix itself is fairly accurate in its output, but the reason why our workflow is non-trivial is that the sources we're working with are partbooks; that is, each part (or voice) of a multi-part texture is written on its own part of the page, or even on a different page. This is very different to modern score notation in which each part is written in vertical alignment. In these sources, we don't even know where separate pieces begin and end, and they can actually begin in the middle of a line. The aim is to go from the double page scans ("openings") to distinct pieces with their complete and correctly aligned parts.

Anyway, our colleague from QMUL was very interested in this little part of the project and suggested that we spend the afternoon, after the style of good software engineering, formalising the workflow. So that's what we did. During the course of the conversation diagrams were drawn on the whiteboard. However (and this was really the point of this post) I made notes in Haskell. It occurred to me a few minutes into the conversation that laying out some types and the operations over those types that comprise our workflow is pretty much exactly the kind of formal specification we needed.

Here's what I typed:

module MusicalDocuments where

import Data.Maybe

-- A document comprises some number of openings (double page spreads)
data Document = Document [Opening]

-- An opening comprises one or two pages (usually two)
data Opening = Opening (Page, Maybe Page)

-- A page comprises multiple systems
data Page = Page [System]

-- Each part is the line for a particular voice
data Voice = Superius | Discantus | Tenor | Contratenor | Bassus

-- A part comprises a list of musical symbols, but it may span multiple systems
-- (including partial systems)
data Part = Part [MusicalSymbol]

-- A piece comprises some number of sections
data Piece = Piece [Section]

-- A system is a collection of staves
data System = System [Staff]

-- A staff is a list of atomic graphical symbols
data Staff = Staff [Glyph]

-- A section is a collection of parts
data Section = Section [Part]

-- These are the atomic components; MusicalSymbols are semantic and Glyphs are
-- syntactic (i.e. just image elements)
data MusicalSymbol = MusicalSymbol
data Glyph = Glyph

-- If this were real, Image would abstract over some kind of binary format
data Image = Image

-- One of the important properties we need in order to be able to construct pieces
-- from the scanned components is to be able to say when objects of some of the
-- types are strictly contiguous, i.e. this staff immediately follows that staff
class Contiguous a where
  immediatelyFollows :: a -> a -> Bool
  immediatelyPrecedes :: a -> a -> Bool
  immediatelyPrecedes a b = b `immediatelyFollows` a

instance Contiguous Staff where
  immediatelyFollows = undefined

-- Another interesting property of this data set is that there are a number of
-- duplicate scans of openings, but nothing in the metadata that indicates this,
-- so our workflow needs to recognise duplicates
instance Eq Opening where
  (==) a b = undefined

-- Maybe it would also be useful to have equality for staves too?
instance Eq Staff where
  (==) a b = undefined

-- The following functions actually represent the workflow

collate :: [Document]
collate = undefined

scan :: Document -> [Image]
scan = undefined

split :: Image -> Opening
split = undefined

paginate :: Opening -> [Page]
paginate = undefined

omr :: Page -> [System]
omr = undefined

segment :: System -> [Staff]
segment = undefined

tokenize :: Staff -> [Glyph]
tokenize = undefined

recogniseMusicalSymbol :: Glyph -> Maybe MusicalSymbol
recogniseMusicalSymbol = undefined

part :: [Glyph] -> Maybe Part
part gs = if null symbols then Nothing else Just $ Part symbols
  where symbols = mapMaybe recogniseMusicalSymbol gs

alignable :: Part -> Part -> Bool
alignable = undefined

piece :: [Part] -> Maybe Piece
piece = undefined

I then added the comments and implemented the part function later on. Looking at it now, I keep wondering whether the types of the functions really make sense; especially where a return type is a type that's just a label for a list or pair.

I haven't written much Haskell code before, and given that I've only implemented one function here, I still haven't written much Haskell code. But it seemed to be a nice way to formalise this procedure. Any criticisms (or function implementations!) welcome.

Categories: LUG Community Blogs

Richard Lewis: Ph.D Progress

Planet ALUG - Thu, 27/03/2014 - 00:37

I submitted my Ph.D thesis at the end of September 2013 in time for what was believed to be the AHRC deadline. It was a rather slim submission at around 44,000 words and rejoiced under the title of Understanding Information Technology Adoption in Musicology. Here's the abstract:

Since the mid 1990s, innovations and technologies have emerged which, to varying extents, allow content-based search of music corpora. These technologies and their applications are known commonly as music information retrieval (MIR). While there are a variety of stakeholders in such technologies, the academic discipline of musicology has always played an important motivating and directional role in the development of these technologies. However, despite this involvement of a small representation of the discipline in MIR, the technologies have so far failed to make any significant impact on mainstream musicology. The present thesis, carried out under a project aiming to examine just such an impact, attempts to address the question of why this has been the case by examining the histories of musicology and MIR to find their common roots and by studying musicologists themselves to gauge their level of technological sophistication. We find that some significant changes need to be made in both music information retrieval and musicology before the benefits of technology can really make themselves felt in music scholarship.

(Incidentally, the whole thing was written using org-mode, including some graphs that get automatically generated each time the text is compiled. Unfortunately I did have to cheat a little bit and typed in LaTeX \cite commands rather than using proper org-mode links for the references.)

So the thing was then examined in January 2014 by an information science and user studies expert and by a musicologist. As far as it went, the defence was actually not too bad, but after defending the defensible it eventually became clear that significant portions of the thesis were just not up to scratch; not, in fact, defensible. They weren't prepared to pass it and have asked that I revise and then re-submit it.

Two things seem necessary to address: 1) why did this happen? And 2) what do I do next?

I started work on this Ph.D with only quite a vague notion of what it was going to be about. The Purcell Plus project left open the possibility of the Ph.D student doing some e-Science-enabled musicological study. But I think I'd come out of undergraduate and masters study with a view of academic research that was very much text-based; the process of research---according to the me of ca. 2008---was to read lots of things and synthesise them, and the more obscure and dense the stuff read the better. The process is one of noticing generalisations amongst all these sources that haven't been remarked on before and remarking on them, preferably with a good balance of academic rigour and barefaced rhetoric. And I brought this pre-conception into a computing department. My first year was intended to be a training year, but I was actually already highly computer literate with considerable programming experience and quite a bit of knowledge of at least symbolic work in computational musicology. Consequently, I didn't fully engage with learning new stuff during that first year and instead embarked on a project of attempting to be rhetorical. It wasn't until later on that I really started to understand that those around me had a completely different idea as to how research can be carried out. While I was busy reading, most of my colleagues were doing experiments; they were actually finding out new stuff (or at least were attempting to) and had the potential to make an original contribution to knowledge. At this point I started to look for research methods that could be applicable to my subject matter and eventually hit upon a couple of actually quite standard social science methods. So I think that's the first thing that went wrong: I failed to take on board soon enough the new research culture that I had (or, I suppose, should have) entered.

I think I've always been someone who thrives on the acknowledgement of things I've done; I always looked forward to receiving my marks at school and as an undergraduate; and I liked finding opportunities to do clever jobs for people, especially little software development projects where there's someone to say, "that's great! Thanks for doing that." I think I quickly found that doctoral research didn't offer me this at all. My experience was very much a solitary one where no one was really aware of what I was working on. Consequently two things happened: first, I tended not to pursue things very far through lack of motivation; and second (and this was the really dangerous one), I kept finding other things to do that did give me that feedback. I always found ways to justify these---let's face it---procrastination activities; mainly that they were all Goldsmiths work, including quite a lot of undergraduate and masters level teaching (the latter actually including designing a course from scratch), some Computing Department admin work, and some development projects. Doing these kinds of activities is actually generally considered very good for doctoral students, but they're normally deliberately constrained to ensure that the student still has plenty of research time. Through my own choice, I let them take over far too much of my research time.

The final causal point to mention is the one that any experienced academic will immediately point to: supervision. I failed to take advantage of the possibilities of supervision. As my supervisor was both involved in the project of which the Ph.D was part and also worked in the same office as me, we never had the right kind of relationship to foster good progress and working habits from me. I spoke to my supervisor every day and so I didn't really push for formal supervision often enough. I can now see that it would have been better to have someone with whom I had a less familiar relationship and who had less of an interest in my work and who, as a result, would just operate and enforce the procedures of doctoral project progress. It's also possible that a more formal supervision relationship would have addressed the points above: I may have been forced to solidify ideas and to identify proper methods much sooner; I may have had more of the feedback that I needed; and I may have been more strongly discouraged from engaging in so much extra-research activity.

The purpose of all this is not to apportion blame (I have a strong sense of being responsible for everything that I do), but to state publicly something that I've been finding very hard to say: I failed my Ph.D. And (and this is the important bit) to make sure that I get on with what I need to do to pass it.

  • I need disinterested supervision; I've requested assistance from the Sociology Department which should fit well with the research methods I used;
  • I need to improve the reporting of the studies I carried out; this involves correcting and expanding the methods sections and also doing more analysis work;
  • I need to either extend the samples of the existing studies, or carry out credible follow up studies to improve my evidence base;
  • I need to focus the research questions better and (having done so) make the conclusions directly address them.

I'm going to blog this work as it goes along. So if I stop blogging, please send me harassing emails telling me to get the f*** on with it!

Categories: LUG Community Blogs

Steve Kemp: A diversion on off-site storage

Planet HantsLUG - Wed, 26/03/2014 - 18:53

Yesterday I took a diversion from thinking about my upcoming cache project, largely because I took some pictures inside my house, and realized my offsite backup was getting full.

I have three levels of backups:

  • Home stuff on my desktop is replicated to my wife's desktop, and vice-versa.
  • A simple server running rsync (content-free http://rsync.io/).
  • A "peering" arrangement of a small group of friends. Each of us makes available a small amount of space and we copy to/from each other's shares, via rsync / scp as appropriate.

Unfortunately my rsync-based personal server is getting a little too full, and will certainly be full by next year. S3 is pricey, and I don't trust the "unlimited" storage people (Backblaze, etc.) to be sustainable and reliable long-term.

The pricing on Google Drive seems appealing, but I guess I'm loath to share more data with Google. Perhaps I could dedicate a single "backup.account@gmail.com" login to that, separate from all else.

So the diversion came along when I looked for Amazon S3-compatible, self-hosted servers. There are a few, but most of them are PHP-based, or similarly icky.

So far cloudfoundry's vlob looks the most interesting, but the main project seems stalled/dead. Sadly using s3cmd to upload files failed, but certainly the `curl` based API works as expected.

I looked at Gluster, CEPH, and similar, but didn't yet come up with a decent plan for handling offsite storage; I know I have only six months or so before the need becomes pressing. I imagine the plan has to be using N small servers with local storage, rather than one large server, purely because pricing is going to be better that way.

Decisions decisions.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Judon't

Planet ALUG - Wed, 26/03/2014 - 01:44

tl;dr: broke my collar bone, ouch.

Since my last post, I've had a second Judo session which was just as enjoyable as the first except for one thing. Towards the end of the session we went into randori (free practice) - basically one-on-one sparring. I'm pleased to say that I won my first bout but in the second I went up against a guy who'd been learning for only 6 months or so. After some grappling, he threw me quite hard and with little control and I landed similarly badly - owch.

The first thing I realised was that I'd slammed my head pretty hard against the mat and that I was feeling a little groggy. After a little while, it became apparent that I'd also injured my shoulder. The session was over though and I winced through helping to put the mats away.

By the time I got in my car to drive home, my shoulder was a fair bit more painful and I made an appointment to see the doctor. When I saw her, she said there was a slight chance I'd broken my collar bone, but it didn't seem very bad so I could just go home and see how it was in a couple of days.

A couple of days later I went back to the surgery. I saw a different doctor who said that she didn't think it was broken but she'd refer me for an X-Ray. The radiologist soon pointed out a nice clean break at the tip of my collar bone! I sat in A&E forever until I eventually saw a doctor who referred me to the orthopaedic clinic the following Monday.

Finally, the orthopaedic clinic's doctor told me I'd self-managed pretty well and that I should be ok to just get on with things but definitely no falling over, lifting anything heavy, and definitely no judo for up to six weeks.

Apparently, I've been very lucky as it's easy to dislodge the broken bone when the break is where mine is. Apparently, if I fall over or anything similar, I'm likely to end up in surgery :S

I've had to tell this story so many times now that I thought I might as well just write it down. For some reason, people seem to want to know all of the details when I mention that I've injured myself. Sadists.

Not Morrison's

On an unrelated subject, I've come to realise that I've developed an unhelpful manner when dealing with doors. I'm not the most graceful of people in general but it strikes me that I have a particularly awkward way of approaching doors. When I walk up to them, I hold a hand out to grasp the handle and push or pull (somehow I usually even manage to get that bit wrong regardless of signage) without slowing down at all which means that I have to swing the door quite fast to get it out of my way by the time my body catches up. Then, as the door's opening at a hefty pace, I have to grab the edge of the door and stop it before it slams into the wall. Because I'm still moving forward, this usually means that I'm partially closing the door as I move away from it.

In all, I feel awkward when passing through doors, and anybody directly behind me is liable to receive a door in the face if I'm not aware of them :S

Categories: LUG Community Blogs

Tony Whitmore: The goose that laid the golden eggs, but never cackled

Planet HantsLUG - Tue, 25/03/2014 - 20:48

I’ve wanted to visit Bletchley Park for years. It is where thousands of people toiled twenty-four hours a day to decipher enemy radio messages during the second world war, in absolute secrecy. It is where some of the brightest minds of a generation put their considerable mental skills to an incredibly valuable purpose. It is also where modern computing was born, notably through the work of Alan Turing and others.

So I was very pleased to be invited by my friend James to visit Bletchley as part of his stag weekend. After years of neglect, and in the face of demolition, the park is now being extensively restored. A new visitors’ centre will be introduced, and more of the huts opened up to the public. I have no doubt that these features will improve the experience overall, but there was a feeling of Trigger’s Broom as I looked over the huts closest to the mansion house. Never open to the public before, they looked good with new roofs and walls. But perhaps a little too clean.

And it really is only the huts closest to the house that are being renovated. Others are used by the neighbouring National Museum of Computing, small companies and a huge number are still derelict. Whilst I hope that the remaining huts will be preserved, it would be great if visitors could see the huts in their current dilapidated state too. The neglect of Bletchley Park is part of its story, and I would love to explore the derelict huts as they are now. I would love to shoot inside them – so many ideas in my head for that!

Most of the people working there were aged between eighteen and twenty-one, so you can imagine how much buzz and life there was in the place, despite the graveness of the work being carried out. Having visited the park as it is today, I wish that I had been able to visit it during the war. To see people walking around the huts, efficiency and eccentricity hand-in-hand, to know the import and intellect of what was being carried out, and how it would produce the technology that we all rely on every day, would have been incredible.

Categories: LUG Community Blogs

Steve Kemp: New GPG-key

Planet HantsLUG - Mon, 24/03/2014 - 20:26

I've now generated a new GPG-key for myself:

$ gpg --fingerprint 229A4066
pub   4096R/0C626242 2014-03-24
      Key fingerprint = D516 C42B 1D0E 3F85 4CAB 9723 1909 D408 0C62 6242
uid   Steve Kemp (Edinburgh, Scotland) <steve@steve.org.uk>
sub   4096R/229A4066 2014-03-24

The key can be found online via mit.edu: 0x1909D4080C626242

This has been signed with my old key:

pub   1024D/CD4C0D9D 2002-05-29
      Key fingerprint = DB1F F3FB 1D08 FC01 ED22 2243 C0CF C6B3 CD4C 0D9D
uid   Steve Kemp <steve@steve.org.uk>
sub   2048g/AC995563 2002-05-29

If there is anybody who has signed my old key who wishes to sign my new one then please feel free to get in touch to arrange it.

Categories: LUG Community Blogs

PDFTK – The PDF Toolkit

Planet SurreyLUG - Mon, 24/03/2014 - 13:59


I have long been a keen user of pdftk, the PDF Toolkit, but am frequently surprised when people have not heard of it. True, it is a command line tool, but it is easy to incorporate into service menus, scripts etc and doubtless there is a GUI front-end for it somewhere (in fact there is one linked to from the above page).

Clearly a blog post is called for, but, whilst you wait for a post that will never arrive, here is a link to some examples that should open your eyes to what is possible with pdftk.

To get started on a Debian-based system:

$ sudo apt-get install pdftk
$ man pdftk

Enjoy.


Categories: LUG Community Blogs

Steve Kemp: So I failed at writing some clustered code in Perl

Planet HantsLUG - Mon, 24/03/2014 - 10:41

Until this time next month I'll be posting code-based discussions only.

Recently I've been wanting to explore creating clustered services, because clusters are definitely things I use professionally.

My initial attempt was to write an auto-clustering version of memcached, because that's a useful tool. Writing the core of the service took an hour or so:

  • Simple KeyVal.pm implementation.
  • Give it the obvious methods get, set, delete.
  • Make it more interesting by creating a read-only append-log.
  • The logfile will be replayed for clustering.

At the point I was done the following code worked:

use KeyVal;

# Create an object, and set some values
my $obj = KeyVal->new( logfile => "/tmp/foo.log" );
$obj->incr( "steve" );
$obj->incr( "steve" );
print $obj->get( "steve" );   # prints 2

# Now replay the append-only log
my $replay = KeyVal->new( logfile => "/tmp/foo.log" );
$replay->replay();
print $replay->get( "steve" );   # prints 2

In the first case we used the primitives to increment a value twice, and then fetch it. In the second case we used the logfile the first object created to replay all prior transactions, then output the value.
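The append-log-plus-replay technique is simple enough to sketch in a few lines. Here's a hypothetical Python analogue of the KeyVal idea described above (not the actual KeyVal.pm, and the JSON log format is my own invention for illustration):

```python
import json
import tempfile


class KeyVal:
    """Toy key/value store that records every mutation in an append-only log."""

    def __init__(self, logfile):
        self.logfile = logfile
        self.data = {}

    def incr(self, key):
        self.data[key] = self.data.get(key, 0) + 1
        # Record the operation so a later reader can reconstruct our state.
        with open(self.logfile, "a") as fh:
            fh.write(json.dumps({"op": "incr", "key": key}) + "\n")

    def get(self, key):
        return self.data.get(key)

    def replay(self):
        # Rebuild state by re-applying every logged operation, in order.
        with open(self.logfile) as fh:
            for line in fh:
                entry = json.loads(line)
                if entry["op"] == "incr":
                    self.data[entry["key"]] = self.data.get(entry["key"], 0) + 1


log = tempfile.NamedTemporaryFile(suffix=".log", delete=False).name

obj = KeyVal(log)
obj.incr("steve")
obj.incr("steve")
print(obj.get("steve"))      # prints 2

replay = KeyVal(log)
replay.replay()
print(replay.get("steve"))   # prints 2
```

Clustered replication then falls out naturally: any peer holding the logfile (or a tail of it, from some offset) can reconstruct the same state.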

Neat. The next step was to make it work over a network. Trivial.

Finally I wanted to autodetect peers, and deploy replication. Each host would send out regular messages along the lines of "Do you have updates made since $time?". Any that did would replay the logfile from the given unixtime offset.

However here I ran into problems. Peer discovery was supposed to be basic, and I figured I'd write something that did leader election by magic. Unfortunately Perl's threading code is .. unpleasant:

  • I wanted to store all known-peers in a singleton.
  • Then I wanted to create threads that would announce and receive updates.

This failed. Majorly. Because you cannot launch the implementation of a class-method as a thread. Equally you cannot make a variable which is "complex" shared across threads.

I wrote some demo code which works without packages and a shared singleton:

The Ruby version, by contrast, is much more OO and neater. Meh.

I've now shelved the project.

My next, big, task was to make the network service utterly memcached compatible. That would have been fiddly, but not impossible. Right now I just use a simple line-based network protocol.

I suspect I could have got what I wanted using EventMachine, or similar, but that's a path I've not yet explored, and I'm happy enough with that decision.

Categories: LUG Community Blogs

Chris Lamb: Fingerspitzengefühl

Planet ALUG - Mon, 24/03/2014 - 00:53

Loanwords can often appear more insightful than they really are. How prescient of another culture to codify such a concept into a single word! They surely must have lofty and perceptive discussions if it was necessary to coin and codify one.

But whilst there is always the danger of over-inflating the currency of the loanword—especially the compound one—there must be a few that are worth the trouble.

One such term is fingerspitzengefühl. Literally meaning "finger-tips feeling" in German, it attempts to capture the idea of an intuitive sophistication, flair or instinct. Someone exhibiting fingerspitzengefühl would be able to respond appropriately, delicately and tactfully to certain things or situations.

Oliver Reichenstein clarifies how it differs from personal taste:

Whether I like pink or not, sugar in my coffee, red or white wine, these things are a matter of personal taste. These are personal preferences, and both designers and non-designers have them. This is the taste we shouldn't bother discussing.

Whether I set a text’s line height to 100% or 150% is not a matter of taste, it is a matter of knowing the principles of typography.

However, whether I set a text’s line height at 150% or 145% is a matter of Fingerspitzengefühl; wisdom in craft, or sophistication.

Fingerspitzengefühl is therefore not innate and is probably refined over the years via subconscious—rather than conscious—study and reflection. However, it always flows naturally in the moment, not dissimilar to Castiglione's sprezzatura.

Personally, I am particularly enamoured of how this concept of a "trained taste" appears to blur the line between an objective and a subjective aesthetic, putting me somewhat at odds with those who baldly assert that taste is "obviously" entirely individual.

Categories: LUG Community Blogs

Andrew Savory: Mastering the mobile app challenge at Adobe Summit

Planet ALUG - Sun, 23/03/2014 - 18:00

I’m presenting a 2 hour Building mobile apps with PhoneGap Enterprise lab at Adobe Summit in Salt Lake City on Tuesday, with my awesome colleague John Fait. Here’s a sneak preview of the blurb which will be appearing over on the Adobe Digital Marketing blog tomorrow. I’m posting it here as it may be interesting to the wider Apache Cordova community to see what Adobe are doing with a commercial version of the project…

~

Mobile apps are the next great challenge for marketing experts. Bruce Lefebvre sets the scene perfectly in So, You Want to Build an App. In his mobile app development and content management with AEM session at Adobe Summit he’ll show you how Adobe Marketing Cloud solutions are providing amazing capabilities for delivering mobile apps. It’s a must-see session to learn about AEM and PhoneGap.

But what if you want to gain hands-on practical experience of AEM, PhoneGap, and mobile app development? If you want to roll up your sleeves and build a mobile app yourself, then we’ve got an awesome lab planned for you. In “Building mobile apps with PhoneGap Enterprise“, you’ll have the opportunity to create, build, and update a mobile application with Adobe Experience Manager. You’ll see how easy it is to deliver applications across multiple platforms. You’ll also learn how you can easily monitor app engagement through integration with Adobe Analytics and Adobe Mobile Services.

If you want to know how you can deliver more effective apps, leverage your investment in AEM, and bridge the gap between marketers and developers, then you need to attend this lab at Adobe Summit. Join us for this extended deep dive into the integration between AEM and PhoneGap. No previous experience is necessary – you don’t need to know how to code, and you don’t need to know any of the Adobe solutions, as we’ll explain it all as we go along. Some of you will also be able to leave the lab with the mobile app you wrote, so that you can show your friends and colleagues how you’re ready to continuously drive mobile app engagement and ROI, reduce your app time to market, and deliver a unified experience across channels and brands.

Are you ready to master the mobile app challenge?

~

All hyperbole aside, I think this is going to be a pretty interesting technology space to watch:

  • Being able to build a mobile app from within a CMS is incredibly powerful for certain classes of mobile app. Imagine people having the ability to build mobile apps with an easy drag-and-drop UI that requires no coding. Imagine being able to add workflows (editorial review, approvals etc) to your mobile app development.
  • No matter how we might wish we didn’t have these app content silos, you can’t argue against the utility of content-based mobile apps until the mobile web matures sufficiently so that everyone can build offline mobile websites with ease. Added together with over-the-air content updates, it’s really nice to be able to have access to important content even when the network is not available.
  • Analytics and mobile metrics are providing really useful ways to understand how people are using websites and apps. Having all the SDKs embedded in your app automatically with no extra coding required means that everyone can take advantage of these metrics. Hopefully this will lead to a corresponding leap in app quality and usability.
  • By using Apache Cordova, we’re ensuring that these mobile app silos are at least built using open standards and open technologies (HTML, CSS, JS, with temporary native shims). So when mobile web runtimes are mature enough, it will be trivial to switch front-end app to front-end website without retooling the entire back-end content management workflow.

Exciting times.

Categories: LUG Community Blogs

Martin Wimpress: Memory consumption of Linux desktop environments

Planet HantsLUG - Sun, 23/03/2014 - 14:30

For the last 9 months or so I've spent my spare time working with the MATE Desktop Team. Every so often, via the various on-line MATE communities, the topic of how "light weight" MATE is when compared to other desktop environments crops up and quite often XFCE is suggested as a lighter alternative. After all MATE and XFCE both provide a traditional desktop environment based on GTK+ so this suggestion is sensible. But is XFCE actually "lighter" than MATE?

I've found MATE to be (subjectively) more responsive than XFCE, and there have been two recent blog posts that indicate MATE has lower memory requirements than XFCE.

Given that I'm comfortably running MATE on the Raspberry Pi Model B (which has just 512MB RAM) I've been stating that MATE is well suited for use on resource constrained hardware and professional workstations alike. This is still true, but I've also said that MATE is lighter than XFCE and I might have to eat humble pie on that one.

The topic of measuring desktop environment resource use came up on the #archlinux-tu IRC channel recently and someone suggested using ps_mem.py to gather the memory usage data. ps_mem.py provides a far more robust mechanism for gathering memory usage data than I've seen in previous comparisons.

So the seed was planted, I created seven VirtualBox guests and set to work comparing the memory requirements of all the Linux desktop environments I could.

Damn it, just tell me what the "lightest" desktop environment is!

OK, for those of you who just want the final answer, with none of the explanation, here it is:

| Desktop Environment  | Memory Used |
| ---------------------|------------:|
| LXDE                 |    84.9 MiB |
| Enlightenment 0.18.5 |    89.6 MiB |
| XFCE 4.10.2          |   105.8 MiB |
| MATE 1.8.0           |   121.6 MiB |
| Cinnamon 2.0.14      |   167.1 MiB |
| GNOME3 3.10          |   256.4 MiB |
| KDE 4.12             |   358.8 MiB |

Bullshit! How did you come up with these numbers?
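The relative footprints are easy to compute from those totals; a quick sketch, with the numbers copied straight from the table:

```python
# Measured totals from the table above, in MiB.
totals = {
    "LXDE": 84.9, "Enlightenment": 89.6, "XFCE": 105.8,
    "MATE": 121.6, "Cinnamon": 167.1, "GNOME3": 256.4, "KDE": 358.8,
}

def ratio(a, b):
    """How many times more RAM desktop `a` used than desktop `b`."""
    return totals[a] / totals[b]

print(f"KDE vs LXDE:  {ratio('KDE', 'LXDE'):.1f}x")   # roughly 4.2x
print(f"MATE vs XFCE: {ratio('MATE', 'XFCE'):.2f}x")  # roughly 1.15x
```

So in this setup KDE used over four times the memory of LXDE, and MATE about 15% more than XFCE.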

TL;DR

All the VirtualBox VMs are 32-bit with 768MB RAM and based on the same core Arch Linux installation. I achieved this using my ArchInstaller script which is designed for quickly installing reproducible Arch Linux setups.

Each VM differs only by the packages that are required for the given desktop environment. Each desktop environment's native display manager is also installed; if it doesn't have one, lightdm was chosen. LXDE, XFCE, MATE, Cinnamon and GNOME all have gvfs-smb installed, as this enables accessing Windows and Samba shares (a common requirement for home and office) in their respective file managers, and the KDE install includes packages to provide equivalent functionality. You can see the specific desktop environment packages or package groups that were installed here:

Each VM was booted and logged in to, and any initial desktop environment configuration was completed, choosing the default options if prompted. Then ps_mem was installed, the VM shut down and a snapshot made.

Each VM was then started, I logged in via the display manager, let the desktop environment fully load, and waited for disk activity to settle. Then ps -efH and ps_mem were executed via SSH and the results sent back to my workstation. When the process and memory collections were conducted there had been no desktop interaction and no applications had been launched.

Your numbers are wrong I can get xxx desktop to run in yyy less memory!

Well done, you probably can.

Each virtual machine has VirtualBox guest additions, OpenSSH, Network Manager, avahi-daemon, ntpd, rpc.statd, syslog-ng and various other bits and bobs installed and running. Some of these are not required or have lighter alternatives available.

So, while I freely accept that each desktop environment can be run in less memory, the results here are relative to a consistent base setup.

However, what is important to note is that I think the Cinnamon results are too low. Cinnamon is forked from GNOME3 and the Arch Linux package groups for Cinnamon only install the core Cinnamon packages but none of the GNOME3 applications or components that would be required to create a full desktop environment.

So comparing Cinnamon with the other desktops in this test is not a fair comparison. For example, the GNOME3 and KDE default installs on Arch Linux include all the accessibility extensions and applications for sight- or mobility-impaired individuals, whereas Cinnamon does not. This is just one example of where I think the Cinnamon results are skewed.

The RAM is there to be used. Is lighter actually better?

No, and Yes.

I subscribe to the school of thought that RAM is there to be used. But:

  • I want to preserve as much free RAM for the applications I run, not for feature bloat in the desktop environment. I'm looking at you KDE.
  • I want a fully integrated desktop experience, but not one that is merely lighter because it lacks features. I'm looking at you LXDE.
  • I want a consistent user interface that any of my family could use, not one that favours style over substance. I'm looking at you Enlightenment.

Another take on lightness is that the more RAM used, the more code needs executing, and therefore the higher the CPU utilisation and the worse the desktop performance on modest hardware. This could also translate into degraded battery performance.

This is why I choose MATE Desktop. It is a fully integrated desktop environment that is responsive, feature-full and has reasonable memory requirements, and it scales from a single-core armv6h CPU with 512MB RAM to a multi-core x86_64 CPU with 32GB RAM (for me at least).

Without the full stats it never happened. Prove it!

Here is the full data capture from ps_mem.py for each desktop environment.

LXDE Private + Shared = RAM used Program 176.0 KiB + 37.5 KiB = 213.5 KiB dbus-launch 308.0 KiB + 38.0 KiB = 346.0 KiB dhcpcd 320.0 KiB + 85.5 KiB = 405.5 KiB rpcbind 352.0 KiB + 76.0 KiB = 428.0 KiB lxdm-binary 388.0 KiB + 79.0 KiB = 467.0 KiB lxsession 560.0 KiB + 34.0 KiB = 594.0 KiB crond 576.0 KiB + 51.0 KiB = 627.0 KiB systemd-logind 476.0 KiB + 280.0 KiB = 756.0 KiB avahi-daemon (2) 584.0 KiB + 191.5 KiB = 775.5 KiB at-spi-bus-launcher 764.0 KiB + 53.5 KiB = 817.5 KiB systemd-udevd 4.6 MiB + -3890.5 KiB = 817.5 KiB menu-cached 612.0 KiB + 211.0 KiB = 823.0 KiB gvfsd 496.0 KiB + 332.0 KiB = 828.0 KiB lxdm-session 628.0 KiB + 226.0 KiB = 854.0 KiB at-spi2-registryd 772.0 KiB + 83.5 KiB = 855.5 KiB VBoxService 764.0 KiB + 93.0 KiB = 857.0 KiB rpc.statd 704.0 KiB + 165.5 KiB = 869.5 KiB ntpd 712.0 KiB + 174.5 KiB = 886.5 KiB accounts-daemon 4.8 MiB + -3888.0 KiB = 1.0 MiB gvfsd-fuse 896.0 KiB + 267.5 KiB = 1.1 MiB gvfsd-trash 5.0 MiB + -3765.0 KiB = 1.4 MiB upowerd 5.1 MiB + -3691.5 KiB = 1.4 MiB gvfs-udisks2-volume-monitor 5.1 MiB + -3774.0 KiB = 1.5 MiB udisksd 1.0 MiB + 505.5 KiB = 1.5 MiB dbus-daemon (3) 1.2 MiB + 531.0 KiB = 1.7 MiB (sd-pam) (2) 1.7 MiB + 276.0 KiB = 1.9 MiB syslog-ng 1.0 MiB + 1.0 MiB = 2.1 MiB systemd (3) 1.4 MiB + 940.0 KiB = 2.3 MiB lxpolkit 1.3 MiB + 1.2 MiB = 2.5 MiB sshd (2) 2.3 MiB + 665.0 KiB = 3.0 MiB NetworkManager 2.6 MiB + 502.5 KiB = 3.0 MiB VBoxClient (4) 11.2 MiB + -7782.5 KiB = 3.6 MiB polkitd 3.2 MiB + 696.5 KiB = 3.9 MiB openbox 2.9 MiB + 1.9 MiB = 4.8 MiB lxpanel 5.2 MiB + 61.5 KiB = 5.3 MiB systemd-journald 3.6 MiB + 1.8 MiB = 5.4 MiB pcmanfm 7.0 MiB + 1.6 MiB = 8.5 MiB nm-applet 16.4 MiB + 504.0 KiB = 16.9 MiB Xorg --------------------------------- 84.9 MiB ================================= Enlightenment Private + Shared = RAM used Program 172.0 KiB + 46.5 KiB = 218.5 KiB dbus-launch 316.0 KiB + 40.0 KiB = 356.0 KiB dhcpcd 336.0 KiB + 87.5 KiB = 423.5 KiB rpcbind 560.0 KiB + 37.0 KiB = 597.0 KiB crond 580.0 KiB + 54.0 
KiB = 634.0 KiB systemd-logind 688.0 KiB + 67.5 KiB = 755.5 KiB systemd-udevd 480.0 KiB + 276.0 KiB = 756.0 KiB avahi-daemon (2) 700.0 KiB + 133.5 KiB = 833.5 KiB ntpd 768.0 KiB + 78.5 KiB = 846.5 KiB VBoxService 580.0 KiB + 267.0 KiB = 847.0 KiB tempget 544.0 KiB + 312.0 KiB = 856.0 KiB enlightenment_start 764.0 KiB + 94.0 KiB = 858.0 KiB rpc.statd 600.0 KiB + 280.5 KiB = 880.5 KiB at-spi-bus-launcher 624.0 KiB + 298.0 KiB = 922.0 KiB at-spi2-registryd 724.0 KiB + 309.5 KiB = 1.0 MiB accounts-daemon 784.0 KiB + 386.5 KiB = 1.1 MiB enlightenment_fm 952.0 KiB + 395.0 KiB = 1.3 MiB efreetd 1.0 MiB + 517.0 KiB = 1.5 MiB dbus-daemon (3) 5.3 MiB + -3781.0 KiB = 1.7 MiB udisksd 1.2 MiB + 483.0 KiB = 1.7 MiB (sd-pam) (2) 1.6 MiB + 234.0 KiB = 1.9 MiB syslog-ng 1.1 MiB + 1.0 MiB = 2.1 MiB systemd (3) 1.4 MiB + 814.5 KiB = 2.2 MiB lightdm (2) 1.3 MiB + 1.1 MiB = 2.4 MiB sshd (2) 2.6 MiB + 575.5 KiB = 3.2 MiB VBoxClient (4) 2.4 MiB + 781.0 KiB = 3.2 MiB NetworkManager 10.9 MiB + -7741.5 KiB = 3.3 MiB polkitd 6.2 MiB + 68.5 KiB = 6.3 MiB systemd-journald 11.3 MiB + -2300.0 KiB = 9.1 MiB nm-applet 16.3 MiB + 426.0 KiB = 16.7 MiB Xorg 19.9 MiB + 1.5 MiB = 21.4 MiB enlightenment --------------------------------- 89.6 MiB ================================= XFCE Private + Shared = RAM used Program 176.0 KiB + 31.5 KiB = 207.5 KiB dbus-launch 292.0 KiB + 26.5 KiB = 318.5 KiB gpg-agent 312.0 KiB + 36.0 KiB = 348.0 KiB dhcpcd 324.0 KiB + 84.5 KiB = 408.5 KiB rpcbind 484.0 KiB + 94.0 KiB = 578.0 KiB xfconfd 560.0 KiB + 31.0 KiB = 591.0 KiB crond 584.0 KiB + 49.0 KiB = 633.0 KiB systemd-logind 476.0 KiB + 250.0 KiB = 726.0 KiB avahi-daemon (2) 600.0 KiB + 163.5 KiB = 763.5 KiB at-spi-bus-launcher 624.0 KiB + 169.0 KiB = 793.0 KiB at-spi2-registryd 620.0 KiB + 180.0 KiB = 800.0 KiB gvfsd 752.0 KiB + 49.5 KiB = 801.5 KiB systemd-udevd 764.0 KiB + 54.5 KiB = 818.5 KiB sh 768.0 KiB + 57.5 KiB = 825.5 KiB VBoxService 764.0 KiB + 91.0 KiB = 855.0 KiB rpc.statd 708.0 KiB + 163.5 KiB = 871.5 
KiB ntpd 712.0 KiB + 168.5 KiB = 880.5 KiB accounts-daemon 856.0 KiB + 177.5 KiB = 1.0 MiB gvfsd-fuse 828.0 KiB + 229.5 KiB = 1.0 MiB gvfsd-trash 992.0 KiB + 285.0 KiB = 1.2 MiB tumblerd 1.0 MiB + 252.0 KiB = 1.3 MiB upowerd 5.1 MiB + -3728.5 KiB = 1.4 MiB gvfs-udisks2-volume-monitor 5.1 MiB + -3802.0 KiB = 1.4 MiB udisksd 1.1 MiB + 354.0 KiB = 1.5 MiB xfce4-notifyd 1.2 MiB + 489.0 KiB = 1.7 MiB (sd-pam) (2) 1.3 MiB + 493.5 KiB = 1.8 MiB dbus-daemon (3) 1.5 MiB + 460.0 KiB = 1.9 MiB Thunar 1.7 MiB + 266.0 KiB = 1.9 MiB syslog-ng 5.4 MiB + -3474.5 KiB = 2.0 MiB lightdm (2) 1.1 MiB + 1.0 MiB = 2.1 MiB systemd (3) 1.4 MiB + 682.5 KiB = 2.1 MiB panel-6-systray 1.6 MiB + 637.5 KiB = 2.2 MiB xfce4-session 1.9 MiB + 529.5 KiB = 2.4 MiB xfsettingsd 1.6 MiB + 896.5 KiB = 2.5 MiB panel-2-actions 1.3 MiB + 1.2 MiB = 2.5 MiB sshd (2) 2.3 MiB + 574.0 KiB = 2.9 MiB NetworkManager 2.6 MiB + 446.5 KiB = 3.0 MiB VBoxClient (4) 2.5 MiB + 585.5 KiB = 3.0 MiB xfce4-power-manager (2) 2.1 MiB + 1.1 MiB = 3.2 MiB xfwm4 11.2 MiB + -7865.0 KiB = 3.5 MiB polkitd 3.1 MiB + 1.3 MiB = 4.4 MiB xfce4-panel 3.8 MiB + 1.6 MiB = 5.4 MiB xfdesktop 6.2 MiB + 63.5 KiB = 6.3 MiB systemd-journald 10.5 MiB + -2789.5 KiB = 7.8 MiB nm-applet 22.6 MiB + 844.5 KiB = 23.4 MiB Xorg --------------------------------- 105.8 MiB ================================= MATE Private + Shared = RAM used Program 172.0 KiB + 29.5 KiB = 201.5 KiB dbus-launch 248.0 KiB + 57.5 KiB = 305.5 KiB rtkit-daemon 312.0 KiB + 33.0 KiB = 345.0 KiB dhcpcd 324.0 KiB + 84.5 KiB = 408.5 KiB rpcbind 440.0 KiB + 89.0 KiB = 529.0 KiB dconf-service 560.0 KiB + 29.0 KiB = 589.0 KiB crond 580.0 KiB + 46.0 KiB = 626.0 KiB systemd-logind 548.0 KiB + 116.0 KiB = 664.0 KiB gconfd-2 544.0 KiB + 182.0 KiB = 726.0 KiB gconf-helper 580.0 KiB + 146.5 KiB = 726.5 KiB at-spi-bus-launcher 480.0 KiB + 248.0 KiB = 728.0 KiB avahi-daemon (2) 696.0 KiB + 47.5 KiB = 743.5 KiB systemd-udevd 612.0 KiB + 163.0 KiB = 775.0 KiB at-spi2-registryd 4.6 MiB + -3935.0 KiB = 
777.0 KiB gvfsd 768.0 KiB + 56.5 KiB = 824.5 KiB VBoxService 764.0 KiB + 89.0 KiB = 853.0 KiB rpc.statd 704.0 KiB + 160.5 KiB = 864.5 KiB ntpd 732.0 KiB + 148.5 KiB = 880.5 KiB accounts-daemon 808.0 KiB + 202.5 KiB = 1.0 MiB gvfsd-trash 860.0 KiB + 154.0 KiB = 1.0 MiB gvfsd-fuse 1.0 MiB + 249.0 KiB = 1.3 MiB upowerd 5.0 MiB + -3758.5 KiB = 1.4 MiB gvfs-udisks2-volume-monitor 5.1 MiB + -3810.0 KiB = 1.4 MiB udisksd 1.4 MiB + 377.0 KiB = 1.8 MiB (sd-pam) (2) 1.7 MiB + 267.0 KiB = 1.9 MiB syslog-ng 1.5 MiB + 476.5 KiB = 1.9 MiB dbus-daemon (3) 1.5 MiB + 412.5 KiB = 1.9 MiB polkit-mate-authentication-agent-1 1.4 MiB + 591.5 KiB = 2.0 MiB lightdm (2) 1.4 MiB + 884.0 KiB = 2.2 MiB systemd (3) 5.9 MiB + -3514.5 KiB = 2.5 MiB mate-screensaver 1.3 MiB + 1.2 MiB = 2.5 MiB sshd (2) 2.0 MiB + 559.5 KiB = 2.5 MiB mate-session 1.9 MiB + 675.5 KiB = 2.6 MiB notification-area-applet 2.0 MiB + 734.0 KiB = 2.7 MiB mate-power-manager 2.2 MiB + 579.0 KiB = 2.8 MiB NetworkManager 2.6 MiB + 417.5 KiB = 3.0 MiB VBoxClient (4) 2.8 MiB + 678.0 KiB = 3.4 MiB marco 11.2 MiB + -7886.5 KiB = 3.5 MiB polkitd 2.7 MiB + 930.0 KiB = 3.6 MiB wnck-applet 3.5 MiB + 304.5 KiB = 3.8 MiB pulseaudio 2.7 MiB + 1.2 MiB = 3.9 MiB mate-volume-control-applet 3.0 MiB + 1.0 MiB = 4.0 MiB clock-applet 3.6 MiB + 1.1 MiB = 4.7 MiB mate-settings-daemon 3.7 MiB + 1.2 MiB = 4.9 MiB mate-panel 7.0 MiB + 314.0 KiB = 7.3 MiB systemd-journald 6.1 MiB + 1.5 MiB = 7.6 MiB caja 7.8 MiB + 1.1 MiB = 8.8 MiB nm-applet 17.2 MiB + 1.2 MiB = 18.4 MiB Xorg --------------------------------- 121.6 MiB ================================= Cinnamon Private + Shared = RAM used Program 240.0 KiB + 55.5 KiB = 295.5 KiB rtkit-daemon 312.0 KiB + 32.0 KiB = 344.0 KiB dhcpcd 340.0 KiB + 83.5 KiB = 423.5 KiB rpcbind 384.0 KiB + 78.5 KiB = 462.5 KiB dbus-launch (2) 556.0 KiB + 28.0 KiB = 584.0 KiB crond 576.0 KiB + 44.0 KiB = 620.0 KiB systemd-logind 548.0 KiB + 110.0 KiB = 658.0 KiB gconfd-2 460.0 KiB + 246.0 KiB = 706.0 KiB avahi-daemon (2) 
540.0 KiB + 179.0 KiB = 719.0 KiB gconf-helper 584.0 KiB + 167.0 KiB = 751.0 KiB at-spi-bus-launcher 620.0 KiB + 152.0 KiB = 772.0 KiB at-spi2-registryd 616.0 KiB + 170.0 KiB = 786.0 KiB gvfsd 748.0 KiB + 45.5 KiB = 793.5 KiB systemd-udevd 772.0 KiB + 54.5 KiB = 826.5 KiB VBoxService 700.0 KiB + 129.5 KiB = 829.5 KiB ntpd 760.0 KiB + 87.0 KiB = 847.0 KiB rpc.statd 4.7 MiB + -3940.5 KiB = 863.5 KiB accounts-daemon 860.0 KiB + 162.0 KiB = 1.0 MiB gvfsd-fuse 816.0 KiB + 210.5 KiB = 1.0 MiB gvfsd-trash 5.0 MiB + -3866.0 KiB = 1.2 MiB upowerd 5.1 MiB + -3753.5 KiB = 1.4 MiB gvfs-udisks2-volume-monitor 1.1 MiB + 283.0 KiB = 1.4 MiB udisksd 1.1 MiB + 319.5 KiB = 1.4 MiB cupsd 5.3 MiB + -3709.0 KiB = 1.7 MiB csd-printer 1.6 MiB + 219.0 KiB = 1.8 MiB syslog-ng 1.4 MiB + 582.5 KiB = 1.9 MiB lightdm (2) 1.6 MiB + 512.5 KiB = 2.1 MiB dbus-daemon (4) 1.4 MiB + 1.0 MiB = 2.4 MiB systemd (4) 1.3 MiB + 1.1 MiB = 2.4 MiB sshd (2) 1.8 MiB + 624.5 KiB = 2.4 MiB (sd-pam) (3) 2.3 MiB + 335.0 KiB = 2.7 MiB colord 2.2 MiB + 519.0 KiB = 2.7 MiB NetworkManager 2.6 MiB + 447.5 KiB = 3.0 MiB VBoxClient (4) 2.5 MiB + 695.5 KiB = 3.2 MiB polkit-gnome-authentication-agent-1 6.7 MiB + -3304.0 KiB = 3.5 MiB cinnamon-screensaver 11.2 MiB + -7914.5 KiB = 3.5 MiB polkitd 7.0 MiB + -3244.0 KiB = 3.8 MiB cinnamon-session 3.5 MiB + 343.5 KiB = 3.9 MiB pulseaudio 5.2 MiB + 56.5 KiB = 5.3 MiB systemd-journald 3.8 MiB + 2.0 MiB = 5.8 MiB nm-applet 5.4 MiB + 1.9 MiB = 7.3 MiB cinnamon-settings-daemon 8.1 MiB + 1.1 MiB = 9.2 MiB cinnamon-launch 8.4 MiB + 2.0 MiB = 10.3 MiB nemo 32.0 MiB + -5304.5 KiB = 26.8 MiB Xorg 37.5 MiB + 5.3 MiB = 42.9 MiB cinnamon --------------------------------- 167.1 MiB ================================= GNOME3 Private + Shared = RAM used Program 172.0 KiB + 33.5 KiB = 205.5 KiB dbus-launch 272.0 KiB + 14.0 KiB = 286.0 KiB ssh-agent 244.0 KiB + 48.5 KiB = 292.5 KiB rtkit-daemon 312.0 KiB + 30.0 KiB = 342.0 KiB dhcpcd 324.0 KiB + 21.0 KiB = 345.0 KiB systemd-localed 324.0 KiB + 
22.5 KiB = 346.5 KiB systemd-hostnamed 324.0 KiB + 82.5 KiB = 406.5 KiB rpcbind 364.0 KiB + 77.0 KiB = 441.0 KiB dconf-service 564.0 KiB + 27.0 KiB = 591.0 KiB crond 536.0 KiB + 60.5 KiB = 596.5 KiB obexd 560.0 KiB + 47.5 KiB = 607.5 KiB bluetoothd 592.0 KiB + 41.0 KiB = 633.0 KiB systemd-logind 556.0 KiB + 102.0 KiB = 658.0 KiB gconfd-2 544.0 KiB + 170.0 KiB = 714.0 KiB gconf-helper 620.0 KiB + 125.0 KiB = 745.0 KiB at-spi2-registryd 500.0 KiB + 249.0 KiB = 749.0 KiB avahi-daemon (2) 592.0 KiB + 158.5 KiB = 750.5 KiB at-spi-bus-launcher 620.0 KiB + 137.0 KiB = 757.0 KiB gvfsd 720.0 KiB + 44.5 KiB = 764.5 KiB systemd-udevd 692.0 KiB + 105.0 KiB = 797.0 KiB gdm 688.0 KiB + 140.0 KiB = 828.0 KiB gvfsd-burn 704.0 KiB + 128.5 KiB = 832.5 KiB ntpd 768.0 KiB + 84.0 KiB = 852.0 KiB rpc.statd 744.0 KiB + 123.5 KiB = 867.5 KiB accounts-daemon 720.0 KiB + 257.5 KiB = 977.5 KiB (sd-pam) 852.0 KiB + 131.0 KiB = 983.0 KiB gvfsd-fuse 776.0 KiB + 257.0 KiB = 1.0 MiB zeitgeist-daemon 956.0 KiB + 161.5 KiB = 1.1 MiB gdm-simple-slave 1.0 MiB + 188.0 KiB = 1.2 MiB upowerd 1.0 MiB + 261.5 KiB = 1.3 MiB gvfs-udisks2-volume-monitor 1.1 MiB + 216.0 KiB = 1.3 MiB udisksd 1.1 MiB + 298.5 KiB = 1.4 MiB cupsd 1.1 MiB + 469.5 KiB = 1.6 MiB gdm-session-worker 1.3 MiB + 314.0 KiB = 1.6 MiB gsd-printer 1.5 MiB + 285.5 KiB = 1.7 MiB gnome-keyring-daemon 1.6 MiB + 207.0 KiB = 1.8 MiB syslog-ng 1.0 MiB + 912.5 KiB = 1.9 MiB systemd (2) 1.3 MiB + 661.0 KiB = 2.0 MiB mission-control-5 1.6 MiB + 421.0 KiB = 2.0 MiB gnome-session 1.7 MiB + 398.5 KiB = 2.0 MiB colord 1.6 MiB + 511.5 KiB = 2.1 MiB zeitgeist-datahub 1.3 MiB + 1.0 MiB = 2.3 MiB sshd (2) 1.8 MiB + 624.0 KiB = 2.5 MiB gnome-shell-calendar-server 2.1 MiB + 399.0 KiB = 2.5 MiB NetworkManager 2.2 MiB + 468.5 KiB = 2.6 MiB dbus-daemon (3) 2.4 MiB + 839.5 KiB = 3.2 MiB evolution-source-registry 2.6 MiB + 647.0 KiB = 3.3 MiB gnome-control-center-search-provider 3.5 MiB + 355.5 KiB = 3.8 MiB pulseaudio 3.4 MiB + 1.0 MiB = 4.4 MiB tracker-miner-fs 
3.3 MiB + 1.7 MiB = 5.0 MiB goa-daemon 4.4 MiB + 749.5 KiB = 5.1 MiB polkitd 4.0 MiB + 1.8 MiB = 5.8 MiB nm-applet 5.2 MiB + 857.5 KiB = 6.1 MiB tracker-store 6.6 MiB + 58.5 KiB = 6.7 MiB systemd-journald 5.8 MiB + 1.9 MiB = 7.7 MiB gnome-settings-daemon 6.4 MiB + 2.5 MiB = 8.9 MiB evolution-alarm-notify 9.1 MiB + 1.6 MiB = 10.7 MiB Xorg 10.1 MiB + 1.1 MiB = 11.2 MiB seahorse 11.9 MiB + 2.5 MiB = 14.4 MiB epiphany 18.9 MiB + 2.6 MiB = 21.6 MiB WebKitWebProcess 24.3 MiB + 972.0 KiB = 25.3 MiB evolution-calendar-factory 57.9 MiB + 5.2 MiB = 63.2 MiB gnome-shell --------------------------------- 256.4 MiB ================================= KDE Private + Shared = RAM used Program 68.0 KiB + 7.0 KiB = 75.0 KiB start_kdeinit 72.0 KiB + 13.5 KiB = 85.5 KiB kwrapper4 120.0 KiB + 21.0 KiB = 141.0 KiB agetty 172.0 KiB + 25.5 KiB = 197.5 KiB dbus-launch 292.0 KiB + 26.5 KiB = 318.5 KiB gpg-agent 312.0 KiB + 30.0 KiB = 342.0 KiB dhcpcd 324.0 KiB + 81.5 KiB = 405.5 KiB rpcbind 556.0 KiB + 25.0 KiB = 581.0 KiB crond 596.0 KiB + 40.0 KiB = 636.0 KiB systemd-logind 496.0 KiB + 234.0 KiB = 730.0 KiB avahi-daemon (2) 700.0 KiB + 96.5 KiB = 796.5 KiB ntpd 768.0 KiB + 44.5 KiB = 812.5 KiB VBoxService 768.0 KiB + 83.0 KiB = 851.0 KiB rpc.statd 832.0 KiB + 40.5 KiB = 872.5 KiB systemd-udevd 832.0 KiB + 51.5 KiB = 883.5 KiB startkde 728.0 KiB + 264.5 KiB = 992.5 KiB accounts-daemon 516.0 KiB + 638.0 KiB = 1.1 MiB nepomukserver 652.0 KiB + 519.5 KiB = 1.1 MiB kdm (2) 1.0 MiB + 346.0 KiB = 1.4 MiB upowerd 848.0 KiB + 686.0 KiB = 1.5 MiB klauncher 1.2 MiB + 465.0 KiB = 1.6 MiB (sd-pam) (2) 1.3 MiB + 298.0 KiB = 1.6 MiB udisksd 1.6 MiB + 225.5 KiB = 1.8 MiB akonadi_control 1.6 MiB + 151.0 KiB = 1.8 MiB syslog-ng 1.1 MiB + 967.5 KiB = 2.0 MiB systemd (3) 1.6 MiB + 408.5 KiB = 2.0 MiB dbus-daemon (2) 668.0 KiB + 1.5 MiB = 2.1 MiB kdeinit4 1.3 MiB + 1.0 MiB = 2.3 MiB sshd (2) 1.4 MiB + 1.4 MiB = 2.8 MiB kio_trash (2) 2.2 MiB + 1.0 MiB = 3.2 MiB klipper 2.8 MiB + 383.5 KiB = 3.2 MiB VBoxClient 
(4) 2.9 MiB + 454.0 KiB = 3.3 MiB NetworkManager 2.5 MiB + 989.0 KiB = 3.4 MiB ksmserver 3.3 MiB + 542.0 KiB = 3.8 MiB kuiserver 3.2 MiB + 875.0 KiB = 4.1 MiB kglobalaccel 3.4 MiB + 743.0 KiB = 4.1 MiB akonadi_migration_agent 3.5 MiB + 741.5 KiB = 4.2 MiB polkit-kde-authentication-agent-1 3.8 MiB + 638.5 KiB = 4.4 MiB knotify4 3.9 MiB + 892.0 KiB = 4.8 MiB akonadi_maildispatcher_agent 3.9 MiB + 954.5 KiB = 4.8 MiB nepomukfileindexer 4.1 MiB + 930.5 KiB = 5.0 MiB nepomukfilewatch 4.1 MiB + 1.2 MiB = 5.3 MiB akonadi_newmailnotifier_agent 5.3 MiB + 50.5 KiB = 5.4 MiB systemd-journald 4.4 MiB + 1.0 MiB = 5.4 MiB korgac 4.2 MiB + 1.2 MiB = 5.4 MiB akonadi_nepomuk_feeder 4.7 MiB + 850.5 KiB = 5.6 MiB kactivitymanagerd 13.4 MiB + -7873.5 KiB = 5.7 MiB polkitd 14.1 MiB + -7803.5 KiB = 6.5 MiB akonadiserver 5.9 MiB + 905.5 KiB = 6.8 MiB nepomukstorage 5.6 MiB + 1.8 MiB = 7.4 MiB akonadi_sendlater_agent 5.7 MiB + 2.2 MiB = 8.0 MiB kmix 5.9 MiB + 2.4 MiB = 8.3 MiB akonadi_archivemail_agent 6.0 MiB + 2.4 MiB = 8.3 MiB akonadi_folderarchive_agent 6.0 MiB + 2.4 MiB = 8.4 MiB akonadi_mailfilter_agent 11.9 MiB + -1763.0 KiB = 10.2 MiB kded4 13.4 MiB + -1419.0 KiB = 12.1 MiB kwin 12.2 MiB + 3.0 MiB = 15.2 MiB akonadi_agent_launcher (4) 13.8 MiB + 3.2 MiB = 17.1 MiB krunner 65.1 MiB + -44927.5 KiB = 21.3 MiB mysqld 33.7 MiB + 111.5 KiB = 33.8 MiB virtuoso-t 37.4 MiB + -816.5 KiB = 36.7 MiB Xorg 38.3 MiB + 7.8 MiB = 46.1 MiB plasma-desktop --------------------------------- 358.8 MiB =================================

Final thoughts

On Arch Linux at least, XFCE has lower resource requirements than MATE. When I said differently in the past I was wrong, unless you use openSUSE, in which case I was probably right, maybe.

LXDE does achieve what it set out to do, as it is indeed the lightest-weight desktop environment I tested.

Anyone want to share some humble pie?

Categories: LUG Community Blogs

Steve Kemp: That is it, I'm going to do it

Planet HantsLUG - Sat, 22/03/2014 - 15:49

That's it, I'm going to do it: I have now committed myself to writing a scalable, caching, reverse HTTP proxy.

The biggest question right now is implementation language; obviously "threading" of some kind is required, so it is a choice between Perl's AnyEvent, Python's Twisted, Ruby's EventMachine, or node.js.

I'm absolutely, definitely, not going to use C, or C++.

Writing a reverse proxy in node.js is almost trivial; the hard part will be working out in which language to express the caching behaviour, on a per-type and per-resource basis.
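Whichever event framework hosts the proxy, the per-type, per-resource caching behaviour can be sketched independently of it. Here is one hypothetical shape for such a policy (all the rule names and TTL values are illustrative, not from any real configuration): resource rules matched by path prefix override content-type rules, which override a default.

```python
# Hypothetical cache policy: most-specific rule wins.
# A TTL of 0 means "never cache this".
RESOURCE_RULES = {"/api/": 0, "/static/": 86400}    # path prefix -> TTL (s)
TYPE_RULES = {"image/png": 3600, "text/html": 60}   # content type -> TTL (s)
DEFAULT_TTL = 300

def ttl_for(path, content_type):
    """Return the cache TTL in seconds for a given request."""
    for prefix, ttl in RESOURCE_RULES.items():
        if path.startswith(prefix):
            return ttl                    # per-resource rule wins
    return TYPE_RULES.get(content_type, DEFAULT_TTL)

print(ttl_for("/api/users", "text/html"))   # resource rule wins -> 0
print(ttl_for("/index.html", "text/html"))  # type rule -> 60
```

The interesting design question the post raises is whether such rules live in a static config file or in a small embedded scripting language evaluated per request.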

I will ponder.

Categories: LUG Community Blogs

Running Core Apps on the Desktop

Planet SurreyLUG - Fri, 21/03/2014 - 13:20

One of the (many) challenges with creating a new mobile platform is that of bootstrapping app developers. With the Ubuntu SDK – based on well-known tools like Qt & QtCreator and supporting HTML5, C++ & GL – we can run our applications both on mobile devices and on standard Ubuntu desktops.

With our convergence plans being a core component of Ubuntu over the coming releases, we can take advantage of this when testing mobile apps.

Developers can create & users can test applications without having to commit funds to a dedicated Ubuntu mobile device. Of course, in the future Ubuntu mobile devices will be ubiquitous, but for now we can support those users, testers and developers by letting them use Ubuntu mobile apps on the desktop.

So with the Core Apps Hack Days just around the corner, now is a great time to install the core apps on your desktop and help test them, file bugs and contribute patches!

For users of Ubuntu 13.10 and Trusty (14.04) we have a Core Apps Daily PPA which has builds of all the core apps. Installing them is a cinch on 13.10 and 14.04 via the PPA & touch-coreapps metapackage:-

sudo add-apt-repository ppa:ubuntu-touch-coreapps-drivers/daily
sudo apt-get update
sudo apt-get install touch-coreapps

Note: This has been tested on 13.10 and 14.04 but not on previous releases.

If you later wish to remove them simply use sudo ppa-purge ppa:ubuntu-touch-coreapps-drivers/daily or just sudo apt-get autoremove touch-coreapps.

Once installed, you should see icons for all the core apps in your dash.

We welcome constructive feedback and bug reports about all aspects of the Core Apps. You can find us in #ubuntu-app-devel on freenode and details of where to file bugs can be found on the Core Apps wiki pages.

You can get started with developing apps on Ubuntu via the SDK, the documentation for which is at developer.ubuntu.com.

Categories: LUG Community Blogs
Syndicate content