LUG Community Blogs

Richard WM Jones: data

Planet GLLUG - Mon, 25/02/2013 - 16:22

A problem encountered in part 2 was that I couldn’t measure the maximum number of parallel libguestfs appliances that can be run at the same time. There are two reasons for that. The simpler one is that libvirt has a limit of 20 connections, which is easily overcome by setting LIBGUESTFS_ATTACH_METHOD=appliance to eliminate libvirt and run qemu directly. The harder one is that by the time the last appliances in the test are starting to launch, earlier ones have already shut down and their threads have exited.

What is needed is for the test to work in two phases: In the first phase we start up all the threads and launch all the appliances. Only when this is complete do we enter the second phase where we shut down all the appliances.

The easiest way to do this is to modify the test to use a barrier (or, in fact, to implement a barrier using the condition primitives). See the modified test script below.
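
For readers unfamiliar with the idiom, here is the barrier in isolation: a minimal sketch using the same threads::shared condition primitives as the full script. The thread count and the placeholder phases are illustrative, not from the original post:

#!/usr/bin/perl -w
# Minimal two-phase barrier built from threads::shared primitives.
use strict;
use threads;
use threads::shared qw(cond_broadcast cond_wait lock);

# Number of threads that must arrive before any may proceed.
my $remaining :shared = 4;

sub worker {
    # ... phase 1: per-thread setup work goes here ...
    {
        lock ($remaining);
        $remaining--;
        # Wake any threads already waiting, then wait until the counter
        # reaches zero; the 'until' loop guards against spurious wakeups.
        cond_broadcast ($remaining);
        cond_wait ($remaining) until $remaining == 0;
    }
    # ... phase 2: runs only after all threads have arrived ...
}

my @threads = map { threads->create (\&worker) } 1..4;
$_->join () foreach @threads;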

With the modified test script I was able to run ≥ 110 and < 120 parallel appliances in ~ 13 GB of free RAM, or around 120 MB / appliance, still with excellent performance and nearly linear scalability.

#!/usr/bin/perl -w

use strict;
use threads qw(yield);
use threads::shared qw(cond_broadcast cond_wait lock);
use Sys::Guestfs;
use Time::HiRes qw(time);

my $nr_threads_launching :shared;

sub test {
    my $g = Sys::Guestfs->new;
    $g->add_drive_ro ("/dev/null");
    $g->launch ();
    return $g;
}

# Get everything into cache.
test (); test (); test ();

sub thread {
    my $g = test ();
    {
        lock ($nr_threads_launching);
        $nr_threads_launching--;
        cond_broadcast ($nr_threads_launching);
        cond_wait ($nr_threads_launching) until $nr_threads_launching == 0;
    }
    $g->close ();
}

# Test increasing numbers of threads until it fails.
for (my $nr_threads = 10; $nr_threads < 1000; $nr_threads += 10) {
    my $start_t = time ();
    $nr_threads_launching = $nr_threads;
    my @threads;
    foreach (1..$nr_threads) {
        push @threads, threads->create (\&thread);
    }
    foreach (@threads) {
        $_->join ();
        if (my $err = $_->error ()) {
            die "launch failed with $nr_threads threads: $err";
        }
    }
    my $end_t = time ();
    printf ("%d %.2f\n", $nr_threads, $end_t - $start_t);
}
Categories: LUG Community Blogs

Richard WM Jones: data

Planet GLLUG - Mon, 25/02/2013 - 15:28

One problem with the previous test is that I hit a limit of 20 parallel appliances and mistakenly thought that I’d hit a memory limit. In fact libvirt out of the box limits the number of client connections to 20. You can adjust libvirt’s limit by editing /etc/libvirt/libvirtd.conf, but easier for us is to simply eliminate libvirt from the equation by doing:

export LIBGUESTFS_ATTACH_METHOD=appliance

which causes libguestfs to run qemu directly. In my first test I reached 48 parallel launches before I killed the program (because that's a lot of parallelism and there seemed no end in sight). Scalability of the libguestfs / qemu combination was excellent again.
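
If you would rather keep libvirt in the picture and raise its limit instead, the setting in /etc/libvirt/libvirtd.conf is, to the best of my knowledge, max_clients (libvirtd needs a restart after editing):

# /etc/libvirt/libvirtd.conf
# Raise the out-of-the-box limit of 20 concurrent client connections.
max_clients = 100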

But there’s more! (In the next part …)


Categories: LUG Community Blogs

Richard WM Jones: data

Planet GLLUG - Mon, 25/02/2013 - 13:54

I wrote the Perl script below to find out how many libguestfs appliances we can start in parallel. The results are surprising(-ly good).

What’s happening here is that we’re booting up a KVM guest with 500 MB of memory, booting the Linux kernel, booting a minimal userspace, then shutting the whole lot down. And then doing that in parallel with 1, 2, .. 20 threads.

[Note: Hardware is my Lenovo x230 laptop with an Intel Core(TM) i7-3520M CPU @ 2.90GHz, 2 cores with 4 threads, 16 GB of RAM with approx. 13 GB free. Software is: Fedora 18 with libguestfs 1.20.2, libvirt 1.0.2 (from Rawhide), qemu 1.4.0 (from Rawhide)]

The test fails at 21 threads, apparently because there isn’t enough free memory, implying each qemu instance allocates around 660 MB of RAM. This conclusion is wrong: the test actually failed because libvirt out of the box limits the maximum number of client connections to 20. See the next part in this series.

Up to 4 parallel launches, you can clearly see the effect of better utilization of the parallelism of the CPU — the total elapsed time hardly moves, even though we’re doing up to 4 times more work.

#!/usr/bin/perl -w

use strict;
use threads;
use Sys::Guestfs;
use Time::HiRes qw(time);

sub test {
    my $g = Sys::Guestfs->new;
    $g->add_drive_ro ("/dev/null");
    $g->launch ();
}

# Get everything into cache.
test (); test (); test ();

# Test increasing numbers of threads until it fails.
for my $nr_threads (1..100) {
    my $start_t = time ();
    my @threads;
    foreach (1..$nr_threads) {
        push @threads, threads->create (\&test);
    }
    foreach (@threads) {
        $_->join ();
        if (my $err = $_->error ()) {
            die "launch failed with nr_threads = $nr_threads: $err";
        }
    }
    my $end_t = time ();
    printf ("%d %.2f\n", $nr_threads, $end_t - $start_t);
}
Categories: LUG Community Blogs

Richard WM Jones: rich

Planet GLLUG - Fri, 22/02/2013 - 13:04

This just popped up in my Google alerts: https://github.com/M2IHP13-admin/JonesForth-arm


Categories: LUG Community Blogs

Rev. Simon Rumble: Alternatives to Posterous?

Planet GLLUG - Wed, 20/02/2013 - 00:50

My blog has been on Posterous for some years now and it's been awesome. The best thing about it is that you just email a bunch of stuff, with whatever attachments in whatever format are relevant, and they just work.

Sadly they're shutting down following their talent acquisition by Twitter. That's a real shame. Now I have to find an alternative.

Requirements:
  • Hosted. I'm not going to maintain a server just for blogging, thanks.
  • Allows custom JS. I'm always testing out new analytics tools on my own sites.
  • Post through email.
Squarespace is lovely but pretty expensive for what I need, unless I can consolidate all three sites into one platform while keeping the domains (waiting on their ticket response).

I had high hopes for Markdown-based blog tools like Jekyll, but I find them a bit clunky. Posterous has got me used to a really easy blogging workflow that works well for me.

Any suggestions? I'm happy to pay.


Categories: LUG Community Blogs

Richard WM Jones: 20130218_112227

Planet GLLUG - Mon, 18/02/2013 - 12:30

It works too …

sd 4:0:0:0: [sdc] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 4:0:0:0: [sdc] 4096-byte physical blocks
sd 4:0:0:0: [sdc] Write Protect is off
sd 4:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 4:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sd 5:0:0:0: [sdd] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 5:0:0:0: [sdd] 4096-byte physical blocks
sd 5:0:0:0: [sdd] Write Protect is off
sd 5:0:0:0: [sdd] Mode Sense: 00 3a 00 00
sd 5:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Categories: LUG Community Blogs

Dean Wilson: FOSDEM 2013

Planet GLLUG - Sat, 16/02/2013 - 14:39
Well, that's another FOSDEM over with. In general this year seemed the same as the last couple of years but slightly bigger than usual (although it seems that way every year). The (newish) K building was in full swing with dozens of project stalls and dev rooms. The usual suspects - the virtualisation / cloud, configuration management and MySQL rooms - had nearly as many people trying to get into the rooms as they did sitting down.

I think some of the main dev rooms have reached the level of popularity that forces you to either arrive early, get a seat and not move for the rest of the day, or accept a very high probability that you won't get to see the talks you want. I know a few of us had trouble cherry-picking sessions across tracks - which obviously means we have excellent taste in topics. I wonder if having the same talks on both days would make it easier to move around as a visitor - you'd attempt to catch a talk the first time and, if that fails, come back tomorrow. I realise however that this puts even more of a burden on speakers, who graciously give their own time in both the preparation and the delivery of their talks. It does seem that scaling the rooms is the problem of the day once again.

I'd like to say a big thank you to all the organisers, speakers and other attendees for making it another enjoyable couple of days. See you next year.

Categories: LUG Community Blogs

Richard WM Jones: rich

Planet GLLUG - Wed, 13/02/2013 - 19:39

To clarify the question: what is the memory overhead of qemu? Or, put another way, how many guests can you cram onto a single host, memory being the typical limiting factor when you virtualize?

This was the question someone asked at work today. I don’t know the answer either, but the small program I wrote (below) aims to find out. If you believe the numbers below from qemu 1.2.2 running on Fedora 18, then the overhead is around 150 MB per qemu process that cannot be shared, plus around 200 MB per host (that is, shared between all qemu processes).

guest size 256 MB:
Shared memory backed by a file: 201.41 MB
Anonymous memory (eg. malloc, COW, stack), not shared: 404.20 MB
Shared writable memory: 0.03 MB

guest size 512 MB:
Shared memory backed by a file: 201.41 MB
Anonymous memory (eg. malloc, COW, stack), not shared: 643.76 MB
Shared writable memory: 0.03 MB

guest size 1024 MB:
Shared memory backed by a file: 201.41 MB
Anonymous memory (eg. malloc, COW, stack), not shared: 1172.38 MB
Shared writable memory: 0.03 MB

guest size 2048 MB:
Shared memory backed by a file: 201.41 MB
Anonymous memory (eg. malloc, COW, stack), not shared: 2237.16 MB
Shared writable memory: 0.03 MB

guest size 4096 MB:
Shared memory backed by a file: 201.41 MB
Anonymous memory (eg. malloc, COW, stack), not shared: 4245.13 MB
Shared writable memory: 0.03 MB

The number to pay attention to is “Anonymous memory” since that is what cannot be shared between guests (except if you have KSM and your guests are such that KSM can be effective).
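
As an aside, you can get a rough idea of whether KSM is active and merging anything on a host from the kernel's ksm counters in sysfs; this quick check is mine, not part of the original test:

cat /sys/kernel/mm/ksm/run            # 1 if KSM is running
cat /sys/kernel/mm/ksm/pages_sharing  # pages currently being shared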

There are some known shortcomings with my testing methodology that I summarise below. You may be able to see others.

  1. We’re testing a libguestfs appliance. A libguestfs appliance does not have the full range of normal qemu devices that a real guest would have, and so the overhead of a real guest is likely to be higher. The main difference is probably lack of a video device (so no video RAM is evident).
  2. This uses virtio-scsi. Real guests use IDE, virtio-blk, etc., which may have quite different characteristics.
  3. This guest has one user network device (ie. SLIRP) which could be quite different from a real network device.
  4. During the test, the guest only runs for a few seconds. A normal, long-running guest would experience qemu memory growth or even memory leaks. You could fix this relatively easily by adding some libguestfs busy-work after the launch.
  5. The guest does not do any significant writes, so during the test qemu won’t be storing any cached or in-flight data blocks.
  6. It only accounts for memory used by qemu in userspace, not memory used by the host kernel on behalf of qemu.
  7. The effectiveness or otherwise of KSM is not tested. It’s likely that KSM depends heavily on your workload, so it wouldn’t be fair to publish any KSM figures.
  8. The script uses /proc/PID/maps but it would be better to use smaps so that we can see how much of the file-backed copy-on-write segments have actually been copied. Currently the script overestimates these by assuming that (eg) all the data pages from a library would be dirtied by qemu. (A sketch of this refinement appears after this list.)
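
Here is a minimal sketch of the smaps refinement from item 8, summing the kernel's Private_Dirty accounting across all mappings so that only pages the process has actually written are counted. The standalone script and its PID argument are my illustration, not code from the original post:

#!/usr/bin/perl -w
# Sum Private_Dirty across all mappings of a process, so that
# file-backed copy-on-write segments are only counted once their
# pages have actually been dirtied.
# Usage (hypothetical): ./private-dirty.pl PID-of-qemu
use strict;

my $pid = shift or die "usage: $0 PID\n";
open my $smaps, "<", "/proc/$pid/smaps"
    or die "cannot open smaps of pid $pid: $!";
my $private_dirty_kb = 0;
while (<$smaps>) {
    # Accounting lines look like: "Private_Dirty:     123 kB"
    $private_dirty_kb += $1 if /^Private_Dirty:\s+(\d+)\s+kB/;
}
close $smaps;
printf "Private dirty memory: %.2f MB\n", $private_dirty_kb / 1024.0;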

Another interesting question would be whether qemu is getting better or worse over time.

#!/usr/bin/perl -w
# Estimate memory usage of qemu-kvm at different guest RAM sizes.
# By Richard W.M. Jones <rjones@redhat.com>

use strict;
use Sys::Guestfs;

no warnings "portable"; # 64 bit platform required.

# Loop over different guest RAM sizes.
my $mbytes;
for $mbytes (256, 512, 1024, 2048, 4096) {
    print "guest size ", $mbytes, " MB:\n";

    my $g = Sys::Guestfs->new;

    # Ensure we're using the direct qemu launch backend, otherwise
    # libvirt stops us from finding the qemu PID.
    $g->set_attach_method ("appliance");

    # Set guest memory size.
    $g->set_memsize ($mbytes);

    # Enable user networking just to be more like a "real" guest.
    $g->set_network (1);

    # Launch guest with one dummy disk.
    $g->add_drive ("/dev/null");
    $g->launch ();

    # Get process ID of qemu.
    my $pid = $g->get_pid ();
    die unless $pid > 0;

    # Read the memory maps of the guest.
    open MAPS, "/proc/$pid/maps" or die "cannot open memory map of pid $pid";
    my @maps = <MAPS>;
    close MAPS;

    # Kill qemu.
    $g->close ();

    # Parse the memory maps.
    my $shared_file_backed = 0;
    my $anonymous = 0;
    my $shared_writable = 0;

    my $map;
    foreach $map (@maps) {
        chomp $map;

        if ($map =~ m/
                 ^([0-9a-f]+)-([0-9a-f]+) \s
                 (....) \s
                 [0-9a-f]+ \s
                 ..:.. \s
                 (\d+) \s+
                 (\S+)?
             /x) {
            my ($start, $end) = (hex $1, hex $2);
            my $size = $end - $start;
            my $mode = $3;
            my $inode = $4;
            my $filename = $5; # could also be "[heap]", "[vdso]", etc.

            # Shared file-backed text: r-xp, r--p, etc. with a file backing.
            if ($inode != 0 &&
                ($mode eq "r-xp" || $mode eq "r--p" || $mode eq "---p")) {
                $shared_file_backed += $size;
            }

            # Anonymous memory: rw-p.
            elsif ($mode eq "rw-p") {
                $anonymous += $size;
            }

            # Writable and shared. Not sure what this is ...
            elsif ($mode eq "rw-s") {
                $shared_writable += $size;
            }

            # Ignore [vdso], [vsyscall].
            elsif (defined $filename &&
                   ($filename eq "[vdso]" || $filename eq "[vsyscall]")) {
            }

            # Ignore ---p with no file. What's this?
            elsif ($inode == 0 && $mode eq "---p") {
            }

            # Ignore kvm-vcpu.
            elsif ($filename eq "anon_inode:kvm-vcpu") {
            }

            else {
                warn "warning: could not parse '$map'\n";
            }
        }
        else {
            die "incorrect maps format: '$map'";
        }
    }

    printf ("Shared memory backed by a file: %.2f MB\n",
            $shared_file_backed / 1024.0 / 1024.0);
    printf ("Anonymous memory (eg. malloc, COW, stack), not shared: %.2f MB\n",
            $anonymous / 1024.0 / 1024.0);
    printf ("Shared writable memory: %.2f MB\n",
            $shared_writable / 1024.0 / 1024.0);
    print "\n";
}
Categories: LUG Community Blogs

Richard WM Jones: rich

Planet GLLUG - Mon, 11/02/2013 - 15:23

I’m told that Richard Harman (twitter) will mention libguestfs in his talk about malware analysis at ShmooCon next Saturday (16th).

The conference is in Washington DC at the Hyatt Regency, but talks should be available online afterwards (also good because it’s sold out!)


Categories: LUG Community Blogs

Dean Wilson: Puppet Camp - Ghent 2013

Planet GLLUG - Mon, 11/02/2013 - 14:11
It's been a while since I've attended a Puppet Camp but considering the quality of the last one (organised by Patrick Debois) and the fact it was being held in the lovely city of Ghent again I thought it'd be a wise investment to scrape together the time off.

The quality of the talks seemed quite high and considering the number of newer users present the content level was well pitched. A couple of deeper talks for the more experienced members would have been nice but we mostly made our own in the open sessions. Facter, writing MCollective plugins, off-line and bulk catalogue compilation and the murky corners of our production puppets all came under discussion - in some cases quite fruitfully.

The wireless was a point of annoyance and amusement (depending on the person and the time of day). The network supported 20 users for an audience of ten times that - attitudes covered the gamut from "I only need to check my mail once a day" to "I have my own tethering" and all the way to "this is my brute-force script I run in a loop". You can tell when most of us lost our access from the Twitter hashtag.

I was a little surprised at the number of Puppet Camps there will be this year - 27 was the number mentioned. I think a lot of the more experienced members of the community value the camps and confs as a chance to catch up with each other and the PuppetLabs people and I'd hate to see us sticking to our own local camps and losing the cross pollination of ideas, plans and pains.

You can also view the Puppet Camp slides for a number of the sessions.

Categories: LUG Community Blogs