LUG Community Blogs

Richard WM Jones: rich

Planet GLLUG - Wed, 13/03/2013 - 00:01

Thanks to infernix who contributed this tip on how to use libguestfs to access Ceph (and in theory, sheepdog, gluster, iscsi and more) devices.

If you apply this small patch to libguestfs you can use these distributed filesystems straight away by doing:

$ guestfish
><fs> set-attach-method appliance
><fs> add-drive /dev/null
><fs> config -set drive.hd0.file=rbd:pool/volume
><fs> run

… followed by usual guestfish commands.

This is a temporary hack, until we properly model Ceph (etc) through the libguestfs stable API. Nevertheless it works as follows:

  1. The add-drive /dev/null adds a drive, known to libguestfs.
  2. Implicitly this means that libguestfs adds a -drive option when it runs qemu.
  3. The custom qemu -set drive.hd0.file=... parameter modifies the preceding -drive option added by libguestfs so that the file is changed from /dev/null to whatever you want. In this case, to a Ceph rbd:... path. (A Perl sketch of the same sequence follows below.)
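
For anyone scripting this rather than typing it into guestfish, a minimal sketch of the same sequence through the Perl bindings might look like the following (assuming the patch above is applied; "pool/volume" is just a hypothetical RBD path):

#!/usr/bin/perl -w
# Sketch only: mirrors the guestfish session above using Sys::Guestfs.
use strict;
use Sys::Guestfs;

my $g = Sys::Guestfs->new;
$g->set_attach_method ("appliance");   # run qemu directly, not via libvirt
$g->add_drive ("/dev/null");           # placeholder drive, becomes hd0
# Override the file of the -drive option that libguestfs just added.
$g->config ("-set", "drive.hd0.file=rbd:pool/volume");
$g->launch ();
# ... usual Sys::Guestfs calls (inspect_os, mount_ro, etc.) ...
$g->close ();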

Categories: LUG Community Blogs

Karanbir Singh: nazar.karan.org in maint mode

Planet GLLUG - Tue, 12/03/2013 - 02:15

Hi,

https://nazar.karan.org/ services are going to be partially down as I migrate them over to a faster machine with more memory, many more cores and lower power consumption. Everything should be back to production by midday Mar 12th, 2013. Services impacted include:

  • git repos
  • Reimzul's irc interface
  • Alt.Bsys triggers

There is a backup instance running, so if anyone needs to get to some specific data in a rush, ping me on irc and we can get access setup.

- KB

Categories: LUG Community Blogs

davblog - Dave Cross: Doctor Who News

Planet GLLUG - Sun, 10/03/2013 - 17:22

I’m getting bored of the number of media outlets who are taking the slightest of comments that someone makes about the upcoming Doctor Who anniversary special and spinning it into a story packed full of completely unsubstantiated nonsense. Headlines like “No Doctors To Return For 50th Special” which, when you read them, turn out to be based on the fact that Colin Baker hasn’t had a phone call from Steven Moffat.

Obviously it’s good for the show that it gets all of this publicity and I don’t, for one second, expect the production team to do anything to put a stop to it. They’ll tell us what they want us to know when they want us to know it. Not a moment sooner.

But in the meantime, anyone who has ever appeared in Doctor Who has to watch what they say for fear of it being overheard by a tabloid journalist and used to reinforce whatever story the journalist wants to write.

In an attempt to counter this, I’ve set up whonews.tv. The plan is that I’ll read these stories, extract the actual facts that they are based on and explain what we can actually believe based on those facts. Forensic analysis of entertainment news, I suppose.

I’ve also got a page where I list the best current information we have about what is actually happening for the show’s 50th anniversary. I’ll try to keep that up to date as more details emerge over the coming months.

Oh, and there’s a Twitter account too – WhoNews50. You might want to follow that.

Let me know if you find it useful.

Categories: LUG Community Blogs

Richard WM Jones: rich

Planet GLLUG - Tue, 05/03/2013 - 13:55

Matthew Astley just emailed me to say he has forked TechTalk and added support for markdown (the wiki-ish markup language).


Categories: LUG Community Blogs

Martin A. Brooks: An open letter to National Car Parks

Planet GLLUG - Mon, 04/03/2013 - 18:31

Dear National Car Parks,

As it seems the purpose of your call centre is to goad people into a state of rage and you insist that everything has to be put in writing, I shall put my complaint to you in writing via my blog, before sending it to you in writing.

I am a season ticket holder for the station car park at Harold Wood.  My car is parked there on a daily basis.  On return to my vehicle on February 28th, I was surprised to find I had been ticketed.  Here’s the ticket:

Note the contravention: “Parked on double yellow lines”.  This was a surprise to me because there were and are no double yellow lines present.  For the non-drivers reading this, and confused NCP parking attendants, double yellow lines tend to look like this:

This is quite clear, right?  Two parallel yellow lines at specific distances from the side of the road with a crossing bar at each end.  Now, here’s a picture of where I was parked:

See the bay to the left of the red car? That’s where my car was, wholly within the confines of the white border.  You might want to check out the full size version of the photo but, looking at the example double yellow lines above, can you see where the double yellow lines I was ticketed for parking on are? I’ll sit here quietly while you check.

Done? Good.  You didn’t spot the double yellow lines, did you? Correct, that’s because there aren’t any to spot. I can read your mind.  “Ahhh, Martin, you silly boy.  You’ve parked on a restricted area, there are yellow hatchings. Tsk.”

That’s almost true: there are indeed yellow hatchings, but that’s irrelevant.  Why so?  Just this:  When NCP took over the car park many years ago, they repainted all of the bays with fresh white markings.  Here’s a closeup of part of that bay:

Note that the white line is painted over the yellow one.  Had NCP not intended for people to use this as a parking bay, why on earth would they specifically paint bay markings there?  The answer is, they wouldn’t.  Those markings used to denote an area where you weren’t supposed to park because it’s where a fast food place used to have its bins.  Rather than scraping the old lines up, they just painted new ones over the top.

Add to this the fact that I have parked in that exact same spot dozens of times over the past couple of years, and that this is the first time my car has been ticketed, and I think what we’re dealing with here is an overzealous or brainless parking attendant. I currently can’t even talk to you about this over the phone because, even after 4 days, the ticket hasn’t appeared on your system. I now have to waste some of my life sorting this out.

No love whatsoever, merely tired rage,

Martin A. Brooks

Categories: LUG Community Blogs

davblog - Dave Cross: Money From HMRC

Planet GLLUG - Sat, 02/03/2013 - 11:57

I got a letter from HMRC this morning – to my company, not to me personally. It basically said “we’ve been looking at the PAYE you paid in 2010/11 and it looks like you’ve overpaid by [a surprisingly large number of pounds]”.

Now 2010/11 was the year that I was having some difficulties with my accountants. The difficulties eventually got so bad that I switched to my new accountants (who I’m still very happy with). So it doesn’t really surprise me that something went wrong that year, although the amount (it’s about 25% of the PAYE/NI I paid that year) is impressive.

What really surprises me is the tone of this letter. Having told me that I’ve overpaid (and, helpfully, pointed out the exact extra payment that I made) the letter goes on to say:

Before I can agree to either a refund or a credit, please let me the reason the overpayment has arisen

And:

Please complete the enclosed P35D giving a full explanation as to how the overpayment occurred.

Below are some example of reasons I cannot accept to justify an overpayment

  • Duplicate payments with no evidence to explain why
  • Statements such as ‘Payment(s) made in error’ with no further explanation
  • The overpayment is due to monthly payments which do not match our records
  • Any other explanation without evidence to support it

I’m finding it hard to read that in any way other than “we know we’ve got some of your money but you can’t have it back until you’ve explained in detail just how crap your record-keeping is”.

You’ve got my money. You know it’s my money. Either my accountants or I screwed up in some way. There’s no more detailed explanation than that. Just give it back to me, you bastards.

Categories: LUG Community Blogs

Richard WM Jones: rich

Planet GLLUG - Tue, 26/02/2013 - 20:31

New in libguestfs ≥ 1.21.15, the virt-df and virt-alignment-scan tools now use parallel appliances when scanning your libvirt guests.

The amount of parallelism is selected heuristically when the tool starts up — by dividing the amount of free memory in MB by 500. You can also override this choice by using the new -P option to both tools, but the default should be fine for everyone. -P 1 disables multiple threads.
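
As a rough illustration of that heuristic (this is only a Perl sketch, not the tools' actual C code, and the real tools may measure free memory slightly differently), the default could be computed along these lines:

# Sketch: default parallelism = free memory in MB divided by 500, minimum 1.
sub default_parallelism {
    open my $fh, "<", "/proc/meminfo" or return 1;
    my $free_kb = 0;
    while (<$fh>) {
        $free_kb = $1 if /^MemFree:\s+(\d+)\s+kB/;
    }
    close $fh;
    my $p = int (($free_kb / 1024) / 500);
    return $p >= 1 ? $p : 1;
}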

Users won’t see much difference, although I found that both tools are noticeably faster.

The implementation of threads in these tools is a little bit interesting. Of course there is a pool of worker threads. These take the libvirt guests from a list sorted in alphabetical order and process them.

However, each guest takes a variable amount of time to process, and the trick is that the output from each thread mustn’t overlap or be in non-alphabetical order.

The worker threads do two things to ensure this: firstly, output from each guest scan is saved up in an open_memstream buffer; secondly, domains are retired in order using a pthread condition variable — each worker waits until the previous domain has been retired before retiring (ie. printing) its own result.
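
The tools themselves are C, but a rough Perl sketch of that retire-in-order pattern, using the threads::shared condition primitives, looks like this. (In the real tools a fixed pool of workers pulls domains off the sorted list; here, for brevity, there is one thread per domain, and scan_domain is a hypothetical stand-in for the real per-guest work.)

use strict;
use threads;
use threads::shared qw(cond_broadcast cond_wait lock);

my $next_to_retire :shared;   # index of the next domain allowed to print
$next_to_retire = 0;

# Hypothetical stand-in: in the real tools this buffers the df/alignment
# output for one guest (via open_memstream) instead of building a string.
sub scan_domain {
    my $domain = shift;
    return "scanned $domain\n";
}

# $index is the domain's position in the alphabetically sorted list.
sub worker {
    my ($index, $domain) = @_;
    my $output = scan_domain ($domain);          # do the slow work first
    lock ($next_to_retire);
    cond_wait ($next_to_retire) until $next_to_retire == $index;
    print $output;                               # retire this domain, in order
    $next_to_retire++;
    cond_broadcast ($next_to_retire);
}

# Example usage: output always comes out in list order, however long
# each individual scan takes.
my @domains = sort qw(guest-c guest-a guest-b);
my @workers;
foreach my $i (0 .. $#domains) {
    push @workers, threads->create (\&worker, $i, $domains[$i]);
}
$_->join () foreach @workers;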

The outcome is that there should be no difference between what the old tools and the rewritten tools print out.


Categories: LUG Community Blogs

Rev. Simon Rumble: Current status

Planet GLLUG - Tue, 26/02/2013 - 04:56

Chilling in Jervis Bay. Weather has turned out much better than the forecasts. Lovely.

See the full gallery on Posterous


Categories: LUG Community Blogs

Richard WM Jones: data

Planet GLLUG - Mon, 25/02/2013 - 16:46

[Part 1, part 2, part 3.]

Finally I modified the test to do some representative work: We now load a real Windows XP guest, inspect it (a heavyweight operation), and mount and stat each filesystem. I won’t reproduce the entire test program again because only the test subroutine has changed:

sub test {
    my $g = Sys::Guestfs->new;
    $g->add_drive_ro ("/tmp/winxp.img");
    $g->launch ();

    # Inspect the guest (ignore the result).
    $g->inspect_os ();

    # Approximate what virt-df does.
    my %fses = $g->list_filesystems ();
    foreach (keys %fses) {
        my $mounted = 0;
        eval { $g->mount_ro ($_, "/"); $mounted = 1; };
        if ($mounted) {
            $g->statvfs ("/");
            $g->umount_all ();
        }
    }

    return $g;
}

Even with all that work going on, I was able to inspect more than 1 disk per second on my laptop, and run 60 threads in parallel with good performance and scalability:


Categories: LUG Community Blogs

Richard WM Jones: data

Planet GLLUG - Mon, 25/02/2013 - 16:22

A problem encountered in part 2 was that I couldn’t measure the maximum number of parallel libguestfs appliances that can be run at the same time. There are two reasons for that. The simpler one is that libvirt has a limit of 20 connections, which is easily overcome by setting LIBGUESTFS_ATTACH_METHOD=appliance to eliminate libvirt and run qemu directly. The harder one is that by the time the last appliances in the test are starting to launch, earlier ones have already shut down and their threads have exited.

What is needed is for the test to work in two phases: In the first phase we start up all the threads and launch all the appliances. Only when this is complete do we enter the second phase where we shut down all the appliances.

The easiest way to do this is by modifying the test to use a barrier (or in fact to implement a barrier using the condition primitives). See the modified test script below.

With the modified test script I was able to run ≥ 110 and < 120 parallel appliances in ~ 13 GB of free RAM, or around 120 MB / appliance, still with excellent performance and nearly linear scalability:

#!/usr/bin/perl -w

use strict;
use threads qw(yield);
use threads::shared qw(cond_broadcast cond_wait lock);
use Sys::Guestfs;
use Time::HiRes qw(time);

my $nr_threads_launching :shared;

sub test {
    my $g = Sys::Guestfs->new;
    $g->add_drive_ro ("/dev/null");
    $g->launch ();
    return $g;
}

# Get everything into cache.
test (); test (); test ();

sub thread {
    my $g = test ();
    {
        lock ($nr_threads_launching);
        $nr_threads_launching--;
        cond_broadcast ($nr_threads_launching);
        cond_wait ($nr_threads_launching) until $nr_threads_launching == 0;
    }
    $g->close ();
}

# Test increasing numbers of threads until it fails.
for (my $nr_threads = 10; $nr_threads < 1000; $nr_threads += 10) {
    my $start_t = time ();
    $nr_threads_launching = $nr_threads;
    my @threads;
    foreach (1..$nr_threads) {
        push @threads, threads->create (\&thread);
    }
    foreach (@threads) {
        $_->join ();
        if (my $err = $_->error ()) {
            die "launch failed with $nr_threads threads: $err";
        }
    }
    my $end_t = time ();
    printf ("%d %.2f\n", $nr_threads, $end_t - $start_t);
}
Categories: LUG Community Blogs

Richard WM Jones: data

Planet GLLUG - Mon, 25/02/2013 - 15:28

One problem with the previous test is that I hit a limit of 20 parallel appliances and mistakenly thought that I’d hit a memory limit. In fact libvirt out of the box limits the number of client connections to 20. You can adjust libvirt’s limit by editing /etc/libvirt/libvirtd.conf, but easier for us is to simply eliminate libvirt from the equation by doing:

export LIBGUESTFS_ATTACH_METHOD=appliance

which causes libguestfs to run qemu directly. In my first test I reached 48 parallel launches before I killed the program (because that’s a lot of parallelism and there seemed no end in sight). Scalability of the libguestfs / qemu combination was excellent again:

But there’s more! (In the next part …)


Categories: LUG Community Blogs

Richard WM Jones: data

Planet GLLUG - Mon, 25/02/2013 - 13:54

I wrote the Perl script below to find out how many libguestfs appliances we can start in parallel. The results are surprising (-ly good):

What’s happening here is that we’re booting up a KVM guest with 500 MB of memory, booting the Linux kernel, booting a minimal userspace, then shutting the whole lot down. And then doing that in parallel with 1, 2, .. 20 threads.

[Note: Hardware is my Lenovo x230 laptop with an Intel Core(TM) i7-3520M CPU @ 2.90GHz, 2 cores with 4 threads, 16 GB of RAM with approx. 13 GB free. Software is: Fedora 18 with libguestfs 1.20.2, libvirt 1.0.2 (from Rawhide), qemu 1.4.0 (from Rawhide)]

The test fails at 21 threads because there isn’t enough free memory, so each qemu instance is allocating around 660 MB of RAM. This is wrong: It failed because libvirt out of the box limits the maximum number of clients to 20. See next part in this series.

Up to 4 parallel launches, you can clearly see the effect of better utilization of the parallelism of the CPU — the total elapsed time hardly moves, even though we’re doing up to 4 times more work.

#!/usr/bin/perl -w

use strict;
use threads;
use Sys::Guestfs;
use Time::HiRes qw(time);

sub test {
    my $g = Sys::Guestfs->new;
    $g->add_drive_ro ("/dev/null");
    $g->launch ();
}

# Get everything into cache.
test (); test (); test ();

# Test increasing numbers of threads until it fails.
for my $nr_threads (1..100) {
    my $start_t = time ();
    my @threads;
    foreach (1..$nr_threads) {
        push @threads, threads->create (\&test);
    }
    foreach (@threads) {
        $_->join ();
        if (my $err = $_->error ()) {
            die "launch failed with nr_threads = $nr_threads: $err";
        }
    }
    my $end_t = time ();
    printf ("%d %.2f\n", $nr_threads, $end_t - $start_t);
}
Categories: LUG Community Blogs

Richard WM Jones: rich

Planet GLLUG - Fri, 22/02/2013 - 13:04

This just popped up in my Google alerts: https://github.com/M2IHP13-admin/JonesForth-arm


Categories: LUG Community Blogs

Rev. Simon Rumble: Alternatives to Posterous?

Planet GLLUG - Wed, 20/02/2013 - 00:50

My blog has been on Posterous for some years now and it's been awesome. The best thing about it is that you just email a bunch of stuff, with whatever attachments in whatever format are relevant, and they just work.

Sadly they're shutting down following their talent acquisition by Twitter. That's a real shame. Now I have to find an alternative.

Requirements:
  • Hosted. I'm not going to maintain a server just for blogging thanks.
  • Allows custom JS. I'm always testing out new analytics tools on my own sites.
  • Post through email
Squarespace is lovely but pretty expensive for what I need, unless I can consolidate all three sites into one platform while keeping the domains (waiting on their ticket response).

I had high hopes for Markdown-based blog tools like Jekyll, but I find them a bit clunky. Posterous has got me used to a really easy blogging workflow that works well for me.

Any suggestions? I'm happy to pay.


Categories: LUG Community Blogs

Richard WM Jones: 20130218_112227

Planet GLLUG - Mon, 18/02/2013 - 12:30

It works too …

sd 4:0:0:0: [sdc] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 4:0:0:0: [sdc] 4096-byte physical blocks
sd 4:0:0:0: [sdc] Write Protect is off
sd 4:0:0:0: [sdc] Mode Sense: 00 3a 00 00
sd 4:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sd 5:0:0:0: [sdd] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
sd 5:0:0:0: [sdd] 4096-byte physical blocks
sd 5:0:0:0: [sdd] Write Protect is off
sd 5:0:0:0: [sdd] Mode Sense: 00 3a 00 00
sd 5:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Categories: LUG Community Blogs

Dean Wilson: FOSDEM 2013

Planet GLLUG - Sat, 16/02/2013 - 14:39
Well, that's another FOSDEM over with. In general this year seemed the same as the last couple of years but slightly bigger than usual (although it seems that way every year). The (newish) K building was in full swing with dozens of project stalls and dev rooms. The usual suspects - the virtualisation/cloud, configuration management and MySQL rooms - had nearly as many people trying to get into them as they had sitting down.

I think some of the main dev rooms have reached the level of popularity that forces you to either arrive early, get a seat and not move for the rest of the day or accept a very high level of probability that you won't get to see the talks you want. I know a few of us had trouble cherry picking sessions across tracks - which obviously means we have excellent taste in topics. I wonder if having the same talks on both days would make it easier to move around as a visitor - you'd attempt to catch it the first time and if that fails, come back tomorrow. I realise however that this puts even more of a burden on speakers that graciously give their own time in both the preparation and performing of their talks. It does seem that scaling the rooms is the problem of the day once again.

I'd like to say a big thank you to all the organisers, speakers and other attendees for making it another enjoyable couple of days. See you next year.

Categories: LUG Community Blogs

Richard WM Jones: rich

Planet GLLUG - Wed, 13/02/2013 - 19:39

To clarify: what is the memory overhead of qemu, or in other words, how many guests can you cram onto a single host, memory being the typical limiting factor when you virtualize?

This was the question someone asked at work today. I don’t know the answer either, but the small program I wrote (below) aims to find out. If you believe the numbers below from qemu 1.2.2 running on Fedora 18, then the overhead is around 150 MB per qemu process that cannot be shared, plus around 200 MB per host (that is, shared between all qemu processes). The per-process figure comes from subtracting the guest RAM size from the anonymous memory numbers below (eg. 404.20 - 256 ≈ 148 MB for the 256 MB guest); the per-host figure is the ~201 MB of shared, file-backed memory.

guest size 256 MB:
Shared memory backed by a file: 201.41 MB
Anonymous memory (eg. malloc, COW, stack), not shared: 404.20 MB
Shared writable memory: 0.03 MB

guest size 512 MB:
Shared memory backed by a file: 201.41 MB
Anonymous memory (eg. malloc, COW, stack), not shared: 643.76 MB
Shared writable memory: 0.03 MB

guest size 1024 MB:
Shared memory backed by a file: 201.41 MB
Anonymous memory (eg. malloc, COW, stack), not shared: 1172.38 MB
Shared writable memory: 0.03 MB

guest size 2048 MB:
Shared memory backed by a file: 201.41 MB
Anonymous memory (eg. malloc, COW, stack), not shared: 2237.16 MB
Shared writable memory: 0.03 MB

guest size 4096 MB:
Shared memory backed by a file: 201.41 MB
Anonymous memory (eg. malloc, COW, stack), not shared: 4245.13 MB
Shared writable memory: 0.03 MB

The number to pay attention to is “Anonymous memory” since that is what cannot be shared between guests (except if you have KSM and your guests are such that KSM can be effective).

There are some known shortcomings with my testing methodology that I summarise below. You may be able to see others.

  1. We’re testing a libguestfs appliance. A libguestfs appliance does not have the full range of normal qemu devices that a real guest would have, and so the overhead of a real guest is likely to be higher. The main difference is probably lack of a video device (so no video RAM is evident).
  2. This uses virtio-scsi. Real guests use IDE, virtio-blk, etc which may have quite different characteristics.
  3. This guest has one user network device (ie. SLIRP) which could be quite different from a real network device.
  4. During the test, the guest only runs for a few seconds. A normal, long-running guest would experience qemu memory growth or even memory leaks. You could fix this relatively easily by adding some libguestfs busy-work after the launch.
  5. The guest does not do any significant writes, so during the test qemu won’t be storing any cached or in-flight data blocks.
  6. It only accounts for memory used by qemu in userspace, not memory used by the host kernel on behalf of qemu.
  7. The effectiveness or otherwise of KSM is not tested. It’s likely that KSM depends heavily on your workload, so it wouldn’t be fair to publish any KSM figures.
  8. The script uses /proc/PID/maps but it would be better to use smaps so that we can see how much of the file-backed copy-on-write segments have actually been copied. Currently the script overestimates these by assuming that (eg) all the data pages from a library would be dirtied by qemu. (A small sketch of the smaps approach follows this list.)
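
A hypothetical Perl helper illustrating point 8 (not part of the script below): summing the Private_Dirty fields of /proc/PID/smaps counts only the pages the process has actually dirtied, rather than assuming every file-backed copy-on-write mapping has been copied.

# Sketch: total dirtied (private) memory of a process, in MB, from smaps.
sub private_dirty_mb {
    my $pid = shift;
    open my $fh, "<", "/proc/$pid/smaps"
        or die "cannot open /proc/$pid/smaps: $!";
    my $kb = 0;
    while (<$fh>) {
        $kb += $1 if /^Private_Dirty:\s+(\d+)\s+kB/;
    }
    close $fh;
    return $kb / 1024.0;
}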

Another interesting question would be whether qemu is getting better or worse over time.

#!/usr/bin/perl -w
# Estimate memory usage of qemu-kvm at different guest RAM sizes.
# By Richard W.M. Jones <rjones@redhat.com>

use strict;
use Sys::Guestfs;
no warnings "portable"; # 64 bit platform required.

# Loop over different guest RAM sizes.
my $mbytes;
for $mbytes (256, 512, 1024, 2048, 4096) {
    print "guest size ", $mbytes, " MB:\n";

    my $g = Sys::Guestfs->new;

    # Ensure we're using the direct qemu launch backend, otherwise
    # libvirt stops us from finding the qemu PID.
    $g->set_attach_method ("appliance");

    # Set guest memory size.
    $g->set_memsize ($mbytes);

    # Enable user networking just to be more like a "real" guest.
    $g->set_network (1);

    # Launch guest with one dummy disk.
    $g->add_drive ("/dev/null");
    $g->launch ();

    # Get process ID of qemu.
    my $pid = $g->get_pid ();
    die unless $pid > 0;

    # Read the memory maps of the guest.
    open MAPS, "/proc/$pid/maps" or die "cannot open memory map of pid $pid";
    my @maps = <MAPS>;
    close MAPS;

    # Kill qemu.
    $g->close ();

    # Parse the memory maps.
    my $shared_file_backed = 0;
    my $anonymous = 0;
    my $shared_writable = 0;

    my $map;
    foreach $map (@maps) {
        chomp $map;

        if ($map =~ m/
                      ^([0-9a-f]+)-([0-9a-f]+) \s
                      (....) \s
                      [0-9a-f]+ \s
                      ..:.. \s
                      (\d+) \s+
                      (\S+)?
                     /x) {
            my ($start, $end) = (hex $1, hex $2);
            my $size = $end - $start;
            my $mode = $3;
            my $inode = $4;
            my $filename = $5; # could also be "[heap]", "[vdso]", etc.

            # Shared file-backed text: r-xp, r--p, etc. with a file backing.
            if ($inode != 0 &&
                ($mode eq "r-xp" || $mode eq "r--p" || $mode eq "---p")) {
                $shared_file_backed += $size;
            }
            # Anonymous memory: rw-p.
            elsif ($mode eq "rw-p") {
                $anonymous += $size;
            }
            # Writable and shared.  Not sure what this is ...
            elsif ($mode eq "rw-s") {
                $shared_writable += $size;
            }
            # Ignore [vdso], [vsyscall].
            elsif (defined $filename &&
                   ($filename eq "[vdso]" || $filename eq "[vsyscall]")) {
            }
            # Ignore ---p with no file.  What's this?
            elsif ($inode == 0 && $mode eq "---p") {
            }
            # Ignore kvm-vcpu.
            elsif ($filename eq "anon_inode:kvm-vcpu") {
            }
            else {
                warn "warning: could not parse '$map'\n";
            }
        }
        else {
            die "incorrect maps format: '$map'";
        }
    }

    printf("Shared memory backed by a file: %.2f MB\n",
           $shared_file_backed / 1024.0 / 1024.0);
    printf("Anonymous memory (eg. malloc, COW, stack), not shared: %.2f MB\n",
           $anonymous / 1024.0 / 1024.0);
    printf("Shared writable memory: %.2f MB\n",
           $shared_writable / 1024.0 / 1024.0);
    print "\n";
}
Categories: LUG Community Blogs

Richard WM Jones: rich

Planet GLLUG - Mon, 11/02/2013 - 15:23

I’m told that Richard Harman (twitter) will mention libguestfs in his talk about malware analysis at ShmooCon next Saturday (16th).

The conference is in Washington DC at the Hyatt Regency, but talks should be available online afterwards (also good because it’s sold out!)


Categories: LUG Community Blogs

Dean Wilson: Puppet Camp - Ghent 2013

Planet GLLUG - Mon, 11/02/2013 - 14:11
It's been a while since I've attended a Puppet Camp but, considering the quality of the last one (organised by Patrick Debois) and the fact it was being held in the lovely city of Ghent again, I thought it'd be a wise investment to scrape together the time off.

The quality of the talks seemed quite high and, considering the number of newer users present, the content level was well pitched. A couple of deeper talks for the more experienced members would have been nice but we mostly made our own in the open sessions. Facter, writing MCollective plugins, off-line and bulk catalogue compilation and the murky corners of our production puppets all came under discussion - in some cases quite fruitfully.

The wireless was a point of annoyance and amusement (depending on the person and the time of day). We had 20 users for an audience of ten times that - the attitudes covered the gamut from "I only need to check my mail once a day" to "I have my own tethering" and all the way to "This is my brute force script I run in a loop". You can tell when most of us lost our access based on the twitter hash tag.

I was a little surprised at the number of Puppet Camps there will be this year - 27 was the number mentioned. I think a lot of the more experienced members of the community value the camps and confs as a chance to catch up with each other and the PuppetLabs people and I'd hate to see us sticking to our own local camps and losing the cross pollination of ideas, plans and pains.

You can also view the Puppet Camp slides for a number of the sessions.

Categories: LUG Community Blogs