Planet HantsLUG

Planet HantsLUG - http://hantslug.org.uk/planet/

Andy Smith: Giving Cinema Paradiso a try

Sat, 23/09/2017 - 23:38
Farewell, LoveFiLM

I’ve been a customer of LoveFiLM for something like 12 years—since before they were owned by Amazon. In their original incarnation they were great: very cheap, and titles very often arrived in exactly the order you specified, i.e. they often managed to send the thing from the very top of the list.

In 2011 they got bought by Amazon and I was initially a bit concerned, but to be honest Amazon have run it well. The single list disappeared and was replaced by three priority lists: high, normal and low, and then a list of things that haven’t yet been released. New rentals were supposed to almost always come from the high priority list (as long as you had enough titles on there) but in a completely unpredictable order. Though of course they would keep multi-disc box sets together, and send lower-numbered seasons before later seasons.

Amazon have now announced that they’re shutting LoveFiLM by Post down at the end of October, which I think is a shame, as it’s a service I still enjoy.

It was inevitable I suppose due to the increasing popularity of streaming and downloads, and although I’m perfectly able to do the streaming and download thing, receiving discs by post still works for me.

I am used to receiving mockery for consuming some of my entertainment on little plastic discs that a human being has to physically transport to my residence, but LoveFiLM’s service was still cheap, the selection was very good, things could be rented as soon as they were available on disc, and the passive nature of just making a list and having the things sent to me worked well for me.

Cinema Paradiso

My first thought was that that was it for the disc-by-post rental model in the UK. That progress had left it behind. But very quickly people pointed me to Cinema Paradiso. After a quick look around I’ve decided to give it a try and so here are my initial thoughts.

Pricing

At a casual glance the pricing is slightly worse than LoveFiLM’s. I was paying £6.99 a month for 2 discs at home, unlimited rental per month. £6.98 at Cinema Paradiso gets you 2 discs at home but only 4 rentals per month.

I went back through my LoveFiLM rental history for the last year and found there were only 2 months where I managed to rent more than 4 discs, and those times I rented 5 and 6 discs respectively. Realistically it doesn’t seem like 4 discs per month will be much of a restriction to me.

Annoyingly, Cinema Paradiso have a 2 week trial period but only if you sign up to the £9.98 subscription (6 discs a month). You’d have to remember to downgrade to the cheaper subscriptions after 2 weeks, if that’s all you wanted.

Selection

I was pleasantly surprised at how good the selection is at Cinema Paradiso. Not only did they have every title that is currently on my LoveFiLM rental list (96 titles), but they also had a few things that LoveFiLM thinks haven’t been released yet.

I’m not going to claim that my tastes are particularly niche, but there are a few foreign language films and some anime in there, and release dates range from the 70s to 2017.

Manual approval

It seems that new Cinema Paradiso signups need to be manually approved, and this happens only on weekdays between 8am and midday. I’ve signed up on a Saturday evening, so I suppose nothing will get sent out until Monday.

It’s probably not a big deal as we’re talking about the postal service here so even with LoveFiLM nothing would get posted out until Monday anyway. It is a little jarring after moving away from the behemoth that is Amazon though, and serves as a reminder that Cinema Paradiso is a much smaller company.

Searching for titles

The search feature is okay. It provides suggestions as you type, but if your title is obscure then it may not appear in the list of suggestions at all, and you need to submit the search and look through the longer list that appears.

A slight niggle is that if you have moused over any of the initial suggestions it replaces your text with that, so if your title isn’t amongst the suggestions you now have to re-type it.

I like that it shows a rating from Rotten Tomatoes as well as from their own site’s users. LoveFiLM shows IMDB ratings which I don’t trust very much, and also Amazon ratings, which I don’t trust at all for movies or TV. Seeing some of the shockingly-low Rotten Tomatoes scores for some of my LoveFiLM titles resulted in my Cinema Paradiso list shrinking to 83 titles!

Rental list mechanics

It’s hard to tell for sure at this stage because I haven’t yet got my account approved and had any rentals, but it looks to me like the rental list mechanics are a bit clunky compared to LoveFiLM’s.

At LoveFiLM, at the point of adding a new title you would choose which of the three “buckets” to put a rental in: high priority, normal priority, or low priority. Every title in a bucket was of equal priority to every other item in the same bucket. So, when adding a new title all you had to consider was whether it was high, normal or low.

Cinema Paradiso has a single big list of rentals. In some ways this might appeal because you can fine-tune what order you would like things in. But I would suggest that very few people want to put that much effort into ordering their list. Personally, when I add a new title I can cope with:

  • “I want to see this soon”
  • “I want to see this some time”
  • “I want to see this, but I’m not bothered when”

Cinema Paradiso appears to want me to say:

  • “Put this at the top, I want it immediately!”
  • “This belongs at #11, just after the 6th season of American Horror Story, but before Capitalism: A Love Story”
  • “Just stick it at the end”

I can’t find any explanation anywhere on their site as to how the selection actually works, so the logical assumption is that they go down your list from top to bottom until they find a title that you want that they have available right now. Without the three buckets to put titles in, it seems to me then that every addition will have to involve some list management unless I either want to see that title really soon, or probably never.
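As a sketch of what I’m assuming happens (purely my guess; nothing below comes from their site and the names are made up):

    def pick_next_rentals(rental_list, in_stock, allocation=2):
        """Walk the single list top to bottom and send the first
        titles that are available right now."""
        to_send = []
        for title in rental_list:            # order is everything here
            if title in in_stock:
                to_send.append(title)
                if len(to_send) == allocation:
                    break
        return to_send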

I’ll have to give it a go but this mechanism seems a bit more awkward than LoveFiLM’s approach and needlessly so, because LoveFiLM’s way doesn’t make any promises about which of the titles in each bucket will come next either, nor even that it will be anything from the high priority bucket at all. Although I cannot remember a time when something has come that wasn’t from the high priority bucket.

Cinema Paradiso does let you have more than one list, and you can divide your disc allocation between lists, but I don’t think I could emulate the high/normal/low buckets with that. Having a 2 disc allocation I’d always be getting one disc from the “high” list and one disc from the “normal” list, which isn’t how I’d want it to work.

Let’s see how it goes.

Referral

I did not know when I signed up that there was a referral scheme which is a shame because I do know some people already using Cinema Paradiso. If you’re going to sign up then please use my referral link. I will get a ⅙ reduction in rental fees for each person that does.


Andy Smith: Tricky issues when upgrading to the GoCardless “Pro” API

Thu, 21/09/2017 - 21:06
Background

Since 2012 BitFolk has been using GoCardless as a Direct Debit payment provider. On the whole it has been a pleasant experience:

  • Their API is a pleasure to integrate against, having excellent documentation
  • Their support is responsive and knowledgeable
  • Really good sandbox environment with plenty of testing tools
  • The fees, being 1% capped at £2.00, are pretty good for any kind of payment provider (much less than PayPal, Stripe, etc.)

Of course, if I was submitting Direct Debits myself there would be no charge at all, but BitFolk is too small and my bank (Barclays) are not interested in talking to me about that.

The “Pro” API

In September 2014 GoCardless came out with a new version of their API called the “Pro API”. It made a few things nicer but didn’t come with any real new features applicable to BitFolk, and also added a minimum fee of £0.20.

The original API I’d integrated against has a 1% fee capped at £2.00, and as BitFolk’s smallest plan is £10.79 including VAT the fee would generally be £0.11. Having a £0.20 fee on these payments would represent nearly a doubling of fees for many of my payments.
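To put rough numbers on that (plain arithmetic; I’m assuming the 1% rate itself stays the same and only the £0.20 minimum is new):

    plan = 10.79                            # smallest BitFolk plan, inc. VAT
    legacy_fee = min(plan * 0.01, 2.00)     # 1% capped at £2.00
    pro_fee = max(plan * 0.01, 0.20)        # same 1%, but with a £0.20 minimum
    print(f"legacy £{legacy_fee:.2f} vs pro £{pro_fee:.2f}")
    # legacy £0.11 vs pro £0.20 -- nearly double on the smallest plan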

So, no compelling reason to use the Pro API.

Over the years, GoCardless made more noise about their Pro API and started calling their original API the “legacy API”. I could see the way things were going. Sure enough, eventually they announced that the legacy API would be disabled on 31 October 2017. No choice but to move to the Pro API now.

Payment caps

There aren’t normally any limits on Direct Debit payments. When you let your energy supplier or council or whatever do a Direct Debit, they can empty your bank account if they like.

The Direct Debit Guarantee has very strong provisions in it for protecting the payer: essentially, if you dispute anything, any time, you get your money back without question, and the supplier has to pursue you for the money by other means if they still think the charge was correct. A company that repeatedly gets Direct Debit chargebacks is going to be kicked off the service by their bank or payment provider.

The original GoCardless API had the ability to set caps on the mandate which would be enforced their side. A simple “X amount per Y time period”. I thought that this would provide some comfort to customers who may not be otherwise familiar with authorising Direct Debits from small companies like BitFolk, so I made use of that feature by default.

This turned out to be a bad decision.

The main problem with this was that there was no way to change the cap. If a customer upgraded their service then I’d have to cancel their Direct Debit mandate and ask them to authorise a new one because it would cease being possible to charge them the correct amount. Authorising a new mandate was not difficult—about the same amount of work as making any sort of online payment—but asking people to do things is always a pain point.

There was a long-standing feature request with GoCardless to implement some sort of “follow this link to authorise the change” feature, but it never happened.

Payment caps and the new API

The Pro API does not support mandates with a capped amount per interval. Given that I’d already established that setting caps was a mistake, I wasn’t too bothered by that.

I’ve since discovered however that the Pro API not only does not support setting the caps, it does not have any way to query them either. This is bad because I need to use the Pro API with mandates that were created in the legacy API. And all of those have caps.

Here’s the flow I had using the legacy API:

[Flowchart: legacy payment process]

This way if the charge was coming a little too early, I could give some latitude and let it wait a couple of days until it could be charged. I’d also know if the problem was that the cap was too low. In that case there would be no choice but to cancel the customer’s mandate and ask them to authorise another one, but at least I would know exactly what the problem was.

With the Pro API, there is no way to check timings and charge caps. All I can do is make the charge, and then if it’s too soon or too much I get the same error message:

“Validation failed / exceeds mandate cap”

That’s it. It doesn’t tell me what the cap is, it doesn’t tell me if it’s because I’m charging too soon, nor if I’m charging too much. There is no way to distinguish between those situations.
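Roughly, the difference looks like this. This is only a sketch: the legacy and pro objects stand for hypothetical wrappers around the two APIs, and the field names are invented, not the real GoCardless client libraries:

    class ValidationFailed(Exception):
        """Stand-in for the 'Validation failed / exceeds mandate cap' error."""

    # Legacy flow: the mandate's cap could be inspected, so I could tell
    # why a charge couldn't happen yet and act accordingly.
    def charge_legacy(legacy, mandate_id, amount):
        info = legacy.get_mandate(mandate_id)              # hypothetical call
        if amount > info["cap_per_interval"]:
            return "cap too low: cancel and re-authorise"
        if amount > info["remaining_this_interval"]:
            return "too early: wait a couple of days and retry"
        return legacy.create_payment(mandate_id, amount)

    # Pro flow: no cap information at all; try the charge and catch one
    # undifferentiated error.
    def charge_pro(pro, mandate_id, amount):
        try:
            return pro.create_payment(mandate_id, amount)  # hypothetical call
        except ValidationFailed:
            return "failed: too soon or too much, no way to tell which"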

Backwards compatible – sort of

GoCardless talk about the Pro API being backwards compatible with the legacy API, so that once switched I would still be able to create payments against mandates that were created using the legacy API. I would not need to get customers to re-authorise.

This is true to a point, but my use of caps per interval in the legacy API has severely restricted how compatible things are, and that’s something I wasn’t aware of. Sure, their “Guide to upgrading” does briefly mention that caps would continue to be enforced:

“Pre-authorisation mandates are not restricted, but the maximum amount and interval that you originally specified will still apply.”

That is the only mention of this issue in that entire document, and that statement would have been fine by me if there were still a way to tell which failure mode had been encountered.

Thinking that I was just misunderstanding, I asked GoCardless support about this. Their reply:

Thanks for emailing.

I’m afraid the limits aren’t exposed within the new API. The only solution as you suggest, is to try a payment and check for failure.

Apologies for the inconvenience caused here and if you have any further queries please don’t hesitate to let us know.

What now?

I am not yet sure of the best way to handle this.

The nuclear option would be to cancel all mandates and ask customers to authorise them again. I would like to avoid this if possible.

I am thinking that most customers continue to be fine on the “amount per interval” legacy mandates as long as they don’t upgrade, so I can leave them as they are until that happens. If they upgrade, or if a DD payment ever fails with “exceeds mandate cap” then I will have to cancel their mandate and ask them to authorise again. I can see if their mandate was created before ~today and advise them on the web site to cancel it and authorise it again.
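In sketch form, that plan looks something like this (again hypothetical: the cut-over date, objects and helper are all made up for illustration, and ValidationFailed is the same stand-in as in the earlier sketch):

    from datetime import date

    CUTOVER = date(2017, 9, 21)     # roughly "today": legacy mandates created
                                    # before this still have caps attached

    class ValidationFailed(Exception):
        pass                        # same stand-in as in the earlier sketch

    def handle_charge(customer, pro):
        mandate = customer.mandate
        if customer.has_upgraded and mandate.created_at < CUTOVER:
            return ask_to_reauthorise(customer)    # cap is probably too low now
        try:
            return pro.create_payment(mandate.id, customer.amount_due)
        except ValidationFailed:                   # "exceeds mandate cap"
            return ask_to_reauthorise(customer)    # cancel it, ask for a new one

    def ask_to_reauthorise(customer):
        # In reality: cancel the mandate, flag it on the web site, email them.
        return "re-authorisation requested"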

Conclusion

I’m a little disappointed that GoCardless didn’t think that there would need to be a way to query mandate caps even though creating new mandates with those limits is no longer possible.

I can’t really accept that there is a good level of backwards compatibility here if there is a feature that you can’t even tell is in use until it causes a payment to fail, and even then you can’t tell which details of that feature cause the failure.

I understand why they haven’t just stopped honouring the caps: it wouldn’t be in line with the consumer-focused spirit of the Direct Debit Guarantee to alter things against customer expectations, and even sending out a notification to the customer might not be enough. I think they should have gone the other way and allowed querying of things that they are going to continue to enforce, though.

Could I have tested for this? Well, the difficulty there is that the GoCardless sandbox environment for the Pro API starts off clean, with no access to any of your legacy activity from either the live environment or the legacy sandbox. So I couldn’t do something like the following:

  1. Create legacy mandate in legacy sandbox, with amount per interval caps
  2. Try to charge against the legacy mandate from the Pro API sandbox, exceeding the cap
  3. Observe that it fails but with no way to tell why

I did note that there didn’t seem to be attributes of the mandate endpoint that would let me know when it could be charged and what the amount left to charge was, but it didn’t set off any alarm bells. Perhaps it should have.

Also I will admit I’ve had years to switch to Pro API and am only doing it now when forced. Perhaps if I had made a start on this years ago, I’d have noted what I consider to be a deficiency, asked them to remedy it and they might have had time to do so. I don’t actually think it’s likely they would bump the API version for that though. In my defence, as I mentioned, there is nothing attractive about the Pro API for my use, and it does cost more, so no surprise I’ve been reluctant to explore it.

So, if you are scrambling to update your GoCardless integration before 31 October, do check that you are prepared for payments against capped mandates to fail.


Andy Smith: When is a 64-bit counter not a 64-bit counter?

Sun, 03/09/2017 - 20:17

…when you run a Xen device backend (commonly dom0) on a kernel version earlier than 4.10, e.g. Debian stable.

TL;DR

Xen netback devices used 32-bit counters until that bug was fixed and released in kernel version 4.10.

On a kernel with that bug you will see counter wraps much sooner than you would expect, and if the interface is doing enough traffic for there to be multiple wraps in 5 minutes, your monitoring will no longer be accurate.

The problem

A high-bandwidth VPS customer reported that the bandwidth figures presented by BitFolk’s monitoring bore no resemblance to their own statistics gathered from inside their VPS. Their figures were a lot higher.

About octet counters

The Linux kernel maintains byte/octet counters for its network interfaces. You can view them in /sys/class/net/<interface>/statistics/*_bytes.

They’re a simple count of bytes transferred, and so the count always goes up. Typically these are 64-bit unsigned integers so their maximum value would be 18,446,744,073,709,551,615 (2^64 - 1).

When you’re monitoring bandwidth use, the monitoring system records the value and the timestamp. The difference in value over a known period allows the monitoring system to work out the rate.

Wrapping

Monitoring of network devices is often done using SNMP. SNMP has 32-bit and 64-bit counters.

The maximum value that can be held in a 32-bit counter is 4,294,967,295. As that is a byte count, that represents 34,359,738,368 bits or 34,359.74 megabits. Divide that by 300 (seconds in 5 minutes) and you get 114.5. Therefore if the average bandwidth is above 114.5Mbit/s for 5 minutes, you will overflow a 32-bit counter. When the counter overflows it wraps back through zero.

Wrapping a counter once is fine. We have to expect that a counter will wrap eventually, and as counters never decrease, if a new value is smaller than the previous one then we know it has wrapped and can still work out what the rate should be.

The problem comes when the counter wraps more than once. There is no way to tell how many times it has wrapped so the monitoring system will have to assume the answer is once. Once traffic reaches ~229Mbit/s the counters will be wrapping at least twice in 5 minutes and the statistics become meaningless.
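Put another way, the rate calculation can only ever correct for a single wrap; a simplified sketch:

    COUNTER_MAX = 2**32     # the buggy 32-bit case; 2**64 once fixed

    def bytes_per_second(prev, curr, seconds):
        """Rate from two counter samples, assuming at most ONE wrap."""
        if curr >= prev:
            delta = curr - prev
        else:
            # The counter went backwards, so it must have wrapped, but we
            # can only account for one wrap.  Two or more wraps in the
            # interval silently under-report.
            delta = (COUNTER_MAX - prev) + curr
        return delta / seconds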

64-bit counters to the rescue

For that reason, network traffic is normally monitored using 64-bit counters. You would have to have a traffic rate of almost 492 Petabit/s to wrap a 64-bit byte counter in 5 minutes.
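For reference, the back-of-the-envelope arithmetic behind both of those figures (300 seconds being the 5-minute polling interval):

    def wrap_rate_mbit(counter_bits, interval_secs=300):
        """Average bandwidth at which a byte counter of this width
        wraps once within one polling interval."""
        return 2**counter_bits * 8 / interval_secs / 1e6    # Mbit/s

    print(wrap_rate_mbit(32))           # ~114.5 Mbit/s: one wrap per 5 minutes
    print(wrap_rate_mbit(32) * 2)       # ~229 Mbit/s: two wraps, stats useless
    print(wrap_rate_mbit(64) / 1e9)     # ~492 Pbit/s: not happening in practice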

The thing is, I was already using 64-bit SNMP counters.

Examining the sysfs files

I decided to remove SNMP from the equation by going to the source of the data that SNMP uses: the kernel on the device being monitored.

As mentioned, the kernel’s interface byte counters are exposed in sysfs at /sys/class/net/<interface>/statistics/*_bytes. I dumped out those values every 10 seconds and watched them scroll in a terminal session.
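Nothing fancy was needed for that; roughly the following (the interface name is whatever the relevant vif is called in dom0, "vif3.0" below being made up):

    import time

    def watch_counters(iface, interval=10):
        """Print the kernel's rx/tx byte counters for iface every 10 seconds."""
        paths = ["/sys/class/net/%s/statistics/%s" % (iface, f)
                 for f in ("rx_bytes", "tx_bytes")]
        while True:
            values = [open(p).read().strip() for p in paths]
            print(time.strftime("%H:%M:%S"), *values)
            time.sleep(interval)

    # watch_counters("vif3.0")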

What I observed was that these counters, for that particular customer, were wrapping every couple of minutes. I never observed a value greater than 8,469,862,875. That’s larger than a 32-bit counter would hold, but very close to what a 33 bit counter would hold (8,589,934,591).

64-bit counters not to the rescue

Once I realised that the kernel’s own counters were wrapping every couple of minutes inside the kernel it became clear that using 64-bit counters in SNMP was not going to help at all, and multiple wraps would be seen in 5 minutes.

What a difference a minute makes

To test the hypothesis I switched to 1-minute polling. Here’s what 12 hours of real data looks like under both 5- and 1-minute polling.

As you can see that is a pretty dramatic difference.

The bug

By this point, I’d realised that there must be a bug in Xen’s netback driver (the thing that makes virtual network interfaces in dom0).

I went searching through the source of the kernel and found that the counters had changed from an unsigned long in kernel version 4.9 to a u64 in kernel version 4.10.

Of course, once I knew what to search for it was easy to unearth a previous bug report. If I’d found that at the time of the initial report that would have saved 2 days of investigation!

Even so, the fix for this was only committed in February of this year so, unfortunately, is not present in the kernel in use by the current Debian stable. Nor in many other current distributions.

For Xen set-ups on Debian the bug could be avoided by using a backports kernel or packaging an upstream kernel.

Or you could do 1-minute polling as that would only wrap one time at an average bandwidth of ~572Mbit/s and should be safe from multiple wraps up to ~1.1Gbit/s.

Inside the VPS the counters are 64-bit so it isn’t an issue for guest administrators.


Debian Bits: Work on Debian for mobile devices continues

Thu, 17/08/2017 - 13:39

Work on Debian for mobile devices, i.e. telephones, tablets, and handheld computers, continues. During the recent DebConf17 in Montréal, Canada, more than 50 people had a meeting to reconsider opportunities and challenges for Debian on mobile devices.

A number of devices were shown at DebConf:

  • PocketCHIP: A very small handheld computer with keyboard, Wi-Fi, USB, and Bluetooth, running Debian 8 (Jessie) or 9 (Stretch).
  • Pyra: A modular handheld computer with a touchscreen, gaming controls, Wi-Fi, keyboard, multiple USB ports and SD card slots, and an optional modem for either Europe or the USA. It will come preinstalled with Debian.
  • Samsung Galaxy S Relay 4G: An Android smartphone featuring a physical keyboard, which can already run portions of Debian userspace on the Android kernel. Kernel upstreaming is on the way.
  • ZeroPhone: An open-source smartphone based on Raspberry Pi Zero, with a small screen, classic telephone keypad and hardware switches for telephony, Wi-Fi, and the microphone. It is running Debian-based Raspbian OS.

The photo shows all four devices, together with a Nokia N900 (the first Linux-based smartphone by Nokia, running the Debian-based Maemo), and a completely unrelated Gnuk cryptographic token, which just sneaked into the setting.

If you like to participate, please


Debian Bits: Debian turns 24!

Wed, 16/08/2017 - 15:50

Today is Debian's 24th anniversary. If you are close to any of the cities celebrating Debian Day 2017, you're very welcome to join the party!

If not, there's still time for you to organize a little celebration or contribution to Debian. For example, spread the word about Debian Day with the nice piece of artwork created by Debian Developer Daniel Lenharo de Souza and Valessio Brito, taking inspiration from the desktop themes Lines and softWaves by Juliette Belin.

If you also like graphics design, or design in general, have a look at https://wiki.debian.org/Design and join the team! Or you can visit the general list of Debian Teams for many other opportunities to participate in Debian development.

Thanks to everybody who has contributed to develop our beloved operating system in these 24 years, and happy birthday Debian!


Debian Bits: DebConf17 closes in Montreal and DebConf18 dates announced

Sat, 12/08/2017 - 21:59

Today, Saturday 12 August 2017, the annual Debian Developers and Contributors Conference came to a close. With over 405 people attending from all over the world, and 169 events including 89 talks, 61 discussion sessions or BoFs, 6 workshops and 13 other activities, DebConf17 has been hailed as a success.

Highlights included DebCamp with 117 participants, the Open Day, where events of interest to a broader audience were offered, talks from invited speakers (Deb Nicholson, Matthew Garrett and Katheryn Sutter), the traditional Bits from the DPL, lightning talks and live demos and the announcement of next year's DebConf (DebConf18 in Hsinchu, Taiwan).

The schedule has been updated every day, including 32 ad-hoc new activities planned by attendees during the whole conference.

For those not able to attend, talks and sessions were recorded and live streamed, and videos are being made available at the Debian meetings archive website. Many sessions also facilitated remote participation via IRC or a collaborative pad.

The DebConf17 website will remain active for archive purposes, and will continue to offer links to the presentations and videos of talks and events.

Next year, DebConf18 will be held in Hsinchu, Taiwan, from 29 July 2018 until 5 August 2018. It will be the first DebConf held in Asia. For the days before DebConf the local organisers will again set up DebCamp (21 July - 27 July), a session for some intense work on improving the distribution, and organise the Open Day on 28 July 2018, aimed at the general public.

DebConf is committed to a safe and welcome environment for all participants. See the DebConf Code of Conduct and the Debian Code of Conduct for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf17, particularly our Platinum Sponsors Savoir-Faire Linux, Hewlett Packard Enterprise, and Google.

About Savoir-faire Linux

Savoir-faire Linux is a Montreal-based Free/Open-Source Software company with offices in Quebec City, Toronto, Paris and Lyon. It offers Linux and Free Software integration solutions in order to provide performance, flexibility and independence for its clients. The company actively contributes to many free software projects, and provides mirrors of Debian, Ubuntu, Linux and others.

About Hewlett Packard Enterprise

Hewlett Packard Enterprise (HPE) is one of the largest computer companies in the world, providing a wide range of products and services, such as servers, storage, networking, consulting and support, software, and financial services.

HPE is also a development partner of Debian, and provides hardware for port development, Debian mirrors, and other Debian services.

About Google

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, at gold level since DebConf12, and at platinum level for this DebConf17.


Steve Kemp: A day in the life of Steve

Sat, 12/08/2017 - 21:00

I used to think I was a programmer who did "sysadmin-stuff". Nowadays I interact with too many real programmers to believe that.

Or rather I can code/program/develop, but I'm not often as good as I could be. These days I'm getting more consistent with writing tests, and I like it when things are thoroughly planned and developed. But too often if I'm busy, or distracted, I think to myself "Hrm .. compiles? Probably done. Oops. Bug, you say?"

I was going to write about working with golang today. The go language is minimal and quite neat. I like the toolset:

  • go fmt
    • Making everything consistent.
  • go test

Instead I think today I'm going to write about something else. Since having a child a lot of my life is different. Routine becomes something that is essential, as is planning and scheduling.

So an average week-day goes something like this:

  • 6:00AM
    • Wake up (naturally).
  • 7:00AM
    • Wake up Oiva and play with him for 45 minutes.
  • 7:45AM
    • Prepare breakfast for my wife, and wake her up, then play with Oiva for another 15 minutes while she eats.
  • 8:00AM
    • Take tram to office.
  • 8:30AM
    • Make coffee, make a rough plan for the day.
  • 9:00AM
    • Work, until lunchtime which might be 1pm, 2pm, or even 3pm.
  • 5:00PM
    • Leave work, and take bus home.
    • Yes I go to work via tram, but come back via bus. There are reasons.
  • 5:40PM
    • Arrive home, and relax in peace for 20 minutes.
  • 6:00PM-7:00PM
    • Take Oiva for a walk, stop en route to relax in a hammock for 30 minutes reading a book.
  • 7:00-7:20PM
    • Feed Oiva his evening meal.
  • 7:30PM
    • Give Oiva his bath, then pass him over to my wife to put him to bed.
  • 7:30PM - 8:00pm
    • Relax
  • 8:00PM - 10:00PM
    • Deal with Oiva waking up, making noises, or being unsettled.
    • Try to spend quality time with my wife, watch TV, read a book, do some coding, etc.
  • 10:00PM ~ 11:30PM
    • Go to bed.

In short I'm responsible for Oiva from 6ish-8ish in the morning, then from 6PM-10PM (with a little break while he's put to bed.) There are some exceptions to this routine - for example I work from home on Monday/Friday afternoons, and Monday evenings he goes to his swimming classes. But most working-days are the same.

Weekends are a bit different. There I tend to take him 6AM-8AM, then 1PM-10PM with a few breaks for tea, and bed. At the moment we're starting to reach the peak-party time of year, which means weekends often involve negotiation(s) about which parent is having a party, and which parent is either leaving early, or not going out at all.

Today I have him all day, and it's awesome. He's just learned to say "Daddy" which makes any stress, angst or unpleasantness utterly worthwhile.


Debian Bits: DebConf17 starts today in Montreal

Sun, 06/08/2017 - 14:00

DebConf17, the 18th annual Debian Conference, is taking place in Montreal, Canada from August 6 to August 12, 2017.

Debian contributors from all over the world have come together at Collège Maisonneuve during the preceding week for DebCamp (focused on individual work and team sprints for in-person collaboration developing Debian), and the Open Day on August 5th (with presentations and workshops of interest to a wide audience).

Today the main conference starts with nearly 400 attendees and over 120 activities scheduled, including 45- and 20-minute talks and team meetings, workshops, a job fair, talks from invited speakers, as well as a variety of other events.

The full schedule at https://debconf17.debconf.org/schedule/ is updated every day, including activities planned ad-hoc by attendees during the whole conference.

If you want to engage remotely, you can follow the video streaming of the events happening in the three talk rooms: Buzz (the main auditorium), Rex, and Bo, or join the conversation about what is happening in the talk rooms: #debconf17-buzz, #debconf17-rex and #debconf17-bo, and the BoF (discussions) rooms: #debconf17-potato and #debconf17-woody (all those channels in the OFTC IRC network).

DebConf is committed to a safe and welcome environment for all participants. See the DebConf Code of Conduct and the Debian Code of Conduct for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf17, particularly our Platinum Sponsors Savoir-Faire Linux, Hewlett Packard Enterprise, and Google.


Debian Bits: Google Platinum Sponsor of DebConf17

Sat, 05/08/2017 - 21:30

We are very pleased to announce that Google has committed support to DebConf17 as a Platinum sponsor.

Google is one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.

Google has been supporting Debian by sponsoring DebConf for more than ten years, and at gold level since DebConf12.

With this additional commitment as Platinum Sponsor for DebConf17, Google helps make our annual conference possible, and directly supports the progress of Debian and Free Software, helping to strengthen the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Google, for your support of DebConf17!

DebConf17 is starting!

Many Debian contributors are already taking advantage of DebCamp and the Open Day to work individually or in groups developing and improving Debian. DebConf17 will officially start on August 6, 2017. Visit the DebConf17 website at https://debconf17.debconf.org for the schedule, live streaming and other details.


Debian Bits: DebConf17 Open Day

Sat, 05/08/2017 - 14:00

Today, the day preceding the official start of the annual Debian Conference, is the Open Day at DebConf17, at Collège Maisonneuve in Montreal (Canada).

This day is open to the public with events of interest to a wide audience.

The schedule of today's events includes, among others:

  • A Newbie's Newbie Guide to Debian
  • Ask Anything About Debian
  • Debian Packaging 101
  • Debian InstallFest
  • Presentations or workshops related to free software projects and local organizations.

Everyone is welcome to attend! It is a great opportunity for interested users to meet our community and for Debian to widen it.

See the full schedule for today's events at https://debconf17.debconf.org/schedule/open-day/.

If you want to engage remotely, you can watch the video streaming of the Open Day events happening in the "Rex" room, or join the conversation in the channels #debconf17-rex, #debconf17-potato and #debconf17-woody in the OFTC IRC network.

DebConf is committed to a safe and welcome environment for all participants. See the DebConf Code of Conduct and the Debian Code of Conduct for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf17, particularly our Platinum Sponsors Savoir-Faire Linux, Hewlett Packard Enterprise, and Google.


Andy Smith: A slightly more realistic look at lvmcache

Thu, 03/08/2017 - 13:59
Recap

And then…

I decided to perform some slightly more realistic benchmarks against lvmcache.

The problem with the initial benchmark was that it only covered 4GiB of data with a 4GiB cache device. Naturally once lvmcache was working correctly its performance was awesome – the entire dataset was in the cache. But clearly if you have enough fast block device available to fit all your data then you don’t need to cache it at all and may as well just use the fast device directly.

I decided to perform some fio tests with varying data sizes, some of which were larger than the cache device.

Test methodology

Once again I used a Zipf distribution with a factor of 1.2, which should have caused about 90% of the hits to come from just 10% of the data. I kept the cache device at 4GiB but varied the data size. The following data sizes were tested:

  • 1GiB
  • 2GiB
  • 4GiB
  • 8GiB
  • 16GiB
  • 32GiB
  • 48GiB

With the 48GiB test I expected to see lvmcache struggling, as the hot 10% (~4.8GiB) would no longer fit within the 4GiB cache device.
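For reference, the expected hot-spot sizes against the 4GiB cache (simple arithmetic from the 10% figure above):

    CACHE_GIB = 4
    for size_gib in (1, 2, 4, 8, 16, 32, 48):
        hot = size_gib * 0.10               # ~10% of the data is hot
        fits = "fits" if hot <= CACHE_GIB else "does NOT fit"
        print("%2d GiB data: hot ~%.1f GiB -> %s in the cache"
              % (size_gib, hot, fits))
    # Only the 48GiB case (~4.8GiB hot) overflows the 4GiB cache device.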

A similar fio job spec to those from the earlier articles was used:

    [cachesize-1g]
    size=512m
    ioengine=libaio
    direct=1
    iodepth=8
    numjobs=2
    readwrite=randread
    random_distribution=zipf:1.2
    bs=4k
    unlink=1
    runtime=30m
    time_based=1
    per_job_logs=1
    log_avg_msec=500
    write_iops_log=/var/tmp/fio-${FIOTEST}

…the only difference being that several different job files were used each with a different size= directive. Note that as there are two jobs, the size= is half the desired total data size: each job lays out a data file of the specified size.

For each data size I took care to fill the cache with data first before doing a test run, as unreproducible performance is still seen against a completely empty cache device. This produced IOPS logs and a completion latency histogram. Tests were also run against SSD and HDD to provide baseline figures.
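The runs were driven per data size in the same way as the invocations shown in the full output below; something along these lines (a sketch of the method rather than the exact script, and the cache-filling pass isn’t shown):

    import os
    import subprocess

    # e.g. (cd /srv/cache/fio && FIOTEST=cachesize-1g-full fio ~/cachesize-1g.fio)
    for size in ("1g", "2g", "4g", "8g", "16g", "32g", "48g"):
        jobfile = os.path.expanduser("~/cachesize-%s.fio" % size)
        env = dict(os.environ, FIOTEST="cachesize-%s-full" % size)
        subprocess.run(["fio", jobfile], cwd="/srv/cache/fio",
                       env=env, check=True)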

Results

IOPS graphs

All-in-one

Immediately we can see that for data sizes 4GiB and below performance converges quite quickly to near-SSD levels. That is very much what we would expect when the cache device is 4GiB, so big enough to completely cache everything.

Let’s just have a look at the lower-performing configurations.

Low-end performers

For 8, 16 and 32GiB data sizes performance clearly gets progressively worse, but it is still much better than baseline HDD. The 10% of hot data still fits within the cache device, so plenty of acceleration is still happening.

For the 48GiB data size it is a little bit of a different story. Performance is still better (on average) than baseline HDD, but there are periodic dips back down to roughly HDD figures. This is because not all of the 10% hot data fits into the cache device any more. Cache misses cause reads from HDD and consequently end up with HDD levels of performance for those reads.

The results no longer look quite so impressive, with even the 8GiB data set achieving only a few thousand IOPS on average. Are things as bad as they seem? Well no, I don’t think they are, and to see why we will have to look at the completion latency histograms.

Completion latency histograms

The above graphs are generated by fitting a Bézier curve to a scatter of data points, each of which represents a 500ms average of IOPS achieved. The problem there is the word average.

It’s important to understand what effect averaging the figures gives. We’ve already seen that HDDs are really slow. Even if only a few percent of IOs end up missing cache and going to HDD, the massive latency of those requests will pull the average for the whole 500ms window way down.
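To put very rough numbers on that, using the baseline completion latencies from the full output below (around 160µs for SSD, around 24ms for HDD) and the 16 outstanding IOs of the test, and ignoring all queueing subtleties:

    SSD_LAT = 160e-6        # approximate SSD completion latency (baseline run)
    HDD_LAT = 24e-3         # approximate HDD completion latency (baseline run)
    IN_FLIGHT = 2 * 8       # numjobs=2, iodepth=8

    def approx_iops(miss_rate):
        mean_lat = (1 - miss_rate) * SSD_LAT + miss_rate * HDD_LAT
        return IN_FLIGHT / mean_lat

    print(approx_iops(0.00))    # ~100,000 IOPS: everything served from cache
    print(approx_iops(0.05))    # ~11,800 IOPS: just 5% of IOs missing to HDD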

Presumably we have a cache because we suspect we have hot spots of data, and we’ve been trying to evaluate that by doing most of the reads from only 10% of the data. Do we care what the average performance is then? Well it’s a useful metric but it’s not going to say much about the performance of reads from the hot data.

The histogram of completion latencies can be more useful. This shows how long it took between issuing the IO and completing the read for a certain percentage of issued IOs. Below I have focused on the 50% to 99% latency buckets, with the times for each bucket averaged between the two jobs. In the interests of being able to see anything at all I’ve had to double the height of the graph and still cut off the y axis for the three worst performers.

A couple of observations:

  • Somewhere between 70% and 80% of IOs complete with a latency that’s so close to SSD performance as to be near-indistinguishable, no matter what the data size. So what I think I am proving is that:

    you can cache a 48GiB slow backing device with 4GiB of fast SSD and if you have 10% hot data then you can expect it to be served up at near-SSD latencies 70%–80% of the time. If your hot spots are larger (not so hot) then you won’t achieve that. If your fast device is larger than 1/12th the backing device then you should do better than 70%–80%.

  • If the cache were perfect then we should expect the 90th percentile to be near SSD performance even for the 32GiB data set, as the 10% hot spot of ~3.2GiB fits inside the 4GiB cache. For whatever reason this is not achieved, but for that data size the 90th percentile latency is still about half that of HDD.
  • When the backing device is many times larger (32GiB+) than the cache device, the 99th percentile latencies can be slightly worse than for baseline HDD.

    I hesitate to suggest there is a problem here as there are going to be very few samples in the top 1%, so it could just be showing close to HDD performance.

Conclusion

Assuming you are okay with using a 4.12.x kernel, and assuming you are already comfortable using LVM, then at the moment it looks fairly harmless to deploy lvmcache.

Getting a decent performance boost out of it though will require you to check that your data really does have hot spots and size your cache appropriately.

Measuring your existing workload with something like blktrace is probably advisable, and these days you can feed the output of blktrace back into fio to see what performance might be like in a different configuration.

Full test output

You probably want to stop reading here unless the complete output of all the fio runs is of interest to you.

Baseline SSD $ cd /srv/ssd/fio && FIOTEST=ssd-1g fio ~/cachesize-1g.fio cachesize-1g: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=8 ... fio-2.16 Starting 2 processes cachesize-1g: Laying out IO file(s) (1 file(s) / 512MB) cachesize-1g: Laying out IO file(s) (1 file(s) / 512MB) Jobs: 2 (f=2): [r(2)] [100.0% done] [385.7MB/0KB/0KB /s] [98.8K/0/0 iops] [eta 00m:00s] cachesize-1g: (groupid=0, jobs=1): err= 0: pid=15206: Thu Aug 3 03:59:51 2017 read : io=346130MB, bw=196910KB/s, iops=49227, runt=1800001msec slat (usec): min=0, max=4188, avg= 4.76, stdev= 4.10 clat (usec): min=17, max=7093, avg=155.55, stdev=59.28 lat (usec): min=91, max=7098, avg=160.77, stdev=59.32 clat percentiles (usec): | 1.00th=[ 101], 5.00th=[ 108], 10.00th=[ 112], 20.00th=[ 119], | 30.00th=[ 124], 40.00th=[ 131], 50.00th=[ 137], 60.00th=[ 147], | 70.00th=[ 159], 80.00th=[ 181], 90.00th=[ 225], 95.00th=[ 274], | 99.00th=[ 366], 99.50th=[ 402], 99.90th=[ 474], 99.95th=[ 506], | 99.99th=[ 924] lat (usec) : 20=0.01%, 50=0.01%, 100=0.57%, 250=92.21%, 500=7.16% lat (usec) : 750=0.04%, 1000=0.01% lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% cpu : usr=14.54%, sys=42.95%, ctx=50708834, majf=0, minf=15 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued : total=r=88609406/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=8 cachesize-1g: (groupid=0, jobs=1): err= 0: pid=15207: Thu Aug 3 03:59:51 2017 read : io=346960MB, bw=197381KB/s, iops=49345, runt=1800001msec slat (usec): min=0, max=4180, avg= 4.76, stdev= 3.93 clat (usec): min=16, max=7097, avg=155.16, stdev=59.32 lat (usec): min=60, max=7101, avg=160.38, stdev=59.36 clat percentiles (usec): | 1.00th=[ 101], 5.00th=[ 108], 10.00th=[ 113], 20.00th=[ 119], | 30.00th=[ 124], 40.00th=[ 131], 50.00th=[ 137], 60.00th=[ 145], | 70.00th=[ 157], 80.00th=[ 179], 90.00th=[ 225], 95.00th=[ 274], | 99.00th=[ 370], 99.50th=[ 406], 99.90th=[ 474], 99.95th=[ 506], | 99.99th=[ 892] lat (usec) : 20=0.01%, 50=0.01%, 100=0.59%, 250=92.13%, 500=7.22% lat (usec) : 750=0.05%, 1000=0.01% lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% cpu : usr=14.75%, sys=43.14%, ctx=50816422, majf=0, minf=14 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued : total=r=88821698/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=8 Run status group 0 (all jobs): READ: io=693090MB, aggrb=394291KB/s, minb=196909KB/s, maxb=197381KB/s, mint=1800001msec, maxt=1800001msec Disk stats (read/write): xvdf: ios=177420039/9, merge=0/6, ticks=27109524/0, in_queue=27169920, util=100.00% Baseline HDD $ cd /srv/slow/fio && FIOTEST=hdd-1g fio ~/cachesize-1g.fio cachesize-1g: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=8 ... 
fio-2.16 Starting 2 processes cachesize-1g: Laying out IO file(s) (1 file(s) / 512MB) cachesize-1g: Laying out IO file(s) (1 file(s) / 512MB) Jobs: 2 (f=2): [r(2)] [100.0% done] [2650KB/0KB/0KB /s] [662/0/0 iops] [eta 00m:00s] cachesize-1g: (groupid=0, jobs=1): err= 0: pid=18951: Thu Aug 3 04:30:08 2017 read : io=2307.8MB, bw=1312.5KB/s, iops=328, runt=1800099msec slat (usec): min=0, max=101, avg=14.31, stdev= 9.09 clat (usec): min=58, max=943511, avg=24362.98, stdev=47600.55 lat (usec): min=63, max=943517, avg=24378.15, stdev=47600.62 clat percentiles (usec): | 1.00th=[ 71], 5.00th=[ 82], 10.00th=[ 96], 20.00th=[ 127], | 30.00th=[ 181], 40.00th=[ 596], 50.00th=[ 1432], 60.00th=[ 9152], | 70.00th=[21632], 80.00th=[38656], 90.00th=[73216], 95.00th=[115200], | 99.00th=[228352], 99.50th=[280576], 99.90th=[415744], 99.95th=[468992], | 99.99th=[593920] lat (usec) : 100=11.17%, 250=23.28%, 500=4.34%, 750=2.55%, 1000=2.09% lat (msec) : 2=8.86%, 4=1.99%, 10=6.48%, 20=8.26%, 50=15.33% lat (msec) : 100=9.26%, 250=5.65%, 500=0.72%, 750=0.03%, 1000=0.01% cpu : usr=0.30%, sys=0.85%, ctx=572788, majf=0, minf=17 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued : total=r=590611/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=8 cachesize-1g: (groupid=0, jobs=1): err= 0: pid=18952: Thu Aug 3 04:30:08 2017 read : io=2233.4MB, bw=1270.5KB/s, iops=317, runt=1800071msec slat (usec): min=0, max=113, avg=14.63, stdev= 9.18 clat (usec): min=56, max=1153.7K, avg=25167.62, stdev=48500.18 lat (usec): min=64, max=1153.8K, avg=25183.12, stdev=48500.29 clat percentiles (usec): | 1.00th=[ 72], 5.00th=[ 84], 10.00th=[ 99], 20.00th=[ 135], | 30.00th=[ 199], 40.00th=[ 892], 50.00th=[ 2640], 60.00th=[12224], | 70.00th=[21888], 80.00th=[39168], 90.00th=[74240], 95.00th=[117248], | 99.00th=[234496], 99.50th=[288768], 99.90th=[428032], 99.95th=[485376], | 99.99th=[602112] lat (usec) : 100=10.22%, 250=22.53%, 500=4.15%, 750=2.20%, 1000=1.83% lat (msec) : 2=8.43%, 4=1.66%, 10=6.40%, 20=11.41%, 50=15.37% lat (msec) : 100=9.26%, 250=5.69%, 500=0.79%, 750=0.04%, 1000=0.01% lat (msec) : 2000=0.01% cpu : usr=0.32%, sys=0.82%, ctx=551097, majf=0, minf=16 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued : total=r=571725/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=8 Run status group 0 (all jobs): READ: io=4540.4MB, aggrb=2582KB/s, minb=1270KB/s, maxb=1312KB/s, mint=1800071msec, maxt=1800099msec Disk stats (read/write): xvde: ios=1162334/4, merge=0/1, ticks=28774452/692, in_queue=28775512, util=100.00% 1GiB data $ (cd /srv/cache/fio && FIOTEST=cachesize-1g-full fio ~/cachesize-1g.fio) cachesize-1g: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=8 ... 
fio-2.16 Starting 2 processes cachesize-1g: Laying out IO file(s) (1 file(s) / 512MB) cachesize-1g: Laying out IO file(s) (1 file(s) / 512MB) Jobs: 2 (f=2): [r(2)] [100.0% done] [381.4MB/0KB/0KB /s] [97.7K/0/0 iops] [eta 00m:00s] cachesize-1g: (groupid=0, jobs=1): err= 0: pid=31905: Thu Aug 3 01:50:57 2017 read : io=340715MB, bw=193829KB/s, iops=48457, runt=1800001msec slat (usec): min=0, max=4500, avg= 7.09, stdev= 4.48 clat (usec): min=27, max=7408, avg=155.45, stdev=48.00 lat (usec): min=94, max=7415, avg=162.99, stdev=48.04 clat percentiles (usec): | 1.00th=[ 104], 5.00th=[ 112], 10.00th=[ 118], 20.00th=[ 124], | 30.00th=[ 131], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 151], | 70.00th=[ 163], 80.00th=[ 179], 90.00th=[ 211], 95.00th=[ 245], | 99.00th=[ 322], 99.50th=[ 350], 99.90th=[ 414], 99.95th=[ 450], | 99.99th=[ 644] lat (usec) : 50=0.01%, 100=0.21%, 250=95.28%, 500=4.49%, 750=0.01% lat (usec) : 1000=0.01% lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% cpu : usr=14.89%, sys=52.95%, ctx=39879588, majf=0, minf=18 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued : total=r=87222941/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=8 cachesize-1g: (groupid=0, jobs=1): err= 0: pid=31906: Thu Aug 3 01:50:57 2017 read : io=347850MB, bw=197888KB/s, iops=49472, runt=1800001msec slat (usec): min=0, max=4535, avg= 7.06, stdev= 4.60 clat (usec): min=25, max=7437, avg=152.11, stdev=47.15 lat (usec): min=62, max=7443, avg=159.62, stdev=47.23 clat percentiles (usec): | 1.00th=[ 104], 5.00th=[ 111], 10.00th=[ 116], 20.00th=[ 123], | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 139], 60.00th=[ 147], | 70.00th=[ 157], 80.00th=[ 175], 90.00th=[ 203], 95.00th=[ 239], | 99.00th=[ 314], 99.50th=[ 342], 99.90th=[ 410], 99.95th=[ 446], | 99.99th=[ 724] lat (usec) : 50=0.01%, 100=0.26%, 250=95.76%, 500=3.97%, 750=0.01% lat (usec) : 1000=0.01% lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% cpu : usr=15.41%, sys=53.33%, ctx=39452819, majf=0, minf=17 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued : total=r=89049700/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=8 Run status group 0 (all jobs): READ: io=688565MB, aggrb=391716KB/s, minb=193828KB/s, maxb=197888KB/s, mint=1800001msec, maxt=1800001msec Disk stats (read/write): dm-2: ios=176255221/16, merge=0/0, ticks=26526400/152, in_queue=26612320, util=100.00%, aggrios=58763012/5471, aggrmerge=0/0, aggrticks=8781482/98230, aggrin_queue=8904092, aggrutil=100.00% dm-1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=176289036/11, aggrmerge=0/5, aggrticks=26394628/0, aggrin_queue=26430140, aggrutil=100.00% xvdc: ios=176289036/11, merge=0/5, ticks=26394628/0, in_queue=26430140, util=100.00% dm-0: ios=176289036/16, merge=0/0, ticks=26344448/0, in_queue=26417584, util=100.00% dm-3: ios=0/16397, merge=0/0, ticks=0/294692, in_queue=294692, util=1.07%, aggrios=0/16397, aggrmerge=0/0, aggrticks=0/294580, aggrin_queue=294580, aggrutil=1.07% xvdd: ios=0/16397, merge=0/0, ticks=0/294580, in_queue=294580, util=1.07% 2GiB data $ (cd /srv/cache/fio && FIOTEST=cachesize-2g-full fio ~/cachesize-2g.fio) cachesize-2g: (g=0): 
rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=8 ... fio-2.16 Starting 2 processes cachesize-2g: Laying out IO file(s) (1 file(s) / 1024MB) cachesize-2g: Laying out IO file(s) (1 file(s) / 1024MB) Jobs: 2 (f=2): [r(2)] [100.0% done] [382.1MB/0KB/0KB /s] [98.4K/0/0 iops] [eta 00m:00s] cachesize-2g: (groupid=0, jobs=1): err= 0: pid=3134: Wed Aug 2 17:32:04 2017 read : io=343242MB, bw=195266KB/s, iops=48816, runt=1800001msec slat (usec): min=0, max=4825, avg= 6.98, stdev= 4.42 clat (usec): min=26, max=6629, avg=154.48, stdev=47.36 lat (usec): min=93, max=6634, avg=161.93, stdev=47.41 clat percentiles (usec): | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 115], 20.00th=[ 123], | 30.00th=[ 129], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 151], | 70.00th=[ 163], 80.00th=[ 179], 90.00th=[ 207], 95.00th=[ 237], | 99.00th=[ 314], 99.50th=[ 342], 99.90th=[ 418], 99.95th=[ 458], | 99.99th=[ 1020] lat (usec) : 50=0.01%, 100=0.32%, 250=95.80%, 500=3.84%, 750=0.02% lat (usec) : 1000=0.01% lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% cpu : usr=14.32%, sys=52.25%, ctx=39483832, majf=0, minf=16 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued : total=r=87869839/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=8 cachesize-2g: (groupid=0, jobs=1): err= 0: pid=3135: Wed Aug 2 17:32:04 2017 read : io=338592MB, bw=192621KB/s, iops=48155, runt=1800001msec slat (usec): min=0, max=4485, avg= 7.00, stdev= 4.45 clat (usec): min=16, max=365626, avg=156.69, stdev=493.84 lat (usec): min=66, max=365646, avg=164.17, stdev=493.95 clat percentiles (usec): | 1.00th=[ 102], 5.00th=[ 109], 10.00th=[ 114], 20.00th=[ 122], | 30.00th=[ 127], 40.00th=[ 133], 50.00th=[ 141], 60.00th=[ 149], | 70.00th=[ 161], 80.00th=[ 177], 90.00th=[ 205], 95.00th=[ 241], | 99.00th=[ 318], 99.50th=[ 346], 99.90th=[ 426], 99.95th=[ 474], | 99.99th=[12352] lat (usec) : 20=0.01%, 50=0.01%, 100=0.40%, 250=95.46%, 500=4.10% lat (usec) : 750=0.01%, 1000=0.01% lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% lat (msec) : 100=0.01%, 250=0.01%, 500=0.01% cpu : usr=14.08%, sys=51.79%, ctx=38995321, majf=0, minf=15 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% issued : total=r=86679580/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0 latency : target=0, window=0, percentile=100.00%, depth=8 Run status group 0 (all jobs): READ: io=681834MB, aggrb=387887KB/s, minb=192621KB/s, maxb=195266KB/s, mint=1800001msec, maxt=1800001msec Disk stats (read/write): dm-2: ios=174542882/20, merge=0/0, ticks=26569204/1076, in_queue=26658352, util=100.00%, aggrios=58194070/26591, aggrmerge=0/0, aggrticks=8834468/168569, aggrin_queue=9026514, aggrutil=100.00% dm-1: ios=15/46976, merge=0/0, ticks=0/8080, in_queue=8084, util=0.35%, aggrios=174561550/46008, aggrmerge=1/6963, aggrticks=26076884/8636, aggrin_queue=26119484, aggrutil=100.00% xvdc: ios=174561550/46008, merge=1/6963, ticks=26076884/8636, in_queue=26119484, util=100.00% dm-0: ios=174561536/6745, merge=0/0, ticks=26039312/1316, in_queue=26111052, util=100.00% dm-3: ios=20660/26054, merge=0/0, ticks=464092/496312, in_queue=960408, util=5.44%, aggrios=20660/26051, aggrmerge=0/3, aggrticks=464080/495444, 
aggrin_queue=959516, aggrutil=5.44%
  xvdd: ios=20660/26051, merge=0/3, ticks=464080/495444, in_queue=959516, util=5.44%

4GiB data

$ (cd /srv/cache/fio && FIOTEST=cachesize-4g-full fio ~/cachesize-4g.fio)
cachesize-4g: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=8
...
fio-2.16
Starting 2 processes
cachesize-4g: Laying out IO file(s) (1 file(s) / 2048MB)
cachesize-4g: Laying out IO file(s) (1 file(s) / 2048MB)
Jobs: 2 (f=2): [r(2)] [100.0% done] [375.8MB/0KB/0KB /s] [96.2K/0/0 iops] [eta 00m:00s]
cachesize-4g: (groupid=0, jobs=1): err= 0: pid=26795: Wed Aug 2 16:22:51 2017
  read : io=249614MB, bw=142002KB/s, iops=35500, runt=1800001msec
    slat (usec): min=0, max=4469, avg= 7.00, stdev= 4.50
    clat (usec): min=9, max=1363.9K, avg=215.84, stdev=3099.68
     lat (usec): min=65, max=1363.9K, avg=223.29, stdev=3099.98
    clat percentiles (usec):
     |  1.00th=[  103],  5.00th=[  109], 10.00th=[  114], 20.00th=[  121],
     | 30.00th=[  127], 40.00th=[  133], 50.00th=[  139], 60.00th=[  147],
     | 70.00th=[  159], 80.00th=[  177], 90.00th=[  211], 95.00th=[  255],
     | 99.00th=[  354], 99.50th=[  402], 99.90th=[ 4384], 99.95th=[35072],
     | 99.99th=[142336]
    lat (usec) : 10=0.01%, 50=0.01%, 100=0.30%, 250=94.32%, 500=5.21%
    lat (usec) : 750=0.06%, 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.02%, 50=0.03%
    lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 2000=0.01%
  cpu          : usr=10.92%, sys=38.72%, ctx=29995970, majf=0, minf=16
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=63901149/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=8
cachesize-4g: (groupid=0, jobs=1): err= 0: pid=26796: Wed Aug 2 16:22:51 2017
  read : io=239913MB, bw=136484KB/s, iops=34120, runt=1800001msec
    slat (usec): min=0, max=4613, avg= 7.08, stdev= 4.64
    clat (usec): min=20, max=1315.4K, avg=224.86, stdev=3340.78
     lat (usec): min=67, max=1315.4K, avg=232.39, stdev=3341.07
    clat percentiles (usec):
     |  1.00th=[  103],  5.00th=[  110], 10.00th=[  116], 20.00th=[  122],
     | 30.00th=[  127], 40.00th=[  133], 50.00th=[  139], 60.00th=[  147],
     | 70.00th=[  157], 80.00th=[  177], 90.00th=[  219], 95.00th=[  274],
     | 99.00th=[  390], 99.50th=[  442], 99.90th=[10304], 99.95th=[39680],
     | 99.99th=[154624]
    lat (usec) : 50=0.01%, 100=0.27%, 250=92.97%, 500=6.52%, 750=0.12%
    lat (usec) : 1000=0.01%
    lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.02%, 50=0.04%
    lat (msec) : 100=0.02%, 250=0.02%, 500=0.01%, 750=0.01%, 1000=0.01%
    lat (msec) : 2000=0.01%
  cpu          : usr=10.28%, sys=37.64%, ctx=29036049, majf=0, minf=14
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=61417783/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: io=489527MB, aggrb=278486KB/s, minb=136483KB/s, maxb=142002KB/s, mint=1800001msec, maxt=1800001msec

Disk stats (read/write):
    dm-2: ios=125296525/29, merge=0/0, ticks=27228000/6004, in_queue=27295900, util=100.00%, aggrios=41794883/193226, aggrmerge=0/0, aggrticks=9825693/38738, aggrin_queue=9881421, aggrutil=84.96%
    dm-1: ios=86/514016, merge=0/0, ticks=16/61332, in_queue=61376, util=2.08%, aggrios=125186978/561183, aggrmerge=0/17273, aggrticks=18938212/71188, aggrin_queue=19034916, aggrutil=85.91%
  xvdc: ios=125186978/561183, merge=0/17273, ticks=18938212/71188, in_queue=19034916, util=85.91%
    dm-0: ios=125186892/64571, merge=0/0, ticks=18904024/11972, in_queue=18966848, util=84.96%
    dm-3: ios=197671/1091, merge=0/0, ticks=10573040/42912, in_queue=10616040, util=33.53%, aggrios=197671/1084, aggrmerge=0/7, aggrticks=10573020/39132, aggrin_queue=10612148, aggrutil=33.53%
  xvdd: ios=197671/1084, merge=0/7, ticks=10573020/39132, in_queue=10612148, util=33.53%

8GiB data

$ (cd /srv/cache/fio && FIOTEST=cachesize-8g-full fio ~/cachesize-8g.fio)
cachesize-8g: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=8
...
fio-2.16
Starting 2 processes
cachesize-8g: Laying out IO file(s) (1 file(s) / 4096MB)
cachesize-8g: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 2 (f=2): [r(2)] [100.0% done] [18048KB/0KB/0KB /s] [4512/0/0 iops] [eta 00m:00s]
cachesize-8g: (groupid=0, jobs=1): err= 0: pid=15599: Wed Aug 2 14:53:25 2017
  read : io=13260MB, bw=7542.7KB/s, iops=1885, runt=1800146msec
    slat (usec): min=0, max=385, avg= 8.25, stdev= 5.69
    clat (usec): min=9, max=1516.6K, avg=4231.66, stdev=26655.31
     lat (usec): min=69, max=1516.6K, avg=4240.38, stdev=26657.41
    clat percentiles (usec):
     |  1.00th=[  101],  5.00th=[  104], 10.00th=[  105], 20.00th=[  107],
     | 30.00th=[  109], 40.00th=[  116], 50.00th=[  119], 60.00th=[  121],
     | 70.00th=[  122], 80.00th=[  124], 90.00th=[  159], 95.00th=[16064],
     | 99.00th=[113152], 99.50th=[175104], 99.90th=[358400], 99.95th=[456704],
     | 99.99th=[700416]
    lat (usec) : 10=0.01%, 100=0.45%, 250=92.98%, 500=0.41%, 750=0.05%
    lat (usec) : 1000=0.04%
    lat (msec) : 2=0.08%, 4=0.01%, 10=0.22%, 20=1.29%, 50=2.04%
    lat (msec) : 100=1.25%, 250=0.93%, 500=0.21%, 750=0.03%, 1000=0.01%
    lat (msec) : 2000=0.01%
  cpu          : usr=0.95%, sys=2.94%, ctx=3521460, majf=0, minf=17
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=3394445/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=8
cachesize-8g: (groupid=0, jobs=1): err= 0: pid=15600: Wed Aug 2 14:53:25 2017
  read : io=13886MB, bw=7899.2KB/s, iops=1974, runt=1800112msec
    slat (usec): min=0, max=2743, avg= 8.26, stdev= 5.84
    clat (usec): min=2, max=1515.1K, avg=4040.21, stdev=24950.67
     lat (usec): min=69, max=1515.1K, avg=4048.94, stdev=24952.83
    clat percentiles (usec):
     |  1.00th=[  100],  5.00th=[  103], 10.00th=[  105], 20.00th=[  107],
     | 30.00th=[  108], 40.00th=[  110], 50.00th=[  118], 60.00th=[  120],
     | 70.00th=[  121], 80.00th=[  123], 90.00th=[  157], 95.00th=[15552],
     | 99.00th=[109056], 99.50th=[166912], 99.90th=[333824], 99.95th=[419840],
     | 99.99th=[634880]
    lat (usec) : 4=0.01%, 100=0.56%, 250=92.98%, 500=0.31%, 750=0.05%
    lat (usec) : 1000=0.05%
    lat (msec) : 2=0.07%, 4=0.01%, 10=0.24%, 20=1.33%, 50=2.04%
    lat (msec) : 100=1.25%, 250=0.91%, 500=0.19%, 750=0.02%, 1000=0.01%
    lat (msec) : 2000=0.01%
  cpu          : usr=0.94%, sys=3.09%, ctx=3684550, majf=0, minf=15
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=3554777/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: io=27145MB, aggrb=15441KB/s, minb=7542KB/s, maxb=7899KB/s, mint=1800112msec, maxt=1800146msec

Disk stats (read/write):
    dm-2: ios=6948764/45, merge=0/0, ticks=28718292/11380, in_queue=28731160, util=100.00%, aggrios=2383761/603042, aggrmerge=0/0, aggrticks=12202621/76005, aggrin_queue=12278754, aggrutil=100.00%
    dm-1: ios=48/1607066, merge=0/0, ticks=4/178612, in_queue=178652, util=5.79%, aggrios=6527499/1756743, aggrmerge=0/51937, aggrticks=781204/210664, aggrin_queue=991672, aggrutil=41.93%
  xvdc: ios=6527499/1756743, merge=0/51937, ticks=781204/210664, in_queue=991672, util=41.93%
    dm-0: ios=6527451/202019, merge=0/0, ticks=780348/38028, in_queue=818476, util=38.13%
    dm-3: ios=623784/41, merge=0/0, ticks=35827512/11376, in_queue=35839136, util=100.00%, aggrios=623784/10, aggrmerge=0/31, aggrticks=35827612/2744, aggrin_queue=35830320, aggrutil=100.00%
  xvdd: ios=623784/10, merge=0/31, ticks=35827612/2744, in_queue=35830320, util=100.00%

16GiB data

$ (cd /srv/cache/fio && FIOTEST=cachesize-16g-full fio ~/cachesize-16g.fio)
cachesize-16g: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=8
...
fio-2.16
Starting 2 processes
cachesize-16g: Laying out IO file(s) (1 file(s) / 8192MB)
cachesize-16g: Laying out IO file(s) (1 file(s) / 8192MB)
Jobs: 2 (f=2): [r(2)] [100.0% done] [8288KB/0KB/0KB /s] [2072/0/0 iops] [eta 00m:00s]
cachesize-16g: (groupid=0, jobs=1): err= 0: pid=2957: Wed Aug 2 13:13:01 2017
  read : io=5417.5MB, bw=3081.9KB/s, iops=770, runt=1800060msec
    slat (usec): min=0, max=264, avg=10.14, stdev= 6.63
    clat (usec): min=64, max=1972.9K, avg=10370.01, stdev=35718.97
     lat (usec): min=71, max=1972.9K, avg=10380.72, stdev=35719.88
    clat percentiles (usec):
     |  1.00th=[  102],  5.00th=[  105], 10.00th=[  107], 20.00th=[  116],
     | 30.00th=[  119], 40.00th=[  121], 50.00th=[  122], 60.00th=[  131],
     | 70.00th=[  143], 80.00th=[  165], 90.00th=[31104], 95.00th=[65280],
     | 99.00th=[173056], 99.50th=[228352], 99.90th=[374784], 99.95th=[440320],
     | 99.99th=[618496]
    lat (usec) : 100=0.28%, 250=82.96%, 500=0.10%, 750=0.04%, 1000=0.03%
    lat (msec) : 2=0.06%, 4=0.02%, 10=0.38%, 20=3.05%, 50=6.44%
    lat (msec) : 100=3.82%, 250=2.43%, 500=0.36%, 750=0.02%, 1000=0.01%
    lat (msec) : 2000=0.01%
  cpu          : usr=0.40%, sys=1.39%, ctx=1393813, majf=0, minf=16
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1386875/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=8
cachesize-16g: (groupid=0, jobs=1): err= 0: pid=2958: Wed Aug 2 13:13:01 2017
  read : io=6208.7MB, bw=3531.9KB/s, iops=882, runt=1800089msec
    slat (usec): min=0, max=437, avg=10.21, stdev= 6.74
    clat (usec): min=1, max=1988.1K, avg=9046.94, stdev=31703.98
     lat (usec): min=71, max=1989.2K, avg=9057.72, stdev=31704.93
    clat percentiles (usec):
     |  1.00th=[  101],  5.00th=[  104], 10.00th=[  105], 20.00th=[  107],
     | 30.00th=[  112], 40.00th=[  119], 50.00th=[  121], 60.00th=[  125],
     | 70.00th=[  137], 80.00th=[  157], 90.00th=[27776], 95.00th=[57088],
     | 99.00th=[152576], 99.50th=[201728], 99.90th=[329728], 99.95th=[395264],
     | 99.99th=[577536]
    lat (usec) : 2=0.01%, 50=0.01%, 100=0.41%, 250=83.62%, 500=0.09%
    lat (usec) : 750=0.04%, 1000=0.03%
    lat (msec) : 2=0.05%, 4=0.01%, 10=0.39%, 20=3.06%, 50=6.51%
    lat (msec) : 100=3.52%, 250=2.00%, 500=0.25%, 750=0.01%, 1000=0.01%
    lat (msec) : 2000=0.01%
  cpu          : usr=0.53%, sys=1.56%, ctx=1597691, majf=0, minf=15
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1589415/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: io=11626MB, aggrb=6613KB/s, minb=3081KB/s, maxb=3531KB/s, mint=1800060msec, maxt=1800089msec

Disk stats (read/write):
    dm-2: ios=2976172/77, merge=0/0, ticks=28759760/45044, in_queue=28805924, util=100.00%, aggrios=998518/56115, aggrmerge=0/0, aggrticks=9860773/22645, aggrin_queue=9883437, aggrutil=100.00%
    dm-1: ios=88/149089, merge=0/0, ticks=4/18444, in_queue=18452, util=0.58%, aggrios=2489913/161345, aggrmerge=0/6821, aggrticks=314936/21908, aggrin_queue=336800, aggrutil=17.38%
  xvdc: ios=2489913/161345, merge=0/6821, ticks=314936/21908, in_queue=336800, util=17.38%
    dm-0: ios=2489825/19183, merge=0/0, ticks=314816/4448, in_queue=319308, util=16.88%
    dm-3: ios=505643/74, merge=0/0, ticks=29267500/45044, in_queue=29312552, util=100.00%, aggrios=505643/12, aggrmerge=0/62, aggrticks=29267500/10580, aggrin_queue=29278060, aggrutil=100.00%
  xvdd: ios=505643/12, merge=0/62, ticks=29267500/10580, in_queue=29278060, util=100.00%

32GiB data

$ (cd /srv/cache/fio && FIOTEST=cachesize-32g-full fio ~/cachesize-32g.fio)
cachesize-32g: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=8
...
fio-2.16
Starting 2 processes
cachesize-32g: Laying out IO file(s) (1 file(s) / 16384MB)
cachesize-32g: Laying out IO file(s) (1 file(s) / 16384MB)
Jobs: 2 (f=2): [r(2)] [100.0% done] [4624KB/0KB/0KB /s] [1156/0/0 iops] [eta 00m:00s]
cachesize-32g: (groupid=0, jobs=1): err= 0: pid=23915: Wed Aug 2 11:42:12 2017
  read : io=3378.9MB, bw=1922.5KB/s, iops=480, runt=1800108msec
    slat (usec): min=0, max=152, avg=11.84, stdev= 9.33
    clat (usec): min=61, max=1523.8K, avg=16633.48, stdev=52554.00
     lat (usec): min=68, max=1523.8K, avg=16645.89, stdev=52556.11
    clat percentiles (usec):
     |  1.00th=[  100],  5.00th=[  104], 10.00th=[  106], 20.00th=[  108],
     | 30.00th=[  117], 40.00th=[  121], 50.00th=[  123], 60.00th=[  133],
     | 70.00th=[  147], 80.00th=[ 8768], 90.00th=[49920], 95.00th=[104960],
     | 99.00th=[257024], 99.50th=[333824], 99.90th=[536576], 99.95th=[626688],
     | 99.99th=[880640]
    lat (usec) : 100=0.57%, 250=78.79%, 500=0.19%, 750=0.06%, 1000=0.05%
    lat (msec) : 2=0.09%, 4=0.02%, 10=0.47%, 20=2.96%, 50=6.82%
    lat (msec) : 100=4.70%, 250=4.22%, 500=0.94%, 750=0.11%, 1000=0.02%
    lat (msec) : 2000=0.01%
  cpu          : usr=0.34%, sys=0.87%, ctx=916902, majf=0, minf=18
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=864974/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=8
cachesize-32g: (groupid=0, jobs=1): err= 0: pid=23916: Wed Aug 2 11:42:12 2017
  read : io=5022.5MB, bw=2857.5KB/s, iops=714, runt=1800090msec
    slat (usec): min=0, max=173, avg=11.35, stdev= 9.15
    clat (usec): min=62, max=1768.2K, avg=11185.81, stdev=48075.30
     lat (usec): min=70, max=1768.2K, avg=11197.69, stdev=48077.75
    clat percentiles (usec):
     |  1.00th=[  101],  5.00th=[  104], 10.00th=[  106], 20.00th=[  108],
     | 30.00th=[  115], 40.00th=[  120], 50.00th=[  122], 60.00th=[  125],
     | 70.00th=[  137], 80.00th=[  153], 90.00th=[26240], 95.00th=[63744],
     | 99.00th=[220160], 99.50th=[309248], 99.90th=[585728], 99.95th=[733184],
     | 99.99th=[1122304]
    lat (usec) : 100=0.46%, 250=85.40%, 500=0.22%, 750=0.06%, 1000=0.04%
    lat (msec) : 2=0.07%, 4=0.02%, 10=0.25%, 20=2.04%, 50=5.27%
    lat (msec) : 100=2.99%, 250=2.39%, 500=0.62%, 750=0.12%, 1000=0.03%
    lat (msec) : 2000=0.02%
  cpu          : usr=0.50%, sys=1.27%, ctx=1372044, majf=0, minf=17
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1285733/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: io=8401.3MB, aggrb=4779KB/s, minb=1922KB/s, maxb=2857KB/s, mint=1800090msec, maxt=1800108msec

Disk stats (read/write):
    dm-2: ios=2150707/143, merge=0/0, ticks=28767744/34836, in_queue=28804592, util=100.00%, aggrios=751272/286183, aggrmerge=0/0, aggrticks=11166794/101613, aggrin_queue=11268450, aggrutil=100.00%
    dm-1: ios=62/755357, merge=0/0, ticks=12/85084, in_queue=85108, util=2.67%, aggrios=1798588/823298, aggrmerge=494/28038, aggrticks=232996/102144, aggrin_queue=335068, aggrutil=15.62%
  xvdc: ios=1798588/823298, merge=494/28038, ticks=232996/102144, in_queue=335068, util=15.62%
    dm-0: ios=1799020/96389, merge=0/0, ticks=234124/20548, in_queue=254704, util=13.23%
    dm-3: ios=454734/6803, merge=0/0, ticks=33266248/199208, in_queue=33465540, util=100.00%, aggrios=454734/6184, aggrmerge=0/619, aggrticks=33266152/97928, aggrin_queue=33364124, aggrutil=100.00%
  xvdd: ios=454734/6184, merge=0/619, ticks=33266152/97928, in_queue=33364124, util=100.00%

48GiB data

$ (cd /srv/cache/fio && FIOTEST=cachesize-48g-full fio ~/cachesize-48g.fio)
cachesize-48g: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=8
...
fio-2.16
Starting 2 processes
cachesize-48g: Laying out IO file(s) (1 file(s) / 24576MB)
cachesize-48g: Laying out IO file(s) (1 file(s) / 24576MB)
Jobs: 2 (f=2): [r(2)] [100.0% done] [4084KB/0KB/0KB /s] [1021/0/0 iops] [eta 00m:00s]
cachesize-48g: (groupid=0, jobs=1): err= 0: pid=11765: Tue Aug 1 12:02:44 2017
  read : io=3108.7MB, bw=1768.4KB/s, iops=442, runt=1800134msec
    slat (usec): min=0, max=75057, avg=12.49, stdev=100.40
    clat (usec): min=61, max=7717.4K, avg=18080.36, stdev=79780.10
     lat (usec): min=70, max=7717.4K, avg=18093.41, stdev=79783.33
    clat percentiles (usec):
     |  1.00th=[  101],  5.00th=[  104], 10.00th=[  106], 20.00th=[  108],
     | 30.00th=[  115], 40.00th=[  120], 50.00th=[  124], 60.00th=[  133],
     | 70.00th=[  145], 80.00th=[  197], 90.00th=[44288], 95.00th=[103936],
     | 99.00th=[296960], 99.50th=[419840], 99.90th=[1028096], 99.95th=[1269760],
     | 99.99th=[2179072]
    lat (usec) : 100=0.49%, 250=81.20%, 500=0.22%, 750=0.05%, 1000=0.04%
    lat (msec) : 2=0.07%, 4=0.03%, 10=0.34%, 20=2.26%, 50=6.08%
    lat (msec) : 100=4.04%, 250=3.79%, 500=1.03%, 750=0.19%, 1000=0.07%
    lat (msec) : 2000=0.09%, >=2000=0.01%
  cpu          : usr=0.37%, sys=0.78%, ctx=846462, majf=0, minf=17
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=795801/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=8
cachesize-48g: (groupid=0, jobs=1): err= 0: pid=11766: Tue Aug 1 12:02:44 2017
  read : io=3181.9MB, bw=1809.1KB/s, iops=452, runt=1800123msec
    slat (usec): min=0, max=177, avg=12.61, stdev=10.17
    clat (usec): min=51, max=4963.9K, avg=17663.67, stdev=73795.84
     lat (usec): min=68, max=4963.1K, avg=17676.84, stdev=73797.60
    clat percentiles (usec):
     |  1.00th=[  101],  5.00th=[  105], 10.00th=[  107], 20.00th=[  110],
     | 30.00th=[  118], 40.00th=[  122], 50.00th=[  129], 60.00th=[  137],
     | 70.00th=[  151], 80.00th=[ 9664], 90.00th=[48384], 95.00th=[99840],
     | 99.00th=[259072], 99.50th=[354304], 99.90th=[978944], 99.95th=[1269760],
     | 99.99th=[2179072]
    lat (usec) : 100=0.47%, 250=78.72%, 500=0.21%, 750=0.06%, 1000=0.05%
    lat (msec) : 2=0.09%, 4=0.03%, 10=0.43%, 20=2.92%, 50=7.31%
    lat (msec) : 100=4.70%, 250=3.92%, 500=0.82%, 750=0.12%, 1000=0.05%
    lat (msec) : 2000=0.08%, >=2000=0.02%
  cpu          : usr=0.40%, sys=0.80%, ctx=867597, majf=0, minf=16
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=814542/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=8

Run status group 0 (all jobs):
   READ: io=6290.5MB, aggrb=3578KB/s, minb=1768KB/s, maxb=1809KB/s, mint=1800123msec, maxt=1800134msec

Disk stats (read/write):
    dm-2: ios=1610343/207, merge=0/0, ticks=28775436/83448, in_queue=28860328, util=100.00%, aggrios=562278/204739, aggrmerge=0/0, aggrticks=11044750/380021, aggrin_queue=11424792, aggrutil=100.00%
    dm-1: ios=53/537570, merge=0/0, ticks=4/63980, in_queue=64000, util=1.97%, aggrios=1303106/589473, aggrmerge=993/17280, aggrticks=184484/78092, aggrin_queue=262524, aggrutil=11.76%
  xvdc: ios=1303106/589473, merge=993/17280, ticks=184484/78092, in_queue=262524, util=11.76%
    dm-0: ios=1304046/69309, merge=0/0, ticks=188420/16684, in_queue=205128, util=9.97%
    dm-3: ios=382737/7340, merge=0/0, ticks=32945828/1059400, in_queue=34005248, util=100.00%, aggrios=382737/6163, aggrmerge=0/1177, aggrticks=32945132/641880, aggrin_queue=33587056, aggrutil=100.00%
  xvdd: ios=382737/6163, merge=0/1177, ticks=32945132/641880, in_queue=33587056, util=100.00%
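
The job files themselves aren't reproduced here, but their parameters can be read off the output: random 4k reads through libaio at an iodepth of 8, two processes, each working on its own file of half the stated data size, run time_based for 30 minutes. A cachesize-4g.fio along the following lines would produce that shape of run; it is only a reconstruction from the output above, not the actual file, and the direct=1 setting and the way the FIOTEST variable is consumed are guesses.

; Hypothetical reconstruction of ~/cachesize-4g.fio, inferred from the output
; above.  FIOTEST is set on the command line and presumably referenced as
; ${FIOTEST} somewhere in the real file (for log names, perhaps); that is a guess.
[cachesize-4g]
directory=/srv/cache/fio
rw=randread
bs=4k
ioengine=libaio
iodepth=8
; assumption: O_DIRECT so the dm-cache layer, not the page cache, is measured
direct=1
; one 2048MB file per process, two processes = 4GiB of data in total
size=2048m
numjobs=2
time_based=1
; 30 minutes, matching runt=1800001msec in the output
runtime=1800

The other sizes would then differ only in the job name and the size= value.
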
Categories: LUG Community Blogs

Steve Kemp: So I did a thing, then another thing.

Wed, 02/08/2017 - 21:00

So I did start a project to write a puppet-dashboard. It is functionally complete, but the next step is to allow me to raise alerts based on failing runs of puppet, in real time.

(i.e. now that I have a dashboard, I don't actually want to have to use it. I want to be alerted to failures without having to remember to go and look for them, which is something puppet-dashboard can't do.)
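
As a rough illustration of the sort of check that gets you alerts rather than a page to remember to visit, something like the following could sit on each node (or be pointed at collected report data) and be wired into cron or an existing monitoring system. It is only a sketch, not the dashboard's code: the summary-file path is the Puppet 4+ default and differs on older installs, the two-hour staleness threshold is arbitrary, and the print stands in for whatever paging/mail/chat integration is actually in use.

#!/usr/bin/env python3
"""Sketch: complain when the last puppet run failed or hasn't happened lately.

Assumes PyYAML is installed and that the agent writes its usual
last_run_summary.yaml; the path below is the Puppet 4+ default.
"""
import sys
import time

import yaml

SUMMARY = "/opt/puppetlabs/puppet/cache/state/last_run_summary.yaml"
MAX_AGE = 2 * 60 * 60  # alert if no run for two hours (arbitrary threshold)


def main() -> int:
    with open(SUMMARY) as fh:
        summary = yaml.safe_load(fh)

    failed = summary.get("resources", {}).get("failed", 0)
    failed_events = summary.get("events", {}).get("failure", 0)
    last_run = summary.get("time", {}).get("last_run", 0)

    problems = []
    if failed or failed_events:
        problems.append("last puppet run had %d failed resources" % failed)
    age = int(time.time() - last_run)
    if age > MAX_AGE:
        problems.append("no puppet run for %d seconds" % age)

    if problems:
        # In real use this would page, mail or post to chat rather than print.
        print("ALERT: " + "; ".join(problems))
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
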

In other news, a while back I slipped in a casual note about having a brain scan done here in sunny Helsinki.

One of the cool things about that experience, in addition to being told I wasn't going to drop dead that particular day, was that the radiologist told me that I could pay €25 to get a copy of my brain data in DICOM format.

I've not yet played with this very much, but I couldn't resist a brief animation:

  • See my brain.
    • Not the best quality or the best detail, but damn. It is my brain.
    • I shall do better with more experimentation, I think.
    • After I posted it, my wife, a doctor, corrected me: that wasn't a GIF of my brain, it was a GIF of my skull. D'oh!
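
For anyone else picking up their own scan data, going from a directory of DICOM slices to an animated GIF only takes a few lines. The sketch below is not the exact process used for the animation above: it assumes the pydicom, numpy and imageio packages are installed, that the slices live in a dicom/ directory and carry an InstanceNumber tag to sort on, and it skips the windowing and orientation fiddling a proper viewer would do.

#!/usr/bin/env python3
"""Sketch: turn a directory of DICOM slices into an animated GIF."""
import glob

import imageio
import numpy as np
import pydicom

SLICE_DIR = "dicom/"   # wherever the scan data was copied to (assumption)
OUTPUT = "brain.gif"


def to_uint8(pixels: np.ndarray) -> np.ndarray:
    """Scale a slice's raw pixel values into the 0-255 range for GIF output."""
    pixels = pixels.astype(np.float32)
    pixels -= pixels.min()
    if pixels.max() > 0:
        pixels *= 255.0 / pixels.max()
    return pixels.astype(np.uint8)


def main() -> None:
    slices = [pydicom.dcmread(path) for path in glob.glob(SLICE_DIR + "*.dcm")]
    # Order the slices by their position in the series before animating.
    slices.sort(key=lambda ds: int(ds.InstanceNumber))
    frames = [to_uint8(ds.pixel_array) for ds in slices]
    imageio.mimsave(OUTPUT, frames, duration=0.06)  # roughly 16 frames/second


if __name__ == "__main__":
    main()
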
Categories: LUG Community Blogs