News aggregator

Chris Lamb: Free software activities in January 2017

Planet ALUG - Tue, 31/01/2017 - 09:54

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • Created github-sync, a tool to mirror arbitrary repositories onto GitHub.
  • Submitted two pull requests to the word-wrap Chrome browser extension that adds the ability to wrap text via the right-click context menu:
    • Support dynamically-added <textarea> elements in "rich" Javascript applications such as mail clients, etc. (#2)
    • Avoid an error message if no "editable" has been selected yet. (#1)
  • Submitted a pull request to wordwarvi (a "retro-styled old school side-scrolling shooter") to ensure the build is reproducible. (#5)
  • Filed a pull request with the yard Ruby documentation tool to ensure the generated output is reproducible. (#1048)
  • Made some improvements to travis.debian.net, my hosted service that lets projects hosting their Debian packaging on GitHub use the Travis CI continuous integration platform to test builds on every code change:
    • Merged a pull request from Evgeni Golov to allow for skipped tests. (#39)
    • Add logging when running autopkgtests. (commit)
  • Merged a pull request from jwilk for python-fadvise, my Python interface to the posix_fadvise(2) system call for predeclaring a pattern of access to data. (#6)
  • Filed an issue against the redis key-value database regarding build failures on non-x86 architectures. (#3768)
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

(I have previously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.)

This month I:

I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Comparators:
    • Display magic file type when we know the file format but can't find file-specific details. (Closes: #850850).
    • Ensure our "APK metadata" file appears first, fixing non-deterministic tests. (998b288)
    • Fix APK extraction with absolute filenames. (Closes: #850485).
    • Don't error if directory containing ELF debug symbols already exists. (Closes: #850807).
    • Support comparing .ico files (Closes: #850730).
    • If we don't have a tool (e.g. apktool) installed, don't blow up when attempting to unpack the file.
  • Output formats:
    • Add Markdown output format. (Closes: #848141).
    • Add reStructuredText output format.
    • Use an optimised indentation routine throughout all presenters.
    • Move text presenter to use the Visitor pattern.
    • Correctly escape value of href="" elements (re. #849411).
  • Tests:
    • Prevent FTBFS by loading fixtures as UTF-8 in case surrounding terminal is not Unicode-aware. (Closes: #852926).
    • Skip tests if binutils can't handle the object file format. (Closes: #851588).
    • Actually compare the output of text/ReST/Markdown formats to fixtures.
    • Add tests for: comparing two empty directories, HTML output, image.ICOImageFile, --html-dir, --text-color, and that passing no arguments (beyond the filenames) emits the text output.
  • Profiling:
    • Count the number of calls, not just the total time.
    • Skip as much profiling overhead as possible when not enabled, for a ~2% speedup.
  • Misc:
    • Alias an expensive Config() lookup for a 10% optimisation.
    • Avoid expensive regex creation until we actually need it, speeding up diff parsing by 2X.
    • Use Pythonic logging functions based on __name__, etc.
    • Drop milliseconds from logging output.

buildinfo.debian.net

buildinfo.debian.net is my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.

  • Store files directly onto S3.
  • Drop big unique_together index to save disk space.
  • Show SHA256 checksums where space permits.

Debian LTS

This month I have been paid to work 12.75 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 773-1 for python-crypto fixing a vulnerability where calling AES.new with an invalid parameter could crash the Python interpreter.
  • Issued DLA 777-1 for libvncserver addressing two heap-based buffer overflow attacks based on invalid FramebufferUpdate data.
  • Issued DLA 778-1 for pcsc-lite correcting a use-after-free vulnerability.
  • Issued DLA 795-1 for hesiod which fixed a weak SUID check as well as removed the hard-coding of a fallback domain if the configuration file could not be found.
  • Issued DLA 810-1 for libarchive fixing a heap buffer overflow.
Uploads
  • python-django:
    • 1:1.10.5-1 — New upstream stable release.
    • 1:1.11~alpha1-1 — New upstream experimental release.
  • gunicorn (19.6.0-10) — Moved debian/README.Debian to debian/NEWS so that the recent important changes will be displayed to users when upgrading to stretch.
  • redis:
    • 3:3.2.6-2 & 4:4.0-rc2-2 — Tidy patches and rename RunTimeDirectory to RuntimeDirectory in .service files. (Closes: #850534)
    • 3:3.2.6-3 — Remove a duplicate redis-server binary by symlinking /usr/bin/redis-check-rdb. This was found by the dedup service.
    • 3:3.2.6-4 — Expand the documentation in redis-server.service and redis-sentinel.service regarding the default hardening options and how, in most installations, they can be increased.
    • 3:3.2.6-5, 3:3.2.6-6, 4:4.0-rc2-3 & 4:4.0-rc2-4 — Add taskset calls to try and avoid build failures due to parallelism in upstream test suite.

I also made the following non-maintainer uploads:

  • cpio:
    • 2.12+dfsg-1 — New upstream release (to experimental), refreshing all patches, etc.
    • 2.12+dfsg-2 — Add missing autoconf to Build-Depends.
  • xjump (2.7.5-6.2) — Make the build reproducible by passing -n to gzip calls in debian/rules. (Closes: #777354)
  • magicfilter (1.2-64.1) — Make the build reproducible by passing -n to gzip calls in debian/rules. (Closes: #777478)
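For background on why passing -n to gzip makes a build reproducible: by default gzip embeds the input file's modification time (and name) in the archive header, so byte-identical inputs compressed at different times yield differing output. A quick illustration (hypothetical file names, run in a scratch directory):

```shell
cd "$(mktemp -d)"
printf 'same content\n' > a.txt
printf 'same content\n' > b.txt
touch -d '2001-01-01' a.txt     # give the two files different mtimes
touch -d '2015-06-06' b.txt

gzip -c  a.txt > a.gz;  gzip -c  b.txt > b.gz    # timestamp (and name) stored
gzip -nc a.txt > an.gz; gzip -nc b.txt > bn.gz   # -n: neither stored

cmp -s a.gz  b.gz  || echo "without -n: outputs differ"
cmp -s an.gz bn.gz && echo "with -n: outputs identical"
```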
Debian bugs filed

RC bugs

I also filed 16 FTBFS bugs against bzr-git, coq-highschoolgeometry, eclipse-anyedit, eclipse-gef, libmojolicious-plugin-assetpack-perl, lua-curl, node-liftoff, node-liftoff, octave-msh, pcb2gcode, qtile, rt-authen-externalauth, ruby-hamster, ruby-sshkit, tika & txfixtures.

FTP Team

As a Debian FTP assistant I ACCEPTed 35 packages: chromium-browser, debichem, flask-limiter, golang-github-golang-leveldb, golang-github-nebulouslabs-demotemutex, golang-github-nwidger-jsoncolor, libatteanx-endpoint-perl, libproc-guard-perl, libsub-quote-perl, libtest-mojibake-perl, libytnef, linux, lua-sql, node-graceful-readlink, node-invariant, node-rollup, node-socket.io-parser, node-timed-out, olefile, packaging-tutorial, pgrouting, pyparallel, python-coards, python-django-tagging, python-graphviz, python-irc, python-mechanicalsoup, python-persistent, python-scandir, python-stopit, r-cran-zelig, ruby-ast, ruby-whitequark-parser, sagetex & u-boot-menu.

Categories: LUG Community Blogs

Debian Bits: Savoir-faire Linux Platinum Sponsor of DebConf17

Planet HantsLUG - Mon, 30/01/2017 - 17:50

We are very pleased to announce that Savoir-faire Linux has committed support to DebConf17 as a Platinum sponsor.

"Debian acts as a model for both Free Software and developer communities. Savoir-faire Linux promotes both vision and values of Debian. Indeed, we believe that it's an essential piece, in a social and political way, to the freedom of users using modern technological systems", said Cyrille Béraud, president of Savoir-faire Linux.

Savoir-faire Linux is a Montreal-based Free/Open-Source Software company with offices in Quebec City, Toronto, Paris and Lyon. It offers Linux and Free Software integration solutions in order to provide performance, flexibility and independence for its clients. The company actively contributes to many free software projects, and provides mirrors of Debian, Ubuntu, Linux and others.

Savoir-faire Linux was present at DebConf16 with a talk about Ring, its GPL-licensed secure and distributed communication system. The Ring package was accepted into Debian testing during DebCamp in 2016 and will be part of Debian Stretch. OpenDHT, the distributed hash table implementation used by Ring, also appeared in Debian experimental during the last DebConf.

With this commitment as Platinum Sponsor, Savoir-faire Linux helps make our annual conference possible, and directly supports the progress of Debian and Free Software, strengthening the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Savoir-faire Linux, for your support of DebConf17!

Become a sponsor too!

DebConf17 is still accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf17 website at http://debconf17.debconf.org.

Categories: LUG Community Blogs

Mick Morgan: variable substitution – redux

Planet ALUG - Mon, 30/01/2017 - 15:40

Back in October last year, I posted a note about the usage of variable substitution in lighttpd’s configuration files. In fact I got that post very slightly wrong (now corrected) in that I showed the test I applied in the file as: $HTTP["remoteip"] !~ "12.34.56.78". (Note the “!~” when I should have used “!=”.) This works, in that it would limit access, but it is subtly wrong because it does not limit access in quite the way I intended. I only noticed this when I later came to change the variable assignment to allow access from three separate IP addresses (on which more later) rather than just one.

The “!~” operator is a perl-style regular expression “not match”, whilst the “!=” operator is the stricter string “not equal” comparison. This matters. My construct using the perl regex form wouldn’t actually limit access solely to the remote address 12.34.56.78 but would also allow in addresses of the form n12n.n34n.n56n.n78n, where “n” is any other valid numeral (or none). So, for example, my construct would have allowed in connections from 125.134.56.178 or 212.34.156.78 or 121.34.156.78 etc. That is not what I wanted at all.

The (correct) assignment and test now looks like this:

var.IP = "12\.34\.56\.78|23\.45\.67\.89|34\.56\.78\.90"

$HTTP["remoteip"] !~ var.IP {
    $HTTP["url"] =~ "^/wp-admin/" {
        url.access-deny = ("")
    }
}

Which says: for connections from any address other than 12.34.56.78, 23.45.67.89 or 34.56.78.90, deny access to anything under /wp-admin/.
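The underlying pitfall here, an unanchored regex matching anywhere inside the string, is easy to demonstrate outside lighttpd. A rough illustration in Python (lighttpd itself uses PCRE; this sketch just shows the principle):

```python
import re

# Dots escaped, but the pattern is not anchored to the whole string.
pattern = r"12\.34\.56\.78"

# An unanchored search matches the pattern anywhere in the input...
assert re.search(pattern, "12.34.56.78")
assert re.search(pattern, "212.34.56.789")      # unintended match!

# ...whereas matching the full string (the analogue of a strict
# string comparison, or an anchored regex) does not.
assert re.fullmatch(pattern, "12.34.56.78")
assert not re.fullmatch(pattern, "212.34.56.789")
```

The same effect can be had inside a regex by anchoring with ^ and $.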

For reference, the BNF like notation used in the basic configuration for lighty is given on the redmine wiki.

Categories: LUG Community Blogs

PuppetModule.info: documenting Puppet Forge modules

Planet SurreyLUG - Mon, 30/01/2017 - 15:14

Recently I launched PuppetModule.info, a new site that publishes documentation for every module on Puppet Forge and GitHub - aka "Puppet Strings as a Service".

This comes after the release of Puppet Strings 1.0.0 which is the latest generation tool to parse Puppet manifests and extract docs in either HTML or JSON formats. It can handle docs at the top of classes, READMEs, parameter lists and descriptions, parameter typing and even types/providers, and replaces the old puppet doc tool.

The site is a fork of the well-known RubyDoc.info, as it uses the same YARD engine to build documentation, updated to use Puppet Strings and to handle downloads of modules from Puppet Forge. It can display modules from the Forge at:

All of the known modules can be seen on the Puppet Modules index page, which is refreshed hourly from the Forge:

And module docs can also be loaded directly from GitHub checkouts from the GitHub repository listing.

So please start adding links to your modules so users can quickly skip to your published documentation: [![puppetmodule.info docs](http://www.puppetmodule.info/images/badge.png)](http://www.puppetmodule.info/m/AUTHOR-MODULE)

If you find issues, please report them or send PRs to the puppetmodule.info repository, but otherwise, I hope you find the documentation easily accessible and useful!

Categories: LUG Community Blogs

Mick Morgan: not welcome here

Planet ALUG - Mon, 30/01/2017 - 14:45

US President Trump has said that refugees and travellers from seven, mainly majority Muslim, countries are barred from entry to the US. Notwithstanding our own dear PM’s invitation to Trump, some 1.2 million Brits have so far signed a Parliamentary petition to “Prevent Donald Trump from making a State Visit to the United Kingdom”.

I do not actually agree with the wording of the petition. As a republican at heart I really don’t care whether the Queen would be “embarrassed” by meeting Trump. Besides, she has met many, arguably worse, leaders in her time (think Robert Mugabe, or Nicolae Ceaușescu). The point here is that to extend an invitation to Trump so quickly, and whilst he is advocating such distasteful and divisive policies, gives the distinct impression that the UK endorses those policies. We do not, and we should be seen to be highly critical of those policies. In her visit to the US last week, Theresa May made much of the UK’s ability, as a close friend and partner of the USA, to feel free to criticise that partner. She has done no such thing. In my view that is shameful.

When I signed the petition there were around 600,000 other signatories. The total is still climbing. That is encouraging. No doubt the petition will be ignored though.

(Postscript. El Reg has a discussion raging about the petition. Apparently I am a “virtue signaller”. Oh well, I don’t feel bad about it. After all, I’m a blogger so by definition I’m already self interested and vain.)

Categories: LUG Community Blogs

Jonathan McDowell: BelFOSS 2017

Planet ALUG - Sun, 29/01/2017 - 23:18

On Friday I attended the second BelFOSS conference. I’d spoken about my involvement with Debian at the conference last year, which seemed to be well received. This year I’d planned to just be a normal attendee, but ended up roped in at a late stage to be part of a panel discussing various licensing issues. I had a thoroughly enjoyable day - there were many great speakers, and plenty of opportunity for interesting chats with other attendees.

The conference largely happens through the tireless efforts of Jonny McCullagh, though of course many people are involved in bringing it together. It’s a low budget single day conference which has still managed to fill its single track attendee capacity both years, and attract more than enough speakers. Last year Red Hat and LPI turned up, this year Matt Curry from Allstate’s Arizona office appeared, but in general it’s local speakers talking to a local audience. This is really good to see - I don’t think Jonny would object at all if he managed to score a `big name’ speaker, but one of his aims is to get students interested and aware of Free Software, and I think it helps a lot that the conference allows them to see that it’s actively in use in lots of aspects of the industry here in Northern Ireland.

Here’s hoping that BelFOSS becomes an annual fixture in the NI tech calendar!

Categories: LUG Community Blogs

Debian Bits: Debian at FOSDEM 2017

Planet HantsLUG - Sat, 28/01/2017 - 13:00

On February 4th and 5th, Debian will be attending FOSDEM 2017 in Brussels, Belgium; a yearly gratis event (no registration needed) run by volunteers from the Open Source and Free Software community. It's free, and it's big: more than 600 speakers, over 600 events, in 29 rooms.

This year more than 45 current or past Debian contributors will speak at FOSDEM: Alexandre Viau, Bradley M. Kuhn, Daniel Pocock, Guus Sliepen, Johan Van de Wauw, John Sullivan, Josh Triplett, Julien Danjou, Keith Packard, Martin Pitt, Peter Van Eynde, Richard Hartmann, Sebastian Dröge, Stefano Zacchiroli and Wouter Verhelst, among others.

Similar to previous years, the event will be hosted at Université libre de Bruxelles. Debian contributors and enthusiasts will be taking shifts at the Debian stand with gadgets, T-Shirts and swag. You can find us at stand number 4 in building K, 1 B; CoreOS Linux and PostgreSQL will be our neighbours. See https://wiki.debian.org/DebianEvents/be/2017/FOSDEM for more details.

We are looking forward to meeting you all!

Categories: LUG Community Blogs

Steve Kemp: So I've been playing with hardware

Planet HantsLUG - Fri, 27/01/2017 - 23:00

At the end of December I decided I was going to do hardware "things", and so far that has worked out pretty well.

One of the reasons I decided to play with Arduinos is that I assumed I could avoid all forms of soldering. I've done soldering often enough to know I can manage it, but not quite often enough that I feel comfortable doing so.

Unfortunately soldering has become a part of my life once again, as too many of the things I've been playing with have required pins soldering to them before I can connect them.

Soldering aside I've been having fun, and I have deployed several "real" projects in and around my flat. Perhaps the most interesting project shows the arrival time of the next tram to arrive at the end of my street:

That's simple, reliable, and useful. I have another project, which still needs to be documented, that combines a WeMos D1 and a vibration sensor - no sniggers - to generate an alert when the washing machine is done. Having a newborn baby around the place means that we have a lot of laundry to manage, and we keep forgetting that we've turned the washing machine on. Oops.

Anyway. Hardware. More fun than I expected. I've even started ordering more components for bigger projects.

I'll continue to document the various projects online, mostly to make sure I remember the basics:

Categories: LUG Community Blogs

request for reviews of last meeting

West Yorkshire LUG News - Wed, 25/01/2017 - 11:34

If you want your 15 minutes of fame, just post your report of January’s meeting on the wylug-discuss mailing list. If you have not already joined now is a fine time to do so. Just follow the link from our home page.

Jonathan McDowell: Experiments with 1-Wire

Planet ALUG - Tue, 24/01/2017 - 21:49

As previously mentioned, at the end of last year I got involved with a project involving the use of 1-Wire. In particular a DS28E15 device, intended to be used as a royalty tracker for a licensed piece of hardware IP. I’d no previous experience with 1-Wire (other than knowing it’s commonly used for driving temperature sensors), so I took it as an opportunity to learn a bit more about it.

The primary goal was to program a suitable shared key into the DS28E15 device that would also be present in the corresponding hardware device. A Maxim programmer had been ordered, but wasn’t available in stock so had to be back ordered. Of course I turned to my trusty Bus Pirate, which claimed 1-Wire support. However it failed to recognise the presence of the device at all. After much head scratching I finally listened to a co-worker who had suggested it was a clock speed issue - the absence of any option to select the 1-Wire speed in the Bus Pirate or any mention of different speeds in the documentation I had read had made me doubt it was an issue. Turns out that the Bus Pirate was talking “standard” 1-Wire and the DS28E15 only talks “overdrive” 1-Wire, to the extent that it won’t even announce its presence if the reset pulse conforms to the standard, rather than overdrive, reset time period. Lesson learned: listen to your co-workers sooner.

A brief period of yak shaving led to adding support to the Bus Pirate for the overdrive mode (since landed in upstream), and resulted in a search request via the BP interface correctly finding the device and displaying its ROM ID. This allowed exploration of the various commands the authenticator supports, to verify that the programming sequence operated as expected. These allow for setting the shared secret, performing a SHA256 MAC against this secret and a suitable nonce, and retrieving the result.

Next problem: the retrieved SHA256 MAC did not match the locally computed value. Initially endianness issues were suspected, but trying the relevant permutations did not help. Some searching found an implementation of SHA256 for the DS28E15 that showed differences between a standard SHA256 computation and what the authenticator performs. In particular SHA256 normally adds the current working state (a-g) to the current hash value (h0-h7) at the end of every block. The authenticator does this for all but the final block, where instead the hash value is set to the working state. I haven’t been able to find any documentation from Maxim that this is how things are calculated, nor have I seen any generic implementation of SHA256 which supports this mode. However rolling my own C implementation based on the code I found and using it to compare the results retrieved from the device confirms that this is what’s happening.
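To make the difference concrete, the quirk can be sketched as a flag on an otherwise standard SHA-256. This is my own illustrative implementation (not Maxim's code, and not the C implementation referenced above); the standard mode can be verified against hashlib:

```python
import hashlib
import struct

# Standard SHA-256 round constants (FIPS 180-4).
K = [0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1,
     0x923f82a4, 0xab1c5ed5, 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
     0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, 0xe49b69c1, 0xefbe4786,
     0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
     0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147,
     0x06ca6351, 0x14292967, 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13,
     0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, 0xa2bfe8a1, 0xa81a664b,
     0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
     0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a,
     0x5b9cca4f, 0x682e6ff3, 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208,
     0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2]

def _rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & 0xffffffff

def sha256(msg, maxim_variant=False):
    h = [0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
         0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19]
    # Standard padding: 0x80, zeros, then the 64-bit message bit length.
    length = len(msg) * 8
    msg += b'\x80'
    msg += b'\x00' * ((56 - len(msg)) % 64)
    msg += struct.pack('>Q', length)
    blocks = [msg[i:i + 64] for i in range(0, len(msg), 64)]
    for bi, block in enumerate(blocks):
        w = list(struct.unpack('>16I', block))
        for t in range(16, 64):
            s0 = _rotr(w[t-15], 7) ^ _rotr(w[t-15], 18) ^ (w[t-15] >> 3)
            s1 = _rotr(w[t-2], 17) ^ _rotr(w[t-2], 19) ^ (w[t-2] >> 10)
            w.append((w[t-16] + s0 + w[t-7] + s1) & 0xffffffff)
        a, b, c, d, e, f, g, hh = h
        for t in range(64):
            S1 = _rotr(e, 6) ^ _rotr(e, 11) ^ _rotr(e, 25)
            ch = (e & f) ^ (~e & g)
            t1 = (hh + S1 + ch + K[t] + w[t]) & 0xffffffff
            S0 = _rotr(a, 2) ^ _rotr(a, 13) ^ _rotr(a, 22)
            maj = (a & b) ^ (a & c) ^ (b & c)
            t2 = (S0 + maj) & 0xffffffff
            hh, g, f, e, d, c, b, a = \
                g, f, e, (d + t1) & 0xffffffff, c, b, a, (t1 + t2) & 0xffffffff
        if maxim_variant and bi == len(blocks) - 1:
            # The DS28E15 quirk: on the final block the hash state is
            # *replaced* by the working variables instead of added to them.
            h = [a, b, c, d, e, f, g, hh]
        else:
            h = [(x + y) & 0xffffffff
                 for x, y in zip(h, [a, b, c, d, e, f, g, hh])]
    return b''.join(struct.pack('>I', x) for x in h)

msg = b"hello 1-wire"
assert sha256(msg) == hashlib.sha256(msg).digest()          # standard mode agrees
assert sha256(msg, maxim_variant=True) != hashlib.sha256(msg).digest()
```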

So at this point we’re done, right? Wait for the proper programming hardware to turn up, write the key to the devices, profit? Well, no. There was a bit of a saga involving the programmer (actually programmers, one with at least some documentation that allowed the creation of a Python tool to allow setting the key and reading + recording the ROM ID for tracking, and one with no programming documentation that came with a fancy GUI for manually doing the programming), but more importantly it was necessary to confirm that the programmed device interacted with the hardware correctly.

Initial testing with the hardware was unsuccessful. Again endianness issues were considered and permutations tried, but without success. A simple key constructed to avoid such issues was tried, and things worked fine. There was a hardware simulation of both components available, so it was decided to run that and obtain a capture of the traffic between them. As the secret key was known this would then allow the random nonce to be captured, and the corresponding (correct) hash value. Tests could then be performed in software to determine what the issue was & how to generate the same hash for verification.

Two sets of analyzer software were tried, OpenBench LogicSniffer (OLS) and sigrok. As it happened both failed to correctly decode the bitstream detected as 1-Wire, but were able to show the captured data graphically, allowing for decoding by eye. A slight patch to OLS to relax the timing constraints allowed it to successfully decode the full capture and provided the appropriate data for software reproduction. The end issue? A 256 bit number (as defined in VHDL) is not the same as a 32 element byte array… Obvious when you know what the issue is!

So? What did I learn, other than a lot about 1-Wire? Firstly, don’t offhandedly discount suggestions that you don’t think make sense. Secondly, having a tool (in this case the Bus Pirate) that lets you easily play with a protocol via a simple interface is invaluable in understanding it. Thirdly, don’t trust manufacturers to be doing something in a normal fashion when they claim to be using a well defined technology. Fourthly, be conscious about all of the different ways bitstreams can be actually processed in memory. It’s not just endianness. Finally, spending the time to actually understand what’s going on up front can really help when things don’t work as you’d expect later on - without the yak shaving to support Overdrive on the BP I wouldn’t have been able to so quickly use the simulation capture to help diagnose the issue.

Categories: LUG Community Blogs

YES FOLKS ITS TODAY

West Yorkshire LUG News - Mon, 23/01/2017 - 13:29

This month the monthly meeting is on TUESDAY of THIS WEEK. Come and join us in the Lord Darcy! Be surprised by Linux! Run a web server on your laptop! The future, here today!

Steve Engledow (stilvoid): Angst

Planet ALUG - Sun, 22/01/2017 - 03:06

I had planned to spend this evening playing games; something I really enjoy doing but rarely set aside any time for. However, while we were eating dinner, I put some music on and it got me in the mood for playing some guitar. Over the course of dinner and playing with my son afterwards, that developed into wanting to write and record some music. I used to write electronic nonsense sometimes but this evening, I fancied trying my hand at some metal.

The first 90 minutes was - as almost every time I get the rare combination of an urge to do something musical and time to do it in - spent trying to remember how my setup worked, which bits of software I needed to install, and how to get the right combination of inputs and outputs I want. I eventually got it sussed and decided I'd better write it down for my own future reference.

Hardware
  1. Plug the USB audio interface from the V-Amp3 into the laptop.
  2. Plug external audio sources into the audio interface's input. (e.g. the V-Amp or a synth).
  3. Plug some headphones into the headphone socket of the audio interface.
  4. Switch on the audio interface's monitoring mode ;) (this kept me going for a little while; it's a small switch)
Software
  1. The following packages need to be installed at a minimum:

    • qjackctl
    • qsynth
    • soundfont-fluidsynth
    • vkeybd
    • ardour
    • hydrogen
  2. Use pavucontrol or similar to disable the normal audio system and just use the USB audio interface.

  3. Qjackctl needs the following snippets in its config for when jack comes up and goes down, respectively:

    • pacmd suspend true

      This halts pulseaudio so that jack can take over

    • pacmd suspend false

      This starts pulseaudio back up again

  4. Use the connection tool in Jack to hook hydrogen's and qsynth's outputs to ardour's input. Use the ALSA tab to connect vkeybd to qsynth.

  5. When starting Ardour and Hydrogen, make sure they're both configured to use Jack for MIDI. Switch Ardour's clock from Internal to JACK.

For posterity, here's this evening's output.

Categories: LUG Community Blogs

Monthly Meeting Tuesday 24th Jan 2017

West Yorkshire LUG News - Wed, 18/01/2017 - 17:02

This month’s meeting is on Tuesday 24th Jan, not the usual Thursday. The location is still the same, The Lord Darcy on Harrogate Road. If you have never used linux before, please come and try it out on one of our laptops. If you want to discuss things before hand, or between meetings, try our meetups pages, or the email mailing lists at the right hand side of this page. See you next week!

Jonathan McDowell: Cloning a USB LED device

Planet ALUG - Sat, 14/01/2017 - 12:53

A month or so ago I got involved in a discussion on IRC about notification methods for a headless NAS. One of the options considered was some sort of USB attached LED. DealExtreme had a cheap “Webmail notifier”, which was already supported by mainline kernels as a “Riso Kagaku” device but it had been sold out for some time.

This seemed like a fun problem to solve with a tinyAVR and V-USB. I had my USB relay board so I figured I could use that to at least get some code to the point that the kernel detected it as the right device, and the relay output could be configured as one of the colours to ensure it was being driven in roughly the right manner. The lack of a full lsusb dump (at least when I started out) made things a bit harder, plus the fact that the Riso uses an output report unlike the relay code, which uses a control message. However I had the kernel source for the driver and with a little bit of experimentation had something which would cause the driver to be loaded and the appropriate files in /sys/class/leds/ to be created. The relay was then successfully activated when the red LED was supposed to be on.

hid-led 0003:1294:1320.0001: hidraw0: USB HID v1.01 Device [MAIL MAIL ] on usb-0000:00:14.0-6.2/input0
hid-led 0003:1294:1320.0001: Riso Kagaku Webmail Notifier initialized

I subsequently ordered some Digispark clones and modified the code to reflect the pins there (my relay board used pins 1+2 for USB, the Digispark uses pins 3+4). I then soldered a tricolour LED to the board, plugged it in and had a clone of the Riso Kagaku device for about £1.50 in parts (no doubt much cheaper in bulk). Very chuffed.

In case it’s useful to someone, the code is released under GPLv3+ and is available at https://the.earth.li/gitweb/?p=riso-kagaku-clone.git;a=summary or on GitHub at https://github.com/u1f35c/riso-kagaku-clone. I’m seeing occasional issues on an older Dell machine that only does USB2 with enumeration, but it generally is fine once it gets over that.

(FWIW, Jon, who started the original discussion, ended up with a BlinkStick Nano, which is a neater device with 2 LEDs but still based on a Tiny85.)

Categories: LUG Community Blogs

Andy Smith: XFS, Reflinks and Deduplication

Planet HantsLUG - Tue, 10/01/2017 - 21:45
btrfs Past

This post is about XFS but it’s about features that first hit Linux in btrfs, so we need to talk about btrfs for a bit first.

For a long time now, btrfs has had a useful feature called reflinks. Basically this is exposed as cp --reflink=always and takes advantage of extents and copy-on-write in order to do a quick copy of data by merely adding another reference to the extents that the data is currently using, rather than having to read all the data and write it out again, as would be the case in other filesystems.

Here’s an excerpt from the man page for cp:

When --reflink[=always] is specified, perform a lightweight copy, where the data blocks are copied only when modified. If this is not possible the copy fails, or if --reflink=auto is specified, fall back to a standard copy.

Without reflinks a common technique for making a quick copy of a file is the hardlink. Hardlinks have a number of disadvantages though, mainly due to the fact that since there is only one inode all hardlinked copies must have the same metadata (owner, group, permissions, etc.). Software that might modify the files also needs to be aware of hardlinks: naive modification of a hardlinked file modifies all copies of the file.

With reflinks, life becomes much easier:

  • Each copy has its own inode so can have different metadata. Only the data extents are shared.
  • The filesystem ensures that any write causes a copy-on-write, so applications don’t need to do anything special.
  • Space is saved on a per-extent basis so changing one extent still allows all the other extents to remain shared. A change to a hardlinked file requires a new copy of the whole file.
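The difference is easy to see from the shell. A small sketch, using --reflink=auto so the copy degrades gracefully to a standard copy on filesystems without reflink support:

```shell
cd "$(mktemp -d)"
printf 'some shared data\n' > original.txt

# On btrfs (or reflink-enabled XFS) this shares extents instead of
# copying data; with =auto it falls back to a normal copy elsewhere.
cp --reflink=auto original.txt clone.txt

# Unlike a hardlink, the clone is a separate inode with its own metadata:
ls -i original.txt clone.txt       # two different inode numbers
chmod 600 clone.txt                # leaves original.txt untouched

# Contents start out identical; a later write to either file simply
# triggers copy-on-write on the affected extents.
cmp original.txt clone.txt && echo "identical"
```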

Another feature that extents and copy-on-write allow is block-level out-of-band deduplication.

  • Deduplication – the technique of finding and removing duplicate copies of data.
  • Block-level – operating on the blocks of data on storage, not just whole files.
  • Out-of-band – something that happens only when triggered or scheduled, not automatically as part of the normal operation of the filesystem.

btrfs has an ioctl that a userspace program can use—presumably after finding a sequence of blocks that are identical—to tell the kernel to turn one into a reference to the other, thus saving some space.

It’s necessary that the kernel does it so that any IO that may be going on at the same time that may modify the data can be dealt with. Modifications after the data is reflinked will just cause a copy-on-write. If you tried to do it all in a userspace app then you’d risk something else modifying the files at the same time, but by having the kernel do it then in theory it becomes completely safe to do it at any time. The kernel also checks that the sequences of extents really are identical.
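The userspace half of that workflow, finding candidate duplicate blocks, can be sketched roughly as follows. This is an illustrative Python toy, not any real deduplication tool: it hashes fixed-size blocks and reports collisions, which a real deduplicator would then byte-compare and hand to the kernel via the extent-same ioctl:

```python
import hashlib
import os
import tempfile
from collections import defaultdict
from pathlib import Path

BLOCK = 4096  # assume 4 KiB filesystem blocks for the sketch

def find_duplicate_blocks(paths):
    """Map hash of a block's contents -> every (file, offset) holding it."""
    seen = defaultdict(list)
    for path in paths:
        data = Path(path).read_bytes()
        usable = len(data) - len(data) % BLOCK   # ignore any partial tail
        for off in range(0, usable, BLOCK):
            digest = hashlib.sha256(data[off:off + BLOCK]).digest()
            seen[digest].append((path, off))
    # Keep only blocks that occur in more than one place.
    return {h: locs for h, locs in seen.items() if len(locs) > 1}

# Demo: two files that happen to share one 4 KiB block of content.
tmp = tempfile.mkdtemp()
shared = os.urandom(BLOCK)
Path(tmp, "one").write_bytes(shared + os.urandom(BLOCK))
Path(tmp, "two").write_bytes(os.urandom(BLOCK) + shared)
dupes = find_duplicate_blocks([Path(tmp, "one"), Path(tmp, "two")])
assert len(dupes) == 1       # exactly the shared block is flagged
```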

In-band deduplication is a feature that’s being worked on in btrfs. It already exists in ZFS though, and there it is rarely recommended for use, as it requires a huge amount of memory for keeping hashes of data that has been written. It’s going to be the same story with btrfs, so out-of-band deduplication is still something that will remain useful. And it exists as a feature right now, which is always a bonus.

XFS Future

So what has all this got to do with XFS?

Well, in recognition that there might be more than one Linux filesystem with extents and so that reflinks might be more generally useful, the extent-same ioctl got lifted up to be in the VFS layer of the kernel instead of just in btrfs. And the good news is that XFS recently became able to make use of it.

When I say “recently” I do mean really recently. I mean like kernel release 4.9.1 which came out on 2017-01-04. At the moment it comes with massive EXPERIMENTAL warnings, requires a new filesystem to be created with a special format option, and will need an xfsprogs compiled from recent git in order to have a mkfs.xfs that can create such a filesystem.

So before going further, I’m going to assume you’ve compiled a new enough kernel and booted into it, then compiled up a new enough xfsprogs. Both of these are quite simple things to do, for example the Debian documentation for building kernel packages from upstream code works fine.

XFS Reflink Demo

Make yourself a new filesystem, with the reflink=1 format option.

# mkfs.xfs -L reflinkdemo -m reflink=1 /dev/xvdc
meta-data=/dev/xvdc              isize=512    agcount=4, agsize=3276800 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=1
data     =                       bsize=4096   blocks=13107200, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=6400, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

Put it in /etc/fstab for convenience, and mount it somewhere.

# echo "LABEL=reflinkdemo /mnt/xfs xfs relatime 0 2" >> /etc/fstab
# mkdir -vp /mnt/xfs
mkdir: created directory ‘/mnt/xfs’
# mount /mnt/xfs
# df -h /mnt/xfs
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc        50G  339M   50G   1% /mnt/xfs

Create a few files with random data.

# mkdir -vp /mnt/xfs/reflink
mkdir: created directory ‘/mnt/xfs/reflink’
# chown -c andy: /mnt/xfs/reflink
changed ownership of ‘/mnt/xfs/reflink’ from root:root to andy:andy
# exit
$ for i in {1..5}; do
>     echo "Writing $i…"; dd if=/dev/urandom of=/mnt/xfs/reflink/$i bs=1M count=1024;
> done
Writing 1…
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.34193 s, 247 MB/s
Writing 2…
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.33207 s, 248 MB/s
Writing 3…
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.33527 s, 248 MB/s
Writing 4…
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.33362 s, 248 MB/s
Writing 5…
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.32859 s, 248 MB/s
$ df -h /mnt/xfs
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc        50G  5.4G   45G  11% /mnt/xfs
$ du -csh /mnt/xfs
5.0G    /mnt/xfs
5.0G    total

Copy a file and, as expected, usage goes up by 1GiB. It also takes a little while, even on my nice fast SSDs.

$ time cp -v /mnt/xfs/reflink/{,copy_}1
‘/mnt/xfs/reflink/1’ -> ‘/mnt/xfs/reflink/copy_1’

real    0m3.420s
user    0m0.008s
sys     0m0.676s
$ df -h /mnt/xfs; du -csh /mnt/xfs/reflink
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc        50G  6.4G   44G  13% /mnt/xfs
6.0G    /mnt/xfs/reflink
6.0G    total

So what about a reflink copy?

$ time cp -v --reflink=always /mnt/xfs/reflink/{,reflink_}1
‘/mnt/xfs/reflink/1’ -> ‘/mnt/xfs/reflink/reflink_1’

real    0m0.003s
user    0m0.000s
sys     0m0.004s
$ df -h /mnt/xfs; du -csh /mnt/xfs/reflink
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc        50G  6.4G   44G  13% /mnt/xfs
7.0G    /mnt/xfs/reflink
7.0G    total

The apparent usage went up by 1GiB but the amount of free space as shown by df stayed the same. No more actual storage was used because the new copy is a reflink. And the copy got done in 4ms as opposed to 3,420ms.

Can we tell more about how these files are laid out? Yes, we can use the filefrag -v command to tell us more.

$ filefrag -v /mnt/xfs/reflink/{,copy_,reflink_}1
Filesystem type is: 58465342
File size of /mnt/xfs/reflink/1 is 1073741824 (262144 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262143:    1572884..   1835027: 262144:             last,shared,eof
/mnt/xfs/reflink/1: 1 extent found
File size of /mnt/xfs/reflink/copy_1 is 1073741824 (262144 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262143:     917508..   1179651: 262144:             last,eof
/mnt/xfs/reflink/copy_1: 1 extent found
File size of /mnt/xfs/reflink/reflink_1 is 1073741824 (262144 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262143:    1572884..   1835027: 262144:             last,shared,eof
/mnt/xfs/reflink/reflink_1: 1 extent found

What we can see here is that all three files are composed of a single extent which is 262,144 4KiB blocks in size, but it also tells us that /mnt/xfs/reflink/1 and /mnt/xfs/reflink/reflink_1 are using the same range of physical blocks: 1572884..1835027.

XFS Deduplication Demo

We’ve demonstrated that you can use cp --reflink=always to take a cheap copy of your data, but what about data that may already be duplicates without your knowledge? Is there any way to take advantage of the extent-same ioctl for deduplication?

There are a couple of software solutions for out-of-band deduplication in btrfs, but one I know also works on XFS is duperemove. You will need to use a git checkout of duperemove for this to work.

A quick reminder of the storage use before we start.

$ df -h /mnt/xfs; du -csh /mnt/xfs/reflink
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc        50G  6.4G   44G  13% /mnt/xfs
7.0G    /mnt/xfs/reflink
7.0G    total
$ filefrag -v /mnt/xfs/reflink/{,copy_,reflink_}1
Filesystem type is: 58465342
File size of /mnt/xfs/reflink/1 is 1073741824 (262144 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262143:    1572884..   1835027: 262144:             last,shared,eof
/mnt/xfs/reflink/1: 1 extent found
File size of /mnt/xfs/reflink/copy_1 is 1073741824 (262144 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262143:     917508..   1179651: 262144:             last,eof
/mnt/xfs/reflink/copy_1: 1 extent found
File size of /mnt/xfs/reflink/reflink_1 is 1073741824 (262144 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262143:    1572884..   1835027: 262144:             last,shared,eof
/mnt/xfs/reflink/reflink_1: 1 extent found

Run duperemove.

# duperemove -hdr --hashfile=/var/tmp/dr.hash /mnt/xfs/reflink
Using 128K blocks
Using hash: murmur3
Gathering file list...
Adding files from database for hashing.
Loading only duplicated hashes from hashfile.
Using 2 threads for dedupe phase
Kernel processed data (excludes target files): 4.0G
Comparison of extent info shows a net change in shared extents of: 1.0G
$ df -h /mnt/xfs; du -csh /mnt/xfs/reflink
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc        50G  5.4G   45G  11% /mnt/xfs
7.0G    /mnt/xfs/reflink
7.0G    total
$ filefrag -v /mnt/xfs/reflink/{,copy_,reflink_}1
Filesystem type is: 58465342
File size of /mnt/xfs/reflink/1 is 1073741824 (262144 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262143:    1572884..   1835027: 262144:             last,shared,eof
/mnt/xfs/reflink/1: 1 extent found
File size of /mnt/xfs/reflink/copy_1 is 1073741824 (262144 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262143:    1572884..   1835027: 262144:             last,shared,eof
/mnt/xfs/reflink/copy_1: 1 extent found
File size of /mnt/xfs/reflink/reflink_1 is 1073741824 (262144 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262143:    1572884..   1835027: 262144:             last,shared,eof
/mnt/xfs/reflink/reflink_1: 1 extent found

The output of du remained the same, but df says that there’s now 1GiB more free space, and filefrag confirms that what’s changed is that copy_1 now uses the same extents as 1 and reflink_1. The duplicate data in copy_1, which in theory we did not know was there, has been discovered and safely reference-linked to the extent from 1, saving us 1GiB of storage.

By the way, I told duperemove to use a hash file because otherwise it keeps its hashes in RAM. For the sake of 7 files that won’t matter, but it will if there are millions of files, so it’s a habit worth getting into. duperemove also uses the hash file to avoid repeatedly re-hashing files that haven’t changed.
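
The idea behind a persistent hash file can be sketched as a small cache keyed on path and mtime, so unchanged files are never re-read. duperemove's actual hashfile format differs; this is just the concept, with names of my own:

```python
import hashlib
import os
import sqlite3

def cached_file_hash(db_path, file_path):
    """Return file_path's content hash, re-reading only if mtime changed."""
    db = sqlite3.connect(db_path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS hashes "
        "(path TEXT PRIMARY KEY, mtime REAL, digest TEXT)"
    )
    mtime = os.stat(file_path).st_mtime
    row = db.execute(
        "SELECT mtime, digest FROM hashes WHERE path = ?", (file_path,)
    ).fetchone()
    if row and row[0] == mtime:
        db.close()
        return row[1]  # cache hit: file unchanged since the last scan
    with open(file_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    db.execute("REPLACE INTO hashes VALUES (?, ?, ?)", (file_path, mtime, digest))
    db.commit()
    db.close()
    return digest
```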

All that has been demonstrated so far, though, is whole-file deduplication, as copy_1 was just a regular copy of 1. What about when a file is only partially composed of duplicate data? Let’s try it.

$ cat /mnt/xfs/reflink/{1,2} > /mnt/xfs/reflink/1_2
$ ls -lah /mnt/xfs/reflink/{1,2,1_2}
-rw-r--r-- 1 andy andy 1.0G Jan 10 15:41 /mnt/xfs/reflink/1
-rw-r--r-- 1 andy andy 2.0G Jan 10 16:55 /mnt/xfs/reflink/1_2
-rw-r--r-- 1 andy andy 1.0G Jan 10 15:41 /mnt/xfs/reflink/2
$ df -h /mnt/xfs; du -csh /mnt/xfs/reflink
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc        50G  7.4G   43G  15% /mnt/xfs
9.0G    /mnt/xfs/reflink
9.0G    total
$ filefrag -v /mnt/xfs/reflink/{1,2,1_2}
Filesystem type is: 58465342
File size of /mnt/xfs/reflink/1 is 1073741824 (262144 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262143:    1572884..   1835027: 262144:             last,shared,eof
/mnt/xfs/reflink/1: 1 extent found
File size of /mnt/xfs/reflink/2 is 1073741824 (262144 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262127:         20..    262147: 262128:
   1:   262128..  262143:    2129908..   2129923:     16:     262148: last,eof
/mnt/xfs/reflink/2: 2 extents found
File size of /mnt/xfs/reflink/1_2 is 2147483648 (524288 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262127:     262164..    524291: 262128:
   1:   262128..  524287:     655380..    917539: 262160:     524292: last,eof
/mnt/xfs/reflink/1_2: 2 extents found

I’ve concatenated 1 and 2 together into a file called 1_2 and as expected, usage goes up by 2GiB. filefrag confirms that the physical extents in 1_2 are new. We should be able to do better because this 1_2 file does not contain any new unique data.

$ duperemove -hdr --hashfile=/var/tmp/dr.hash /mnt/xfs/reflink
Using 128K blocks
Using hash: murmur3
Gathering file list...
Adding files from database for hashing.
Using 2 threads for file hashing phase
Kernel processed data (excludes target files): 4.0G
Comparison of extent info shows a net change in shared extents of: 3.0G
$ df -h /mnt/xfs; du -csh /mnt/xfs/reflink
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc        50G  5.4G   45G  11% /mnt/xfs
9.0G    /mnt/xfs/reflink
9.0G    total

We can. Apparent usage stays at 9GiB but real usage went back to 5.4GiB which is where we were before we created 1_2.

And the physical layout of the files?

$ filefrag -v /mnt/xfs/reflink/{1,2,1_2}
Filesystem type is: 58465342
File size of /mnt/xfs/reflink/1 is 1073741824 (262144 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262143:    1572884..   1835027: 262144:             last,shared,eof
/mnt/xfs/reflink/1: 1 extent found
File size of /mnt/xfs/reflink/2 is 1073741824 (262144 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262127:         20..    262147: 262128:             shared
   1:   262128..  262143:    2129908..   2129923:     16:     262148: last,shared,eof
/mnt/xfs/reflink/2: 2 extents found
File size of /mnt/xfs/reflink/1_2 is 2147483648 (524288 blocks of 4096 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..  262143:    1572884..   1835027: 262144:             shared
   1:   262144..  524271:         20..    262147: 262128:    1835028: shared
   2:   524272..  524287:    2129908..   2129923:     16:     262148: last,shared,eof
/mnt/xfs/reflink/1_2: 3 extents found

It shows that 1_2 is now made up from the same extents as 1 and 2 combined, as expected.

Less of the urandom

These synthetic demonstrations using a handful of 1GiB blobs of data from /dev/urandom are all very well, but what about something a little more like the real world?

Okay well let’s see what happens when I take ~30GiB of backup data created by rsnapshot on another host.

rsnapshot is a backup program which makes heavy use of hardlinks. It runs periodically and compares the previous backup data with the new. If they are identical then instead of storing an identical copy it makes a hardlink. This saves a lot of space but does have a lot of limitations as discussed previously.

This won’t be the best example because in some ways there is expected to be more duplication; this data is composed of multiple backups of the same file trees. But on the other hand there shouldn’t be as much because any truly identical files have already been hardlinked together by rsnapshot. But it is a convenient source of real-world data.

So, starting state:

(I deleted all the reflink files)

$ df -h /mnt/xfs; sudo du -csh /mnt/xfs/rsnapshot
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc        50G   30G   21G  59% /mnt/xfs
29G     /mnt/xfs/rsnapshot
29G     total

A small diversion about how rsnapshot lays out its backups may be useful here. They are stored like this:

  • rsnapshot_root / [iteration a] / [client foo] / [directory structure from client foo]
  • rsnapshot_root / [iteration a] / [client bar] / [directory structure from client bar]
  • rsnapshot_root / [iteration b] / [client foo] / [directory structure from client foo]
  • rsnapshot_root / [iteration b] / [client bar] / [directory structure from client bar]

The iterations are commonly things like daily.0, daily.1 … daily.6. As a consequence, the paths:

rsnapshot/daily.*/client_foo

would be backups only from host foo, and:

rsnapshot/daily.0/*

would be backups from all hosts but only the most recent daily sync.

Let’s first see what the savings would be like in looking for duplicates in just one client’s backups.

Here are the backups I have in this blob of data. The names of the clients are completely made up, though they are real backups.

Client   Size (MiB)
------   ----------
darbee       14,504
achorn       11,297
spader        2,612
reilly        2,276
chino         2,203
audun         2,184

So let’s try deduplicating all of the biggest client’s backups, darbee’s:

$ df -h /mnt/xfs
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc        50G   30G   21G  59% /mnt/xfs
# time duperemove -hdr --hashfile=/var/tmp/dr.hash /mnt/xfs/rsnapshot/*/darbee
Using 128K blocks
Using hash: murmur3
Gathering file list...
Kernel processed data (excludes target files): 8.8G
Comparison of extent info shows a net change in shared extents of: 6.8G
9.85user 78.70system 3:27.23elapsed 42%CPU (0avgtext+0avgdata 23384maxresident)k
50703656inputs+790184outputs (15major+20912minor)pagefaults 0swaps
$ df -h /mnt/xfs
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc        50G   25G   26G  50% /mnt/xfs

3m27s of run time, somewhere between 5 and 6.8GiB saved. That’s 35%!

Now to deduplicate the lot.

# time duperemove -hdr --hashfile=/var/tmp/dr.hash /mnt/xfs/rsnapshot
Using 128K blocks
Using hash: murmur3
Gathering file list...
Kernel processed data (excludes target files): 5.4G
Comparison of extent info shows a net change in shared extents of: 3.4G
29.12user 188.08system 5:02.31elapsed 71%CPU (0avgtext+0avgdata 34040maxresident)k
34978360inputs+572128outputs (18major+45094minor)pagefaults 0swaps
$ df -h /mnt/xfs
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdc        50G   23G   28G  45% /mnt/xfs

5m02s of run time on this pass, and somewhere between another 2 and 3.4GiB saved.

Since the actual deduplication does take some time (the kernel having to read the extents, mainly), and most of it was already done in the first pass, a full pass would more likely take the sum of the times, i.e. more like 8m29s.

Still, a total of about 7GiB was saved which is 23%.

It would be very interesting to try this on one of my much larger backup stores.

Why Not Just Use btrfs?

Using a filesystem that already has all of these features would certainly seem easier, but I personally don’t think btrfs is stable enough yet. I use it at home in a relatively unexciting setup (8 devices, raid1 for data and metadata, no compression or deduplication) and I wish I didn’t. I wouldn’t dream of using it in a production environment yet.

I’m on the btrfs mailing list and there are way too many posts regarding filesystems that give ENOSPC and become unavailable for writes, or systems that were unexpectedly powered off and when powered back on the btrfs filesystem is completely lost.

I expect the reflink feature in XFS to become non-experimental before btrfs is stable enough for production use.

ZFS?

ZFS is great. It doesn’t have out-of-band deduplication or reflinks, though, and there are no plans to add them any time soon.

Categories: LUG Community Blogs

Viva Amiga: The Story of a Beautiful Machine | Documentary

Planet SurreyLUG - Mon, 09/01/2017 - 13:08

Acclaimed documentary film Viva Amiga is a retro love letter to the freaks, geeks, and geniuses behind the best damned computer ever made: the Amiga.

Source: Viva Amiga: The Story of a Beautiful Machine | Documentary

The post Viva Amiga: The Story of a Beautiful Machine | Documentary appeared first on dowe.io.


Debian Bits: New Debian Developers and Maintainers (November and December 2016)

Planet HantsLUG - Mon, 09/01/2017 - 00:30

The following contributors got their Debian Developer accounts in the last two months:

  • Karen M Sandler (karen)
  • Sebastien Badia (sbadia)
  • Christos Trochalakis (ctrochalakis)
  • Adrian Bunk (bunk)
  • Michael Lustfield (mtecknology)
  • James Clarke (jrtc27)
  • Sean Whitton (spwhitton)
  • Jerome Georges Benoit (calculus)
  • Daniel Lange (dlange)
  • Christoph Biedl (cbiedl)
  • Gustavo Panizzo (gefa)
  • Gert Wollny (gewo)
  • Benjamin Barenblat (bbaren)
  • Giovani Augusto Ferreira (giovani)
  • Mechtilde Stehmann (mechtilde)
  • Christopher Stuart Hoskin (mans0954)

The following contributors were added as Debian Maintainers in the last two months:

  • Dmitry Bogatov
  • Dominik George
  • Gordon Ball
  • Sruthi Chandran
  • Michael Shuler
  • Filip Pytloun
  • Mario Anthony Limonciello
  • Julien Puydt
  • Nicholas D Steeves
  • Raoul Snyman

Congratulations!


Steve Kemp: Patching scp and other updates.

Planet HantsLUG - Sun, 08/01/2017 - 17:39

I use openssh every day, be it the ssh command for connecting to remote hosts, or the scp command for uploading/downloading files.

Once a day, or more, I forget that scp uses the non-obvious -P flag for specifying the port, not the -p flag that ssh uses.

Enough is enough. I shall not file a bug report against the Debian openssh-client package, because no doubt compatibility with both upstream, and other distributions, is important. But damnit I've had enough.

apt-get source openssh-client shows the appropriate code:

fflag = tflag = 0;
while ((ch = getopt(argc, argv, "dfl:prtvBCc:i:P:q12346S:o:F:")) != -1)
        switch (ch) {
        ..
        case 'P':
                addargs(&remote_remote_args, "-p");
                addargs(&remote_remote_args, "%s", optarg);
                addargs(&args, "-p");
                addargs(&args, "%s", optarg);
                break;
        ..
        case 'p':
                pflag = 1;
                break;
        ..
        ..

Swapping those two flags around, and updating the format string appropriately, was sufficient to do the necessary.

In other news I've done some hardware development, using both Arduino boards and the WeMos D1-mini. I'm still at the stage where I'm flashing lights, and doing similarly trivial things.

I have more complex projects planned for the future, but these are on-hold until the appropriate parts are delivered:

  • MP3 playback.
  • Bluetooth-speakers.
  • Washing machine alarm.
  • LCD clock, with time set by NTP, and relay control.

Even with a few LEDs though I've had fun, for example writing a trivial binary display.
