LUG Community Blogs

Windows 10 Black Screen After Remote Desktop

Planet SurreyLUG - 11 hours 2 min ago

I logged into my Windows 10 Professional (1703) desktop from home yesterday, using Remmina on Ubuntu 16.04. I wasn’t surprised that my desktop wallpaper was black, as I know Remote Desktop does this to save bandwidth, but when I returned to the office this morning my desktop was still black and, as the wallpaper is set by the administrator via GPO, it could not be changed.

Searching the Internet was not helpful on this occasion, so I have made this quick post to help others.

It turns out that this is not some weird absence of wallpaper, but rather a plain black wallpaper image which has managed to get itself cached. The solution is consequently simple: find and delete said cached image.

  1. Open File Explorer and navigate to %APPDATA% (you can type that into the top address/location field).
  2. In the search box at the top right enter the text “Cache”.
  3. Delete the cached version of the black wallpaper once it is found.
  4. Sign out and then sign back in.
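If you would rather script the hunt in step 2, here is a minimal Python sketch; the recursive search and the “TranscodedWallpaper” filename are my assumptions rather than something from the steps above, so review each match before deleting anything:

import os
from pathlib import Path

# Minimal sketch: list candidate cached-wallpaper files under %APPDATA%.
# "TranscodedWallpaper" is an assumed filename; verify every hit first.
appdata = Path(os.environ["APPDATA"])
for path in appdata.rglob("*"):
    if "Cache" in path.name or path.name == "TranscodedWallpaper":
        print(path)  # review, then remove with path.unlink()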

Please do comment below if this was helpful, or if you needed to alter these instructions at all.

Categories: LUG Community Blogs

Chris Lamb: Faking cleaner URLs in the Debian BTS

Planet ALUG - Fri, 03/11/2017 - 08:21

Debian bug #846500 requests that the Bug Tracking System moves the canonical URL for a given bug from:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=846500

… to the shorter, cleaner and generally less ugly:

https://bugs.debian.org/846500

(The latter currently redirects to the former.)

However, whilst we wait for a fix we can abuse the window.history object from the HTML History API to fake this locally:

var m = window.location.href.match(/https:\/\/bugs.debian.org\/cgi-bin\/bugreport.cgi\?bug=(\d+)(#.*)?$/);
if (!m)
    return;

for (var x of document.getElementsByTagName("a")) {
    var href = x.getAttribute("href");
    if (href && href.match(/^[^:]+\.cgi/)) {
        // Mangle relative URIs; <base> tag does not DTRT
        x.setAttribute('href', "/cgi-bin/" + href);
    }
}

history.replaceState({}, "", "/" + m[1] + window.location.hash);

This should work with most "user script" managers — I happen to use TamperMonkey in Chrome.

Categories: LUG Community Blogs

Chris Lamb: Free software activities in October 2017

Planet ALUG - Tue, 31/10/2017 - 21:22

Here is my monthly update covering what I have been doing in the free software world in October 2017 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:



I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Improve names in output of "internal" binwalk members. (#877525).
  • Don't crash on malformed md5sums files. (#877473).
  • Omit misleading "any of" prefix when only complaining about a single module on import. [...]
  • Adjust tests as ps2ascii now varies its output on timezone. [...]

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Clojure considers a .class file to be stale if it shares the same timestamp as the .clj. We thus adjust the timestamps of the .clj to always be younger. (#877418).
  • Print a message in --verbose mode if no canonical time was specified. [...]

buildinfo.debian.net

buildinfo.debian.net is my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them.

  • Always show SHA-256 checksums, regardless of the browser viewport size. [...]
  • Add an API endpoint to fetch specific .buildinfo files for a certain package/version/architecture. [...]
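As a rough illustration of using that endpoint, a Python sketch follows; the URL layout here is hypothetical (check buildinfo.debian.net itself for the real routes), with only the package/version/architecture parameters taken from the item above:

import requests

# Hypothetical URL layout, for illustration only.
def fetch_buildinfo(source, version, architecture):
    url = ("https://buildinfo.debian.net/api/v1/buildinfos/"
           f"{source}/{version}/{architecture}")
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    return response.text

print(fetch_buildinfo("bash", "4.4-5", "amd64"))  # example values, also hypothetical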


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

Patches contributed
  • devscripts: Please print the actual arguments debuild passes to Lintian. (#880124)
  • hw-detect: Drop reference to floppy disks; it's almost 2018. (#880122)
  • debci:
    • Use deb.debian.org over http.debian.net. (#879654)
    • Document how to use an alternative mirror. (#879655)
Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Followed up on a large number of upstream "pings" that have been left dormant.
  • Issued DLA 1121-1 to fix an out-of-bounds read vulnerability in curl; a malicious FTP server could abuse this to prevent clients from interacting with it.
  • Issued DLA 1123-1 for the "Go" programming language where an attacker could generate a MIME request such that the server ran out of file descriptors.
  • Issued DLA 1126-1 for the libxfont font selection and rasterisation library, correcting two vulnerabilities, both involving the library being tricked into reading invalid/random memory.
  • Issued DLA 1134-1 for sdl-image1.2, an image loading library. A maliciously-crafted .xcf file could cause a stack-based buffer overflow resulting in potential code execution.
Uploads
  • python-django:
    • 2.0~beta1-1 — New upstream 2.x release.
    • 1.11.6-1 — New upstream bugfix release.
  • gunicorn (19.6.0-10+deb9u1) — Prepared a release for stable to avoid a runtime dependency on a compiler. (#877722)
  • redis:
    • 4:4.0.2-3:
      • Drop the Debian-specific /etc/redis/redis-server.pre-up.d (etc.) hooks and remove them if unchanged.
      • Include systemd redis-server@.service and redis-sentinel@.service template files to easily run multiple Redis instances. (#877702)
      • Patch redis.conf and sentinel.conf with quilt instead of maintaining our own versions under debian/.
    • 4:4.0.2-4:
      • Add input validity checking to cluster config slot numbers to fix CVE-2017-15047. (#878076)
      • Drop debian/bin/generate-parts now we aren't calling it.
      • Correct Bash-ism in NEWS file.
    • 4:4.0.2-5: Replace the existing patch for CVE-2017-15047 with an upstream-blessed version that covers another case.
  • redisearch (0.21.3-5) — Initial release.
  • docbook2man (2.0.0-40) — Correct spelling mistakes in binaries and other misc packaging tidying.
  • python-redis (2.10.6-1) — New upstream release.
  • bfs (1.1.3-1) — New upstream release.
FTP Team

As a Debian FTP assistant I ACCEPTed 103 packages: amcheck, argagg, binutils, blockui, bro-pkg, chkservice, citus, django-axes, docker-containerd, doctest, dtkwidget, duktape, feed2exec, fontforge, fonttools, gcc-8, gcc-8-cross, generator-scripting-language, gitgraph.js, haskell-uri-encode, hoel, iniparser, its, jquery-areyousure, kodi, libcatmandu-mods-perl, libcatmandu-template-perl, libcatmandu-xml-perl, libcatmandu-xsd-perl, libcode-tidyall-plugin-sortlines-naturally-perl, libgdamm5.0, libinfinity, libmods-record-perl, libreoffice-dictionaries, libset-intervaltree-perl, libsodium, linux, linux-grsec, ltsp-manager, lxqt-themes, mailman3-core, measurement-kit, mini-buildd, musescore, node-babel, node-babel-eslint, node-babel-loader, node-babel-plugin-add-module-exports, node-babel-plugin-transform-define, node-gulp-newer, node-regenerate-unicode-properties, node-regexpu-core, node-regjsparser, node-unicode-data, node-unicode-loose-match, openjdk-9, orafce, pgaudit, pgsql-ogr-fdw, pk4, postgresql-mysql-fdw, powa-archivist, python-azure-devtools, python-colormap, python-darkslide, python-dotenv, python-karborclient, python-logfury, python-lupa, python-marshmallow, python-murano-pkg-check, python-octaviaclient, python-pathspec, python-pgpy, python-pydub, python-randomize, python-sabyenc, python-searchlightclient, python-stestr, python-subunit2sql, python-twitter, python-utils, python-wsgilog, r-cran-bindr, r-cran-desc, r-cran-hms, r-cran-readstata13, r-cran-rprojroot, r-cran-wikidatar, r-cran-wikipedir, r-cran-wikitaxa, repmgr, requests-file, resteasy3.0, sdl-kitchensink, stardicter, systemd-el, thunderbird, tomcat8.0, uwsgi-plugin-luajit, uwsgi-plugin-mongo, uwsgi-plugin-php & uwsgi-plugin-v8.

I additionally filed 3 RC bugs against packages that had incomplete debian/copyright files: fonttools, generator-scripting-language & libsodium.

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): Introducing 석진 the car

Planet ALUG - Tue, 24/10/2017 - 19:44

For many years now, I have been driving a diesel-based VW Passat Estate. It has served me very well and been as reliable as I might have hoped given how awful I am with cars. Sadly Gunther was reaching the point where it was going to cost more per year to maintain than the car was worth, and I have also been more and more irked by not having a car from the future.

I spent many months doing spreadsheets, trying to convince myself I could somehow afford a Tesla of some variety. Sadly I never quite managed it. As such I set my sights on the more viable BEVs such as the Nissan Leaf. For a while, nothing I saw was something I wanted. I am quite unusual, it seems, in that I don't want a car which is a "Look at me, I'm driving an electric car" fashion statement. I felt like I'd never get something which looked like a normal car, but happened to be a BEV.

Then along came the Hyundai Ioniq. Hybrid, Plug-in Hybrid, and BEV all looking basically the same, and not in-your-face-special. I began to covet. Eventually I caved and arranged a test drive of an Ioniq plug-in hybrid, because the BEV was basically on a 9-month lead time. I enjoyed the drive but was instantly very sad, because I didn't want a plug-in hybrid; I wanted a BEV. Despondent, I left the dealership and went home.

I went online and found a small number of second-hand Ioniq BEVs but as I scrolled through the list, none seemed to be of the right trim level. Then, just as I was ready to give up hope, I saw a new listing, no photo, of the right thing. One snag, it was 200 miles away. No matter, I rang the place, confirmed it was available, and agreed to sleep on the decision.

The following morning, I hadn't decided to not buy, so I called them up, put down a deposit to hold the car until I could test drive it, and then began the long and awkward process of working out how I would charge the car given I have no off-street parking so I can't charge at home. (Yeah yeah, you'd think I'd have checked that first, but no I'm just not that sensible). Over the week I convinced myself I could do it, I ordered RFID cards for various schemes, signed up with a number of services, and then, on Friday last week, I drove down to a hotel near the dealership and had a fitful night's sleep.

I rocked up to the dealership exactly as they opened for business, shook the hand of the very helpful salesman who had gone through the purchase process with me over the phone during the week, and got to see the car. Instant want coursed through me as I sat in it and decided "Yes, this somehow feels right".

I took the car for about a 45 minute test drive just to see how it felt relative to the plug-in hybrid I'd driven the week before and it was like night and day. The BEV felt so much better to drive. I was hooked. Back to the dealership and we began the paperwork. Emptying Gunther of all of the bits and bobs scattered throughout his nooks and crannies took a while and gave me a chance to say goodbye to a car which, on reflection, had actually been a pleasure to own, even when its expensive things went wrong, more than once. But once I'd closed the Passat for the last time, and handed the keys over, it was quite a bittersweet moment as the salesman drove off in what still felt like my car, despite (by this point) it not being so.

Sitting in the Ioniq though, I headed off for the 200 mile journey back home. With about 90% charge left after the test drive, I had two stops planned at rapid chargers and I headed toward my first.

Unfortunately disaster struck: the rapid (50kW) charger refused to initialise, and I ended up with my car on the slower (7kW) charger to get enough juice into it to drive on to the next rapid-charger-enabled service station. When I got the message that my maximum charge period (45m) had elapsed, I headed back to the car to discover I couldn't persuade the cable to unlock from the car. Much hassle later, an AA man came and together we learned that it takes two to tango: one person to pull the emergency release in the boot, the other to then unplug the cable.

Armed with this knowledge, I headed on my way to a rapid charger I'd found on the map which wasn't run by the same company. Vainly hoping that this would work better, I plugged the car in, set the charger going, and headed into the adjacent shop for a rest break. I went back to the car about 20 minutes later to see the charger wasn't running. Horror of horrors. I imagined maybe some nasty little oik had pressed 'stop' so I started the charger up again, and sat in the car to read my book. After about 3 minutes, the charge stopped. Turns out that charger was slightly iffy and couldn't cope with the charge current and kept emergency-stopping as a result. The lovely lady I spoke to about it directed me to a nearby (12 miles or so, easily done with the charge I had) charger in the grounds of a gorgeous chateau hotel. That one worked perfectly and I filled up. I drove on to my second planned stop and that charge went perfectly too. In fact, every charge since has gone flawlessly. So perhaps my baptism of failed charges has hardened me to the problems with owning a BEV.

I've spent the past few days trying different charge points around Manchester enjoying my free charge capability, and trying different names for the car before I finally settled on 석진 which is a reasonable Korean boy's name (since the car is Korean I even got to set that as the bluetooth ID) and it's roughly pronounced sock/gin which are two wonderful things in life.

I'm currently sat in a pub, eating a burger, enjoying myself while 석진 suckles on the teat of "free" electricity to make up for the fact that I've developed a non-trivial habit of leaving Audi drivers in the dust at traffic lights.

Further updates may happen as I play with Android Auto and other toys in an attempt to eventually be able to ask the car to please "freeze my buttocks" (a feature it has in the form of air-conditioned seats.)

Categories: LUG Community Blogs

Mick Morgan: multilingual chat

Planet ALUG - Sat, 14/10/2017 - 15:29

I use email fairly extensively for my public communication but I use XMPP (with suitable end-to-end encryption) for my private, personal communication. And I use my own XMPP server to facilitate this. But as I have mentioned in previous posts my family and many of my friends insist on using proprietary variants of this open standard (facebook, whatsapp etc. ad nauseam). I was thus amused to note that I am not alone in having difficulty in keeping track of “which of my contacts use which chat systems“.

(My thanks, as ever, to Randall Munroe over at XKCD.)

I must find a client which can handle all of my messaging systems. Better yet, I’d like one which worked, and seamlessly synchronised, across my mobile devices and my linux desktop. Even better again, such a client should offer simple (i.e. easy to use) e-to-e crypto and use an open server platform which I can manage myself.

Proprietary systems suck.

Categories: LUG Community Blogs

Chris Lamb: python-gfshare: Secret sharing in Python

Planet ALUG - Sat, 07/10/2017 - 10:12

I've just released python-gfshare, a Python library that implements Shamir’s method for secret sharing, a technique to split a "secret" into multiple parts.

A given number of those parts are then needed to recover the original file, but any smaller combination of parts is useless to an attacker.

For instance, you might split a GPG key into a “3-of-5” share, putting one share on each of three computers and two shares on a USB memory stick. You can then use the GPG key on any of those three computers using the memory stick.

If the memory stick is lost you can ultimately recover the key by bringing the three computers back together again.

For example:

$ pip install gfshare

>>> import gfshare
>>> shares = gfshare.split(3, 5, b"secret")
>>> shares
{104: b'1\x9cQ\xd8\xd3\xaf',
 164: b'\x15\xa4\xcf7R\xd2',
 171: b'>\xf5*\xce\xa2\xe2',
 173: b'd\xd1\xaaR\xa5\x1d',
 183: b'\x0c\xb4Y\x8apC'}
>>> gfshare.combine(shares)
b"secret"

After removing two "shares" we can still reconstruct the secret as we have 3 out of the 5 originals:

>>> del shares[104]
>>> del shares[171]
>>> gfshare.combine(shares)
b"secret"

Under the hood it uses Daniel Silverstone’s libgfshare library. The source code is available on GitHub as is the documentation.

Patches welcome.

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): F/LOSS (in)activity, September 2017

Planet ALUG - Wed, 04/10/2017 - 12:53

In the interests of keeping myself "honest" regarding F/LOSS activity, here's a report; sadly it's not very good.

Unfortunately, September was a poor month for me in terms of motivation and energy for F/LOSS work. I did some amount of Gitano work, merging a patch from Richard Ipsum for help text of the config command. I also submitted another patch to the STM32F103xx Rust repository, though it wasn't a particularly big thing. Otherwise I've been relatively quiet on the Rust/USB stuff and have otherwise kept away from projects.

Sometimes one needs to take a step away from things in order to recuperate and care for oneself rather than the various demands on one's time. This is something I had been feeling I needed for a while, and with a lack of motivation toward the start of the month I gave myself permission to take a short break.

Next weekend is the next Gitano developer day and I hope to pick up my activity again then, so I should have more to report for October.

Categories: LUG Community Blogs

Chris Lamb: Free software activities in September 2017

Planet ALUG - Sat, 30/09/2017 - 18:31

Here is my monthly update covering what I have been doing in the free software world in September 2017 (previous month):

  • Submitted a pull request to Quadrapassel (the Gnome version of Tetris) to start a new game when the pause button is pressed outside of a game. This means you would no longer have to use the mouse to start a new game. [...]
  • Made a large number of improvements to AptFS — my FUSE-based filesystem that provides a view on unpacked Debian source packages as regular folders — including moving away from manual parsing of package lists [...] and numerous code tidying/refactoring changes.
  • Sent a small patch to django-sitetree, a Django library for menu and breadcrumb navigation elements to not mask test exit codes from the surrounding shell. [...]
  • Updated travis.debian.net, my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds:
    • Add support for "sloppy" backports. Thanks to Bernd Zeimetz for the idea and ongoing testing. [...]
    • Merged a pull request from James McCoy to pass DEB_BUILD_PROFILES through to the build. [...]
    • Workaround Travis CI's HTTP proxy which does not appear to support SRV records. [...]
    • Run debc from devscripts if the build was successful [...] and output the .buildinfo file if it exists [...].
  • Fixed a few issues in local-debian-mirror, my package to easily maintain and customise a local Debian mirror via the DebConf configuration tool:
    • Fix an issue where file permissions from the remote could result in a local archive that was impossible to access. [...]
    • Clear out empty directories on the local repository. [...]
  • Updated django-staticfiles-dotd, my Django staticfiles adaptor to concatenate static media in .d-style directories, to support Python 3.x by using bytes objects (commit) and move away from monkeypatch as it does not have a Python 3.x port yet (commit).
  • I also posted a short essay to my blog entitled "Ask the Dumb Questions" as well as provided an update on the latest Lintian release.
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

  • Published a short blog post about how to determine which packages on your system are reproducible. [...]
  • Submitted a pull request for Numpy to make the generated config.py files reproducible. [...]
  • Provided a patch to GTK upstream to ensure the immodules.cache files are reproducible. [...]
  • Within Debian:
    • Updated isdebianreproducibleyet.com, moving it to HTTPS, adding cachebusting as well as keeping the number up-to-date.
    • Submitted the following patches to fix reproducibility-related toolchain issues:
      • gdk-pixbuf: Make the output of gdk-pixbuf-query-loaders reproducible. (#875704)
      • texlive-bin: Make PDF IDs reproducible. (#874102)
    • Submitted a patch to fix a reproducibility issue in doit.
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Chaired our monthly IRC meeting. [...]
  • Worked on publishing our weekly reports. (#123, #124, #125, #126 & #127)


I also made the following changes to our tooling:

reproducible-check

reproducible-check is our script to determine which packages actually installed on your system are reproducible or not.

  • Handle multi-architecture systems correctly. (#875887)
  • Use the "restricted" data file to mask transient issues. (#875861)
  • Expire the cache file after one day and base the local cache filename on the remote name. [...] [...]

I also blogged about this utility. [...]

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Filed an issue attempting to identify the causes behind an increased number of timeouts visible in our CI infrastructure, including running a number of benchmarks of recent versions. (#875324)
  • New features:
    • Add "binwalking" support to analyse concatenated CPIO archives such as initramfs images. (#820631).
    • Print a message if we are reading data from standard input. [...]
  • Bug fixes:
    • Loosen matching of file(1)'s output to ensure we correctly also match TTF files under file version 5.32. [...]
    • Correct references to path_apparent_size in comparators.utils.file and self.buf in diffoscope.diff. [...] [...]
  • Testing:
    • Make failing some critical flake8 tests result in a failed build. [...]
    • Check we identify all CPIO fixtures. [...]
  • Misc:
    • No need for try-assert-except block in setup.py. [...]
    • Compare types with identity not equality. [...] [...]
    • Use logging.py's lazy argument interpolation. [...]
    • Remove unused imports. [...]
    • Numerous PEP8, flake8, whitespace, other cosmetic tidy-ups.

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Log which handler processed a file. (#876140). [...]

disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.



Debian

My activities as the current Debian Project Leader are covered in my monthly "Bits from the DPL" email to the debian-devel-announce mailing list.

Lintian

I made a large number of changes to Lintian, the static analysis tool for Debian packages. It reports on various errors, omissions and general quality-assurance issues to maintainers:

I also blogged specifically about the Lintian 2.5.54 release.

Patches contributed
  • debconf: Please add a context manager to debconf.py. (#877096)
  • nm.debian.org: Add pronouns to ALL_STATUS_DESC. (#875128)
  • user-setup: Please drop set_special_users hack added for "the convenience of heavy testers". (#875909)
  • postgresql-common: Please update README.Debian for PostgreSQL 10. (#876438)
  • django-sitetree: Should not mask test failures. (#877321)
  • charmtimetracker:
    • Missing binary dependency on libqt5sql5-sqlite. (#873918)
    • Please drop "Cross-Platform" from package description. (#873917)

I also submitted 5 patches for packages with incorrect calls to find(1) in debian/rules against hamster-applet, libkml, pyferret, python-gssapi & roundcube.

Debian LTS

This month I have been paid to work 15¾ hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Documented an example usage of autopkgtests to test security changes.
  • Issued DLA 1084-1 and DLA 1085-1 for libidn and libidn2-0 to fix integer overflow vulnerabilities in Punycode handling.
  • Issued DLA 1091-1 for unrar-free to prevent a directory traversal vulnerability from a specially-crafted .rar archive. This update introduces a regression test.
  • Issued DLA 1092-1 for libarchive to prevent malicious .xar archives causing a denial of service via a heap-based buffer over-read.
  • Issued DLA 1096-1 for wordpress-shibboleth, correcting a cross-site scripting vulnerability in the Shibboleth identity provider module.
Uploads
  • python-django:
    • 1.11.5-1 — New upstream security release. (#874415)
    • 1.11.5-2 — Apply upstream patch to fix QuerySet.defer() with "super" and "subclass" fields. (#876816)
    • 2.0~alpha1-2 — New upstream alpha release of Django 2.0, dropping support for Python 2.x.
  • redis:
    • 4.0.2-1 — New upstream release.
    • 4.0.2-2 — Update 0004-redis-check-rdb autopkgtest test to ensure that the redis.rdb file exists before testing against it.
    • 4.0.2-2~bpo9+1 — Upload to stretch-backports.
  • aptfs (0.11.0-1) — New upstream release, moving away from using /var/lib/apt/lists internals. Thanks to Julian Andres Klode for a helpful bug report. (#874765)
  • lintian (2.5.53, 2.5.54) — New upstream releases. (Documented in more detail above.)
  • bfs (1.1.2-1) — New upstream release.
  • docbook-to-man (1:2.0.0-39) — Tighten autopkgtests and enable testing via travis.debian.net.
  • python-daiquiri (1.3.0-1) — New upstream release.

I also made the following non-maintainer uploads (NMUs):

  • vimoutliner (0.3.4+pristine-9.3):
    • Make the build reproducible. (#776369)
    • Expand placeholders in Debian.README. (#575142, #725634)
    • Recommend that the ftplugin is enabled. (#603115)
    • Correct "is not enable" typo.
  • bittornado (0.3.18-10.3):
    • Make the build reproducible. (#796212).
    • Add missing Build-Depends on dh-python.
  • dtc-xen (0.5.17-1.1):
    • Make the build reproducible. (#777322)
    • Add missing Build-Depends on dh-python.
  • dict-gazetteer2k (1.0.0-5.4):
    • Make the build reproducible. (#776376).
    • Override the empty-binary-package Lintian warning to avoid a dak autoreject.
  • cgilib (0.6-1.1) — Make the build reproducible. (#776935)
  • dhcping (1.2-4.2) — Make the build reproducible. (#777320)
  • dict-moby-thesaurus (1.0-6.4) — Make the build reproducible. (#776375)
  • dtaus (0.9-1.1) — Make the build reproducible. (#777321)
  • fastforward (1:0.51-3.2) — Make the build reproducible. (#776972)
  • wily (0.13.41-7.3) — Make the build reproducible. (#777360)
Debian bugs filed
  • clipit: Please choose a sensible startup default in "live" mode. (#875903)
  • git-buildpackage: Please add a --reset option to gbp pull. (#875852)
  • bluez: Please default Device "friendly name" to hostname without domain. (#874094)
  • bugs.debian.org: Please explicitly link to {packages,tracker}.debian.org. (#876746)
  • Requests for packaging:
    • selfspy — log everything you do on the computer. (#873955)
    • shoogle — use the Google API from the shell. (#873916)
FTP Team

As a Debian FTP assistant I ACCEPTed 86 packages: bgw-replstatus, build-essential, caja-admin, caja-rename, calamares, cdiff, cockpit, colorized-logs, comptext, comptty, copyq, django-allauth, django-paintstore, django-q, django-test-without-migrations, docker-runc, emacs-db, emacs-uuid, esxml, fast5, flake8-docstrings, gcc-6-doc, gcc-7-doc, gcc-8, golang-github-go-logfmt-logfmt, golang-github-google-go-cmp, golang-github-nightlyone-lockfile, golang-github-oklog-ulid, golang-pault-go-macchanger, h2o, inhomog, ip4r, ldc, libayatana-appindicator, libbson-perl, libencoding-fixlatin-perl, libfile-monitor-lite-perl, libhtml-restrict-perl, libmojo-rabbitmq-client-perl, libmoosex-types-laxnum-perl, libparse-mime-perl, libplack-test-agent-perl, libpod-projectdocs-perl, libregexp-pattern-license-perl, libstring-trim-perl, libtext-simpletable-autowidth-perl, libvirt, linux, mac-fdisk, myspell-sq, node-coveralls, node-module-deps, nov-el, owncloud-client, pantomime-clojure, pg-dirtyread, pgfincore, pgpool2, pgsql-asn1oid, phpliteadmin, powerlevel9k, pyjokes, python-evdev, python-oslo.db, python-pygal, python-wsaccel, python3.7, r-cran-bindrcpp, r-cran-dotcall64, r-cran-glue, r-cran-gtable, r-cran-pkgconfig, r-cran-rlang, r-cran-spatstat.utils, resolvconf-admin, retro-gtk, ring-ssl-clojure, robot-detection, rpy2-2.8, ruby-hocon, sass-stylesheets-compass, selinux-dbus, selinux-python, statsmodels, webkit2-sharp & weston.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files: comptext, comptty, ldc & python-oslo.concurrency.

Categories: LUG Community Blogs

Mick Morgan: geeks rule

Planet ALUG - Sat, 30/09/2017 - 14:28

Well, sliderule, actually.

The ‘net is a truly wondrous space. I can’t recall exactly how I stumbled across the “International Sliderule Museum” but it is such a wonderful resource devoted to a tool which most people under the age of 40 will never have used that I just had to post a link to it.

Enjoy.

Categories: LUG Community Blogs

Chris Lamb: Lintian: We are all Perl developers now

Planet ALUG - Mon, 25/09/2017 - 16:26

Lintian is a static analysis tool for Debian packages, reporting on various errors, omissions and general quality-assurance issues to maintainers.

I've previously written about my exploits with Lintian as well as authoring a short tutorial on how to write your own Lintian check.

Anyway, I recently uploaded version 2.5.53, about two months since the previous release. The biggest changes you may notice are support for the latest version of the Debian Policy as well as the addition of checks to encourage the migration to Python 3.

Thanks to all who contributed patches, code review and bug reports to this release. The full changelog is as follows:

lintian (2.5.53) unstable; urgency=medium

  The "we are all Perl developers now" release.

  * Summary of tag changes:
    + Added:
      - alternatively-build-depends-on-python-sphinx-and-python3-sphinx
      - build-depends-on-python-sphinx-only
      - dependency-on-python-version-marked-for-end-of-life
      - maintainer-script-interpreter
      - missing-call-to-dpkg-maintscript-helper
      - node-package-install-in-nodejs-rootdir
      - override-file-in-wrong-package
      - package-installs-java-bytecode
      - python-foo-but-no-python3-foo
      - script-needs-depends-on-sensible-utils
      - script-uses-deprecated-nodejs-location
      - transitional-package-should-be-oldlibs-optional
      - unnecessary-testsuite-autopkgtest-header
      - vcs-browser-links-to-empty-view
    + Removed:
      - debug-package-should-be-priority-extra
      - missing-classpath
      - transitional-package-should-be-oldlibs-extra

  * checks/apache2.pm:
    + [CL] Fix an apache2-unparsable-dependency false positive by allowing periods (".") in dependency names. (Closes: #873701)
  * checks/binaries.pm:
    + [CL] Apply patches from Guillem Jover & Boud Roukema to improve the description of the binary-file-built-without-LFS-support tag. (Closes: #874078)
  * checks/changes.{pm,desc}:
    + [CL] Ignore DFSG-repacked packages when checking for upstream source tarball signatures as they will never match by definition. (Closes: #871957)
    + [CL] Downgrade severity of orig-tarball-missing-upstream-signature from "E:" to "W:" as many common tools do not make including the signatures easy enough right now. (Closes: #870722, #870069)
    + [CL] Expand the explanation of the orig-tarball-missing-upstream-signature tag to include the location of where dpkg-source will look. Thanks to Theodore Ts'o for the suggestion.
  * checks/copyright-file.pm:
    + [CL] Address a number of issues in copyright-year-in-future:
      - Prevent false positives in port numbers, email addresses, ISO standard numbers and matching specific and general street addresses. (Closes: #869788)
      - Match all violating years in a line, not just the first (eg. "2000-2107").
      - Ignore meta copyright statements such as "Original Author". Thanks to Thorsten Alteholz for the bug report. (Closes: #873323)
      - Expand testsuite.
  * checks/cruft.{pm,desc}:
    + [CL] Downgrade severity of file-contains-fixme-placeholder tag from "important" (ie. "E:") to "wishlist" (ie. "I:"). Thanks to Gregor Herrmann for the suggestion.
    + [CL] Apply patch from Alex Muntada (alexm) to use "substr" instead of "substring" in mentions-deprecated-usr-lib-perl5-directory's description. (Closes: #871767)
    + [CL] Don't check copyright_hints file for FIXME placeholders. (Closes: #872843)
    + [CL] Don't match quoted "FIXME" variants as they are almost always deliberate. Thanks to Adrian Bunk for the report. (Closes: #870199)
    + [CL] Avoid false positives in missing source checks for "CSS Browser Selector". (Closes: #874381)
  * checks/debhelper.pm:
    + [CL] Prevent a false positive of missing-build-dependency-for-dh_-command that can be exposed by following the advice for the recently added useless-autoreconf-build-depends tag. (Closes: #869541)
  * checks/debian-readme.{pm,desc}:
    + [CL] Ensure readme-debian-contains-debmake-template also checks for templates "Automatically generated by debmake".
  * checks/description.{desc,pm}:
    + [CL] Clarify explanation of description-starts-with-leading-spaces tag. Thanks to Taylor Kline for the report and patch. (Closes: #849622)
    + [NT] Skip capitalization-error-in-description-synopsis for auto-generated packages (such as dbgsym packages).
  * checks/fields.{desc,pm}:
    + [CL] Ensure that python3-foo packages have "Section: python", not just python2-foo. (Closes: #870272)
    + [RG] Do no longer require debug packages to be priority extra.
    + [BR] Use Lintian::Data for name/section mapping.
    + [CL] Check for packages including "?rev=0&sc=0" in Vcs-Browser. (Closes: #681713)
    + [NT] Transitional packages should now be "oldlibs/optional" rather than "oldlibs/extra". The related tag has been renamed accordingly.
  * checks/filename-length.pm:
    + [NT] Skip the check on auto-generated binary packages (such as dbgsym packages).
  * checks/files.{pm,desc}:
    + [BR] Avoid privacy-breach-generic false positives for legal.xml.
    + [BR] Detect install of node package under /usr/lib/nodejs/[^/]*$
    + [CL] Check for packages shipping compiled Java class files. Thanks Carnë Draug. (Closes: #873211)
    + [BR] Privacy breach is no longer experimental.
  * checks/init.d.desc:
    + [RG] Do not recommend a versioned dependency on lsb-base in init.d-script-needs-depends-on-lsb-base. (Closes: #847144)
  * checks/java.pm:
    + [CL] Additionally consider .cljc files as code to avoid false-positive codeless-jar warnings. (Closes: #870649)
    + [CL] Drop problematic missing-classpath check. (Closes: #857123)
  * checks/menu-format.desc:
    + [CL] Prevent false positives in desktop-entry-lacks-keywords-entry for "Link" and "Directory" .desktop files. (Closes: #873702)
  * checks/python.{pm,desc}:
    + [CL] Split out Python checks from "scripts" check to a new, source check of type "source".
    + [CL] Check for python-foo without corresponding python3-foo packages to assist in Python 2.x deprecation. (Closes: #870681)
    + [CL] Check for packages that Build-Depend on python-sphinx only. (Closes: #870730)
    + [CL] Check for packages that alternatively Build-Depend on the Python 2 and Python 3 versions of Sphinx. (Closes: #870758)
    + [CL] Check for binary packages that depend on Python 2.x. (Closes: #870822)
  * checks/scripts.pm:
    + [CL] Correct false positives in unconditional-use-of-dpkg-statoverride by detecting "if !" as a valid shell prefix. (Closes: #869587)
    + [CL] Check for missing calls to dpkg-maintscript-helper(1) in maintainer scripts. (Closes: #872042)
    + [CL] Check for packages using sensible-utils without declaring a dependency after its split from debianutils. (Closes: #872611)
    + [CL] Warn about scripts using "nodejs" as an interpreter now that nodejs provides /usr/bin/node. (Closes: #873096)
    + [BR] Add a statistic tag giving interpreter.
  * checks/testsuite.{desc,pm}:
    + [CL] Remove recommendations to add a "Testsuite: autopkgtest" field to debian/control as it is added when needed by dpkg-source(1) since dpkg 1.17.1. (Closes: #865531)
    + [CL] Warn if we see an unnecessary "Testsuite: autopkgtest" header in debian/control.
    + [NT] Recognise "autopkgtest-pkg-go" as a valid test suite.
    + [CL] Recognise "autopkgtest-pkg-elpa" as a valid test suite. (Closes: #873458)
    + [CL] Recognise "autopkgtest-pkg-octave" as a valid test suite. (Closes: #875985)
    + [CL] Update the description of unknown-testsuite to reflect that "autopkgtest" is not the only valid value; the referenced URL is out-of-date (filed as #876008). (Closes: #876003)
  * data/binaries/embedded-libs:
    + [RG] Detect embedded copies of heimdal, libgxps, libquicktime, libsass, libytnef, and taglib.
    + [RG] Use an additional string to detect embedded copies of openjpeg2. (Closes: #762956)
  * data/fields/name_section_mappings:
    + [BR] node- package section is javascript.
    + [CL] Apply patch from Guillem Jover to add more section mappings. (Closes: #874121)
  * data/fields/obsolete-packages:
    + [MR] Add dh-systemd. (Closes: #872076)
  * data/fields/perl-provides:
    + [CL] Refresh perl provides.
  * data/fields/virtual-packages:
    + [CL] Update data file from archive. This fixes a false positive for "bacula-director". (Closes: #835120)
  * data/files/obsolete-paths:
    + [CL] Add note to /etc/bash_completion.d entry regarding stricter filename requirements. (Closes: #814599)
  * data/files/privacy-breaker-websites:
    + [BR] Detect custom donation logos like apache.
    + [BR] Detect generic counter website.
  * data/standards-version/release-dates:
    + [CL] Add 4.0.1 and 4.1.0 as known standards versions. (Closes: #875509)
  * debian/control:
    + [CL] Mention Debian Policy v4.1.0 in the description.
    + [CL] Add myself to Uploaders.
    + [CL] Drop unnecessary "Testsuite: autopkgtest"; this is implied from debian/tests/control existing.
  * commands/info.pm:
    + [CL] Add a --list-tags option to print all tags Lintian knows about. Thanks to Rajendra Gokhale for the suggestion. (Closes: #779675)
  * commands/lintian.pm:
    + [CL] Apply patch from Maia Everett to avoid British spelling when using en_US locale. (Closes: #868897)
  * lib/Lintian/Check.pm:
    + [CL] Stop emitting {maintainer,uploader}-address-causes-mail-loops for @packages.debian.org addresses. (Closes: #871575)
  * lib/Lintian/Collect/Binary.pm:
    + [NT] Introduce an "auto-generated" argument for "is_pkg_class".
  * lib/Lintian/Data.pm:
    + [CL] Modify Lintian::Data's "all" to always return keys in insertion order, dropping dependency on libtie-ixhash-perl.
  * helpers/coll/objdump-info-helper:
    + [CL] Apply patch from Steve Langasek to accommodate binutils 2.29 outputting symbols in a different format on ppc64el. (Closes: #869750)
  * t/tests/fields-perl-provides/tags:
    + [CL] Update expected output to match new Perl provides.
  * t/tests/files-privacybreach/*:
    + [CL] Add explicit test for packages including external fonts via the Google Font API. Thanks to Ian Jackson for the report. (Closes: #873434)
    + [CL] Add explicit test for packages including external fonts via the Typekit API via <script/> HTML tags.
  * t/tests/*/desc:
    + [CL] Add missing entries in "Test-For" fields to make development/testing workflow less error-prone.
  * private/generate-tag-summary:
    + [CL] git-describe(1) will usually emit 7 hexadecimal digits as the abbreviated object name. However, as this can be user-dependent, pass --abbrev=0 to ensure it does not vary between systems. This also means we do not need to strip it ourselves.
  * private/refresh-*:
    + [CL] Use deb.debian.org as the default mirror.
    + [CL] Update locations of Contents-<arch> files; they are now namespaced by distribution (eg. "main").

 -- Chris Lamb <lamby@debian.org>  Wed, 20 Sep 2017 09:25:06 +0100

Categories: LUG Community Blogs

Andy Smith: Giving Cinema Paradiso a try

Planet HantsLUG - Sat, 23/09/2017 - 23:38
Farewell, LoveFiLM

I’ve been a customer of LoveFiLM for something like 12 years—since before they were owned by Amazon. In their original incarnation they were great: very cheap, and titles very often arrived in exactly the order you specified, i.e. they often managed to send the thing from the very top of the list.

In 2011 they got bought by Amazon and I was initially a bit concerned, but to be honest Amazon have run it well. The single list disappeared and was replaced by three priority lists: high, normal and low, and then a list of things that haven’t yet been released. New rentals were supposed to almost always come from the high priority list (as long as you had enough titles on there) but in a completely unpredictable order. Though of course they would keep multi-disc box sets together, and send lower-numbered seasons before later seasons.

Amazon have now announced that they’re shutting LoveFiLM by Post down at the end of October, which I think is a shame, as it is a service I still enjoy.

It was inevitable I suppose due to the increasing popularity of streaming and downloads, and although I’m perfectly able to do the streaming and download thing, receiving discs by post still works for me.

I am used to receiving mockery for consuming some of my entertainment on little plastic discs that a human being has to physically transport to my residence, but LoveFiLM’s service was still cheap, the selection was very good, things could be rented as soon as they were available on disc, and the passive nature of just making a list and having the things sent to me worked well for me.

Cinema Paradiso

My first thought was that that was it for the disc-by-post rental model in the UK. That progress had left it behind. But very quickly people pointed me to Cinema Paradiso. After a quick look around I’ve decided to give it a try and so here are my initial thoughts.

Pricing

At a casual glance the pricing is slightly worse than LoveFiLM’s. I was paying £6.99 a month for 2 discs at home, unlimited rental per month. £6.98 at Cinema Paradiso gets you 2 discs at home but only 4 rentals per month.

I went back through my LoveFiLM rental history for the last year and found there were only 2 months where I managed to rent more than 4 discs, and those times I rented 5 and 6 discs respectively. Realistically it doesn’t seem like 4 discs per month will be much of a restriction to me.

Annoyingly, Cinema Paradiso have a 2 week trial period but only if you sign up to the £9.98 subscription (6 discs a month). You’d have to remember to downgrade to the cheaper subscriptions after 2 weeks, if that’s all you wanted.

Selection

I was pleasantly surprised at how good the selection is at Cinema Paradiso. Not only did they have every title that is currently on my LoveFiLM rental list (96 titles), but they also had a few things that LoveFiLM thinks haven’t been released yet.

I’m not going to claim that my tastes are particularly niche, but there are a few foreign language films and some anime in there, and release dates range from the 70s to 2017.

Manual approval

It seems that new Cinema Paradiso signups need to be manually approved, and this happens only on weekdays between 8am and midday. I’ve signed up on a Saturday evening so nothing will get sent out until Monday I suppose.

It’s probably not a big deal as we’re talking about the postal service here so even with LoveFiLM nothing would get posted out until Monday anyway. It is a little jarring after moving away from the behemoth that is Amazon though, and serves as a reminder that Cinema Paradiso is a much smaller company.

Searching for titles

The search feature is okay. It provides suggestions as you type, but if your title is obscure then it may not appear in the list of suggestions at all, and you need to submit the search and look through the longer list that appears.

A slight niggle is that if you have moused over any of the initial suggestions it replaces your text with that suggestion, so if your title isn’t amongst the suggestions you now have to re-type it.

I like that it shows a rating from Rotten Tomatoes as well as from their own site’s users. LoveFiLM shows IMDB ratings which I don’t trust very much, and also Amazon ratings, which I don’t trust at all for movies or TV. Seeing some of the shockingly-low Rotten Tomatoes scores for some of my LoveFiLM titles resulted in my Cinema Paradiso list shrinking to 83 titles!

Rental list mechanics

It’s hard to tell for sure at this stage because I haven’t yet got my account approved and had any rentals, but it looks to me like the rental list mechanics are a bit clunky compared to LoveFiLM’s.

At LoveFiLM at the point of adding a new title you would choose which of the three “buckets” to put a rental in: high priority, normal priority, or low priority. Every title in those buckets was of equal priority to every other item in the same bucket. So, when adding a new title all you had to consider was whether it was high, normal or low.

Cinema Paradiso has a single big list of rentals. In some ways this might appeal because you can fine-tune what order you would like things in. But I would suggest that very few people want to put that much effort into ordering their list. Personally, when I add a new title I can cope with:

  • “I want to see this soon”
  • “I want to see this some time”
  • “I want to see this, but I’m not bothered when”

Cinema Paradiso appears to want me to say:

  • “Put this at the top, I want it immediately!”
  • “This belongs at #11, just after the 6th season of American Horror Story, but before Capitalism: A Love Story”
  • “Just stick it at the end”

I can’t find any explanation anywhere on their site as to how the selection actually works, so the logical assumption is that they go down your list from top to bottom until they find a title that you want that they have available right now. Without the three buckets to put titles in, it seems to me then that every addition will have to involve some list management unless I either want to see that title really soon, or probably never.
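Spelled out, that assumed behaviour is trivial; here is a minimal Python sketch of it, purely my guess at their selection logic rather than anything Cinema Paradiso document:

# Assumed selection logic only: walk the ordered list, top to bottom,
# and send the first title that is in stock right now.
def next_rental(wishlist, available):
    for title in wishlist:
        if title in available:
            return title
    return None  # nothing on the list is available

print(next_rental(["Title A", "Title B", "Title C"], {"Title B", "Title C"}))
# -> Title B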

I’ll have to give it a go but this mechanism seems a bit more awkward than LoveFiLM’s approach and needlessly so, because LoveFiLM’s way doesn’t make any promises about which of the titles in each bucket will come next either, nor even that it will be anything from the high priority bucket at all. Although I cannot remember a time when something has come that wasn’t from the high priority bucket.

Cinema Paradiso does let you have more than one list, and you can divide your disc allocation between lists, but I don’t think I could emulate the high/normal/low with that. Having a 2 disc allocation I’d always be getting one disc from the “high” list and one disc from the “normal” priority, which isn’t how I’d want that to work.

Let’s see how it goes.

Referral

I did not know when I signed up that there was a referral scheme which is a shame because I do know some people already using Cinema Paradiso. If you’re going to sign up then please use my referral link. I will get a ⅙ reduction in rental fees for each person that does.

Categories: LUG Community Blogs

Andy Smith: Tricky issues when upgrading to the GoCardless “Pro” API

Planet HantsLUG - Thu, 21/09/2017 - 21:06
Background

Since 2012 BitFolk has been using GoCardless as a Direct Debit payment provider. On the whole it has been a pleasant experience:

  • Their API is a pleasure to integrate against, having excellent documentation
  • Their support is responsive and knowledgeable
  • Really good sandbox environment with plenty of testing tools
  • The fees, being 1% capped at £2.00, are pretty good for any kind of payment provider (much less than PayPal, Stripe, etc.)

Of course, if I was submitting Direct Debits myself there would be no charge at all, but BitFolk is too small and my bank (Barclays) are not interested in talking to me about that.

The “Pro” API

In September 2014 GoCardless came out with a new version of their API called the “Pro API”. It made a few things nicer but didn’t come with any real new features applicable to BitFolk, and also added a minimum fee of £0.20.

The original API I’d integrated against has a 1% fee capped at £2.00, and as BitFolk’s smallest plan is £10.79 including VAT the fee would generally be £0.11. Having a £0.20 fee on these payments would represent nearly a doubling of fees for many of my payments.

So, no compelling reason to use the Pro API.

Over the years, GoCardless made more noise about their Pro API and started calling their original API the “legacy API”. I could see the way things were going. Sure enough, eventually they announced that the legacy API would be disabled on 31 October 2017. No choice but to move to the Pro API now.

Payment caps

There aren’t normally any limits on Direct Debit payments. When you let your energy supplier or council or whatever do a Direct Debit, they can empty your bank account if they like.

The Direct Debit Guarantee has very strong provisions in it for protecting the payer: essentially if you dispute anything, any time, you get your money back without question and the supplier has to pursue you for the money by other means if they still think the charge was correct. A company that repeatedly gets Direct Debit chargebacks is going to be kicked off the service by their bank or payment provider.

The original GoCardless API had the ability to set caps on the mandate which would be enforced their side. A simple “X amount per Y time period”. I thought that this would provide some comfort to customers who may not be otherwise familiar with authorising Direct Debits from small companies like BitFolk, so I made use of that feature by default.

This turned out to be a bad decision.

The main problem with this was that there was no way to change the cap. If a customer upgraded their service then I’d have to cancel their Direct Debit mandate and ask them to authorise a new one because it would cease being possible to charge them the correct amount. Authorising a new mandate was not difficult—about the same amount of work as making any sort of online payment—but asking people to do things is always a pain point.

There was a long-standing feature request with GoCardless to implement some sort of “follow this link to authorise the change” feature, but it never happened.

Payment caps and the new API

The Pro API does not support mandates with a capped amount per interval. Given that I’d already established that it was a mistake to do that, I wasn’t too bothered about that.

I’ve since discovered however that the Pro API not only does not support setting the caps, it does not have any way to query them either. This is bad because I need to use the Pro API with mandates that were created in the legacy API. And all of those have caps.

Here’s the flow I had using the legacy API.
[Flowchart: legacy payment process]

This way if the charge was coming a little too early, I could give some latitude and let it wait a couple of days until it could be charged. I’d also know if the problem was that the cap was too low. In that case there would be no choice but to cancel the customer’s mandate and ask them to authorise another one, but at least I would know exactly what the problem was.

With the Pro API, there is no way to check timings and charge caps. All I can do is make the charge, and then if it’s too soon or too much I get the same error message:

“Validation failed / exceeds mandate cap”

That’s it. It doesn’t tell me what the cap is, it doesn’t tell me if it’s because I’m charging too soon, nor if I’m charging too much. There is no way to distinguish between those situations.

Backwards compatible – sort of

GoCardless talk about the Pro API being backwards compatible to the legacy API, so that once switched I would still be able to create payments against mandates that were created using the legacy API. I would not need to get customers to re-authorise.

This is true to a point, but my use of caps per interval in the legacy API has severely restricted how compatible things are, and that’s something I wasn’t aware of. Sure, their “Guide to upgrading” does briefly mention that caps would continue to be enforced:

“Pre-authorisation mandates are not restricted, but the maximum amount and interval that you originally specified will still apply.”

That is the only mention of this issue in that entire document, and that statement would be fine by me, if there would have continued to be a way to tell which failure mode would be encountered.

Thinking that I was just misunderstanding, I asked GoCardless support about this. Their reply:

Thanks for emailing.

I’m afraid the limits aren’t exposed within the new API. The only solution as you suggest, is to try a payment and check for failure.

Apologies for the inconvenience caused here and if you have any further queries please don’t hesitate to let us know.

What now?

I am not yet sure of the best way to handle this.

The nuclear option would be to cancel all mandates and ask customers to authorise them again. I would like to avoid this if possible.

I am thinking that most customers continue to be fine on the “amount per interval” legacy mandates as long as they don’t upgrade, so I can leave them as they are until that happens. If they upgrade, or if a DD payment ever fails with “exceeds mandate cap” then I will have to cancel their mandate and ask them to authorise again. I can see if their mandate was created before ~today and advise them on the web site to cancel it and authorise it again.
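In code, the handling I have in mind looks roughly like the sketch below. It assumes the gocardless_pro Python client; the error class, the message matching and the flag_for_reauthorisation helper are all my assumptions for illustration, not documented behaviour:

import gocardless_pro

client = gocardless_pro.Client(access_token="...", environment="live")

def flag_for_reauthorisation(mandate_id):
    # Hypothetical helper: in reality, cancel the mandate and ask the
    # customer to authorise a new one via the web site.
    print(f"mandate {mandate_id} needs re-authorising")

def charge(mandate_id, pence):
    # The API reports "exceeds mandate cap" identically whether we are
    # charging too soon or too much, so all we can do is catch it.
    try:
        return client.payments.create(params={
            "amount": pence,  # value in pence
            "currency": "GBP",
            "links": {"mandate": mandate_id},
        })
    except gocardless_pro.errors.ValidationFailedError as err:
        if "exceeds mandate cap" in str(err):  # assumed message match
            flag_for_reauthorisation(mandate_id)
            return None
        raise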

Conclusion

I’m a little disappointed that GoCardless didn’t think that there would need to be a way to query mandate caps even though creating new mandates with those limits is no longer possible.

I can’t really accept that there is a good level of backwards compatibility here if there is a feature that you can’t even tell is in use until it causes a payment to fail, and even then you can’t tell which details of that feature cause the failure.

I understand why they haven’t just stopped honouring the caps: it wouldn’t be in line with the consumer-focused spirit of the Direct Debit Guarantee to alter things against customer expectations, and even sending out a notification to the customer might not be enough. I think they should have gone the other way and allowed querying of things that they are going to continue to enforce, though.

Could I have tested for this? Well, the difficulty there is that the GoCardless sandbox environment for the Pro API starts off clean, with no access to any of your legacy activity from either live or the legacy sandbox. So I couldn’t do something like the following:

  1. Create legacy mandate in legacy sandbox, with amount per interval caps
  2. Try to charge against the legacy mandate from the Pro API sandbox, exceeding the cap
  3. Observe that it fails but with no way to tell why

I did note that there didn’t seem to be attributes of the mandate endpoint that would let me know when it could be charged and what the amount left to charge was, but it didn’t set off any alarm bells. Perhaps it should have.

Also I will admit I’ve had years to switch to Pro API and am only doing it now when forced. Perhaps if I had made a start on this years ago, I’d have noted what I consider to be a deficiency, asked them to remedy it and they might have had time to do so. I don’t actually think it’s likely they would bump the API version for that though. In my defence, as I mentioned, there is nothing attractive about the Pro API for my use, and it does cost more, so no surprise I’ve been reluctant to explore it.

So, if you are scrambling to update your GoCardless integration before 31 October, do check that you are prepared for payments against capped mandates to fail.

Categories: LUG Community Blogs

Chris Lamb: Which packages on my system are reproducible?

Planet ALUG - Fri, 15/09/2017 - 08:29

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process.

As part of this project I wrote a script to determine which packages installed on your system are "reproducible" or not:

$ apt install devscripts
[…]
$ reproducible-check
[…]
W: subversion (1.9.7-2) is unreproducible (libsvn-perl, libsvn1, subversion) <https://tests.reproducible-builds.org/debian/subversion>
W: taglib (1.11.1+dfsg.1-0.1) is unreproducible (libtag1v5, libtag1v5-vanilla) <https://tests.reproducible-builds.org/debian/taglib>
W: tcltk-defaults (8.6.0+9) is unreproducible (tcl, tk) <https://tests.reproducible-builds.org/debian/tcltk-defaults>
W: tk8.6 (8.6.7-1) is unreproducible (libtk8.6, tk8.6) <https://tests.reproducible-builds.org/debian/tk8.6>
W: valgrind (1:3.13.0-1) is unreproducible <https://tests.reproducible-builds.org/debian/valgrind>
W: wavpack (5.1.0-2) is unreproducible (libwavpack1) <https://tests.reproducible-builds.org/debian/wavpack>
W: x265 (2.5-2) is unreproducible (libx265-130) <https://tests.reproducible-builds.org/debian/x265>
W: xen (4.8.1-1+deb9u1) is unreproducible (libxen-4.8, libxenstore3.0) <https://tests.reproducible-builds.org/debian/xen>
W: xmlstarlet (1.6.1-2) is unreproducible <https://tests.reproducible-builds.org/debian/xmlstarlet>
W: xorg-server (2:1.19.3-2) is unreproducible (xserver-xephyr, xserver-xorg-core) <https://tests.reproducible-builds.org/debian/xorg-server>
282/4494 (6.28%) of installed binary packages are unreproducible.

Whether a package is "reproducible" or not is determined by querying the Debian Reproducible Builds testing framework.



The --raw command-line argument lets you play with the data in more detail. For example, you can see who maintains your unreproducible packages:

$ reproducible-check --raw | dd-list --stdin
Alec Leamas <leamas.alec@gmail.com>
   lirc (U)

Alessandro Ghedini <ghedo@debian.org>
   valgrind

Alessio Treglia <alessio@debian.org>
   fluidsynth (U)
   libsoxr (U)
[…]

reproducible-check is available in devscripts since version 2.17.10, which landed in Debian unstable on 14th September 2017.

Categories: LUG Community Blogs

Chris Lamb: Ask the dumb questions

Planet ALUG - Tue, 05/09/2017 - 11:51

In the same way it is vital to ask the "smart questions", it is equally important to ask the dumb ones.

Whilst your milieu might be, say, comparing and contrasting the finer details of commission structures between bond brokers, if you aren't quite sure of the topic, be bold and confident enough to ask: I'm sorry, but what actually is a bond?

Don't consider this to be an all-or-nothing affair. After all, you might have at least some idea about what a bond is. Rather, adjust your tolerance so that you also ask for clarification when you are merely slightly unsure about a concept, term or reference.

So why do this? Most obviously, you are learning something and expanding your knowledge about the world, but a clarification can avoid problems later if you were mistaken in your assumptions.

Not only that, asking "can you explain that?" or admitting "I don't follow…" is being honest with yourself, and the vulnerability you show in admitting your own ignorance opens you up to others, leading to closer friendships and working relationships.

We clearly have a tendency to want to come across as knowledgeable or, perhaps more honestly, we don't want to appear dumb or uninformed as it would bruise our ego. But the precise opposite is true: nodding and muddling your way through conversations you only partly understand is unlikely to cultivate true feelings of self-respect and a healthy self-esteem.

Since adopting this approach I have found I've rarely derailed the conversation. In fact, speaking up not only encourages and flatters others by showing that you care about their subject, it has invariably led to related matters that are not only more inclusive but genuinely novel and interesting to all present.

So push through the voice in your head and be that elephant in the room. After all, you might not be the only person thinking it. If it helps, try reframing it to yourself as helping others…

You'll be finding it effortless soon enough. Indeed, asking the dumb question is actually a positive feedback loop where each question you pose makes it easier to ask others in the future. Excellence is not an act, but a habit.

Categories: LUG Community Blogs

Andy Smith: When is a 64-bit counter not a 64-bit counter?

Planet HantsLUG - Sun, 03/09/2017 - 20:17

…when you run a Xen device backend (commonly dom0) on a kernel version earlier than 4.10, e.g. Debian stable.

TL;DR

A bug meant that Xen netback devices used 32-bit byte counters; the fix was released in kernel version 4.10.

On a kernel with that bug you will see counter wraps much sooner than you would expect, and if the interface is doing enough traffic for there to be multiple wraps in 5 minutes, your monitoring will no longer be accurate.

The problem

A high-bandwidth VPS customer reported that the bandwidth figures presented by BitFolk’s monitoring bore no resemblance to their own statistics gathered from inside their VPS. Their figures were a lot higher.

About octet counters

The Linux kernel maintains byte/octet counters for its network interfaces. You can view them in /sys/class/net/<interface>/statistics/*_bytes.

They’re a simple count of bytes transferred, and so the count always goes up. Typically these are 64-bit unsigned integers so their maximum value would be 18,446,744,073,709,551,615 (2^64 - 1).

When you’re monitoring bandwidth use the monitoring system records the value and the timestamp. The difference in value over a known period allows the monitoring system to work out the rate.

Wrapping

Monitoring of network devices is often done using SNMP. SNMP has 32-bit and 64-bit counters.

The maximum value that can be held in a 32-bit counter is 4,294,967,295. As that is a byte count, that represents 34,359,738,368 bits or 34,359.74 megabits. Divide that by 300 (seconds in 5 minutes) and you get 114.5. Therefore if the average bandwidth is above 114.5Mbit/s for 5 minutes, you will overflow a 32-bit counter. When the counter overflows it wraps back through zero.

Wrapping a counter once is fine. We have to expect that a counter will wrap eventually, and as counters never decrease, if a new value is smaller than the previous one then we know it has wrapped and can still work out what the rate should be.

The problem comes when the counter wraps more than once. There is no way to tell how many times it has wrapped so the monitoring system will have to assume the answer is once. Once traffic reaches ~229Mbit/s the counters will be wrapping at least twice in 5 minutes and the statistics become meaningless.
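To make that concrete, here is a sketch in Python of the calculation a monitoring system performs, under the assumption that the counter has wrapped at most once between samples:

def rate_mbps(prev, cur, interval=300, bits=32):
    # Average Mbit/s between two polls of a byte counter `bits` wide;
    # the modulo arithmetic absorbs a single wrap.
    delta = (cur - prev) % (1 << bits)
    return delta * 8 / interval / 1e6

# One wrap is recoverable: just under a full counter in 5 minutes.
print(round(rate_mbps(2000000000, 1999000000), 1))  # 114.5
# Two wraps in one interval produce the same two samples as none or
# one would, so beyond that point the result is silently wrong.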

64-bit counters to the rescue

For that reason, network traffic is normally monitored using 64-bit counters. You would have to have a traffic rate of almost 492 Petabit/s to wrap a 64-bit byte counter in 5 minutes.
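Both that figure and the 114.5Mbit/s one above are easy to sanity-check in a Python shell:

>>> round(2**32 * 8 / 300 / 1e6, 2)   # Mbit/s to wrap a 32-bit byte counter in 5 minutes
114.53
>>> round(2**64 * 8 / 300 / 1e15, 1)  # Pbit/s to wrap a 64-bit byte counter in 5 minutes
491.9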

The thing is, I was already using 64-bit SNMP counters.

Examining the sysfs files

I decided to remove SNMP from the equation by going to the source of the data that SNMP uses: the kernel on the device being monitored.

As mentioned, the kernel’s interface byte counters are exposed in sysfs at /sys/class/net/<interface>/statistics/*_bytes. I dumped out those values every 10 seconds and watched them scroll in a terminal session.
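That was a quick shell loop at the time, but the Python equivalent would be something like this ("eth0" stands in for the real netback interface name):

import time

IFACE = "eth0"  # stand-in for the actual vif device

while True:
    for direction in ("rx", "tx"):
        path = "/sys/class/net/%s/statistics/%s_bytes" % (IFACE, direction)
        with open(path) as f:
            print(int(time.time()), direction, f.read().strip())
    time.sleep(10)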

What I observed was that these counters, for that particular customer, were wrapping every couple of minutes. I never observed a value greater than 8,469,862,875. That’s larger than a 32-bit counter would hold, but very close to what a 33-bit counter would hold (8,589,934,591).

64-bit counters not to the rescue

Once I realised that the kernel’s own counters were wrapping every couple of minutes inside the kernel it became clear that using 64-bit counters in SNMP was not going to help at all, and multiple wraps would be seen in 5 minutes.

What a difference a minute makes

To test the hypothesis I switched to 1-minute polling. Here’s what 12 hours of real data looks like under both 5- and 1-minute polling.

As you can see, that is a pretty dramatic difference.

The bug

By this point, I’d realised that there must be a bug in Xen’s netback driver (the thing that makes virtual network interfaces in dom0).

I went searching through the source of the kernel and found that the counters had changed from an unsigned long in kernel version 4.9 to a u64 in kernel version 4.10.

Of course, once I knew what to search for it was easy to unearth a previous bug report. If I’d found that at the time of the initial report that would have saved 2 days of investigation!

Even so, the fix for this was only committed in February of this year so, unfortunately, is not present in the kernel in use by the current Debian stable. Nor in many other current distributions.

For Xen set-ups on Debian the bug could be avoided by using a backports kernel or packaging an upstream kernel.

Or you could do 1-minute polling as that would only wrap one time at an average bandwidth of ~572Mbit/s and should be safe from multiple wraps up to ~1.1Gbit/s.
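The same sanity check as before, treating the counter as effectively 32-bit:

>>> round(2**32 * 8 / 60 / 1e6, 1)      # Mbit/s for one wrap per minute
572.7
>>> round(2**32 * 8 * 2 / 60 / 1e9, 2)  # Gbit/s before a second wrap per minute
1.15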

Inside the VPS the counters are 64-bit so it isn’t an issue for guest administrators.

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): F/LOSS activity, August 2017

Planet ALUG - Sat, 02/09/2017 - 10:00

Shockingly enough, my focus started out on Gitano once more. We managed a 1.1 release of Gitano during the Debian conference's "camp" which occurs in the week before the conference. This was a joint effort of myself, Richard Maw, and Richard Ipsum. I have to take my hat off to Richard Maw, because without his dedication to features, 1.1 would lack some stuff which Richard Ipsum proposed around ruleset support for basic readers/writers and frankly 1.1 would be a weaker release without it.

Because of the debconf situation, we didn't have a Gitano developer day which, while sad, didn't slow us down much...

  • Once again, we reviewed our current task state
  • I submitted a series which fixed our test suite for Git 2.13 which was an FTBFS bug submitted against the Debian package for Gitano. Richard Maw reviewed and merged it.
  • Richard Maw supplied a series to add testing for dangling HEAD syndrome. I reviewed and merged that.
  • Richard Maw submitted a patch to improve the auditability of the 'as' command and I reviewed and merged that.
  • Richard Ipsum submitted a patch to add reader/writer configs to ease simple project management in Gitano. I'm not proud to say that I was too busy to look at this and ended up saying it was unlikely it'd get in. Richard Maw, quite rightly, took umbrage at that and worked on the patch, eventually submitting a new series with tests, which I then felt obliged to review and eventually merged.

    This is an excellent example of how just because one person is too busy doesn't mean that a good idea should be dropped, and I am grateful to Richard Maw for getting this work mergeable and effectively guilt-tripping me into reviewing/merging it. This is a learnable moment for me and I hope to do better in the future.

  • During all that time, I was working on a plugin to support git-multimail.py in Gitano. This work ranged across hooks and caused me to spend a long time thinking about the semantics of configuration overriding etc. Fortunately I got there in the end, and with a massive review effort from Richard Maw, we got it merged into Gitano.
  • Finally I submitted a patch which caused the tests we run in Gitano to run from an 'install' directory which ensures that we catch bugs such as those which happened in earlier series where we missed out rules files for installation etc. Richard Maw reviewed and merged that.
  • And then we released the new version of Gitano and subsidiary libraries.

    There was Luxio version 13 which switched us to readdir() from readdir_r() thanks to Richard Ipsum; Gall 1.3 which contained a bunch of build cleanups, and also a revparse_single() implementation in the C code to speed things up thanks to Richard Maw; Supple 1.0.8 which improved wrapper environment cleanups thanks to Richard Ipsum, allowed paths to be baked in, which makes Nix easier to support (again thanks to Richard Ipsum), and fixed setuid handling so that Nix is easier to support (guess what? Richard Ipsum again); Lace 1.4 which now verifies definition names in allow/deny/anyof/allof and also produces better error messages from nested includes.

    And, of course, Gitano 1.1 whose changes were somewhat numerous and so you are invited to read them in the Gitano NEWS file for the release.

Not Gitano

Of course, not everything I did in August was Gitano related. In fact once I had completed the 1.1 release and uploaded everything to Debian I decided that I was going to take a break from Gitano until the next developer day. (In fact there's even some patch series still unread on the mailing list which I will get to when I start the developer day.)

I have long been interested in STM32 microcontrollers, using them in a variety of projects including the Entropy Key which some of you may remember. Jorge Aparicio was working on Cortex-M3 support (among other microcontrollers) in Rust, and he then extended that to include a realtime framework called RTFM; from there I got interested in what I might be able to do with Rust on STM32. I noticed that there weren't any pure Rust implementations of the USB device stack which would be necessary in order to make a device, programmed in Rust, appear on a USB port for a computer to control/use. This piqued my interest.

As many of my readers are aware, I am very bad at doing things without some external motivation. As such, I then immediately offered to give a talk at a conference which should be happening in November, just so that I'd be forced to get on with learning and implementing the stack. I have been chronicling that work in posts on this blog, and you're encouraged to go back and read them if you have similar interests. I'm sure that as my work progresses, I'll be doing more and more of that and less of Gitano, for at least the next two months.

To bring that into context as F/LOSS work, I did end up submitting some patches to Jorge's STM32F103xx repository to support a couple more clock configuration entries so that USB and ADCs can be set up cleanly. So at least there's that.

Categories: LUG Community Blogs

Chris Lamb: Free software activities in August 2017

Planet ALUG - Thu, 31/08/2017 - 20:39

Here is my monthly update covering what I have been doing in the free software world in August 2017 (previous month):

  • Created ZeroCoolOS, a live operating system that plays the film Hackers (1995) on a continuous loop.
  • Sent a patch for pristine-tar to allow storage of detached upstream signatures. (#871809)
  • Worked more on Lintian, a static analysis tool for Debian packages, reporting on various errors, omissions and quality-assurance issues to the maintainer (previous changes):
    • Fix an apache2-unparsable-dependency false positive by allowing periods in dependency names. (#873701)
    • Ignore "repacked" packages when checking for upstream source tarball signatures as they will never match.
    • Downgrade the severity of orig-tarball-missing-upstream-signature. (#870722)
    • From a suggestion by Theodore Ts'o, expand the explanation of orig-tarball-missing-upstream-signature to include the location of where dpkg-source looks.
    • Address a number of issues in the copyright-year-in-future tag including preventing false positives in port numbers, email addresses, ISO standard numbers and street addresses (#869788), as well as "meta" or testing statements (#873323). In addition, report all violating years in a line and expand the testsuite.
    • Don't match quoted "FIXME" variants of file-contains-fixme-placeholder (#870199), avoid checking copyright_hints files (#872843) and downgrade the tag's severity.
    • Apply a patch from Alex Muntada to recommend "substr" over "substring" in mentions-deprecated-usr-lib-perl5-directory. (#871767)
    • Prevent missing-build-dependency-for-dh_-command false positives exposed by following the advice in useless-autoreconf-build-depends. (#869541)
    • Ensure readme-debian-contains-debmake-template also checks for files containing "Automatically generated by debmake".
    • Check python3-foo packages have a Section: python, not just python2-foo. (#870272)
    • Check for packages shipping compiled Java class files. (#873211)
    • Additionally consider .cljc files to avoid codeless-jar warnings. (#870649)
    • Prevent desktop-entry-lacks-keywords-entry false positives for Link and Directory-style .desktop files. (#873702)
    • Split out the Python checks from checks/scripts.pm into a new check of type source.
    • Check for python-foo without a corresponding python3-foo package. (#870681)
    • Complain about packages that Build-Depend on python-sphinx only. (#870730)
    • Warn about packages that alternatively Build-Depend on the Python 2 and Python 3 versions of Sphinx. (#870758)
    • Check for packages that depend on Python 2.x. (#870822)
    • Correct false positives in unconditional-use-of-dpkg-statoverride by detecting "if !" as a shell prefix. (#869587)
    • Alert on missing calls to dpkg-maintscript-helper(1) in maintainer scripts. (#872042)
    • Check for packages using sensible-utils without declaring a dependency after splitting from debianutils. (#872611)
    • Warn about scripts using nodejs as an interpreter now that the nodejs script provides /usr/bin/node. (#873096)
    • Remove recommendations to add a Testsuite: autopkgtest field to debian/control and emit a new tag if the package does so. (#865531)
    • Recognise autopkgtest-pkg-elpa as a valid test suite. (#873458)
    • Add note to /etc/bash_completion.d's obsolete path warning output regarding stricter filename requirements. (#814599)
    • Add 4.0.1 and 4.1.0 as known Policy standards versions.
    • Apply a patch from Maia Everett to avoid British spellings under the en_US locale. (#868897)
    • Stop emitting {maintainer,uploader}-address-causes-mail-loops for @packages.debian.org addresses. (#871575)
    • Modify Lintian::Data's all subroutine to always return keys in insertion order.
    • Apply a patch from Steve Langasek to accommodate binutils outputting symbols in a different format on the ppc64el architecture. (#869750)
    • Add an explicit test for packages including external fonts via the Google Font and TypeKit APIs. (#873434)
    • Add missing entries in internal Test-For fields to make development/testing workflow less error-prone.
  • Sent three pull requests to git-buildpackage, a tool to assist in Debian packaging from Git repositories:
    • Make pq --abbrev= configurable. (#872351)
    • Use build profiles to avoid installation of test dependencies. (#31)
    • Correct "allow to" grammar. (#30)
  • Updated travis.debian.net (my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform for testing):
    • Move away from deb.debian.org; Travis appears to be using a HTTP proxy that strips SRV records. (commit)
    • Highlight double quotes are required for TRAVIS_DEBIAN_EXTRA_REPOSITORY. (commit)
    • Use force-unsafe-io. (commit)
    • Clarify docs when upstream already has a travis.yml file. (#46)
    • Make documentation easier to copy-paste. (commit)
  • Merged a pull request in django-slack, my library to easily post messages to the Slack group-messaging utility, where instantiation of a SlackException was failing. (#71)
  • Assigned two pull requests to the Redis key-value database store to correct "did not received" and "faield" typos. (#4216 & #4215).
Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

  • Presented a status update at Debconf17 in Montréal, Canada alongside Holger Levsen, Maria Glukhova, Steven Chamberlain, Vagrant Cascadian, Valerie Young and Ximin Luo.
  • I worked on the following issues upstream:
    • glib2.0: Please make the output of gio-querymodules reproducible. (...)
    • gcab: Please make the output reproducible. (...)
    • gtk+2.0: Please make the immodules.cache files reproducible. (...)
    • desktop-file-utils: Please make the output reproducible. (...)
  • Within Debian:
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Worked on publishing our weekly reports. (#118, #119, #120, #121 & #122)

I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Use name attribute over path to avoid leaking comparison full path in output. (commit)
  • Add missing skip_unless_module_exists import. (commit)
  • Tidy diffoscope.progress and the XML comparator (commit, commit)

disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.

  • Add a simple autopkgtest smoke test. (commit)


Debian

Patches contributed
  • openssh: Quote the IP address in ssh-keygen -f suggestions. (#872643)
  • libgfshare:
    • SIGSEGV if /dev/urandom is not accessible. (#873047)
    • Add bindnow hardening. (#872740)
    • Support nodoc build profile. (#872739)
  • devscripts:
  • memcached: Add hardening to systemd .service file. (#871610)
  • googler: Tidy long and short package descriptions. (#872461)
  • gnome-split: Homepage points to domain-parked website. (#873037)
Uploads
  • python-django 1:1.11.4-1 — New upstream release.
  • redis:
    • 4:4.0.1-3 — Drop yet more non-deterministic tests.
    • 4:4.0.1-4 — Tighten systemd/seccomp hardening.
    • 4:4.0.1-5 — Drop even more tests with timing issues.
    • 4:4.0.1-6 — Don't install completions to /usr/share/bash-completion/completions/debian/bash_completion/.
    • 4:4.0.1-7 — Don't let sentinel integration tests fail the build as they use too many timers to be meaningful. (#872075)
  • python-gflags 1.5.1-3 — If SOURCE_DATE_EPOCH is set, either use that as a source of current dates or the UTC version of the file's modification time (#836004); don't call update-alternatives --remove in postrm; update debian/watch/Homepage & refresh/tidy the packaging.
  • bfs 1.1.1-1 — New upstream release, tidy autopkgtest & patches, organising the latter with Pq-Topic.
  • python-daiquiri 1.2.2-1 — New upstream release, tidy autopkgtests & update travis.yml from travis.debian.net.
  • aptfs 2:0.10-2 — Add upstream signing key, refer to /usr/share/common-licenses/GPL-3 in debian/copyright & tidy autopkgtests.
  • adminer 4.3.1-2 — Add a simple autopkgtest & don't install the Selenium-based tests in the binary package.
  • zoneminder (1.30.4+dfsg-2) — Prevent build failures with GCC 7 (#853717) & correct example /etc/fstab entries in README.Debian (#858673).

Finally, I reviewed and sponsored uploads of astral, inflection, more-itertools, trollius-redis & wolfssl.

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 1049-1 for libsndfile preventing a remote denial of service attack.
  • Issued DLA 1052-1 against subversion to correct an arbitrary code execution vulnerability.
  • Issued DLA 1054-1 for the libgxps XML Paper Specification library to prevent a remote denial of service attack.
  • Issued DLA 1056-1 for cvs to prevent a command injection vulnerability.
  • Issued DLA 1059-1 for the strongswan VPN software to close a denial of service attack.
Debian bugs filed
  • wget: Please hash the hostname in ~/.wget-hsts files. (#870813)
  • debian-policy: Clarify whether mailing lists in Maintainers/Uploaders may be moderated. (#871534)
  • git-buildpackage: "pq export" discards text within square brackets. (#872354)
  • qa.debian.org: Escape HTML in debcheck before outputting. (#872646)
  • pristine-tar: Enable multithreaded compression in pristine-xz. (#873229)
  • tryton-meta: Please combine tryton-modules-* into a single source package with multiple binaries. (#873042)
  • azure-cli:
  • fwupd-tests: Don't ship test files to generic /usr/share/installed-tests dir. (#872458)
  • libvorbis: Maintainer fields points to a moderated mailing list. (#871258)
  • rmlint-gui: Ship a rmlint-gui binary. (#872162)
  • template-glib: debian/copyright references online source without quotation. (#873619)
FTP Team

As a Debian FTP assistant I ACCEPTed 147 packages: abiword, adacgi, adasockets, ahven, animal-sniffer, astral, astroidmail, at-at-clojure, audacious, backdoor-factory, bdfproxy, binutils, blag-fortune, bluez-qt, cheshire-clojure, core-match-clojure, core-memoize-clojure, cypari2, data-priority-map-clojure, debian-edu, debian-multimedia, deepin-gettext-tools, dehydrated-hook-ddns-tsig, diceware, dtksettings, emacs-ivy, farbfeld, gcc-7-cross-ports, git-lfs, glewlwyd, gnome-recipes, gnome-shell-extension-tilix-dropdown, gnupg2, golang-github-aliyun-aliyun-oss-go-sdk, golang-github-approvals-go-approval-tests, golang-github-cheekybits-is, golang-github-chzyer-readline, golang-github-denverdino-aliyungo, golang-github-glendc-gopher-json, golang-github-gophercloud-gophercloud, golang-github-hashicorp-go-rootcerts, golang-github-matryer-try, golang-github-opentracing-contrib-go-stdlib, golang-github-opentracing-opentracing-go, golang-github-tdewolff-buffer, golang-github-tdewolff-minify, golang-github-tdewolff-parse, golang-github-tdewolff-strconv, golang-github-tdewolff-test, golang-gopkg-go-playground-validator.v8, gprbuild, gsl, gtts, hunspell-dz, hyperlink, importmagic, inflection, insighttoolkit4, isa-support, jaraco.itertools, java-classpath-clojure, java-jmx-clojure, jellyfish1, lazymap-clojure, libblockdev, libbytesize, libconfig-zomg-perl, libdazzle, libglvnd, libjs-emojify, libjwt, libmysofa, libundead, linux, lua-mode, math-combinatorics-clojure, math-numeric-tower-clojure, mediagoblin, medley-clojure, more-itertools, mozjs52, openssh-ssh1, org-mode, oysttyer, pcscada, pgsphere, poppler, puppetdb, py3status, pycryptodome, pysha3, python-cliapp, python-coloredlogs, python-consul, python-deprecation, python-django-celery-results, python-dropbox, python-fswrap, python-hbmqtt, python-intbitset, python-meshio, python-parameterized, python-pgpy, python-py-zipkin, python-pymeasure, python-thriftpy, python-tinyrpc, python-udatetime, python-wither, python-xapp, pythonqt, r-cran-bit, r-cran-bit64, r-cran-blob, r-cran-lmertest, r-cran-quantmod, r-cran-ttr, racket-mode, restorecond, rss-bridge, ruby-declarative, ruby-declarative-option, ruby-errbase, ruby-google-api-client, ruby-rash-alt, ruby-representable, ruby-test-xml, ruby-uber, sambamba, semodule-utils, shimdandy, sjacket-clojure, soapysdr, stencil-clojure, swath, template-glib, tools-analyzer-jvm-clojure, tools-namespace-clojure, uim, util-linux, vim-airline, vim-airline-themes, volume-key, wget2, xchat, xfce4-eyes-plugin & xorg-gtest.

I additionally filed 6 RC bugs against packages that had incomplete debian/copyright files against: gnome-recipes, golang-1.9, libdazzle, poppler, python-py-zipkin & template-glib.

Categories: LUG Community Blogs

Jonathan McDowell: C, floating point, and help!

Planet ALUG - Thu, 31/08/2017 - 16:58

Floating point is a pain. I know this. But I recently took over the sigrok packages in Debian and, as part of updating to the latest libsigrok4 library, enabled the post-compilation tests. Which promptly failed on i386. Some narrowing down of the problem leads to the following test case (which fails on both gcc-6 under Debian/Stretch and gcc-7 on Debian/Testing):

#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>

int main(int argc, char *argv[])
{
    printf("%" PRIu64 "\n",
           (uint64_t)((1.034567) * (uint64_t)(1000000ULL)));
}

We expect to see 1034567 printed out. On x86_64 we do:

$ arch
x86_64
$ gcc -Wall t.c -o t ; ./t
1034567

If we compile for 32-bit the result is also as expected:

$ gcc -Wall -m32 t.c -o t ; ./t
1034567

Where things get interesting is when we enable --std=c99:

$ gcc -Wall --std=c99 t.c -o t ; ./t
1034567
$ gcc -Wall -m32 --std=c99 t.c -o t ; ./t
1034566

What? It turns out that, for the 32-bit build, all the cXX standards result in the last digit incorrectly being 6, while the gnuXX standards (gnu11 is apparently the default) result in the correct trailing 7. Is there some postfix I can add to the value to prevent the floating point truncation taking place? Or do I just have to accept this? It works fine on armel, so it’s not a simple 32/64 bit issue.

Categories: LUG Community Blogs

Daniel Silverstone (Kinnison): STM32 USB and Rust - Packet Memory Area

Planet ALUG - Wed, 30/08/2017 - 16:15

In this, our next exciting installment of STM32 and Rust for USB device drivers, we're going to look at what the STM32 calls the 'packet memory area'. If you've been reading along with the course, including reading up on the datasheet content, then you'll be aware that as well as the STM32's normal SRAM, there's a 512 byte SRAM dedicated to the USB peripheral. This SRAM is called the 'packet memory area' and is shared between the main bus and the USB peripheral core. Its purpose is, simply, to store packets in transit: both those IN to the host (stored, queued for transmission) and those OUT from the host (stored, queued for the application to extract and consume).

It's time to actually put hand to keyboard on some Rust code, and the PMA is the perfect starting point, since it involves two basic structures. Packets are the obvious first structure, and they are contiguous sets of bytes which for the purpose of our work we shall assume are one to sixty-four bytes long. The second is what the STM32 datasheet refers to as the BTABLE or Buffer Descriptor Table. Let's consider the BTABLE first.

The Buffer Descriptor Table

The BTABLE is arranged in quads of 16bit words. For "normal" endpoints this is a pair of descriptors, each consisting of two words, one for transmission, and one for reception. The STM32 also has a concept of double buffered endpoints, but we're not going to consider those in our proof-of-concept work. The STM32 allows for up to eight endpoints (EP0 through EP7) in internal register naming, though they support endpoints numbered from zero to fifteen in the sense of the endpoint address numbering. As such there're eight descriptors each four 16bit words long (eight bytes) making for a buffer descriptor table which is 64 bytes in size at most.

Buffer Descriptor Table

Byte offset in PMA   Field name      Description
(EPn * 8) + 0        USB_ADDRn_TX    The address (inside the PMA) of the TX buffer for EPn
(EPn * 8) + 2        USB_COUNTn_TX   The number of bytes present in the TX buffer for EPn
(EPn * 8) + 4        USB_ADDRn_RX    The address (inside the PMA) of the RX buffer for EPn
(EPn * 8) + 6        USB_COUNTn_RX   The number of bytes of space available for the RX buffer
                                     for EPn (and, once received, the number of bytes received)

The TX entries are trivial to comprehend. To transmit a packet, part of the process involves writing the packet into the PMA, putting the address into the appropriate USB_ADDRn_TX entry, and the length into the corresponding USB_COUNTn_TX entry, before marking the endpoint as ready to transmit.

To receive a packet though is slightly more complex. The application must allocate some space in the PMA, setting the address into the USB_ADDRn_RX entry of the BTABLE before filling out the top half of the USB_COUNTn_RX entry. For ease of bit sizing, the STM32 only supports space allocations of two to sixty-two bytes in steps of two bytes at a time, or thirty-two to five-hundred-twelve bytes in steps of thirty-two bytes at a time. Once the packet is received, the USB peripheral will fill out the lower bits of the USB_COUNTn_RX entry with the actual number of bytes filled out in the buffer.

Packets themselves

Since packets are, typically, a maximum of 64 bytes long (for USB 2.0) and are simply sequences of bytes with no useful structure to them (as far as the USB peripheral itself is concerned) the PMA simply requires that they be present and contiguous in PMA memory space. Addresses of packets are relative to the base of the PMA and are byte-addressed, however they cannot start on an odd byte, so essentially they are 16bit addressed. Since the BTABLE can be anywhere within the PMA, as can the packets, the application will have to do some memory management (either statically, or dynamically) to manage the packets in the PMA.

Accessing the PMA

The PMA is accessed in 16bit word sections. It's not possible to access single bytes of the PMA, nor is it conveniently structured as far as the CPU is concerned. Instead the PMA's 16bit words are spread on 32bit word boundaries as far as the CPU knows. This is done for convenience and simplicity of hardware, but it means that we need to ensure our library code knows how to deal with this.

First up, to convert an address in the PMA into something which the CPU can use we need to know where in the CPU's address space the PMA is. Fortunately this is fixed at 0x4000_6000. Secondly we need to know what address in the PMA we wish to access, so we can determine which 16bit word that is, and thus what the address is as far as the CPU is concerned. If we assume we only ever want to access 16bit entries, we can just multiply the PMA offset by two before adding it to the PMA base address. So, to access the 16bit word at byte-offset 8 in the PMA, we'd look for the 16bit word at 0x4000_6000 + (0x08 * 2) => 0x4000_6010.

Bundling the PMA into something we can use

I said we'd do some Rust, and so we shall…

// Thanks to the work by Jorge Aparicio, we have a convenient wrapper
// for peripherals which means we can declare a PMA peripheral:
pub const PMA: Peripheral<PMA> = unsafe { Peripheral::new(0x4000_6000) };

// The PMA struct type which the peripheral will return a ref to
pub struct PMA {
    pma_area: PMA_Area,
}

// And the way we turn that ref into something we can put a useful impl on
impl Deref for PMA {
    type Target = PMA_Area;

    fn deref(&self) -> &PMA_Area {
        &self.pma_area
    }
}

// This is the actual representation of the peripheral, we use the C repr
// in order to ensure it ends up packed nicely together
#[repr(C)]
pub struct PMA_Area {
    // The PMA consists of 256 u16 words separated by u16 gaps, so lets
    // represent that as 512 u16 words which we'll only use every other of.
    words: [VolatileCell<u16>; 512],
}

That block of code gives us three important things. Firstly a peripheral object which we will be able to (later) manage nicely as part of the set of peripherals which RTFM will look after for us. Secondly we get a convenient packed array of u16s which will be considered volatile (the compiler won't optimise around the ordering of writes etc). Finally we get a struct on which we can hang an implementation to give our PMA more complex functionality.

A useful first pair of functions would be to simply let us get and put u16s in and out of that word array, since we're only using every other word…

impl PMA_Area {
    pub fn get_u16(&self, offset: usize) -> u16 {
        assert!((offset & 0x01) == 0);
        self.words[offset].get()
    }

    pub fn set_u16(&self, offset: usize, val: u16) {
        assert!((offset & 0x01) == 0);
        self.words[offset].set(val);
    }
}

These two functions take an offset in the PMA and return the u16 word at that offset. They only work on u16 boundaries and as such they assert that the bottom bit of the offset is unset. In a release build, that will go away, but during debugging this might be essential. Since we're only using 16bit boundaries, this means that the first word in the PMA will be at offset zero, and the second at offset two, then four, then six, etc. Since we allocated our words array to expect to use every other entry, this automatically converts into the addresses we desire.

If we pop (and please don't worry about the unsafe{} stuff for now):

unsafe {
    (&*usb::pma::PMA.get()).set_u16(4, 64);
}

into our main function somewhere, and then build and objdump our test binary we can see the following set of instructions added:

80001e4:       f246 0008       movw    r0, #24584      ; 0x6008
80001e8:       2140            movs    r1, #64         ; 0x40
80001ea:       f2c4 0000       movt    r0, #16384      ; 0x4000
80001ee:       8001            strh    r1, [r0, #0]

This boils down to a u16 write of 0x0040 (64) to the address 0x4000_6008 which is the third 32 bit word in the CPU's view of the PMA memory space (where offset 4 is the third 16bit word) which is exactly what we'd expect to see.

We can, from here, build up some functions for manipulating a BTABLE, though the most useful ones for us to take a look at are the RX counter functions:

pub fn get_rxcount(&self, ep: usize) -> u16 {
    self.get_u16(BTABLE + (ep * 8) + 6) & 0x3ff
}

pub fn set_rxcount(&self, ep: usize, val: u16) {
    assert!(val <= 1024);
    let rval: u16 = {
        if val > 62 {
            assert!((val & 0x1f) == 0);
            (((val >> 5) - 1) << 10) | 0x8000
        } else {
            assert!((val & 1) == 0);
            (val >> 1) << 10
        }
    };
    self.set_u16(BTABLE + (ep * 8) + 6, rval)
}

The getter is fairly clean and clear, we need the BTABLE base in the PMA, add the address of the USB_COUNTn_RX entry to that, retrieve the u16 and then mask off the bottom ten bits since that's the size of the relevant field.

The setter is a little more complex, since it has to deal with the two possible cases. This isn't pretty and we might be able to write some better peripheral structs in the future, but for now: if the length we're setting is 62 or less, and is divisible by two, then we put a zero in the top bit and the number of 2-byte lumps in at bits 14:10; and if it's 64 or more, we check the bottom five bits to ensure it's divisible by 32, then put the count (minus one) of 32-byte blocks in instead, and set the top bit to mark it as such.

Fortunately, when we set constants, Rust's compiler manages to optimise all this very quickly. For a BTABLE at the bottom of the PMA, and an initialisation statement of:

unsafe {
    (&*usb::pma::PMA.get()).set_rxcount(1, 64);
}

then we end up with the simple instruction sequence:

80001e4:       f246 001c       movw    r0, #24604      ; 0x601c
80001e8:       f44f 4104       mov.w   r1, #33792      ; 0x8400
80001ec:       f2c4 0000       movt    r0, #16384      ; 0x4000
80001f0:       8001            strh    r1, [r0, #0]

We can decompose that into a C like *((u16*)0x4000601c) = 0x8400 and from there we can see that it's writing to the u16 at 0x1c bytes into the CPU's view of the PMA, which is 14 bytes into the PMA itself. Since we know we set the BTABLE at the start of the PMA, it's 14 bytes into the BTABLE which is firmly in the EP1 entries. Specifically it's USB_COUNT1_RX which is what we were hoping for. To confirm this, check out page 651 of the datasheet. The value set was 0x8400 which we can decompose into 0x8000 and 0x0400. The first is the top bit and tells us that BL_SIZE is one, and thus the blocks are 32 bytes long. Next the 0x0400: if we shift it right ten places we get the value 1 for the field NUM_BLOCK, and since BL_SIZE is set the buffer size is (NUM_BLOCK + 1) * 32 bytes, giving the 64 bytes we asked it to set as the size of the RX buffer. It has done exactly what we hoped it would, but the compiler managed to optimise it into a single 16 bit store of a constant value to a constant location. Nice and efficient.

Finally, let's look at what happens if we want to write a packet into the PMA. For now, let's assume packets come as slices of u16s because that'll make our life a little simpler:

pub fn write_buffer(&self, base: usize, buf: &[u16]) {
    for (ofs, v) in buf.iter().enumerate() {
        self.set_u16(base + (ofs * 2), *v);
    }
}

Yes, even though we're deep in no_std territory, we can still get an iterator over the slice, and enumerate it, getting a nice iterator of (index, value) though in this case, the value is a ref to the content of the slice, so we end up with *v to deref it. I am sure I could get that automatically happening but for now it's there.

Amazingly, despite using iterators, enumerators, high level for loops, function calls, etc, if we pop:

unsafe {
    (&*usb::pma::PMA.get()).write_buffer(0, &[0x1000, 0x2000, 0x3000]);
}

into our main function and compile it, we end up with the instruction sequence:

80001e4:       f246 0000       movw    r0, #24576      ; 0x6000
80001e8:       f44f 5180       mov.w   r1, #4096       ; 0x1000
80001ec:       f2c4 0000       movt    r0, #16384      ; 0x4000
80001f0:       8001            strh    r1, [r0, #0]
80001f2:       f44f 5100       mov.w   r1, #8192       ; 0x2000
80001f6:       8081            strh    r1, [r0, #4]
80001f8:       f44f 5140       mov.w   r1, #12288      ; 0x3000
80001fc:       8101            strh    r1, [r0, #8]

which, as you can see, ends up being three sequential halfword stores directly to the right locations in the CPU's view of the PMA. You have to love seriously aggressive compile-time optimisation!

Hopefully, by next time, we'll have layered some more pleasant routines on our PMA code, and begun a foray into the setup necessary before we can begin handling interrupts and start turning up on a USB port.

Categories: LUG Community Blogs

Jonathan McDowell: On my way home from OMGWTFBBQ

Planet ALUG - Sun, 27/08/2017 - 18:42

I started writing this while sitting in Stansted on my way home from the annual UK Debian BBQ. I’m finally home now, after a great weekend catching up with folk. It’s a good social event for a bunch of Debian folk, and I’m very grateful that Steve and Jo continue to make it happen. These days there are also a number of generous companies chipping in towards the cost of food and drink, so thanks also to Codethink and QvarnLabs AB for the food, Collabora and Mythic Beasts for the beer and Chris for the coffee. And Rob for chasing us all for contributions to cover the rest.

I was trying to remember when the first one of these I attended was; trawling through mail logs there was a Cambridge meetup that ended up at Steve’s old place in April 2001, and we’ve consistently had the summer BBQ since 2004, but I’m not clear on what happened in between. Nonetheless it’s become a fixture in the calendar for those of us in the UK (and a number of people from further afield who regularly turn up). We’ve become a bit more sedate, but it’s good to always see a few new faces, drink some good beer (yay Milton), eat a lot and have some good conversations. This year also managed to get me a SheevaPlug so I could investigate #837989 - a bug with OpenOCD not being able to talk to the device. Turned out to be a channel configuration error in the move to new style FTDI support, so I’ve got that fixed locally and pushed the one line fix upstream as well.

Categories: LUG Community Blogs