News aggregator

Monthly meeting November 30th

West Yorkshire LUG News - Thu, 19/11/2015 - 18:53

Our regular monthly meeting this November will be on the last Monday of the month, 30th Nov 2015 at 7pm, at The Lord Darcy, as usual.

Adam Trickett: Bog Roll: Wombat Upgrade

Planet HantsLUG - Wed, 18/11/2015 - 22:21

Today the order went in for a major rebuild of Wombat. Some parts will remain from the original, but overall most of the system will be replaced with more modern parts:

  • The new CPU has double the core count, higher clock speed and better features. It should be faster under both single and multi-threaded use. It should also use less electricity and be cooler.
  • The new GPU should be much faster: it's on a faster bus, and it has proprietary driver support (if required).
  • The SATA controller is more modern and should now be faster than the hard disk; the current controller is an older generation than the disk and holds it back.
  • The RAM is much faster - two generations faster and there is four times as much of it.

Overall it should be faster, use less electricity, and run cooler. It won't be as fast as my desktop, but it should be noticeably faster than before, and my better half should be happy enough - especially as I shouldn't have to touch the data on the hard disk, which was only recently reinstalled.

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Newstalgia

Planet ALUG - Tue, 17/11/2015 - 23:57

Well, well, after talking about my time at university only yesterday, tonight I saw Christian Death, whose compilation album The Bible was more or less the soundtrack to my degree.

I'm pleased to say they seemed like thoroughly nice folks and played a good gig. Proving my lack of musical snobbery (which, to be honest, generally goes with the goth scene), I only knew one song but enjoyed everything they played, from new stuff to old.

The support band was a local act called Painted Heathers who (for me, at least) set the scene nicely and represented an innovative, modern take on the musical theme behind the act they were supporting. Essentially an indie band with goth leanings (I wonder if they agree), they suit my current musical bent (I play keyboards in an indie band) and the mood I was in. They're very new, young, and I shall spread their word for them if I can :)

Categories: LUG Community Blogs

Howto | Install ESET Remote Administrator on Ubuntu

Planet SurreyLUG - Tue, 17/11/2015 - 20:12

After much research I decided to purchase ESET Antivirus for our Windows clients. But rather than install ESET Antivirus on each client in turn, I decided to install ESET Remote Administrator on an Ubuntu VPS. ESET Remote Administrator is a server program for controlling ESET Antivirus (and other ESET programs) on clients: you can use it to deploy the ESET Agent, which in turn can install and monitor the ESET Antivirus software.

Our Windows PCs are controlled by an ActiveDirectory server (actually a Samba4 ActiveDirectory server, although that should not make any difference to these instructions).

I found the ESET instructions for doing so decidedly sketchy, but I eventually managed to install it and took some notes as I went. I cannot promise these instructions are complete, but used in conjunction with the ESET instructions they may be of help.

Update Hosts

Edit /etc/hosts:

127.0.0.1      localhost.example.lan localhost.localdomain localhost
192.168.0.109  eset.example.lan eset

Test by typing the following two commands and checking the output matches:

# hostname
eset
# hostname -f
eset.example.lan

Dependencies

# apt-get install mysql-server unixodbc libmyodbc cifs-utils libjcifs-java winbind libqtwebkit4 xvfb
# dpkg-reconfigure libmyodbc

Install MySQL Server

Edit /etc/mysql/my.cnf, setting the following in the [mysqld] section:

max_allowed_packet=33M

Restart MySQL

# service mysql restart

Configure MySQL

# mysql -u root -p

I cannot remember whether I had to create the database and/or the username and password by hand; I suggest you try without first, and create them if the installer complains.
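If you do need to create them by hand, something like this should work (a sketch; era_db and era match the --db-* flags passed to the installer below, and 'changeme' is a placeholder password):

# mysql -u root -p <<'SQL'
CREATE DATABASE era_db;
CREATE USER 'era'@'localhost' IDENTIFIED BY 'changeme';
GRANT ALL PRIVILEGES ON era_db.* TO 'era'@'localhost';
FLUSH PRIVILEGES;
SQL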

Install ESET Remote Administration Server

Replace ??? with actual:

# sh Server-Linux-x86_64.sh --license-key=??? \
  --db-type="MySQL Server" --db-driver="MySQL" --db-hostname=127.0.0.1 --db-port=3306 \
  --server-root-password="???" --db-name="era_db" --db-admin-username="root" --db-admin-password="???" \
  --db-user-username="era" --db-user-password="???" \
  --cert-hostname="eset" --cert-auth-common-name="eset.example.lan" --cert-organizational-unit="eset.example.lan" \
  --cert-organization="example ltd" --cert-locality="UK" \
  --ad-server="ads.example.lan" --ad-user-name="era" --ad-user-password="???"

In case of error, read the following carefully:

/var/log/eset/RemoteAdministrator/EraServerInstaller.log

Install Tomcat7

# apt-get install tomcat7 tomcat7-admin

Wait 5 minutes for Tomcat to start, then visit http://localhost:8080 to check it has worked.

Configure SSL

See SSL/TLS Configuration HOW-TO.

Step 1 - Generate Keystore

$JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA -keystore /usr/share/tomcat7/.keystore

Use password 'changeit'.

At the time of writing $JAVA_HOME is /usr/lib/jvm/java-7-openjdk-amd64/jre

Step 2 - Configure Tomcat

Edit /etc/tomcat7/server.xml; you should have two Connector sections like this:

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8"
           redirectPort="8443" />

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="/usr/share/tomcat7/.keystore" keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS" />

These will already exist, but need uncommenting and adjusting with keystore details.

Test

You should now be able to go to https://localhost:8443.

Join Windows Domain

The ERA server must be joined to the domain.

Install samba and stop any samba services that start.

# apt-get install samba krb5-user smbclient
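On Ubuntu the samba services in question are likely to be smbd, nmbd and winbind (an assumption; check what is actually running on your system):

# service smbd stop
# service nmbd stop
# service winbind stop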

Edit /etc/samba/smb.conf:

[global]
    workgroup = EXAMPLE
    security = ADS
    realm = EXAMPLE.COM
    dedicated keytab file = /etc/krb5.keytab
    kerberos method = secrets and keytab
    server string = Samba 4 Client %h
    winbind enum users = yes
    winbind enum groups = yes
    winbind use default domain = yes
    winbind expand groups = 4
    winbind nss info = rfc2307
    winbind refresh tickets = Yes
    winbind normalize names = Yes
    idmap config * : backend = tdb
    idmap config * : range = 2000-9999
    idmap config EXAMPLE : backend = ad
    idmap config EXAMPLE : range = 10000-999999
    idmap config EXAMPLE:schema_mode = rfc2307
    printcap name = cups
    cups options = raw
    usershare allow guests = yes
    domain master = no
    local master = no
    preferred master = no
    os level = 20
    map to guest = bad user
    username map = /etc/samba/smbmap

Create /etc/samba/smbmap:

!root = EXAMPLE\Administrator Administrator administrator

Edit /etc/krb5.conf:

[libdefaults]
    default_realm = EXAMPLE.COM
    dns_lookup_realm = false
    dns_lookup_kdc = true
    ticket_lifetime = 24h
    forwardable = yes

Make sure that /etc/resolv.conf points to the AD DC, and that DNS is set up correctly.
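For example, /etc/resolv.conf might look like this (the DC address here is made up; use your own domain controller's):

nameserver 192.168.0.1
search example.lan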

Then run this command:

# net ads join -U Administrator@EXAMPLE.COM

Enter the Administrator's password when requested.

Edit /etc/nsswitch.conf and add 'winbind' to the passwd and group lines.
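The two lines should end up looking something like this (assuming the stock Ubuntu 'compat' entries):

passwd:         compat winbind
group:          compat winbind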

Start samba services.

Update DNS

For some reason the join does not always create the DNS entry on Samba4, so you may need to add this manually.
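If it is missing, something like the following run against the Samba4 DC should create the A record (a sketch using the example names and address from the hosts file above):

# samba-tool dns add ads.example.lan example.lan eset A 192.168.0.109 -U Administrator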

Categories: LUG Community Blogs

Intermittent USB3 Drive Mount

Planet SurreyLUG - Tue, 17/11/2015 - 14:38

I have a problem with a Samsung M3 Portable USB3 external hard drive only working intermittently. I use a couple of these for off-site backups for work and so I need them working reliably. In fairness a reboot does cure the problem, but I hate that as a solution.

To troubleshoot this problem, the first step was to check the system devices:

# ls -al /dev/disk/by-id

But only my two main system drives were showing, and not the USB3 drive, which would have been sdc, being the third drive (a, b, c).

System Logs

The next step was to check the system logs:

# dmesg | tail

This showed no issues at all. Then I checked syslog:

# tail /var/log/syslog

This was completely empty, and had not been written to for almost a year. I can't quite believe I haven't needed to check the logs in all that time, but there you are.

I checked that rsyslogd was running, and it was, as user syslog. The /var/log/syslog file was owned by root with group adm; syslog was a member of the adm group, but the files had user read/write permissions only (-rw-------).

This was easily fixed:

# chmod g+rw /var/log/*
# service rsyslog restart

Now the syslog file was being written to, but there was a problem writing to /dev/xconsole:

Nov 17 12:20:48 asusi5 rsyslogd: [origin software="rsyslogd" swVersion="7.4.4" x-pid="11006" x-info="http://www.rsyslog.com"] start
Nov 17 12:20:48 asusi5 rsyslogd: rsyslogd's groupid changed to 103
Nov 17 12:20:48 asusi5 rsyslogd: rsyslogd's userid changed to 101
Nov 17 12:20:48 asusi5 rsyslogd-2039: Could no open output pipe '/dev/xconsole': No such file or directory [try http://www.rsyslog.com/e/2039 ]
Nov 17 12:20:48 asusi5 rsyslogd: [origin software="rsyslogd" swVersion="7.4.4" x-pid="11006" x-info="http://www.rsyslog.com"] exiting on signal 15.

So I duly visited the link mentioned, which gave instructions for disabling /dev/xconsole, but this made me nervous and further research suggested that that was indeed the correct fix for headless servers, but possibly not for a desktop PC. Instead I used the following fix:

# mknod -m 640 /dev/xconsole c 1 3
# chown syslog:adm /dev/xconsole
# service rsyslog restart

USB Powersaving

Now at least it seems that my syslog is working correctly. Unfortunately, unplugging and replugging the USB drive still wrote nothing to the logs! When I plugged in the drive the blue light would flash repeatedly and then switch off. I would have believed that the drive had a fault, were it not for the fact that rebooting the PC solves the problem.

Thinking that perhaps this problem was USB3 related, I decided to Google for "USB3 drive not recognised" which found this post. Except, that post was only relevant when operating on battery power, whereas I am using a plugged in desktop PC. Clearly that page could not be relevant? Except that in my notification area there is a battery icon, relating to my Uninterruptible Power Supply. But surely Ubuntu couldn't be treating my UPS as if it were a battery? Could it?

In order to find out the power status of my USB devices I followed the suggestion of typing:

# grep . /sys/bus/usb/devices/*/power/control

Most were flagged as "on", but a number of devices were flagged as "auto". I decided to try switching them on to see if that made any difference:

# for F in /sys/bus/usb/devices/*/power/control; do echo on >"${F}"; done
# grep . /sys/bus/usb/devices/*/power/control

Now all the devices were showing as on. Time to try out the drive again - I unplugged it and plugged it back in again. This time the power light flashed repeatedly and then went solid.

Unfortunately the drive was still not mounted, but at least it was alive now. What next? Ah yes, I should check the logs to see if they have any new messages, now that the drive is powered and my logs are working:

# dmesg | tail
[15243.369812] usb 4-3: reset SuperSpeed USB device number 2 using xhci_hcd
[15243.616871] xhci_hcd 0000:00:14.0: xHCI xhci_drop_endpoint called with disabled ep ffff880213c71800

Faulty USB-SATA Bridge

At last I have something to go on. I searched for that error message, which took me to this Debian.net page, which suggested:

Reason is faulthy usb-sata bridge ASM1051 in USB3 drives. It has nothing to do with motherboards. See this. https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=a9c54caa456dccba938005f6479892b589975e6a Workaround was made to kernel 3.17-rc5. Now those disks works. Some perfectly and some works but data transfer is not smooth.

Could that be my problem? I checked my current Kernel:

# uname -r
3.13.0-68-generic

So yes, it could be. As I cannot read the device yet, I cannot check whether ASM1051 is present in my USB3 drive.

Having uninstalled 34 obsolete kernels and a further 8 newer kernels that weren't in use, I was in a position to install the latest Kernel from Vivid with:

# apt install linux-generic-lts-vivid

The problem of course is that in order to boot onto the newer Kernel I must reboot, thereby fixing the problem anyway.

And finally

It would be easy to portray this as clear evidence of the difficulty of running Linux; certainly Samsung will have thoroughly tested their drive on Windows and Mac operating systems. That said, such problems are not unheard of on the more popular operating systems, and debugging them there is, I would argue, far harder than the logical steps above.

Having rebooted, as expected, the drive worked. I tried running lshw to see if ASM1051 was present, but I could not find it. Of course upgrading the Kernel could have fixed the problem anyway.

For the present I will labour under the comforting belief that the problem is fixed, and will update this post either when the problem recurs or in a week's time if all is well!

Categories: LUG Community Blogs

Steve Engledow (stilvoid): More ale

Planet ALUG - Tue, 17/11/2015 - 00:35

There are several reasons I took the degree course I did - joint honours in Philosophy and Linguistics - but the prime one is that I really felt I wanted to study something I would enjoy rather than something that would guarantee me a better career. To be honest, my degree subject versus the line of work I'm in - software development - is usually a good talking point in interviews and has probably landed me more jobs than if I'd had a degree in Computer Science.

The recent events in Paris, leaving politics aside, were an undeniably bad thing and the recent news coverage and various (sometimes depressingly moronic) Facebook posts on the subject got me thinking about moral philosophy again. Specifically, given we see conflicts between groups of people ostensibly because they live according to different moral codes (let's ignore the fact that their motivations are clearly not based on this at all) and those codes are complex and ambiguous (some might say intentionally so), can there be a moral code that's simple, unambiguous, and agreeable?

My 6th form philosophy teacher, Dr. John Beresford-Fry (Dr. Fry - I can't find him online. If anyone knows how to contact him, I'd love to speak to him again), believed he had a simple code that worked:

Do nothing gratuitous.

To my mind, that doesn't quite cut it; I don't think it's actually possible to do anything completely gratuitously; there's always some reason or reasoning behind an action. Maybe he meant something more subtle and it's been lost on me.

Some years ago, I thought I had a nice simple formulation:

Act as though everyone has the right to do whatever they wish.

or

You may do whatever you want so long as it doesn't restrict anybody's right to do the same.

Today though, I was going round in very big circles trying to think that one through. It works ok for simple, extreme cases (murder, rape, theft) and even plays nicely (I think) in some grey areas (streaming movies illegally) but I really couldn't figure out how to apply it to anyone in a position of power. How could an MP apply that rule when voting on bringing in a law to raise the minimum wage?

Come to think of it, how could an MP apply any rule when voting on any law?

Then I remembered the conclusion I came to when I was nearing the end of my philosophy course: the sentimentalists or the nihilists probably have it right.

Oh well, it kept me busy for a bit, eh ;)

Note to self: I had an idea for a game around moral philosophy, don't forget it!

Categories: LUG Community Blogs

Wordpress Comments Migration to Disqus

Planet SurreyLUG - Mon, 16/11/2015 - 21:00

I assumed that, when I migrated from Wordpress to Jekyll, one of the casualties would be the old Wordpress comments. I had decided to use DISQUS for comments on Jekyll, partly because it is the "new thing" and partly because it is incredibly easy to achieve. But I never for a moment considered that migrating the old comments would be a possibility.

In fact, not only is it possible, but it is almost trivially easy.

I did encounter a few issues, as I had foolishly renamed some pages to more logical names, thereby both breaking all Google and other links, and breaking the link with DISQUS comments. That was a very bad idea, and as a consequence I spent quite a while renaming files back again, but other than that it all worked flawlessly. If you are going to rename pages, it is probably best to do it in Wordpress before migrating to Jekyll.

Well done DISQUS for making something that you would expect to be difficult easy. And thank you Wordpress for providing a decent export routine, without which my data would have been locked away.

I look forward to your comments!

Chris.

Categories: LUG Community Blogs

Steve Kemp: lumail2 nears another release

Planet HantsLUG - Mon, 16/11/2015 - 18:35

I'm pleased with the way that Lumail2 development is proceeding, and it is reaching a point where there will be a second source-release.

I've made a lot of changes to the repository recently, and most of them boil down to moving code from the C++ side of the application, over to the Lua side.

This morning, for example, I updated the handling of index.limit to be entirely Lua based.

When you open a Maildir folder you see the list of messages it contains, as you would expect.

The notion of the index.limit is that you can limit the messages displayed, for example:

  • See all messages: Config:set( "index.limit", "all")
  • See only new/unread messages: Config:set( "index.limit", "new")
  • See only messages which arrived today: Config:set( "index.limit", "today")
  • See only messages which contain "Steve" in their formatted version: Config:set( "index.limit", "steve")

These are just examples that are present as defaults, but they give an idea of how things can work. I guess it isn't so different to Mutt's "limit" facilities - but thanks to the dynamic Lua nature of the application you can add your own with relative ease.

One of the biggest changes, recently, was the ability to display coloured text! That was always possible before, but a single line could only be one colour. Now colours can be mixed within a line, so this works as you might imagine:

Panel:append( "$[RED]This is red, $[GREEN]green, $[WHITE]white, and $[CYAN]cyan!" )

Other changes include a persistent cache of "stuff", which is Lua-based, the inclusion of at least one luarocks library to parse Date: headers, and a simple API for all our objects.

All good stuff. Perhaps time for a break in the next few weeks, but right now I think I'm making useful updates every other evening or so.

Categories: LUG Community Blogs

Sleeping Better with Linux

Planet SurreyLUG - Sun, 15/11/2015 - 19:28

I was reading Phones need 'bed mode' to protect sleep on the BBC, which argues that computers should have a blue light filter to help people to sleep properly at night. As I am frequently working on my laptop until late in the evening, I felt that I should investigate further. Currently I do turn down the brightness, but it seems that is not enough.

The article linked to a program called f.lux, which is free for Linux. I also came across this article on Linux Magazine Avoiding Eye Strain.

Installation

Getting this working on my laptop was trivial:

sudo add-apt-repository ppa:kilian/f.lux
sudo apt-get update
sudo apt-get install fluxgui

Finally I ran the program and set it to launch on start-up. I entered latitude as 51.2, which is close enough I believe.

Operation

The software feels fairly basic, whether this is the same for Windows and Mac I don't know. That said the applet seems to run perfectly in the Ubuntu notification area.

It is very noticeable that the display is more muted and much more comfortable to view in the evening. I don't yet know whether this will work well during the day, nor whether it will improve my sleeping. But the logic behind it seems sound and there is no reason why it shouldn't help.

Categories: LUG Community Blogs

Reasons For Migrating from Wordpress to Jekyll

Planet SurreyLUG - Sun, 15/11/2015 - 00:00

Following my recent announcement, I thought I would give some of my reasons for the move and some early impressions of using Jekyll.

What is Jekyll?

Jekyll is a Markdown-based website and blogging platform, written in Ruby. The principle is simple - you write Markdown text files and they are automatically converted to static HTML webpages.

What is Markdown?

I am assuming that most of my audience have at least a passing knowledge of Markdown, but basically it is a very clean, virtually syntax-free way of writing text files, so that they can be easily converted into different formats. The key to Markdown is the conversion utility; I currently use Pandoc. I write the file once, and then I can convert it into whatever format I want:

  • PDF: pandoc -o sample.pdf sample.markdown
  • Word: pandoc -o sample.docx sample.markdown
  • HTML: pandoc -o sample.html sample.markdown

I would imagine most people start using Markdown so that they can continue to use their favourite text editor - Vim or Emacs. At work I have found myself using it in preference to a word-processor; I have written a simple md2pdf perl script, so that in vim I can simply type :md2pdf % to have my document saved as a PDF. The PDFs that Pandoc produces are beautiful, and headings and sub-headings are automatically converted into PDF bookmarks, giving your documents an effortless professionalism.
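The script need be little more than a thin wrapper around Pandoc; a minimal sketch of the idea (not the author's actual script):

#!/bin/sh
# md2pdf - convert a Markdown file to a PDF with the same base name
# usage: md2pdf document.markdown  ->  document.pdf
in="$1"
pandoc -o "${in%.*}.pdf" "$in"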

For complicated documents I sometimes start in Markdown and then move to LaTeX, but increasingly I am finding myself able to do virtually everything in Markdown, including simple images and hyperlinks. But you also have the option of extending plain markdown with HTML tags.

So in simplest terms Jekyll is just an automated way of creating Markdown files and converting them to HTML.

But why change from Wordpress?

Wordpress has been great for me: it's relatively simple, has great statistical tools, a built-in commenting system and much more besides. So why leave all that behind for something which is fundamentally so basic?

Benefits of Jekyll
  1. Text Editor: Once again the desire to use Vim was probably the key motivation.
  2. Github Pages: The fact that Jekyll could be used with the free Github Pages was another.
  3. Command Line: The ability to use grep, sed and perl and all the other command line goodies makes for an incredibly versatile system.
  4. Version Control: To have the whole site under version control.

I cannot tell you how wonderful it is to grep -l searchtext *.markdown | vim - and be able to edit each matching file in Vim.

Bootpolish Blog

There was another reason too, which was that I still had an old blog at bootpolish.net, which I wanted to close down. I could have merged it into my Wordpress blog, but I thought it would be easier to transfer it to Jekyll. To be honest I can't say that it was particularly easy, but thankfully it is now done.

The Migration Process

I followed the GitHub Instructions for Jekyll. I used rsync to copy the underlying text files from Bootpolish.net into the _drafts folder, before using bash for loops to auto-process each into a Jekyll markdown post. I used the Wordpress Importer to transfer my Wordpress blog. The importer did not work particularly well, so I ended up editing each file in turn.

I found there was some customisation required:

  1. Categories: By default Jekyll has no Category pages, for example: http://chrisjrob.com/category/technology/
  2. Tags: By default Jekyll has no Tag pages, for example: http://chrisjrob.com/tag/3dmodel/
  3. Wordpress RSS: I wanted to maintain the existing feed locations, which required creation of various additional feeds.
  4. Tag Cloud: By default Jekyll has no tag cloud functionality, which I believe is crucial to a blog.
  5. Site Search: By default Jekyll has no site search. There are plug-ins, but these are not compatible with GitHub pages. For now I have used Google Custom Search, but it has not yet indexed the entire site.

I have written a script to build all the tags and categories, which is working well. I would like to integrate this into the git commit command somehow, so that I don't forget to run it!
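One way would be a git pre-commit hook along these lines (a sketch: build-tags.sh stands in for the author's actual script, and the category and tag directories are assumptions):

#!/bin/sh
# .git/hooks/pre-commit - rebuild tag/category pages and stage the results
./build-tags.sh || exit 1
git add category tag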

Any new categories would require additional RSS feed files creating, by simply copying the feed template into the relevant category/feed folder.

Conclusions

This has been much more work than I would have liked. That said, I now have my Markdown files in a format under my control. I can move those files to any Jekyll provider, or indeed to any web provider and have them hosted.

In short, I am hoping that this is the last time I will move platforms!

Lastly, if you're unfamiliar with Markdown, you can view the Markdown version of this page.

Categories: LUG Community Blogs

Adam Trickett: Bog Roll: Cold

Planet HantsLUG - Sat, 14/11/2015 - 14:58

I'm starting to get close to my target weight - only 3 kg to go according to BMI or a few centimetres according to height to waist ratio. Sadly for the past month I've not been able to get as much exercise as normal and I've hit another weight plateau.

Another strange phenomenon I'm experiencing is being cold. Normally even in mid winter I'm fine at work or home with short sleeved shirts and only on the coldest days do I need to wear a T-shirt under a regular shirt. In fact I've not worn or owned a proper vest for decades.

Last weekend I felt cold at home and put a jumper on. Something I wouldn't normally do in autumn. On Monday at work I felt cold but it was such a strange sensation that I didn't even recognise it. On Tuesday I went into M&S and bought a vest! Human fat doesn't provide the same level of insulation that blubber does for seals and whales, but it does provide some, and more importantly on a diet the body down-regulates thermogenesis to save energy. As I've been starving myself for 9 months to lose weight, it's hardly surprising that my body has decided that keeping warm isn't that important when I've shed over 21 kg.

Today isn't that cold - but it is pretty miserable. I've now got a vest, shirt, thin fleece and thick fleece gilet on. I even turned the central heating thermostat up by 2°C...!

Categories: LUG Community Blogs

On Jekyll

Planet SurreyLUG - Sat, 14/11/2015 - 14:18

I've moved this here blog thing from Joomla to Jekyll.

I've not (yet) moved the site content over to this new site, as currently I'm unsure if any of the content I had written was really worth preserving, or if it was best to start anew.

The reasons for moving to Jekyll are simple and mostly obvious:

No requirement for a server-side scripting language or database

This should be pretty clear, but the main reason is that working with PHP and the popular PHP CMSes on a daily basis teaches you to be wary of them; this deployment mechanism means that the site itself cannot be hacked directly or used as an attack vector.

Of course, nothing is truly "hack-proof", so precautions still need to be taken, but it removes the vulnerabilities that a CMS like Wordpress would introduce.

Native Markdown Editing

Most CMSes are not designed for people like me, who use markdown as their de-facto standard for formatting prose text. Many use an HTML WYSIWYG editor, which is great for most users, but ends up making editing less efficient and the output less elegant. It also means that the only format the content can be delivered in is HTML.

No dedicated editing application

Using Jekyll and a git-based deployment process means that deploying changes to the site is simple and easy, and I can do it anywhere thanks to github's online editor. I only need to be logged into my github account in order to make changes or write a new post.

Currently, I'm using a git hook to rebuild the site and publish the changes; this is triggered by a git push to my server.

This script clones the repo from github to a temporary directory, builds it to the public directory, then deletes the temporary copy.
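A post-receive hook doing that could look something like this (a sketch under assumptions: the repository path, output directory and use of jekyll build are mine, not necessarily this site's actual setup):

#!/bin/sh
# post-receive - rebuild the site from a fresh clone on every push
TMP=$(mktemp -d)
git clone /srv/git/site.git "$TMP"
(cd "$TMP" && jekyll build --destination /var/www/site)
rm -rf "$TMP"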

My next post will probably be about this deployment mechanism.

Warp Speed

Finally, this item actually returns to my first point: the lack of server-side programming in this site means that it can be delivered at break-neck speed. Even without any kind of CDN or HTTP accelerator like Varnish, a well-tuned nginx configuration and a lack of server-side scripting mean that the all-important TTFB is much lower.

I hope that these items above and the transition to Jekyll will give me cause to write better, more frequent posts here.

Categories: LUG Community Blogs

Andy Smith: Supermicro IPMI remote console on Ubuntu 14.04 through SSH tunnel

Planet HantsLUG - Fri, 13/11/2015 - 05:04

I normally don’t like using the web interface of Supermicro IPMI because it’s extremely clunky, unintuitive and uses Java in some places.

The other day however I needed to look at the console of a machine which had been left running Memtest86+. You can make Memtest86+ output to serial which is generally preferred for looking at it remotely, but this wasn’t run in that mode so was outputting nothing to the IPMI serial-over-LAN. I would have to use the Java remote console viewer.

As an added wrinkle, the IPMI network interfaces are on a network that I can’t access except through an SSH jump host.

So, I just gave it a go without doing anything special other than launching an SSH tunnel:

$ ssh me@jumphost -L127.0.0.1:1443:192.168.1.21:443 -N

This tunnels my localhost port 1443 to port 443 of 192.168.1.21 as available from the jump host. Local port 1443 used because binding low ports requires root privileges.

This allowed me to log in to the web interface of the IPMI at https://localhost:1443/, though it kept putting up a dialog which said I needed a newer JDK. Going to “Remote Control / Console Redirection” attempted to download a JNLP file and then said it failed to download.

This was with openjdk-7-jre and icedtea-7-plugin installed.

I decided maybe it would work better if I installed the Oracle Java 8 stuff (ugh). That was made easy by following these simple instructions. That's an Ubuntu PPA which does everything for you, after you agree that you are a bad person who should feel bad (that is, accept the license).

This time things got a little further, but still failed saying it couldn’t download a JAR file. I noticed that it was trying to download the JAR from https://127.0.0.1:443/ even though my tunnel was on port 1443.

I eventually did get the remote console viewer to work but I’m not 100% convinced it was because I switched to Oracle Java.

So, basic networking issue here. Maybe it really needs port 443?

Okay, ran SSH as root so it could bind port 443. Got a bit further but now says “connection failed” with no diagnostics as to exactly what connection had failed. Still, gut instinct was that this was the remote console app having started but not having access to some port it needed.

Okay, ran SSH as a SOCKS proxy instead, set the SOCKS proxy in my browser. Same problem.

Did a search to see what ports the Supermicro remote console needs. Tried a new SSH command:

$ sudo ssh me@jumphost \
    -L127.0.0.1:443:192.168.1.21:443 \
    -L127.0.0.1:5900:192.168.1.21:5900 \
    -L127.0.0.1:5901:192.168.1.21:5901 \
    -L127.0.0.1:5120:192.168.1.21:5120 \
    -L127.0.0.1:5123:192.168.1.21:5123 -N

Apart from a few popup dialogs complaining about “MalformedURLException: unknown protocol: socket” (wtf?), this now appears to work.

Categories: LUG Community Blogs

Migration from Wordpress to Jekyll

Planet SurreyLUG - Thu, 12/11/2015 - 22:44

For the past few weeks I have been migrating chrisjrob.com from Wordpress to Jekyll. I have also been merging in my previous blog at bootpolish.net.

This process has proved to be much more work than I expected, but this evening it all came together and I finally pressed the button to transfer the DNS over to the Jekyll site, hosted by GitHub.

I have tried to maintain the URL structure, along with tags, categories and RSS feeds, but it can't be perfect and there will be breakage.

If you notice any problems please do comment below.

Thank you.

Chris Roberts

Categories: LUG Community Blogs

Debian Bits: New Debian Developers and Maintainers (September and October 2015)

Planet HantsLUG - Wed, 11/11/2015 - 21:35

The following contributors got their Debian Developer accounts in the last two months:

  • ChangZhuo Chen (czchen)
  • Eugene Zhukov (eugene)
  • Hugo Lefeuvre (hle)
  • Milan Kupcevic (milan)
  • Timo Weingärtner (tiwe)
  • Uwe Kleine-König (ukleinek)
  • Bernhard Schmidt (berni)
  • Stein Magnus Jodal (jodal)
  • Prach Pongpanich (prach)
  • Markus Koschany (apo)
  • Andy Simpkins (rattustrattus)

The following contributors were added as Debian Maintainers in the last two months:

  • Miguel A. Colón Vélez
  • Afif Elghraoui
  • Bastien Roucariès
  • Carsten Schoenert
  • Tomasz Nitecki
  • Christoph Ulrich Scholler
  • Mechtilde Stehmann
  • Alexandre Viau
  • Daniele Tricoli
  • Russell Sim
  • Benda Xu
  • Andrew Kelley
  • Ivan Udovichenko
  • Shih-Yuan Lee
  • Edward Betts
  • Punit Agrawal
  • Andreas Boll
  • Dave Hibberd
  • Alexandre Detiste
  • Marcio de Souza Oliveira
  • Andrew Ayer
  • Alf Gaida

Congratulations!

Categories: LUG Community Blogs

Mick Morgan: torflow

Planet ALUG - Tue, 10/11/2015 - 12:19

Yesterday, Kenneth Freeman posted a note to the tor-relays list drawing attention to a new resource called TorFlow. TorFlow is a beautiful visualisation of Tor network traffic around the world. It enables you to see where traffic is concentrated (Europe) and where there is almost none (Australasia). Having the data overlaid on a world map gives a startling picture of the unfortunate concentration of Tor nodes in particular locations.

I recently moved my own relay from Amsterdam (190 relays) to London (133), but the network needs much more geo-diversity. Unfortunately, international bandwidth costs are lowest in the areas where relays are currently located. Given that the relays are all (well, nearly all…..) run by volunteers like me and funded out of their own pockets, it is perhaps not surprising that this concentration should occur. But it is not healthy for the network.

There appears to be a particularly intriguing concentration of 16 relays on a tiny island in the Gulf of Guinea. Apparently this is an artifact though, because those relays are all at (0, 0), which I am told GeoIP uses as a placeholder for “unknown” (in fact, GeoIP location is a somewhat imprecise art, so there may be other anomalies in the data).

Categories: LUG Community Blogs

Steve Engledow (stilvoid): Dear diary...

Planet ALUG - Tue, 10/11/2015 - 01:17

It's been quite some time since I last got round to writing anything here; almost two months. Life has been fairly eventful in that short time. At least, work has.

During every performance review I've had since I joined Proxama, there's one goal I've consistently brought up: that I wanted to have more of an influence over the way we write and deliver software and the tools we use. That's the sort of thing I'm really interested in.

Having made it to head of the server delivery team, I had a good taste of the sort of oversight that I was looking for, but a few weeks ago I got the opportunity to take on a role that encompasses both server and mobile, development and QA, so of course I jumped at the chance... and got it!

Naïvely, when I took on the role, I thought I'd be doing more of the same as I was before (a bit of line management, code reviews, shaping upcoming work, architecture, occasionally writing code), just with a larger team. This is turning out not to be the case but in quite a positive way - so far, at least. I feel as though I now have the opportunity to sit a little further back, get some thinking time, and right a few wrongs that have built up over the years. Whether I'm achieving that remains to be seen ;)

Another thought that occurred to me the other day is that way back when I was at school, I never really imagined I'd end up in a technical role. I always imagined I'd either be a maths teacher or that I'd be a writer or editor for a newspaper or magazine. I'm finding out that my new job at Proxama requires me to write quite a lot of papers on various technical subjects. Double win.

In short, I'm enjoying some of my days more, trying very hard (and sometimes failing) not to worry about the details, to focus on the bigger picture, and to trust that the other things will fall into place (and sort it out where they don't). Is this what it's like going "post technical"? I'm slightly worried I'll forget how to code if I don't do a bit more of it.

Today, I spent a very, very long time fighting Jira. That wasn't fun.

Note to self: book some time in to write some code.

Categories: LUG Community Blogs

Jonathan McDowell: The Joy of Recruiters

Planet ALUG - Mon, 09/11/2015 - 17:45

Last week Simon retweeted a link to Don’t Feed the Beast – the Great Tech Recruiter Infestation. Which reminded me I’d been meaning to comment on my own experiences from earlier in the year.

I don’t entertain the same level of bile as displayed in the post, but I do have a significant level of disappointment in the recruitment industry. I had conversations with 3 different agencies, all of whom were geographically relevant. One contacted me, the other 2 (one I’d dealt with before, one that was recommended to me) I contacted myself. All managed to fail to communicate with any level of acceptability.

The agency that contacted me eventually went quiet, after having asked if they could put my CV forward for a role and pushing very hard about when I could interview. The contact in the agency I'd dealt with before replied to say I was being passed to someone else who would get in contact. Who of course didn't. And the final agency, who had been recommended, passed me between 3 different people, said they were confident they could find me something, and then went dark except for signing me up to their generic jobs list, which failed to have anything of relevance on it.

As it happens my availability and skill set were not conducive to results at that point in time, so my beef isn’t with the inability to find a role. Instead it’s with the poor levels of communication presented by an industry which seems, to me, to have communication as part of the core value it should be offering. If anyone had said at the start “Look, it’s going to be tricky, we’ll see what we can do” or “Look, that’s not what we really deal in, we can’t help”, that would have been fine. I’m fine with explanations. I get really miffed when I’m just left hanging.

I’d love to be able to say I’ll never deal with a recruiter again, but the fact of the matter is they do serve a purpose. There’s only so far a company can get with word of mouth recruitment; eventually that network of personal connections from existing employees who are considering moving dries up. Advertising might get you some more people, but it can also result in people who are hugely inappropriate for the role. From the company point of view recruiters nominally fulfil 2 roles. Firstly they connect the prospective employer with a potentially wider base of candidates. Secondly they should be able to do some sort of, at least basic, filtering of whether a candidate is appropriate for a role. From the candidate point of view the recruiter hopefully has a better knowledge of what roles are out there.

However the incentives to please each side are hugely unbalanced. The candidate isn’t paying the recruiter. “If you’re not paying for it, you’re the product” may be bandied around too often, but I believe this is one of the instances where it’s very applicable. A recruiter is paid by their ability to deliver viable candidates to prospective employers. The delivery of these candidates is the service. Whether or not the candidate is happy with the job is irrelevant beyond them staying long enough that the placement fee can be claimed. The lengthy commercial relationship is ideally between the company and the recruitment agency, not the candidate and the agency. A recruiter wants to be able to say “Look at the fine candidate I provided last time, you should always come to me first in future”. There’s a certain element of wanting the candidate to come back if/when they are looking for a new role, but it’s not a primary concern.

It is notable that the recommendations I’d received were from people who had been on the hiring side of things. The recruiter has a vested interest in keeping the employer happy, in the hope of a sustained relationship. There is little motivation for keeping the candidate happy, as long as you don’t manage to scare them off. And, in fact, if you scare some off, who cares? A recruiter doesn’t get paid for providing the best possible candidate. Or indeed a candidate who will fully engage with the role. All they’re required to provide is a hire-able candidate who takes the role.

I’m not sure what the resolution is to this. Word of mouth only scales so far for both employer and candidate. Many of the big job websites seem to be full of recruiters rather than real employers. And I’m sure there are some decent recruiters out there doing a good job, keeping both sides happy and earning their significant cut. I’m sad to say I can’t foresee any big change any time soon.

[Note I’m not currently looking for employment.]

[No recruitment agencies were harmed in the writing of this post. I have deliberately tried to avoid outing anyone in particular.]

Categories: LUG Community Blogs

Meeting at "The Moon Under Water"

Wolverhampton LUG News - Mon, 09/11/2015 - 10:49
Event-Date: Wednesday, 11 November, 2015 - 10:45

53-55 Lichfield St
Wolverhampton
West Midlands
WV1 1EQ

Eat, Drink and talk Linux
Categories: LUG Community Blogs

Andy Smith: Linux Software RAID and drive timeouts

Planet HantsLUG - Mon, 09/11/2015 - 09:06
All the RAIDs are breaking

I feel like I’ve been seeing a lot more threads on the linux-raid mailing list recently where people’s arrays have broken, they need help putting them back together (because they aren’t familiar with what to do in that situation), and it turns out that there’s nothing much wrong with the devices in question other than device timeouts.

When I say “a lot”, I mean, “more than I used to.”

I think the reason for the increase in failures may be that HDD vendors have been busy segregating their products into “desktop” and “RAID” editions in a somewhat arbitrary fashion, by removing features from the “desktop” editions in the drive firmware. One of the features that today’s consumer desktop drives tend to entirely lack is configurable error timeouts, also known as SCTERC, also known as TLER.

TL;DR

If you use redundant storage but may be using non-RAID drives, you absolutely must check them for configurable timeout support. If they don’t have it then you must increase your storage driver’s timeout to compensate, otherwise you risk data loss.

How do storage timeouts work, and when are they a factor?

When the operating system requests a read from or a write to a particular drive sector and fails, it keeps trying, and does nothing else while it is trying. An HDD that either does not have configurable timeouts or that has them disabled will keep doing this for quite a long time - minutes - and won't respond to any other command while it does so.

At some point Linux’s own timeouts will be exceeded and the Linux kernel will decide that there is something really wrong with the drive in question. It will try to reset it, and that will probably fail, because the drive will not be responding to the reset command. Linux will probably then reset the entire SATA or SCSI link and fail the IO request.

In a single drive situation (no RAID redundancy) it is probably a good thing that the drive tries really hard to get/set the data. If it really persists it just may work, and so there’s no data loss, and you are left under no illusion that your drive is really unwell and should be replaced soon.

In a multiple drive software RAID situation it’s a really bad thing. Linux MD will kick the drive out because as far as it is concerned it’s a drive that stopped responding to anything for several minutes. But why do you need to care? RAID is resilient, right? So a drive gets kicked out and added back again, it should be no problem.

Well, a lot of the time that’s true, but if you happen to hit another unreadable sector on some other drive while the array is degraded then you’ve got two drives kicked out, and so on. A bus / controller reset can also kick multiple drives out. It’s really easy to end up with an array that thinks it’s too damaged to function because of a relatively minor amount of unreadable sectors. RAID6 can’t help you here.

If you know what you’re doing you can still coerce such an array to assemble itself again and begin rebuilding, but if its component drives have long timeouts set then you may never be able to get it to rebuild fully!

What should happen in a RAID setup is that the drives give up quickly. In the case of a failed read, RAID just reads it from elsewhere and writes it back (causing a sector reallocation in the drive). The monthly scrub that Linux MD does catches these bad sectors before you have a bad time. You can monitor your reallocated sector count and know when a drive is going bad.
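The reallocated sector count is visible in the drive's SMART attributes, for example:

# smartctl -A /dev/sda | grep -i reallocated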

How to check/set drive timeouts

You can query the current timeout setting with smartctl like so:

# for drive in /sys/block/sd*; do drive="/dev/$(basename $drive)"; echo "$drive:"; smartctl -l scterc $drive; done

You hopefully end up with something like this:

/dev/sda:
smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org

SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)

/dev/sdb:
smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org

SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)

That’s a good result because it shows that configurable error timeouts (scterc) are supported, and the timeout is set to 70 all over. That’s in centiseconds, so it’s 7 seconds.

Consumer desktop drives from a few years ago might come back with something like this though:

SCT Error Recovery Control:
           Read: Disabled
          Write: Disabled

That would mean that the drive supports scterc, but does not enable it on power up. You will need to enable it yourself with smartctl again. Here’s how:

# smartctl -q errorsonly -l scterc,70,70 /dev/sda

That will be silent unless there is some error.

More modern consumer desktop drives probably won’t support scterc at all. They’ll look like this:

Warning: device does not support SCT Error Recovery Control command

Here you have no alternative but to tell Linux itself to expect this drive to take several minutes to recover from an error and please not aggressively reset it or its controller until at least that time has passed. 180 seconds has been found to be longer than any observed desktop drive will try for.

# echo 180 > /sys/block/sda/device/timeout

I've got a mix of drives that support scterc, some that have it disabled, and some that don't support it. What now?

It’s not difficult to come up with a script that leaves your drives set into their most optimal error timeout condition on each boot. Here’s a trivial example:

#!/bin/sh

for disk in `find /sys/block -maxdepth 1 -name 'sd*' | xargs -n 1 basename`
do
    smartctl -q errorsonly -l scterc,70,70 /dev/$disk

    if test $? -eq 4
    then
        echo "/dev/$disk doesn't support scterc, setting timeout to 180s" '/o\'
        echo 180 > /sys/block/$disk/device/timeout
    else
        echo "/dev/$disk supports scterc" '\o/'
    fi
done

If you call that from your system’s startup scripts (e.g. /etc/rc.local on Debian/Ubuntu) then it will try to set scterc to 7 seconds on every /dev/sd* block device. If it works, great. If it gets an error then it sets the device driver timeout to 180 seconds instead.

There are a couple of shortcomings with this approach, but I offer it here because it’s simple to understand.

It may do odd things if you have a /dev/sd* device that isn’t a real SATA/SCSI disk, for example if it’s iSCSI, or maybe some types of USB enclosure. If the drive is something that can be unplugged and plugged in again (like a USB or eSATA dock) then the drive may reset its scterc setting while unpowered and not get it back when re-plugged: the above script only runs at system boot time.

A more complete but more complex approach may be to get udev to do the work whenever it sees a new drive. That covers both boot time and any time one is plugged in. The smartctl project has had one of these scripts contributed. It looks very clever—for example it works out which devices are part of MD RAIDs—but I don’t use it yet myself as a simpler thing like the script above works for me.
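For a flavour of the udev approach, a rule like this would run a helper each time a disk appears (a sketch only; set-drive-timeout is a hypothetical script applying the same per-drive logic as the loop above):

# /etc/udev/rules.d/60-drive-timeout.rules
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/usr/local/bin/set-drive-timeout %k"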

What about hardware RAIDs?

A hardware RAID controller is going to set low timeouts on the drives itself, so as long as they support the feature you don’t have to worry about that.

If the support isn’t there in the drive then you may or may not be screwed there: chances are that the RAID controller is going to be smarter about how it handles slow requests and just ignore the drive for a while. If you are unlucky though you will end up in a position where some of your drives need the setting changed but you can’t directly address them with smartctl. Some brands e.g. 3ware/LSI do allow smartctl interaction through a control device.

When using hardware RAID it would be a good idea to only buy drives that support scterc.

What about ZFS?

I don’t know anything about ZFS and a quick look gives some conflicting advice:

Drives with scterc support don’t cost that much more, so I’d probably want to buy them and check it’s enabled if it were me.

What about btrfs?

As far as I can see, btrfs does not disable drives itself; it leaves that to Linux, so you're probably not at risk of losing data.

If your drives do support scterc though then you’re still best off making sure it’s set as otherwise things will crawl to a halt at the first sign of trouble.

What about NAS devices?

The thing about these is, they’re quite often just low-end hardware running Linux and doing Linux software RAID under the covers. With the disadvantage that you maybe can’t log in to them and change their timeout settings. This post claims that a few NAS vendors say they have their own timeouts and ignore scterc.

So which drives support SCTERC/TLER and how much more do they cost?

I’m not going to list any here because the list will become out of date too quickly. It’s just something to bear in mind, check for, and take action over.

Fart fart fart

Comments along the lines of “Always use hardware RAID” or “always use $filesystem” will be replaced with “fart fart fart,” so if that’s what you feel the need to say you should probably just do so on Twitter instead, where I will only have the choice to read them in my head as “fart fart fart.”

Categories: LUG Community Blogs