The second in an occasional series. (Here’s the first.)
I was reading up on passwords recently; here are some articles I found interesting.
The evolution mail application is now available on SL7. It seems to work well enough except that it does not cope well with old evolution configuration files. Before trying to use it on SL7, move any old configuration files out of the way. This is explained in a little more detail on the Informatics computing help site:
The story so far: in Accessibility warnings I talked about those irritating “Couldn’t register with accessibility bus” warnings on SL7, and how to suppress them (you set the environment variable NO_AT_BRIDGE to 1).
I’ve since learned that although this workaround works perfectly well on most of the window managers we offer, it rather mysteriously doesn’t work if you happen to use GNOME or GNOME Classic. The lucky users of (for instance) MATE or KDE Plasma Workspace are left to get on with their lives in blissful peace; the users of GNOME and GNOME Classic still see the warning messages.
At first I thought that this might be a quirk of our rather old bash default environment infrastructure, but extensive experimentation established that that wasn’t the case. Our bash startup files are quite capable of defining, for instance, NO_BT_BRIDGE; it’s just NO_AT_BRIDGE which isn’t present in the environment, and only if you use GNOME or GNOME Classic. And it doesn’t matter what value it’s given.
After a lot of searching of GNOME related documentation and sources and blogs and mailing lists I eventually found
gnome-shell, and in particular its source, which contains this:
```c
/* Prevent meta_init() from causing gtk to load gail and at-bridge */
g_setenv ("NO_AT_BRIDGE", "1", TRUE);
meta_init ();
g_unsetenv ("NO_AT_BRIDGE");
```
It unsets NO_AT_BRIDGE. Whether or not it was set before. Without checking its value.
Oh well. However frustrating this might initially have been, it was at least nice to get a rational explanation, and to know that I wasn’t imagining it, or coping with a haunted shell environment.
Also, to be fair, NO_AT_BRIDGE is an undocumented (though widely shared on support sites) workaround. As such, GNOME can do with it what it likes.
At this point I could have patched the gnome-shell source to make it restore the previous value of NO_AT_BRIDGE rather than unsetting it in a blanket fashion; but this would have opened up a Pandora’s Box of tracking future versions of the package, remembering to patch them in the same way, and no doubt eventually forgetting the reason why this patching was being done. All to get rid of some irritating but ultimately quite unimportant warning messages. I decided merely to document the quirk, and give the remaining affected users a workaround, at:
There’s a problem with Tcl’s CMake scripts on SL7.
We use CMake to build our LCFG packages, as documented at LCFG Build Tools. The CMake configuration for a package is held in its
CMakeLists.txt file. This mostly just pulls in our standard LCFG build environment:
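For most packages that boils down to a single include line (the same lcfg.cmake include that appears in the workaround later in this post):

```cmake
include(lcfg.cmake)
```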
but it can also be used to tweak or add things to the CMake configuration. A package which uses the Tcl scripting language would use
CMakeLists.txt to pull in some Tcl configuration:
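Judging by the variable names discussed below, the configuration in question is CMake’s standard FindTCL module:

```cmake
include(FindTCL)
```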
This is the standard way of asking CMake to deduce where various Tcl bits and pieces are and fill in correct values for Tcl-related CMake variables.
We’ve found that it’s filling in the wrong value for the @TCL_TCLSH@ variable. On SL7 this should evaluate to
/usr/bin/tclsh, but it actually evaluates to
/bin/tclsh.
For a user or for a script this isn’t really a problem, because
/bin on SL7 is a symbolic link to
/usr/bin, so either path will find tclsh.
However, an RPM package built with a requirement for
/bin/tclsh certainly is a problem, because such a requirement is unfulfillable. The
tcl package provides
/usr/bin/tclsh, and no package at all provides
/bin/tclsh, so the package building software doesn’t have any way of figuring out how to fulfil the requirement for
/bin/tclsh. All it knows is that it doesn’t know of a package which provides
/bin/tclsh – so when we try to install a package which requires Tcl, the unfulfilled requirement is flagged as an error.
I haven’t found a fix for this, but here’s a workaround: simply edit
CMakeLists.txt to set the variable to its correct value:
```cmake
set(TCL_TCLSH "/usr/bin/tclsh")
include(lcfg.cmake)
```
Whether include(FindTCL) is then left in or removed, the @TCL_TCLSH@ variable is correctly expanded to
/usr/bin/tclsh, and the bogus requirement for
/bin/tclsh disappears; so the resulting package is installable on SL7.
Here’s a little tip for fellow users of Apple Mail on a Mac.
For some time since upgrading my Mac to OS X 10.10 Yosemite, its Mail app had been misbehaving. It seemed unexpectedly slow. Sometimes when I deleted a bunch of messages, some or all of them would undelete themselves a few seconds later, often when I had moved on to another Mail folder. Problems like these would come and go; sometimes it was OK, sometimes not. The malaise affected both my personal email account and my work account.
The cause turned out to be the same for both accounts: Mail was trying to use port 143 for IMAP, despite the fact that it was also trying to use SSL. SSL IMAP should go through port 993, not port 143. Horrible misconfiguration. It didn’t just fail, though. Somehow it kind of stumbled along, behaving just well enough that I didn’t guess for a while that there was a configuration problem at all.
At the suggestion of the tech support people at my personal email provider I tracked down and unticked the Mail account setting to “Automatically detect and maintain account settings”. There’s one of these for each incoming and outgoing mail account you have configured in Mail. I had two IMAP servers and three SMTP servers configured, and I’ve deselected that setting for all five of them. Once that’s done, unticking the “SSL” box then ticking it again is enough to set each configuration back to using its proper port.
As soon as I’d made that change, Mail was transformed back into the beautiful thing it used to be: everything happens instantly, it handles complicated mass deletes with the greatest of ease, it files mail into folders with no perceptible wait, despite all the filtering rules I’ve burdened it with; mail sends at the first time of asking, without any argument or backchat from Mail. Life is good once more.
Postscript: I see from a brief web search that I’m not the only one affected by this! See for instance:
You may see a warning when starting an application (in this case
gedit) on DICE SL7:
```
** (gedit:26514): WARNING **: Couldn't register with accessibility bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
```
This warning is harmless. You can ignore it, or you can stop it happening by first typing the following command into your shell window:
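For our default bash shells, that command is:

```shell
export NO_AT_BRIDGE=1
```

(Exporting it makes the variable visible to any application you then start from that shell.)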
To save you typing this, we’ve added it to DICE’s system-wide bash startup. It’ll be included in next week’s configuration update, which will hit machines on 4/5 November 2015.
As far as I can tell, the warning is produced when an application which uses GNOME’s GTK3 toolkit tries to connect to the GNOME Assistive Technology Service Provider Interface – which isn’t available, because SL7 DICE does not use GNOME’s own display manager, with which many core GNOME services are now integrated. Setting NO_AT_BRIDGE tells the application not to try to load the accessibility “bridge”.
This doesn’t seem to be mentioned in GNOME documentation. I did find a reference to it in the source of at-spi2-atk – “GTK+ module for bridging AT-SPI to ATK”. However I didn’t search the GNOME source exhaustively because, well, there’s a lot of it.
I’m sure it would be nice to be able to make use of whatever GNOME accessibility features are offered by GTK3, but since they seem to be inaccessible on systems which don’t use GDM for their login screens, we currently can’t. (We use lightdm instead of GDM.)
Edit: The story continues in part 2, Those accessibility warnings again.
I’ve written a URL shortener which can be used by anyone in the School of Informatics. For the short story skip to the summary.
The computing staff here in Informatics are required to set aside time for personal development. Constant technical change makes this a necessity. It also stops us from being too bored.
For a while I’ve been promising to spend some development time brushing up web and database programming skills by writing a URL shortener and putting it into service. I’ve now put one together.
What’s a URL shortener? It’s a web service which gives a user a short URL to use in place of a URL which may be long and complicated. For example the address bit.ly/1LfOYvn will take a web browser to blog.inf.ed.ac.uk/chris/url-shortener-part-1, the address of this blog post. goo.gl, tinyurl.com and t.co are other well known URL shorteners.
How does it work? A little like this:
Web browser (to short.url web server): Hello short.url. Can I have the web page at short.url/tufty please?
short.url web server (to itself): tufty… is that one of my shortened URLs? If it is, it’ll be in my database.
(The short.url web server rummages in its database and finds a box labelled tufty. Inside there’s another web address.)
short.url web server (to the browser): Hello Web browser. Your page has moved. Ask for it at this address: (and it quotes the address it found in the box).
That’s redirection. The site also needs to handle registration: it should allow a user to submit a web address and be given an equivalent shortened address for it, adding the new pair of URLs to its database.
There’s an article in Wikipedia if you want to learn more.
What ingredients are needed? A basic list would include a web server, a database, a mechanism for URL redirection, and a way for users to register URLs with the site. While not strictly necessary, an authentication mechanism would come in handy if you think you might want to allow users to delete or change a shortened URL.
To get a first attempt at a URL shortener up and running I provided each of these things then glued them together.
So, a slew of decisions.
- For the implementation language I used Perl, because one of my aims was to revise my knowledge of some Perl technologies I had briefly used a while ago. At a minimum I wanted to revisit CGI (for processing and preparing HTTP requests and responses) and DBI (for interacting with a database). Additionally, Perl is what I currently use for most jobs and I reckoned I’d have enough on my plate without also learning a new language.
- Who would use this, ultimately? I’m in the business of providing computing facilities for the School of Informatics, so it seemed wise to limit the site such that only Informatics users could register URLs on it. It equally seemed obvious that users would probably want to share their URLs worldwide, so non-privileged access – just using the short URLs – would need to be open to everyone. Luckily it’s easy to separate out access-for-Informatics-only from access-for-everyone. We put the access-for-everyone (the redirection of short URLs to the original longer ones) on http, and the access-for-Informatics-only (the registration and general admin of short URLs) on https, protected by Cosign. This sort of separation is used in some other Informatics websites so we have suitable LCFG configuration for setting up the necessary virtual host declarations on the Apache web server.
- I’ve touched on this one already – I used our supported DICE Linux platform with our LCFG configuration technology. This one wasn’t really a decision so much as a no-brainer. LCFG installs and configures everything necessary for the project – OS, the web server, the Perl modules and all, and it’ll keep it configured correctly through software and OS upgrades, reinstalls and so on.
- Some kind of database would be needed to store the URLs and their shortened URL codes. I’m not a database expert but I’m told by those who are that PostgreSQL is easily the best solution available to us on DICE, and clearly superior to certain other popular free SQL solutions.
- URL redirection is done by getting the CGI script to send an HTTP redirect response. There are several of these, but the basic choice seems to come down to 301 and 302. A 301 is a permanent redirect. Once a web browser gets it, it can remember it in its cache, so on subsequent visits it won’t need to revisit the site which issued the redirect. A 302 is a temporary redirect: the browser is redirected to the new site, but subsequent visits will go through the same redirect procedure once again. If you want to use a URL shortener to spy on (sorry: gather information about) people using a URL, the temporary redirect is the obvious choice. However the aim here, at least to start with, is to provide a simple URL redirection service, one which just redirects URLs; so I chose to use 301, the permanent redirect.
- Obviously it would be good for a URL shortener to have a short domain name. For a wee personal project though I thought I’d stick with our normal DNS domain. It may not be the shortest out there but it’s not too prolix; it’s identified with the School of Informatics; and to add a selfish note, it’s in my fingers’ muscle memory.
- Should we allow users to specify their own short URLs, or should we generate random ones? The first would be desirable, both would be best, but for simplicity I started off offering only random URLs.
Once these bits and pieces are in place, the remaining task is to write a CGI script which connects them together. It has to distinguish between an attempt to use a shortened URL (look up the shortened URL in the database to find its associated original URL then issue a redirect to this URL) and other visits. The other visits might be mere curiosity (give them a help message) and a desire to register a URL (give them a form which they can use to submit a URL to the site). The CGI also has to spot a submitted form response (and process it – generate a random short URL code, enter that and the original URL in the database, then give the user their new shortened URL).
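As a toy illustration of that dispatch logic only (the real thing is a Perl CGI talking to PostgreSQL via DBI; the function names and the hard-wired lookup table here are invented for the sketch):

```shell
# Stand-in for the database query: short code -> original URL.
lookup_short_code() {
  case "$1" in
    tufty) printf '%s\n' 'http://blog.inf.ed.ac.uk/chris/url-shortener-part-1' ;;
    *) return 1 ;;
  esac
}

# Decide what to do with a request, given its PATH_INFO.
respond() {
  code="${1#/}"    # strip the leading "/" from PATH_INFO
  if target=$(lookup_short_code "$code"); then
    # A shortened URL we know about: issue the 301 permanent redirect.
    printf 'Status: 301 Moved Permanently\r\nLocation: %s\r\n\r\n' "$target"
  else
    # Anything else gets the help/registration page (elided here).
    printf 'Status: 200 OK\r\nContent-Type: text/html\r\n\r\n'
  fi
}

respond "/tufty"
```

Form submissions take the third path: generate a random code, insert the (code, URL) pair into the database, and show the user their new short URL.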
And that, basically, was the first version of the URL shortener I put together. As soon as I tentatively showed it to a few colleagues, feature requests were made, so shortly afterwards it grew the ability to identify who was registering a URL (Cosign gives you this for free) and to remember this identity alongside each registered URL. This made it possible to give each authenticated user a list of their own registered URLs, and to provide the opportunity to delete each URL. It will also make it possible to let users edit their URL data, but that’ll be covered in a future post.
Ladies and gentlemen, I give you i.inf.ed.ac.uk. Play with it if you like. From this point onwards its database shall be treated with respect, so any URLs that the service issues to you will be preserved. There are a few caveats:
- It’s internal only.
- Until the CGI is a bit more sophisticated and has passed a security audit it will only be available on the Informatics intranet. The firewall will prevent external users from accessing it. The aim is to open it up to outside users (for redirection only; registration of new URLs will remain open only to DICE accounts), but not yet.
- It’s very simple.
- Once you have a basic URL shortener, the number of ways in which it might be extended is rather startling. Extensions will be covered in a future post. Right now, it doesn’t have many fancy bells and whistles. You can add URLs, delete them again, and quote the shortened versions to other (School of Informatics) users. For the moment that’s about it.
- Extensions are on the way.
- The system will be extended as I get time to work on it.
Have fun, and let me know how you get on. And yes, I was a proud member of The Tufty Club.