
LastPass: another view

Last month I mentioned the LastPass breach (I ♥ password managers, 2 December).

I didn’t do much more than mention it, but if you want to read more, this Mastodon post by Jeremi M Gosney makes several interesting points.

In particular, I would be reluctant to use a password manager which stored my secrets opaquely in the cloud somewhere; the ability to store your password database only where you choose to put it seems to me to be a fundamental security advantage.

Recent changes on computing.help

If you use School of Informatics systems you’re probably familiar with computing.help.inf.ed.ac.uk, where you can find the Informatics computing team’s help and advice. In this post I just wanted to mention a couple of its topics which have changed recently.

Firstly there’s Basic Auth. Our University email is handled by Office365, the Microsoft system. Some months ago Microsoft announced that it was going to tighten up the security around logging in to Office365.

The University advises you to read your mail on the web, but if you prefer to use a mail client, you may need to reconfigure how it connects to Office365.

In our page Deprecation of Basic Auth for University Mail we’ve collected together tips on how to do this for a variety of mail clients including alpine, mutt and thunderbird.

Secondly, clean laptops. When you travel internationally, and you take your tech, you need to be careful. Some countries’ border officials may want to inspect the contents of your laptop, and they may refuse you entry, confiscate your laptop or even imprison you if your laptop uses encryption.

Since the University insists that laptops are encrypted – it’s a basic precaution against loss or theft – this presents a problem!

The solution is to borrow a “clean” laptop – meaning that there’s none of your data on it, and it’s not encrypted. You can borrow a clean Windows laptop from the University, or a clean MacBook from Informatics. Our Clean laptops page gives you the necessary links.

I ♥ password managers

Have you ever struggled to remember a password, or to think of a secure new password on the spur of the moment? Have you ever been tempted to use the same password for several purposes?

If you have, you need a password manager. But how does this help? Simple: it remembers your passwords, so you don’t have to. That’s really important:

  • Every password can be unique – no more danger of someone stealing all your accounts with just one password.
  • They can be secure and complicated, but without the hassle – so you can stop using your cat’s birthday.
  • They can be there when you need them, in your web browser and apps.
  • It frees up your mind for something more fun.

The important message here is – use a password manager! But which one? Any popular, reputable one will probably be fine. Here are a few possibilities.
In the cloud. There are lots of these, and the University provides free use of one called LastPass. It’s highly rated for ease of use. As for its security, you might want to read the LastPass disclosure of a recent security incident and judge for yourself. (Well done, LastPass, for being responsible and mature enough to make that disclosure.)
Built in to your web browser (such as Chrome, Firefox, Safari) or into your computer. Many web browsers have a password manager built in. Not only do they remember your passwords, but they can also suggest a good new password when you need one. Sometimes your passwords can be shared between your devices – Apple devices can do that, for instance.
A standalone program on your computer. This is the approach we take on DICE Linux. We provide keepassxc. I think it’s great, because:

  • It’s an open source project, so there are lots of eyes on the code.
  • It stores your passwords in a single file in your own filespace, so you can control where you put that file. (Keep backup copies somewhere safe and separate.)
  • Its password file can be used on practically any type of computer, including phones and laptops.
  • The highly-regarded Electronic Frontier Foundation publishes a guide to using KeePassXC.

Finally, our computing help site has a help page on password managers.

I hope this helps, and feel free to comment.

toohot – shut down a machine if the cooling fails

One of the tasks of a system admin is to take care of computers in server rooms. You might be in your office, or you might be at home in bed, but the computers in the server rooms are in use 24 hours a day, 7 days a week.

This computing activity can generate a lot of heat, especially on powerful servers, so server rooms have powerful cooling systems.

That’s great – until the cooling system fails. When that happens, you’re going to want your servers to somehow know that the room is quickly becoming very hot, and maybe to shut themselves down before they suffer temperature-related hardware damage.

For some years now, our DICE servers have been running a script called toohot which shuts the server down if it detects a temperature which is too high.

The script queries one of the server’s IPMI temperature sensors. Generally there’s one marked “Inlet Temp”, “Ambient” or the like – look for the sensor which reports the temperature of the air being drawn into the machine.

To do the querying it uses the ipmi-sensors utility, part of the freeipmi package.

On a server running Linux you can list the machine’s temperature sensors and their current readings:

# ipmi-sensors -t temperature

Here’s what a Dell PowerEdge R740 has in the way of temperature sensors:

# ipmi-sensors -t temperature
ID  | Name         | Type        | Reading    | Units | Event
1   | Temp         | Temperature | 39.00      | C     | 'OK'
2   | Temp         | Temperature | 51.00      | C     | 'OK'
3   | Inlet Temp   | Temperature | 20.00      | C     | 'OK'
187 | GPU1 Temp    | Temperature | N/A        | C     | N/A
188 | GPU2 Temp    | Temperature | N/A        | C     | N/A
192 | GPU3 Temp    | Temperature | N/A        | C     | N/A
193 | GPU4 Temp    | Temperature | N/A        | C     | N/A
194 | GPU5 Temp    | Temperature | N/A        | C     | N/A
195 | GPU6 Temp    | Temperature | N/A        | C     | N/A
196 | GPU7 Temp    | Temperature | N/A        | C     | N/A
197 | GPU8 Temp    | Temperature | N/A        | C     | N/A
210 | Exhaust Temp | Temperature | 28.00      | C     | 'OK'

In this case toohot would use sensor number 3, “Inlet Temp”.

To get the information it needs from the sensor, the script uses “-v” (verbose) and “-s” (specify a particular sensor number). In this example,

# ipmi-sensors -v  -s 3

This gives, on the example machine:

Record ID: 3
ID String: Inlet Temp
Sensor Type: Temperature (1h)
Sensor Number: 5
IPMB Slave Address: 10h
Sensor Owner ID: 20h
Sensor Owner LUN: 0h
Channel Number: 0h
Entity ID: system board (7)
Entity Instance: 1
Entity Instance Type: Physical Entity
Event/Reading Type Code: 1h
Lower Critical Threshold: -7.000000 C
Upper Critical Threshold: 47.000000 C
Lower Non-Critical Threshold: 3.000000 C
Upper Non-Critical Threshold: 43.000000 C
Lower Non-Recoverable Threshold: N/A
Upper Non-Recoverable Threshold: N/A
Sensor Min. Reading: -128.000000 C
Sensor Max. Reading: 127.000000 C
Normal Min.: 11.000000 C
Normal Max.: 69.000000 C
Nominal Reading: 23.000000 C
Sensor Reading: 20.000000 C
Sensor Event: 'OK'

From this output the script extracts the sensor reading and the relevant threshold values, then compares them.

It then uses a bit of simple logic:
If the “Sensor Reading” value is >= the “Upper Non-Critical Threshold” value then shut down the machine.
In this case 20 is less than 43 so of course we don’t need to shut down the machine.

Sometimes “Upper Non-Critical Threshold” won’t have a meaningful value but “Upper Critical Threshold” or “Upper Non-Recoverable Threshold” will. In these cases the script would subtract 5C from the “Upper Critical Threshold” value or 10C from the “Upper Non-Recoverable Threshold” value, then make the comparison as above.

Occasionally the sensor won’t give a sensible value at all – perhaps because some other software is using IPMI at the time. If this happens the script waits a few seconds and tries again, making several attempts before giving up.

The script is then run every few minutes using a “cron” job, to give some very basic protection against damage caused by a cooling failure.
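
For what it’s worth, here’s a minimal Perl sketch of that logic. It’s an illustration, not the actual toohot script: the sensor number, retry timings, log message and shutdown command are assumptions you’d adapt to your own machines.

#!/usr/bin/perl
# Minimal sketch of the toohot idea: read one IPMI inlet temperature
# sensor and shut the machine down if it is too hot. Sensor number 3
# (the "Inlet Temp" on the example R740) and the retry behaviour are
# illustrative assumptions, not the real script.
use strict;
use warnings;

my $sensor = 3;

sub read_sensor {
    my %v;
    for my $try (1 .. 3) {
        my @out = `ipmi-sensors -v -s $sensor`;
        for (@out) {
            $v{$1} = $2
                if /^(Sensor Reading|Upper Non-Critical Threshold|Upper Critical Threshold|Upper Non-Recoverable Threshold):\s+([-\d.]+)/;
        }
        return %v if defined $v{'Sensor Reading'};
        sleep 5;    # IPMI may be busy with something else; wait and retry
    }
    return;         # give up after a few attempts
}

my %v = read_sensor() or exit 0;    # no sensible reading: do nothing

# Prefer the non-critical threshold; otherwise fall back to the other
# thresholds with a safety margin, as described above.
my $limit = $v{'Upper Non-Critical Threshold'}
         // ($v{'Upper Critical Threshold'}        && $v{'Upper Critical Threshold'} - 5)
         // ($v{'Upper Non-Recoverable Threshold'} && $v{'Upper Non-Recoverable Threshold'} - 10);

exit 0 unless defined $limit;

if ($v{'Sensor Reading'} >= $limit) {
    system('logger', '-t', 'toohot', "reading $v{'Sensor Reading'} >= limit $limit, shutting down");
    system('/sbin/shutdown', '-h', 'now', 'toohot: cooling failure suspected');
}

Something along these lines, run as root from cron every few minutes (ipmi-sensors needs root), gives the same sort of basic protection.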

The script used on the DICE machines is an LCFG component – LCFG is our automated machine configuration system – so it’s not easily separable from the rest of LCFG for use in servers managed by other means. But for what it’s worth, you can see it here:

https://svn.lcfg.org/viewvc/source/trunk/lcfg-toohot/

https://svn.lcfg.org/viewvc/source/trunk/lcfg-toohot/toohot.cin?revision=34927&view=markup

If you look at the script you’ll see that it also checks the temperatures of any GPUs it finds.

RHEL 7.6 β now available

Just as we’re finishing our deployment of 7.5, RHEL 7.6 β is now available. There are release notes. Section 41, Deprecated functionality, is interesting – they’re deprecating sendmail! Gasp! They also deprecate tcpwrappers but don’t explain how to replace it.
Edit: Matthew Richardson has alerted me to this Fedora page on deprecating tcpwrappers which gives some explanation. Thanks Matthew!

User Security Training

I’m working on computing project 403 – Consider User Security Training Materials. The brief is:

Consider what user security training materials we can provide and also what mechanism of delivery to use. This could be used to encourage VPN/Kerberos use, provide system management guidelines for when a self managed server has firewall holes. Ideally we would like a single page with bullet points and that when writing documentation we get volunteers from the end user community (rather than computing staff) to proofread.

The University’s Information Security Division has produced good advice and training on digital security. This project aims to complement that. It does not attempt to compete with or replace it.

This project has been difficult to get going because it’s potentially so open-ended: we need to communicate $SECURITY_ADVICE to $COMPUTER_MANAGERS by $METHODS to achieve $AIMS. Just instantiate the variables.

However, there are some specific priorities:

  • We need to ensure that self-managed (that is, managed by users rather than by computing staff) servers with firewall holes (that is, accessible from outside Informatics) are kept reasonably secure by their managers.
  • We also want to ensure that these managers are aware of their legal obligations as service managers (data protection, freedom of information, University computing regulations, JANET acceptable use policy, and so on).
  • We envisage providing an automatically assessed Learn course, for which the servers’ managers must achieve a good enough pass mark as a condition of getting firewall holes for their servers.
  • We hope to provide a course which would be useful across the University.
  • We want to promote network security by encouraging more use of VPNs (for encrypted network traffic) and of Kerberos (to cut down the remote use of login passwords).

There seem to be two ways of tackling this:

  1. Begin by listing all the cases (e.g. security advice, groups of users, training techniques) we should consider; construct a list of meaningful actions to take; sort them into priority order; then tackle the most important first.
  2. Identify an important thing which needs doing; do it; reassess. Repeat until done.

Colleagues advise that the second option is easily the more practical approach to take.

smartctl and MegaRAID

I’ve never found an entirely satisfactory way of asking our Linux machines for the models, specifications and serial numbers of their disks.

Until now our server monitoring scripts have used the disk controller’s software utility to get disk information, then attempted to make sense of it. For most of our servers the disk controller is a Dell PERC, and this can be controlled using the MegaCli utility.

MegaCli can be asked to return “Inquiry Data” for each disk in the system. This inquiry data is typically a one line mixture of the disk’s manufacturer, its model number, its serial number and its current firmware level.

The problem is that the particular recipe differs from one disk model to another: sometimes the serial number appears before the manufacturer, model and firmware revision; sometimes after; sometimes the manufacturer is named, and sometimes not; sometimes these distinct pieces of data are separated by spaces, and sometimes not.

It’s a mess, and so is the script that tries to make sense of it.

So I was overjoyed – if exasperated – to find out recently that the smartctl utility has all along had the ability to disentangle this data, and present each item of disk inquiry data in a neat, individual, predictably separate way.

For a simple desktop, smartctl is easy enough to use. A command like smartctl -i /dev/sda might return information like:

Model Family:     Western Digital Blue
Device Model:     WDC WD5000AZLX-60K2TA0
Serial Number:    (withheld from blog)
LU WWN Device Id: (withheld from blog)
Firmware Version: (withheld from blog)
User Capacity:    500,107,862,016 bytes [500 GB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 (unknown minor revision code: 0x006d)
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Fri Jan 12 14:47:44 2018 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

That’s all very well as long as the operating system’s disk device – in this case /dev/sda – corresponds to a physical hard disk. For simpler machines this is the case, but more sophisticated servers and workstations will probably have multiple physical disks logically joined together into virtual disks.

If RAID is in use, smartctl can still produce the goods, but it needs to be told something about the RAID setup – the technology in use, the slot which houses the physical disk of interest, and the Linux device corresponding to the virtual disk of which that physical disk is part. For example,

smartctl -i -d megaraid,2 /dev/sdb

With careful use of MegaCli, a script could find out all of this information for each disk on a server with multiple physical and virtual disks, but it would be a tedious job.

So, imagine my surprise when I noticed that smartctl can be fed the slot number of any disk on the system, together with the device name /dev/sda, and it’ll happily return the data on the disk in that slot, whether it’s part of /dev/sda or of some other virtual disk entirely.

On a server with eight physical disks RAIDed into three virtual disks, then, smartctl will supply information for any physical disk on the system as long as you specify the disk device /dev/sda. All you need to vary is the number of the slot which houses the physical disk of interest. For instance the disk in slot 7, which is actually part of /dev/sdc, can be queried with

smartctl -i -d megaraid,7 /dev/sda

A monitoring script is a good deal easier to write when the same virtual disk device can be used to query every disk on the system. Is that behaviour likely to stay, I wonder? Or will the maintainers fix it at some point?
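
To show why that’s handy, here’s a rough Perl sketch of a disk inventory loop which always queries through /dev/sda. It relies on the behaviour described above; the slot range and the exact field names smartctl prints (they differ between SATA and SAS disks) are assumptions to adapt, not a definitive recipe.

#!/usr/bin/perl
# Rough sketch: list model and serial number for every physical disk
# behind a MegaRAID controller, always asking via /dev/sda.
# The slot range 0..15 is an assumption; adjust it for your chassis.
use strict;
use warnings;

for my $slot (0 .. 15) {
    my @out = `smartctl -i -d megaraid,$slot /dev/sda 2>/dev/null`;
    my %info;
    for (@out) {
        # SATA disks report "Device Model"/"Serial Number";
        # SAS disks report "Product"/"Serial number".
        $info{$1} = $2 if /^(Device Model|Product|Serial Number|Serial number):\s+(\S.*?)\s*$/;
    }
    next unless %info;    # no disk in this slot, or smartctl complained
    my $model  = $info{'Device Model'}  // $info{'Product'}       // 'unknown model';
    my $serial = $info{'Serial Number'} // $info{'Serial number'} // 'unknown serial';
    print "slot $slot: $model ($serial)\n";
}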

Passwords

I was reading up on passwords recently; here are some articles I found interesting.

Those accessibility warnings again

The story so far: in Accessibility warnings I talked about those irritating “Couldn’t register with accessibility bus” warnings on SL7, and how to suppress them (you set the environment variable NO_AT_BRIDGE to 1).

I’ve since learned that although this workaround works perfectly well on most of the window managers we offer, it rather mysteriously doesn’t work if you happen to use GNOME or GNOME Classic. The lucky users of these window managers still see the warning messages, while users of (for instance) MATE or KDE Plasma Workspace don’t.

At first I thought that this might be a quirk of our rather old bash default environment infrastructure, but extensive experimentation established that that wasn’t the case. Our bash startup files are quite capable of defining, for instance, NO_BT_BRIDGE; it’s just NO_AT_BRIDGE which isn’t present in the environment, and only if you use GNOME or GNOME Classic. And it doesn’t matter what value it’s given.

After a lot of searching of GNOME related documentation and sources and blogs and mailing lists I eventually found gnome-shell, and in particular its source, which contains this:

  /* Prevent meta_init() from causing gtk to load gail and at-bridge */
  g_setenv ("NO_AT_BRIDGE", "1", TRUE);
  meta_init ();
  g_unsetenv ("NO_AT_BRIDGE");

It unsets NO_AT_BRIDGE. Whether or not it was set before. Without checking its value.

Thanks, guys.

Oh well. However frustrating this might initially have been, it was at least nice to get a rational explanation, and to know that I wasn’t imagining it, or coping with a haunted shell environment.

Also, to be fair, NO_AT_BRIDGE is an undocumented (though widely shared on support sites) workaround. As such, GNOME can do with it what it likes.

At this point I could have patched the gnome-shell source to make it restore the previous value of NO_AT_BRIDGE rather than unsetting it in a blanket fashion; but this would have opened up a Pandora’s Box of tracking future versions of the package, remembering to patch them in the same way, and no doubt eventually forgetting the reason why this patching was being done. All to get rid of some irritating but ultimately quite unimportant warning messages. I decided merely to document the quirk, and give the remaining affected users a workaround, at:

computing.help.inf.ed.ac.uk/accessibility-bus

Tcl has a CMake problem on SL7

There’s a problem with Tcl’s CMake scripts on SL7.

We use CMake to build our LCFG packages, as documented at LCFG Build Tools. The CMake configuration for a package is held in its CMakeLists.txt file. This mostly just pulls in our standard LCFG build environment:

include(lcfg.cmake)

but it can also be used to tweak or add things to the CMake configuration. A package which uses the Tcl scripting language would use CMakeLists.txt to pull in some Tcl configuration:

include(FindTCL)

This is the standard way of asking CMake to deduce where various Tcl bits and pieces are and fill in correct values for Tcl-related CMake variables.

We’ve found that it’s filling in the wrong value for the @TCL_TCLSH@ variable. On SL7 this should evaluate to /usr/bin/tclsh, but it actually evaluates to /bin/tclsh.

For a user or for a script this isn’t really a problem, because /bin on SL7 is a symbolic link to /usr/bin, so either path will find tclsh.

However, for an RPM package to be built with a requirement for /bin/tclsh certainly is a problem, because that requirement is unfulfillable. The tcl package provides /usr/bin/tclsh, and no package at all provides /bin/tclsh, so the package-building software has no way of satisfying the requirement – and when we try to install a package which requires Tcl, the unsatisfied dependency is flagged as an error.

I haven’t found a fix for this, but here’s a workaround: simply edit CMakeLists.txt to set the variable to its correct value:

set(TCL_TCLSH "/usr/bin/tclsh")
include(lcfg.cmake)

Whether include(FindTCL) is then left in or removed, the @TCL_TCLSH@ variable is correctly expanded to /usr/bin/tclsh, and the bogus requirement for /bin/tclsh disappears; so the resulting package is installable on SL7.

Mac Mail woes with OS X Yosemite

Here’s a little tip for fellow users of Apple Mail on a Mac.

For some time since upgrading my Mac to OS X 10.10 Yosemite, its Mail app had been misbehaving. It seemed unexpectedly slow. Sometimes when I deleted a bunch of messages, some or all of them would undelete themselves a few seconds later, often after I had moved on to another Mail folder. Problems like these would come and go; sometimes it was OK, sometimes not. The malaise affected both my personal email account and my work account.

The cause turned out to be the same for both accounts: Mail was trying to use port 143 for IMAP, despite the fact that it was also trying to use SSL. SSL IMAP should go through port 993, not port 143. Horrible misconfiguration. It didn’t just fail, though. Somehow it kind of stumbled along, behaving just well enough that I didn’t guess for a while that there was a configuration problem at all.

At the suggestion of the tech support people at my personal email provider I tracked down and unticked the Mail account setting to “Automatically detect and maintain account settings”. There’s one of these for each incoming and outgoing mail account you have configured in Mail. I had two IMAP servers and three SMTP servers configured, and I’ve deselected that setting for all five of them. Once that’s done, unticking the “SSL” box then ticking it again is enough to set each configuration back to using its proper port.

As soon as I’d made that change, Mail was transformed back into the beautiful thing it used to be: everything happens instantly, it handles complicated mass deletes with the greatest of ease, it files mail into folders with no perceptible wait, despite all the filtering rules I’ve burdened it with; mail sends at the first time of asking, without any argument or backchat from Mail. Life is good once more.

Postscript: I see from a brief web search that I’m not the only one affected by this! See for instance:

Accessibility warnings

You may see a warning when starting an application (in this case gedit) on DICE SL7:

** (gedit:26514): WARNING **: Couldn't register with accessibility bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

This warning is harmless. You can ignore it, or you can stop it happening by first typing the following command into your shell window:

export NO_AT_BRIDGE=1

To save you typing this, we’ve added it to DICE’s system-wide bash startup. It’ll be included in next week’s configuration update, which will hit machines on 4/5 November 2015.

As far as I can tell, the warning is produced when an application which uses GNOME’s GTK3 toolkit tries to connect to the GNOME Assistive Technology Service Provider Interface – which isn’t available, because SL7 DICE does not use GNOME’s own display manager, with which many core GNOME services are now integrated. Setting NO_AT_BRIDGE tells the application not to try to load the accessibility “bridge”.

This doesn’t seem to be mentioned in GNOME documentation. I did find a reference to it in the source of at-spi2-atk – “GTK+ module for bridging AT-SPI to ATK”. However I didn’t search the GNOME source exhaustively because, well, there’s a lot of it.

I’m sure it would be nice to be able to make use of whatever GNOME accessibility features are offered by GTK3, but since they seem to be inaccessible on systems which don’t use GDM for their login screens, we currently can’t. (We use lightdm instead of GDM.)

Edit: The story continues in part 2, Those accessibility warnings again.

URL shortener, part 1

I’ve written a URL shortener which can be used by anyone in the School of Informatics. For the short story skip to the summary.

The computing staff here in Informatics are required to set aside time for personal development. Constant technical change makes this a necessity. It also stops us from being too bored.
For a while I’ve been promising to spend some development time brushing up web and database programming skills by writing a URL shortener and putting it into service. I’ve now put one together.

What’s a URL shortener? It’s a web service which gives a user a short URL to use in place of a URL which may be long and complicated. For example the address bit.ly/1LfOYvn will take a web browser to blog.inf.ed.ac.uk/chris/url-shortener-part-1, the address of this blog post. goo.gl, tinyurl.com and t.co are other well known URL shorteners.

How does it work? A little like this:

Web browser (to short.url web server):
Hello short.url. Can I have the web page at short.url/tufty please?

short.url web server (to itself):
Let’s see. tufty… is that one of my shortened URLs? If it is, it’ll be in my database.
short.url web server rummages in its database and finds a box labelled tufty. Inside there’s another web address.

short.url web server:
Hello Web browser. Your page has moved. Ask for it at this address: http://www.snh.gov.uk/about-scotlands-nature/species/mammals/land-mammals/squirrels/

That’s redirection. The site also needs to handle registration: it should allow a user to submit a web address and be given an equivalent shortened address for it, adding the new pair of URLs to its database.
There’s an article in Wikipedia if you want to learn more.

What ingredients are needed? A basic list would include a web server, a database, a mechanism for URL redirection, and a way for users to register URLs with the site. While not strictly necessary, an authentication mechanism would come in handy if you think you might want to allow users to delete or change a shortened URL.

To get a first attempt at a URL shortener up and running I provided each of these things then glued them together.

So, a slew of decisions.

  1. For the implementation language I used Perl, because one of my aims was to revise my knowledge of some Perl technologies I had briefly used a while ago. At a minimum I wanted to revisit CGI (for processing and preparing HTTP requests and responses) and DBI (for interacting with a database). Additionally, Perl is what I currently use for most jobs and I reckoned I’d have enough on my plate without also learning a new language.
  2. Who would use this, ultimately? I’m in the business of providing computing facilities for the School of Informatics, so it seemed wise to limit the site such that only Informatics users could register URLs on it. It equally seemed obvious that users would probably want to share their URLs worldwide, so non-privileged access – just using the short URLs – would need to be open to everyone. Luckily it’s easy to separate out access-for-Informatics-only from access-for-everyone. We put the access-for-everyone (the redirection of short URLs to the original longer ones) on http, and the access-for-Informatics-only (the registration and general admin of short URLs) on https, protected by Cosign. This sort of separation is used in some other Informatics websites so we have suitable LCFG configuration for setting up the necessary virtual host declarations on the Apache web server.
  3. I’ve touched on this one already – I used our supported DICE Linux platform with our LCFG configuration technology. This one wasn’t really a decision so much as a no-brainer. LCFG installs and configures everything necessary for the project – OS, the web server, the Perl modules and all, and it’ll keep it configured correctly through software and OS upgrades, reinstalls and so on.
  4. Some kind of database would be needed to store the URLs and their shortened URL codes. I’m not a database expert but I’m told by those who are that PostgreSQL is easily the best solution available to us on DICE, and clearly superior to certain other popular free SQL solutions.
  5. URL redirection is done by getting the CGI script to send an HTTP redirect response. There are several of these, but the basic choice comes down to 301 and 302. A 301 is a permanent redirect. Once a web browser gets it, it can remember it in its cache, so on subsequent visits it won’t need to revisit the site which issued the redirect. A 302 is a temporary redirect: the browser is redirected to the new site, but subsequent visits will go through the same redirect procedure once again. If you want to use a URL shortener to gather information about the people using a URL, the temporary redirect is the obvious choice. However the aim here, at least to start with, is to provide a simple URL redirection service, one which just redirects URLs; so I chose to use 301, the permanent redirect.
  6. Obviously it would be good for a URL shortener to have a short domain name. For a wee personal project though I thought I’d stick with our normal DNS domain. It may not be the shortest out there but it’s not too prolix; it’s identified with the School of Informatics; and to add a selfish note, it’s in my fingers’ muscle memory.
  7. Should we allow users to specify their own short URLs, or should we generate random ones? The first would be desirable, both would be best, but for simplicity I started off offering only random URLs. (There’s a small sketch of generating a random code just after this list.)
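
On item 7, here’s the sort of thing I mean by a random short URL code – a tiny Perl sketch which picks a code and checks it isn’t already taken. The code length, the alphabet and the urls table with its code column are made up for illustration; they’re not the real schema.

# Sketch: pick a random 6-character code that isn't already in the
# database. $dbh is an open DBI handle; the "urls" table and "code"
# column are illustrative assumptions.
use strict;
use warnings;

my @alphabet = ('a' .. 'z', 'A' .. 'Z', '0' .. '9');

sub new_code {
    my ($dbh) = @_;
    while (1) {
        my $code = join '', map { $alphabet[ int rand @alphabet ] } 1 .. 6;
        my ($taken) = $dbh->selectrow_array(
            'SELECT 1 FROM urls WHERE code = ?', undef, $code);
        return $code unless $taken;    # keep trying until the code is free
    }
}
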

Once these bits and pieces are in place, the remaining task is to write a CGI script which connects them together. It has to distinguish between an attempt to use a shortened URL (look up the shortened URL in the database to find its associated original URL, then issue a redirect to that URL) and other visits. The other visits might be mere curiosity (give them a help message) or a desire to register a URL (give them a form which they can use to submit a URL to the site). The CGI also has to spot a submitted form response and process it – generate a random short URL code, enter that and the original URL in the database, then give the user their new shortened URL.
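
Here’s a hedged Perl sketch of that dispatch logic, using the CGI and DBI modules mentioned earlier. The database name, the urls table and its columns are illustrative assumptions rather than the real code, and a real version would of course validate the submitted URL before storing it.

#!/usr/bin/perl
# Sketch of the CGI dispatch: redirect a known short code with a 301,
# register a submitted URL under a fresh random code, or show a
# help/registration page. Table "urls" (code, url) and the database
# name are assumptions for illustration.
use strict;
use warnings;
use CGI;
use DBI;

my $q   = CGI->new;
my $dbh = DBI->connect('dbi:Pg:dbname=shorturl', '', '', { RaiseError => 1 });

my ($code) = ($q->path_info =~ m{^/(\w+)$});

if ($code) {
    # An attempt to use a shortened URL: look it up and redirect.
    my ($url) = $dbh->selectrow_array(
        'SELECT url FROM urls WHERE code = ?', undef, $code);
    if ($url) {
        print $q->redirect(-uri => $url, -status => '301 Moved Permanently');
    } else {
        print $q->header(-status => '404 Not Found'), "Unknown short URL\n";
    }
} elsif (my $target = $q->param('url')) {
    # A submitted registration form: store the URL under a fresh random
    # code (the earlier sketch shows a version that checks for collisions).
    my $new = join '', map { ('a' .. 'z', '0' .. '9')[int rand 36] } 1 .. 6;
    $dbh->do('INSERT INTO urls (code, url) VALUES (?, ?)', undef, $new, $target);
    print $q->header, "Your short URL code is $new\n";
} else {
    # Mere curiosity: show a help message and a registration form.
    print $q->header, $q->start_html('URL shortener'),
          $q->start_form, 'URL to shorten: ', $q->textfield('url'),
          $q->submit('Shorten'), $q->end_form, $q->end_html;
}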

And that, basically, was the first version of the URL shortener I put together. As soon as I tentatively showed it to a few colleagues, feature requests were made, so shortly afterwards it grew the ability to identify who was registering a URL (Cosign gives you this for free) and to remember this identity alongside each registered URL. This made it possible to give each authenticated user a list of their own registered URLs, and to provide the opportunity to delete each URL. It will also make it possible to let users edit their URL data, but that’ll be covered in a future post.
 
Ladies and gentlemen, I give you i.inf.ed.ac.uk. Play with it if you like. From this point onwards its database shall be treated with respect, so any URLs that the service issues to you will be preserved. There are a few caveats:

It’s internal only.
Until the CGI is a bit more sophisticated and has passed a security audit it will only be available on the Informatics intranet. The firewall will prevent external users from accessing it. The aim is to open it up to outside users (for redirection only; registration of new URLs will remain open only to DICE accounts), but not yet.
It’s very simple.
Once you have a basic URL shortener, the number of ways in which it might be extended is rather startling. Extensions will be covered in a future post. Right now, it doesn’t have many fancy bells and whistles. You can add URLs, delete them again, and quote the shortened versions to other (School of Informatics) users. For the moment that’s about it.
Extensions are on the way.
The system will be extended as I get time to work on it.

Have fun, and let me know how you get on. And yes, I was a proud member of The Tufty Club.