Account Lifecycle

We have recently implemented automated processing of the final stages of a user account’s lifecycle within our account management system, ‘Prometheus’. [1] This is used to apply expiry (or ‘grace’) and suspension periods to an account, as described in our account closure policy. [2]

All user accounts have ‘roles’ and ‘entitlements’. These are used by our systems to grant access to services, e.g. possession of a particular entitlement would allow a user to log in to a specific machine. The introduction of full lifecycle management means that most roles and entitlements are preserved for a grace period once an account has expired.

At the beginning of the grace period, an automated mail will be sent to the user to let them know that their account has expired. At the end of the grace period the account will be automatically disabled (and, subject to the suspension period, deleted).

Eligibility for a DICE account is determined by details in the school database, so it is important that this information is correct and up to date. This is particularly pertinent for short-term visitor accounts. If you are sponsoring a visitor, please ensure that Informatics HR are informed of any changes to visit dates.

[1] http://blog.inf.ed.ac.uk/systems/2012/09/04/prometheus-account-management-in-informatics/
[2] http://computing.help.inf.ed.ac.uk/account-closure


Virtual Appleton Tower

As we prepare to decant from Appleton Tower to Forrest Hill and Wilkie Building, I thought it might be useful to describe how we’re networking the buildings and why we’ve done it the way we have.

In short, both FH and Wilkie will be operated as virtual floors of Appleton Tower. This makes it straightforward to move machines from one site to another, as we can just configure the ports in a user's new location to match what they had before. Machines and VoIP phones can then, for the most part, just plug in and work, minimising downtime and reconfiguration. It also simplifies network installation and management, as we have all the surrounding infrastructure services already in place.

Each site connects back to our Appleton Tower core over a pair of 10Gbps fibre links.  This allows for load-sharing, as we are collapsing three sets of uplinks into one for each new site.  It also gives us an additional measure of resilience, as it means that one switch failure, either in AT or FH or Wilkie, will not affect the service for all the others. Network diagrams for Forrest Hill are linked from here, and the Wilkie diagram is here.

The disadvantage of this scheme is that it introduces inter-building dependencies, which we normally try to avoid.  In this case, however, it was felt that this would be outweighed by the simplification of the decant process, given that we expect to be in the buildings for only one academic year.


Planned group space downtime

The disk array ifevo3 has a fault with its flash memory. Though it may just about be possible to replace the faulty memory without disrupting the file service, the recommendation is to shut down the disk array to do the work, so that is what we plan to do.

The data on ifevo3 is nearly all group space, plus some of our system/backup data. While ifevo3 is down, the group space listed below will be unavailable.

To do this work without affecting other files and home directories served by the same servers, we need to unmount the affected partitions from those servers. This will mean a brief interruption to all files served by those servers: once before the work starts, and once again after it is complete. These brief breaks should last no longer than two minutes; you may not even notice them.

We are planning to do the work on Tuesday 26th of May, between 9am and 10am. Please let us know now if this is going to cause you real problems.

The list of group areas that will be unavailable during the work is:

/group/admin/repadmin
/group/corpora/public
/group/infdb/infreports
/group/infdb/infthesis
/group/infdb/ug4projects
/group/project/dcspaul-diy
/group/support/projects
/group/teaching/cvscs1
/afs/inf.ed.ac.uk/group/admin/DOPShandover
/afs/inf.ed.ac.uk/group/appguarden2
/afs/inf.ed.ac.uk/group/bundy
/afs/inf.ed.ac.uk/group/corpora/restricted/sky_news
/afs/inf.ed.ac.uk/group/cstr/projects/corpus_1
/afs/inf.ed.ac.uk/group/cstr/projects/Felipe_Espic
/afs/inf.ed.ac.uk/group/cstr/projects/galatea/d03
/afs/inf.ed.ac.uk/group/cstr/projects/galatea/d04
/afs/inf.ed.ac.uk/group/cstr/projects/merritt_synthesis
/afs/inf.ed.ac.uk/group/cstr/projects/merritt_synthesis2
/afs/inf.ed.ac.uk/group/cup/data1
/afs/inf.ed.ac.uk/group/ESSENCEadmin
/afs/inf.ed.ac.uk/group/ICSA/beaches
/afs/inf.ed.ac.uk/group/msc-projects/s1355318
/afs/inf.ed.ac.uk/group/msc-projects/s1355708
/afs/inf.ed.ac.uk/group/pasta-private
/afs/inf.ed.ac.uk/group/project/ami5
/afs/inf.ed.ac.uk/group/project/arc_tools
/afs/inf.ed.ac.uk/group/project/biomedTM
/afs/inf.ed.ac.uk/group/project/common_sense
/afs/inf.ed.ac.uk/group/project/deca
/afs/inf.ed.ac.uk/group/project/dnn_tts
/afs/inf.ed.ac.uk/group/project/gpgpu
/afs/inf.ed.ac.uk/group/project/idea
/afs/inf.ed.ac.uk/group/project/img2txt
/afs/inf.ed.ac.uk/group/project/mast
/afs/inf.ed.ac.uk/group/project/mast/mast2
/afs/inf.ed.ac.uk/group/project/nst/bbcdata/ptn4
/afs/inf.ed.ac.uk/group/project/nst/bbcdata/ptn5
/afs/inf.ed.ac.uk/group/project/nst/bbcdata/ptn6
/afs/inf.ed.ac.uk/group/project/nst/bbcdata/ptn7
/afs/inf.ed.ac.uk/group/project/nst/bbcdata/ptn8
/afs/inf.ed.ac.uk/group/project/nst/bbcdata/raw
/afs/inf.ed.ac.uk/group/project/reellives
/afs/inf.ed.ac.uk/group/project/slmc
/afs/inf.ed.ac.uk/group/project/UKConnect
/afs/inf.ed.ac.uk/group/statgen
/afs/inf.ed.ac.uk/group/synproc
/afs/inf.ed.ac.uk/group/teaching/matlab-help
/afs/inf.ed.ac.uk/group/ug4-projects/s1040800
/afs/inf.ed.ac.uk/group/workshop/datascience
/afs/inf.ed.ac.uk/group/workshop/ml

Remember that any web space served from these areas will also be unavailable.

And for our own record, these are the actual mount points:

kraken:/vicepv /vicepw
huldra:/vicepd /vicepe /vicepf /vicepg /viceph /vicepi /vicepj /vicepk /vicepl /vicepm /vicept /vicepu
nessie:/vicept /vicepu /disk/rmirror20 /disk/rmirror21
lammasu:/vicepa /disk/ptn175 /disk/ptn176 /disk/ptn197 /disk/ptn198
yeti:/vicepu /vicepv
telford:/disk/data /disk/data2

Thanks for your understanding,

Neil


Explanation of Yesterday directory

As happened today, “Yesterday” isn’t always yesterday! A brief explanation.

All AFS home directories (and some group areas) contain a Yesterday sub-directory. This directory contains a copy of your home directory from “Yesterday”, which can be useful if you accidentally delete a file.

The Yesterday directory is actually a by-product of our backup system. “Yesterday” usually means “from around 9pm”. But if the backups are running behind schedule, as can happen once a month when full backups are taken, then “Yesterday” could mean “from around 1am” or “11am” – it just depends.

This command line will tell you when your Yesterday was created:

/usr/sbin/vos exam `fs lsmount ~/Yesterday | cut -f4 -d\' | tr -d \#` | grep Creation
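
For example, the output might look something like this (the date shown here is purely illustrative):

Creation    Tue May 19 21:02:14 2015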

To recap, “Yesterday” is usually around 9pm the day before, but for operational reasons it could be as little as “a few seconds ago”.

Neil


SAN firmware update April 2015

Our two Dothill SAN storage boxes in the Informatics Forum are due a firmware update. Thanks to their built-in redundancy, the upgrades are supposed to be safe to do without affecting their operation. In fact we've already upgraded a similar device at KB (which we use for our off-site copy of users' data) without issue.

However, given the potential disruption that would be caused if they did go off-line during the update (between them they hold 100TB of storage), we will do the updates this weekend: one on Saturday starting at 9am, the other on Sunday at 9am. The bulk of AFS file space should be considered at risk between 9am and 1pm on both days.

Home directories on the Appleton Tower servers (naga, cetus, gorgon and minotaur) are not at risk. Use the "homedir" command to see which server you are on, e.g.:

neilb> homedir
neilb (Neil Brown) : huldra/vicepa : /afs/inf.ed.ac.uk/user/n/neilb : free 234.2G (used 48%)

In this case my home directory is on “huldra”.

As I say, nothing should go wrong and the update should be invisible; we are just being cautious.

Neil


Changes to default Apache configuration

A few weeks ago we rolled out some new sensible default Apache configuration to all our web servers.

These configuration changes will affect all DICE web servers unless their manager has chosen to exclude or modify some of the settings. This includes the few web services where the servers are managed DICE machines but authorized users can add to the Apache configuration.

The changes came about as the result of a project to improve the security of our web servers.

That project produced a list of all sorts of possible steps we could take to improve security, some realistic and practical, others less so. The ones that were simply Apache configuration changes are listed at https://wiki.inf.ed.ac.uk/DICE/SecuringWebServers and made it into our sensible header https://wiki.inf.ed.ac.uk/DICE/ApacheConfSensible; it is these settings that are now active on all our web servers.

Most of the changes should be benign to regular users of a web site; however, the mod_security module can be over-zealous, and some extra configuration is likely to be needed if your web site supports web form uploads.

If your website runs on a DICE-managed server and you notice a change in behaviour, it is possible these changes have had an effect. The issue that has cropped up most often is the upload limit set by mod_security, which defaults to a modest 13MB but is easily changed.
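
If your site does need larger uploads, something along these lines in the relevant Apache configuration should raise the mod_security limit; this is just a sketch, and the 100MB value is purely illustrative, so pick whatever your forms actually need:

# Raise mod_security's request body limit to roughly 100MB (illustrative value, in bytes)
SecRequestBodyLimit 104857600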

Note that we expect our list of sensible Apache configuration settings to continue to grow and adapt, and updates to the current list will be released over time.

Neil


Removal of RC4 from HTTPS Cipher Suite

In line with current best practice we will shortly be removing the RC4 stream cipher from our Apache HTTPS sites. Indeed we’ve already removed it from https://wiki.inf.ed.ac.uk without any apparent issues.

For more details see:

http://en.wikipedia.org/wiki/RC4

Modern browsers will connect via one of the various other SSL ciphers that we support. However, please do let support know if your web client has problems connecting to .inf.ed.ac.uk web sites after the 26th of March.

The sort of browsers that might have problems are old-style candy-bar mobile phone browsers (e.g. on an eight-year-old Nokia 6120) and Internet Explorer 6 on Windows XP. Slightly less old browsers can potentially use RC4, but will use a more secure alternative in its absence.
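
If you'd like to check a server for yourself, an openssl one-liner along these lines should do it, for example against the wiki mentioned above; a handshake failure indicates that RC4 is no longer accepted:

openssl s_client -connect wiki.inf.ed.ac.uk:443 -cipher RC4 < /dev/null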

Neil


DICE upgrade to Scientific Linux 7

As reported in previous posts (Upgrade of DICE desktops to SL7, Seven), we have been working hard on preparing DICE for upgrade to Scientific Linux 7. We are on track to begin upgrading DICE desktops in May, completing the teaching labs by the start of academic year 2015/2016 and all desktops by Xmas 2015.

We are now in a position to invite some users to act as trailblazers.  If you are keen to upgrade early and are willing to accept that some software may not yet be available, please contact us using the normal support form.

An SL7 ssh login box, sl7test.inf.ed.ac.uk, is now available for people to check software versions etc.
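
For example, to log in and check a few versions (the username here is illustrative):

ssh s1234567@sl7test.inf.ed.ac.uk
cat /etc/redhat-release
gcc --version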

Release notes summarising changes from SL6 will be produced in the next few weeks.


Recently-completed network security projects

Last year I wrote about a couple of network security projects which were in their early stages of development.  As the last of these has recently completed, I thought it might be useful to summarise their outcomes.

We have had edge filtering in place for a long time, since we ran Solaris on Suns in fact, configured automatically from our machine configuration system (LCFG). This has proved to be very successful in practice. Our main edge routers typically reject a couple of million bogus packets per day, though this is still rather less than 0.5% of their total throughput. We mostly don't log this in detail, as there's just too much of it and most of it isn't very interesting, but we do have a couple of externally-visible machines which log more extensively. These show several thousand scans per day, mostly for various Microsoft services, against individual IP addresses which have not been in use for several years.

The first of the projects I mentioned was "Scanning for Compromised Machines". After some investigation of our own, we learned that the University would be buying in to the ESISS scanning tool. We now have this in use, regularly scanning all machines (managed and self-managed) with open firewall holes. This has proved to be reasonably successful, and has thrown up a number of cases for further investigation. Where these involve self-managed machines, we follow up with the machine's manager to have any vulnerabilities closed down.

The other project was a pilot Intrusion Detection System. This was a useful exercise, and the experience gained will certainly be helpful if we do later implement it as a full service, though overall the result was rather less useful than the "Scanning" project, for reasons which are listed in more detail in the report. In summary, the reports it produces are rather noisy due to our heterogeneous environment, and the rules we use run a couple of weeks or so behind the leading edge, so we tend to hear about (and patch!) vulnerabilities through other routes before they start to show in the reports. We'll leave the pilot system running, so long as it doesn't interfere with the proper functioning of our network, but there would still be quite a bit of work required to bring it up to production standard, and that effort just isn't available at the moment, given the SL7 upgrades and the Appleton Tower decant.


Virtual DICE now has Java

New Virtual DICE VM images are available for download. A lot of software has been updated, but the big news is that the VMs now include Java. A careful review of the licence conditions suggested that we were after all allowed to do this!

Virtual DICE is the School's DICE Linux, but running in a virtual machine which you can control. It uses VirtualBox, so it can run on any supported machine (with enough disk space and memory). To find out more, read the Virtual DICE help pages.
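
As a rough sketch, importing and starting a downloaded image from the command line goes something like this (the file and VM names are illustrative; see the help pages for the actual download details):

VBoxManage import VirtualDICE.ova
VBoxManage startvm VirtualDICE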

The Java software is now also available for the previous Virtual DICE release (hostnames knibbergen and knijff). However, the software update may well fail due to lack of disk space on the VM. If this happens, just install the latest Virtual DICE instead. (The new Virtual DICE VMs have more disk space, and have Java already installed.)

As ever, please contact computing support with any problems you encounter. Thanks.
