Our two Dothill SAN storage boxes in the Informatics Forum are due a firmware update. Thanks to their built-in redundancy, the upgrades should be safe to do without affecting their operation. In fact we’ve already updated a similar device at KB (which we use for our off-site copy of users’ data) without issue.
However, given the potential disruption that would be caused if they did go off-line during the update (between them they hold 100TB of storage), we will do the updates this weekend: one on Saturday starting at 9am, the other on Sunday at 9am. The bulk of AFS file space should be considered at risk between 9am and 1pm on both days.
Home directories on the Appleton Tower servers (naga, cetus, gorgon and minotaur) are not at risk. Use the “homedir” command to see which server you are on, e.g.:
neilb (Neil Brown) : huldra/vicepa : /afs/inf.ed.ac.uk/user/n/neilb : free 234.2G (used 48%)
In this case my home directory is on “huldra”.
As I say, nothing should go wrong and the update should be invisible; we are just being cautious.
A few weeks ago we rolled out some new sensible default apache configuration to all our web servers.
These configuration changes affect all DICE web servers unless their manager has chosen to exclude or modify some of the settings. This includes the few web services where the servers are managed DICE machines but authorized users are able to add to the apache configuration.
The changes came about as the result of a project to review and improve the security of our web servers.
That project produced a list of all sorts of possible steps we could take to improve security, some realistic and practical, others less so. The ones that were simply apache configuration changes are listed on https://wiki.inf.ed.ac.uk/DICE/SecuringWebServers, and made it into our sensible header https://wiki.inf.ed.ac.uk/DICE/ApacheConfSensible; it is these settings that are now active on all our web servers.
Most of the changes should be benign to regular users of a web site; however, the mod_security module can be overzealous, and some extra configuration is likely to be needed if your web site supports web form uploads.
If your website runs on a DICE managed server and you notice a change in behaviour, it is possible these changes have had an effect. The one that has cropped up most is the default upload limit set by mod_security, which defaults to a smallish 13MB but is easily changed.
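As a rough sketch only (the exact directives your site needs may differ, and the value below is purely illustrative), raising that mod_security limit typically looks something like this in the relevant apache configuration:

```apache
# Illustrative only: raise mod_security's request body limit from the
# ~13MB default (13107200 bytes) to 100MB so larger form uploads work.
<IfModule mod_security2.c>
    SecRequestBodyLimit 104857600
</IfModule>
```

As ever, if your site is on a managed server, ask support rather than guessing at values.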
Note that we expect our list of sensible apache configuration settings to continue to grow and adapt, and updates to the current list will be released over time.
In line with current best practice we will shortly be removing the RC4 stream cipher from our Apache HTTPS sites. Indeed we’ve already removed it from https://wiki.inf.ed.ac.uk without any apparent issues.
Modern browsers will connect via one of the various other SSL ciphers that we support. However, please do let support know if your web client has problems connecting to .inf.ed.ac.uk web sites after the 26th of March.
The sorts of browsers that might have problems are old-style candy-bar mobile phone browsers (e.g. on an 8-year-old Nokia 6120) or Internet Explorer 6 on Windows XP. Slightly less old browsers can potentially use RC4, but will use a more secure alternative in its absence.
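For anyone managing their own apache HTTPS configuration, the change amounts to something like the following sketch. This is illustrative only: the cipher list actually deployed on our servers may well differ.

```apache
# Illustrative mod_ssl settings: disable the legacy SSLv2/SSLv3
# protocols and exclude RC4 from the accepted cipher list.
SSLProtocol all -SSLv2 -SSLv3
SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5:!RC4
SSLHonorCipherOrder on
```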
As reported in previous posts (Upgrade of DICE desktops to SL7, Seven), we have been working hard on preparing DICE for upgrade to Scientific Linux 7. We are on track to begin upgrading DICE desktops in May, completing the teaching labs by the start of academic year 2015/2016 and all desktops by Xmas 2015.
We are now in a position to invite some users to act as trailblazers. If you are keen to upgrade early and are willing to accept that some software may not yet be available, please contact us using the normal support form.
An SL7 ssh login box sl7test.inf.ed.ac.uk is now available for people to check software versions etc.
Release notes summarising changes from SL6 will be produced in the next few weeks.
Last year I wrote about a couple of network security projects which were in their early stages of development. As the last of these has recently completed, I thought it might be useful to summarise their outcomes.
We have had edge filtering in place for a long time, since we ran Solaris on Suns in fact, configured automatically from our machine configuration system (lcfg). This has proved to be very successful in practice. Our main edge routers typically reject a couple of million bogus packets per day, though this is still rather less than 0.5% of their total throughput. We mostly don’t log this in detail, as there’s just too much of it and most of it isn’t very interesting, but we do have a couple of externally-visible machines which log more extensively. These show several thousand scans per day, mostly for various Microsoft services, against individual IP addresses which have not been in use for several years.
The first of the projects I mentioned was “Scanning for Compromised Machines”. After some investigation of our own, we learned that the University would be buying in to the ESISS scanning tool. We now have this in use, regularly scanning all machines (managed and self-managed) with open firewall holes. This has proved reasonably successful, and has thrown up a number of cases for further investigation. Where these involve self-managed machines, we follow up with the machine’s manager to have any vulnerabilities closed down.
The other project was a pilot Intrusion Detection System. This was a useful exercise, and the experience gained will certainly be helpful if we do later implement this as a full service, though overall the result was rather less useful than the “Scanning” project, for reasons set out in more detail in the report. In summary, the reports it produces are rather noisy due to our heterogeneous environment, and the rules we use run a couple of weeks or so behind the leading edge, so we tend to hear about (and patch!) vulnerabilities through other routes before they start to show in the reports. We’ll leave the pilot system running, so long as it doesn’t interfere with the proper functioning of our network. However, quite a bit of work would still be required to bring it up to production standard, and that effort just isn’t available at the moment as a result of the SL7 upgrades and the Appleton Tower decant.
New Virtual DICE VM images are available for download. A lot of software has been updated, but the big news is that the VMs now include Java. A careful review of the licence conditions suggested that we were after all allowed to do this!
Virtual DICE is the School’s DICE Linux, but running in a virtual machine which you can control. It uses VirtualBox so can run on any supported machine (with enough disk space and memory). To find out more read the Virtual DICE help pages.
The Java software is now also available for the previous Virtual DICE release (hostnames knibbergen and knijff). However the software update may well fail due to lack of disk space on the VM. If this happens, just install the latest Virtual DICE. (The new Virtual DICE VMs have more disk space, and have Java already installed.)
As ever, please contact computing support with any problems you encounter. Thanks.
In November 2014 the remaining end-user vestiges of the legacy school database service, which originated in 1996, were migrated into Theon. The areas and workflows transitioned at this time were:
- Staff Records
- Visitor Records
- Post Application Visit Day handling for UG students
- Taught PG student admissions handling
The School InfHR team moved across onto Theon in November to manage the Staff and Visitor records. The PAVD and PGT handling processes have started being used in earnest this week by the ITO, although some remedial work remains.
The PAVD/PGT move was the final nail in the coffin for “genrep”, the old terminal-based menu reporting system used by the ITO; it and its host server will shortly be completely decommissioned.
The Staff/Visitor move was a significant shift for InfHR, as it saw the primary source of data for both shifted to upstream IS services rather than being locally re-keyed. For Staff records the central IS HR/Oracle system is now the golden copy, and for Visitor records the central IS VRS system is. Due to limitations in the HR/Oracle dataset we supplement this data with a feed from the central IS IDMS service; this is also used to correct data in the VRS dataset, which has some IS-acknowledged issues. This transition aligns InfHR and our Staff/Visitor records with the ITO/IGS, which have been using central golden-copy data sources (EUCLID/EUGEX) since the introduction of Theon in late 2010.
With all the legacy data now migrated (or replaced) in Theon, the process of purging and archiving the data from the old database system (while continuing to comply with University retention periods and data protection regulations) and of decommissioning it has begun.
To maintain the integrity of our systems we need to reboot all DICE desktops over the next few days. We understand that reboots can be very inconvenient, so please be assured that we will only ever schedule reboots which we consider to be essential.
A delayed reboot has been scheduled for all DICE desktops. Student lab machines will be rebooted overnight. For office machines the delay will be 5 days. Although the reboots are delayed, it would be greatly appreciated if people could manually reboot their machines at their earliest convenience; the delayed reboot would then be cancelled.
If you have any queries they should be submitted via the Support Form.
As described in my November posting, we’re making some changes to our OpenVPN configuration, partly to support new devices, partly to add capacity, and partly because one of the features we currently use is being phased out.
The server-end changes are now all in place, as are the new configuration files needed on the client end to use them. Please fetch these by following the links from the computing.help site. The new files should replace any existing configuration files you have on your machine. You may then have to stop and restart OpenVPN to have these new versions picked up.
There are platform-specific versions for Windows, Mac, iOS and Android, and generic versions which should work for Linux and *BSD. Please fetch and install platform-specific versions if possible, as these may contain additional settings to give smoother operation in some cases.
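For Linux and *BSD users curious about what the generic files contain, an OpenVPN client configuration of this kind typically looks roughly like the sketch below. Note that this is illustrative only: the endpoint hostname, port and certificate names here are placeholders, not our real settings, so always use the actual files downloaded from computing.help.

```
# Illustrative OpenVPN client config (placeholder values throughout;
# use the real files from computing.help, not this sketch).
client
dev tun
proto udp
remote vpn.example.inf.ed.ac.uk 1194   # placeholder endpoint
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt            # CA certificate supplied with the config bundle
auth-user-pass       # prompt for username/password
verb 3
```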
The current plan is to phase out the old endpoints and configurations around the end of March (specifically March 31st), when we’ll recover the IP address space for reuse elsewhere. Old client configurations won’t work with the new endpoints without some adjustment. Look out for further announcements and reminders nearer the time.
Please provide feedback through the support form in the usual way.
When the Forum meeting rooms were set up, we configured the wired network in them to use the “office self-managed DHCP” subnet. This seemed appropriate at the time, as most users were expected to be Forum occupants. The ground-floor meeting areas, in contrast, were set up to use a separate “conferences” subnet, outside the Informatics perimeter filters for better isolation of any problems.
We now propose to reconfigure the meeting rooms to match the ground floor, as the rooms are now used more by non-Informatics people. This will also give a more consistent experience, and may make the auditors slightly less unhappy. The Appleton Tower meeting rooms were changed in this way some time ago.
Wireless users will be completely unaffected by this. The only visible effect will be that wired users in the meeting rooms will have to use OpenVPN or one of the login servers to access non-public Informatics resources (so matching what wireless users already do).
It’s planned to make the change first thing on Thursday of next week (22nd), before any of the scheduled meetings start.