Moving services to SL7

Although we don’t always blog about it, the MPU has been busy lately. One project which has been taking a great deal of our time is the SL7 server upgrade project, the effort to move all of the various services we run from the now-outdated version 6 of Scientific Linux to version 7. The MPU, one of the School’s five computing units, has its fair share of services to port to SL7, and here’s a summary of the work we did for this project during the last third of 2016 (that is, September to December):

We upgraded these MPU services to SL7:

  • The virtual machine hosting service. This covers eight servers on three sites, hosting some 180 guest VMs, most of which were kept running seamlessly through the upgrades.
  • The PXE service and the package cache service. These share two servers in the Forum and the Tower.
  • The PackageForge package-building service, covering two build servers and a master server. The build servers’ performance improved when we moved them to VMs. Before the master server could be upgraded, the PackageForge software itself needed enhancement: an upgrade to PostgreSQL 9.6; a move of the package data from YAML files on the filesystem to JSON in the database, which opens the way for a future version to present build results far better in the user interface; and various code updates which made the web interface noticeably more responsive.
  • The export packages server. This was moved to a new VM.
  • The LCFG slave servers – the two main slaves, one test slave, one DIY DICE slave and two inf-level release-testing slaves, an increase of one (we now monitor the inf level on both SL6 and SL7). The two main slave servers were substantially sped up by increasing their memory to 8GB, so that all LCFG profile information can be held in memory at once.
  • The site mirrors packages server, where we keep our own copies of various software repositories covering Scientific Linux, EPEL, PostgreSQL and others.
  • The LCFG website and the LCFG wiki. We installed and configured a substantially updated version of the TWiki software.
  • BuzzSaw and LogCabin (which organise and serve the login logs) were moved to the new SL7 loghost. This work included updating Django packages and building some dependencies.
  • The LCFG disaster relief server, which will take over our configuration infrastructure should some calamity befall the Forum. This server hosts a complex mix of services, so sorting out its Apache config for SL7 helped to prepare the way for the LCFG master upgrade to come.
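To give a flavour of the PackageForge data change mentioned above, here is a rough sketch of what moving a build record from YAML files into JSON in the database looks like. The field names and table layout here are invented for illustration – they are not the real PackageForge schema – and the actual code would parse the old files with a YAML library and insert via a PostgreSQL driver such as psycopg2:

```python
import json

# Hypothetical build record, as a YAML parser (e.g. PyYAML's safe_load)
# might produce it from one of the old on-disk files. These field names
# are illustrative only, not PackageForge's real schema.
record = {
    "package": "lcfg-example",
    "version": "1.2.3",
    "platform": "sl7",
    "status": "built",
}

def to_json_record(record):
    """Serialise a parsed build record to JSON, ready for a jsonb column."""
    return json.dumps(record, sort_keys=True)

doc = to_json_record(record)
print(doc)

# With psycopg2 the insert would look roughly like (not executed here):
#   cur.execute("INSERT INTO builds (data) VALUES (%s::jsonb)", (doc,))
# Once the data is in a jsonb column, PostgreSQL 9.6 can index and query
# individual fields directly, e.g.:
#   SELECT data->>'status' FROM builds WHERE data->>'package' = 'lcfg-example';
```

The practical gain is that build results become queryable in SQL rather than needing the filesystem to be walked and each YAML file parsed, which is part of what makes a richer results view in the web interface feasible.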

In addition, substantial work was done towards the upgrade of these services:

  • The computing help service.
  • The LCFG bug tracking service.
  • The LCFG master:
    • Replacement of Apache mod_krb5 with mod_gssapi;
    • Porting of mod_user_rewrite to the new LCFG build tools;
    • Reworking of the rfe packaging to produce a separate rfe-server sub-package and to introduce systemd support;
    • A complete rewrite of the rfe component in Perl with Template Toolkit;
    • Migration of the web view of the LCFG repositories from the outdated websvn to the more capable viewvc, with a new LCFG component to manage its configuration;
    • Updating of all components’ defaults packages to current SL7 versions.
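As a rough illustration of the Kerberos module change above: a location protected with the old module’s directives gets reworked along these lines under mod_auth_gssapi. The path, realm and keytab location here are placeholders, not our actual configuration:

```apache
# Before (mod_auth_kerb style directives):
#   AuthType Kerberos
#   KrbAuthRealms EXAMPLE.ORG
#   Krb5KeyTab /etc/httpd/httpd.keytab
#   KrbMethodNegotiate on

# After (mod_auth_gssapi style directives):
<Location "/protected">
    AuthType GSSAPI
    AuthName "Kerberos login"
    GssapiCredStore keytab:/etc/httpd/httpd.keytab
    Require valid-user
</Location>
```

The newer module speaks GSSAPI generally rather than Kerberos specifically, so the directives change name and shape even though the protected behaviour is much the same.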

Work on this project has continued into 2017, but more of that in a future post.

Published by

Chris Cooke

Chris Cooke is a Computing Officer in the School of Informatics at the University of Edinburgh. He works in the Managed Platforms Unit and rides a very large bicycle.
