Remote Desktop Project

February 21, 2018

This week I’ve been working on the configuration of an XRDP server for Informatics staff. This will be publicised as a prototype service; the plan is to hold off replacing the NX service until Semester 2 is completed at the end of May, which avoids the potential for any disruption to teaching. The prototype service will be installed on some spare hardware which has 2 x 2.6GHz CPUs, 36GB RAM and 146GB of disk space. That’s not huge but it should be sufficient for multiple users to be logged in simultaneously.

As the staff service is likely to only ever be based on a single server I’ve decided to simplify the config by dropping the haproxy frontend, which will now only be used on the multi-host general service. To protect against DoS attacks iptables will be used to do rate-limiting. If I can work out how to get the xrdp software to log the IP address for failed logins I will also investigate using fail2ban to add firewall rules. Most of the user documentation on computing.help is now ready; I just need to add some instructions and screenshots for the Remmina client on Linux.
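
As a rough sketch of the rate-limiting approach, something along these lines should do the job (assuming the standard RDP port 3389 and the iptables recent module; the exact rules and thresholds are still to be decided):

# Track new connections to the RDP port per source address
iptables -A INPUT -p tcp --dport 3389 -m state --state NEW \
         -m recent --set --name RDP
# Drop any source making more than 4 new connections within 60 seconds
iptables -A INPUT -p tcp --dport 3389 -m state --state NEW \
         -m recent --update --seconds 60 --hitcount 5 --name RDP -j DROP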


User management improvements

November 23, 2017

Management of local users and groups (i.e. those in /etc/passwd and /etc/group) is done using the LCFG auth component. One feature that has always been lacking is the ability to create a home directory where necessary and populate it from a skeleton directory (typically this is /etc/skel). With this feature missing it has been necessary to add a whole bunch of additional file component resources just to create the home directory, and even that does not provide support for a skeleton directory.

Recently I needed something along those lines so I’ve taken the chance to add a couple of new resources – create_home_$ and skel_dir_$. When the create_home resource is set to true for a user the home directory will be created by the component and the permissions set appropriately. By default the directory will be populated from /etc/skel but the skel_dir resource can be used to select any other source directory. This means it is now possible to set up a machine with a set of identically initialised local users.

For example:

auth.pw_name_cephadmin           cephadmin
auth.pw_uid_cephadmin            755
auth.pw_gid_cephadmin            755
auth.pw_gecos_cephadmin          Ceph Admin User
auth.pw_dir_cephadmin            /var/lib/cephadmin
auth.pw_shell_cephadmin          /bin/bash
auth.create_home_cephadmin       yes /* Ensure home directory exists */

auth.gr_name_cephadmin           cephadmin
auth.gr_gid_cephadmin            755
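
If the skeleton files should come from somewhere other than /etc/skel, the skel_dir resource can be set as well (the directory used here is purely an invented example):

auth.skel_dir_cephadmin          /etc/skel-ceph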

LCFG Core: resource types

November 21, 2017

The recent round of LCFG client testing using real LCFG profiles from both Informatics and the wider community has shown that the code is now in very good shape and we’re close to being able to deploy to a larger group of machines. One issue that this testing has uncovered is related to how the type of a resource is specified in a schema. A type in the LCFG world really just controls what regular expression is used to validate the resource value. Various type annotations can be used (e.g. %integer, %boolean or %string) to limit the permitted values; if there is no annotation the resource is assumed to be a tag list, and this has clearly caught out a few component authors. For example:

@foo %integer
foo

@bar %boolean
bar

@baz
baz

@quux sub1_$ sub2_$
quux
sub1_$
sub2_$

Both of the last two examples (baz and quux) are tag lists; the first just does not have any associated sub-resources.

The compiler should not allow anything but valid tag names (which match /^[a-zA-Z0-9_]+$/) in a tag list resource, but due to some inadequacies it currently permits pretty much anything. The new core code is a lot stricter and thus the v4 client will refuse to accept a profile if it contains invalid tag lists. Bugs have been filed against a few components (bug#1016 and bug#1017). It’s very satisfying to see the new code helping us improve the quality of our configurations.
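
To illustrate with some made-up values: the first of the following would be accepted as a valid tag list for baz, whereas the second would previously have slipped through but will now be rejected since ‘,’ and ‘-’ do not match the permitted tag name pattern:

baz   node1 node2 node3
baz   node1,node2 node-3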


yum cache and disk space

November 15, 2017

At a recent LCFG Deployers meeting we discussed a problem with yum not fully cleaning the cache directory even when the yum clean all command is used. This turns out to be related to how the cache directory path is defined in /etc/yum.conf as /var/cache/yum/$basearch/$releasever. As the release version changes with each minor platform release (e.g. 7.3, 7.4) the old directories can become abandoned. At first this might seem like a trivial problem but these cache directories can be huge; we have seen instances where gigabytes of disk space have been used and cannot be simply reclaimed. To help fix this problem I’ve added a new purgecache method to the LCFG yum component. This takes a sledgehammer approach of just deleting everything in the /var/cache/yum/ directory. It can be run manually whenever required (see below) or called regularly using something like cron. In Informatics it is now configured to run weekly on a Sunday like this:

!cron.objects             mADD(yum_purge)
cron.object_yum_purge     yum
cron.method_yum_purge     purgecache
cron.run_yum_purge        AUTOMINS AUTOHOUR * * sun
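
For a one-off manual clean the method can be called directly, for example using the standard LCFG om command:

om yum purgecache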

LCFG autoreboot

November 10, 2017

One of the tools which saves us an enormous amount of effort is our LCFG autoreboot component. This watches for reboot requests from other LCFG components and then schedules the reboot for the required date/time.

One nice feature is that it can automatically choose a reboot time from within a specified range. This means that when many similarly configured machines schedule a reboot they don’t all go at the same time, which could result in the overloading of services that are accessed at boot time. Recently it was reported that the component has problems parsing single-digit times, which results in the reboot not being scheduled. Amazingly this bug had lain undetected for approximately 4 years, during which time a significant chunk of machines have presumably been failing to reboot on time. As well as resolving that bug I also took the chance to fix a minor issue related to a misunderstanding of the shutdown command options which resulted in the default delay time being set to 3600 minutes instead of 3600 seconds. Thankfully we change that delay locally so it never had any direct impact on our machines.

Whilst fixing those two bugs I discovered another issue related to sending reboot notifications via email: if that failed for any reason the reboot would not be scheduled. The component will now report the error but continue. This is a common problem we see in LCFG components, where problems are handled with the Fail method (which logs and then exits) instead of just logging with Error. It is particularly a problem since an exit with a non-zero code is not the same as dying, which can be caught with the eval function. Since a call to Fail ends the current process immediately this can lead to a particularly annoying situation where a failure in a Configure method results in a failure in the Start method. This means that a component might never reach the started state, a situation from which it is difficult to recover. We are slowly working our way through eradicating this issue from core components but it’s going to take a while.
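
In component code the difference looks something like this (a sketch only; the message text is invented):

# Problematic: Fail logs the error and then exits immediately, so the
# reboot never gets scheduled and the component may never reach the
# started state.
$self->Fail("could not send notification email: $@");

# Preferred for non-fatal problems: Error logs the problem but lets the
# method carry on and schedule the reboot anyway.
$self->Error("could not send notification email: $@");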

Recently we have had feedback from some of our users that the reboot notification message was not especially informative. The issue is related to us incorporating the message into the message of the day, which sometimes leads to it being left lying around, out-of-date, for some time. The message would typically say something like “A reboot has been scheduled for 2am on Thursday”, which is fine as long as the message goes away once the reboot has been completed. To resolve this I took advantage of a feature I added some years ago which passes the reboot time into the message template as a Perl DateTime object (named shutdown_dt). With a little bit of thought I came up with the following, which uses the Template Toolkit Date plugin:


[%- USE date -%]
[%- USE wrap -%]
[%- FILTER head = wrap(70, '*** ', '*** ') -%]
This machine ([% host.VALUE %]) requires a reboot as important updates are available.
[%- END %]

[% IF enforcing.VALUE -%]
[%- FILTER body = wrap(70, ' ', ' ') -%]
It will be unavailable for approximately 15 minutes beginning at
[% date.format( time = shutdown_dt.VALUE.epoch,
format = '%H:%M %A %e %B %Y',
locale = 'en_GB') %].
Connected users will be warned [% shutdown_delay.VALUE %] minutes beforehand.
[%- END %]

[% END -%]

This also uses the wrap plugin to ensure that the lines are neatly arranged and the header section has a “*** ” prefix for each line to help grab the attention of the users.


LCFG Core: Resource import and export

November 7, 2017

As part of porting the LCFG client to the new core libraries the qxprof and sxprof utilities have been updated. This has led to the development of a new high-level LCFG::Client::Resources Perl library which can be used to import, merge and export resources in all the various required forms. The intention is that eventually all code which uses the LCFG::Resources Perl library (in particular the LCFG::Component framework) will be updated to use this new library. The new library provides a very similar set of functionality and will appear familiar but I’ve taken the opportunity to improve some of the more awkward parts. Here’s a simple example taken from the perldoc:

# Load client resources from DB
my $res1 = LCFG::Client::Resources::LoadProfile("mynode","client");

# Import client resources from environment variables
my $res2 = LCFG::Client::Resources::Import("client");

# Merge two sets of resources
my $res3 = LCFG::Client::Resources::Merge( $res1, $res2 );

# Save the result as a status file
LCFG::Client::Resources::SaveState( "client", $res3 );

The library can import resources from: Berkeley DB, status files, override files, shell environment and explicit resource specification strings. It can export resources as status files, in a form that can be evaluated in the shell environment and also in various terse and verbose forms (e.g. the output styles for qxprof).

The LCFG::Resources library provides access to resources via a reference to a hash which is structured something like:

{
   'sysinfo' => {
                 'os_id_full' => {
                                  'DERIVE' => '/var/lcfg/conf/server/releases/develop/core/include/lcfg/defaults/sysinfo.h:42',
                                  'VALUE' => 'sl74',
                                  'TYPE' => undef,
                                  'CONTEXT' => undef
                                 },
                 'path_lcfgconf' => {
                                  'DERIVE' => '/var/lcfg/conf/server/releases/develop/core/include/lcfg/defaults/sysinfo.h:100',
                                  'VALUE' => '/var/lcfg/conf',
                                  'TYPE' => undef,
                                  'CONTEXT' => undef
                                 },
                }
}

The top level key is the component name, the second level is the resource name and the third level is the name of the resource attribute (e.g. VALUE or TYPE).
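
Fetching a value from this structure is thus just nested hash dereferencing; for example, given such a hash reference in $resources (an invented variable name):

my $value = $resources->{sysinfo}{os_id_full}{VALUE};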

The new LCFG::Client::Resources library takes a similar approach with the top level key being the component name but the value for that key is a reference to an LCFG::Profile::Component object. Resource objects can then be accessed by using the find_resource method which returns a reference to an LCFG::Resource object. For example:

my $res = LCFG::Client::Resources::LoadProfile("mynode","sysinfo");

# The top-level key is still the component name but the value is now an
# LCFG::Profile::Component object
my $sysinfo = $res->{sysinfo};

# find_resource returns a reference to an LCFG::Resource object
my $os_id_full = $sysinfo->find_resource('os_id_full');

say $os_id_full->value;

Users of the qxprof and sxprof utilities should not notice any differences but hopefully the changes will be appreciated by those developing new code.


Testing the new LCFG core : Part 2

May 18, 2017

Following on from the basic tests for the new XML parser the next step is to check if the new core libs can be used to correctly store the profile state into a Berkeley DB file. This process is particularly interesting because it involves evaluating any context information and selecting the correct resource values based on the contexts. Effectively the XML profile represents all possible configuration states whereas only a single state is stored in the DB.

The aim was to compare the contents of the old and new DBs for each Informatics LCFG profile. Firstly I used rdxprof to generate DB files using the current libs:

cd /disk/scratch/profiles/inf.ed.ac.uk/
for i in $(find -maxdepth 1 -type d -printf '%f\n' | grep -v '^\.'); do
  echo $i
  /usr/sbin/rdxprof -v -u file:///disk/scratch/profiles/ $i
done

This creates a DB file for each profile in the /var/lcfg/conf/profile/dbm directory. For 1500-ish profiles this takes a long time…

The next step is to do the same with the new libs:

find /disk/scratch/profiles/ -name '*.xml' | xargs \
perl -MLCFG::Profile -wE \
'for (@ARGV) { eval { $p = LCFG::Profile->new_from_xml($_);
                      $n = $p->nodename;
                      $p->to_bdb( "/disk/scratch/results/dbm/$n.DB2.db" ) };
               print $@ if $@ }'

This creates a DB file for each profile in the /disk/scratch/results/dbm directory. This is much faster than using rdxprof.

The final step was to compare each pair of DBs. This was done simply by using the Perl DB_File module to tie each DB to a hash and then comparing the keys and values. Pleasingly this has shown that the new code is generating identical DBs for all the Informatics profiles.
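
The comparison itself only needs a few lines of Perl; a minimal sketch (with invented file names) looks something like this:

use strict;
use warnings;
use Fcntl;
use DB_File;

# Tie the DBs generated by the old and new code to hashes
tie my %old, 'DB_File', 'foo.DB.db',  O_RDONLY, 0644, $DB_HASH
    or die "Cannot open old DB: $!";
tie my %new, 'DB_File', 'foo.DB2.db', O_RDONLY, 0644, $DB_HASH
    or die "Cannot open new DB: $!";

# Report keys which are missing from either DB or which have
# differing values
for my $key (keys %old) {
    if ( !exists $new{$key} ) {
        print "missing key: $key\n";
    } elsif ( $old{$key} ne $new{$key} ) {
        print "differing value for key: $key\n";
    }
}
for my $key (keys %new) {
    print "extra key: $key\n" if !exists $old{$key};
}

untie %old;
untie %new;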

Now I need to hack this together into a test script which other sites can use to similarly verify the code on their sets of profiles.


Testing the new LCFG core : Part 1

May 17, 2017

The project to rework the core LCFG code is rattling along and has reached the point where some full scale testing is needed. The first step is to check whether the new XML parser can actually just parse all of our LCFG profiles. At this stage I’m not interested in whether it can do anything useful with the data once loaded; I just want to see how it handles a large number of different profiles.

Firstly a source of XML profiles is needed; I grabbed a complete local copy from our lcfg server:


rsync -av -e ssh lcfg:/var/lcfg/conf/server/web/profiles/ /disk/scratch/profiles/

I then ran the XML parser on every profile I could find:


find /disk/scratch/profiles/ -name '*.xml' | xargs \
perl -MLCFG::Profile -wE \
'for (@ARGV) { eval { LCFG::Profile->new_from_xml($_) }; print $@ if $@ }'

Initially I hit upon bug#971 which is a genuine bug in the schema for the gridengine component. As noted previously, this was found because the new libraries are much stricter about what is considered to be valid data. With that bug resolved I can now parse all 1525 LCFG XML profiles for Informatics.


LCFG Core Project

May 2, 2017

Over the last few years I have been working (on and off) on creating a new set of “core” libraries for LCFG. This is now finally edging towards the point of completion with most of the remaining work being related to polishing, testing and documentation.

This project originated from the need to remove dependencies on obsolete Perl XML libraries. The other main aims were to create a new OO API for resources/components and packages which would provide new opportunities for code reuse between client, ngeneric and server.

Over time several other aims have been added:

  • Simplify platform upgrades.
  • Platform independence / portability.
  • Make it possible to support new languages.
  • Ensure resource usage remains low.

Originally this was to be a rewrite just in Perl but the heavy resource usage of early prototypes showed it was necessary to move at least some of the functionality into C libraries. Since that point the chance to enhance portability was also identified and included in the aims for the project. As well as making it possible to target other platforms (other Linux or Unix, e.g. MacOSX), the enhanced portability should make it much simpler and quicker to port to new Redhat based platforms.

The intention is that the new core libraries will be totally platform-independent and portable; for example, there should be no hardwired paths or assumptions that the platform is Redhat/RPM (or even Linux) based. The new core is split into two parts: C and Perl libraries, with the aim that as much functionality as possible is in the C libraries to aid reuse from other languages (e.g. Python).

The aim is that these libraries should be able to co-exist alongside current libraries to ease the transition.

I have spent a lot of time on documenting the entire C API. The documentation is formatted into html and pdf using doxygen; I had not used this tool before but I am very pleased with the results and will definitely be using it more in the future. Although a slow task, documenting the functions has proved to be a very useful review process. It has helped me find many inconsistencies between functions with similar purposes and has led to numerous small improvements.
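
For anyone unfamiliar with doxygen, the comment style looks something like this (an invented function, purely for illustration, not the actual API):

/**
 * @brief Retrieve the value of a resource as a string.
 *
 * @param[in] res Pointer to the resource structure.
 *
 * @return Pointer to the value string, or NULL if the resource has no value.
 */
const char * lcfgresource_get_value( const LCFGResource * res );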

LCFG Client

The client has been reworked to use the new core libraries. This is where the platform-specific knowledge of paths, package manager, etc. is held.

Resource Support

Format        Read   Write
XML           YES    NO
DB            YES    YES
Status        YES    YES
Environment   YES    YES

There is currently no support for reading header files or source profiles but this could be added later.

There is new support for finding the “diffs” between resources, components and profiles.

Package Support

Format    Read   Write
XML       YES    YES
rpmcfg    YES    YES
rpmlist   YES    YES

There is currently no support for reading package list files but this could be added later.

Remaining Work

There is still work to be done on the top-level profile handling code, and the code for finding the differences between resources, components and profiles needs reworking. Also the libraries for reading/writing XML files and Berkeley DBs need documentation.

That is all the remaining work required on the “core” libraries. After that there will be some work to do on finishing the port of the client to the new libraries. I’ve had that working before but the function APIs have changed; I don’t expect it to require a huge amount of work.


PostgreSQL 9.6

September 29, 2016

I’m currently working on upgrading both the PkgForge build farm and the BuzzSaw log file processor services to SL7.2. Both of these services use PostgreSQL databases and have been stuck on 9.2 for a while pending the server upgrades. The latest version of PostgreSQL (9.6) is due to be released today so I thought I would give the release candidate a whirl to see how I get on. There are numerous benefits over 9.2; in particular I am planning to use the new jsonb column type to store PkgForge build information which was previously serialised to a YAML file, as being able to query that data directly from the DB should be very useful.

The feature I am most interested in trying from 9.6 is parallel execution of sequential scans, joins and aggregates. This has the potential to make some of the large queries for the BuzzSaw DB much faster. My very simplistic first tests suggest that setting the max_parallel_workers_per_gather option to 4 will reduce the query time by at least 50%. It will need a bit more investigation and analysis to check it really is helpful but that’s an encouraging result.
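
For reference, the setting I have been experimenting with is just a single line in postgresql.conf (the value of 4 is simply what I tried, not a recommendation):

# postgresql.conf: allow up to 4 parallel workers per query
max_parallel_workers_per_gather = 4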

A 2ndQuadrant blog post has some useful information on the new parallel sequential scan feature.