Testing the new LCFG core : Part 2

May 18, 2017

Following on from the basic tests for the new XML parser, the next step is to check whether the new core libs can be used to correctly store the profile state into a Berkeley DB file. This process is particularly interesting because it involves evaluating any context information and selecting the correct resource values based on those contexts. Effectively the XML profile represents all possible configuration states whereas only a single state is stored in the DB.

The aim was to compare the contents of the old and new DBs for each Informatics LCFG profile. Firstly, I used rdxprof to generate DB files using the current libs:

cd /disk/scratch/profiles/inf.ed.ac.uk/
# One profile directory per host: run rdxprof over each in turn
for i in $(find -maxdepth 1 -type d -printf '%f\n' | grep -v '^\.'); do
  echo $i
  /usr/sbin/rdxprof -v -u file:///disk/scratch/profiles/ $i
done

This creates a DB file for each profile in the /var/lcfg/conf/profile/dbm directory. For 1500-ish profiles this takes a long time…

The next step is to do the same with the new libs:

find /disk/scratch/profiles/ -name '*.xml' | xargs \
perl -MLCFG::Profile -wE \
'for (@ARGV) {
    eval {
        my $p = LCFG::Profile->new_from_xml($_);
        my $n = $p->nodename;
        $p->to_bdb("/disk/scratch/results/dbm/$n.DB2.db");
    };
    print $@ if $@;
}'

This creates a DB file for each profile in the /disk/scratch/results/dbm directory. This is much faster than using rdxprof.

The final step was to compare each DB. This was done simply by using the Perl DB_File module to tie each DB to a hash and then comparing the keys and values. Pleasingly, this has shown that the new code is generating identical DBs for all the Informatics profiles.
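
For anyone wanting to reproduce the comparison, it can be done along the following lines. This is a minimal sketch, assuming the default hash format for both DBs; the two file arguments (the rdxprof output and the LCFG::Profile output) are purely illustrative:

#!/usr/bin/perl
use strict;
use warnings;
use DB_File;
use Fcntl;

# The DB generated with the current libs and the one generated
# with the new libs, passed on the command line
my ( $old_file, $new_file ) = @ARGV;

tie my %old, 'DB_File', $old_file, O_RDONLY, 0644, $DB_HASH
    or die "Cannot open $old_file: $!";
tie my %new, 'DB_File', $new_file, O_RDONLY, 0644, $DB_HASH
    or die "Cannot open $new_file: $!";

# Report any keys which only appear in one of the DBs
for my $key ( keys %old ) {
    print "only in old: $key\n" unless exists $new{$key};
}
for my $key ( keys %new ) {
    print "only in new: $key\n" unless exists $old{$key};
}

# Report any values which differ for the shared keys
for my $key ( grep { exists $new{$_} } keys %old ) {
    print "differs: $key\n" if $old{$key} ne $new{$key};
}

untie %old;
untie %new;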

Now I need to hack this together into a test script which other sites can use to similarly verify the code on their sets of profiles.


Testing the new LCFG core : Part 1

May 17, 2017

The project to rework the core LCFG code is rattling along and has reached the point where some full-scale testing is needed. The first step is to check whether the new XML parser can actually just parse all of our LCFG profiles. At this stage I’m not interested in whether it can do anything useful with the data once loaded; I just want to see how it handles a large number of different profiles.

Firstly, a source of XML profiles is needed; I grabbed a complete local copy from our lcfg server:

rsync -av -e ssh lcfg:/var/lcfg/conf/server/web/profiles/ /disk/scratch/profiles/

I then ran the XML parser on every profile I could find:

find /disk/scratch/profiles/ -name '*.xml' | xargs \
perl -MLCFG::Profile -wE \
'for (@ARGV) { eval { LCFG::Profile->new_from_xml($_) }; print $@ if $@ }'

Initially I hit upon bug#971, which is a genuine bug in the schema for the gridengine component. As noted previously, this was found because the new libraries are much stricter about what is considered to be valid data. With that bug resolved I can now parse all 1525 LCFG XML profiles for Informatics.


LCFG Core Project

May 2, 2017

Over the last few years I have been working on (and off) creating a new set of “core” libraries for LCFG. This is now finally edging towards the point of completion with most of the remaining work being related to polishing, testing and documentation.

This project originated from the need to remove dependencies on obsolete Perl XML libraries. The other main aim was to create a new OO API for resources/components and packages which would provide new opportunities for code reuse between the client, ngeneric and the server.

Over time several other aims have been added:

  • Simplify platform upgrades.
  • Platform independence / portability.
  • Make it possible to support new languages.
  • Ensure resource usage remains low.

Originally this was to be a rewrite just in Perl, but the heavy resource usage of early prototypes showed it was necessary to move at least some of the functionality into C libraries. Since that point the chance to enhance portability has also been identified and included in the aims for the project. As well as making it possible to target other platforms (other Linuxes or Unixes, e.g. MacOSX), the enhanced portability should make it much simpler and quicker to port to new Redhat-based platforms.

The intention is that the new core libraries will be totally platform-independent and portable; for example, there are no hardwired paths and no assumptions that the platform is Redhat/RPM (or even Linux) based. The new core is split into two parts: C and Perl libraries, with the aim that as much functionality as possible is in the C libraries to aid reuse from other languages (e.g. Python).

The aim is that these libraries should be able to co-exist alongside current libraries to ease the transition.

I have spent a lot of time documenting the entire C API. The documentation is formatted into HTML and PDF using doxygen. I had not used this tool before but I am very pleased with the results and will definitely be using it more in the future. Although a slow task, documenting the functions has proved to be a very useful review process. It has helped me find many inconsistencies between functions with similar purposes and has led to numerous small improvements.
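
For anyone who has not tried doxygen, the workflow needs very little setup. A minimal sketch, assuming the default configuration (which enables both the HTML and LaTeX output) with PROJECT_NAME and INPUT edited to point at the source tree:

doxygen -g Doxyfile   # generate a commented template configuration
doxygen Doxyfile      # writes the html/ and latex/ output trees
make -C latex         # builds the PDF (refman.pdf) from the LaTeX tree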

LCFG Client

The client has been reworked to use the new core libraries. This is where the platform-specific knowledge of paths, package manager, etc. is held.

Resource Support

Format       Read   Write
------       ----   -----
XML          YES    NO
DB           YES    YES
Status       YES    YES
Environment  YES    YES
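
As a concrete illustration of the matrix above, the XML read path and the DB write path are the two exercised in the testing posts (a minimal sketch using only those documented calls; the file names are purely illustrative):

use LCFG::Profile;

# XML is read-only in the new libs
my $profile = LCFG::Profile->new_from_xml('example.lcfg.org.xml');

# DB supports both read and write; this is the write path
$profile->to_bdb('example.lcfg.org.DB2.db');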

There is currently no support for reading header files or source profiles but this could be added later.

There is new support for finding the “diffs” between resources, components and profiles.

Package Support

Format    Read   Write
------    ----   -----
XML       YES    YES
rpmcfg    YES    YES
rpmlist   YES    YES

There is currently no support for reading package list files but this could be added later.

Remaining Work

There is still work to be done on the top-level profile handling code, and the code for finding the differences between resources, components and profiles needs reworking. The libraries for reading/writing XML files and Berkeley DB also need documentation.

That is all the remaining work required on the “core” libraries. After that there will be some work to do on finishing the port of the client to the new libraries. I’ve had that working before, but the function APIs have changed since then; I don’t expect it to require a huge amount of work.