Option Handling and Moose

May 28, 2008

Further to my previous post on option handling in Perl, there is a rather useful Moose extension named MooseX::Getopt which allows an object to be built directly from a set of command-line options. The class attributes are translated directly into an option profile for Getopt::Long. Here’s a simple example class:

package LCFG::Foo;
use Moose;

with 'MooseX::Getopt';

has 'logname' => ( is => 'rw', isa => 'Str' );

has 'quiet' => ( is => 'rw', isa => 'Bool', default => 0 );

no Moose;
1;

In the script you can then just do:

#!/usr/bin/perl
use strict;
use warnings;

use LCFG::Foo;

my $foo = LCFG::Foo->new_with_options();

if ( !$foo->quiet ) {
   print $foo->logname . "\n";
}

The script can then be called like:

my_script.pl --logname Changes --quiet
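For comparison, here is a rough hand-written equivalent of the option profile that the class above implies. This is my own sketch of the mapping rather than the exact spec MooseX::Getopt generates: a Str attribute becomes a string-valued option and a Bool attribute becomes a simple flag.

use Getopt::Long;

my $logname;
my $quiet = 0;

GetOptions(
    'logname=s' => \$logname,   # Str attribute maps to an option taking a string value
    'quiet'     => \$quiet,     # Bool attribute maps to a simple on/off flag
);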

Option Handling in Perl

May 28, 2008

I’ve often used the Perl module Getopt::Long to handle options for scripts. This time I wanted something a bit more complex: something akin to the command-line interface of cvs. Basically, there are options which apply globally, then a command, and then command-specific options, e.g.

lcfg-reltool --quiet --dir dice/lcfg-foo release --checkcommitted --genchangelog

Normally Getopt::Long expects to handle the whole argument list and will complain when it sees unknown options. I discovered that it is possible to configure it instead to pass through any unknown options.

use Getopt::Long ();
Getopt::Long::Configure('pass_through');

my $dir = '.';
my $quiet = 0;
Getopt::Long::GetOptions( 'dir=s' => \$dir,
                          'quiet' => \$quiet );

Getopt::Long::Configure('no_pass_through'); # revert to normal behaviour

All matched options will be removed from the @ARGV list and it is then possible to call Getopt::Long for a second time with a command-specific option profile.
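For instance, the second pass might look something like this (the variable names here are illustrative, not the actual lcfg-reltool code): shift the command word off @ARGV, then parse its own options with the normal strict behaviour.

my $command = shift @ARGV
    or die "No command specified\n";

if ( $command eq 'release' ) {
    my $checkcommitted = 0;
    my $genchangelog   = 0;

    Getopt::Long::GetOptions(
        'checkcommitted' => \$checkcommitted,
        'genchangelog'   => \$genchangelog,
    ) or die "Bad options for the release command\n";

    # ... carry out the release using $dir, $quiet and the flags above
}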


Buildtools

May 9, 2008

Recently I’ve been putting some effort into getting the new buildtools working. I am now beginning to get a good understanding of how and why the previous version became rather messy. If the aim is to make life as simple as possible for software authors and packagers, then the further that aim is pushed the worse the code becomes. A big problem is that the way the system currently works is fundamentally incorrect, but we’ve become very accustomed to the working practices it encourages. It seems to me inevitable that for any new system to work correctly we must give up some of the “benefits” of the current one.

The biggest problem is well demonstrated by the way we manage the RPM specfile. The specfile is generated from an input template file (which contains @FOO@ macros). When we build a package using buildtools (e.g. with make rpm) these macros are evaluated at build time, so you can get a different specfile for every platform. This means that building a package on one Red Hat platform using buildtools and then using the generated SRPM to build on other platforms is a flawed approach. It causes particular problems when moving from one architecture to another because of library paths (e.g. PAM modules are stored in /lib64/security/ on 64-bit machines but in /lib/security/ on 32-bit). The specfile should have constant contents and drive the build process; it should not be a by-product of the build system.
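To make this concrete (the file and macro names below are hypothetical, not taken from a real LCFG package): a template line such as

@PAMDIR@/pam_example.so

gets expanded at build time to /lib64/security/pam_example.so or /lib/security/pam_example.so depending on the machine on which the specfile was generated, which is exactly what ties the resulting SRPM to that platform. A specfile with constant contents can instead leave that decision to rpmbuild on the target machine by using RPM’s own macros:

%files
/%{_lib}/security/pam_example.so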

So, given this, what process should we go through, starting with a code change and ending with newly built packages?

I reckon it works something like this:

  1. Edit code.
  2. Commit changes into revision-control system.
  3. Generate changelog entry.
  4. Tag new release.
  5. Export tagged release.
  6. Generate cmake files, within the exported release, which will control the build process. Note that no macro-substitution or compilation is done at this stage.
  7. Generate specfile with the only macros getting filled-in being “static” (e.g. name, version, changelog).
  8. Generate source tarball.
  9. Generate SRPM.
  10. For each target platform build RPM using SRPM. At this stage cmake is actually used to do the macro-substitution, code compilation and file installation.

These details are intentionally quite high-level; throughout, you can replace SRPM/RPM with the packaging system for your favourite platform. To begin with, this is the process we already know; it is only from step 7 onwards that it begins to diverge. I’ve already explained why completely general macro expansion in the specfile should not be allowed; building from the SRPM is also essential. I believe it is very important that we build our packages in the same way that external users would, as anything else is just not a good test of our packages.

The problem I’m banging my head against is that traditionally we have generated specfiles in the same way as we have filled in macros in the component code, schema and documentation. I don’t want to rule that out completely, but I want to know at what level things should and shouldn’t be substituted. It is really useful to have an @VERSION@ macro in the specfile so that I don’t have to remember to edit the version by hand each time. But is doing macro-substitution on file and directory names at release time a good idea? My gut reaction is that it is not, but what’s the alternative?

This is going to require something of a shift in working practices for Informatics COs, so I’m really interested in thoughts and comments here.


Simon’s wedding

May 5, 2008

Laragh and I were at Simon’s wedding on Sunday and we took a few photos.