LCFG::Build::Skeleton changes

December 8, 2014

At the LCFG Annual Review meeting held last week, one of the topics discussed was the idea of implementing all Perl-based LCFG components as modules, with the component script being just a very thin wrapper which loads the module and calls its dispatch method. This has been our recommended coding style for quite a while now, and we already use this approach for many of the core components.

During the discussion I realised that the lcfg-skeleton tool, which is used to create the outline directory structure for new projects, does not support this way of working. I have consequently updated LCFG-Build-Skeleton to make it really easy to create new Perl-based components which follow the recommended best practice. The new version, 0.4.1, creates a module file (e.g. lib/LCFG/Component/Foo.pm) and the necessary CMake file, and also tweaks the specfile appropriately. This will be in the stable release on Thursday 18th December, or you can grab it from CPAN now.
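For illustration, here is a rough sketch of the kind of thin wrapper script this produces, assuming a hypothetical component called foo; the exact constructor and dispatch calls may differ from what lcfg-skeleton actually generates:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # All the real logic lives in the module; this script just
    # loads it and hands over control.
    use LCFG::Component::Foo;

    my $component = LCFG::Component::Foo->new();
    exit $component->dispatch(@ARGV);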


Splitting lcfg-utils

March 4, 2009

Recently I have been working on converting the MPU-managed LCFG components to being built with the new build tools. I'm down to the very last few now. One that I have been avoiding for ages is lcfg-utils, since the current package build scripts are a bit crazy; it is all in a bit of a tangled mess. Yesterday I finally bit the bullet and started ripping it apart, with the aim of separating it into related units: one for the C code, one for the core Perl utilities and one for the Perl package utilities. Along the way I also had in mind enabling the building of a shared library for lcfgutils, plus a few other niceties.

I was pleased to find that the new build tools really did make the job much easier than I had expected. The two packages of Perl code, LCFG::Utils and LCFG::PkgUtils, use Module::Build and so could be uploaded to CPAN. The newly pared-down lcfg-utils package, which provides the shared library and a few basic utilities, uses CMake. There is also an lcfg-utils-devel package on Red Hat systems which holds the header file and static library for any build processes that require them.
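As a rough illustration of the Module::Build side of things, a Build.PL for one of these distributions might look something like the following sketch (the licence and prerequisites shown are assumptions, not the real metadata):

    use strict;
    use warnings;

    use Module::Build;

    # Minimal Build.PL sketch for a distribution such as LCFG::Utils.
    my $build = Module::Build->new(
        module_name => 'LCFG::Utils',
        license     => 'gpl',                  # assumption
        requires    => { 'perl' => '5.8.0' },  # assumption
    );

    $build->create_build_script;

Running perl Build.PL generates a Build script which then handles the build, test and install stages.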

I now have it all nicely organised and ready for testing. I believe it all works; it certainly appears to on my machine, but it will need further testing to check that I haven't introduced any nasty bugs. These are fairly important libraries and utilities, so a certain amount of cautious checking is required. If you want to give it a go, you can do so with the following lines added to an LCFG source profile:

!profile.packages   mEXTRA(+lcfg-utils-1.3.0-1 \
                           lcfg-utils-devel-1.3.0-1 \
                           perl-LCFG-PkgUtils-1.0.0-1 \
                           perl-LCFG-Utils-1.0.2-1)

If you are feeling really brave you can also try out a new version of updaterpms which uses the lcfgutils shared library; you just need:

!profile.packages  mEXTRA(+updaterpms-3.1.5-1)

Moving to the new build tools just got easier

October 8, 2008

I’ve been working on porting the MPU components to the new LCFG build tools. This is giving me a good idea of how successful the tools are at handling a wide variety of situations. So far everything seems to be going pretty well; I’ve certainly not hit any major stumbling blocks which would suggest the need for major changes.

As I’ve gone through, I’ve taken the chance to replace any deprecated LCFG macro names with their modern equivalents. Essentially this is the list of aliases in the table in the "Package Information" section of the "Substitution Variables" page in the build tools documentation. Mainly they are deprecated because they might clash with standard CMake variable names. I started off doing this search-and-replace process by hand, but I rapidly got rather bored with that approach. This is the sort of job at which Perl excels, so I’ve now come up with a tool to do it automatically.

The LCFG release tool checkmacros command now has a --fix_deprecated option to carry out this automatic replacement. It scans through all the files as usual, finding all the instances of macro usage, and produces the report. After that point it can use the results to modify any files containing deprecated macros.
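Assuming the release tool is invoked in the usual way (the command name here is an assumption), the usage would be something like:

    lcfg-reltool checkmacros --fix_deprecated

As with any automated rewriting, it is worth checking the resulting diff before committing anything.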


LCFG on CPAN

September 11, 2008

The plan for the new LCFG build tools has always been that they should be packaged as “standard” Perl modules. The idea is that this should help people who want to port LCFG to new platforms, or who just work on platforms other than those directly supported. A direct consequence is that the modules can be uploaded to the Comprehensive Perl Archive Network (CPAN), which vastly improves distribution of the software. Putting Perl modules onto CPAN means they can be easily converted into RPMs using cpanspec, into Debian packages using dh-make-perl, or just installed directly using the CPAN shell which is shipped as part of the standard Perl distribution. I’ve now uploaded most of the build tools packages; only LCFG::Build::Tools is still to go. Here are the links:


Eating your own dog food

September 5, 2008

I’m currently reading “Practices of an Agile Developer”, which is part of the “Pragmatic Programmers” series of books. It contains many great ideas and tips on how to be a better software developer and is an excellent follow-up to the original, and now almost iconic, “Pragmatic Programmer” book.

When I’ve finished I’ll put up a full review. For now I thought it worth noting one point which caught my attention: the idea of “eating your own dog food”. As you can see from the Wikipedia article, this is not a new idea.

This is something that I’ve been attempting to put into practice throughout the LCFG buildtools project. As soon as the tools I developed became capable of actually managing the releasing and packaging of software projects, I started using them to do just that with the new buildtools packages. At times I’ve wondered whether the extra effort would be worth it, and it has got me into some awkward positions when the tools at hand are broken and putting the fix into place is made more difficult by not being able to do straightforward releases. The big advantage is that by using the tools extensively I’ve developed a really good idea of what works and what just sucks. There is no better way to check whether the high-level design is correct than to use the system in its infancy. The build tools also have unit tests which will say whether an individual piece of code does the right thing according to some specification, but they will never be useful for judging whether a tool “feels right”. This has undoubtedly led to the new tools being much better equipped for building software packages for projects which are not LCFG components; that’s not to say that the packaging of LCFG components has been in any way ignored, of course. In a way this has also provided a form of continuous integration testing: I’ve typically needed the latest submitted version of the tools to be working in some form at all times to get onto the next stage.

I think it’s clear that taking an “agile” approach has been much better than the more traditional methodology of doing all the design up-front, coding up the design and only then doing full usability testing. Up-front planning is absolutely essential, but I think it needs to be kept at a high enough level that it is possible to see how the development can be split into chunks, each focussing on a separate piece of functionality. For example, I first worked on being able to use the tools to tag releases; once that was working I started using it and moved on to adding support for packaging. At the beginning of each chunk the work is planned in more depth and, if necessary, prototypes are developed to test ideas before full development commences.


Logging

July 15, 2008

I’ve been thinking a bit about logging in relation to the new LCFG build tools. I have previously come across the Log::Log4perl Perl module and thought it was a very good solution for large projects. Now there is a really nice Moose wrapper, named MooseX::Log::Log4perl, which makes it trivial to add logging to your classes.
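As a quick illustration of why it appeals, adding logging to a Moose-based class is just a matter of consuming the role. This is only a sketch (the class name and messages are made up):

    package LCFG::Build::Example;    # hypothetical class
    use Moose;
    with 'MooseX::Log::Log4perl';

    sub run {
        my ($self) = @_;
        $self->log->info('starting a run');
    }

    package main;
    use Log::Log4perl qw(:easy);

    # Log4perl still needs initialising once, early on.
    Log::Log4perl->easy_init($INFO);
    LCFG::Build::Example->new->run();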


Using CMake

July 10, 2008

The build tools project is now moving along nicely and I’ve started converting LCFG components and other software projects owned by MPU. For the first time today I had a go at building a fairly simple package (lcfg-pkgtools) using CMake. I was pleasantly surprised at how easy it was to write the necessary CMake rules for building the shared library and executable. Here’s the whole CMake file.

Note that in the case of most LCFG components it will not be necessary to write a single line of CMake, as it will all be done automagically; this, though, is one of the situations where the author has to do a bit more work.


Buildtools

May 9, 2008

Recently I’ve been putting some effort into getting the new buildtools working. I am now beginning to get a good understanding of how and why the previous version became rather messy. The aim is to make life as simple as possible for the software authors and packagers, but the more this is done the worse the code becomes. A big problem is that the way in which the system currently works is fundamentally incorrect, but we’ve become very accustomed to the working practices it encourages. It seems to me inevitable that for any new system to work correctly we must give up some of the “benefits” of the current system.

The biggest problem is well demonstrated by the way we manage the RPM specfile. The specfile is generated from an input template file (it contains @FOO@ macros). When we build a package using buildtools (e.g. with make rpm) these macros are evaluated at build-time, so you can get a different specfile for every platform. This means that building a package on one Red Hat platform using buildtools and then using the generated SRPM to build on other platforms is a flawed approach. It can cause particular problems when moving from one architecture to another because of the paths to libraries (e.g. PAM modules are stored in /lib64/security/ on 64-bit machines but in /lib/security/ on 32-bit). The specfile should have constant contents and drive the build process; it should not be a by-product of the build system.

So, given this what process should we go through starting with a code change and ending with newly built packages?

I reckon it works something like this:

  1. Edit code.
  2. Commit changes into revision-control system.
  3. Generate changelog entry.
  4. Tag new release.
  5. Export tagged release.
  6. Generate CMake files, within the exported release, which will control the build process. Note that no macro-substitution or compilation is done at this stage.
  7. Generate the specfile, filling in only the “static” macros (e.g. name, version, changelog); see the sketch after this list.
  8. Generate source tarball.
  9. Generate SRPM.
  10. For each target platform, build the RPM using the SRPM. At this stage CMake is actually used to do the macro-substitution, code compilation and file installation.
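To make step 7 concrete, here is a rough sketch of the kind of substitution I have in mind: only a known set of “static” macros is filled in at release-time, and any other @FOO@ macro is left untouched for the build stage to deal with (the macro values are illustrative):

    use strict;
    use warnings;

    # Fill in only the "static" macros; leave everything else alone.
    my %static = (
        NAME    => 'lcfg-example',   # illustrative values
        VERSION => '1.2.3',
    );

    while ( my $line = <> ) {
        $line =~ s/\@(\w+)\@/exists $static{$1} ? $static{$1} : "\@$1\@"/ge;
        print $line;
    }

Run over the specfile template (with a hypothetical script name, e.g. perl fill-static.pl specfile.in > specfile), this expands @NAME@ and @VERSION@ but passes anything platform-dependent through unchanged.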

These details are intentionally quite high-level; throughout, you can replace SRPM/RPM with the packaging system for your favourite platform. To begin with this is the process we already know; it is only from step 7 onwards that it begins to diverge. I’ve already explained the bit about not allowing completely general macro expansion for the specfile; building from the SRPM is also essential. I believe it is very important that we build our packages in the same way that external users would; anything else is just not a good test of our packages.

The problem I’m banging my head against is that traditionally we have generated specfiles in the same way as we have filled in macros in the component code, schema and documentation. I don’t want to completely rule that out, but I want to know at what level things should and shouldn’t be substituted. It is really useful to have an @VERSION@ macro in the specfile so that I don’t have to remember to edit the version by hand each time. But is doing macro-substitution on file and directory names at release-time a good idea? My gut reaction is that it is not, but what’s the alternative?

This is going to require something of a shift in working practices for Informatics COs so I’m really interested in thoughts and comments here.


mock and package building

April 17, 2008

Recently I’ve been working on getting mock installed and configured on our new build server. The idea is to provide a set of chroots which authorized users can use for package building in an automated fashion. There are big advantages to doing things this way. For a start, it only requires one machine to allow building for any RPM-based platform (either i386 or x86_64, as long as the machine is installed as x86_64). It also requires packagers to be much more aware of their build requirements: the chroot is pretty minimal, so unless the dependencies are well specified the package won’t build. This should result in much better quality packaging which can be more easily distributed to other users and sites.

For reference, most of the information I needed to get mock correctly configured came from the Fedora MockTricks wiki page. The most crucial bit I missed on the first pass was that on x86_64 you need to be careful with the yum configuration to exclude pretty much all i386 packages. If you don’t do that you get a chroot full of i386 packages, which can cause interesting problems, in particular file conflicts, when it comes to building some packages.

For each platform we are going to provide a set of chroots which have access to different package repositories. Taking sl5 as an example: there is a basic chroot (named sl5-i386) which can access the {distro,updates,extras,lcfg,world} buckets; there is a uoe chroot (uoe-sl5-i386) which has all of those buckets and can also access the uoe package repository; and thirdly there is an inf chroot (inf-sl5-i386) which adds the inf bucket as well as uoe. The hope is that most packagers will use the basic chroot to minimize dependencies on locally built packages. Anything built in those chroots will have to go into the associated bucket; the packages should never be submitted to the lcfg or world buckets, for instance.
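For example, rebuilding a source package in the basic SL5 chroot looks something like this (the SRPM name is just illustrative):

    mock -r sl5-i386 --rebuild perl-LCFG-Utils-1.0.2-1.src.rpm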

During testing of the chroots, Simon and I discovered one unfortunate problem with the LCFG component source packages. The intention has always been that, by bundling the various buildtools makefiles (e.g. os.mk, buildtools.mk, lcfg.mk), the packages could be rebuilt entirely independently of buildtools. This is not the case: without already having the lcfg-buildtools RPM installed, it is impossible to rebuild an LCFG component SRPM. The only way this can be fixed is as part of the buildtools rewrite project.


Moose on SL5

March 3, 2008

I’ve been working through packaging the Moose Perl module and its dependencies for SL5. This has now been done, and you can use an LCFG header (currently “develop” only) to include the packages like this:

#include <lcfg/options/perl-moose.h>

I will add the modules for FC6 next. All the packages which weren’t provided in EPEL have been built locally and put into the new “world” bucket.

To get the new release tools on the SL5 machine as well you will need this:

!profile.packages   mEXTRA(perl-Data-Structure-Util-0.12-1.inf \
                           perl-YAML-Syck-0.98-1.el5 \
                           perl-UNIVERSAL-require-0.11-1.el5/noarch \
                           perl-LCFG-Build-PkgSpec-0.0.5-1/noarch \
                           perl-LCFG-Build-VCS-0.0.5-1/noarch)