Code Smells

February 17, 2009

Here is a little snippet of C code which recently reared up its ugly head and bit me; this sort of problem is often termed a Code Smell. No, I’m not going to say who wrote it…

#ifdef LCFG_OS_rh9
#if defined(LCFG_OS_fc3) || defined(LCFG_OS_fc5) || defined(LCFG_OS_fc6) || defined(LCFG_OS_sl5)

There are a couple of problems here, the biggest and most obvious of which is the maintenance issue. Every time a new platform is introduced this code will need to be modified to add a check for another LCFG_OS_ macro. It’s a bug lurking in the dark, waiting to pounce on the unwary; in particular, it makes it hard for any external users to take the code and use it elsewhere.

The deeper problem, and the reason I consider this a “code smell”, is that the code checks for the wrong thing. The real requirement is to check which API is being used by a library. Although, in the normal case, the choice conveniently maps onto the platform, there is no reason why the older version of the software couldn’t be used on a more recent operating system. What is really needed here is a check for the required API. The code now reads:

#ifdef OLD_API

This automatically chooses the up-to-date API unless the user has specified that the older API is required. This could be done manually or through automatic detection incorporated into the build scripts. Ideally the headers for the library would have a macro defined which specified the API being used but in this case there does not seem to be anything like that available.

Running commands from perl

February 13, 2009

I think most people are aware of the dangers of running external commands from Perl, particularly when they involve variables storing user input. Over the years multiple methods have been provided to make things safer, such as passing system() a list of command parts, which avoids the shell, rather than the original single-string approach, which forks one. That’s fine when you just want to fire off a command and check the return code; often, though, we want to capture the output for immediate perusal or store it directly into a file. Here’s a snippet of code from the subversion component I used when I wanted to run the svnadmin command and send the output direct to a file:

        my $out = IO::File->new( $outfile, 'w' )
              or $self->Fail("Could not open $outfile: $!");

        open( my $in, '-|', $svnadmin, 'dump', $repo_path )
            or $self->Fail("Failed to dump svn repository $repo_path: $!");

        while ( defined( my $line = <$in> ) ) {
            print {$out} $line;
        }

        close($in)
            or $self->Fail("Failed to dump svn repository $repo_path: $!");

        $out->close or $self->Fail("Failed to save svn repository dump: $!");

All that just to do the equivalent of "svnadmin dump > /path/to/dumpfile". It could be even more complicated if you got an error code returned and you wanted to examine the output on stderr.

There are various solutions to this problem based around IPC::Open2 and IPC::Open3 but they can still be quite hairy and involve opening and reopening file handles.
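To show what “hairy” means in practice, here is a rough sketch using the core IPC::Open3 module (the command here is a trivial stand-in for illustration). Note the gensym() dance: unlike the stdin and stdout handles, open3() will not autovivify a lexical filehandle for stderr, so you have to supply one yourself.

```perl
use strict;
use warnings;
use IPC::Open3;
use Symbol qw(gensym);

# Run a command, capturing stdout and stderr separately.
my $err = gensym;
my $pid = open3( my $in, my $out, $err, 'echo', 'hello' );

close $in;    # nothing to send on stdin

# Reading the handles one after the other like this can deadlock
# if the child fills the stderr pipe buffer -- fine for small
# output, but exactly the sort of trap that makes this approach hairy.
my @stdout = <$out>;
my @stderr = <$err>;

waitpid( $pid, 0 );    # reap the child; sets $?
my $status = $? >> 8;

print "status: $status, output: @stdout";
```

All of the bookkeeping — the extra filehandle, the explicit waitpid(), the deadlock risk — falls on the caller.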

I have recently discovered a much nicer, simpler interface which hides the complexity of solving these problems: the module is named IPC::Run. A quick example from the module summary shows the potential:

   ## First, a command to run:
      my @cat = qw( cat );

   ## Using run() instead of system():
      use IPC::Run qw( run timeout );

      run \@cat, \$in, \$out, \$err, timeout( 10 ) or die "cat: $?";

      # Can do I/O to sub refs and filenames, too:
      run \@cat, '<', "in.txt", \&out, \&err or die "cat: $?";
      run \@cat, '>', "out.txt", '2>>', "err.txt";

      # Redirecting using pseudo-terminals instead of pipes.
      run \@cat, '<pty<', \$in, '>pty>', \$out_and_err;

This is barely scratching the surface; it can act like Expect and do loads of other clever tricks.
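To tie this back to the earlier svnadmin snippet, here is a hedged sketch of the same shape of job done with run(). The command is a placeholder ('echo' standing in for $svnadmin dump $repo_path), and a real component would keep its Fail() error handling:

```perl
use strict;
use warnings;
use IPC::Run qw(run);

# Capture stdout and stderr in one call -- no manual pipe,
# no while loop, no separate close() checks.
my @cmd = ( 'echo', 'dump-data' );   # stand-in for ( $svnadmin, 'dump', $repo_path )
my ( $out, $err );

run \@cmd, \undef, \$out, \$err
    or die "command failed with status $?: $err";

print $out;
```

Sending the output straight to a file instead is just a matter of swapping the scalar ref for a redirection, e.g. run \@cmd, '>', $outfile — the whole IO::File and while-loop dance from the first snippet collapses into one line.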