Author Archives: gdutton

more rfe heavy lifting

I’d always intended to follow up “a big hand for rfe” with details of my vim-based LCFG profile / header editing environment – but seven years intervened and apparently it never crossed my mind again.

So, to cut another long story short…

I use snappy aliases “vp” and “ep” to view and edit LCFG profiles, respectively. These consist of a few DNS checks, shortcuts, a backup system, etc., but the crucial bit is overriding EDITOR with a very small bash shim, which assumes you have a full checkout of the DICE LCFG headers at "$HOME/path/to/headers":

#!/bin/bash
# Define paths for "gf" goto-file support
LCFG_WORKING_COPY="$HOME/path/to/headers"
lcfgwcpath="$LCFG_WORKING_COPY/core/include,$LCFG_WORKING_COPY/live/include,$LCFG_WORKING_COPY/core/packages/*,$LCFG_WORKING_COPY/live/packages"

# rfe doesn't specify a filetype, so we have to force CPP mode.
# Also, set the working path defined above for "gf" support.
exec /usr/bin/vim -c "set filetype=cpp" -c "set path=${lcfgwcpath}" "$@"

I activate this via my shell. You could do it, for example, with .bashrc:

function editprofile() {
  local profile args
  for profile in ${*:-$(hostname -s)}; do
    args="${args} lcfg/${profile}";
  done;
  EDITOR="$HOME/bin/pvim" rfe -S ${args};
}
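
For completeness, the read-only “vp” counterpart could be sketched along the same lines. This is only a sketch, resting on two assumptions: that rfe invokes $EDITOR via the shell (so the extra flag survives), and that a read-only vim view is all “view” needs to mean:

function viewprofile() {
  # assumes rfe runs $EDITOR via the shell, so "-R" survives as a vim flag
  local profile args
  for profile in ${*:-$(hostname -s)}; do
    args="${args} lcfg/${profile}"
  done
  EDITOR="$HOME/bin/pvim -R" rfe -S ${args}
}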

This grants me significantly more useful syntax-highlighting, as well as an understanding of #include directives, which allows me to “gf” to inspect the header under the cursor.

Of course, the highlighting doesn’t understand the embedded Perl-like LCFG bits, but that’s OK: comments and macros are a significant improvement (in particular, no longer treating /^#.*$/ as a comment). On several occasions I have experimented with a special “lcfg” vim filetype, but it ends up 99% based on CPP, and generally the bugs outweigh the benefits.

alpine descent

Fans (and regular users) of the text-based email client “alpine” might be interested to know that as of 3rd December we will be reverting to the previous upstream release, 2.11, on DICE SL7. The noticeable effect of this change should be minimal, unless you are in the set of users affected by bugs in the 2.20 release, in which case it should be wholly beneficial. DICE SL6 won’t be affected by this, as it has remained on the comparatively stable 2.10 release for some time now.

For those who have not been following the development of pine/alpine in recent years, this history gives a good account.

The version we’ve been using for a little while (2.20-1) was taken from a different fork than previous versions and contained several flaws – some of which we’ve noticed in practice – and unfortunately the updated package (2.20-2) adds a further bug affecting normal workflow (specifically: entering the “Compose” menu causes a repeatable fault if keylabels are disabled; in due course this should be reported upstream).

Given the comparatively unstable state of upstream alpine packaging, we’ve taken the decision to revert to 2.11 for now; in future we will almost certainly build and package our own release of alpine, as we did for most of the course of DICE SL5 and SL6, but based on Eduardo Chappa’s releases. This carries some additional maintenance load compared to using the upstream distribution, but should hopefully result in a better, more stable alpine on SL7.

reform: rough status so far

As of April 2014:

  • Technologies researched: WSGI + Apache, Flask, Jinja2, and PostgreSQL/psycopg2 chosen.
  • Backend framework, logging, page lifecycle, etc. built.
  • Config format drafted.
  • Kerberised PostgreSQL backend session / form storage developed.

a big hand for rfe

Here are some tools that I’ve put together that make a huge difference to a clunky rfe workflow. Bored already? Don’t worry, I’ve already written the script: tl;dr

Imagine power cycling a DICE server with redundant PSUs, using our lovely power bar control files. You don’t know where it’s installed, so you have to search for it.

cron and on and on?

DICE users have long had the ability to add their own cron jobs, to schedule repeating tasks or to launch services on reboot [1] using the standard crontab command [2].

The Research and Teaching unit recently noted that users’ crontabs (sometimes required, for example, on research servers to start or check on custom services) are not routinely backed up. They are of course necessarily minimal and easy to recreate, but their fate following a disaster (or even a reinstallation) is not obvious. Even if users are aware of this limitation, it prevents us from performing a completely automated recovery where users’ crontabs are involved.

It is possible to back up crontabs on demand or routinely (a minimal sketch follows the questions below), but we’ve no procedure to do this on machines that typically have no other data to back up. So the purpose of this post is simply to draw attention to the above, and to ask a few general questions:

  • Is it well-known that crontabs are “at risk”? Does anyone care?
  • Should crontabs be backed up routinely? on all servers? desktops? at all?
  • Is there anything else we should be doing about this?
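
By way of illustration, the “on demand” option needs nothing more exotic than copying the spool directory somewhere safer. A minimal sketch (run as root; the destination path is a placeholder):

#!/bin/bash
# Sketch: preserve each user's crontab from the spool.
# The destination is illustrative; any backed-up filesystem would do.
dest="/path/to/backups/crontabs/$(hostname -s)"
mkdir -p "${dest}"
for tab in /var/spool/cron/*; do
    [[ -f ${tab} ]] && cp -p "${tab}" "${dest}/"
done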

Comments are encouraged, below or by email.


  1. crontab is the setuid executable which manages /var/spool/cron/ for this purpose. The location is notable: on DICE, the /var directory (or filesystem) consists of data which is either transient or generated and managed automatically. With the exception of some database files, this data is not typically backed up since its value is only to the running system. Indeed most cron entries on a machine are not created by users; they are configured by LCFG and (re)created automatically.
  2. Note that using cron on DICE requires a few steps to work with (or around) AFS.

Actually Using pgluser

pgluser is a handy tool on DICE, and allows me to manage PostgreSQL user accounts and databases in a completely automated manner. As it was primarily a tool to make my life easier, I’ve never tried particularly hard to increase its uptake. This is my attempt to help, at least for anyone considering setting up a PostgreSQL server in an LCFG environment:

https://svn.theon.inf.ed.ac.uk/trac/wiki/PgluserAndDice

Have I missed anything?

alpine, nagios and display filters

I’ve been aware of alpine’s “display filter” feature for some time, used as it is for on-the-fly GPG interpretation amongst other things. But I’d never really examined the feature before now. The manual says:

The [filter] command is executed and the message is piped into its standard input. The standard output of the command is read back by Alpine.

This says it all: display filters turn out to be an extremely powerful generic mechanism for reformatting and enhancing text, and they work particularly well when applied to machine-generated messages. Maybe their power is best explained by the example which caused me to investigate it in the first place:

An example (the nagios bit):

A longstanding irritant to me has been my difficulty in shutting nagios up. For a long time I’ve been relying on a filter to parse nagios’ incoming emails and generate an acknowledgement URL. The display filter closes the loop, automatically injecting that magic URL at the end of the message.

Here’s a simplified version of the filter, reminiscent of the one in the previous post:


#!/usr/bin/gawk -f
# Crude detection of problem type for acknowledgement link
# Don't forget to validate these inputs...
/Notification Type: / { TYPE=$3; }
/Service:/ { SERVICE=substr($0, length($1)+2); }
/Host:/ { HOST=$2; }
# Important: this is a filter, so don't forget to print input lines back out!
{ print; }
# Now add the acknowledgement link below:
END {
    if (HOST && TYPE == "PROBLEM") {
        # this is the script which generates the URL.
        # ideally this should be replaced with some awk to do the same thing
        cmd = "~/bin/nagack " HOST " " SERVICE
        cmd | getline url
        close(cmd)
        # now add the link to the email.
        print "[Acknowledgement link: " url " ]"
    }
}
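
The nagack script itself isn’t reproduced here; all it does is print the acknowledgement URL. A rough bash equivalent might look like the following (the server name is illustrative, and 33/34 are Nagios’ standard host/service acknowledgement command types for cmd.cgi):

#!/bin/bash
# Rough "nagack"-alike: print a Nagios acknowledgement URL.
# Server name below is illustrative.
host=$1
service=$2
base="https://nagiosserver/nagios/cgi-bin/cmd.cgi"
if [[ -n ${service} ]]; then
    echo "${base}?cmd_typ=34&host=${host}&service=${service}"
else
    echo "${base}?cmd_typ=33&host=${host}"
fi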

Now, to alpine’s Display Filters setting, add:


Display Filters    = _LEADING("***** Nagios")_ /path/to/nagios-filter-script

That’s it! My emails from nagios now look like:


***** Nagios *****
Notification Type: PROBLEM
Service: ssh
Host: myhost
Address: 192.168.12.34
State: CRITICAL
...
[Acknowledgement link: https://nagiosserver/nagios/cgi-bin/cmd.cgi?cmd_typ=3... ]

Important caveats:

  • If you’re not careful, by adding these filters you will have introduced a trivial local shell injection attack to your mail client. Validate your inputs — just like I didn’t above!
  • The developers have this to note about running filters on every message:

    Testing for the trigger and invoking the filter doesn’t come for free. There is overhead associated with searching for the trigger string, testing for the filter’s existence and actually piping the text through the filter. The impact can be reduced if the Trigger Modifying Tokens […] are employed.

    I’ve certainly noticed a small (unmeasured, but noticeable) delay in opening messages with triggers: large enough to be annoying if I’d planned to filter every message, even using a trivial bash filter which itself is quick to complete.

  • One additional caveat on DICE: if your alpine session outlives your AFS credentials, and you’ve stored your display filters in your home directory, you will find that the display filters simply disappear. As good a reminder as any to renew, and thankfully a “renc” is all that’s required to restore your filters to former glory.

That’s it! Surprisingly trivial, and with a handful of these triggers, the benefits are huge. I’m using five so far, mostly generating clickable links to some of our automated systems, but I’d be pleased to hear what other people are doing with these filters.

Editing component files with vim

Editing LCFG component source files using Vim is of course The Right Thing to do, but due to the way these source files are named (typically filename.ext.cin), vim doesn’t necessarily pick up on the filetype, and goodies such as syntax highlighting are lost.

This is easy to fix using vim’s ftdetect system. Some examples for simple types:

" These files are always POD in disguise
au BufRead,BufNewFile *.pod.cin : set filetype=pod
" Slightly contentious: a new filetype is needed, really, but this is a decent match.
au BufRead,BufNewFile *.def.cin : set filetype=cpp
" For other, unknown types, detect from the as-yet undefined shebang:
au BufRead,BufNewFile *.cin : if getline(1) =~ '^#!@SHELL@' | set filetype=sh | endif
au BufRead,BufNewFile *.cin : if getline(1) =~ '^#!@PERL@' | set filetype=perl | endif

(Note that the latter two lines are specified separately, rather than chained with elseif, purely for readability.) It’s fairly obvious that this can be extended to any file type, and there’s also scope for a generic rule mapping any file of the form file.typ.cin to its default .typ “sub-extension” filetype.

Anyway, the above has already improved my productivity no end so I’ll leave the latter exercise to the reader. Comments and contributions are welcome, as always — so long as they’re not suggestions to use Emacs(!)

Away with the PXEs

Occasionally, for the purposes of internal testing or continuity, it’s desirable to bring up a server with a duplicate MAC address. It’s a safe enough manoeuvre (so long as these machines operate on different wires) for the brief periods in which I require it, but when this scenario involves the installation of a new server via our installroot PXE service, things are trickier.

Our PXE server is configured automagically by spanning map and is, effectively, keyed on MAC address, so it’s unlikely to present the correct configuration (reliably) when the new host differs from the old one in some way.

The workaround is to override the PXE configuration on the *existing* server (on the basis that you weren’t planning on reinstalling it, anyway, were you?):

!pxeclient.platforms mADD(new_plat_name) /* e.g. sl6_64 */

/* And, if you need to add or remove serial console support: */
!pxeclient.serial_port mSET(ttyS0) /* or () */

Post-PXE, the dhclient component is aware of subnet differences and will ensure your machine receives the correct profile for installation (though, to prevent future confusion, remove this as soon as the installer has done its work!).

get on the rpm bus

This is a quickie script which streamlines my RPM building and submission to a single command. Note that this is entirely dependent on our shiny new Package Forge system, which feeds RPMs to multiple platforms for building and eventual submission into our RPM buckets.

All it does is chain up “rpmbuild -bs [spec]; pkgforge submit [srpm]” but it’s a nice timesaver nonetheless. Side-benefits include the automatic generation of a readable ID and provision of a tracking link for pkgforge so that you can anxiously refresh the page to watch the build progress (or you could just wait for the report email…).

So, here it is: my very simple and stupid RPM automation. Suggested name: ‘rpmbus’.

#!/bin/bash
if [[ -z $2 ]]; then
    echo "RPMbus: build -> submit assist"
    echo "Usage: `basename $0`   [pkgforge args]"
    exit 1
fi
bucket=$1; shift
spec=$1; shift
args=$*

output=`rpmbuild -bs ${spec} | tail -n 1`
pkg=`echo ${output} | sed -e 's_^Wrote: __'`

if [[ ! -e ${pkg} ]]; then
    echo "Package wasn't built: ${output}"
    exit 1
fi

id=`basename ${spec} | sed -e 's_\.spec__' -e 's_\.__g'`-`date +"%s"`
echo -e "Found source package:\n  ${pkg}"
echo "  Extra args: ${args:-none}"

read -p "Submit to '${bucket}'? [y/N] " foo
if [[ ${foo} != 'y' ]]; then
    echo "Cancelled"
    exit 1
fi

echo "Submitting to ${bucket}..."
pkgforge submit --id ${id} -B ${bucket} ${args} ${pkg} && \
echo "  https://pkgforge.inf.ed.ac.uk/job/view?id=${id}"


Caveats: well, they’re numerous and they’re pretty apparent. But it took five minutes to write and it WFM :)

losing locate

As much as Mac OS X’s Spotlight is an integral and indispensable part of my interaction with my laptop, a part of me still begrudges the “gratuitous” CPU and disk utilisation which is of course a necessary part of its operation.

However, as a hardened Linux user unprepared to do without the luxury of the locate database, I was further upset to discover that these databases were not shared, and with even more irritation I enabled the periodic updatedb cron job, as suggested by locate itself.

Whether it was SSD envy, a nagging sense of a job half-done or sheer procrastination I’m not sure, but last week I felt compelled to do away with the needless platter-spinning and found the answer far too quickly, in the form of the Spotlight shell utility mdfind.

One alias later:

alias locate="mdfind -name"

and I was able to destroy the locate database, and discontinue its indexing:

launchctl stop com.apple.locate
launchctl unload -w /System/Library/LaunchDaemons/com.apple.locate.plist

whew!

Some obligatory qualifications…

This applies largely to OS X 10.6, Snow Leopard. Leopard’s arrangement is slightly different, and I know nothing about earlier versions… And no, it doesn’t support all of locate’s arguments, but I rarely used them (apart from -i) anyway (and don’t have any case-sensitive, indexed filesystems on the mac right now). man mdfind provides workarounds for many of the more unusual uses of locate, and grep provides the rest…

something I didn’t know about mailcap

For a few weeks I’ve been idly wondering why I’ve been unable to get alpine to take advantage of the syntax-highlighted goodness of vim, when viewing attached patches. Having just won another small victory against my own ignorance, I thought it best to share.

Like any sensible mail client, alpine chooses viewers for attached files using lookups of the system mailcap files, /etc/mailcap and ~/.mailcap. Enabling plain-text viewing in vim should be as simple as assigning vim to the appropriate type(s) in ~/.mailcap (and, for some types, unchecking the alpine Show Plain Text Internally preference).
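
In my case that meant an entry of roughly this form (a reconstruction; the working version appears at the end of this post):

  Text/X-Patch;    /usr/bin/vim -R -- '%s'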

However, attempts to open plain-text files (in this case specifically text/x-patch) in the multi-talented vim editor failed: alpine simply returned a “finished” status, as if viewing had been successful. My suspicion was confirmed when I redirected vim’s output (hidden by alpine) to a file:

Vim: Warning: Output is not to a terminal
Vim: Warning: Input is not from a terminal

The latter message was well known to me; it’s usually triggered by my forgetting to affix the “stdin hyphen” whilst piping input to vim.

The problem is that both vim and alpine require control of the terminal to function; vim does not simply return beautifully ANSI-escaped coloured text for later display. Attempts to somehow force alpine to relinquish control of the terminal, or for vim to take it, failed until I discovered the secret amongst mailcap’s flags, as described by the manual:

    copiousoutput
        This flag should be given whenever the interpreter is
        capable of producing more than a few lines of output
        on stdout, and does no interaction with the user. [...]

I’d seen this, but for some reason had always assumed ‘copiousoutput’ to be some sort of magic external pager, with no connection to the mailcap system. Reading on, the solution was clear:

    needsterminal
        If this flag is given, the named interpreter needs to
        interact with the user on a terminal. [...]
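
By contrast, a classic copiousoutput entry pipes the attachment through a non-interactive formatter and lets alpine page the result itself. Something along these lines (the lynx path is an assumption):

  text/html;    /usr/bin/lynx -dump '%s'; copiousoutput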

So, a few amendments to ~/.mailcap later:

  Text/X-Patch;    /usr/bin/vim -R -- '%s'; needsterminal

and alpine had gained magical powers to invoke terminal-based viewers. There’s more to this; in particular the ‘edit=‘ and ‘compose=‘ fields, not to mention print support. But that’s enough to get basic viewing in vim.

+1 for reading the manual. -1 for not reading it before embarking on terminal manipulation…