Sys Admins need to be extra careful

March 21, 2014

Recently there have been revelations that the NSA is explicitly targeting sys admins. This is because they see sys admins as a good way to gain access to the users and data on the networks they manage. It has worried me for a while now that gaining access to a typical sys admin account provides an attacker with a really easy route to root access (for instance, there are plenty of sites out there which allow anyone in group “wheel” to gain extra privileges). Also, as I blogged recently, even when you cannot directly gain full root access, anyone who is permitted to carry out privileged admin tasks using sudo probably has some sort of illicit way of gaining extra privilege.

Even if we ignore concerns about government surveillance, when you can trivially find a huge list of sys admins via linkedin.com you know that attackers are going to be focussing their efforts on that list of targets. It’s clear to me that we have reached a time where sys admins are going to have to accept more onerous access restrictions than a “normal” user because they have the ability to easily acquire a lot more power than a “normal” user. We’re going to be obliged to use technologies such as multi-factor authentication, we’re going to have to avoid insecure web sites that require accounts but don’t have an https option, we’re going to have to use a secure VPN just to do simple things.


sudo security issues

March 17, 2014

I’ve always been very wary of using sudo for anything more than the simplest cases. I quite like the Ubuntu approach of using sudo to gain root privileges instead of su: it’s nice and simple and doesn’t give any suggestion of power being restricted; all it really achieves is avoiding the need for a root password. A complicated sudo configuration has always seemed to me like a great way to hand out complete root privilege whilst being under the false impression that everything is nice and secure. This recent blog article I spotted has confirmed in my mind that heavy reliance on sudo really is a recipe for disaster.
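As a concrete (entirely hypothetical) example of the sort of rule I mean, consider a sudoers entry which looks safely restricted to editing one file:

```
# Hypothetical sudoers fragment: this looks like it only grants the
# right to edit one file, but vi's ":!sh" escape hands alice a full
# unrestricted root shell.
alice  ALL = (root) /usr/bin/vi /etc/motd
```

Many common tools (editors, pagers, anything with a shell escape) have the same property, which is why a long list of "restricted" sudo rules so often amounts to full root in practice.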


LCFG Client Refactor: New profile parser

January 24, 2014

Recently I’ve been working on developing a new framework which encapsulates all aspects of handling the LCFG profiles on the client-side. This framework is written in Perl and is named, appropriately enough, LCFG::Profile; I plan to blog about the various details in due course. The coding phase is almost complete and I’ve moved on to adding documentation for all the module APIs. I’ve found the documentation phase to be a very useful review process. It has helped me spot various dark corners of the code, as well as methods which were added earlier in the development process and are no longer required. Removing this dead code now is a good idea as we may otherwise end up being committed to supporting it if it forms part of a public API. I’ve also found documenting to be a very good way to spot inconsistencies between similar APIs implemented in the various modules. It’s definitely a good idea to follow the principle of least surprise whenever possible: if methods are named similarly and take a similar group of arguments, they probably ought to return similar types of results.


LCFG Client Refactor: Phase Two

December 5, 2013

As the results of Phase One of the LCFG Client refactoring project are now in the beta-testing stage and approaching a roll-out date, we have commenced work on Phase Two. The primary aim of this new work is to remove all dependencies on the W3C::SAX Perl modules, which have been unmaintained for a very long time. We’re probably the last place in the world still using those modules, so it’s definitely time to move on to something more modern. The project plan for this new work is available for anyone interested.

As a first step I’ve been prototyping some new XML parsing code based on the popular and well-maintained XML::LibXML module. I’ve also been thinking about ideas for an API for storing/accessing the information regarding components and resources. I’ve put together some useful notes on the LCFG XML profile structure to help me get my head around it all.


Security: Using the human perimeter

December 5, 2013

I recently came across an interesting security blog article on the Dark Reading site – "Using The Human Perimeter To Detect Outside Attacks". This is particularly interesting because, as part of our ongoing efforts to improve the security of our network, earlier this year I developed a new "log cabin" service which allows users to review all their SSH and web authentications. As well as providing a web interface where you can peruse all your login activity for the last few months, we also send out terse monthly summaries to everyone by email. These summaries list only the most "interesting" connection sources and help to encourage users to keep checking. I will be speaking about this project at the next FLOSS UK conference, which will be held in Brighton in March 2014. The talk is titled "Crowd-Sourcing the Detection of Compromised User Accounts" and it will look at how users can become involved in the whole process of keeping their accounts secure. I particularly like the term "human perimeter"; I might have to borrow that one.


LCFG V3 Client – beta release

October 17, 2013

I am pleased to announce that the v3 update for the LCFG client has now reached the beta-release stage. As of stable release 2013101401a everything is in place to begin testing at your own site. Full details are available on the LCFG wiki.

If you come across any bugs or unexpected behaviour please file a bug at bugs.lcfg.org.


LCFG Annual Review Meeting

October 16, 2013

On Thursday 5th December, instead of our normal Deployers Meeting, we are going to be holding an Annual Review meeting.

All users of LCFG are encouraged to attend this meeting to hear about what has been happening over the last year and what developments they can look forward to in the next year. This is also an excellent opportunity to raise issues that are important to you, put forward ideas for future developments you would like to see, and chat about all things LCFG!

This will start at 2pm and we aim to be finished by 5pm. It will be held in room 2.33 of the Informatics Forum (note that this is NOT the usual room).

Full details are available on the LCFG wiki.

Afterwards there will be an informal gathering in a local pub followed by some of us going somewhere for food.

I hope to see lots of people there!


LCFG Client Refactor: Further node name support

June 3, 2013

I remember once, as a 12 year old, playing rugby at school. I received the ball, saw the field ahead was clear and knew that this was the time to run like hell. For one joyous moment I was brushing aside the defending team and spotting my moment of glory; having never been a particularly sporty kid, was this my chance to join the cool crowd? Sadly, someone burst my bubble and pointed out that the main reason I wasn’t being flattened was because we were actually playing touch rugby…

Anyway, my general point is, it’s always good to know when, having been passed the ball, you should just run like hell and see what happens. It might also be good to remember which game you are playing but, hey, ho…

Having been given the chance to split the LCFG node name from the host name, I spotted an opportunity to really make it count. In short order the following code has been altered to extend this support to the whole of the LCFG client framework:

  • perl-LCFG-Utils 1.5.0
  • lcfg-ngeneric 1.4.0
  • lcfg-om 0.8.0
  • lcfg-file 1.2.0
  • lcfg-authorize 1.1.0
  • lcfg-hackparts 0.103.0
  • lcfg-logserver 1.4.0
  • lcfg-sysinfo 1.3.0
  • lcfg-installroot 0.103.0

None of this has (yet) been shipped to the stable tree since it needs more hacking of the current LCFG client (v2) code to fix a compatibility issue.

The big achievement here is that it makes it possible to specify the LCFG node name on the PXE installer kernel command-line via the lcfg.node parameter and get the whole way through to an installed, managed machine which is using an LCFG profile completely unrelated to the host name.
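For instance (the label, file names and node name here are all made up for illustration; only the lcfg.node parameter itself is real), a pxelinux boot entry might look something like:

```
# Hypothetical pxelinux.cfg entry: lcfg.node selects which LCFG profile
# the client fetches, independently of whatever host name the machine
# eventually acquires via DHCP.
label lcfg-install
  kernel vmlinuz
  append initrd=initrd.img lcfg.node=lab-generic
```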

There are various big benefits to this change. It is now possible to have a fully roaming machine which is LCFG managed; there is no requirement for a static host name or static IP address. This means that no matter what host name or domain name settings are in place, the LCFG client will continue to work as required. This also makes it possible to use a single “generic” profile to configure multiple machines. If you know you have a lab full of identical machines this could be very handy indeed.

The downside of this is that some things like spanning maps will not work the way you might expect. You also will not receive notifications from the server when a profile changes, so you have to rely solely on the poll time (it is probably worth making the timeout shorter). You probably also cannot send acknowledgements to the server, so the LCFG status pages will consequently be mostly useless for those clients. It is also difficult to configure networking to do anything other than use DHCP. You’re choosing to move some of the configuration information back out of LCFG (or at least out of a particular profile), so you may end up saving effort one way and adding it in another.

At the moment, although I have broken the conceptual link between node name and host name for the client framework, there are still lots of components which are confused by this change. Components have traditionally been able to rely on combining the profile.node and profile.domain resources to form the FQDN. This was probably always on slightly shaky ground, but now there can be no guarantee whatsoever of a useful value in the profile.node resource. If a component really cares about the host name (rather than the node name) then it will have to ask the host directly (using hostname or Sys::Hostname from Perl).
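As a minimal sketch of what "asking the host directly" means for a component, from the shell it is just:

```shell
# Query the running host for its own name (use "hostname -f" where the
# FQDN is needed), rather than deriving it from the profile.node and
# profile.domain resources, which may now hold an unrelated node name.
hostname
```

The Perl equivalent is the hostname() function from the Sys::Hostname core module.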


LCFG Client Refactor: host name versus node name

May 23, 2013

A long-standing issue that we have had with the LCFG client is that it is not possible to use an LCFG profile with a name which does not match the host name; the two have always been treated by rdxprof and the ngeneric framework as conceptually interchangeable. There is no particular reason for this limitation other than the traditional “it’s always been that way”; also, we’ve never had a requirement important enough to get this implemented, or the opportunity to quickly make the change. As the refactoring project is drawing to a close it seemed like a good time to break this conceptual connection and rework the code to always use the LCFG node name. For the moment the actual behaviour won’t change, since the node name defaults to the host name as before, but we now have a mechanism to allow it to be altered. When the client enters daemon mode it now stashes the name of the LCFG node being used; since you can only run one client daemon at a time this makes reasonable sense. The standalone one-shot behaviour remains unaltered: you can still access any profile you like.


python and string encodings

May 21, 2013

I’ve recently finished the User accessible login reports project. After the initial roll-out to users I had a few reports of people getting server errors when certain sets of data were viewed. This website is written in Python and uses the Django framework. During the template processing stage we were getting error messages like the following:

DjangoUnicodeDecodeError: 'utf8' codec can't decode byte 0xe0 in position 30: invalid continuation byte.

It appears that not all data coming from the whois service is encoded in the same way (see RFC 3912 for a discussion of the issue). In this case it was using a latin1 encoding, but whois is quite an old service with no support for declaring the content encoding used, so we can never know in advance what we are going to have to handle.

A bit of searching around revealed the chardet module, which can be used to automatically detect the encoding used in a string. So, I just added the following code and the problem was solved.

import chardet

# whois provides no encoding declaration, so guess one from the bytes
enc = chardet.detect(val)['encoding']
if enc is None:
    enc = 'utf-8'
# always decode the raw bytes to unicode first; calling encode() on an
# undecoded byte string triggers an implicit (and fragile) ascii decode
val = val.decode(enc, 'replace')
val = val.encode('ascii', 'replace')

The final result is that I am guaranteed to have the string from whois as an ascii string with any unsupported characters replaced by a question mark (?). It’s not a perfect representation but it is web safe and is good enough for my needs.