Simon's Musings

January 11, 2009

Cosign authenticated OpenID Identity Provider

Filed under: Uncategorized — sxw @ 11:25 pm

As part of the relocation of my motley collection of ‘not-a-service’ applications, I’ve moved and tidied up the cosign-based OpenID identity provider. It’s now available at https://id.not-a-service.inf.ed.ac.uk/.

OpenID is a simple way to use a single digital identity across the entire internet. This experimental Identity Provider allows you to use your Informatics identity as that identity.

The technology behind OpenID means that you can do this without disclosing any information to external sites which might compromise the security of your Informatics account. You still log in to our local systems (either when you log in to the DICE machine on your desk, or when you go to our web login site) and, needless to say, you should still never disclose your DICE username and password anywhere else.

Your OpenID is a URL, initially of the form http://id.not-a-service.inf.ed.ac.uk/uun, but by adding a simple bit of HTML you may use any URL whose contents you control (so, for example, you could use http://homepages.inf.ed.ac.uk/uun). Any site which displays the OpenID logo in its login field will accept this URL as your identity. You will then be redirected (if necessary) to https://weblogin.inf.ed.ac.uk to enter your username and password, and to our OpenID site to confirm that you’re prepared to divulge your identity.
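
For OpenID 1.x delegation, that ‘simple bit of HTML’ is two link elements in the head of the page you control. A sketch – the server href here is an assumption, as the exact endpoint URL depends on the provider installation:

   <!-- in the <head> of http://homepages.inf.ed.ac.uk/uun -->
   <link rel="openid.server" href="https://id.not-a-service.inf.ed.ac.uk/">
   <link rel="openid.delegate" href="http://id.not-a-service.inf.ed.ac.uk/uun">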

As the name suggests, this is not a service. It’s not officially supported, and I can make no long-term promises regarding its availability. But please do try it out for ‘throwaway’ web accounts, and let me know if it proves of use.

Technical Details

The service is based around JanRain’s PHP OpenID library, with my enterprise authentication patch. Some crafty use of mod_rewrite and Apache access-control directives forces redirection to cosign when authentication is required, whilst still allowing services to access the identity page. The OpenID provider in use is relatively old, and doesn’t support all of the latest bells and whistles.

All of the configuration is performed in the dice/options/openid.h header. The server itself is packaged in the php-openid-server RPM, with MySQL, X509 and Cosign being configured by their corresponding components. The web server is managed using Apacheconf, with an additional configuration file (for the SSL server) being provided through the file component. The templating of the OpenID server is also handled by the file component, with the Informatics-style header and body text being added from LCFG resources.

Unfortunately, the MySQL database cannot be entirely configured through LCFG, as a password must be shared between the database and the web application. The web application configuration is created by LCFG as /etc/openid/config.php.tmpl, and must be copied into place (/etc/openid/config.php) once the database password has been filled in. Similarly, the database must be created, and the password assigned, manually when a new service is configured. Addressing this issue would require a substantial reworking of the MySQL component.

When bringing up a new server, the database must be initialised by running the following SQL (through om mysql runcommand):

   CREATE DATABASE openid_server;
   GRANT ALL PRIVILEGES ON openid_server.* TO 'user'@'host' IDENTIFIED BY 'password_here';
   FLUSH PRIVILEGES;

In our configuration, both the database and the server configuration file are part of the backed-up set, so a restore is simply a matter of copying the configuration into place and restoring the database.

January 6, 2009

LCFG components as proper objects

Filed under: Uncategorized — sxw @ 4:59 pm

Towards the end of last year, at the COs’ Christmas meal in the sadly-destroyed Khushi’s, Stephen challenged me to produce a Python environment for LCFG components. A relatively simple task, you might think, but he threw in the twist that it had to be properly object oriented. That makes it a much trickier prospect, and it set me thinking about what really object-oriented LCFG components might look like.

We’ve got the problem that originally, LCFG was written in sh. Not that great for structured programming, but the original LCFG design still managed to treat what we now know as components as ‘objects’ with methods such as ‘start’ and ‘stop’. The first problem with this is that the execution context wasn’t preserved between each method invocation (components were just scripts which were called with the method as an argument – so every time a method was run the script was invoked from scratch). A simple form of persistence was added – allowing methods to serialise selected object attributes into a file, which would then be loaded, and the attributes reinitialised when the script restarted. This was extended to resources, so that the resource set when ‘stop’ was called would always match that from when the component was initially started. The ‘configure’ method was added to provide a defined mechanism of transitioning between resource sets when the component was running.

All of these additional features were implemented in ‘sh’ – straining the flexibility of the shell to breaking point, and causing numerous restrictions on valid attribute and resource names and values. More recently, we made the jump to implementing some components in perl, but the perl environment is essentially a port of the existing sh one, and adds little in the way of new structure or abstraction.

This leaves a somewhat creaking component environment, hamstrung by its implementation language, by the need for backwards compatibility, and by its tight coupling with the LCFG client (which handles the interaction between the components and the profiles that the client collects from the LCFG server). If we take a step back from these realities, what would the ideal component framework look like?

Firstly, we need to preserve resource structure when passing sections of the profile to components. Both the current perl and sh frameworks use the collapsed ‘x_y_z’ list form, which makes it impossible to deal with structured resources in meaningful ways. We need to define a new object hierarchy which makes it possible to preserve the structure of the resources in the XML profile right the way through to the component code.
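
By way of illustration, here’s a rough Python sketch of resources that keep the profile’s nesting. This is an entirely hypothetical API – nothing like it exists in LCFG today, and list resources are omitted for brevity:

   import xml.etree.ElementTree as ET

   class Resource:
       """A resource node that preserves the XML profile's structure."""
       def __init__(self, element):
           self.name = element.tag
           self.value = element.text
           self.children = {child.tag: Resource(child) for child in element}

       def __getitem__(self, name):
           return self.children[name]

   profile = ET.fromstring(
       '<component><server><port>80</port><name>www</name></server></component>')
   print(Resource(profile)['server']['port'].value)   # prints: 80

Compare that with trying to recover the same information from a flattened ‘server_port’ key, where the split points are anybody’s guess.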

Secondly, we need to deal with the installation and removal cases. Currently, LCFG has no concept of component installation or removal. Installation can be detected within the component code as the first time the ‘start’ or ‘configure’ method is called on a machine, but there’s no generic mechanism for this. There’s currently no way of handling removal – a component never knows when it has been removed from the boot.services list, and so has no way of telling which ‘stop’ invocation is its last. As a result, machines which transition between multiple roles often end up with a large amount of detritus in their root partitions.

Thirdly, we need to define persistence more strictly. The current persistence definitions are ad hoc, partly because sh never gave a clear way of inferring attributes. I believe that our requirements for handling the installation case mean that we should split a component into two different objects. One, a Factory-style object, should persist from installation to removal: the install method should be that object’s constructor, and removal its destructor. All of the object’s attributes, and a copy of the resource set, should persist throughout the life of the Factory object, with a ‘configure’ mechanism available to deal with resource changes during its life. For some components, the Factory will be all that is required; components which don’t manage daemons, such as pam and sasl, have no meaningful concept of ‘start’ and ‘stop’. Other components will need instances which handle the lifecycle of a service. These instances would be created by calling a method of the Factory class, and would have ‘start’ as their constructor and ‘stop’ as their destructor. In this way, attributes can persist throughout the life of a daemon. In the future, when we support non-singleton components, it would be possible for the Factory to produce multiple concurrent instances.
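
A rough Python sketch of that split – all names are hypothetical, since this is the design being proposed, not an existing API:

   class ComponentFactory:
       """Persists from component installation until removal."""

       def __init__(self, resources):      # 'install' is the constructor
           self.resources = resources      # copy of the resource set
           self.instances = []

       def configure(self, resources):     # the resource set has changed
           self.resources = resources
           for instance in self.instances:
               instance.reconfigure(resources)

       def start(self):                    # create a running instance
           instance = ServiceInstance(self.resources)
           self.instances.append(instance)
           return instance

       def remove(self):                   # 'remove' is the destructor
           for instance in self.instances:
               instance.stop()
           self.instances = []

   class ServiceInstance:
       """Persists from 'start' to 'stop' of one daemon."""

       def __init__(self, resources):      # 'start' is the constructor
           self.resources = resources      # snapshot taken at start time

       def reconfigure(self, resources):
           self.resources = resources      # e.g. prod the running daemon

       def stop(self):                     # 'stop' is the destructor
           pass                            # shut the daemon down

Components like pam and sasl would simply never call start(); a future non-singleton component would call it more than once.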

Fourthly, we need to build things that can be inherited. As Stephen has noted elsewhere, LCFG desperately needs a way of allowing components to inherit from other components, although handling the resource-set implications of this will require server changes. But there’s an additional kind of inheritance that we should be interested in. Many of our components do similar tasks, and share large chunks of code. To date, the restrictions of our implementation language (for sh) and framework (for perl) have prevented useful superclasses from being inherited. Whatever new framework we define should make it trivial to, for example, write a class which handles safely starting, stopping, and notifying a daemon, and which can then be inherited by all components requiring that functionality.
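
Continuing the sketch, the sort of reusable superclass I have in mind might look like this (again, all names hypothetical, and a real version would need locking and error handling):

   import os, signal, subprocess

   class DaemonHandling:
       """Safe daemon start/stop/notify, inherited rather than copied."""
       command = None                      # subclasses override these
       pidfile = None

       def start_daemon(self):
           proc = subprocess.Popen(self.command)
           with open(self.pidfile, 'w') as f:
               f.write(str(proc.pid))

       def stop_daemon(self):
           with open(self.pidfile) as f:
               os.kill(int(f.read()), signal.SIGTERM)

       def notify_daemon(self):            # ask the daemon to reload
           with open(self.pidfile) as f:
               os.kill(int(f.read()), signal.SIGHUP)

   class SSHComponent(DaemonHandling):
       command = ['/usr/sbin/sshd', '-D']
       pidfile = '/var/run/lcfg-sshd.pid'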

So, that’s my ideal world. Comments?

November 3, 2008

AFS Hackathon and Google Summer of Code

Filed under: Uncategorized — sxw @ 7:35 pm

I’m now back in Scotland, having spent the last week in California, courtesy of the very nice people at Google’s Open Source Programs office and OpenAFS. As previously mentioned, I spent the summer mentoring a student (Dragos Tatulea) who was adding support for read-write disconnection to OpenAFS. The mentoring process was hugely rewarding – from a standing start, Dragos learned a huge amount about a very complex codebase, and produced a workable implementation of disconnected operation which is now part of the OpenAFS tree. Whilst mentoring was both challenging and time-consuming, it also encouraged me to rapidly learn about bits of the OpenAFS codebase I’d never delved into before, and lots about Linux kernel development that I’d been trying to avoid ever knowing!

So, Google invited OpenAFS to nominate people from their Summer of Code mentoring team to attend a summit at their Mountain View headquarters, and I was kindly included. Derrick, Jeff and Matt from OpenAFS also came along. My bags also eventually joined me!

The mentors’ summit itself was an eye-opening experience. It was organised as an unconference, where people were encouraged to arrange sessions on topics and technologies that interested them; there was a huge amount of fascinating information, and many useful relationships were created and renewed. In particular, a chance demonstration in the session on Android introduced me to Gerrit, a web-based code review tool. I firmly hope that Gerrit will be part of the OpenAFS development process, just as soon as we get moved over to git.

Immediately following the Summer of Code mentors’ summit, Google hosted an AFS hackathon – a chance for a collection of OpenAFS developers to get together, discuss the current state of our world, and make targeted progress on specific items. Much of the discussion centred upon moving forwards in a few specific areas – the move from CVS to git, the integration of rxk5 and Hartmut’s OSD work, and the ongoing work on forming a foundation and creating a standardisation process.

I also spent half a day looking at improving the AFS user experience on the Nokia n810. Unfortunately, the Hildon file manager widget, which both the n810 file browser and all native applications use, has some features that make it particularly unfriendly to network file systems. Firstly, it does all of its processing in a single thread, so file system operations which block for a long time also hang the application’s user interface. Secondly, it’s not aware of ‘expensive’ operations – for example, when you open a directory it also opens all of the subdirectories, and works out how many files each contains by stat()ing every file in every subdirectory. Needless to say, performance is very poor when the directory you are opening is /afs.

I also spent time bringing up a test instance of Gerrit, and working up some proposals for how it could be integrated into the OpenAFS patch workflow. Whilst this is still blocked on the git migration (on which Max and Mike made significant progress over the two days), hopefully we’ll be in a position to start using it in anger soon.

Despite the best efforts of the fog at LAX, and American Airlines, I also made it back to Scotland!

September 29, 2008

Presenting iTalk: A web interface to our Jabber service

Filed under: Informatics — sxw @ 11:57 am

As this blog has suggested in the past, I’m in the process of moving things off duffus, and at the same time putting them on a more stable, LCFG-managed footing. The latest system to be moved is iTalk, a local installation of the JWChat application, which provides a web-based interface to the Informatics Jabber service. As the name http://italk.not-a-service.inf.ed.ac.uk/ suggests, this is not a production-quality service – but it should be usable by anyone who has access to our Jabber service. Please do try it out, and let me know how you get on.

An older post provides some interesting technical details about the configuration of this service.

September 19, 2008

Integrating Jabber web interfaces with Cosign (and other SSO technologies)

Filed under: Uncategorized — sxw @ 7:25 pm

Back in May 2007, I wrote a blog post discussing hypothetical mechanisms for integrating the JWChat web-based Jabber client with a web SSO solution such as WWW-Negotiate or cosign. In my not-so-copious spare time, I’ve been working on implementing this, and now have a working solution to share. Skip to the end of this post if you’re just interested in the code.

Web-based Jabber clients such as JWChat differ from many other web applications in that the application actually lives on the client side. JavaScript running in the user’s browser communicates with the Jabber server via a proxy, which takes XMPP (the Jabber protocol) data encapsulated in an HTTP tunnel and relays it directly to the Jabber server. For this project, I’ve concentrated on the Punjab python-based proxy server, and the HTTP binding (BOSH) encapsulation technique.

In order to perform SSO, the client must authenticate the encapsulating HTTP connection in the usual way. The proxy must then hijack the client’s authentication step with the Jabber server, and replace it with one that uses credentials delegated to it as part of the HTTP connection establishment. Our credentials are all Kerberos based, which simplifies this step hugely. Here’s a rough description of how it all works when using the WWW-Negotiate HTTP authentication mechanism (a code sketch follows the list):

  • The client makes an HTTP connection to the proxy, which is authenticated using the client’s Kerberos credentials. The proxy performs this negotiation, and stores the delegated credentials.
  • The client and the jabber server start up the XMPP connection via the proxy
  • Providing the authentication step succeeded, the proxy intercepts the server’s list of acceptable authentication mechanisms. If the server indicates that GSSAPI is acceptable, the proxy adds the SASL EXTERNAL mechanism to the list. (EXTERNAL is a trivial mechanism which just indicates that the server is prepared to accept an out-of-band authentication method.)
  • The client initiates an authentication using the EXTERNAL method
  • The proxy intercepts the first EXTERNAL authentication stanza, and replaces it with a GSSAPI stanza, created using the stored credentials that were delegated in the first step.
  • The proxy and the server continue to exchange stanzas, without involving the client, until the connection establishment either succeeds or fails.
  • The proxy returns the success or failure stanza to the client – which views this as the result of its attempt at EXTERNAL authentication.
  • Normal operation resumes
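
A standalone sketch of the proxy’s two interception points – the names are hypothetical, the real patch lives inside Punjab’s Twisted machinery, and the GSSAPI conversation itself is handled by my python-gss module:

   def mangle_mechanisms(server_mechanisms, have_delegated_creds):
       """Advertise EXTERNAL to the client when the server offers GSSAPI
       and we hold credentials delegated over the HTTP connection."""
       mechs = list(server_mechanisms)
       if have_delegated_creds and 'GSSAPI' in mechs and 'EXTERNAL' not in mechs:
           mechs.append('EXTERNAL')
       return mechs

   def intercept_auth(client_mechanism, gssapi_exchange):
       """If the client chose EXTERNAL, run the whole GSSAPI exchange
       with the server ourselves, using the stored delegated credentials,
       and hand back only the final stanza."""
       if client_mechanism != 'EXTERNAL':
           return None                     # pass the stanza through
       return gssapi_exchange()            # '<success/>' or '<failure/>'

   # e.g. mangle_mechanisms(['PLAIN', 'GSSAPI'], True)
   #      -> ['PLAIN', 'GSSAPI', 'EXTERNAL']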

However, as I’ve discussed previously, the level of client-side configuration required by WWW-Negotiate, especially when credential delegation is required, makes it difficult to deploy in a heterogeneous environment. That’s why we use Cosign as our WebSSO solution rather than using Kerberos directly, and it’s why this solution pretty much has to work with Cosign.

Cosign complicates things because it’s based on redirects, and (ultimately) on user interaction following those redirects. It also has a pretty complicated web-server-side module which can’t easily be reimplemented within the python-based Punjab (rewriting the cosign client as a Python Twisted module might be an interesting side project, but not for today). Cosign’s redirects mean that you can’t easily use it to authenticate XMLHttpRequest connections, especially POSTed ones. Instead, what we must do is use Cosign to authenticate the fetch of the HTML landing page. Once this page has been fetched and authenticated, Apache has taken care of populating the cosign cookie on the client, and has placed a ticket file containing the delegated credentials on the server.

This actually simplifies things for the proxy. Instead of having to accept a WWW-Authenticate GSSAPI handshake from the client, it just has to check for the presence of a cosign cookie. Having found the cookie, it can locate the corresponding ticket file on the server, and use that file to obtain credentials with which to authenticate to the Jabber server.
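
The check itself is only a few lines. A sketch, assuming a hypothetical layout in which the filter writes ticket files named after the cookie value under /var/cosign/proxy – the real location and cookie name are site-specific, and a real implementation must sanitise the value before using it in a path:

   import os
   from http.cookies import SimpleCookie

   TICKET_DIR = '/var/cosign/proxy'        # assumption: site-specific

   def ticket_for_request(cookie_header):
       """Map a request's cosign cookie to its Kerberos ticket file,
       or return None if the connection isn't cosign-authenticated."""
       cookies = SimpleCookie(cookie_header)
       if 'cosign' not in cookies:
           return None
       path = os.path.join(TICKET_DIR, cookies['cosign'].value)
       return path if os.path.exists(path) else None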

Code

Enough talk, now for the patches …

JWChat – or, more correctly, the JSJaC connection library that it uses – requires a couple of small patches. The first fixes a bug with session restarts which would otherwise break Punjab’s XML parser. The second adds support for using the EXTERNAL SASL authentication mechanism.

The major changes are to Punjab. This patch adds support for both cosign and WWW-Authenticate authentication (if you aren’t interested in cosign, the patch should still function correctly – it will just skip the cosign parts when no cosign cookies arrive with the incoming connection). In order to do GSSAPI authentication from Punjab, you will also need my python-gss module.

Configuration-wise, in order to use cosign, Punjab must be accessed as a proxy through a cosign-authenticated location on your web server. The JWChat HTML pages must also be served from a cosign-authenticated location which requires delegated tickets.

Additional apacheconf features

Filed under: Uncategorized — sxw @ 5:39 pm

I’ve just added lcfg/options/apacheconf-proxy.h (and the corresponding dice/options/apacheconf-proxy.h), which pulls in the relevant Apache modules to provide HTTP proxying. This joins the existing LCFG-level apacheconf feature headers:

  • apacheconf-perl.h – adds support for mod_perl
  • apacheconf-php5.h – adds the PHP5 interpreter
  • apacheconf-python.h – adds support for mod_python
  • apacheconf-rewrite.h – adds mod_rewrite
  • apacheconf-ssl.h – adds SSL support
  • apacheconf-suxec.h – adds suexec support

There are also the following DICE-only features, which contain local binaries and configuration:

  • apacheconf-cosign.h
  • apacheconf-krb5.h
  • apacheconf-ldapauthz.h

September 9, 2008

Testing

Filed under: Uncategorized — sxw @ 3:02 pm

This is a test of blog posting from my iPod using the WordPress application.

September 4, 2008

Certificate problems with the Symbian web browser

Filed under: Uncategorized — sxw @ 10:20 am

It recently came to light that none of the secure DICE web sites would load in the browser that ships with Symbian (the operating system on many Nokia, and other, mobile phones). The browser would get as far as connecting to the server, but would then reject its SSL certificate with an obscure error message.

It turns out that the problem is that the DICE certificate signing service, SIXKTS, is generating X509 certificates that, whilst valid ASN.1, are not acceptable to the Symbian browser. The problem is with the ‘parameters’ section of the algorithm identifier. The ASN.1 syntax declares this as ‘ANY DEFINED BY algorithm OPTIONAL’ – the parameter may have any type, with the type determined by the OID contained in ‘algorithm’, and it may be omitted from the certificate entirely. The RSA algorithm that we are using defines the type of ‘parameters’ as NULL.

The X509 signing service was taking the path of least resistance: it took advantage of the ‘OPTIONAL’, and didn’t include a parameters value at all. OpenSSL and NSS based X509 libraries were fine with this, but Symbian’s error message is sufficiently opaque that it doesn’t really help with tracking down the problem at all.

However, examining our certificates in Safari provided a helpful pointer. Safari’s crypto library is a little more picky, and would display a parser error when these certificates were inspected. Amending the signing service so that it includes an explicit NULL parameter, rather than omitting the field completely, resolves the parser error in Safari and allows these sites to work on Symbian.

April 23, 2008

Building an OpenAFS release for RedHat Linux

Filed under: Uncategorized — sxw @ 11:21 am

Some notes, primarily for myself, of the steps required to build an OpenAFS release for Fedora / RHEL …

  • Get the openafs-rel-src.tar.bz2, openafs-rel-doc.tar.bz2, ChangeLog and RELNOTES files from the release volume
  • Touch empty versions of ChangeLog and RELNOTES if they aren’t available upstream.
  • Extract makesrpm.pl and mockbuild.pl from openafs-rel-src.tar.bz2 with tar -xvjf openafs-rel-src.tar.bz2 \*makesrpm.pl \*mockbuild.pl and copy them to somewhere useful
  • Build the SRPM with makesrpm.pl openafs-rel-src.tar.bz2 openafs-rel-doc.tar.bz2 RELNOTES ChangeLog
  • Check that the RPM builds on the local system
  • Set up the http proxy for the mock build, if required: export http_proxy=http://localhost:3128
  • Run the build: ./mockbuild.pl --resultdir=/path/to/output/rpms centos-5-x86_64 centos-5-i386 fedora-8-x86_64 fedora-8-i386 centos-4-x86_64 centos-4-i386 fedora-7-x86_64 fedora-7-i386 fedora-6-x86_64 fedora-6-i386 fedora-5-x86_64 fedora-5-i386 openafs-rel.src.rpm
  • Wait …

April 4, 2008

Catalyst on DICE

Filed under: Uncategorized — sxw @ 5:18 pm

The web interface for our new account management system, prometheus (which I’m mainly discussing in its own blog), uses the perl Catalyst MVC framework. The initial prototyping and development of prometheus was done on a DICE FC5 machine, with most of Catalyst installed from Extras. For some reason, last week seemed like a good point to move it to its eventual home architecture on Scientific Linux.

Let us be clear: Catalyst has a lot of dependencies. And by a lot, I don’t just mean a few additional packages. On top of our standard DICE installation, an additional 73 perl modules are required to build, and use, Catalyst itself. Unfortunately for me, only 28 of those are actually available in the EPEL repository. Of the remainder, 6 were completely new to Fedora; they have now been packaged and will be submitted upstream. The last 39 are packages which exist in Fedora but which don’t have EPEL branches – at some point I should ask their maintainers about either creating those branches, or allowing me to co-maintain an EPEL fork.

All that aside, a header to add Catalyst support to a machine is available as ed/options/catalyst.h. Feel free to tell me about any problems, or further missing packages!

