Category Archives: Research

The overall aim of our research is to understand the behavioural capabilities of insects and to model these using robots. Currently there are a number of specific topics we are investigating:

Creating the Seville 2009 ant habitat

Firstly, the 10m*10m experimental area was divided into a 1m*1m grid using metallic markers. This allowed the layout of grass tussocks and bushes to be mapped onto squared paper. This mapping provided the 2D position of all vegetation but not their heights.

Seville 2009 field site with 10m*10m being marked out

Thus a database of panoramic images was collected from which the height of grass tussocks could be measured. A custom wireless panoramic camera system was constructed for this purpose (see image below). Images were captured on a 50cm*50cm grid (after alignment and levelling) to a concealed wirelessly-connected laptop (Hauppauge WinTV-HVR system). In total, 424 images were sampled within the 10m*10m test area. Images were then unwrapped (1° resolution, 360° azimuth, 0° to 45° elevation) using the OCamCalib Toolbox for MATLAB [1]. By referencing the 2D map described above, and with knowledge of the alignment of the camera, all grass tussocks visible in the panoramic image database were manually labelled. All tussocks were identified at least once, with many visible in multiple images. The outcome was a list of heights for each tussock as viewed from different perspectives (image locations). The highest point of each tussock in each image was manually recorded, and a simple trigonometric transform (possible as the distance from the camera to the centre of the tussock and the angle of elevation are known) was used to calculate the height of each tussock (see [2] for more details).
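As a rough illustration of this transform (a sketch only: the variable names are ours, and the handling of the camera's own height above the ground is an assumption rather than something taken from [2]):

// Sketch of the height-from-elevation calculation; cameraHeight is assumed known.
static double tussockHeight(double groundDistance, double elevationDeg, double cameraHeight)
{
    // highest visible point = camera height + ground distance * tan(elevation angle)
    return cameraHeight + groundDistance * Math.tan(Math.toRadians(elevationDeg));
}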

Custom wireless camera used to record the image database

The data described above provided sufficient information to construct a geometrically accurate simulated environment. To this end we utilised the Matlab-based world-building methodology described in [3], which generates natural-looking tussocks from clusters of black triangles. In our world, triangles were constrained to emerge from within the mapped tussocks and to project upwards to the mean measured height of the tussock, with some added noise. Triangles were oriented vertically, with the angle of elevation sampled from a normal distribution.

Finally, we added realistic colouring to the tussocks. The model simulates the spectral content of the world by modelling the visible-light photoreceptor of the ant eye [4], which makes up the majority of the central and lower visual field. A photographic image database of the field site, taken 5cm above the ground, was created during our 2012 field study using a specially adapted camera which could capture UV as well as the visible wavelengths of light. Using a bandpass filter (Schott BG18) with a range of 415nm to 575nm and a peak transmittance at 515nm, a grey-scale image approximates the photoreceptor response. The distribution of grey-scale values from a random selection of images was then used to calculate a mean and standard deviation for the visible-light component of the scene. In turn, these values were used to assign each polygon a random value in the green channel of RGB space.
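As a rough sketch of the two sampling steps just described (the original procedure is implemented in MATLAB; this Java paraphrase uses illustrative parameter names that are not taken from the toolbox in [3]):

import java.util.Random;

// Illustrative only: draw one triangle's apex height, elevation angle and green value
class TussockTriangleSampler
{
    private final Random rng = new Random();

    double[] sample(double meanHeight, double heightSd,
                    double meanElevationDeg, double elevationSd,
                    double meanGreen, double greenSd)
    {
        // Apex height: mean measured height of the tussock plus Gaussian noise
        double apexHeight = meanHeight + rng.nextGaussian() * heightSd;
        // Elevation angle of the vertically oriented triangle, drawn from a normal distribution
        double elevationDeg = meanElevationDeg + rng.nextGaussian() * elevationSd;
        // Green-channel value drawn from the distribution measured through the BG18 filter
        double green = Math.min(1.0, Math.max(0.0, meanGreen + rng.nextGaussian() * greenSd));
        return new double[] { apexHeight, elevationDeg, green };
    }
}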

The complete world is generated from 5000 triangles randomly distributed across the 234 tussocks, which gave a good trade-off between view authenticity and the speed of image rendering. The final 3D environment plus rendering code can be downloaded at AntNavigationChallenge.

References

[1] Scaramuzza, Davide, Agostino Martinelli, and Roland Siegwart. “A toolbox for easily calibrating omnidirectional cameras.” IEEE/RSJ International Conference on Intelligent Robots and Systems (2006).

[2] Mangan, Michael. “Visual homing in field crickets and desert ants: A comparative behavioural and modelling study.” Ph.D. Thesis, University of Edinburgh (2011).

[3] Baddeley, Bart, Paul Graham, Philip Husbands, and Andrew Philippides. “A model of ant route navigation driven by scene familiarity.” PLoS Comput Biol 8, no. 1 (2012): e1002336.

[4] Mote, Michael I., and Rüdiger Wehner. “Functional characteristics of photoreceptors in the compound eye and ocellus of the desert ant, Cataglyphis bicolor.” Journal of comparative physiology 137.1 (1980): 63-71.


AntBot: Tweaking the modules

The app should be in a working state, so now we shall do some very simple edits of the code.

To set this up properly, instantiate all 4 example modules as described in the previous tutorial, and call the Haferlach PI “HF” when registering it.

Tweaking the Combiner (Adjusting Weightings)

If you open up the WeightedCombiner you will see that it works by taking the mean of all 4 modules; we’re going to edit this to give more preference to the Trig Path Integrator.

The combiner consists of a single function ‘nextCommand’, which works by calling the ‘getOpinion’ method on each module and storing their headings. It then performs some collection of operations on these to create its own heading, which is returned.
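For reference, the start of ‘nextCommand’ looks roughly like this (a sketch; it assumes, as the weighting code below does, that each opinion is an {angle, distance} pair):

// Collect the current opinion of each registered module (names as registered in the previous tutorial)
double[] PI = navigationModules.get("PI").getOpinion();
double[] VH = navigationModules.get("VH").getOpinion();
double[] SS = navigationModules.get("SS").getOpinion();
double[] HF = navigationModules.get("HF").getOpinion();
// index 0 is the angle and index 1 the distance; the combiner then merges these into its own heading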

In this case, the combiner simply takes the average of all 4 modules’ headings and returns this. Currently, the average gives an equal weighting to each – we want this to be skewed, so jump to lines 28 and 29 and change the weighting of the modules from 0.25 on all to 0.7 on PI and 0.1 on the others.


double wAngle = PI[0] * 0.7 + VH[0] * 0.1 + SS[0] * 0.1 + HF[0] * 0.1;
double wDist = PI[1] * 0.7 + VH[1] * 0.1 + SS[1] * 0.1 + HF[1] * 0.1;
double[] weightedHeading = { wAngle, wDist };

Now we have a weighted combiner that favours the trig PI. This is in essence the basics of how a combiner works – but what if we want to condition the weightings on the status of the modules?

Further Tweaking of the Combiner (Examining the state of modules)

Each module has a method called ‘getStatus()’ which returns a string describing what it is doing. Looking at the source code for a module will tell you what the format of this string is.

In this case, we want to condition the weighting on the distance from the nest – as such we need to look at the status of the trig path integrator, which is a string of the format <antbotOrientation> <angleFromNest> <distanceFromNest>.

We can retrieve and split this into tokens with:


String[] PI_status = navigationModules.get("PI").getStatus().split("\\s+");
double nestDistance = Double.valueOf(PI_status[2]);

Now that we know the nest distance, we can simply surround the weighting with an if statement:


double wAngle;
double wDist;

if (nestDistance > 0.5)
{
    // Far from the nest: give all four modules an equal say
    wAngle = PI[0] * 0.25 + VH[0] * 0.25 + SS[0] * 0.25 + HF[0] * 0.25;
    wDist = PI[1] * 0.25 + VH[1] * 0.25 + SS[1] * 0.25 + HF[1] * 0.25;
}
else
{
    // Close to the nest: favour the trig path integrator
    wAngle = PI[0] * 0.7 + VH[0] * 0.1 + SS[0] * 0.1 + HF[0] * 0.1;
    wDist = PI[1] * 0.7 + VH[1] * 0.1 + SS[1] * 0.1 + HF[1] * 0.1;
}


Tweaking Visual Homing

To give a basic example of how tweaking a module would work, we shall change the type of pixelwise error calculation that the visual homing module uses.

Visual Homing works by taking in frames from the camera sensor and calculating a pixel-wise error between the current frame and the stored home frame – this is done with the “calculateError()” function.

Open up VisualHoming and declare a new function “sumSquare()”:


private double sumSquare()
{
    // Get difference
    Core.subtract(currentFrame, homeFrame, diffFrame);

    // Get square
    Core.multiply(diffFrame, diffFrame, sqFrame);

    // Get sum of the array
    Scalar sum = Core.sumElems(sqFrame);
    double[] errorArr = sum.val;
    double ss = errorArr[0] / 2;

    return ss;
}

Now go into calculateError() and replace “currentError = rootMeanSquare();” with “currentError = sumSquare();”.

Just like that we have modified the module.

AntBot: Setting up the App

Now that you have the projects set up and ready to start development, it is time to examine the code itself.

This post will cover ‘wiring up’ an App with the pre-packaged combiners and navigation modules.

What is Pre-Packaged?

The AntBot codebase comes with 2 combiners and 4 navigation modules. The ‘Simple Combiner’ ignores all modules apart from one and issues that module’s heading as the path to be followed. The ‘Weighted Combiner’ takes the average angle and distance of all 4 modules and submits that.

Two of the navigation modules are path integrators: one works through basic trigonometry and one is based on Haferlach’s model. The other two are a visual homing module, based on pixel-wise error and the ‘Run-Down’ algorithm, and a simplistic systematic search generator.
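To give a flavour of what ‘path integration through basic trigonometry’ means, the sketch below accumulates each movement into a home vector (an illustration of the principle only, not the code of the actual module):

// Illustration only: trigonometric path integration keeps a running home vector
class PathIntegrationExample
{
    private double x = 0, y = 0; // current position relative to the nest

    void step(double headingRad, double distance)
    {
        x += distance * Math.cos(headingRad);
        y += distance * Math.sin(headingRad);
    }

    double[] homeVector()
    {
        // angle and distance back to the nest, the same {angle, distance} form the combiners work with
        return new double[] { Math.atan2(-y, -x), Math.hypot(x, y) };
    }
}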

Setup

First, open MainActivity and declare the modules and combiner that you want to use at the top of the class (of the two path integrators, we will use the trig one for now):

WeightedCombiner weightedCombiner;
VisualHoming visualHoming;
TrigPathIntegration trigPathIntegration;
SystematicSearch systematicSearch;

The method ‘onCreate’ holds the actual setup code. First the combiner is instantiated and registered with the framework:

weightedCombiner = new WeightedCombiner();
setCombiner(weightedCombiner);

Now the modules are set up: each module must be instantiated, subscribed to its feeds, and registered with the combiner (where it is given a name so it can be identified). For example:

visualHoming = new VisualHoming();
cameraFragment.addSubscriber(visualHoming);
weightedCombiner.addModule("VH", visualHoming);

The trig path integrator has to be subscribed to the controller fragment and should be called “PI”. The systematic search generator should be subscribed to the trig path integrator and the controller fragment. It should be called “SS”.
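Following the same pattern as the visual homing example above, the wiring might look like this (a sketch; check the actual fragment and class names in MainActivity):

trigPathIntegration = new TrigPathIntegration();
controllerFragment.addSubscriber(trigPathIntegration);
weightedCombiner.addModule("PI", trigPathIntegration);

systematicSearch = new SystematicSearch();
trigPathIntegration.addSubscriber(systematicSearch);
controllerFragment.addSubscriber(systematicSearch);
weightedCombiner.addModule("SS", systematicSearch);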

If data is to be sent to the server the Network Fragment must be subscribed to the modules which will supply it with said information:

trigPathIntegration.addSubscriber(networkFragment);
visualHoming.addSubscriber(networkFragment);

Finally, a further step is needed for any modules that make use of OpenCV (such as the Visual Homing module). Below onCreate is another method named onOpenCVLoaded – it is recommended that all vision-based modules be placed into a dormant state until the library is loaded, after which they can be ‘woken up’ (this stops function calls being made to non-existent libraries):

visualHoming.enableVisualHoming();
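In other words, the call above goes in the body of that method, along these lines (assuming onOpenCVLoaded takes no arguments – check its actual signature in MainActivity):

public void onOpenCVLoaded()
{
    // OpenCV is now available, so it is safe to wake up the vision-based modules
    visualHoming.enableVisualHoming();
}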

This is all that setup consists of – pressing the Run app button should load the app onto the phone ready for use.

Swapping Modules

To swap modules in and out you simply need to comment out/delete their code. If you wanted to replace the trig path integrator with the Haferlach path integrator, you would simply have to do the following (remembering to declare haferlachPathIntegration at the top of the class alongside the other modules):

//trigPathIntegration = new TrigPathIntegration();
//controllerFragment.addSubscriber(trigPathIntegration);
//weightedCombiner.addModule("PI", trigPathIntegration);

haferlachPathIntegration = new HaferlachPathIntegration();
controllerFragment.addSubscriber(haferlachPathIntegration);
weightedCombiner.addModule("PI", haferlachPathIntegration);

Having said that, the system is of course module agnostic, so it is more than possible to have the two modules coexist happily (but one of them would have to be called something that isn’t PI).
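For example, to keep the trig module registered as “PI” and add the Haferlach module alongside it, register the latter under a different name (the name “HPI” here is just an example):

haferlachPathIntegration = new HaferlachPathIntegration();
controllerFragment.addSubscriber(haferlachPathIntegration);
weightedCombiner.addModule("HPI", haferlachPathIntegration);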

The weighted combiner could be swapped for the simple combiner in the same way, but only one combiner can be used at a time.


Now we have the app working, let’s tweak the modules!

AntBot: Setting up the Development Environment

This guide expects you to have basic familiarity with Eclipse.

Download the project from GitHub

https://github.com/InsectRobotics/antbot

Download Android Studio, Eclipse & Arduino IDE

To begin with, download Android Studio and follow the instructions on the site – on Linux you may need to install extra packages, so this step is particularly important there.

To compile the AntBotServer code you will also need Eclipse – specifically the ‘Eclipse IDE for Java Developers’.

After downloading both, run them for the first time – Android Studio in particular may need to set up Android SDKs (for reference, AntBot is built for Android 4.4 and higher).

It should be noted that whenever I use the term ‘workspace’ within this guide, it can refer either to a single folder shared by both programs or to separate ones.

The Arduino IDE can be found here. Like the above, it will use a ‘workspace’ folder, which it calls ‘sketches’. The Arduino IDE requires no further setup beyond installation.

Dependencies

There are two libraries that the AntBot app and AntBotServer depend on: USBSerialForAndroid (this is actually included in the source code) and OpenCV – both the desktop and Android versions.

The Arduino code requires a library to monitor the wheel encoders called Encoder.

Set up OpenCV

Compiling OpenCV (hopefully optional)

Unless you are using Windows, this step is required for the desktop version – I have included a compiled version of the library for Linux, which can be found under AntBotServer/opencv3.0.0/ubuntu-14.04. Hopefully this will work; if not, or if you want to compile it yourself, this guide will demonstrate how. If it does work, skip ahead to setting OpenCV up as a user library.

You will need the OpenCV source (from the above link) and CMake. Once you finish the ‘Build’ section of the guide, the desired files will be opencv-3.0.0/bin/opencv-300.jar and opencv-3.0.0/lib/libopencv_java300.so.

Now that there is a compiled version of OpenCV, you can set it up as a user library in Eclipse with this guide. It should be noted that where the guide tells you to set the native library path to ‘OpenCV-2.4.6/build/java/x64’, it should actually be ‘OpenCV-2.4.6/lib’.

Installing OpenCV on the Android Device (hopefully optional)

Assuming you are testing with an Android device, you will need to load the OpenCV library onto it. Within the OpenCV Android zip there will be a folder called ‘apk’ – take the contents and follow this guide.

Importing OpenCV as an Android Module

Before you use the OpenCV Android library you will need to have it set up as a project in your workspace. Doing this is relatively simple – place the OpenCV Android library in your workspace folder, then either go to File -> Import Project or choose Import Project from the splash screen, and select the OpenCV-android-sdk folder from your workspace.

Import Projects

Now that the requirements are in place, simply import AntBot into Android Studio and AntBotServer into Eclipse using the same method as above. Android Studio may need to download a different version of the Android SDK to run – allow it to do so. Eclipse will require you to add the OpenCV user library to AntBotServer.

Allow USB Debugging

If you are just getting started or are doing this on a new computer, you will need to authorise USB debugging on your computer – when plugging in the Android device this should automatically appear as a pop-up. If not, follow this guide.

Now it’s time to set up the app!

AntBot: Make Your Own AntBot

antbot_side_view


Above we can see AntBot ‘in the flesh’. Ignoring the Lego framework that supports the camera, to assemble your own AntBot you need the following*:

component


*You will also need some standard-issue male-to-female copper wires to connect the boards, as well as a serial-to-USB cable to connect the Arduino to the smartphone.

The prices quoted are taken from Proto-Pic at the time of writing and as such it may be prudent to shop around in case these have changed. While a Nexus 5 is listed, the framework can theoretically be deployed on any Android smartphone with a sufficient level of processing power.


To connect the chassis to the motor driver board, simply connect as shown in the image below. Each ‘channel’ corresponds to a wheel, i.e. Channel 1 -> Wheel 1, etc.

components_chassis

To connect the Arduino to the motor driver board, connect like so:

components_arduino


The colours correspond to the channels in the picture above.


First step done, now it’s time to calibrate the chassis!

RoboAnt: Build your own Android robot

Nowadays smartphones are affordable, compact and capable computers. Mike had the ingenious idea that they could make a perfect robot brain. Packed with computing power and useful sensors, the one thing they can’t do (I think) is control external analog components – like motors. This is where the Arduino comes in. The hugely popular embedded platform has tons of accessories built by its restless community. One of them – the Zumo Shield from Pololu – is the final ingredient we need. Stir… and RoboAnt is born.

Here is how to make your own.

Continue reading RoboAnt: Build your own Android robot

Methodological issues

The essence of our methodology is to use robots as models of biological systems. We usually refer to this as “Biorobotics” (although the terminology in this field is not fixed). An important feature is that our principal focus is on understanding the biology, using robotics as a tool, rather than on trying to improve robotics or address specific robot applications. However, the fact that biological systems are capable of many things that we would like robots to be capable of – such as adaptive interaction with real complex environments to achieve tasks robustly – means that results of the work are likely to have some benefit for robotics as well.

Nevertheless, our main motivation is to understand brains and behaviour. We focus on insect systems because there is a better chance that we can understand and model the complete processing loop, at a neural level, in these ‘simpler’ systems. We use robots because this forces us to consider the whole loop, including the physics of interaction with the environment.

As this is a relatively novel methodology, it is important to understand how it fits into scientific explanation, and this has been the focus of some of our research. For example, any scientific modelling requires decisions about abstraction, about the level of mechanisms to model, about the scope of systems to be explained, about the accuracy with which mechanisms will be reproduced, about the medium that will be used to implement the model, and about the criteria for evaluation.

Key publications

  • Webb, B (2000) What does robotics offer animal behaviour? Animal Behaviour, 60, 545-558 (pdf preprint)
  • Webb, B. (2001) Can robots make good models of biological behaviour? Target article for Behavioural and Brain Sciences 24 (6) 1033-1050 (html preprint)
  • Webb,B. and Consi, T.  eds. (2001)  Biorobotics: methods and applications AAAI Press
  • Webb, B. (2002) Robots in invertebrate neuroscience. Nature 417:359-363 (pdf preprint)
  • Webb, B. (2006) Validating biorobotic models. Journal of Neural Engineering 3 R25-R35 doi:10.1088/1741-2560/3/3/R01 (pdf preprint)