thisismyrobot

Littered with robots and Python code

A new Android app!

Yes, I did say I wasn't going to write any more Android apps, but there's a really good reason this time :)

At work a couple of weeks ago, two of my co-workers were inventorying a large quantity of stock that had just arrived. They were hoping to scan the barcodes for each item into a simple CSV file. Their first thought was, obviously, "there's an app for that". Turns out there wasn't. There are hundreds of barcode-scanning and inventory apps available, but none that simply scanned to a CSV list of barcodes and then allowed that CSV data to be emailed, saved, etc.

So yesterday, after four hours' work, I can now say there is such an app. Stock Scanner isn't pretty, nor feature-packed, but it exactly fulfils the above requirement.


Stock Scanner is available in a limited-scans free version, or a very cheap paid version, on Google Play (formerly the Android Market).

Bucket-brigading neural networks

I've recently been playing around with some Python code to explore a hunch I've had for a couple of years: that you can train a feed-forward neural network by simply indicating whether an output in response to an input was "good" or "bad".

I'd always imagined that I would hook up a small robot with an embedded neural network, giving myself a remote control with a button like this:


The robot would rove around, and whenever it did something "bad" (e.g. ran into a wall that it should have registered on its sensors) I'd press the button and it would train itself using that "bad" input->output pairing - e.g. that "move forward" when the front sonar sensor is registering an obstruction is "bad". I could also have a "good" button to reinforce correct behaviours - for instance, if it turned just before reaching a wall.

This appealed to me as it was also very similar to how I (attempt to) train our cat...

Yes, that is our cat. No, that was not a training session...
Anyway, I have migrated this hunch to the GitHub repository BadCat. It has taken a few twists and turns along the way, but I have been able to "train" some very elementary neural networks using a simple set of rules based on the original hunch. I also ended up taking a few pointers from genetic algorithms theory, just for fun.

The algorithm works in the following way:

  1. Read the "sensors"
  2. Apply sensor readings to a learning tool (neural network), get the output
  3. Try out the output "in the real world"
  4. If the result of trying out the output is "bad":
    1. Slightly mutate the output
    2. Goto 3 above
  5. Train the network with the resultant (original or mutated) output
The mutation amount increases the longer the output stays "bad", based on the assumption that the original output will already be close to the desired one, while still allowing the output to change dramatically if the robot is stuck in a new situation. The "good" input->output pairs form part of a fixed-length queue of recent memory that is used for regular training.
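The steps above can be sketched in a few lines of Python. This is only my own minimal illustration, not the BadCat code - the network object, sensor reader, trial function and "bad" check are all placeholder names I've invented:

```python
import random

def mutate(output, amount):
    """Nudge each output value by up to +/- amount."""
    return [o + random.uniform(-amount, amount) for o in output]

def learn_step(read_sensors, network, try_output, is_bad, step=0.05):
    """One pass of the bucket-brigade-style loop described above."""
    inputs = read_sensors()                # 1. read the "sensors"
    output = network.activate(inputs)      # 2. get the network's output
    amount = step
    while is_bad(try_output(output)):      # 3-4. trial the output in the world
        output = mutate(output, amount)    # 4.1 slightly mutate the output
        amount += step                     # mutation grows while still "bad"
    network.train(inputs, output)          # 5. reinforce the final pairing
    return inputs, output
```

The growing `amount` is the key detail: the first mutations barely move the output, but a robot stuck in a genuinely new situation gets progressively wilder guesses until something works.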

This approach is similar to the "bucket brigade" rule-reinforcement technique that can be used to train expert systems. It is also not dissimilar to reinforcement learning principles, except that the observation-action-reward mechanism is implicit instead of being explicit - the action is the output generated based on the observation and the weighting of the neural network and the reward (or penalty) is externally sourced and applied to the network only when needed.*

I am looking forward to trying this out on a real mobile robot as soon as I can order my Pi, and I will keep you up-to-date on how it turns out.

* Oh, and just to be clear, I am not a robotics or AI PhD student and this is not part of a proper academic research paper. It is very likely that what I am doing here has been done before, so I make no claim to extraordinary originality or breakthrough genius - just consider this some musings and a pinch of free Python code :)

Some small Python scripts


So ... that's not quite the "picture of a robot" I was intending to lead this post off with :-)

Strictly speaking, the 'R' in the image above represents a "robot" in the very simple mobile robot simulator that I just developed. RoboSim is written in Python and allows a developer to include a very rudimentary 2D simulator in their project - for instance, to test a neural network or genetic algorithm. The robot can rotate on the spot in 45° increments as well as move forwards and backwards. Maps are defined as simple nested lists, with internal "walls" defined for areas that cannot be traversed. The robot is fitted with two front bumper switches that are triggered depending on what the robot is pressed against. RoboSim is available on GitHub, and may receive the odd tweak here and there in the future, although it has served its purpose in another project already.
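To give a feel for the nested-list map idea, here's a toy version in the same spirit. The cell values and helper are my own invention for illustration - the actual RoboSim format lives in the repository:

```python
# A nested-list map: '#' marks a wall the robot cannot traverse,
# '.' marks open floor. (These markers are illustrative, not RoboSim's.)
MAP = [
    ['#', '#', '#', '#', '#'],
    ['#', '.', '.', '.', '#'],
    ['#', '.', '#', '.', '#'],
    ['#', '.', '.', '.', '#'],
    ['#', '#', '#', '#', '#'],
]

def is_blocked(grid, row, col):
    """True if the cell is a wall or off the edge of the map."""
    if not (0 <= row < len(grid) and 0 <= col < len(grid[0])):
        return True
    return grid[row][col] == '#'
```

A bumper switch then reduces to a call like `is_blocked(MAP, row, col)` for the cell the robot is pressing against.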

My other project is probably going to keep me going for a little while longer, at least until my Raspberry Pi(s) arrive... The project was born out of a hope to combine a couple of them together for a seriously powerful mobile robot. I really wanted to use one for nothing but OpenCV video processing and another for navigation planning etc. What I really didn't want to do was to be constantly swapping between each Pi to upload new code as I tried out different ideas.

Then it occurred to me: wouldn't it be nice if I could just get one or more Pis to act as "dumb" nodes, running arbitrary Python code provided to them by a "master" Pi...

A couple of days' programming later, the newly GitHub'd project, DisPy, does exactly this. The README explains it better but essentially, instead of instantiating classes normally, I use a wrapper class to perform the instantiation. Behind the scenes the class's source code is copied over the network to a "node" machine, the class is instantiated on that node, and all the local copy's methods and members are replaced by stubs that perform XML-RPC calls back to the "node".

The result is that method calls and member access happen transparently over XML-RPC, allowing for the runtime offloading of arbitrary code to one or many Pis (or anything else that can run Python).

The code is all contained in one module and has minimal dependencies. Hopefully it works on other OSes, but I haven't tested it on anything other than Ubuntu 11.10 yet. Please fork it, break it and have a play - I'd love your feedback on this one!



A few changes and an exciting future

Tomorrow morning I will begin a new job and more importantly, a different direction in my career.

As you can tell from the history of this blog I have always had a passion for robotics and other embedded hardware systems. Graduating with a Bachelor of Computing, instead of Engineering, has obviously limited my job prospects in these more hardware-oriented fields. As a consequence, for the last five or so years I have been employed primarily as a web application developer with occasional forays into desktop application and embedded hardware development.

This all changed four weeks ago when I received an offer of employment at a local electricity generation business. I will be taking on a role assisting with developing, administering and supporting their Energy Management System. This will involve working with complex, hardware-oriented SCADA systems. I am extremely excited about this new role and the learning opportunities it will offer, and I have decided it is time to adjust my non-employment priorities too.

These adjustments will have the greatest effect on my Android application development. I will still continue to bug-fix existing applications and I may even develop a few more new applications, but this will now be a low priority - a couple of hours a month. I've enjoyed working with this platform greatly but, frankly, I am not willing (with this new role) to put the time and effort in to turn this into a self-supporting business, and it doesn't make enough money to continue in a half-hearted manner.

The good news is that as a consequence of the above I intend to spend a lot more time on my embedded hardware/hobby-robotics projects. I've already been working on some as-yet undocumented projects and I would like to blog about these as they reach milestones and conclusions.

Thank you for indulging me in a personal post, I look forward to a picture of a robot leading my next one! :)

Video review of Sythe by content3300

I just came across this video by the YouTube user content3300, showing Sythe in action. It appears to be an entry for a competition, but it shows all the features quite well. Thanks content3300!


Distributed tournaments for the Google AI Challenge

As I noted a couple of posts ago, I am taking part in the Google AI Challenge again this year (my entry). The challenge this year is Ants, a game which requires entries (agents) to control a number of ants in an environment made up of land, water, food and enemy ants.

The design of my agent is fairly simple, and it has a large number of adjustable parameters (e.g. the distance between an enemy ant and my base that is considered a "threat"). This made it a perfect candidate for trialling some Genetic Algorithms (GA) theory to tune those parameters, as well as to evaluate some algorithmic design decisions.

To start using GA one must generate an initial batch of solutions to the problem. This is currently in the form of 12 versions of my agent.

Once an initial set of solutions has been generated, the next step is the evaluation of the fitness of each solution. Each agent I design is a different "solution" to the problem of being the best agent - the best agent is the fittest.

I decided the simplest way to evaluate the fitness of each agent is for it to compete against other agents that I have made, and sample agents, in the standard game format that is used on the official servers.

As I have a number of laptops and computers, none of which is super-powerful, I decided to try to make a distributed tournament system so that I could play as many games as possible to get the best idea of fitness. My setup is as follows:

  • Each machine is running Ubuntu 11.10, with Dropbox installed. The game's Dropbox folder contains the game engine, maps and all the agents that are currently being tested.
    • This allows for new agents to be added at any point and all machines to be immediately updated.
  • Each machine continuously:
    1. Selects a random map
    2. Selects a random set of agents to play on that map
    3. Plays the game
    4. Writes the score to a file named after its host name - e.g. "log-ubuntubox.txt". These files are also in the Dropbox folder.
  • Any machine can run a shared script that aggregates the results from all log-*.txt files, computing the average points/game for each agent. This is used as the fitness.
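A minimal version of that aggregation step might look like the following. The `agent,points` line format is an assumption I'm making for illustration - my real logs may differ:

```python
import glob
from collections import defaultdict

def aggregate(pattern='log-*.txt'):
    """Average points per game for each agent, across every machine's log.
    Assumes each log line is 'agent_name,points' (an illustrative format)."""
    totals = defaultdict(lambda: [0.0, 0])  # agent -> [point sum, game count]
    for path in glob.glob(pattern):
        with open(path) as f:
            for line in f:
                name, points = line.strip().split(',')
                totals[name][0] += float(points)
                totals[name][1] += 1
    return {name: s / n for name, (s, n) in totals.items()}
```

Because Dropbox syncs every machine's `log-*.txt` into the same folder, any box can compute the fleet-wide fitness without talking to the others.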
Because I am using Python 2.7 (installed by default on Ubuntu 11.10) for the game engine, agents and extra scripting, provisioning a new machine is this simple:
  1. Install Ubuntu
  2. Install Dropbox
  3. Run "python play.py"
So far this is working quite well, with quite dramatic and unexpected performance differences between some nearly identical agents. Once each agent has played at least 30 games, I will remove some of the lowest-scoring agents and add some new versions that combine the most successful traits.

With any luck this should result in a pretty competitive entry in this year's Challenge - I will keep you posted!

Milestones

I just had a look at my Market stats and I've just hit a couple of milestones:
  • More than 100 ratings of Sythe Free (average 4.3/5)
  • More than 10,000 active users of Sythe Free
  • More than 25,000 downloads of Sythe Free
If only the paid version was going so well... :-)

Sythe update released

Just a quick one - I've just released an update to Sythe to fix:
  • Never-ending playback after closing Sythe
  • Incorrect step between octaves
  • Incorrect octave start/finish
  • Mis-match between note and frequency when switching modes
Thanks for the patience with this one, guys - I've gotten totally bogged down in the 2011 Google AI Challenge (a greater time-sink than Skyrim...)

Sythe 1.3 is now available on the Android Market for free or very, very cheap.

Sneak peek


You are looking at the main screen of an early version of my next app - a high-quality drum synthesiser. Currently it mixes 3 sine-wave sources with independent frequencies, amplitudes and ADSR envelopes.
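For the curious, mixing enveloped sine sources boils down to a small amount of maths. This is a generic sketch of the idea, not the app's code - a linear ADSR envelope scaling a sine wave, summed per source:

```python
import math

def adsr(t, attack, decay, sustain, release, length):
    """Linear ADSR envelope amplitude at time t; sustain is a level in 0-1,
    the other arguments are durations in seconds."""
    if t < attack:
        return t / attack                                   # ramp up
    if t < attack + decay:
        return 1.0 - (1.0 - sustain) * (t - attack) / decay # fall to sustain
    if t < length - release:
        return sustain                                      # hold
    if t < length:
        return sustain * (length - t) / release             # ramp down
    return 0.0

def mix_sample(t, sources):
    """Sum of enveloped sine sources at time t.
    Each source is (frequency_hz, amplitude, adsr_args)."""
    return sum(amp * adsr(t, *env) * math.sin(2 * math.pi * freq * t)
               for freq, amp, env in sources)
```

Evaluating `mix_sample` at each sample instant (e.g. every 1/44100 s) and writing the results to an audio buffer gives the drum hit; three such sources with independent settings match the description above.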

Oh, and yes, it'll use my minimalist red-on-black UI again :-)

Warm fuzzy feelings

As I mentioned a little while ago, my last update to Sythe was released into two apps - one free and one for the lowest price available in each currency.

With the exception of their names, the difference between these two apps is zero. I didn't even employ any tricky marketing; both apps openly refer to each other and both clearly state that they are the same app.

And you know what, people have bought the paid version!

Thanks guys, you've re-inspired me and I am already working on some new Android apps - watch this space...