Sending Data from Arduino to ROS

For a while now I have been following Willow Garage’s open source ROS (Robot Operating System). It just celebrated its third anniversary, showing impressive growth: more and more universities and companies are using and extending ROS (http://www.willowgarage.com/blog/2010/11/08/happy-3rd-anniversary-ros). Furthermore, ROS is beginning to enter the hobby market.

As a first step to get familiar with ROS I decided to create a simple package that would allow me to receive lines of text from the Arduino board and publish them into ROS. Andrew Harris has already published a package for Arduino (http://www.ros.org/news/2010/01/arduino-and-cmucam3-support-for-ros.html) that uses an efficient binary protocol to communicate between Arduino and ROS. I might eventually use it but nothing beats writing a bit of software yourself when it comes to understanding a new software infrastructure.

The source code of my simple Arduino playground package is available on Google Code at http://code.google.com/p/drh-robotics-ros/source/browse/?r=60#svn%2Ftrunk%2Fros%2Fplayground. If you want to play with it I suggest that you check out revision 60 of the source code into a folder under your home directory. In order for ROS to see the new package, its path needs to be added to the ROS_PACKAGE_PATH environment variable. This is easily accomplished by adding a line at the end of the .bashrc file in your home folder. In my case the line reads:

export ROS_PACKAGE_PATH=~/dev/drh-robotics-ros/ros/playground:$ROS_PACKAGE_PATH

In your case the path to the playground folder might be different. To test it out we need an Arduino board with a sketch that periodically writes a string followed by a newline to the serial port like so:

Serial.println("Hello World");
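A complete test sketch is only a few lines. For example (the 115200 baud rate matches the default used by the Python node below):

void setup()
{
  Serial.begin(115200);   // must match the baud rate configured in the ROS node
}

void loop()
{
  Serial.println("Hello World");   // println appends "\r\n"
  delay(100);                      // roughly ten lines per second
}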

The Python code of the Arduino ROS node consists of two files. The entry point is the file /ros/playground/nodes/arduino.py:

#!/usr/bin/env python
'''
Created on November 20, 2010

@author: Dr. Rainer Hessmer
'''

import roslib; roslib.load_manifest('playground')
import rospy
from std_msgs.msg import String
from playground.msg import Encoder

from SerialDataGateway import SerialDataGateway

class Arduino(object):
    '''
    Helper class for communicating over serial port with an Arduino board
    '''

    def _HandleReceivedLine(self, line):
        rospy.logdebug(line)
        self._Publisher.publish(String(line))

    def __init__(self, port="/dev/ttyUSB0", baudrate=115200):
        '''
        Initializes the receiver class.
        port: The serial port to listen to.
        baudrate: Baud rate for the serial communication
        '''

        self._Publisher = rospy.Publisher('serial', String)
        rospy.init_node('arduino')

        self._SerialDataGateway = SerialDataGateway(port, baudrate, self._HandleReceivedLine)

    def Start(self):
        rospy.logdebug("Starting")
        self._SerialDataGateway.Start()

    def Stop(self):
        rospy.logdebug("Stopping")
        self._SerialDataGateway.Stop()

if __name__ == '__main__':
    arduino = Arduino("/dev/ttyUSB0", 115200)
    try:
        arduino.Start()
        rospy.spin()

    except rospy.ROSInterruptException:
        arduino.Stop()

In the __main__ block at the bottom of the source file an instance of the Arduino class is created with the appropriate port and baud rate. In the constructor the publisher for the ROS topic ‘serial’ is created and the node is initialized. Finally an instance of the helper class SerialDataGateway is created (more info below). Once this is done the Arduino instance is started. The call to rospy.spin() ensures that the program keeps running.

The helper class SerialDataGateway is responsible for receiving data via the serial port and assembling the data into individual lines. When it receives a ‘\n’ it calls into the line handler function that was passed into the constructor. The class uses a worker thread which in the current usage pattern is not really necessary. But it allows me to enhance the node later to support read and write operations in parallel.

#!/usr/bin/env python
'''
Created on November 20, 2010

@author: Dr. Rainer Hessmer
'''
import threading
import serial
from cStringIO import StringIO
import time
import rospy

def _OnLineReceived(line):
    print(line)

class SerialDataGateway(object):
    '''
    Helper class for receiving lines from a serial port
    '''

    def __init__(self, port="/dev/ttyUSB0", baudrate=115200, lineHandler=_OnLineReceived):
        '''
        Initializes the receiver class.
        port: The serial port to listen to.
        lineHandler: The function to call when a line has been received.
        '''
        self._Port = port
        self._Baudrate = baudrate
        self.ReceivedLineHandler = lineHandler
        self._KeepRunning = False

    def Start(self):
        self._Serial = serial.Serial(port=self._Port, baudrate=self._Baudrate, timeout=1)

        self._KeepRunning = True
        self._ReceiverThread = threading.Thread(target=self._Listen)
        self._ReceiverThread.setDaemon(True)
        self._ReceiverThread.start()

    def Stop(self):
        rospy.loginfo("Stopping serial gateway")
        self._KeepRunning = False
        time.sleep(.1)
        self._Serial.close()

    def _Listen(self):
        stringIO = StringIO()
        while self._KeepRunning:
            data = self._Serial.read()
            if data == '\r':
                # ignore carriage returns; lines are terminated by '\n'
                pass
            elif data == '\n':
                self.ReceivedLineHandler(stringIO.getvalue())
                stringIO.close()
                stringIO = StringIO()
            else:
                stringIO.write(data)

    def Write(self, data):
        info = "Writing to serial port: %s" % data
        rospy.loginfo(info)
        self._Serial.write(data)

if __name__ == '__main__':
    dataReceiver = SerialDataGateway("/dev/ttyUSB0", 115200)
    dataReceiver.Start()

    raw_input("Hit <Enter> to end.")
    dataReceiver.Stop()

The Arduino class passes its ‘_HandleReceivedLine’ function into the SerialDataGateway constructor. As a result, whenever a new line is received, the _HandleReceivedLine function in the Arduino class is called:

def _HandleReceivedLine(self, line):
    rospy.logdebug(line)
    self._Publisher.publish(String(line))

The function simply logs the line and then publishes it as a string. In a real-world scenario I would modify the function to parse the line and extract, say, pose data (x, y, and angle). Correspondingly a pose message rather than a simple string message would be published.
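Purely as an illustration, such a handler might look like the following sketch. The tab-separated ‘x y theta’ line format and the ‘pose’ publisher are hypothetical; only the logging line is taken from the actual code:

from geometry_msgs.msg import Pose2D

def _HandleReceivedLine(self, line):
    rospy.logdebug(line)
    try:
        # hypothetical line format: tab-separated x, y, theta; e.g. "0.5\t1.2\t0.03"
        x, y, theta = [float(token) for token in line.split('\t')]
    except ValueError:
        rospy.logwarn("Could not parse line: %s" % line)
        return
    # assumes self._PosePublisher = rospy.Publisher('pose', Pose2D) in __init__
    self._PosePublisher.publish(Pose2D(x, y, theta))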

Finally we are ready to try out the new node. The required steps are as follows:

  • Hook up an Arduino board running a sketch that causes a line to be written to the serial port periodically.
  • In a new terminal run:
    roscore
  • In another new terminal run:
    roscd playground
    make
    rosrun playground arduino.py

Provided the port and baud rate specified in arduino.py match your setup, you should now have a running ROS Arduino node. Next we want to tap into the published topic.

  • In a new terminal subscribe to the ‘serial’ topic:
    rostopic echo /serial
    You should see the published lines fly by in this terminal.
  • To get a graphical representation of what is running start rxgraph in yet another terminal. The resulting graph should look similar to this:

Screenshot-rxgraph

On the left is the /arduino node with the exposed /serial topic. The node in the middle is the result of the rostopic app that we started. It subscribes to the /serial topic and displays the values in the terminal.

This concludes my first baby step into the world of ROS.

Monocular Visual Odometry

For a while now I have been looking for ways to use (computer) vision to get odometry information. This is by no means a new concept. Here are some examples (not a comprehensive list):

  • NASA used visual odometry on Mars: Two Years of Visual Odometry on the Mars Exploration Rovers (pdf)
  • Optical computer mice employ it and optical mouse sensors have been used in robotics; e.g. Outdoor Downward-facing Optical Flow Odometry with Commodity Sensors (pdf)
  • Insects leverage it. E.g., see the references in Robust Models for Optic Flow Coding in Natural Scenes Inspired by Insect Biology (pdf)

However, these solutions typically require complex and expensive setups. A little more than a month ago I stumbled over a paper by Jason Campbell, Rahul Sukthankar, Illah Nourbakhsh, and Aroon Pahwa explaining how a single regular web cam can be used to achieve robust visual odometry: A Robust Visual Odometry and Precipice Detection System Using Consumer-grade Monocular Vision (pdf). Naturally this got me hooked. To make sure that I fully understood the presented algorithm I reimplemented it in C# and paired it with a fairly comprehensive WinForms-based UI. Here is a screen shot of the result:

VisualOdometryScreenshot

The source code is available from http://code.google.com/p/drh-visual-odometry/ and is covered by the GPL 3.0 license.

More details can be found in the slides (pdf) from the presentation I gave during last Saturday’s meeting at the Robotics Society of Southern California.

To facilitate testing the code without an actual setup I uploaded my test video (61MB!).
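To give a flavor of what the front end of such a pipeline does, here is a minimal Python/OpenCV sketch of pyramidal Lucas-Kanade feature tracking. This is just the generic feature-tracking building block, not the algorithm from the paper or my C# implementation, and the video file name is only a placeholder:

import cv2

cap = cv2.VideoCapture('test.avi')   # placeholder; e.g. the test video mentioned above
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_points, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    # ... a visual odometry system would now estimate the camera motion from the
    # correspondences (points, new_points) where status == 1 ...
    prev_gray, points = gray, new_points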

Building a Self-Balancing Robot: Basic Balancing is Working

A lot has happened since my last post. The big news is that basic balancing is working now. Let’s start with the now almost obligatory photo of the current robot state:

2010-06-12-16-20-26-0741  2010-06-12-16-20-46-075

For better weight distribution the batteries, motor controller and the Arduino Mega board have been rearranged. The gray box between the wheels underneath the robot platform houses the 3-axis accelerometer ADXL330 and the dual-axis gyroscope IDG300. I used the same components as in my Lego prototype (for details see Building a Self-Balancing Robot – The Prototype). All connections are fed into the protoboard that sits on top of the Arduino Mega. The wooden bar with caster wheels at the end is used to prevent the robot from completely falling over. Naturally the caster wheels don’t touch the ground when the robot is balancing.

Enough about the hardware. The vast bulk of the changes since my last blog entry are software related. All the source code is now available as open source and can be accessed from the Google Code hosting site http://code.google.com/p/drh-balancing-robot/. The source code includes the Arduino program with its libraries plus a wxPython UI application that is used to communicate with the Arduino controller to retrieve tilt and speed data and to send calibration data to the board. This post is based on the code as of revision 37.

Let’s start with …

The Arduino Program

The main program is BalancingRobot.pde. It uses a number of custom libraries that encapsulate the details of various ‘function blocks’. The code for these libraries resides under the folder Arduino\libraries. The majority of the libraries are straightforward:

  • ADXL330: Access to the ADXL330 accelerometer and associated angle calculations
  • IDG300: Access to and calibration of the IDG300 gyroscope
  • QuadratureEncoder: Handling of the quadrature encoder signals

Others deserve a little bit of explanation.

Communicating with the Sabertooth Motor Driver

The setup of the Sabertooth motor driver has changed slightly since my last post. I now use the much more capable Packetized Serial Mode to talk to the controller. The details are encapsulated in the Sabertooth class (Sabertooth.h). Using a serial port to communicate with the motor driver has the advantage that, apart from the motor drive signals, we can also specify a cutoff voltage. Once the voltage drops below the specified threshold the driver will automatically stop the motors, thus protecting the batteries.

The Sabertooth class is used from within the Arduino program like so:

...
#include <HardwareSerial.h>
#include <Sabertooth.h>

// Sabertooth address 128 set with the DIP switches on the controller (OFF OFF ON ON ON ON)
// Using Serial3 for communicating with the Sabertooth motor driver. Requires a connection
// from pin 14 on the Arduino Mega board to S1 on the Sabertooth device.
Sabertooth _Sabertooth = Sabertooth(128, &Serial3);

...

void setup()
{
  ...

  // give the Sabertooth motor driver time to get ready
  delay(2000);

  // set the baud rate for communication with the Sabertooth device
  _Sabertooth.InitializeCom(19200);
  _Sabertooth.SetMinVoltage(12.4);
}

...

_Sabertooth.SetSpeedMotorB(motorSignal1);
_Sabertooth.SetSpeedMotorA(motorSignal2);

Another advantage is that only one of the inputs (S1) of the motor driver is used for speed control. The S2 input is configured as an active-low emergency stop. Having an emergency stop is essential when powerful motors are involved! Here is my ‘high-tech’ solution for pulling S2 low when a cord is ripped:

2010-06-12-16-20-16-073

Tilt Angle Calculation and Balancing Algorithm

As discussed in my earlier post Building a Self-Balancing Robot – The Prototype, it is necessary to fuse the output of the accelerometer with the output of the gyroscope to get a reliable tilt angle. This is accomplished by a Kalman filter (TiltCalculator.h and TiltCalculator.cpp) that is based on code from Trammell Hudson.
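The idea behind the fusion can be illustrated with a complementary filter, a much simpler relative of the Kalman filter that is actually used in TiltCalculator (the 0.98/0.02 split below is a typical hand-tuned value, not taken from my code): trust the integrated gyro rate over short time scales and the accelerometer over long ones.

// Illustrative complementary filter; the robot itself uses the Kalman
// filter in TiltCalculator.h/.cpp.
float FuseTiltAngle(float previousAngle, float gyroRate, float accelAngle, float dt)
{
  // short-term: integrate the gyro; long-term: pull towards the accelerometer
  return 0.98 * (previousAngle + gyroRate * dt) + 0.02 * accelAngle;
}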

The tilt angle and the angular velocity are passed to the balancer class (Balancer.h). It uses four coefficients to calculate the torque:

float CalculateTorque(float angleError, float angularVelocityError, float positionError, float velocityError)
{
  float torque =
    angleError * K1 +
    angularVelocityError * K2 +
    positionError * K3 +
    velocityError * K4;

  return torque;
}

Currently I only use the first two terms. This allows for simple balancing but does not prevent the robot from drifting. The wheel encoders provide the information that is required to plug in the last two terms, but I just haven’t gotten around to doing it yet (the sketch below indicates the idea). David P. Anderson’s excellent nBot pages include a succinct high-level description of the balancing algorithm. A more technical short description of the ‘Segway’, or inverted pendulum, can be found here.
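A hypothetical sketch of the missing position term could look like this; getPosition() is the accessor of the QuadratureEncoder class covered in the wheel encoder post further down, and ticksPerMeter and targetPosition would have to be worked out and tuned:

// Hypothetical: derive the position error for the K3 term from the wheel encoders.
// ticksPerMeter depends on the encoder resolution and the wheel diameter.
float ComputePositionError(QuadratureEncoder& left, QuadratureEncoder& right,
                           float ticksPerMeter, float targetPosition)
{
  long averageTicks = (left.getPosition() + right.getPosition()) / 2;
  return averageTicks / ticksPerMeter - targetPosition;
}

The velocity error for the K4 term would then follow as the time derivative of this position.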

Provided all the required physical properties of the robot are known, the four parameters K1, … K4 can be calculated. However, in my case it is a matter of trial and error – lots of trial and error. This requires an effective way to see the values that are calculated by the control algorithm and to send parameters to the controller. This ability is provided by a Python desktop application.

The Robot Control User Interface

My ‘native’ language is C# but, inspired by a visit to Willow Garage and their use of Python in ROS, I decided to give Python a try. Python per se is fairly easy to learn; but add to it the need to decide on a UI framework and to learn how to use it in conjunction with a plotting library, and you end up with a bigger job than I had originally planned for. Anyway, I decided on wxPython as the UI framework, Pydev for Eclipse as the IDE, and matplotlib as the plotting library. I got it working but I am sure I am breaking the Zen of Python in numerous ways.

I won’t describe the code in detail. It can be downloaded from the Google Code project page drh-balancing-robot along with the controller code discussed above. The browsing entry point is http://code.google.com/p/drh-balancing-robot/source/browse/#svn/trunk/PythonClient/RobotController/src. Again, please note that this blog entry is based on revision 37 of the code. The code is very likely to change in the future.

The starting point for the UI app is \UI\MainApp. It creates the MainModel, which contains the logic behind the UI, and then brings up the main application window (MainWindow.py). This window contains two plot panels. The plot panel on the top shows a real-time graph of:

  • The raw tilt angle as determined from the accelerometer values
  • The tilt angle value that is calculated from the fusion of the accelerometer and the gyroscope values
  • The angular velocity as measured by the gyroscope

The lower panel shows a real-time graph of the speed of the two motors. I used it to calibrate the speed controller logic. However, since it is not used for balancing I’ll ignore it here. Here is a short video that shows the real-time graphs while I tilt the robot back and forth.


Menu entries allow the user to define or change the balancing parameters and the speed controller parameters (again, not used here) and send them to the Arduino controller.

The bidirectional communication with the Arduino controller over the serial port is handled by \Model\DataAdapter.py in conjunction with Support\SerialDataGateway.py.

Analysis

The robot is stable as long as it is not pushed too hard. Unfortunately the used wheelchair motors have quite a bit of play in their gears, resulting in jerky back-and-forth swings around the center point. Also the motors are quite slow and hence cannot compensate for even moderate pushes of the robot. Running the motors at 24V instead of the current 12V will improve the speed issue somewhat, but ultimately I am afraid the motors might not cut it. I will post a video of the balancing in action shortly.

Building a Self-Balancing Robot: Motor Driver and Wheel Encoder

The focus of this blog post is the motor control system. Let’s start with a photo of the current state of the robot platform:

2010-04-06-14-03-00-014-small

The wheels have been mounted and the main electric wiring is in place. The two batteries are wired up in parallel for a 12V power supply, resulting in a maximum motor speed of about 60 rpm. The motors could be supplied with 24V but at least for now the power and speed that I get out of 12V is enough.

The Motor Driver

The power to the motors is provided by a Sabertooth dual 25A motor driver (I got mine from The RobotMarketPlace). It is mounted vertically directly behind the batteries. The driver is controlled by a Seeeduino Mega, an Arduino Mega clone from SeeedStudio Depot. In the photo the Seeeduino board is barely visible in the back, hidden underneath an attached protoshield. The input mode of the motor driver is configured like this:

MotorDriverDipSwitches

  • 1 & 2: R/C input signal
  • 3: no lithium battery cutoff
  • 4: no mixing mode
  • 5: linear
  • 6: micro controller mode (no timeout fail-safe and no auto-calibration)

The corresponding signal is provided by the Arduino controller. Currently (for testing purposes only) the motor speed is controlled by a 100k potentiometer that is hooked up between ground and 3.3V with its wiper connected to an analog input of the controller. The resulting Arduino program is listed below.

#include "WProgram.h"

unsigned long previousMilliseconds = 0;
unsigned long updateInterval = 50; // update interval in milli seconds

// temporary: contolling the motors with one potentiometer
int potiAnalogInPin = 5;
int motor1PwmOutPin = 11;
int motor2PwmOutPin = 12;

void setup()
{
 analogWrite(motor1PwmOutPin, 127);  // motors stopped at middle value; analogWrite values from 0 to 255
 analogWrite(motor2PwmOutPin, 127);  // motors stopped at middle value; analogWrite values from 0 to 255
}

void loop()
{
 unsigned long currentMilliseconds = millis();

 unsigned long milliSecsSinceLastUpdate = currentMilliseconds - previousMilliseconds;
 if(milliSecsSinceLastUpdate > updateInterval)
 {
 // save the last time we updated
 previousMilliseconds = currentMilliseconds;

 int potiValue = analogRead(potiAnalogInPin);  // read the input pin
 analogWrite(motor1PwmOutPin, potiValue / 4);  // analogRead values go from 0 to 1023, analogWrite values from 0 to 255
 analogWrite(motor2PwmOutPin, potiValue / 4);  // analogRead values go from 0 to 1023, analogWrite values from 0 to 255
 }
}

The Wheel Encoders

The speed and direction of the motors are measured by two home-brew quadrature encoders, one mounted on the upper end of each motor where the electric brake used to be. The electronic circuit for each quadrature encoder is based on two Hamamatsu P5587 IR photo reflectors and is described in David P. Anderson’s article ‘Home-Brew Shaft Encoders for the Pittman GM8712 Gearhead Motor‘:

P5587Circuit

The space on top of the motor is wide enough to mount the two photo reflectors in one line, which simplifies the calibration. The two photos below show the circuit board mounted on the back of the motor, first without and then with the encoder disk attached to the motor shaft. The board is glued to a plastic disk that is attached to cut-off screws. This setup allows me to easily fine-tune the distance between the sensors and the encoder disk.

2010-04-06-13-54-00-007-small

2010-04-06-13-54-56-009-small

In order to get a quadrature encoder signal that conveys speed and direction, the bright and dark sequences for the two sensors need to be offset from each other. Since the sensors are lined up, the offset must be created by the encoder disk itself:

EncoderDisk

I tried to create the encoder disk with Scott Boskovich’s encoder design program, which can be downloaded from http://www.societyofrobots.com/sensors_encoder.shtml, but the output was too coarse. Instead I created a C# console application that builds up the encoder disk as an SVG (Scalable Vector Graphics) file which I then open and print from Inkscape (my robot arm calibration page has more details). The C# GridBuilder source code requires .Net 2.0 and either Visual Studio 2008 or the open source IDE SharpDevelop.
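The C# code is in the repository; to convey the idea, here is a rough Python equivalent (all radii and page dimensions are made up) that writes two rings of 40 segments each, with the second ring shifted by a quarter of a segment period to produce the quadrature phase offset:

import math

def annular_sector(cx, cy, r_inner, r_outer, start_deg, end_deg):
    # SVG path for one filled ring segment between two radii and two angles
    def point(radius, deg):
        rad = math.radians(deg)
        return (cx + radius * math.cos(rad), cy + radius * math.sin(rad))
    x1, y1 = point(r_outer, start_deg)
    x2, y2 = point(r_outer, end_deg)
    x3, y3 = point(r_inner, end_deg)
    x4, y4 = point(r_inner, start_deg)
    return ('M %.3f,%.3f A %.3f,%.3f 0 0,1 %.3f,%.3f '
            'L %.3f,%.3f A %.3f,%.3f 0 0,0 %.3f,%.3f Z'
            % (x1, y1, r_outer, r_outer, x2, y2, x3, y3, r_inner, r_inner, x4, y4))

pairs = 40                  # 40 black & white pairs per ring (see below)
period = 360.0 / pairs      # angular period of one black + white pair
sectors = []
for r_inner, r_outer, offset in [(30, 40, 0.0), (42, 52, period / 4)]:
    for i in range(pairs):
        start = i * period + offset    # second ring shifted by a quarter period
        sectors.append(annular_sector(60, 60, r_inner, r_outer, start, start + period / 2))

svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="120mm" height="120mm" '
       'viewBox="0 0 120 120">\n%s\n</svg>'
       % '\n'.join('<path d="%s" fill="black"/>' % d for d in sectors))
open('encoder.svg', 'w').write(svg)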

In order to reliably detect the encoder wheel transitions, external interrupts need to be used in the Arduino program. I can highly recommend the Arduino Playground article ‘Reading Rotary Encoders‘, which provided invaluable information for my implementation. The main part of the encoder logic is encapsulated in the QuadratureEncoder class. It heavily borrows from mikkoh’s contribution to the Arduino Playground article. The SendInfo method is used for calibration purposes (see below).

/*
  QuadratureEncoder.h - Quadrature encoder library.
  Dr. Rainer Hessmer, April, 2010.
  Released into the public domain.

  Inspired by the code from http://www.arduino.cc/playground/Main/RotaryEncoders
  Incorporates code from mikkoh and others that contributed to the referenced article.
*/

#ifndef QuadratureEncoder_h
#define QuadratureEncoder_h

#include "WProgram.h"

class QuadratureEncoder
{
  /*
  Wraps the encoder setup and the update functions in a class

  !!! NOTE: The user must call the functions OnAChanged and OnBChanged from an
  interrupt function him/herself!

  // ------------------------------------------------------------------------------------------------
  // Example usage:
  // ------------------------------------------------------------------------------------------------
  #include "QuadratureEncoder.h"

  QuadratureEncoder encoder(2, 3);

  void setup()
  {
    attachInterrupt(0, HandleInterruptA, CHANGE);
    attachInterrupt(1, HandleInterruptB, CHANGE);
  }

  void loop()
  {
    // do some stuff here - the joy of interrupts is that they take care of themselves
  }

  void HandleInterruptA()
  {
    encoder.OnAChanged();
  }

  void HandleInterruptB()
  {
    encoder.OnBChanged();
  }

  // ------------------------------------------------------------------------------------------------
  // Example usage end
  // ------------------------------------------------------------------------------------------------
  */

public:
  // pinA and pinB must be one of the external interrupt pins
  QuadratureEncoder(int pinA, int pinB)
  {
    _PinA = pinA;
    _PinB = pinB;

    pinMode(_PinA, INPUT);      // sets pin A as input
    pinMode(_PinB, INPUT);      // sets pin B as input

    _ASet = digitalRead(_PinA);   // read the input pin
    _BSet = digitalRead(_PinB);   // read the input pin

    _Position = 0;
  }

  long int getPosition() { return _Position; };
  void setPosition(const long int p) { _Position = p; };

  // Interrupt on A changing state
  void OnAChanged()
  {
    // Test transition
    _ASet = digitalRead(_PinA) == HIGH;
    // and adjust counter + if A leads B
    _Position += (_ASet != _BSet) ? +1 : -1;
  }

  // Interrupt on B changing state
  void OnBChanged()
  {
    // Test transition
    _BSet = digitalRead(_PinB) == HIGH;
    // and adjust counter + if B follows A
    _Position += (_ASet == _BSet) ? +1 : -1;
  }

  void SendInfo()
  {
    bool a = digitalRead(_PinA);
    bool b = digitalRead(_PinB);
    unsigned long milliSecs = millis();

    Serial.print(milliSecs);
    Serial.print("\t");
    Serial.print(a);
    Serial.print("\t");
    Serial.print(b);
    Serial.println();
  }

private:
  int _PinA, _PinB;
  bool _ASet, _BSet;
  long _Position;
};

#endif

The QuadratureEncoder.h file needs to be copied into a folder called QuadratureEncoder under the libraries folder that is used by the Arduino environment so that it becomes accessible to Arduino programs.
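For example, assuming the Arduino sketchbook lives at ~/arduino (the actual path depends on your installation):

mkdir -p ~/arduino/libraries/QuadratureEncoder
cp QuadratureEncoder.h ~/arduino/libraries/QuadratureEncoder/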

The complete source code of the Arduino program combines the motor control code from above with the encoder logic:

#include "WProgram.h"
#include <HardwareSerial.h>
#include <QuadratureEncoder.h>

unsigned long previousMilliseconds = 0;
unsigned long updateInterval = 50; // update interval in milli seconds

// temporary: contolling the motors with one potentiometer
int potiAnalogInPin = 5;
int motor1PwmOutPin = 11;
int motor2PwmOutPin = 12;

// The quadrature encoder for motor 1 uses external interupts 0 and 1 which are associated with pins 2 and 3
QuadratureEncoder encoderMotor1(2, 3);
// The quadrature encoder for motor 1 uses external interupts 5 and 4 which are associated with pins 18 and 19
QuadratureEncoder encoderMotor2(18, 19);

void setup()
{
 analogReference(EXTERNAL); // we set the analog reference for the analog to digital converters to 3.3V
 Serial.begin(115200);

 analogWrite(motor1PwmOutPin, 127);  // motors stopped at middle value; analogWrite values from 0 to 255
 analogWrite(motor2PwmOutPin, 127);  // motors stopped at middle value; analogWrite values from 0 to 255

 attachInterrupt(0, HandleMotor1InterruptA, CHANGE); // Pin 2
 attachInterrupt(1, HandleMotor1InterruptB, CHANGE); // Pin 3

 attachInterrupt(5, HandleMotor2InterruptA, CHANGE); // Pin 18
 attachInterrupt(4, HandleMotor2InterruptB, CHANGE); // Pin 19
}

void loop()
{
 unsigned long currentMilliseconds = millis();

 unsigned long milliSecsSinceLastUpdate = currentMilliseconds - previousMilliseconds;
 if(milliSecsSinceLastUpdate > updateInterval)
 {
 // save the last time we updated
 previousMilliseconds = currentMilliseconds;

 int potiValue = analogRead(potiAnalogInPin);  // read the input pin
 analogWrite(motor1PwmOutPin, potiValue / 4);  // analogRead values go from 0 to 1023, analogWrite values from 0 to 255
 analogWrite(motor2PwmOutPin, potiValue / 4);  // analogRead values go from 0 to 1023, analogWrite values from 0 to 255
 }
}

// Interrupt service routines for motor 1 quadrature encoder
void HandleMotor1InterruptA()
{
 encoderMotor1.OnAChanged();
 //encoderMotor1.SendInfo();
}

void HandleMotor1InterruptB()
{
 encoderMotor1.OnBChanged();
 //encoderMotor1.SendInfo();
}

// Interrupt service routines for motor 2 quadrature encoder
void HandleMotor2InterruptA()
{
 encoderMotor2.OnAChanged();
 //encoderMotor2.SendInfo();
}

void HandleMotor2InterruptB()
{
 encoderMotor2.OnBChanged();
 //encoderMotor2.SendInfo();
}

Note: Yesterday I upgraded to Arduino software version 0018. Since the upgrade I can no longer send data over the serial port and at the same time use the external interrupts and the analogWrite functions. I will update this post once I have a solution.

Before the upgrade to 0018 I used the currently commented-out lines ‘encoderMotor<1|2>.SendInfo()’ to send the measured transitions to the PC for plotting. As an example, here is the plot of the encoder output for Motor1 over a time period of 100 milliseconds:

QuadratureEncoderSignal

The blue and red curves are the output of the two P5587 photo reflectors attached to Motor1. For the measurement I drove the motor as slowly as possible. Due to the diminished power the motor speed varied slightly, as can be seen in the varying pulse widths. If the sensors were perfectly aligned, the two signals (blue and red) would be phase shifted by exactly 90°. However, as long as the two signals are sufficiently offset, four transitions can be accurately measured for each black & white segment of the encoder ring. The encoder disk has 40 black & white pairs in each ring, resulting in 160 transitions per full rotation.

The shaft that the encoder disk is attached to makes 32 full rotations for each rotation of the wheel. Thus I get 32 x 160 = 5120 transitions for each turn of the wheel giving a resolution of about 0.07° or 0.2mm for the 350mm (13.8″) diameter wheels. This is more than enough and in the future I will probably only use one interrupt per wheel for a resolution of 0.4mm.

So, the ‘drive train’ is pretty much in place now. Next I will focus on hooking up the accelerometer and the gyroscope.

Monte Carlo Localization for Robots

Recently I started to read the excellent book Probabilistic Robotics by Sebastian Thrun, Wolfram Burgard, and Dieter Fox and got intrigued by Monte Carlo Localization (MCL). The MCL algorithm fully takes into account the uncertainty associated with drive commands and sensor measurements and allows a robot to locate itself in an environment provided a map is available.

In order to make sure that I really understand the algorithm I decided to implement it myself. Furthermore, I did not find a readily accessible, standalone open source implementation that would allow me to exercise the algorithm in a simulated environment. Covering the resulting implementation in full is a bit too big for this blog post, so I decided to add a multi-part article to my web site instead. You can find it here: Monte Carlo Localization for Robots

MCLSimulationSmall
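As a teaser, the core of one MCL update fits into a few lines of Python. This is a bare-bones sketch; motion_model and measurement_prob stand in for the robot- and map-specific parts that the article covers in detail:

import bisect
import random

def mcl_step(particles, u, z, motion_model, measurement_prob):
    # 1. Prediction: move every particle according to the noisy motion model
    moved = [motion_model(particle, u) for particle in particles]
    # 2. Correction: weight each particle by how well it explains measurement z
    weights = [measurement_prob(z, particle) for particle in moved]
    # 3. Resampling: draw a new generation of particles proportional to weight
    total = sum(weights)
    cumulative = []
    running = 0.0
    for w in weights:
        running += w / total
        cumulative.append(running)
    return [moved[min(bisect.bisect_left(cumulative, random.random()),
                      len(moved) - 1)]
            for _ in moved]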

Building a Self-Balancing Robot – The Base

In my first post about a self-balancing two-wheeled robot I introduced the prototype. Now it is time for the beginnings of the construction of the ‘real’ thing. This post focuses on the platform. In order for it to be sturdy enough to eventually let a human ride on it, I opted for a pair of wheelchair motors with wheels that I bought used from eBay. The pictures show the dimensions (inches) of one of the Pride Jet motors with the attached gearbox.

2009-12-30-18-18-12-028-cropped-small   2009-12-30-18-19-42-030-cropped-small

The motors come with an electric brake attached at the end opposite to the gearbox. In the photos the black bowl shaped cover and the electric brake (still attached to two white wires) have been removed from the motor. Instead of the brake I will attach wheel encoders (more about this in a later post). The matching wheels came with the motor.

2009-12-30-19-45-24-031-cropped-small

In a wheelchair the motors are mounted horizontally. However, in order to gain more ground clearance I decided on a vertical mount for the robot. The next pictures show the assembled wooden base of the robot.

2010-01-05-16-36-00-002-cropped-small 2010-01-05-16-37-54-004-small
2010-01-05-16-38-28-006-small

The gearboxes are attached to a board on the underside of the round base board. On top of the base board, behind the motors that stick out from below, two sealed lead acid AGM (Absorbed Glass Mat) batteries are mounted. The off-center configuration balances out most of the motors’ weight. The two batteries, model RBC6 12V 12Ah SLA Sealed Lead Acid AGM, are offered by Battery Geek on eBay for US $31 plus US $12 shipping.

This covers the construction of the base. Apart from the motors and the batteries all materials can be bought in a local hardware store.

Next I will work on the electrical system and the electronics. Stay tuned …

Building a Self-Balancing Robot – The Prototype

Like many others I am fascinated by two-wheeled, self-balancing robots. I am currently in the process of building a self-balancing robot that should be big enough to allow me to ride on it. However, before I blog about this new project I want to briefly introduce a LEGO prototype that I built a couple of months ago. The structure and the motors are LEGO but as the ‘brain’ I used the Make Controller Kit (see image below). The Make controller board can be seen in the center. Only the larger diameter wheels are used for balancing; the small wheel that is visible in the front (there is another one in the back) is only there to prevent the robot from falling over too far if the power is off or the balancing fails.

SelfBalancingLegoRobot

The following short video shows the balancing in action. At the end of the video I turn off the power supply, making the robot ‘fall’ as a result.

The tilt sensor consists of the 3-axis accelerometer ADXL330 and the dual-axis gyroscope IDG300. The x and z accelerations from the ADXL330 sensor are used to calculate the raw tilt angle. It is accurate provided the sensor is static; i.e., the acceleration is purely based on the gravitational forces. In a moving robot the actual forces are based on the gravitational forces plus the acceleration of the robot itself. To compensate for the robot’s acceleration the gyroscope sensor is used: a second tilt angle estimate is calculated by integrating over the angular velocity that the gyroscope measures. This estimate is heavily impacted by drift and hence cannot be used in isolation either. For this reason the output of the gyroscope and the output of the accelerometer are combined via a Kalman filter.
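In code, the two raw estimates look roughly like this (the variable names are made up; the actual implementation is in the controller code linked below):

#include <math.h>

// Tilt angle (radians) from the x and z accelerations; exact only while
// the sensor is static, i.e. measures nothing but gravity.
float AccelerometerTiltAngle(float accelX, float accelZ)
{
  return atan2(accelX, accelZ);
}

// Gyro-based estimate: integrate the angular velocity over the sample
// interval dt. Accurate short-term but drifts without bound.
float IntegrateGyroAngle(float previousAngle, float angularVelocity, float dt)
{
  return previousAngle + angularVelocity * dt;
}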

The following graph shows the difference between the angle that is calculated from the accelerometer (blue) and the angle based on the application of the Kalman filter (red):

AccelerometerVsKalman

The calculated tilt angle is passed into a PID loop that is used to control the motors. For details see the complete controller code. In addition to the code that calculates the tilt angle and drives the motors, the controller also contains two asynchronous USB tasks (UsbReceiverTask and UsbSenderTask) that were used to communicate directly with the PC. This allowed for fairly efficient debugging of the code and fine-tuning of the various control parameters, as can be seen in the screen shot below. The included graph shows the tilt angle (red), the angular rate (black) and the normalized speed (green) over time. It is clearly visible how the speed oscillates in order to keep the robot balanced. Below the graph, the PID parameters and the parameters that are used to assemble the input into the PID loop are shown.

PIDControlledNormalizedSpeed
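The PID correction itself boils down to a few lines. The following is the generic textbook version with hypothetical gain names, not a copy of my controller code:

// Generic PID update; returns the control output for the current error.
float PidUpdate(float error, float dt, float kp, float ki, float kd)
{
  static float integral = 0.0;
  static float previousError = 0.0;

  integral += error * dt;
  float derivative = (error - previousError) / dt;
  previousError = error;

  return kp * error + ki * integral + kd * derivative;
}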

As a prototype the robot worked very well. The balancing was very stable as long as the robot was not pushed. When pushed, the robot would counteract but would start moving in the direction it was pushed. This shortcoming is caused by the fact that the control code has no insight into the speed of the robot. The typical remedy is to add wheel encoders to measure the speed and incorporate the speed signal into the PID algorithm. Since wheel encoders are a bit cumbersome I decided not to bother with them for the prototype. I will use wheel encoders in the ‘official’ project though.

Acknowledgments

I learned a lot from what other balancing robot builders and Segway clone builders have published.

Tic-Tac-Toe Playing Robotic Arm

The description of my Tic-Tac-Toe playing robotic arm is now available from my web site at http://www.hessmer.org/robotics/lynxmotion-robotic-arm. It includes a detailed explanation of the formulas used for the inverse kinematics of the arm. The vision system leverages the OpenCV computer vision library through the Emgu CV cross-platform .Net wrapper. The complete C# source code for the project is provided. Here is a short video of the robot playing against a human opponent: