Ardros – Transform Between base_link and the Kinect Sensor

For SLAM and navigation to work correctly on a ROS-based robot it is necessary to accurately specify the transformation between the coordinate system (frame) of the steering center of the robot (typically called base_link) and the coordinate system of the laser scans. In the case of Kinect-based robots the laser scan measurements are derived from the 3D point cloud that is provided by the openni_camera package and is based on the frame /openni_depth_frame.

As described in my earlier post 2d SLAM with ROS and Kinect, I specify the transformation between base_link and the Kinect sensor (specifically the frame openni_camera) in the launch file ardros_configuration.launch. The relevant line is:

<node pkg="tf" type="static_transform_publisher"  name="base_to_kinect_broadcaster" args="-0.115 0 0.226 0 0 0 base_link  openni_camera 100" />

This transformation connects the base_link frame with the frames that are associated with the Kinect sensor. The arguments of static_transform_publisher are the x, y, z offsets [m], the yaw, pitch, roll angles [rad], the parent and child frame names, and the publishing period [ms]. How the various frames are connected can be seen by running the following commands as described in this tutorial:

rosrun tf view_frames
evince frames.pdf

Here is the result of running these commands while the navigation stack was active on Ardros (click on the image for a larger view).

tf_frames

I marked the frame for the laser scan data in red. As can be seen, this frame is now connected to base_link.

Now the question is: how can we determine the parameters shown in the transformation above? I found it easiest to 'park' the robot in front of a wall, pushing it against a rectangular piece of cardboard or foam board to guarantee that the front of the robot is oriented parallel to the wall at a fixed distance:

CalibratingKinectLocation

I use a book to create a step change in the laser scan right in the center of the front of the robot. Next I run the navigation stack and bring up rviz with a configuration file that shows the laser scan plus two axes displays for the frames /base_link and /openni_camera.

KinectLocationCalibration

By setting the length of the base_link frame axes to the distance from the origin of the base_link frame to the wall, I can readily see whether the laser scan line intersects with the end of the x axis (red). If it doesn't, the x offset specified in the static transform broadcaster base_to_kinect_broadcaster (see the line from the launch file above) needs to be adjusted. Furthermore, the step change in the distance needs to align with the x axis. Any mismatch is corrected by modifying the y offset in the transform base_to_kinect_broadcaster.

Once this is done the frames are correctly aligned and the navigation stack can accurately correlate the laser scan with the odometry information.
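
To double-check the calibrated offsets at runtime, a small Python script can query the transform and print the translation for comparison with the values in the launch file. Here is a minimal sketch (the node name transform_checker is made up, and it assumes the transform is being published):

#!/usr/bin/env python
import rospy
import tf

rospy.init_node('transform_checker')
listener = tf.TransformListener()

# wait up to 4 seconds for the transform, then print the x, y, z offsets [m]
listener.waitForTransform("/base_link", "/openni_camera", rospy.Time(), rospy.Duration(4.0))
(translation, rotation) = listener.lookupTransform("/base_link", "/openni_camera", rospy.Time(0))
print "translation (x, y, z):", translation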

Ardros – Upgrading to the Maple 32-bit Microcontroller

As described in previous posts, until now I have used an Arduino Mega controller for my Ardros robot. In most cases this controller provides more than enough power. Mainly out of curiosity, but also in order to overcome problems with encoder tick counting when my motors run at full speed, I decided to give LeafLabs' Maple controller a try.

The Maple is based on the 32-bit STM32F103RB ARM Cortex-M3 microcontroller running at 72 MHz. Apart from the drastic speed improvement, what makes the Maple so attractive is that it uses the same pin layout as the Arduino and comes with the same IDE and hardware abstraction.

MapleIde

As a result the transition from Arduino is fairly easy. Naturally there are differences, though. They are summarized here.

The port of the complete Arduino program, including the libraries that make up the Ardros – Controller and Drive System, was rather painless, and the result is now running on my robot. The source code is available on my Google code site drh-robotics-ros. This post is based on revision 104 of the code. The relevant subdirectories are Robot and libraries.

Ardros – Dead Reckoning

Now that we have the ability to retrieve odometry information from the robot and send velocity commands, we can combine both to drive the robot around using dead reckoning. The setup is very similar to the teleop scenario that I briefly discussed in the blog entry ROS Introduction Class with Ardros. Instead of the teleop node we now use a new DeadReckoning node:

DeadReckoningNodes

The other key difference is that we now leverage the odometry transform information that the arduino node publishes (represented by the 'tf' lollipop in the block diagram above). The DeadReckoning.py node is modeled after the corresponding C++ node in the Willow Garage PR2 tutorial 'Using the base controller with odometry and transform information'.

Note that the DeadReckoning.py node can be used with any ROS-enabled robot that publishes odometry information and accepts velocity commands based on the contract defined by ROS. Obviously in our case we use it in conjunction with the arduino node. If you want to follow along, I suggest you check the code out locally from the Google Subversion repository. This blog entry is based on revision 61. In order to check out the code to the local directory 'drh-robotics-ros' use this command:

svn checkout -r 61 http://drh-robotics-ros.googlecode.com/svn/trunk/ drh-robotics-ros

For ROS to see the new package its path needs to be added to the ROS_PACKAGE_PATH environment variable. This is accomplished by adding a line at the end of the .bashrc file in your home folder:

export ROS_PACKAGE_PATH=~/drh-robotics-ros/ros/ardros:$ROS_PACKAGE_PATH

The path to the ardros folder might need to be adjusted.

Now we are ready to spin up the arduino node.

  1. Open a new terminal and enter roscore. This will start the ROS server.
  2. In a new terminal type:

roscd ardros
roslaunch ./launch/ardros_standalone.launch

The content of the launch file ardros_standalone.launch is:

<launch>
  <node name="arduino" pkg="ardros" type="arduino.py">
    <rosparam file="$(find ardros)/info/ardros.yaml" command="load" />
  </node>
</launch>

It starts the arduino.py node with the parameters specified in the referenced ardros.yaml file:

port: /dev/ttyUSB0
baudRate: 115200
batteryMonitorParams: {voltageTooLowlimit: 11.7}

# wheel diameter [m], trackwidth [m], ticks per revolution
driveGeometry: {wheelDiameter: 0.0763, trackWidth: 0.37, countsPerRevolution: 9750}

# The speed controller parameters include the P and I gains for the three PI controllers and the command timeout for the speed controller in seconds.
# If no velocity command arrives for more than the specified timeout then the speed controller will stop the robot.
speedController: {velocityPParam: 0.1, velocityIParam: 0.1, turnPParam: 0.4, turnIParam: 0.5, commandTimeout: 1.0}

The arduino.py node is now running, publishing tf and odometry messages while waiting for velocity commands. Sending those commands is the task of the DeadReckoning.py node. This node implements the Driver class (more about it later) which exposes two public functions:

  • DriveX(self, distance, speed)
  • Turn(self, angle, angularSpeed)

The node creates an instance of the Driver class and then uses the two public functions to command the robot to drive a given pattern. The checked-in code commands the robot to drive forward 2m, turn 180°, drive back 2m, and finally turn 180° again to end up in the start position:

if __name__ == '__main__':
	try:
		driver = Driver()
		driver.DriveX(distance = 2, speed = 0.1);
		driver.Turn(angle = math.pi, angularSpeed = 0.3);
		driver.DriveX(distance = 2, speed = 0.1);
		driver.Turn(angle = -math.pi, angularSpeed = 0.3)
		#driver.Turn(angle = 3 * math.pi, angularSpeed = 0.3);
	except rospy.ROSInterruptException:
		pass

Let’s look at the code of the Driver class starting with its constructor:

	def __init__(self):
		rospy.init_node('DeadReckoning')

		self._VelocityCommandPublisher = rospy.Publisher("cmd_vel", Twist)
		self._TransformListener = tf.TransformListener()

		# wait for the listener to get the first transform message
		self._TransformListener.waitForTransform("/odom", "/base_link", rospy.Time(), rospy.Duration(4.0))

In the constructor the instance is registered as a ROS node. Since the node will publish twist messages on the cmd_vel topic a corresponding publisher needs to be created. In order to get information about the transformation from the /odom frame to the robot’s /base_link frame we need a TransformListener. Finally we wait up to 4 seconds for the requested transformation to arrive. If we don’t get a message within this time the constructor fails.

Driving forward and backward is handled by the DriveX function:

	def DriveX(self, distance, speed):
		'''
		Drive in x direction a specified distance based on odometry information
		distance [m]: the distance to travel in the x direction (>0: forward, <0: backwards)
		speed [m/s]: the speed with which to travel; must be positive
		'''
		forward = (distance >= 0)

		# record the starting transform from the odom to the base_link frame
		# Note that here the 'from' frame precedes 'to' frame which is opposite to how they are
		# ordered in tf.TransformBroadcaster's sendTransform function.
		# startTranslation is a tuple holding the x,y,z components of the translation vector
		# startRotation is a tuple holding the four components of the quaternion
		(startTranslation, startRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))

		done = False

		velocityCommand = Twist()
		if forward:
			velocityCommand.linear.x = speed # going forward m/s
		else:
			velocityCommand.linear.x = -speed # going backward m/s

		velocityCommand.angular.z = 0.0 # no angular velocity

		while True:
			try:
				(currentTranslation, currentRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))

				dx = currentTranslation[0] - startTranslation[0]
				dy = currentTranslation[1] - startTranslation[1]

				distanceMoved = math.sqrt(dx * dx + dy * dy)
				print distanceMoved
				if (forward):
					arrived = distanceMoved >= distance
				else:
					arrived = distanceMoved >= -distance

				if (arrived):
					break
				else:
					# send the drive command
					print("sending vel command" + str(velocityCommand))
					self._VelocityCommandPublisher.publish(velocityCommand)

			except (tf.LookupException, tf.ConnectivityException):
				continue

			rospy.sleep(0.1)

		#stop
		velocityCommand.linear.x = 0.0
		velocityCommand.angular.z = 0.0
		self._VelocityCommandPublisher.publish(velocityCommand)

		return done

The function first retrieves the current transformation between the frames /odom and /base_link to know where the robot starts from. Then it sets up a twist message that commands the robot to drive forward (or backward) with the specified speed. The actual driving happens in the while loop. We get the current transformation and calculate how far the robot has driven since the start of the function. If we have traveled the requested distance, the while loop exits and the function returns. Otherwise the twist message is published on the cmd_vel topic and we wait a little before checking again.

The Turn function is structured in the same way. Since angles are involved it is slightly more complex but hopefully the code is clear enough to understand.

	def Turn(self, angle, angularSpeed):
		'''
		Turn the robot based on odometry information
		angle [rad]: the angle to turn (positive angles mean counterclockwise rotation)
		angularSpeed [rad/s]: the speed with which to turn; must be positive
		'''

		ccw = (angle >= 0) # counter clockwise rotation

		# record the starting transform from the odom to the base frame
		# Note that here the 'from' frame precedes 'to' frame which is opposite to how they are
		# ordered in tf.TransformBroadcaster's sendTransform function.
		(startTranslation, startRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))
		startAngle = 2 * math.atan2(startRotation[2], startRotation[3])

		print "start angle: " + str(startAngle)
		previousAngle = startAngle
		angleOffset = 0.0

		done = False

		velocityCommand = Twist()
		velocityCommand.linear.x = 0.0 # no linear motion
		if ccw:
			velocityCommand.angular.z = angularSpeed
		else:
			velocityCommand.angular.z = -angularSpeed

		while not rospy.is_shutdown():
			try:
				(currentTranslation, currentRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))
				currentAngle = 2 * math.atan2(currentRotation[2], currentRotation[3])
				print "currentAngle: " + str(currentAngle)

				# we need to handle roll over of the angle
				if (currentAngle * previousAngle < 0 and math.fabs(currentAngle - previousAngle) > math.pi / 2):
					if (currentAngle > previousAngle):
						print "subtracting"
						angleOffset = angleOffset - 2 * math.pi
					else:
						print "adding"
						angleOffset = angleOffset + 2 * math.pi

				angleTurned = currentAngle + angleOffset - startAngle
				previousAngle = currentAngle

				print "angleTurned: " + str(angleTurned)
				if (ccw):
					arrived = (angleTurned >= angle)
				else:
					arrived = (angleTurned <= angle)

				print arrived

				if (arrived):
					break
				else:
					# send the drive command
					print("sending vel command" + str(velocityCommand))
					self._VelocityCommandPublisher.publish(velocityCommand)

			except (tf.LookupException, tf.ConnectivityException):
				continue

			time.sleep(0.1)

		#stop
		velocityCommand.linear.x = 0.0
		velocityCommand.angular.z = 0.0
		self._VelocityCommandPublisher.publish(velocityCommand)

		return done
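
A side note on the angle calculation: the expression 2 * math.atan2(q[2], q[3]) recovers the heading directly from the z and w components of the quaternion, which is exact as long as the rotation is purely about the z axis (true for a robot driving on a flat floor). A more general alternative is tf's quaternion-to-Euler conversion; as a sketch, continuing with the currentRotation tuple from the loop above:

import tf.transformations

# returns (roll, pitch, yaw); for a pure z rotation yaw equals 2 * atan2(z, w)
(roll, pitch, yaw) = tf.transformations.euler_from_quaternion(currentRotation)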

We can now startup the DeadReckoning.py node and see it control the robot’s movements. In a new terminal type

roscd ardros
rosrun ardros DeadReckoning.py

This covers dead reckoning.

Ardros – Controller and Drive System

Ardros (Arduino & ROS) is an experimental robot platform that I am working on to get familiar with ROS and particularly with the navigation stack that comes with ROS. My previous post focused on the Python code running on the PC that acts as a gateway to the Arduino controller. In the following I  cover the (differential) drive system and the Arduino controller code.

The main components that make up the drive system are:

  • Pittman gearhead motors with 500 cnt/rev quadrature encoders mounted on the motor shafts
  • A Sabertooth 10 RC dual motor speed controller driven via servo signals
  • An Arduino Mega board running the controller code described below

The controller code is available from Google code (http://code.google.com/p/drh-robotics-ros/source/browse/#svn%2Ftrunk) in the subfolder 'Arduino'. This post discusses the state as of Subversion revision 58. I expect that the code will evolve further over time.

The main file is \Arduino\Robot\Robot.pde. It leverages a couple of libraries (source code in \Arduino\libraries):

  • Messenger: For handling messages sent from the PC through the USB link
  • TimeInfo: Helper class for keeping track of time
  • RobotParams: Holds the parameter values that are required to determine the distance traveled / turned from the encoder ticks. The parameters are: WheelDiameter, TrackWidth, CountsPerRevolution, DistancePerCount, RadiansPerCount
  • OdometricLocalizer: It is responsible for determining the pose (x, y position and heading) of the robot based on the encoder counts.
  • BatteryMonitor: A simple wrapper class that encapsulates the scaling of an analog input value to get the current battery voltage. It also determines whether the battery voltage is too low. This information is used by the speed controller (see below) and causes it to stop the motors if the voltage is too low.
  • SpeedController: It uses three PI (proportional & integral) controllers to translate commanded velocity (velocity in x direction and angular velocity) to motor control values for the left and right motors. The speed controller implements a slightly modified version of the coupled PI control structure discussed here. The document illustrates a structure that only uses a P (proportional) controller that links the two motors and ensures that they are kept in sync:
    CoupledPIControlls
    In my code I use a PI (proportional & integral) controller instead; i.e., Kp_turn is accompanied by a Ki_turn. In my case it was not necessary to implement the boxes labeled 'friction_comp'. (A sketch of this control structure follows after this list.)
    The speed controller also implements a command timeout. If no commands are received for more than the specified timeout then the controller stops the motors.
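
Here is a minimal Python sketch of the coupled PI structure. The class and all names in it are made up for illustration; the actual SpeedController is Arduino C++ code and differs in its details:

class CoupledPiController(object):
	def __init__(self, Kp, Ki, KpTurn, KiTurn):
		self._Kp, self._Ki = Kp, Ki
		self._KpTurn, self._KiTurn = KpTurn, KiTurn
		self._leftIntegral = 0.0
		self._rightIntegral = 0.0
		self._turnIntegral = 0.0

	def Update(self, desiredLeft, desiredRight, measuredLeft, measuredRight, dt):
		leftError = desiredLeft - measuredLeft
		rightError = desiredRight - measuredRight
		# the coupling term penalizes any difference between the two wheel errors,
		# keeping the motors in sync
		turnError = leftError - rightError

		self._leftIntegral += leftError * dt
		self._rightIntegral += rightError * dt
		self._turnIntegral += turnError * dt

		turnCorrection = self._KpTurn * turnError + self._KiTurn * self._turnIntegral
		leftCV = self._Kp * leftError + self._Ki * self._leftIntegral - turnCorrection
		rightCV = self._Kp * rightError + self._Ki * self._rightIntegral + turnCorrection
		return (leftCV, rightCV) # normalized control values for the left and right motor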

After going through setup() the code in the loop() function of \Arduino\Robot\Robot.pde first checks whether all components have been initialized.

void loop()
{
  ReadSerial();

  unsigned long milliSecsSinceLastUpdate = millis() - _TimeInfo.LastUpdateMillisecs;
  if(milliSecsSinceLastUpdate >= c_UpdateInterval)
  {
    //Serial.println(milliSecsSinceLastUpdate);
    // time for another update
    _TimeInfo.Update();
    if (_IsInitialized)
    {
      DoWork();
    }
    else
    {
      RequestInitialization();
    }
  }
}

If at least some of the components have not yet been initialized then initialization is requested by sending corresponding messages to the computer:

void RequestInitialization()
{
    _IsInitialized = true;

    if (!_RobotParams.IsInitialized)
    {
      _IsInitialized = false;

      Serial.print("InitializeDriveGeometry"); // requesting initialization of the parameters of the differential drive needed for odometry calculations
      Serial.print("\n");
    }

    if (!_SpeedController.IsInitialized)
    {
      _IsInitialized = false;

      Serial.print("InitializeSpeedController"); // requesting initialization of the speed controller
      Serial.print("\n");
    }

    if (!_BatteryMonitor.IsInitialized)
    {
      _IsInitialized = false;

      Serial.print("InitializeBatteryMonitor"); // requesting initialization of the battery monitor
      Serial.print("\n");
    }
}

The code running on the PC needs to react to these requests by sending down the initialization parameters. On the Arduino side the incoming messages are parsed with the help of the Messenger class:

void OnMssageCompleted()
{
 if (_Messenger.checkString("s"))
 {
 SetSpeed();
 return;
 }

 if (_Messenger.checkString("DriveGeometry"))
 {
 InitializeDriveGeometry();
 return;
 }

 if (_Messenger.checkString("SpeedControllerParams"))
 {
 InitializeSpeedControllerParams();
 return;
 }

 if (_Messenger.checkString("BatteryMonitorParams"))
 {
 InitializeBatteryMonitor();
 }

 // clear out unrecognized content
 while(_Messenger.available())
 {
 _Messenger.readInt();
 }
}
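
For completeness, here is a rough sketch of what the PC side of this handshake can look like. In Ardros this logic lives in the arduino.py node; the snippet below is a simplified stand-alone illustration using pyserial, and the exact message formats (field order, separators) are assumptions for illustration:

import serial # pyserial

port = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)
while True:
	request = port.readline().strip()
	if request == 'InitializeDriveGeometry':
		# wheelDiameter [m], trackWidth [m], countsPerRevolution
		port.write('DriveGeometry 0.0763 0.37 9750\n')
	elif request == 'InitializeSpeedController':
		# velocity P and I gains, turn P and I gains, command timeout [s]
		port.write('SpeedControllerParams 0.1 0.1 0.4 0.5 1.0\n')
	elif request == 'InitializeBatteryMonitor':
		# voltage-too-low limit [V]
		port.write('BatteryMonitorParams 11.7\n')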

Only after all components have been initialized does the DoWork() function send odometry information up and act upon velocity commands:

void DoWork()
{
  _OdometricLocalizer.Update(_LeftEncoderTicks, _RightEncoderTicks);
  _BatteryMonitor.Update();
  _SpeedController.Update(_BatteryMonitor.VoltageIsTooLow);
  IssueCommands();

  Serial.print("o"); // o indicates odometry message
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.X, 3);
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.Y, 3);
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.Heading, 3);
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.V, 3);
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.Omega, 3);
  Serial.print("\n");

  Serial.print("b\t"); // b indicates battery info message
  Serial.print(_BatteryMonitor.BatteryVoltage, 3);
  Serial.print("\t");
  Serial.print(_BatteryMonitor.VoltageIsTooLow);
  Serial.print("\n");
}

void IssueCommands()
{
  float normalizedRightMotorCV, normalizedLeftMotorCV;

  normalizedRightMotorCV = _SpeedController.NormalizedLeftCV;
  normalizedLeftMotorCV = _SpeedController.NormalizedRightCV;

  /*
  Serial.print("Speed: ");
  Serial.print(_SpeedController.DesiredVelocity);
  Serial.print("\t");
  Serial.print(_SpeedController.DesiredAngularVelocity);
  Serial.print("\t");
  Serial.print(_SpeedController.LeftError);
  Serial.print("\t");
  Serial.print(_SpeedController.RightError);
  Serial.print("\t");
  Serial.print(_SpeedController.TurnError);
  Serial.print("\t");
  Serial.print(normalizedRightMotorCV);
  Serial.print("\t");
  Serial.print(normalizedLeftMotorCV);
  Serial.print("\n");
  */

  float rightServoValue = mapFloat(normalizedRightMotorCV, -1, 1, 90.0 - c_MaxMotorCV, 90.0 + c_MaxMotorCV);     // scale it to use it with the servo (value between 0 and 180)
  float leftServoValue = mapFloat(normalizedLeftMotorCV, -1, 1, 90.0 - c_MaxMotorCV, 90.0 + c_MaxMotorCV);     // scale it to use it with the servo (value between 0 and 180)

  _RightServo.write(rightServoValue);     // sets the servo position according to the scaled value (0 ... 179)
  _LeftServo.write(leftServoValue);     // sets the servo position according to the scaled value (0 ... 179)
}

Hopefully this explains the flow of data somewhat. In order to be able to participate in the ROS navigation stack the key aspects are that

  1. Velocity commands are accepted and acted upon. They arrive at the controller as messages prefixed with ‘s:’ for ‘speed’.
  2. Odometry information is sent back. This includes pose data (x, y, heading) as well as speed data (translational velocity and rotational velocity). The sketch following this list illustrates the underlying pose update.
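
The pose calculation inside OdometricLocalizer follows the standard dead-reckoning equations for a differential drive. Here is a Python transcription of the idea (the real code is Arduino C++; the function below is an illustrative sketch, not a copy):

import math

# distancePerCount = pi * WheelDiameter / CountsPerRevolution
# radiansPerCount = distancePerCount / TrackWidth
def UpdatePose(x, y, heading, deltaLeftTicks, deltaRightTicks, distancePerCount, radiansPerCount):
	leftDistance = deltaLeftTicks * distancePerCount
	rightDistance = deltaRightTicks * distancePerCount
	deltaDistance = 0.5 * (leftDistance + rightDistance)
	deltaHeading = (deltaRightTicks - deltaLeftTicks) * radiansPerCount

	# advance along the current heading, then update the heading
	x += deltaDistance * math.cos(heading)
	y += deltaDistance * math.sin(heading)
	heading += deltaHeading
	return (x, y, heading)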

We are now in the position to do dead reckoning – but this will need to wait for the next blog entry.

Disclaimer: It turns out that the Arduino is taxed to its very limit by my controller program. This is caused by the combination of

  • The very high number of interrupts originating from the high resolution quadrature encoders mounted directly to the motor shafts.
    For details see my blog entry Quadrature encoder too fast for Arduino.
  • The trigonometric calculations performed to obtain the odometry data
  • The bidirectional communication between the controller and the PC.

As a result the faster the motors run the more wheel encoder transitions are skipped. It looks like the interrupts are suppressed in this overload scenario. As long as the robot moves slowly (< 1m/s in my case) the detected number of wheel encoder transitions is accurate enough. I plan to upgrade to a more powerful controller soon.

Quadrature Encoder too Fast for Arduino (with Solution)

For my next robot project I am using Pittman Motors with 500 cnt/rev quadrature encoders mounted on the motor axle. Initially I used the same code that I described in my blog post ‘Building a Self-Balancing Robot: Motor Driver and Wheel Encoder’ for handling the encoder input. Everything worked fine when I turned the motors manually; the interrupts kicked in and the counters were correctly incremented / decremented. However, when I powered up the motors the Arduino program froze.

Others suffered from the same experience (see e.g., Missing counting encoder). One suggested solution is the use of dedicated quadrature decoder chips like the LS7266R1. More information about its usage in a robot project can be found in the thesis Design of a Wireless Control System for a Laboratory Planetary Rover by Eric Jamesson Wilhelm.

Fortunately no dedicated decoder hardware is necessary (at least in my case). It turns out that the Arduino function digitalRead() comes with a lot of overhead and is correspondingly slow. Both Jean-Claude Wippler (Pin I/O performance) and Bill Porter (Ready, Set, Oscillate! The Fastest Way to Change Arduino Pins) discuss the overhead and demonstrate how it can be avoided by directly reading from Atmel ports rather than going through the Arduino library, thus gaining a factor of about 50 in performance.

While directly reading from controller ports certainly solves the issue, it forces you to become familiar with port maps and write code that is specific to a given controller (no simple porting from Arduino Uno to Mega, etc.). This problem has been beautifully addressed by jrraines’ digitalwritefast library (download) which in turn is based on a suggestion from Paul Stoffregen (see http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1267553811/0).

So, with the open source community to the rescue my quadrature encoder problem is now solved. Here is my new quadrature encoder code wrapped in a little test sketch:

UPDATE: I incorporated Bill Porter's feedback (see his comment for further detail) to further improve the performance of the code. Since the interrupt only fires on a rising change there is no need to read pin A; its value will always be 'true'. Furthermore, by using the pre-processor directives #ifdef … #else … #endif the evaluation of an if statement in each interrupt handler at runtime can be avoided. Thanks, Bill!

#include "WProgram.h"
#include <Servo.h>
#include <digitalWriteFast.h>  // library for high performance reads and writes by jrraines
                               // see http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1267553811/0
                               // and http://code.google.com/p/digitalwritefast/

// It turns out that the regular digitalRead() calls are too slow and bring the arduino down when
// I use them in the interrupt routines while the motor runs at full speed creating more than
// 40000 encoder ticks per second per motor.

// Quadrature encoders
// Left encoder
#define c_LeftEncoderInterrupt 4
#define c_LeftEncoderPinA 19
#define c_LeftEncoderPinB 25
#define LeftEncoderIsReversed
volatile bool _LeftEncoderBSet;
volatile long _LeftEncoderTicks = 0;

// Right encoder
#define c_RightEncoderInterrupt 5
#define c_RightEncoderPinA 18
#define c_RightEncoderPinB 24
volatile bool _RightEncoderBSet;
volatile long _RightEncoderTicks = 0;

Servo _RightServo;  // create servo object to control right motor
Servo _LeftServo;  // create servo object to control left motor

int potpin = 0;  // analog pin used to connect the potentiometer
int val;    // variable to read the value from the analog pin

void setup()
{
  Serial.begin(115200);

  _RightServo.attach(2);  // attaches the servo on specified pin to the servo object
  _LeftServo.attach(3);  // attaches the servo on specified pin to the servo object

  // Quadrature encoders
  // Left encoder
  pinMode(c_LeftEncoderPinA, INPUT);      // sets pin A as input
  digitalWrite(c_LeftEncoderPinA, LOW);  // turn on pullup resistors
  pinMode(c_LeftEncoderPinB, INPUT);      // sets pin B as input
  digitalWrite(c_LeftEncoderPinB, LOW);  // turn on pullup resistors
  attachInterrupt(c_LeftEncoderInterrupt, HandleLeftMotorInterruptA, RISING);

  // Right encoder
  pinMode(c_RightEncoderPinA, INPUT);      // sets pin A as input
  digitalWrite(c_RightEncoderPinA, LOW);  // turn on pullup resistors
  pinMode(c_RightEncoderPinB, INPUT);      // sets pin B as input
  digitalWrite(c_RightEncoderPinB, LOW);  // turn on pullup resistors
  attachInterrupt(c_RightEncoderInterrupt, HandleRightMotorInterruptA, RISING);
}

void loop()
{
  val = analogRead(potpin);            // reads the value of the potentiometer (value between 0 and 1023)
  val = map(val, 0, 1023, 0, 179);     // scale it to use it with the servo (value between 0 and 180)

  _RightServo.write(val);
  _LeftServo.write(val);

  Serial.print(_LeftEncoderTicks);
  Serial.print("\t");
  Serial.print(_RightEncoderTicks);
  Serial.print("\n");

  delay(20);
}

// Interrupt service routines for the left motor's quadrature encoder
void HandleLeftMotorInterruptA()
{
  // Test transition; since the interrupt will only fire on 'rising' we don't need to read pin A
  _LeftEncoderBSet = digitalReadFast(c_LeftEncoderPinB);   // read the input pin

  // and adjust counter + if A leads B
  #ifdef LeftEncoderIsReversed
    _LeftEncoderTicks -= _LeftEncoderBSet ? -1 : +1;
  #else
    _LeftEncoderTicks += _LeftEncoderBSet ? -1 : +1;
  #endif
}

// Interrupt service routines for the right motor's quadrature encoder
void HandleRightMotorInterruptA()
{
  // Test transition; since the interrupt will only fire on 'rising' we don't need to read pin A
  _RightEncoderBSet = digitalReadFast(c_RightEncoderPinB);   // read the input pin

  // and adjust counter + if A leads B
  #ifdef RightEncoderIsReversed
    _RightEncoderTicks -= _RightEncoderBSet ? -1 : +1;
  #else
    _RightEncoderTicks += _RightEncoderBSet ? -1 : +1;
  #endif
}

Hopefully the code is fairly self-explanatory. Since fast reads require that the pins are known at compile time I could no longer use my QuadratureEncoder class. Instead the code for calculating the encoder ticks is now part of the sketch itself and there is a little bit of duplicated code in the two interrupt handlers. For testing I used a 10k potentiometer at analog pin 0. The analog value is used to set the output of two servos that drive a Sabertooth 10 RC Dual Motor Speed Controller. This allowed me to put the motors through their paces and verify that the encoder ticks are correctly caught.

Monocular Visual Odometry

For a while now I have been looking for ways to use (computer) vision to get odometry information. This is by no means a new concept. Here are some examples (not a comprehensive list):

  • NASA used visual odometry on Mars: Two Years of Visual Odometry on the Mars Exploration Rovers (pdf)
  • Optical computer mice employ it, and optical mouse sensors have been used in robotics; e.g., Outdoor Downward-facing Optical Flow Odometry with Commodity Sensors (pdf)
  • Insects leverage it; e.g., see the references in Robust Models for Optic Flow Coding in Natural Scenes Inspired by Insect Biology (pdf)

However, these solutions typically require complex and expensive setups. A little more than a month ago I stumbled over a paper by Jason Campbell, Rahul Sukthankar, Illah Nourbakhsh, and Aroon Pahwa explaining how a single regular web cam can be used to achieve robust visual odometry: A Robust Visual Odometry and Precipice Detection System Using Consumer-grade Monocular Vision (pdf). Naturally this got me hooked. To make sure that I fully understood the presented algorithm I reimplemented it in C# and paired it with a fairly comprehensive WinForms-based UI. Here is a screen shot of the result:

VisualOdometryScreenshot

The source code is available from http://code.google.com/p/drh-visual-odometry/ and is covered by the GPL 3.0 license.

More details can be found in the slides (pdf) from my presentation at last Saturday's meeting of the Robotics Society of Southern California.

To facilitate testing the code without an actual setup I uploaded my test video (61MB!).

Building a Self-Balancing Robot: Basic Balancing is Working

A lot has happened since my last blog. The big news is that basic balancing is working now. Let’s start with the now almost obligatory photo of the current robot state:

2010-06-12-16-20-26-0741  2010-06-12-16-20-46-075

For better weight distribution the batteries, motor controller and the Arduino Mega board have been rearranged. The gray box between the wheels underneath the robot platform houses the 3-axis accelerometer ADXL330 and the dual-axis gyroscope IDG300. I used the same components as in my Lego prototype (for details see Building a Self-Balancing Robot – The Prototype). All connections are fed into the protoboard that sits on top of the Arduino Mega. The wooden bar with caster wheels at the end is used to prevent the robot from completely falling over. Naturally the caster wheels don't touch the ground when the robot is balancing.

Enough about the hardware. The vast bulk of the changes since my last blog entry are software related. All the source code is now available as open source and can be accessed from the Google code hosting site http://code.google.com/p/drh-balancing-robot/. The source code includes the Arduino program with its libraries plus a wxPython UI application that is used to communicate with the Arduino controller to retrieve tilt and speed data and to send calibration data to the board. This blog is based on the code as of revision 37.

Let’s start with …

The Arduino Program

The main program is BalancingRobot.pde. It uses a number of custom libraries that encapsulate the details of various 'function blocks'. The code for these libraries resides under the folder Arduino\libraries. The majority of the libraries are straightforward:

  • ADXL330: Access to the ADXL330 accelerometer and associated angle calculations
  • IDG300: Access to and calibration of the IDG300 gyroscope
  • QuadratureEncoder: Handling of the quadrature encoder signals

Others deserve a little bit of explanation.

Communicating with the Sabertooth Motor Driver

The setup of the Sabertooth motor driver has slightly changed since my last post. I now use the much more capable Packetized Serial Mode to talk to the controller. The details are encapsulated in the Sabertooth class (Sabertooth.h). Using a serial port to communicate with the motor driver has the advantage that apart from the motor drive signals, we can also specify a cutoff voltage. Once the voltage drops below the specified threshold the driver will automatically stop the motors, thus protecting the batteries.

The Sabertooth class is used from within the Arduino program like so:

...
#include &lt;HardwareSerial.h&gt;
#include &lt;Sabertooth.h&gt;

// Sabertooth address 128 set with the DIP switches on the controller (OFF OFF ON ON  ON ON)
// Using Serial3 for communicating with the Sabertooth motor driver. Requires a connection
// from pin 14 on the Arduino Mega board to S1 on the Sabertooth device.
Sabertooth _Sabertooth = Sabertooth(128, &amp;Serial3);

...

void setup()
{
 ...

 // give Sabertooth motor driver time to get ready
 delay(2000);

 // Setting baud rate for communication with the Sabertooth device.
 _Sabertooth.InitializeCom(19200);
 _Sabertooth.SetMinVoltage(12.4);
}

...

_Sabertooth.SetSpeedMotorB(motorSignal1);
_Sabertooth.SetSpeedMotorA(motorSignal2);

Another advantage is that only one of the inputs (S1) of the motor driver is used for speed control. The S2 input is configured as an active-low emergency stop. Having an emergency stop is essential when powerful motors are involved! Here is my ‘high-tech’ solution for pulling S2 low when a cord is ripped:

2010-06-12-16-20-16-073

Tilt Angle Calculation and Balancing Algorithm

As discussed in my earlier post Building a Self-Balancing Robot – The Prototype, it is necessary to fuse the output of the accelerometer with the output of the gyroscope to get a reliable tilt angle. This is accomplished by a Kalman filter (TiltCalculator.h and TiltCalculator.cpp) that is based on code from Trammell Hudson.
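
To give a flavor of the sensor fusion without reproducing the Kalman filter, here is a minimal complementary filter in Python; it is a simpler alternative to the Kalman filter actually used in TiltCalculator, not a transcription of it:

class ComplementaryTiltFilter(object):
	def __init__(self, timeConstant = 0.5):
		self._timeConstant = timeConstant # [s]; governs how quickly the accelerometer corrects gyro drift
		self.Angle = 0.0

	def Update(self, accelAngle, gyroRate, dt):
		# integrate the gyro for short-term accuracy, blend in the
		# accelerometer angle for long-term stability
		alpha = self._timeConstant / (self._timeConstant + dt)
		self.Angle = alpha * (self.Angle + gyroRate * dt) + (1.0 - alpha) * accelAngle
		return self.Angle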

The tilt angle and the angular velocity are passed to the balancer class (Balancer.h). It uses four coefficients to calculate the torque:

float CalculateTorque(float angleError, float angularVelocityError, float positionError, float velocityError)
{
 float torque =
 angleError * K1 +
 angularVelocityError * K2 +
 positionError * K3 +
 velocityError * K4;

 return torque;
}

Currently I only use the first two terms. This allows for simple balancing but does not prevent the robot from drifting. The wheel encoders provide the information that is required to plug in the last two terms but I just haven't gotten around to doing it yet. David P. Anderson's excellent nBot pages include a succinct high-level description of the balancing algorithm. A more technical short description of the 'Segway' or inverted pendulum can be found here.

Provided all the required physical properties of the robot are known, the four parameters K1, … K4 can be calculated. However, in my case it is a matter of trial and error – lots of trial and error. This requires an effective way to see the values that are calculated by the control algorithm and to send parameters to the controller. This ability is provided by a Python desktop application.

The Robot Control User Interface

My 'native' language is C# but, inspired by a visit to Willow Garage and their use of Python in ROS, I decided to give Python a try. Python per se is fairly easy to learn; but add to it the desire to decide on a UI framework and to learn how to use it in conjunction with a plotting library and you end up with a bigger job than I was originally planning for. Anyway, I decided on using wxPython as the UI framework, Pydev for Eclipse as the IDE, and matplotlib as the plotting library. I got it working but I am sure I am breaking the Zen of Python in numerous ways.

I won't describe the code in detail. It can be downloaded from the Google code project page drh-balancing-robot along with the controller code discussed above. The browsing entry point is http://code.google.com/p/drh-balancing-robot/source/browse/#svn/trunk/PythonClient/RobotController/src. Again, please note that this blog entry is based on revision 37 of the code. The code is very likely to change in the future.

The starting point for the UI app is \UI\MainApp. It creates the MainModel which contains the logic behind the UI and then brings up the main application window (MainWindow.py). This window contains two plot panels. The plot panel on the top shows a real time graph of

  • The raw tilt angle as determined from the accelerometer values
  • The tilt angle value that is calculated from the fusion of the accelerometer and the gyroscope values
  • The angular velocity as measured by the gyroscope

The lower panel shows a real time graph of the speed of the two motors. I used it to calibrate the speed controller logic. However, since it is not used for balancing I’ll ignore it here. Here is a short video that shows the real time graphs while I tilt the robot back and forth.


Menu entries allow the user to define or change the balancing parameters and speed controller (again, not used here) parameters and send them to the Arduino controller.

The bidirectional communication with the Arduino controller over the serial port is handled by \Model\DataAdapter.py in conjunction with Support\SerialDataGateway.py.

Analysis

The robot is stable as long as it is not pushed too hard. Unfortunately the used wheelchair motors have quite a bit of play in their gears, resulting in jerky back and forth swings around the center point. Also the motors are quite slow and hence cannot compensate for even moderate pushes of the robot. Running the motors at 24V instead of the current 12V will improve the speed issue somewhat but ultimately I am afraid the motors might not cut it. I will post a video of the balancing in action shortly.

Building a Self-Balancing Robot: Motor Driver and Wheel Encoder

The focus of this blog post is the motor control system. Let’s start with a photo of the current state of the robot platform:

2010-04-06-14-03-00-014-small

The wheels have been mounted and the main electric wiring is in place. The two batteries are wired up in parallel for a 12V power supply, resulting in a maximum motor speed of about 60 rpm. The motors could be supplied with 24V but at least for now the power and speed that I get out of 12V is enough.

The Motor Driver

The power to the motors is provided by a Sabertooth 25 dual 25A motor driver (I got mine from The RobotMarketPlace). It is mounted vertically directly behind the batteries. The driver is controlled by a Seeeduino Mega, an Arduino Mega clone from SeeedStudio Depot. In the photo the Seeeduino board is barely visible in the back hidden underneath an attached protoshield. The input mode of the motor driver is configured like this:

MotorDriverDipSwitches

  • 1 & 2: R/C input signal
  • 3: no lithium battery cutoff
  • 4: no mixing mode
  • 5: linear
  • 6: micro controller mode (no timeout fail-safe and no auto-calibration)

The corresponding signal is provided by the Arduino controller. Currently (for testing purposes only) the motor speed is controlled by a 100k potentiometer that is hooked up between ground and 3.3V with its wiper connected to an analog input of the controller. The resulting Arduino program is listed below.

#include "WProgram.h"

unsigned long previousMilliseconds = 0;
unsigned long updateInterval = 50; // update interval in milli seconds

// temporary: controlling the motors with one potentiometer
int potiAnalogInPin = 5;
int motor1PwmOutPin = 11;
int motor2PwmOutPin = 12;

void setup()
{
 analogWrite(motor1PwmOutPin, 127);  // motors stopped at middle value; analogWrite values from 0 to 255
 analogWrite(motor2PwmOutPin, 127);  // motors stopped at middle value; analogWrite values from 0 to 255
}

void loop()
{
 unsigned long currentMilliseconds = millis();

 unsigned long milliSecsSinceLastUpdate = currentMilliseconds - previousMilliseconds;
 if(milliSecsSinceLastUpdate > updateInterval)
 {
 // save the last time we updated
 previousMilliseconds = currentMilliseconds;

 int potiValue = analogRead(potiAnalogInPin);  // read the input pin
 analogWrite(motor1PwmOutPin, potiValue / 4);  // analogRead values go from 0 to 1023, analogWrite values from 0 to 255
 analogWrite(motor2PwmOutPin, potiValue / 4);  // analogRead values go from 0 to 1023, analogWrite values from 0 to 255
 }
}

The Wheel Encoders

The speed and direction of the motors are measured by two home-brew quadrature encoders (one per motor) that are mounted on the upper ends of the motors where the electric brake used to be. The electronic circuit for each quadrature encoder is based on two Hamamatsu P5587 IR photo reflectors and is described in David P. Anderson's article 'Home-Brew Shaft Encoders for the Pittman GM8712 Gearhead Motor':

P5587Circuit

The space on top of the motor is wide enough to mount the two photo reflectors in one line, which simplifies the calibration. The two photos below show the circuit board mounted on the back of the motor, first without and then with the encoder disk attached to the motor shaft. The board is glued to a plastic disk that is attached to cut-off screws. This setup allows me to easily fine-tune the distance between the sensors and the encoder disk.

2010-04-06-13-54-00-007-small

2010-04-06-13-54-56-009-small

 

In order to get a quadrature encoder signal that conveys speed and direction the bright and dark sequences for the two sensors need to be offset from each other. Since the sensors are lined up the offset must be created by the encoder disk itself:

EncoderDisk

I tried to create the encoder disk with Scott Boskovich's encoder design program, which can be downloaded from http://www.societyofrobots.com/sensors_encoder.shtml, but the output was too coarse. Instead I created a C# console application that builds up the encoder disk as an SVG (scalable vector graphics) file which I then open and print from Inkscape (my robot arm calibration page has more details). The C# GridBuilder source code requires .Net 2.0 and either VisualStudio 2008 or the open source IDE SharpDevelop.
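
To give an idea of what such a generator involves, here is a small Python stand-in for the C# GridBuilder (written from scratch as an illustration, not a port of it). It emits two rings with 40 black/white pairs each; the outer ring is shifted by a quarter period so that two sensors mounted in one line see quadrature signals:

import math

PAIRS = 40      # black/white pairs per ring
CENTER = 150.0  # drawing center [SVG user units]

def point(radius, angle):
	return (CENTER + radius * math.cos(angle), CENTER + radius * math.sin(angle))

def wedge(innerRadius, outerRadius, startAngle, endAngle):
	# one black annular sector built from two arcs and two radial lines
	x0, y0 = point(outerRadius, startAngle)
	x1, y1 = point(outerRadius, endAngle)
	x2, y2 = point(innerRadius, endAngle)
	x3, y3 = point(innerRadius, startAngle)
	return ('<path d="M %f %f A %f %f 0 0 1 %f %f L %f %f A %f %f 0 0 0 %f %f Z"/>'
		% (x0, y0, outerRadius, outerRadius, x1, y1, x2, y2, innerRadius, innerRadius, x3, y3))

def ring(innerRadius, outerRadius, phaseOffset):
	period = 2 * math.pi / PAIRS
	return [wedge(innerRadius, outerRadius, i * period + phaseOffset, i * period + phaseOffset + period / 2)
		for i in range(PAIRS)]

with open('encoder_disk.svg', 'w') as f:
	f.write('<svg xmlns="http://www.w3.org/2000/svg" width="300" height="300" fill="black">\n')
	# inner ring unshifted, outer ring shifted by a quarter period
	for element in ring(100, 120, 0.0) + ring(120, 140, math.pi / (2 * PAIRS)):
		f.write(element + '\n')
	f.write('</svg>\n')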

In order to reliably detect the encoder wheel transitions, external interrupts need to be used in the Arduino program. I can highly recommend the Arduino Playground article 'Reading Rotary Encoders', which provided invaluable information for my implementation. The main part of the encoder logic is encapsulated in the QuadratureEncoder class. It heavily borrows from mikkoh's contribution to the Arduino Playground article. The SendInfo function is used for calibration purposes (see below).

/*
 QuadratureEncoder.h - Quadrature encoder library.
 Dr. Rainer Hessmer, April, 2010.
 Released into the public domain.

 Inspired by the code from http://www.arduino.cc/playground/Main/RotaryEncoders
 Incorporates code from mikkoh and others that contributed to the referenced article.
*/

#ifndef QuadratureEncoder_h
#define QuadratureEncoder_h

#include "WProgram.h"

class QuadratureEncoder
{
 /*
 Wraps the encoder setup and the update functions in a class

 !!! NOTE : User must call the functions OnAChanged and OnBChanged from an
 interrupt function him/herself!

 // ------------------------------------------------------------------------------------------------
 // Example usage :
 // ------------------------------------------------------------------------------------------------
 #include "QuadratureEncoder.h"

 QuadratureEncoder encoder(2, 3);

 void setup()
 {
 attachInterrupt(0, HandleInterruptA, CHANGE);
 attachInterrupt(1, HandleInterruptB, CHANGE);
 }

 void loop()
 {
 // do some stuff here - the joy of interrupts is that they take care of themselves
 }

 void HandleInterruptA()
 {
 encoder.OnAChanged();
 }

 void HandleInterruptB()
 {
 encoder.OnBChanged();
 }

 // ------------------------------------------------------------------------------------------------
 // Example usage end
 // ------------------------------------------------------------------------------------------------
 */

public:
 // pinA and pinB must be one of the external interrupt pins
 QuadratureEncoder(int pinA, int pinB)
 {
 _PinA = pinA;
 _PinB = pinB;

 pinMode(_PinA, INPUT);      // sets pin A as input
 pinMode(_PinB, INPUT);      // sets pin B as input

 _ASet = digitalRead(_PinA);   // read the input pin
 _BSet = digitalRead(_PinB);   // read the input pin

 _Position = 0;
 }

 long int getPosition () { return _Position; };
 void setPosition (const long int p) { _Position = p; };

 // Interrupt on A changing state
 void OnAChanged()
 {
 // Test transition
 _ASet = digitalRead(_PinA) == HIGH;
 // and adjust counter + if A leads B
 _Position += (_ASet != _BSet) ? +1 : -1;
 }

 // Interrupt on B changing state
 void OnBChanged()
 {
 // Test transition
 _BSet = digitalRead(_PinB) == HIGH;
 // and adjust counter + if B follows A
 _Position += (_ASet == _BSet) ? +1 : -1;
 }

 void SendInfo()
 {
 bool a = digitalRead(_PinA);
 bool b = digitalRead(_PinB);
 unsigned long milliSecs = millis();

 Serial.print(milliSecs);
 Serial.print("\t");
 Serial.print(a);
 Serial.print("\t");
 Serial.print(b);
 Serial.println();
 }

private:
 int _PinA, _PinB;
 bool _ASet, _BSet;
 long _Position;
};

#endif

The QuadratureEncoder.h file needs to be copied into a folder called QuadratureEncoder under the libraries folder that is used by the Arduino environment so that it becomes accessible to Arduino programs.

The complete source code of the Arduino program combines the motor control code from above with the encoder logic:

#include "WProgram.h"
#include <HardwareSerial.h>
#include <QuadratureEncoder.h>

unsigned long previousMilliseconds = 0;
unsigned long updateInterval = 50; // update interval in milli seconds

// temporary: controlling the motors with one potentiometer
int potiAnalogInPin = 5;
int motor1PwmOutPin = 11;
int motor2PwmOutPin = 12;

// The quadrature encoder for motor 1 uses external interrupts 0 and 1 which are associated with pins 2 and 3
QuadratureEncoder encoderMotor1(2, 3);
// The quadrature encoder for motor 2 uses external interrupts 5 and 4 which are associated with pins 18 and 19
QuadratureEncoder encoderMotor2(18, 19);

void setup()
{
 analogReference(EXTERNAL); // we set the analog reference for the analog to digital converters to 3.3V
 Serial.begin(115200);

 analogWrite(motor1PwmOutPin, 127);  // motors stopped at middle value; analogWrite values from 0 to 255
 analogWrite(motor2PwmOutPin, 127);  // motors stopped at middle value; analogWrite values from 0 to 255

 attachInterrupt(0, HandleMotor1InterruptA, CHANGE); // Pin 2
 attachInterrupt(1, HandleMotor1InterruptB, CHANGE); // Pin 3

 attachInterrupt(5, HandleMotor2InterruptA, CHANGE); // Pin 18
 attachInterrupt(4, HandleMotor2InterruptB, CHANGE); // Pin 19
}

void loop()
{
 unsigned long currentMilliseconds = millis();

 unsigned long milliSecsSinceLastUpdate = currentMilliseconds - previousMilliseconds;
 if(milliSecsSinceLastUpdate > updateInterval)
 {
 // save the last time we updated
 previousMilliseconds = currentMilliseconds;

 int potiValue = analogRead(potiAnalogInPin);  // read the input pin
 analogWrite(motor1PwmOutPin, potiValue / 4);  // analogRead values go from 0 to 1023, analogWrite values from 0 to 255
 analogWrite(motor2PwmOutPin, potiValue / 4);  // analogRead values go from 0 to 1023, analogWrite values from 0 to 255
 }
}

// Interrupt service routines for motor 1 quadrature encoder
void HandleMotor1InterruptA()
{
 encoderMotor1.OnAChanged();
 //encoderMotor1.SendInfo();
}

void HandleMotor1InterruptB()
{
 encoderMotor1.OnBChanged();
 //encoderMotor1.SendInfo();
}

// Interrupt service routines for motor 2 quadrature encoder
void HandleMotor2InterruptA()
{
 encoderMotor2.OnAChanged();
 //encoderMotor2.SendInfo();
}

void HandleMotor2InterruptB()
{
 encoderMotor2.OnBChanged();
 //encoderMotor2.SendInfo();
}

Note: Yesterday I upgraded to Arduino software version 0018. Since the upgrade I can no longer send data over the serial port and at the same time use the external interrupts and the analogWrite functions. I will update this post once I have a solution.

Before the upgrade to 0018 I used the currently commented out lines ‘encoderMotor<1|2>.SendInfo()’ to send the measured transitions to the PC for plotting. As an example here is the plot of the encoder output for Motor1 over a time period of 100 milliseconds:

QuadratureEncoderSignal

The blue and red curves are the output of the two P5587 photo reflectors attached to Motor1. For the measurement I drove the motor as slowly as possible. Due to the diminished power the motor speed varied slightly, as can be seen in the varying pulse widths. If the sensors were perfectly aligned, the two signals (blue and red) would be phase-shifted by exactly 90°. However, as long as the two signals are sufficiently offset, four transitions can be accurately measured for each black & white segment of the encoder ring. The encoder disk has 40 black & white pairs in each ring, resulting in 160 transitions per full rotation.

The shaft that the encoder disk is attached to makes 32 full rotations for each rotation of the wheel. Thus I get 32 x 160 = 5120 transitions for each turn of the wheel giving a resolution of about 0.07° or 0.2mm for the 350mm (13.8″) diameter wheels. This is more than enough and in the future I will probably only use one interrupt per wheel for a resolution of 0.4mm.
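
As a quick sanity check, these numbers can be reproduced with a few lines of Python:

import math

countsPerWheelTurn = 32 * 160                 # gear rotations per wheel turn x transitions per gear turn
print countsPerWheelTurn                      # 5120
print 360.0 / countsPerWheelTurn              # ~0.07 degrees per transition
print math.pi * 350.0 / countsPerWheelTurn    # ~0.21 mm per transition for 350 mm wheels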

So, the ‘drive train’ is pretty much in place now. Next I will focus on hooking up the accelerometer and the gyroscope.

Monte Carlo Localization for Robots

Recently I started to read the excellent book Probabilistic Robotics by Sebastian Thrun, Wolfram Burgard, and Dieter Fox and got intrigued by Monte Carlo Localization (MCL). The MCL algorithm fully takes into account the uncertainty associated with drive commands and sensor measurements and allows a robot to locate itself in an environment provided a map is available.

In order to make sure that I really understand the algorithm I decided to implement it myself. Furthermore, I did not find a readily accessible, standalone open source implementation that would allow me to exercise the algorithm in a simulated environment. Covering the resulting implementation is a bit too much for this blog; instead I decided to add a multi-part article to my web site. You can find it here: Monte Carlo Localization for Robots

MCLSimulationSmall
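
For readers who want the gist of the algorithm before diving into the article, here is a bare-bones Python sketch of a single MCL update step. The names and structure are my own shorthand for this post, not code from the article or the book; motion_model and measurement_likelihood stand in for the map- and robot-specific pieces:

import random

def mcl_step(particles, control, measurement, motion_model, measurement_likelihood):
	# 1. prediction: apply the noisy motion model to every particle
	predicted = [motion_model(p, control) for p in particles]

	# 2. correction: weight each particle by how well it explains the measurement
	weights = [measurement_likelihood(measurement, p) for p in predicted]

	# 3. resampling: draw a new particle set with probability proportional to the weights
	total = sum(weights)
	normalized = [w / total for w in weights]
	return [predicted[weighted_choice(normalized)] for _ in predicted]

def weighted_choice(normalized_weights):
	r = random.random()
	cumulative = 0.0
	for i, w in enumerate(normalized_weights):
		cumulative += w
		if r <= cumulative:
			return i
	return len(normalized_weights) - 1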