Using the ROS Navigation Stack

In this post I cover how we can leverage the ROS navigation stack to let the robot autonomously drive from a given location in a map to a defined goal. I will skim over many of the details since the associated tutorials on the ROS wiki do a great job of describing how to set up the navigation stack.

With SLAM working on the Ardros robot (see my previous post) we already have much of the required setup covered. In particular we have:

  • A robot that publishes odometry information and accepts drive commands
  • (Faked) laser scanner sensor data derived from the Kinect sensor
  • Transform configurations covering the transformations between base_link, the odometry frame, and the laser scanner frame (depth frame of the Kinect sensor)
  • A map generated via SLAM

From a high-level perspective, navigation brings in these additions:

  • Localization of the robot in a given map using Monte Carlo Localization (MCL). For more information about MCL (including C# source code) please see my blog post Monte Carlo Localization for Robots.
  • Global and local path planning. Path planning most notably requires a map, information about the size (footprint) of the robot, and information about how fast the robot can drive and accelerate.

Before reading on I highly recommend that you at least read through the ROS tutorial ‘Setup and Configuration of the Navigation Stack on a Robot’.

Okay, let’s dive into the Ardros specific files. As described in my previous posts, the source code is available from http://code.google.com/p/drh-robotics-ros/. This post is based on revision 79 of the source code.

The whole navigation machinery is started via the launch file navigation.launch. The launch file itself is very simple:

<launch>
	<include file="$(find ardros)/launch/ardros_configuration.launch"/>
	<include file="$(find ardros)/launch/move_base.launch"/>
</launch>

It brings in two other launch files, the first being ardros_configuration.launch, which we already know from the previous post. It handles the Ardros specific aspects including odometry, drive commands, and the usage of the Kinect sensor as a fake laser scanner.

The second include file, move_base.launch, is responsible for bringing up the navigation specific aspects:

<launch>
  <!-- Run the map server -->
  <node name="map_server" pkg="map_server" type="map_server" args="$(find ardros)/maps/map.pgm 0.05"/>

  <!--- Run AMCL -->
  <include file="$(find ardros)/launch/amcl_diff.launch" />

  <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
    <rosparam file="$(find ardros)/info/costmap_common_params.yaml" command="load" ns="global_costmap" />
    <rosparam file="$(find ardros)/info/costmap_common_params.yaml" command="load" ns="local_costmap" />
    <rosparam file="$(find ardros)/info/local_costmap_params.yaml" command="load" />
    <rosparam file="$(find ardros)/info/global_costmap_params.yaml" command="load" />
    <rosparam file="$(find ardros)/info/base_local_planner_params.yaml" command="load" />
  </node>
</launch>

The first relevant line brings up the map server and specifies what map it should serve up. It includes the path to the map which needs to be adjusted accordingly. Second, it imports the launch file for AMCL (adaptive (or KLD-sampling) Monte Carlo localization) configured for a robot with differential drive (see odom_model_type in amcl_diff.launch). AMCL is responsible for localizing the robot on the map. AMCL uses numerous parameters, many of which are specified in the launch file. I largely use the default values but increased the particle counts. I expect that going forward I will likely fine tune more of these parameters.
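For orientation, a stripped-down amcl launch file for a differential drive robot looks roughly like the sketch below. This is only an illustration; the actual amcl_diff.launch in the repository sets many more parameters, and the particle counts shown here merely indicate the kind of values I increased:

<launch>
  <node pkg="amcl" type="amcl" name="amcl">
    <!-- Odometry model for a differential drive robot -->
    <param name="odom_model_type" value="diff"/>
    <!-- Example particle counts; more particles give a more robust
         (but computationally more expensive) pose estimate -->
    <param name="min_particles" value="500"/>
    <param name="max_particles" value="5000"/>
    <!-- Translate/rotate at least this far before each filter update -->
    <param name="update_min_d" value="0.2"/>
    <param name="update_min_a" value="0.5"/>
  </node>
</launch>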

It is worth noting that Ardros is quite similar to Willow Garage’s turtlebot in its use of the Kinect sensor in conjunction with the navigation stack. I started to use the node pointcloud_to_laserscan before it became part of the turtlebot stack and hence had to come up with my own launch files and associated parameters. In the future I should be able to borrow quite a bit from turtlebot, at a minimum better parameters for AMCL.

Finally, the last part of the move_base.launch file brings up the move_base node, which is the centerpiece of the ROS navigation stack (see the diagram and detailed explanation at http://www.ros.org/wiki/navigation/Tutorials/RobotSetup). The content of the referenced yaml files should be pretty clear after reading through the navigation stack tutorial. I will only briefly touch on the robot’s footprint, which is defined in the file costmap_common_params.yaml:

# for details see: http://www.ros.org/wiki/navigation/Tutorials/RobotSetup

obstacle_range: 2.5
raytrace_range: 3.0
footprint: [[0.305, 0.278], [0.04, 0.193], [-0.04, 0.193], [-0.282, 0.178], [-0.282, -0.178], [-0.04, -0.193], [0.04, -0.193], [0.305, -0.278]]
#footprint: [[0.075, 0.178], [0.04, 0.193], [-0.04, 0.193], [-0.282, 0.178], [-0.282, -0.178], [-0.04, -0.193], [0.04, -0.193], [0.075, -0.178]]
#robot_radius: ir_of_robot
inflation_radius: 0.6

observation_sources: laser_scan_sensor

laser_scan_sensor: {sensor_frame: openni_depth_frame, data_type: LaserScan, topic: scan, marking: true, clearing: true}

The specified points define a polygon that reaches out beyond the front of the robot; i.e., I pretend the robot is bigger than it actually is. The reason for this is that I want to make sure that the path planner does not generate paths that drive the robot closer to an obstacle than it can actually see. The minimum distance that the Kinect sensor can measure is about 0.6 m (2 ft). To mitigate this short-range ‘blindness’ I:

  1. Mount the Kinect towards the back of the robot
  2. Pretend that the robot footprint reaches out beyond the front wheels further than it actually does.
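The velocity and acceleration limits mentioned earlier live in base_local_planner_params.yaml. As a rough illustration, such a file looks like the sketch below; the numbers are made up for a slow differential drive robot and are not the actual Ardros values:

TrajectoryPlannerROS:
  # velocity limits (illustrative values only)
  max_vel_x: 0.3
  min_vel_x: 0.1
  max_vel_theta: 1.0
  min_in_place_vel_theta: 0.4

  # acceleration limits (illustrative values only)
  acc_lim_x: 1.0
  acc_lim_theta: 1.5

  # a differential drive robot is not holonomic
  holonomic_robot: false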

Now we are ready to run the navigation stack. Close all terminals and then run the following command in a new terminal:

roslaunch `rospack find ardros`/launch/navigation.launch

In a second terminal bring up rviz:

rosrun rviz rviz -d `rospack find ardros`/rviz/navigation.vcg

You will probably see a number of error messages in the first terminal since the robot has not yet been located in the map. To locate the robot, set the starting pose by clicking on the ‘2D Pose Estimate’ button in rviz, then clicking on the current location of the robot in the map and, while still holding the left mouse button, dragging the mouse in the direction the robot faces. Next, specify the goal in the same way via the ‘2D Nav Goal’ button; you should then see the planned path, and the robot should start driving towards the goal.

Optionally you can monitor the status of the robot driving to the specified goal by running the following command in a third terminal:

rostopic echo /move_base/status
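Goals do not have to come from rviz, by the way; any node can send them to move_base via actionlib. The following Python sketch is not part of the Ardros sources and the goal coordinates are made up, but it illustrates the pattern:

#!/usr/bin/env python
# Illustrative sketch (not part of the Ardros sources): sending a navigation
# goal to move_base programmatically instead of via rviz.
import roslib; roslib.load_manifest('actionlib'); roslib.load_manifest('move_base_msgs')
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_nav_goal')

client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0   # made-up goal coordinates in the map frame
goal.target_pose.pose.position.y = 0.5
goal.target_pose.pose.orientation.w = 1.0  # face along the map's x axis

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("Goal finished with state %d" % client.get_state())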

The video below is the recording of a test run showing the output in rviz while Ardros navigates a room. The video shows how the initial pose and the goal are selected. Initially the probabilistic cloud of poses used by AMCL is very wide. As the robot drives around, the cloud condenses as the robot position determined from the odometry and laser scan information becomes better defined. The video also shows the calculated path that the robot tries to follow to get to the specified goal.

Sending Data from Arduino to ROS

For a while now I have been following Willow Garage’s open source ROS (Robot Operating System). It just celebrated its third anniversary, showing impressive growth with more and more universities and companies using and extending ROS: http://www.willowgarage.com/blog/2010/11/08/happy-3rd-anniversary-ros. Furthermore, ROS is beginning to enter the hobby market.

As a first step to get familiar with ROS I decided to create a simple package that would allow me to receive lines of text from the Arduino board and publish them into ROS. Andrew Harris has already published a package for Arduino (http://www.ros.org/news/2010/01/arduino-and-cmucam3-support-for-ros.html) that uses an efficient binary protocol to communicate between Arduino and ROS. I might eventually use it but nothing beats writing a bit of software yourself when it comes to understanding a new software infrastructure.

The source code of my simple Arduino playground package is available on Google code at http://code.google.com/p/drh-robotics-ros/source/browse/?r=60#svn%2Ftrunk%2Fros%2Fplayground. If you want to play with it I suggest that you check out revision 60 of the source code into a folder under your home directory. In order for ROS to see the new package, its path needs to be added to the ROS_PACKAGE_PATH environment variable. This is easily accomplished by adding a line at the end of the .bashrc file in your home folder. In my case the line reads:

export ROS_PACKAGE_PATH=~/dev/drh-robotics-ros/ros/playground:$ROS_PACKAGE_PATH

In your case the path to the playground folder might be different. To test it out we need an Arduino board with a sketch that periodically writes a string followed by a newline to the serial port like so:

Serial.println("Hello World");

The Python code of the arduino ROS node consists of two files. The entry point is the file /ros/playground/nodes/arduino.py:

#!/usr/bin/env python
'''
Created on November 20, 2010

@author: Dr. Rainer Hessmer
'''

import roslib; roslib.load_manifest('playground')
import rospy
from std_msgs.msg import String
from playground.msg import Encoder

from SerialDataGateway import SerialDataGateway

class Arduino(object):
    '''
    Helper class for communicating over serial port with an Arduino board
    '''

    def _HandleReceivedLine(self, line):
        rospy.logdebug(line)
        self._Publisher.publish(String(line))

    def __init__(self, port="/dev/ttyUSB0", baudrate=115200):
        '''
        Initializes the receiver class.
        port: The serial port to listen to.
        baudrate: Baud rate for the serial communication
        '''

        self._Publisher = rospy.Publisher('serial', String)
        rospy.init_node('arduino')

        self._SerialDataGateway = SerialDataGateway(port, baudrate, self._HandleReceivedLine)

    def Start(self):
        rospy.logdebug("Starting")
        self._SerialDataGateway.Start()

    def Stop(self):
        rospy.logdebug("Stopping")
        self._SerialDataGateway.Stop()

if __name__ == '__main__':
    arduino = Arduino("/dev/ttyUSB0", 115200)
    try:
        arduino.Start()
        rospy.spin()

    except rospy.ROSInterruptException:
        arduino.Stop()

In the __main__ function at the bottom of the source file an instance of the Arduino class is created with the appropriate port and baud rate. In the constructor the publisher for the ROS topic ‘serial’ is created and the node is initialized. Finally an instance of the helper class SerialDataGateway is created (more info below). Once this is done the Arduino instance is started. The call to rospy.spin() ensures that the program keeps running.

The helper class SerialDataGateway is responsible for receiving data via the serial port and assembling the data into individual lines. When it receives a ‘\n’ it calls into the line handler function that was passed into the constructor. The class uses a worker thread which in the current usage pattern is not really necessary. But it allows me to enhance the node later to support read and write operations in parallel.

#!/usr/bin/env python
'''
Created on November 20, 2010

@author: Dr. Rainer Hessmer
'''
import threading
import serial
from cStringIO import StringIO
import time
import rospy

def _OnLineReceived(line):
    print(line)

class SerialDataGateway(object):
    '''
    Helper class for receiving lines from a serial port
    '''

    def __init__(self, port="/dev/ttyUSB0", baudrate=115200, lineHandler=_OnLineReceived):
        '''
        Initializes the receiver class.
        port: The serial port to listen to.
        lineHandler: The function to call when a line was received.
        '''
        self._Port = port
        self._Baudrate = baudrate
        self.ReceivedLineHandler = lineHandler
        self._KeepRunning = False

    def Start(self):
        self._Serial = serial.Serial(port=self._Port, baudrate=self._Baudrate, timeout=1)

        self._KeepRunning = True
        self._ReceiverThread = threading.Thread(target=self._Listen)
        self._ReceiverThread.setDaemon(True)
        self._ReceiverThread.start()

    def Stop(self):
        rospy.loginfo("Stopping serial gateway")
        self._KeepRunning = False
        time.sleep(.1)
        self._Serial.close()

    def _Listen(self):
        stringIO = StringIO()
        while self._KeepRunning:
            data = self._Serial.read()
            if data == '\r':
                # ignore carriage returns
                pass
            elif data == '\n':
                # a complete line has been received; hand it to the line handler
                self.ReceivedLineHandler(stringIO.getvalue())
                stringIO.close()
                stringIO = StringIO()
            else:
                stringIO.write(data)

    def Write(self, data):
        info = "Writing to serial port: %s" % data
        rospy.loginfo(info)
        self._Serial.write(data)

if __name__ == '__main__':
    dataReceiver = SerialDataGateway("/dev/ttyUSB0", 115200)
    dataReceiver.Start()

    raw_input("Hit <Enter> to end.")
    dataReceiver.Stop()

The Arduino class passes its ‘_HandleReceivedLine’ function into the SerialDataGateway constructor. As a result, whenever a new line is received, the _HandleReceivedLine function in the Arduino class is called:

def _HandleReceivedLine(self, line):
    rospy.logdebug(line)
    self._Publisher.publish(String(line))

The function simply logs the line and then publishes it as a string. In a real world scenario I would modify the function to parse the line and extract, say, pose data (x, y, and angle). Correspondingly a pose message rather than a simple string message would be published.
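As an illustration only (this variant is not in the repository), a handler that expects lines of the form ‘x y theta’ and publishes a geometry_msgs/Pose2D message could look like the sketch below, assuming the constructor created a matching publisher via self._PosePublisher = rospy.Publisher('pose', Pose2D):

from geometry_msgs.msg import Pose2D

def _HandleReceivedLine(self, line):
    rospy.logdebug(line)
    try:
        # expect lines like "0.25 1.30 0.78" (x, y, theta)
        x, y, theta = [float(token) for token in line.split()]
    except ValueError:
        rospy.logwarn("Could not parse line: %s" % line)
        return
    self._PosePublisher.publish(Pose2D(x=x, y=y, theta=theta))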

Finally we are ready to try out the new node. The required steps are as follows:

  • Hook up an Arduino board running a sketch that causes a line to be written to the serial port periodically.
  • In a new terminal run:
    roscore
  • In another new terminal run:
    roscd playground
    make
    rosrun playground arduino.py

Provided the port and baud rate specified in arduino.py match your setup, you should now have a running ROS Arduino node. Next we want to tap into the published topic.

  • In a new terminal subscribe to the ‘serial’ topic:
    rostopic echo /serial
    You should see the published lines fly by in this terminal.
  • To get a graphical representation of what is running start rxgraph in yet another terminal. The resulting graph should look similar to this:

[Screenshot: rxgraph output]

On the left is the /arduino node with the exposed /serial topic. The node in the middle corresponds to the rostopic command that we started. It subscribes to the /serial topic and displays the values in the terminal.

This concludes my first baby step into the world of ROS.

Building a Self-Balancing Robot: Basic Balancing is Working

A lot has happened since my last blog post. The big news is that basic balancing is working now. Let’s start with the now almost obligatory photo of the current robot state:

[Photos: two views of the current state of the balancing robot]

For better weight distribution the batteries, motor controller and the Arduino Mega board have been rearranged. The gray box between the wheels underneath the robot platform houses the 3-axis accelerometer ADXL330 and the dual-axis gyroscope IDG300. I used the same components as in my Lego prototype (for details see Building a Self-Balancing Robot – The Prototype). All connections are fed into the protoboard that sits on top of the Arduino Mega. The wooden bar with caster wheels at the end is used to prevent the robot from completely falling over. Naturally the caster wheels don’t touch the ground when the robot is balancing.

Enough about the hardware. The vast bulk of the changes since my last blog entry are software related. All the source code is now available as open source and can be accessed from the Google code hosting site http://code.google.com/p/drh-balancing-robot/. The source code includes the Arduino program with its libraries plus a wxPython UI application that is used to communicate with the Arduino controller to retrieve tilt and speed data and to send calibration data to the board. This blog entry is based on the code as of revision 37.

Let’s start with …

The Arduino Program

The main program is BalancingRobot.pde. It uses a number of custom libraries that encapsulate the details of various ‘function blocks’. The code for these libraries resides under the folder Arduino\libraries. The majority of the libraries are straightforward:

  • ADXL330: Access to the ADXL330 accelerometer and associated angle calculations
  • IDG300: Access to and calibration of the IDG300 gyroscope
  • QuadratureEncoder: Handling of the quadrature encoder signals

Others deserve a little bit of explanation.

Communicating with the Sabertooth Motor Driver

The setup of the Sabertooth motor driver has slightly changed since my last post. I now use the much more capable Packetized Serial Mode to talk to the controller. The details are encapsulated in the Sabertooth class (Sabertooth.h). Using a serial port to communicate with the motor driver has the advantage that, apart from the motor drive signals, we can also specify a cutoff voltage. Once the voltage drops below the specified threshold the driver will automatically stop the motors, thus protecting the batteries.
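The packetized protocol itself is straightforward: every packet consists of an address byte, a command byte, a data byte, and a 7-bit checksum over the other three bytes. The following Python sketch is not the Arduino Sabertooth class; it merely shows, as I understand the Sabertooth documentation, how the same packets could be sent from a PC via pyserial:

# Illustrative sketch of the Sabertooth packetized serial framing.
import serial

SABERTOOTH_ADDRESS = 128  # set with the DIP switches on the controller

def make_packet(command, data):
    # the checksum is the lower 7 bits of the sum of the other three bytes
    checksum = (SABERTOOTH_ADDRESS + command + data) & 0x7F
    return bytearray([SABERTOOTH_ADDRESS, command, data, checksum])

port = serial.Serial('/dev/ttyUSB0', 19200)
port.write(make_packet(0, 64))   # command 0: motor 1 forward, speed 0..127
port.write(make_packet(4, 64))   # command 4: motor 2 forward, speed 0..127
port.write(make_packet(2, 32))   # command 2: min voltage; 32 = (12.4V - 6V) * 5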

The Sabertooth class is used from within the Arduino program like so:

...
#include <HardwareSerial.h>
#include <Sabertooth.h>

// Sabertooth address 128 set with the DIP switches on the controller (OFF OFF ON ON ON ON)
// Using Serial3 for communicating with the Sabertooth motor driver. Requires a connection
// from pin 14 on the Arduino Mega board to S1 on the Sabertooth device.
Sabertooth _Sabertooth = Sabertooth(128, &Serial3);

...

void setup()
{
  ...

  // give Sabertooth motor driver time to get ready
  delay(2000);

  // Setting baud rate for communication with the Sabertooth device.
  _Sabertooth.InitializeCom(19200);
  _Sabertooth.SetMinVoltage(12.4);
}

...

_Sabertooth.SetSpeedMotorB(motorSignal1);
_Sabertooth.SetSpeedMotorA(motorSignal2);

Another advantage is that only one of the inputs (S1) of the motor driver is used for speed control. The S2 input is configured as an active-low emergency stop. Having an emergency stop is essential when powerful motors are involved! Here is my ‘high-tech’ solution for pulling S2 low when a cord is ripped:

[Photo: emergency stop cord wired to pull the S2 input low]

Tilt Angle Calculation and Balancing Algorithm

As discussed in my earlier post Building a Self-Balancing Robot – The Prototype, it is necessary to fuse the output of the accelerometer with the output of the gyroscope to get a reliable tilt angle. This is accomplished by a Kalman filter (TiltCalculator.h and TiltCalculator.cpp) that is based on code from Trammell Hudson.
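To give an idea of the filter’s structure, here is a Python transliteration of this widely used single-axis Kalman filter, which estimates the tilt angle together with the gyro bias. It is a sketch only, not the actual TiltCalculator code, and the noise constants are typical example values rather than my tuned ones:

# Sketch of the common single-axis Kalman tilt filter (state: angle, gyro bias).
class KalmanTiltFilter(object):
    def __init__(self, qAngle=0.001, qGyro=0.003, rAngle=0.03):
        self.Angle = 0.0                     # fused tilt angle estimate
        self.Bias = 0.0                      # estimated gyro bias
        self.P = [[0.0, 0.0], [0.0, 0.0]]    # error covariance matrix
        self.QAngle, self.QGyro, self.RAngle = qAngle, qGyro, rAngle

    def Update(self, accelAngle, gyroRate, dt):
        # predict: integrate the bias-corrected gyro rate
        self.Angle += dt * (gyroRate - self.Bias)
        self.P[0][0] += dt * (dt * self.P[1][1] - self.P[0][1] - self.P[1][0] + self.QAngle)
        self.P[0][1] -= dt * self.P[1][1]
        self.P[1][0] -= dt * self.P[1][1]
        self.P[1][1] += dt * self.QGyro

        # correct with the accelerometer-derived angle measurement
        innovation = accelAngle - self.Angle
        s = self.P[0][0] + self.RAngle
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        self.Angle += k0 * innovation
        self.Bias += k1 * innovation
        p00, p01 = self.P[0][0], self.P[0][1]
        self.P[0][0] -= k0 * p00
        self.P[0][1] -= k0 * p01
        self.P[1][0] -= k1 * p00
        self.P[1][1] -= k1 * p01
        return self.Angle

The Update method would be called once per control cycle with the latest accelerometer-derived angle and the raw gyro rate.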

The tilt angle and the angular velocity are passed to the balancer class (Balancer.h). It uses four coefficients to calculate the torque:

float CalculateTorque(float angleError, float angularVelocityError, float positionError, float velocityError)
{
  float torque =
    angleError * K1 +
    angularVelocityError * K2 +
    positionError * K3 +
    velocityError * K4;

  return torque;
}

Currently I only use the first two terms. This allows for simple balancing but does not prevent the robot from drifting. The wheel encoders give the information that is required to plug in the last two terms but I just haven’t gotten around to doing it yet. David P. Anderson’s excellent nBot pages include a succinct high-level description of the balancing algorithm. A more technical short description of the ‘Segway’ or inverted pendulum can be found here.

Provided all the required physical properties of the robot are known, the four parameters K1, … K4 can be calculated. However, in my case it is a matter of trial and error – lots of trial and error. This requires an effective way to see the values that are calculated by the control algorithm and to send parameters to the controller. This ability is provided by a Python desktop application.

The Robot Control User Interface

My ‘native’ language is C# but, inspired by a visit to Willow Garage and their use of Python in ROS, I decided to give Python a try. Python per se is fairly easy to learn; but add to it the need to decide on a UI framework and to learn how to use it in conjunction with a plotting library, and you end up with a bigger job than I had originally planned for. Anyway, I decided on wxPython as the UI framework, Pydev for Eclipse as the IDE, and matplotlib as the plotting library. I got it working but I am sure I am breaking the Zen of Python in numerous ways.

I won’t describe the code in detail. It can be downloaded from the Google code project page drh-balancing-robot along with the controller code discussed above. The browsing entry point is http://code.google.com/p/drh-balancing-robot/source/browse/#svn/trunk/PythonClient/RobotController/src. Again, please note that this blog entry is based on revision 37 of the code. The code is very likely to change in the future.

The starting point for the UI app is \UI\MainApp. It creates the MainModel, which contains the logic behind the UI, and then brings up the main application window (MainWindow.py). This window contains two plot panels. The panel on top shows a real-time graph of

  • The raw tilt angle as determined from the accelerometer values
  • The tilt angle value that is calculated from the fusion of the accelerometer and the gyroscope values
  • The angular velocity as measured by the gyroscope

The lower panel shows a real-time graph of the speed of the two motors. I used it to calibrate the speed controller logic. However, since it is not used for balancing I’ll ignore it here. Here is a short video showing the real-time graphs while I tilt the robot back and forth.


Menu entries allow the user to define or change the balancing parameters and the speed controller parameters (again, not used here) and to send them to the Arduino controller.

The bidirectional communication with the Arduino controller over the serial port is handled by \Model\DataAdapter.py in conjunction with \Support\SerialDataGateway.py.

Analysis

The robot is stable as long as it is not pushed too hard. Unfortunately the wheelchair motors I use have quite a bit of play in their gears, resulting in jerky back-and-forth swings around the center point. Also the motors are quite slow and hence cannot compensate for even moderate pushes of the robot. Running the motors at 24V instead of the current 12V will improve the speed issue somewhat, but ultimately I am afraid the motors might not cut it. I will post a video of the balancing in action shortly.