Using the ROS Navigation Stack

In this post I cover how we can leverage the ROS navigation stack to let the robot autonomously drive from a given location in a map to a defined goal. I will skim over many of the details since the associated tutorials on the ROS wiki do a great job in describing how to set up the navigation stack.

With SLAM working on the Ardros robot (see my previous post) we already have much of the required setup covered. In particular we have:

  • A robot that publishes odometry information and accepts drive commands
  • (Faked) laser scanner sensor data derived from the Kinect sensor
  • Transform configurations covering the transformations between base_link, the odometry frame, and the laser scanner frame (depth frame of the Kinect sensor)
  • A map generated via SLAM

From a high level perspective navigation brings in these additions:

  • Localization of the robot in a given map using Monte Carlo Localization (MCL). For more information about MCL (including C# source code) please see my blog post Monte Carlo Localization for Robots.
  • Global and local path planning. Amongst other parameters, path planning most notably requires a map, information about the size (footprint) of the robot, and information about how fast the robot can drive and accelerate.

Before reading on I highly recommend that you at least read through the ROS tutorial ‘Setup and Configuration of the Navigation Stack on a Robot’.

Okay, let’s dive into the Ardros specific files. As described in my previous posts the source code is available from here: http://code.google.com/p/drh-robotics-ros/. This post is based on revision 79 of the source code.

The whole navigation machinery is started via the launch file navigation.launch. The launch file itself is very simple:

<launch>
	<include file="$(find ardros)/launch/ardros_configuration.launch"/>
	<include file="$(find ardros)/launch/move_base.launch"/>
</launch>

It brings in two other launch files, the first being ardros_configuration.launch, which we already know from the previous post. It handles the Ardros-specific aspects including odometry, drive commands, and the usage of the Kinect sensor as a fake laser scanner.

The second include file, move_base.launch, is responsible for bringing up the navigation specific aspects:

<launch>
  <!-- Run the map server -->
  <node name="map_server" pkg="map_server" type="map_server" args="$(find ardros)/maps/map.pgm 0.05"/>

  <!--- Run AMCL -->
  <include file="$(find ardros)/launch/amcl_diff.launch" />

  <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
    <rosparam file="$(find ardros)/info/costmap_common_params.yaml" command="load" ns="global_costmap" />
    <rosparam file="$(find ardros)/info/costmap_common_params.yaml" command="load" ns="local_costmap" />
    <rosparam file="$(find ardros)/info/local_costmap_params.yaml" command="load" />
    <rosparam file="$(find ardros)/info/global_costmap_params.yaml" command="load" />
    <rosparam file="$(find ardros)/info/base_local_planner_params.yaml" command="load" />
  </node>
</launch>

The first relevant line brings up the map server and specifies what map it should serve up. It includes the path to the map which needs to be adjusted accordingly. Secondly it imports the launch file for AMCL (adaptive (or KLD-sampling) Monte Carlo localization) configured for a robot with differential drive (see odom_model_type in amcl_diff.launch). AMCL is responsible for localizing the robot on the map. AMCL uses numerous parameters, many of which are specified in the launch file. I largely use the default values but increased the particle counts. I expect that going forward I will likely fine-tune more of these parameters.

It is worth noting that Ardros is quite similar to Willow Garage’s turtlebot in its use of the Kinect sensor in conjunction with the navigation stack. I started to use the node pointcloud_to_laserscan before it became part of the turtlebot stack and hence had to come up with my own launch files and associated parameters. In the future I should be able to borrow quite a bit from turtlebot, at a minimum better parameters for AMCL.

Finally the last part of the move_base.launch file brings up the move_base node, which is the centerpiece of the ROS navigation stack (see diagram and detailed explanation at http://www.ros.org/wiki/navigation/Tutorials/RobotSetup). The content of the referenced yaml files should be pretty clear after reading through the navigation stack tutorial. I will only briefly touch on the robot’s footprint which is defined in the file costmap_common_params.yaml:

# for details see: http://www.ros.org/wiki/navigation/Tutorials/RobotSetup

obstacle_range: 2.5
raytrace_range: 3.0
footprint: [[0.305, 0.278], [0.04, 0.193], [-0.04, 0.193], [-0.282, 0.178], [-0.282, -0.178], [-0.04, -0.193], [0.04, -0.193], [0.305, -0.278]]
#footprint: [[0.075, 0.178], [0.04, 0.193], [-0.04, 0.193], [-0.282, 0.178], [-0.282, -0.178], [-0.04, -0.193], [0.04, -0.193], [0.075, -0.178]]
#robot_radius: ir_of_robot
inflation_radius: 0.6

observation_sources: laser_scan_sensor

laser_scan_sensor: {sensor_frame: openni_depth_frame, data_type: LaserScan, topic: scan, marking: true, clearing: true}

The specified points define a polygon that reaches out beyond the front of the robot. In other words, I pretend the robot is bigger than it actually is. The reason for this is that I want to make sure that the path planner does not generate paths that drive the robot closer to an obstacle than it can actually see. The minimum distance that the Kinect sensor can measure is about 0.6 m (2 ft). To mitigate this short-range ‘blindness’ I

  1. Mount the Kinect towards the back of the robot
  2. Pretend that the robot footprint reaches out beyond the front wheels further than it actually does (a rough numeric check follows below).
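
For a rough idea of the numbers involved, here is a small back-of-the-envelope check in Python. The figures are assumptions pulled from this post and the SLAM post below: the Kinect sits roughly 0.115 m behind the base_link origin (see the static transform publisher in ardros_configuration.launch), its minimum range is roughly 0.6 m, and the real front of the robot is near the commented-out footprint value of 0.075 m.

# Rough back-of-the-envelope check of the Kinect's short-range blind zone.
# All numbers are approximate values taken from the configuration discussed in this post.
KINECT_X_OFFSET = -0.115   # Kinect mounted about 0.115 m behind the base_link origin
KINECT_MIN_RANGE = 0.6     # approximate minimum depth the Kinect can measure (m)
REAL_FRONT = 0.075         # approximate real front edge of the robot (commented-out footprint)
FAKE_FRONT = 0.305         # front edge of the padded footprint used for planning

# Nearest obstacle the sensor can still see, expressed in the base_link frame:
nearest_visible = KINECT_X_OFFSET + KINECT_MIN_RANGE   # = 0.485 m

# How much of that blind zone the padded footprint absorbs:
padding = FAKE_FRONT - REAL_FRONT                       # = 0.23 m

print("Obstacles closer than %.3f m in front of base_link are invisible." % nearest_visible)
print("The padded footprint adds %.3f m of clearance at the front." % padding)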

Now we are ready to run the navigation stack. Close all terminals and then run the following command in a new terminal:

roslaunch `rospack find ardros`/launch/navigation.launch

In a second terminal bring up rviz:

rosrun rviz rviz -d `rospack find ardros`/rviz/navigation.vcg

You will probably see a number of error messages in the first terminal since the robot has not been located in the map. To locate the robot, set the starting pose by clicking on the ‘2D Pose Estimate’ button in rviz, followed by clicking on the current location of the robot in the map and, while still holding the left mouse button, dragging the mouse in the direction the robot faces. Next, specify the goal in the same way after clicking on the ‘2D Nav Goal’ button; you should then see the planned path and the robot should start driving towards the goal.
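
Instead of using the rviz buttons both steps can also be scripted. The initial pose, for instance, can be set by publishing a PoseWithCovarianceStamped message to the /initialpose topic that amcl subscribes to. The following is only a minimal sketch: the pose values and covariances are placeholders, and the load_manifest call assumes that the ardros manifest pulls in the required message and tf packages.

#!/usr/bin/env python
# Minimal sketch: set the AMCL initial pose programmatically instead of via rviz.
# The pose values below are placeholders; adjust them to your map.
import roslib; roslib.load_manifest('ardros')
import rospy
from math import radians
from tf.transformations import quaternion_from_euler
from geometry_msgs.msg import PoseWithCovarianceStamped

rospy.init_node('set_initial_pose')
publisher = rospy.Publisher('initialpose', PoseWithCovarianceStamped)
rospy.sleep(1.0)  # give the publisher time to connect to amcl

message = PoseWithCovarianceStamped()
message.header.frame_id = 'map'
message.header.stamp = rospy.Time.now()
message.pose.pose.position.x = 1.0   # placeholder x (m)
message.pose.pose.position.y = 2.0   # placeholder y (m)
q = quaternion_from_euler(0, 0, radians(90))   # placeholder heading
(message.pose.pose.orientation.x, message.pose.pose.orientation.y,
 message.pose.pose.orientation.z, message.pose.pose.orientation.w) = q

covariance = [0.0] * 36
covariance[0] = 0.25                  # x variance
covariance[7] = 0.25                  # y variance
covariance[35] = radians(10) ** 2     # yaw variance
message.pose.covariance = covariance

publisher.publish(message)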

Optionally you can monitor the status of the robot driving to the specified goal by running the following command in a third terminal:

rostopic echo /move_base/status
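
Alternatively the goal itself can be sent from code through move_base’s action interface, which also gives access to the final result. Again this is only a sketch with placeholder coordinates; it assumes that actionlib and move_base_msgs are available on the package path.

#!/usr/bin/env python
# Minimal sketch: send a navigation goal to move_base via its action interface.
# The target coordinates are placeholders; pick a reachable spot on your map.
import roslib; roslib.load_manifest('ardros')
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_nav_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0     # placeholder goal (m)
goal.target_pose.pose.position.y = 0.5
goal.target_pose.pose.orientation.w = 1.0  # placeholder heading

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("move_base finished with state %d", client.get_state())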

The video below is the recording of a test run showing the output in rviz while Ardros navigates a room. The video shows how the initial pose and the goal are selected. Initially the probabilistic cloud of poses used by AMCL is very wide. As the robot drives around, the cloud condenses because the robot’s position, determined from the odometry and laser scan information, becomes better defined. The video also shows the calculated path that the robot tries to follow to get to the specified goal.

2d SLAM with ROS and Kinect

Introduction

So far we really did not tap much into the power of ROS. Teleoperation and dead reckoning were just a way to ‘warm up’ and get familiar with the basics of ROS. This is about to change as we are getting ready to leverage ROS’s implementation of SLAM (Simultaneous Localization and Mapping). The associated ROS package is gmapping, which wraps the open source GMapping implementation that is available on OpenSlam.org. In order to use gmapping we need a source of odometry information (Ardros already provides this) and a source of depth information. Typically laser scanners are used to get the depth information. They are accurate and fast but also quite expensive and hence not a very attractive choice for hobby robotics. Instead of a laser scanner I am using the Microsoft Kinect sensor which provides rgb + d information (color video plus depth information).

Hooking up the Kinect Sensor

The following assumes that you have the desktop-full package of ROS Diamondback installed. In addition we need to install the openni_kinect package as described at http://www.ros.org/wiki/openni_kinect. Once it is in place we can verify that the depth sensor information comes in. After hooking up the Kinect sensor, type the following command in a terminal:

roslaunch openni_camera openni_node.launch

Then in a different terminal:

rosrun rviz rviz

In rviz set the Fixed Frame parameter to /openni_camera under Global Options. Then add a PointCloud2 display type and select /camera/rgb/points for its topic. You should now see a color coded depth image similar to the one below:

[Image: color-coded Kinect depth point cloud in rviz]

Additional information about how the Kinect sensor is represented in ROS can be found here: http://www.ros.org/wiki/openni_camera
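
If you prefer a programmatic check over rviz, a small subscriber along the following lines reports the size of the incoming point clouds. This is just a sketch; adjust the load_manifest call to a package that depends on rospy and sensor_msgs.

#!/usr/bin/env python
# Minimal sketch: confirm that depth data arrives on /camera/rgb/points.
import roslib; roslib.load_manifest('ardros')   # or any package depending on rospy and sensor_msgs
import rospy
from sensor_msgs.msg import PointCloud2

def on_cloud(cloud):
    rospy.loginfo("Received a %d x %d point cloud", cloud.width, cloud.height)

rospy.init_node('kinect_check')
rospy.Subscriber('/camera/rgb/points', PointCloud2, on_cloud)
rospy.spin()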

Faking a Laser Scanner

To use the depth image for SLAM we convert the point cloud to a faked laser scan signal by cutting a horizontal slice out of the image and using the nearest distance (closest depth) in each column. Fortunately we don’t need to code this ourselves. The pointcloud_to_laserscan package, which is part of the turtlebot stack, already covers this. The turtlebot stack does not come with desktop-full and needs to be installed separately.
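
Conceptually the conversion boils down to something like the following NumPy sketch. The real CloudToScan nodelet is implemented in C++ and additionally converts the readings into proper polar ranges and angles, but the core idea of taking the per-column minimum of a horizontal band is the same.

import numpy as np

def depth_slice_to_scan(depth_image, row_center, half_height=10):
    """Illustration of the fake laser scan idea: take a horizontal band of the
    depth image and keep, for each column, the smallest valid depth value."""
    band = depth_image[row_center - half_height : row_center + half_height + 1, :]
    band = np.where(np.isfinite(band) & (band > 0), band, np.inf)   # mask invalid pixels
    ranges = band.min(axis=0)                                       # nearest reading per column
    return np.where(np.isfinite(ranges), ranges, 0.0)

# Example with a synthetic 480x640 depth image (values in meters):
depth = np.full((480, 640), 2.5)
depth[230:250, 300:340] = 0.9    # pretend there is an obstacle ahead
print(depth_slice_to_scan(depth, row_center=240)[295:345])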

The remainder of this post requires revision 70 of my Ardros package. For details about how to get the Ardros source code please refer to my blog entry Ardros – Dead Reckoning but make sure that you use the newer revision.

We are now in a position to look at the laser scan output in rviz:

  • Close all open terminals
  • Open a new terminal and execute the following
roscd ardros
roslaunch ./launch/kinect_laser.launch
  • Open a new terminal and execute
rosrun rviz rviz
  • In rviz
    • Set the fixed frame under Global Options to /openni_depth_frame
    • Add the display type ‘Laser Scan’ and select the topic /scan for it.

You should now see the laser scan data as derived from Kinect’s point cloud. For the screenshot below I added a grid with a cell size of 0.1 m to the rviz display and also included a tf (transform) display showing only the /openni_camera frame. For clarification I added the x, y, z labels after taking the screenshot.

[Image: fake laser scan derived from the Kinect point cloud in rviz]

The launch file ‘kinect_laser.launch’ warrants a closer look:

<launch>
  <!-- kinect and frame ids -->
  <include file="$(find openni_camera)/launch/openni_node.launch"/>

  <!-- openni_manager -->
  <node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/>

  <!-- throttling -->
  <node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager">
    <param name="max_rate" value="2"/>
    <remap from="cloud_in" to="/camera/depth/points"/>
    <remap from="cloud_out" to="cloud_throttled"/>
  </node>

  <!-- fake laser -->
  <node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager">
    <param name="output_frame_id" value="/openni_depth_frame"/>
    <remap from="cloud" to="cloud_throttled"/>
  </node>
</launch>

The first include statement brings in the openni_node launch file which brings up the Kinect camera (openni camera) and the tf transformations that link the various camera frames (depth, rgb, etc.) together.

Then a node of type nodelet named openni_manager is spun up. It acts as the container for other nodelets. Nodelets are like nodes but instead of running as their own process they are loaded into a parent node of type nodelet. Since multiple nodelets execute in the same process they can share data very efficiently. This is relevant when large amounts of data (like video frames) are processed. The launch file causes two nodelets to be loaded that a) throttle the rgb-d frame rate down to 2 Hertz and b) cut a slice out of each rgb-d frame, calculate the nearest distance column by column, and assemble the result as a laser scan data structure published on the topic /scan.

Both nodelets are part of the turtlebot stack. CloudToScan supports a couple of parameters that define the size and location of the slice that is used as the basis for the laser scan data. I use the default values. For details please refer to the source code.

Putting it all Together

Finally we are ready to get SLAM working. As inputs the SLAM algorithm requires odometry information and laser scan data. We provide both by using the launch file ‘ardros_configuration.launch’.

<launch>
  <node pkg="tf" type="static_transform_publisher" name="base_to_kinect_broadcaster" args="-0.115 0 0.226 0 0 0 base_link openni_camera 100" />

  <!--we use the kinect sensor as a fake laser scanner /-->
  <include file="$(find ardros)/launch/kinect_laser.launch"/>

  <node name="arduino" pkg="ardros" type="arduino.py">
    <rosparam file="$(find ardros)/info/ardros.yaml" command="load" />
  </node>
</launch>

The content is best explained from bottom to top: We launch the arduino node which represents the robot and also launch the Kinect based laser scanner as discussed above. In addition we need to publish a tf transform that relates the robot’s base_link frame with the openni_camera frame. This transformation makes it possible to transform the laser scan data into the base_link frame. In the photo below you can see where the Kinect sensor is located on the robot.

[Photo: Kinect sensor mounted towards the back of the Ardros robot]
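
To illustrate what this transform buys us: once it is published, any node can ask tf to express data given in the camera frame in the base_link frame. The following sketch transforms an arbitrary example point; it assumes that ardros_configuration.launch is already running.

#!/usr/bin/env python
# Minimal sketch: express a point measured in the camera frame in the base_link frame.
import roslib; roslib.load_manifest('ardros')
import rospy
import tf
from geometry_msgs.msg import PointStamped

rospy.init_node('frame_demo')
listener = tf.TransformListener()

point = PointStamped()
point.header.frame_id = 'openni_camera'
point.header.stamp = rospy.Time(0)   # use the latest available transform
point.point.x = 1.0                  # arbitrary example point

listener.waitForTransform('base_link', 'openni_camera', rospy.Time(0), rospy.Duration(4.0))
print(listener.transformPoint('base_link', point))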

Finally we need to launch slam_gmapping. All this is wrapped up in the slam.launch file:

<launch>
	<include file="$(find ardros)/launch/ardros_configuration.launch"/>

	<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
		<!--remap from="scan" to="base_scan"/-->
		<!--param name="odom_frame" value="odom"/-->
		<param name="map_update_interval" value="2.0"/>
		<param name="maxUrange" value="6.0"/>
		<param name="sigma" value="0.05"/>
		<param name="kernelSize" value="1"/>
		<param name="lstep" value="0.05"/>
		<param name="astep" value="0.05"/>
		<param name="iterations" value="5"/>
		<param name="lsigma" value="0.075"/>
		<param name="ogain" value="3.0"/>
		<param name="lskip" value="0"/>
		<param name="srr" value="0.01"/>
		<param name="srt" value="0.02"/>
		<param name="str" value="0.01"/>
		<param name="stt" value="0.02"/>
		<param name="linearUpdate" value="0.25"/>     <!-- param name="linearUpdate" value="0.5"/-->
		<param name="angularUpdate" value="0.262"/>   <!--param name="angularUpdate" value="0.436"/-->
		<param name="temporalUpdate" value="-1.0"/>
		<param name="resampleThreshold" value="0.5"/>
		<param name="particles" value="300"/>
		<param name="xmin" value="-50.0"/>
		<param name="ymin" value="-50.0"/>
		<param name="xmax" value="50.0"/>
		<param name="ymax" value="50.0"/>
		<param name="delta" value="0.05"/>
		<param name="llsamplerange" value="0.01"/>
		<param name="llsamplestep" value="0.01"/>
		<param name="lasamplerange" value="0.005"/>
		<param name="lasamplestep" value="0.005"/>
	</node>
</launch>

The various gmapping parameters are explained at http://www.ros.org/wiki/gmapping. There is plenty of room for fine-tuning but the values listed above work fairly well. I am using mainly the default values but specifically chose a relatively high particle count to increase the accuracy, which suffers from the fact that the Kinect sensor has a much narrower horizontal field of view than a real laser scanner. So let’s run it:

  • Close all open terminals
  • Open a new terminal and execute the following
roscd ardros
roslaunch ./launch/slam.launch
  • Open a new terminal and execute
roscd ardros
rosrun rviz rviz -d ./rviz/slam.vcg

We can now drive the robot around for a while and watch as the map is assembled. When the map is completed it can be saved:

  • Open a new terminal and execute
roscd ardros
rosrun map_server map_saver -f ./maps/map

As an example here is a snapshot of a sample run and a screen recording that shows how it evolved.

[Image: map built during the sample SLAM run, shown in rviz]

Sending Data from Arduino to ROS

For a while now I have been following Willow Garage’s open source ROS (Robot Operating System). It just celebrated its third anniversary showing impressive growth with more and more universities and companies using and extending ROS: http://www.willowgarage.com/blog/2010/11/08/happy-3rd-anniversary-ros. Furthermore ROS is beginning to enter the hobby market.

As a first step to get familiar with ROS I decided to create a simple package that would allow me to receive lines of text from the Arduino board and publish them into ROS. Andrew Harris has already published a package for Arduino (http://www.ros.org/news/2010/01/arduino-and-cmucam3-support-for-ros.html) that uses an efficient binary protocol to communicate between Arduino and ROS. I might eventually use it but nothing beats writing a bit of software yourself when it comes to understanding a new software infrastructure.

The source code of my simple Arduino playground package is available on Google Code at http://code.google.com/p/drh-robotics-ros/source/browse/?r=60#svn%2Ftrunk%2Fros%2Fplayground. If you want to play with it I suggest that you check out revision 60 of the source code into a folder under your home directory. In order for ROS to see the new package its path needs to be added to the ROS_PACKAGE_PATH environment variable. This is easily accomplished by adding a line at the end of the .bashrc file in your home folder. In my case the line reads:

export ROS_PACKAGE_PATH=~/dev/drh-robotics-ros/ros/playground:$ROS_PACKAGE_PATH

In your case the path to the playground folder might be different. To test it out we need an Arduino board with a sketch that periodically writes a string followed by a newline to the serial port like so:

Serial.println("Hello World");

The Python code of the arduino ROS node consists of two files. The entry point is the file /ros/playground/nodes/arduino.py:

#!/usr/bin/env python
'''
Created on November 20, 2010

@author: Dr. Rainer Hessmer
'''

import roslib; roslib.load_manifest('playground')
import rospy
from std_msgs.msg import String
from playground.msg import Encoder

from SerialDataGateway import SerialDataGateway

class Arduino(object):
    '''
    Helper class for communicating over serial port with an Arduino board
    '''

    def _HandleReceivedLine(self, line):
        rospy.logdebug(line)
        self._Publisher.publish(String(line))

    def __init__(self, port="/dev/ttyUSB0", baudrate=115200):
        '''
        Initializes the receiver class.
        port: The serial port to listen to.
        baudrate: Baud rate for the serial communication
        '''

        self._Publisher = rospy.Publisher('serial', String)
        rospy.init_node('arduino')

        self._SerialDataGateway = SerialDataGateway(port, baudrate, self._HandleReceivedLine)

    def Start(self):
        rospy.logdebug("Starting")
        self._SerialDataGateway.Start()

    def Stop(self):
        rospy.logdebug("Stopping")
        self._SerialDataGateway.Stop()

if __name__ == '__main__':
    arduino = Arduino("/dev/ttyUSB0", 115200)
    try:
        arduino.Start()
        rospy.spin()

    except rospy.ROSInterruptException:
        arduino.Stop()

In the __main__ section at the bottom of the source file an instance of the Arduino class is instantiated with the appropriate port and baud rate. In the constructor the ROS topic ‘serial’ is published and the node is initialized. Finally an instance of the helper class SerialDataGateway is created (more info below). Once this is done the Arduino instance is started. The call to rospy.spin() ensures that the program keeps running.

The helper class SerialDataGateway is responsible for receiving data via the serial port and assembling the data into individual lines. When it receives a ‘\n’ it calls into the line handler function that was passed into the constructor. The class uses a worker thread which in the current usage pattern is not really necessary. But it allows me to enhance the node later to support read and write operations in parallel.

#!/usr/bin/env python
'''
Created on November 20, 2010

@author: Dr. Rainer Hessmer
'''
import threading
import serial
from cStringIO import StringIO
import time
import rospy

def _OnLineReceived(line):
    print(line)

class SerialDataGateway(object):
    '''
    Helper class for receiving lines from a serial port
    '''

    def __init__(self, port="/dev/ttyUSB0", baudrate=115200, lineHandler=_OnLineReceived):
        '''
        Initializes the receiver class.
        port: The serial port to listen to.
        lineHandler: The function to call when a line was received.
        '''
        self._Port = port
        self._Baudrate = baudrate
        self.ReceivedLineHandler = lineHandler
        self._KeepRunning = False

    def Start(self):
        self._Serial = serial.Serial(port=self._Port, baudrate=self._Baudrate, timeout=1)

        self._KeepRunning = True
        self._ReceiverThread = threading.Thread(target=self._Listen)
        self._ReceiverThread.setDaemon(True)
        self._ReceiverThread.start()

    def Stop(self):
        rospy.loginfo("Stopping serial gateway")
        self._KeepRunning = False
        time.sleep(.1)
        self._Serial.close()

    def _Listen(self):
        stringIO = StringIO()
        while self._KeepRunning:
            data = self._Serial.read()
            if data == '\r':
                # ignore carriage returns
                pass
            elif data == '\n':
                # a complete line has arrived; hand it to the registered handler
                self.ReceivedLineHandler(stringIO.getvalue())
                stringIO.close()
                stringIO = StringIO()
            else:
                stringIO.write(data)

    def Write(self, data):
        info = "Writing to serial port: %s" % data
        rospy.loginfo(info)
        self._Serial.write(data)

if __name__ == '__main__':
    dataReceiver = SerialDataGateway("/dev/ttyUSB0", 115200)
    dataReceiver.Start()

    raw_input("Hit <Enter> to end.")
    dataReceiver.Stop()

The Arduino class passes its ‘_HandleReceivedLine’ function into the SerialDataGateway constructor. As a result whenever a new line is received the _HandleReceivedLine function in the Arduino class will be called:

def _HandleReceivedLine(self, line):
    rospy.logdebug(line)
    self._Publisher.publish(String(line))

The function simply logs the line and then publishes it as a string. In a real-world scenario I would modify the function to parse the line and extract, say, pose data (x, y, and angle). Correspondingly, a pose message rather than a simple string message would be published.
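
To sketch what that could look like (hypothetical code, not part of the playground package), assume the Arduino sends lines such as ‘pose 1.02 0.31 0.12’:

# Hypothetical example: parse a line of the form "pose <x> <y> <theta>"
# and turn it into a geometry_msgs/Pose2D message.
from geometry_msgs.msg import Pose2D

def parse_pose_line(line):
    """Returns a Pose2D for a line like 'pose 1.02 0.31 0.12', or None otherwise."""
    parts = line.split()
    if len(parts) != 4 or parts[0] != 'pose':
        return None
    pose = Pose2D()
    pose.x, pose.y, pose.theta = float(parts[1]), float(parts[2]), float(parts[3])
    return pose

# Inside Arduino._HandleReceivedLine one could then write:
#   pose = parse_pose_line(line)
#   if pose is not None:
#       self._PosePublisher.publish(pose)   # hypothetical Pose2D publisher created in __init__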

Finally we are ready to try out the new node. The required steps are as follows:

  • Hook up an Arduino board running a sketch that causes a line to be written to the serial port periodically.
  • In a new terminal run:
    roscore
  • In another new terminal run:
    roscd playground
    make
    rosrun playground arduino.py

Provided the port and baud rate specified in arduino.py match your setup you should have a running ROS Arduino node now. Next we want to tap into the published topic.

  • In a new terminal subscribe to the ‘serial’ topic:
    rostopic echo /serial
    You should see the published lines fly by in this terminal.
  • To get a graphical representation of what is running, start rxgraph in yet another terminal. The resulting graph should look similar to this:

[Image: rxgraph output showing the /arduino node publishing the /serial topic]

On the left is the /arduino node with the exposed /serial topic. The node in the middle is the result of the rostopic echo command that we started. It subscribes to the /serial topic and displays the values in the terminal.

This concludes my first baby step into the world of ROS.

Monte Carlo Localization for Robots

Recently I started to read the excellent book Probabilistic Robotics by Sebastian Thrun, Wolfram Burgard, and Dieter Fox and got intrigued by Monte Carlo Localization (MCL). The MCL algorithm fully takes into account the uncertainty associated with drive commands and sensor measurements and allows a robot to locate itself in an environment provided a map is available.
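
In a nutshell, MCL maintains a set of weighted pose hypotheses (particles) that are pushed through a noisy motion model, re-weighted by how well the sensor readings match the map, and then resampled. The following stripped-down Python sketch of one update cycle is independent of the implementation discussed in my article and is only meant to convey the structure of the algorithm; the motion model and measurement likelihood are passed in as functions.

import random

def mcl_update(particles, control, measurement, motion_model, measurement_likelihood):
    """One Monte Carlo Localization cycle over a list of pose particles.
    motion_model(particle, control) samples a new pose including noise;
    measurement_likelihood(particle, measurement) scores it against the map."""
    # 1. Prediction: move every particle according to the noisy motion model.
    moved = [motion_model(p, control) for p in particles]

    # 2. Correction: weight each particle by how well it explains the measurement.
    weights = [measurement_likelihood(p, measurement) for p in moved]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]

    # 3. Resampling: draw a new particle set with probability proportional to the weights.
    cumulative = []
    accumulated = 0.0
    for w in weights:
        accumulated += w
        cumulative.append(accumulated)

    resampled = []
    for _ in moved:
        r = random.random()
        for particle, threshold in zip(moved, cumulative):
            if r <= threshold:
                resampled.append(particle)
                break
        else:
            resampled.append(moved[-1])   # guard against floating point round-off
    return resampled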

In order to make sure that I really understand the algorithm I decided to implement it myself. Furthermore I did not find a readily accessible, standalone open source implementation that would allow me to exercise the algorithm in a simulated environment. Covering the resulting implementation is a bit too much for this blog post. Instead I decided to add a multi-part article to my web site. You can find it here: Monte Carlo Localization for Robots

[Image: Monte Carlo Localization simulation]