Ardros – Transform Between base_link and the Kinect Sensor

For SLAM and navigation to work correctly on a ROS-based robot it is necessary to accurately specify the transformation between the coordinate system (frame) of the steering center of the robot (this frame is typically called base_link) and the coordinate system of the laser scans. In the case of Kinect-based robots the laser scan measurements are derived from the 3D point cloud that is provided by the openni_camera package and is based on the frame /openni_depth_frame.

As described in my earlier post 2d SLAM with ROS and Kinect I specify the transformation between the base_link and the Kinect sensor (specifically the frame openni_camera) in the launch file ardros_configuration.launch. The relevant line is:

<node pkg="tf" type="static_transform_publisher"  name="base_to_kinect_broadcaster" args="-0.115 0 0.226 0 0 0 base_link  openni_camera 100" />

The arguments of static_transform_publisher are the x, y, z offsets in meters, the yaw, pitch, roll angles in radians, the parent frame, the child frame, and the publish period in milliseconds; here the Kinect frame sits 0.115 m behind and 0.226 m above the base_link origin. This transformation hooks the base_link frame together with the frames that are associated with the Kinect sensor. How the various frames are connected can be seen by running the following commands as described in this tutorial:

rosrun tf view_frames
evince frames.pdf

Here is the result of running these commands while the navigation stack was active on Ardros.

tf_frames

I marked the frame for the laser scan data in red. As can be seen this frame is now connected with base_link.

Now the question is: how do we determine the parameters shown in the transformation above? I found it easiest to ‘park’ the robot in front of a wall, pushing against a rectangular piece of cardboard or foam board to guarantee that the front of the robot is parallel to the wall at a fixed distance:

CalibratingKinectLocation

I use a book to create a step change in the laser scan right in the center of the front of the robot. Next I run the navigation stack and bring up rviz with a configuration file that shows the laser scan plus two axes displays for the frames /base_link and /openni_camera.

KinectLocationCalibration

By setting the length of the base_link frame axes to the distance from the origin of the base_link frame to the wall, I can readily see whether the laser scan line intersects the end of the x axis (red). If it does not, the x offset specified in the static transform broadcaster base_to_kinect_broadcaster (see the line from the launch file above) needs to be adjusted. Furthermore, the step change in the distance needs to align with the x axis; any mismatch is corrected by modifying the y offset in the same transform.

Once this is done the frames are correctly aligned and the navigation stack can accurately correlate the laser scan with the odometry information.
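As a quick numeric cross-check alongside rviz, one can also print the range measured straight ahead and compare it against the known distance between the base_link origin and the wall. The small helper below is not part of the Ardros sources; it simply subscribes to the /scan topic. Note that the scan is measured in the depth frame, which according to the transform above sits roughly 0.115 m behind the base_link origin, so the printed range should exceed the base_link-to-wall distance by about that amount.

#!/usr/bin/env python
# Hypothetical helper: print the laser scan range straight ahead of the sensor.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    # index of the beam closest to 0 rad, i.e. straight ahead in the scan frame
    center_index = int(round((0.0 - scan.angle_min) / scan.angle_increment))
    rospy.loginfo("range straight ahead: %.3f m", scan.ranges[center_index])

if __name__ == '__main__':
    rospy.init_node('scan_center_check')
    rospy.Subscriber('scan', LaserScan, on_scan)
    rospy.spin()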

Using the ROS Navigation Stack

In this post I cover how we can leverage the ROS navigation stack to let the robot autonomously drive from a given location in a map to a defined goal. I will skim over many of the details since the associated tutorials on the ROS wiki do a great job in describing how to set up the navigation stack.

With SLAM working on the Ardros robot (see my previous post) we already have much of the required setup covered. In particular we have:

  • A robot that publishes odometry information and accepts drive commands
  • (Faked) laser scanner sensor data derived from the Kinect sensor
  • Transform configurations covering the transformations between base_link, the odometry frame, and the laser scanner frame (depth frame of the Kinect sensor)
  • A map generated via SLAM

From a high level perspective navigation brings in these additions:

  • Localization of the robot in a given map using Monte Carlo Localization (MCL). For more information about MCL (including C# source code) please see my blog post Monte Carlo Localization for Robots.
  • Global and local path planning. Amongst other parameters, path planning most notably requires a map, information about the size (footprint) of the robot, and information about how fast the robot can drive and accelerate.

Before reading on I highly recommend that you at least read through the ROS tutorial ‘Setup and Configuration of the Navigation Stack on a Robot’.

Okay, let’s dive into the Ardros specific files. As described in my previous posts the source code is available from here: http://code.google.com/p/drh-robotics-ros/. This post is based on revision 79 of the source code.

The whole navigation machinery is started via the launch file navigation.launch. The launch file itself is very simple:

<launch>
	<include file="$(find ardros)/launch/ardros_configuration.launch"/>
	<include file="$(find ardros)/launch/move_base.launch"/>
</launch>

It brings in two other launch files, the first being ardros_configuration.launch, which we already know from the previous post. It handles the Ardros-specific aspects including odometry, drive commands, and the use of the Kinect sensor as a fake laser scanner.

The second include file, move_base.launch, is responsible for bringing up the navigation specific aspects:

<launch>
  <!-- Run the map server -->
  <node name="map_server" pkg="map_server" type="map_server" args="$(find ardros)/maps/map.pgm 0.05"/>

  <!--- Run AMCL -->
  <include file="$(find ardros)/launch/amcl_diff.launch" />

  <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
    <rosparam file="$(find ardros)/info/costmap_common_params.yaml" command="load" ns="global_costmap" />
    <rosparam file="$(find ardros)/info/costmap_common_params.yaml" command="load" ns="local_costmap" />
    <rosparam file="$(find ardros)/info/local_costmap_params.yaml" command="load" />
    <rosparam file="$(find ardros)/info/global_costmap_params.yaml" command="load" />
    <rosparam file="$(find ardros)/info/base_local_planner_params.yaml" command="load" />
  </node>
</launch>

The first relevant line brings up the map server and specifies the map it should serve; the path to the map needs to be adjusted accordingly. Secondly it imports the launch file for AMCL (adaptive, or KLD-sampling, Monte Carlo localization) configured for a robot with differential drive (see odom_model_type in amcl_diff.launch). AMCL is responsible for localizing the robot on the map. AMCL uses numerous parameters, many of which are specified in the launch file. I largely use the default values but increased the particle count. I expect that going forward I will fine tune more of these parameters.

It is worth noting that Ardros is quite similar to Willow Garage’s turtlebot in its use of the Kinect sensor in conjunction with the navigation stack. I started to use the node pointcloud_to_laserscan before it became part of the turtlebot stack and hence had to come up with my own launch files and associated parameters. In the future I should be able to borrow quite a bit from turtlebot, at a minimum better parameters for AMCL.

Finally the last part of the move_base.launch file brings up the move_base node, which is the centerpiece of the ROS navigation stack (see the diagram and detailed explanation at http://www.ros.org/wiki/navigation/Tutorials/RobotSetup). The content of the referenced yaml files should be pretty clear after reading through the navigation stack tutorial. I will only briefly touch on the robot’s footprint which is defined in the file costmap_common_params.yaml:

# for details see: http://www.ros.org/wiki/navigation/Tutorials/RobotSetup

obstacle_range: 2.5
raytrace_range: 3.0
footprint: [[0.305, 0.278], [0.04, 0.193], [-0.04, 0.193], [-0.282, 0.178], [-0.282, -0.178], [-0.04, -0.193], [0.04, -0.193], [0.305, -0.278]]
#footprint: [[0.075, 0.178], [0.04, 0.193], [-0.04, 0.193], [-0.282, 0.178], [-0.282, -0.178], [-0.04, -0.193], [0.04, -0.193], [0.075, -0.178]]
#robot_radius: ir_of_robot
inflation_radius: 0.6

observation_sources: laser_scan_sensor

laser_scan_sensor: {sensor_frame: openni_depth_frame, data_type: LaserScan, topic: scan, marking: true, clearing: true}

The specified points define a polygon that reaches out beyond the front of the robot; i.e., I pretend the robot is bigger than it actually is. The reason is that I want to make sure the path planner does not generate paths that bring the robot closer to an obstacle than it can actually see. The minimum distance that the Kinect sensor can measure is about 0.6 m (2 ft); with the sensor mounted 0.115 m behind the base_link origin (see the transform above), obstacles closer than roughly 0.485 m to the robot’s center are invisible to it. To mitigate this short range ‘blindness’ I

  1. Mount the Kinect towards the back of the robot
  2. Pretend that the robot footprint reaches out beyond the front wheels further than it actually does.

Now we are ready to run the navigation stack. Close all terminals and then run the following command in a new terminal:

roslaunch `rospack find ardros`/launch/navigation.launch

In a second terminal bring up rviz:

rosrun rviz rviz -d `rospack find ardros`/rviz/navigation.vcg

You will probably see a number of error messages in the first terminal since the robot has not yet been localized in the map. To localize the robot, set the starting pose by clicking on the ‘2D Pose Estimate’ button in rviz, then clicking on the current location of the robot in the map and, while still holding the left mouse button, dragging the mouse in the direction the robot faces. Then specify the goal in the same way after clicking on the ‘2D Nav Goal’ button; you should see the planned path and the robot should start driving towards the goal.
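Goals can also be sent programmatically instead of through rviz, using move_base’s standard action interface. The sketch below is only meant to illustrate the mechanism; the goal coordinates are placeholders that need to be adjusted to the map.

#!/usr/bin/env python
# Minimal sketch: send one navigation goal to move_base via its action interface.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

if __name__ == '__main__':
    rospy.init_node('send_nav_goal')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0    # placeholder x [m] in the map frame
    goal.target_pose.pose.position.y = 0.5    # placeholder y [m] in the map frame
    goal.target_pose.pose.orientation.w = 1.0 # face along the map x axis

    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("move_base finished with status %d", client.get_state())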

Optionally you can monitor the status of the robot driving to the specified goal by running the following command in a third terminal:

rostopic echo /move_base/status

The video below is the recording of a test run showing the output in rviz while Ardros navigates a room. The video shows how the initial pose and the goal are selected. Initially the probabilistic cloud of poses used by AMCL is very wide. As the robot drives around the cloud condenses as the robot position determined from the odometry and laser scan information becomes better defined. The video also shows the calculated path that the robot tries to follow to get to the specified goal.

2d SLAM with ROS and Kinect

Introduction

So far we really did not tap much into the power of ROS. Teleoperation and dead reckoning were just a way to ‘warm up’ and get familiar with the basics of ROS. This is about to change as we are getting ready to leverage ROS’s implementation of SLAM (Simultaneous Localization and Mapping). The associated ROS package is gmapping, which wraps the open source GMapping implementation that is available on OpenSlam.org. In order to use gmapping we need a source of odometry information (Ardros already provides this) and a source of depth information. Typically laser scanners are used to get the depth information. They are accurate and fast but also quite expensive and hence not a very attractive choice for hobby robotics. Instead of a laser scanner I am using the Microsoft Kinect sensor, which provides rgb + d data (color video plus depth information).

Hooking up the Kinect Sensor

The following assumes that you have the desktop-full package of ROS Diamondback installed. In addition we need to install the openni_kinect package as described at http://www.ros.org/wiki/openni_kinect. Once it is in place we can verify that the depth sensor information comes in. After hooking up the Kinect sensor, type the following command in a terminal:

roslaunch openni_camera openni_node.launch

Then in a different terminal:

rosrun rviz rviz

In rviz set the Fixed Frame parameter to /openni_camera under Global Options. Then add a PointCloud2 display type and select /camera/rgb/points for its topic. You should now see a color coded depth image similar to the one below:

KinectDepth

Additional information about how the Kinect sensor is represented in ROS can be found here: http://www.ros.org/wiki/openni_camera

Faking a Laser Scanner

To use the depth image for SLAM we convert the point cloud to a fake laser scan signal by cutting a horizontal slice out of the image and using the nearest distance (closest depth) in each column. Fortunately we don’t need to code this ourselves. The pointcloud_to_laserscan package, which is part of the turtlebot stack, already covers this. The turtlebot stack does not come with desktop-full and needs to be installed separately.
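To make the conversion idea concrete, here is a conceptual sketch of what such a node does (plain Python, not the actual pointcloud_to_laserscan code; it assumes x forward, y left, z up in the scan frame): keep only the points within a narrow height band, bin them by bearing angle, and keep the nearest range per bin.

import math

def cloud_to_scan(points, min_height=-0.1, max_height=0.1,
                  angle_min=-0.5, angle_max=0.5, angle_increment=0.01):
    # points: iterable of (x, y, z) tuples in the scan frame
    num_bins = int((angle_max - angle_min) / angle_increment)
    ranges = [float('inf')] * num_bins

    for x, y, z in points:
        if not (min_height <= z <= max_height):
            continue  # point lies outside the horizontal slice
        angle = math.atan2(y, x)
        if not (angle_min <= angle < angle_max):
            continue  # point lies outside the simulated field of view
        index = int((angle - angle_min) / angle_increment)
        # keep the nearest obstacle in this 'column'
        ranges[index] = min(ranges[index], math.hypot(x, y))

    return ranges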

The remainder of this blog requires revision 70 of my Ardros package. For details about how to get the Ardros source code please refer to my blog entry Ardros – Dead Reckoning but make sure that you use the newer revision.

We are now in a position to look at the laser scan output in rviz:

  • Close all open terminals
  • Open a new terminal and execute the following:
    roscd ardros
    roslaunch ./launch/kinect_laser.launch
  • Open a new terminal and execute:
    rosrun rviz rviz
  • In rviz:
    • Set the fixed frame under Global Options to /openni_depth_frame
    • Add the display type ‘Laser Scan’ and select the topic /scan for it.

You should now see the laser scan data as derived from Kinect’s point cloud. For the screenshot below I added a grid with a cell size of 0.1 m to the rviz display and also included a tf (transform) display showing only the /openni_camera frame. For clarification I added the x, y, z labels after taking the screenshot.

LaserScan

The launch file ‘kinect_laser.launch‘ warrants a closer look:

<launch>
  <!-- kinect and frame ids -->
  <include file="$(find openni_camera)/launch/openni_node.launch"/>

  <!-- openni_manager -->
  <node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/>

  <!-- throttling -->
  <node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager">
    <param name="max_rate" value="2"/>
    <remap from="cloud_in" to="/camera/depth/points"/>
    <remap from="cloud_out" to="cloud_throttled"/>
  </node>

  <!-- fake laser -->
  <node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager">
    <param name="output_frame_id" value="/openni_depth_frame"/>
    <remap from="cloud" to="cloud_throttled"/>
  </node>
</launch>

The first include statement brings in the openni_node launch file which brings up the Kinect camera (openni camera) and the tf transformations that link the various camera frames (depth, rgb, etc.) together.

Then a node of type nodelet named openni_manager is spun up. It acts as the container for other nodelets. Nodelets are like nodes, but instead of running as their own process they are loaded into a parent node of type nodelet. Since multiple nodelets execute in the same process they can share data very efficiently, which is relevant when large amounts of data (like video frames) are processed. The launch file causes two nodelets to be loaded that a) throttle the rgb-d frame rate down to 2 Hz and b) cut a slice out of each rgb-d frame, calculate the nearest distance column by column and assemble the result into a laser scan data structure published on the topic /scan.

Both nodelets are part of the turtlebot stack. CloudToScan supports a couple of parameters that define the size and location of the slice that is used as the basis for the laser scan data. I use the default values. For details please refer to the source code.

Putting it all Together

Finally we are ready to get SLAM working. As inputs the SLAM algorithm requires odometry information and laser scan data. We provide both by using the launch file ‘ardros_configuration.launch‘.

<launch>
  <node pkg="tf" type="static_transform_publisher" name="base_to_kinect_broadcaster" args="-0.115 0 0.226 0 0 0 base_link openni_camera 100" />

  <!--we use the kinect sensor as a fake laser scanner /-->
  <include file="$(find ardros)/launch/kinect_laser.launch"/>

  <node name="arduino" pkg="ardros" type="arduino.py">
    <rosparam file="$(find ardros)/info/ardros.yaml" command="load" />
  </node>
</launch>

The content is best explained from bottom to top: We launch the arduino node which represents the robot, and we also launch the Kinect-based laser scanner as discussed above. In addition we need to publish a tf transform that relates the robot’s base_link frame with the openni_camera frame. This transformation makes it possible to transform the laser scan data into the base_link frame. In the photo below you can see where the Kinect sensor is located on the robot.

KinectOnRobot
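To double-check that the static transform from ardros_configuration.launch is actually being published, the tf_echo tool can be used; it continuously prints the translation and rotation between the two frames:

rosrun tf tf_echo base_link openni_camera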

Finally we need to launch slam_gmapping. All this is wrapped up in the slam.launch file:

<launch>
	<include file="$(find ardros)/launch/ardros_configuration.launch"/>

	<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
		<!--remap from="scan" to="base_scan"/-->
		<!--param name="odom_frame" value="odom"/-->
		<param name="map_update_interval" value="2.0"/>
		<param name="maxUrange" value="6.0"/>
		<param name="sigma" value="0.05"/>
		<param name="kernelSize" value="1"/>
		<param name="lstep" value="0.05"/>
		<param name="astep" value="0.05"/>
		<param name="iterations" value="5"/>
		<param name="lsigma" value="0.075"/>
		<param name="ogain" value="3.0"/>
		<param name="lskip" value="0"/>
		<param name="srr" value="0.01"/>
		<param name="srt" value="0.02"/>
		<param name="str" value="0.01"/>
		<param name="stt" value="0.02"/>
		<param name="linearUpdate" value="0.25"/>     <!-- param name="linearUpdate" value="0.5"/-->
		<param name="angularUpdate" value="0.262"/>   <!--param name="angularUpdate" value="0.436"/-->
		<param name="temporalUpdate" value="-1.0"/>
		<param name="resampleThreshold" value="0.5"/>
		<param name="particles" value="300"/>
		<param name="xmin" value="-50.0"/>
		<param name="ymin" value="-50.0"/>
		<param name="xmax" value="50.0"/>
		<param name="ymax" value="50.0"/>
		<param name="delta" value="0.05"/>
		<param name="llsamplerange" value="0.01"/>
		<param name="llsamplestep" value="0.01"/>
		<param name="lasamplerange" value="0.005"/>
		<param name="lasamplestep" value="0.005"/>
	</node>
</launch>

The various gmapping parameters are explained at http://www.ros.org/wiki/gmapping. There is plenty of room for fine tuning but the values listed above work fairly well. I mainly use the default values, but I specifically chose a relatively high particle count to increase the accuracy, which suffers from the fact that the Kinect sensor has a much narrower horizontal field of view than a real laser scanner. So let’s run it:

  • Close all open terminals
  • Open a new terminal and execute the following:
    roscd ardros
    roslaunch ./launch/slam.launch
  • Open a new terminal and execute:
    roscd ardros
    rosrun rviz rviz -d ./rviz/slam.vcg

We can now drive the robot around for a while and watch as the map is assembled. When the map is completed it can be saved:

  • Open a new terminal and execute:
    roscd ardros
    rosrun map_server map_saver -f ./maps/map

As an example, here is a snapshot of a sample run and a screen recording that shows how it evolved.

RunningSlam

Ardros – Dead Reckoning

Now that we have the ability to retrieve odometry information from the robot and send velocity commands we combine both to drive the robot around using dead reckoning. The setup is very similar to the teleop scenario that I briefly discussed in the blog entry ROS Introduction Class with Ardros. Instead of the teleop node we now use a new DeadReckoning node:

DeadReckoningNodes

The other key difference is that we now leverage the odometry transform information that the arduino node publishes (represented by the ‘tf’ lollipop in the block diagram above). The DeadReckoning.py node is modeled after the corresponding C++ node in the Willow Garage tutorial ‘Using the base controller with odometry and transform information’.

Note that the DeadReckoning.py node can be used with any ROS enabled robot that publishes odometry information and accepts velocity commands based on the contract defined by ROS. Obviously in our case we use it in conjunction with the arduino node. If you want to follow along I suggest you locally check out the code from the Google Subversion repository. This blog entry is based on revision 61. In order to check out the code to the local directory ‘drh-robotics-ros’ use this command:

svn checkout -r 61 http://drh-robotics-ros.googlecode.com/svn/trunk/ drh-robotics-ros

For ROS to see the new package its path needs to be added to the ROS_PACKAGE_PATH environment variable. This is accomplished by adding a line at the end of the .bashrc file in your home folder:

export ROS_PACKAGE_PATH=~/drh-robotics-ros/ros/ardros:$ROS_PACKAGE_PATH

The path to the ardros folder might need to be adjusted.

Now we are ready to spin up the arduino node.

  1. Open a new terminal and enter:
     roscore
     This will start the ROS server.
  2. In a new terminal type:
     roscd ardros
     roslaunch ./launch/ardros_standalone.launch

The content of the launch file ardros_standalone.launch is:

<launch>
  <node name="arduino" pkg="ardros" type="arduino.py">
    <rosparam file="$(find ardros)/info/ardros.yaml" command="load" />
  </node>
</launch>

It starts the arduino.py node with the parameters specified in the referenced ardros.yaml file:

port: /dev/ttyUSB0
baudRate: 115200
batteryMonitorParams: {voltageTooLowlimit: 11.7}

# wheel diameter [m], trackwidth [m], ticks per revolution
driveGeometry: {wheelDiameter: 0.0763, trackWidth: 0.37, countsPerRevolution: 9750}

# The speed controller parameters include the P and I gains for the three PI controllers and the command timeout for the speed controller in seconds.
# If no velocity command arrives for more than the specified timeout then the speed controller will stop the robot.
speedController: {velocityPParam: 0.1, velocityIParam: 0.1, turnPParam: 0.4, turnIParam: 0.5, commandTimeout: 1.0}
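The driveGeometry values determine the conversion factors used for odometry (the RobotParams helper described further below exposes them as DistancePerCount and RadiansPerCount). As a quick sanity check, the usual differential drive formulas applied to the numbers above give the following (variable names are mine, not the Ardros code):

import math

wheel_diameter = 0.0763          # [m]
track_width = 0.37               # [m]
counts_per_revolution = 9750.0

# distance a wheel travels per encoder tick
distance_per_count = (math.pi * wheel_diameter) / counts_per_revolution
# heading change per one-tick difference between the two wheels
radians_per_count = distance_per_count / track_width

print("DistancePerCount: %.7f m" % distance_per_count)    # ~0.0000246 m
print("RadiansPerCount:  %.7f rad" % radians_per_count)   # ~0.0000664 rad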

The arduino.py node is now running, publishing tf and odometry messages while waiting for velocity commands. Sending those commands is the task of the DeadReckoning.py node. This node implements the Driver class (more about it later) which exposes two public functions:

  • DriveX(self, distance, speed)
  • Turn(self, angle, angularSpeed)

The node creates an instance of the Driver class and then uses the two public functions to command the robot to drive a given pattern. The checked in code commands the robot to drive forward 2m, turn 180°, drive back 2m and finally turn 180° again to end up in the start position:

if __name__ == '__main__':
	try:
		driver = Driver()
		driver.DriveX(distance = 2, speed = 0.1);
		driver.Turn(angle = math.pi, angularSpeed = 0.3);
		driver.DriveX(distance = 2, speed = 0.1);
		driver.Turn(angle = -math.pi, angularSpeed = 0.3)
		#driver.Turn(angle = 3 * math.pi, angularSpeed = 0.3);
	except rospy.ROSInterruptException:
		pass

Let’s look at the code of the Driver class starting with its constructor:

	def __init__(self):
		rospy.init_node('DeadReckoning')

		self._VelocityCommandPublisher = rospy.Publisher("cmd_vel", Twist)
		self._TransformListener = tf.TransformListener()

		# wait for the listener to get the first transform message
		self._TransformListener.waitForTransform("/odom", "/base_link", rospy.Time(), rospy.Duration(4.0))

In the constructor the instance is registered as a ROS node. Since the node will publish twist messages on the cmd_vel topic, a corresponding publisher needs to be created. In order to get information about the transformation from the /odom frame to the robot’s /base_link frame we need a TransformListener. Finally we wait up to 4 seconds for the requested transformation to arrive; if it does not arrive within this time, waitForTransform raises an exception and the constructor fails.

Driving forward and backward is handled by the DriveX function:

	def DriveX(self, distance, speed):
		'''
		Drive in x direction a specified distance based on odometry information
		distance [m]: the distance to travel in the x direction (>0: forward, <0: backwards)
		speed [m/s]: the speed with which to travel; must be positive
		'''

		forward = (distance >= 0)

		# record the starting transform from the odom to the base_link frame
		# Note that here the 'from' frame precedes 'to' frame which is opposite to how they are
		# ordered in tf.TransformBroadcaster's sendTransform function.
		# startTranslation is a tuple holding the x,y,z components of the translation vector
		# startRotation is a tuple holding the four components of the quaternion
		(startTranslation, startRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))

		done = False

		velocityCommand = Twist()
		if forward:
			velocityCommand.linear.x = speed # going forward m/s
		else:
			velocityCommand.linear.x = -speed # going backward m/s

		velocityCommand.angular.z = 0.0 # no angular velocity

		while True:
			try:
				(currentTranslation, currentRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))

				dx = currentTranslation[0] - startTranslation[0]
				dy = currentTranslation[1] - startTranslation[1]

				distanceMoved = math.sqrt(dx * dx + dy * dy)
				print distanceMoved
				if (forward):
					arrived = distanceMoved >= distance
				else:
					arrived = distanceMoved >= -distance

				if (arrived):
					break
				else:
					# send the drive command
					print("sending vel command" + str(velocityCommand))
					self._VelocityCommandPublisher.publish(velocityCommand)

			except (tf.LookupException, tf.ConnectivityException):
				continue

			rospy.sleep(0.1)

		#stop
		velocityCommand.linear.x = 0.0
		velocityCommand.angular.z = 0.0
		self._VelocityCommandPublisher.publish(velocityCommand)

		return done

The function first retrieves the current transformation between the frames /odom and /base_link to know where the robot starts from. Then it sets up a twist message that commands the robot to drive forward (or backward) at the specified speed. The actual driving happens in the while loop: we get the current transformation and calculate how far the robot has driven since the start of the function. If we have traveled the requested distance the while loop exits and the function returns; otherwise the twist message is published on the cmd_vel topic and we wait a little before checking again.

The Turn function is structured in the same way. Since angles are involved it is slightly more complex but hopefully the code is clear enough to understand.

	def Turn(self, angle, angularSpeed):
		'''
		Turn the robot based on odometry information
		angle [rad]: the angle to turn (positive angles mean counterclockwise rotation)
		angularSpeed [rad/s]: the speed with which to turn; must be positive
		'''

		ccw = (angle >= 0) # counter clockwise rotation

		# record the starting transform from the odom to the base frame
		# Note that here the 'from' frame precedes 'to' frame which is opposite to how they are
		# ordered in tf.TransformBroadcaster's sendTransform function.
		(startTranslation, startRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))
		startAngle = 2 * math.atan2(startRotation[2], startRotation[3])

		print "start angle: " + str(startAngle)
		previousAngle = startAngle
		angleOffset = 0.0

		done = False

		velocityCommand = Twist()
		velocityCommand.linear.x = 0.0 # going forward m/s
		if ccw:
			velocityCommand.angular.z = angularSpeed
		else:
			velocityCommand.angular.z = -angularSpeed

		while not rospy.is_shutdown():
			try:
				(currentTranslation, currentRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))
				currentAngle = 2 * math.atan2(currentRotation[2], currentRotation[3])
				print "currentAngle: " + str(currentAngle)

				# we need to handle roll over of the angle
				if (currentAngle * previousAngle < 0 and math.fabs(currentAngle - previousAngle) > math.pi / 2):
					if (currentAngle > previousAngle):
						print "subtracting"
						angleOffset = angleOffset - 2 * math.pi
					else:
						print "adding"
						angleOffset = angleOffset + 2 * math.pi

				angleTurned = currentAngle + angleOffset - startAngle
				previousAngle = currentAngle

				print "angleTurned: " + str(angleTurned)
				if (ccw):
					arrived = (angleTurned >= angle)
				else:
					arrived = (angleTurned <= angle)

				print arrived

				if (arrived):
					break
				else:
					# send the drive command
					print("sending vel command" + str(velocityCommand))
					self._VelocityCommandPublisher.publish(velocityCommand)

			except (tf.LookupException, tf.ConnectivityException):
				continue

			time.sleep(0.1)

		#stop
		velocityCommand.linear.x = 0.0
		velocityCommand.angular.z = 0.0
		self._VelocityCommandPublisher.publish(velocityCommand)

		return done
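A side note on the angle math: the expression 2 * math.atan2(rotation[2], rotation[3]) extracts the yaw angle from the z and w components of the quaternion, which works because the odometry rotation is purely about the z axis. An equivalent, perhaps more readable variant would use the tf helper functions (a sketch, not the code in the repository):

from tf import transformations

def yaw_from_quaternion(quaternion):
    # quaternion is the (x, y, z, w) tuple returned by lookupTransform
    roll, pitch, yaw = transformations.euler_from_quaternion(quaternion)
    return yaw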

We can now start up the DeadReckoning.py node and see it control the robot’s movements. In a new terminal type

roscd ardros
rosrun ardros DeadReckoning.py

This covers dead reckoning.

Ardros – Controller and Drive System

Ardros (Arduino & ROS) is an experimental robot platform that I am working on to get familiar with ROS, particularly with the navigation stack that comes with ROS. My previous post focused on the Python code running on the PC that acts as a gateway to the Arduino controller. In the following I cover the (differential) drive system and the Arduino controller code.

The main components that make up the drive system are:

The controller code is available from Google code (http://code.google.com/p/drh-robotics-ros/source/browse/#svn%2Ftrunk) in the sub folder ‘Arduino’. This post discusses the state as of Subversion revision 58. I expect that the code will further evolve over time.

The main file is \Arduino\Robot\Robot.pde. It leverages a couple of libraries (source code in \Arduino\libraries):

  • Messenger: For handling messages sent from the PC through the USB link
  • TimeInfo: Helper class for keeping track of time
  • RobotParams: Holds the parameter values that are required to determine the distance traveled / turned from the encoder ticks. The parameters are: WheelDiameter, TrackWidth, CountsPerRevolution, DistancePerCount, RadiansPerCount
  • OdometricLocalizer: It is responsible for determining the pose (x, y position and heading) of the robot based on the encoder counts.
  • BatteryMonitor: A simple wrapper class that encapsulates the scaling of an analog input value to get the current battery voltage. It also determines whether the battery voltage is too low. This information is used by the speed controller (see below) and causes it to stop the motors if the voltage is too low.
  • SpeedController: It uses three PI (proportional & integral) controllers to translate commanded velocity (velocity in x direction and angular velocity) to motor control values for the left and right motors. The speed controller implements a slightly modified version of the coupled PI control structure discussed here. The document illustrates a structure that only uses a P (proportional) controller that links the two motors and ensures that they are kept in sync:
    CoupledPIControlls
    In my code I use a PI (proportional & integral) controller instead; i.e., Kp_turn is accompanied by a Ki_turn. In my case it was not necessary to implement the boxes labeled ‘friction_comp’. A Python sketch of this coupled structure follows after the list below.
    The speed controller also implements a command timeout. If no commands are received for more than the specified timeout then the controller stops the motors.
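To make the coupled control structure concrete, here is a conceptual Python sketch (not the Arduino implementation; the gain names echo the ardros.yaml parameters, the rest is illustrative). Two PI loops act on the individual wheel speed errors while a third PI loop acts on the left/right difference and keeps the wheels in sync.

class CoupledPIController(object):
    def __init__(self, kp_vel=0.1, ki_vel=0.1, kp_turn=0.4, ki_turn=0.5):
        self.kp_vel, self.ki_vel = kp_vel, ki_vel
        self.kp_turn, self.ki_turn = kp_turn, ki_turn
        self.left_integral = 0.0
        self.right_integral = 0.0
        self.turn_integral = 0.0

    def update(self, desired_left, desired_right, measured_left, measured_right, dt):
        left_error = desired_left - measured_left
        right_error = desired_right - measured_right
        # the coupling term penalizes the wheels drifting out of sync
        turn_error = (desired_left - desired_right) - (measured_left - measured_right)

        self.left_integral += left_error * dt
        self.right_integral += right_error * dt
        self.turn_integral += turn_error * dt

        turn_correction = self.kp_turn * turn_error + self.ki_turn * self.turn_integral
        left_cv = self.kp_vel * left_error + self.ki_vel * self.left_integral + turn_correction
        right_cv = self.kp_vel * right_error + self.ki_vel * self.right_integral - turn_correction
        return left_cv, right_cv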

After going through setup() the code in the loop() function of \Arduino\Robot\Robot.pde first checks whether all components have been initialized.

void loop()
{
  ReadSerial();

  unsigned long milliSecsSinceLastUpdate = millis() - _TimeInfo.LastUpdateMillisecs;
  if(milliSecsSinceLastUpdate >= c_UpdateInterval)
  {
    //Serial.println(milliSecsSinceLastUpdate);
    // time for another update
    _TimeInfo.Update();
    if (_IsInitialized)
    {
      DoWork();
    }
    else
    {
      RequestInitialization();
    }
  }
}

If at least some of the components have not yet been initialized then initialization is requested by sending corresponding messages to the computer:

void RequestInitialization()
{
    _IsInitialized = true;

    if (!_RobotParams.IsInitialized)
    {
      _IsInitialized = false;

      Serial.print("InitializeDriveGeometry"); // requesting initialization of the parameters of the differential drive needed for odometry calculations
      Serial.print("\n");
    }

    if (!_SpeedController.IsInitialized)
    {
      _IsInitialized = false;

      Serial.print("InitializeSpeedController"); // requesting initialization of the speed controller
      Serial.print("\n");
    }

    if (!_BatteryMonitor.IsInitialized)
    {
      _IsInitialized = false;

      Serial.print("InitializeBatteryMonitor"); // requesting initialization of the battery monitor
      Serial.print("\n");
    }
}

The code running on the PC reacts to these requests by sending down the initialization parameters; on the Arduino the incoming messages are handled by the Messenger class:

void OnMssageCompleted()
{
  if (_Messenger.checkString("s"))
  {
    SetSpeed();
    return;
  }

  if (_Messenger.checkString("DriveGeometry"))
  {
    InitializeDriveGeometry();
    return;
  }

  if (_Messenger.checkString("SpeedControllerParams"))
  {
    InitializeSpeedControllerParams();
    return;
  }

  if (_Messenger.checkString("BatteryMonitorParams"))
  {
    InitializeBatteryMonitor();
  }

  // clear out unrecognized content
  while(_Messenger.available())
  {
    _Messenger.readInt();
  }
}

Only once all components are initialized does the DoWork() function send odometry information up and act on velocity commands:

void DoWork()
{
  _OdometricLocalizer.Update(_LeftEncoderTicks, _RightEncoderTicks);
  _BatteryMonitor.Update();
  _SpeedController.Update(_BatteryMonitor.VoltageIsTooLow);
  IssueCommands();

  Serial.print("o"); // o indicates odometry message
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.X, 3);
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.Y, 3);
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.Heading, 3);
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.V, 3);
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.Omega, 3);
  Serial.print("\n");

  Serial.print("b\t"); // o indicates battery info message
  Serial.print(_BatteryMonitor.BatteryVoltage, 3);
  Serial.print("\t");
  Serial.print(_BatteryMonitor.VoltageIsTooLow);
  Serial.print("\n");
}

void IssueCommands()
{
  float normalizedRightMotorCV, normalizedLeftMotorCV;

  normalizedRightMotorCV = _SpeedController.NormalizedLeftCV;
  normalizedLeftMotorCV = _SpeedController.NormalizedRightCV;

  /*
  Serial.print("Speed: ");
  Serial.print(_SpeedController.DesiredVelocity);
  Serial.print("\t");
  Serial.print(_SpeedController.DesiredAngularVelocity);
  Serial.print("\t");
  Serial.print(_SpeedController.LeftError);
  Serial.print("\t");
  Serial.print(_SpeedController.RightError);
  Serial.print("\t");
  Serial.print(_SpeedController.TurnError);
  Serial.print("\t");
  Serial.print(normalizedRightMotorCV);
  Serial.print("\t");
  Serial.print(normalizedLeftMotorCV);
  Serial.print("\n");
  */

  float rightServoValue = mapFloat(normalizedRightMotorCV, -1, 1, 90.0 - c_MaxMotorCV, 90.0 + c_MaxMotorCV);     // scale it to use it with the servo (value between 0 and 180)
  float leftServoValue = mapFloat(normalizedLeftMotorCV, -1, 1, 90.0 - c_MaxMotorCV, 90.0 + c_MaxMotorCV);     // scale it to use it with the servo (value between 0 and 180)

  _RightServo.write(rightServoValue);     // sets the servo position according to the scaled value (0 ... 179)
  _LeftServo.write(leftServoValue);     // sets the servo position according to the scaled value (0 ... 179)
}

Hopefully this explains the flow of data. In order to participate in the ROS navigation stack the key aspects are that:

  1. Velocity commands are accepted and acted upon. They arrive at the controller as messages prefixed with ‘s:’ for ‘speed’.
  2. Odometry information is sent back. This includes pose data (x, y, heading) as well as speed data (translational velocity and rotational velocity). A sketch of how these messages can be parsed on the PC side follows below.
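For illustration, here is a minimal sketch of how the PC side could parse these tab-separated messages (a hypothetical helper, not the actual arduino.py code; the field order follows the DoWork() output above):

def parse_controller_line(line):
    # line is one newline-terminated message from the Arduino, e.g.
    # "o\t0.123\t0.045\t1.571\t0.100\t0.000" or "b\t12.35\t0"
    parts = line.strip().split('\t')
    if parts[0] == 'o' and len(parts) == 6:
        x, y, heading, v, omega = [float(value) for value in parts[1:]]
        return {'type': 'odometry', 'x': x, 'y': y, 'heading': heading, 'v': v, 'omega': omega}
    if parts[0] == 'b' and len(parts) == 3:
        return {'type': 'battery', 'voltage': float(parts[1]), 'voltageIsTooLow': bool(int(parts[2]))}
    return None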

We are now in a position to do dead reckoning – but this will need to wait for the next blog entry.

Disclaimer: It turns out that the Arduino is taxed to its very limit by my controller program. This is caused by the combination of

  • The very high number of interrupts originating from the high resolution quadrature encoders mounted directly to the motor shafts.
    For details see my blog entry Quadrature encoder too fast for Arduino.
  • The trigonometric calculations performed to obtain the odometry data
  • The bidirectional communication between the controller and the PC.

As a result, the faster the motors run the more wheel encoder transitions are skipped; it looks like interrupts are dropped in this overload scenario. As long as the robot moves slowly (< 1 m/s in my case) the detected number of wheel encoder transitions is accurate enough. I plan to upgrade to a more powerful controller soon.

Sending Data from Arduino to ROS

For a while now I have been following Willow Garage’s open source ROS (Robot Operating System). It just celebrated its third anniversary showing impressive growth with more and more universities and companies using and extending ROS: http://www.willowgarage.com/blog/2010/11/08/happy-3rd-anniversary-ros. Furthermore ROS is beginning to enter the hobby market.

As a first step to get familiar with ROS I decided to create a simple package that would allow me to receive lines of text from the Arduino board and publish them into ROS. Andrew Harris has already published a package for Arduino (http://www.ros.org/news/2010/01/arduino-and-cmucam3-support-for-ros.html) that uses an efficient binary protocol to communicate between Arduino and ROS. I might eventually use it but nothing beats writing a bit of software yourself when it comes to understanding a new software infrastructure.

The source code of my simple Arduino playground package is available on Google code at http://code.google.com/p/drh-robotics-ros/source/browse/?r=60#svn%2Ftrunk%2Fros%2Fplayground. If you want to play with it I suggest that you check out revision 60 of the source code into a folder under your home directory. In order for ROS to see the new package its path needs to be added to the ROS_PACKAGE_PATH environment variable. This is easily accomplished by adding a line at the end of the .bashrc file in your home folder. In my case the line reads:

export ROS_PACKAGE_PATH=~/dev/drh-robotics-ros/ros/playground:$ROS_PACKAGE_PATH

In your case the path to the playground folder might be different. To test it out we need an Arduino board with a sketch that periodically writes a string followed by a newline to the serial port like so:

Serial.println("Hello World");

The Python code of the arduino ROS node consists of two files. The entry point is the file /ros/playground/nodes/arduino.py:

#!/usr/bin/env python
'''
Created on November 20, 2010

@author: Dr. Rainer Hessmer
'''

import roslib; roslib.load_manifest('playground')
import rospy
from std_msgs.msg import String
from playground.msg import Encoder

from SerialDataGateway import SerialDataGateway

class Arduino(object):
	'''
	Helper class for communicating over serial port with an Arduino board
	'''

	def _HandleReceivedLine(self, line):
		rospy.logdebug(line)
		self._Publisher.publish(String(line))

	def __init__(self, port="/dev/ttyUSB0", baudrate=115200):
		'''
		Initializes the receiver class.
		port: The serial port to listen to.
		baudrate: Baud rate for the serial communication
		'''

		self._Publisher = rospy.Publisher('serial', String)
		rospy.init_node('arduino')

		self._SerialDataGateway = SerialDataGateway(port, baudrate, self._HandleReceivedLine)

	def Start(self):
		rospy.logdebug("Starting")
		self._SerialDataGateway.Start()

	def Stop(self):
		rospy.logdebug("Stopping")
		self._SerialDataGateway.Stop()

if __name__ == '__main__':
	arduino = Arduino("/dev/ttyUSB0", 115200)
	try:
		arduino.Start()
		rospy.spin()

	except rospy.ROSInterruptException:
		arduino.Stop()
In the __main__ section at the bottom of the source file an instance of the Arduino class is created with the appropriate port and baud rate. In the constructor the ROS topic ‘serial’ is published and the node is initialized. Finally an instance of the helper class SerialDataGateway is created (more info below). Once this is done the Arduino instance is started. The call to rospy.spin() ensures that the program keeps running.

The helper class SerialDataGateway is responsible for receiving data via the serial port and assembling the data into individual lines. When it receives a ‘\n’ it calls into the line handler function that was passed into the constructor. The class uses a worker thread which in the current usage pattern is not really necessary. But it allows me to enhance the node later to support read and write operations in parallel.

#!/usr/bin/env python
'''
Created on November 20, 2010

@author: Dr. Rainer Hessmer
'''
import threading
import serial
from cStringIO import StringIO
import time
import rospy

def _OnLineReceived(line):
	print(line)

class SerialDataGateway(object):
	'''
	Helper class for receiving lines from a serial port
	'''

	def __init__(self, port="/dev/ttyUSB0", baudrate=115200, lineHandler = _OnLineReceived):
		'''
		Initializes the receiver class.
		port: The serial port to listen to.
		receivedLineHandler: The function to call when a line was received.
		'''
		self._Port = port
		self._Baudrate = baudrate
		self.ReceivedLineHandler = lineHandler
		self._KeepRunning = False

	def Start(self):
		self._Serial = serial.Serial(port = self._Port, baudrate = self._Baudrate, timeout = 1)

		self._KeepRunning = True
		self._ReceiverThread = threading.Thread(target=self._Listen)
		self._ReceiverThread.setDaemon(True)
		self._ReceiverThread.start()

	def Stop(self):
		rospy.loginfo("Stopping serial gateway")
		self._KeepRunning = False
		time.sleep(.1)
		self._Serial.close()

	def _Listen(self):
		stringIO = StringIO()
		while self._KeepRunning:
			data = self._Serial.read()
			if data == '\r':
				pass  # ignore carriage returns
			elif data == '\n':
				self.ReceivedLineHandler(stringIO.getvalue())
				stringIO.close()
				stringIO = StringIO()
			else:
				stringIO.write(data)

	def Write(self, data):
		info = "Writing to serial port: %s" % data
		rospy.loginfo(info)
		self._Serial.write(data)

if __name__ == '__main__':
	dataReceiver = SerialDataGateway("/dev/ttyUSB0", 115200)
	dataReceiver.Start()

	raw_input("Hit <Enter> to end.")
	dataReceiver.Stop()

The Arduino class passes its ‘_HandleReceivedLine’ function into the SerialDataGateway constructor. As a result whenever a new line is received the _HandleReceivedLine function in the Arduino class will be called:

def _HandleReceivedLine(self,  line):
  rospy.logdebug(line)
  self._Publisher.publish(String(line))

The function simply logs the line and then publishes it as a string. In a real world scenario I would modify the function to parse the line and extract, say, pose data (x, y, and angle). Correspondingly a pose message rather than a simple string message would be published.
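For illustration, such a modified handler might look roughly like the sketch below. The ‘p’ message prefix, the field layout and the _PosePublisher are made up for this example; geometry_msgs/Pose2D is a standard ROS message type.

from geometry_msgs.msg import Pose2D

def _HandleReceivedLine(self, line):
    rospy.logdebug(line)
    parts = line.strip().split('\t')
    # assume a hypothetical pose message of the form "p\t<x>\t<y>\t<theta>"
    if len(parts) == 4 and parts[0] == 'p':
        pose = Pose2D()
        pose.x = float(parts[1])
        pose.y = float(parts[2])
        pose.theta = float(parts[3])
        # _PosePublisher would be a rospy.Publisher('pose', Pose2D) created in __init__
        self._PosePublisher.publish(pose)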

Finally we are ready to try out the new node. The required steps are as follows:

  • Hook up an Arduino board running a sketch that causes a line to be written to the serial port periodically.
  • In a new terminal run:
    roscore
  • In another new terminal run:
    roscd playground
    make
    rosrun playground arduino.py

Provided the port and baud rate specified in arduino.py match your setup, you should now have a running ROS Arduino node. Next we want to tap into the published topic.

  • In a new terminal subscribe to the ‘serial’ topic:
    rostopic echo /serial
    You should see the published lines fly by in this terminal.
  • To get a graphical representation of what is running start rxgraph in yet another terminal. The resulting graph should look similar to this:

Screenshot-rxgraph

On the left is the /arduino node with the exposed /serial topic. The node in the middle is the result of the rostopic app that we started. It subscribes to the /serial topic and displays the values in the terminal.

This concludes my first baby step into the world of ROS.