Ardros – Upgrading to the Maple 32-bit Microcontroller

As described in previous posts, I have so far used an Arduino Mega controller for my Ardros robot. In most cases this controller provides more than enough power. Mainly out of curiosity, but also to overcome problems with counting encoder ticks when my motors run at full speed, I decided to give LeafLabs' Maple controller a try.

The Maple is based on the 32-bit STM32F103RB ARM Cortex M3 microcontroller running at 72 MHz. Apart from the drastic speed improvement, what makes the Maple so attractive is that it uses the same pin layout as the Arduino and comes with the same IDE and hardware abstraction.

MapleIde

As a result the transition from Arduino is fairly easy. Naturally there are differences, though. They are summarized here.

The port of the complete Arduino program including the libraries that make up the Ardros – Controller and Drive System was rather painless and is now running on my robot. The source code is available on my Google code site drh-robotics-ros. This post is based on revision 104 of the code. The relevant subdirectories are Robot and libraries.

Cycloidal Gear Builder

Introduction

As part of my research into wooden clock making I learned about cycloidal gears and was surprised that I couldn't find a free or open source tool for generating templates for this kind of gear. Free options do exist for the more common involute gears (e.g., http://woodgears.ca/gear_cutting/template.html). I did, however, find Hugh Sparks' excellent write-up on cycloidal gears and the associated JavaScript-based calculator. The calculations reflect British Standard 978, Part 2.

New Open Source Gear Builder Utility

UPDATE: The information below is still valid but in the meantime I created an Online Cycloidal Gear Builder which is easier to use and does not have any install requirements. You will probably want to use it instead of the older desktop app. More info here.

Back to the original blog post …

To remedy the lack of free and open source tools for cycloidal gears I created a .Net 3.5 WinForms application that generates SVG (Scalable Vector Graphics) output for meshing gears. Under the hood the application uses the formulas described by Hugh Sparks. Here is a screenshot of the application:

Application_v1.0_Screenshot

The middle left part is the input section. Based on the inputs, a number of output values are calculated and displayed in the left section. For an explanation of the term module and the various output values please see Hugh Sparks' web page. To generate an SVG graphic for the specified gears, click the ‘Generate & save SVG‘ button. By default the output is saved to a file called CycloidalGear.svg in the folder that the program is executed from. A different output file can be specified by clicking the ‘…’ button. Note that an existing file with the same name will be overwritten! This is on purpose, since it makes it fairly simple to use a browser to display the graphical output. The typical usage pattern is as follows:

  • Specify the input parameters
  • Click on the Generate & save SVG button
  • Open the generated svg file in a modern browser that has SVG support (Firefox or Chrome work very well). You should get something like this:
    OutputInFirefox
  • Now you can repeatedly change parameters and regenerate the SVG output. Each time after clicking the ‘Generate & save SVG‘ button, switch to the browser and refresh its output by pressing F5.

The generated output can be further enhanced by using a vector graphics editor that supports SVG. One attractive option is Inkscape, an open source, multi-platform vector graphics editor that directly operates on the SVG file format.

The British Standard 978, Part 2 results in quite a bit of room between the trough of one gear and the apex of the other. It is possible to override the default behavior by checking the box ‘Custom Slop‘ and specifying the desired slop in mm:

CustomSlopDefinition

The figure below shows meshing teeth with the default slop (left) and a custom 0.3 mm slop (right). Notice that on the right the dedendum circle of one gear almost touches the addendum circle of the other; the gap is the specified 0.3 mm.

DefaultVsCustomSlop
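As an aside, the basic sizing relations behind the calculator's outputs are standard gear formulas. Here is a minimal Python sketch (the variable names are mine, not the application's; the full BS 978 tooth profile construction is described on Hugh Sparks' page):

import math

def gearBasics(module, wheelTeeth, pinionTeeth):
    # module is in mm; the pitch circle diameter is simply module * tooth count
    wheelPitchDiameter = module * wheelTeeth
    pinionPitchDiameter = module * pinionTeeth
    circularPitch = math.pi * module                      # tooth-to-tooth distance along the pitch circle
    centerDistance = (wheelPitchDiameter + pinionPitchDiameter) / 2.0
    return wheelPitchDiameter, pinionPitchDiameter, circularPitch, centerDistance

# example: module 2 mm, 60-tooth wheel meshing with an 8-leaf pinion
print gearBasics(2.0, 60, 8)   # (120.0, 16.0, 6.283..., 68.0)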

Install

The application is implemented in C# and requires Windows with .Net 3.5 installed. Most Windows PCs have this version of the .Net framework already installed. If it is missing it can be downloaded from here. To install Gear Builder please follow these steps:

The application is self-contained and does not depend on the registry, etc. As a result it can simply be uninstalled by deleting the extracted files.

Source Code

The application is coded in C# using Visual Studio 2010. The source code can be accessed from the associated Google code project site, specifically http://code.google.com/p/drh-horology/source/checkout. For the generation of the SVG output I use a slightly enhanced version of Ben Peterson's SVG framework library.

License

The application itself, as well as the associated source code, is covered by the permissive MIT license. The application leverages Hugh Sparks' formulas as well as Ben Peterson's SVG library. Neither comes with a specific license.

Presentation for the Riverside Robotic Society: Kinect + ROS

At the beginning of this month I gave a presentation at the monthly club meeting of the Riverside Robotic Society. Thomas Messerschmidt, the president of the club, had invited me as a guest speaker to talk about how I use the Kinect sensor in conjunction with ROS to perform SLAM and navigation on my robot. Here is a link to the associated slide deck (pdf): Kinect-ROS-Navigation

Using the ROS Navigation Stack

In this post I cover how we can leverage the ROS navigation stack to let the robot autonomously drive from a given location in a map to a defined goal. I will skim over many of the details since the associated tutorials on the ROS wiki do a great job in describing how to set up the navigation stack.

With SLAM working on the Ardros robot (see my previous post) we already have much of the required setup covered. In particular we have:

  • A robot that publishes odometry information and accepts drive commands
  • (Faked) laser scanner sensor data derived from the Kinect sensor
  • Transform configurations covering the transformations between base_link, the odometry frame, and the laser scanner frame (depth frame of the Kinect sensor)
  • A map generated via SLAM

From a high level perspective navigation brings in these additions:

  • Localization of the robot in a given map using Monte Carlo Localization (MCL). For more information about MCL (including C# source code) please see my blog post Monte Carlo Localization for Robots.
  • Global and local path planning. Amongst other parameters, path planning most notably requires a map, information about the size (footprint) of the robot, and information about how fast the robot can drive and accelerate.

Before reading on I highly recommend that you minimally read through the ROS tutorial ‘Setup and Configuration of the Navigation Stack on a Robot‘.

Okay, let’s dive into the Ardros specific files. As described in my previous posts the source code is available from here: http://code.google.com/p/drh-robotics-ros/. This post is based on revision 79 of the source code.

The whole navigation machinery is started via the launch file navigation.launch. The launch file itself is very simple:

<launch>
	<include file="$(find ardros)/launch/ardros_configuration.launch"/>
	<include file="$(find ardros)/launch/move_base.launch"/>
</launch>

It brings in two other launch files, the first being ardros_configuration.launch, which we already know from the previous post. It handles the Ardros-specific aspects including odometry, drive commands, and the use of the Kinect sensor as a fake laser scanner.

The second include file, move_base.launch, is responsible for bringing up the navigation specific aspects:

<launch>
  <!-- Run the map server -->
  <node name="map_server" pkg="map_server" type="map_server" args="$(find ardros)/maps/map.pgm 0.05"/>

  <!--- Run AMCL -->
  <include file="$(find ardros)/launch/amcl_diff.launch" />

  <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
    <rosparam file="$(find ardros)/info/costmap_common_params.yaml" command="load" ns="global_costmap" />
    <rosparam file="$(find ardros)/info/costmap_common_params.yaml" command="load" ns="local_costmap" />
    <rosparam file="$(find ardros)/info/local_costmap_params.yaml" command="load" />
    <rosparam file="$(find ardros)/info/global_costmap_params.yaml" command="load" />
    <rosparam file="$(find ardros)/info/base_local_planner_params.yaml" command="load" />
  </node>
</launch>

The first relevant line brings up the map server and specifies the map it should serve up. It includes the path to the map, which needs to be adjusted accordingly. Secondly it imports the launch file for AMCL (adaptive, or KLD-sampling, Monte Carlo localization) configured for a robot with differential drive (see odom_model_type in amcl_diff.launch). AMCL is responsible for localizing the robot on the map. AMCL uses numerous parameters, many of which are specified in the launch file. I largely use the default values, but I increased the particle count numbers. I expect that going forward I will fine tune more of these parameters.

It is worth noting that Ardros is quite similar to Willow Garage's turtlebot in its use of the Kinect sensor in conjunction with the navigation stack. I started to use the pointcloud_to_laserscan node before it became part of the turtlebot stack and hence had to come up with my own launch files and associated parameters. In the future I should be able to borrow quite a bit from the turtlebot, at a minimum better parameters for AMCL.

Finally the last part of the move_base.launch file brings up the move_base node, which is the centerpiece of the ROS navigation stack (see the diagram and detailed explanation at http://www.ros.org/wiki/navigation/Tutorials/RobotSetup). The content of the referenced yaml files should be pretty clear after reading through the navigation stack tutorial. I will only briefly touch on the robot's footprint, which is defined in the file costmap_common_params.yaml:

# for details see: http://www.ros.org/wiki/navigation/Tutorials/RobotSetup

obstacle_range: 2.5
raytrace_range: 3.0
footprint: [[0.305, 0.278], [0.04, 0.193], [-0.04, 0.193], [-0.282, 0.178], [-0.282, -0.178], [-0.04, -0.193], [0.04, -0.193], [0.305, -0.278]]
#footprint: [[0.075, 0.178], [0.04, 0.193], [-0.04, 0.193], [-0.282, 0.178], [-0.282, -0.178], [-0.04, -0.193], [0.04, -0.193], [0.075, -0.178]]
#robot_radius: ir_of_robot
inflation_radius: 0.6

observation_sources: laser_scan_sensor

laser_scan_sensor: {sensor_frame: openni_depth_frame, data_type: LaserScan, topic: scan, marking: true, clearing: true}

The specified points define a polygon that reaches out beyond the front of the robot, i.e., I pretend the robot is bigger than it actually is. The reason is that I want to make sure the path planner does not generate paths that drive the robot closer to an obstacle than it can actually see. The minimum distance that the Kinect sensor can measure is about 0.6 m (2 ft). To mitigate this short-range ‘blindness’ I:

  1. Mount the Kinect towards the back of the robot
  2. Pretend that the robot footprint reaches out beyond the front wheels further than it actually does.

Now we are ready to run the navigation stack. Close all terminals and then run the following command in a new terminal:

roslaunch `rospack find ardros`/launch/navigation.launch

In a second terminal bring up rviz:

rosrun rviz rviz -d `rospack find ardros`/rviz/navigation.vcg

You will probably see a number of error messages in the first terminal since the robot has not yet been located in the map. To locate the robot, set the starting pose by clicking the ‘2D Pose Estimate’ button in rviz, then clicking on the robot's current location in the map and, while still holding the left mouse button, dragging the mouse in the direction the robot faces. After specifying the goal in the same way using the ‘2D Nav Goal’ button, you should see the planned path and the robot should start driving towards the goal.
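As a side note, the initial pose can also be set programmatically by publishing to the initialpose topic that AMCL subscribes to. Below is a minimal sketch; the pose values are made up and the covariance entries are only rough guesses:

#!/usr/bin/env python
import rospy
import tf
from geometry_msgs.msg import PoseWithCovarianceStamped

rospy.init_node('set_initial_pose')
publisher = rospy.Publisher('initialpose', PoseWithCovarianceStamped)
rospy.sleep(1.0)   # give subscribers a moment to connect

msg = PoseWithCovarianceStamped()
msg.header.frame_id = 'map'
msg.header.stamp = rospy.Time.now()
msg.pose.pose.position.x = 1.0    # made-up starting position in the map frame
msg.pose.pose.position.y = 2.0
q = tf.transformations.quaternion_from_euler(0, 0, 0)   # roll, pitch, yaw; facing along x
msg.pose.pose.orientation.z = q[2]
msg.pose.pose.orientation.w = q[3]
msg.pose.covariance = [0.0] * 36
msg.pose.covariance[0] = msg.pose.covariance[7] = 0.25   # x and y variance
msg.pose.covariance[35] = 0.07                           # yaw variance
publisher.publish(msg)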

Optionally you can monitor the status of the robot driving to the specified goal by running the following command in a third terminal:

rostopic echo /move_base/status
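Goals do not have to come from rviz either; the move_base node exposes a standard actionlib interface. Here is a minimal sketch of sending a goal from a Python node (the goal coordinates are placeholders):

#!/usr/bin/env python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_nav_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0     # placeholder goal in map coordinates
goal.target_pose.pose.position.y = 0.5
goal.target_pose.pose.orientation.w = 1.0  # face along the map x axis

client.send_goal(goal)
client.wait_for_result()
print client.get_state()   # 3 (SUCCEEDED) if the robot reached the goal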

The video below is the recording of a test run showing the output in rviz while Ardros navigates a room. The video shows how the initial pose and the goal are selected. Initially the probabilistic cloud of poses used by AMCL is very wide. As the robot drives around, the cloud condenses because the robot's position, determined from the odometry and laser scan information, becomes better defined. The video also shows the calculated path that the robot tries to follow to get to the specified goal.

2d SLAM with ROS and Kinect

Introduction

So far we really did not tap much into the power of ROS. Teleoperation and dead reckoning were just a way to ‘warm up’ and get familiar with the basics of ROS. This is about to change as we are getting ready to leverage ROS's implementation of SLAM (Simultaneous Localization and Mapping). The associated ROS package is gmapping, which wraps the open source GMapping implementation available on OpenSlam.org. In order to use gmapping we need a source of odometry information (Ardros already provides this) and a source of depth information. Typically laser scanners are used to get the depth information. They are accurate and fast but also quite expensive, and hence not a very attractive choice for hobby robotics. Instead of a laser scanner I am using the Microsoft Kinect sensor, which provides RGB-D information (color video plus depth information).

Hooking up the Kinect Sensor

The following assumes that you have the desktop-full package of ROS Diamondback installed. In addition we need to install the openni_kinect package as described at http://www.ros.org/wiki/openni_kinect. Once it is in place we can verify that the depth sensor information comes in. After hooking up the Kinect sensor, type the following command in a terminal:

roslaunch openni_camera openni_node.launch

Then in a different terminal:

rosrun rviz rviz

In rviz set the Fixed Frame parameter to /openni_camera under Global Options. Then add a PointCloud2 display type and select /camera/rgb/points as its topic. You should now see a color-coded depth image similar to the one below:

KinectDepth

Additional information about how the Kinect sensor is represented in ROS can be found here: http://www.ros.org/wiki/openni_camera

Faking a Laser Scanner

To use the depth image for SLAM we convert the point cloud to a faked laser scan signal by cutting a horizontal slice out of the image and using the nearest distance (closest depth) in each column. Fortunately we don't need to code this ourselves. The pointcloud_to_laserscan package, which is part of the turtlebot stack, already covers this. The turtlebot stack does not come with desktop-full and needs to be installed separately.
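To illustrate the principle (this is not the pointcloud_to_laserscan code, just a sketch of the idea), the conversion boils down to taking a horizontal band around the optical axis and keeping the nearest return in each column:

import numpy as np

def fakeScanFromDepth(depthImage, sliceHeight=20):
    # depthImage: 2D array of depth values in meters (0 or NaN means no return)
    rows, columns = depthImage.shape
    top = rows // 2 - sliceHeight // 2
    band = depthImage[top:top + sliceHeight, :]              # horizontal slice around the optical axis
    band = np.where(np.isfinite(band) & (band > 0), band, np.inf)
    return band.min(axis=0)                                  # nearest return per column = one 'laser' range

# example with a random 480x640 depth frame
print fakeScanFromDepth(np.random.uniform(0.6, 5.0, (480, 640))).shape   # (640,)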

The remainder of this blog entry requires revision 70 of my Ardros package. For details about how to get the Ardros source code please refer to my blog entry Ardros – Dead Reckoning, but make sure that you use the newer revision.

We are now in a position to look at the laser scan output in rviz:

  • Close all open terminals
  • Open a new terminal and execute the following
roscd ardros
roslaunch ./launch/kinect_laser.launch
  • Open a new terminal and execute
rosrun rviz rviz
  • In rviz
    • Set the fixed frame under Global Options to /openni_depth_frame
    • Add the display type ‘Laser Scan’ and select the topic /scan for it.

You should now see the laser scan data as derived from the Kinect's point cloud. For the screenshot below I added a grid with a cell size of 0.1 m to the rviz display and also included a tf (transform) display showing only the /openni_camera frame. For clarification I added the x, y, z labels after taking the screenshot.

LaserScan

The launch file ‘kinect_laser.launch‘ warrants a closer look:

<launch>
  <!-- kinect and frame ids -->
  <include file="$(find openni_camera)/launch/openni_node.launch"/>

  <!-- openni_manager -->
  <node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/>

  <!-- throttling -->
  <node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager">
    <param name="max_rate" value="2"/>
    <remap from="cloud_in" to="/camera/depth/points"/>
    <remap from="cloud_out" to="cloud_throttled"/>
  </node>

  <!-- fake laser -->
  <node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager">
    <param name="output_frame_id" value="/openni_depth_frame"/>
    <remap from="cloud" to="cloud_throttled"/>
  </node>
</launch>

The first include statement brings in the openni_node launch file, which brings up the Kinect camera (openni camera) and the tf transformations that link the various camera frames (depth, rgb, etc.) together.

Then a node of type nodelet named openni_manager is spun up. It acts as the container for other nodelets. Nodelets are like nodes, but instead of running as their own process they are loaded into a parent node of type nodelet. Since multiple nodelets execute in the same process, they can share data very efficiently. This is relevant when large amounts of data (like video frames) are processed. The launch file causes two nodelets to be loaded: a) to throttle the rgb-d frame rate down to 2 Hz and b) to cut a slice out of each rgb-d frame, calculate the nearest distance column by column, and assemble the result as a laser scan data structure published on the topic /scan.

Both nodelets are part of the turtlebot stack. CloudToScan supports a couple of parameters that define the size and location of the slice that is used as the basis for the laser scan data. I use the default values. For details please refer to the source code.

Putting it all Together

Finally we are ready to get SLAM working. As inputs the SLAM algorithm requires odometry information and laser scan data. We provide both by using the launch file ‘ardros_configuration.launch‘.

<launch>
  <node pkg="tf" type="static_transform_publisher" name="base_to_kinect_broadcaster" args="-0.115 0 0.226 0 0 0 base_link openni_camera 100" />

  <!--we use the kinect sensor as a fake laser scanner /-->
  <include file="$(find ardros)/launch/kinect_laser.launch"/>

  <node name="arduino" pkg="ardros" type="arduino.py">
    <rosparam file="$(find ardros)/info/ardros.yaml" command="load" />
  </node>
</launch>

The content is best explained from bottom to top: we launch the arduino node, which represents the robot, and we also launch the Kinect-based laser scanner as discussed above. In addition we need to publish a tf transform that relates the robot's base_link frame to the openni_camera frame. This transformation makes it possible to transform the laser scan data into the base_link frame. In the photo below you can see where the Kinect sensor is located on the robot.

KinectOnRobot
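For reference, the same transform could also be published from a small Python node; the sketch below mirrors the static_transform_publisher arguments above (x y z yaw pitch roll, parent frame, child frame, period in ms):

#!/usr/bin/env python
import rospy
import tf

rospy.init_node('base_to_kinect_broadcaster_py')
broadcaster = tf.TransformBroadcaster()
rate = rospy.Rate(10)   # 100 ms period, matching the static_transform_publisher above

while not rospy.is_shutdown():
    broadcaster.sendTransform(
        (-0.115, 0.0, 0.226),                                # Kinect sits 11.5 cm behind and 22.6 cm above base_link
        tf.transformations.quaternion_from_euler(0, 0, 0),   # no rotation between the frames
        rospy.Time.now(),
        'openni_camera',   # child frame
        'base_link')       # parent frame
    rate.sleep()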

Finally we need to launch slam_gmapping. All of this is wrapped up in the slam.launch file:

<launch>
	<include file="$(find ardros)/launch/ardros_configuration.launch"/>

	<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
		<!--remap from="scan" to="base_scan"/-->
		<!--param name="odom_frame" value="odom"/-->
		<param name="map_update_interval" value="2.0"/>
		<param name="maxUrange" value="6.0"/>
		<param name="sigma" value="0.05"/>
		<param name="kernelSize" value="1"/>
		<param name="lstep" value="0.05"/>
		<param name="astep" value="0.05"/>
		<param name="iterations" value="5"/>
		<param name="lsigma" value="0.075"/>
		<param name="ogain" value="3.0"/>
		<param name="lskip" value="0"/>
		<param name="srr" value="0.01"/>
		<param name="srt" value="0.02"/>
		<param name="str" value="0.01"/>
		<param name="stt" value="0.02"/>
		<param name="linearUpdate" value="0.25"/>     <!-- param name="linearUpdate" value="0.5"/-->
		<param name="angularUpdate" value="0.262"/>   <!--param name="angularUpdate" value="0.436"/-->
		<param name="temporalUpdate" value="-1.0"/>
		<param name="resampleThreshold" value="0.5"/>
		<param name="particles" value="300"/>
		<param name="xmin" value="-50.0"/>
		<param name="ymin" value="-50.0"/>
		<param name="xmax" value="50.0"/>
		<param name="ymax" value="50.0"/>
		<param name="delta" value="0.05"/>
		<param name="llsamplerange" value="0.01"/>
		<param name="llsamplestep" value="0.01"/>
		<param name="lasamplerange" value="0.005"/>
		<param name="lasamplestep" value="0.005"/>
	</node>
</launch>

The various gmapping parameters are explained at http://www.ros.org/wiki/gmapping. There is plenty of room for fine tuning, but the values listed above work fairly well. I am using mainly the default values, but I specifically chose a relatively high particle count to increase the accuracy, which suffers from the fact that the Kinect sensor has a much smaller horizontal field of view than a real laser scanner. So let's run it:

  • Close all open terminals
  • Open a new terminal and execute the following
roscd ardros
roslaunch ./launch/slam.launch
  • Open a new terminal and execute
roscd ardros
rosrun rviz rviz -d ./rviz/slam.vcg

We can now drive the robot around for a while and watch as the map is assembled. When the map is completed it can be saved:

  • Open a new terminal and execute
roscd ardros
rosrun map_server map_saver -f ./maps/map

As an example, here is a snapshot of a sample run and a screen recording that shows how it evolved.

RunningSlam

Ardros – Dead Reckoning

Now that we have the ability to retrieve odometry information from the robot and send velocity commands we combine both to drive the robot around using dead reckoning. The setup is very similar to the teleop scenario that I briefly discussed in the blog entry ROS Introduction Class with Ardros. Instead of the teleop node we now use a new DeadReckoning node:

DeadReckoningNodes

The other key difference is that we now leverage the odometry transform information that the arduino node publishes (represented by the ‘tf’ lollipop in the block diagram above). The DeadReckoning.py node is modeled after the corresponding C++ node in the Willow Garage PR2 tutorial ‘Using the base controller with odometry and transform information‘.

Note that the DeadReckoning.py node can be used with any ROS enabled robot that publishes odometry information and accepts velocity commands based on the contract defined by ROS. Obviously in our case we use it in conjunction with the arduino node. If you want to follow along I suggest you locally check out the code from the Google Subversion repository. This blog entry is based on revision 61. In order to check out the code to the local directory ‘drh-robotics-ros’ use this command:

svn checkout -r 61 http://drh-robotics-ros.googlecode.com/svn/trunk/ drh-robotics-ros

For ROS to see the new package its path needs to be added to the ROS_PACKAGE_PATH environment variable. This is accomplished by adding a line at the end of the .bashrc file in your home folder:

export ROS_PACKAGE_PATH=~/drh-robotics-ros/ros/ardros:$ROS_PACKAGE_PATH

The path to the ardros folder might need to be adjusted.

Now we are ready to spin up the arduino node.

  1. Open a new terminal and enter
roscore
     This will start the ROS server.
  2. In a new terminal type
roscd ardros
roslaunch ./launch/ardros_standalone.launch

The content of the launch file ardros_standalone.launch is:

<launch>
  <node name="arduino" pkg="ardros" type="arduino.py">
    <rosparam file="$(find ardros)/info/ardros.yaml" command="load" />
  </node>
</launch>

It starts the arduino.py node with the parameters specified in the referenced ardros.yaml file:

port: /dev/ttyUSB0
baudRate: 115200
batteryMonitorParams: {voltageTooLowlimit: 11.7}

# wheel diameter [m], trackwidth [m], ticks per revolution
driveGeometry: {wheelDiameter: 0.0763, trackWidth: 0.37, countsPerRevolution: 9750}

# The speed controller parameters include the P and I gains for the three PI controllers and the command timeout for the speed controller in seconds.
# If no velocity command arrives for more than the specified timeout then the speed controller will stop the robot.
speedController: {velocityPParam: 0.1, velocityIParam: 0.1, turnPParam: 0.4, turnIParam: 0.5, commandTimeout: 1.0}
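Since the yaml file is loaded inside the <node> element, its content ends up in the arduino node's private namespace. Here is a minimal sketch of how such parameters can be read with rospy (not a verbatim excerpt from arduino.py; the parameter names follow the yaml above):

import rospy

rospy.init_node('arduino')

# parameters loaded via <rosparam command="load"/> inside the <node> tag live in the
# node's private namespace, hence the '~' prefix
port = rospy.get_param('~port', '/dev/ttyUSB0')
baudRate = rospy.get_param('~baudRate', 115200)
driveGeometry = rospy.get_param('~driveGeometry')   # returns a dictionary
wheelDiameter = driveGeometry['wheelDiameter']
trackWidth = driveGeometry['trackWidth']
countsPerRevolution = driveGeometry['countsPerRevolution']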

The arduino.py node is now running, publishing tf and odometry messages while waiting for velocity commands. Sending those commands is the task of the DeadReckoning.py node. This node implements the Driver class (more about it later), which exposes two public functions:

  • DriveX(self, distance, speed)
  • Turn(self, angle, angularSpeed)

The node creates an instance of the Driver class and then uses the two public functions to command the robot to drive a given pattern. The checked in code commands the robot to drive forward 2m, turn 180°, drive back 2m and finally turn 180° again to end up in the start position:

if __name__ == '__main__':
	try:
		driver = Driver()
		driver.DriveX(distance = 2, speed = 0.1);
		driver.Turn(angle = math.pi, angularSpeed = 0.3);
		driver.DriveX(distance = 2, speed = 0.1);
		driver.Turn(angle = -math.pi, angularSpeed = 0.3)
		#driver.Turn(angle = 3 * math.pi, angularSpeed = 0.3);
	except rospy.ROSInterruptException:
		pass

Let’s look at the code of the Driver class starting with its constructor:

	def __init__(self):
		rospy.init_node('DeadReckoning')

		self._VelocityCommandPublisher = rospy.Publisher("cmd_vel", Twist)
		self._TransformListener = tf.TransformListener()

		# wait for the listener to get the first transform message
		self._TransformListener.waitForTransform("/odom", "/base_link", rospy.Time(), rospy.Duration(4.0))

In the constructor the instance is registered as a ROS node. Since the node will publish twist messages on the cmd_vel topic a corresponding publisher needs to be created. In order to get information about the transformation from the /odom frame to the robot’s /base_link frame we need a TransformListener. Finally we wait up to 4 seconds for the requested transformation to arrive. If we don’t get a message within this time the constructor fails.

Driving forward and backward is handled by the DriveX function:

	def DriveX(self, distance, speed):
		'''
		Drive in x direction a specified distance based on odometry information
		distance [m]: the distance to travel in the x direction (>0: forward, <0: backwards)
		speed [m/s]: the speed with which to travel; must be positive
		'''
		forward = (distance >= 0)

		# record the starting transform from the odom to the base_link frame
		# Note that here the 'from' frame precedes 'to' frame which is opposite to how they are
		# ordered in tf.TransformBroadcaster's sendTransform function.
		# startTranslation is a tuple holding the x,y,z components of the translation vector
		# startRotation is a tuple holding the four components of the quaternion
		(startTranslation, startRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))

		done = False

		velocityCommand = Twist()
		if forward:
			velocityCommand.linear.x = speed # going forward m/s
		else:
			velocityCommand.linear.x = -speed # going backward m/s

		velocityCommand.angular.z = 0.0 # no angular velocity

		while True:
			try:
				(currentTranslation, currentRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))

				dx = currentTranslation[0] - startTranslation[0]
				dy = currentTranslation[1] - startTranslation[1]

				distanceMoved = math.sqrt(dx * dx + dy * dy)
				print distanceMoved
				if (forward):
					arrived = distanceMoved >= distance
				else:
					arrived = distanceMoved >= -distance

				if (arrived):
					break
				else:
					# send the drive command
					print("sending vel command" + str(velocityCommand))
					self._VelocityCommandPublisher.publish(velocityCommand)

			except (tf.LookupException, tf.ConnectivityException):
				continue

			rospy.sleep(0.1)

		#stop
		velocityCommand.linear.x = 0.0
		velocityCommand.angular.z = 0.0
		self._VelocityCommandPublisher.publish(velocityCommand)

		return done

The function first retrieves the current transformation between the frames /odom and /base_link to know where the robot starts from. Then it sets up a twist message that commands the robot to drive forward (or backward) at the specified speed. The actual driving happens in the while loop. We get the current transformation and calculate how far the robot has driven since the start of the function. If we have traveled the requested distance, the while loop exits and the function returns. Otherwise the twist message is published on the cmd_vel topic and we wait a little before checking again.

The Turn function is structured in the same way. Since angles are involved it is slightly more complex but hopefully the code is clear enough to understand.

	def Turn(self, angle, angularSpeed):
		'''
		Turn the robot based on odometry information
		angle [rad]: the angle to turn (positive angles mean counterclockwise rotation)
		angularSpeed [rad/s]: the speed with which to turn; must be positive
		'''

		ccw = (angle >= 0) # counter clockwise rotation

		# record the starting transform from the odom to the base frame
		# Note that here the 'from' frame precedes 'to' frame which is opposite to how they are
		# ordered in tf.TransformBroadcaster's sendTransform function.
		(startTranslation, startRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))
		startAngle = 2 * math.atan2(startRotation[2], startRotation[3])

		print "start angle: " + str(startAngle)
		previousAngle = startAngle
		angleOffset = 0.0

		done = False

		velocityCommand = Twist()
		velocityCommand.linear.x = 0.0 # no linear motion while turning
		if ccw:
			velocityCommand.angular.z = angularSpeed
		else:
			velocityCommand.angular.z = -angularSpeed

		while not rospy.is_shutdown():
			try:
				(currentTranslation, currentRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))
				currentAngle = 2 * math.atan2(currentRotation[2], currentRotation[3])
				print "currentAngle: " + str(currentAngle)

				# we need to handle roll over of the angle
				if (currentAngle * previousAngle < 0 and math.fabs(currentAngle - previousAngle) > math.pi / 2):
					if (currentAngle > previousAngle):
						print "subtracting"
						angleOffset = angleOffset - 2 * math.pi
					else:
						print "adding"
						angleOffset = angleOffset + 2 * math.pi

				angleTurned = currentAngle + angleOffset - startAngle
				previousAngle = currentAngle

				print "angleTurned: " + str(angleTurned)
				if (ccw):
					arrived = (angleTurned >= angle)
				else:
					arrived = (angleTurned <= angle)

				print arrived

				if (arrived):
					break
				else:
					# send the drive command
					print("sending vel command" + str(velocityCommand))
					self._VelocityCommandPublisher.publish(velocityCommand)

			except (tf.LookupException, tf.ConnectivityException):
				continue

			time.sleep(0.1)

		#stop
		velocityCommand.linear.x = 0.0
		velocityCommand.angular.z = 0.0
		self._VelocityCommandPublisher.publish(velocityCommand)

		return done

We can now startup the DeadReckoning.py node and see it control the robot’s movements. In a new terminal type

roscd ardros
rosrun ardros DeadReckoning.py

This covers dead reckoning.

Ardros – Controller and Drive System

Ardros (Arduino & ROS) is an experimental robot platform that I am working on to get familiar with ROS and particularly with the navigation stack that comes with ROS. My previous post focused on the Python code running on the PC that acts as a gateway to the Arduino controller. In the following I cover the (differential) drive system and the Arduino controller code.

The main components that make up the drive system are:

The controller code is available from Google code (http://code.google.com/p/drh-robotics-ros/source/browse/#svn%2Ftrunk) in the sub folder ‘Arduino’. This post discusses the state as of Subversion revision 58. I expect that the code will further evolve over time.

The main file is \Arduino\Robot\Robot.pde. It leverages a couple of libraries (source code in \Arduino\libraries):

  • Messenger: For handling messages sent from the PC through the USB link
  • TimeInfo: Helper class for keeping track of time
  • RobotParams: Holds the parameter values that are required to determine the distance traveled / turned from the encoder ticks. The parameters are: WheelDiameter, TrackWidth, CountsPerRevolution, DistancePerCount, RadiansPerCount
  • OdometricLocalizer: It is responsible for determining the pose (x, y position and heading) of the robot based on the encoder counts.
  • BatteryMonitor: A simple wrapper class that encapsulates the scaling of an analog input value to get the current battery voltage. It also determines whether the battery voltage is too low. This information is used by the speed controller (see below) and causes it to stop the motors if the voltage is too low.
  • SpeedController: It uses three PI (proportional & integral) controllers to translate commanded velocity (velocity in x direction and angular velocity) to motor control values for the left and right motors. The speed controller implements a slightly modified version of the coupled PI control structure discussed here. The document illustrates a structure that only uses a P (proportional) controller that links the two motors and ensures that they are kept in sync:
    CoupledPIControlls
    In my code I use a PI (proportional & integral) controller instead, i.e., Kp_turn is accompanied by a Ki_turn. In my case it was not necessary to implement the boxes labeled ‘friction_comp’. A rough sketch of this coupled structure follows this list.
    The speed controller also implements a command timeout. If no commands are received for more than the specified timeout then the controller stops the motors.
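For illustration, here is a rough Python sketch of the coupled control idea mentioned in the SpeedController bullet above. It is not the actual Arduino implementation; it only shows the general principle of running a PI loop per wheel plus a PI coupling term on the mismatch between the wheels, and all names and gains are made up:

class CoupledPIController(object):
    """Illustrative only: two wheel-speed PI loops coupled by a PI term on their difference."""

    def __init__(self, kp, ki, kpTurn, kiTurn):
        self.kp, self.ki = kp, ki
        self.kpTurn, self.kiTurn = kpTurn, kiTurn
        self.leftIntegral = self.rightIntegral = self.turnIntegral = 0.0

    def Update(self, desiredLeft, desiredRight, measuredLeft, measuredRight, dt):
        leftError = desiredLeft - measuredLeft
        rightError = desiredRight - measuredRight
        # the coupling term acts on the mismatch between the two wheels
        turnError = (desiredLeft - desiredRight) - (measuredLeft - measuredRight)

        self.leftIntegral += leftError * dt
        self.rightIntegral += rightError * dt
        self.turnIntegral += turnError * dt

        coupling = self.kpTurn * turnError + self.kiTurn * self.turnIntegral
        leftCV = self.kp * leftError + self.ki * self.leftIntegral + coupling
        rightCV = self.kp * rightError + self.ki * self.rightIntegral - coupling
        return leftCV, rightCV   # normalized control values for the two motors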

After going through setup() the code in the loop() function of \Arduino\Robot\Robot.pde first checks whether all components have been initialized.

void loop()
{
  ReadSerial();

  unsigned long milliSecsSinceLastUpdate = millis() - _TimeInfo.LastUpdateMillisecs;
  if(milliSecsSinceLastUpdate >= c_UpdateInterval)
  {
    //Serial.println(milliSecsSinceLastUpdate);
    // time for another update
    _TimeInfo.Update();
    if (_IsInitialized)
    {
      DoWork();
    }
    else
    {
      RequestInitialization();
    }
  }
}

If at least some of the components have not yet been initialized then initialization is requested by sending corresponding messages to the computer:

void RequestInitialization()
{
    _IsInitialized = true;

    if (!_RobotParams.IsInitialized)
    {
      _IsInitialized = false;

      Serial.print("InitializeDriveGeometry"); // requesting initialization of the parameters of the differential drive needed for odometry calculations
      Serial.print("\n");
    }

    if (!_SpeedController.IsInitialized)
    {
      _IsInitialized = false;

      Serial.print("InitializeSpeedController"); // requesting initialization of the speed controller
      Serial.print("\n");
    }

    if (!_BatteryMonitor.IsInitialized)
    {
      _IsInitialized = false;

      Serial.print("InitializeBatteryMonitor"); // requesting initialization of the battery monitor
      Serial.print("\n");
    }
}

The code running on the PC needs to react to these requests by sending down the initialization parameters which are handled by the Messenger class:

void OnMssageCompleted()
{
  if (_Messenger.checkString("s"))
  {
    SetSpeed();
    return;
  }

  if (_Messenger.checkString("DriveGeometry"))
  {
    InitializeDriveGeometry();
    return;
  }

  if (_Messenger.checkString("SpeedControllerParams"))
  {
    InitializeSpeedControllerParams();
    return;
  }

  if (_Messenger.checkString("BatteryMonitorParams"))
  {
    InitializeBatteryMonitor();
  }

  // clear out unrecognized content
  while(_Messenger.available())
  {
    _Messenger.readInt();
  }
}

Only once all components are initialized does the DoWork() function send odometry information up and act on velocity commands:

void DoWork()
{
  _OdometricLocalizer.Update(_LeftEncoderTicks, _RightEncoderTicks);
  _BatteryMonitor.Update();
  _SpeedController.Update(_BatteryMonitor.VoltageIsTooLow);
  IssueCommands();

  Serial.print("o"); // o indicates odometry message
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.X, 3);
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.Y, 3);
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.Heading, 3);
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.V, 3);
  Serial.print("\t");
  Serial.print(_OdometricLocalizer.Omega, 3);
  Serial.print("\n");

  Serial.print("b\t"); // o indicates battery info message
  Serial.print(_BatteryMonitor.BatteryVoltage, 3);
  Serial.print("\t");
  Serial.print(_BatteryMonitor.VoltageIsTooLow);
  Serial.print("\n");
}

void IssueCommands()
{
  float normalizedRightMotorCV, normalizedLeftMotorCV;

  normalizedRightMotorCV = _SpeedController.NormalizedLeftCV;
  normalizedLeftMotorCV = _SpeedController.NormalizedRightCV;

  /*
  Serial.print("Speed: ");
  Serial.print(_SpeedController.DesiredVelocity);
  Serial.print("\t");
  Serial.print(_SpeedController.DesiredAngularVelocity);
  Serial.print("\t");
  Serial.print(_SpeedController.LeftError);
  Serial.print("\t");
  Serial.print(_SpeedController.RightError);
  Serial.print("\t");
  Serial.print(_SpeedController.TurnError);
  Serial.print("\t");
  Serial.print(normalizedRightMotorCV);
  Serial.print("\t");
  Serial.print(normalizedLeftMotorCV);
  Serial.print("\n");
  */

  float rightServoValue = mapFloat(normalizedRightMotorCV, -1, 1, 90.0 - c_MaxMotorCV, 90.0 + c_MaxMotorCV);     // scale it to use it with the servo (value between 0 and 180)
  float leftServoValue = mapFloat(normalizedLeftMotorCV, -1, 1, 90.0 - c_MaxMotorCV, 90.0 + c_MaxMotorCV);     // scale it to use it with the servo (value between 0 and 180)

  _RightServo.write(rightServoValue);     // sets the servo position according to the scaled value (0 ... 179)
  _LeftServo.write(leftServoValue);     // sets the servo position according to the scaled value (0 ... 179)
}

Hopefully this explains the flow of data reasonably well. The key aspects for participating in the ROS navigation stack are that:

  1. Velocity commands are accepted and acted upon. They arrive at the controller as messages prefixed with ‘s:’ for ‘speed’.
  2. Odometry information is sent back. This includes pose data (x, y, heading) as well as speed data (translational velocity and rotational velocity). A small sketch of how such a line can be parsed follows this list.
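To illustrate point 2, here is a minimal sketch of how such an odometry line can be parsed on the PC side (the real parsing lives in arduino.py; the field order matches the DoWork() function above):

def parseOdometryLine(line):
    # expects a line like 'o\t0.123\t0.045\t1.571\t0.100\t0.000'
    parts = line.strip().split('\t')
    if parts[0] != 'o':
        return None
    x, y, heading, v, omega = [float(token) for token in parts[1:6]]
    return {'x': x, 'y': y, 'heading': heading, 'v': v, 'omega': omega}

print parseOdometryLine('o\t0.123\t0.045\t1.571\t0.100\t0.000')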

We are now in a position to do dead reckoning – but this will need to wait for the next blog entry.

Disclaimer: It turns out that the Arduino is taxed to its very limit by my controller program. This is caused by the combination of

  • The very high number of interrupts originating from the high resolution quadrature encoders mounted directly to the motor shafts.
    For details see my blog entry Quadrature encoder too fast for Arduino.
  • The trigonometric calculations performed to obtain the odometry data
  • The bidirectional communication between the controller and the PC.

As a result, the faster the motors run, the more wheel encoder transitions are skipped. It looks like interrupts are suppressed in this overload scenario. As long as the robot moves slowly (< 1 m/s in my case) the detected number of wheel encoder transitions is accurate enough. I plan to upgrade to a more powerful controller soon.

ROS Introduction Class with Ardros

Yesterday, at the Robotics Society of Southern California meeting (http://www.rssc.org/), I taught a class about ROS, the Robot Operating System from Willow Garage. In the first part of the presentation (slide deck as pdf file) I focused on generic ROS functionality. For the second half of the presentation I switched to demo mode using the current state of ‘Ardros’, my new Arduino based robot that is controlled via ROS, as a practical example for how ROS can benefit hobby roboticists. Here is a portrait of Ardros decorated with the artsy head that my sister created :-).

ArdrosRobot20110213

Ardros currently publishes odometry and tf transform information as required for future integration with the ROS navigation stack (see http://www.ros.org/wiki/navigation/Tutorials/RobotSetup). In addition it subscribes to the topic cmd_vel to accept velocity commands. The robot can be remotely controlled with a wireless PS3 controller. I use a teleop node that translates joystick messages into properly configured Twist messages to send drive commands to the robot, as illustrated in the block diagram below. The teleop concept is explained in http://www.ros.org/wiki/ps3joy/Tutorials/WritingTeleopNode.

BlockDiagram
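A teleop node of this kind boils down to only a few lines. The sketch below is not my actual teleop node; it just shows the basic mapping from a sensor_msgs/Joy message to a Twist message on cmd_vel (the axis indices and scale factors are made up and depend on the controller):

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Joy
from geometry_msgs.msg import Twist

def onJoy(joy):
    twist = Twist()
    twist.linear.x = 0.5 * joy.axes[1]    # left stick up/down -> forward speed (made-up scaling)
    twist.angular.z = 1.0 * joy.axes[0]   # left stick left/right -> turn rate
    cmdVelPublisher.publish(twist)

rospy.init_node('simple_teleop')
cmdVelPublisher = rospy.Publisher('cmd_vel', Twist)
rospy.Subscriber('joy', Joy, onJoy)
rospy.spin()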

All the code is open source and can be downloaded from the Google code project http://code.google.com/p/drh-robotics-ros/. I expect the code to evolve quite a bit in the future. To see the code that matches the presentation, revision 51 should be used. The Arduino controller code resides in the folder \Arduino; the Python code for the ROS nodes is located in the folder \ros\ardros. Mike Ferguson's ArbotiX base_controller.py code was a great help for me, and not surprisingly the code of my arduino node borrows heavily from his implementation (see http://www.ros.org/wiki/arbotix_python).

Quadrature Encoder too Fast for Arduino (with Solution)

For my next robot project I am using Pittman Motors with 500 cnt/rev quadrature encoders mounted on the motor axle. Initially I used the same code that I described in my blog post ‘Building a Self-Balancing Robot: Motor Driver and Wheel Encoder’ for handling the encoder input. Everything worked fine when I turned the motors manually; the interrupts kicked in and the counters were correctly incremented / decremented. However, when I powered up the motors the Arduino program froze.

Others suffered from the same experience (see e.g., Missing counting encoder). One suggested solution is the use of dedicated quadrature decoder chips like the LS7266R1. More information about its usage in a robot project can be found in the thesis Design of a Wireless Control System for a Laboratory Planetary Rover by Eric Jamesson Wilhelm.

Fortunately no dedicated decoder hardware is necessary (at least in my case). It turns out that the Arduino function digitalRead() comes with a lot of overhead and is correspondingly slow. Both Jean-Claude Wippler (Pin I/O performance) and Bill Porter (Ready, Set, Oscillate! The Fastest Way to Change Arduino Pins) discuss the overhead and demonstrate how it can be avoided by directly reading from Atmel ports rather than going through the Arduino library, thus gaining a factor of about 50 in performance.

While directly reading from controller ports certainly solves the issue, it forces you to become familiar with port maps and write code that is specific to a given controller (no simple porting from Arduino Uno to Mega, etc.). This problem has been beautifully addressed by jrraines’ digitalwritefast library (download) which in turn is based on a suggestion from Paul Stoffregen (see http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1267553811/0).

So, with the open source community to the rescue my quadrature encoder problem is now solved. Here is my new quadrature encoder code wrapped in a little test sketch:

UPDATE: I incorporated Bill Porter's feedback (see his comment for further detail) to further improve the performance of the code. Since the interrupt only fires on a rising change there is no need to read pin A; its value will always be 'true'. Furthermore, by using the pre-processor directives #ifdef … #else … #endif, the evaluation of an if statement in each interrupt handler at runtime can be avoided. Thanks, Bill!

#include "WProgram.h"
#include <Servo.h>
#include <digitalWriteFast.h>  // library for high performance reads and writes by jrraines
                               // see http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1267553811/0
                               // and http://code.google.com/p/digitalwritefast/

// It turns out that the regular digitalRead() calls are too slow and bring the arduino down when
// I use them in the interrupt routines while the motor runs at full speed creating more than
// 40000 encoder ticks per second per motor.

// Quadrature encoders
// Left encoder
#define c_LeftEncoderInterrupt 4
#define c_LeftEncoderPinA 19
#define c_LeftEncoderPinB 25
#define LeftEncoderIsReversed
volatile bool _LeftEncoderBSet;
volatile long _LeftEncoderTicks = 0;

// Right encoder
#define c_RightEncoderInterrupt 5
#define c_RightEncoderPinA 18
#define c_RightEncoderPinB 24
volatile bool _RightEncoderBSet;
volatile long _RightEncoderTicks = 0;

Servo _RightServo;  // create servo object to control right motor
Servo _LeftServo;  // create servo object to control left motor

int potpin = 0;  // analog pin used to connect the potentiometer
int val;    // variable to read the value from the analog pin

void setup()
{
  Serial.begin(115200);

  _RightServo.attach(2);  // attaches the servo on specified pin to the servo object
  _LeftServo.attach(3);  // attaches the servo on specified pin to the servo object

  // Quadrature encoders
  // Left encoder
  pinMode(c_LeftEncoderPinA, INPUT);      // sets pin A as input
  digitalWrite(c_LeftEncoderPinA, LOW);  // note: writing LOW leaves the internal pullup resistor disabled
  pinMode(c_LeftEncoderPinB, INPUT);      // sets pin B as input
  digitalWrite(c_LeftEncoderPinB, LOW);  // note: writing LOW leaves the internal pullup resistor disabled
  attachInterrupt(c_LeftEncoderInterrupt, HandleLeftMotorInterruptA, RISING);

  // Right encoder
  pinMode(c_RightEncoderPinA, INPUT);      // sets pin A as input
  digitalWrite(c_RightEncoderPinA, LOW);  // note: writing LOW leaves the internal pullup resistor disabled
  pinMode(c_RightEncoderPinB, INPUT);      // sets pin B as input
  digitalWrite(c_RightEncoderPinB, LOW);  // note: writing LOW leaves the internal pullup resistor disabled
  attachInterrupt(c_RightEncoderInterrupt, HandleRightMotorInterruptA, RISING);
}

void loop()
{
  val = analogRead(potpin);            // reads the value of the potentiometer (value between 0 and 1023)
  val = map(val, 0, 1023, 0, 179);     // scale it to use it with the servo (value between 0 and 180)

  _RightServo.write(val);
  _LeftServo.write(val);

  Serial.print(_LeftEncoderTicks);
  Serial.print("\t");
  Serial.print(_RightEncoderTicks);
  Serial.print("\n");

  delay(20);
}

// Interrupt service routines for the left motor's quadrature encoder
void HandleLeftMotorInterruptA()
{
  // Test transition; since the interrupt will only fire on 'rising' we don't need to read pin A
  _LeftEncoderBSet = digitalReadFast(c_LeftEncoderPinB);   // read the input pin

  // and adjust counter + if A leads B
  #ifdef LeftEncoderIsReversed
    _LeftEncoderTicks -= _LeftEncoderBSet ? -1 : +1;
  #else
    _LeftEncoderTicks += _LeftEncoderBSet ? -1 : +1;
  #endif
}

// Interrupt service routines for the right motor's quadrature encoder
void HandleRightMotorInterruptA()
{
  // Test transition; since the interrupt will only fire on 'rising' we don't need to read pin A
  _RightEncoderBSet = digitalReadFast(c_RightEncoderPinB);   // read the input pin

  // and adjust counter + if A leads B
  #ifdef RightEncoderIsReversed
    _RightEncoderTicks -= _RightEncoderBSet ? -1 : +1;
  #else
    _RightEncoderTicks += _RightEncoderBSet ? -1 : +1;
  #endif
}

Hopefully the code is fairly self-explanatory. Since fast reads require that the pins are known at compile time, I could no longer use my QuadratureEncoder class. Instead the code for calculating the encoder ticks is now part of the sketch itself, and there is a little bit of duplicated code in the two interrupt handlers. For testing I used a 10k potentiometer at analog pin 0. The analog value is used to set the output of two servos that drive a Sabertooth 10 RC Dual Motor Speed Controller. This allowed me to put the motors through their paces and verify that the encoder ticks are correctly caught.