This is a quick description of how to hook up the LCD module to an Arduino. FastTech currently offers the unit for $5.83 (free shipping).

LCD1602Display_I2C Front  LCD1602Display_I2C Back

In order to have some ‘meaningful’ values to show on the LCD display, the setup includes a potentiometer (any value between 5k and 50k should work) to create analog input values:

LCD Display with Potentiometer_bb

#include <Wire.h>
#include <LCD.h>
#include <LiquidCrystal_I2C.h>
/*
For details about the LCD display with I2C support see

http://www.fasttech.com/reviews/1380909/22072

and

http://dx.com/p/funduino-iic-i2c-1602-lcd-adapter-board-w-2-5-lcd-screen-black-green-red-173588

The reviewer comments by docpayce and particularly JackWP associated with the two product pages above have been very useful.

Connect the LCD: VCC -> 5V, GND -> GND, SDA -> A4 (PortC4, ADC4), SCL -> A5 (PortC5, ADC5)

The LiquidCrystal_I2C library needs to be downloaded and installed from here: https://bitbucket.org/fmalpartida/new-liquidcrystal/wiki/Home
*/

#define I2C_ADDR 0x27
#define BACKLIGHT_PIN 3
#define En_pin 2
#define Rw_pin 1
#define Rs_pin 0
#define D4_pin 4
#define D5_pin 5
#define D6_pin 6
#define D7_pin 7

LiquidCrystal_I2C lcd(I2C_ADDR,En_pin,Rw_pin,Rs_pin,D4_pin,D5_pin,D6_pin,D7_pin,BACKLIGHT_PIN,POSITIVE);
LCD *myLCD = &lcd;

int SENSOR_PIN = 0; // center pin of the potentiometer

void setup()
{
  lcd.begin(16,2);               // initialize the lcd
  lcd.home ();                   // go home
  lcd.print("Hello, ARDUINO ");
  delay(1000);
}

void loop()
{
  int sensorValue = analogRead(SENSOR_PIN);
  // set cursor to second row, first column
  lcd.setCursor(0, 1);
  lcd.print(sensorValue);
  lcd.print("      ");
  delay (100);
}

Kudos to docpayce and particularly JackWP for pointing to, and figuring out, the constructor parameters (see the URL references at the top of the Arduino sketch).

For the impatient: Just head over to the Online Involute Spur Gear Builder page.

Still here and interested in some background? Perfect! Involute gears are by far the most commonly used gears today. An involute curve is fairly easy to calculate and quite a number of freely available tools and scripts exist that use this fact to create involute gear profiles (see e.g., Gear template generator, Parametric Involute Bevel and Spur Gears, OpenJsCad’s gear demo, and many more). However, I have not found a freely available tool that correctly caters for undercuts that occur for smaller tooth counts (around 10 or less):

UnderCutExample

If the undercuts are ignored the resulting gears will jam if machined and assembled precisely. For an elegant description of the issue including a recipe for how to graphically create correct spur gears see Michal Zalewski’s corresponding section in part six of his excellent Guerrilla guide to CNC machining, mold making, and resin casting series.

The code behind the online involute spur gear builder determines the tooth profile by simulating how a gear with infinite radius (also known as a rack) would cut into a smaller gear, as discussed by Michal Zalewski. The advantage of the infinite gear is that it has a very simple trapezoidal tooth form solely defined by the addendum height and the pressure angle. In the figure below the rack is shown at the top. When the rack is moved to the right the gear needs to rotate clockwise so that a point on the pitch circle of the gear moves with the same speed as the rack. The code simulates this movement in steps and then, for each step, subtracts the shape of the rack from the gear.

InfiniteGearCutter

For illustration purposes the figure below has been calculated based on very coarse steps and only shows two teeth of the rack.

AnnotatedCutoutSteps

In reality the code simulates only one rack tooth and calculates the tooth profile of half a tooth of the matching gear. The complete gear is then assembled by mirroring and rotating the half tooth. All of this is accomplished by leveraging the ‘Constructive Area Geometry’ (CAG) capabilities of the csg.js library which is part of OpenJsCad. Each rack tooth location is calculated and created as a polygon. These polygons are joined together with the union operator to form one complex 2d shape that is then subtracted from a slice of the outer circle of the target gear to form half a tooth. Finally, the complete gear is assembled from rotated and mirrored half teeth:

CAG_AssemblySteps
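
To make the procedure a bit more concrete, here is a minimal Python sketch of the rack-cutting idea. It is not the code of the online builder (which is JavaScript on top of csg.js/OpenJsCad); it uses the shapely library for the 2D boolean operations, all gear parameters are illustrative assumptions, and clearance, backlash and the smoothing discussed below are omitted:

import math
from shapely.geometry import Point, Polygon
from shapely.affinity import rotate, translate
from shapely.ops import unary_union

m = 2.0                            # module [mm] (assumed)
z = 8                              # tooth count (assumed)
pressure_angle = math.radians(20)  # assumed pressure angle
addendum = m
dedendum = 1.25 * m
r_pitch = m * z / 2.0              # pitch radius
r_outer = r_pitch + addendum       # radius of the gear blank

def rack_tooth_cutter():
    """One trapezoidal rack tooth with its tip pointing at the gear center.
    The rack pitch line is tangent to the gear pitch circle at y = r_pitch."""
    half_thickness = math.pi * m / 4.0   # half tooth thickness at the pitch line
    tan_a = math.tan(pressure_angle)
    tip_y = r_pitch - dedendum           # the rack tooth tip cuts the gear dedendum
    top_y = r_pitch + 1.5 * addendum     # extend the flanks past the gear blank
    return Polygon([
        (-(half_thickness + (top_y - r_pitch) * tan_a), top_y),
        (-(half_thickness - dedendum * tan_a), tip_y),
        ((half_thickness - dedendum * tan_a), tip_y),
        ((half_thickness + (top_y - r_pitch) * tan_a), top_y),
    ])

# Roll the rack past the gear in small steps. Shifting the rack by r_pitch * theta
# while the gear turns by theta is equivalent to keeping the gear fixed and placing
# a translated, rotated copy of the rack tooth for every step.
steps = 200
cutters = []
for i in range(-steps, steps + 1):
    theta = i * (2.0 * math.pi / z) / steps   # sweep about one tooth pitch each way
    cutter = translate(rack_tooth_cutter(), xoff=r_pitch * theta)
    cutters.append(rotate(cutter, -theta, origin=(0, 0), use_radians=True))

# Union all cutter positions and subtract the result from the gear blank.
blank = Point(0, 0).buffer(r_outer, 128)
gear_profile = blank.difference(unary_union(cutters))
# gear_profile now holds the blank with one tooth space (undercut included) cut out;
# the actual builder does this for half a tooth and mirrors/rotates the result.

Using finer angular steps in the loop corresponds directly to the finer simulation steps mentioned below.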

The graphic above shows a rough, stepped profile in the final output. This can be somewhat alleviated by using finer steps in the simulation, but to really get rid of the steps additional smoothing is required. The stepped profile occurs in the undercut regions where the backside of the rotated rack tooth profile sticks out as a corner. The smoothing logic is explained in this diagram:

CutoutSmooting

The final result has smooth tooth profiles. As an example below is an image of a calculated gear set (pressure angle 14.5°) consisting of a 30 tooth gear meshing with an 8 tooth gear. Small values for clearance and backlash were used. Head over to the involute spur gear builder page to try it out yourself.

SpurGearExample

The IBT-2 H-bridge module from wingxin is an inexpensive, high power motor driver based on two BTS7960 chips and is readily available from various ebay vendors; see e.g. here.

IBT-2Module

The link provides more details but here are a few key parameters.

  • Input voltage: 6V-27V
  • Maximum current: 43A
  • Input level: 3.3V-5V

I am not sure whether the heat sink is sufficient for handling 43A, but even if one does not drive the unit to its limits the specifications are still impressive given the typical price point of this module (currently between $13 and $17 including free shipping from China). There is relatively little information available about how to hook up the module with an Arduino controller. This thread on the Arduino forum was useful, but the solution wastes a few pins and does not demonstrate bidirectional motor control. In this post I describe a slightly more complete solution that uses an Arduino controller with a connected potentiometer to drive a motor via the IBT-2 module from full reverse speed to full forward speed.

For reference here is the description of the input ports and the two supported usage modes (image taken from one of the ebay postings). In this post I leverage usage mode one.

IBT-2 Input Ports

Here is the associated Arduino sketch:

/*
IBT-2 Motor Control Board driven by Arduino.

Speed and direction controlled by a potentiometer attached to analog input 0.
One side pin of the potentiometer (either one) to ground; the other side pin to +5V

Connection to the IBT-2 board:
IBT-2 pin 1 (RPWM) to Arduino pin 5 (PWM)
IBT-2 pin 2 (LPWM) to Arduino pin 6 (PWM)
IBT-2 pins 3 (R_EN), 4 (L_EN), 7 (VCC) to Arduino 5V pin
IBT-2 pin 8 (GND) to Arduino GND
IBT-2 pins 5 (R_IS) and 6 (L_IS) not connected
*/

int SENSOR_PIN = 0; // center pin of the potentiometer

int RPWM_Output = 5; // Arduino PWM output pin 5; connect to IBT-2 pin 1 (RPWM)
int LPWM_Output = 6; // Arduino PWM output pin 6; connect to IBT-2 pin 2 (LPWM)

void setup()
{
  pinMode(RPWM_Output, OUTPUT);
  pinMode(LPWM_Output, OUTPUT);
}

void loop()
{
  int sensorValue = analogRead(SENSOR_PIN);

  // sensor value is in the range 0 to 1023
  // the lower half maps to reverse rotation; the upper half to forward rotation
  if (sensorValue < 512)
  {
    // reverse rotation
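    // sensorValue 0 -> full reverse (PWM 255); sensorValue 511 -> stop (PWM 0)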
    int reversePWM = -(sensorValue - 511) / 2;
    analogWrite(LPWM_Output, 0);
    analogWrite(RPWM_Output, reversePWM);
  }
  else
  {
    // forward rotation
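    // sensorValue 512 -> stop (PWM 0); sensorValue 1023 -> full forward (PWM 255)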
    int forwardPWM = (sensorValue - 512) / 2;
    analogWrite(LPWM_Output, forwardPWM);
    analogWrite(RPWM_Output, 0);
  }
}

The following Fritzing diagram illustrates the wiring. B+ and B- at the top of the diagram represent the power supply for the motor. A 5k or 10k potentiometer is used to control the speed.

IBT-2 with Arduino_bb

For SLAM and navigation to work correctly on a ROS based robot it is necessary to accurately specify the transformation between the coordinate system (frame) of the steering center of the robot (the frame is typically called base_link) and the coordinate system of the laser scans. In the case of Kinect based robots the laser scan measurements are derived from the 3d point cloud that is provided by the openni_camera package and is based on the frame /openni_depth_frame.

As described in my earlier post 2d SLAM with ROS and Kinect I specify the transformation between the base_link and the Kinect sensor (specifically the frame openni_camera) in the launch file ardros_configuration.launch. The relevant line is:

<node pkg="tf" type="static_transform_publisher"  name="base_to_kinect_broadcaster" args="-0.115 0 0.226 0 0 0 base_link  openni_camera 100" />

This transformation hooks together the base_link frame with the frames that are associated with the Kinect sensor. How the various frames are connected can be seen by running the following commands as described in this tutorial:

rosrun tf view_frames
evince frames.pdf

Here is the result of running these commands while running the navigation stack on Ardros.

I marked the frame for the laser scan data in red. As can be seen this frame is now connected with base_link.

Now the question is: how can we determine the parameters shown in the transformation above? I found it easiest to ‘park’ the robot in front of a wall, pushing against a rectangular piece of cardboard or foam board to guarantee that the front of the robot is oriented parallel to the wall at a fixed distance:

I use a book to create a step change in the laser scan right in the center of the front of the robot. Next I run the navigation stack and bring up rviz with a configuration file that shows the laser scan plus two axes displays for the frames /base_link and /openni_camera.

By setting the length of the base_link frame axes to the distance from the origin of the base_link frame to the wall I can readily see whether the laser scan line intersects with the end of the x axis (red). If it doesn’t then the x offset as specified in the static transformation broadcaster base_to_kinect_broadcaster (see the line from the launch file above) needs to be adjusted. Furthermore the step change in the distance needs to align with the x axis. Any mismatch needs to be adjusted by modifying the y offset in the transform base_to_kinect_broadcaster.
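
To cross-check the offsets numerically, a small throwaway script (hypothetical, not part of the Ardros package) can query tf directly and print the translation that is currently in effect between the two frames:

#!/usr/bin/env python
# Hypothetical helper: prints the base_link -> openni_depth_frame transform currently
# published by tf so it can be compared with the values used in base_to_kinect_broadcaster.
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('print_kinect_offset')
    listener = tf.TransformListener()
    listener.waitForTransform('/base_link', '/openni_depth_frame', rospy.Time(), rospy.Duration(4.0))
    (translation, rotation) = listener.lookupTransform('/base_link', '/openni_depth_frame', rospy.Time(0))
    print 'translation (x, y, z):', translation
    print 'rotation (quaternion x, y, z, w):', rotation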

Once this is done the frames are correctly aligned and the navigation stack can accurately correlate the laser scan with the odometry information.

As described in previous posts, until now I used an Arduino Mega controller for my Ardros robot. In most cases this controller provides more than enough power. Mainly out of curiosity, but also in order to overcome problems with counting encoder ticks when my motors run at full speed, I decided to give LeafLabs’ Maple controller a try.

The Maple is based on the 32-bit STM32F103RB ARM Cortex M3 microcontroller running at 72 MHz. Apart from the drastic speed improvement, what makes the Maple so attractive is that it uses the same pin layout as the Arduino and comes with the same IDE and hardware abstraction.

As a result the transition from Arduino is fairly easy. Naturally there are differences, though. They are summarized here.

The port of the complete Arduino program including the libraries that make up the Ardros – Controller and Drive System was rather painless and is now running on my robot. The source code is available on my Google code site drh-robotics-ros. This post is based on revision 104 of the code. The relevant subdirectories are Robot and libraries.

Two days ago I gave a presentation on 2d navigation focusing on the ROS navigation stack at the Robotics Society of Southern California (rssc.org). The slide deck can be downloaded from here.

At the beginning of this month I gave a presentation at the monthly club meeting of the Riverside Robotic Society. Thomas Messerschmidt, the president of the club, had invited me as a guest speaker to talk about how I use the Kinect sensor in conjunction with ROS to perform SLAM and navigation on my robot. Here is a link to the associated slide deck (pdf): Kinect-ROS-Navigation

In this post I cover how we can leverage the ROS navigation stack to let the robot autonomously drive from a given location in a map to a defined goal. I will skim over many of the details since the associated tutorials on the ROS wiki do a great job in describing how to set up the navigation stack.

With SLAM working on the Ardros robot (see my previous post) we already have much of the required setup covered. In particular we have:

  • A robot that publishes odometry information and accepts drive commands
  • (Faked) laser scanner sensor data derived from the Kinect sensor
  • Transform configurations covering the transformations between base_link, the odometry frame, and the laser scanner frame (depth frame of the Kinect sensor)
  • A map generated via SLAM

From a high level perspective navigation brings in these additions:

  • Localization of the robot in a given map using Monte Carlo Localization (MCL). For more information about MCL (including C# source code) please see my blog post Monte Carlo Localization for Robots.
  • Global and local path planning. Amongst other parameters, path planning most notably requires a map, information about the size (footprint) of the robot, and information about how fast the robot can drive and accelerate.

Before reading on I highly recommend that you minimally read through the ROS tutorial ‘Setup and Configuration of the Navigation Stack on a Robot’.

Okay, let’s dive into the Ardros specific files. As described in my previous posts the source code is available from here: http://code.google.com/p/drh-robotics-ros/. This post is based on revision 79 of the source code.

The whole navigation machinery is started via the launch file navigation.launch. The launch file itself is very simple:

<launch>
	<include file="$(find ardros)/launch/ardros_configuration.launch"/>
	<include file="$(find ardros)/launch/move_base.launch"/>
</launch>

It brings in two other launch files, the first being ardros_configuration.launch, which we already know from the previous post. It handles the Ardros specific aspects including odometry, drive commands, and the usage of the Kinect sensor as a fake laser scanner.

The second include file, move_base.launch, is responsible for bringing up the navigation specific aspects:

<launch>
  <!-- Run the map server -->
  <node name="map_server" pkg="map_server" type="map_server" args="$(find ardros)/maps/map.pgm 0.05"/>

  <!--- Run AMCL -->
  <include file="$(find ardros)/launch/amcl_diff.launch" />

  <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
    <rosparam file="$(find ardros)/info/costmap_common_params.yaml" command="load" ns="global_costmap" />
    <rosparam file="$(find ardros)/info/costmap_common_params.yaml" command="load" ns="local_costmap" />
    <rosparam file="$(find ardros)/info/local_costmap_params.yaml" command="load" />
    <rosparam file="$(find ardros)/info/global_costmap_params.yaml" command="load" />
    <rosparam file="$(find ardros)/info/base_local_planner_params.yaml" command="load" />
  </node>
</launch>

The first relevant line brings up the map server and specifies what map it should serve up. It includes the path to the map which needs to be adjusted accordingly. Secondly it imports the launch file for AMCL (adaptive (or KLD-sampling) Monte Carlo localization) configured for a robot with differential drive (see odom_model_type in amcl_diff.launch). AMCL is responsible for localizing the robot on the map. AMCL uses numerous parameters, many of which are specified in the launch file. I largely use the default values but I increased the particle counts. I expect that going forward I will likely fine-tune more of these parameters.

It is worth noting that Ardros is quite similar to Willow Garage’s turtlebot in its use of the Kinect sensor in conjunction with the navigation stack. I started to use the node pointcloud_to_laserscan before it became part of the turtlebot stack and hence had to come up with my own launch files and associated parameters. In the future I should be able to borrow quite a bit from turtlebot, minimally better parameters for AMCL.

Finally the last part of the move_base.launch file brings up the move_base node, which is the centerpiece of the ROS navigation stack (see the diagram and detailed explanation at http://www.ros.org/wiki/navigation/Tutorials/RobotSetup). The content of the referenced yaml files should be pretty clear after reading through the navigation stack tutorial. I will only briefly touch on the robot’s footprint which is defined in the file costmap_common_params.yaml:

# for details see: http://www.ros.org/wiki/navigation/Tutorials/RobotSetup

obstacle_range: 2.5
raytrace_range: 3.0
footprint: [[0.305, 0.278], [0.04, 0.193], [-0.04, 0.193], [-0.282, 0.178], [-0.282, -0.178], [-0.04, -0.193], [0.04, -0.193], [0.305, -0.278]]
#footprint: [[0.075, 0.178], [0.04, 0.193], [-0.04, 0.193], [-0.282, 0.178], [-0.282, -0.178], [-0.04, -0.193], [0.04, -0.193], [0.075, -0.178]]
#robot_radius: ir_of_robot
inflation_radius: 0.6

observation_sources: laser_scan_sensor

laser_scan_sensor: {sensor_frame: openni_depth_frame, data_type: LaserScan, topic: scan, marking: true, clearing: true}

The specified points define a polygon that reaches out beyond the front of the robot, i.e., I pretend the robot is bigger than it actually is. The reason for this is that I want to make sure that the path planner does not generate paths that drive the robot closer to an obstacle than it can actually see. The minimum distance that the Kinect sensor can measure is about 0.6m (2ft). To mitigate this short range ‘blindness’ I:

  1. Mount the Kinect towards the back of the robot
  2. Pretend that the robot footprint extends further beyond the front wheels than it actually does.

Now we are ready to run the navigation stack. Close all terminals and then run the following command in a new terminal:

roslaunch `rospack find ardros`/launch/navigation.launch

In a second terminal bring up rviz:

rosrun rviz rviz -d `rospack find ardros`/rviz/navigation.vcg

You will probably see a number of error messages in the first terminal since the robot has not yet been located in the map. To locate the robot, set the starting pose by clicking on the ‘2D Pose Estimate’ button in rviz, then clicking on the current location of the robot in the map and, while still holding the left mouse button, dragging the mouse in the direction the robot faces. After specifying the goal in the same way via the ‘2D Nav Goal’ button you should see the planned path, and the robot should start driving towards the goal.

Optionally you can monitor the status of the robot driving to the specified goal by running the following command in a third terminal:

rostopic echo /move_base/status

The video below is the recording of a test run showing the output in rviz while Ardros navigates a room. The video shows how the initial pose and the goal are selected. Initially the probabilistic cloud of poses used by AMCL is very wide. As the robot drives around the cloud condenses as the robot position determined from the odometry and laser scan information becomes better defined. The video also shows the calculated path that the robot tries to follow to get to the specified goal.

Introduction

So far we really did not tap much into the power of ROS. Teleoperation and dead reckoning were just a way to ‘warm up’ and get familiar with the basics of ROS. This is about to change as we are getting ready to leverage ROS’s implementation of SLAM (Simultaneous Localization and Mapping). The associated ROS package is gmapping, which wraps the open source GMapping implementation that is available on OpenSlam.org. In order to use gmapping we need a source of odometry information (Ardros already provides this) and a source of depth information. Typically laser scanners are used to get the depth information. They are accurate and fast but also quite expensive and hence not a very attractive choice for hobby robotics. Instead of a laser scanner I am using the Microsoft Kinect sensor which provides rgb + d information (color video plus depth information).

Hooking up the Kinect Sensor

The following assumes that you have the package desktop-full of  ROS Diamondback installed. In addition we need to install the openni_kinect package as described at http://www.ros.org/wiki/openni_kinect. Once in place we can verify that the depth sensor information comes in. After hooking up the Kinect sensor, type the following command in a terminal:

roslaunch openni_camera openni_node.launch

Then in a different terminal:

rosrun rviz rviz

In rviz set the Fixed Frame parameter to /openni_camera under Global Options. Then add a PointCloud2 display type and select /camera/rgb/points for its topic. You should now see a color coded depth image similar to the one below:

Additional information about how the Kinect sensor is represented in ROS can be found here: http://www.ros.org/wiki/openni_camera

Faking a Laser Scanner

To use the depth image for SLAM we convert the point cloud to a faked laser scan signal by cutting a horizontal slice out of the image and using the nearest distance (closest depth) in each column. Fortunately we don’t need to code this ourselves. The pointcloud_to_laserscan package which is part of the turtlebot stack already covers this. The turtlebot stack does not come with desktop-full and needs to be installed separately.
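
The gist of the conversion can be sketched in a few lines of Python/NumPy (a simplified illustration of the idea only, not the pointcloud_to_laserscan implementation, and ignoring the conversion from per-column depth to true ray range and angle):

import numpy as np

def depth_slice_to_ranges(depth_image, band_height=20):
    """depth_image: 2D array of depth values in meters (0 or NaN where invalid).
    Returns one 'range' per image column: the closest valid depth within a
    horizontal band of rows around the middle of the image."""
    center_row = depth_image.shape[0] // 2
    band = depth_image[center_row - band_height // 2:center_row + band_height // 2, :]
    band = np.where(np.isfinite(band) & (band > 0), band, np.inf)  # mask invalid pixels
    return band.min(axis=0)  # nearest obstacle in each column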

The remainder of this blog requires revision 70 of my Ardros package. For details about how to get the Ardros source code please refer to my blog entry Ardros – Dead Reckoning but make sure that you use the newer revision.

We are now in a position to look at the laser scan output in rviz:

  • Close all open terminals
  • Open a new terminal and execute the following
roscd ardros
roslaunch ./launch/kinect_laser.launch
  • Open a new terminal and execute
rosrun rviz rviz
  • In rviz
    • Set the fixed frame under Global Options to /openni_depth_frame
    • Add the display type ‘Laser Scan’ and select the topic /scan for it.

You should now see the laser scan data as derived from Kinect’s point cloud. For the screenshot below I added a grid with a cell size of 0.1m to the rviz display and also included a tf (transform) display showing only the /openni_camera frame. For clarification I added the x, y, z labels after taking the screenshot.

The launch file ‘kinect_laser.launch’ warrants a closer look:

<launch>
  <!-- kinect and frame ids -->
  <include file="$(find openni_camera)/launch/openni_node.launch"/>

  <!-- openni_manager -->
  <node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/>

  <!-- throttling -->
  <node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager">
    <param name="max_rate" value="2"/>
    <remap from="cloud_in" to="/camera/depth/points"/>
    <remap from="cloud_out" to="cloud_throttled"/>
  </node>

  <!-- fake laser -->
  <node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager">
    <param name="output_frame_id" value="/openni_depth_frame"/>
    <remap from="cloud" to="cloud_throttled"/>
  </node>
</launch>

The first include statement brings in the openni_node launch file which brings up the Kinect camera (openni camera) and the tf transformations that link the various  camera frames (depth, rgb, etc.) together.

Then a node of type nodelet named openni_manager is spun up. It acts as the container for other nodelets. Nodelets are like nodes, but instead of running as their own process they are loaded into a parent node of type nodelet. Since multiple nodelets execute in the same process they can share data very efficiently. This is relevant when large amounts of data (like video frames) are processed. The launch file causes two nodelets to be loaded to a) throttle the rgb-d frame rate down to 2 Hertz and b) cut a slice out of each rgb-d frame, calculate its distance column by column and assemble it as a laser scan data structure published on the topic /scan.

Both nodelets are part of the turtlebot stack. CloudToScan supports a couple of parameters that define the size and location of the slice that is used as the basis for the laser scan data. I use the default values. For details please refer to the source code.

Putting it all Together

Finally we are ready to get SLAM working. As inputs the SLAM algorithm requires odometry information and laser scan data. We provide both by using the launch file ‘ardros_configuration.launch’.

<launch>
  <node pkg="tf" type="static_transform_publisher" name="base_to_kinect_broadcaster" args="-0.115 0 0.226 0 0 0 base_link openni_camera 100" />

  <!--we use the kinect sensor as a fake laser scanner /-->
  <include file="$(find ardros)/launch/kinect_laser.launch"/>

  <node name="arduino" pkg="ardros" type="arduino.py">
    <rosparam file="$(find ardros)/info/ardros.yaml" command="load" />
  </node>
</launch>

The content is best explained from bottom to top: we launch the arduino node which represents the robot and also launch the Kinect based laser scanner as discussed above. In addition we need to publish a tf transform that relates the odometry frame base_link with the openni_camera frame. This transformation makes it possible to transform the laser scan data into the base_link frame. In the photo you can see where the Kinect sensor is located on the robot.

Finally we need to launch slam_gmapping. All of this is wrapped up in the slam.launch file:

<launch>
	<include file="$(find ardros)/launch/ardros_configuration.launch"/>

	<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
		<!--remap from="scan" to="base_scan"/-->
		<!--param name="odom_frame" value="odom"/-->
		<param name="map_update_interval" value="2.0"/>
		<param name="maxUrange" value="6.0"/>
		<param name="sigma" value="0.05"/>
		<param name="kernelSize" value="1"/>
		<param name="lstep" value="0.05"/>
		<param name="astep" value="0.05"/>
		<param name="iterations" value="5"/>
		<param name="lsigma" value="0.075"/>
		<param name="ogain" value="3.0"/>
		<param name="lskip" value="0"/>
		<param name="srr" value="0.01"/>
		<param name="srt" value="0.02"/>
		<param name="str" value="0.01"/>
		<param name="stt" value="0.02"/>
		<param name="linearUpdate" value="0.25"/>     <!-- param name="linearUpdate" value="0.5"/-->
		<param name="angularUpdate" value="0.262"/>   <!--param name="angularUpdate" value="0.436"/-->
		<param name="temporalUpdate" value="-1.0"/>
		<param name="resampleThreshold" value="0.5"/>
		<param name="particles" value="300"/>
		<param name="xmin" value="-50.0"/>
		<param name="ymin" value="-50.0"/>
		<param name="xmax" value="50.0"/>
		<param name="ymax" value="50.0"/>
		<param name="delta" value="0.05"/>
		<param name="llsamplerange" value="0.01"/>
		<param name="llsamplestep" value="0.01"/>
		<param name="lasamplerange" value="0.005"/>
		<param name="lasamplestep" value="0.005"/>
	</node>
</launch>

The various gmapping parameters are explained at http://www.ros.org/wiki/gmapping. There is plenty of room for fine tuning but the values listed above work fairly well. I am using mainly the default values but I specifically chose a relatively high particle count to increase the accuracy which suffers from the fact that the Kinect sensor has a much smaller horizontal field of view than a real laser scanner. So let’s run it:

  • Close all open terminals
  • Open a new terminal and execute the following
roscd ardros
roslaunch ./launch/slam.launch
  • Open a new terminal and execute
roscd ardros
rosrun rviz rviz -d ./rviz/slam.vcg

We can now drive the robot around for a while and watch as the map is assembled. When the map is completed it can be saved:

  • Open a new terminal and execute
roscd ardros
rosrun map_server map_saver -f ./maps/map

As an example, here is a snapshot of a sample run and a screen recording that shows how it evolved.

Now that we have the ability to retrieve odometry information from the robot and send velocity commands we combine both to drive the robot around using dead reckoning. The setup is very similar to the teleop scenario that I briefly discussed in the blog entry ROS Introduction Class with Ardros. Instead of the teleop node we now use a new DeadReckoning node:

The other key difference is that we now leverage the odometry transform information that the arduino node publishes (represented by the ‘tf’ lollipop in the block diagram above). The DeadReckoning.py node is modeled after the corresponding C++ node in the Willow Garage PR2 stack, ‘Using the base controller with odometry and transform information’.

Note that the DeadReckoning.py node can be used with any ROS enabled robot that publishes odometry information and accepts velocity commands based on the contract defined by ROS. Obviously in our case we use it in conjunction with the arduino node. If you want to follow along I suggest you locally check out the code from the Google Subversion repository. This blog entry is based on revision 61. In order to check out the code to the local directory ‘drh-robotics-ros’ use this command:

svn checkout -r 61 http://drh-robotics-ros.googlecode.com/svn/trunk/ drh-robotics-ros

For ROS to see the new package its path needs to be added to the ROS_PACKAGE_PATH environment variable. This is accomplished by adding a line at the end of the .bashrc file in your home folder:

export ROS_PACKAGE_PATH=~/drh-robotics-ros/ros/ardros:$ROS_PACKAGE_PATH

The path to the ardros folder might need to be adjusted. Remember to open a new terminal (or run source ~/.bashrc) for the change to take effect.

Now we are ready to spin up the arduino node.

  1. Open a new terminal and enter
roscore
     This will start the ROS server.
  2. In a new terminal type
roscd ardros
roslaunch ./launch/ardros_standalone.launch

The content of the launch file ardros_standalone.launch is:

<launch>
  <node name="arduino" pkg="ardros" type="arduino.py">
    <rosparam file="$(find ardros)/info/ardros.yaml" command="load" />
  </node>
</launch>

It starts the arduino.py node with the parameters specified in the referenced ardros.yaml file:

port: /dev/ttyUSB0
baudRate: 115200
batteryMonitorParams: {voltageTooLowlimit: 11.7}

# wheel diameter [m], trackwidth [m], ticks per revolution
driveGeometry: {wheelDiameter: 0.0763, trackWidth: 0.37, countsPerRevolution: 9750}

# The speed controller parameters include the P and I gains for the three PI controllers and the command timeout for the speed controller in seconds.
# If no velocity command arrives for more than the specified timeout then the speed controller will stop the robot.
speedController: {velocityPParam: 0.1, velocityIParam: 0.1, turnPParam: 0.4, turnIParam: 0.5, commandTimeout: 1.0}

The arduino.py node is now running, publishing tf and odometry messages while waiting for velocity commands. Sending those commands is the task of the  DeadReckoning.py node. This node implements the Driver class (more about it later) which exposes two public functions:

  • DriveX(self, distance, speed)
  • Turn(self, angle, angularSpeed)

The node creates an instance of the Driver class and then uses the two public functions to command the robot to drive a given pattern. The checked in code commands the robot to drive forward 2m, turn 180°, drive back 2m and finally turn 180° again to end up in the start position:

if __name__ == '__main__':
	try:
		driver = Driver()
		driver.DriveX(distance = 2, speed = 0.1);
		driver.Turn(angle = math.pi, angularSpeed = 0.3);
		driver.DriveX(distance = 2, speed = 0.1);
		driver.Turn(angle = -math.pi, angularSpeed = 0.3)
		#driver.Turn(angle = 3 * math.pi, angularSpeed = 0.3);
	except rospy.ROSInterruptException:
		pass

Let’s look at the code of the Driver class starting with its constructor:

	def __init__(self):
		rospy.init_node('DeadReckoning')

		self._VelocityCommandPublisher = rospy.Publisher("cmd_vel", Twist)
		self._TransformListener = tf.TransformListener()

		# wait for the listener to get the first transform message
		self._TransformListener.waitForTransform("/odom", "/base_link", rospy.Time(), rospy.Duration(4.0))

In the constructor the instance is registered as a ROS node. Since the node will publish twist messages on the cmd_vel topic a corresponding publisher needs to be created. In order to get information about the transformation from the /odom frame to the robot’s /base_link frame we need a TransformListener. Finally we wait up to 4 seconds for the requested transformation to arrive. If we don’t get a message within this time the constructor fails.

Driving forward and backward is handled by the DriveX function:

	def DriveX(self, distance, speed):
		'''
		Drive in x direction a specified distance based on odometry information
		distance [m]: the distance to travel in the x direction (>0: forward, <0: backwards)
		speed [m/s]: the speed with which to travel; must be positive
		'''
		forward = (distance >= 0)

		# record the starting transform from the odom to the base_link frame
		# Note that here the 'from' frame precedes 'to' frame which is opposite to how they are
		# ordered in tf.TransformBroadcaster's sendTransform function.
		# startTranslation is a tuple holding the x,y,z components of the translation vector
		# startRotation is a tuple holding the four components of the quaternion
		(startTranslation, startRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))

		done = False

		velocityCommand = Twist()
		if forward:
			velocityCommand.linear.x = speed # going forward m/s
		else:
			velocityCommand.linear.x = -speed # going backward m/s

		velocityCommand.angular.z = 0.0 # no angular velocity

		while True:
			try:
				(currentTranslation, currentRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))

				dx = currentTranslation[0] - startTranslation[0]
				dy = currentTranslation[1] - startTranslation[1]

				distanceMoved = math.sqrt(dx * dx + dy * dy)
				print distanceMoved
				if (forward):
					arrived = distanceMoved >= distance
				else:
					arrived = distanceMoved >= -distance

				if (arrived):
					break
				else:
					# send the drive command
					print("sending vel command" + str(velocityCommand))
					self._VelocityCommandPublisher.publish(velocityCommand)

			except (tf.LookupException, tf.ConnectivityException):
				continue

			rospy.sleep(0.1)

		#stop
		velocityCommand.linear.x = 0.0
		velocityCommand.angular.z = 0.0
		self._VelocityCommandPublisher.publish(velocityCommand)

		return done

The function first retrieves the current transformation between the frames /odom and /base_link to know where the robot starts from. Then it sets up a twist message that commands the robot to drive forward (or backward) with the specified speed. The actual driving happens in the while loop. We get the current transformation and calculate how far the robot drove since the start of the function. If we traveled the requested distance the while loop exits and the function returns. Otherwise the twist message is published on the cmd_vel topic and we wait a little before checking again.

The Turn function is structured in the same way. Since angles are involved it is slightly more complex but hopefully the code is clear enough to understand.

	def Turn(self, angle, angularSpeed):
		'''
		Turn the robot based on odometry information
		angle [rad]: the angle to turn (positive angles mean counterclockwise rotation)
		angularSpeed [rad/s]: the speed with which to turn; must be positive
		'''

		ccw = (angle >= 0) # counter clockwise rotation

		# record the starting transform from the odom to the base frame
		# Note that here the 'from' frame precedes 'to' frame which is opposite to how they are
		# ordered in tf.TransformBroadcaster's sendTransform function.
		(startTranslation, startRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))
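		# yaw angle from the quaternion: for a rotation purely about the z axis, yaw = 2 * atan2(q_z, q_w)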
		startAngle = 2 * math.atan2(startRotation[2], startRotation[3])

		print "start angle: " + str(startAngle)
		previousAngle = startAngle
		angleOffset = 0.0

		done = False

		velocityCommand = Twist()
		velocityCommand.linear.x = 0.0 # no linear velocity, turn in place
		if ccw:
			velocityCommand.angular.z = angularSpeed
		else:
			velocityCommand.angular.z = -angularSpeed

		while not rospy.is_shutdown():
			try:
				(currentTranslation, currentRotation) = self._TransformListener.lookupTransform("/odom", "/base_link", rospy.Time(0))
				currentAngle = 2 * math.atan2(currentRotation[2], currentRotation[3])
				print "currentAngle: " + str(currentAngle)

				# we need to handle roll over of the angle
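				# (a sign flip combined with a jump of more than pi/2 means atan2 wrapped around between ~ +pi and ~ -pi)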
				if (currentAngle * previousAngle < 0 and math.fabs(currentAngle - previousAngle) > math.pi / 2):
					if (currentAngle > previousAngle):
						print "subtracting"
						angleOffset = angleOffset - 2 * math.pi
					else:
						print "adding"
						angleOffset = angleOffset + 2 * math.pi

				angleTurned = currentAngle + angleOffset - startAngle
				previousAngle = currentAngle

				print "angleTurned: " + str(angleTurned)
				if (ccw):
					arrived = (angleTurned >= angle)
				else:
					arrived = (angleTurned <= angle)

				print arrived

				if (arrived):
					break
				else:
					# send the drive command
					print("sending vel command" + str(velocityCommand))
					self._VelocityCommandPublisher.publish(velocityCommand)

			except (tf.LookupException, tf.ConnectivityException):
				continue

			time.sleep(0.1)

		#stop
		velocityCommand.linear.x = 0.0
		velocityCommand.angular.z = 0.0
		self._VelocityCommandPublisher.publish(velocityCommand)

		return done

We can now start up the DeadReckoning.py node and see it control the robot’s movements. In a new terminal type

roscd ardros
rosrun ardros DeadReckoning.py

This covers dead reckoning.
