In this post I cover how we can leverage the ROS navigation stack to let the robot autonomously drive from a given location in a map to a defined goal. I will skim over many of the details since the associated tutorials on the ROS wiki do a great job in describing how to set up the navigation stack.
With SLAM working on the Ardros robot (see my previous post) we already have much of the required setup covered. In particular we have:
- A robot that publishes odometry information and accepts drive commands
- (Faked) laser scanner sensor data derived from the Kinect sensor
- Transform configurations covering the transformations between base_link, the odometry frame, and the laser scanner frame (depth frame of the Kinect sensor)
- A map generated via SLAM
From a high-level perspective, navigation brings in these additions:
- Localization of the robot in a given map using Monte Carlo Localization (MCL). For more information about MCL (including C# source code) please see my blog post Monte Carlo Localization for Robots.
- Global and local path planning. Amongst other parameters, path planning most notably requires a map, information about the size (footprint) of the robot, and information about how fast the robot can drive and accelerate.
Before reading on I highly recommend that you at least read through the ROS tutorial ‘Setup and Configuration of the Navigation Stack on a Robot’.
Okay, let’s dive into the Ardros specific files. As described in my previous posts the source code is available from here: http://code.google.com/p/drh-robotics-ros/. This post is based on revision 79 of the source code.
The whole navigation machinery is started via the launch file navigation.launch. The launch file itself is very simple:
```
<launch>
  <include file="$(find ardros)/launch/ardros_configuration.launch"/>
  <include file="$(find ardros)/launch/move_base.launch"/>
</launch>
```
It brings in two other launch files. The first is ardros_configuration.launch, which we already know from the previous post; it handles the Ardros-specific aspects including odometry, drive commands, and the use of the Kinect sensor as a fake laser scanner.
The second include file, move_base.launch, is responsible for bringing up the navigation specific aspects:
```
<launch>
  <!-- Run the map server -->
  <node name="map_server" pkg="map_server" type="map_server" args="$(find ardros)/maps/map.pgm 0.05"/>

  <!-- Run AMCL -->
  <include file="$(find ardros)/launch/amcl_diff.launch" />

  <node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
    <rosparam file="$(find ardros)/info/costmap_common_params.yaml" command="load" ns="global_costmap" />
    <rosparam file="$(find ardros)/info/costmap_common_params.yaml" command="load" ns="local_costmap" />
    <rosparam file="$(find ardros)/info/local_costmap_params.yaml" command="load" />
    <rosparam file="$(find ardros)/info/global_costmap_params.yaml" command="load" />
    <rosparam file="$(find ardros)/info/base_local_planner_params.yaml" command="load" />
  </node>
</launch>
```
The first relevant line brings up the map server and specifies which map it should serve up; the path to the map needs to be adjusted accordingly. Next it imports the launch file for AMCL (adaptive, or KLD-sampling, Monte Carlo localization) configured for a robot with differential drive (see odom_model_type in amcl_diff.launch). AMCL is responsible for localizing the robot on the map. AMCL uses numerous parameters, many of which are specified in the launch file. I largely use the default values but increased the particle counts. I expect that going forward I will fine-tune more of these parameters.
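For reference, the part of amcl_diff.launch that I touched looks roughly like the sketch below; the parameter values here are only illustrative, the actual numbers are in the launch file in the repository:

```
<launch>
  <node pkg="amcl" type="amcl" name="amcl">
    <!-- odometry model for a differential drive robot -->
    <param name="odom_model_type" value="diff"/>
    <!-- particle counts raised above the defaults (illustrative values) -->
    <param name="min_particles" value="500"/>
    <param name="max_particles" value="5000"/>
  </node>
</launch>
```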
It is worth noting that Ardros is quite similar to Willow Garage’s turtlebot in its use of the Kinect sensor in conjunction with the navigation stack. I started to use the node pointcloud_to_laserscan before it became part of the turtlebot stack and hence had to come up with my own launch files and associated parameters. In the future I should be able to borrow quite a bit from turtlebot, at a minimum better parameters for AMCL.
Finally, the last part of the move_base.launch file brings up the move_base node, which is the centerpiece of the ROS navigation stack (see the diagram and detailed explanation at http://www.ros.org/wiki/navigation/Tutorials/RobotSetup). The content of the referenced yaml files should be fairly clear after reading through the navigation stack tutorial. I will only briefly touch on the robot’s footprint, which is defined in the file costmap_common_params.yaml:
```
# for details see: http://www.ros.org/wiki/navigation/Tutorials/RobotSetup
obstacle_range: 2.5
raytrace_range: 3.0
footprint: [[0.305, 0.278], [0.04, 0.193], [-0.04, 0.193], [-0.282, 0.178], [-0.282, -0.178], [-0.04, -0.193], [0.04, -0.193], [0.305, -0.278]]
#footprint: [[0.075, 0.178], [0.04, 0.193], [-0.04, 0.193], [-0.282, 0.178], [-0.282, -0.178], [-0.04, -0.193], [0.04, -0.193], [0.075, -0.178]]
#robot_radius: ir_of_robot
inflation_radius: 0.6
observation_sources: laser_scan_sensor
laser_scan_sensor: {sensor_frame: openni_depth_frame, data_type: LaserScan, topic: scan, marking: true, clearing: true}
```
The specified points define a polygon that reaches out beyond the front of the robot; i.e., I pretend the robot is bigger than it actually is. The reason is that I want to make sure the path planner does not generate paths that bring the robot closer to an obstacle than it can actually see. The minimum distance that the Kinect sensor can measure is about 0.6 m (2 ft). To mitigate this short-range ‘blindness’ I:
- Mount the Kinect towards the back of the robot
- Pretend that the robot footprint extends further beyond the front wheels than it actually does.
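Besides the costmap parameters, move_base.launch also loads base_local_planner_params.yaml, which captures how fast the robot can drive and accelerate (see the list of navigation additions above). The sketch below shows the kind of entries involved, with illustrative values only; the actual numbers depend on the robot and are in the repository:

```
TrajectoryPlannerROS:
  max_vel_x: 0.3          # maximum forward velocity (m/s), illustrative value
  min_vel_x: 0.1          # minimum forward velocity (m/s)
  max_vel_theta: 1.0      # maximum rotational velocity (rad/s)
  min_in_place_vel_theta: 0.4
  acc_lim_x: 0.5          # forward acceleration limit (m/s^2)
  acc_lim_theta: 1.0      # rotational acceleration limit (rad/s^2)
  holonomic_robot: false  # differential drive cannot move sideways
```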
Now we are ready to run the navigation stack. Close all terminals and then run the following command in a new terminal:
roslaunch `rospack find ardros`/launch/navigation.launch
In a second terminal bring up rviz:
rosrun rviz rviz -d `rospack find ardros`/rviz/navigation.vcg
You will probably see a number of error messages in the first terminal since the robot has not yet been located in the map. To locate the robot, set the starting pose by clicking on the ‘2D Pose Estimate’ button in rviz, then clicking on the current location of the robot in the map and, while still holding the left mouse button, dragging the mouse in the direction the robot faces. Next, specify the goal in the same way after clicking on the ‘2D Nav Goal’ button; you should then see the planned path and the robot should start driving towards the goal.
Optionally you can monitor the status of the robot driving to the specified goal by running the following command in a third terminal:
rostopic echo /move_base/status
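move_base also listens for simple goals on the move_base_simple/goal topic, so instead of using the ‘2D Nav Goal’ button you can send a goal from the command line. The coordinates below are placeholders expressed in the map frame:

```
rostopic pub -1 /move_base_simple/goal geometry_msgs/PoseStamped \
  '{header: {frame_id: "map"}, pose: {position: {x: 1.0, y: 0.5, z: 0.0}, orientation: {w: 1.0}}}'
```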
The video below is the recording of a test run showing the output in rviz while Ardros navigates a room. The video shows how the initial pose and the goal are selected. Initially the probabilistic cloud of poses used by AMCL is very wide. As the robot drives around, the cloud condenses because the robot's pose, determined from the odometry and laser scan information, becomes better defined. The video also shows the calculated path that the robot tries to follow to get to the specified goal.