Introduction
So far we have not really tapped into the power of ROS. Teleoperation and dead reckoning were just a way to 'warm up' and get familiar with the basics of ROS. This is about to change as we are getting ready to leverage ROS's implementation of SLAM (Simultaneous Localization and Mapping). The associated ROS package is gmapping, which wraps the open source GMapping implementation available on OpenSlam.org. In order to use gmapping we need a source of odometry information (Ardros already provides this) and a source of depth information. Typically laser scanners are used to get the depth information. They are accurate and fast but also quite expensive and hence not a very attractive choice for hobby robotics. Instead of a laser scanner I am using the Microsoft Kinect sensor, which provides RGB + D information (color video plus per-pixel depth).
Hooking up the Kinect Sensor
The following assumes that you have the desktop-full installation of ROS Diamondback. In addition we need to install the openni_kinect package as described at http://www.ros.org/wiki/openni_kinect. Once everything is in place we can verify that the depth information comes in. After hooking up the Kinect sensor, type the following command in a terminal:
roslaunch openni_camera openni_node.launch
Then in a different terminal:
rosrun rviz rviz
In rviz set the Fixed Frame parameter under Global Options to /openni_camera. Then add a PointCloud2 display type and select /camera/rgb/points for its topic. You should now see a color-coded depth image similar to the one below:
Additional information about how the Kinect sensor is represented in ROS can be found here: http://www.ros.org/wiki/openni_camera
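If you prefer a programmatic check over rviz, the following minimal Python sketch (my own addition, not part of Ardros) subscribes to the /camera/rgb/points topic and logs the size of each incoming point cloud:

#!/usr/bin/env python
# Minimal sketch: confirm that the Kinect point cloud arrives on
# /camera/rgb/points while openni_node.launch is running.
import rospy
from sensor_msgs.msg import PointCloud2

def on_cloud(msg):
    # width x height gives the number of points in the organized cloud
    rospy.loginfo("got cloud: %d x %d points, frame %s",
                  msg.width, msg.height, msg.header.frame_id)

if __name__ == '__main__':
    rospy.init_node('kinect_cloud_check')
    rospy.Subscriber('/camera/rgb/points', PointCloud2, on_cloud)
    rospy.spin()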
Faking a Laser Scanner
To use the depth image for SLAM we convert the point cloud to a faked laser scan signal by cutting a horizontal slice out of the image and using the nearest distance (closest depth) in each column. Fortunately we don’t need to code this ourselves. The pointcloud_to_laserscan package which is part of the turtlebot stack already covers this. The turtlebot stack does not come with desktop-full and needs to be installed separately.
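To make the idea concrete, here is a small standalone sketch (an illustration of the principle only, not the actual pointcloud_to_laserscan code): take a horizontal band of rows from a depth image and keep the nearest valid depth in each column.

# Illustration only: the principle behind the fake laser scan, applied to a
# depth image given as a numpy array (rows x cols, values in meters).
import numpy as np

def fake_scan_from_depth(depth, band_height=10):
    # take a horizontal band of rows around the middle of the image ...
    center = depth.shape[0] // 2
    band = depth[center - band_height // 2 : center + band_height // 2, :]
    # ... ignore invalid readings and keep the nearest depth per column
    band = np.where(np.isfinite(band) & (band > 0), band, np.inf)
    return band.min(axis=0)  # one range per image column, like a laser scan

# example with random data standing in for a 480x640 depth frame
print(fake_scan_from_depth(np.random.uniform(0.5, 5.0, (480, 640)))[:5])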
The remainder of this blog requires revision 70 of my Ardros package. For details about how to get the Ardros source code please refer to my blog entry Ardros – Dead Reckoning but make sure that you use the newer revision.
We are now in a position to look at the laser scan output in rviz:
- Close all open terminals
- Open a new terminal and execute the following
roscd ardros
roslaunch ./launch/kinect_laser.launch
- Open a new terminal and execute
rosrun rviz rviz
- In rviz
- Set the fixed frame under Global Options to /openni_depth_frame
- Add the display type ‘Laser Scan’ and select the topic /scan for it.
You should now see the laser scan data as derived from Kinect's point cloud. For the screenshot below I added a grid with a cell size of 0.1 m to the rviz display and also included a tf (transform) display showing only the /openni_camera frame. For clarification I added the x, y, z labels after taking the screenshot.
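Besides inspecting the scan in rviz you can also peek at the /scan topic directly. The following minimal subscriber (again just a sketch, not part of Ardros) logs the number of rays and the closest obstacle in each scan:

#!/usr/bin/env python
# Sketch: peek at the fake laser scan on /scan while kinect_laser.launch runs.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    # keep only readings inside the valid range window
    valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
    if valid:
        rospy.loginfo("%d rays, closest obstacle at %.2f m",
                      len(scan.ranges), min(valid))

if __name__ == '__main__':
    rospy.init_node('scan_check')
    rospy.Subscriber('/scan', LaserScan, on_scan)
    rospy.spin()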
The launch file 'kinect_laser.launch' warrants a closer look:
<launch>
  <!-- kinect and frame ids -->
  <include file="$(find openni_camera)/launch/openni_node.launch"/>

  <!-- openni_manager -->
  <node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/>

  <!-- throttling -->
  <node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load pointcloud_to_laserscan/CloudThrottle openni_manager">
    <param name="max_rate" value="2"/>
    <remap from="cloud_in" to="/camera/depth/points"/>
    <remap from="cloud_out" to="cloud_throttled"/>
  </node>

  <!-- fake laser -->
  <node pkg="nodelet" type="nodelet" name="kinect_laser" args="load pointcloud_to_laserscan/CloudToScan openni_manager">
    <param name="output_frame_id" value="/openni_depth_frame"/>
    <remap from="cloud" to="cloud_throttled"/>
  </node>
</launch>
The first include statement pulls in the openni_node launch file, which brings up the Kinect camera (openni_camera) and the tf transforms that link the various camera frames (depth, rgb, etc.) together.
Then a node of type nodelet named openni_manager is spun up. It acts as the container for other nodelets. Nodelets are like nodes, but instead of running in their own process they are loaded into a parent node of type nodelet. Since multiple nodelets execute in the same process they can share data very efficiently, which matters when large amounts of data (like video frames) are processed. The launch file loads two nodelets: a) one that throttles the rgb-d frame rate down to 2 Hz and b) one that cuts a slice out of each rgb-d frame, calculates the nearest distance column by column, and publishes the result as a laser scan data structure on the topic /scan.
Both nodelets are part of the turtlebot stack. CloudToScan supports a couple of parameters that define the size and location of the slice that is used as the basis for the laser scan data. I use the default values. For details please refer to the source code.
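To convince yourself that the throttling works you can measure the rate on the throttled point cloud topic, for example with rostopic hz. The sketch below does the same thing in Python; it assumes that the remapped topic resolves to /cloud_throttled at the top level:

#!/usr/bin/env python
# Sketch: estimate the message rate on the throttled cloud topic to confirm
# the ~2 Hz throttling (roughly equivalent to 'rostopic hz /cloud_throttled').
import rospy
from sensor_msgs.msg import PointCloud2

class RateCheck(object):
    def __init__(self, topic):
        self.count = 0
        self.start = rospy.get_time()
        rospy.Subscriber(topic, PointCloud2, self.callback)

    def callback(self, msg):
        self.count += 1
        elapsed = rospy.get_time() - self.start
        if elapsed > 0:
            rospy.loginfo("~%.2f Hz over %.1f s", self.count / elapsed, elapsed)

if __name__ == '__main__':
    rospy.init_node('throttle_check')
    RateCheck('/cloud_throttled')  # assumed resolved topic name
    rospy.spin()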
Putting it all Together
Finally we are ready to get SLAM working. As inputs the SLAM algorithm requires odometry information and laser scan data. We provide both by using the launch file 'ardros_configuration.launch'.
<launch>
  <node pkg="tf" type="static_transform_publisher" name="base_to_kinect_broadcaster" args="-0.115 0 0.226 0 0 0 base_link openni_camera 100" />

  <!-- we use the kinect sensor as a fake laser scanner -->
  <include file="$(find ardros)/launch/kinect_laser.launch"/>

  <node name="arduino" pkg="ardros" type="arduino.py">
    <rosparam file="$(find ardros)/info/ardros.yaml" command="load" />
  </node>
</launch>
The content is best explained from bottom to top: We launch the arduino node, which represents the robot, and the Kinect based laser scanner as discussed above. In addition we publish a static tf transform that relates the odometry frame base_link to the openni_camera frame; the arguments of static_transform_publisher are the x, y, z offset (in meters), the yaw, pitch, roll rotation, the parent and child frame ids, and the publishing period in milliseconds. This transformation makes it possible to transform the laser scan data into the base_link frame. In the photo you can see where the Kinect sensor is located on the robot.
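To verify that the static transform is actually being published you can query tf directly. The following sketch (my own addition, not part of the Ardros package) repeatedly looks up the transform from base_link to openni_camera while ardros_configuration.launch is running:

#!/usr/bin/env python
# Sketch: check that the base_link -> openni_camera transform is available.
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('tf_check')
    listener = tf.TransformListener()
    rate = rospy.Rate(1.0)
    while not rospy.is_shutdown():
        try:
            # frame names as used by the static_transform_publisher above
            (trans, rot) = listener.lookupTransform('base_link', 'openni_camera', rospy.Time(0))
            rospy.loginfo("base_link -> openni_camera translation: %s", str(trans))
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            rospy.loginfo("transform not available yet")
        rate.sleep()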
Finally we need to launch slam_gmapping. All of this is wrapped up in the slam.launch file:
<launch>
  <include file="$(find ardros)/launch/ardros_configuration.launch"/>

  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
    <!--remap from="scan" to="base_scan"/-->
    <!--param name="odom_frame" value="odom"/-->
    <param name="map_update_interval" value="2.0"/>
    <param name="maxUrange" value="6.0"/>
    <param name="sigma" value="0.05"/>
    <param name="kernelSize" value="1"/>
    <param name="lstep" value="0.05"/>
    <param name="astep" value="0.05"/>
    <param name="iterations" value="5"/>
    <param name="lsigma" value="0.075"/>
    <param name="ogain" value="3.0"/>
    <param name="lskip" value="0"/>
    <param name="srr" value="0.01"/>
    <param name="srt" value="0.02"/>
    <param name="str" value="0.01"/>
    <param name="stt" value="0.02"/>
    <param name="linearUpdate" value="0.25"/>
    <!--param name="linearUpdate" value="0.5"/-->
    <param name="angularUpdate" value="0.262"/>
    <!--param name="angularUpdate" value="0.436"/-->
    <param name="temporalUpdate" value="-1.0"/>
    <param name="resampleThreshold" value="0.5"/>
    <param name="particles" value="300"/>
    <param name="xmin" value="-50.0"/>
    <param name="ymin" value="-50.0"/>
    <param name="xmax" value="50.0"/>
    <param name="ymax" value="50.0"/>
    <param name="delta" value="0.05"/>
    <param name="llsamplerange" value="0.01"/>
    <param name="llsamplestep" value="0.01"/>
    <param name="lasamplerange" value="0.005"/>
    <param name="lasamplestep" value="0.005"/>
  </node>
</launch>
The various gmapping parameters are explained at http://www.ros.org/wiki/gmapping. There is plenty of room for fine tuning, but the values listed above work fairly well. I mostly use the default values, but I specifically chose a relatively high particle count to increase the accuracy, which suffers from the fact that the Kinect sensor has a much narrower horizontal field of view than a real laser scanner. So let's run it:
- Close all open terminals
- Open a new terminal and execute the following
roscd ardros
roslaunch ./launch/slam.launch
- Open a new terminal and execute
roscd ardros
rosrun rviz rviz -d ./rviz/slam.vcg
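Before or while driving the robot around it can be reassuring to check that slam_gmapping is actually producing a map. The sketch below (not part of Ardros) subscribes to the occupancy grid that slam_gmapping publishes, by default on the /map topic, and logs its size and how many cells have been explored so far:

#!/usr/bin/env python
# Sketch: watch the occupancy grid published by slam_gmapping on /map.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(grid):
    known = sum(1 for cell in grid.data if cell != -1)  # -1 marks unknown cells
    rospy.loginfo("map %dx%d at %.2f m/cell, %d cells explored",
                  grid.info.width, grid.info.height, grid.info.resolution, known)

if __name__ == '__main__':
    rospy.init_node('map_watch')
    rospy.Subscriber('/map', OccupancyGrid, on_map)
    rospy.spin()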
We can now drive the robot around for a while and watch as the map is assembled. When the map is completed it can be saved:
- Open a new terminal and execute
roscd ardros
rosrun map_server map_saver -f ./maps/map
As an example, here is a snapshot of a sample run and a screen recording that shows how it evolved.