Ardros – Transform Between base_link and the Kinect Sensor

For SLAM and navigation to work correctly on a ROS-based robot, it is necessary to accurately specify the transformation between the coordinate frame of the steering center of the robot (typically called base_link) and the coordinate frame of the laser scans. On Kinect-based robots the laser scan measurements are derived from the 3D point cloud provided by the openni_camera package and are expressed in the frame /openni_depth_frame.

As described in my earlier post 2D SLAM with ROS and Kinect, I specify the transformation between base_link and the Kinect sensor (specifically the frame openni_camera) in the launch file ardros_configuration.launch. The relevant line is:

<node pkg="tf" type="static_transform_publisher" name="base_to_kinect_broadcaster" args="-0.115 0 0.226 0 0 0 base_link openni_camera 100" />
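The arguments of static_transform_publisher are the x, y, and z offsets in meters, followed by yaw, pitch, and roll in radians, the parent frame id, the child frame id, and the publish period in milliseconds. In my case the Kinect is mounted 0.115 m behind and 0.226 m above the base_link origin. For illustration only, the same transform could also be published from a small C++ node using the tf API; here is a minimal sketch (the node name and the 10 Hz rate are chosen to match the launch file line above):

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "base_to_kinect_broadcaster");
  ros::NodeHandle node;
  tf::TransformBroadcaster broadcaster;

  ros::Rate rate(10.0);  // 100 ms period, as in the launch file
  while (node.ok())
  {
    tf::Transform transform;
    // Kinect origin: 0.115 m behind and 0.226 m above the base_link origin
    transform.setOrigin(tf::Vector3(-0.115, 0.0, 0.226));
    transform.setRotation(tf::Quaternion(0.0, 0.0, 0.0, 1.0));  // no rotation
    broadcaster.sendTransform(
        tf::StampedTransform(transform, ros::Time::now(),
                             "base_link", "openni_camera"));
    rate.sleep();
  }
  return 0;
}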

This transformation hooks the base_link frame together with the frames associated with the Kinect sensor. How the various frames are connected can be seen by running the following commands, as described in this tutorial:

rosrun tf view_frames
evince frames.pdf

Here is the result of running these commands while running the navigation stack on Ardros (click on the image for a larger view).

[Image: tf_frames]

I marked the frame for the laser scan data in red. As can be seen, this frame is now connected to base_link.

Now the question is: how can we determine the parameters used in the transformation above? I found it easiest to ‘park’ the robot in front of a wall, pushing against a rectangular piece of cardboard or foam board to guarantee that the front of the robot is oriented parallel to the wall at a fixed distance:

[Image: CalibratingKinectLocation]

I use a book to create a step change in the laser scan right at the center of the front of the robot. Next, I run the navigation stack and bring up rviz with a configuration file that shows the laser scan plus two axes displays for the frames /base_link and /openni_camera.

[Image: KinectLocationCalibration]

By setting the length of the base_link frame axes to the distance from the origin of the base_link frame to the wall, I can readily see whether the laser scan line intersects the end of the x axis (red). If it doesn't, the x offset specified in the static transform broadcaster base_to_kinect_broadcaster (see the launch file line above) needs to be adjusted. Furthermore, the step change in the distance needs to align with the x axis; any mismatch is corrected by modifying the y offset in the transform base_to_kinect_broadcaster.
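To take some of the guesswork out of reading distances off the rviz display, a small helper node can print the range measured straight ahead. This is a hypothetical helper, not part of the Ardros code; it assumes the laser scan is published on the scan topic:

#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>

void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan)
{
  // Index of the beam closest to angle 0, i.e. straight ahead in the scan frame
  int centerIndex = (int)((0.0 - scan->angle_min) / scan->angle_increment);
  if (centerIndex >= 0 && centerIndex < (int)scan->ranges.size())
    ROS_INFO("Range straight ahead: %.3f m", scan->ranges[centerIndex]);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "center_range_monitor");
  ros::NodeHandle node;
  ros::Subscriber sub = node.subscribe("scan", 1, scanCallback);
  ros::spin();
  return 0;
}

The printed range can then be compared against the measured distance between the sensor and the wall while tuning the offsets.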

Once this is done, the frames are correctly aligned and the navigation stack can accurately correlate the laser scans with the odometry information.

Ardros – Upgrading to the Maple 32-bit Microcontroller

As described in previous posts, until now I used an Arduino Mega controller for my Ardros robot. In most cases this controller provides more than enough power. Mainly out of curiosity, but also to overcome problems with counting encoder ticks when my motors run at full speed, I decided to give LeafLabs’ Maple controller a try.

The Maple is based on the 32-bit STM32F103RB ARM Cortex-M3 microcontroller running at 72 MHz. Apart from the drastic speed improvement, what makes the Maple so attractive is that it uses the same pin layout as the Arduino and comes with the same IDE and hardware abstraction.

[Image: MapleIde]

As a result, the transition from Arduino is fairly easy. Naturally there are differences, though; they are summarized here.
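To give a flavor of the differences, here is an illustrative sketch of mine (not taken from the Ardros sources) that counts quadrature encoder ticks in an interrupt handler. On the Maple, attachInterrupt accepts any GPIO pin directly (rather than an interrupt number as on the Arduino), and the USB connection is exposed as SerialUSB instead of Serial; the pin assignments below are placeholders:

#define ENCODER_PIN_A 5   // hypothetical pin assignments
#define ENCODER_PIN_B 6

volatile long encoderTicks = 0;

void handleEncoderInterrupt()
{
  // On a rising edge of channel A, channel B indicates the direction.
  if (digitalRead(ENCODER_PIN_B) == HIGH)
    encoderTicks++;
  else
    encoderTicks--;
}

void setup()
{
  pinMode(ENCODER_PIN_A, INPUT);
  pinMode(ENCODER_PIN_B, INPUT);
  attachInterrupt(ENCODER_PIN_A, handleEncoderInterrupt, RISING);
}

void loop()
{
  SerialUSB.println(encoderTicks);  // Maple uses SerialUSB for the USB link
  delay(100);
}

At 72 MHz the interrupt handler returns much faster than on the 16 MHz Arduino Mega, which is why the Maple is less prone to missing ticks at full motor speed.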

The port of the complete Arduino program, including the libraries that make up the Ardros – Controller and Drive System, was rather painless, and the code is now running on my robot. The source code is available on my Google Code site drh-robotics-ros. This post is based on revision 104 of the code. The relevant subdirectories are Robot and libraries.