Assignment 5, Part 1

[Note that this component of assignment 5 was initially developed by Dr. Karl Tuyls while heading the Swarm Lab at Maastricht University]

Introduction

In this part of the assignment a robot drives through a corridor environment and has to localize itself. To achieve this, your task is to implement the discrete Bayes filter.  The robot will not be autonomous; instead it is controlled through an interface called the “gui”, which has several buttons for performing the required actions.

The robot world is discretized into grid cells of 1m x 1m.  The robot moves forward by 1m when the “Move forward” button is pressed, and turns 180 degrees when the “Turn 180 degrees” button is pressed. This means the robot can move in both directions along the corridor, and every grid cell is represented by two belief states (one facing left, one facing right).  The following shows a picture of the Stage simulator for this world:

State 0 represents the right-most position with the robot facing left, and state 9 represents the left-most position facing left.  State 10 is the left-most position facing right, and state 19 is the right-most position facing right.   Once again, states 0-9 represent the beliefs for the robot facing left, and states 10-19 represent the beliefs for the robot facing right. The robot's measurements are also discretized: walls are detected at a maximum distance of 1m (done by the laser_to_wall node). This means that at every position (state) the robot gets three measurements: wall_left, wall_right, and wall_front.  However, for the localization process we only use wall_left and wall_right.

The image above shows how RViz visualizes the belief representation.  We see the robot and an overlay of the states projected onto the map.  States 0-9 are shown on the top row and states 10-19 below, but this vertical separation is for visualization purposes only.  Next to each state number its probability is printed, and the higher the probability, the more intense the red that is displayed.
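For reference, the state numbering above can be decoded into a grid cell and heading with a small helper like the following (a hypothetical illustration only; nothing like it is required by the supplied code):

    # Hypothetical helper illustrating the state numbering (not part of the supplied code).
    def decode_state(s):
        """Return (cell, heading) for state s, where cell 0 is the right-most grid cell."""
        if s <= 9:           # states 0-9: robot facing left
            return s, 'left'
        else:                # states 10-19: robot facing right
            return 19 - s, 'right'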

Localization without uncertainty wouldn’t be much fun.  You can enable/disable both movement and measurement noise with the gui.  The following are the motion and measurement models to use (the same models are used to incorporate noise):

Motion model for forwards movement:
P(Xt+1 = i   | Xt = i) = 0.1
P(Xt+1 = i+1 | Xt = i) = 0.8
P(Xt+1 = i+2 | Xt = i) = 0.1

Motion model for turns:
P(Xt+1 = 19 - i | Xt = i) = 0.9
P(Xt+1 = i      | Xt = i) = 0.1

Measurement model:
P(Zt = sense_wall | Xt = is_wall) = 0.8
P(Zt = sense_door | Xt = is_wall) = 0.2
P(Zt = sense_wall | Xt = is_door) = 0.3
P(Zt = sense_door | Xt = is_door) = 0.7

This means that for forwards movement the robot drives 1m 80% of the time, 2m 10% of the time, and fails to actually move 10% of the time. For a commanded turn there is a 90% chance of actually turning and a 10% chance of not turning.  In terms of sensing, when a wall is present the sensor returns the correct value 80% of the time.  When a door is present (i.e. there is no wall) the sensor returns the correct value 70% of the time.
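If it helps, these models can be encoded directly in bayes_filter.py as constants, for example as in the sketch below (the names and layout are only a suggestion, not a requirement):

    # Illustrative encoding of the models above; names and structure are a suggestion only.
    # Forward motion: offset added to the state index -> probability.
    MOTION_FORWARD = {0: 0.1, 1: 0.8, 2: 0.1}

    # Turn: probability of actually turning (state i -> state 19 - i) vs. staying in state i.
    P_TURN_SUCCESS = 0.9
    P_TURN_FAIL = 0.1

    # Measurement model: P(sensed | actual), indexed as MEAS_MODEL[actual][sensed].
    MEAS_MODEL = {
        'wall': {'wall': 0.8, 'door': 0.2},
        'door': {'wall': 0.3, 'door': 0.7},
    }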

Setup

Download the following file and unzip it into your ~/catkin_ws/src directory.

Your job is to fill in the missing parts of bayes_filter.py, which is located in the scripts directory.  The other Python scripts in this directory are all necessary for the system to work.  Make sure that these scripts are executable (if not, run chmod +x *.py in that directory).  The a5_p1 package defines a message (look in the msg directory), so catkin_make will need to be called.

You will need to establish some kind of “map” to represent the sensory conditions expected for all 20 states.  The analogy is the “mushroom map” from the notes.
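For example, such a map might simply list, for each of the 20 states, whether a wall or a door is expected to the robot's left and right.  A minimal sketch (the entries below are placeholders, not the actual corridor layout):

    # Hypothetical sensory map: one (left, right) entry per state, each 'wall' or 'door'.
    # The values shown are placeholders -- fill them in from the actual corridor.
    WORLD_MAP = [
        ('wall', 'door'),   # state 0
        ('wall', 'wall'),   # state 1
        # ... one entry for each of the 20 states ...
    ]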

In the launch directory is the launch file bayes_world.launch.  Executing roslaunch a5_p1 bayes_world.launch will do the following:

  1. it starts Stage, with the required robot model and map
  2. it starts fake_localization, needed for simulation purposes.
  3. it starts map_server, which publishes the map so it can be used in RViz.  (This is not the same “map” as referred to above.)
  4. it starts laser_to_wall, which is a custom node that translates from laser data to wall_front, wall_left, and wall_right detection, within a maximum distance of 1m.
  5. it starts the GUI, which provides control buttons for moving the robot forward, turning, applying the measurement update, and enabling/disabling noise.
  6. it starts RViz, in which we visualize the robot's position and belief states.

Note that the GUI buttons will be unresponsive unless you also start the localizer node (take a look at localizer.py to see what is going on).  The localizer node is not started by the launch file because it connects to your code (which you may be frequently editing, then re-running).  The localizer node instantiates a BayesFilter object which you are required to flesh out in bayes_filter.py.
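Its interface will look roughly like the skeleton below; check the provided bayes_filter.py and localizer.py for the exact method names and arguments, since this sketch is only an assumption:

    # Rough skeleton only -- the real signatures are dictated by the provided code.
    class BayesFilter:
        def __init__(self):
            # Start with a uniform belief over the 20 states.
            self.belief = [1.0 / 20] * 20

        def prediction(self, action):
            # TODO: apply the motion model for 'action' (forward or turn) to self.belief.
            pass

        def meas_update(self, wall_left, wall_right):
            # TODO: weight self.belief by the measurement likelihoods and renormalize.
            pass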

To test your code, execute rosrun a5_p1 localizer.py.  You can exit this process and edit bayes_filter.py if you need to make changes (that is, you can leave everything from the launch file up and running).  Move the robot by using the buttons “Move forward” or “Turn 180 degrees” on the GUI.  Note that after every movement (either forward or turning) the prediction function will be called, implementing the prediction step of the Bayes filter.  You have to call meas_update explicitly by pressing its button after each movement; this is done so that you can see the separate effect of each part of the Bayes filter.  You can enable or disable movement and measurement noise, but your filter code should always assume that noise may be present (i.e. use the movement and measurement models supplied above).  Note that it may take a number of movements and measurement update cycles before the belief converges.  With the right sequence of movements it should eventually converge to a single state representing the robot’s true pose.
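As a reminder, the two steps you are implementing have the following general form (a generic sketch, not tied to this assignment's particular state numbering or method names):

    # Generic discrete Bayes filter steps, for illustration only.
    def predict(belief, transition):
        """belief[i] = P(X_t = i); transition[i][j] = P(X_t+1 = j | X_t = i, action)."""
        n = len(belief)
        return [sum(belief[i] * transition[i][j] for i in range(n)) for j in range(n)]

    def measurement_update(belief, likelihood):
        """likelihood[i] = P(z | X = i); returns the normalized posterior belief."""
        unnormalized = [b * l for b, l in zip(belief, likelihood)]
        eta = sum(unnormalized)
        return [u / eta for u in unnormalized]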

Submission

Submit bayes_filter.py.  You should not modify the launch file provided.