
Homework 03 - Factorgraph localization


Workspace Configuration

Enter Singularity shell:

singularity shell deploy/images/robolab_noetic.simg



Configure the workspace with the provided hw_03_pkgs. Add the folders to your workspace (overriding the aro_sim package). Then run catkin build and source devel/setup.bash as usual. If catkin build results in an error, try catkin clean and then catkin build again; this may fix the problem.

Package Structure

Get familiar with the package. The structure of the package is as follows:

  • config/ contains various configurations of the package.
  • launch/ provides convenient launchers for the nodes below:
    • aro_loc.launch launches the whole localization pipeline, including visualization,
    • sim.launch is a wrapper that selects a marker placement config and launches the simulation.
  • scripts/ contains executable nodes or scripts:
    • aro_localization node provides robot localization,
  • src/aro_localization/ contains Python package with the following module:
    • factor_graph.py contains the implementation of the factorgraph-based localization.
  • CMakeLists.txt describes package build.
  • dependencies.rosinstall lists dependencies which can directly be added into the workspace.
  • package.xml contains the Catkin package description.

Package Usage

Don't forget to source your workspace:

source "${ws}/devel/setup.bash"

To test the aro_localization node in simulator:

roslaunch --sigint-timeout 1 --sigterm-timeout 1 aro_localization sim.launch fuse_icp_slam:=false

This also runs keyboard (and gamepad) controls, so you can manually move the robot using the keyboard (keys W, S, A, D, X) while the terminal is focused. Worlds to test: stage_1, stage_2, stage_3, stage_4, world, aro_maze_1, aro_maze_2.

On the real robot, run

roslaunch aro_localization aro_loc.launch

There are a few differences between the simulator and the real robot:

  • In simulation, marker detection is done using the RGB camera, while on the real robot it uses the near-infrared camera. Try to figure out why!
  • In simulation, the bumper is the whole body of the robot. On the real robot, the bumper only covers the front part of the robot.
  • Quality of the wheel-IMU odometry differs between the simulation and the real robot.
  • In simulation, ground truth position is available for free. To get something similar for the real robot, a complicated setup of the room would be required.

Node aro_localization

This is the definition of the API of the aro_localization node you will be working on.

Private parameters

Only the important parameters are listed here. For the full view, refer to scripts/aro_localization source code.

  • map_frame (string, default fused_odom): Name of the frame in which the fused localization will be published.
  • odom_frame (string): Not an actual parameter, but we refer to it in this document as if it were. It is the name of the frame in which wheel-IMU odometry is published. The “parameter” is automatically set to the parent frame of the first odometry message on topic odom.
  • body_frame (string): Not an actual parameter, but we refer to it in this document as if it were. It is the name of the robot body frame. The “parameter” is automatically set to the child frame of the first odometry message on topic odom.
  • publish_tf (bool, default True): Whether to publish the map_frame → odom_frame transform.
  • invert_tf (bool, default True): Whether to publish the transform as map_frame → odom_frame (False) or odom_frame → map_frame (True).
  • max_iters (int, default 50): Maximum number of optimizer iterations.
  • Parameters starting with c_ configure the scale of costs of the individual loss terms. You can play with these to tune the algorithm behavior.
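The effect of the c_ cost scales can be illustrated on a toy one-dimensional problem (a hypothetical sketch, not code from the package): scaling a residual by a constant weight changes how strongly its factor pulls on the optimized solution.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical illustration: two scalar factors pull x toward 0 ("odometry")
# and toward 1 ("marker"). The c_* weights scale the residuals, so the
# factor with the larger weight dominates the least-squares solution.
c_odom, c_marker = 1.0, 10.0

def residuals(x):
    return np.array([
        c_odom * (x[0] - 0.0),    # "odometry" factor pulling toward 0
        c_marker * (x[0] - 1.0),  # "marker" factor pulling toward 1
    ])

sol = least_squares(residuals, x0=np.array([0.5]))
# The optimum is c_marker**2 / (c_odom**2 + c_marker**2) = 100/101,
# i.e. the heavily weighted marker factor almost completely wins.
```

The same principle applies to the node's c_ parameters: increasing one scale makes the corresponding measurements trusted more relative to the others.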


Subscribed Topics

  • apriltag (type apriltag_ros/AprilTagDetectionArray) Detected AprilTag markers.
  • odom (type nav_msgs/Odometry) Wheel-IMU odometry provided by the platform (short-term accurate, long-term inconsistent, no discrete jumps, regular frequency ~10 Hz). Position estimate is used, velocity estimate is not used.
  • icp_odom (type nav_msgs/Odometry) Odometry from SLAM node (if available, short-term accurate, long-term consistent, discrete jumps, mostly regular frequency ~1 Hz). Position estimate is used, velocity estimate is not used.
  • ground_truth_odom (type nav_msgs/Odometry) Ground truth odometry (only in simulator and in testing mode, short-term accurate, long-term consistent, no discrete jumps, regular frequency ~10 Hz). Position estimate is used, velocity estimate is not used.
  • mobile_base/sensors/core (type kobuki_msgs/SensorState) State of the Turtlebot2 real robot sensors. Only the bumper state is read.
  • bumper (type gazebo_msgs/ContactsState) Bumper state from simulator.


Published Topics

  • fused_odom (type nav_msgs/Odometry) The main output of the localization: the best estimate of the robot position in frame map_frame (parameter of the launch file).
  • fused_odom_viz (type visualization_msgs/MarkerArray) Visualization helper for displaying the estimated (and ground truth) poses of the markers.
  • fused_odom_path (type nav_msgs/Path) Helper output of the localization algorithm. It represents the whole path in map_frame from the instant the localization node was started. Publishing the whole path is usually inefficient, but it is convenient for debugging and evaluation.

Published Transforms

  • map_frame (node parameter) → odom_frame (node parameter): Best estimate of the robot pose from the fused localization algorithm. Short-term accurate, long-term consistent (if markers are observed regularly), discrete jumps, irregular frequency 10 Hz to 0.01 Hz.
  • map_frame_fast → body_frame: Debug/helper frame that is a composition of the latest map_frame → odom_frame and the latest odom_frame → body_frame. Short-term accurate, long-term inconsistent, discrete jumps, regular frequency ~20 Hz. This frame is published by the separate node fused_map_fast_publisher.
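The composition behind map_frame_fast can be sketched with plain homogeneous 2D transforms (a hypothetical numpy illustration with made-up values, not code from the package):

```python
import numpy as np

def se2(x, y, yaw):
    """Homogeneous 3x3 matrix representing a 2D pose (x, y, yaw)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s, c, y],
                     [0.0, 0.0, 1.0]])

# Hypothetical values: map->odom from the fused localization and
# odom->body from wheel-IMU odometry.
T_map_odom = se2(1.0, 2.0, np.pi / 2)
T_odom_body = se2(0.5, 0.0, 0.0)

# map_frame_fast -> body_frame is simply the composition of the two latest
# transforms, so it updates at the fast odometry rate but inherits the
# long-term drift of the odometry between fused updates.
T_map_body = T_map_odom @ T_odom_body
```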

Required Transforms

  • marker camera frame → body_frame (node parameter): Calibrated static transform between the marker detection camera's optical frame and the body frame of the robot.
  • odom_frame (node parameter) → body_frame (node parameter): Wheel-IMU odometry.

Your Homework

Find places in src/aro_localization/factor_graph.py with comment TODO: ARO homework 3 and fill in the missing pieces so that the fused localization works.

You will need to fill in the residuals for each factor in the graph, and compute the relevant Jacobian entries:

  • Let $x_t$ be robot pose 3-tuple of variables for time instant $t$, $mr$ be relative marker pose 3-tuple of variables, and $z^{\ast}_t$ be a 3-tuple of measurements measured at time $t$.
  • As we solve the task in 2D, each pose variable or measurement is a 3-tuple that has coordinates $x$, $y$ and $yaw$.
  • Put the correct formula in the definition of the res variable in compute_pose_residual(). Further in the text, we will use the following residuals induced by measurements at time $t$:
    • $res^{odom}_t$ is the odometry residual that is a function of $x_t$, $x_{t+1}$ and $z^{odom}_t$ (denoted res_odom in code).
    • $res^{mr}_t$ is the relative marker residual that is a function of $x_t$, $mr$ and $z^{mr}_t$ (denoted res_mr in code).
    • $res^{ma}_t$ is the absolute marker residual that is a function of $x_t$ and $z^{ma}_t$ (denoted res_ma in code).
  • Put the correct formulas in the definitions of J, J1 and Jm in compute_residuals_jacobian(). Each of these is a 3×3 submatrix of the whole factorgraph Jacobian.
    • J is $\frac{\partial res^{odom}_t}{\partial x_t}$ (or analogously $\frac{\partial res^{mr}_t}{\partial x_t}$ or $\frac{\partial res^{ma}_t}{\partial x_t}$).
    • J1 is $\frac{\partial res^{odom}_t}{\partial x_{t+1}}$.
    • Jm is $\frac{\partial res^{mr}_t}{\partial mr}$.
  • You can play with the solver_options defaults, which define the behavior of the nonlinear least-squares solver. However, it should work fine even with the default settings.
  • While developing your solution, you can *temporarily* replace self.compute_residuals_jacobian_opt in the optimize() function with "3-point", which instructs the solver to estimate the Jacobian numerically. This way, you can first tune your residual functions without caring about the Jacobian. Once your residuals are good, implement the analytic Jacobian and put self.compute_residuals_jacobian_opt back. We will check the source code, and solutions using any kind of numerical Jacobian estimation will get 0 points.
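For orientation, here is what a relative-pose residual and its Jacobians can look like in SE(2). This is one common textbook convention, not necessarily the exact convention expected by factor_graph.py, so do not copy it blindly; check it against the comments in the course code.

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to the interval [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def res_odom(x_t, x_t1, z):
    """Residual of a relative-pose (odometry-style) factor in SE(2).

    All arguments are 3-vectors (x, y, yaw); z is the measured motion
    expressed in the frame of x_t. One common convention, possibly
    different from the one used in factor_graph.py.
    """
    c, s = np.cos(x_t[2]), np.sin(x_t[2])
    dx, dy = x_t1[0] - x_t[0], x_t1[1] - x_t[1]
    return np.array([
        c * dx + s * dy - z[0],   # predicted motion in x_t's frame, x part
        -s * dx + c * dy - z[1],  # predicted motion in x_t's frame, y part
        wrap_angle(x_t1[2] - x_t[2] - z[2]),
    ])

def jac_odom(x_t, x_t1):
    """Analytic 3x3 Jacobians of res_odom w.r.t. x_t (J) and x_t1 (J1)."""
    c, s = np.cos(x_t[2]), np.sin(x_t[2])
    dx, dy = x_t1[0] - x_t[0], x_t1[1] - x_t[1]
    J = np.array([
        [-c, -s, -s * dx + c * dy],
        [s, -c, -c * dx - s * dy],
        [0.0, 0.0, -1.0],
    ])
    J1 = np.array([
        [c, s, 0.0],
        [-s, c, 0.0],
        [0.0, 0.0, 1.0],
    ])
    return J, J1
```

A quick sanity check for any analytic Jacobian is to compare it against central finite differences of the residual function; that is a good way to debug your own Jacobians before removing the temporary "3-point" fallback.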


Evaluation

Upload the package aro_localization to Brute. Your factor_graph.py file will be run on a set of static datasets to verify that it works correctly.

  • 1 point for correct implementation of compute_residuals()
  • 2 points for correct implementation of compute_residuals_jacobian()
  • 2 points for localization of relative marker within 0.4 m of its ground truth pose on a static trajectory. Your factor_graph.py will be used in a manner very similar to the example usage in its if __name__ == '__main__' section.
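The temporary jac="3-point" swap described above can be tried on a toy factor graph driven by SciPy's least_squares (a hypothetical standalone example, unrelated to the course API):

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical toy "factor graph": three scalar poses, an anchor on x0,
# two odometry factors and one (slightly inconsistent) loop closure.
z_odom = np.array([1.0, 1.0])  # measured x1 - x0 and x2 - x1
z_loop = 2.1                   # measured x2 - x0

def residuals(x):
    return np.array([
        x[0] - 0.0,                  # anchor: keep x0 at the origin
        (x[1] - x[0]) - z_odom[0],   # odometry factor x0 -> x1
        (x[2] - x[1]) - z_odom[1],   # odometry factor x1 -> x2
        (x[2] - x[0]) - z_loop,      # loop-closure factor x0 -> x2
    ])

def jacobian(x):
    # Analytic Jacobian: one row per factor, one column per pose variable.
    return np.array([
        [1.0, 0.0, 0.0],
        [-1.0, 1.0, 0.0],
        [0.0, -1.0, 1.0],
        [-1.0, 0.0, 1.0],
    ])

x_init = np.zeros(3)
sol_analytic = least_squares(residuals, x_init, jac=jacobian)
sol_numeric = least_squares(residuals, x_init, jac='3-point')
# Both converge to the same minimum; the analytic Jacobian is just cheaper,
# and an analytic Jacobian is what the homework requires in the end.
```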
courses/aro/tutorials/homework03.txt · Last modified: 2023/03/22 15:36 by peckama2