====== Lab 06 ======

During this lab, you should learn how to work with factorgraph-based SLAM and how to run it in the simulator.

/* Lab content will be updated before the 6th week of the semester. */

**Lecturer:** Martin Pecka ([[mailto:peckama2@fel.cvut.cz?subject=ARO Lab 6|peckama2@fel.cvut.cz]])

**Relevant lectures:** {{ :courses:aro:lectures:00_localization_mle.pdf |}}, {{ :courses:aro:lectures:00_localization_se2.pdf |}}, {{ :courses:aro:lectures:00_kf.pdf |}}, {{ :courses:aro:lectures:01_ekf.pdf |}}.

===== Factorgraph SLAM in action =====

https://www.youtube.com/live/YCE1Aj0k1UA?feature=share&t=2045

The whole [[https://theairlab.org/tartanslamseries2/|Tartan SLAM Series]] is great study material for those who want to dive deep into how state-of-the-art 3D SLAM is done in robotics.

===== KF vs Factorgraph SLAM =====

What is the difference between (E)KF SLAM and factorgraph SLAM?

^ ^ (E)KF ^ Factorgraph ^
| **State** | Latest robot position, relative marker positions | All robot positions, relative marker positions |
| **Memory Requirements** | Constant in trajectory length, linear in #markers | Linear in trajectory length, linear in #markers |
| **Loop Closures** | Only improve the current position estimate and the markers | Improve the whole trajectory estimate and the markers |

{{ :courses:aro:tutorials:01_ekf.gif |}}

==== Lab Task ====

Download {{ :courses:aro:tutorials:ekf_slam_simple.py |}} and {{ :courses:aro:tutorials:drw_tools.py |}} and examine the files. Find the places commented with ''PLAY HERE'' and try to find a way to break the EKF optimization. You can also edit other parts of the code.

By replacing ''opt = ekf'' with ''opt = fg'', you instruct the script to do the estimation with a factorgraph instead. What are the differences? Can you break the factorgraph?

===== Computing Jacobians for factorgraphs =====

{{ :courses:aro:tutorials:aro_hw4_2024.pdf |}}

===== RUR Challenge Worlds =====

What would the residuals and Jacobian entries look like? (A minimal sketch of two of these factors follows the list.)

  * Reasonable factors:
    * Global absolute localization (GNSS, Vicon, RFID): $res_t^{gps} = ?$ /* ||x_t - z_t^{gps}|| */
      * 2-DOF, 3-DOF
    * Compass: $res_t^{compass} = ?$
    * Absolute pose priors: $res_t^{prior} = ?$ /* ||x_t - x_t^{prior}|| */
    * Interpolating a marker measurement between two poses for better precision: $res_t^{mri} = ?$
    * Motion model (e.g. differential drive): $res_t^{motion} = ?$ /* ||g(x_{t-1}, u_t) - x_t|| */
      * How do you construct the model if $u_t$ are wheel velocities?
    * Loop closures: $res_t^{loop} = ?$ /* ||x_i - x_j|| */
    * Velocity measurements in the body frame: $res_t^{vel} = ?$
    * UWB localization (radio beacons with distance measurement): $res_t^{uwb} = ?$
    * UWB relative marker: $res_t^{uwbm} = ?$
    * Bluetooth detection (radio beacons without distance measurement): $res_t^{bt} = ?$
      * This introduces inequality constraints, which are generally not handled very well.
      * You can use a [[https://arxiv.org/pdf/1701.03077v1.pdf|robust loss]] to approximate the inequality.
      * Or you can pass the inequality bounds to the ''bounds'' parameter of ''least_squares''.
    * Marker seen as an [[https://github.com/ctu-mrs/uvdar_core|LED in a camera image]] (its distance cannot be measured): $res_t^{led} = ?$
  * Silly factors (RUR Challenge). Figure out a sensor that could use them.
    * Slow light propagation: marker detections are delayed.
    * Gravity field changing in space (can you try to map it?).
    * Gravity field constant in space but changing in time according to a known function.
    * The universe randomly switches left-right symmetry (i.e. mirrors the world along an axis).
    * The robot is on a Little Prince planet with a very small diameter. The circumference of the planet is so small that the robot does not know how many times it has circled the planet between receiving GPS measurements.
    * The robot inspects pipelines, drags a "tape measure" behind it, and can read how much of the tape has been unrolled.
    * Friction (and thus the efficiency of control commands) depends on the orientation of the robot w.r.t. the world.
    * Weird gravity acting to the side and a sensor measuring a wild function of the robot state (Ondřej Matoušek, matouon5).
    * Timestone and Gravitime force affecting the flux of time in different parts of space (Matěj Trnka, trnkamat).
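To make the residual bookkeeping concrete, here is a minimal, self-contained sketch (not part of the lab code) of a 2-DOF factorgraph with only two kinds of factors: odometry between consecutive positions and sparse GPS-like absolute fixes. All variable names, noise values and the toy trajectory are invented for illustration; a full SLAM problem additionally contains marker variables and the other factors listed above. The point is the pattern: stack all factor residuals into one vector, weight each by its measurement sigma, and hand the stacked vector to ''least_squares''.

<code python>
# Toy 2-DOF factorgraph: the state is a stack of T robot positions (x, y).
# Factors: odometry between consecutive positions and sparse GPS-like fixes.
# NOTE: illustrative sketch only; names and noise values are made up.
import numpy as np
from scipy.optimize import least_squares

T = 20                                                       # number of robot poses
rng = np.random.default_rng(0)

# Ground-truth trajectory and simulated measurements.
gt = np.cumsum(np.ones((T, 2)) * [1.0, 0.5], axis=0)        # straight-line motion
u_odom = np.diff(gt, axis=0) + rng.normal(0, 0.05, (T - 1, 2))   # noisy odometry
gps_idx = np.array([0, 9, 19])                               # GPS only at a few poses
z_gps = gt[gps_idx] + rng.normal(0, 0.3, (len(gps_idx), 2))  # noisy absolute fixes

SIGMA_ODOM, SIGMA_GPS = 0.05, 0.3

def residuals(flat_state):
    """Stack all factor residuals, each divided by its measurement sigma."""
    x = flat_state.reshape(T, 2)
    res_odom = ((x[1:] - x[:-1]) - u_odom) / SIGMA_ODOM      # res_t^{motion}
    res_gps = (x[gps_idx] - z_gps) / SIGMA_GPS               # res_t^{gps}
    return np.concatenate([res_odom.ravel(), res_gps.ravel()])

# Initial guess: dead reckoning from the first GPS fix and the odometry.
x0 = np.vstack([z_gps[0], z_gps[0] + np.cumsum(u_odom, axis=0)])

sol = least_squares(residuals, x0.ravel())                   # nonlinear least squares
x_est = sol.x.reshape(T, 2)
print("mean position error:", np.linalg.norm(x_est - gt, axis=1).mean())
</code>

With no ''jac'' argument, ''least_squares'' estimates the Jacobian numerically, which is fine for a toy problem of this size; for larger graphs you will want to supply the (sparse) analytic Jacobian via the ''jac'' parameter, which is exactly what the Jacobian exercise above is about.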
===== Apriltags =====

{{:courses:aro:tutorials:tagformats_web.png?400|}}

Apriltags are visual markers designed specifically so that their pose can be estimated easily in full 6 DOF (x, y, z, roll, pitch, yaw). This makes them very convenient as absolute localization markers. Moreover, they are easily distinguishable from each other, so the detector can output not only the 6-DOF pose of each marker, but also its unique ID.

Here is an example of a tag localized by the ROS node from the ''apriltag_ros'' package (a minimal subscriber sketch is shown at the end of this page):

{{:courses:aro:tutorials:rviz_screenshot_2023_03_13-13_55_19.png?400|}}

===== Homework 4 assignment =====

Read and try to understand the assignment of homework [[homework04|HW4]].
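For reference, below is a minimal sketch of reading the ''apriltag_ros'' detections in Python. It assumes a ROS 1 / ''rospy'' setup where the detector publishes on its default ''/tag_detections'' topic; adapt the topic and frame names to whatever your workspace actually uses.

<code python>
# Minimal example of reading apriltag_ros detections (ROS 1 / rospy).
# NOTE: illustrative sketch; /tag_detections is the apriltag_ros default
# topic and may be remapped in your launch files.
import rospy
from apriltag_ros.msg import AprilTagDetectionArray


def on_detections(msg):
    # Each detection carries the tag ID(s) and a 6-DOF pose with covariance,
    # expressed in the frame given by msg.header.frame_id (the camera frame).
    for det in msg.detections:
        tag_id = det.id[0]                  # single-tag detections have one ID
        p = det.pose.pose.pose.position     # PoseWithCovarianceStamped -> Pose
        rospy.loginfo("tag %d at (%.2f, %.2f, %.2f) in %s",
                      tag_id, p.x, p.y, p.z, msg.header.frame_id)


if __name__ == "__main__":
    rospy.init_node("apriltag_listener")
    rospy.Subscriber("/tag_detections", AprilTagDetectionArray, on_detections)
    rospy.spin()
</code>

In a factorgraph, each such detection can then be turned into a marker factor: transform the measured tag pose from the camera frame into the robot frame and compare it with the predicted relative pose of that marker in the current state estimate.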