
Viewpoint Selection for Rover Relative Pose Estimation Driven by Minimal Uncertainty Criteria

Chiodini S.; Giubilato R.; Pertile M.; Debei S.
2021

Abstract

Pose estimation is critical for mobile robots to fulfill various tasks, such as path following or mapping the environment. This is usually accomplished by simultaneous localization and mapping (SLAM). However, computationally constrained systems, such as planetary rovers, rely on less intensive guidance, navigation, and control (GNC) solutions generally derived solely from visual odometry (VO), wheel odometry, and the onboard inertial measurement unit. Although these provide adequate localization performance, the drift accumulated over time is not compensated by the loop-closing capabilities typical of SLAM. Rovers usually send surface images to the ground station, where they are used for multiple purposes, such as scientific and operational planning; the number of images is constrained by the communication bandwidth and power budget. This set of transmitted images can also be used to correct the robot's trajectory in an off-line manner. In this work, a solution is presented to the problem of selecting the optimal set of viewpoints along the planned path from which to capture and transmit images, such that it 1) guarantees accurate trajectory correction and 2) complies with the maximum number of images that can be transmitted to ground control given the available data budget. To this end, we propose: 1) a delocalized/decentralized sensor fusion approach based on pose graph optimization and structure from motion and 2) a strategy to select a minimal set of viewpoints along the trajectory that, given a tentative geometry of the environment and the global path the rover must follow, minimizes the uncertainty of all the robot poses. Optimal camera viewpoint positions are selected as a function of the planned trajectory, the approximate scene geometry, and the maximum transmittable number of images.
The proposed method has been tested on a dataset of stereo images collected in a representative Martian environment, the ALTEC Mars Terrain Simulator (MTS), with the ExoMars testing rover (ExoTeR, property of the European Space Agency, Paris, France). Ground truth for the rover trajectory was provided with millimetric accuracy by a motion capture (MC) system.
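To give a rough flavor of budget-constrained viewpoint selection under an uncertainty criterion, the sketch below greedily picks a fixed number of viewpoints along a trajectory so as to minimize total pose uncertainty in a toy one-dimensional, linear-Gaussian pose graph. This is an illustrative assumption, not the paper's actual algorithm: all factor weights, the chain topology, and the greedy strategy are hypothetical choices made only to show the general idea.

```python
import numpy as np

def total_uncertainty(n, selected, w_odo=1.0, w_abs=10.0, w_prior=1e3):
    """Trace of the pose covariance for a 1-D chain pose graph.

    Odometry factors link consecutive poses; each selected viewpoint
    adds an absolute factor (standing in for the off-line ground-based
    correction). A strong prior anchors the first pose so the
    information matrix H is invertible.
    """
    H = np.zeros((n, n))
    H[0, 0] += w_prior                      # anchor the start pose
    for i in range(n - 1):                  # consecutive odometry factors
        H[i, i] += w_odo
        H[i + 1, i + 1] += w_odo
        H[i, i + 1] -= w_odo
        H[i + 1, i] -= w_odo
    for i in selected:                      # image-based absolute factors
        H[i, i] += w_abs
    return np.trace(np.linalg.inv(H))       # covariance = H^-1

def greedy_select(n, budget):
    """Greedily add the viewpoint that most reduces total uncertainty,
    until the image budget is exhausted."""
    selected = []
    for _ in range(budget):
        candidates = [i for i in range(n) if i not in selected]
        best = min(candidates,
                   key=lambda i: total_uncertainty(n, selected + [i]))
        selected.append(best)
    return sorted(selected)

print(greedy_select(20, 3))
```

Under this toy model, the greedy picks tend to spread along the chain, since uncertainty grows with distance from the last constrained pose; a real implementation would work on SE(3) poses with covariances derived from the scene geometry and planned trajectory.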
Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3408905