Omnidirectional dense large-scale mapping and navigation based on meaningful triangulation
Alberto Pretto, Emanuele Menegatti, Enrico Pagello
2011
Abstract
In this work, we propose a robust and efficient method to build dense 3D maps using only the images acquired by an omnidirectional camera. The map contains comprehensive information about both the structure and the appearance of the environment, and it is also well suited for large-scale environments. We start from the assumption that the surrounding environment (the scene) forms a piecewise smooth surface, represented by a triangle mesh. Without any odometry information, our system infers the structure of the environment along with the ego-motion of the camera by robustly tracking the projection of this surface in the omnidirectional image. The key idea is to initialize the triangle mesh subdivision with a constrained Delaunay triangulation built from a set of point features and edgelet features extracted from the image. In this way, we take into account both the corners and the edges of the scene imaged by the camera, constrained by the topology of the triangulation to improve the stability of the tracking process. Both motion and structure parameters are estimated with a direct method inside an optimization framework that takes the topology of the subdivision into account in a robust and efficient way. We successfully tested our system in a challenging urban scenario, along a large loop, using an omnidirectional camera mounted on the roof of a car.
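To make the key idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of how a constrained Delaunay triangulation could be assembled from already-extracted point features and edgelet endpoints, using the third-party Python `triangle` package (a wrapper around Shewchuk's Triangle). The feature extraction step and the function name are assumptions for illustration only.

```python
# Illustrative sketch: constrained Delaunay triangulation from image features.
# Assumes point features and edgelet endpoint pairs are already extracted
# (e.g., by a corner detector and an edge detector); not the paper's code.
import numpy as np
import triangle  # pip install triangle


def build_constrained_triangulation(points, edgelets):
    """points: (N, 2) array of 2D feature locations in the image plane.
    edgelets: list of (p0, p1) endpoint pairs from detected image edges.
    Edgelet endpoints are appended to the vertex set and the connecting
    segments are passed as constraints, so each edgelet is preserved as a
    triangle edge in the resulting mesh."""
    vertices = [tuple(p) for p in points]
    segments = []
    for p0, p1 in edgelets:
        for p in (tuple(p0), tuple(p1)):
            if p not in vertices:
                vertices.append(p)
        segments.append((vertices.index(tuple(p0)), vertices.index(tuple(p1))))

    mesh = triangle.triangulate(
        {"vertices": np.asarray(vertices, dtype=float),
         "segments": np.asarray(segments, dtype=int)},
        "p")  # 'p': treat the input as a planar straight-line graph,
              # i.e., honour the edgelet segments as triangulation constraints
    return mesh["vertices"], mesh["triangles"]
```

In a pipeline like the one described above, such a triangulation would serve only as the initial guess of the mesh topology; the per-vertex depths and the camera motion would then be refined by the direct photometric optimization.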