Improving Keypoints Tracking With Machine-Learned Features in Event-Camera-Based Visual Odometry
Chiodini, S.; Trevisanuto, G.; Bettanini, C.; Colombatti, G.; Pertile, M.
2024
Abstract
This paper introduces a machine-learned feature detection method tailored to event-camera-based visual odometry for reconstructing the trajectories of unmanned aerial vehicles. The proposed approach leverages machine-learned features to improve the precision of trajectory reconstruction. Unlike conventional visual odometry methods, which often struggle in low-light and high-speed scenarios, the event-camera-based method addresses these challenges by detecting and processing only changes in the visual scene. The machine-learned features are designed to capture the distinctive attributes of event-camera data, thereby refining the accuracy of trajectory reconstruction. The inference pipeline consists of a module applied twice in sequence, each instance comprising a Squeeze-and-Excite block and a ConvLSTM block with a residual connection; this is followed by a final convolutional layer that outputs trajectory information for corners as heatmap sequences. In the experimental phase, image sequences were gathered with an event camera in outdoor settings for both training and testing.
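The pipeline described in the abstract (two stacked modules, each a Squeeze-and-Excite block plus a ConvLSTM with a residual connection, followed by a convolutional head emitting corner heatmaps) can be sketched as below. This is a minimal PyTorch illustration of that structure, not the authors' implementation: the channel width, kernel sizes, single-channel event-frame input, and single-heatmap output are all assumptions chosen for compactness, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel attention: global average pool, bottleneck MLP, sigmoid gating."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # per-channel weights (B, C)
        return x * w[:, :, None, None]

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: all four gates from one convolution."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.conv(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class SEConvLSTMBlock(nn.Module):
    """One module of the pipeline: SE block, ConvLSTM, residual connection."""
    def __init__(self, ch):
        super().__init__()
        self.se = SqueezeExcite(ch)
        self.cell = ConvLSTMCell(ch, ch)

    def forward(self, seq):                     # seq: (B, T, C, H, W)
        B, T, C, H, W = seq.shape
        h = seq.new_zeros(B, C, H, W)
        c = seq.new_zeros(B, C, H, W)
        out = []
        for t in range(T):
            x = self.se(seq[:, t])
            h, c = self.cell(x, (h, c))
            out.append(h + x)                   # residual connection
        return torch.stack(out, dim=1)

class CornerHeatmapNet(nn.Module):
    """Two stacked SE+ConvLSTM modules, then a conv layer producing heatmaps."""
    def __init__(self, in_ch=1, ch=16):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, ch, 3, padding=1)  # lift input to ch channels
        self.block1 = SEConvLSTMBlock(ch)
        self.block2 = SEConvLSTMBlock(ch)
        self.head = nn.Conv2d(ch, 1, 1)         # one heatmap per frame

    def forward(self, seq):                     # seq: (B, T, in_ch, H, W)
        B, T = seq.shape[:2]
        x = self.stem(seq.flatten(0, 1)).unflatten(0, (B, T))
        x = self.block2(self.block1(x))
        return self.head(x.flatten(0, 1)).unflatten(0, (B, T))
```

The output has the same spatial and temporal extent as the input, so each frame of the event-camera sequence yields one heatmap whose peaks mark tracked corners over time.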