Improving Keypoints Tracking With Machine-Learned Features in Event-Camera-Based Visual Odometry

Chiodini S.; Trevisanuto G.; Bettanini C.; Colombatti G.; Pertile M.
2024

Abstract

This paper introduces a machine-learned feature detection method tailored for event-camera-based visual odometry, used to reconstruct the trajectories of unmanned aerial vehicles. The proposed approach leverages machine-learned features to improve the precision of trajectory reconstruction. Unlike conventional visual odometry methods, which often struggle in low-light and high-speed scenarios, the event-camera-based method addresses these challenges by detecting and processing only changes in the visual scene. The machine-learned features are designed to capture the distinctive attributes of event-camera data, thereby refining the accuracy of trajectory reconstruction. The inference pipeline consists of a module, iterated twice sequentially, that comprises a Squeeze-and-Excite block and a ConvLSTM block with a residual connection; this is followed by a final convolutional layer that outputs corner trajectory information as heatmap sequences. In the experimental phase, a series of images was gathered with an event camera in outdoor settings for both training and testing purposes.
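The abstract describes the pipeline only at block level. The sketch below is one plausible PyTorch rendering of that description, assuming an event-frame input, two sequential SE+ConvLSTM stages with residual connections, and a final convolution that emits per-frame corner heatmaps. All class names, channel widths, kernel sizes, and tensor shapes are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class SqueezeExcite(nn.Module):
    """Channel-wise Squeeze-and-Excite gating."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                         # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))           # squeeze: global average pooling
        return x * w.unsqueeze(-1).unsqueeze(-1)  # excite: per-channel rescaling


class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell; all four gates come from a single convolution."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(2 * channels, 4 * channels,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x, h, c):
        i, f, o, g = self.conv(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = f.sigmoid() * c + i.sigmoid() * g.tanh()
        h = o.sigmoid() * c.tanh()
        return h, c


class SEConvLSTMBlock(nn.Module):
    """One pipeline stage: SE gating, ConvLSTM, and a residual connection."""
    def __init__(self, channels):
        super().__init__()
        self.se = SqueezeExcite(channels)
        self.lstm = ConvLSTMCell(channels)

    def forward(self, x, state):
        h, c = self.lstm(self.se(x), *state)
        return x + h, (h, c)                      # residual around the block


class CornerHeatmapNet(nn.Module):
    """Two SE+ConvLSTM stages followed by a conv head that emits corner heatmaps."""
    def __init__(self, in_ch=2, hidden=32, out_ch=1):
        super().__init__()
        self.hidden = hidden
        self.stem = nn.Conv2d(in_ch, hidden, 3, padding=1)
        self.blocks = nn.ModuleList(SEConvLSTMBlock(hidden) for _ in range(2))
        self.head = nn.Conv2d(hidden, out_ch, 1)

    def forward(self, seq):                       # seq: (B, T, C, H, W) event frames
        B, T, _, H, W = seq.shape
        states = [(seq.new_zeros(B, self.hidden, H, W),
                   seq.new_zeros(B, self.hidden, H, W)) for _ in self.blocks]
        heatmaps = []
        for t in range(T):
            x = self.stem(seq[:, t])
            for i, blk in enumerate(self.blocks):
                x, states[i] = blk(x, states[i])
            heatmaps.append(self.head(x))
        return torch.stack(heatmaps, dim=1)       # (B, T, out_ch, H, W) heatmap sequence
```

Under these assumptions, a call such as CornerHeatmapNet()(torch.randn(1, 8, 2, 128, 128)) returns a (1, 8, 1, 128, 128) heatmap sequence, one corner-likelihood map per input frame.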
2024 IEEE International Workshop on Metrology for AeroSpace, MetroAeroSpace 2024 - Proceedings
11th IEEE International Workshop on Metrology for AeroSpace, MetroAeroSpace 2024

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3537767