Enhancing Autonomous Orchard Navigation: A Real-Time Convolutional Neural Network-Based Obstacle Classification System for Distinguishing ‘Real’ and ‘Fake’ Obstacles in Agricultural Robotics
Marinello F.
2025
Abstract
Autonomous navigation in agricultural environments requires precise obstacle classification to ensure collision-free movement. This study proposes a convolutional neural network (CNN)-based model designed to enhance obstacle classification for agricultural robots, particularly in orchards. Building upon a previously developed YOLOv8n-based real-time detection system, the model incorporates Ghost Modules and Squeeze-and-Excitation (SE) blocks to enhance feature extraction while maintaining computational efficiency. Obstacles are categorized as “Real”—those that physically impact navigation, such as tree trunks and persons—and “Fake”—those that do not, such as tall weeds and tree branches—allowing for precise navigation decisions. The model was trained on separate orchard and campus datasets, fine-tuned using Hyperband optimization, and evaluated on an external test set to assess generalization to unseen obstacles. The model’s robustness was tested under varied lighting conditions, including low-light scenarios, to ensure real-world applicability. Computational efficiency was analyzed based on inference speed, memory consumption, and hardware requirements. Comparative analysis against state-of-the-art classification models (VGG16, ResNet50, MobileNetV3, DenseNet121, EfficientNetB0, and InceptionV3) confirmed the proposed model’s superior precision, recall, and F1-score, particularly in complex orchard scenarios. The model maintained strong generalization across diverse environmental conditions, including varying illumination and previously unseen obstacles. Furthermore, computational analysis revealed that the orchard-combined model achieved the highest inference speed at 2.31 FPS while maintaining a strong balance between accuracy and efficiency. When deployed in real time, the model achieved 95.0% classification accuracy in orchards and 92.0% in campus environments.
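The channel-recalibration idea behind the SE blocks mentioned above can be sketched in a few lines. This is a minimal pure-Python illustration of the general squeeze-and-excitation mechanism, not the paper's implementation: the weight names (`w1`, `b1`, `w2`, `b2`), the layer sizes, and the list-of-lists tensor representation are all illustrative assumptions.

```python
import math

def se_block(feature_maps, w1, b1, w2, b2):
    """Squeeze-and-Excitation channel recalibration (illustrative sketch).

    feature_maps: list of C channels, each an H x W list of lists.
    w1, b1: reduction FC layer (C -> C/r); w2, b2: expansion FC layer (C/r -> C).
    """
    # Squeeze: global average pooling collapses each channel to one scalar
    squeezed = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                for ch in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid yields one scale per channel
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed)) + b)
              for row, b in zip(w1, b1)]
    scales = [1.0 / (1.0 + math.exp(-(sum(w * h for w, h in zip(row, hidden)) + b)))
              for row, b in zip(w2, b2)]
    # Recalibrate: reweight each channel by its learned importance
    return [[[v * s for v in row] for row in ch]
            for ch, s in zip(feature_maps, scales)]
```

Because the excitation path costs only two small fully connected layers per block, this recalibration adds very little overhead, which is consistent with the paper's goal of pairing SE blocks with lightweight Ghost Modules.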
The real-time system demonstrated a false positive rate of 8.0% in the campus environment and 2.0% in the orchard, with a consistent false negative rate of 8.0% across both environments. These results validate the model’s effectiveness for real-time obstacle differentiation in agricultural settings. Its strong generalization, robustness to unseen obstacles, and computational efficiency make it well-suited for deployment in precision agriculture. Future work will focus on enhancing inference speed, improving performance under occlusion, and expanding dataset diversity to further strengthen real-world applicability.
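The reported false positive and false negative rates relate to the precision, recall, and F1-score used in the comparative analysis through the standard confusion-matrix identities. The helper below is a generic sketch of that arithmetic; the counts in the usage line are illustrative per-100-frame figures chosen to mirror the campus rates, not the paper's raw data.

```python
def classification_metrics(tp, fp, fn):
    """Precision, recall, and F1-score from confusion-matrix counts."""
    precision = tp / (tp + fp)   # fraction of flagged obstacles that were correct
    recall = tp / (tp + fn)      # fraction of actual obstacles that were flagged
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts per 100 frames (not the paper's raw data):
# 92 true positives, 8 false positives, 8 false negatives
p, r, f1 = classification_metrics(92, 8, 8)  # each metric is approximately 0.92
```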
File: agriculture-15-00827.pdf (open access)
Type: Published (Publisher's Version of Record)
License: Creative Commons
Size: 24.06 MB
Format: Adobe PDF