Automated Archaeological Feature Detection Using Deep Learning on Optical UAV Imagery: Preliminary Results

Squitieri, Andrea (Investigation)
2022

Abstract

This communication article issues a call for unmanned aerial vehicle (UAV) users in archaeology to make imagery data more publicly available, while presenting a new application that facilitates the use of a common deep learning algorithm for instance segmentation, the mask region-based convolutional neural network (Mask R-CNN). The intent is to provide specialists with a GUI-based tool that can annotate imagery for training neural network models, enable the training and development of segmentation models, and classify imagery data to facilitate automatic discovery of features. The tool is generic and can be used in a variety of settings, although it was tested using datasets from the United Arab Emirates (UAE), Oman, Iran, Iraq, and Jordan. Current outputs suggest that trained models can help identify ruined structures, that is, structures such as burials, exposed building ruins, and other surface features in some degraded state. Qanats, ancient underground channels with surface access holes, and mounded sites, which have distinctive hill-shaped profiles, are also identified. Other classes are possible, and the tool lets users build their own training datasets and feature identification classes. To improve accuracy, we strongly urge greater publication of UAV imagery data by projects, using open journal publications and public repositories. This is already done in other fields working with UAV data and is now needed in heritage and archaeology. Our tool is provided as part of the outputs.
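The workflow the abstract describes (annotate, train a Mask R-CNN segmentation model, then classify UAV imagery to auto-discover features) can be illustrated with a short sketch. This is not the published tool: it is a minimal example assuming a PyTorch/torchvision setup, and the class list, checkpoint path, and file names are hypothetical placeholders drawn from the feature types named above.

```python
# Minimal sketch (not the authors' application) of Mask R-CNN instance
# segmentation on a UAV image tile using torchvision. Class names, the
# checkpoint path, and the image path are hypothetical placeholders.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Hypothetical feature classes based on those named in the abstract.
CLASSES = ["background", "ruined_structure", "qanat", "mounded_site"]

def build_model(num_classes: int) -> torch.nn.Module:
    """Start from a COCO-pretrained Mask R-CNN and replace its box and
    mask heads so it predicts the archaeological feature classes."""
    model = maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)
    return model

def detect_features(model: torch.nn.Module, image_path: str, threshold: float = 0.5):
    """Run inference on one UAV orthophoto tile and return
    (label, score, binary mask) triples above the confidence threshold."""
    model.eval()
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]
    results = []
    for label, score, mask in zip(output["labels"], output["scores"], output["masks"]):
        if score >= threshold:
            results.append((CLASSES[int(label)], float(score), mask.squeeze(0) > 0.5))
    return results

# Usage (paths are placeholders; a fine-tuned checkpoint would come from
# training on annotated UAV imagery):
# model = build_model(num_classes=len(CLASSES))
# model.load_state_dict(torch.load("finetuned_maskrcnn.pt"))
# detections = detect_features(model, "uav_tile_001.jpg")
```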
Files in this record:
File: remotesensing-14-00553.pdf
Access: open access
Type: Published (Publisher's Version of Record)
License: Creative Commons
Size: 9.39 MB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3552839
Citations
  • PubMed Central: not available
  • Scopus: 29
  • Web of Science: 23
  • OpenAlex: not available