Automatic livestock body measurement based on keypoint detection with multiple depth cameras
Marinello F.;Pezzuolo A.
2022
Abstract
Body measurement of livestock is an important task in precision livestock farming. To reduce the cost of manual measurement, a growing number of studies have proposed non-contact body measurement methods using depth cameras. However, these methods use only 3D data to construct geometric features for body measurement, an approach that is prone to error on incomplete and noisy point clouds. This paper introduces a 2D-3D fusion body measurement method developed to exploit the full potential of the raw scanned data, including high-resolution RGB images and 3D spatial information. Keypoints for body measurement are detected on RGB images with a deep learning model and then projected onto the surface of the livestock point cloud using the camera's intrinsic parameters. By combining interpolation with a pose normalization method, 9 body measurements of cattle and 5 body measurements of pigs (covering body lengths, body widths, body heights, and heart girth) are obtained. To verify the feasibility of this method, experiments are performed on data from 103 cattle and 13 pigs. Compared with manual measurements, the mean absolute percentage errors (MAPEs) of 5 cattle body measurements and 1 pig body measurement are below 10%. Body widths are more susceptible to non-standard posture: the MAPEs of 2 cattle body widths exceed 20%, and the MAPE of 1 pig body width reaches 30%. Compared with a previous girth measurement method, the presented method is more accurate and robust on the cattle dataset. The same approach can be adapted for non-contact body measurement of other livestock species.
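The 2D-3D fusion step described in the abstract rests on standard pinhole-camera geometry: a keypoint detected at pixel (u, v) with a depth reading d is back-projected into 3D camera coordinates via the camera intrinsics, and accuracy is then scored with MAPE. A minimal sketch of both ideas, assuming the pinhole model; the function names and intrinsic values are illustrative, not the paper's implementation:

```python
import numpy as np

def backproject_keypoint(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (metres) into 3D camera
    coordinates using pinhole intrinsics: focal lengths (fx, fy) and
    principal point (cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def mape(predicted, measured):
    """Mean absolute percentage error, the accuracy metric reported above."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - measured) / np.abs(measured))

# Illustrative values: a keypoint at (400, 300), 2 m from the camera,
# with intrinsics typical of a 640x480 depth sensor.
point_3d = backproject_keypoint(400, 300, depth=2.0,
                                fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```

In practice the back-projected point is snapped to the nearest point on the scanned surface, since the depth reading and the RGB keypoint come from different sensors; that correspondence step is what the paper's interpolation and pose normalization handle.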
File | Description | Type | License | Size | Format
---|---|---|---|---|---
1-s2.0-S0168169922003763-main (2).pdf | Article | Published (publisher's version) | Private access - not public | 10.68 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.