Artificial intelligence in insanity evaluation. Potential opportunities and current challenges
Scarpazza, Cristina; Zangrossi, Andrea
2025
Abstract
Formulating a scientific opinion on whether an individual who committed a crime should be held responsible for his/her actions or should instead be considered not responsible by reason of insanity is very difficult. Indeed, the forensic psychopathological decision on insanity is highly prone to error and affected by human cognitive biases, resulting in low inter-rater reliability. In this context, artificial intelligence could be extremely useful for improving the inter-subjectivity of insanity evaluation. In this paper, we discuss the possible applications of artificial intelligence in this field, as well as the challenges and pitfalls that hamper the effective implementation of AI in insanity evaluation. In particular, thus far it is only possible to apply supervised algorithms, without knowing what the ground truth is or which data should be used to train and test the algorithms. In addition, it is not known what level of algorithmic accuracy is sufficient to support partial or total insanity, nor where the boundaries between sanity and partial or total insanity lie. Finally, the ethical aspects have not been sufficiently investigated. We conclude that these pitfalls should be resolved before AI can be safely and reliably applied in criminal trials.
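To make the methodological point concrete, the sketch below (not taken from the paper) shows what the standard supervised workflow the abstract refers to looks like: splitting labelled cases into training and test sets, fitting a classifier, and reporting held-out accuracy. Everything in it is hypothetical: the features, the responsible/not-responsible labels standing in for a ground truth that, as the authors argue, does not currently exist, and the accuracy figure, which has no agreed forensic meaning.

# Hypothetical sketch: how a supervised classifier for insanity evaluation
# would be trained and scored IF a defensible ground truth existed.
# All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder feature matrix: e.g. clinical or neuroimaging-derived scores
# for 200 hypothetical evaluees, 10 features each.
X = rng.normal(size=(200, 10))

# Placeholder labels: 0 = criminally responsible, 1 = not responsible by
# reason of insanity. No uncontested ground truth of this kind exists,
# which is the core problem the paper discusses.
y = rng.integers(0, 2, size=200)

# Standard supervised workflow: split, fit, evaluate on held-out cases.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))

# Even a high accuracy here says nothing about whether it is "sufficient"
# to support an opinion of partial or total insanity in court.
print(f"Held-out accuracy on synthetic data: {acc:.2f}")

The sketch only illustrates the mechanics; the paper's point is that every input to this pipeline (labels, training data, and the accuracy threshold for legal decisions) is currently undefined for insanity evaluation.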