Collective Risk Allocation and Restorative Justice in the Age of Artificial Intelligence

Maria Carla Canato
2025

Abstract

This paper reflects on the ‘responsibility gap’ that artificial intelligence (AI) introduces into the criminal law context. Starting from this critical point, it outlines potential pathways beyond traditional punitive models, placing particular emphasis on the relevance of restorative approaches. The first part highlights the challenges that AI poses to the applicability of conventional criminal law institutions. The analysis then shifts to the mechanisms through which the risk associated with the use of AI is distributed among the various actors involved in its design and deployment, underlining not only the importance of accountability but, above all, the systemic and collective nature of ‘AI-related risk’. Within this framework, the concluding section explores restorative justice as a possible tool to address the consequences of algorithmic harm, emphasising mechanisms grounded in dialogue, trust-building, and social responsibility. The core thesis advanced is the need to rethink the notions of culpability and responsibility in the AI era, moving beyond the primacy of the individual toward a more systemic and shared perspective. In this sense, the paper aims to make a critical contribution to the development of a justice model that addresses new digital challenges through the lens of distributed and collective responsibility, while ensuring that the protection of fundamental rights, particularly in the criminal domain, continues to affirm its inherently human dimension.
Contemporary Challenges: The Global Crime and Security Journal, 2025


Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3570058