Trepan reloaded: A knowledge-driven approach to explaining black-box models

Confalonieri R.
2020

Abstract

Explainability in Artificial Intelligence has been revived as a topic of active research by the need to demonstrate safety to users and gain their trust in the 'how' and 'why' of automated decision-making. Whilst a plethora of approaches have been developed for post-hoc explainability, only a few focus on how to use domain knowledge and how it influences the understandability of global explanations from the users' perspective. In this paper, we show how to use ontologies to create more understandable post-hoc explanations of machine learning models. In particular, we build on TREPAN, an algorithm that explains artificial neural networks by means of decision trees, and we extend it to TREPAN Reloaded by including ontologies that model domain knowledge in the process of generating explanations. We present the results of a user study that measures the understandability of decision trees through time and accuracy of responses, as well as reported user confidence and understandability, in relation to the syntactic complexity of the trees. The user study considers domains where explanations are critical, namely finance and medicine. The results show that decision trees generated with our algorithm, which takes domain knowledge into account, are more understandable than those generated by standard TREPAN without the use of ontologies.
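
As a rough illustration of the mechanism the abstract describes, the sketch below trains a small neural network as the black box, fits a decision-tree surrogate on the network's predictions (the TREPAN idea of mimicking the model rather than the raw labels), and biases the surrogate towards features associated with more general domain concepts. The `ontology_generality` scores and the top-k feature filtering are illustrative assumptions only, not the paper's API; the actual TREPAN Reloaded algorithm integrates the ontology directly into split evaluation during tree extraction.

```python
# Hypothetical sketch: decision-tree surrogate of a neural network, with an
# ontology-derived bias on which features the tree may split on.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1. Train the black-box model to be explained.
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
black_box.fit(X_tr, y_tr)

# 2. Query the black box for labels: the surrogate tree mimics the network,
#    not the original ground truth (TREPAN-style extraction).
y_bb = black_box.predict(X_tr)

# 3. Placeholder ontology scores: one value per feature, higher for features
#    mapped to more general concepts in a domain ontology (assumption).
rng = np.random.default_rng(0)
ontology_generality = rng.uniform(0.5, 1.0, size=X.shape[1])

# 4. Crude approximation of the knowledge-driven bias: restrict the surrogate
#    to the 10 most "general" features before fitting a shallow tree.
top_k = np.argsort(ontology_generality)[-10:]
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_tr[:, top_k], y_bb)

# Fidelity: how closely the tree reproduces the black box on unseen data.
fidelity = (surrogate.predict(X_te[:, top_k]) == black_box.predict(X_te)).mean()
print(f"surrogate fidelity to black box: {fidelity:.3f}")
```

A shallow tree over a handful of ontology-preferred features is what the user study then evaluates for understandability (response time, accuracy, confidence) against trees produced without domain knowledge.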
24th European Conference on Artificial Intelligence, ECAI 2020
Files in this record:

File: FAIA-325-FAIA200378.pdf (not available)
Type: Published (publisher's version)
Licence: Private access - not public
Size: 406.62 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3471591
Citations
  • PubMed Central: not available
  • Scopus: 24
  • Web of Science (ISI): 12