AcME-AD: Accelerated Model Explanations for Anomaly Detection
Zaccaria V.; Masiero C.; Susto G. A.
2024
Abstract
Pursuing fast and robust interpretability in anomaly detection is crucial, especially due to its significance in practical applications. Traditional anomaly detection methods excel at outlier identification but are often 'black boxes', providing scant insight into their decision-making processes. This lack of transparency compromises their reliability and hampers their adoption in scenarios where understanding the reasons behind a detected anomaly is vital. At the same time, obtaining explanations quickly is paramount in practical scenarios. To bridge this gap, we present AcME-AD, a novel approach rooted in Explainable Artificial Intelligence principles, designed to clarify anomaly detection models for tabular data. AcME-AD transcends the constraints of model-specific or resource-heavy explainability techniques by delivering a model-agnostic, efficient solution for interpretability. It offers local feature importance scores and a what-if analysis tool, shedding light on the factors contributing to each anomaly, thus aiding root cause analysis and decision-making. This paper elucidates AcME-AD's foundation and its benefits over existing methods, and validates its effectiveness with tests on both synthetic and real datasets. AcME-AD's implementation and experiment replication code is accessible in a public repository (https://github.com/dandolodavid/ACME/tree/master/notebook/anomaly_detection_notebook).
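The model-agnostic local importance described above can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); it is a hypothetical example of the general quantile-perturbation idea: for each feature of an anomalous point, replace its value with quantiles drawn from the training distribution, and measure how much the detector's anomaly score shifts. Features whose perturbation moves the score the most are deemed most responsible for the anomaly. The detector (scikit-learn's `IsolationForest`), the quantile grid, and the helper name `local_importance` are all illustrative choices, not part of the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic tabular data: 3 features, with one point anomalous in feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
x_anom = np.array([6.0, 0.0, 0.0])

model = IsolationForest(random_state=0).fit(X)

def local_importance(model, X_train, x, quantiles=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """Quantile-perturbation local feature importance (illustrative sketch).

    For each feature j, substitute quantiles of the training distribution
    and average the absolute change in the model's normality score.
    """
    base = model.decision_function(x.reshape(1, -1))[0]
    importance = np.zeros(x.shape[0])
    for j in range(x.shape[0]):
        deltas = []
        for q in quantiles:
            x_pert = x.copy()
            x_pert[j] = np.quantile(X_train[:, j], q)
            deltas.append(abs(model.decision_function(x_pert.reshape(1, -1))[0] - base))
        importance[j] = np.mean(deltas)
    return importance / importance.sum()  # normalize to sum to 1

imp = local_importance(model, X, x_anom)
# Feature 0 should dominate: replacing its extreme value with typical
# quantiles moves the anomaly score far more than perturbing features 1 or 2.
```

The same loop doubles as a crude what-if analysis: each perturbed score shows how the detector's verdict would change if the feature took a more typical value, which is the intuition behind the what-if tool described in the abstract.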