An investigation into creating counterfactual examples for non-linear Support Vector Machines
Bergamin, Luca; Aiolli, Fabio
2025
Abstract
Counterfactual explanations provide interpretable insights into model decisions by identifying minimal changes to an instance that would lead to a different classification. This paper proposes a method for generating counterfactual explanations for non-linear Support Vector Machines (SVMs). Unlike prior approaches that rely on heuristic optimization or gradient-based methods, our approach leverages high-confidence examples as reference points, ensuring that counterfactuals are both realistic and reliably classified by the model. Our method guarantees the generation of a valid counterfactual for any given instance under mild conditions. We demonstrate its effectiveness through experiments on real-world tabular and image datasets, showing that it produces meaningful and interpretable counterfactuals across different domains under proximity and plausibility metrics.
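To make the core idea concrete, the sketch below shows one plausible instantiation of reference-point-based counterfactual search for a non-linear SVM: pick a training example that the model classifies with high confidence as the opposite class, then bisect along the segment between the query instance and that reference until the decision boundary is crossed. This is an illustrative assumption, not the paper's exact algorithm; the dataset, kernel settings, and the `counterfactual` helper are all hypothetical choices made for the example.

```python
# Hedged sketch: reference-point counterfactual search for a non-linear SVM.
# Not the paper's exact method -- a minimal illustration of the general idea.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

# A toy non-linearly separable dataset and an RBF-kernel SVM.
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = SVC(kernel="rbf", gamma=2.0).fit(X, y)

def counterfactual(clf, x, X, n_steps=50):
    """Move x toward the training point most confidently scored as the
    opposite class, bisecting until the predicted label flips."""
    target = 1 - clf.predict(x.reshape(1, -1))[0]
    scores = clf.decision_function(X)  # positive scores favor class 1
    signed = scores if target == 1 else -scores
    ref = X[np.argmax(signed)]  # high-confidence reference point
    lo, hi = 0.0, 1.0  # interpolation weights: x at 0, ref at 1
    for _ in range(n_steps):
        mid = (lo + hi) / 2
        cand = (1 - mid) * x + mid * ref
        if clf.predict(cand.reshape(1, -1))[0] == target:
            hi = mid  # label already flipped: move closer to x
        else:
            lo = mid  # still original class: move toward ref
    return (1 - hi) * x + hi * ref

x = X[0]
cf = counterfactual(clf, x, X)
assert clf.predict(cf.reshape(1, -1))[0] != clf.predict(x.reshape(1, -1))[0]
```

Note that this bisection stops just past the decision boundary, so the returned point has near-zero margin; the abstract's claim that counterfactuals are "reliably classified by the model" suggests the actual method instead anchors the result near the high-confidence reference, which this toy sketch does not attempt to reproduce.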
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.