Abstract Interpretation-based Feature Importance for Support Vector Machines - Artifact

Francesco Ranzato
2023

Abstract

We study how a symbolic representation for support vector machines (SVMs) specified by means of abstract interpretation can be exploited for: (1) enhancing the interpretability of SVMs through a novel feature importance measure, called abstract feature importance (AFI), that does not depend in any way on a given dataset or on the accuracy of the SVM and is very fast to compute; and (2) certifying individual fairness of SVMs and producing concrete counterexamples when this verification fails. We implemented our methodology and empirically demonstrated its effectiveness on SVMs based on linear and nonlinear (polynomial and radial basis function) kernels. Our experimental results show that, independently of the accuracy of the SVM, our AFI measure correlates much more strongly with the stability of the SVM under feature perturbations than major feature importance measures available in machine learning software, such as permutation feature importance, thus providing better insight into the trustworthiness of SVMs. This is the artifact accompanying the published paper.
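As a rough, hedged illustration of the kind of symbolic reasoning the abstract refers to (the variable names, the perturbation radius, and the scoring rule below are our own assumptions, not the AFI definition from the paper): for a linear SVM with decision function f(x) = w·x + b, replacing feature i by the interval [x_i − ε, x_i + ε] widens the output interval by exactly 2ε|w_i|, a bound that interval abstract interpretation derives symbolically without touching any dataset or accuracy figure.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import LinearSVC

    # Illustrative sketch only: this is NOT the paper's AFI, just an
    # interval-style bound for the linear-kernel case under an assumed
    # per-feature perturbation radius eps.
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    svm = LinearSVC(dual=False).fit(X, y)
    w = svm.coef_.ravel()

    eps = 0.1  # assumed perturbation radius for every feature
    # Perturbing only feature i by [-eps, +eps] widens the decision value
    # f(x) = w.x + b by exactly 2 * eps * |w_i|, independently of the data.
    output_widths = 2 * eps * np.abs(w)
    importance = output_widths / output_widths.sum()

    for i, imp in enumerate(importance):
        print(f"feature {i}: interval-based importance {imp:.3f}")

For the nonlinear (polynomial and RBF) kernels mentioned in the abstract, the decision function is no longer linear in the inputs, so such bounds require propagating intervals, or more precise abstract domains, through the kernel expansion; that is the setting the paper's symbolic representation and this artifact address. By contrast, dataset-dependent baselines such as permutation feature importance (available, e.g., as sklearn.inspection.permutation_importance) require held-out data and a scoring metric.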
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3508081