Learn and Visually Explain Deep Fair Models: An Application to Face Recognition

Navarin N.
2021

Abstract

Trustworthiness, and in particular Algorithmic Fairness, is emerging as one of the most prominent topics in Machine Learning (ML). ML is now ubiquitous in decision-making scenarios, which makes it necessary to discover and correct unfair treatment of (historically discriminated) subgroups in the population (e.g., based on gender, ethnicity, or political and sexual orientation). This necessity is even more compelling, and more challenging, when unexplainable black-box Deep Neural Networks (DNNs) are exploited. An emblematic example is the unfair behavior detected in the ML-based face recognition systems used by law enforcement agencies in the United States. To tackle these issues, we first propose different (un)fairness mitigation regularizers for the training process of DNNs. We then study where these regularizers should be applied to make them as effective as possible. Finally, by means of different accuracy and fairness metrics and different visual explanation strategies, we measure the ability of the resulting DNNs to learn the desired task while simultaneously behaving fairly. Results on the recent FairFace dataset prove the validity of our approach.
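
The central technique described in the abstract, adding an (un)fairness mitigation regularizer to the training loss of a DNN, can be illustrated with a minimal sketch. The following assumes a PyTorch setup with a binary sensitive attribute; the demographic-parity-style penalty and every name in it (fairness_regularized_loss, lam, sensitive_attr) are illustrative assumptions, not the paper's exact regularizers.

import torch
import torch.nn.functional as F

def fairness_regularized_loss(logits, labels, group, lam=0.1):
    # Standard task loss: cross-entropy on the target labels.
    task_loss = F.cross_entropy(logits, labels)

    # Soft demographic-parity proxy: penalize the squared gap between
    # the two sensitive groups' mean positive-class probability.
    # Assumes each batch contains samples from both groups (0 and 1);
    # an empty group would make the mean undefined.
    probs = torch.softmax(logits, dim=1)[:, 1]
    gap = probs[group == 0].mean() - probs[group == 1].mean()

    # lam trades off task accuracy against fairness.
    return task_loss + lam * gap.pow(2)

# Hypothetical usage inside a training step:
#   loss = fairness_regularized_loss(model(x), y, sensitive_attr, lam=0.1)
#   loss.backward(); optimizer.step()

The differentiable gap penalty lets ordinary gradient descent optimize accuracy and fairness jointly; the paper additionally studies at which layer such regularizers are most effective, a choice this sketch does not model.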
Published in: Proceedings of the International Joint Conference on Neural Networks
Conference: 2021 International Joint Conference on Neural Networks, IJCNN 2021
ISBN: 978-1-6654-3900-8
Files in this record:

Learn_and_Visually_Explain_Deep_Fair_Models_an_Application_to_Face_Recognition.pdf
  Type: Published (publisher's version)
  License: Private access - not public
  Size: 3.8 MB
  Format: Adobe PDF
  Availability: not available (a copy can be requested)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3440104
Citations
  • PMC: ND
  • Scopus: 1
  • Web of Science: 0
  • OpenAlex: ND