Leveraging Spanning Tree to Detect Colluding Attackers in Federated Learning
Coro Federico
2022
Abstract
Federated learning distributes model training among multiple clients who, driven by privacy concerns, perform training on their local data and share only model weights for iterative aggregation on the server. In this work, we explore the threat of collusion attacks from multiple malicious clients who launch targeted attacks (e.g., label flipping) in a federated learning configuration. By leveraging client weights and the correlation among them, we develop a graph-based algorithm to detect malicious clients. Finally, we validate the effectiveness of our algorithm in the presence of a varying number of attackers on a classification task using the well-known Fashion-MNIST dataset.
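The abstract does not spell out the detection procedure, but the title points to a spanning tree built over a correlation graph of client weights. The sketch below illustrates one plausible reading of that idea: flatten each client's update into a vector, weight a complete graph by pairwise Pearson correlation, take a maximum spanning tree, and flag clients joined by near-duplicate updates. The function names, the correlation threshold, and the flagging rule are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np
import networkx as nx


def flatten_update(weights):
    """Flatten a client's list of weight tensors into a single vector."""
    return np.concatenate([w.ravel() for w in weights])


def detect_colluders(client_updates, threshold=0.9):
    """Hypothetical spanning-tree-based detector (not the paper's exact method).

    client_updates: dict mapping client id -> list of np.ndarray weight tensors.
    threshold: assumed correlation cutoff above which an edge is suspicious.
    """
    ids = list(client_updates)
    vecs = {cid: flatten_update(client_updates[cid]) for cid in ids}

    # Complete graph whose edge weights are pairwise Pearson correlations
    # between flattened client updates.
    G = nx.Graph()
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            corr = np.corrcoef(vecs[a], vecs[b])[0, 1]
            G.add_edge(a, b, weight=corr)

    # Maximum spanning tree retains the strongest correlation structure.
    mst = nx.maximum_spanning_tree(G, weight="weight")

    # Clients connected in the tree by highly correlated (near-duplicate)
    # updates are flagged as potential colluders.
    suspects = set()
    for a, b, data in mst.edges(data=True):
        if data["weight"] >= threshold:
            suspects.update([a, b])
    return suspects
```

In this reading, colluding clients that apply the same label-flipping poison would submit unusually similar updates, so their mutual edges dominate the maximum spanning tree and cross the threshold, while benign clients trained on heterogeneous local data correlate more weakly.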