
A Game Theory Reward Model for Federated Learning with Probabilistic Verification

Auricchio, Gennaro; 2025

Abstract

In Federated Learning, a Central Node (CN) coordinates a group of agents to collectively train a shared neural network. However, owing to the inherent information asymmetry, some agents may behave as free riders, exploiting the system by reaping rewards or passively benefiting from the common model without contributing to the training process. Proof-of-Training (PoT) allows the CN to verify that an agent has completed its training honestly and correctly. However, this method incurs high costs: proof generation by the agent, communication overhead, and proof verification by the CN. These expenses make it impractical to conduct Proof-of-Training in every FL round. To improve verification efficiency, a feasible strategy is probabilistic verification, in which only a subset of agents is sampled for verification in each FL round. This paper designs a new incentive mechanism that motivates agents to behave honestly and mitigates free riding. Our model hinges on two parameters: (i) the reward R allocated to the local trainers, and (ii) a probability vector indicating the likelihood of subjecting each agent to PoT scrutiny. We show that it is possible to characterize the reward and verification probabilities that minimize the total CN cost while making the routine Individually Rational and Incentive Compatible, so that every agent actively trains its local model. Finally, we validate our model through extensive experiments. Our findings confirm that our characterization of the best reward and verification scheme minimizes the cost of the training routine without compromising convergence speed. All experiments are conducted on various datasets, demonstrating the wide applicability of our results.
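The paper's actual formulas are not given in this record, but the incentive logic the abstract describes can be illustrated with a toy model. The sketch below assumes hypothetical quantities not stated in the abstract: a per-agent training cost `c_i`, a per-check verification cost `V`, and the simplifying rule that a free rider caught by PoT forfeits its reward. Under these assumptions, Incentive Compatibility forces the verification probability `p_i >= c_i / R`, Individual Rationality forces `R >= max(c_i)`, and the CN can then pick the reward that minimizes its total expected outlay.

```python
import math

def ic_probability(cost, reward):
    # Incentive compatibility: the honest payoff R - c must be at
    # least the expected free-riding payoff (1 - p) * R, so p >= c / R.
    return cost / reward

def cn_cost(reward, costs, verify_cost):
    # Total expected CN outlay: one reward per agent, plus the
    # verification cost V weighted by the minimal IC probabilities.
    probs = [ic_probability(c, reward) for c in costs]
    return len(costs) * reward + verify_cost * sum(probs), probs

def optimal_reward(costs, verify_cost):
    # With C = sum(c_i), the outlay n*R + V*C/R is minimized at
    # R* = sqrt(V*C/n); clip upward so R >= max(c_i) keeps every
    # agent individually rational.
    n, total_cost = len(costs), sum(costs)
    return max(math.sqrt(verify_cost * total_cost / n), max(costs))

costs = [1.0, 2.0, 1.5]   # hypothetical per-agent training costs
V = 10.0                  # hypothetical per-check PoT verification cost
R = optimal_reward(costs, V)
total, probs = cn_cost(R, costs, V)
```

With these numbers the chosen reward makes honest training a dominant choice for every agent while each verification probability stays strictly below 1, i.e. no agent needs to be audited every round. This is only a one-round, risk-neutral caricature of the mechanism the paper studies, not its actual characterization.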
ACM International Conference Proceeding Series, 2025
6th International Conference on Distributed Artificial Intelligence, DAI 2024
Files in this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11577/3565880