Extending the Scope of Gradient Reconstruction Attacks in Federated Averaging
Abstract
Federated Learning (FL) has gained prominence as a decentralized and privacy-preserving paradigm that enables multiple clients to collaboratively train a machine learning model under the coordination of a central server. Instead of centralizing the data, clients keep their data locally and share only model parameters over multiple communication rounds. However, recent attacks, such as gradient reconstruction attacks (GRAs), expose privacy issues when an attacker can observe a client's communication. In the literature, these privacy issues are mainly explored when clients compute new parameters using a single gradient descent step on their data (FedSGD) and then send them back to the remote server. In a more realistic scenario, the clients' protocol is based on several gradient descent steps (FedAvg). This protocol adds intermediate computation steps that are unknown to the attacker, thus making GRAs less successful. In this incremental paper, we conduct exhaustive experiments on four state-of-the-art attacks under the FedAvg protocol, on a very basic neural network and on a more complex one (ResNet-18), using the CIFAR-100 dataset. These experiments provide the following results: 1) a privacy-utility trade-off analysis, 2) insights on the choice of the attacks' hyperparameters, 3) evidence that the client's local learning rate has little impact on the attacks' effectiveness, and 4) a proof that the privacy risk does not necessarily decrease over rounds, contrary to common belief.
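To make the difference between the two client protocols mentioned above concrete, the following sketch contrasts a FedSGD client update with a FedAvg one. It is a minimal illustration in PyTorch, not the paper's implementation; the function names, client_lr, and local_epochs are illustrative assumptions. It highlights why FedAvg hides information from an eavesdropper: only the weights after several local steps are transmitted, not a single gradient tied directly to one batch of data.

```python
# Minimal sketch (illustrative, not the paper's code) of the two client protocols,
# assuming a PyTorch model, a loss function, and a local data loader.
import copy
import torch

def fedsgd_client_update(model, batch, loss_fn):
    """FedSGD: compute a single gradient on one batch and send it to the server.
    An eavesdropper sees a gradient directly linked to the client's data."""
    x, y = batch
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return [p.grad.detach().clone() for p in model.parameters()]

def fedavg_client_update(global_model, loader, loss_fn, client_lr=0.01, local_epochs=5):
    """FedAvg: run several local SGD steps and send only the final weights.
    The intermediate computation steps remain unknown to the attacker."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=client_lr)
    for _ in range(local_epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()
```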