Abstract: Although federated learning can leverage local data for model training while preserving privacy, recent studies have revealed challenges related to fairness and gradient privacy leakage. To address the privacy-protection challenges in federated learning, a fair and secure federated learning algorithm based on differential privacy is proposed. The algorithm sets the privacy budget according to the amount of client-side data and adjusts it according to the gradient change rate. During local model training on the client side, differentially private noise is added to the gradients to protect sensitive information. Experimental results show that, with an appropriately set privacy budget, the algorithm achieves a balance among accuracy, fairness, and privacy protection.
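The abstract's two core ideas — a per-client privacy budget scaled by data volume and adjusted by the gradient change rate, plus noise injection into local gradients — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`adaptive_epsilon`, `dp_noisy_gradient`), the specific adjustment rule, the clipping bound, and the choice of the Laplace mechanism are all assumptions made for the example.

```python
import numpy as np

def adaptive_epsilon(base_eps, n_samples, n_total, grad, prev_grad,
                     min_eps=0.1, max_eps=2.0):
    """Hypothetical budget schedule: scale a base budget by the client's
    data share, then shrink it as the relative gradient change rate grows
    (a faster-changing gradient gets a smaller budget, hence more noise)."""
    eps = base_eps * (n_samples / n_total)
    if prev_grad is not None:
        change = (np.linalg.norm(grad - prev_grad)
                  / (np.linalg.norm(prev_grad) + 1e-12))
        eps = eps / (1.0 + change)
    return float(np.clip(eps, min_eps, max_eps))

def dp_noisy_gradient(grad, eps, clip_norm=1.0, rng=None):
    """Clip the gradient to bound its L2 sensitivity, then add Laplace
    noise with scale clip_norm / eps (the standard eps-DP mechanism)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.laplace(0.0, clip_norm / eps, size=grad.shape)
    return clipped + noise
```

In a federated round, each client would compute its local gradient, derive its budget with `adaptive_epsilon`, and send only the output of `dp_noisy_gradient` to the server for aggregation.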