Tang Chao, Liu Zewei, Wang Tianqi, Hu Chunqiang

1. Electric Power Research Institute of State Grid Sichuan Electric Power Company; 2. School of Big Data & Software Engineering, Chongqing University

CLC Number: TP309


    Abstract:

    Although federated learning can leverage local data for machine learning model training while preserving privacy, recent studies have revealed challenges related to fairness and gradient privacy leakage. To address these privacy-protection challenges in federated learning, a fair and secure federated learning algorithm based on differential privacy is proposed. The algorithm sets the privacy budget according to the amount of client-side data and adjusts it according to the gradient change rate. During the training of local models on the client side, differential noise is added to the gradients to protect the privacy and security of the information. Experimental results show that, with an appropriately set privacy budget, the algorithm achieves a balance between accuracy, fairness, and privacy protection.
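The mechanism described in the abstract (a data-proportional privacy budget, adjusted by the gradient change rate, with noise added to local gradients) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the allocation rule, the adjustment factor `eta`, and the use of Laplace noise are all assumptions made for the example.

```python
import numpy as np

def privacy_budget(n_samples, total_samples, base_epsilon=1.0):
    """Allocate a per-client budget proportional to its data share
    (hypothetical allocation rule; the paper's exact formula is not given here)."""
    return base_epsilon * (n_samples / total_samples)

def adjust_budget(epsilon, grad, prev_grad, eta=0.5):
    """Scale the budget by the gradient change rate between rounds
    (assumed adjustment: a larger change yields a larger budget, hence less noise)."""
    denom = np.linalg.norm(prev_grad) + 1e-12
    change_rate = np.linalg.norm(grad - prev_grad) / denom
    return epsilon * (1.0 + eta * change_rate)

def dp_noisy_gradient(grad, epsilon, clip_norm=1.0):
    """Clip the gradient and add Laplace noise calibrated to sensitivity/epsilon,
    the standard epsilon-differential-privacy gradient perturbation."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    scale = clip_norm / epsilon  # Laplace scale b = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=grad.shape)
    return clipped + noise
```

In a round of training, each client would compute its local gradient, perturb it with `dp_noisy_gradient` under its adjusted budget, and send only the noisy gradient to the server, so raw gradients never leave the client.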

History
  • Received: August 27, 2024
  • Revised: October 15, 2024
  • Accepted: October 21, 2024