TY - JOUR
T1 - Threats and defenses in the federated learning life cycle
T2 - A comprehensive survey and challenges
AU - Li, Yanli
AU - Guo, Zhongliang
AU - Yang, Nan
AU - Chen, Huaming
AU - Yuan, Dong
AU - Ding, Weiping
N1 - Funding: This work was supported in part by the National Key Research and Development Plan of China under Grant 2024YFE0202700, in part by the National Natural Science Foundation of China under Grant U2433216, in part by the Natural Science Foundation of Jiangsu Province under Grant BK20231337, and in part by the Natural Science Foundation of Jiangsu Higher Education Institutions of China under Grant 24KJB520032.
PY - 2025/5/15
Y1 - 2025/5/15
N2 - Federated learning (FL) offers innovative solutions for privacy-preserving distributed machine learning (ML). Unlike centralized data-collection algorithms, FL enables participants to train their models locally and share only the model updates for aggregation. Since private data never leave the end node, FL effectively mitigates privacy leakage during collaborative training. Despite its promising potential, FL is vulnerable to various attacks due to its distributed nature, affecting the entire life cycle of FL services. These threats can harm the model’s utility or compromise participants’ privacy, either directly or indirectly. In response, numerous defense frameworks have been proposed, demonstrating effectiveness in specific settings and scenarios. To provide a clear understanding of the current research landscape, this article reviews the most representative and state-of-the-art threats and defense frameworks throughout the FL service life cycle. We start by identifying FL threats that harm utility and privacy, including those with potential or direct impacts. Then, we examine the defense frameworks, analyze the relationship between threats and defenses, and compare the trade-offs among different defense strategies. We subsequently revisit these studies to evaluate their practicality in real-world scenarios and conclude by summarizing existing research bottlenecks and outlining future directions. We hope this survey sheds light on trustworthy FL research and contributes to the FL community.
AB - Federated learning (FL) offers innovative solutions for privacy-preserving distributed machine learning (ML). Unlike centralized data-collection algorithms, FL enables participants to train their models locally and share only the model updates for aggregation. Since private data never leave the end node, FL effectively mitigates privacy leakage during collaborative training. Despite its promising potential, FL is vulnerable to various attacks due to its distributed nature, affecting the entire life cycle of FL services. These threats can harm the model’s utility or compromise participants’ privacy, either directly or indirectly. In response, numerous defense frameworks have been proposed, demonstrating effectiveness in specific settings and scenarios. To provide a clear understanding of the current research landscape, this article reviews the most representative and state-of-the-art threats and defense frameworks throughout the FL service life cycle. We start by identifying FL threats that harm utility and privacy, including those with potential or direct impacts. Then, we examine the defense frameworks, analyze the relationship between threats and defenses, and compare the trade-offs among different defense strategies. We subsequently revisit these studies to evaluate their practicality in real-world scenarios and conclude by summarizing existing research bottlenecks and outlining future directions. We hope this survey sheds light on trustworthy FL research and contributes to the FL community.
KW - Adversarial machine learning (AML)
KW - Federated learning (FL)
KW - Privacy
KW - Robustness
UR - https://www.scopus.com/pages/publications/105005397240
U2 - 10.1109/TNNLS.2025.3563537
DO - 10.1109/TNNLS.2025.3563537
M3 - Article
C2 - 40372861
AN - SCOPUS:105005397240
SN - 2162-237X
VL - 36
SP - 15643
EP - 15663
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 9
ER -