Federated learning (FL) has emerged as a promising approach for training machine learning models on distributed data sources while preserving data privacy. However, the distributed nature of FL introduces unique cybersecurity challenges that must be addressed to protect the integrity, confidentiality, and availability of the contributing data and models. This review paper provides a thorough examination of the cybersecurity problems associated with federated learning and investigates the mitigation measures proposed in the literature. The study discusses key vulnerabilities in FL systems, such as adversarial attacks, data poisoning, model inversion, and inference attacks, and their potential impact on privacy and system performance. It also explores existing solutions and countermeasures proposed to address these security concerns, including cryptographic approaches, secure aggregation protocols, differential privacy mechanisms, and model verification methods. By synthesising the present state of research and identifying open gaps, this review aims to provide insights for researchers, practitioners, and policymakers on cybersecurity in federated learning.