FedFly: towards migration in edge-based distributed federated learning

Rehmat Ullah, Di Wu, Paul Harvey, Peter Kilpatrick, Ivor Spence, Blesson Varghese

Research output: Contribution to journal › Article › peer-review

Abstract

Federated learning (FL) is a privacy-preserving distributed machine learning technique that trains models while keeping the original data generated on devices local. Since devices may be resource constrained, offloading can be used to improve FL performance by transferring computational workload from devices to edge servers. However, due to mobility, devices participating in FL may leave the network during training and need to connect to a different edge server. This is challenging because the computations offloaded to the original edge server need to be migrated. To address this challenge, we present FedFly, which is, to the best of our knowledge, the first work to migrate a deep neural network (DNN) when devices move between edge servers during FL training. Our empirical results on the CIFAR-10 dataset, with both balanced and imbalanced data distributions, support our claims that FedFly reduces training time by up to 33% when a device moves after 50% of the training is completed, and by up to 45% when 90% of the training is completed, compared to the state-of-the-art offloading approach in FL. FedFly incurs a negligible overhead of up to two seconds and does not compromise accuracy. Finally, we highlight a number of open research issues for further investigation.
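For illustration only, the minimal sketch below (plain Python, with hypothetical class and function names; it is not the FedFly implementation) shows the kind of hand-over the abstract describes: the server-side partition of a split DNN, together with its training progress, is packaged on the source edge server and resumed on the destination edge server, so the device does not restart training when it moves.

```python
# Conceptual sketch of migrating the offloaded (server-side) portion of a split
# DNN between edge servers during FL training. All names are hypothetical.

import copy


class EdgeServer:
    """Holds the server-side model partition and training state per device."""

    def __init__(self, name):
        self.name = name
        # device_id -> offloaded session (partition weights, optimiser state, epoch)
        self.sessions = {}

    def register(self, device_id, server_weights, optimizer_state, epoch):
        self.sessions[device_id] = {
            "server_weights": server_weights,
            "optimizer_state": optimizer_state,
            "epoch": epoch,
        }

    def export_session(self, device_id):
        # Package the offloaded computation state so training can resume elsewhere.
        return copy.deepcopy(self.sessions.pop(device_id))

    def import_session(self, device_id, session):
        # Resume from the migrated state instead of restarting training.
        self.sessions[device_id] = session


def migrate(device_id, source, target):
    """Move a device's offloaded training state from one edge server to another."""
    session = source.export_session(device_id)
    target.import_session(device_id, session)
    return session["epoch"]  # training continues from this point on the new server


if __name__ == "__main__":
    # Example: a device moves from edge server A to edge server B mid-training.
    a, b = EdgeServer("edge-A"), EdgeServer("edge-B")
    a.register("device-7", server_weights={"layer3": [0.1, 0.2]},
               optimizer_state={}, epoch=25)
    resumed_epoch = migrate("device-7", a, b)
    print(f"device-7 resumes on {b.name} at epoch {resumed_epoch}")
```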
Original language: English
Pages (from-to): 42-48
Number of pages: 7
Journal: IEEE Communications Magazine
Volume: 60
Issue number: 10
Early online date: 1 Aug 2022
Publication status: Published - Nov 2022

Keywords

  • Federated learning
  • Edge computing
  • Deep neural networks
  • Distributed machine learning
  • Internet-of-Things
