FedFreeze: a dual-phase layer freezing framework for federated learning

Di Wu*, Leon Wong*, Blesson Varghese*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Running Federated Learning (FL) on resource-constrained devices is challenging due to the resources required for training. Layer freezing has been proposed to reduce computational costs and thus accelerate training. However, we identify that existing layer freezing approaches either learn quickly or learn effectively, but do not balance the two. Specifically, aggressive early-stage layer freezing (e.g., AutoFreeze) accelerates training but achieves a lower final accuracy. On the other hand, accuracy-guaranteed layer freezing (e.g., ALF) obtains higher final accuracy but only a marginal training time improvement. This article proposes FedFreeze – a dual-phase layer freezing federated learning framework that, for the first time, combines early-stage and accuracy-guaranteed layer freezing into a unified mechanism. On the device, FedFreeze introduces a novel regularization-based layer freezing strategy that applies layer freezing even during the initial stages of training to improve speedup. In addition, FedFreeze develops a convergence-based layer freezing strategy to achieve a high final accuracy. Experimental results show that the proposed FedFreeze framework achieves up to 1.3× training speedup while limiting the accuracy drop to no more than 1.67% compared to vanilla FL. In contrast to state-of-the-art early-stage and accuracy-guaranteed layer freezing methods, FedFreeze consistently strikes a better balance between efficiency and accuracy across a wide range of settings, including different hardware platforms (Raspberry Pi and Jetson Nano clusters), datasets (FMNIST, CIFAR-10, CIFAR-100), model architectures (AlexNet, VGG11, ResNet12), and initialization strategies. The results demonstrate that FedFreeze outperforms state-of-the-art layer freezing techniques in accelerating FL training on resource-constrained devices, without incurring significant accuracy loss.
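To illustrate the convergence-based side of the idea, the sketch below freezes a layer once its per-round gradient norm has stopped changing. This is a minimal, hypothetical illustration, not the paper's algorithm: the function name, the relative-change criterion, and the `eps`/`patience` parameters are assumptions introduced here.

```python
def update_frozen_layers(grad_norm_history, frozen, eps=1e-3, patience=3):
    """Hedged sketch of a convergence-based layer freezing rule.

    grad_norm_history: dict mapping layer name -> list of per-round
    gradient norms reported by the device.
    frozen: set of layer names already frozen.
    Returns the updated frozen set.
    """
    for layer, norms in grad_norm_history.items():
        if layer in frozen or len(norms) < patience + 1:
            continue  # not enough history to judge convergence yet
        # Relative change of the gradient norm over the last `patience` rounds.
        recent = norms[-(patience + 1):]
        changes = [abs(recent[i + 1] - recent[i]) / (recent[i] + 1e-12)
                   for i in range(patience)]
        if max(changes) < eps:
            # Norm has plateaued: freeze the layer so the backward pass
            # (and gradient communication) for it can be skipped.
            frozen.add(layer)
    return frozen
```

In a real FL client this decision would gate gradient computation (e.g., disabling updates for the frozen layer's parameters); here it only maintains the set of frozen layer names.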
Original language: English
Article number: 108250
Pages (from-to): 1-14
Number of pages: 14
Journal: Future Generation Computer Systems
Volume: 177
Early online date: 26 Nov 2025
DOIs
Publication status: E-pub ahead of print - 26 Nov 2025

Keywords

  • Layer freezing
  • Federated learning
  • Edge computing
  • Internet of things
