NeuroFlux: memory-efficient CNN training using adaptive local learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Efficient on-device Convolutional Neural Network (CNN) training in resource-constrained mobile and edge environments is an open challenge. Backpropagation is the standard approach, but it is GPU memory intensive because its strong inter-layer dependencies require the intermediate activations of the entire CNN model to be retained in GPU memory. This necessitates smaller batch sizes to make training feasible within the available GPU memory budget, which in turn results in substantially longer, often impractical, training times. We introduce NeuroFlux, a novel CNN training system tailored for memory-constrained scenarios. We exploit two novel opportunities: firstly, adaptive auxiliary networks that employ a variable number of filters to reduce GPU memory usage, and secondly, block-specific adaptive batch sizes, which not only respect the GPU memory constraints but also accelerate the training process. NeuroFlux segments a CNN into blocks based on GPU memory usage and attaches an auxiliary network to each layer in these blocks. This disrupts the typical layer dependencies under a new training paradigm - 'adaptive local learning'. Moreover, NeuroFlux caches intermediate activations, eliminating redundant forward passes over previously trained blocks and further accelerating training. The results are twofold when compared to backpropagation: on various hardware platforms, NeuroFlux demonstrates training speed-ups of 2.3× to 6.1× under stringent GPU memory budgets, and NeuroFlux generates streamlined models that have 10.9× to 29.4× fewer parameters.
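The block-wise local-learning idea in the abstract can be illustrated with a minimal NumPy sketch. This is not the NeuroFlux implementation: the `Block` and `AuxHead` classes, the dense layers standing in for convolutional blocks, and the squared-error local loss are all illustrative assumptions. It shows the two key control-flow properties the abstract describes: each block is trained only through its own auxiliary head (no gradient crosses block boundaries), and a trained block's activations are cached so later blocks never repeat its forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class Block:
    """Stand-in for one NeuroFlux block (assumption: a dense layer
    here; the real system groups convolutional layers by memory use)."""
    def __init__(self, d_in, d_out):
        self.W = rng.normal(0.0, 0.1, (d_in, d_out))

    def forward(self, x):
        return relu(x @ self.W)

class AuxHead:
    """Auxiliary classifier attached to a block, trained with a local
    squared-error loss so no gradient flows into earlier blocks."""
    def __init__(self, d, n_cls):
        self.V = np.zeros((d, n_cls))

    def local_step(self, h, y_onehot, lr=0.05):
        logits = h @ self.V
        # Gradient of 0.5 * mean squared error w.r.t. V only:
        grad = h.T @ (logits - y_onehot) / len(h)
        self.V -= lr * grad
        return float(np.mean((logits - y_onehot) ** 2))

# Toy data: 64 samples, 8 features, 2 classes.
X = rng.normal(size=(64, 8))
y = (X.sum(axis=1) > 0).astype(int)
Y = np.eye(2)[y]

blocks = [Block(8, 16), Block(16, 16)]
aux = [AuxHead(16, 2), AuxHead(16, 2)]

history = []
cached = X  # cached activations feeding the current block
for blk, head in zip(blocks, aux):
    h = blk.forward(cached)      # one forward pass per trained block
    losses = [head.local_step(h, Y) for _ in range(50)]  # local-only updates
    history.append(losses)
    cached = h                   # cache: later blocks reuse this, no re-forward
```

Because each block's local loss depends only on its own cached input `h`, the activations of all other blocks never need to be held in memory simultaneously, which is the source of the memory savings the abstract claims.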
Original language: English
Title of host publication: EuroSys '24: Proceedings of the Nineteenth European Conference on Computer Systems
Number of pages: 17
ISBN (Print): 9798400704376
Publication status: Published - Apr 2024
Event: The European Conference on Computer Systems - Athens, Greece
Duration: 22 Apr 2024 - 25 Apr 2024


Conference: The European Conference on Computer Systems
Abbreviated title: EuroSys


  • CNN training
  • Memory efficient training
  • Local learning
  • Edge computing


