Personal profile

Research overview

My research focuses on improving model compression techniques, including network pruning, knowledge distillation, quantization, and neural architecture search, with the aim of reducing the training and inference costs of neural networks. The overarching goal of this work on model compression and knowledge transfer is to gain insight into how knowledge is represented within neural networks. Such insight would enable the design of more efficient networks, making AI accessible to a wider audience by allowing models to run on resource-constrained platforms such as edge systems.

Education/Academic qualification

Master of Computing, Rapidly producing sparse trainable neural networks, University of St Andrews

Sept 2021 – Sept 2022

Master of Physics, New Computing Technologies for the Large Hadron Collider, University of Bristol

Sept 2017 – Jul 2021