This research will involve FPGA implementations of training algorithms that can be utilised at the edge. The aims are to gain better insights into how low-precision arithmetic can be used to optimise FPGA-based accelerators without sacrificing accuracy, and to develop novel computer architectures that achieve high performance while supporting the latest machine learning techniques.
Research Area:
Machine learning, Quantisation, FPGA
Electrical and Computer Engineering
Much of the focus of FPGA-based deep neural network research has been on inference. Training on edge devices is even more challenging: the computation and memory requirements are at least 3x higher, since the forward pass must be supplemented by a backward pass and weight updates and intermediate activations must be stored for backpropagation, and many of the techniques used to optimise inference cannot be applied to training.
In this research we will explore novel computer arithmetic schemes and devise new algorithms for working with low-precision neural networks. Training at the edge will enable a new range of applications that allow the network to adapt to changing conditions while reducing communications bandwidth and ensuring privacy.
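As a purely illustrative sketch of what low-precision training can look like (and not a description of this project's methods), the snippet below fake-quantises weights to a symmetric 8-bit grid and uses a straight-through estimator so that gradients still reach the full-precision weights; the function name, bit-width and scaling scheme are assumptions chosen for illustration.

```python
# Illustrative sketch only: symmetric int8 "fake quantisation" of weights with a
# straight-through estimator (STE), a common building block for low-precision
# training. Names, bit-width and scaling here are illustrative assumptions.
import torch


def fake_quantise(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Quantise-dequantise w onto a symmetric fixed-point grid of `bits` bits."""
    qmax = 2 ** (bits - 1) - 1                        # e.g. 127 for 8 bits
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax, qmax) * scale
    # Straight-through estimator: use quantised values in the forward pass,
    # but let gradients flow through as if quantisation were the identity.
    return w + (w_q - w).detach()


# Minimal usage example: one training step of a tiny linear model whose
# weights are fake-quantised in the forward pass.
w = torch.randn(4, 4, requires_grad=True)
x, target = torch.randn(8, 4), torch.randn(8, 4)
y = x @ fake_quantise(w)
loss = torch.nn.functional.mse_loss(y, target)
loss.backward()                                       # gradients reach w via the STE
```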
Offering:
The successful candidate will be awarded a scholarship for 3.5 years at the RTP stipend rate (currently $41,753 in 2025) subject to satisfactory academic performance. International applicants will have their tuition fees covered.
Successful candidates:
How to apply:
To apply, please email Professor Philip Leong the following:
The opportunity ID for this research opportunity is 3623