Student profile: Mr Nipun Bhaskar

Thesis work

Thesis title: Efficient methods and architectures for on-chip deep learning

Supervisors: Philip Heng Wai LEONG, David BOLAND

Thesis abstract:

With advancements in computational hardware, deep neural networks (NNs) can today solve an enormous range of problems. While artificial intelligence (AI) applications seem limitless, the technology still has its limitations, such as restricted deployment of NNs on low-power mobile/embedded devices, the need for network connectivity to the cloud to take advantage of NNs, and an inability to adapt quickly to the dynamics of a system.

Over time, NNs have grown larger, with the focus on accuracy. Although accuracy is one of the key requirements in industry, other major challenges are: (1) balancing the cost, latency and performance of deep learning (DL) methods, (2) converting raw data in real time into actionable and useful information, and (3) adapting and/or responding to environmental change in a timely manner to manage chaos.

DL has traditionally been done on Graphics Processing Units (GPUs), or even clusters of GPUs in data centres. However, this task is rapidly moving towards custom dedicated hardware implementations on FPGAs and ASICs. Platforms like FPGAs and ASICs are not only capable of delivering better computation-per-watt performance than GPUs; for the industry challenges mentioned above, they also provide more deployable solutions and make a stronger business case.

The proposed PhD aims to investigate the following:

1. State-of-the-art algorithms to optimise deep learning implementations for cost, latency and performance;
2. FPGA implementations designed to handle irregular parallelism and custom data types, providing hardware acceleration (see the sketch after this list);
3. Efficient methods and hardware architectures for low-power on-chip training of neural networks for embedded applications/IoT devices.
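As a minimal illustration of what "custom data types" can mean in point 2 (a sketch, not taken from the thesis: the 8-bit fixed-point format, fractional precision and accumulator width are all assumptions), the following C fragment computes a dot product, the core NN operation, in the kind of low-precision arithmetic an FPGA accelerator might use in place of 32-bit floating point:

/* Illustrative sketch only: a dot product over 8-bit fixed-point operands
 * with a 32-bit accumulator, the style of custom low-precision arithmetic
 * used in FPGA NN accelerators instead of 32-bit floating point. */
#include <stdint.h>
#include <stdio.h>

#define FRAC_BITS 6  /* assumed format: real value = raw / 2^6 */

/* Multiply-accumulate over int8 weights and activations; each product is
 * widened to int32 before accumulation so short vectors cannot overflow. */
int32_t dot_q8(const int8_t *w, const int8_t *x, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; i++)
        acc += (int32_t)w[i] * (int32_t)x[i];  /* 2*FRAC_BITS fraction bits */
    return acc;
}

int main(void) {
    /* e.g. 0.5 -> 0.5 * 2^6 = 32, 0.25 -> 16 in this fixed-point format */
    int8_t w[4] = {32, 16, -32, 8};   /* {0.5, 0.25, -0.5, 0.125} */
    int8_t x[4] = {16, 32, 16, 64};   /* {0.25, 0.5, 0.25, 1.0}   */
    int32_t acc = dot_q8(w, x, 4);
    /* rescale from 2*FRAC_BITS fraction bits back to a real value */
    printf("dot = %f\n", acc / (double)(1 << (2 * FRAC_BITS)));  /* 0.25 */
    return 0;
}

The design choice mirrors the hardware trade-off: narrow 8-bit operands keep storage and multiplier cost low, while the wider accumulator preserves accuracy, and on an FPGA both widths can be tailored per layer rather than fixed by a GPU's floating-point units.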

Note: This profile is for a student at the University of Sydney. Views presented here are not necessarily those of the University.