ML-Enabled IoT Devices and Embedded AI
Research will help reduce network bandwidth usage and response latency
Machine learning (ML) has come a long way since its beginnings a few decades ago. Today, it pervades many aspects of our daily lives and is driving cross-disciplinary access to technology. The widening application of ML has also led to an exceptional increase in the recognition accuracy of neural networks, so much so that they outperform human beings in some pattern recognition tasks. Because neural networks have huge processing requirements, most of this processing currently takes place in the cloud.
However, the cloud-centric approach is limited by network latency and bandwidth, and it requires permanent network coverage. Processing on cloud servers also strains compute resources and raises privacy concerns. These challenges could be mitigated by shifting some of the recognition and classification capabilities to embedded devices.
Towards that objective, we are designing innovative architectures and accelerators to efficiently implement neural networks on resource-constrained embedded devices and IoT sensors. In this context, we are exploring hardware solutions such as multiprocessing and custom accelerators, as well as software techniques such as SIMD exploitation and quantization of the weights and parameters that compose the neural network.
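To illustrate one of the software techniques mentioned above, the sketch below shows symmetric post-training quantization of float32 weights to int8, which shrinks model storage by 4x and enables integer arithmetic on small devices. This is a minimal, self-contained example; the function names and the per-tensor scaling scheme are our own illustrative choices, not a specific library's API.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization of float32 weights to int8.

    Maps the largest weight magnitude to 127 and rounds the rest;
    returns the int8 values plus the scale needed to dequantize.
    (Illustrative sketch, not taken from any particular framework.)
    """
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 values."""
    return q.astype(np.float32) * scale

# Example: quantize a small weight matrix and bound the error.
rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding error is at most half a quantization step per weight.
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

In practice, frameworks refine this basic idea with per-channel scales and calibration data, trading a small accuracy loss for large savings in memory and energy on embedded targets.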
Our research will help reduce network bandwidth usage and response latency, and lead to truly autonomous devices that continue to operate even when the network is unavailable. Our approach will relieve the pressure on data centers for ever more processing power and consequently ease performance and energy-consumption challenges through a more distributed, hierarchical recognition and classification model.