5+ Techniques for Efficient Implementation of Neural Networks

Wednesday, May 22, 11:20 AM - 11:50 AM
Summit Track: Fundamentals
Location: Room 203/204

Embedding real-time, large-scale deep learning vision applications at the edge is challenging due to their huge computational, memory, and bandwidth requirements. System architects can mitigate these demands by modifying deep neural networks (DNNs) to make them more energy-efficient and less demanding of embedded processing hardware. In this talk, we'll introduce today's established techniques for efficient implementation of DNNs: advanced quantization, network decomposition, weight pruning and sharing, and sparsity-based compression. We'll also preview up-and-coming techniques such as trained quantization and correlation-based compression.
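To give a flavor of two of the techniques the abstract names, the NumPy sketch below illustrates magnitude-based weight pruning followed by uniform 8-bit quantization of a weight matrix. This is an illustrative sketch only, not code from the talk; the array shape, pruning ratio, and bit width are arbitrary assumptions.

    import numpy as np

    # Illustrative sketch (assumed parameters, not from the talk):
    # prune small-magnitude weights, then quantize the survivors to int8.
    rng = np.random.default_rng(0)
    weights = rng.standard_normal((64, 64)).astype(np.float32)

    # Prune: zero out the 80% of weights with the smallest magnitudes.
    threshold = np.quantile(np.abs(weights), 0.80)
    pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)

    # Quantize: map the remaining weights to signed 8-bit integers
    # using a single per-tensor scale factor.
    scale = np.abs(pruned).max() / 127.0
    quantized = np.round(pruned / scale).astype(np.int8)

    # The sparse, low-precision tensor needs far less storage and
    # memory bandwidth than the dense float32 original.
    sparsity = 1.0 - np.count_nonzero(quantized) / quantized.size
    print(f"sparsity: {sparsity:.0%}, dtype: {quantized.dtype}")

In practice these steps interact with accuracy, so deployed flows typically retrain or fine-tune after pruning and calibrate quantization scales on real data; the talk covers the established variants of these ideas.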

Speaker(s):

Bert Moons

Hardware Design Architect, Synopsys

Dr. Bert Moons received his PhD in Electrical Engineering cum ultima laude from KU Leuven in 2018. He performed his PhD research at ESAT-MICAS, focusing on energy-scalable and run-time adaptable digital architectures and circuits for embedded deep learning applications. Bert has authored more than 15 conference and journal publications, was a visiting research student at Stanford University, and received the SSCS Predoctoral Achievement Award in 2018. He is currently with Synopsys as a hardware design architect for the DesignWare EV6x Embedded Vision and AI processors.
