Enabling Automated Design of Computationally Efficient Deep Neural Networks

Tuesday, May 21, 4:20 PM - 4:50 PM
Summit Track: Technical Insights I
Mission City B1-B5

Efficient deep neural networks are increasingly important in the age of AIoT (AI + IoT), in which people hope to deploy intelligent sensors and systems at scale. However, optimizing a neural network to achieve both high accuracy and efficient resource use on a given target device is difficult, since each device has its own idiosyncrasies. In this talk, we introduce differentiable neural architecture search (DNAS), an approach to hardware-aware neural architecture search. We show that the computational cost of a DNAS search is two orders of magnitude lower than that of previous approaches, while the models DNAS finds are optimized for their target devices and surpass the previous state of the art in both efficiency and accuracy. We also explain how we used DNAS to discover FBNets, a new family of efficient neural networks.
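The core idea behind differentiable architecture search can be illustrated with a toy sketch: instead of picking one candidate operation per layer, the search relaxes the discrete choice into a softmax-weighted mixture, so both the task output and a hardware cost (e.g. latency) become differentiable in the architecture parameters. The candidate ops, their behaviors, and the latency numbers below are all made up for illustration; they are not FBNet's actual search space.

```python
import math

# Hypothetical candidate ops for one layer (illustrative only):
# (name, op as a toy scalar function, assumed latency on some target device).
CANDIDATES = [
    ("conv3x3", lambda x: 0.9 * x, 3.0),   # slower, more expressive
    ("conv1x1", lambda x: 0.5 * x, 1.0),   # cheaper
    ("skip",    lambda x: x,       0.1),   # nearly free
]

def softmax(logits):
    # Numerically stable softmax over the architecture logits.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mixed_layer(x, logits):
    """Relax the discrete op choice into a softmax-weighted sum, so the
    architecture parameters (logits) can be trained by gradient descent.
    Returns the mixed output and the expected latency of the mixture."""
    weights = softmax(logits)
    out = sum(w * op(x) for w, (_, op, _) in zip(weights, CANDIDATES))
    lat = sum(w, l) if False else sum(w * l for w, (_, _, l) in zip(weights, CANDIDATES))
    return out, lat

# After the search, the highest-weighted op is kept as the layer's choice.
logits = [0.2, 1.5, -0.5]   # learned architecture parameters (example values)
out, lat = mixed_layer(2.0, logits)
chosen = max(zip(softmax(logits), CANDIDATES))[1][0]
```

During the search, a loss such as `task_loss + lambda * lat` trades accuracy against the device-specific latency term, which is how the same procedure yields different architectures for different target devices.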


Bichen Wu

Graduate Student Researcher, EECS, University of California, Berkeley

Bichen Wu is a PhD candidate in the Department of Electrical Engineering and Computer Sciences (EECS) at the University of California, Berkeley. He works with Professor Kurt Keutzer and is affiliated with Berkeley AI Research (BAIR) and Berkeley Deep Drive (BDD). His research focuses on efficient deep learning, computer vision, and autonomous driving.

See you at the Summit! May 20-23 in Santa Clara, California!
Register today and reserve your hotel room!