"Enabling Automated Design of Computationally Efficient Deep Neural Networks," a Presentation from UC Berkeley

Bichen Wu, Graduate Student Researcher in the EECS Department at the University of California, Berkeley, presents the "Enabling Automated Design of Computationally Efficient Deep Neural Networks" tutorial at the May 2019 Embedded Vision Summit.

Efficient deep neural networks are increasingly important in the age of AIoT (AI + IoT), in which intelligent sensors and systems are deployed at scale. However, optimizing a neural network for both high accuracy and efficient resource use on a given target device is difficult, since each device has its own idiosyncrasies.

In this talk, Wu introduces differentiable neural architecture search (DNAS), an approach for hardware-aware neural network architecture search. He shows that, using DNAS, the computational cost of the search itself is two orders of magnitude lower than that of previous approaches, while the models found by DNAS are optimized for their target devices and surpass the previous state of the art in both efficiency and accuracy. Wu also explains how he used DNAS to find FBNets, a new family of efficient neural networks.
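The core idea behind DNAS is to make the discrete choice among candidate operations differentiable, so the architecture can be optimized by gradient descent alongside a hardware cost term. The sketch below illustrates that relaxation with a Gumbel-softmax over a toy set of candidate ops; the ops, latency numbers, and function names are illustrative assumptions, not the actual FBNet search space.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    # Sample Gumbel noise and produce a differentiable, near-one-hot
    # distribution over candidate ops (the relaxation used in DNAS-style search).
    rng = rng or np.random.default_rng(0)
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    e = np.exp(y - y.max())
    return e / e.sum()

# Hypothetical candidate ops for one searchable layer (assumptions for
# illustration only): a skip connection, a cheap op, and an expensive op.
candidate_ops = [
    lambda x: x,        # skip connection
    lambda x: 0.5 * x,  # cheap op
    lambda x: 2.0 * x,  # expensive op
]

# Illustrative per-op latencies, standing in for measurements on a target device.
op_latency = np.array([0.1, 1.0, 5.0])

def mixed_op(x, arch_logits, tau=1.0, rng=None):
    # Relaxed layer output: a weighted sum over all candidates, so the
    # architecture choice is differentiable w.r.t. arch_logits. The expected
    # latency under the same weights supplies the hardware-aware cost term.
    w = gumbel_softmax(arch_logits, tau, rng)
    out = sum(wi * op(x) for wi, op in zip(w, candidate_ops))
    expected_latency = float(w @ op_latency)
    return out, expected_latency

x = np.ones(4)
out, lat = mixed_op(x, np.array([2.0, 0.0, -2.0]), tau=0.5)
```

In a full search, the total loss would combine task accuracy with `expected_latency`, and gradients would flow into `arch_logits` to steer the search toward ops that are both accurate and fast on the target device.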