"Implementing the TensorFlow Deep Learning Framework on Qualcomm’s Low-power DSP," a Presentation from Google

Pete Warden, Research Engineer at Google, presents the "Implementing the TensorFlow Deep Learning Framework on Qualcomm’s Low-power DSP" tutorial at the May 2017 Embedded Vision Summit.

TensorFlow is Google’s second-generation deep learning software framework, designed from the ground up to enable efficient implementation of deep learning algorithms at different scales, from high-performance data centers to low-power embedded and mobile devices. In this talk, Warden presents the technical details of how the TensorFlow and Qualcomm teams collaborated to bring TensorFlow to Qualcomm’s low-power Hexagon DSP using the Hexagon Vector Extensions, enabling deep learning models to run quickly and efficiently on the DSP.

Warden explains how the two companies divided the work, how they measured progress against specific benchmarks, and some of the code optimizations they implemented. Because the majority of the resulting code has been open-sourced, he is able to dive deeply into the specific implementation decisions they made.