"Performing Multiple Perceptual Tasks With a Single Deep Neural Network," a Presentation from Magic Leap

Andrew Rabinovich, Director of Deep Learning at Magic Leap, presents the "Performing Multiple Perceptual Tasks With a Single Deep Neural Network" tutorial at the May 2017 Embedded Vision Summit.

As more system developers consider incorporating visual perception into smart devices such as self-driving cars, drones and wearable computers, attention is shifting toward the practical formulation and implementation of these algorithms. Here, the key challenge is how to deploy very computationally demanding algorithms that achieve state-of-the-art results on platforms with limited computational capability and small power budgets. Visual perception tasks such as face recognition, place recognition and tracking are traditionally solved using multiple single-purpose algorithms. With this approach, power consumption grows with each additional task performed.

In this talk, Rabinovich introduces techniques for performing multiple visual perception tasks within a single learning-based algorithm. He also explores general-purpose model optimization to enable such algorithms to run efficiently on embedded platforms.
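To illustrate the idea of sharing one network across tasks, here is a minimal sketch in plain Python. It is not Magic Leap's actual architecture: the function names, feature definitions, and head computations are all invented for illustration. The point it demonstrates is structural: a shared backbone runs once per input, and lightweight task-specific heads reuse its output, rather than each task repeating the expensive feature extraction.

```python
# Hypothetical multi-task inference sketch: one shared feature extractor,
# several task heads. All names and computations are illustrative only.

def shared_backbone(pixels):
    # Stand-in for an expensive feature extractor (e.g. convolutional
    # layers): one pass over the input yields features reused by all heads.
    return [sum(pixels) / len(pixels), max(pixels), min(pixels)]

def face_head(features):
    # Task-specific head, e.g. a face-recognition score (made-up weights).
    return features[0] * 0.5 + features[1] * 0.25

def place_head(features):
    # A second head reuses the SAME features, so the backbone runs once,
    # not once per task -- the talk's argument for lower power consumption.
    return features[0] - features[2]

def multi_task_inference(pixels):
    feats = shared_backbone(pixels)  # computed once, shared by every task
    return {"face": face_head(feats), "place": place_head(feats)}

result = multi_task_inference([0.1, 0.4, 0.9, 0.2])
```

In a real deployment the backbone would be a deep convolutional network and the heads small learned layers, but the cost structure is the same: adding a task adds only a cheap head, not another full network.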