"Processor Options for Edge Inference: Options and Trade-offs," a Presentation from Micron Technology

Raj Talluri, Senior Vice President and General Manager of the Mobile Business Unit at Micron Technology, presents the "Processor Options for Edge Inference: Options and Trade-offs" tutorial at the May 2019 Embedded Vision Summit.

Thanks to rapid advances in neural network algorithms, we’ve made tremendous progress in developing robust solutions for numerous computer vision tasks. Face detection, face recognition, object identification, object tracking, lane marking detection and pedestrian detection are just a few examples of challenging visual perception tasks where deep neural networks are providing superior solutions to traditional computer vision algorithms.

Compared with traditional algorithms, deep neural networks rely on a very different computational model. As a result, the types of processor architectures being used for deep neural networks are also quite different from those used in the past.

In this talk, Talluri explores the diverse processor architectures gaining popularity in machine learning-based embedded vision applications, and discusses their strengths and weaknesses both in general and in the context of specific applications.