"Deep-learning-based Visual Perception in Mobile and Embedded Devices: Opportunities and Challenges," a Presentation from Qualcomm
Jeff Gehlhaar, Vice President of Technology, Corporate Research and Development, at Qualcomm, presents the "Deep-learning-based Visual Perception in Mobile and Embedded Devices: Opportunities and Challenges" tutorial at the May 2015 Embedded Vision Summit.
Deep learning approaches have proven extremely effective for a range of perceptual tasks, including visual perception. Incorporating deep-learning-based visual perception into devices such as robots, automobiles and smartphones enables these machines to become much more intelligent and intuitive. And, while some applications can rely on the enormous compute power available in the cloud, many systems require local intelligence for various reasons. In these applications, the enormous computing requirements of deep-learning-based vision create unique challenges related to power and efficiency.
In this talk, Jeff explores applications and use cases where on-device deep-learning-based visual perception provides great benefits. He dives deeply into the challenges that these applications face and explores techniques to overcome them.