"Beyond CNNs for Video: The Chicken vs. the Datacenter," a Presentation from Xperi

Steve Teig, Chief Technology Officer at Xperi, presents the "Beyond CNNs for Video: The Chicken vs. the Datacenter" tutorial at the May 2019 Embedded Vision Summit.

The recent revolution in computer vision owes much of its success to neural networks for image processing. These networks run predominantly in datacenters, and their training data consists mostly of still photographs. Because of this history, the networks used for image processing fail to exploit temporal information. In fact, convolutional neural networks are unaware that time exists, leading to overly complex networks with strange artifacts. Remarkably, even the lowly chicken knows better, bobbing its head while walking to integrate information over time in modeling the world. Isn't it time we learned from the chicken? In this presentation, Teig explores how we can.