Embedded Vision Insights: May 15, 2014 Edition
In this edition of Embedded Vision Insights:
- In Two Weeks: The Embedded Vision Summit
- From Planetary Exploration to Consumer Devices
- 360-Degree Panorama Photography
- Embedded Vision in the News
|LETTER FROM THE EDITOR|
Two weeks from today, my colleagues at the Embedded Vision Alliance and I will kick off the biggest and best Embedded Vision Summit yet, taking place on May 29 at the Santa Clara Convention Center in Santa Clara, California. Yann LeCun, Director of AI Research at Facebook, will deliver the morning keynote, "Convolutional Networks: Unleashing the Potential of Machine Learning for Robust Perception Systems." Machine learning, found in some of the most sophisticated image-understanding systems deployed today, provides a framework for training a system from labeled examples rather than hand-coded rules. It is at the forefront of applications such as face recognition, visual navigation, and handwriting recognition, and LeCun will discuss a breakthrough method for implementing such tasks.
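The "training through examples" idea underlying LeCun's talk can be sketched in miniature. The toy below is not from the keynote; it is a hypothetical illustration using a single perceptron (the simplest trainable classifier, and a distant ancestor of convolutional networks) learning a made-up two-feature problem from labeled samples:

```python
# Illustrative sketch only: learning from labeled examples with a perceptron.
# Real vision systems, like the convolutional networks LeCun describes, apply
# the same example-driven principle at vastly larger scale. Data is synthetic.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights and a bias from (input, label) example pairs."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict, then nudge weights toward the correct answer on error.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Classify a new input with the learned weights."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy training examples: points with large coordinates belong to class 1.
samples = [(0.0, 0.0), (0.2, 0.3), (1.0, 1.0), (0.9, 0.8)]
labels = [0, 0, 1, 1]
w, b = train_perceptron(samples, labels)
```

After training, the classifier generalizes to inputs it never saw, which is the essence of the approach: behavior comes from the examples, not from rules written by hand.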
Nathaniel Fairfield, technical lead at Google, will deliver the afternoon keynote, "Self-Driving Cars." Google recently announced that its autonomous car fleet has logged more than 700,000 miles and is increasingly capable of navigating complex city street settings on its own. Dr. Fairfield will discuss Google's overall approach to solving the driving problem, the car's capabilities, progress so far, and remaining challenges. The Embedded Vision Summit will also include sixteen technical presentations across two tracks, revolving around the themes of visual recognition and visual intelligence, along with technology demonstrations from nearly two dozen Alliance member...