An Upcoming Online Tutorial Series, Presented by BDTI, the Embedded Vision Alliance and Design News
August 19, 2012
The Embedded Vision Summit is a month away (so register now!), and a week ahead of it, Embedded Vision Alliance founder Jeff Bier and BDTI senior software engineer Eric Gregori are partnering with Design News magazine to deliver "Fundamentals of Embedded Computer Vision: Creating Machines That See", a free five-part embedded vision tutorial series running September 10-14 at 2 PM ET (11 AM PT) each day. Attendance at the entire five-part series is encouraged. Advance registration is required, and each session must be registered for separately. See below for session details provided by Jeff Bier, along with links to the relevant registration pages.
Day 1: Introduction to Embedded Vision
In this course we introduce embedded vision – the incorporation of computer vision techniques into embedded systems. Via case studies, we explore the functionality that systems can gain via embedded vision, and provide a taste of typical vision algorithms. We also discuss technology trends that are enabling embedded vision to be used in cost-, energy- and size-limited applications, and we highlight challenges that must be addressed in integrating embedded vision capabilities into systems.
Day 2: Fundamentals of Image Sensors for Embedded Vision
Image sensors are the “eyes” of embedded vision systems, and their characteristics largely determine the capabilities of the systems built around them. In this course, we introduce the most common types of 2D and 3D sensors used in embedded vision applications, and explore their strengths and weaknesses. We also highlight recent developments in sensor technology.
Day 3: Processor Choices for Embedded Vision
Embedded vision applications typically make heavy demands on processors – not just in terms of processing performance, but also regarding memory, I/O, and real-time behavior. In this course, we explore the processor requirements of embedded vision applications in quantitative and qualitative terms. We then discuss the six main types of processors used in embedded vision applications, highlighting their key strengths and weaknesses, and how they are evolving over time.
Day 4: Introduction to Vision Algorithms and OpenCV
At the heart of embedded vision are algorithms. These include algorithms for improving captured images, identifying features of interest, inferring the presence of objects, and reasoning about objects and motion. In this course, we introduce some fundamental algorithms, such as motion and line detection. We explain how these algorithms work, and illustrate them with demos (which are available for download). We also introduce OpenCV, a free, open-source vision software library.
Day 5: More Algorithms and More on Using OpenCV
In this course, we present more complex embedded vision algorithm examples, such as face detection and object tracking. We explain how these algorithms work, and illustrate them with demonstrations built with OpenCV. We also show a quick and easy way to set up your own vision algorithm development environment using OpenCV and a free downloadable virtual machine image. Finally, we provide pointers to additional resources for learning about embedded vision.