
Welcome

The Embedded Vision Academy is a free online training facility for embedded vision product developers. This program provides educational and other resources to help engineers integrate visual intelligence―the ability of electronic systems to see and understand their environments―into next-generation embedded and consumer devices.

The goal of the Academy is to make it possible for engineers worldwide to gain the skills needed for embedded vision product and application development. Course material in the Embedded Vision Academy spans a wide range of vision-related subjects, from basic vision algorithms to image pre-processing, image sensor interfaces, and software development techniques and tools such as OpenCV. Courses will incorporate training videos, interviews, demonstrations, downloadable code, and other developer resources―all oriented towards developing embedded vision products.

The Embedded Vision Alliance™ plans to expand the Embedded Vision Academy curriculum continuously, so engineers can return to the site on an ongoing basis for new courses and resources. The listing below showcases the most recently published Embedded Vision Academy content. Use the links on the right side of this page to access the full suite of embedded vision content, sorted by technology, application, function, viewer experience level, provider, and type.

In the last five years, the automotive industry has made remarkable advances in systems that truly enrich the driving experience.

With a simple web camera, some open source software, and an animatronic head kit, FaceBot will introduce you to face detection and tracking.
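
To give a feel for what a project like FaceBot involves, here is a minimal sketch of webcam face detection and tracking using OpenCV's Haar cascade detector, one common open-source approach. The FaceBot kit's own software may differ; the camera index, window name, and cascade choice below are illustrative assumptions.

    # Minimal face detection/tracking sketch using OpenCV's Python bindings.
    # Assumes opencv-python is installed and a webcam is available at index 0.
    import cv2

    # Load the frontal-face Haar cascade that ships with OpenCV.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_cascade = cv2.CascadeClassifier(cascade_path)

    cap = cv2.VideoCapture(0)  # open the default web camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            # Draw a box around each detected face; an animatronic head controller
            # could instead use the box center to steer pan/tilt servos.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("Face tracking", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()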

This video training session covers some of the algorithms available in OpenCV, and is intended for programmers and non-programmers alike.

OpenCV is a library of computer vision algorithms used by industry and academia for vision applications and research.
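
As a taste of how the library is used in practice, the short sketch below runs one of its classic algorithms, Canny edge detection, on an image. It assumes OpenCV's Python bindings (opencv-python) are installed; the file name "input.jpg" and the threshold values are illustrative assumptions.

    # Minimal OpenCV usage sketch: load an image and run Canny edge detection.
    import cv2

    image = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise SystemExit("Could not read input.jpg")

    # Canny is one of many classic vision algorithms the library provides;
    # the two values are the lower and upper hysteresis thresholds.
    edges = cv2.Canny(image, threshold1=100, threshold2=200)

    cv2.imwrite("edges.png", edges)  # save the resulting edge map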

The BDTI OpenCV Executable Demo Package lets anyone with a Windows computer and a web camera experiment with OpenCV algorithms.

This BDTI project evaluated high-level synthesis tools that use C code (or other high-level languages) to generate FPGA designs.

For many, computer vision was first imagined as the unblinking red lens through which a computer named HAL spied on the world around it.

José Alvarez, Xilinx Video Technology Engineering Director, discusses using FPGAs to connect to and process data coming from image sensors.

Jeff Bier interviews Jitendra Malik, Arthur J. Chick Professor of EECS at the University of California at Berkeley (part two of three).

Jeff Bier interviews Jitendra Malik, Arthur J. Chick Professor of EECS at the University of California at Berkeley (part one of three).