

The Embedded Vision Academy is a free online training facility for embedded vision product developers. This program provides educational and other resources to help engineers integrate visual intelligence―the ability of electronic systems to see and understand their environments―into next-generation embedded and consumer devices.

The goal of the Academy is to make it possible for engineers worldwide to gain the skills needed for embedded vision product and application development. Course material in the Embedded Vision Academy spans a wide range of vision-related subjects, from basic vision algorithms to image pre-processing, image sensor interfaces, and software development techniques and tools such as OpenCV. Courses will incorporate training videos, interviews, demonstrations, downloadable code, and other developer resources―all oriented towards developing embedded vision products.

The Embedded Vision Alliance™ plans to continuously expand the curriculum of the Embedded Vision Academy, so engineers can return to the site on an ongoing basis for new courses and resources. The listing below showcases the most recently published Embedded Vision Academy content. Use the links on the right side of this page to access the full suite of embedded vision content, sorted by technology, application, function, viewer experience level, provider, and type.

Mario Bergeron, Technical Marketing Engineer at Avnet, delivers a technical presentation at the April 2013 Embedded Vision Summit.

Eric Gregori, senior software engineer at BDTI, delivers a technical presentation at the April 2013 Embedded Vision Summit.

NVIDIA makes life easier for developers by providing all of the software tools needed to develop for Android on its Tegra platform.

You now can hold in the palm of your hand computing power that required a desktop PC form factor just a decade ago.

The Embedded Vision Summit was held on April 25, 2013 in San Jose, California, as a technical educational forum for engineers.

We’ll increasingly be able to interact with and control our devices by signaling with fingers, gesturing with hands, and moving our bodies.

Semiconductor and software advances are enabling medical devices to derive meaning from digital still and video images.

3D imaging technology has come a long way from its academic research lab roots, and is now used in a variety of machine automation applications.

Machine vision technology is growing in adoption, and it continues to be deployed in an expanding variety of application areas.

Image enhancement functions are key elements of many embedded vision designs, improving downstream algorithms' ability to extract meaning from images.
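To illustrate the idea, here is a minimal NumPy sketch (not drawn from any Academy course) of one of the simplest enhancement functions, a linear contrast stretch. Spreading a dim image's narrow range of pixel values across the full 0–255 scale amplifies the intensity differences that a downstream gradient-based edge detector would rely on.

```python
import numpy as np

def contrast_stretch(img):
    """Linearly rescale pixel values to span the full 0-255 range."""
    lo, hi = img.min(), img.max()
    if hi == lo:
        return np.zeros_like(img)
    return ((img.astype(np.float32) - lo) * (255.0 / (hi - lo))).astype(np.uint8)

# A dim, low-contrast synthetic "image": values squeezed into 100-120.
dim = np.linspace(100, 120, 64, dtype=np.uint8).reshape(8, 8)
enhanced = contrast_stretch(dim)

# After stretching, adjacent-pixel differences -- a crude proxy for the
# edge strength seen by a gradient-based detector -- grow substantially.
print(np.abs(np.diff(dim.astype(int), axis=1)).max())       # small
print(np.abs(np.diff(enhanced.astype(int), axis=1)).max())  # larger
```

Real designs typically use more robust techniques (histogram equalization, percentile-based stretching) to avoid being skewed by outlier pixels, but the principle is the same: widen the usable dynamic range before feature extraction.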