The Embedded Vision Academy is a free online training facility for embedded vision product developers. This program provides educational and other resources to help engineers integrate visual intelligence―the ability of electronic systems to see and understand their environments―into next-generation embedded and consumer devices.

The goal of the Academy is to make it possible for engineers worldwide to gain the skills needed for embedded vision product and application development. Course material in the Embedded Vision Academy spans a wide range of vision-related subjects, from basic vision algorithms to image pre-processing, image sensor interfaces, and software development techniques and tools such as OpenCV. Courses will incorporate training videos, interviews, demonstrations, downloadable code, and other developer resources―all oriented towards developing embedded vision products.

The Embedded Vision Alliance™ plans to continuously expand the curriculum of the Embedded Vision Academy, so engineers will be able to return to the site on an ongoing basis for new courses and resources. The listing below showcases the most recently published Embedded Vision Academy content. Reference the links on the right side of this page to access the full suite of embedded vision content, sorted by technology, application, function, viewer experience level, provider, and type.

Bing Yu, Senior Tech Manager at MediaTek, delivered a presentation on March 13, 2019 at the Alliance's Computer Vision and Visual AI Meetup.

Satya Mallick, Interim CEO, delivered a presentation on March 13, 2019 at the Alliance's Computer Vision and Visual AI Meetup.

AI is everywhere. It's a revolutionary technology that is steadily pervading more industries than you might imagine.

This white paper provides selected results from our most recent computer vision developer survey, conducted in November 2018.

An ISP in combination with a vision processor can deliver more robust processing capabilities than a vision processor can provide on its own.

Combining visible light image sensors with other situational and positional awareness sensor technologies can notably bolster autonomy.

Jeff Bier, Embedded Vision Alliance Founder, delivers a presentation on Dec. 4, 2018 to the Bay Area Computer Vision and Deep Learning Group.

Calibration is a key step in the process of going from raw video data to metadata that can be analyzed for actionable insights.

As we design deep learning networks, how can we quickly prototype the complete algorithm to get a sense of timing and performance on GPUs?

Pavan Kumar of Cocoon Cam delivers a presentation at the September 2018 Vision Industry and Technology Forum.