Welcome

The Embedded Vision Academy is a free online training facility for embedded vision product developers. This program provides educational and other resources to help engineers integrate visual intelligence―the ability of electronic systems to see and understand their environments―into next-generation embedded and consumer devices.

The goal of the Academy is to make it possible for engineers worldwide to gain the skills needed for embedded vision product and application development. Course material in the Embedded Vision Academy spans a wide range of vision-related subjects, from basic vision algorithms to image pre-processing, image sensor interfaces, and software development techniques and tools such as OpenCV. Courses will incorporate training videos, interviews, demonstrations, downloadable code, and other developer resources―all oriented towards developing embedded vision products.

The Embedded Vision Alliance™ plans to continuously expand the curriculum of the Embedded Vision Academy, so engineers can return to the site on an ongoing basis for new courses and resources. The listing below showcases the most recently published Embedded Vision Academy content. Use the links on the right side of this page to access the full suite of embedded vision content, sorted by technology, application, function, viewer experience level, provider, and type.

- Pierre Paulin of Synopsys delivers an Enabling Technologies presentation at the May 2017 Embedded Vision Summit.
- Jeff McVeigh of Intel delivers a Business Insights presentation at the May 2017 Embedded Vision Summit.
- Tim Ramsdale of ARM delivers a Business Insights presentation at the May 2017 Embedded Vision Summit.
- Tom Michiels of Synopsys delivers a Technical Insights presentation at the May 2017 Embedded Vision Summit.
- Frank Brill of Cadence and the Khronos Group delivers a Technical Insights presentation at the May 2017 Embedded Vision Summit.
- Neil Trevett of the Khronos Group and NVIDIA delivers a Technical Insights presentation at the May 2017 Embedded Vision Summit.
- Image processing can optionally take place within the edge device, in a network-connected cloud server, or subdivided among these locations.
- Vinod Kathail of Xilinx delivers an Enabling Technologies presentation at the May 2017 Embedded Vision Summit.
- Professor Jitendra Malik of UC Berkeley delivers the Tuesday keynote presentation at the May 2017 Embedded Vision Summit.
- Michael Melle and Felix Nikolaus of Allied Vision deliver an Enabling Technologies presentation at the May 2017 Embedded Vision Summit.