
Welcome

The Embedded Vision Academy is a free online training facility for embedded vision product developers. This program provides educational and other resources to help engineers integrate visual intelligence―the ability of electronic systems to see and understand their environments―into next-generation embedded and consumer devices.

The goal of the Academy is to make it possible for engineers worldwide to gain the skills needed for embedded vision product and application development. Course material in the Embedded Vision Academy spans a wide range of vision-related subjects, from basic vision algorithms to image pre-processing, image sensor interfaces, and software development techniques and tools such as OpenCV. Courses will incorporate training videos, interviews, demonstrations, downloadable code, and other developer resources―all oriented towards developing embedded vision products.

The Alliance plans to continuously expand the curriculum of the Embedded Vision Academy, so engineers can return to the site on an ongoing basis for new courses and resources. The listing below showcases the most recently published Embedded Vision Academy content. Use the links on the right side of this page to access the full suite of embedded vision content, sorted by technology, application, function, viewer experience level, provider, and type.


This article analyzes the three main computation blocks of the mixed-radix FFT in a step-by-step approach, covering both theory and implementation.
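For readers who want a feel for the decomposition that mixed-radix FFTs build on, the sketch below (not taken from the article, and not necessarily matching its exact block structure) splits a length-N DFT with N = N1 x N2 into three steps: N2 inner DFTs of length N1, a twiddle-factor multiplication, and N1 outer DFTs of length N2. NumPy's FFT is used for the sub-transforms purely for brevity.

```python
import numpy as np

def mixed_radix_fft_step(x, n1, n2):
    """One Cooley-Tukey mixed-radix step for len(x) == n1 * n2:
    (1) n2 inner DFTs of length n1, (2) twiddle-factor multiply,
    (3) n1 outer DFTs of length n2, followed by output re-indexing."""
    N = n1 * n2
    assert x.shape == (N,)
    # Arrange the input so that row m1, column m2 holds x[n2*m1 + m2].
    xs = x.reshape(n1, n2)
    # Step 1: DFT of length n1 down each column (n2 of them).
    inner = np.fft.fft(xs, axis=0)               # inner[k1, m2]
    # Step 2: twiddle factors exp(-2j*pi*k1*m2/N).
    k1 = np.arange(n1)[:, None]
    m2 = np.arange(n2)[None, :]
    twiddled = inner * np.exp(-2j * np.pi * k1 * m2 / N)
    # Step 3: DFT of length n2 along each row (n1 of them).
    outer = np.fft.fft(twiddled, axis=1)          # outer[k1, k2]
    # Output index is k = k1 + n1*k2, so transpose before flattening.
    return outer.T.reshape(N)

# Check the decomposition against a direct FFT of the full signal.
x = np.random.randn(12) + 1j * np.random.randn(12)
assert np.allclose(mixed_radix_fft_step(x, 3, 4), np.fft.fft(x))
```

Applying the same split recursively to the sub-transforms is what turns the O(N^2) direct DFT into an O(N log N) FFT.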

Stefan Heck of Nauto delivers a presentation at the December 2015 Embedded Vision Alliance Member Meeting.

This article builds up the background for the 1D complex-to-complex FFT algorithm, pointing out the limits of computing the DFT directly.
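As a quick illustration of those limits (a textbook definition, not code from the article), the sketch below evaluates X[k] = sum_n x[n] * exp(-2j*pi*n*k/N) with two nested loops, which costs on the order of N^2 complex multiply-adds per transform; an FFT brings this down to O(N log N).

```python
import numpy as np

def dft_direct(x):
    """Direct evaluation of X[k] = sum_n x[n] * exp(-2j*pi*n*k/N).
    The nested loops perform N*N complex multiply-adds, so the cost
    grows as O(N^2); an FFT reduces this to O(N log N)."""
    N = len(x)
    X = np.zeros(N, dtype=complex)
    for k in range(N):
        for n in range(N):
            X[k] += x[n] * np.exp(-2j * np.pi * n * k / N)
    return X

x = np.random.randn(256) + 1j * np.random.randn(256)
assert np.allclose(dft_direct(x), np.fft.fft(x))  # same result, far slower
```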

Professor Roberto Manduchi of U.C. Santa Cruz delivers a presentation at the December 2015 Embedded Vision Alliance Member Meeting.

Neil Trevett of Khronos and NVIDIA delivers a presentation at the December 2015 Embedded Vision Alliance Member Meeting.

This white paper covers the basics of convolutional neural networks (CNNs), including a description of the various layers used.
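As a minimal companion to that description (the layer shapes and ordering here are generic assumptions, not taken from the white paper), the NumPy sketch below pushes a single-channel image through the four layer types most CNN introductions cover: convolution, ReLU, max pooling, and a fully connected layer.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution (really cross-correlation, as in most CNNs)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)             # element-wise non-linearity

def max_pool(x, size=2):
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# One forward pass: conv -> ReLU -> max pool -> fully connected.
image = np.random.randn(28, 28)
kernel = np.random.randn(3, 3)             # weights would normally be learned
features = max_pool(relu(conv2d(image, kernel)))
weights = np.random.randn(10, features.size)
scores = weights @ features.ravel()        # fully connected layer: 10 class scores
print(scores.shape)                        # (10,)
```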

Middleware libraries, together with SDAccel, enable software developers to program DNNs in their native C/C++ environment.

The OpenCL framework enables the development of programs that execute across programmable logic fabric and other heterogeneous processors.
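For readers new to that model, the sketch below shows the basic host/kernel split in OpenCL using the pyopencl bindings: the host builds a kernel from source and dispatches it to whichever device the runtime exposes. Targeting programmable logic through SDAccel follows the same pattern but compiles the kernel for the FPGA fabric, which this generic example does not attempt.

```python
import numpy as np
import pyopencl as cl

# Pick an available platform/device and create a command queue for it.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# Build the kernel from OpenCL C source; each work-item adds one element.
prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

# Launch one work-item per element, then read the result back to the host.
prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```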

Ken Lee of VanGogh Imaging delivers a technical presentation at the May 2015 Embedded Vision Summit.

Herman Yau of Tend delivers a technical presentation at the May 2015 Embedded Vision Summit.