

The Embedded Vision Academy is a free online training facility for embedded vision product developers. This program provides educational and other resources to help engineers integrate visual intelligence―the ability of electronic systems to see and understand their environments―into next-generation embedded and consumer devices.

The goal of the Academy is to make it possible for engineers worldwide to gain the skills needed for embedded vision product and application development. Course material in the Embedded Vision Academy spans a wide range of vision-related subjects, from basic vision algorithms to image pre-processing, image sensor interfaces, and software development techniques and tools such as OpenCV. Courses will incorporate training videos, interviews, demonstrations, downloadable code, and other developer resources―all oriented towards developing embedded vision products.

The Embedded Vision Alliance™ plans to continuously expand the curriculum of the Embedded Vision Academy, so engineers will be able to return to the site on an ongoing basis for new courses and resources. The listing below showcases the most recently published Embedded Vision Academy content. Use the links on the right side of this page to access the full suite of embedded vision content, sorted by technology, application, function, viewer experience level, provider, and type.

Mark Jamtgaard and Bill Adamec of RetailNext deliver a presentation at the December 2016 Embedded Vision Alliance Member Meeting.

Peter McGuinness of the Khronos Group delivers a presentation at the December 2016 Embedded Vision Alliance Member Meeting.

Ben Chehebar of Compology delivers a presentation at the December 2016 Embedded Vision Alliance Member Meeting.

Real-time assessments of age range, gender, ethnicity, gaze direction, attention span, emotional state, and other attributes are now possible.

Cameras, along with the interfaces that connect them to the remainder of the system, are critical aspects of any computer vision design.

This chapter describes how to re-use the sample implementation and test the performance of optimizations.

This chapter describes the requirements for running this sample, along with the example test platform that generates the results shown in this guide.

This chapter presents conclusions drawn from the optimization process.

This chapter describes some further optimizations to the kernel.

This white paper explores INT8 deep learning operations implemented on the Xilinx DSP48E2 slice, and how this approach compares with implementations on other FPGAs.