
Technical Articles

Real-time inputs are classified using a pretrained classification model to decide whether the object of interest is present.
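A minimal sketch of the presence-detection step described above, assuming a pretrained model that emits per-class logits for each input frame (the `softmax`/`object_present` helpers and the 0.5 threshold are illustrative, not from the original article):

```python
import math

def softmax(logits):
    """Convert raw model logits to class probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def object_present(logits, object_class=1, threshold=0.5):
    """Decide object presence by thresholding the object-class probability.

    `logits` stands in for the output of a pretrained classification
    model run on one real-time input frame.
    """
    probs = softmax(logits)
    return probs[object_class] >= threshold

# Logits strongly favoring the object class report presence;
# logits favoring the background class do not.
print(object_present([0.2, 2.5]))
print(object_present([2.5, 0.2]))
```

In a deployed system the logits would come from the model inference engine, and the threshold would be tuned against the application's tolerance for false positives.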

FPGAs provide massively parallel architectures, efficient DSP resources, and large amounts of on-chip memory and bandwidth.

Deep learning has been enabled by, among other things, the steadily increasing processing "muscle" of CPUs aided by co-processors.


What was a buzz a couple of years ago is now a roar. The drumbeat of vision-related acquisitions is quickening, and investment dollars are pouring in.

This article describes model estimation along with the motion correction stages of smoothing, rolling shutter correction, and frame warping.
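The smoothing and frame-warping stages mentioned above can be sketched as follows. This is an illustrative moving-average smoother over a per-frame motion estimate, not the article's actual algorithm; the function names and window radius are assumptions:

```python
def smooth_trajectory(motion, radius=2):
    """Moving-average smoothing of a 1-D per-frame motion estimate.

    `motion` is a list of estimated per-frame camera displacements
    (the output of the model-estimation stage).
    """
    smoothed = []
    for i in range(len(motion)):
        lo = max(0, i - radius)
        hi = min(len(motion), i + radius + 1)
        window = motion[lo:hi]
        smoothed.append(sum(window) / len(window))
    return smoothed

def warp_offsets(motion, radius=2):
    """Per-frame corrections that shift each frame toward the smoothed path.

    The frame-warping stage would apply these offsets to the image data.
    """
    smoothed = smooth_trajectory(motion, radius)
    return [s - m for s, m in zip(smoothed, motion)]
```

A steady camera path yields zero corrections, while a sudden jitter spike produces offsets that pull the affected frames back toward the smoothed trajectory. Rolling shutter correction would additionally vary the warp per scanline, which this sketch omits.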

The substantial parallel processing resources in modern GPUs make them a natural choice for implementing vision-processing functions.


It was clear at the annual Embedded Vision Summit that the time of computer vision and deep learning on mobile devices had finally arrived.

Integrating an embedded video stabilization solution into a product's imaging pipeline adds significant value for the customer.

This chapter describes how to reuse the code from this sample, the limitations of the test method, and a method for analyzing the results.


This chapter describes the requirements for running this sample and the example hardware used to produce the results in this guide.