"Developing Computer Vision Algorithms for Networked Cameras," a Presentation from Intel

Dukhwan Kim, computer vision software architect at Intel, presents the "Developing Computer Vision Algorithms for Networked Cameras" tutorial at the May 2018 Embedded Vision Summit.

Video analytics is a key element of networked cameras. Computer vision capabilities such as pedestrian detection, face detection and recognition, and object detection and tracking are essential for effective video analysis. With recent advances in deep learning, many developers now use it to implement these capabilities.
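As an illustration of one capability mentioned above, a common building block of "tracking by detection" pipelines is associating detections across frames by intersection-over-union (IoU). This is a hedged sketch of that idea, not code from the talk; the function names and greedy matching strategy are illustrative choices.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_detections(tracks, detections, threshold=0.3):
    """Greedily match each existing track (id -> box) to the new
    detection with the highest IoU above the threshold; detections
    left unmatched would typically start new tracks."""
    assignments, used = {}, set()
    for tid, box in tracks.items():
        best, best_iou = None, threshold
        for i, det in enumerate(detections):
            if i in used:
                continue
            score = iou(box, det)
            if score > best_iou:
                best, best_iou = i, score
        if best is not None:
            assignments[tid] = best
            used.add(best)
    return assignments
```

For example, a track at `(0, 0, 10, 10)` matches a new detection at `(1, 1, 11, 11)` (IoU ≈ 0.68) while a distant box is left unmatched. Production trackers usually add motion prediction and optimal (Hungarian) assignment on top of this.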

However, developing a deep learning algorithm requires more than training models in Caffe or TensorFlow. Development should start from an understanding of the use cases, which shape the required training dataset, and should be tightly coupled to the hardware platform to achieve the best performance. In this presentation, Kim explains how Intel has developed and optimized production-quality video analytics algorithms for computer vision applications.
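One concrete aspect of binding an algorithm to its hardware platform is measuring per-frame inference latency on the target device. The sketch below is an assumed, framework-agnostic harness; `run_model` is a hypothetical stand-in for any real forward pass (Caffe, TensorFlow, or otherwise).

```python
import time

def run_model(frame):
    # Hypothetical placeholder workload standing in for a real
    # network forward pass on one frame.
    return sum(frame) / len(frame)

def mean_latency_ms(frames, warmup=2):
    """Run a few warmup iterations (to populate caches, JITs, and
    driver state), then report mean per-frame latency in milliseconds."""
    for f in frames[:warmup]:
        run_model(f)
    t0 = time.perf_counter()
    for f in frames:
        run_model(f)
    elapsed = time.perf_counter() - t0
    return 1000.0 * elapsed / len(frames)
```

Comparing this number across candidate platforms, model variants, and batch sizes is one simple way to keep algorithm choices grounded in the performance the hardware can actually deliver.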