"Real-time Calibration for Stereo Cameras Using Machine Learning," a Presentation from Lucid VR

Sheldon Fernandes, Senior Software and Algorithms Engineer at Lucid VR, presents the "Real-time Calibration for Stereo Cameras Using Machine Learning" tutorial at the May 2018 Embedded Vision Summit.

Calibration involves capturing raw data and processing it to obtain useful information about a camera's properties. It is essential to ensure that a camera's output is as close as possible to what it "sees." Calibration for a stereo pair of cameras is even more critical because it also recovers the cameras' positions relative to each other. These extrinsic parameters ensure that stereo image data can be properly rectified for viewing, and they enable further advanced processing, such as computing disparity and depth maps and performing 3D reconstruction.
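As a small illustration of why these parameters matter (not taken from the talk; the focal length and baseline values below are hypothetical), depth can be recovered from disparity only when the calibration is accurate, via the relation Z = f · B / d:

```python
import numpy as np

# Hypothetical calibration values (illustrative only, not from the talk):
focal_px = 700.0   # focal length in pixels (an intrinsic parameter)
baseline_m = 0.06  # distance between the two cameras in meters (an extrinsic parameter)

def disparity_to_depth(disparity_px):
    """Convert a disparity map (pixels) to depth (meters) via Z = f * B / d."""
    d = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(d.shape, np.inf)  # zero disparity means a point at infinity
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Larger disparity corresponds to a closer object.
print(disparity_to_depth(np.array([1.0, 7.0, 42.0])))  # -> [42.  6.  1.]
```

Because depth scales with both f and B, an error in either calibrated quantity propagates directly into every depth estimate downstream.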

In order for advanced processing to work correctly, calibration data should be error-free. However, a camera's extrinsic properties can change over time due to aging, heat and other external conditions. In this presentation, Fernandes discusses calibration techniques and a model for calibration, and proposes advanced techniques that use machine learning to estimate changes in extrinsic parameters in real time.
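One generic signal of extrinsic drift (a common heuristic, not necessarily the method presented in the talk; all point coordinates below are hypothetical) is vertical disparity: in a perfectly rectified stereo pair, matched points lie on the same image row, so any systematic row offset indicates that the stored extrinsics no longer match reality:

```python
import numpy as np

# Hypothetical matched keypoints (x, y) from the left and right images.
left_pts = np.array([[120.0, 80.0], [300.0, 150.0], [210.0, 220.0]])
right_pts = np.array([[100.0, 81.5], [285.0, 151.0], [195.0, 222.0]])

def vertical_disparity_error(left, right):
    """Mean absolute row difference between matched points, in pixels.

    Near zero for a well-calibrated, rectified pair; grows as the
    extrinsic parameters drift from their calibrated values.
    """
    return float(np.mean(np.abs(left[:, 1] - right[:, 1])))

err = vertical_disparity_error(left_pts, right_pts)
print(f"mean vertical disparity: {err:.2f} px")  # -> 1.50 px
```

A metric like this can be computed from live frames without a calibration target, which is what makes real-time recalibration feasible in the first place.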