"Real-time Calibration for Stereo Cameras Using Machine Learning," a Presentation from Lucid VR

Sheldon Fernandes, Senior Software and Algorithms Engineer at Lucid VR, presents the "Real-time Calibration for Stereo Cameras Using Machine Learning" tutorial at the May 2018 Embedded Vision Summit.

Calibration involves capturing raw data and processing it to get useful information about a camera's properties. Calibration is essential to ensure that a camera's output is as close as possible to what it "sees." Calibration for a stereo pair of cameras is even more critical because it also obtains data on the cameras' positions relative to each other. These extrinsic parameters ensure that 3D data can be properly rectified for viewing, and enable further advanced processing, such as obtaining disparity and depth maps and performing 3D reconstruction.
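As a concrete illustration of the disparity-to-depth relationship mentioned above (not taken from the talk itself), the sketch below assumes a rectified stereo pair under a simple pinhole model, where depth Z = f · B / d for focal length f in pixels, baseline B, and disparity d. The numbers used are hypothetical placeholders, not Lucid VR camera parameters.

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth in meters for one pixel of a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        # Zero or negative disparity has no valid depth under this model.
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical example: 700 px focal length, 65 mm baseline, 35 px disparity.
depth = depth_from_disparity(35.0, 700.0, 0.065)
print(depth)  # 1.3 meters
```

This is also why accurate extrinsics matter: rectification aligns the two images so that matching points lie on the same row, and any residual misalignment directly corrupts the disparity d that feeds this formula.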

In order for advanced processing to work correctly, calibration data should be error-free. However, a camera's extrinsic properties can change over time due to age, heat and other external conditions. In this presentation, Fernandes discusses calibration techniques and a model for calibration, and proposes advanced techniques using machine learning to estimate changes in extrinsic parameters in real time.