Fundamentals of Monocular SLAM

Tuesday, May 21, 4:20 PM - 5:25 PM
Room 203/204

Simultaneous Localization and Mapping (SLAM) refers to a class of algorithms that enables a device with one or more cameras and/or other sensors to build an accurate map of its surroundings, determine the device's location relative to that map, and track its path as it moves through the environment. This is a key capability for many new use cases and applications, especially in the domains of augmented reality, virtual reality and mobile robots.

Monocular SLAM is a type of SLAM that relies exclusively on a monocular image sequence captured by a moving camera. In this talk we introduce the fundamentals of monocular SLAM algorithms, from input images to 3D maps. We take a close look at key components of monocular SLAM algorithms, including ORB (Oriented FAST and Rotated BRIEF) feature extraction, fundamental-matrix-based pose estimation, stitching together poses using translation estimation, and loop closure. We also discuss implementation considerations for these components, including the arithmetic precision required to achieve acceptable mapping and tracking accuracy.
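To give a flavor of the pose-estimation component mentioned above: the fundamental matrix relating two views can be recovered from eight or more point correspondences (in a real pipeline, matched ORB features filtered with RANSAC) using the classic normalized eight-point algorithm. The sketch below is a minimal NumPy illustration of that algorithm under our own synthetic setup, not the implementation presented in the talk; the function name and all parameters are ours.

```python
import numpy as np

def eight_point_fundamental(pts1, pts2):
    """Estimate F such that x2^T F x1 = 0 from >= 8 matches (Nx2 arrays).

    Minimal illustrative sketch: Hartley-normalized eight-point algorithm.
    """
    def normalize(pts):
        # Translate centroid to origin, scale so mean distance is sqrt(2)
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))])
        return (T @ ph.T).T, T

    p1, T1 = normalize(pts1)
    p2, T2 = normalize(pts2)

    # Each correspondence contributes one row of the constraint A f = 0
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])

    # f is the singular vector of A with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)

    # Enforce the rank-2 constraint (a valid F has a zero singular value)
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    F = U @ np.diag(S) @ Vt

    # Undo the normalization and fix the overall scale
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

In a full monocular SLAM front end, the estimated F is combined with the camera intrinsics to obtain the essential matrix, which is then decomposed into the rotation and (up-to-scale) translation between the two camera poses.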


Shrinivas Gadkari

Design Engineering Director, Cadence

Shrinivas Gadkari is a Design Engineering Director in the Tensilica Processor Division of Cadence Inc. He has been involved with algorithm implementation and processor architecture for multiple Tensilica products in the Vision and Communication space for more than 10 years. He has worked in India and the US in the general area of DSP application development for over 20 years. He holds a Ph.D. in Electrical and Computer Engineering from the University of California, Santa Barbara.

See you at the Summit! May 20-23 in Santa Clara, California!
Register today and reserve your hotel room!