
"Building a Typical Visual SLAM Pipeline," a Presentation from Virgin Hyperloop One


YoungWoo Seo, Senior Director at Virgin Hyperloop One, presents the "Building a Typical Visual SLAM Pipeline" tutorial at the May 2018 Embedded Vision Summit.

Maps are important for both human and robot navigation, and SLAM (simultaneous localization and mapping) is one of the core techniques for map-based navigation. As SLAM algorithms have matured and hardware has improved, SLAM is spreading into many new applications, from self-driving cars to floor-cleaning robots. In this talk, Seo walks through a typical pipeline for SLAM, specifically visual SLAM.

A typical visual SLAM pipeline, based on visual feature tracking, begins by extracting visual features and matching them against previously surveyed features. It then estimates the current camera pose from the matching results, runs a (local) bundle adjustment to jointly optimize camera poses and map points, and finally performs a loop-closure routine to complete the map. While explaining each of these steps, Seo also covers challenges, tips, open-source libraries, performance metrics and publicly available benchmark datasets.
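The stages above can be sketched in code. The following is a minimal, illustrative skeleton only: all function names and data are hypothetical, the descriptor matching is a toy nearest-neighbor search, and the pose-estimation and optimization stages are stubs. A real system would use a library such as OpenCV for feature extraction and matching, and a solver such as g2o or Ceres for bundle adjustment.

```python
def match_features(current, surveyed, max_dist=2.0):
    """Match each current-frame descriptor to its nearest previously
    surveyed descriptor by Euclidean distance, keeping only matches
    whose distance is at most max_dist (a hypothetical threshold)."""
    matches = []
    for i, c in enumerate(current):
        best_j, best_d = None, float("inf")
        for j, s in enumerate(surveyed):
            d = sum((a - b) ** 2 for a, b in zip(c, s)) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None and best_d <= max_dist:
            matches.append((i, best_j))
    return matches


def run_pipeline(frame_descriptors, map_descriptors):
    """Walk one frame through the pipeline stages described in the talk:
    1. feature extraction (assumed done upstream -> frame_descriptors)
    2. matching against previously surveyed features
    3. camera-pose estimation from the matches (stubbed here)
    4. local bundle adjustment and loop closure (stubbed here)"""
    matches = match_features(frame_descriptors, map_descriptors)
    # A real pose estimator (e.g., PnP + RANSAC) needs a minimum number
    # of correspondences; here we only check that enough matches exist.
    pose = "estimated" if len(matches) >= 3 else "lost"
    return matches, pose
```

For example, matching two toy 2-D descriptors against a small map keeps only the pair that falls under the distance threshold; descriptors in practice are high-dimensional (e.g., 256-bit ORB binary strings) and matched with approximate nearest-neighbor search rather than this brute-force loop.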