"Sensory Fusion for Scalable Indoor Navigation," a Presentation from Brain Corp

Oleg Sinyavskiy, Director of Research and Development at Brain Corp, presents the "Sensory Fusion for Scalable Indoor Navigation" tutorial at the May 2019 Embedded Vision Summit.

Indoor autonomous navigation requires a variety of sensors spanning different modalities. Achieving autonomous operation means merging RGB, depth, lidar and odometry data streams into a single coherent view of the environment. In this talk, Sinyavskiy describes his company's sensor-pack-agnostic sensory fusion approach, which allows it to take advantage of the latest sensor technology to achieve robust, safe and performant perception across a large fleet of industrial robots. He explains how Brain Corp addressed a number of sensory fusion challenges, such as robust and safe obstacle detection, fusing geometric and semantic information, and dealing with moving people and sensory blind spots.
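
To make the general idea concrete, below is a minimal Python sketch of one common fusion pattern: projecting returns from two range sensors (a lidar and a depth-camera slice) into a shared world frame using the odometry pose, then accumulating obstacle evidence in a 2D occupancy grid. This is an illustrative sketch under simplified assumptions, not Brain Corp's implementation; all names (Pose2D, scan_to_world, OccupancyGrid) are hypothetical.

import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float      # meters, world frame (from odometry)
    y: float
    theta: float  # radians

def scan_to_world(pose, ranges, angle_min, angle_increment):
    """Project a 2D range scan (lidar or a depth-camera slice) into the world frame."""
    points = []
    for i, r in enumerate(ranges):
        if not math.isfinite(r):
            continue  # drop invalid returns (blind spots, max-range readings)
        a = angle_min + i * angle_increment
        # Sensor frame -> world frame via the odometry pose.
        wx = pose.x + r * math.cos(pose.theta + a)
        wy = pose.y + r * math.sin(pose.theta + a)
        points.append((wx, wy))
    return points

class OccupancyGrid:
    """Accumulates obstacle evidence from multiple sensors in one grid."""
    def __init__(self, resolution=0.05):
        self.resolution = resolution
        self.hits = {}  # (cell_x, cell_y) -> evidence count

    def add_points(self, points, weight=1):
        for wx, wy in points:
            cell = (int(wx / self.resolution), int(wy / self.resolution))
            self.hits[cell] = self.hits.get(cell, 0) + weight

    def obstacles(self, threshold=2):
        # Require repeated or cross-sensor evidence before declaring an
        # obstacle, which guards against single-sensor noise.
        return [c for c, n in self.hits.items() if n >= threshold]

if __name__ == "__main__":
    pose = Pose2D(1.0, 2.0, math.pi / 2)
    grid = OccupancyGrid()
    # Both sensors see the same obstacle straight ahead at ~2.1 m.
    lidar = scan_to_world(pose, [2.0, 2.1, float("inf")], -0.1, 0.1)
    depth = scan_to_world(pose, [2.1, 2.0], 0.0, 0.1)
    grid.add_points(lidar)
    grid.add_points(depth)
    print(grid.obstacles(threshold=2))  # cells confirmed by both sensors

Because each sensor's returns are transformed into the same frame before voting, an obstacle confirmed by multiple modalities stands out from spurious single-sensor readings, and one sensor's blind spot can be covered by another, the kind of robustness the talk's fusion challenges call for.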