May 2014 Embedded Vision Summit Technical Presentation: "Evolving Algorithmic Requirements for Recognition and Classification in Augmented Reality," Simon Morris, CogniVue
Simon Morris, CEO of CogniVue, presents the "Evolving Algorithmic Requirements for Recognition and Classification in Augmented Reality" tutorial at the May 2014 Embedded Vision Summit.
Augmented reality (AR) applications depend on accurately computing a camera's 6-degrees-of-freedom (6DOF) position in three-dimensional space, also known as its "pose". In vision-based approaches to AR, the most common and basic technique determines the camera's pose from known fiducial markers (typically square, black-and-white patterns that encode information about the required graphic overlay). The detected position of the known marker, combined with the camera calibration, is used to accurately overlay the 3D graphics.
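The presentation itself does not include code, but the marker-based pipeline described above can be sketched in a few lines. The following NumPy-only example (all function and variable names are our own, not from the talk) recovers a camera pose from the four detected corners of a planar square marker, using the standard planar-homography decomposition H = K [r1 r2 t]:

```python
import numpy as np

def pose_from_marker(img_pts, marker_pts, K):
    """Recover camera pose (R, t) from a planar square marker.

    img_pts: (4, 2) detected corner pixels.
    marker_pts: (4, 2) marker corners in the marker plane (z = 0), metres.
    K: 3x3 camera intrinsics matrix (from calibration).
    Illustrative sketch only, assuming noise-free corner detections.
    """
    # Direct Linear Transform: stack two equations per correspondence
    # and solve A h = 0 for the 3x3 homography.
    A = []
    for (X, Y), (u, v) in zip(marker_pts, img_pts):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)

    # Strip the intrinsics, then normalise so the first column (r1)
    # has unit length; H is only defined up to scale and sign.
    M = np.linalg.inv(K) @ H
    M /= np.linalg.norm(M[:, 0])
    if M[2, 2] < 0:          # keep the marker in front of the camera
        M = -M
    r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])

    # Re-orthonormalise R: with noisy corners it is only approximately
    # a rotation matrix.
    U, _, Vt2 = np.linalg.svd(R)
    return U @ Vt2, t
```

In a production AR system this role is typically filled by a library routine (e.g. a PnP solver), with the resulting R and t feeding directly into the graphics overlay transform.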
In marker-less AR, finding the camera pose requires significantly more complex and sophisticated algorithms, such as disparity mapping, feature detection, optical flow, and object classification. This presentation compares and contrasts the typical algorithmic processing flow and processor loading for marker-based and marker-less AR. Processing load and power requirements are discussed in terms of the constraints associated with mobile platforms.
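To make the extra algorithmic cost of marker-less AR concrete, here is a minimal sketch of one of its building blocks, optical flow, in the basic Lucas-Kanade least-squares form. This is our own illustrative NumPy implementation, not code from the presentation, and real systems run it pyramidally over thousands of feature points per frame (which is where the processor loading discussed in the talk comes from):

```python
import numpy as np

def lucas_kanade(img0, img1, x, y, win=7):
    """Estimate the optical-flow vector (dx, dy) at pixel (x, y)
    between two grayscale frames, using the basic single-window
    Lucas-Kanade least-squares solution. Assumes small sub-pixel
    motion and brightness constancy."""
    h = win // 2
    # Spatial gradients of the first frame (central differences)
    # and the temporal difference between frames.
    Ix = np.gradient(img0, axis=1)
    Iy = np.gradient(img0, axis=0)
    It = img1 - img0
    sl = np.s_[y - h:y + h + 1, x - h:x + h + 1]
    # One linear equation per pixel in the window:
    #   Ix * dx + Iy * dy = -It
    A = np.column_stack([Ix[sl].ravel(), Iy[sl].ravel()])
    b = -It[sl].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (dx, dy)
```

Even this toy version solves a least-squares system per tracked point per frame; scaled to full-frame feature tracking plus classification, it illustrates why marker-less AR strains mobile power budgets far more than marker detection does.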