Embedded Vision Insights: September 23, 2014 Edition
In this edition of Embedded Vision Insights:
- Virtual and Augmented Reality Opportunities
- Google's Project Tango
- Vision in Robotic Vacuum Cleaners
- Embedded Vision Community Conversations
- Embedded Vision in the News
|LETTER FROM THE EDITOR|
Virtual reality (VR) is a hot technology right now, especially for gaming applications. Current market leader Oculus held a developer conference in Hollywood last weekend and wowed attendees with its latest iteration of the Rift headset design, code-named "Crescent Bay". Samsung and Sony are actively developing their own VR gear, the former in partnership with Oculus. And plenty of other companies, such as Vrvana with its Totem head-mounted display (HMD), are waiting in the wings for the market to embrace VR.
There's only one problem, as panelists at an Oculus developer conference session pointed out: while a VR headset's embedded sensors can accurately determine your head's orientation, viewing direction, and motion, there's currently no integrated way for it to discern what the rest of your body is doing. Are your feet dancing? Are your arms waving? What are your hands, and the fingers attached to them, doing? The Oculus Rift by itself doesn't have a clue, and wrist- and ankle-mounted motion sensor accessories are cumbersome and provide only rudimentary additional data. As a result, your own body cannot appear in the virtual world in a realistic way.
Enter embedded vision with the solution. As demonstrated by SoftKinetic's Tim Droz at a recent Alliance Member Meeting, a depth camera mounted to the...