Embedded Vision Insights: May 15, 2012 Edition
Microsoft's Kinect peripheral for the Xbox 360 game console and Windows 7-based PCs singlehandedly brought awareness of vision-based applications such as gesture interfaces and facial recognition to the masses. It's also the embedded vision foundation for a plethora of other system implementations, either running Microsoft's operating system and thereby leveraging the official Kinect for Windows SDK, or harnessing unofficial third-party toolsets. Seemingly not a day goes by without news of some cool new Kinect-based implementation: pipe organ control, for example, or augmented reality-augmented (pun intended) magic tricks, or Force-tapping video games, or holographic videoconferencing systems, or navigation assistance for the blind among us. Were I to even briefly mention each of the implementations I've heard about in just the past few months, let alone explain them in depth, this introductory letter alone would run several pages. Instead, at least for the purposes of this particular newsletter, I'll focus on Microsoft-announced Kinect advancements.
- Later this month, the company will release v1.5 of the Kinect SDK. According to the blog post revealing the news, "Among the most exciting new capabilities is Kinect Studio, an application that will allow developers to record, playback and debug clips of users engaging with their applications. Also coming is what we call 'seated' or '10-joint' skeletal tracking, which provides the capability to track the head, neck and arms of either a seated or standing user." The enhancements will work in both standard and "near" modes, and won't require new hardware.
- Last November, the company announced that it was...