
Embedded Vision Insights: May 15, 2012 Edition


Dear Colleague,

Microsoft's Kinect peripheral for the Xbox 360 game console and Windows 7-based PCs singlehandedly brought awareness of vision-based applications such as gesture interfaces and facial recognition to the masses. It's also the embedded vision foundation for a plethora of other system implementations, either based on Microsoft's O/S and thereby leveraging the official Kinect for Windows SDK, or via harnessing unofficial third-party toolsets. Not a day seemingly goes by without news of some cool new Kinect-based implementation; pipe organ control, for example, or augmented reality-augmented (pun intended) magic tricks, or Force-tapping video games, or holographic videoconferencing systems, or navigation assistance for the blind among us. Were I to try to even briefly mention each of the ones I've heard about in just the past few months, let alone explain them in depth, this introductory letter alone would be several pages in length. Instead, at least for the purposes of this particular newsletter, I'll focus on Microsoft-announced Kinect advancements.

  • Later this month, the company will release v1.5 of the Kinect SDK. According to the blog post revealing the news, "Among the most exciting new capabilities is Kinect Studio, an application that will allow developers to record, playback and debug clips of users engaging with their applications.  Also coming is what we call 'seated' or '10-joint' skeletal tracking, which provides the capability to track the head, neck and arms of either a seated or standing user." The enhancements will work in both standard and "near mode", and won't require new hardware.
  • Last November, the company announced that it was co-creating (with TechStars) an accelerator program intended to promote startups that are harnessing Kinect for commercial applications. Applications were accepted through late January; the victors will take part in a three-month incubation program at Microsoft, as well as receive $20,000 in seed funding. Early last month, the company unveiled the 11 winners, selected from nearly five hundred applications with concepts spanning nearly 20 different industries, including healthcare, education, retail, and entertainment.
  • Kinect, at least in its Xbox 360 form, will likely soon show up in a lot more homes. That's because Microsoft, taking a page from cellular service providers, just announced a subsidized version of the 4 GByte console-plus-peripheral bundle. You pay only $99 upfront, but commit to a two-year Xbox LIVE Gold subscription at $14.99/month. At the end of the two-year term, you've shelled out roughly $100 more than if you had bought the console-plus-subscription in one shot, but it's an attractive entry to the Kinect experience for folks without a lot of extra cash on hand.
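A quick back-of-the-envelope check on that pricing, using only the figures mentioned above (the two-year total below is computed, not quoted from Microsoft):

```python
# Two-year cost of the subsidized Xbox 360 + Kinect bundle described above:
# $99 upfront plus a 24-month Xbox LIVE Gold commitment at $14.99/month.
upfront = 99.00
monthly_gold = 14.99
months = 24

subsidized_total = upfront + monthly_gold * months
print(f"Two-year total: ${subsidized_total:.2f}")  # Two-year total: $458.76
```

At roughly $459 all-in, the subsidy spreads the cost out rather than reducing it, which is the trade-off described above.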
  • And this last one should be treated as a rumor, at least for the moment. The most recent upgrade of the Xbox 360 user interface, which rolled out last December, focused the bulk of its Kinect attention on the peripheral's array microphone audio input subsystem. Persistent speculation fueled by unnamed insiders, however, suggests that the next Xbox 360 UI upgrade, currently being tested, will showcase numerous vision enhancements. Specifically, while the console currently supports Bing search engine-powered media explorations on various websites, Microsoft will supposedly soon bring a full-featured Internet Explorer browsing experience to the Xbox 360, powered by both voice commands and gestures.

There's plenty more where those came from; the best ways to track Microsoft's ongoing Kinect developments are to regularly monitor the company blog (via RSS if you wish), Twitter feed and Facebook page.

I'm curious: how many of you are planning on using Kinect (either sanctioned on the Xbox 360 or PC, or unsanctioned on another platform via enthusiast-developed SDKs) as the basis for your embedded vision implementations? And how many others of you, while you might not be harnessing Kinect directly, are still leveraging one or several of its technology building blocks: the PrimeSense depth-map processor, for example, or the structured light depth-discerning technique? I look forward to hearing from you; I'll certainly keep your comments anonymous if you wish.
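For readers exploring the structured light technique just mentioned: such sensors project a known infrared pattern and triangulate depth from the pattern's observed shift. A minimal sketch of the underlying geometry, under idealized pinhole-camera assumptions (the calibration numbers below are illustrative only, not actual Kinect values):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth (meters) from observed pattern disparity (pixels).

    Simplified pinhole model: a structured-light sensor projects a known
    pattern, and the shift (disparity) of each pattern dot between its
    expected and observed image position encodes depth as z = f * b / d.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 580 px focal length, 7.5 cm projector-to-camera
# baseline, 20 px measured disparity.
z = depth_from_disparity(580.0, 0.075, 20.0)
print(f"{z:.3f} m")  # 580 * 0.075 / 20 = 2.175 m
```

Note the inverse relationship: depth resolution degrades with distance, which is why such sensors quote a bounded working range.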

Thanks as always for your support of the Embedded Vision Alliance, and for your interest in and contributions to embedded vision technologies, products and applications.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

Tobii Eye-Tracking User Interface Demonstration
Tobii Technology demonstrates its Gaze eye tracking-based user interface to Embedded Vision Alliance Founder Jeff Bier at the January 2012 Consumer Electronics Show.

Introducing Analog Devices' Blackfin ADSP-BF60x Processors
This video describes the new ADSP-BF60x series of high-performance Blackfin Processors. The BF608 and 609 are optimized for embedded vision applications, while the BF606 and 607 are optimized for high-performance general purpose DSP applications.

More Videos

FEATURED ARTICLES

Improve Perceptual Video Quality: Skin-Tone Macroblock Detection
Accurate skin-tone reproduction is important in conventional still and video photography applications, but it's also critical in some embedded vision implementations: accurate facial detection and recognition, for example. And intermediary lossy compression between the camera and processing circuitry is common in configurations that network-link the two function blocks, either within a LAN or over a WAN (i.e. the "cloud"). More generally, the technique described uses dilation and other algorithms to find regions of interest, which is relevant to many vision applications. And finding computationally efficient implementations of vision algorithms is obviously a core concern for embedded vision. More
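As context for the article above: one widely used way to classify skin-tone pixels (a common heuristic, not necessarily the exact algorithm the article describes) is to convert to YCbCr and threshold the chroma channels, since skin chroma clusters tightly largely independent of luma. A sketch, using ITU-R BT.601 conversion coefficients and empirical chroma ranges:

```python
def is_skin_tone(r: int, g: int, b: int) -> bool:
    """Rough per-pixel skin-tone test via fixed chroma thresholds in YCbCr.

    Converts RGB to Cb/Cr chroma (ITU-R BT.601 coefficients) and accepts
    pixels whose chroma falls within an empirical skin range. Luma is
    ignored, making the test fairly robust to lighting variation.
    """
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return 77 <= cb <= 127 and 133 <= cr <= 173

# A macroblock could then be flagged as "skin" when, say, a majority of
# its pixels pass this test.
print(is_skin_tone(200, 150, 120))  # warm flesh tone -> True
print(is_skin_tone(40, 90, 200))    # blue -> False
```

An encoder can then spend extra bits (a finer quantization step) on macroblocks flagged as skin, improving perceived quality where viewers look most closely.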

Embedded Vision In Medicine: Let Smartphone Apps Inspire Your Design Decisions
The tantalizing potential of embedded vision technology is being tapped by diverse applications. Medical equipment is one key embedded vision early-adopter. After all, in this era of ever-increasing pressure to reduce health care costs, any robust technology assistance to human medical caregivers, speeding and improving the accuracy of diagnoses, is welcomed. How can you harness embedded vision capabilities in your next-generation medical equipment designs? For some clues, take a look at what clever software developers are doing with smartphones and tablets. More

More Articles

FEATURED NEWS

Samsung's Galaxy S III: Embedded Vision In Smartphones Goes Mainstream

Gesture Interfaces Via Sound: Clever Ideas Abound

Panorama Mode: Embedded Vision Processing Blends Pixels Together Via Microcode

Image Analysis With Cloud-Based Cerebral Cortex Assistance

Makeup Selection: An Embedded Vision-Based Determination

More News
