Back in July, I mentioned that the upcoming "Kinect 2.0" peripheral for the next-generation Xbox One would transition from the structured light technology used in its first-generation Kinect predecessor to a time-of-flight approach.
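The core idea behind time-of-flight ranging is simple: a light pulse travels to the target and back, so the one-way distance is half the round-trip path. Here's a minimal sketch of that arithmetic (an illustration of the general principle only, not Microsoft's implementation; all names are mine):

```python
# Minimal time-of-flight ranging sketch (illustrative, not a sensor driver).
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_depth_m(round_trip_time_s: float) -> float:
    """Depth from the round-trip time of an emitted light pulse.

    The pulse travels to the target and back, so the one-way
    distance is half the total path length: d = c * t / 2.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# A target roughly 3 m away returns the pulse after ~20 nanoseconds.
print(tof_depth_m(20e-9))
```

Real sensors typically measure phase shift of modulated light rather than timing individual pulses, but the depth recovered is equivalent to this round-trip calculation.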
Embedded Vision Alliance member AMD's yearly Developer Summit (APU13, for short) will take place next month, November 11-13 to be exact, in San Jose, California. Alliance founder Jeff Bier will be one of the featured speakers at the conference, delivering a presentation entitled "Creating Smarter Applications and Systems Through Visual Intelligence." Here's an abstract:
Professor Brian Lovell of the University of Queensland, Australia, who's also Chief Technical Officer at Imagus Technology, is a well-known figure in the fields of computer vision and pattern recognition. Lovell is also a long-time advisor to (and advocate of) the Embedded Vision Alliance.
Kudos to Gizmodo for the heads-up on a just-published, very informative video by Computerphile, shown above. It captures an interview with several researchers from Nottingham Trent University's Interactive Systems Research Group in the United Kingdom. With clinical medical applications (such as use by stroke patients) in mind, they've developed a prototype gesture interface system built around a "glove", a Wiimote, and several free and open-source software packages.