Professor Brian Lovell of the University of Queensland, Australia, who's also Chief Technical Officer at Imagus Technology, is a well-known figure in the fields of computer vision and pattern recognition. Lovell is also a long-time advisor to (and advocate of) the Embedded Vision Alliance.
Coming up soon (less than a month away, to be precise, as I write this post on September 9) is the second annual East Coast iteration of the Embedded Vision Summit, a free day-long technical educational forum to be held on October 2, 2013 at the Regency Inn and Conference Center in Westford, Massachusetts.
Kudos to Gizmodo for the heads-up on a just-published, very informative video by Computerphile, shown above. It captures an interview with several researchers from Nottingham Trent University's Interactive Systems Research Group in the United Kingdom. With clinical medical applications (such as use by stroke victims) in mind, they've developed a prototype gesture interface system that leverages a "glove", a Wiimote, and several open-source and free software packages.
Intel's aspirations to evolve the means by which we interact with computers beyond the conventional keyboard, mouse and trackpad, specifically extending the "vision" (pun intended) to capabilities such as gestures, gaze tracking and face recognition, are well documented at this point.
As a recently published article authored by the Alliance notes, modern smartphones provide abundant imaging-related hardware resources, along with corresponding operating system and application capabilities. While these features may have originally been intended for still and video photography and videoconferencing, they are equally applicable to a variety of embedded vision applications.
Earlier this month, I passed along word of the teardown of the Leap Motion gesture interface peripheral, one of the better-known recent embedded vision examples thanks to its consumer electronics focus and the resulting strong customer interest and press coverage.