Kudos to Gizmodo for the heads-up on a just-published, very informative Computerphile video, shown above. It captures an interview with several researchers from Nottingham Trent University's Interactive Systems Research Group in the United Kingdom. With clinical medical applications (such as use by stroke victims) in mind, they've developed a prototype gesture-interface system built from a "glove," a Wiimote, and several open-source and free software packages.
Intel's aspirations to evolve human-computer interaction beyond the conventional keyboard, mouse, and trackpad, extending its "vision" (pun intended) to capabilities such as gestures, gaze tracking, and face recognition, are well documented at this point.
As a recently published Alliance article notes, modern smartphones provide abundant imaging hardware, along with corresponding operating system and application capabilities, that, while originally intended for still and video photography and videoconferencing, are equally applicable to a variety of embedded vision applications.
Earlier this month, I passed along word of a teardown of the Leap Motion gesture-interface peripheral, one of the better-known recent embedded vision examples thanks to its consumer electronics focus and the extensive potential-customer interest and press coverage that focus has generated.
As past presentations and documentation have hopefully already made clear, Xilinx views its Zynq-7000 All Programmable SoCs, which combine "soft" FPGA fabric with dual "hard" ARM Cortex-A9 cores, as ideal processing platforms for implementing embedded vision designs.