Facial Recognition: A Mobile Application Yearning For Stereo Vision?
As I previously mentioned in mid-October, the latest-generation Android 4 "Ice Cream Sandwich" operating system from Google touts (among other things) built-in support for facial recognition as a system unlock option. And as I mentioned a few days later, it...umm...doesn't yet work terribly well. Not only is its operation inherently erratic, especially in low-light settings, but it can seemingly be fooled by a photograph of the real-life subject who's supposed to act as the 'key'.
Nonetheless, Google and its partners seem determined to promote facial recognition as a key capability of the O/S and hardware running it. The Galaxy Nexus, co-developed by Google and Samsung, went on sale today in the United States in partnership with Verizon. As you can see from the video embedded at the top of this writeup, the handset's facial recognition support is front-and-center in the advertising campaign.
Subsequent to the publication of the two October writeups on Android 4.0, I realized one key reason for the facial recognition shortcomings of the operating system-plus-handset combination. The Galaxy Nexus contains only a single front-facing image sensor, and as such is unable to discern depth, which explains the face-versus-photograph confusion. I don't know whether Android 4-based devices containing stereo sensor arrays would fare any better, or whether a future O/S upgrade would be necessary to exploit them; feedback from knowledgeable readers is appreciated.
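To see why a second front-facing sensor would help, consider the classic pinhole-stereo relation: depth Z = f·B/d, where f is the focal length, B the baseline between the two sensors, and d the disparity of a feature between the two views. A real face shows meaningful depth variation across landmarks (nose versus ears), while a printed photograph is essentially flat. Here's a minimal sketch of that idea; the focal length, baseline, and disparity figures below are purely hypothetical, not measurements from any actual handset.

```python
# Sketch of depth-from-disparity, the principle a stereo front-camera
# pair could use to tell a flat photograph from a real face.
# All numbers are illustrative assumptions, not measured values.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole-stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 500-pixel focal length, 2 cm sensor separation.
F, B = 500.0, 0.02

# Hypothetical disparities (in pixels) at a few facial landmarks.
real_face = [24.0, 25.5, 27.0]   # nose closer than ears: disparity varies
flat_photo = [25.0, 25.0, 25.1]  # printed photo: disparity nearly constant

def depth_spread(disparities):
    """Range of recovered depths across the sampled landmarks, in meters."""
    depths = [depth_from_disparity(F, B, d) for d in disparities]
    return max(depths) - min(depths)

print(f"real face depth spread:  {depth_spread(real_face) * 100:.1f} cm")
print(f"flat photo depth spread: {depth_spread(flat_photo) * 100:.2f} cm")
```

With these assumed numbers, the real face spans several centimeters of depth while the photo spans only a millimeter or two, so a simple flatness threshold could reject the spoof. A single sensor has no disparity to work with, which is exactly the gap the Galaxy Nexus exhibits.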
Last week, during the latest quarterly Embedded Vision Alliance Summit, I discussed this issue (among many others) with Brian Carlson, Senior Technology Strategist in the Wireless OMAP Business Unit at Texas Instruments. Specifically, we talked about how quickly and comprehensively stereo image sensor arrays will become pervasive in devices (in the front for 3-D videoconferencing, and in the back for 3-D still and video photography), and the front-mounted arrays' equivalent relevance for facial recognition, rich gesture interfaces and other applications. I hope to have the video interview posted to the EVA website within a week or so; please keep an eye out for it.