
Facial Detection On Sony's PlayStation Move And Apple's iOS 5: Embedded Vision Continues To Thrive

If you're a software coder interested in doing embedded vision development, you're unfortunately (as mentioned last Friday) not going to be able to use the official Kinect SDK unless you target Windows 7 (and only Windows 7). Granted, you've got other options; if Kinect is your target hardware foundation, there's always the open-source community to leverage (and, don't forget, give back to). Or, if you're willing to redirect your allegiances from Microsoft to Sony, you can instead give the PlayStation 3's Move.me development suite a look.

Back on July 27, in describing the just-released Move.me, I wrote that it was "a server application that runs on a PS3 (which subsequently transfers Move data to a LAN-tethered PC)." In making that presumption, I was relying on Sony's corporate blog post from the early-March Game Developers Conference days, which among other things said "Move.Me sends the complete state of the PlayStation Move and navigation controllers to the PC, giving you the exact same data that licensed developers typically have access to."

As it turns out, though, the Move.me support situation is even more O/S-agnostic than I'd hoped. Peruse the 'What do I need?' section of the product page, and you'll find the following curious wording (the emphasis, on the final list item, is mine):

Move.me works with the PS3™ system and PlayStation®Move technology so to use Move.me, you'll need

·       PlayStation®3 system.

·       PlayStation®Move motion controller.

·       PlayStation®Eye camera.

·       PC, or other internet-connected device.

And just below the above verbiage are links to O/S-generic network protocol documentation (PDF) and open-source sample code in C and C# formats.
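To make the "complete state of the controllers over the network" idea concrete, here is a purely illustrative Swift sketch of the kind of decoding a receiving client performs on whatever Move.me puts on the wire: peeling fixed-size, little-endian fields out of a byte buffer. The three-float "position packet" layout below is my own invention for demonstration purposes; the real field order and sizes live in Sony's protocol PDF.

```swift
import Foundation

// Hypothetical packet layout, for illustration only -- NOT Sony's actual
// wire format. Assume each state packet carries a controller position
// (x, y, z) as three little-endian 32-bit floats.
struct MoveState {
    let x: Float
    let y: Float
    let z: Float
}

// Decode one such packet from raw bytes read off the network.
func decodeState(_ data: Data) -> MoveState? {
    guard data.count >= 12 else { return nil }
    func float(at offset: Int) -> Float {
        var raw: UInt32 = 0
        for i in 0..<4 {  // assemble the little-endian bytes
            raw |= UInt32(data[data.startIndex + offset + i]) << (8 * i)
        }
        return Float(bitPattern: raw)
    }
    return MoveState(x: float(at: 0), y: float(at: 4), z: float(at: 8))
}

// Simulate a received packet encoding the position (1.0, 2.0, 3.0).
var packet = Data()
for value: Float in [1.0, 2.0, 3.0] {
    let bits = value.bitPattern.littleEndian
    withUnsafeBytes(of: bits) { packet.append(contentsOf: $0) }
}
let state = decodeState(packet)!
print("x=\(state.x), y=\(state.y), z=\(state.z)")
```

In practice you'd feed `decodeState` from a socket connected to the PS3 over the LAN, using the addressing and framing rules from the protocol documentation rather than the made-up layout above.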

But what if Apple's iOS, currently running on the iPad, iPhone, iPod touch and second-generation Apple TV (and who knows what else in the future), is more your development cup of tea? Here's more good news: one of the lesser-trumpeted new features documented when Apple unveiled iOS 5 at the Worldwide Developers Conference on June 6 was facial detection. You had to know that Apple had plans beyond conventional photography and video chat for the front- and rear-mounted image sensors built into all of its handheld products, right?

Apple released the first iOS 5 beta to developers on Saturday, and I haven't yet heard what, if any, facial detection 'hooks' are inside it. However, a couple of weeks ago, an Apple enthusiast site got its hands on a private iOS 5 build and found references to the following APIs:

The first, called CIFaceFeature, can determine through an image where a person’s mouth and eyes are located. The second API, CIDetector, is a resource within the operating system that processes images for face detection.

The reported CIFaceFeature properties include:

·       hasLeftEyePosition

·       hasRightEyePosition

·       hasMouthPosition

·       leftEyePosition

·       rightEyePosition

·       mouthPosition
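Assuming those names map onto Core Image classes the way the report suggests, a third-party call sequence might look something like the sketch below. The detector options and the solid-gray test image are my own stand-ins for real camera or photo-library input; on a faceless image, the detector should simply return nothing.

```swift
import CoreImage

// Build a face detector; CIDetectorTypeFace selects face detection, and
// the options dictionary trades accuracy against speed.
let detector = CIDetector(ofType: CIDetectorTypeFace,
                          context: nil,
                          options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])!

// A solid-gray image stands in for camera or photo-library input here.
let blank = CIImage(color: CIColor(red: 0.5, green: 0.5, blue: 0.5))
    .cropped(to: CGRect(x: 0, y: 0, width: 640, height: 480))

// features(in:) returns one CIFaceFeature per detected face.
let faces = detector.features(in: blank).compactMap { $0 as? CIFaceFeature }

for face in faces {
    print("face at \(face.bounds)")
    // The has...Position booleans guard the corresponding CGPoint properties.
    if face.hasLeftEyePosition  { print("left eye:  \(face.leftEyePosition)") }
    if face.hasRightEyePosition { print("right eye: \(face.rightEyePosition)") }
    if face.hasMouthPosition    { print("mouth:     \(face.mouthPosition)") }
}
print("faces detected: \(faces.count)")
```

On a real photo, each CIFaceFeature's bounds rectangle frames a detected face, with the eye and mouth points landing inside it.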

While old news in a big-picture sense, the level of implementation detail discovered is intriguing, to say the least, particularly in combination with the report that Apple had acquired a Swedish facial recognition technology company called Polar Rose less than a year ago. Facial detection of a sort has been baked into Apple products for some time now; way back in January of 2009, for example, iPhoto '09's 'Faces' feature enabled you to find all of the photos in your library containing a particular person (a concept not so different from the one causing abundant controversy for Facebook at the moment). And the Photo Booth utility in the latest Mac OS X 10.7 includes face-tracking capabilities that enable you to, for example, rotate computer-generated graphics above your noggin.

However, this is the first time that facial detection (or, if you prefer, recognition) has appeared in iOS and, equally important, been made accessible to third-party developers. It'll be very interesting to see how they harness the potential that Apple has afforded them. I'm also curious to see if (and if so, when) similar capabilities come to Mac OS X...or if Apple instead aspires to migrate the bulk of its current Mac OS X-running customer base over to iOS.

P.S. Crafting this writeup has raised a big-picture question in my mind, for which I'd appreciate reader feedback. I can't help but notice, for example, that Wikipedia is now suggesting that the entry for face detection be merged with those for facial recognition system and three-dimensional face recognition. My questions:

1.     Do you use the terms 'face detection' and 'face recognition' interchangeably?

2.     If not, when does 'detection' become 'recognition'? When the problem to be solved transitions from simply finding a face in a frame to identifying and tracking the positions of facial features? Or is it necessary to be able to identify a specific individual for the 'recognition' requirement to be met? Or do you have criteria other than those I've already suggested?

I welcome your thoughts in the comments section.