We in the embedded vision industry live in amazing times, as I'm regularly (and thankfully) reminded. Not a single day goes by lately that I'm not archiving an information tidbit (or, more often, several of them) for future consideration in a news writeup, an article, or a video interview.
Back in early April, I mentioned a facial recognition program called RecognizeMe, available for "jailbroken" Apple iOS-based hardware. I went ahead and dropped $2.99 on the program, then installed it on my fourth-generation iPod touch and two iPhone 4 handsets (one AT&T, one Verizon); my first-generation iPad doesn't have a front-facing camera and is therefore not a testing candidate.
I've mentioned embedded vision coverage in various IEEE publications before, most notably in IEEE Spectrum but also in some of the more technical journals. I'm impressed with, and inspired by, the Society's obvious enthusiasm for this burgeoning technology, and the IEEE has been particularly prolific the past several months.
Jeff Bier, founder of the Embedded Vision Alliance and co-founder and President of BDTI, will be presenting a webcast (developed in partnership with Vision Systems Design Magazine) on Tuesday, June 5. Here's a short description:
Back in September of last year, I introduced you to SceneTap, a service that uses webcams to provide a dynamically updated count of how many people are currently in a bar, restaurant, or other venue, along with estimates of the male-to-female and age ratios, thereby implementing not only human face detection but also rudimentary face analysis (gender and age estimation, though not identification of specific individuals).
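To make the concept concrete, here's a toy sketch of the venue-level analytics described above. This is purely my own illustration (SceneTap's actual pipeline isn't public): a real system would run a face detector plus gender and age classifiers on each webcam frame, so here a stubbed list of (gender, age) detections stands in for that stage, and only the aggregation step is shown.

```python
# Toy sketch of SceneTap-style venue analytics (my own illustration; the
# detection stage is stubbed out as a list of (gender, age) tuples that a
# real face detector + classifiers would produce per webcam frame).
from collections import Counter

def summarize_frame(detections):
    """Turn per-face (gender, age) detections into venue-level stats:
    headcount, male-to-female ratio, and average estimated age."""
    count = len(detections)
    genders = Counter(g for g, _ in detections)
    males, females = genders["M"], genders["F"]
    ratio = males / females if females else float("inf")
    avg_age = sum(a for _, a in detections) / count if count else 0.0
    return {"headcount": count, "m_to_f": ratio, "avg_age": avg_age}

# One simulated frame: four detected patrons.
frame = [("M", 27), ("F", 31), ("M", 24), ("F", 29)]
print(summarize_frame(frame))  # {'headcount': 4, 'm_to_f': 1.0, 'avg_age': 27.75}
```

The interesting (and privacy-relevant) point this highlights is that only these aggregate statistics need to leave the venue; the underlying face images can be discarded immediately after classification.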
It seems like just last week that I was mentioning, in a Kinect-themed introductory letter to the Embedded Vision Insights newsletter, that version 1.5 of the Kinect for Windows SDK was queued up for release by month end. Actually, it was just last week.
I've gotten various writeups on this particular topic forwarded to me by several Embedded Vision Alliance contacts over the last several weeks, beginning with my boss and most recently including TI's Brian Carlson, so I'm apparently supposed to write about it ;-). And after doing the research, I've decided that New York University's Inter
File this one under "cool concept; unclear implementation fit." Microsoft Research and the University of Washington have partnered to develop SoundWave, a proof-of-concept system that leverages a computer's built-in microphone and speaker to implement rudimentary gesture control. Doppler shifts, akin to those exploited by sonar systems and by astronomers, are at the root of the scheme.
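The physics behind the scheme is simple enough to sketch in a few lines. The following is my own back-of-the-envelope illustration, not Microsoft Research's implementation: the speaker emits a near-inaudible pilot tone, a hand moving toward the machine reflects it back at a slightly higher frequency, and the round-trip shift for a reflector moving at velocity v is approximately (2v/c) times the emitted frequency.

```python
# Back-of-the-envelope Doppler sketch (my own illustration of the effect
# SoundWave exploits, not the project's actual signal processing).
SPEED_OF_SOUND = 343.0  # m/s, in air at roughly 20 degrees C

def doppler_shift(emitted_hz: float, hand_velocity_mps: float) -> float:
    """Approximate round-trip Doppler shift, in Hz, for a reflector
    moving toward (positive v) or away from (negative v) the machine."""
    return 2.0 * hand_velocity_mps / SPEED_OF_SOUND * emitted_hz

# An assumed 18 kHz pilot tone (near-inaudible to most adults) and a
# hand moving toward the laptop at 0.5 m/s:
shift = doppler_shift(18_000.0, 0.5)
print(f"{shift:.1f} Hz")  # roughly a 52 Hz upward shift
```

A shift of a few tens of hertz against an 18 kHz carrier is tiny, but it's well within reach of an FFT on the microphone signal, which is presumably why commodity audio hardware suffices for this trick.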