Embedded Vision In The News: Various Week-Ending Views

Ordinarily, my daily news writeups focus, rifle-like, on a single theme, but I've collected a variety of smaller tidbits in recent weeks. So for today, I thought I'd take a more shotgun-like approach to delivering information to you.

  • Back in mid-January, I told you how Samsung was leveraging image sensors (and microphones) built into televisions to demonstrate new capabilities at the Consumer Electronics Show: gesture interfaces, facial recognition, voice recognition, and the like. As of earlier this month, the new Samsung TVs are shipping, save for the largest-screen model ("$2,999.99 for the 46-inch model, $3,479.99 for the 55-inch, $4,399.99 for the 60-inch, and $5,099.99 for the 65-inch"), and Ars Technica took them for a test drive. My summary: compelling potential, incompletely implemented.
  • Speaking of televisions, several Canadian telecom operators supposedly already have prototypes of Apple's long-rumored branded televisions in-house for testing. And like their Samsung counterparts, they're reportedly voice- and gesture-controllable.
  • Speaking of Apple, a recent patent application showcases how the company envisions calibrating an autostereoscopic 3-D display's rendering by discerning (via eye tracking) where the viewer is positioned relative to the screen, along with measuring ambient lighting conditions at the time.
  • And speaking of gesture interfaces, not to mention voice recognition, the latest iteration of Waze's traffic and navigation app for the iPhone lets a user activate the program's voice control by waving a hand in front of the handset. See above for a demonstration video. Note that Waze's implementation uses the proximity sensor, since the software strives to support models as far back as the iPhone 3GS, and the iPhone family didn't get a front-mounted image sensor until the iPhone 4. Still...
  • Speaking of patent applications, Google's seemingly interested in using image sensors for more than just facial-recognition unlock schemes; motion-input systems may also be in the company's implementation queue.
  • And finally, at least for today (there are plenty more news bits like these still in my queue), here's another PC-based gesture interface announcement, conceptually similar to the SoftKinetic demonstration that I mentioned in Tuesday's Embedded Vision Insights newsletter. This one, however, leverages a conventional webcam image sensor. Mac application Flutter, available for free at the moment as a public alpha, currently controls iTunes and Spotify, with additional applications planned (such as YouTube, demonstrated in the video below). I'm off to download and install it as soon as I press "publish"...