One of the more impressive embedded vision implementations I've come across (IMHO), albeit one of the more potentially troubling from a copyright-infringement perspective, is Google's book-scanning and page-dewarping system.
As I've discussed in a number of past news writeups, Microsoft has now broadened its vision for the Kinect 3-D camera (and microphone array) system beyond its Xbox 360 game console origins to also encompass computer interfaces. In doing so, the company formalized a relationship that had existed from Kinect's earliest days, courtesy of the hacker community.
If you're located in or near Silicon Valley (or aren't, but have access to a computer), are interested in gesture-based user interfaces, and don't have any plans for tomorrow evening, this post is for you. The Bay Area SID (Society for Information Display) chapter sponsors monthly technical seminars on various display-related topics, for SID members and non-members alike.
Around two months ago, I mentioned some of the notable image-processing-based technologies that Google's R&D lab was busy improving and turning into publicly available products. Here's another one, involving neural-network-based analysis and identification.
Back in mid-April, I shared with you the video of the March 2012 Embedded Vision Alliance Member Summit keynote from Jim Donlon, a Program Manager for DARPA (the U.S. Defense Advanced Research Projects Agency).