If you've been up on the Embedded Vision Summit page over the past several days, you may have noticed that we've added information on the event's keynote (and keynoter). The keynote presentation on "Artificial Intelligence for Robotic Butlers and Surgeons" will be delivered by Professor Pieter Abbeel of the University of California, Berkeley.
Last summer, Embedded Vision Alliance founder Jeff Bier was the keynote presenter at the IEEE Embedded Vision Workshop, held in conjunction with the IEEE CVPR (Computer Vision and Pattern Recognition) Conference. And the Embedded Vision Alliance will likely attend the Embedded Vision Workshop again this summer, although the exact form of that participation is yet to be finalized. As such, I wanted to be sure that this seminal embedded vision event is on your radar screens.
A bit more than a year ago, Alliance member CEVA unveiled its MM3101 image and embedded vision processing core. Several months later, Alliance Platinum member Analog Devices introduced four new Blackfin SoCs, two of which contain the PVP (Pipelined Vision Processor) core.
Adding one or several image sensors to a system, along with the necessary "muscle" to process the captured still and video frames, can notably enhance that system's capabilities and consequent appeal to potential customers. But then again, as editor-in-chief of the Embedded Vision Alliance, you'd expect me to harbor such a belief, right? Keep in mind, though, that in addition to cameras, many (most? all?) of these systems offer network connectivity: wired, Wi-Fi, cellular, etc.
Speaking of Apple's attempt to compensate for people's imperfections, here's the last paragraph of my previous writeup:
Parziale also notes that the feature is optimized for the visually impaired thanks to VoiceOver in OS X. “VoiceOver helps positioning the card in front of the camera and the very fast image processing algorithm generates very quickly the result,” according to Parziale. “The user experience is amazing.”
Back in mid-December, I discussed the barcode as a pioneering computer-now-embedded vision application, in the context of reporting the death of its co-inventor, N. Joseph Woodland. As such, a related (and more advanced) OCR (optical character recognition) innovation from Apple that came out around the same time also caught my eye.
Earlier this month, BDTI senior software engineer Eric Gregori and I delivered a technology trends presentation on embedded vision in mobile electronics devices at the Embedded Vision Alliance Member Summit; the video is currently being edited, and I hope to publish it here on the site soon. One of the key application areas that we discussed in depth is computational photography, which Wikipedia defines as:
Plenoptic camera technology, most commonly known nowadays by virtue of Lytro's ongoing promotion of the concept (and sales of its first-generation implementation), has received mainstream attention to date primarily because the light field-based approach allows for post-capture selective focus on particular depth regions of an image.
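To make the post-capture refocusing idea concrete, here is a minimal sketch of the classic "shift-and-add" technique: a light field camera records many sub-aperture views of the scene, and summing those views after shifting each one in proportion to its angular offset brings a chosen depth plane into focus. This is an illustrative toy, not Lytro's actual pipeline; the `views` dictionary layout and the `shift_per_view` parameter are assumptions for the example.

```python
import numpy as np

def refocus(views, shift_per_view):
    """Synthetic refocus by shift-and-add over a grid of sub-aperture views.

    views: dict mapping (u, v) angular coordinates to 2-D image arrays
           of identical shape.
    shift_per_view: pixels of shift applied per unit of (u, v); sweeping
           this value brings different depth planes into focus.
    """
    acc = None
    for (u, v), img in views.items():
        # Shift each view in proportion to its angular offset, then
        # accumulate; objects at the matching depth align and reinforce,
        # while objects at other depths smear into defocus blur.
        shifted = np.roll(img,
                          (int(round(v * shift_per_view)),
                           int(round(u * shift_per_view))),
                          axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)
```

In a real implementation the shifts are fractional (requiring interpolation rather than `np.roll`), but the principle is the same: focus becomes a post-processing parameter instead of a capture-time decision.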