Last summer, Embedded Vision Alliance founder Jeff Bier was the keynote presenter at the IEEE Embedded Vision Workshop, held in conjunction with the IEEE CVPR (Computer Vision and Pattern Recognition) Conference. And the Embedded Vision Alliance will likely attend the Embedded Vision Workshop again this summer, although the exact form of that participation is yet to be finalized. As such, I wanted to be sure that this seminal embedded vision event is on your radar screens.
Adding one or several image sensors to a system, along with the necessary "muscle" to process the captured still and video frames, can notably enhance that system's capabilities and consequent appeal to potential customers. But then again, as editor-in-chief of the Embedded Vision Alliance, you'd expect me to harbor such a belief, right? Keep in mind, though, that in addition to containing cameras, many (most? all?) of these systems offer network connectivity: wired, Wi-Fi, cellular, etc.
Parziale also notes that the feature is optimized for the visually impaired thanks to VoiceOver in OS X. “VoiceOver helps positioning the card in front of the camera and the very fast image processing algorithm generates very quickly the result,” according to Parziale. “The user experience is amazing.”
Back in mid-December, I discussed the barcode as a pioneering computer vision (now embedded vision) application, in the context of reporting the death of its co-inventor, N. Joseph Woodland. So a related (and more advanced) OCR (optical character recognition) innovation from Apple that came out around the same time also caught my eye.
Earlier this month, BDTI senior software engineer Eric Gregori and I delivered a technology trends presentation on embedded vision in mobile electronics devices at the Embedded Vision Alliance Member Summit; the video is currently being edited, and I hope to publish it here on the site soon. One of the key application areas that we discussed in depth is computational photography, which Wikipedia defines as: