Adding one or more image sensors to a system, along with the necessary processing "muscle" to handle the captured still and video frames, can notably enhance that system's capabilities and its consequent appeal to potential customers. But then again, as editor-in-chief of the Embedded Vision Alliance, you'd expect me to harbor such a belief, right? Keep in mind, though, that in addition to cameras, many (most? all?) of these systems offer network connectivity: wired, Wi-Fi, cellular, etc.
Speaking of Apple's attempts to compensate for people's imperfections, here's the last paragraph of my previous writeup:
Parziale also notes that the feature is optimized for the visually impaired thanks to VoiceOver in OS X. “VoiceOver helps positioning the card in front of the camera and the very fast image processing algorithm generates very quickly the result,” according to Parziale. “The user experience is amazing.”
Back in mid-December, I discussed the barcode as a pioneering computer-now-embedded vision application, in the context of reporting the death of its co-inventor, N. Joseph Woodland. Given that backdrop, a related (and more advanced) OCR (optical character recognition) innovation from Apple that came out around the same time also caught my eye.
Earlier this month, BDTI senior software engineer Eric Gregori and I delivered a technology trends presentation on embedded vision in mobile electronics devices at the Embedded Vision Alliance Member Summit; the video is currently being edited, and I hope to publish it soon here on the site. One of the key application areas that we discussed in depth is computational photography, which Wikipedia defines as:
Plenoptic camera technology, most commonly known nowadays by virtue of Lytro's ongoing promotion of the concept (and sales of the first-generation implementation), has to date received mainstream attention primarily because the light field-based approach allows for post-capture selective focus on particular depth regions of an image.
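To make the post-capture refocus idea concrete, here's a minimal illustrative sketch of the classic shift-and-add technique: a light field is treated as a set of sub-aperture views, and averaging the views after shifting each one in proportion to its aperture position brings one depth plane into focus. The function and its parameter names are my own illustration, not Lytro's actual processing pipeline.

```python
import numpy as np

def refocus(views, alpha):
    """Synthetically refocus a light field by shift-and-add.

    `views` maps sub-aperture coordinates (u, v) -- the position of
    each virtual viewpoint on the lens aperture -- to a 2-D image.
    Shifting each view by alpha*(u, v) before averaging brings the
    scene depth whose disparity matches `alpha` into sharp focus;
    other depths smear out, mimicking defocus blur.
    (Illustrative sketch only, not a real plenoptic camera's API.)
    """
    acc = None
    for (u, v), img in views.items():
        shifted = np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                          int(round(alpha * v)), axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)
```

Sweeping `alpha` over a range of values produces the "focus anywhere, after the fact" effect that the consumer marketing emphasizes.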
Late last month, I shared the news of the death of Bryce Bayer, an Eastman Kodak scientist whose filter array breakthrough nearly 40 years ago is now in widespread use, enabling inherently monochrome CCDs and CMOS image sensors to capture full-spectrum color information.
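Bayer's insight was that a single monochrome sensor can recover color if each photosite sits behind one filter of a red/green/blue mosaic, with the missing two colors at every pixel interpolated from neighbors ("demosaicing"). The sketch below assumes an RGGB pattern and uses simple bilinear interpolation; real camera pipelines use far more sophisticated algorithms, and the function names here are mine.

```python
import numpy as np

def _interp(sparse, mask):
    """Fill gaps by averaging the sampled neighbors in a 3x3 window
    (normalized convolution, implemented with padded shifts)."""
    h, w = sparse.shape
    acc = np.zeros((h, w))
    cnt = np.zeros((h, w))
    sp = np.pad(sparse.astype(float), 1)
    mp = np.pad(mask.astype(float), 1)
    for dy in range(3):
        for dx in range(3):
            acc += sp[dy:dy + h, dx:dx + w]
            cnt += mp[dy:dy + h, dx:dx + w]
    return acc / np.maximum(cnt, 1)

def demosaic_bilinear(raw):
    """Reconstruct RGB from a raw Bayer-mosaic frame (RGGB assumed):
    even rows alternate R,G; odd rows alternate G,B."""
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = (y % 2 == 0) & (x % 2 == 0)
    g_mask = (y + x) % 2 == 1
    b_mask = (y % 2 == 1) & (x % 2 == 1)
    channels = [_interp(raw * m, m) for m in (r_mask, g_mask, b_mask)]
    return np.stack(channels, axis=-1)
```

Because each color channel is sampled at only a quarter (red, blue) or half (green) of the pixel locations, interpolation quality is what separates one camera's image processing from another's.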
If you haven't yet viewed the video of Ken Lee's (VanGogh Imaging) presentation at September's Embedded Vision Summit, I commend it to your perusal. Lee begins with a hilarious story about an on-site trial of one of the first implementations of the company's products...running on automated inspection equipment at a hog farm, where it was used to monitor animal health.