
Focusing On Blur

As my end-of-August technical article "Selecting and Designing with an Image Sensor: The Tradeoffs You'll Need to Master" points out, the burgeoning pixel counts of modern sensors are beginning to outstrip the resolution requirements of most camera and cameraphone users, particularly if the digital zoom feature isn't heavily employed, the images aren't substantially cropped, and/or they're not sizably enlarged. Yet Moore's Law trends compel sensor manufacturers (as is the case with, for example, DRAM and flash memory suppliers) to continually cram ever more pixels onto each device, with consequent shrinkage in per-pixel area and resulting shortcomings in low-light performance, among other factors. In response, some sensors (and the image processors connected to them) optionally support aggregation ("binning") modes that combine multiple pixels' detectors together, multiplying the effective photon-gathering capability at the tradeoff of lower resolution.
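
To illustrate the idea, here's a minimal sketch of 2x2 binning in Python; it's a software analogue of what a binning-capable sensor does in its analog domain, and the function name and test data are mine:

    import numpy as np

    def bin_pixels(raw, factor=2):
        """Sum each factor-by-factor block of detector values into one
        output pixel, trading resolution for photon-gathering ability."""
        h, w = raw.shape
        h, w = h - h % factor, w - w % factor  # crop to a multiple of the bin size
        blocks = raw[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.sum(axis=(1, 3))

    # A 2x2 bin collects roughly 4x the signal per output pixel,
    # at half the resolution in each dimension.
    sensor = np.random.poisson(lam=4.0, size=(480, 640)).astype(float)
    print(bin_pixels(sensor).shape)  # (240, 320)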

Another unique approach to harnessing high pixel counts is taken by Lytro, which in June demonstrated a prototype image capture system called Light Field that reportedly requires no focusing prior to snapping a photo. Harnessing the micro-lens array shown in my article, Light Field captures both traditional light intensity and light ray direction by varying the angles of micro-lenses above close-proximity pixels. Lytro's scheme trades off effective sensor resolution to capture multiple stored copies of each pixel (and therefore multiple image variants) at various focus points, which the camera owner can subsequently select among. And somewhat uniquely, Lytro plans to go to market with branded cameras instead of licensing its technology to already-established manufacturers. The Light Field approach has detractors (then again, I suspect Devin Coldewey would have been a Photoshop detractor, too), but as the Wall Street Journal's Ina Fried points out in an early-October update, the company is continuing to press forward toward production.
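
Lytro hasn't published its processing pipeline, but the textbook way to refocus a light field after capture is shift-and-add: translate each sub-aperture view in proportion to its angular offset, then average. Here's a rough sketch, assuming the raw capture has already been decoded into per-angle views (all names here are illustrative):

    import numpy as np
    from scipy.ndimage import shift

    def refocus(views, angles, alpha):
        """views: list of (H, W) sub-aperture images; angles: their (dy, dx)
        angular coordinates; alpha: selects the virtual focal plane."""
        acc = np.zeros(views[0].shape, dtype=float)
        for view, (dy, dx) in zip(views, angles):
            # Shift each view toward the chosen plane, then accumulate.
            acc += shift(view, (alpha * dy, alpha * dx), order=1, mode="nearest")
        return acc / len(views)

Sweeping alpha across a range of values produces exactly the stack of differently-focused image variants that the camera owner later selects among.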

Speaking of Photoshop, Adobe gave hope to conventional camera owners everywhere at the company's MAX conference last week. A demo of a "mind-blowing" prototype de-blur filter analyzed distorted images, discerned the shutter-open camera shake that caused their artifacts, then algorithmically corrected the shots to subtract out the deformations. Even more compellingly, the de-blur filter doesn't rely on any embedded metadata; imagine how much better it could operate if it were aided by the information captured by a camera or cameraphone's accelerometer, gyroscope, magnetometer, and other sensors. Apparently, the demo was at least somewhat staged, so we'll have to see how well the final production filter handles real-life content. Nonetheless, I remain amazed at how Adobe continues to innovate with its 21-year-old stalwart, Photoshop.
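
Adobe hasn't disclosed its algorithm, but the general technique is blind deconvolution: estimate the blur kernel (the point spread function traced by the shaking camera during the exposure), then deconvolve it out of the image. Kernel estimation is the hard, unpublished part; the sketch below shows only the second, non-blind step with a known kernel, using off-the-shelf Richardson-Lucy deconvolution:

    import numpy as np
    from scipy.signal import convolve2d
    from skimage.restoration import richardson_lucy

    # Simulate shake: smear a sharp image along a short horizontal path.
    sharp = np.random.default_rng(0).random((64, 64))
    kernel = np.zeros((9, 9))
    kernel[4, 2:7] = 1.0 / 5.0   # 5-pixel linear motion blur
    blurred = convolve2d(sharp, kernel, mode="same", boundary="symm")

    # With the kernel known (or estimated), iteratively invert the blur.
    restored = richardson_lucy(blurred, kernel, num_iter=30)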

Finally, I'll point you to an intriguing article that appeared on Wired Magazine's site a few days back. Titled "Psychologists Decipher Brain’s Clever Autofocus Software", it describes how a team of researchers is making progress in understanding how the human eye and brain discern distance and near-instantly snap the object being viewed into focus. From the writeup:

In order to see an object clearly, an accurate estimate of blur is important. Humans and animals instinctively extract key features from a blurry image, use that information to determine their distance from an object, then instantly focus the eye to the precise desired focal length… In an attempt to resolve the question of how humans and animals might use blur to accurately estimate distance, Geisler and Burge used well-known mathematical equations to create a computer simulation of the human visual system. They presented the computer with digital images of natural scenes similar to what a person might see, such as faces, flowers, or scenery, and observed that although the content of these images varied widely, many features of the images—patterns of sharpness and blurriness and relative amounts of detail—remained the same.

The duo then attempted to mimic how the human visual system might be processing these images by adding a set of filters to their model designed to detect these features. When they blurred the images by systematically changing the focus error in the computer simulation and tested the response of the filters, the researchers found that they could predict the exact amount of focus error by the pattern of response they observed in the feature detectors.
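
As a toy analogue of that procedure, one can blur a test image by known amounts, measure the responses of a few band-pass "feature detectors," and fit a map from the response pattern back to the blur magnitude. The filters and linear fit below are my simplifications, not the researchers' model:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def detector_responses(img, scales=(1, 2, 4, 8)):
        # Difference-of-Gaussian band-pass energies at several scales.
        return np.array([np.sqrt(((gaussian_filter(img, s) -
                                   gaussian_filter(img, 2 * s)) ** 2).mean())
                         for s in scales])

    scene = np.random.default_rng(1).random((128, 128))
    errors = np.linspace(0.5, 5.0, 10)   # known "focus errors"
    resp = np.array([detector_responses(gaussian_filter(scene, e)) for e in errors])

    # Fit a linear map from response pattern to blur, then predict.
    coef, *_ = np.linalg.lstsq(np.c_[resp, np.ones(len(errors))], errors, rcond=None)
    test = detector_responses(gaussian_filter(scene, 3.0))
    print(np.r_[test, 1.0] @ coef)   # should land near 3.0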

And here's the bit that I found most interesting of all:

The researchers also added common visual imperfections to their simulations and found that when it comes to judging focus, flaws are actually a good thing. “What we discovered is that the imperfections in the eye—things like astigmatism and chromatic aberration—actually help it to focus,” Geisler explains. That may help explain why people who have had their astigmatism corrected through laser eye surgery often have trouble focusing for several weeks afterward, Geisler says. That sort of understanding may have an impact on medical decisions, Thibos says. “People might be tempted to try and perfect nature,” he says, “when maybe it’s better to be a little bit imperfect.”
