Panorama Mode: Embedded Vision Processing Blends Pixels Together Via Microcode

A recent "tweet" from the Google Android Twitter feed reminded me of a news topic I've long intended to mention:

Galaxy Nexus lets you take stunning wide angle photos with a wave of your hand using Panorama mode in Android 4.0

The "wave of your hand" part is admittedly what caught my attention. When I first read it, I thought Google had somehow integrated gesture control into the "Ice Cream Sandwich" Camera app, adding to the existing facial-recognition embedded vision support that I've already mentioned several times. As the video below shows, that's not actually the case; what the Android team is referring to is using your hand to sweep the cameraphone across the desired panorama region:

Nevertheless, if you think about it, panorama mode still represents an impressive embedded vision processing achievement. In effect, the image sensor captures a series of still images (or, if you prefer, video frames) covering portions of the desired final scene, which the embedded vision processor then stitches together. In the process, it identifies common areas between adjacent images for overlap purposes, and it accounts for inter-image variances in orientation, distance, height and other framing variables, not to mention exposure.
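To make the idea concrete, here's a deliberately simplified sketch of the two core steps just described: searching for the overlap between two adjacent frames, then blending the shared region. Real stitchers match 2-D feature points and warp perspectives; this toy version (all function names and the grayscale list-of-lists frame format are my own illustrative choices, not anything from Android) only searches horizontal overlaps:

```python
def best_overlap(left, right, max_overlap):
    """Return the overlap width (in columns) that minimizes the mean
    squared pixel difference between the right edge of `left` and the
    left edge of `right`. Frames are 2-D lists of grayscale values."""
    h, w = len(left), len(left[0])
    best, best_err = 1, float("inf")
    for overlap in range(1, max_overlap + 1):
        err = 0.0
        for y in range(h):
            for k in range(overlap):
                d = left[y][w - overlap + k] - right[y][k]
                err += d * d
        err /= overlap  # normalize so wider overlaps aren't unfairly penalized
        if err < best_err:
            best, best_err = overlap, err
    return best

def stitch(left, right, overlap):
    """Merge two frames, averaging pixels in the shared region
    (a crude stand-in for exposure-compensating blending)."""
    h, w = len(left), len(left[0])
    out = []
    for y in range(h):
        row = left[y][: w - overlap]
        row += [(left[y][w - overlap + k] + right[y][k]) / 2 for k in range(overlap)]
        row += right[y][overlap:]
        out.append(row)
    return out
```

A production implementation (OpenCV's `Stitcher` class is one well-known example) additionally handles rotation, perspective, lens distortion and seam finding, but the overlap-search-then-blend skeleton is the same.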

Modern smartphones and tablets contain an abundance of additional sensors and transceivers whose logged data assists in these functions: GPS, silicon compass, barometric altimeter, accelerometer, gyroscope, etc. Nonetheless, the end result is often quite impressive, particularly considering its near-real-time nature. Does anyone else out there remember when panorama generation existed only as a tediously slow, dubious-at-best algorithm in dedicated software running on computers...prior to it even being included in Adobe Photoshop and other general-purpose image editing applications?
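As a hedged illustration of how that sensor data can help: integrating the gyroscope's yaw-rate samples between two captures yields a rough estimate of how far the camera rotated, which in turn predicts roughly how many pixels the scene shifted, letting the stitcher start its image-alignment search near the right answer. The function names, the fixed sample interval, and the small-angle pinhole approximation below are all my own simplifying assumptions:

```python
import math

def estimate_yaw_delta(yaw_rates, dt):
    """Integrate gyroscope yaw-rate samples (rad/s), taken dt seconds
    apart, into a total rotation angle (radians) between two captures."""
    return sum(rate * dt for rate in yaw_rates)

def expected_pixel_shift(yaw_rad, focal_px):
    """Approximate horizontal pixel shift caused by a pure camera rotation,
    using a simple pinhole-camera model with focal length in pixels."""
    return focal_px * math.tan(yaw_rad)
```

For example, ten samples of 0.1 rad/s spaced 0.1 s apart integrate to about 0.1 radians of rotation; with a 1000-pixel focal length, that predicts roughly a 100-pixel shift to seed the overlap search.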