Embedded Vision Insights: November 15, 2011 Edition
Welcome to the third edition of Embedded Vision Insights, the newsletter of the Embedded Vision Alliance.
The past few weeks have been particularly newsworthy for camera-inclusive smartphones and tablets. Consider, for example, handsets such as the HTC MyTouch Slide 4G with its plethora of "power user" snapshot settings, the 1080p video capture capabilities of the Apple iPhone 4S, the stitch-free panorama mode supported by the Samsung Galaxy Nexus, and the high-quality Carl Zeiss optics built into the Nokia Lumia 800. Key to new capabilities such as these are the systems' microprocessors: now-sampling CPUs built from Qualcomm's latest Krait and ARM's latest Cortex-A15 microarchitectures, for example, along with Nvidia's in-production quad-core (or, more accurately, penta-core) Tegra 3 and Apple's dual-core A5.
To be clear, these systems (and the SoCs on which they're based) are useful for a diversity of embedded vision functions, not just for picture-snapping and videography. Take a look, for example, at the Kinect-reminiscent gesture interfaces supported by Kinectimals for Windows Phone 7, included in latest-generation Pantech handsets, documented in both filed and granted Apple patents, and suggested by recent Qualcomm acquisitions. Ponder the facial recognition-based unlock capabilities built into Google's "Ice Cream Sandwich" Android v4 and...