
Cameras and Sensors for Embedded Vision


Definition:

Focus on digital imaging and visible light, but don't exclude other options

While analog cameras are still used in many embedded vision systems, the EVA technology definition focuses primarily on digital image sensors, usually either a CCD or CMOS sensor array operating with visible light. However, this definition shouldn't constrain the technology analysis, since many machine vision systems can also sense other types of energy (IR, sonar, etc.).

Cameras are getting smarter

The camera housing has become the entire chassis for an embedded vision system, leading to the emergence of “smart cameras” with all of the electronics integrated. By most definitions, a smart camera includes computer vision, since the camera is capable of extracting application-specific information. Even so, as both wired and wireless networks get faster and cheaper, there may still be reasons to transmit pixel data to a central location for storage or additional processing. A classic example is cloud computing using the camera on a smartphone. The smartphone could be considered a “smart camera” as well, but sending data to a cloud computer may reduce the processing performance required on the mobile device, lowering its cost, power consumption, and weight.

For a dedicated smart camera, some vendors have created chips that integrate all of the smart camera features. An example is the Cognivue CV220X, which the company calls an “image cognition processor” (ICP). The device stacks up to 16 megabytes of DRAM in the same package as the processing chip, which combines an ARM CPU with an array processor that accelerates computer vision algorithms.
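To make the trade-off concrete, here is a minimal sketch of the smart-camera idea: analyze frames on the device and transmit only compact results instead of raw pixels. It assumes Python with the OpenCV (cv2) package and a camera at index 0; the frame-differencing "analysis" is a hypothetical stand-in for whatever application-specific processing a real product would run.

# Minimal smart-camera loop: process frames locally, report only events.
# Assumes Python with OpenCV (cv2) and a camera available at index 0.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Frame differencing stands in for application-specific analysis.
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    changed = cv2.countNonZero(mask)
    if changed > 0.01 * mask.size:
        # A smart camera would transmit this small event record, not the pixels.
        print("motion event: %d pixels changed" % changed)
    prev_gray = gray

cap.release()

Running the analysis on the camera keeps network traffic to a few bytes per event; streaming raw frames to a server would reverse that trade-off.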

Examples:

Cameras:

Before Microsoft's Kinect, many people would picture a camera for computer vision as the outdoor security camera shown in this picture. There are countless vendors supplying these products, and many more supplying indoor cameras for industrial applications. It would be easy to ignore simple USB cameras for PCs, since these are arguably not embedded systems. However, that still leaves the nearly one billion cameras embedded in the world's mobile phones. These cameras can't be ignored, since their speed and quality have risen dramatically, with sensors exceeding 10 megapixels backed by sophisticated image processing hardware.

 

Consider another important factor for cameras: the rapid adoption of 3D imaging with stereo optics. In fact, cell phones now offer this technology. An example is the Sharp Aquos smartphone, which incorporates a pair of 8-megapixel cameras to create 720p 3D video. Look again at the picture of the outdoor camera and consider how much the computer vision market is about to change as this new camera technology becomes pervasive.
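To illustrate how a stereo pair yields 3D information, the sketch below computes a disparity map using OpenCV's block-matching stereo correspondence; depth is then inversely proportional to disparity. It assumes Python with OpenCV, and left.png and right.png are hypothetical, already-rectified grayscale images rather than files from the original article.

# Disparity from a rectified stereo pair; nearer objects produce larger disparity.
# Assumes Python with OpenCV; left.png / right.png are placeholder images.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher with 64 disparity levels and a 15x15 matching window.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # fixed-point result, scaled by 16

# Depth follows from: depth = focal_length * baseline / disparity.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)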

Sensors:

CMOS is taking over CCD

Charge-coupled device (CCD) sensors have some advantages over CMOS image sensors, mainly because the electronic shutter of CCDs traditionally offers better image quality, with higher dynamic range and resolution. According to iSuppli, however, CMOS sensors now account for 90% of the market, driven by the technology's lower cost, better integration, and higher speed. The cellphone business skews these market numbers, though, and iSuppli predicts that 25% of machine vision applications will continue to use CCD sensors¹.

CMOSIS is an example of a company offering CMOS sensors for embedded vision; its CMV4000 includes a global shutter and a 2K×2K CMOS sensor array that can deliver 180 frames/second with 10-bit pixels (or 12-bit pixels at 37 frames/second).
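Those operating points imply a large raw data rate, which drives the choice of interface and processor in an embedded design. The back-of-the-envelope calculation below is a Python sketch using only the resolution, bit depths, and frame rates quoted above to estimate the sensor's output bandwidth.

# Raw output bandwidth of a 2K x 2K sensor at the two quoted operating points.
def sensor_rate_gbps(width, height, bits_per_pixel, fps):
    return width * height * bits_per_pixel * fps / 1e9

print("10-bit at 180 fps: %.1f Gbit/s" % sensor_rate_gbps(2048, 2048, 10, 180))  # ~7.5
print("12-bit at  37 fps: %.1f Gbit/s" % sensor_rate_gbps(2048, 2048, 12, 37))   # ~1.9

Even the slower mode approaches 2 Gbit/s, far beyond a USB 2.0 link's 480 Mbit/s, which is one reason on-camera processing is attractive.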

 

References:

1. http://info.adimec.com/blogposts/bid/39656/CCD-vs-CMOS-Image-Sensors-in-Machine-Vision-Cameras

  
