
Embedded Vision: Detect Pedestrians


By Tom Wilson
Vice President of Business Development, CogniVue

This blog post was originally published at EE Times' Automotive Design Line. It is reprinted here with the permission of EE Times.

This week, I've invited Tom Wilson, Vice President of Business Development at Embedded Vision Alliance member company CogniVue, to share his perspective on the growing role of embedded vision in automotive safety systems. Tom is a 20-year semiconductor veteran who has held engineering, product management and sales positions. And CogniVue is a leading supplier of vision-based automotive safety technology. — Jeff

One great example of embedded vision's use in our everyday lives is advanced driver assistance systems (ADAS). Automobiles are increasingly equipped with vision-based safety features. One key area where embedded vision is playing a role in ADAS is autonomous emergency braking (AEB). Depending on the vehicle and manufacturer, radar and LiDAR (laser radar) have been the dominant sensor technologies, but vision is now playing a growing role.

Recently published Euro NCAP (European New Car Assessment Program) test results (PDF) reveal significantly better detection performance from the vision-based system in the Subaru Outback versus radar and LiDAR. Subaru's "EyeSight" system uses a stereo camera arrangement for depth sensing. Euro NCAP AEB testing is currently focused (PDF) primarily on avoiding collisions with other vehicles. However, AEB for pedestrian avoidance is on the organization's near-future evaluation roadmap. Embedded vision will play an even greater role in pedestrian-aware AEB systems; Subaru's "EyeSight" system has, in fact, already implemented pedestrian support.

Implementing pedestrian detection for AEB involves a wide range of system considerations. The selection of the optics and image sensor, for example, has a direct impact on the required algorithms and processor performance. A classification algorithm for pedestrian detection known as HOG (histogram of oriented gradients) uses a detection window "template" whose pixel area is intended to match the expected pedestrian size in the image frame. To allow for detection of pedestrians at different distances from the camera, the algorithm scales the input image frame. For example, a pedestrian 40 meters away from a camera may fit within a 64 by 128 pixel block of a 1080p image frame, but that same pedestrian 10 meters away from the image sensor would occupy a much larger number of pixels.
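To make the fixed-size template idea concrete, here is a minimal Python sketch using OpenCV's stock HOG people detector, which happens to use exactly this 64 by 128 pixel detection window. The input filename is a placeholder, and this is only an illustration of the technique, not a representation of any production ADAS implementation.

```python
# Minimal sketch: HOG pedestrian detection with OpenCV's pre-trained
# people detector and its default 64x128 detection window.
import cv2

hog = cv2.HOGDescriptor()  # default parameters: 64x128 window, 8x8 cells
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("road_scene.jpg")  # hypothetical input frame

# detectMultiScale rescales the frame internally so that pedestrians at
# different distances eventually fit the fixed 64x128 template.
boxes, weights = hog.detectMultiScale(
    frame, winStride=(8, 8), padding=(8, 8), scale=1.05
)

for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```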

Downscaling an input image frame to a series of smaller images creates what is known as an "image pyramid." Pedestrian detection algorithms operate on image frames at multiple scales, ensuring that pedestrians fit within a given pixel area even at varying distances from the camera. The interaction between image frame resolutions, lens fields of view and template sizes becomes particularly important in differing vehicle usage environments. For example, Euro NCAP specifies different AEB requirements (speed, collision-avoidance response times, etc.) for city settings, where the required detection range might be 10 meters, versus in interurban regions where the necessary detection distance may be longer.
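The pyramid itself is simple to express in code. The sketch below, again in Python with OpenCV, repeatedly downscales the frame until the 64 by 128 template no longer fits; the scale factor and filenames are illustrative assumptions.

```python
# Minimal sketch: an image pyramid built by repeated downscaling, so a
# fixed 64x128 detection template can match pedestrians at varying distances.
import cv2

def image_pyramid(frame, scale=1.25, min_size=(64, 128)):
    """Yield progressively smaller copies of frame until the template no longer fits."""
    yield frame
    while True:
        h, w = frame.shape[:2]
        new_w, new_h = int(w / scale), int(h / scale)
        if new_w < min_size[0] or new_h < min_size[1]:
            break
        frame = cv2.resize(frame, (new_w, new_h), interpolation=cv2.INTER_AREA)
        yield frame

frame = cv2.imread("road_scene.jpg")  # hypothetical input
levels = list(image_pyramid(frame))   # the detector scans each level with its fixed window
```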

For closer-range detection such as in a city setting, the input image frame can be considerably downscaled even before the image pyramid is constructed, greatly reducing the computational load. A software-based pedestrian detection system will be able to dynamically adjust algorithmic parameters, allowing flexible control of processor resource usage. With fixed-function hardware approaches, in contrast, such algorithmic parameters may be fixed.
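One way a software pipeline might expose that flexibility is through per-environment profiles, as in the sketch below. The scale factors and range figures are assumptions chosen for illustration, not Euro NCAP values or any vendor's settings.

```python
# Minimal sketch (assumed parameters): pre-shrink the frame for short-range
# "city" operation to cut pyramid work; keep full resolution for longer-range
# "interurban" detection.
import cv2

PROFILES = {
    "city":       {"pre_scale": 0.5, "pyramid_scale": 1.3},   # ~10 m detection range
    "interurban": {"pre_scale": 1.0, "pyramid_scale": 1.15},  # longer detection range
}

def preprocess(frame, profile):
    """Downscale the input frame according to the active driving profile."""
    p = PROFILES[profile]
    if p["pre_scale"] < 1.0:
        h, w = frame.shape[:2]
        frame = cv2.resize(
            frame,
            (int(w * p["pre_scale"]), int(h * p["pre_scale"])),
            interpolation=cv2.INTER_AREA,
        )
    return frame, p["pyramid_scale"]
```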

To discern the distance to objects of interest, ADAS systems use stereo vision or some other 3D sensing scheme. A stereo camera arrangement requires disparity mapping to generate the 3D depth map, which is another interesting and challenging computer vision function. And as ADAS systems increasingly include other vision-based functions, such as lane keeping, road sign detection, and intelligent high beam headlight control, the aggregate processing requirements become formidable. ADAS systems, as well as other embedded vision applications, may therefore increasingly turn to dedicated vision processing cores (such as CogniVue’s APEX) to meet these processing needs while simultaneously maintaining low power consumption targets.
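For readers unfamiliar with disparity mapping, the sketch below shows the basic computation with OpenCV's block matcher on a rectified stereo pair, with depth recovered as focal length times baseline divided by disparity. The calibration numbers are illustrative assumptions and have nothing to do with EyeSight or any other production system.

```python
# Minimal sketch: disparity map from a rectified stereo pair, then per-pixel
# depth via depth = (focal_length_px * baseline_m) / disparity_px.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

focal_px, baseline_m = 800.0, 0.35  # assumed calibration values
with np.errstate(divide="ignore"):
    depth_m = (focal_px * baseline_m) / disparity  # depth in metres per pixel
```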

Visit the Embedded Vision Alliance's website to keep up with the opportunities and challenges in embedded vision, along with implementation details and supplier connections. Be sure to check out a recently published technical article that discusses pedestrian detection and other ADAS applications in great detail. And plan to attend the Alliance's next Embedded Vision Summit, a technical educational forum for engineers interested in incorporating visual intelligence into electronic systems and software, to be held in Santa Clara, California on May 29.
