
Top 5 Myths in Automotive Vision: Designing Embedded Vision Systems Is Easier Than You Think


This article was originally published at Avnet's website. It is reprinted here with the permission of Avnet.

By Stephen Evanczuk

Vision has always occupied a special place in information science and popular culture. One does not need to be an engineer to appreciate the vast bandwidth available in normal human vision. Most people understand that the common saying “a picture is worth a thousand words” is simply shorthand for the rapid assimilation and interpretation of huge amounts of raw data. Accordingly, myths have arisen that building a vision system must be too complex to seriously contemplate.

Indeed, engineers, product planners and company executives are often too quick to accept any number of reasons for avoiding development of any kind of embedded vision application, much less those associated with mission-critical automotive applications. Despite rapid advancements in embedded vision markets and technology, as well as the ready availability of supporting products and services for automotive vision solutions, some myths still persist.

Myth 1: It's just for driver rear-view vision

Rear-view cameras might be the most familiar application of video systems in vehicles, but opportunities for vision systems abound for enhancing vehicle safety and informatics capabilities. Embedded vision systems integrate high-resolution image sensors with powerful processing hardware and sophisticated software capable of object detection, recognition and tracking. This combination of imaging hardware and software provides the underlying foundation for high-speed detection and recognition of pedestrians, other vehicles, traffic signs, lane obstructions, lane departure and any number of related applications.
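To give a rough sense of the detection building blocks such systems rest on, the sketch below runs the open source OpenCV library's stock HOG-based people detector on a single camera frame. The file name and detector parameters are illustrative assumptions, not values from any production automotive pipeline.

```cpp
// Illustrative sketch only: detect pedestrians in one frame using OpenCV's
// pre-trained HOG + linear SVM people detector. Paths and tuning values are hypothetical.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::Mat frame = cv::imread("road_scene.jpg");   // hypothetical test frame
    if (frame.empty()) {
        std::cerr << "Could not load test frame\n";
        return 1;
    }

    // Pedestrian detector shipped with OpenCV
    cv::HOGDescriptor hog;
    hog.setSVMDetector(cv::HOGDescriptor::getDefaultPeopleDetector());

    std::vector<cv::Rect> pedestrians;
    hog.detectMultiScale(frame, pedestrians, 0.0, cv::Size(8, 8),
                         cv::Size(32, 32), 1.05, 2.0);

    // Draw and report each detection; a real system would hand these bounding
    // boxes to tracking and decision logic rather than annotate an image.
    for (const cv::Rect& r : pedestrians) {
        cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2);
        std::cout << "Pedestrian at x=" << r.x << " y=" << r.y << "\n";
    }
    cv::imwrite("road_scene_annotated.jpg", frame);
    return 0;
}
```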

Myth 2: The technology's not there yet

For automotive safety applications, embedded vision requires real-time image processing capable of detecting, recognizing, classifying and tracking hazards or potential hazards, whether road obstructions, lane departures, other vehicles or pedestrians. Furthermore, to activate vehicle safety capabilities, vision data needs to be processed and distributed to high-level vehicle control systems such as those responsible for steering, braking and acceleration.

Despite the complexity of these systems, the individual elements required to build these solutions are readily available or even already in place. Existing standards such as CAN and Ethernet AVB (Audio/Video Bridging) provide the communications backbone needed to distribute video, data and control operations across different vehicle subsystems.
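As a hedged sketch of how a vision subsystem might publish a detection result over a CAN backbone, the fragment below uses Linux SocketCAN. The interface name, CAN identifier and payload layout are invented for illustration; in a real vehicle they would be dictated by the network specification.

```cpp
// Illustrative sketch: publish a simple "obstacle detected" message on a CAN bus
// via Linux SocketCAN. Interface name, CAN ID, and payload encoding are hypothetical.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main() {
    int sock = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (sock < 0) { perror("socket"); return 1; }

    // Bind the raw CAN socket to a (hypothetical) interface named "can0"
    struct ifreq ifr {};
    std::strncpy(ifr.ifr_name, "can0", IFNAMSIZ - 1);
    if (ioctl(sock, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

    struct sockaddr_can addr {};
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(sock, reinterpret_cast<struct sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    // Hypothetical frame: ID 0x321, byte 0 = object class, bytes 1-2 = range in cm
    struct can_frame frame {};
    frame.can_id = 0x321;
    frame.can_dlc = 3;
    frame.data[0] = 0x01;              // 0x01 = pedestrian (illustrative encoding)
    uint16_t range_cm = 850;           // obstacle 8.5 m ahead
    frame.data[1] = range_cm >> 8;
    frame.data[2] = range_cm & 0xFF;

    if (write(sock, &frame, sizeof(frame)) != sizeof(frame)) { perror("write"); return 1; }
    close(sock);
    return 0;
}
```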

Similarly, it is a myth that automotive vision systems require some sort of satellite surveillance-quality imaging capability. In fact, automotive applications dictate a broad range of image sensor requirements that lie well within the capabilities of commonly available devices.

For the underlying computing power, developers can find high-performance processors that combine multiple types of cores in heterogeneous architectures able to handle not only general purpose applications software but also the real-time processing required for this environment. For more demanding video requirements, specialized video processors leverage internal processing pipelines designed to outperform general purpose processors. In fact, devices such as the Analog Devices BF609 Blackfin® processor combine DSP cores with Analog Devices' specialized PVP (pipelined vision processor) to accelerate image processing algorithm execution. Furthermore, hybrid devices such as the Xilinx Zynq® 7000 All Programmable SoC combine general purpose ARM Cortex™-A9 cores with an FPGA fabric designed to support very high speed, hardware-based custom data processing pipelines required for specialized vision applications.

Myth 3: You have to have a Ph.D. in image processing

The myth that algorithm complexity will eventually derail any automotive vision development project dates back to when real-time object detection and recognition lay strictly in the research domain. Worse, the lack of embedded processing power available at that time forced researchers to work around computational limitations that simply do not exist in today's high-performance embedded hardware platforms.

The notion of extreme complexity often persists today despite the wide availability of software solutions for image processing. For example, MATLAB's Computer Vision System Toolbox and the open source OpenCV library provide pre-built and tested functions for advanced computer vision, with support for capabilities ranging from basic image manipulation to object recognition and tracking.
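To illustrate how little custom code such library functions can require, the sketch below marks candidate lane lines in a road image using standard OpenCV calls. The file name, region of interest and thresholds are illustrative guesses rather than tuned values.

```cpp
// Illustrative sketch: find candidate lane markings with standard OpenCV calls.
// File name, thresholds, and region of interest are hypothetical, untuned values.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat frame = cv::imread("highway.jpg");      // hypothetical test frame
    if (frame.empty()) return 1;

    // Basic image manipulation: grayscale, blur, then edge detection
    cv::Mat gray, edges;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);
    cv::Canny(gray, edges, 50, 150);

    // Keep only the lower half of the image, where lane markings appear
    cv::Mat roi = edges(cv::Rect(0, edges.rows / 2, edges.cols, edges.rows / 2));

    // Probabilistic Hough transform returns line segments as (x1, y1, x2, y2)
    std::vector<cv::Vec4i> segments;
    cv::HoughLinesP(roi, segments, 1, CV_PI / 180, 40, 30, 10);

    // Draw each segment back onto the full frame, offset for the cropped ROI
    for (const cv::Vec4i& s : segments) {
        cv::line(frame,
                 cv::Point(s[0], s[1] + frame.rows / 2),
                 cv::Point(s[2], s[3] + frame.rows / 2),
                 cv::Scalar(0, 0, 255), 2);
    }
    cv::imwrite("highway_lanes.jpg", frame);
    return 0;
}
```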

At the same time, development environments offer built-in support designed to accelerate design and enhance productivity. For example, the Xilinx Vivado™ tool chain provides developers with the capabilities needed to rapidly deploy high performance embedded vision systems that combine the Xilinx Zynq 7000 All Programmable SoC, third-party IP and their own proprietary algorithms. High-level synthesis tools in the Vivado tool chain allow engineers to quickly implement their performance critical C-based algorithms or OpenCV functions as hardware in the Zynq 7000's FPGA fabric.
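To give a rough sense of what implementing a C-based algorithm as hardware looks like, the fragment below shows a trivial pixel-threshold stage written in the C++ style accepted by Vivado high-level synthesis. The function name, frame dimensions and pragma choice are assumptions for illustration, not a vetted reference design.

```cpp
// Illustrative sketch: a binary-threshold stage in the C++ subset that Vivado
// high-level synthesis can turn into FPGA logic. Names, image size, and pragma
// settings are assumptions for illustration only.
#include <cstdint>

const int WIDTH  = 1280;   // hypothetical frame dimensions
const int HEIGHT = 720;

// Reads one 8-bit grayscale frame from in[] and writes a binarized frame to out[].
// The PIPELINE pragma asks the synthesizer to start a new pixel every clock cycle,
// the kind of hardware pipelining a purely software loop cannot express.
void threshold_frame(const uint8_t in[WIDTH * HEIGHT],
                     uint8_t out[WIDTH * HEIGHT],
                     uint8_t threshold) {
    for (int i = 0; i < WIDTH * HEIGHT; ++i) {
#pragma HLS PIPELINE II=1
        out[i] = (in[i] > threshold) ? 255 : 0;
    }
}
```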

Myth 4: It's too expensive to get started

The flip side of the software myth is the notion that embedded vision hardware subsystems are simply too costly. In fact, developers can find complete low-cost systems such as Avnet’s Blackfin® Embedded Vision Starter Kit. Priced at $299, the kit combines the FinBoard development board with a full complement of software development tools and accessories required to build sophisticated vision applications. Based on the Analog Devices Blackfin BF609, the kit enables developers to explore sophisticated imaging applications, relying on the BF609's integrated PVP to accelerate execution of image processing algorithms. Included with the kit, the CrossCore™ Embedded Studio development suite and ICE-100B In-Circuit Emulator help speed design and debug of these systems.

Myth 5: There aren't enough resources to help

For developers and companies looking to explore automotive image processing applications, perhaps the most important fact is the breadth and depth of resources available to help them design and optimize these systems. Automotive vision is a strategic market for a growing group of IC and board manufacturers, and each of the leading suppliers offers specialized assistance in specifying and designing these systems. Furthermore, developers can find assistance with embedded vision design through OpenCV and its community of 47,000 developers, and through the FinBoard community supporting the Blackfin Embedded Vision Starter Kit. Engineers beginning to explore vision systems can also take advantage of a growing number of workshops and seminars on the topic. Along with presentations at professional conferences such as the Embedded Vision Summit, engineers can find local presentations such as the Smarter Vision Design Seminar and Workshop.
