
Embedded Vision Summit Demonstrations Complete Your Learning, Provide Opportunities For Interaction


Technology demonstrations from more than 30 participating companies at next week's Embedded Vision Summit supplement the computer vision concepts presented in the event's three presentation tracks, providing tangible examples of those concepts implemented in silicon and software. In the Technology Showcase, you'll be able to discover suppliers and enabling products that you can use in your next-generation products, particularly valuable in a fast-evolving field like embedded vision. And you'll have the opportunity to interact directly and in depth with experts in the field, as well as with your peers.

Synopsys, for example, will highlight deep neural networks, an increasingly popular means of extracting meaning from images and one of the overarching Summit themes. The company will use its new DesignWare EV 5x vision processor IP core to demonstrate the detection and classification of speed sign images in real time from a moving car.
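To make the classification step concrete: a sign-recognition network's final layer produces one raw score (logit) per class, and a softmax turns those scores into class probabilities. The sketch below is a generic, pure-Python illustration of that last stage; the class names and score values are made up, and it does not represent Synopsys' actual network.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw class scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical speed-sign classes and raw network outputs for one frame.
classes = ["30 km/h", "50 km/h", "70 km/h", "no sign"]
logits = [0.2, 3.1, 0.5, -1.0]

probs = softmax(logits)                        # probabilities, summing to 1
predicted = classes[probs.index(max(probs))]   # highest-probability class
```

Here the second logit dominates, so the frame would be labeled "50 km/h".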

Embedded Vision Alliance founding member BDTI will conduct two demonstrations at the Summit. The first shows the implementation of GPU-accelerated background subtraction using a single 2D camera. The second, implemented using OpenCV-sourced segmentation techniques, dynamically overlays "flame" effects on any moving objects in the video scene.
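The core idea behind background subtraction can be sketched in a few lines: maintain a slowly updated model of the static scene, and flag pixels that deviate from it. This pure-Python toy operates on a 1-D "frame" for brevity; a real pipeline such as BDTI's GPU-accelerated demo works on full 2D camera frames, typically via OpenCV, and the parameter values here are illustrative.

```python
def update_background(background, frame, alpha=0.05):
    """Blend the new frame into the background model (exponential average)."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(background, frame)]

def foreground_mask(background, frame, threshold=30):
    """Mark pixels whose deviation from the background exceeds a threshold."""
    return [abs(f - b) > threshold for b, f in zip(background, frame)]

# Tiny synthetic example: a static scene (intensity 10) with a bright
# object entering at pixels 2-3.
background = [10.0] * 8
frame = [10, 10, 200, 210, 10, 10, 10, 10]

mask = foreground_mask(background, frame)      # True where the object is
background = update_background(background, frame)
```

The slow update rate (`alpha`) lets the model absorb gradual lighting changes without absorbing moving objects.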

It's also possible to implement background subtraction using a depth-discerning sensor, which Intel plans to demonstrate using the company's RealSense stereo vision camera. Also showcased in Intel's booth will be the RealSense SDK and INDE (Integrated Native Developer Experience) OpenCV software tools, optimized for heterogeneous computing platforms.
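With a depth map like the one a RealSense stereo camera produces, foreground separation reduces to a distance test, with no background model to learn. A minimal sketch, with made-up depth values in millimetres:

```python
def depth_foreground(depth_map, max_depth_mm=1500):
    """Flag pixels closer to the camera than a cutoff distance."""
    return [d < max_depth_mm for d in depth_map]

# Hypothetical depth readings: a person ~0.8 m in front of a ~3 m wall.
depth_map = [3000, 2900, 800, 750, 3100]
mask = depth_foreground(depth_map)
```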

Alliance founding member Xilinx, like Cadence, also plans to make deep neural networks the focus of one of its three demonstrations. The TeraDeep nn-X will highlight the significant power consumption reduction possible when running deep learning in an FPGA compared with a CPU. Xilinx will also demonstrate partitioning a Canny edge detection algorithm between hardware and software using the company's SDSoC development toolset, and how video filters initially built in C or C++ can be hardware-accelerated in a Zynq All Programmable SoC.
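The compute-heavy stage of Canny that such hardware/software partitioning typically targets is the per-pixel Sobel gradient. The sketch below shows just that stage on a tiny grayscale image (a list of lists); a full Canny pass adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding. This is a generic pure-Python illustration, not Xilinx's implementation.

```python
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve3x3(img, kernel, y, x):
    """Apply a 3x3 kernel centred at (y, x); caller keeps us off the border."""
    return sum(kernel[j][i] * img[y + j - 1][x + i - 1]
               for j in range(3) for i in range(3))

def gradient_magnitude(img):
    """Sobel gradient magnitude for interior pixels (borders left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = convolve3x3(img, SOBEL_X, y, x)
            gy = convolve3x3(img, SOBEL_Y, y, x)
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge between columns 1 and 2 gives a strong response
# along that boundary and zero elsewhere.
img = [[0, 0, 255, 255]] * 4
mag = gradient_magnitude(img)
```

Because each output pixel depends only on a fixed 3x3 neighborhood, this stage maps naturally onto a streaming FPGA datapath, which is what makes it a good candidate to move into hardware.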

Fellow programmable logic supplier Altera will also have multiple demonstrations in its Technology Showcase booth. Forward- and surround-camera advanced driver assistance system (ADAS) designs will be showcased, the former also applicable to robotics and surveillance. Altera will also demonstrate a "smart city" video analytics system, and a Lucas-Kanade dense optical flow implementation relevant to multiple applications.
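The Lucas-Kanade method rests on brightness constancy: a pixel's intensity is assumed to move, not change, between frames, so the displacement u is the least-squares solution of u * Ix ≈ -It over a small window. The sketch below shows this in one dimension for clarity; real dense Lucas-Kanade solves a 2x2 system per pixel over 2D frames, and this pure-Python toy is not Altera's implementation.

```python
def gradient(signal):
    """Central-difference spatial gradient (one-sided at the borders)."""
    n = len(signal)
    return [(signal[min(i + 1, n - 1)] - signal[max(i - 1, 0)]) / 2.0
            for i in range(n)]

def lucas_kanade_1d(frame0, frame1, center, radius=2):
    """Estimate displacement u over a window around `center`."""
    ix = gradient(frame0)
    win = range(center - radius, center + radius + 1)
    num = -sum(ix[i] * (frame1[i] - frame0[i]) for i in win)  # -sum(Ix*It)
    den = sum(ix[i] ** 2 for i in win)                        #  sum(Ix^2)
    return num / den if den else 0.0

# A ramp shifted right by one sample should yield a displacement of ~+1.
frame0 = [float(i) for i in range(10)]
frame1 = [max(0.0, x - 1) for x in frame0]
u = lucas_kanade_1d(frame0, frame1, center=5)
```

Summing over a window makes the estimate robust where a single pixel's gradient would be zero or noisy, which is the key idea that carries over to the 2D case.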

Development kits are valuable in accelerating your product's development, and Avnet Electronics has you covered. The company's MicroZed Embedded Vision Development Kit will be shown running face analytics and an image signal processing (ISP) pipeline, while the PicoZed Smart Vision Development Kit showcases various system-to-camera interface options.

These are only a few of the dozens of demonstrations planned from the more than 30 companies that will be participating in the Embedded Vision Summit's Technology Showcase. And don't forget, the Summit also includes 24 presentations by vision technology, application and market experts, keynote talks from Mike Aldred of Dyson and Dr. Ren Wu of Baidu, and four in-depth accompanying workshops. The Embedded Vision Summit takes place on May 12, 2015 at the Santa Clara (California) Convention Center. Half- and full-day workshops will be presented on May 11 and 13. Register today, while space is still available!
