May 2014 Embedded Vision Summit West Technology Showcase

 

Computer Vision for Next-Generation Products

The event for software and hardware developers who want to incorporate visual intelligence into their products

29 May 2014 • 8 am to 7:30 pm
Santa Clara Convention Center • Santa Clara, CA USA

The Embedded Vision Summit Technology Showcase included technology demonstrations, food and drink, and networking opportunities. Attendees saw demos of the latest embedded vision processors, sensors, algorithms, applications, development tools and more. The Technology Showcase was an ideal opportunity to find technologies, products, and suppliers for embedded vision product designs, to talk face-to-face with technical experts from embedded vision technology suppliers, to get questions answered, and to make connections with peers.

The Embedded Vision Summit Technology Showcase was open from 12:00 PM to 7:30 PM on Thursday, May 29th, 2014, at the Santa Clara Convention Center in Santa Clara, California. Lunch was served from 12:00 PM to 1:00 PM, the afternoon break took place from 3:30 PM to 4:00 PM, and a reception with food and drink was held from 5:30 to 7:30 PM.

The May 2014 Embedded Vision Summit West has now concluded.

Presenting Companies

The following Alliance member companies and partners participated in the Embedded Vision Summit Technology Showcase.

Embedded Vision Using GPU Acceleration
AMD's latest processors deliver over a teraflop of compute performance using an open, industry-standard software and hardware ecosystem. AMD’s embedded vision experts demonstrate several applications that harness this heterogeneous performance to accelerate industry-standard vision libraries such as OpenCV. Gesture recognition will soon become ubiquitous, and all of AMD’s new processors (APUs) have an integrated GPU that efficiently accelerates this workload. AMD also demonstrates how to harness the enormous compute capabilities of its latest discrete GPU cards, showcasing embedded vision features such as optical flow and face detection.
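
As a rough illustration of this kind of GPU offload, recent OpenCV builds can dispatch work to an OpenCL device (such as an AMD GPU) transparently through the T-API. The sketch below is a minimal example, assuming OpenCV 3.x or later Python bindings, an OpenCL-capable device, and a webcam at index 0; it is not AMD's demo code.

    # Sketch: OpenCL-accelerated dense optical flow via OpenCV's T-API.
    # Assumes OpenCV 3.x+ Python bindings and an OpenCL-capable GPU.
    import cv2

    cv2.ocl.setUseOpenCL(True)        # enable OpenCL dispatch if available
    cap = cv2.VideoCapture(0)         # hypothetical webcam source

    ok, prev = cap.read()
    prev_u = cv2.UMat(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray_u = cv2.UMat(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        # Farneback dense optical flow runs on the OpenCL device when possible
        flow = cv2.calcOpticalFlowFarneback(prev_u, gray_u, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # flow.get() downloads the result to host memory when needed
        prev_u = gray_u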

   

Analog Devices’ Blackfin® Face Detection
This demo is based on a Blackfin BF609 processor. A combination of Analog Devices’ unique Pipelined Vision Processor (PVP) video analytics accelerator and a Blackfin DSP core is used to detect faces in a live video stream. All frontal faces within a pre-defined range are detected and marked in the displayed video.
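
For readers who want to experiment with the same class of processing on a PC, a minimal frontal-face detector can be built with OpenCV's bundled Haar cascade. This is an illustrative sketch only, not Analog Devices' PVP-accelerated implementation.

    # Sketch: frontal face detection with OpenCV's bundled Haar cascade.
    # Illustrative only; not the PVP-accelerated Blackfin implementation.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)         # hypothetical webcam source

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:    # mark each detected face
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) == 27:      # Esc to quit
            break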

Analog Devices’ Blackfin FinBoard Character/License Plate Recognition
This demo consists of high-accuracy alphanumeric license plate recognition software running on the FinBoard (Blackfin Embedded Vision Starter Kit). The FinBoard is connected via USB to a PC, where a LabVIEW-based access control application displays the recognized license plate for security and identification.

   

Face Detection & Analysis by PUX Corporation

  • Real-time face detection and analysis
  • Performance improved using GPU Compute
  • OpenCL Full Profile on ARM® Mali™-T600 series GPU

Gesture-Based User Interface by eyeSight Technologies using GPU Compute

  • 3D gesture recognition using 2D camera
  • Improved accuracy and reliability using GPU Compute
  • OpenCL Full Profile on ARM® Mali™-T600 series GPU
   

The Aspera Transfer Platform: Fast, Secure Data Transfer Powering New Business In the Cloud and On Premise
Demonstrations of client and server software being used to transport large files and directories of files at speeds that surpass the capabilities of traditional TCP/IP-based network methods, using Aspera’s patented high-speed transfer technology, fasp™.

   

Make the World Interactive. It Will Be AWEsome.
Augmented World Expo (AWE) is the world’s largest gathering of professionals dedicated to solving real problems in the Augmented World, featuring Augmented Reality, Wearable Computing, Digital Eyewear, Gesture and Sensor devices, and The Internet of Things. AWE assembles top innovators—from the hottest startups to the Fortune 500—to showcase the best augmented world experiences in all aspects of life and work: from entertainment and brand engagement, to enterprise and industrial, urban and architecture, education and training, automotive and navigation, government and commerce. The AWE 2014 Program includes mind-blowing keynotes by industry luminaries, a Hackathon, Tutorials for developers and designers, Startups Launchpad, Awards Competition, Art Gala, and talks by 100 industry leaders covering business, technology, and design.

   

Accelerating Computer Vision Algorithms using FPGAs
Demonstrating a simple image processing pipeline: a highly parallel implementation of bilateral/Gaussian filtering and Canny edge detection. The demonstration highlights Auviz Systems’ expertise in image processing and high-performance optimization of algorithms.
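
In OpenCV terms, the same pipeline can be expressed in a few lines. This minimal sketch (plain Python/OpenCV with an assumed input file name, not Auviz's FPGA implementation) chains a bilateral filter with Canny edge detection.

    # Sketch: bilateral filter + Canny edge detection (not the FPGA version).
    import cv2

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    # Bilateral filter: edge-preserving smoothing (d=9, sigmaColor/Space=75)
    smoothed = cv2.bilateralFilter(img, 9, 75, 75)
    # Canny edge detection with low/high hysteresis thresholds
    edges = cv2.Canny(smoothed, 50, 150)
    cv2.imwrite("edges.png", edges)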

   

Your Programmable SoC Vision Platform
MicroZed is Avnet’s newest low-cost Zynq®-7000 All Programmable SoC development kit, supporting designers from evaluation all the way to production. This uniquely designed board can operate in standalone mode for quick experiments or as a System-on-Module (SOM) when plugged into a carrier card. This demonstration shows MicroZed connected to a USB webcam and a 7” industrial touch LCD panel, running accelerated OpenCV algorithms using the programmable logic section of the SoC.

Finding Objects in Live Video
The Blackfin Embedded Vision Starter Kit combines a versatile hardware platform with the necessary software development tools to enable the building of high-performance embedded vision systems. Based on the low-cost, dual core Blackfin BF609 processor, the kit is ideal for exploring advanced video analytics. This demonstration shows FinBoard in action through a real-time dice counting application.

Avnet WandCam—5 Megapixel Camera for Wandboard
The WandCam provides a 5 MP MIPI camera input for Wandboard for any application needing high-performance, high-definition video. It connects seamlessly to Wandboard Solo, Dual, or Quad using the MIPI CSI-2 interface via an on-module FFC (flat flex cable) connector. This demonstration shows wireless real-time HD video streaming between a Wandboard module and a mobile device using the real-time streaming protocol (RTSP).
   
“Man vs Machine”—Dice Dot Counting on the Analog Devices BF609
This demo shows edge detection, contouring, and classification through an optimized implementation on the Analog Devices Blackfin ADSP-BF609 processor, using the chip’s dual CPU/DSP cores and integrated Pipelined Vision Processor. The dice are thrown within view of the camera, and the software finds the dots and computes the total much faster than a human. Designed, implemented, and optimized by BDTI engineers—experts in embedded vision application development.
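
One way to reproduce the dot-counting flow on a PC, under the assumption of dark dots on lighter dice faces, is thresholding followed by contour extraction and simple size/circularity classification. This OpenCV sketch is illustrative only, not BDTI's optimized BF609 code.

    # Sketch: count dice dots via threshold + contour classification.
    # Assumes dark dots on lighter dice faces; hypothetical input file.
    import cv2
    import math

    img = cv2.imread("dice.png", cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(img, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # OpenCV 4.x signature: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    dots = 0
    for c in contours:
        area = cv2.contourArea(c)
        perim = cv2.arcLength(c, True)
        if perim == 0:
            continue
        circularity = 4 * math.pi * area / (perim * perim)
        if 30 < area < 500 and circularity > 0.7:  # round, dot-sized blobs
            dots += 1
    print("total:", dots)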

“People on Fire”—Real-time Augmented Reality
This demo shows "flame" effects overlaid on any moving objects in the video scene.  Implemented in OpenCV by BDTI engineers, this demo exemplifies the use of segmentation techniques as an alternative to target identification and tracking in augmented reality applications.

   

Smartglasses with Gesture Controlled User Interface
The Mirama Smartglasses by Brilliantservice were developed in cooperation with Bluetechnix, using Bluetechnix’s Time-of-Flight (ToF) sensor technology, which enables gesture-controlled AR applications.

   

HD Image Stabilization
The Cadence Tensilica IVP Image/Video Processor DSP IP core processes HD video signals in real time to counteract camera shake, which becomes more pronounced as zoom magnification increases. This reduces the blur in video from shaky hands.
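
A common software analogue of this technique estimates inter-frame motion from tracked corners and warps each frame to cancel it. The sketch below assumes a feature-rich scene and a hypothetical input clip; it is not Cadence's DSP implementation.

    # Sketch: digital stabilization by cancelling estimated inter-frame motion.
    # Assumes a feature-rich scene; not Cadence's IVP implementation.
    import cv2

    cap = cv2.VideoCapture("shaky.mp4")          # hypothetical input clip
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 10)
        nxt, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good_old, good_new = pts[st == 1], nxt[st == 1]
        # Estimate the rigid-ish transform between frames, inverted so that
        # warping the current frame cancels the shake
        m, _ = cv2.estimateAffinePartial2D(good_new, good_old)
        h, w = frame.shape[:2]
        stable = cv2.warpAffine(frame, m, (w, h))
        prev_gray = gray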

HD 3D Noise Reduction
The Cadence Tensilica IVP Image/Video Processor DSP IP core digitally removes noise from the image, even in low light conditions.  It handles traditional frame-to-frame noise reduction as well as spatial noise reduction. Spatial noise reduction compares pixels with neighboring pixels, detecting and removing unwanted noise found within the frame.  It removes digital artifacts and grainy appearances.
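
A toy version of the temporal-plus-spatial idea can be written in a few lines: blend each frame into a running average (temporal), then apply an edge-preserving spatial filter. This is a simple illustration with assumed parameters, not the IVP core's algorithm.

    # Sketch: toy temporal + spatial (3D) noise reduction.
    # Not the Tensilica IVP algorithm; a simple running-average illustration.
    import cv2

    cap = cv2.VideoCapture(0)         # hypothetical webcam source
    ok, first = cap.read()
    avg = first.astype("float32")

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Temporal: exponential running average across frames
        cv2.accumulateWeighted(frame, avg, 0.2)
        temporal = cv2.convertScaleAbs(avg)
        # Spatial: edge-preserving smoothing within the frame
        denoised = cv2.bilateralFilter(temporal, 5, 50, 50)
        cv2.imshow("denoised", denoised)
        if cv2.waitKey(1) == 27:
            break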

Face Detection with Tensilica Always-On DSP
Cadence’s always-on DSP provides effective face detection capabilities. It can be used to scan an environment and select faces, sending those facial images—if required—to a computer or person for further processing or viewing. The key to this application is an ultra-low-power processor implementation, relieving the host processor from having to stay on in battery-operated applications.
   

CEVA-CV and SmartFrame on the CEVA-MM3101
Real-time video processing running on the MM3101. Users can choose a combination of filters, including Gaussian, Sobel, Laplacian, median, average, pyramid scale-down, and Harris corner, used for operations such as corner finding, gradients, derivatives, inverse by lookup table, and erosion and dilation morphology. The SmartFrame tool can be used with computer vision algorithms to handle system management; it also supports tunneling of multiple kernels.
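
For orientation, the host-side OpenCV equivalents of a few of the listed kernels look like this. The sketch below (with an assumed input file name) is purely illustrative and unrelated to CEVA's DSP kernels.

    # Sketch: host-side OpenCV equivalents of a few listed kernels.
    import cv2
    import numpy as np

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    corners = cv2.cornerHarris(np.float32(img), 2, 3, 0.04)  # Harris corners
    gx = cv2.Sobel(img, cv2.CV_16S, 1, 0)             # horizontal gradient
    gy = cv2.Sobel(img, cv2.CV_16S, 0, 1)             # vertical gradient
    kernel = np.ones((3, 3), np.uint8)
    eroded = cv2.erode(img, kernel)                   # morphology: erosion
    dilated = cv2.dilate(img, kernel)                 # morphology: dilation
    small = cv2.pyrDown(img)                          # pyramid scale-down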

CEVA + Visidon + Sensory Integrated Sensor-Hub
A low-power CEVA DSP with an always-on integrated sensor-hub solution. Shows sound, vision, and motion sensing for full contextual awareness, including face activation and unlocking; voice activation and command; and 9-axis movement sensor fusion.
Visidon face activation supports always-on face detection in any orientation and is insensitive to varying imaging conditions and facial features. Sensory voice activation supports always-listening voice triggers and commands with speaker independence, plus optional speaker verification and user-defined triggers.

CEVA-MM3101 Object Recognition
The CEVA-MM3101 object recognition demo shows computer vision capabilities for innovative applications such as augmented reality and ADAS. Based on the ORB feature extraction algorithm, object recognition is the basis for face detection, pedestrian detection, augmented reality, scene content classification, and other applications. Used for markets including mobile, home entertainment, automotive, surveillance, military, and robotics.
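
As background, ORB-based recognition typically extracts keypoints and binary descriptors from a reference object and matches them against each scene. This minimal OpenCV sketch (with assumed input file names) illustrates the idea; it is not CEVA's optimized kernel.

    # Sketch: ORB keypoint matching for object recognition.
    # Plain OpenCV illustration; not CEVA's MM3101 implementation.
    import cv2

    orb = cv2.ORB_create(nfeatures=500)
    obj = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # reference object
    scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical scene

    kp1, des1 = orb.detectAndCompute(obj, None)
    kp2, des2 = orb.detectAndCompute(scene, None)

    # Hamming distance suits ORB's binary descriptors; crossCheck filters noise
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    print("good matches:", len([m for m in matches if m.distance < 40]))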
   

CogniVue: Embedded Vision in the Newest Wearable Technologies
The CogniVue CV220x image cognition processor enables leading performance per unit area and power in wearable tech devices, such as the NeoLabs Neo.1 smart pen. See this disruptive technology in action as a sketch artist draws your portrait on paper and the pen converts it to digital form in real time.

   

Video on Virtex
Uses the Xilinx® VC707 Virtex-7 Development System with Avnet® HDMI input/output FMC (FMC-IMAGEON), generic ARM® controller FMC, 1080p60 video camera, and 1080p60 television. The demo shows background video with five picture-in-picture (PiP) images, demonstrating:

  • PiP generation, switching, and movement
  • 480x270 PiP
  • DDR3 memory operation
  • Test pattern, scaling, and video cross connect
  • SPI to AXI4-Lite bridging

Advanced Tool Flows on Zynq®
Uses the Avnet® ZedBoard® featuring Xilinx® Zynq®, running AES, SHA-2, and SHA-3 algorithms sequentially in three different ways: on a bare-metal processor; in fabric using HLS-converted C; and in fabric using hand-coded RTL. CPU0 runs Linux; CPU1 runs bare metal. Linux runs the UI and a web server. The FPGA fabric is partially reconfigured each time a new algorithm is required (6 times during the demo!).

   

ToF (Time-of-Flight) Sensor
A time-of-flight sensor is a range imaging sensor system that resolves distance based on the known speed of light. ToF sensing can be used in applications such as air switches, finger gesture input, electronic whiteboards, object detection, and 3D measurement.
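
The underlying arithmetic is simple: light travels out and back, so range is half the round trip at the speed of light. A small numeric sketch (pulse-based ToF; the example timing value is illustrative):

    # Sketch: the basic time-of-flight range equation, d = c * t / 2.
    C = 299_792_458.0            # speed of light, m/s

    def tof_distance_m(round_trip_s: float) -> float:
        """Range from a measured round-trip time (pulse-based ToF)."""
        return C * round_trip_s / 2.0

    # Example: a 6.67 ns round trip corresponds to roughly 1 m of range
    print(tof_distance_m(6.67e-9))   # ~1.0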

ASIC Emulation with Zynq
The TB-7V-2000T-LSI is intended for ASIC emulation and incorporates a Virtex-7 2000T FPGA, the world’s largest FPGA, with performance features that include high-speed internal logic and high-bandwidth interfaces. The TB-7V-2000T-LSI with Zynq FMC card can support multiple OSes with an ICE debugger.

HDMI 1.4a 4k2k 30 Hz
This demonstration uses the high-speed transceivers of a Kintex-7 FPGA board to implement 4k2k video at 30 Hz. With the newest consumer video moving to 4k2k resolution, inrevium will release new video interfaces that support 4k2k.

   

Object Detection and Tracking Using MATLAB® and Simulink® With Xilinx® Zynq® SOCs
Automatic detection and motion-based tracking of moving objects in a video from a stationary camera.  Detection of moving objects and motion-based tracking are important components of many computer vision applications, including activity recognition, traffic monitoring, and automotive safety.
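
The demo follows the classic detect-then-track pattern. As a rough cross-reference (not the MATLAB/Simulink code itself), the same pipeline in OpenCV/Python, with an assumed input file, is background subtraction followed by bounding boxes on the resulting blobs:

    # Sketch: detect moving objects from a stationary camera.
    # A rough OpenCV analogue of the MATLAB/Simulink demo, not its code.
    import cv2

    cap = cv2.VideoCapture("traffic.mp4")        # hypothetical input video
    bg = cv2.createBackgroundSubtractorMOG2()

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = bg.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, None)  # remove speckle
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 400:         # ignore tiny blobs
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) == 27:
            break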

   

Jetson TK1: The World’s First Mobile Supercomputer for Embedded Systems
Jetson TK1 is a developer kit based on the NVIDIA® Tegra® K1 mobile processor, with 192 CUDA® cores and over 300 GFLOPS of performance. It uses the same NVIDIA Kepler™ compute core as the fastest supercomputer in the United States and is fully CUDA-enabled, driving new breakthroughs in robotics, healthcare, security, defense, and automotive.

   

Lucas-Kanade Feature Tracking on Programmable DSPs
Complementing the PercepTonic presentation on the Lucas-Kanade Tracker, we will showcase a hands-on demonstration of real-time Lucas-Kanade tracking on the C6678 Keystone DSP by Texas Instruments. The processing pipeline features six different functions from TI's Vision Library VLIB, which are already performance-optimized for the C6x architecture. See how we can detect and track a few thousand Harris corner features in 1080p HD resolution images at 15 frames per second using just one (out of eight available) C66x cores.
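
On a PC, the same Harris-corner-plus-Lucas-Kanade combination is available through OpenCV. The minimal sketch below (assumed input file, not TI's VLIB code) detects Harris features and tracks them frame to frame with the pyramidal Lucas-Kanade tracker.

    # Sketch: Harris corner detection + pyramidal Lucas-Kanade tracking.
    # OpenCV equivalent of the pipeline; not TI's VLIB implementation.
    import cv2

    cap = cv2.VideoCapture("video.mp4")          # hypothetical 1080p input
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    # useHarrisDetector=True selects the Harris corner response for features
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=2000,
                                  qualityLevel=0.01, minDistance=7,
                                  useHarrisDetector=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        pts = nxt[status == 1].reshape(-1, 1, 2)  # keep successfully tracked
        prev_gray = gray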

Bug Off! How to Eliminate False Alarms with 3D Depth Perception
Security cameras traditionally use motion detection to alert us about intruders. The problem is that every so often the intruder turns out to be a bug! Flying insects and creeping spiders continue to plague unsuspecting smart cameras, which should be tracking objects at a distance but are confused by close-up nuisances. If only the camera could tell the distance to an object, it could filter out nearby clutter, effectively reducing false alarms. With Bug Off!, we demonstrate one possible embedded security solution.
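
The gist can be sketched in a few lines: keep a motion alarm only if the moving pixels are far enough away. The depth source, threshold, and function names below are assumed stand-ins, not the demo's actual parameters.

    # Sketch: suppress motion alarms for objects closer than a cutoff range.
    # depth_mm and MIN_RANGE_MM are assumed stand-ins, not the demo's values.
    import numpy as np

    MIN_RANGE_MM = 1500          # ignore anything nearer than 1.5 m

    def is_real_alarm(motion_mask: np.ndarray, depth_mm: np.ndarray) -> bool:
        """motion_mask: bool array of moving pixels; depth_mm: per-pixel range."""
        moving_depths = depth_mm[motion_mask]
        if moving_depths.size == 0:
            return False
        # A bug on the lens reads as very close range; require distant motion
        return np.median(moving_depths) >= MIN_RANGE_MM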

   

Combining Flexibility and Low Power in an Embedded Vision Subsystem
An embedded-mapping and refinement case study of a pedestrian detection application. Starting from a high-level functional description in OpenCV, we decompose and map the application onto a heterogeneous parallel platform consisting of a high-performance control processor and application-specific instruction-set processors (ASIPs). This application makes use of the HOG (Histogram of Oriented Gradients) algorithm. We review the computational requirements of the different kernels of the HOG algorithm and present options for mapping onto the control processor and ASIPs.
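
For orientation, OpenCV ships a reference HOG pedestrian detector (the Dalal-Triggs descriptor with a pre-trained linear SVM). The sketch below shows that host-side baseline with an assumed input file; it is not the ASIP mapping discussed in the demo.

    # Sketch: baseline HOG pedestrian detection with OpenCV's pre-trained SVM.
    # Host-side reference only; not the ASIP mapping shown in the demo.
    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    img = cv2.imread("street.png")               # hypothetical input image
    rects, weights = hog.detectMultiScale(img, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imwrite("pedestrians.png", img)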

   

Texas Instruments DLP® LightCrafter™ 4500: 3D Scanning & Machine Vision
Texas Instruments is demonstrating a new 3D point cloud generation software development kit (SDK). A programmable structured-light evaluation module (EVM) for 3D machine vision, the DLP LightCrafter 4500 is a flexible light-steering solution with high brightness and resolution for industrial, medical, and scientific applications. The EVM and SDK are paired to capture physical measurements of an object using high-speed patterns.

Surround View + Front Camera Analytics
The “Surround View + Front Camera Analytics” demo is an automotive display system that provides a 360-degree bird’s-eye view. The reference design includes four 1 Mp cameras (OmniVision OV10635), four Serializer+Power boards, and a daughterboard with six Deserializer+Power circuits. The daughterboard connects to the new TDA2x system-on-chip, which features the industry’s broadest range of IP blocks on one device: 2x ARM A15, 4x ARM M4, 2x C66x DSP, 4x EVE (vector processor engines designed for vision), 2x SGX544 (graphics cores), and an IVA-HD for video compression. It also supports FPD-Link, Gigabit Ethernet, PCIe, and many other connectivity options. The TDA2x has the flexibility and scalability to support multiple algorithms running concurrently on different cores using the Vision SDK framework.

   

Starry Night: Real-Time Object Recognition
A breakthrough computer vision Unity plugin that makes it easy to develop embedded and mobile apps using 3D sensors. A patent-pending, shape-based registration technique uses a priori information about the scene or object. This approach is highly tolerant of the noise and occlusions typically found in the real world. Further, the entire process can be fully automated, eliminating the manual post-processing otherwise needed to form complete, accurate 3D models suitable for many commercial and consumer applications.

   

Low-power, High-performance, Scalable Computer Vision Acceleration Processor IP
Showing algorithms such as face detection, feature detection, feature tracking, object detection, and skin detection running on the videantis v-MP4000HDX processor architecture. These algorithms underlie applications such as automotive safety systems, gesture recognition, gaze estimation, depth sensing, augmented reality, and face analysis. Demonstrating acceleration of the OpenCV library on an embedded vision processor, offloading the host CPU, increasing performance 100x, and reducing power consumption 1000x.

Low-delay H.264 10/12-bit Video Codec IP Core for Automotive Applications
The codec implements the H.264 High 4:4:4 Intra profile, has very low encoding and decoding delay, and supports 8-, 10-, or 12-bit samples for higher-dynamic-range video. It is available for videantis’ v-MP4000HDX processor architecture, which can be licensed from videantis for inclusion in SoC (system-on-chip) designs. On display is a live camera capturing images, encoding them, sending the compressed bitstream over an Ethernet AVB link, then decoding and presenting them on-screen.

   

Programmable 4K2K Camera Development Platform
Developed with Xylon and Northwest Logic, Xilinx Premier Alliance Members, and the MIPI Alliance, this demonstration showcases a programmable ISP and a low cost implementation of the MIPI D-PHY/CSI-2 Interface.

Accelerated Machine Vision Development Using Embedded Visual Applets
Developed with Silicon Software, this demonstration lets users witness an embedded real-time image processing implementation designed in minutes with the user-friendly VisualApplets graphical user interface (GUI). From a list of high-performance vision operators, users can simulate and target their design seamlessly onto a Zynq-7000 All Programmable SoC without writing a single line of code.

Additional Information

If you have other questions about the Embedded Vision Summit West, please contact us at summit@embedded-vision.com.
