
Embedded Vision Insights: May 1, 2019 Edition



LETTER FROM THE EDITOR

Dear Colleague,

I'm excited to share that we've selected our 2019 Vision Tank finalists! The Vision Tank is the Embedded Vision Summit's annual start-up competition, in which five computer vision start-up companies present their innovations to a panel of investors and industry experts. The winner receives a prize package worth more than $15,000, to say nothing of the bragging rights of being the most innovative visual AI start-up at the Summit.

Here are this year’s finalists:

  • Blink.AI Technologies utilizes machine learning to enhance sensor performance, extending the range of what cameras can see and detect in the real world. Building upon proprietary deep learning techniques, the company has developed robust low-light video inference deployed on efficient low-power devices for camera-embedded systems.
  • Entropix provides better vision for computer vision. Its patented technology pairs dual-sensor cameras with AI and deep learning software to extract extreme levels of detail from video and still images for ultra-accurate intelligent video analytics. This patented computational resolution reconstruction supercharges detection and classification in video analytics.
  • Robotic Materials equips robots with human-like manipulation skills. The company provides a sensing hand that combines tactile sensing, stereo vision, and high-performance embedded vision to mimic the tight integration of sensing, actuation, computation, and communication found in natural systems.
  • Strayos is a 3D visual AI platform that uses drone imagery to reduce cost and improve efficiency on job sites. Its software helps mining and quarry operators optimize drill-hole placement and explosive quantities, and improves site productivity and safety by providing highly accurate survey data analytics.
  • Vyrill is focused on helping brands understand and interpret the massive amount of user-generated video content (UGVC) on social media and the web. It offers a proprietary AI-powered platform for UGVC discovery, analytics, licensing and content marketing that helps brand marketers better understand their customers.

This is a truly outstanding finalist roster; the Vision Tank competition on Wednesday, May 22, will definitely be one to attend! I hope you will join me to watch it, and to learn from more than 90 other speakers, including keynotes from Google's Pete Warden and MIT's Ramesh Raskar, over two solid days of presentations and demos. (You can see the current lineup of talks here; the complete event schedule will be published on the website tomorrow, so keep an eye out for it!)

Two hands-on trainings, the updated Deep Learning for Computer Vision with TensorFlow 2.0 and the brand-new Computer Vision Applications in OpenCV, will run concurrently on the first day of the Summit, May 20. And the last day of the Summit, May 23, will feature the popular vision technology workshops from the Summit's Premier sponsor, Intel (both introductory and advanced), as well as from Khronos and Synopsys. Act quickly to secure your spot: register online today! I look forward to seeing you at the Summit!

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

OBJECT DETECTION AND IDENTIFICATION

Recognizing Novel Objects in Novel Surroundings with Single-shot Detectors (University of North Carolina at Chapel Hill)
The 2016 work on single-shot object detection (SSD) by Alexander Berg, Associate Professor at the University of North Carolina at Chapel Hill and CTO of Shopagon, and his group reduced the computational cost of accurate object-category detection to roughly that of image classification, enabling deployment of general object detection at scale. Subsequent extensions add segmentation and improve accuracy, but still require many real-world training examples for each object category. In some applications, it is desirable to detect new objects or categories for which many training examples are not readily available. Berg's presentation considers two approaches to this challenge. The first takes a small number of examples of objects out of context and composites them into scenes to construct training examples. The second learns to detect objects that are similar to a small number of target images provided at detection time, and does not require retraining the network for new targets.
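The cut-and-paste compositing idea behind the first approach lends itself to a quick sketch. Below is a minimal NumPy illustration, assuming an RGBA object crop and a fixed paste position; the composite helper and its alpha-blending details are illustrative assumptions, not code from the talk. It synthesizes one detector training example, returning the composited scene along with the bounding-box label implied by the paste position.

```python
import numpy as np

def composite(background, obj_rgba, top_left):
    """Alpha-blend an RGBA object crop onto a background image and
    return the composited image plus the implied bounding-box label.
    (Illustrative sketch; assumes the crop fits inside the background.)"""
    out = background.copy()
    h, w = obj_rgba.shape[:2]
    x, y = top_left
    alpha = obj_rgba[:, :, 3:4].astype(np.float32) / 255.0   # (h, w, 1)
    fg = obj_rgba[:, :, :3].astype(np.float32)
    roi = out[y:y + h, x:x + w].astype(np.float32)
    out[y:y + h, x:x + w] = (alpha * fg + (1.0 - alpha) * roi).astype(np.uint8)
    return out, (x, y, x + w, y + h)   # image, bbox for detector training

# Usage: a gray stand-in scene and a solid green 64x64 "object"
bg = np.full((480, 640, 3), 127, np.uint8)
obj = np.zeros((64, 64, 4), np.uint8)
obj[..., 1] = 255   # green
obj[..., 3] = 255   # fully opaque
image, bbox = composite(bg, obj, top_left=(200, 150))
```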

Introduction to LiDAR for Machine Perception (Deepen AI)
LiDAR sensors use pulsed laser light to construct 3D representations of objects and terrain. Interest in LiDAR has grown recently, for example for generating the high-definition maps required by autonomous vehicles and other robotic systems. In addition, many systems use LiDAR for 3D object detection, classification and tracking. In this presentation, Mohammad Musa, Founder and CEO of Deepen AI, explains how LiDAR sensors operate, compares various LiDAR implementations available today, and explores the pros and cons of LiDAR relative to other vision sensors.
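The underlying geometry is standard and worth a quick sketch. Assuming a rotating sensor whose returns arrive as spherical coordinates, i.e. a range (derived from the laser pulse's time of flight) plus azimuth and elevation angles, the minimal Python below converts one sweep into Cartesian XYZ points. The function name and the toy single-channel sweep are assumptions for illustration, not Deepen AI code.

```python
import numpy as np

def returns_to_points(ranges, azimuth, elevation):
    """Convert spherical LiDAR returns (meters, radians) to Cartesian
    XYZ points in the sensor frame. The range itself comes from time
    of flight: range = c * round_trip_time / 2."""
    x = ranges * np.cos(elevation) * np.cos(azimuth)
    y = ranges * np.cos(elevation) * np.sin(azimuth)
    z = ranges * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)

# Usage: one 360-degree sweep of a single laser channel aimed 5
# degrees downward, with (fictitious) 10 m returns at every step.
az = np.deg2rad(np.arange(0.0, 360.0, 1.0))
r = np.full_like(az, 10.0)
el = np.full_like(az, np.deg2rad(-5.0))
cloud = returns_to_points(r, az, el)   # (360, 3) point cloud
```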

VISION ALGORITHM FUNDAMENTALS

The Perspective Transform in Embedded Vision (Cadence)
This presentation from Shrinivas Gadkari, Design Engineering Director, and Aditya Joshi, Lead Design Engineer, both of Cadence, focuses on the perspective transform and its role in many state-of-the-art visual AI applications like video stabilization, high dynamic range (HDR) imaging and super-resolution imaging. The perspective transform accurately recomputes an image as seen from a different camera position (perspective). Following an introduction to this transform, Gadkari and Joshi explain its use in applications that involve joint processing of multiple frames taken from slightly different camera positions. They then discuss considerations for efficient implementation of this transform on an embedded processor. Gadkari and Joshi highlight the computational challenge posed by the division operation involved in a canonical implementation of the perspective transform. Finally, they describe a division-free modification of the perspective transform that achieves a 3X reduction in processing cycles for typical use cases.
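For readers new to the transform itself: a perspective transform maps a pixel (x, y) through a 3x3 homography matrix, and the division by the third homogeneous coordinate, performed once per pixel, is exactly the cost Gadkari and Joshi attack. The NumPy sketch below shows only the canonical, division-based form; their division-free modification is not reproduced here, and the toy matrix is an assumption for illustration.

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 perspective (homography) matrix H to an (N, 2)
    array of pixel coordinates. The final per-point division is the
    operation a division-free variant would eliminate."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous
    mapped = homog @ H.T                               # (N, 3)
    return mapped[:, :2] / mapped[:, 2:3]              # divide by w

# Usage: a toy homography resembling a slight camera shift
H = np.array([[1.0,   0.02,  5.0],
              [0.01,  1.0,  -3.0],
              [1e-4,  2e-4,  1.0]])
corners = np.array([[0.0, 0.0], [639.0, 0.0], [0.0, 479.0], [639.0, 479.0]])
print(warp_points(H, corners))   # where the frame corners land
```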

Implementing Image Pyramids Efficiently in Software (Polymorphic Technologies)
An image pyramid is a series of images, derived from a single original image, wherein each successive image is at a lower resolution than its predecessors. Image pyramids are widely used in computer vision, for example to enable detection of features at different scales. After a brief introduction to image pyramids and their uses in vision applications, Michael Stewart, Proprietor of Polymorphic Technologies, explores techniques for efficiently implementing image pyramids on various processor architectures, including CPUs and GPUs. He illustrates the use of fixed- and floating-point arithmetic, vectorization and parallelization to speed up image pyramid implementations. He also examines how memory caching approaches impact the performance of image pyramid code and discusses considerations for applications requiring real-time response or minimal latency. Also see Arm's reference manual, the Arm Guide to OpenCL Optimizing Pyramid.
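As a concrete baseline, here is a minimal Gaussian-pyramid sketch using OpenCV's pyrDown, which blurs and 2x-downsamples at each step. The synthetic test frame and level count are assumptions, and the optimized fixed-point, vectorized implementations Stewart discusses would replace this straightforward loop.

```python
import cv2
import numpy as np

def build_pyramid(image, levels):
    """Gaussian image pyramid: level 0 is the original image; each
    subsequent level is blurred and downsampled by 2 in each axis."""
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid

# Usage: a random stand-in frame instead of a real capture
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
for i, level in enumerate(build_pyramid(frame, levels=4)):
    print(i, level.shape[:2])   # (480, 640) -> (240, 320) -> ...
```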

UPCOMING INDUSTRY EVENTS

Embedded Vision Summit: May 20-23, 2019, Santa Clara, California

More Events


FEATURED NEWS

FRAMOS Presents Latest Sensor Modules, 3D Vision and Software Solutions at the 2019 Embedded Vision Summit

Intel Acquires Omnitek, Strengthens FPGA Video and Vision Offering

Vision Components at the Embedded Vision Summit: High-end OEM Cameras

Basler Launches Intelligent Lighting Solutions

Image Processing Over 10 Kilometers: New Baumer LX Cameras with Fiber Optics Interface

More News