
Embedded Vision Insights: November 13, 2018 Edition

LETTER FROM THE EDITOR

Dear Colleague,

The Embedded Vision Summit is the preeminent conference on practical computer vision, covering applications at the edge and in the cloud. It attracts a global audience of over one thousand product creators, entrepreneurs and business decision-makers who are creating and using computer vision technology. The Embedded Vision Summit has experienced exciting growth over the last few years, with 97% of 2018 Summit attendees reporting that they’d recommend the event to a colleague. The next Summit will take place May 20-23, 2019 in Santa Clara, California. The deadline to submit presentation proposals is December 1, 2018. For detailed proposal requirements and to submit a proposal, please visit https://www.embedded-vision.com/summit/call-proposals. For questions or more information, please email [email protected].

The Embedded Vision Alliance is performing research to better understand what types of technologies are needed by product developers who are incorporating computer vision in new systems and applications. To help guide suppliers in creating the technologies that will be most useful to you, please take a few minutes to fill out this brief survey. As our way of saying thanks for completing it, you’ll receive $50 off an Embedded Vision Summit 2019 2-Day Pass. Plus, you'll be entered into a drawing for one of several cool prizes. Please fill out the survey here.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

IMAGE CAPTURE FUNDAMENTALS

Introduction to Optics for Embedded Vision (Edmund Optics)
This talk from Jessica Gehlhar, Vision Solutions Engineer at Edmund Optics, provides an introduction to optics for embedded vision system and algorithm developers. Gehlhar begins by presenting fundamental imaging lens specifications and quality metrics. She explains key parameters and concepts such as field of view, f-number, working f-number, numerical aperture (NA), focal length, working distance, depth of field, depth of focus, resolution, MTF, distortion, keystoning and telecentricity, and the relationships among them. She introduces optical design basics and trade-offs, such as design types, aberrations, aspheres, pointing accuracy, sensor matching, color and protective coatings, filters, and temperature and environmental considerations, and their relation to sensor artifacts. Gehlhar also explores manufacturing considerations, including testing the optical components and imaging lenses in your product, as well as the industrial optics used for a wide range of manufacturing tests. Depending on requirements, a wide variety of tests and calibrations may be performed; these become especially important in designs that incorporate technologies such as multi-camera, 3D, color and NIR imaging.
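As a concrete illustration of how a few of these lens parameters relate, here is a minimal Python sketch (not from the talk) that computes horizontal field of view from sensor width and focal length, the f-number from focal length and aperture diameter, and the working f-number at a given magnification. The numeric values are illustrative assumptions, not figures from the presentation.

```python
# Minimal sketch of a few first-order lens relationships (thin-lens approximations).
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Angular field of view from sensor width and focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def f_number(focal_length_mm: float, aperture_diameter_mm: float) -> float:
    """f-number: focal length divided by entrance pupil (aperture) diameter."""
    return focal_length_mm / aperture_diameter_mm

def working_f_number(f_num: float, magnification: float) -> float:
    """Working f-number increases with magnification at short working distances."""
    return (1 + abs(magnification)) * f_num

if __name__ == "__main__":
    sensor_width = 5.76   # mm, assumed sensor width
    focal_length = 8.0    # mm, assumed lens focal length
    aperture = 2.9        # mm, assumed aperture diameter
    print(f"Horizontal FOV: {horizontal_fov_deg(sensor_width, focal_length):.1f} deg")
    n = f_number(focal_length, aperture)
    print(f"f-number: f/{n:.1f}")
    print(f"Working f-number at 0.1x magnification: {working_f_number(n, 0.1):.2f}")
```

Lens vendors publish these figures, but quick calculations like these help sanity-check whether a candidate lens and sensor pairing can meet a field-of-view or light-gathering requirement.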

Designing Vision Front Ends for Embedded Systems (Basler)
This presentation from Friedrich Dierks, Director of Product Marketing and Development for the Module Business at Basler, guides viewers through the process of specifying and selecting a vision front end for an embedded system. It covers topics such as selecting the right sensor, choosing a suitable optical setup and sensor interface, coupling to real-time processes, and (last but not least) deciding how and where to perform image pre-processing steps such as de-Bayering and white balancing. While there are many experts who understand how to process images once they are in memory, detailed knowledge of how to create those images in the first place is not as widely held. Many of these more “analog” topics are critical to the success of projects, in terms of meeting performance, cost and other design targets. The presentation gives an overview of the design flow for the vision front end, addressing common pitfalls and describing solutions for typical applications.
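To make the pre-processing step concrete, here is a minimal sketch of de-Bayering and gray-world white balancing using OpenCV and NumPy. It is not Basler's pipeline; the file names, the RGGB pattern assumption and the choice of COLOR_Bayer* constant are placeholders that depend on the actual sensor.

```python
# Minimal pre-processing sketch: de-Bayer a raw frame, then apply a simple
# gray-world white balance. Assumes the raw data fits in an 8-bit image.
import cv2
import numpy as np

def demosaic(raw: np.ndarray) -> np.ndarray:
    """De-Bayer a single-channel raw frame into a BGR image.

    The correct COLOR_Bayer* constant depends on the sensor's CFA layout;
    COLOR_BayerRG2BGR is used here as an assumed example.
    """
    return cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)

def gray_world_white_balance(bgr: np.ndarray) -> np.ndarray:
    """Gray-world white balance: scale each channel toward the common mean."""
    img = bgr.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    # "raw_frame.png" is a placeholder for however the raw sensor data arrives.
    raw = cv2.imread("raw_frame.png", cv2.IMREAD_GRAYSCALE)
    if raw is not None:
        cv2.imwrite("preprocessed.png", gray_world_white_balance(demosaic(raw)))
```

In a production front end these steps often run in the sensor, ISP or FPGA rather than on the host CPU, but the operations themselves are the same.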

UNDERSTANDING HUMAN BEHAVIOR

Understanding and Implementing Face Landmark Detection and Tracking (PathPartner Technology)
Face landmark detection is of profound interest in computer vision because it enables tasks ranging from facial expression recognition to understanding human behavior. Face landmark detection and tracking can be quite challenging, though, due to a wide range of face appearance variations caused by different head poses, lighting conditions, occlusions and other factors. In this tutorial, Jayachandra Dakala, Technical Architect at PathPartner Technology, introduces face landmarks and discusses some of the applications in which face landmark detection and tracking are used. He also highlights some of the key challenges that must be addressed in designing and implementing a robust face landmark detection and tracking algorithm. He surveys algorithmic approaches, highlighting their complexities and trade-offs, and concludes with a discussion of implementation approaches for a real-time embedded face landmark tracking system.
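As one concrete starting point (not necessarily the approach described in the talk), the sketch below uses dlib's off-the-shelf 68-point shape predictor to detect face landmarks in a single frame. The model file must be downloaded separately from dlib.net, and the input image name is a placeholder.

```python
# Minimal face landmark detection sketch using dlib's 68-point predictor.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# Standard dlib model file; assumed to have been downloaded to this path.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_landmarks(gray_frame):
    """Return a list of 68 (x, y) landmark points for each detected face."""
    faces = detector(gray_frame, 1)  # upsample once to help find smaller faces
    landmarks = []
    for face in faces:
        shape = predictor(gray_frame, face)
        landmarks.append([(shape.part(i).x, shape.part(i).y)
                          for i in range(shape.num_parts)])
    return landmarks

if __name__ == "__main__":
    image = cv2.imread("face.jpg")  # placeholder input image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for points in detect_landmarks(gray):
        for (x, y) in points:
            cv2.circle(image, (x, y), 2, (0, 255, 0), -1)
    cv2.imwrite("landmarks_out.jpg", image)
```

For a real-time embedded tracker, a full detector like this would typically run only periodically, with a lighter-weight tracking or landmark re-fitting step between frames to stay within the compute budget.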

Embedded AI for Smart Cities and Retail (Horizon Robotics)
Over the past ten years, online shopping has changed the way we do business. Now, with the development of AI technology, we are seeing the beginning of the so-called “new retail revolution,” in which nearly all China-based internet giants, such as Alibaba, Tencent and JD, are active. These companies want to use big data, internet and AI technologies to transform brick-and-mortar retail. Embedded AI will play a critical role in this trend: it is an essential ingredient for extracting analyzable digital information from physical shops and for connecting offline retail to online big data. Using embedded AI technology, cameras installed in shops can analyze customer behavior, as well as interactions among customers, goods and the store environment, in real time, improving the shopping experience and operational efficiency at the same time. In this talk, Yufeng Zhang, VP of Global Business at Horizon Robotics, analyzes recent developments in the new retail revolution in China and identifies key challenges that must be addressed for this trend to achieve its full potential.

UPCOMING INDUSTRY EVENTS

Consumer Electronics Show: January 8-11, 2019, Las Vegas, Nevada

Embedded Vision Summit: May 20-23, 2019, Santa Clara, California

More Events


FEATURED NEWS

FRAMOS Launches Embedded Vision Ecosystem of Sensor Modules and Adapters

Allied Vision Introduces Its First Alvium Cameras

ON Semiconductor Unveils IoT Solutions for Wireless Mesh Networking, Battery-less Edge Nodes and Artificial Intelligence

Baumer's LX Series 10 GigE Cameras with Liquid Lens Support and New Functions Deliver Flexible Focus Adjustment

Wave Computing Turbo Boosts “MIPS” with Licensable AI Subsystems, An Expanded Ecosystem & New Product Roadmap

More News

 


Contact

Address:
1646 N. California Blvd., Suite 360
Walnut Creek, CA 94596 USA

Phone: +1 (925) 954-1411