
Embedded Vision Insights: February 5, 2019 Edition



LETTER FROM THE EDITOR

Dear Colleague,

Are you in a start-up company that is developing a new product or service that incorporates or enables computer vision? (Or do you know of one that is?) We are still accepting submissions for the 2019 Vision Tank Start-up competition, which offers start-up companies the opportunity to present their new products and product ideas to attendees at the 2019 Embedded Vision Summit, the preeminent conference on practical computer vision, covering applications at the edge and in the cloud. But the deadline is next week, February 14, so don't delay! For more information, including detailed instructions and an online submission form, please see the Vision Tank page on the Alliance website. Good luck!

The Embedded Vision Summit attracts a global audience of more than one thousand product creators, entrepreneurs and business decision-makers who are developing and using computer vision technology. The Summit has experienced exciting growth over the last few years, with 97% of 2018 attendees reporting that they'd recommend the event to a colleague. The next Summit takes place May 20-23, 2019 in Santa Clara, California, and online registration is now open. The Summit is the place to learn about the latest applications, techniques, technologies and opportunities in visual AI and deep learning. In 2019, the event will feature new, deeper and more technical sessions, with more than 90 expert presenters in four conference tracks and 100+ demonstrations in the Technology Showcase. Register today using promotion code SUPEREBNL19 to save 25% with our limited-time Super Early Bird Discount rate. Alliance Member companies should also note that entries for this year's Vision Product of the Year Awards, to be presented at the Summit, are now being accepted. And if you're interested in having your company become a Member of the Embedded Vision Alliance, see here for more information!

On March 27, 2019 at 11 am ET (8 am PT), Jeff Bier, founder of the Embedded Vision Alliance, will deliver a free hour-long webinar, "Embedded Vision: The Four Key Trends Driving the Proliferation of Visual Perception," in partnership with Vision Systems Design. Bier will examine the four most important trends that are fueling the proliferation of vision applications and influencing the future of the industry. For more information, including online registration, please visit the event page. For more than 20 years, Vision Systems Design has provided in-depth technical and integration insights focused exclusively on the information needs of machine vision and imaging professionals. Sign up today for a free subscription to stay up to date.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

SENSOR FUSION APPLICATIONS

Multi-sensor Fusion for Robust Device Autonomy
While visible light image sensors may be the most common type of image sensor found in autonomous systems, they're not a panacea. Combining them with other sensor technologies can deliver more robust perception in applications such as semi- and fully-autonomous vehicles, industrial robots, drones and other autonomous devices. This article from the Alliance and Member companies FRAMOS, MathWorks and Synopsys discusses the available sensor options, the strengths and shortcomings of each, and the considerations involved in combining multiple sensor technologies within an autonomous device design.
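
To make the fusion idea concrete, here is a minimal sketch, not drawn from the article, of inverse-variance weighting, the optimal linear way to combine independent, noisy measurements of the same quantity; the camera and radar noise figures are illustrative assumptions:

```python
import numpy as np

def fuse(estimates, variances):
    """Combine independent measurements of the same quantity.

    Each measurement is weighted by the inverse of its variance, so the
    more reliable sensor dominates. This is the minimum-variance linear
    combination for independent measurement noise.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Hypothetical example: a camera pipeline sees an obstacle at 10.2 m
# (noisy in low light), while radar reports 9.8 m with tighter tolerance.
pos, var = fuse([10.2, 9.8], [0.5**2, 0.2**2])
print(f"fused distance: {pos:.2f} m (variance {var:.3f})")
```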

Visual-Inertial Tracking for AR and VR
This tutorial, presented by Timo Ahonen, former Director of Engineering for Computer Vision at Meta, covers the main current approaches to tracking the motion of a display for augmented and virtual reality. Ahonen describes methods for inside-out tracking that use cameras and inertial sensors such as accelerometers and gyroscopes, both for wearable head-mounted displays and for handheld devices such as cell phones.
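
For a feel of the inertial half of visual-inertial tracking, below is a minimal sketch of a complementary filter that blends gyroscope and accelerometer readings into a pitch estimate. It is an illustration under assumed sample values, not Ahonen's implementation; in a full visual-inertial system, camera-based tracking would additionally correct the slow drift that pure inertial integration accumulates:

```python
import math

def complementary_pitch(pitch_prev, gyro_rate, accel, dt, alpha=0.98):
    """Blend the integrated gyro rate (smooth but drifting) with the
    accelerometer's gravity direction (noisy but drift-free)."""
    ax, ay, az = accel
    accel_pitch = math.atan2(-ax, math.hypot(ay, az))  # pitch from gravity
    gyro_pitch = pitch_prev + gyro_rate * dt           # integrate angular rate
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

# Three hypothetical 100 Hz IMU samples of a device slowly tilting nose-down.
pitch = 0.0
for ax in (0.0, 0.17, 0.34):  # growing gravity component along x
    pitch = complementary_pitch(pitch, gyro_rate=-0.1,
                                accel=(ax, 0.0, 9.81), dt=0.01)
print(f"estimated pitch: {math.degrees(pitch):.2f} degrees")
```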

HETEROGENEOUS VISION PROCESSING

Combining an ISP and Vision Processor to Implement Computer Vision
The combination of an image signal processor (ISP) with one or more vision processors can deliver more robust computer vision capabilities than can be obtained with vision processors alone. However, an ISP optimized for vision may operate quite differently from one optimized to create images for human viewing. How can ISPs be optimized for computer vision, and for applications where images are used both for computer vision and for human viewing? This article from the Alliance and Member companies Imagination Technologies and Synopsys discusses the implementation options available for leveraging an ISP alongside vision processors to efficiently and effectively execute both traditional and deep learning-based computer vision algorithms.
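
As a toy illustration of why the two ISP tunings differ (a simplified sketch under assumed parameters, not the article's design), here is the same raw frame processed two ways: gamma-encoded for a human viewer versus kept linear and contrast-stretched for a vision algorithm:

```python
import numpy as np

def isp_for_display(raw):
    """Gamma-encode so brightness looks perceptually natural on a screen."""
    norm = raw.astype(np.float32) / 1023.0          # 10-bit sensor assumed
    return (255 * norm ** (1 / 2.2)).astype(np.uint8)

def isp_for_vision(raw):
    """Keep the sensor response linear and stretch it to full range; many
    vision algorithms prefer linear, well-spread data over pretty pixels."""
    norm = raw.astype(np.float32)
    norm = (norm - norm.min()) / (np.ptp(norm) + 1e-6)
    return (255 * norm).astype(np.uint8)

raw = np.random.randint(0, 1024, (480, 640), dtype=np.uint16)  # fake raw frame
display_frame = isp_for_display(raw)  # sent to the screen
vision_frame = isp_for_vision(raw)    # sent to the vision processor(s)
```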

Enabling Software Developers to Harness FPGA Compute Accelerators
FPGAs play a critical part in heterogeneous compute platforms as flexible, reprogrammable, multi-function accelerators, delivering custom-hardware performance with the programmability of software. The industry trend towards software-defined hardware challenges not just traditional hardware architectures (compute, memory and network resources) but also the programming model of heterogeneous compute platforms. Traditionally, the FPGA programming model has been narrowly tailored and hardware-centric. As FPGAs become part of heterogeneous compute platforms and users expect the hardware to be "software-defined," FPGAs must be accessible to software developers as well as hardware developers, which requires the FPGA programming model to evolve dramatically. This presentation from Bernhard Friebe, Senior Director of Marketing for the Programmable Solutions Group at Intel, outlines a software-centric programming model that enables software developers to harness FPGAs through a comprehensive solutions stack: FPGA-optimized libraries, compilers, tools, frameworks, SDK integration and an FPGA-enabled ecosystem. Friebe closes with a real-world example of machine learning inference acceleration on FPGAs.
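
As a generic illustration of such a software-centric model (an assumption-level sketch using standard OpenCL via the pyopencl package, not Intel's stack or its machine learning example), the host code below is device-agnostic: with an OpenCL-capable FPGA runtime such as Intel's, the same pattern can dispatch to an FPGA board, although production FPGA flows typically load an offline-compiled kernel binary rather than building from source at run time:

```python
import numpy as np
import pyopencl as cl

# Toy vector-add kernel standing in for a real accelerated workload.
KERNEL = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

ctx = cl.create_some_context()          # picks any available OpenCL device
queue = cl.CommandQueue(ctx)
prog = cl.Program(ctx, KERNEL).build()  # FPGA flows would load a precompiled binary

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)  # enqueue on the device
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)    # read the result back to the host
assert np.allclose(out, a + b)
```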

UPCOMING INDUSTRY EVENTS

SPIE Photonics West: February 5-7, 2019, San Francisco, California

Bay Area Computer Vision and Deep Learning Meetup Group: February 13, 2019, Santa Clara, California

Embedded World: February 26-28, 2019, Nuremberg, Germany

Vision Systems Design Webinar – Embedded Vision: The Four Key Trends Driving the Proliferation of Visual Perception: March 27, 2019, 11 am ET

Embedded Vision Summit: May 20-23, 2019, Santa Clara, California

More Events


FEATURED NEWS

Hailo Expands Series A Round to $21M and Launches Hailo-8 Fast Track Program for Select Customers

Upcoming Silicon Valley Meetup Presentations Discuss Visual Navigation in Robots

FLIR Launches Second Generation Thermal Camera for Self-Driving Cars and New Thermal Handheld for Automotive Repair

Ambarella Introduces CV25 SoC with CVflow Computer Vision to Enable the Next Generation of Mainstream Intelligent Cameras

Baumer Wins the Inspect Award 2019 With Its CX.I Cameras

More News
