
Embedded Vision Insights: January 18, 2017 Edition

LETTER FROM THE EDITOR

Dear Colleague,

We invite you to join us at the Embedded Vision Summit, the only event dedicated entirely to the creation of products and services that see. We’ve curated three immersive days of learning, discovery and networking. We’ll have more than 50 computer vision industry leaders, including:

  • Jeff Bier, President, BDTI and Founder, Embedded Vision Alliance
  • Jitendra Malik, Professor and Chair, Electrical Engineering and Computer Science, U.C. Berkeley
  • Pete Warden, Research Engineer, Google
  • Chris Osterwood, Chief Technical Officer, Carnegie Robotics

Join us May 1-3 at the Santa Clara Convention Center to take a deep dive into the embedded vision ecosystem. Your Super Early Bird Discount expires February 1. Register now using discount code nlevi0118 before it’s too late!

This year the Summit will feature four distinct learning
tracks: Enabling Technologies,
Business Insights, Technical Insights, and a new Fundamentals track loaded with
expert tutorials to help you get up to speed in practical computer
vision.

We’ve told you how great the Embedded Vision Summit is, but don’t take our word for it. Watch our short video to hear past attendees explain in their own words why this is the must-attend computer vision event of the year for anyone developing vision-enabled products.

Register now using Super Early Bird Discount code nlevi0118 and save! We look forward to seeing you there.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

DEEP LEARNING FOR VISION

Deep-learning-based Visual Perception in Mobile and Embedded Devices: Opportunities and Challenges
Qualcomm
Deep learning approaches have proven
extremely effective for a range of perceptual tasks, including visual
perception. Incorporating deep-learning-based visual perception into
devices such as robots, automobiles and smartphones enables these
machines to become much more intelligent and intuitive. And, while some
applications can rely on the enormous compute power available in the
cloud, many systems require local intelligence for various reasons. In
these applications, the enormous computing requirements of
deep-learning-based vision create unique challenges related to power
and efficiency. In this talk, Jeff Gehlhaar, Vice President of
Technology, Corporate Research and Development at Qualcomm, explores
applications and use cases where on-device deep-learning-based visual
perception provides great benefits. He dives into the challenges
that these applications face, and explores techniques to overcome them.
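One technique commonly used for exactly this problem (named here purely as an illustration; the talk’s own methods aren’t detailed above) is quantization: storing and computing with 8-bit integers instead of 32-bit floats, cutting weight memory traffic roughly fourfold at a small cost in accuracy. A minimal Python sketch of linear weight quantization:

    import numpy as np

    def quantize_weights(w, num_bits=8):
        # Map float32 weights onto the signed integer range via a
        # linear (affine) transform: w is approximately scale * (q - zero_point).
        qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
        scale = (w.max() - w.min()) / (qmax - qmin)
        zero_point = int(qmin - round(w.min() / scale))
        q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.int8)
        return q, scale, zero_point

    def dequantize(q, scale, zero_point):
        # Recover approximate float weights to measure quantization error.
        return scale * (q.astype(np.float32) - zero_point)

    w = np.random.randn(64, 64).astype(np.float32)
    q, scale, zp = quantize_weights(w)
    print("max abs error:", np.abs(w - dequantize(q, scale, zp)).max())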


Trade-offs in Implementing Deep Neural Networks on FPGAs
Xilinx
Video and images are a key part of
Internet traffic—think of all the data generated by social networking
sites such as Facebook and Instagram—and this trend continues to grow.
Extracting usable information from video and images is thus a growing
requirement in the data center. For example, object and face
recognition are valuable for a wide range of uses, from social
applications to security applications. Convolutional neural networks
(CNNs) are currently the most popular form of deep neural networks
used in data centers for such applications. 3D convolutions are a core
part of CNNs. In this talk, Nagesh Gupta, CEO and Founder of Auviz
Systems (now owned by Xilinx), presents alternative implementations of
3D convolutions on FPGAs, and discusses trade-offs among them.
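For readers new to the terminology: a CNN filter spans height, width and input channels, which is why these convolutions are described as 3D. A minimal Python sketch of the core computation (purely illustrative; it does not represent the FPGA implementations compared in the talk):

    import numpy as np

    def conv3d_layer(x, w, stride=1):
        # x: input feature map, shape (H, W, C_in)
        # w: filter bank, shape (K, K, C_in, C_out)
        # Each output value is a dot product over a K x K x C_in volume.
        H, W, C_in = x.shape
        K, _, _, C_out = w.shape
        H_out = (H - K) // stride + 1
        W_out = (W - K) // stride + 1
        y = np.zeros((H_out, W_out, C_out))
        for i in range(H_out):
            for j in range(W_out):
                patch = x[i*stride:i*stride+K, j*stride:j*stride+K, :]
                for c in range(C_out):
                    y[i, j, c] = np.sum(patch * w[:, :, :, c])
        return y

    x = np.random.rand(32, 32, 3)      # e.g., a 32x32 RGB image
    w = np.random.rand(3, 3, 3, 16)    # sixteen 3x3x3 filters
    print(conv3d_layer(x, w).shape)    # (30, 30, 16)

The deeply nested loops are what make these layers compute-bound, and the different ways of unrolling, tiling and parallelizing them on an FPGA are precisely where the trade-offs arise.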

ENABLING MACHINES TO UNDERSTAND PEOPLE

Facial Analysis Delivers Diverse Vision Processing Capabilities
Computers can learn a lot about a person
from their face – even without uniquely identifying that person.
Assessments of age range, gender, ethnicity, gaze direction, attention
span, emotional state and other attributes are all now possible at
real-time speeds, via advanced algorithms running on cost-effective
hardware. This technical article from Alliance member companies
FotoNation, NXP and Synopsys, in partnership with Tractica, provides an
overview of the facial analysis market, including sizes of various
market segments and types of applications. It then discusses the facial
analysis capabilities
possible via vision processing, along with the means of implementing
these functions, including the use of deep learning. More


Using Vision to Create Smarter Consumer Devices with Improved Privacy
ARM
Machines are valuable primarily due to
their ability to interact with people and with the physical world. But
today, most of the consumer devices in the “Internet of Things” know
very little about their environment. Of all the available sensory
inputs, visual imagery is potentially the richest source. However,
extracting information from video in real time remains challenging and
the capture of video raises privacy questions. Yet it is now
becoming practical to imbue many of these IoT devices with visual
intelligence, enabling them to gather not only more information, but
better insights into users and their environments. Armed with these
insights, IoT devices can become more intelligent, more responsive and
easier to use. In this talk, Michael Tusch, Founder and CEO of Apical
(now owned by ARM), explores what types of visual intelligence are
currently feasible in consumer devices, and how this will evolve in the
near future.
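One design pattern that addresses the privacy concern (our own illustration, not necessarily Apical’s approach) is to analyze imagery entirely on the device and transmit only derived metadata, never raw video. A minimal Python sketch:

    import numpy as np

    def scene_changed(prev, curr, thresh=30.0):
        # Stand-in for on-device visual intelligence: a crude
        # frame-differencing detector. A real product might run a
        # person or gesture detector here instead.
        diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
        return float(diff.mean()) > thresh

    def process_stream(frames):
        # Raw frames stay on the device; only event metadata is
        # returned for upload, sidestepping video-capture privacy issues.
        events, prev = [], None
        for t, frame in enumerate(frames):
            if prev is not None and scene_changed(prev, frame):
                events.append({"t": t, "event": "scene_change"})
            prev = frame
        return events

    # Three static grayscale frames, then an abrupt change at t=3
    frames = [np.zeros((120, 160), dtype=np.uint8)] * 3
    frames.append(np.full((120, 160), 200, dtype=np.uint8))
    print(process_stream(frames))   # [{'t': 3, 'event': 'scene_change'}]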

UPCOMING INDUSTRY EVENTS

Webinar – Recent Developments in Embedded Vision: Algorithms, Processors, Tools and Applications: January 25, 2017, 1 pm ET

Cadence Embedded Neural Network Summit – Deep Learning: The New Moore’s Law: February 1, 2017, San Jose, California

Embedded World Conference: March 14-16, 2017, Messezentrum Nuremberg, Germany

Embedded Vision Summit: May 1-3, 2017, Santa Clara, California

More Events

FEATURED NEWS

Qualcomm Snapdragon 835 Mobile Platform to Power Next-Generation Immersive Experiences

NXP Demonstrates Intelligent Traffic Control and Communication Solution

Thundersoft Launches TurboX to Drive Faster Design and Time-to-Market of Intelligent IoT Devices

Xilinx Demonstrates Solutions for ADAS and Automated Driving at CAR-ELE Japan 2017

Basler Presenting Camera Modules for Embedded Vision at Embedded World 2017

More News

 
