
Embedded Vision Insights: September 18, 2018 Edition






LETTER FROM THE EDITOR

Dear Colleague,

Lattice Semiconductor will deliver the free webinar “Architecting Always-On, Context-Aware, On-Device AI Using Flexible Low-power FPGAs” on October 30, 2018 at 9 am Pacific Time, in partnership with the Embedded Vision Alliance. The webinar will be presented by Deepak Boppana, the company’s Senior Director of Marketing, and Gordon Hands, Marketing Director for IP and Solutions (and a highly rated Embedded Vision Summit presenter). The presenters will draw on the company’s experience in developing low-cost, low-power, always-on, vision-based AI solutions to illustrate deep learning inferencing design tradeoffs and explore optimizations across edge processing implementations ranging from 1 mW to 1 W and $1 to $10. For more information, and to register, see the event page.

The next session of the Embedded Vision Alliance’s in-person, hands-on technical training class series, Deep Learning for Computer Vision with TensorFlow, takes place in two weeks in San Jose, California. These classes give you the critical knowledge you need to develop deep learning computer vision applications with TensorFlow. The one-day class is scheduled for October 4, 2018. Details, including online registration, can be found here.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

HARDWARE AND SOFTWARE DEVELOPMENT FOR RESOURCE-CONSTRAINED SYSTEMS

Computer Vision for Augmented Reality in Embedded Designs
Augmented reality (AR) and related
technologies are becoming increasingly popular and
prevalent, led by their adoption in smartphones, tablets and other
mobile computing and communications devices. While developers of more
deeply embedded platforms are also motivated to incorporate AR
capabilities in their products, the comparative scarcity of processing,
memory, storage, and networking resources is challenging, as are cost,
form factor, power consumption and other constraints. However, by
making effective use of available compute capabilities, along with
APIs, middleware and other software toolsets, implementing robust AR in
resource-constrained designs is increasingly feasible.

Building Efficient CNN Models for Mobile and Embedded Applications
Recent advances in efficient deep learning
models have led to many potential applications in mobile and embedded
devices. In this talk, Peter Vajda, Research Scientist at Facebook,
discusses state-of-the-art model architectures, and introduces
Facebook’s work on real-time style transfer and pose estimation on
mobile phones.
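Many of these efficient mobile architectures are built around depthwise separable convolutions, which factor a standard convolution into a cheap per-channel spatial filter followed by a 1x1 pointwise channel mixer. The sketch below is a generic, illustrative Keras example of such a block stacked into a tiny backbone; it is not Facebook’s production model, and the layer counts and filter sizes are arbitrary assumptions.

import tensorflow as tf

def depthwise_separable_block(x, filters, stride=1):
    # Per-channel 3x3 depthwise convolution followed by a 1x1 pointwise
    # convolution: far fewer multiply-accumulates and parameters than a
    # dense 3x3 convolution producing the same output width.
    x = tf.keras.layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.ReLU()(x)
    x = tf.keras.layers.Conv2D(filters, 1, padding="same", use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)

# Tiny backbone for a 128x128 RGB input (hypothetical sizes).
inputs = tf.keras.Input(shape=(128, 128, 3))
x = tf.keras.layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inputs)
for filters, stride in [(32, 2), (64, 2), (128, 2)]:
    x = depthwise_separable_block(x, filters, stride)
features = tf.keras.layers.GlobalAveragePooling2D()(x)
model = tf.keras.Model(inputs, features)
model.summary()  # compare the parameter count against an all-Conv2D equivalent

Replacing the depthwise block with a full 3x3 Conv2D in the loop above and comparing the two summaries makes the parameter and compute savings easy to see.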

ADVANCED IMAGE SENSING AND PROCESSING TECHNIQUES

Generative Sensing: Reliable Recognition from Unreliable Sensor Data
While deep neural networks (DNNs) perform
on par with – or better than – humans on pristine high-resolution
images, DNN performance is significantly worse than human performance
on images with quality degradations, which are frequently encountered
in real-world applications. This talk from Lina Karam, Professor and
Computer Engineering Director at Arizona State University, introduces a
new generative sensing framework which integrates low-end sensors with
computational intelligence to attain recognition accuracy on par with
that attained using high-end sensors. This generative sensing framework
aims to transform low-quality sensor data into higher quality data in
terms of classification accuracy. In contrast with existing methods for
image generation, this framework is based on discriminative models and
aims to maximize recognition accuracy rather than a similarity
measure. This is achieved through the introduction of selective feature
regeneration in a deep neural network.
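One way to picture the approach, as a rough and purely illustrative sketch (the toy network, the choice of split point and the training setup below are assumptions, not the authors’ implementation), is to insert a small trainable regeneration block between frozen stages of a pretrained classifier and train only that block on degraded inputs with an ordinary classification loss, so the objective is recognition accuracy rather than pixel-level similarity to a clean image.

import tensorflow as tf

def make_stage(filters, name):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
    ], name=name)

# Toy classifier split into an early and a late stage; pretend both were
# pretrained on clean, high-quality images and are now frozen.
early = make_stage(32, "early")
late = tf.keras.Sequential([
    make_stage(64, "mid"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
], name="late")
early.trainable = False
late.trainable = False

# Trainable residual "regeneration" block inserted between the frozen stages.
regen = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(32, 3, padding="same"),
], name="regen")

inputs = tf.keras.Input(shape=(64, 64, 3))
feats = early(inputs)
restored = tf.keras.layers.Add()([feats, regen(feats)])  # correct degraded features in place
outputs = late(restored)
model = tf.keras.Model(inputs, outputs)

# Only `regen` has trainable weights; training on blurred or noisy images with
# a classification loss pushes the restored features toward whatever the frozen
# classifier needs, not toward the clean image itself.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(degraded_images, labels, ...)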

Neuromorphic Event-based Computer Vision: Sensors, Theory and Applications
In this presentation, Ryad B. Benosman,
Professor at the University of Pittsburgh Medical Center, Carnegie
Mellon University and Sorbonne Université, introduces neuromorphic,
event-based approaches for image sensing and processing.
State-of-the-art image sensors suffer from severe limitations imposed
by their very principle of operation. These sensors acquire visual
information as a series of “snapshots” recorded at discrete points in
time, hence time-quantized at a predetermined frame rate, resulting in
limited temporal resolution, low dynamic range and a high degree of
redundancy in the acquired data. Nature suggests a different approach:
Biological vision systems are driven and controlled by events happening
within the scene in view, and not – like conventional image sensors –
by artificially created timing and control signals that have no
relation to the source of the visual information. Translating the
frameless paradigm of biological vision to artificial imaging systems
implies that control over the acquisition of visual information is no
longer imposed externally on an array of pixels but rather the decision
making is transferred to each individual pixel, which handles its own
information individually.

Benosman introduces the fundamentals
underlying such bio-inspired, event-based image sensing and processing
approaches, and explores their strengths and weaknesses. He shows that
bio-inspired vision systems have the potential to outperform
conventional, frame-based vision acquisition and processing systems and
to establish new benchmarks in terms of data compression, dynamic
range, temporal resolution and power efficiency in real-time applications such as 3D vision, object tracking, motor control and visual feedback loops.
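To make the contrast with frame-based acquisition concrete, here is a small, purely illustrative approximation (not Benosman’s sensor model): a DVS-style simulator that converts a sequence of conventional intensity frames into per-pixel events, emitting an event only when a pixel’s log-intensity has changed by more than a threshold since that pixel last fired.

import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2, eps=1e-6):
    # Each pixel keeps its own log-intensity reference and emits an
    # (x, y, t, polarity) event when the current log-intensity differs
    # from that reference by at least `threshold`.
    log_ref = np.log(frames[0].astype(np.float64) + eps)
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_now = np.log(frame.astype(np.float64) + eps)
        diff = log_now - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((int(x), int(y), t, polarity))
            log_ref[y, x] += polarity * threshold  # advance only the pixels that fired
    return events

# Example: two noisy 4x4 frames in which a single pixel brightens sharply;
# only that pixel produces an event, while the static pixels stay silent.
rng = np.random.default_rng(0)
f0 = np.full((4, 4), 100.0) + rng.normal(0.0, 0.5, (4, 4))
f1 = f0.copy()
f1[2, 1] *= 2.0
print(frames_to_events([f0, f1], [0.000, 0.033]))

A static scene therefore produces no data at all, which is the source of the compression, dynamic-range and temporal-resolution advantages described above.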

UPCOMING INDUSTRY EVENTS

Deep Learning for Computer Vision with TensorFlow Training Class: October 4, 2018, San Jose, California

Lattice Semiconductor Webinar – Architecting Always-On, Context-Aware, On-Device AI Using Flexible Low-power FPGAs: October 30, 2018, 9:00 am PT

Embedded Vision Summit: May 20-23, 2019, Santa Clara, California

More Events


FEATURED NEWS

BrainChip Announces the Akida Architecture, a Neuromorphic System-on-Chip

Imagination and Chips&Media Deliver Integrated GPU and Video Codec IP with Advantages of System Level Compression

NVIDIA Jetson AGX Xavier Developer Kit Now Available

Basler Showcases Innovations at Upcoming Industry Conference

OmniVision Launches 1080p High Definition Medical CMOS Image Sensor with Excellent Image Quality for Endoscopic Surgical Procedures

More News

 
