
Embedded Vision Insights: February 28, 2017 Edition



LETTER FROM THE EDITOR

Dear Colleague,

The Embedded Vision Summit, the only event dedicated to the creation of products and systems that see, is pleased to announce four new speakers for our Technical Insights Track. These industry luminaries will share their experiences and lessons learned in creating vision-based products for the demanding consumer market.

Our Technical Insights Track features more than 30 educational talks covering topics ranging from vision-based emotion analysis to 360-degree video systems. This track will accelerate your learning curve and help you uncover practical techniques in computer vision, with special emphasis on deep learning, 3D perception and low-power implementation.

We invite you to join us for three days of robust learning featuring four tracks and more than 50 top industry speakers, plus Vision Technology Workshops and our Technology Showcase, where you’ll see the hottest new computer-vision-enabling technologies. Early Bird pricing ends soon; use promotional code nlevi0228 to secure your spot and save 15%. See you at the Summit!

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

IMAGE CAPTURE AND OPTIMIZATION FOR COMPUTER VISION

Image Quality Analysis, Enhancement and Optimization Techniques for Computer Vision
This technical article co-authored by Alliance member companies Algolux and Allied Vision explains the differences between images intended for human viewing and for computer analysis, and how these differences factor into the hardware and software design of a camera intended for computer vision applications. It also discusses methods for assessing and optimizing computer vision image quality. More
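As a rough illustration of the kind of quality assessment the article discusses (this sketch is ours, not drawn from the article itself), the Python snippet below computes two simple no-reference metrics sometimes used as quick proxies when tuning a camera pipeline for computer vision: variance of the Laplacian for sharpness and RMS contrast. It assumes OpenCV and NumPy are installed; "frame.png" is just a placeholder filename.

    import cv2
    import numpy as np

    def sharpness_score(gray):
        # Variance of the Laplacian: a common no-reference sharpness proxy.
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def contrast_score(gray):
        # RMS contrast: standard deviation of pixel intensities.
        return float(np.std(gray))

    # "frame.png" is a placeholder; substitute any captured frame.
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    print("sharpness:", sharpness_score(img), "contrast:", contrast_score(img))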

Image Sensors for Vision: Foundations and Trends
Choosing the right sensor, lens and system configuration is crucial to setting you off in the right direction for your vision application. In this presentation, Robin Jenkin, Director of Analytics, Algorithm and Module Development at ON Semiconductor, examines fundamental considerations of image sensors that are important for embedded vision, such as pixel size, frame rate, rolling shutter vs. global shutter, back side illumination vs. front side illumination, color filter array choice and lighting, and quantum efficiency vs. crosstalk. He also explains chief ray angle, phase detect auto focus pixels, dynamic range, electron multiplied charge coupled devices, synchronization and noise, and concludes with observations on sensor trends.
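To make one of those parameters concrete, here is a minimal sketch (our illustration, not material from the talk) of how dynamic range is commonly estimated from a sensor's full-well capacity and read noise; the example values are hypothetical.

    import math

    def dynamic_range_db(full_well_electrons, read_noise_electrons):
        # Dynamic range in dB: ratio of the largest to the smallest resolvable signal.
        return 20.0 * math.log10(full_well_electrons / read_noise_electrons)

    # Hypothetical sensor: 10,000 e- full well, 2.5 e- read noise -> roughly 72 dB.
    print(round(dynamic_range_db(10000, 2.5), 1))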

IMAGE STABILIZATION HARDWARE AND SOFTWARE

Video Stabilization Using Computer Vision: Techniques for Embedded Devices
Today, video streams are increasingly captured by small, moving devices, including action cams, smartphones and drones. These devices enable users to capture video conveniently in a wide range of situations, but they also pose significant video quality challenges because they are prone to vibration and shaking. Fortunately, undesired motion can be removed by processing a video stream as it is captured. In this presentation, Ben Weiss, Computer Vision Developer at CEVA, surveys video stabilization techniques suitable for embedded platforms, assessing their strengths and weaknesses. He focuses on computer-vision-based video stabilization approaches and explores trade-offs between factors such as video quality and computational requirements.
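For readers who want a feel for the computer-vision-based approach, the sketch below outlines one widely used technique (a generic illustration under our own assumptions, not CEVA's implementation): track features between consecutive frames, estimate per-frame motion, smooth the accumulated camera trajectory and warp each frame toward the smoothed path. It assumes OpenCV 3.2 or later with NumPy, and that "frames" is a list of BGR images.

    import cv2
    import numpy as np

    def stabilize(frames, radius=15):
        # Estimate per-frame translation and rotation between consecutive frames.
        transforms = []
        prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
        for frame in frames[1:]:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                          qualityLevel=0.01, minDistance=30)
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good_old = pts[status.flatten() == 1]
            good_new = new_pts[status.flatten() == 1]
            m, _ = cv2.estimateAffinePartial2D(good_old, good_new)
            transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
            prev_gray = gray

        # Smooth the accumulated camera path with a moving average (low-pass filter).
        trajectory = np.cumsum(transforms, axis=0)
        kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
        smoothed = np.column_stack([np.convolve(trajectory[:, i], kernel, mode="same")
                                    for i in range(3)])
        corrections = smoothed - trajectory

        # Re-warp each frame so that only the residual jitter is cancelled.
        h, w = frames[0].shape[:2]
        out = [frames[0]]
        for (dx, dy, da), (cx, cy, ca), frame in zip(transforms, corrections, frames[1:]):
            a = da + ca
            m = np.array([[np.cos(a), -np.sin(a), dx + cx],
                          [np.sin(a),  np.cos(a), dy + cy]])
            out.append(cv2.warpAffine(frame, m, (w, h)))
        return out

The smoothing radius controls the quality-versus-latency trade-off mentioned above: a larger radius gives a steadier result but requires buffering more frames, which matters on memory-constrained embedded platforms.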

Digital Gimbal: Rock-steady Video Stabilization without Extra Weight
This presentation from Petronel Bigioi, Senior Vice President of Engineering and General Manager at FotoNation, describes new hardware solutions that can process video at up to 60 fps, delivering rock-steady video that is practically immune to platform motion and vibration. These solutions can be readily customized for your application to distinguish between large, deliberate motions of the imaging platform and the smaller vibrations and oscillations that ruin video quality and disrupt post-processing operations. A mechanical gimbal is no longer a necessary part of your design; the age of the digital gimbal is upon us, and FotoNation is making it possible. Whether you are designing drones, wearables or a larger vehicular platform, learn how state-of-the-art, energy-efficient, GPU-free chipset solutions proven in the smartphone and action-camera markets can empower your video subsystems with real-time, high-frame-rate, rock-steady video while also correcting for the distortions created by wide-angle lenses.
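As a simplified, hypothetical illustration of that distinction (not FotoNation's actual algorithm), a low-pass filter over the camera trajectory can separate slow, deliberate motion such as a pan from high-frequency shake, so that a stabilizer cancels only the latter.

    import numpy as np

    def split_motion(trajectory, alpha=0.9):
        # Exponential moving average: the slow component tracks deliberate motion;
        # the residual is treated as vibration to be cancelled.
        deliberate = np.empty_like(trajectory, dtype=float)
        deliberate[0] = trajectory[0]
        for i in range(1, len(trajectory)):
            deliberate[i] = alpha * deliberate[i - 1] + (1 - alpha) * trajectory[i]
        return deliberate, trajectory - deliberate

    # Hypothetical 1-D trajectory: a slow pan plus high-frequency shake.
    t = np.arange(300, dtype=float)
    pan, shake = split_motion(0.5 * t + 3.0 * np.sin(2.0 * t))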

UPCOMING INDUSTRY EVENTS

Bay Area Computer Vision and Deep Learning Meetup Group: Jeff Bier, "Recent Developments in Embedded Vision: Algorithms, Processors, Tools and Applications," March 7, 2017, Mountain View, California

Embedded World Conference: March 14-16, 2017, Messezentrum Nuremberg, Germany

Silicon Catalyst: Jeff Bier, "When Every Device Can See: AI & Embedded Vision in Products," March 22, 2017, Mountain View, California

Embedded Vision Summit: May 1-3, 2017, Santa Clara, California

Sensors Expo & Conference: June 27-29, 2017, San Jose, California

More Events

FEATURED COMMUNITY DISCUSSIONS

Tokyo-based Machine Vision and Robotics Software Opportunities

Multiple Positions at Shaper Tools in San Francisco

Intel Internet of Things Group: Computer Vision and Deep Learning Strategy and Planning

More Community Discussions

FEATURED NEWS

Huawei Launches New HUAWEI P10 and P10+ with Photo Enhancement

Occipital and Inuitive Present New Integrated Solution for AR/VR/MR Headsets and Robotics

More News

 

