
Embedded Vision Insights: September 13, 2016 Edition



LETTER FROM THE EDITOR

Dear Colleague,

Deep Learning for Vision and Caffe Tutorial

Next Thursday, September 22, from 9 am to 5 pm, the primary Caffe developers from U.C. Berkeley's Vision and Learning Center will present "Deep Learning for Vision Using CNNs and Caffe," a full-day, in-depth technical tutorial focused on convolutional neural networks (CNNs) for vision and the Caffe framework for deep learning. Organized by the Embedded Vision Alliance and BDTI, the tutorial will take place at the Hyatt Regency in Cambridge, Massachusetts. It takes participants from an introduction to CNNs, through the theory behind them, to their actual implementation, and includes multiple hands-on labs using Caffe. For more information, including online registration, please see the event page.
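For readers new to the framework, here is a minimal sketch of the kind of inference workflow the hands-on labs cover, using Caffe's Python interface. The file names ("deploy.prototxt", "weights.caffemodel") are placeholders for illustration, not tutorial materials:

    import numpy as np
    import caffe

    caffe.set_mode_cpu()  # GPU mode is available via caffe.set_mode_gpu()

    # Load a trained CNN: the architecture (prototxt) plus learned weights.
    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    # Fill the input blob with a dummy image and run a forward pass.
    net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
    output = net.forward()

    # The output blob ('prob' for typical classification nets) holds class scores.
    print({name: blob.shape for name, blob in output.items()})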

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

"Bringing Computer Vision to the Consumer," a Keynote Presentation from DysonDyson
While vision has been a research priority for decades, the results have often remained out of reach of the consumer. Huge strides have been made, but the final, and perhaps toughest, hurdle is how to integrate vision into real-world products. It's a long road from concept to finished machine, and to succeed, companies need clear objectives, a robust test plan, and the ability to adapt when those plans fail. The Dyson 360 Eye robot vacuum cleaner uses computer vision as its primary localization technology. Ten years in the making, it was taken from bleeding-edge academic research to a robust, reliable, and manufacturable solution by Mike Aldred, Electronics Lead at Dyson, and his team. Aldred's Embedded Vision Summit keynote talk charts some of the highs and lows of the project, the challenges of bridging academia and business, and how a diverse team can take an idea from the lab into real homes.

"Vision-as-a-Service: Democratization of Vision for Consumers and Businesses," a Presentation from TendTend
Hundreds of millions of video cameras are installed around the world, in businesses, homes, and public spaces, but most of them provide limited insights. Installing new, more intelligent cameras requires massive deployments with long time-to-market cycles. Computer vision enables us to extract meaning from video streams generated by existing cameras, creating value for consumers, businesses, and communities in the form of improved safety, quality, security, and health. But how can we bring computer vision to millions of deployed cameras? The answer is through "Vision-as-a-Service" (VaaS), a new business model that leverages the cloud to apply state-of-the-art computer vision techniques to video streams captured by inexpensive cameras. Centralizing vision processing in the cloud offers some compelling advantages, such as the ability to quickly deploy sophisticated new features without requiring upgrades of installed camera hardware. It also brings some tough challenges, such as scaling to bring intelligence to millions of cameras. In this Embedded Vision Summit talk, Herman Yau, Co-Founder and CEO of Tend, explains the architecture and business model behind VaaS, shows how it is being deployed in a wide range of real-world use cases, and highlights some of the key challenges and how they can be overcome.
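As a rough illustration of the camera-side half of such a pipeline, here is a minimal sketch in which an inexpensive camera pushes compressed frames to a cloud endpoint for analysis. The URL and payload format are hypothetical, not Tend's actual API:

    import cv2
    import requests

    CLOUD_ENDPOINT = "https://vision.example.com/v1/analyze"  # placeholder URL

    cap = cv2.VideoCapture(0)  # any inexpensive USB or IP camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Compress on-device; the vision processing itself stays in the cloud,
        # so new features can be deployed without touching camera hardware.
        ok, jpeg = cv2.imencode(".jpg", frame)
        response = requests.post(CLOUD_ENDPOINT, files={"frame": jpeg.tobytes()})
        print(response.json())  # e.g., events detected by the service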

More Videos

FEATURED ARTICLES

Speeding Up the Fast Fourier Transform Mixed-Radix on Mobile ARM Mali GPUs By Means of OpenCL
In this three-part technical article series (part 1, part 2 and part 3), Gian Marco Iodice, GPU Compute Software Engineer at ARM, covers the following topics (a brief illustrative sketch follows the list):

  • Background information on the one-dimensional complex FFT algorithm, pointing out the limits of computing the DFT (discrete Fourier transform) directly
  • A step-by-step analysis of the three main computation blocks of the mixed-radix FFT (fast Fourier transform), in both theory and implementation, and
  • Extension of the mixed-radix FFT OpenCL implementation to two dimensions, along with explanations of optimizations for mobile ARM Mali GPUs.
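To make the first two bullets concrete, here is a minimal Python sketch (not ARM's OpenCL implementation) contrasting the O(N²) direct DFT with a recursive radix-2 FFT; the mixed-radix kernels described in the series generalize the same factorization idea to radices such as 3, 5, and 7, and run on the GPU:

    import cmath

    def dft_direct(x):
        """Direct DFT: N^2 complex multiply-adds; this is the cost the FFT avoids."""
        N = len(x)
        return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))
                for k in range(N)]

    def fft_radix2(x):
        """Recursive radix-2 Cooley-Tukey FFT (N must be a power of two);
        mixed-radix variants split N into arbitrary small prime factors."""
        N = len(x)
        if N == 1:
            return list(x)
        even = fft_radix2(x[0::2])
        odd = fft_radix2(x[1::2])
        tw = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
        return ([even[k] + tw[k] for k in range(N // 2)] +
                [even[k] - tw[k] for k in range(N // 2)])

    signal = [complex(i % 4) for i in range(8)]
    # Both paths agree to within floating-point error.
    print(max(abs(a - b) for a, b in zip(dft_direct(signal), fft_radix2(signal))))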

Also see Iodice's technical presentation "Using SGEMM and FFTs to Accelerate Deep Learning" from this year's Embedded Vision Summit.

Is the Future of Machine Vision Already Here? and Computer Vision as the New Industry Growth Driver?
In this two-article series, Dave Tokic, consultant to the Embedded Vision Alliance, shares a diverse set of insights and perspectives gathered at this year's Embedded Vision Summit.

More Articles

FEATURED NEWS

Intel to Acquire Movidius: Accelerating Computer Vision through RealSense for the Next Wave of Computing

Imaginghub Embedded Vision Web Portal Goes Live

Allied Vision Presents Broad Variety of Industrial Cameras at Enova Paris

NVIDIA Launches World's First Deep Learning Supercomputer

ON Semiconductor Introduces Advanced 13 Mpixel CMOS Image Sensor with SuperPD PDAF Technology

More News

UPCOMING INDUSTRY EVENTS

Deep Learning for Vision Using CNNs and Caffe: A Hands-on Tutorial: September 22, 2016, Cambridge, Massachusetts

IEEE International Conference on Image Processing (ICIP): September 25-28, 2016, Phoenix, Arizona

SoftKinetic DepthSense Workshop: September 26-27, 2016, San Jose, California

Sensors Midwest (use code EVA for a free Expo pass): September 27-28, 2016, Rosemont, Illinois

Embedded Vision Summit: May 1-3, 2017, Santa Clara, California

More Events

 
