Embedded Vision Alliance: Technical Articles

What Is Deep Learning? Three Things You Need to Know

This article was originally published at MathWorks' website. It is reprinted here with the permission of MathWorks.

Sensor Modules Help Accelerate Embedded Vision Development

This article was originally published at FRAMOS' website. It is reprinted here with the permission of FRAMOS.

Automated Optical Inspection

This article was originally published at Basler's website. It is reprinted here with the permission of Basler.

Optical measurement systems inspect objects and detect a wide variety of characteristics. With its large selection of area scan and line scan cameras, Basler has the right camera model for any inspection task.

Live and in Color: Why Color Calibration is So Important in Medical Technology

This article was originally published at Basler's website. It is reprinted here with the permission of Basler.

Event-based Sensing Enables a New Generation of Machine Vision Solutions

This is an excerpt; the full article is published at Prophesee's website. It is reprinted here with the permission of Prophesee.

Event-based sensing is a new paradigm in imaging technology inspired by human biology. It promises to enable a smarter and safer world by improving the ability of machines to sense their environments and make intelligent decisions about what they see.

Camera Selection – How Can I Find the Right Camera for My Image Processing System?

This article was originally published at Basler's website. It is reprinted here with the permission of Basler.

Lost in the Jungle of Options?

Faced with the challenge of designing an image processing system, you may find yourself in a veritable jungle of options, amidst a dizzying range of camera models, relevant properties, helpful features and potential applications.

Speeding Up Semantic Segmentation Using MATLAB Container from NVIDIA NGC

This article was originally published at NVIDIA's website. It is reprinted here with the permission of NVIDIA.

Gone are the days of using a single GPU to train a deep learning model. With computationally intensive algorithms such as semantic segmentation, a single GPU can take days to optimize a model. But multi-GPU hardware is expensive, you say. Not any longer: NVIDIA multi-GPU hardware on cloud instances such as AWS P3 lets you pay for only what you use. Cloud instances also give you access to the latest generation of hardware with support for Tensor Cores, enabling significant performance boosts with modest investment. You may have heard that setting up a cloud instance is difficult, but NVIDIA NGC makes life much easier. NGC is the hub of GPU-optimized software for deep learning, machine learning, and HPC. NGC takes care of all the plumbing so developers and data scientists can focus on generating actionable insights.
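To put that multi-GPU capability to work from MATLAB, the relevant switch is the ExecutionEnvironment training option. The snippet below is a minimal sketch of that setting, not code from the article; the layers and trainingData variables are illustrative placeholders.

    % Minimal sketch: spread each mini-batch across all GPUs on the instance
    % (e.g., the V100s on an AWS P3). 'layers' and 'trainingData' are placeholders.
    opts = trainingOptions('sgdm', ...
        'ExecutionEnvironment', 'multi-gpu', ...  % use every local GPU
        'MiniBatchSize', 16, ...                  % tune to GPU count and memory
        'InitialLearnRate', 1e-3, ...
        'MaxEpochs', 30);
    net = trainNetwork(trainingData, layers, opts);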

This post walks through the easiest path to speeding up semantic segmentation with NVIDIA GPUs on a cloud instance, using the MATLAB container for deep learning available from NGC. First, we will explain semantic segmentation. Next, we will show performance results for a semantic segmentation model trained in MATLAB on two different P3 instances using the MATLAB R2018b container available from NGC. Finally, we'll cover a few tricks in MATLAB that make it easy to perform deep learning and help manage memory use.

What is Semantic Segmentation?

Semantic segmentation is a deep learning technique that assigns a label or category to every pixel in an image. This dense, pixel-level recognition provides capabilities that traditional bounding-box approaches cannot match in some applications. In automated driving, it's the difference between a generalized area labeled "road" and an exact, pixel-level determination of the drivable surface of the road. In medical imaging, it's the difference between labeling a rectangular region as a "cancer cell" and knowing the exact shape and size of the cell.
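In MATLAB, that pixel-level output takes the form of a categorical label matrix the same size as the input image. The following is a minimal sketch assuming an already-trained network; the network variable and file name are illustrative placeholders, not from the article.

    % Minimal sketch: per-pixel classification with a trained network 'net'.
    I = imread('road_scene.png');   % placeholder image
    C = semanticseg(I, net);        % categorical labels, same height/width as I
    imshow(labeloverlay(I, C));     % blend class colors over the original image
    % Unlike a bounding box, C gives a class for every pixel, so for example
    % nnz(C == 'Road') is the exact drivable-surface area in pixels.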


Figure 1. Example of an image with semantic labels for every pixel

We tested semantic segmentation using MATLAB to train a SegNet model, which has an encoder-decoder architecture with four encoder layers and four decoder layers. The...
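As a rough idea of what that setup looks like in code, here is a hedged sketch that builds a SegNet with segnetLayers using an encoder depth of 4 to match the architecture described above; the datastore paths, class names, and label IDs are placeholders, not values from the article.

    % Sketch: build and train a SegNet with 4 encoder / 4 decoder sections.
    % Paths, class names, and pixel label IDs are illustrative placeholders.
    imds = imageDatastore('images/');
    pxds = pixelLabelDatastore('labels/', {'Road','Sky','Other'}, [1 2 3]);
    ds   = pixelLabelImageDatastore(imds, pxds);
    layers = segnetLayers([360 480 3], 3, 4);  % imageSize, numClasses, encoderDepth
    net = trainNetwork(ds, layers, opts);      % 'opts' as in the earlier multi-GPU sketch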

Bringing it All into Focus: Finding the Right Lens for Your Camera

This article was originally published at Basler's website. It is reprinted here with the permission of Basler.

Intel’s Recommendations for the U.S. National Strategy on Artificial Intelligence

This article was originally published at Intel's website. It is reprinted here with the permission of Intel.

Improving TensorFlow Inference Performance on Intel Xeon Processors

This article was originally published at Intel's website. It is reprinted here with the permission of Intel.