Blog Posts

How Embedded Vision Is Shaping the Next Generation of Autonomous Mobile Robots

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Autonomous Mobile Robots (AMRs) are being deployed across industries, from warehouses and hospitals to logistics and retail, thanks to embedded vision systems. See how cameras are integrated into AMRs so that they can quickly and […]

AI Blueprint for Video Search and Summarization Now Available to Deploy Video Analytics AI Agents Across Industries

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The age of video analytics AI agents is here. Video is one of the defining features of the modern digital landscape, accounting for over 50% of all global data traffic. Dominant in media and increasingly important for […]

Effortless Edge Deployment of AI Models with Digica’s AI SDK (Featuring ExecuTorch)

This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. Deploying AI models on mobile and embedded devices is a challenge that goes far beyond just converting a trained model. While frameworks like PyTorch offer a streamlined way to develop deep learning models, efficiently deploying them on […]
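The excerpt above stops short of the workflow itself, so as a rough, hypothetical sketch of the kind of export path ExecuTorch provides (independent of Digica’s AI SDK, whose API is not shown here), converting a trained PyTorch model into a .pte file for on-device inference can look roughly like this; the placeholder model and input shape are assumptions for illustration only:

```python
# Minimal ExecuTorch export sketch (assumes the executorch package is installed;
# the model and input shape are illustrative placeholders, not Digica's SDK).
import torch
import torch.nn as nn
from executorch.exir import to_edge

# Placeholder network standing in for a real edge vision model.
class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, 10)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x))).flatten(1)
        return self.fc(x)

model = TinyClassifier().eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

# 1. Capture the model with torch.export.
exported = torch.export.export(model, example_inputs)

# 2. Lower to the ExecuTorch edge dialect, then to an executable program.
executorch_program = to_edge(exported).to_executorch()

# 3. Serialize to a .pte file that the on-device ExecuTorch runtime can load.
with open("model.pte", "wb") as f:
    f.write(executorch_program.buffer)
```

The resulting model.pte is the artifact the ExecuTorch runtime loads on the target device.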

Key Drone Terminology: A Quick Guide for Beginners

This blog post was originally published at Namuga Vision Connectivity’s website. It is reprinted here with the permission of Namuga Vision Connectivity. As drone technology becomes more accessible and widespread, it’s important to get familiar with the basic terms that define how drones work and how we control them. Whether you’re a hobbyist, a content […]

Build & Deploy AI Vision Models for Windows on Snapdragon: OpenCV Live! Podcast

This blog post was originally published at EyePop.ai’s website. It is reprinted here with the permission of EyePop.ai. Want to deploy AI models to the edge without the cloud? On this special LIVE podcast hosted by OpenCV, Andy Ballester and Blythe Towal show how EyePop.ai is enabling real-time, offline inference with Snapdragon NPUs from Qualcomm.

Seamless Software Development for Qualcomm Platforms with Qualcomm Visual Studio Code Extension

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. To simplify the development of applications for Qualcomm Dragonwing products, Qualcomm Technologies, Inc. has introduced the Qualcomm Visual Studio Code Extension. This innovative extension provides a streamlined, end-to-end environment that enhances your workflow across various tools and […]

Unlocking the Power of 100,000 fps: How Quad-pixel Shutter Control Works

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. When objects move fast, typical imaging sensors deal with motion blur, artifacts, and sensitivity issues. But these can be overcome with Sony IMX900’s quad-pixel shutter control. Learn how it enables the 100K fps feature and […]

R²D²: Unlocking Robotic Assembly and Contact Rich Manipulation with NVIDIA Research

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. This edition of NVIDIA Robotics Research and Development Digest (R2D2) explores several contact-rich manipulation workflows for robotic assembly tasks from NVIDIA Research and how they can address key challenges with fixed automation, such as robustness, adaptability, and […]

Efficient LLaMA-3.2-Vision by Trimming Cross-attended Visual Features

This blog post was originally published at Nota AI’s website. It is reprinted here with the permission of Nota AI. Our method, Trimmed-Llama, reduces the key-value cache (KV cache) and latency of cross-attention-based Large Vision Language Models (LVLMs) without sacrificing performance. We identify sparsity in LVLM cross-attention maps, showing a consistent layer-wise pattern where most […]
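As a purely illustrative sketch of the idea described above (not Nota AI’s implementation; the function name, tensor shapes, and keep-ratio heuristic are assumptions), one way to exploit cross-attention sparsity is to score each visual token by the attention mass it receives and keep only the top-scoring tokens in the cross-attention KV cache:

```python
# Illustrative sketch only (not Trimmed-Llama's actual code): prune visual
# key/value entries by keeping the visual tokens that receive the most
# cross-attention mass, shrinking the KV cache used by later layers.
import torch

def trim_visual_kv(attn_weights, visual_keys, visual_values, keep_ratio=0.25):
    """
    attn_weights:  [batch, heads, text_len, num_visual_tokens] cross-attention map
    visual_keys:   [batch, heads, num_visual_tokens, head_dim]
    visual_values: [batch, heads, num_visual_tokens, head_dim]
    Returns trimmed keys/values plus the indices of the tokens that were kept.
    """
    # Total attention each visual token receives, summed over heads and text queries.
    token_importance = attn_weights.sum(dim=(1, 2))               # [batch, num_visual_tokens]

    num_keep = max(1, int(keep_ratio * attn_weights.shape[-1]))
    keep_idx = token_importance.topk(num_keep, dim=-1).indices    # [batch, num_keep]

    # Gather only the surviving visual tokens from the KV cache.
    idx = keep_idx[:, None, :, None].expand(
        -1, visual_keys.shape[1], -1, visual_keys.shape[-1])
    trimmed_keys = visual_keys.gather(2, idx)
    trimmed_values = visual_values.gather(2, idx)
    return trimmed_keys, trimmed_values, keep_idx
```

If the sparsity pattern is consistent across layers, as the excerpt suggests, the kept indices can be reused downstream, so the trimmed KV cache saves both memory and per-token latency.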

SENSING Tech to Debut Three Advanced Vision Solutions at Embedded Vision Summit

May 16, 2025 – SENSING Tech will debut three new visual perception solutions at the upcoming Embedded Vision Summit USA, taking place from May 20 to 22 at the Santa Clara Convention Center. Reflecting the company’s ongoing commitment to imaging innovation, the new lineup includes an 8MP HDR/LFM Camera, a Defrosting & Deicing HDR Camera […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
