Vision Technology Showcase Directory

The Vision Technology Showcase is your one-stop shop to see more than 100 live computer vision demos from the more than 60 exhibitors listed below.

Showcase hours: Tuesday, May 22, noon - 8 pm (reception 6-8 pm); Wednesday, May 23, 10:30 am - 6 pm.

 

Achronix, a privately held, fabless semiconductor corporation, is showcasing its Speedcore eFPGA IP, which can be integrated into an ASIC or SoC to provide a customized programmable fabric. Users specify their logic, memory and DSP resource needs, then Achronix configures the Speedcore IP to meet their individual requirements.
 

CONTACT
Alok Sanghavi,
Sr. Marketing Manager
aloksanghavi@achronix.com
www.achronix.com

Vision Tank Finalist

AiFi is building a scalable version of "Amazon Go" to empower stores of the future to be checkout-free. AiFi's innovative AI-powered sensor networks also provide retailers with valuable insights into shopping behavior and product preferences, as well as improved inventory management.
 

CONTACT
Steve Gu,
CEO, AiFi, Inc.
steve@aifi.io

SILVER SPONSOR

AImotive is a global provider of vision-first self-driving technology. We utilize artificial intelligence, simulation and supporting hardware architectures for a safe autonomous experience. aiDrive, our scalable vision-first self-driving solution, will be showcased through videos of real-world testing. Our technology will also drive live in aiSim, our advanced simulator for autonomous vehicle development. Alongside aiWare, these solutions form the foundation of AImotive's vision for the scalable future of autonomy.
 

CONTACT
Szabolcs Jánky,
Business Development
szabolcs.janky@aimotive.com
www.aimotive.com

 

Aldec will present two demos:

  • ADAS reference designs based on the TySOM-3-ZU7EV (Zynq Ultrascale+ MPSoC) + FMC-ADAS, including 360-degree surround view, bird's eye view, driver drowsiness detection, and a smart rear
  • Zynq hardware/software co-simulation solution for Zynq architecture based on QEMU and Riviera-PRO


CONTACT
Farhad Fallah
Farhadf@aldec.com
www.aldec.com

 

Algolux demos:

  • CRISP-ML: a workflow tool that uses machine learning to automatically optimize your imaging/vision system against objective metrics, shrinking image quality tuning from months to hours.
  • CANA: a full DNN stack for more robust perception in difficult conditions (e.g., low light, adverse weather), with 30%+ better accuracy than state-of-the-art alternatives.


CONTACT
Dave Tokic,
VP Mktg/Partnerships
dave.tokic@algolux.com
www.algolux.com

PREMIER & LANYARD SPONSOR

The 1 product line is a new, versatile technical platform designed and optimized for embedded vision. Instead of the traditional FPGA-based design used in machine vision cameras, the 1 product line is powered by our new ALVIUM® Technology. The 1 product line 130 and 140 Series support a very large range of sensors from 0.5 to 18 megapixels, several industry-approved interfaces such as MIPI CSI-2 and USB3 Vision, different feature sets, and various housing options such as board level, open end and complete housing, along with various lens mounts.
 

CONTACT
Contact Sales at info@alliedvision.com

 

For more than 45 years AMD has driven innovation in high-performance computing, graphics and visualization technologies ― the building blocks for gaming, immersive platforms and the datacenter. Hundreds of millions of consumers, Fortune 500 businesses and cutting-edge scientific research facilities worldwide rely on AMD technology daily to improve how they live, work and play.
 

CONTACT
Guy Ludden
Radeon Open Compute (ROCm) Group
www.amd.com

Vision Tank Finalist

Aquifi provides visual inspection services for logistics and manufacturing, based on the combination of 3D reconstruction and deep learning. The company’s solution, a trainable virtual inspection system, increases the throughput of human workers and reduces errors due to fatigue and repetition.
 

CONTACT
Carlo Dal Mutto, CTO
cdm@aquifi.com
www.aquifi.com

GOLD SPONSOR

Arm recently announced Project Trillium, a new suite of Arm® IP that brings machine learning (ML) to edge devices. Arm will demonstrate a variety of its machine learning technologies spanning embedded through client devices, as well as several use-case products for attendees to explore.
 

CONTACT
Tim Hartley
tim.hartley@arm.com
+44 7788 750 900

Au-Zone is a leading provider of development tools, engineering design services, and enabling IP for intelligent embedded vision products. By utilizing our tools, our customers can quickly develop and securely deploy machine learning solutions. Through our engineering consulting engagements, we help our clients lower development costs, mitigate program risk and shorten time to revenue.
 

CONTACT
Brad Scott, President
brad@au-zone.com
www.embeddedml.com

Basler is the leading global provider of high quality industrial cameras and camera modules for a wide range of applications. With 30 years of vision expertise and a dedicated embedded portfolio, Basler supports manufacturers worldwide in incorporating cutting edge vision technology into their products and applications.
 

CONTACT
Daniel Toth, Partner Manager
daniel.toth@baslerweb.com
www.baslerweb.com

GOLD SPONSOR

BDTI helps companies create products that incorporate computer vision and deep learning. BDTI specializes in designing custom algorithms that meet unique customer requirements, creating efficient software that executes demanding algorithms within tight cost and power budgets, and enabling informed decisions on the best techniques and technologies for customer products. See demos of DNN object detection, 3D sensing, and object measurement at Booth 603.
 

CONTACT
Jeremy Giddings
(925) 954-1411
giddings@bdti.com
www.BDTI.com

Vision Tank Finalist

Boulder AI has created an intelligent GPU-enabled deep-learning neural network camera, DNNcam, that is waterproof and dust-proof. The camera executes AI/machine learning and computer vision algorithms at the edge, distilling visual information into actionable event data. The end-to-end Boulder AI platform enables collecting edge data events into cloud environments.
 

CONTACT
Dan Conners, Co-Founder & CTO
dan@boulderAI.com
www.boulderAI.com

 

Brodmann17’s advanced deep-learning algorithms produce state-of-the-art vision accuracy with only a fraction of the usual computation load. Brodmann17 will demonstrate how edge devices such as autonomous vehicles and ADAS can handle deep-learning vision on standard low-power processors. Brodmann17 is making IoT and automotive edge-devices cloud-free and autonomous.
 

CONTACT
Adi Pinhas, Co-founder & CEO
Adi@brodmann17.com
www.brodmann17.com

GOLD SPONSOR

Cadence will demonstrate its high-performance and low-power vision and AI DSPs, designed to handle complex imaging, computer vision, and AI processing functions in mobile handset, automotive, AR/VR, surveillance, drone, and wearable products.
 

CONTACT
Pulin Desai, Product Marketing Director
pulin@cadence.com
www.cadence.com

CEVA is the leading licensor of signal processing platforms and artificial intelligence processors for a smarter, connected world and a range of end markets. CEVA’s ultra-low-power IP for vision, audio, communications and connectivity includes DSP-based platforms for advanced imaging, computer vision and deep learning for any camera-enabled device.
 

CONTACT
Yair Siegel
Yair.Siegel@ceva-dsp.com
www.ceva-dsp.com

We will demonstrate fully hardwired deep learning inference IP performing object detection on 4Kp30 video from real-time camera inputs. Lens distortion correction will also be demonstrated, in which images from wide-angle lenses are corrected to produce enhanced input images. The demonstrations will run on FPGA-based boards.
 

CONTACT
Philip Han, Head of Marketing
marketing@chipsnmedia.com
www.chipsnmedia.com

Crossbar ReRAM embedded non-volatile memory technology enables massive amounts of computational bandwidth at the lowest energy consumption when used to store AI trained models on the same silicon die as computing cores running neural networks and algorithms. The demos showcase ReRAM for object classification, face recognition and license plate recognition.
 

CONTACT
Sylvain Dubois, VP Business Development & Marketing
sylvain.dubois@crossbar-inc.com
www.crossbar-inc.com

DEKA Research & Development Corporation develops internally generated inventions and provides research and development for major corporate clients. DEKA's innovative devices have expanded the frontiers of health care worldwide. Some of DEKA's notable inventions include the first wearable insulin pump for diabetics, the HomeChoice™ portable peritoneal dialysis machine, the LUKE prosthetic arm, the iBOT stair-climbing wheelchair and the Segway Human Transporter.
 

CONTACT
Dirk Van Der Merwe
dmerwe@dekaresearch.com

If your company develops vision-based end products, visit the Embedded Vision Alliance booth to learn how we can accelerate your development and reduce risk.

If you’re a provider of vision components, software or services, learn how the Alliance can connect you with customers, partners and provide early insights into key market and technology trends.
 

CONTACT
Kim Vaupen or Ruthann Fisher
info@embedded-vision.com
www.embedded-vision.com

FIRST® is a movement. The oldest and largest nonprofit organization of its kind, FIRST inspires innovation by teaching science, technology, engineering and math (STEM) and leadership skills through hands-on robotics challenges developed to ignite curiosity and passion in K-12 students.
 

CONTACT
www.firstinspires.org

FLIR Systems, Inc. is a global leader in the design and manufacture of innovative, high-performance digital cameras for industrial, medical and life science, traffic, biometric, GIS, and people counting applications.
 

CONTACT
Preston Barrett, Territory Account Manager
preston.barrett@flir.com
www.flir.com/mv

FRAMOS® will be showcasing Sony's CMOS Starvis rolling shutter and Pregius global shutter sensors, including the new IMX250 polarized sensor, as well as Intel® RealSense™ technology and a sampling of modules and cameras ready for any embedded vision application.
 

CONTACT
Chris Donegan, Sales Mgr. N. America
C.donegan@framos.com
www.framos.com

 

Gidel provides intelligent FPGA solutions for acceleration and imaging. The company's Infinivision technology enables companies to develop 3D mapping, VR and AR products by helping them capture high-quality images from a large array of cameras to create panoramic 360-degree content for automotive and for next-generation immersive experiences in media, sports and entertainment.
 

CONTACT
Nurit Ben Moshe
bm_nurit@gidel.com
www.gidel.com

 

GreenWaves Technologies is a fabless semiconductor start-up designing disruptive ultra-low power embedded solutions for image, sound and vibration AI processing in sensing devices. GreenWaves will be showing GAP8, the industry's first ultra-low power processor enabling battery-operated interpretation of images, sounds and vibrations in Internet of Things (IoT) applications.
 

CONTACT
Martin Croome, VP Business Development
sales@greenwaves-technologies.com
www.greenwaves-technologies.com

GOLD SPONSOR

Horizon Robotics is a leading technology powerhouse dedicated to providing integrated and open edge artificial intelligence (AI) solutions with high performance, low power and low cost. After two years of R&D, we unveiled Journey and Sunrise, China's first edge AI computer vision processors, built on our proprietary Gauss-architecture Brain Processing Unit (BPU). They power smart cars and smart cameras, providing industrial customers with a complete solution including chips, algorithms and cloud support.
 

CONTACT
Yufeng Zhang, VP, Global Business
yufeng.zhang@hobot.cc
www.horizon.ai

Using object recognition, segmentation and variable-precision demonstrations, Imagination will show the benefits of neural network acceleration for edge devices using either Imagination's PowerVR 2NX, a complete, highly efficient standalone hardware IP neural network accelerator for SoCs, or the 2NX NNA and PowerVR GPUs in combination.
 

CONTACT
David Harold, VP Mktg. Comms
David.Harold@imgtec.com
www.imgtec.com

 

iMerit is a technology services company, delivering data to some of the most innovative companies in machine learning, eCommerce, and computer vision. iMerit’s “humans in the loop” AI services are recognized globally for enabling advanced computing capabilities. iMerit does so while effecting positive social and economic change by empowering marginalized youth and young women.
 

CONTACT
Robert Frary, Director
robert@imerit.net
www.imerit.net

 

ImmerVision enables intelligent vision in the world’s devices. The company designs patented, augmented resolution, wide-angle lenses and AI-ready image processing so consumer devices, and professional, automotive, robotics, and medical applications can See More, Smarter. ImmerVision licenses its technology to innovative component, OEM, and ODM manufacturers.
 

CONTACT
Angus Mackay, Dir. Marketing & Comms
angus.mackay@immervision.com
www.immervisionenables.com

PREMIER PLUS SPONSOR

Intel, a leader in computing innovation, is driving the evolution of edge-to-cloud vision solutions, helping unlock new possibilities for the data that businesses generate with a comprehensive stack of products designed for AI. The company's robust hardware and software portfolio gives OEMs/ODMs, system integrators, ISVs, and solution providers the tools required to accelerate the design, development and deployment of high-performance computer vision solutions. With heterogeneous camera-to-cloud inference and acceleration silicon (Intel® Movidius™ VPUs, Intel® FPGAs, CPUs, and CPUs with integrated graphics), along with high-performance analytics development and deployment tools, Intel is enabling rich solutions for AI everywhere.
 

CONTACT
Brenda Christoffer, Marketing Specialist
Brenda.a.christoffer@intel.com
www.intel.com

PREMIER PLUS SPONSOR

Lattice will showcase our small form factor, ultra-low power, production-priced FPGAs addressing embedded vision needs in industrial, automotive and consumer markets. Learn how we are creating innovative low power solutions for face tracking in surveillance cameras, collision avoidance for industrial robots, and speed sign detection for automotive aftermarket cameras. On display will be the latest demos of image sensor connectivity, computer vision and machine learning inferencing, based on ECP5 and CrossLink FPGAs.
 

CONTACT
Deepak Boppana
Deepak.boppana@latticesemi.com
408-826-6336

SILVER SPONSOR

Luxoft is a leading independent software service provider for automotive OEMs, Tier 1s and semiconductor companies. The transition to autonomous cars is stimulating massive investment in advanced technologies that enable vehicles to drive themselves, and it will change expectations for in-vehicle user experiences. We develop high-end automotive software solutions across UX/UI, HMI, ADAS, connectivity, IoT, telematics and navigation that enable the gradual introduction of autonomous driving. Computer vision and AI are crucial parts of our technology and of the industry's transformation.
 

CONTACT
Visit us online:
automotive.luxoft.com
automotive@luxoft.com

Deep Learning in MATLAB

MATLAB makes it easy to design deep-learning-based vision applications and to deploy optimized generated code to embedded GPU platforms such as the Jetson TX2 and Drive PX2, as well as to Intel-based CPUs and Arm-based platforms. Visit the MathWorks booth to experience the live demos and learn more!
 

CONTACT
Sandeep Hiremath, Product Marketing
sandeep.hiremath@mathworks.com
www.mathworks.com

Mentor provides custom compilers, HPC libraries, Catapult HLS (high-level synthesis) and consulting services for deploying computer vision and machine learning applications on embedded platforms requiring performance accelerators such as FPGAs, GPUs, DSPs and SIMD engines.
 

CONTACT
Pete Decher, Director Business Development
Pete_decher@mentor.com
www.mentor.com/embedded

Microsoft's Cognitive Services offers a broad spectrum of vision capabilities that power imaginative and inspired uses. These include ready-to-use capabilities such as image tagging, content moderation, OCR and face detection, as well as fully customizable classifiers and object detectors that can be exported for edge deployment through the Custom Vision service.
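As a rough illustration only (not official Microsoft sample code), these vision capabilities are exposed as REST endpoints; the Python sketch below assumes the v2.0 analyze endpoint in the westus region, a placeholder subscription key and a hypothetical image URL:

  import requests

  # Assumed region/endpoint and placeholder key; substitute your own Cognitive Services values.
  ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"
  HEADERS = {
      "Ocp-Apim-Subscription-Key": "<your-subscription-key>",
      "Content-Type": "application/json",
  }
  # Request tags, a natural-language description and detected faces for a public image URL.
  params = {"visualFeatures": "Tags,Description,Faces"}
  body = {"url": "https://example.com/street-scene.jpg"}

  response = requests.post(ENDPOINT, headers=HEADERS, params=params, json=body)
  response.raise_for_status()
  for tag in response.json().get("tags", []):
      print(tag["name"], tag["confidence"])

Custom Vision follows the same request pattern with a project-specific prediction endpoint, and trained classifiers or detectors can be exported (e.g., to ONNX or TensorFlow) for offline deployment at the edge.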
 

CONTACT
Cornelia Carapcea, Principal Program Manager
orncar@microsoft.com
www.microsoft.com/en-us/

Morpho, Inc., a global leader in embedded image processing software, will showcase "SoftNeuro", the world's fastest deep learning inference engine. "SoftNeuro" obtains profile data from the target platforms that execute inference and performs optimizations based on that data to achieve higher speeds, making it easy to deploy trained networks from multiple frameworks.
 

CONTACT
Toshi Torihara, Vice President
h-torihara@morphoinc.com
www.morphoinc.com/en/

MVTec is demonstrating an inspection application that utilizes deep learning. The images are acquired live with a GigE camera and processed on the NVIDIA Jetson TX2. In addition, we are showing an identification application on the Raspberry Pi utilizing the integrated camera module.
 

CONTACT
Heiko Eisele
sales@mvtec.us
(617) 401-2112

 

NALBI specializes in deep learning for embedded systems. Our technologies include highly optimized deep learning models and a computing engine especially optimized for the specific target device. We present real-time, accurate human segmentation and detection on embedded devices. The solution can be used in mobile apps, surveillance, smart home, and more.
 

CONTACT
Kina Jin, CEO
kinajin@nalbi.ai
www.nalbi.ai

 

NET will be presenting a real-time object tracking system using a line-scan camera. Its FPGA is configurable with custom functionality, and its x86-architecture computer can run Windows or Linux and image processing libraries of the customer's choice. NET's Open Camera Concept enables solution providers to create their own embedded vision solutions.
 

CONTACT
Grzegorz Kolodynski,
Marketing & PR
g.kolodynski@net-gmbh.com
www.net-gmbh.com

SILVER SPONSOR

Nextchip is a vision solution company that offers an extraordinary ISP (image signal processor) with functions such as HDR, DeFog, 3DNR, LFM etc., as well as vision-based ADAS solutions and SVM (surround view monitoring) software. The company's experience and capabilities are enabling Nextchip to take its next steps as a global image expert group, tackling myriad projects and challenges in the automotive field.
 

CONTACT
Mathias Sunghoon Chung,
Deputy General Manager
sunghoonch@nextchip.com
www.nextchip.com

 

Embedded vision is now possible in all light conditions, even in outdoor scenes with high illumination, thanks to our HDR sensor NSC1602 and MAGIC mono board. Live demos will show High Dynamic Range combined with advanced image processing for a reliable face recognition application.
 

CONTACT
Nicolas Baroan, BDM
Tel: +33 1 64 47 88 58
info@new-imaging-technologies.com

GOLD SPONSOR

NovuMind is dedicated to improving your life through Artificial Intelligence by making things think. Through cutting edge, in-house-developed artificial intelligence technology, NovuMind combines big data, high-performance, and heterogeneous computing to change the Internet of Things (IoT) into the Intelligent Internet of Things (I2oT).

Our NovuTensor chip does tensor computation at the speed of silicon, to provide unsurpassed performance-to-power ratios. NovuTensor is ideal for AI applications such as fast video object detection or video resolution enhancement.
 

CONTACT
NovuMind Inc
Santa Clara, CA
info@novumind.com
www.novumind.com

SILVER SPONSOR

NXP Semiconductors N.V. enables secure connections and infrastructure for a smarter world, advancing solutions that make lives easier, better and safer. As the world leader in secure connectivity solutions for embedded applications, NXP is driving innovation in embedded vision solutions for the secure connected vehicle, end-to-end security & privacy and smart connected solutions markets, built on more than 60 years of combined experience and expertise.
 

CONTACT
Ali Osman Ors
Director, AI Strategy and Partnerships
ali.ors@nxp.com
www.nxp.com

 

PathPartner, a global product engineering specialist, is demonstrating its expertise in developing solutions for advanced embedded vision use-cases including:

  • Advanced driver assistance systems for traffic sign detection, vehicle/pedestrian detection
  • Driver monitoring systems

Also don’t miss our speaker session on “Creating a computationally efficient embedded CNN face recognizer”.
 

CONTACT
Mr. Ramkishor Korada,
Global Head Sales & Mkt
ramkishor.korada@pathpartnertech.com
www.pathpartnertech.com

GOLD SPONSOR

Qualcomm invents breakthrough technologies that transform how the world connects and communicates. When we connected the phone to the Internet, the mobile revolution was born. Today, our inventions are the foundation for life-changing products, experiences, and industries. As we lead the world to 5G, we envision this next big change in cellular technology spurring a new era of intelligent, connected devices and enabling new opportunities in connected cars, networking, and the IoT — including smart cities, smart homes, and wearables. For more information, visit Qualcomm’s website, www.qualcomm.com, the OnQ blog, and our Twitter and Facebook pages.
 

CONTACT
www.qualcomm.com

Companies like Google, Microsoft, Qualcomm, and NVIDIA choose Samasource’s training data services to power their human-in-the-loop artificial intelligence and machine learning projects. And because we offer fully managed services, we can guarantee your service levels and create reliability for your business. Get more out of your training data with Samasource.
 

CONTACT
Karolina Zajac
Senior Director of Global Sales
Karolina.zajac@samasource.org

Scale accelerates the development of AI applications by helping generate high-quality ground truth data for computer vision teams including Cruise, Voyage and Embark. Scale specializes in a variety of perception use cases and industries, including LiDAR point cloud annotation, video annotation and semantic segmentation, with high accuracy at scalable volumes.
 

CONTACT
sales@scaleapi.com

StradVision provides SVNet, an accurate production-ready perception software on automotive embedded hardware, for ADAS and autonomous driving. SVNet helps vehicles identify and navigate around objects for a better, safer driving experience. It enables vehicles to sense precisely where they are in space and in relation to their surroundings.
 

CONTACT
Hak-Kyoung Kim, Algorithm Engineer
hak-kyoung.kim@stradvision.com
www.stradvision.ai

Vision Tank Finalist

Sturfee is building a city-scale Visual Positioning Service (VPS) based on deep learning, computer vision and satellite imaging principles, enabling camera-connected devices and machines to precisely locate themselves in the real world, identify where they are looking, and recognize what is around them, all based on visual input data. Cameras need VPS more than GPS.
 

CONTACT
Sheng Huang
Head of Business Operations & Partnerships
www.sturfee.com

PREMIER SPONSOR

Come to Synopsys’s booth to learn and discuss the latest embedded vision techniques and hardware to implement deep learning in edge applications including surveillance, AR/MR, mobile and automotive. Synopsys and our customers and partners will demonstrate new technology using the DesignWare EV6x Embedded Vision Processors including object and face recognition, Android neural networks and sparse optical flow. The programmable, scalable EV6x processors include scalar, vector DSP and CNN processing units for highly accurate and fast vision processing. They combine the flexibility of software solutions with the high performance and low power consumption of dedicated hardware.
 

CONTACT
Gordon Cooper,
Product Marketing Manager, SG
gordonc@synopsys.com
www.synopsys.com

 

We will showcase embedded AI algorithms including face detection, scene recognition, object tracking, and food recognition. These AI algorithms can be optimized to run on various computing architectures, such as GPUs, DSPs, CPUs or dedicated AI chipsets.
 

CONTACT
Olivia Bai, Marketing Director
baijie@thundersoft.com
www.thundersoft.com

 

Twisthink is a value-added partner helping companies bring IoT and vision-based products to life, using strategic insights and applying intuitive design with technology. Come to the Twisthink space (#109) to discuss what’s next for your business and see product examples showcasing custom algorithms, cameras, UI and UX design and IoT.
 

CONTACT
Kaitlyn Marsman, Marketing Lead
kaitlyn@twisthink.com
www.twisthink.com

 

VeriSilicon's VIP8000 processors reach the performance and memory efficiency of dedicated fixed-function logic while offering the customizable, future-proof flexibility of full programmability in OpenCL, OpenVX and a wide range of neural network frameworks. VeriSilicon's exhibit will feature information about its OpenVX extensions in the i.MX8 applications processor.
 

CONTACT
www.verisilicon.com
Vision IP: viv_info@verisilicon.com
Turnkey Design Services:
us_sales@verisilicon.com

 

VIA Technologies, Inc. is a global leader in the development of highly integrated embedded platform and system solutions for M2M, IoT, and Smart City applications, ranging from video walls and digital signage to healthcare and industrial automation. VIA’s customer base includes the world’s leading high-tech, telecommunications, and consumer electronics industry brand names.
 

CONTACT
Jason Lee Gillikin
Business Development Manager
JasonLeeGillikin@ViaTech.com
www.ViaTech.com

 

videantis is a one-stop deep learning, computer vision and video processor IP provider. Together with our partners we deliver low-power, high-performance, intelligent visual sensing and compute platforms to the automotive, mobile, consumer, and embedded markets.
 

CONTACT
Tony Picard, VP Sales
tony.picard@videantis.com
www.videantis.com

Vision Tank Finalist

VirtuSense Technologies’ product identifies people who are at risk of falls and injuries. The core technology is based on machine vision, using a 3D time-of-flight sensor to track a person’s static and dynamic balance, identify sensory and muscular deficits and provide objective data to assess and treat issues.
 

CONTACT
Deepak Gaddipati, CTO
deepakg@virtusensetech.com
www.virtusensetech.com

Vision Components will showcase the new make & model feature of its multi-platform ALPR (automatic license plate recognition) software, which runs on Android phones, tablets, PCs and in the cloud. Our latest VC Nano 3D-Z embedded 3D triangulation system, based on a dual-core ARM CPU and Linux OS, will also be demonstrated with an angle measurement application.
 

CONTACT
Mariann M. Kiraly,
Business Development Director
mariann.kiraly@vision-components.com
www.vision-components.com

Wave Computing, Inc. is the Silicon Valley company that is revolutionizing artificial intelligence and deep learning with its dataflow-based systems. The company’s vision is to “follow the data” and bring deep learning to customers’ data wherever it may be—from the datacenter to the edge of the cloud.
 

CONTACT
info@wavecomp.com
www.wavecomp.ai

wrnch is teaching cameras to read human body language. The wrnchAI engine is a real-time AI software platform to digitize human motion and behaviour from standard video. wrnch inc is a computer vision / deep learning company based in Montréal, Canada.
 

CONTACT
Dr. Paul Kruszewski, CEO
paul@wrnch.ai
www.wrnch.ai

XIMEA designs and produces leading-edge, high-performance cameras with the lowest power consumption and the smallest footprint, as well as highly optimized cameras and imaging solutions, based on a PCI Express interface. The portfolio is targeted towards industrial, hyperspectral, scientific, high-speed, high-resolution, multi-sensor as well as OEM vision (sub-) systems.
 

CONTACT
Michael Cmok,
Technical Sales Director
mc@ximea.com
www.ximea.com

XNOR.ai offers state-of-the-art deep learning at the edge and will demonstrate object detection running in real-time on a single core of an iPhone 6s CPU, an Ambarella S5LM and a Samsung Galaxy 8. XNOR.ai is up to 10x faster, 200x more power efficient, and requires 8-15x less memory than floating point CNNs.
 

CONTACT
Dan Waters,
VP Marketing & Bus. Dev.
dan@xnor.ai | www.xnor.ai
AI at your fingertips

Join us May 21-24, 2018 in Santa Clara, California.