
"NovuTensor: Hardware Acceleration of Deep Convolutional Neural Networks for AI," a Presentation from NovuMind


Miao (Mike) Li, Vice President of IC Engineering at NovuMind, presents the "NovuTensor: Hardware Acceleration of Deep Convolutional Neural Networks for AI" tutorial at the May 2018 Embedded Vision Summit.

Deep convolutional neural networks (DCNNs) are driving explosive growth in the artificial intelligence industry. Performance, energy efficiency and accuracy are all significant challenges in DCNN inference, both in the cloud and at the edge, and all three depend fundamentally on the hardware architecture of the inference engine. Achieving optimal results requires a new class of special-purpose AI processor – one that is highly efficient in both arithmetic computation and data movement.

NovuMind achieves this efficiency by exploiting the three-dimensional data relationship inherent in DCNNs, and by combining highly efficient, specialized hardware with an architecture flexible enough to accelerate all foreseeable DCNN structures. The result is the NovuTensor FPGA and ASIC chip, which puts server-class GPU/TPU performance into battery-powered embedded devices.
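To make the "three-dimensional data relationship" concrete, the sketch below shows direct convolution in NumPy. This is an illustrative example, not NovuMind's implementation: each output value is a reduction over a channels × K × K volume of the input, and neighboring outputs reuse overlapping volumes – the reuse pattern a specialized accelerator can exploit to minimize data movement.

```python
import numpy as np

def conv2d_direct(x, w):
    """Naive direct convolution, stride 1, 'valid' padding.

    x: input tensor of shape (C_in, H, W)
    w: weight tensor of shape (C_out, C_in, K, K)

    Each output pixel out[oc, i, j] reduces over a three-dimensional
    C_in x K x K volume of the input; adjacent output pixels share
    most of that volume, which is what makes on-chip reuse effective.
    """
    c_out, c_in, k, _ = w.shape
    _, h, w_in = x.shape
    out = np.zeros((c_out, h - k + 1, w_in - k + 1))
    for oc in range(c_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # 3-D reduction: all input channels and a K x K window
                out[oc, i, j] = np.sum(x[:, i:i + k, j:j + k] * w[oc])
    return out
```

A naive loop nest like this moves the same input values many times; a hardware architecture that maps the 3-D reduction directly onto its datapath can instead fetch each value once and reuse it across output channels and spatial positions.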