"Achieving 15 TOPS/s Equivalent Performance in Less Than 10 W Using Neural Network Pruning on Xilinx Zynq," a Presentation from Xilinx

Nick Ni, Director of Product Marketing for AI and Edge Computing at Xilinx, presents the "Achieving 15 TOPS/s Equivalent Performance in Less Than 10 W Using Neural Network Pruning on Xilinx Zynq" tutorial at the May 2018 Embedded Vision Summit.

Machine learning algorithms, such as convolutional neural networks (CNNs), are fast becoming a critical part of image perception in embedded vision applications in the automotive, drone, surveillance and industrial vision markets. Applications include multi-object detection, semantic segmentation and image classification. However, when these networks are scaled to modern image resolutions such as HD and 4K, the computational requirements of a real-time system can easily exceed 10 TOPS, consuming hundreds of watts of power, which is unacceptable for most edge applications.
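
As a rough illustration of how quickly those numbers add up, the back-of-envelope sketch below uses illustrative figures (the per-frame cost and frame rate are assumptions, not values from the talk) to show how per-frame compute and resolution scaling push a real-time detector past 10 TOPS.

```python
# Back-of-envelope compute estimate. The per-frame cost and frame rate
# are illustrative assumptions, not figures from the talk.
GOPS_PER_HD_FRAME = 100  # assumed cost of one detection pass at 1080p
FPS = 30                 # typical real-time frame rate
PIXEL_SCALE_4K = 4       # 4K has roughly 4x the pixels of 1080p

hd_tops = GOPS_PER_HD_FRAME * FPS / 1000   # giga-ops/s -> tera-ops/s
uhd_tops = hd_tops * PIXEL_SCALE_4K        # naive scaling with pixel count

print(f"1080p @ 30 fps: ~{hd_tops:.1f} TOPS")   # ~3 TOPS
print(f"4K    @ 30 fps: ~{uhd_tops:.1f} TOPS")  # ~12 TOPS
```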

In this talk, Ni describes a network/weight pruning methodology that achieves a performance gain of more than 10x on Zynq UltraScale+ SoCs with very small accuracy loss. Running on Zynq UltraScale+, the pruned network achieves inference performance equivalent to running the original SSD network at 20 TOPS, while consuming less than 10 W.
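
The abstract does not spell out the pruning algorithm itself; a common baseline for this kind of network/weight pruning is magnitude-based pruning, sketched below in NumPy (the prune_by_magnitude helper is hypothetical, not Xilinx's actual tool flow). The idea is to zero out the smallest-magnitude weights, then fine-tune the network to recover accuracy; the sketch shows only the masking step.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero the smallest-magnitude weights so `sparsity` fraction is zero.

    Hypothetical helper illustrating magnitude-based pruning; not the
    methodology presented in the talk.
    """
    magnitudes = np.abs(weights).ravel()
    k = int(sparsity * magnitudes.size)
    if k == 0:
        return weights.copy()
    # The k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(magnitudes, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune a random 3x3x64x64 convolution kernel to 80% sparsity.
rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3, 64, 64)).astype(np.float32)
pruned = prune_by_magnitude(kernel, sparsity=0.8)
print(f"sparsity: {np.mean(pruned == 0):.2%}")  # ~80.00%
```

In practice, pruning like this is applied iteratively, alternating prune and fine-tune steps; the resulting sparse network is what lets a fixed hardware budget deliver throughput equivalent to a much larger dense model.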