“Utilizing Neural Networks to Validate Display Content in Mission Critical Systems,” a Presentation from VeriSilicon

Shang-Hung Lin, Vice President of Vision and Imaging Products at VeriSilicon, presents the “Utilizing Neural Networks to Validate Display Content in Mission Critical Systems” tutorial at the May 2018 Embedded Vision Summit.

Mission critical display systems in aerospace, automotive and industrial markets require validation of the content presented to the user, so that potential failures can be detected and failsafe mechanisms triggered. Traditional validation methods are based on pixel-perfect matching between expected and presented content. As user interface (UI) designs in these systems become more elaborate, traditional validation methods become obsolete and must be replaced with more robust methods that can recognize the mission critical information in a dynamic UI.
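As a minimal sketch of the traditional approach described above, the pixel-perfect check can be thought of as comparing a checksum of the rendered region against a reference computed offline. The function and constant names here (validate_region, EXPECTED_CRC, trigger_failsafe) are illustrative assumptions, not part of any specific product:

```python
# Sketch of traditional pixel-perfect display validation, assuming the
# display controller can read back the rendered region and the expected
# content is fixed at build time.
import zlib

# CRC of the reference bitmap for a critical symbol, computed offline
# (hypothetical value for illustration).
EXPECTED_CRC = 0x1A2B3C4D


def validate_region(pixels: bytes, expected_crc: int) -> bool:
    """Return True only if the rendered pixels match the reference exactly."""
    return (zlib.crc32(pixels) & 0xFFFFFFFF) == expected_crc


def on_frame(pixels: bytes) -> None:
    """Check each frame; any mismatch triggers the failsafe path."""
    if not validate_region(pixels, EXPECTED_CRC):
        trigger_failsafe()


def trigger_failsafe() -> None:
    # Placeholder: e.g. blank the display or switch to a backup renderer.
    print("Mismatch detected: engaging failsafe display mode")
```

The weakness of this scheme is exactly what the abstract points out: any legitimate variation in a dynamic UI (anti-aliasing, animation, overlays) breaks the exact match.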

In this talk, Lin explores the limitations of current content integrity checking systems and how they can be overcome by deploying neural network pattern classification in the display pipeline. He also discusses downscaling these neural networks to run efficiently in a functionally safe microcontroller environment, and the requirements imposed on such solutions by the safety standards enforced in these domains.
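The following is a minimal sketch of the general idea of classification-based content checking, not VeriSilicon's implementation: a small fully connected classifier decides whether a cropped display region contains the expected mission critical symbol. The class list, crop size, and randomly initialized weights are placeholders; in practice the parameters would come from offline training and be quantized for a microcontroller:

```python
# Sketch: tiny classifier that validates a 32x32 grayscale crop taken from
# the display pipeline, tolerating rendering variation that would defeat a
# pixel-perfect check.
import numpy as np

CLASSES = ["warning_icon", "speed_readout", "background"]

# Placeholder parameters (1024-input, 16-unit hidden layer, 3 classes).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((1024, 16)).astype(np.float32)
b1 = np.zeros(16, dtype=np.float32)
W2 = rng.standard_normal((16, 3)).astype(np.float32)
b2 = np.zeros(3, dtype=np.float32)


def classify(crop: np.ndarray) -> str:
    """Classify a 32x32 grayscale crop of the rendered frame."""
    x = crop.astype(np.float32).reshape(-1) / 255.0   # normalize pixels
    h = np.maximum(x @ W1 + b1, 0.0)                  # hidden layer, ReLU
    logits = h @ W2 + b2
    return CLASSES[int(np.argmax(logits))]


def validate_frame(crop: np.ndarray, expected: str = "warning_icon") -> bool:
    """Flag a failure if the expected mission critical symbol is not recognized."""
    return classify(crop) == expected
```

In a safety-certified deployment, a model like this would additionally need to meet the determinism, diagnostic coverage, and documentation requirements of the applicable standards, which is the downscaling and qualification challenge the talk addresses.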

