
“AI Reliability Against Adversarial Inputs,” a Presentation from Intel

Gokcen Cilingir, AI Software Architect, and Li Chen, Data Scientist and Research Scientist, both at Intel, present the “AI Reliability Against Adversarial Inputs” tutorial at the May 2019 Embedded Vision Summit.

As artificial intelligence solutions become ubiquitous, the security and reliability of AI algorithms are becoming an important consideration and a key differentiator for both solution providers and end users. AI solutions, especially those based on deep learning, are vulnerable to adversarial inputs, which can cause inconsistent and faulty system responses. Because adversarial inputs are intentionally designed to cause an AI solution to make mistakes, they are a form of security threat.
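
The abstract does not name specific attack techniques, but a classic illustration of such an input is the fast gradient sign method (FGSM), which perturbs an image in the direction that most increases the classifier’s loss. Below is a minimal PyTorch sketch of that idea; the classifier model, the [0, 1] pixel range, and the epsilon value are illustrative assumptions, not details from the presentation.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Craft an adversarial version of input batch x (true labels y).
        # Assumes model is a classifier and x holds pixels in [0, 1].
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step each pixel in the direction that increases the loss most.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Clamp back to the valid pixel range.
        return x_adv.clamp(0.0, 1.0).detach()

A perturbation of this size is typically imperceptible to a human viewer, yet it can flip the model’s prediction, which is what makes such inputs an effective security threat.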

Although security-critical functions such as face, voice or fingerprint login are the most obvious candidates for robustness against adversarial threats, many other AI solutions also benefit from such robustness, which improves reliability and thereby enhances user experience and trust. In this presentation, Cilingir and Chen explore selected adversarial machine learning techniques and principles from the point of view of enhancing the reliability of AI-based solutions.
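
One widely studied defense principle in this area is adversarial training: generating adversarial inputs on the fly and training the model to classify them correctly. The sketch below, which reuses the hypothetical fgsm_attack helper above, illustrates that general principle only; it is not a description of the specific techniques Cilingir and Chen cover.

    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
        # One optimization step on FGSM-perturbed inputs.
        model.train()
        x_adv = fgsm_attack(model, x, y, epsilon)  # helper defined above
        optimizer.zero_grad()  # clear grads accumulated by the attack
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()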

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
