AI Hitting Headwinds in 2018

This market research report was originally published on Tractica's website. It is reprinted here with the permission of Tractica.

The year 2018 has been a tough one so far for AI. If AI had a public relations (PR) or brand consultant, they would have a tough job on their hands right now. Most of us are tired of the Terminator and Skynet references surrounding AI, but the threats now emerging are much more worrying and, unfortunately, grounded in reality. Treating AI as if it were a company or brand, here is a list of the key challenges that have surfaced in the last few months and are either having a direct impact on the real world or are a major cause for concern in the immediate future.

  1. Technological Bottlenecks: Geoffrey Hinton, widely regarded as one of the fathers of AI and deep learning, recently said about his own creation, “My view is throw it all away and start again.” Hinton is “deeply suspicious” of backpropagation, the key mechanism used throughout deep learning to update the weights of a neural network based on training data (a minimal sketch follows this list). More generally, there is a feeling that no new AI breakthroughs have occurred beyond deep learning and reinforcement learning, and that within those areas we are hitting performance bottlenecks. Gary Marcus, another academic who has been highly critical of deep learning, goes a step further, arguing that the current obsession with deep learning amounts to “irrational exuberance” that could lead us into another AI winter. The worry seems to be growing as seasoned data scientists who deploy algorithms to solve business problems have started to raise alarms.
  2. Privacy: Facebook has never been a poster child for data privacy, and its recent struggles around the Cambridge Analytica scandal show that Mark Zuckerberg faces tough challenges concerning how Facebook uses customer data and, more importantly, how its users perceive Facebook on privacy and ethics. Most of us have been aware that Facebook sells our data to advertisers; we are less knowledgeable, or in some ways willfully oblivious, about the granularity of the data being collected and shared, who is using it, and for what purposes. AI is at the center of this because Facebook and Cambridge Analytica have been running their own AI algorithms to build detailed profiles of users for advertising or political targeting. The upcoming General Data Protection Regulation (GDPR) in Europe will partially lift the veil on these questions and force companies that feed customer data into AI algorithms to be more transparent about their data practices. It is not clear whether GDPR will stop people from giving away their data, but it will make them think twice before accepting data policies. Anyone who doubts how much of their data already exists in the public domain should download their own Facebook profile data and prepare to be unpleasantly surprised.
  3. Security: The ease with which one can mount adversarial attacks on vision systems is a major cause for worry, especially considering the rate at which AI vision systems are being deployed (a sketch of one such attack follows this list). As was explored in a recent Tractica blog, imagine an adversarial attack on a fleet of autonomous cars, city surveillance systems, e-commerce warehouse picking systems, or military drone fleets, all of which use AI vision systems. The fact that most deep learning models are black boxes, and that changing a single pixel in the input can throw off a complex neural network, speaks to the vulnerabilities of, and our incomplete understanding of, AI today. The security issue is likely to get worse, and it goes beyond corruption or manipulation of data. If AI systems are making decisions, and companies are trusting AI to make those decisions, the fact that someone can sabotage that AI without the owner knowing raises a key question: can we trust the AI in the first place?
  4. Safety: The recent death of a pedestrian struck by a self-driving Uber car in Arizona raises the AI safety issue. Even though there was a safety driver, the car failed to spot the pedestrian crossing the road, and it is not clear whether the safety driver or the car's systems were at fault. Once AI systems start making life-and-death decisions, as is the case for self-driving cars, every such incident will be scrutinized: even if AI-caused fatalities remain a much smaller proportion of overall road deaths, one wrong AI decision will be weighed as the equivalent of hundreds or even thousands of human decisions. AI will be judged differently, and any company whose faulty AI system causes human fatalities is likely to face harsh consequences, up to and including the closure of its business.
  5. Authenticity: A specific type of AI called a generative adversarial network (GAN) is becoming so good at creating images that its output can pass as authentic. A GAN pits two networks against each other: a generator tries to convince a discriminator that its fabricated output is real (see the GAN sketch after this list). DeepFakes is the first prominent instance of GANs in the public domain, allowing anyone to create fake videos of themselves or their favorite celebrities. The same GAN techniques have also been used to artificially generate the voices of famous people like Barack Obama and Donald Trump. We are very close to the point where any and all content will need to be put through an authenticity test, and it will be hard to tell authentic from fake content. We have already crossed that bridge with fake news; things will get worse with fabricated audio and video. Mix the two together and we have a very dangerous cocktail.
  6. Geopolitical Risk: One of the most disturbing recent reports on AI and defense is “The Race for AI,” a collection of essays put together by Defense One. It covers AI enabling the targeting of human faces at a range of 200 km; the U.S. National Security Agency (NSA) moving toward using AI for cyber offense, rather than just defense; and Chinese commanders soon having AI help them make battlefield decisions. Overall, the piece points to the threat of Russia and China overtaking the United States in the AI arms race, with a specific focus on geopolitical instability. For anyone who mocks Elon Musk or Nick Bostrom for raising alarms about AI as an existential threat, this is the document to read.
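
To ground the first item, here is a minimal sketch of backpropagation, the weight-update mechanism Hinton is suspicious of, applied to a single-layer network; the toy data, model size, and learning rate are illustrative assumptions, not drawn from any system discussed above.

```python
import numpy as np

# Toy training data: 100 samples with 3 features, labeled by a simple rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X.sum(axis=1) > 0).astype(float)

w = rng.normal(size=3)   # weights to be learned
b = 0.0                  # bias
lr = 0.1                 # learning rate

for _ in range(500):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))       # forward pass (sigmoid)
    grad_z = p - y                     # dLoss/dz for cross-entropy loss
    w -= lr * (X.T @ grad_z) / len(X)  # backpropagated gradient updates the weights
    b -= lr * grad_z.mean()
```

Deep learning stacks many such layers, but every layer's weights are updated by this same propagate-the-error-backward rule, which is exactly the mechanism Hinton proposes to throw away.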
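For the security item, here is a sketch of the fast gradient sign method (FGSM), one common recipe for crafting adversarial inputs; the article does not name a specific attack, so FGSM is chosen purely for illustration, reusing the logistic model (w, b) and data (X, y) trained in the sketch above.

```python
def fgsm_perturb(x, y_true, w, b, epsilon=0.05):
    """Nudge every input feature slightly in the direction that increases the loss."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model's current prediction
    grad_x = (p - y_true) * w               # dLoss/dx for the logistic model
    return x + epsilon * np.sign(grad_x)    # small perturbation, outsized effect

x_adv = fgsm_perturb(X[0], y[0], w, b)      # nearly indistinguishable from X[0]
```

Against a deep vision network the same idea applies to pixels: a perturbation too small for a human to notice can flip the model's output, which is why the black-box nature of these systems is so worrying.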
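For the authenticity item, a toy GAN on one-dimensional data shows the adversarial game in miniature: the generator learns to produce samples the discriminator scores as authentic. All distributions and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
real_mean, real_std = 4.0, 0.5   # the "authentic" data distribution
w, b = rng.normal(), 0.0         # discriminator: D(x) = sigmoid(w*x + b)
a, c = rng.normal(), 0.0         # generator: G(z) = a*z + c
lr, n = 0.05, 64

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for _ in range(3000):
    real = rng.normal(real_mean, real_std, n)
    fake = a * rng.normal(size=n) + c

    # Discriminator step: learn to score real samples as 1 and fakes as 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        grad = sigmoid(w * x + b) - label    # dLoss/d(logit)
        w -= lr * (grad * x).mean()
        b -= lr * grad.mean()

    # Generator step: shift G so the discriminator scores its fakes as real.
    z = rng.normal(size=n)
    fake = a * z + c
    g_grad = -(1.0 - sigmoid(w * fake + b)) * w  # dLoss/dG(z) for -log D(fake)
    a -= lr * (g_grad * z).mean()
    c -= lr * g_grad.mean()

print(f"fakes now center near {c:.2f}; real data centers at {real_mean}")
```

Scaled up from one number to millions of pixels, this same two-player game is what lets DeepFakes-style systems produce images and video that pass casual inspection.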

Shift in How the Enterprise Sector Adopts AI

We have not even explored issues like AI bias and interpretability, which have yet to have a major real-world impact but are very likely to surface soon and cause distress to both end users and AI companies.

So, what impact do the above issues have on the adoption of AI in the enterprise sector? For the most part, based on Tractica’s experience, the adoption of AI will continue and will accelerate. The value that AI brings to the table in terms of deeper insights, improved efficiencies, and lower costs is hard to beat. The rapid improvement in AI vision and language capabilities will create new business models and improved business processes. In general, most people underestimate the value of AI in the software development context: one no longer needs to rely on humans coding the logic or thinking through the code line by line, because AI brings a much more powerful way of building software systems in which the logic is created automatically from the data fed into the algorithm (a sketch of this contrast follows below). This is a profound shift in the way software and technology will be built and architected, and there is no turning back.
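As a minimal illustration of that shift, compare a hand-coded rule with logic induced from examples; the spam-filter framing, the tiny dataset, and the use of scikit-learn are all illustrative assumptions.

```python
from sklearn.linear_model import LogisticRegression

# Traditional software: a human thinks through the logic line by line.
def is_spam_handcoded(text):
    return "free money" in text.lower()

# AI-built software: the logic is induced from labeled examples.
texts = ["free money now", "meeting at noon", "claim free money", "lunch at 1?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam
features = [[t.lower().count("free"), t.lower().count("money")] for t in texts]

model = LogisticRegression().fit(features, labels)
print(model.predict([[1, 1]]))  # classifies an unseen "free money" message
```

The hand-coded rule only covers cases its author anticipated; the learned model's behavior comes entirely from the data, which is both the power of the approach and the root of several of the risks listed above.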

Thoroughly Evaluating the Risks before Implementing AI Strategies

What is likely to change is the evaluation of the risks associated with AI. Any company implementing an AI strategy will need to think carefully through the downsides and risks of AI across all of the issues described above, some more than others. An automotive company rolling out a self-driving car fleet will need to think through safety and security much more thoroughly. Consumer internet companies will need to pay closer attention to their customer data privacy policies and carefully review data security, while making sure they are not overinvested in one technology area (e.g., deep learning). Healthcare companies will also need to review privacy, security, and human safety issues and have safeguards in place. Governments will need to assess their own AI development strategies, think closely about societal implications, and weigh the geopolitical impact, including reviewing the AI plans of their adversaries. Investors and venture capital (VC) firms will be among the most affected, and some of the crazy money flowing into AI is likely to ebb in 2018 and 2019 as seasoned investors come to better understand the risks around AI.

Aditya Kaul
Research Director, Tractica
