
Artificial Intelligence Roundup


This market research report was originally published at Tractica's website. It is reprinted here with the permission of Tractica.

The 2018 Consumer Electronics Show (CES) was dominated by the central theme of artificial intelligence (AI) becoming much more visible and tangible. AI had a dedicated marketplace and the impact of AI could be felt across the show floor.

Ironically, the biggest news coming out of this year’s “AI-themed” CES was the power blackout caused by the deluge of rain that hit Las Vegas during the show’s opening days. The blackout drove home the point that this is still a gadget show, and without electricity, AI is powerless.

In the consumer context, AI is making rapid advances in the human perception capabilities of language and vision. We are at the very early stages of integrating “human-like” perception into robots, mobile phones, wearables, drones, cars, and other consumer electronics. If you were searching for the elusive killer device that would revolutionize consumer electronics, unfortunately, there was not much on offer. Most of the robots, Internet of Things (IoT) devices, wearables, and smart home and consumer electronics devices felt gimmicky, with tons of marketing dollars spent on making devices look smarter than they actually are.

The “real business” of AI was happening behind closed doors, as the consumer electronics ecosystem starts to integrate some of these AI-enabled human perception capabilities. Voice assistants are the first wave of AI-enabled products, with Amazon Alexa and Google Assistant making their presence felt across the show. While Google had a large booth with giant gumball machines connected to Google Assistant and a large billboard presence throughout Las Vegas, Amazon preferred a private meeting area at the Sands Expo and Convention Center focused on expanding its Alexa ecosystem and partnerships.

In Tractica’s view, CES 2018 was more about the ecosystem evolving and thinking aloud about how AI will be infused into our cars, homes, toys, TVs, and washing machines, and less about the actual devices and gadgets. Below are some themes that Tractica noticed across the show.

The Artificial Intelligence “Recognition Stack” Is in the Early Stages of Maturity

Many companies, both large and small, were offering voice recognition, object recognition, face recognition, and emotion recognition, which together can be called the “recognition stack.” Not surprisingly, many of these companies were Chinese, which led many to call this year’s CES the Chinese electronics show. Today, Amazon and Google are mostly focused on the voice recognition piece, possibly linked to device sales of the Echo and Google Home, but Chinese companies have jumped on the gap in the market for the vision stack, allowing robots, cars, and home cameras to become intelligent by recognizing faces and objects.

Chinese companies are also going after voice translation, especially from Chinese to other languages. There were many real-time voice translation earbuds from unknown Chinese companies, as well as handheld devices like the Yibei translator from iFlytek. This is a good sign, suggesting that the market for the recognition stack will soon become commoditized and partly standardized, much like audio and video codecs. Soon, all smartphones will have real-time voice translation with local, offline processing of languages, making language barriers a thing of the past. As the AI recognition stack matures, it will bring down the cost of embedding AI into devices or mobile apps and will allow the better-quality recognition stacks to surface as the market consolidates. While Google controls the English and Western European language translation stack, it has largely left local Chinese companies to innovate in the Chinese language stack.
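To make the idea of local, offline language processing concrete, the short sketch below runs an open-source Chinese-to-English translation model entirely on the local machine once the weights have been downloaded. The model (Helsinki-NLP/opus-mt-zh-en) and the Hugging Face transformers library are illustrative choices only; they are not the stacks used by iFlytek, Google, or the earbud vendors mentioned above.

```python
# A minimal sketch of local Chinese-to-English translation with an open
# MarianMT model. After the initial download, inference runs offline.
# Illustrative only; not any vendor's production translation stack.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zh-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

def translate(text: str) -> str:
    # Tokenize, run the seq2seq model locally, and decode the result.
    batch = tokenizer([text], return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return tokenizer.decode(generated[0], skip_special_tokens=True)

print(translate("今天天气很好"))  # prints an English translation of the input
```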

Chinese Artificial Intelligence Platform Ecosystems Are Racing Ahead

The Baidu CES launch featured its Apollo and DuerOS AI platforms, which already have major traction in China. Apollo, its autonomous car platform, already has more than 90 partners and provides a vehicle reference platform, hardware reference platform, software platform, and cloud platform as part of the Apollo stack. Since its launch in 2017, Apollo has already enabled mass production of a Level 3 car, and with the Apollo 2.0 release, autonomous driving on simple urban roads will be possible. Baidu is also focused on producing mini-buses, shuttles, and trucks, some of which are already in mass production. By 2020, it hopes to launch the first Level 4 autonomous car, most likely in commercial production. DuerOS is the voice and vision stack for robots, smartphones, TVs, wearables, and other smart home devices; more than 200 partners are already developing solutions using its technology.

At CES 2018, Baidu had an impressive range of devices on show, including some striking designs from its in-house device group, Raven. The underlying AI platform that Baidu has built is called the Baidu Brain, which powers both DuerOS and Apollo with perceptive and cognitive capabilities. It is clear that Baidu is more focused on the commercial scaling of AI than on chasing the dream of being the first company to achieve artificial general intelligence (AGI), something that Google is seeking to do. Baidu’s proximity to the Chinese electronics manufacturing ecosystem, and now the burgeoning Chinese automaker market, gives it unique scale and opportunity. Baidu’s AI ambitions are also fueled by a new generation of Chinese consumers who love gadgets and are happy to shell out extra money for an AI-enabled feature. More importantly, Baidu has access to one of the largest user databases in the world, with over a billion users. Baidu is clearly in a strong position in the Chinese market, but based on its move toward open platforms, it aims to compete directly with Google, Microsoft, and Amazon in the AI platform wars.

iFlytek is another Chinese AI solutions provider competing with Baidu over the recognition stack for the car, robot, and smart home markets. iFlytek claims to have the leading market share for voice assistants in cars, ahead of Baidu, with more than 10 million cars having that capability today. iFlytek was also demoing face recognition and emotion recognition cameras inside the car, embedded with a full hardware and software stack, which is likely to gain faster traction with Chinese automakers than with their European and North American counterparts. Eyeris Technologies, a North American AI vision stack provider, was also demoing object recognition, facial recognition, and emotion recognition inside cars and is known to have received a lot of interest from European and North American auto original equipment manufacturers (OEMs), which are likely to start trialing these systems in the coming years. Comparing the Chinese AI stack providers with their Western counterparts across the automobile and robotics sectors, however, Chinese companies like Baidu and iFlytek appear to have more mature solutions, ready with development boards and a range of hardware options that provide full hardware-software integration. Rokid is another Chinese robot company that had a full-stack voice AI developer kit on display.

Artificial Intelligence as a “Marketing Tool” versus an “Enabling Tool”

As one walked the show floor, there were two contrasting approaches to AI as it starts being built into consumer electronics. The first was using AI as a marketing tool, with AI mentioned prominently in branding and marketing. Many of this year’s TVs from companies like Samsung and LG used AI this way. Samsung, for example, touted AI’s ability to improve picture quality, essentially upscaling pixel density by taking a 720p video feed and upscaling it to 8K, which was quite impressive and has major implications for TV display technologies.
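As a rough illustration of what “AI upscaling” means under the hood, the sketch below shows a tiny ESPCN-style super-resolution network: convolutions predict extra sub-pixel detail, and a pixel-shuffle layer rearranges it into a higher-resolution frame. This is a generic textbook architecture in PyTorch, not Samsung’s actual upscaler; the 6x factor simply matches the 720p-to-8K claim.

```python
# A tiny ESPCN-style super-resolution block in PyTorch. Convolutions predict
# sub-pixel detail; PixelShuffle rearranges channels into a larger image.
# Purely illustrative; this is not Samsung's upscaler.
import torch
import torch.nn as nn

class TinySuperRes(nn.Module):
    def __init__(self, scale: int = 6):  # 6x takes 1280x720 to 7680x4320 (8K)
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)  # (B, 3*s*s, H, W) -> (B, 3, s*H, s*W)

    def forward(self, x):
        return self.shuffle(self.body(x))

# A small crop of a 720p frame, kept small so the example runs quickly.
low_res = torch.rand(1, 3, 72, 128)
high_res = TinySuperRes(scale=6)(low_res)
print(high_res.shape)  # torch.Size([1, 3, 432, 768])
```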

The second approach was using AI as an enabler, under the hood. Phyn (Belkin) took this route with its smart water assistant, which detects leaks: AI was not mentioned anywhere in Phyn’s marketing materials or booth, but after speaking with the engineers, it turns out that Phyn uses machine learning to understand the intricacies of water flow and the types of leaks. Similar contrasts were visible in the marketing approaches of Google, Baidu, and Microsoft, which claim to be “AI first” companies, versus Apple, which shies away from mentioning AI in its marketing or products but silently uses AI to improve the quality of photos or Siri. As AI increases its footprint across consumer electronics, it will be interesting to see which of the two approaches becomes more popular and, more importantly, what resonates with end consumers.
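Phyn has not published how its models work, but the general pattern behind this kind of leak detection, learning what normal water use looks like and flagging flow that does not fit, is easy to sketch. The example below uses scikit-learn’s IsolationForest on two made-up features (flow rate and event duration); the features, numbers, and threshold are illustrative assumptions, not Phyn’s design.

```python
# A minimal leak-detection sketch: learn "normal" water-flow patterns and
# flag anomalies with an IsolationForest. Phyn's actual models are not
# public; features and values here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated training data: [flow rate (L/min), duration (s)] of normal
# events such as faucet use, showers, and dishwasher cycles.
normal_events = np.column_stack([
    rng.normal(loc=6.0, scale=2.0, size=500),     # typical flow rates
    rng.normal(loc=120.0, scale=40.0, size=500),  # typical durations
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_events)

# A slow, never-ending trickle looks nothing like the training data.
suspected_leak = np.array([[0.3, 7200.0]])  # 0.3 L/min for two hours
print(model.predict(suspected_leak))        # expected: [-1], i.e. anomalous
```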

Geofenced (and Boring) Robot Taxis and Shuttles Are Here

Last year, CES doubled as a preview of the Detroit Auto Show, with the North Halls showcasing the latest in autonomous car technology. This year, the autonomous transport theme continued, but felt bigger and more mature. Although everyone is looking forward to 2020, when we are expected to see some of the first Level 4 (fully autonomous) cars hit the road, the focus this year was on the near term. Autonomous taxis and shuttles are seen as more practical, near-term implementations for getting autonomous technology on the road today. Taxis and shuttles can be geofenced to a specific neighborhood or city, allowing them to be regulated. Navya, a French autonomous transport company, is one of the pioneers in this space, with approximately 60 shuttles already operating across sites in Europe and the United States.
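A geofence itself is a simple construct: the vehicle periodically checks whether its GPS position falls inside a polygon describing its permitted service area. The sketch below implements that check with standard ray casting; the coordinates are hypothetical and do not correspond to any operator’s actual deployment.

```python
# A minimal geofence check using ray casting: is the vehicle's (lat, lon)
# inside a permitted service-area polygon? Coordinates are hypothetical.
from typing import List, Tuple

def inside_geofence(point: Tuple[float, float],
                    polygon: List[Tuple[float, float]]) -> bool:
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count how many polygon edges a ray from the point crosses.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical rectangular service area near downtown Las Vegas.
service_area = [(36.165, -115.155), (36.165, -115.135),
                (36.175, -115.135), (36.175, -115.155)]
print(inside_geofence((36.170, -115.145), service_area))  # True
print(inside_geofence((36.200, -115.145), service_area))  # False
```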

At CES, Navya was demoing its fully autonomous Autonom Cab and offering rides. Although the cab was operating in an enclosed parking lot, moving along a fixed loop, it did offer a glimpse of what it would be like to ride in a robot taxi from the future. It was equipped with a mobile app-controlled entertainment system and location information on giant screens. The Autonom Cab seats six people, with two sets of three facing each other, making it feel like a roomier, more futuristic version of the London black cab, which seats five. Navya also had its Autonom Shuttle offering rides in the Fremont district of Las Vegas throughout the show. Navya is also partnering with Keolis to roll out autonomous cab and shuttle technology across cities; Keolis is one of the largest providers of public transport solutions in France and has a growing presence in Europe and North America.

Ride-hailing company Lyft and automotive supplier Aptiv were also offering a self-driving taxi service in Las Vegas during the show. Although this was a Level 3 autonomous car, with a safety driver behind the wheel, it did venture out onto the main roads and offered passengers rides to 20 preset destinations. The consensus among people who had experienced a robot taxi firsthand was that it felt like having your grandmother drive you, with the main complaint being that the AI was programmed as an overcautious driver. This raised an interesting question: could there be a “cautiously aggressive” version of the robot taxi in the future, one that gets you from A to B in the style of a New York or Mumbai cabbie, but without hitting anyone? It is hard to imagine regulators green-lighting this! In other words, get used to boring rides in robot taxis and shuttles. This also explains why we need better entertainment systems in autonomous cars.

Embedded Artificial Intelligence Is the New Battleground for Chipset Companies

As Tractica has covered before, AI is moving to the edge, and embedded AI is where most chip companies are now trying to compete. Embedded AI hardware was all over the show, from robots to smart speakers, security cameras, drones, and autonomous cars. The idea is that, over time, embedded AI hardware will allow devices to process vision, speech, or other sensor data locally, at the edge, enabling real-time AI capabilities. Latency, privacy, and security are typically cited as the reasons for processing data at the edge.

Intel was showcasing its Mobileye acquisition, which follows somewhat of a hybrid processing approach: it crowdsources road and mapping information from its fleet of 2 million Mobileye-equipped cars, processes part of the information locally, and updates the cloud with new models and data, which the rest of the Mobileye fleet can then use. This sounds like the federated learning approach that Google announced last year and fits right into the embedded AI trend. Qualcomm was presenting its Neural Processing Engine SDK for Snapdragon, which is already being used in mobile phones today. However, unlike Intel, which has a much broader proposition with Altera, Mobileye, and Movidius powering different applications and power budgets, Qualcomm seems to be lagging in this race to the AI edge. Digital signal processing (DSP) intellectual property (IP) provider CEVA announced its family of AI processors for deep learning at the edge, covering a broad range of applications, including the IoT, smartphones, surveillance, automotive, robotics, medical, and industrial segments, with performance ranging from 2 TOPS to 12.5 TOPS. There were also some newer players, like NovuMind and Efinix (field programmable gate array [FPGA] at the edge), focused on the AI edge opportunity.
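The hybrid pattern described above, where edge devices learn locally and only model updates flow back to the cloud, maps closely onto federated averaging. The sketch below shows that pattern in plain NumPy with a toy least-squares model: each “car” computes a local update on its own data, and the “cloud” averages the updates into a new global model for the next round. It is a generic illustration, not Mobileye’s or Google’s actual implementation.

```python
# A minimal federated-averaging sketch: each edge node (e.g., a car) fits a
# local update to its own data; the cloud averages the updates into a new
# global model and pushes it back to the fleet. Generic illustration only.
import numpy as np

rng = np.random.default_rng(42)
true_weights = np.array([2.0, -1.0])          # the pattern every node observes

def local_update(global_w, n_samples=200, lr=0.1, steps=50):
    """One edge node: gradient steps on locally collected data only."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_weights + rng.normal(scale=0.1, size=n_samples)
    w = global_w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n_samples   # least-squares gradient
        w -= lr * grad
    return w                                       # only weights leave the node

global_w = np.zeros(2)
for round_num in range(5):                          # cloud aggregation rounds
    updates = [local_update(global_w) for _ in range(10)]  # 10 edge nodes
    global_w = np.mean(updates, axis=0)             # federated averaging
    print(round_num, global_w)                      # converges toward [2, -1]
```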

Aditya Kaul
Research Director, Tractica
