
New Architectures Emerge in the AI Chipset Race

This market research report was originally published at Tractica's website. It is reprinted here with the permission of Tractica.

As the AI chipset market becomes crowded, many AI companies have started creating solutions that cater to niche markets. Requirements for chipset power, performance, software, and other attributes vary greatly depending on the application. For instance, the Internet of Things (IoT) edge market needs ultra-low power (in milliwatts), mobile phones can work well with power consumption of up to 1 W, drones can consume a bit more, automotive can range from 10 W to 30 W, and so on.

Today’s two most prominent architectures are the central processing unit (CPU) and the graphics processing unit (GPU). Both have been around for decades and have been extremely successful. While the CPU is a general-purpose compute architecture, the GPU was developed with graphics in mind. When it comes to AI, both have their own limitations, and that is where startups are trying to innovate.

Academia has proposed many architectural solutions to the AI acceleration problem over the years. Each architecture has its own advantages and disadvantages, and some of these trade-offs ultimately come down to the physics of the semiconductor process node. The most popular architecture being deployed by application-specific integrated circuit (ASIC) companies today involves a large array of processing elements that minimizes memory access, thus increasing compute throughput and reducing power.
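
The data-reuse idea behind these processing-element arrays can be illustrated with a minimal software sketch (a conceptual illustration, not any vendor's actual design): a tiled matrix multiply fetches each operand block from slow memory once and reuses it for many multiply-accumulates, so memory traffic grows far more slowly than compute.

```python
import numpy as np

def tiled_matmul(a, b, tile=4):
    """Blocked matrix multiply: each tile of A and B is fetched once
    per block step and reused for tile*tile*tile multiply-accumulates,
    mimicking how an array of processing elements cuts memory traffic."""
    n = a.shape[0]
    c = np.zeros((n, n))
    loads = 0  # count tile fetches from "external" memory
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            acc = np.zeros((tile, tile))
            for k in range(0, n, tile):
                a_tile = a[i:i+tile, k:k+tile]  # one fetch, ...
                b_tile = b[k:k+tile, j:j+tile]  # ... many MACs of reuse
                loads += 2
                acc += a_tile @ b_tile
            c[i:i+tile, j:j+tile] = acc
    return c, loads

n = 16
a, b = np.random.rand(n, n), np.random.rand(n, n)
c, loads = tiled_matmul(a, b)
assert np.allclose(c, a @ b)
# A naive element-by-element schedule touches operands on the order of
# 2*n^3 times; the tiled schedule fetches only (n/tile)^3 * 2 tiles.
```

The same reuse principle underlies the systolic- and dataflow-style arrays these ASIC companies build in silicon.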

However, neural networks (NNs) are becoming increasingly complex and application specific. The number of weights and the operations per pass are increasing, as is the level of optimization required. Companies arriving later to the market have chosen to innovate at the architecture level to take network acceleration to the next level. These approaches represent some of the fundamental ways to approach computing and include:

  • Optical Computing: In optical computing, light is used to perform matrix multiplication, rather than digital multiply-accumulate (MAC) units. The advantage of this approach is that the multiplication is carried out in almost zero time, thus increasing overall performance. The downside is that memory is still required to store the result, which may limit performance. Two companies that came out of the Massachusetts Institute of Technology (MIT), Lightmatter and Lightelligence, are taking this approach and both have received funding.
  • Analog: In analog computing, a similar approach is taken and the matrix multiplication is carried out in an analog circuit: in essence, two signals are multiplied using a transistor-based analog amplifier. The power consumption of analog multiplication is much lower than that of its digital counterpart. Analog multiplication results are not always accurate; however, NNs are notoriously good at producing good results at lower bit widths, so the argument is that analog multiplication will perform very well for smaller NNs, where the error will not compound across many layers. Irvine, California-based Syntiant is taking this approach.
  • Processing in Memory (PIM): PIM removes the cost of transferring data from random-access memory (RAM) to the ASIC. In essence, the PIM architecture takes an array of Flash memory and inserts compute elements in between. The weights are stored permanently in the Flash, and the incoming signal simply flows from input to output. PIM works very well for inference. Startups such as Austin, Texas-based Mythic and Gyrfalcon are taking this approach.
  • Neuromorphic: Neuromorphic chips try to simulate the behavior of the brain by mimicking neurons and synapses. Neuromorphic computing has been around for some time and can be implemented digitally as well as in analog. Several large companies, such as IBM and Intel, have announced neuromorphic chipsets, while many startups are also coming online.
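
The low-precision argument made for analog multiplication above can be sketched in software: quantizing a small network's weights to a few bits (a crude stand-in for analog imprecision, not any vendor's actual circuit) often changes its output only modestly. The network and quantizer below are illustrative assumptions, not taken from any of the companies named.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bits):
    """Uniformly quantize x to the given bit width over its own range,
    a crude software stand-in for imprecise analog multiplication."""
    levels = 2 ** bits - 1
    lo, hi = x.min(), x.max()
    step = (hi - lo) / levels
    return lo + np.round((x - lo) / step) * step

# A tiny random two-layer network (illustrative only)
w1 = rng.normal(size=(64, 32))
w2 = rng.normal(size=(32, 10))
x = rng.normal(size=(1, 64))

def forward(w1, w2, x):
    h = np.maximum(x @ w1, 0.0)  # ReLU hidden layer
    return h @ w2

exact = forward(w1, w2, x)
approx = forward(quantize(w1, 4), quantize(w2, 4), x)

# Relative output error typically stays modest even at 4-bit weights,
# because individual quantization errors partially average out.
rel_err = np.abs(exact - approx).max() / np.abs(exact).max()
print(f"max relative error at 4 bits: {rel_err:.3f}")
```

In a deeper network the errors would have more layers to compound across, which is why the argument above is made for smaller NNs.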

There are many pros and cons to each approach, and it remains to be seen how the transition from academia to industry pans out. Other than neuromorphic chipsets, none of these chipsets has been released, and neuromorphic chipsets have had only limited success. Of course, all of this hardware needs good software support, and these companies will have to innovate on that front when they go to market.

However, one thing is clear: the need for specialization goes beyond what is available today on traditional digital platforms, and the market seems to have recognized this. We are still a few years away from seeing these products go into production as the transition from academia to industry takes place. In the short term, the current architectures will continue to sell well, but given the compute-intensive nature of AI applications, we could see some fundamental changes to compute architecture in the long term.

Anand Joshi
Principal Analyst, Tractica