Demand for deep-learning accelerator (DLA) chips, also known as artificial intelligence (AI) processors, continues to be strong in spite of the pandemic. Deep-learning applications are being deployed throughout industry and can be found in everything from data centers to self-driving cars to edge and embedded (IoT) devices. New entrants continue to emerge, challenging large incumbent chip vendors in a market that has now topped $7 billion, growing 58% over 2019. A new report from The Linley Group, “A Guide to Processors for Deep Learning,” provides clear guidance on this dynamic market with concise analysis of deep-learning accelerators for artificial intelligence, neural networks, and vision processing for inference and training.
AI acceleration has proliferated across a wide variety of deep-learning applications. Large cloud-service providers use deep learning to power web services such as language translation and to refine search results. In client devices, it can be found in smart speakers, high-end smartphones, voice assistants, smart doorbells, and smart cameras. The technology is also critical to the development of advanced driver assistance systems (ADAS) and autonomous vehicles. As AI acceleration proliferates and demand increases, it finds its way into lower-cost products. In fact, edge devices have emerged as the highest-volume application for AI-enhanced processors.
“This rapidly growing market has attracted many new companies eager to develop AI chips, in particular for the embedded market,” said Linley Gwennap, principal analyst with The Linley Group. “Because of the rapid evolution in this field, comparing capabilities across the broad landscape of AI chips is extremely complicated. We’ve carefully researched and analyzed the various architectures and products to determine which are best suited for each application and who we think will emerge as the winners in each area.”
The comprehensive report covers more than 60 vendors of AI chips. It provides detailed technical analysis of deep-learning accelerator chips from AMD, Cambricon, Cerebras, Graphcore, Groq, Intel (including former Altera, Habana, Mobileye, and Movidius), Mythic, Nvidia (including Tegra and Tesla), NXP, and Xilinx. Other chapters cover Google’s TPU family of ASICs and Tesla’s autonomous-driving ASIC. It also includes shorter profiles of numerous other vendors developing AI chips of all sorts, including Amazon, Brainchip, Gyrfalcon, Hailo, Huawei, Lattice, Qualcomm, Synaptics, and Texas Instruments.
The report includes head-to-head technical comparisons in each product category, as well as extensive technical and market overviews to help those coming up to speed on this complex technology. Those seeking a quantitative look at the market for deep-learning accelerators will find market size and forecasts in three market segments: data center, automotive, and embedded.