Enflame Technology Announces CloudBlazer with DTU Chip on GLOBALFOUNDRIES 12LP FinFET Platform for Data Center Training

In conjunction with the launch of Enflame’s CloudBlazer T10, Enflame Technology and GLOBALFOUNDRIES® (GF®) today announced a new high-performing deep learning accelerator solution for data center training. Designed to accelerate deep learning deployment, the accelerator’s core Deep Thinking Unit (DTU) is based on GF’s 12LP™ FinFET platform with 2.5D packaging to deliver fast, power-efficient data processing for cloud-based AI training platforms.

Enflame’s DTU leverages GF’s 12LP FinFET platform, packing more than 14 billion transistors in an advanced 2.5D package, and supports a PCIe 4.0 interface and the Enflame Smart Link high-speed interconnect. The AI accelerator, optimized for large-scale cluster training in data centers to provide high performance and power efficiency, supports CNN/RNN and other network models and a broad range of data types (FP32/FP16/BF16/Int8/Int16/Int32, etc.).

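The release lists BF16 among the supported data types but does not describe the DTU’s software stack, so the following is only a minimal, self-contained Python sketch of why BF16 is attractive for training: it keeps FP32’s exponent range while truncating the mantissa, so large values are far less prone to overflow than in FP16. The helper names here are illustrative assumptions, not part of any Enflame or GF API.

    # Illustrative sketch only: BF16 is the upper 16 bits of an FP32 bit pattern.
    import struct

    def float_to_bf16_bits(x: float) -> int:
        """Truncate an FP32 value to a BF16 bit pattern (simple truncation, no rounding)."""
        fp32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
        return fp32_bits >> 16

    def bf16_bits_to_float(bits: int) -> float:
        """Expand a BF16 bit pattern back into an FP32 value."""
        return struct.unpack(">f", struct.pack(">I", bits << 16))[0]

    value = 3.14159265
    approx = bf16_bits_to_float(float_to_bf16_bits(value))
    print(f"FP32: {value!r}  ->  BF16 round-trip: {approx!r}")
    # BF16 keeps roughly 3 significant decimal digits but the full FP32 exponent range.
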
Based on a reconfigurable chip design approach, Enflame’s DTU computing core features 32 scalable intelligent processors (SIPs), organized into four scalable intelligent clusters (SICs) of eight SIPs each. The enhanced technology integrates high bandwidth memory (HBM2) through 2.5D packaging, with an on-chip configuration algorithm to achieve fast, power-efficient data processing.

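For readers who prefer to see the hierarchy spelled out, the short Python sketch below simply enumerates the 4 SICs of 8 SIPs each (32 SIPs in total) described above. The class and function names are hypothetical, chosen for illustration; the release does not describe Enflame’s actual programming model.

    # Illustrative model of the DTU compute hierarchy described in the release.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SIP:
        cluster_id: int
        index: int  # position within its cluster

    @dataclass
    class SIC:
        cluster_id: int
        sips: List[SIP] = field(default_factory=list)

    def build_dtu(num_sics: int = 4, sips_per_sic: int = 8) -> List[SIC]:
        """Enumerate the SIC/SIP hierarchy: 4 clusters x 8 processors = 32 SIPs."""
        return [
            SIC(cluster_id=c, sips=[SIP(cluster_id=c, index=i) for i in range(sips_per_sic)])
            for c in range(num_sics)
        ]

    dtu = build_dtu()
    total_sips = sum(len(sic.sips) for sic in dtu)
    print(f"{len(dtu)} SICs x {len(dtu[0].sips)} SIPs = {total_sips} SIPs")
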
“This is unique in many ways,” said Patrick Moorhead, Founder and Principal Analyst at Moor Insights & Strategy. “There’s only a handful of relevant machine learning training chips out there and the Enflame AI accelerator on GF’s 12LP platform is proof that you don’t need bleeding edge and expensive processes to tackle power-hungry workloads for data center applications.”

“Enflame is focused on accelerating on-chip communications to increase the speed and accuracy of neural network training while reducing data center power consumption,” said Arthur Zhang, Enflame Tech COO. “GF’s 12LP platform, supported by its comprehensive and high-quality IP libraries, is expected to play a critical foundational role in the development of our AI training solutions and enable our customers to meet their most demanding server computing needs.”

“As AI becomes pervasive, there is a growing demand for high-performance accelerators,” said Amir Faintuch, senior vice president and general manager of Computing and Wired Infrastructure at GF. “The synergy between Enflame’s unique architecture and the design of the DTU on GF’s 12LP platform will deliver high computational power and efficiency at low cost for cloud-based deep learning frameworks and AI training platforms.”

GF’s 12LP advanced FinFET platform offers a best-in-class combination of performance, power and area, along with a set of differentiated features, including a unique low-voltage SRAM that enables AI processor acceleration and extends the ability to scale 12nm well into the future.

Enflame’s AI accelerator SoC, the DTU on GF’s 12LP platform, has been sampled, and production is scheduled for early 2020 at GF’s Fab 8 in Malta, New York.
