Baidu Accelerator Rises in AI
Kunlun chip claims 30x performance of FPGAs
Rick Merritt, EETimes
7/6/2018 00:01 AM EDT
SAN JOSE, Calif. — China’s Baidu followed in Google’s footsteps this week, announcing that it has developed its own deep learning accelerator. The move adds yet another significant player to the long list of companies building AI hardware, but details of the chip and when it will be deployed remain unclear.
Baidu will deploy Kunlun in its data centers to accelerate machine learning jobs for both its own applications and those of its cloud-computing customers. The services will compete with companies such as Wave Computing and SambaNova, which aim to sell machine-learning appliances to business users.
Kunlun delivers 260 tera-operations per second while consuming 100 watts, 30 times the performance of Baidu’s prior FPGA-based accelerators. The chip is made in a 14nm Samsung process and consists of thousands of cores with an aggregate 512 GBytes/second of memory bandwidth.
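The figures above can be sanity-checked with simple arithmetic. The sketch below uses only the numbers quoted in the article; the implied throughput of the earlier FPGA parts is an inference that assumes the 30x claim refers to raw operations per second.

```python
# Back-of-the-envelope efficiency check using the figures quoted above.
# kunlun_tops and kunlun_watts come from the article; the FPGA figure
# is derived, not stated.

kunlun_tops = 260        # tera-operations per second (stated)
kunlun_watts = 100       # power consumption in watts (stated)

efficiency = kunlun_tops / kunlun_watts   # TOPS per watt
print(f"Kunlun efficiency: {efficiency:.1f} TOPS/W")

# If the claimed 30x speedup refers to raw throughput, Baidu's prior
# FPGA-based accelerators delivered roughly 260 / 30 TOPS.
implied_fpga_tops = kunlun_tops / 30
print(f"Implied prior FPGA throughput: {implied_fpga_tops:.1f} TOPS")
```

At 2.6 TOPS/W, the stated numbers put Kunlun in the same efficiency class as contemporary data-center inference accelerators, though the article does not say at what precision (e.g., INT8 vs. FP16) the 260-TOPS figure was measured.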