Baidu Accelerator Rises in AI
Kunlun chip claims 30x performance of FPGAs
Rick Merritt, EETimes
7/6/2018 00:01 AM EDT
SAN JOSE, Calif. — China’s Baidu followed in Google’s footsteps this week, announcing that it has developed its own deep learning accelerator. The move adds yet another significant player to an already long list of AI hardware contenders, but details of the chip and when it will ship remain unclear.
Baidu will deploy Kunlun in its data centers to accelerate machine learning jobs for both its own applications and those of its cloud-computing customers. The services will compete with companies such as Wave Computing and SambaNova, which aim to sell machine-learning appliances to business users.
Kunlun delivers 260 Tera-operations/second while consuming 100 Watts, 30 times the performance of Baidu's prior FPGA-based accelerators. The chip is made in a 14-nm Samsung process and consists of thousands of cores with an aggregate 512 GBytes/second of memory bandwidth.
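The stated specs allow a quick back-of-envelope comparison. The sketch below uses only the numbers reported in the article; the implied FPGA baseline is derived from the "30x" claim and is an inference, not a figure Baidu published.

```python
# Back-of-envelope figures from the article's stated Kunlun specs.
KUNLUN_TOPS = 260        # Tera-operations/second, as reported
KUNLUN_WATTS = 100       # reported power draw
SPEEDUP_VS_FPGA = 30     # Baidu's claimed gain over its FPGA accelerators

# Energy efficiency in TOPS per watt.
efficiency = KUNLUN_TOPS / KUNLUN_WATTS

# Throughput implied for the prior FPGA accelerators (an inference,
# assuming the 30x claim refers to raw operations per second).
implied_fpga_tops = KUNLUN_TOPS / SPEEDUP_VS_FPGA

print(f"Kunlun efficiency: {efficiency:.1f} TOPS/W")
print(f"Implied FPGA baseline: {implied_fpga_tops:.2f} TOPS")
```

Running this yields roughly 2.6 TOPS/W for Kunlun and an implied FPGA baseline of under 9 TOPS, consistent with the order-of-magnitude gains that dedicated ASICs typically show over FPGA implementations.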
Related Semiconductor IP
- Root of Trust (RoT)
- Fixed Point Doppler Channel IP core
- Multi-protocol wireless platform integrating Bluetooth Dual Mode, IEEE 802.15.4 (for Thread, Zigbee and Matter)
- Polyphase Video Scaler
- Compact, low-power, 8bit ADC on GF 22nm FDX
Related News
- SiFive announces the first open-source RISC-V SoC platform with NVIDIA Deep Learning Accelerator technology
- Mentor's Catapult HLS enables Chips&Media to cut delivery time of deep learning hardware accelerator IP in half
- Expedera Deep Learning Accelerator IP achieves first volume-production shipments in consumer devices
- Baidu deploys Xilinx FPGAs in its data centers to accelerate machine learning applications