Deep learning inference performance on the Yitian 710
In recent years, deep learning has been widely adopted across industry, in areas such as computer vision, natural language processing, and recommender systems. The exponential growth in model parameter counts, together with business demand for increasingly complex models, pushes cloud vendors to reduce compute costs and improve computational efficiency. This is especially true for deep learning inference, which has therefore become our focus for optimization. Against this backdrop, Alibaba Cloud unveiled its new Arm server chip, the Yitian 710, built on a 5 nm process. Yitian 710 is based on Arm Neoverse and supports the latest Armv9 instruction set, which includes extended instructions such as Int8 MatMul and BFloat16 (BF16), giving it a performance advantage in high-performance computing.
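Before relying on these extensions, it is worth verifying that the instance's CPU actually reports them. A minimal sketch, assuming a Linux guest where the kernel exposes AArch64 feature flags (such as `i8mm` and `bf16`) in `/proc/cpuinfo`; on a non-Arm host it simply reports the features as absent:

```python
def has_cpu_feature(name: str, cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    """Return True if `name` appears among the CPU feature flags."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                # AArch64 kernels list flags under "Features"; x86 uses "flags".
                if line.lower().startswith(("features", "flags")):
                    if name in line.split(":", 1)[1].split():
                        return True
    except OSError:
        pass  # /proc/cpuinfo unavailable (e.g. non-Linux host)
    return False

for feat in ("i8mm", "bf16", "sve"):
    print(f"{feat}: {'supported' if has_cpu_feature(feat) else 'not reported'}")
```

Frameworks such as oneDNN or TensorFlow typically detect these capabilities automatically, but a quick check like this helps confirm that a benchmark is actually exercising the Int8/BF16 paths.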
In this blog post, we focus on Alibaba Cloud Elastic Compute Service (ECS) instances powered by Yitian 710 to test and compare deep learning inference performance.