Geopolitical Tensions Fuel a Wave of AI Chip Independence as US and Chinese CSPs Race to Develop In-House ASICs, Redefining the Market Landscape, Says TrendForce
May 15, 2025 -- TrendForce’s latest research reveals that the surge in demand for AI servers is accelerating the pace at which major US CSPs are developing in-house ASICs, with new iterations being released every one to two years. In China, the AI server market is adjusting to new US export controls introduced in April 2025, which are expected to reduce the share of imported chips (e.g., from NVIDIA and AMD) from 63% in 2024 to around 42% in 2025.
Meanwhile, domestic Chinese chipmakers—such as Huawei—are projected to boost their market share to 40%, nearly on par with imported chips, supported by strong government policies promoting homegrown AI processors.
TrendForce notes that CSPs are prioritizing ASIC development to reduce reliance on NVIDIA and AMD, gain greater control over cost and performance, and bolster supply chain flexibility. This shift is essential for managing growing AI workloads and optimizing long-term operational spending.
Google leads among US CSPs with its TPU v6 Trillium, which offers improved energy efficiency and performance for large-scale AI models. Google has also expanded from a single-supplier model (Broadcom) to a dual-sourcing strategy by partnering with MediaTek. This move enhances design flexibility, reduces supply chain risk, and supports more aggressive adoption of advanced process nodes.
AWS continues to focus on Trainium v2, co-developed with Marvell and designed for generative AI and LLM training. The company is also working with Alchip on Trainium v3, and TrendForce forecasts that AWS will post the strongest year-over-year growth in ASIC shipments among US CSPs in 2025.
Meta, having deployed its first in-house AI accelerator, MTIA, is now co-developing MTIA v2 with Broadcom. The new version emphasizes energy efficiency and a low-latency architecture, tailored to Meta's highly customized inference workloads to ensure optimal performance and operational cost control.
Microsoft remains heavily reliant on NVIDIA GPUs for AI server deployments but is rapidly advancing its own ASIC efforts. Its Maia series, tailored for generative AI on the Azure platform, is progressing toward Maia v2, with GUC handling physical design and production. Microsoft is also collaborating with Marvell on an enhanced version of Maia v2 to strengthen its chip design capabilities and mitigate development and supply chain risks.
China accelerates AI chip independence
Huawei is actively developing its Ascend series of AI chips, targeting domestic needs such as LLM training, smart city infrastructure, and AI-powered telecom networks. With national-level support and surging demand from internet giants and DeepSeek’s LLM ecosystem, Huawei is increasingly positioned to challenge NVIDIA’s dominance in China’s AI server market.
Cambricon is also expanding its Siyuan (MLU) chip series to support AI training and inference in the cloud. After conducting feasibility tests with major Chinese CSPs throughout 2024, the company is expected to ramp up deployment of its solutions in 2025.
TrendForce also notes that Chinese CSPs are accelerating their ASIC initiatives. Alibaba’s T-Head has launched the Hanguang 800 inference chip; Baidu is moving from volume production of Kunlun II to development of Kunlun III, designed for high-performance training and inference; Tencent, in addition to its in-house Zixiao inference chip, is leveraging Enflame’s ASIC solutions via strategic investment.
In the face of geopolitical pressure and supply chain restructuring, Chinese chipmakers such as Huawei and Cambricon, along with the in-house ASIC efforts of major CSPs, are becoming increasingly vital. This trend is expected to drive the global AI server market toward a bifurcated ecosystem: one within China, and another outside of it.