VeriSilicon’s Scalable High-Performance GPGPU-AI Computing IPs Empower Automotive and Edge Server AI Solutions

Providing AI acceleration with high computing density, multi-chip scaling, and 3D-stacked memory integration

Shanghai, China -- June 9, 2025 -- VeriSilicon (688521.SH) today announced the latest advancements in its high-performance and scalable GPGPU-AI computing IPs, which are now empowering next-generation automotive electronics and edge server applications. Combining programmable parallel computing with a dedicated Artificial Intelligence (AI) accelerator, these IPs offer exceptional computing density for demanding AI workloads such as Large Language Model (LLM) inference, multimodal perception, and real-time decision-making in power- and thermally constrained environments.

VeriSilicon’s GPGPU-AI computing IPs are based on a high-performance General Purpose Graphics Processing Unit (GPGPU) architecture with an integrated dedicated AI accelerator, delivering outstanding computing capability for AI applications. The programmable AI accelerator and sparsity-aware computing engine accelerate transformer-based and matrix-intensive models through advanced scheduling techniques. These IPs support a broad range of data formats for mixed-precision computing, including INT4/8, FP4/8, BF16, FP16/32/64, and TF32, and are designed with high-bandwidth interfaces for 3D-stacked memory, LPDDR5X, and HBM, as well as PCIe Gen5/Gen6 and CXL. They also support multi-chip and multi-card scale-out, offering system-level scalability for large-scale AI deployments.
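For context, mixed-precision execution of the kind these data formats target is commonly expressed at the framework level. The sketch below is a minimal, generic PyTorch illustration of a BF16/FP32 mixed-precision matrix multiply; it assumes a standard PyTorch installation and does not use any VeriSilicon-specific API.

```python
import torch

# Generic illustration only: a BF16/FP32 mixed-precision matrix multiply,
# the kind of workload that mixed-precision data formats accelerate.
# No VeriSilicon-specific backend is used here.
a = torch.randn(1024, 1024)   # FP32 activations
w = torch.randn(1024, 1024)   # FP32 weights

with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    # The matmul is executed in BF16 under autocast; other ops may
    # remain in FP32 depending on the framework's casting rules.
    y = a @ w

print(y.dtype)  # torch.bfloat16
```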

VeriSilicon’s GPGPU-AI computing IPs provide native support for popular AI frameworks for both training and inference, including PyTorch, TensorFlow, ONNX, and TVM. They also support a General Purpose Computing Language (GPCL) that is compatible with mainstream GPGPU programming languages and widely used compilers. These capabilities align well with the computing and scalability requirements of today’s leading LLMs, including models such as DeepSeek.
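As a generic illustration of the framework-level path such toolchains typically consume, the sketch below exports a small PyTorch model to ONNX. This is standard PyTorch/ONNX usage under assumed defaults, not a description of VeriSilicon's own deployment flow or tools.

```python
import torch
import torch.nn as nn

# Illustrative only: export a small PyTorch model to ONNX, a portable
# graph format that accelerator toolchains commonly ingest.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

dummy_input = torch.randn(1, 128)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",                          # output file (placeholder name)
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},  # allow variable batch size
)
```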

“The demand for AI computing on edge servers, both for inference and incremental training, is growing exponentially. This surge requires not only high efficiency but also strong programmability. VeriSilicon’s GPGPU-AI computing processors are architected to tightly integrate GPGPU computing with the AI accelerator at a fine-grained level. The advantages of this architecture have already been validated in multiple high-performance AI computing systems,” said Weijin Dai, Chief Strategy Officer, Executive Vice President, and General Manager of the IP Division at VeriSilicon. “The recent breakthroughs from DeepSeek further amplify the need for maximized AI computing efficiency to address increasingly demanding workloads. Our latest GPGPU-AI computing IPs have been enhanced to efficiently support Mixture-of-Experts (MoE) models and optimize inter-core communication. Through close collaboration with multiple leading AI computing customers, we have extended our architecture to fully leverage the abundant bandwidth offered by 3D-stacked memory technologies. VeriSilicon continues to work hand-in-hand with ecosystem partners to drive real-world mass adoption of these advanced capabilities.”

About VeriSilicon

VeriSilicon Microelectronics (Shanghai) Co., Ltd. (VeriSilicon, 688521.SH) is committed to providing customers with platform-based, all-around, one-stop custom silicon services and semiconductor IP licensing services leveraging its in-house semiconductor IP.

VeriSilicon possesses six categories of in-house processing IPs, namely Graphics Processing Unit (GPU) IP, Neural Network Processing Unit (NPU) IP, Video Processing Unit (VPU) IP, Digital Signal Processing (DSP) IP, Image Signal Processing (ISP) IP, and Display Processing IP, as well as more than 1,600 analog and mixed-signal IPs and RF IPs.

Leveraging its own IPs, VeriSilicon has developed a wealth of software and hardware custom chip design platforms targeting Artificial Intelligence (AI) applications, covering always-on ultralight spatial computing devices such as smartwatches and AR/VR glasses, high-efficiency edge computing devices such as AI PCs, AI phones, smart cars, and robots, as well as high-performance cloud computing applications such as data centers and servers.

In response to the trend of System-on-Chip (SoC) designs evolving towards System-in-Package (SiP), driven by the demand for large computing power, VeriSilicon has put forward the concepts of "IP as a Chiplet", "Chiplet as a Platform", and "Platform as an Ecosystem". The company continues to advance the R&D and industrialization of its Chiplet technologies and projects, spanning interface IP, Chiplet architecture, advanced packaging technology, and more, for AI-Generated Content (AIGC) and autonomous driving solutions.

Under its unique “Silicon Platform as a Service” (SiPaaS) business model, VeriSilicon serves a broad range of market segments, including consumer electronics, automotive electronics, computers and peripherals, industrial applications, data processing, and the Internet of Things (IoT), among others. Its main customers include fabless companies, IDMs, system vendors (OEM/ODM), large internet companies, and cloud service providers.

Founded in 2001 and headquartered in Shanghai, China, VeriSilicon has 8 design and R&D centers, along with 11 sales and customer service offices worldwide. VeriSilicon currently has more than 2,000 employees.
