Vendor: T-Head Category: CPU

High-performance 32-bit multi-core processor with AI acceleration engine

Overview

C860 utilizes a 12-stage superscalar pipeline with a standard memory management unit, and can run Linux and other operating systems. It also uses a 3-issue, 8-execution-unit deep out-of-order architecture with a single/double-precision floating-point engine, and can be equipped with an AI acceleration engine. It is suitable for application fields that require high performance, such as intelligent monitoring, machine vision and edge servers.

Key features

  • Instruction set: T-Head ISA (32-bit/16-bit variable-length instruction set);
  • Multi-core: Homogeneous multi-core, with 1 to 4 optional cores;
  • Pipeline: 12-stage;
  • Microarchitecture: Tri-issue, deep out-of-order;
  • General registers: 32 × 32-bit GPRs; 16 × 128-bit VGPRs;
  • Cache: Two-level cache hierarchy; I-cache: 32 KB or 64 KB (configurable); D-cache: 32 KB or 64 KB (configurable); L2 cache: 128 KB to 2 MB (configurable);
  • Cache check: Optional ECC check or parity check;
  • Bus interface: 1 128-bit master interface; 1 128-bit slave interface;
  • Memory management: On-chip memory management unit with hardware TLB refill;
  • Floating point engine: Supports single and double precision floating point operations;
  • AI vector calculation engine: Dual-lane 128-bit operation width, supporting half-precision/single-precision/8-bit/16-bit/32-bit parallel computing;
  • Multi-core coherence: Cores share the L2 cache, with hardware support for cache data coherence;
  • Interrupt controller: Supports a multi-core shared interrupt controller;
  • Debugging: Supports multi-core collaborative debugging;
  • Performance monitoring: Supports a hardware performance monitoring unit;
  • AI acceleration engine: Provides dedicated acceleration instructions to accelerate various typical neural networks;
  • Hybrid branch processing: Hybrid branch processing technology including branch direction, branch address, function return address and indirect jump address prediction to improve the fetching efficiency;
  • Data prefetching: Multi-channel and multi-mode data prefetching technology greatly improves data access bandwidth.
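To make the vector-engine figures above concrete, here is a small illustrative sketch (not vendor code; the names `VECTOR_WIDTH_BITS` and `lanes` are assumptions for illustration) that computes how many elements a 128-bit vector register can process in parallel for each element width the engine supports:

```python
# Illustrative sketch: parallel lanes per 128-bit vector register
# for each element width supported by the AI vector engine.
VECTOR_WIDTH_BITS = 128  # per-lane operation width stated in the features list

ELEMENT_WIDTHS_BITS = {
    "fp16": 16,   # half precision
    "fp32": 32,   # single precision
    "int8": 8,
    "int16": 16,
    "int32": 32,
}

def lanes(element_bits: int, vector_bits: int = VECTOR_WIDTH_BITS) -> int:
    """Number of elements processed in parallel per vector register."""
    return vector_bits // element_bits

for name, bits in ELEMENT_WIDTHS_BITS.items():
    print(f"{name:>5}: {lanes(bits):2d} lanes per {VECTOR_WIDTH_BITS}-bit register")
```

For example, 8-bit elements pack 16 lanes per register while single-precision floats pack 4, which is why narrower data types yield higher parallel throughput on such an engine.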

Block Diagram

Applications

  • Intelligent Vision;
  • Smart Home Appliances.

Specifications

Identity

Part Number
C860
Vendor
T-Head
Type
Silicon IP

Files

Note: some files may require an NDA depending on provider policy.

Provider

T-Head
HQ: China
PingTouGe Semiconductor Co., Ltd. is Alibaba Group's semiconductor business entity. Its primary goal is to develop next-generation cloud-integrated chip architectures, data-center chips and embedded IoT chip products. PingTouGe pursues cloud and edge technological innovation through deep software and hardware collaboration, with the aspiration of making data and computing more inclusive while continuously pushing the boundaries of data technology.

Learn more about CPU IP core

Announcing Arm AGI CPU: The silicon foundation for the agentic AI cloud era

For the first time in our more than 35-year history, Arm is delivering its own silicon products – extending the Arm Neoverse platform beyond IP and Arm Compute Subsystems (CSS) to give customers greater choice in how they deploy Arm compute – from building custom silicon to integrating platform-level solutions or deploying Arm-designed processors.

Encarsia: Evaluating CPU Fuzzers via Automatic Bug Injection

Hardware fuzzing has recently gained momentum with many discovered bugs in open-source RISC-V CPU designs. Comparing the effectiveness of different hardware fuzzers, however, remains a challenge: each fuzzer optimizes for a different metric and is demonstrated on different CPU designs.

Pie: Pooling CPU Memory for LLM Inference

Pie maintains low computation latency, high throughput, and high elasticity. Our experimental evaluation demonstrates that Pie achieves optimal swapping policy during cache warmup and effectively balances increased memory capacity with negligible impact on computation. With its extended capacity, Pie outperforms vLLM by up to 1.9X in throughput and 2X in latency. Additionally, Pie can reduce GPU memory usage by up to 1.67X while maintaining the same performance. Compared to FlexGen, an offline profiling-based swapping solution, Pie achieves magnitudes lower latency and 9.4X higher throughput.

Frequently asked questions about CPU IP cores

What is the High-performance 32-bit multi-core processor with AI acceleration engine?

The High-performance 32-bit multi-core processor with AI acceleration engine is the C860, a CPU IP core from T-Head listed on Semi IP Hub.

How should engineers evaluate this CPU?

Engineers should review the overview, key features, supported foundries and nodes, maturity, deliverables, and provider information before shortlisting this CPU IP.

Can this semiconductor IP be compared with similar products?

Yes. Buyers can compare this product with similar semiconductor IP cores or IP families based on category, provider, process options, and structured technical specifications.
