DSP Core
Ceva-SensPro is a family of DSP cores architected to combine vision, radar, and AI processing in a single architecture.
ComputeRAM is an SRAM macro with integrated compute capability.
Vector-Capable Embedded RISC-V Processor
The EMSA5-GP is a full-featured 32-bit RISC-V embedded processor IP core optimized for compute-intensive applications.
Codasip Studio is a set of Electronic Design Automation (EDA) tools for processor design and customization.
Tensilica DSP IP supports efficient AI/ML processing
The Cadence AI IP platform includes the extensible DSP platform from Cadence, which provides flexible instruction sets designed t…
AI and DSP performance leader with up to 4X the NN performance and 2X the DSP performance of the HiFi 4 DSP, ideal for digital as…
Multi-core capable 64-bit RISC-V CPU with vector extensions
The SiFive® Intelligence™ X180 core IP products are designed to meet the increasing requirements of embedded IoT and AI at the fa…
Multi-core capable 32-bit RISC-V CPU with vector extensions
The SiFive® Intelligence™ X160 core IP products are designed to meet the increasing requirements of embedded IoT and AI at the fa…
AIoT processor with vector computing engine
I805 utilizes a 4-stage sequential pipeline and is equipped with a vector computing engine oriented toward applications such as AI a…
The Ceva-NeuPro-Nano is an efficient, self-sufficient edge NPU designed for embedded ML applications.
Safety Enhanced GPNPU Processor IP
Automotive applications are uniquely demanding for any AI acceleration solution.
First DSP for embedded vision and AI with millions of units shipped in the market
The Cadence® Tensilica® Vision P6 DSP, introduc…
Built using 1024-bit SIMD and offering up to 3.84 TOPS of performance
The Cadence® Tensilica® Vision Q8 DSP delivers up to 3.84 te…
Built on our latest Xtensa NX architecture and offering up to 2.18 TOPS of performance
The Cadence® Tensilica® Vision Q7 DSP deliver…
GPNPU Processor IP - 32 to 864 TOPS
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system on chip (S…
GPNPU Processor IP - 16 to 108 TOPS
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system on chip (S…
GPNPU Processor IP - 4 to 28 TOPS
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system on chip (S…
GPNPU Processor IP - 1 to 7 TOPS
Designed from the ground up to address significant machine learning (ML) inference deployment challenges facing system on chip (S…
Enabling Next-Generation Voice AI and Immersive Audio Applications
With the rise of immersive audio and AI in home entertainment,…
224G-LR SerDes PHY enables 1.6T and 800G networks
The ever-increasing bandwidth in high-performance computing (HPC) applications is driving the rapid growth of high-speed I/O capa…