RISC-V-Based, Open Source AI Accelerator for the Edge
Coral NPU is a machine learning (ML) accelerator core designed for energy-efficient AI at the edge.
The GenAI IP is the smallest version of our NPU, tailored to small devices such as FPGAs and Adaptive SoCs, where the maximum Fre…
AI Accelerator Specifically for CNN
Our IP inference accelerators enhance AI computations, providing outstanding performance across various applications.
Performance AI Accelerator for Edge Computing
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture an…
Performance Efficiency AI Accelerator
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture an…
Device makers need a small, inexpensive, low-power chip that can run large AI models in order to lead the market with their …
Lowest Power and Cost End Point AI Accelerator
The NeuroMosaic Processor (NMP) family is shattering the barriers to deploying ML by delivering a general-purpose architecture an…
Neural engine IP - AI Inference for the Highest Performing Systems
From data centers to autonomous cars, the most demanding AI applications need high-performance NPUs with the lowest possible late…
Neural engine IP - The Cutting Edge in On-Device AI
With support for the latest generative AI models and traditional RNN, CNN, and LSTM models, the Origin™ E6 NPUs scale from 16 to …
Neural engine IP - Balanced Performance for AI Inference
On-device AI is a must-have for many new designs.
Neural engine IP - Tiny and Mighty
Small, low-power dedicated AI engines are essential for home appliances, security cameras, and always-on smartphone features.
A CPU Foundation for the AI-Driven Datacenter
The Arm Neoverse V3 CPU is built to deliver maximum performance for cloud applicati…
High-Performance Memory Expansion IP for AI Accelerators
AI inference performance is increasingly constrained by memory bandwidth and capacity, not compute.
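As a rough illustration of this claim (not part of the listing above), the sketch below estimates the token rate a single LLM decode stream can sustain when every weight must be streamed from memory each step. The model size, bandwidth, and compute figures are assumed round numbers, not specifications of any product listed here.

```python
# Illustrative back-of-the-envelope check of why LLM decode is memory-bound.
# All numbers below are assumptions for the example, not product specs.

def decode_bounds(params_b=7.0, bytes_per_param=1.0,   # assumed 7B model, INT8 weights
                  mem_bw_gbs=100.0,                    # assumed DRAM bandwidth, GB/s
                  compute_tops=10.0):                  # assumed peak compute, TOPS
    """Return (bandwidth-bound tokens/s, compute-bound tokens/s) for one decode stream."""
    weight_bytes = params_b * 1e9 * bytes_per_param    # bytes streamed per token (weights read once)
    ops_per_token = 2 * params_b * 1e9                 # ~2 ops per parameter per token (MAC = 2 ops)
    tok_s_bandwidth = mem_bw_gbs * 1e9 / weight_bytes
    tok_s_compute = compute_tops * 1e12 / ops_per_token
    return tok_s_bandwidth, tok_s_compute

bw_limit, compute_limit = decode_bounds()
print(f"bandwidth-bound: {bw_limit:.1f} tok/s, compute-bound: {compute_limit:.1f} tok/s")
# -> roughly 14 tok/s from bandwidth vs ~700 tok/s from compute under these assumptions
```

Under these assumed numbers the bandwidth ceiling sits far below the compute ceiling, which is the point the listing makes: adding memory bandwidth and capacity, rather than more TOPS, moves the bottleneck.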
Run-time Reconfigurable Neural Network IP
The Dynamic Neural Accelerator II (DNA-II) is an efficient neural network IP core that can be paired with any host processor.
224G SerDes PHY and controller for UALink for AI systems
Efficient Scaling of AI Accelerators for Achieving High Performance and Throughput
UALink, the standard for AI accelerator interc…
The UALink IP solution, consisting of UALink Controller, PHY, and verification IP, is designed to meet the performance requiremen…
The Ceva-NeuPro-Nano is an efficient and self-sufficient Edge NPU designed for Embedded ML applications.
Tensilica AI Max - NNA 110 Single Core
Single-core neural network accelerator offering from 0.5 to 4 TOPS. Optimized for machine learning inference applications. The Cade…
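To give a rough sense of what a 0.5 to 4 TOPS rating means for CNN inference, the sketch below bounds the achievable frame rate from peak throughput. The per-frame operation count and utilization are assumptions for illustration, not Cadence figures.

```python
# Illustrative sketch: relating a TOPS rating to a CNN frame-rate ceiling.
# The per-frame operation count and utilization are assumed values.

def max_fps(tops, gops_per_frame=5.0, utilization=0.5):
    """Upper bound on frames/s: sustained ops/s divided by ops per frame."""
    sustained_ops = tops * 1e12 * utilization
    return sustained_ops / (gops_per_frame * 1e9)

for tops in (0.5, 4.0):                       # the listed 0.5-4 TOPS range
    print(f"{tops} TOPS -> up to ~{max_fps(tops):.0f} fps on a 5 GOP/frame CNN")
# -> ~50 fps at 0.5 TOPS and ~400 fps at 4 TOPS at the assumed 50% utilization
```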
Edge-friendly LLM and CNN AI Inference processing
Edge devices are increasingly equipped with AI processing capabilities that enh…
Mobile-Centric LLM and CNN AI Inference processing
Consumers are excited about the latest AI features in smartphones.