Vendor: Ceva, Inc. Category: NPU

AI SDK for Ceva-NeuPro NPUs

Overview

Ceva-NeuPro Studio is a comprehensive software development environment designed to streamline the development and deployment of AI models on Ceva-NeuPro NPUs. It offers a suite of tools optimized for the Ceva NPU architectures, providing network optimization, graph compilation, simulation, and emulation, ensuring that developers can train, import, optimize, and deploy AI models with the highest efficiency and precision.

The Solution

Ceva-NeuPro Studio provides the tools for a complete end-to-end flow, transforming trained neural-network models into executable code on Ceva-NeuPro NPUs. The Studio offers two operating modes. In Ceva AI Model mode, users select from a wide range of common neural-network models already trained and optimized by Ceva, allowing them to focus on their application rather than on model design and training.

This mode may also be used to provide benchmarks for optimizing the user's own models. In Bring-Your-Own-Model mode, users may import their own trained models from the Caffe, PyTorch, ONNX, TensorFlow, or Keras frameworks. In either mode, models are imported and transformed using quantization and a comprehensive suite of optimizations. Graph compilation is then performed in TVM or microTVM, which are also available for run-time inference. The resulting code may be simulated and debugged in an Eclipse-based IDE or executed using hardware emulation.
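As a rough illustration of the quantization step described above, symmetric per-tensor int8 quantization maps floating-point weights onto the integer range an NPU computes in. The function names and the symmetric scheme below are illustrative assumptions for a generic toolchain, not Ceva-NeuPro Studio's actual API:

```python
# Illustrative sketch of symmetric per-tensor int8 post-training quantization,
# the kind of transform an NPU toolchain applies to trained weights before
# graph compilation. Generic example, not Ceva-NeuPro Studio's real interface.

def quantize_int8(weights):
    """Map float weights to int8 with a single per-tensor scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values, e.g. to measure quantization error."""
    return [v * scale for v in q]

weights = [0.8, -1.27, 0.05, 1.27]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

Real toolchains add per-channel scales, zero points for asymmetric schemes, and calibration over representative data; the per-tensor version above only shows the basic idea.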

During development users may also import Ceva audio codecs or feature-extraction algorithms, as well as user-developed code. The resulting code may be partitioned across the multiple processing elements of the NPUs or user-defined accelerators and profiled in the Arch Planner tool to explore optimum use of computing and memory resources.

Key features

  • Fully programmable to efficiently execute Neural Networks, feature extraction, signal processing, audio and control code
  • Scalable performance
  • Import from major training frameworks, including Caffe, Keras, PyTorch, ONNX, TensorFlow, and LiteRT
  • Support for pre-trained Ceva AI models and Bring Your Own Model (BYOM) approaches
  • Powerful quantization and compression exploit Ceva-NeuPro NPU features
  • Graph compiler produces optimized code to implement networks
  • Software or hardware simulation and debug in Eclipse IDE with visual user interface
  • Performance profiling using Arch Planner for system and memory partitioning
  • Ability to include Ceva libraries, audio- and image-processing functions, hardware-ready Ceva Model Zoo models, and user code
  • Target NPUs include Ceva-NeuPro-Nano, Ceva-NeuPro-M, and user-defined accelerators, meeting a wide range of use cases with MAC configurations up to 64 int8 (native 128 of 4×8) MACs per cycle
  • Two NPU configurations to address a wide variety of use cases are available:
    • Ceva-NPN32 with 32 4×8, 32 8×8, 16 16×8, 8 16×16, 4 32×32 MAC operations per cycle
    • Ceva-NPN64 with 128 4×8, 64 8×8, 32 16×8, 16 16×16, 4 32×32 MAC operations per cycle and 2x performance acceleration using 50% weight sparsity (Sparsity Acceleration)
  • Future proof architecture that supports the most advanced ML data types and operators, including 4-bit to 32-bit integer support and native transformer computation
  • Ultimate ML performance for all use cases, with Sparsity Acceleration, acceleration of non-linear activation types, and fast quantization – up to 5 times acceleration of internal re-quantizing tasks
  • Powerful microcontroller and DSP capabilities with a CoreMark/MHz score of 6.0
  • Ultra-low memory requirements achieved with Ceva-NetSqueeze™, yielding up to 80% memory footprint reduction through direct processing of compressed model weights without the need for an intermediate decompression stage. NetSqueeze solves a key bottleneck inhibiting the broad adoption of AIoT processors today
  • Ultra-low energy achieved through innovative energy optimizations, including dynamic voltage and frequency scaling support tunable for the use case, and dramatic energy and bandwidth reduction by distilling computations using weight-sparsity acceleration
  • Complete, simple to use Ceva-NeuPro-Studio AI SDK, optimized to work seamlessly with leading, open-source AI inference frameworks, such as LiteRT for Microcontrollers and µTVM
  • Model Zoo of pre-trained and optimized machine learning models covering Embedded ML audio, voice, vision and sensing use cases
  • Comprehensive portfolio of optimized runtime libraries and off-the-shelf application-specific software
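The per-cycle MAC figures for the two NPU configurations above translate directly into peak throughput once a clock frequency is assumed. A minimal sketch (the 1 GHz clock is an assumed example value for illustration, not a Ceva specification):

```python
# Back-of-envelope peak int8 throughput from the per-cycle 8x8 MAC figures
# listed above. The clock frequency is an assumed example, not a Ceva spec.

MACS_PER_CYCLE_8X8 = {"Ceva-NPN32": 32, "Ceva-NPN64": 64}

def peak_gmacs(config, clock_ghz=1.0, sparsity_speedup=1.0):
    """Peak GMAC/s = MACs per cycle x clock (GHz) x sparsity factor."""
    return MACS_PER_CYCLE_8X8[config] * clock_ghz * sparsity_speedup

dense = peak_gmacs("Ceva-NPN64")                          # 64 GMAC/s at 1 GHz
sparse = peak_gmacs("Ceva-NPN64", sparsity_speedup=2.0)   # 2x with 50% weight sparsity
```

The same arithmetic applies to the 4×8, 16×8, 16×16, and 32×32 modes by swapping in their per-cycle counts; real sustained throughput will be lower, gated by memory bandwidth and operator mix.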

Benefits

  • Ceva-NeuPro Studio provides a bridge between cloud- or PC-trained AI models and execution on energy-efficient, cost-conscious edge-AI devices, reducing the time between model training and optimized, thoroughly tested edge deployment. Taking in trained network models, the NeuPro-Studio environment optimizes and compiles them, producing C/C++ code for Ceva-NeuPro NPUs in a visual Eclipse IDE.
  • In that IDE, developers can compile the C/C++ code, simulate execution, profile performance, apply a full suite of debug tools, and explore efficient mappings of models onto the available NPU mechanisms. Along the way, users may include Ceva audio- or image-processing routines and user-written code. Throughout, industry-standard tools are used, and NeuPro Studio maintains a common user interface for all target Ceva-NeuPro cores.

Applications

  • Consumer IoT
  • Automotive
  • Industrial Automation

Files

Note: some files may require an NDA depending on provider policy.

Specifications

Identity

Part Number
Ceva-NeuPro Studio
Vendor
Ceva, Inc.

Provider

Ceva, Inc.
HQ: USA
The Smart Edge runs on Ceva! Ceva is the leader in innovative silicon and software IP solutions that enable smart edge products to connect, sense, and infer data more reliably and efficiently. At Ceva, we are passionate about the smart edge. Providing the technology and market expertise our customers need to be successful is what we do best, and we’ve been doing it for over 30 years. With the industry’s only portfolio of comprehensive communications and scalable edge AI IP, Ceva powers the connectivity, sensing, and inference in today’s most advanced smart edge products across consumer IoT, mobile, automotive, infrastructure, industrial, and personal computing. More than 17 billion of the world’s most innovative smart edge products from smartphones to drones to cellular base stations and more are powered by Ceva. We create innovative technologies that help our customers turn great ideas into extraordinary products. We license our portfolio of wireless communications and scalable edge AI IP to our customers, breaking down barriers to entry and enabling them to bring new cutting-edge products to market faster, more reliably, efficiently, and economically. Ceva is a trusted partner to over 400 of the leading semiconductor and OEM companies including Actions, Artosyn, ASR, Atmosic, Autotalks, Beken, Bestechnic, Brite, Broadcom, Celeno, Ceragon, Cirrus Logic, Dialog Semiconductor, DSP Group, Espressif, FujiFilm, GCT Semi, iCatch, InPlay, Intel, Itron, Leadcore, LG Electronics, Mediatek, Microchip, Nextchip, Nokia, Novatek, NXP, ON Semiconductor, Optek, Oticon, Panasonic, RDA, Renesas, Rockchip, Rohm, Samsung, Sanechips, Sharp, Siflower, SigmaStar, Socionext, Sony, Sonova, STMicroelectronics, Toshiba, Unisoc, Vatics, Yamaha and ZTE all leverage Ceva’s industry-leading IP.
These companies incorporate our IP into application-specific integrated circuits (“ASICs”) and application-specific standard products (“ASSPs”) that they manufacture, market and sell to consumer electronics companies. Headquartered in Rockville, Maryland, Ceva has over 400 employees worldwide, with design centers in Israel, Ireland, France, United Kingdom, United States, Serbia, and sales and support offices located in Europe, the U.S. and throughout Asia. Ceva is a sustainable and environmentally conscious company, adhering to our Code of Business Conduct and Ethics. As such, we emphasize and focus on environmental preservation, recycling, the welfare of our employees and privacy – which we promote on a corporate level. At Ceva, we are committed to social responsibility, values of preservation and consciousness towards these purposes.

