Power efficient, high-performance neural network hardware IP for automotive embedded solutions

Overview

Our IP core enables you to create chips for automotive neural network (NN) acceleration with excellent sustained performance and low power under real workloads.
The aiWare hardware IP core is highly customizable and was developed by engineers working side by side with our automated driving teams. It can be deployed within an SoC or as a standalone NN accelerator. On-chip and external memory sizes are highly configurable, so performance can be tuned to customer requirements. aiWare maximizes host CPU offload, using on-chip SRAM and external DRAM to keep execution and dataflow within the core (see the sketch below). aiWare was designed for volume production in L2/L2+ and above ADAS systems, and the first version of this mature IP core was released over three years ago. Building on this expertise, the aiWare IP is more sophisticated than a leading automotive OEM's recently announced accelerator.
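To make the offload idea concrete, here is a minimal sketch in Python. It is our own simplified model, not aiWare's actual scheduler: per-layer buffers are placed in on-chip SRAM when they fit and spill to the accelerator's local DRAM otherwise, so in neither case do intermediate results cross back to the host CPU. The SRAM size and buffer sizes are hypothetical.

```python
# Simplified buffer-placement model (our own illustration, not aiWare's scheduler).
# Buffers that fit stay in on-chip SRAM; larger ones go to the accelerator's
# local DRAM. The host CPU is involved in neither path.

ONCHIP_SRAM_BYTES = 2 * 1024 * 1024  # hypothetical 2 MB SRAM configuration

def place_buffers(layer_buffer_bytes: list[int]) -> list[str]:
    """Assign each layer's activation buffer to SRAM or local DRAM."""
    return ["SRAM" if size <= ONCHIP_SRAM_BYTES else "local DRAM"
            for size in layer_buffer_bytes]

# Example: early high-resolution layers spill to local DRAM, later layers
# stay on-chip; execution and dataflow remain inside the core throughout.
buffers = [6_000_000, 3_000_000, 1_500_000, 400_000]
print(place_buffers(buffers))  # ['local DRAM', 'local DRAM', 'SRAM', 'SRAM']
```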
The aiWare IP core is fully synthesizable RTL requiring no special libraries, enabling neural network acceleration cores from 0.5 TOPS to 16 TOPS. The IP is layout-friendly thanks to its tile-based modular design. Optimized for efficiency at low clock speeds, the aiWare IP core can operate anywhere from 100 MHz to 1 GHz. The hardware IP core is also highly deterministic, increasing safety by removing the complexity of caches and programmable cores. aiWare delivers more than 2 TMAC/s per watt (4 TOPS/W, estimated at 7 nm) while sustaining >95% efficiency under continuous operation. The IP core offers a range of ASIL-B to ASIL-D compliant implementation options, either integrated into an SoC alongside a host CPU or as a dedicated NN accelerator.
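As a quick sanity check on those figures, the peak-throughput arithmetic works out as follows. This is an illustrative back-of-the-envelope calculation assuming the standard convention of 2 operations (multiply + add) per MAC; the MAC counts derived below are our own inference, not published aiWare configurations.

```python
# Back-of-the-envelope sizing for a MAC-array NN accelerator. The TOPS, clock,
# and TOPS/W figures come from the datasheet above; the MAC-count derivation is
# our own assumption (1 MAC = 2 ops: one multiply + one add).

def peak_tops(num_macs: int, clock_hz: float) -> float:
    """Peak throughput in TOPS for a given MAC count and clock."""
    return num_macs * 2 * clock_hz / 1e12

def macs_for_tops(tops: float, clock_hz: float) -> int:
    """MAC units needed to reach a target peak TOPS at a given clock."""
    return int(tops * 1e12 / (2 * clock_hz))

CLOCK = 1e9  # 1 GHz, the top of the quoted 100 MHz - 1 GHz range

print(macs_for_tops(0.5, CLOCK))   # 250 MACs  -> smallest quoted configuration
print(macs_for_tops(16.0, CLOCK))  # 8000 MACs -> largest quoted configuration

# Sustained throughput and power at the quoted >95% efficiency and 4 TOPS/W:
print(16.0 * 0.95)  # ~15.2 TOPS sustained
print(16.0 / 4.0)   # ~4 W at the 7 nm estimate
```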
Key Features
- Optimized for multiple high-resolution sensor applications
- Configurable, low-latency, high-efficiency architecture
- Scalable from 0.5 to 16 TOPS @ 1 GHz per instance
- Optimizes use of on-chip SRAM and local DDR for efficiency
- Patented data management for automotive inference workloads
- Comprehensive SDK includes tools to convert FP32 NNs to INT8
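The FP32-to-INT8 conversion mentioned in the last feature can be illustrated with a minimal sketch. The symmetric per-tensor scheme below is a common post-training approach and our own assumption; the datasheet does not specify which quantization algorithm the aiWare SDK actually uses.

```python
import numpy as np

# Minimal post-training FP32 -> INT8 quantization sketch (illustrative only;
# a symmetric per-tensor scheme, not necessarily the aiWare SDK's algorithm).

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Map FP32 weights to INT8 with a single symmetric scale factor."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate FP32 values for accuracy checks."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)  # stand-in for a layer's weights
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(f"scale={scale:.6f}, max abs quantization error={err:.6f}")
```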
Benefits
- Enables integration within SoCs or dedicated accelerators
- Supports ASIL-B through ASIL-D compliant solutions
- Ideal for NN processing in camera, LiDAR or radar subsystems
- Highly autonomous NN processing maximizes host offload
- NNEF and ONNX support allows import from most AI/ML frameworks (see the export sketch after this list)
- Application agnostic – accelerates any NN
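For the NNEF/ONNX import path noted above, here is a minimal sketch of producing an ONNX file from PyTorch that an ONNX-based importer could consume. torch.onnx.export is standard PyTorch; the toy model and file name are placeholders of our own, and the aiWare SDK side of the import is not shown since its API is not documented here.

```python
import torch
import torch.nn as nn

# Export a small model to ONNX using standard PyTorch tooling. The model and
# file name are illustrative placeholders, not part of the aiWare SDK.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
)
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # example camera-frame-shaped input
torch.onnx.export(model, dummy, "model.onnx", opset_version=13,
                  input_names=["image"], output_names=["features"])
```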
Related IPs
- Hardware Security Module (HSM) for Automotive
- Multi Protocol IO Concentrator (RDC) IP Core for Safe and Secure Ethernet Network
- Neural engine IP - Balanced Performance for AI Inference
- Neural engine IP - AI Inference for the Highest Performing Systems
- Unified Hardware IP for Post-Quantum Cryptography based on Kyber and Dilithium
- NPU IP for Embedded ML