Vendor: Expedera
Category: Edge AI Accelerator

AI inference engine for Audio

Overview

Even the smallest, lowest-power audio devices embed AI capabilities to enhance the user experience. Successful deployment of AI on resource-constrained products like headsets and wearables entails careful attention to power consumption and silicon area requirements.

Power-Sipping, Always-Sensing AI

The TimbreAI™ T3 is an ultra-low-power AI inference engine designed for audio noise-reduction use cases in consumer devices such as wireless headsets. It provides optimal performance within strict power and area constraints, delivering 3.2 billion operations per second (3.2 GOPS) while consuming 300 µW or less. TimbreAI supports quick and seamless deployments: it is available as soft IP and is portable to any foundry silicon process.
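
As a rough back-of-the-envelope check (assuming the quoted 3.2 GOPS and 300 µW figures apply simultaneously, which the listing implies but does not state explicitly), the implied energy efficiency works out to roughly 10.7 TOPS/W, or under 0.1 pJ per operation:

```python
# Back-of-the-envelope efficiency from the quoted figures.
# Assumption: 3.2 GOPS sustained while drawing 300 uW.
ops_per_second = 3.2e9   # 3.2 GOPS
power_watts = 300e-6     # 300 uW

ops_per_joule = ops_per_second / power_watts
print(f"{ops_per_joule / 1e12:.1f} TOPS/W")  # ~10.7 TOPS/W
print(f"{1e12 / ops_per_joule:.3f} pJ/op")   # ~0.094 pJ/op
```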

Innovative Architecture

The TimbreAI is purpose-built for audio noise reduction in power-constrained devices. It uses Expedera’s packet-based architecture and use-case-specific optimizations to achieve high performance and power efficiency.

Specifications

Compute Capacity: 3.2 GOPS
Power Consumption: ≤300 µW
Layer Support: Standard NN functions
Data Types: INT4/INT8/INT16 activations and weights
Quantization: Channel-wise quantization (TFLite specification)
Latency: Deterministic performance guarantees, no back pressure
Frameworks: TensorFlow, TFLite, ONNX, and others
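
The quantization row refers to per-channel (per-axis) weight quantization as defined in the TFLite specification. As an illustrative sketch only, not Expedera’s implementation, the snippet below quantizes a weight tensor to symmetric INT8 with one scale per output channel:

```python
import numpy as np

def quantize_per_channel_int8(weights: np.ndarray, axis: int = 0):
    """Symmetric per-channel INT8 quantization (one scale per channel along `axis`),
    in the spirit of the TFLite per-axis quantization spec."""
    # Move the channel axis to the front so each row is one channel.
    w = np.moveaxis(weights, axis, 0)
    flat = w.reshape(w.shape[0], -1)

    # Symmetric scheme: the largest magnitude in each channel maps to 127.
    max_abs = np.max(np.abs(flat), axis=1)
    scales = np.where(max_abs > 0, max_abs / 127.0, 1.0)

    q = np.clip(np.round(flat / scales[:, None]), -127, 127).astype(np.int8)
    return np.moveaxis(q.reshape(w.shape), 0, axis), scales

# Example: a small conv-style weight tensor [out_channels, in_channels, kernel]
w = np.random.randn(8, 4, 3).astype(np.float32)
q, scales = quantize_per_channel_int8(w, axis=0)
print(q.dtype, scales.shape)  # int8 (8,)
```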

Key features

  • Run Your Trained Models Unchanged: The T3 requires no changes to your trained models and no sacrifice in accuracy or performance to achieve your desired PPA goals (a generic export flow is sketched after this list).
  • Pre-Configured for Audio Neural Networks: The T3 is pre-configured to support common audio neural networks.
  • Ultra-Low-Power AI Inference: Reducing power consumption to an absolute minimum is essential to product success; the T3 is architected to minimize dark silicon, requires no external memory, and consumes less than 300 µW of power.
  • Successfully Deployed in 10M Devices: Quality is key to any successful product. Origin IP has been successfully deployed in over 10 million consumer devices, with designs in multiple leading-edge nodes.
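
Because the T3 is fed from standard framework exports (TensorFlow, TFLite, ONNX), a plain TFLite full-integer conversion of an already-trained model shows the general shape of the workflow. This is a hedged, generic sketch: the toy model, the random representative dataset, and the file names are placeholders, and nothing here is specific to Expedera’s toolchain:

```python
import numpy as np
import tensorflow as tf

# Placeholder: any trained Keras audio model (here, a toy per-frame mask estimator).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(257,)),                       # e.g. one STFT magnitude frame
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(257, activation="sigmoid"),   # per-bin noise-suppression mask
])

def representative_dataset():
    # Placeholder calibration data; a real flow would use recorded audio frames.
    for _ in range(100):
        yield [np.random.rand(1, 257).astype(np.float32)]

# Standard TFLite full-integer (INT8) post-training quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
open("denoiser_int8.tflite", "wb").write(tflite_model)
```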

Benefits

  • 3.2 GOPS performance at <300 µW
  • Full software stack provided
  • Runs audio neural networks
  • Delivered as Soft IP (RTL)
  • No need for hardware or software optimizations

What’s Included?

  • RTL or GDS
  • SDK (TVM-based; a generic compile-flow sketch follows below)
  • Documentation
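
Since the SDK is described as TVM-based, a generic TVM import-and-compile flow gives a feel for how such a toolchain is driven. This is an assumption-laden sketch: the ONNX model file, input name and shape, and the "llvm" host target are placeholders, and the actual Expedera target and SDK wrappers are not shown here:

```python
import onnx
import tvm
from tvm import relay

# Placeholder: an exported ONNX audio model.
onnx_model = onnx.load("denoiser.onnx")
shape_dict = {"input": (1, 257)}   # assumed input name and shape

# Import the model into Relay, TVM's high-level IR.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile. A vendor SDK would supply its own hardware target; "llvm"
# (host CPU) is used here purely as a stand-in.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

lib.export_library("denoiser_compiled.so")
```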

Identity

Part Number: TimbreAI T3
Vendor: Expedera

Provider

Expedera
HQ: USA
Expedera provides scalable neural engine semiconductor IP that enables major improvements in performance, power, and latency while reducing cost and complexity in AI inference applications. Expedera’s solutions are third-party silicon validated, have shipped in more than 10 million customer devices, and deliver superior performance across a wide range of applications, from edge nodes and smartphones to automotive. Expedera’s Origin™ Neural Processing Unit IP solutions are easily integrated, readily scalable, and customizable to unique use cases and application requirements. The company is headquartered in Santa Clara, California, with engineering and sales offices around the globe.

Frequently asked questions about Edge AI Accelerator IP cores

What is AI inference engine for Audio?

AI inference engine for Audio is an Edge AI Accelerator IP core from Expedera listed on Semi IP Hub.

How should engineers evaluate this Edge AI Accelerator?

Engineers should review the overview, key features, supported foundries and nodes, maturity, deliverables, and provider information before shortlisting this Edge AI Accelerator IP.

Can this semiconductor IP be compared with similar products?

Yes. Buyers can compare this product with similar semiconductor IP cores or IP families based on category, provider, process options, and structured technical specifications.
