Low-power AI accelerator

Overview

Device makers need a small, inexpensive, low-power chip that can run large AI models in order to lead the market with their device features. An efficient, low-cost way to achieve this is to compress large AI models and design a chip that runs the compression algorithm natively. ABR has done exactly this with our patented AI time-series compression algorithm, the Legendre Memory Unit (LMU).
The LMU was engineered by emulating the algorithm used by time cells, a kind of neuron in the human brain. The work was done by ABR in partnership with the neuroscience engineering research lab at the University of Waterloo, from which our company was spun out.
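The LMU's formulation is published (Voelker, Kajić & Eliasmith, NeurIPS 2019): a small recurrent state-space cell whose matrices are derived from Legendre polynomials, so a d-dimensional state compresses a sliding window of the input signal. The NumPy sketch below illustrates only that published formulation; the zero-order-hold discretization, the parameter values (d, theta, dt), and the decoding step are illustrative assumptions and do not describe ABR's chip implementation.

import numpy as np
from scipy.linalg import expm

def lmu_matrices(d, theta):
    # Continuous-time matrices for  theta * m'(t) = A m(t) + B u(t)
    # (Voelker, Kajic & Eliasmith, NeurIPS 2019, Eq. 2).
    A = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    B = ((2 * np.arange(d) + 1) * (-1.0) ** np.arange(d)).reshape(d, 1)
    return A, B

def discretize(A, B, theta, dt):
    # Zero-order-hold discretization (an assumed choice) of
    # m' = (A / theta) m + (B / theta) u.
    Ad = expm(A * dt / theta)
    Bd = np.linalg.solve(A, (Ad - np.eye(A.shape[0])) @ B)
    return Ad, Bd

# Illustrative parameters: compress a 1-second window into d = 12 numbers.
d, theta, dt = 12, 1.0, 1e-3
A, B = lmu_matrices(d, theta)
Ad, Bd = discretize(A, B, theta, dt)

t = np.arange(0, 4, dt)
u = np.sin(2 * np.pi * 1.0 * t)     # example input: a 1 Hz sine wave
m = np.zeros((d, 1))                # compressed memory state
for k in range(len(t)):
    m = Ad @ m + Bd * u[k]          # one small matrix-vector update per sample

# Reconstruct the input from theta seconds ago using shifted Legendre
# polynomials evaluated at r = theta'/theta (here r = 1, the full delay).
r = 1.0
w = np.polynomial.legendre.legvander([2 * r - 1], d - 1)[0]
u_hat = w @ m.ravel()
print(f"decoded u(t - theta) = {u_hat:.3f}, true value = {u[-1 - int(theta / dt)]:.3f}")

In this sketch the per-sample work is a single d x d matrix-vector multiply, which is why this kind of compressed time-series memory lends itself to small, low-power hardware.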

Key Features

  • Complete speech processing at less than 100 mW
  • Able to run time-series networks for signal and speech processing
  • 10X more efficient than traditional neural networks

Benefits

  • Ultra-low power
  • Run the complete speech stack on-chip
  • Add a voice interface to any device at less than 100 mW
  • Comprehensive signal processing for health IoT devices

Applications

  • IoT devices
  • Wearables
  • Medical devices
  • Consumer devices

Deliverables

  • RTL
  • Synthesis scripts
  • Test environment

Technical Specifications

Maturity: Available now
Availability: Available now
Category: Semiconductor IP