For device makers, a small, inexpensive, low-power chip that can run large AI models is needed to lead the market with differentiated device features. An efficient, low-cost, low-power way to achieve this is to compress large AI models and design a chip that runs the compression algorithm natively. ABR has done exactly this with our patented AI time-series compression algorithm, the Legendre Memory Unit (LMU).
The LMU was engineered by emulating the algorithm used by time cells, a kind of neuron in the human brain. The work was done by ABR in partnership with the neuroscience engineering research lab at the University of Waterloo, from which our company was spun out.
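To illustrate the idea, here is a minimal sketch of the linear memory at the core of the published LMU formulation: a small state vector of Legendre-polynomial coefficients that compresses a sliding window of a time series into a fixed-size representation. The matrices follow the closed form from the LMU literature; the simple Euler time-stepping, the parameter choices, and the function names are illustrative assumptions, not ABR's proprietary or hardware implementation.

```python
import numpy as np

def lmu_matrices(order):
    """Build the (A, B) state-space matrices of the LMU linear memory.

    Closed form from the published LMU formulation:
      A[i, j] = (2i+1) * (-1 if i < j else (-1)**(i - j + 1))
      B[i]    = (2i+1) * (-1)**i
    """
    A = np.zeros((order, order))
    for i in range(order):
        for j in range(order):
            A[i, j] = (2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
    q = np.arange(order)
    B = ((2 * q + 1) * (-1.0) ** q).reshape(-1, 1)
    return A, B

def lmu_memory(u, order=8, theta=0.5, dt=0.001):
    """Run the LMU memory over a 1-D signal u (simple Euler discretization).

    theta is the sliding-window length in seconds, dt the sample period.
    Returns one `order`-dimensional memory vector per time step: a
    fixed-size compressed summary of the last theta seconds of input.
    """
    A, B = lmu_matrices(order)
    m = np.zeros((order, 1))
    out = np.zeros((len(u), order))
    for t, u_t in enumerate(u):
        # Continuous dynamics theta * dm/dt = A m + B u, stepped with Euler.
        m = m + (dt / theta) * (A @ m + B * u_t)
        out[t] = m.ravel()
    return out
```

Because the state is a fixed handful of coefficients regardless of window length, this is what lets a small, low-power chip summarize long time-series context without storing or reprocessing raw samples.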
Low power AI accelerator
Overview
Key Features
- Complete speech processing at less than 100 mW
- Able to run time-series networks for signal and speech processing
- 10X more efficient than traditional neural networks
Benefits
- Ultra-low power
- Runs the complete speech stack on-chip
- Adds a voice interface to any device at less than 100 mW
- Comprehensive signal processing for health IoT devices
Applications
- IoT devices
- Wearables
- Medical devices
- Consumer devices
Deliverables
- RTL
- Synthesis Scripts
- Test environment
Technical Specifications
Maturity
Available now
Availability
Available now
Related IPs
- Ultra Low Power AI core
- High Performance / Low Power Microcontroller Core
- Ultra low power C-programmable DSP core
- Highest code density, Low Power 32-bit Processor with optional DSP
- Ultra low power, high-performance DSP / controller RISC core
- Ultra low power C-programmable Baseband Signal Processor core