Small, low-power dedicated AI engines are essential for home appliances, security cameras, and always-on smartphone features. Customized for specific use cases, Origin™ E1 delivers targeted low-power performance and requires little to no external memory.
The Origin E1 NPUs are individually customized to the neural networks commonly deployed in edge devices, including home appliances, smartphones, and security cameras. For products like these, which demand dedicated AI processing with minimal power consumption, silicon area, and system cost, E1 cores deliver a 1 TOPS engine with the lowest power consumption and area in its class.
Power-Sipping, Always-Sensing AI
Always-sensing cameras continuously sample and analyze visual data to identify triggers relevant to the user experience, enabling a more seamless and natural interface. However, the quantity and complexity of the data generated demand specialized AI processing, so OEMs are turning to dedicated engines such as Expedera's LittleNPU. The LittleNPU is optimized to run the neural networks that leading OEMs deploy in always-sensing applications, draws as little as 10-20 mW, and keeps all camera data securely within the LittleNPU subsystem to preserve user privacy.
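To put the 10-20 mW figure in perspective, a rough energy-budget calculation (all numbers below are illustrative assumptions, not Expedera specifications) shows why this range makes always-on operation practical:

```python
# Rough daily energy budget for an always-sensing engine drawing ~15 mW,
# the midpoint of the 10-20 mW range quoted above. The 15 Wh battery is
# an assumed figure (~4000 mAh at 3.7 V, typical of a modern smartphone).
power_w = 0.015                        # 15 mW average draw
hours_per_day = 24                     # always-on
wh_per_day = power_w * hours_per_day   # energy consumed per day
battery_wh = 15.0                      # assumed phone battery capacity
pct_of_battery = 100 * wh_per_day / battery_wh

print(f"{wh_per_day:.2f} Wh/day, {pct_of_battery:.1f}% of a 15 Wh battery")
```

At these assumed figures, a full day of always-sensing costs well under 3% of a typical phone battery.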
Innovative Architecture
The Origin E1 neural engines use Expedera’s unique packet-based architecture, which enables parallel execution across multiple layers, achieving better resource utilization and deterministic performance. This innovative approach significantly increases performance while lowering power, area, and latency.
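The cross-layer parallelism described above can be pictured with a toy software analogy (this is a conceptual sketch, not Expedera's hardware design): when work is split into small packets, a downstream layer can process packet i while the upstream layer is already producing packet i+1, rather than each layer waiting for the previous one to finish the entire tensor.

```python
import queue
import threading

def stage(fn, q_in, q_out):
    """One pipeline stage: pull packets, transform them, push downstream."""
    while True:
        p = q_in.get()
        if p is None:          # sentinel: propagate shutdown downstream
            q_out.put(None)
            return
        q_out.put(fn(p))

# Two stand-in "layers" wired into a two-stage pipeline via queues.
q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(lambda x: x + 1, q0, q1)).start()
threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)).start()

for p in [1, 2, 3, None]:      # feed three packets, then the sentinel
    q0.put(p)

results = []
while (p := q2.get()) is not None:
    results.append(p)
print(results)  # [4, 6, 8]
```

Both stages are live at once, so the pipeline's throughput is set by the slowest stage rather than the sum of all stages, which is the utilization benefit a packet-based design targets.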
Specifications
| Specification | Details |
| --- | --- |
| Compute Capacity | 0.5K INT8 MACs |
| Multi-tasking | Runs simultaneous jobs |
| Power Efficiency | 18 TOPS/W effective; no pruning, sparsity, or compression required (though supported) |
| Example Networks Supported | MobileNet, EfficientNet, NanoDet, PicoDet, Inception v3, RNN-T, MobileNet SSD, BERT, FSRCNN, CPN, CenterNet, U-Net, YOLOv3, ShuffleNet v2, others |
| Layer Support | Standard NN functions, including Conv, Deconv, FC, Activations, Reshape, Concat, Elementwise, Pooling, Softmax, others |
| Data Types | INT4/INT8/INT10/INT12/INT16 activations and weights |
| Quantization | Channel-wise quantization (TFLite specification); software toolchain supports Expedera, customer-supplied, or third-party quantization |
| Latency | Deterministic performance guarantees; no back pressure |
| Frameworks | TensorFlow, TFLite, ONNX, others supported |
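Channel-wise quantization, as referenced in the table, assigns each output channel its own scale factor instead of one scale for the whole tensor. A minimal sketch of the symmetric per-channel scheme used for weights in the TFLite specification (zero-point fixed at 0, scale derived from each channel's maximum absolute value; the function name is ours, for illustration):

```python
def quantize_channel(weights, num_bits=8):
    """Symmetrically quantize one output channel's weights (a flat list
    of floats) to signed integers, TFLite-style: zero-point is 0 and
    scale = max|w| / (2^(num_bits-1) - 1), i.e. max|w| / 127 for INT8."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = max(abs(w) for w in weights) or 1.0   # avoid divide-by-zero
    scale = max_abs / qmax
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

# One (quantized values, scale) pair per output channel, so an outlier in
# one channel cannot crush the precision of the others.
channels = [[0.5, -1.2, 0.03], [2.0, 0.25, -0.8]]
quantized = [quantize_channel(ch) for ch in channels]
```

Dequantization is simply `q * scale` per channel; the per-channel scales are what the table's "Channel-wise Quantization (TFLite Specification)" entry refers to.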