IP cores for ultra-low power AI-enabled devices

Overview

Each nearbAI core is an ultra-low power neural processing unit (NPU) and comes with an optimizer / neural network compiler. It provides immediate visual and spatial feedback based on sensory inputs, which is a necessity for live augmentation of the human senses.

  • Optimized neural network inferencing for visual, spatial and other applications
  • Unparalleled flexibility: customized & optimized for the customer’s use case
  • Produces an NPU IP core optimized for the customer’s use case, trading off power, area, latency and memory
  • Minimized development & integration time

Ideal for battery-powered mobile, XR and IoT devices

Why nearbAI?

Highly computationally efficient and flexible NPUs
  • Enable lightweight devices with long battery life ... run heavily optimized AI-based functions locally at ultra-low power
  • Enable truly immersive experiences ... achieve sensors-to-displays latency within the response time of the human senses
  • Enable smart and flexible capabilities ... fill the gap between “Swiss-army-knife” XR / AI mobile processor chips and limited-capability edge IoT / AI chips
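To make the latency point above concrete: a sensors-to-displays pipeline has to fit inside one display refresh period. The sketch below computes that budget in Python; the 90 Hz refresh rate and all per-stage timings are illustrative assumptions, not nearbAI measurements.

```python
# Rough sensors-to-displays latency budget for an XR pipeline.
# The refresh rate and every stage timing below are hypothetical,
# chosen only to illustrate the budgeting arithmetic.

DISPLAY_HZ = 90                          # assumed display refresh rate
frame_budget_ms = 1000.0 / DISPLAY_HZ    # per-frame budget in milliseconds

stages_ms = {
    "sensor readout":  2.0,   # hypothetical
    "NPU inference":   4.0,   # hypothetical
    "render":          3.0,   # hypothetical
    "display scanout": 1.5,   # hypothetical
}

total_ms = sum(stages_ms.values())
headroom_ms = frame_budget_ms - total_ms

print(f"frame budget:   {frame_budget_ms:.1f} ms")
print(f"pipeline total: {total_ms:.1f} ms (headroom {headroom_ms:.1f} ms)")
```

If the pipeline total exceeds the frame budget, either the display refresh rate or one of the stage latencies (typically inference) has to give, which is where the NPU's latency constraint comes from.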

Let's run a custom benchmark together.
Provide us with your use case:

• Quantized or unquantized NN model(s):
ONNX, TensorFlow (Lite), PyTorch, or Keras

• Constraints:
Average power & energy per inference, silicon area, latency, memory sizes, frame rate, image resolution, foundry + technology node
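Two of the constraints above are directly linked: at a given inference rate, an average power budget fixes the energy available per inference. A minimal sketch of that arithmetic, with assumed (not nearbAI-specified) numbers:

```python
# Relating the "average power" and "energy per inference" constraints.
# Both input numbers are illustrative assumptions.

avg_power_mw = 1.0     # assumed sub-1 mW always-on budget
frame_rate_hz = 10.0   # assumed inference rate

# energy (µJ) = power (mW) / rate (Hz) * 1000
energy_per_inference_uj = avg_power_mw / frame_rate_hz * 1000.0
print(f"{energy_per_inference_uj:.0f} µJ per inference")  # prints "100 µJ per inference"
```

This is why a benchmark request should specify at least two of power, frame rate and energy per inference: the third follows from the other two.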

Key Features

  • L3 optimizer for optimal power, area and latency balance
  • Power: supports sub-1 mW always-on visual / spatial applications
  • NN compiler for firmware updates in the field
  • Configurable number of parallel MACs in convolution engine: 16 to 8192
  • Zero-latency NN switching, e.g., a face ID NN receives only faces from a face detection NN, multiplexed with other NNs
  • Integrates seamlessly with a wide range of RISC-V and ARM processor cores
  • Top computational efficiency for crystallized AI functions, yet programmable and field-upgradable
  • Typ. 4 nm node for a mobile XR processor chip
  • Typ. 22 nm node for a compact extreme edge AI chip
  • Configurable MAC accuracy: independent coefficient and data quantization, 4 to 16-bit, single-bit granularity
  • Supports a model zoo of CNN / RNN / LSTM architectures, plus tailored customization
  • Long-term support: 5+ years
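To illustrate the configurable-quantization feature above (4 to 16-bit with single-bit granularity), here is a minimal symmetric uniform quantizer sketch in Python. The rounding scheme and scale choice are generic textbook assumptions, not a description of nearbAI's actual hardware quantization.

```python
# Minimal symmetric uniform quantization sketch (illustration only;
# the actual nearbAI quantization scheme may differ).

def quantize(values, bits):
    """Map floats to signed integers of the given bit-width."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 at 4-bit, 127 at 8-bit
    scale = max(abs(v) for v in values) / qmax # one scale for the whole tensor
    return [round(v / scale) for v in values], scale

def dequantize(qvalues, scale):
    return [q * scale for q in qvalues]

weights = [0.31, -0.72, 0.05, 0.99]
for bits in (4, 8, 16):                        # any width from 4 to 16 is valid
    q, s = quantize(weights, bits)
    err = max(abs(w - d) for w, d in zip(weights, dequantize(q, s)))
    print(f"{bits:2d}-bit: q={q}, max error={err:.5f}")
```

Running the loop shows the power/accuracy trade-off the feature exposes: each extra bit roughly halves the worst-case quantization error, and because coefficients and data are quantized independently, weights and activations can sit at different points on that curve.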

Benefits

  • Enable lightweight devices with long battery life
  • Enable truly immersive experiences
  • Enable smart and flexible capabilities

Block Diagram

[Block diagram: nearbAI IP core for ultra-low power AI-enabled devices]

Applications

  • Ideal for battery-powered mobile, XR and IoT devices

Deliverables

  • Standard off-the-shelf IP core(s)
  • Customized IP core(s)
  • IP core(s) + ASIC integration services
  • IP core(s) + full ASIC design services
  • Prototype

Technical Specifications

  • Foundry, Node: Independent
  • Availability: Available now