- Optimized neural network inferencing for visual, spatial and other applications
- Unparalleled flexibility: customized & optimized for the customer’s use case
- Produces the optimal NPU IP core for the customer’s use case, trading off power, area, latency, and memories
- Minimized development & integration time
Ideal for battery-powered mobile, XR and IoT devices
Why nearbAI?
Highly computationally efficient and flexible NPUs
- Enable lightweight devices with long battery life ... run heavily optimized AI-based functions locally at ultra-low power
- Enable truly immersive experiences ... achieve sensors-to-displays latency within the response time of the human senses
- Enable smart and flexible capabilities ... fill the gap between “Swiss Army knife” XR / AI mobile processor chips and limited-capability edge IoT / AI chips
Let's do a custom benchmark together
Provide us with your use case:
• Quantized or unquantized NN model(s):
ONNX, TensorFlow (Lite), PyTorch, or Keras (hand-off sketches follow this list)
• Constraints:
Average power & energy per inference, silicon area, latency, memories, frame rate, image resolution, foundry + technology node (an example constraint set follows this list)
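A hand-off in any of the listed formats works; as a minimal sketch (assuming a PyTorch starting point), the snippet below exports a model to ONNX. MobileNetV2, the input shape, and the file name are illustrative placeholders, not a required format.

```python
# Minimal sketch: exporting a PyTorch model to ONNX for a benchmark hand-off.
# MobileNetV2, the input shape, and "model.onnx" are illustrative placeholders.
import torch
import torchvision

model = torchvision.models.mobilenet_v2(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # batch, channels, height, width

torch.onnx.export(
    model,           # model to export
    dummy_input,     # example input that fixes the graph's input shape
    "model.onnx",    # output file
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```

A quantized hand-off could be produced the same way with the TensorFlow Lite converter, e.g. using post-training dynamic-range quantization (the saved-model path here is a placeholder):

```python
# Minimal sketch: post-training dynamic-range quantization with TensorFlow Lite.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())
```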
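Similarly, the constraints could be communicated as a simple key-value list; everything in this sketch (field names, units, and values) is a hypothetical illustration, not an actual nearbAI input format.

```python
# Hypothetical constraint set accompanying the model(s); all fields illustrative.
use_case_constraints = {
    "avg_power_mW": 20.0,              # average power budget
    "energy_per_inference_mJ": 0.5,    # energy budget per inference
    "silicon_area_mm2": 1.0,           # NPU silicon area budget
    "latency_ms": 5.0,                 # end-to-end inference latency target
    "memories_KiB": 512,               # on-chip memory budget
    "frame_rate_fps": 30,              # target frame rate
    "image_resolution": (1920, 1080),  # input image width x height
    "foundry_technology_node": "example foundry, 16 nm",
}
```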