Vendor: easics
Category: Edge AI Accelerator

IP cores for ultra-low power AI-enabled devices

Each nearbAI core is an ultra-low power neural processing unit (NPU) and comes with an optimizer / neural network compiler.

Overview

The nearbAI core is bundled with an optimizer / neural network compiler and provides immediate visual and spatial feedback based on sensory inputs, a necessity for live augmentation of the human senses.

  • Optimized neural network inferencing for visual, spatial and other applications
  • Unparalleled flexibility: customized & optimized for the customer’s use case
  • Produces the optimal NPU IP core for the customer’s use case, trading off power, area, latency and memories
  • Minimized development & integration time

Ideal for battery-powered mobile, XR and IoT devices

Why nearbAI?

Highly computationally efficient and flexible NPUs
  • Enable lightweight devices with long battery life ... with ultra-low power, run heavily optimized AI-based functions locally
  • Enable truly immersive experiences ... achieve sensors-to-displays latency within the response time of the human senses
  • Enable smart and flexible capabilities ... fill the gap between “Swiss-army-knife” XR / AI mobile processor chips and limited-capability edge IoT / AI chips

Let’s do a custom benchmark together. Provide us with your use case:

• Quantized or unquantized NN model(s):
ONNX, TensorFlow (Lite), PyTorch, or Keras

• Constraints:
Average power & energy per inference, silicon area, latency, memories, frame rate, image resolution, foundry + technology node
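For a sense of how these constraints interact, the sketch below (hypothetical numbers, not easics figures) shows how average power, frame rate, and energy per inference are related for an always-on NPU running one inference per frame:

```python
# Hypothetical illustration of how the benchmark constraints relate:
# for one inference per frame, P_avg = E_per_inference * frame_rate.

def energy_per_inference_uj(avg_power_mw: float, fps: float) -> float:
    """Energy per inference in microjoules, from average power (mW) and frame rate (fps)."""
    return (avg_power_mw * 1e-3) / fps * 1e6  # W / (frames/s) -> J/frame -> uJ/frame

# A sub-1 mW always-on visual application at 10 frames per second
# leaves a budget of about 100 uJ per inference:
print(energy_per_inference_uj(1.0, 10.0))
```

Working backwards the same way, a target energy per inference and frame rate fix the average power budget, which is why all three appear together in the constraint list.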

Key features

  • L3 optimizer for optimal power, area and latency balance
  • Power: supports sub-1 mW always-on visual / spatial applications
  • NN compiler for firmware updates in the field
  • Configurable number of parallel MACs in convolution engine: 16 to 8192
  • Zero-latency NN switching - e.g., face ID NN gets faces only from face detection NN, multiplexed with other NNs
  • Integrates seamlessly with a wide range of RISC-V and ARM processor cores
  • Top computational efficiency for crystallized AI functions, yet programmable and field-upgradable
  • Typical process nodes: 4 nm for a mobile XR processor chip, 22 nm for a compact extreme edge AI chip
  • Configurable MAC accuracy: independent coefficient and data quantization, 4 to 16-bit, single-bit granularity
  • Supports a model zoo of CNN / RNN / LSTM networks, plus tailored customization
  • Long-term support: 5+ years
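To make the configurable MAC accuracy feature concrete, here is a hedged sketch (not the easics compiler’s actual algorithm) of symmetric uniform quantization at an arbitrary bit width, mirroring the 4- to 16-bit, single-bit-granularity range above:

```python
import numpy as np

def quantize(weights: np.ndarray, bits: int) -> np.ndarray:
    """Quantize to signed integers of the given bit width, then dequantize.

    Illustrative only: a generic symmetric scheme, not the nearbAI compiler's.
    """
    assert 4 <= bits <= 16, "configurable range quoted in the feature list"
    qmax = 2 ** (bits - 1) - 1               # e.g. 7 at 4-bit, 32767 at 16-bit
    scale = np.max(np.abs(weights)) / qmax   # map largest weight onto qmax
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale                         # dequantized approximation

w = np.array([0.9, -0.31, 0.07, 0.5])
for b in (4, 8, 16):
    err = np.max(np.abs(quantize(w, b) - w))
    print(f"{b}-bit max error: {err:.5f}")   # error shrinks as bit width grows
```

Independent coefficient and data quantization means this bit-width choice can differ per operand, which is the trade-off the optimizer explores against area and power.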

Block Diagram

Benefits

  • Enable lightweight devices with long battery life
  • Enable truly immersive experiences
  • Enable smart and flexible capabilities

Applications

  • Ideal for battery-powered mobile, XR and IoT devices

What’s Included?

  • Standard off-the-shelf IP core(s)
  • Customized IP core(s)
  • IP core(s) + ASIC integration services
  • IP core(s) + full ASIC design services
  • Prototype

Files

Note: some files may require an NDA depending on provider policy.

Specifications

Identity

Part Number
nearbAI
Vendor
easics
Type
Silicon IP

Provider

easics
HQ: Belgium
easics is a market leader in ASIC / SoC design and supply services. easics licenses nearbAI™, its semiconductor intellectual property (IP) product line for implementing artificial intelligence (AI) at the extreme edge, close to the image sensors. End markets include mobile, consumer, Internet-of-Things (IoT), healthcare, automotive, industrial and measurement equipment. Founded in 1991, easics has been in business for over 30 years. It is an independent company headquartered in Leuven, Belgium, with an office in Silicon Valley (California), and is ISO 9001:2015 certified.

Learn more about Edge AI Accelerator IP cores

RISC-V Based TinyML Accelerator for Depthwise Separable Convolutions in Edge AI

While lightweight architectures like MobileNetV2 employ Depthwise Separable Convolutions (DSC) to reduce computational complexity, their multi-stage design introduces a critical performance bottleneck inherent to layer-by-layer execution: the high energy and latency cost of transferring intermediate feature maps to either large on-chip buffers or off-chip DRAM. To address this memory wall, this paper introduces a novel hardware accelerator architecture that utilizes a fused pixel-wise dataflow.
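The complexity reduction from Depthwise Separable Convolutions can be sketched with standard MobileNet-style MAC accounting (generic arithmetic, not the paper’s accelerator model): a k×k convolution costs H·W·C_in·C_out·k² MACs, while DSC replaces it with a depthwise k×k pass plus a 1×1 pointwise pass, for a reduction factor of 1/C_out + 1/k²:

```python
# Generic DSC cost accounting (MobileNet-style), illustrative layer sizes.

def standard_macs(h, w, c_in, c_out, k):
    """MACs for a standard k x k convolution on an h x w feature map."""
    return h * w * c_in * c_out * k * k

def dsc_macs(h, w, c_in, c_out, k):
    """MACs for a depthwise k x k pass plus a 1 x 1 pointwise pass."""
    return h * w * c_in * (k * k + c_out)

# Hypothetical layer: 112x112 map, 32 -> 64 channels, 3x3 kernel.
h = w = 112
c_in, c_out, k = 32, 64, 3
ratio = dsc_macs(h, w, c_in, c_out, k) / standard_macs(h, w, c_in, c_out, k)
print(f"DSC uses {ratio:.3f}x the MACs of a standard convolution")
```

Note that this counts arithmetic only; as the paper argues, the intermediate feature map between the depthwise and pointwise stages is exactly the traffic a fused pixel-wise dataflow avoids spilling to buffers or DRAM.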

Accelerating Your Development: Simplify SoC I/O with a Single Multi-Protocol SerDes IP

Enter the Multi-Protocol SerDes (Serializer/Deserializer)—a flexible, reusable IP block that allows a single PHY to support multiple serial communication protocols, such as PCIe, SATA, Ethernet, USB, and more. This approach enables SoC vendors to meet diverse customer requirements and application needs without redesigning I/O for each target market.

Frequently asked questions about Edge AI Accelerator IP cores

What is “IP cores for ultra-low power AI-enabled devices”?

“IP cores for ultra-low power AI-enabled devices” is an Edge AI Accelerator IP core from easics listed on Semi IP Hub.

How should engineers evaluate this Edge AI Accelerator?

Engineers should review the overview, key features, supported foundries and nodes, maturity, deliverables, and provider information before shortlisting this Edge AI Accelerator IP.

Can this semiconductor IP be compared with similar products?

Yes. Buyers can compare this product with similar semiconductor IP cores or IP families based on category, provider, process options, and structured technical specifications.
