Vendor: BM Labs Category: Edge AI Accelerator

Neuromorphic Processor IP

Overview

License our analog in-memory compute macros (e.g., 32×32 X1 crossbars) for integration into your ASIC or SoC.

Key features

  • In-Memory Compute: Efficient analog MACs for AI workloads
  • Compact Footprint: 0.28 mm² including peripheral circuitry
  • Wishbone Interface: Easy integration with standard digital buses
  • Ready for Tapeout: Fully synthesized and foundry-compatible

Block Diagram

Benefits

Neuromorphic X1 is a compact and efficient analog in-memory compute macro designed for next-generation edge AI applications. Built on a 32×32 1T1R crossbar array, it leverages analog weights to perform multiply-accumulate operations directly in memory, minimizing data movement and maximizing energy efficiency.

With integrated decoders and sense amplifiers, the X1 macro delivers 1kb of analog weight storage in a compact 0.28 mm² area. Its Wishbone bus compatibility ensures seamless integration into digital SoCs, including Caravel-based platforms.
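The in-memory MAC principle described above can be sketched as a simple behavioral model: inputs are applied as row voltages, weights are stored as cell conductances, and each column current accumulates the products per Kirchhoff's current law. The 32×32 array size matches the X1 crossbar, but the conductance range and voltage encoding below are illustrative assumptions, not the macro's actual electrical parameters.

```python
import numpy as np

# Behavioral sketch of a 32x32 1T1R crossbar performing an analog MAC.
# Conductance/voltage ranges are made-up illustrative values.
ROWS, COLS = 32, 32
rng = np.random.default_rng(0)

# Programmed analog weights, modeled as cell conductances (siemens).
G = rng.uniform(1e-6, 1e-4, size=(ROWS, COLS))

# Input activations encoded as row voltages (volts).
V = rng.uniform(0.0, 0.2, size=ROWS)

# Kirchhoff's current law: each column current is the dot product of the
# input voltages with that column's conductances -- the multiply-accumulate
# happens in the array itself, with no weight movement.
I_out = V @ G  # shape (32,): one accumulated current per column

print(I_out.shape)
```

In the real macro the column currents would be digitized by the integrated sense amplifiers; the model above only captures the dataflow that makes the energy savings possible.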

Applications

Neuromorphic X1 enables AI processing at the edge with ultra-low power and area, making it ideal for sensor-rich, power-constrained environments.

Files

Note: some files may require an NDA depending on provider policy.

Specifications

Identity

Part Number: Neuromorphic X1
Vendor: BM Labs

Provider

BM Labs
HQ: Singapore
BM Labs is a leading-edge semiconductor design company specializing in energy-efficient (6×) next-generation GPUs built on in-memory compute.

Learn more about Edge AI Accelerator IP cores

RISC-V Based TinyML Accelerator for Depthwise Separable Convolutions in Edge AI

While lightweight architectures like MobileNetV2 employ Depthwise Separable Convolutions (DSC) to reduce computational complexity, their multi-stage design introduces a critical performance bottleneck inherent to layer-by-layer execution: the high energy and latency cost of transferring intermediate feature maps to either large on-chip buffers or off-chip DRAM. To address this memory wall, this paper introduces a novel hardware accelerator architecture that utilizes a fused pixel-wise dataflow.
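The multiply-count savings behind Depthwise Separable Convolutions can be checked with quick arithmetic: a standard K×K convolution costs K·K·C_in·C_out multiplies per output pixel, while DSC splits it into a depthwise K×K pass (one filter per input channel) plus a 1×1 pointwise pass. The layer shape below is a made-up MobileNetV2-style example, not one taken from the paper.

```python
# Multiply counts: standard conv vs. depthwise separable conv (DSC).
# Layer dimensions are illustrative assumptions.
H = W = 56            # output feature-map height/width
K = 3                 # kernel size
C_in, C_out = 64, 128

standard = K * K * C_in * C_out * H * W   # dense KxK conv
depthwise = K * K * C_in * H * W          # one KxK filter per input channel
pointwise = C_in * C_out * H * W          # 1x1 conv mixes channels
dsc = depthwise + pointwise

print(standard, dsc, round(standard / dsc, 1))
```

For this shape the DSC form needs roughly 8× fewer multiplies, which is why the intermediate feature maps between the two stages (the memory traffic the accelerator's fused dataflow targets) become the dominant cost.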

Accelerating Your Development: Simplify SoC I/O with a Single Multi-Protocol SerDes IP

Enter the Multi-Protocol SerDes (Serializer/Deserializer)—a flexible, reusable IP block that allows a single PHY to support multiple serial communication protocols, such as PCIe, SATA, Ethernet, USB, and more. This approach enables SoC vendors to meet diverse customer requirements and application needs without redesigning I/O for each target market.

Frequently asked questions about Edge AI Accelerator IP cores

What is Neuromorphic Processor IP?

Neuromorphic Processor IP is an Edge AI Accelerator IP core from BM Labs listed on Semi IP Hub.

How should engineers evaluate this Edge AI Accelerator?

Engineers should review the overview, key features, supported foundries and nodes, maturity, deliverables, and provider information before shortlisting this Edge AI Accelerator IP.

Can this semiconductor IP be compared with similar products?

Yes. Buyers can compare this product with similar semiconductor IP cores or IP families based on category, provider, process options, and structured technical specifications.
