Vendor: AiM Future, Inc. Category: Edge AI Accelerator

Performance AI Accelerator for Edge Computing

Overview

The NeuroMosAIc Processor (NMP) family shatters the barriers to deploying ML by delivering a general-purpose architecture and a simple programmer's model that support virtually any class of neural network architecture and use case.

Our unique differentiation starts with the ability to execute multiple AI/ML models simultaneously, significantly expanding the realm of capability over existing approaches. This advantage comes from the co-developed NeuroMosAIc Studio software, which dynamically allocates hardware resources to match the target workload, resulting in highly optimized, low-power execution. Designers may also select the optional on-device training acceleration extension, enabling iterative learning after deployment. This key capability cuts the cord to the cloud while improving accuracy, efficiency, customization, and personalization without reliance on costly model retraining and redeployment, thereby extending device lifecycles.

Key features

  • Up to 16 TOPS
  • Up to 16 MB Local Memory
  • RISC-V/Arm Cortex-R or A 32-bit CPU
  • 3 x AXI4, 128b (Host, CPU & Data)

Block Diagram

Benefits

  • As the highest-performance member of the NeuroMosAIc Processor family, the NMP-750 delivers over 16 TOPS in its largest configuration, making it an ideal choice for edge and edge-network devices. A smaller configuration option allows engineers to scale back performance where area and power are the primary design criteria.
  • Numerous architectural advances yield higher convolution throughput and 2× compute density while lowering total area by 25%. The addition of Mish and Swish activation function support extends efficiency, and an upgraded RISC-V controller delivers 4× the initialization and post-processing performance of the NMP-500. Alternatively, designers may elect to use an Arm® Cortex®-M or Cortex-A for further flexibility and software extension.
  • The patented and co-developed hardware and software architecture enables end-user flexibility to mold multiple models to the accelerator resources to achieve simultaneous, sequential or event-based requirements.

Applications

  • Mobility and Autonomous Control
  • Process, Building, and Factory Automation
  • Multi-Camera Stream Processing
  • Spectral Efficiency and Energy Management

Files

Note: some files may require an NDA depending on provider policy.

Specifications

Identity

Part Number
NMP-750
Vendor
AiM Future, Inc.
Type
Silicon IP

Provider

AiM Future, Inc.
HQ: United States
AiM Future specializes in machine learning and deep learning accelerators for the distributed intelligent edge. The company was spun out of LG Electronics R&D, where it was responsible for strategic semiconductor IP addressing the premium consumer electronics portfolio. In June 2023, it completed a Series A investment round from a group of leading Korean venture capital firms, including L & S Venture Capital, Hi Investment Partners, Daedeok Venture Partners, KB Investment, and We Ventures. Its flagship NeuroMosAIc architecture achieved production silicon in 2019 and is shipping in numerous LG Electronics products, including the MoodUp line of refrigerators. The company has since executed several license agreements with partners around the world and released the next generation of NeuroMosAIc Processors in Q4 2023.

Learn more about Edge AI Accelerator IP cores

RISC-V Based TinyML Accelerator for Depthwise Separable Convolutions in Edge AI

While lightweight architectures like MobileNetV2 employ Depthwise Separable Convolutions (DSC) to reduce computational complexity, their multi-stage design introduces a critical performance bottleneck inherent to layer-by-layer execution: the high energy and latency cost of transferring intermediate feature maps to either large on-chip buffers or off-chip DRAM. To address this memory wall, this paper introduces a novel hardware accelerator architecture that utilizes a fused pixel-wise dataflow.
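The complexity trade-off described above can be made concrete with some back-of-the-envelope arithmetic: a DSC replaces one dense K×K convolution with a depthwise K×K pass plus a pointwise 1×1 pass, cutting multiply-accumulate (MAC) count by roughly a factor of 1/C_out + 1/K², at the cost of an intermediate feature map between the two stages. A minimal sketch, with hypothetical layer dimensions not taken from the paper:

```python
# Rough MAC-count comparison showing why MobileNetV2-style Depthwise
# Separable Convolutions (DSC) reduce compute versus a standard
# convolution. Layer dimensions below are illustrative examples only.

def standard_conv_macs(h, w, c_in, c_out, k):
    # Every output pixel mixes all input channels through a KxK kernel.
    return h * w * c_in * c_out * k * k

def dsc_macs(h, w, c_in, c_out, k):
    # Depthwise stage: one KxK kernel per input channel (no channel mixing).
    depthwise = h * w * c_in * k * k
    # Pointwise (1x1) stage: channel mixing only.
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

h, w, c_in, c_out, k = 56, 56, 128, 128, 3
std = standard_conv_macs(h, w, c_in, c_out, k)
sep = dsc_macs(h, w, c_in, c_out, k)
print(f"standard: {std:,} MACs, DSC: {sep:,} MACs, ratio: {std / sep:.1f}x")

# The saving is not free: the depthwise stage emits an intermediate
# feature map of h * w * c_in elements that must be buffered between
# stages -- the memory-wall cost a fused pixel-wise dataflow avoids.
intermediate_elems = h * w * c_in
print(f"intermediate feature map: {intermediate_elems:,} elements")
```

For these dimensions the DSC needs roughly 8× fewer MACs, matching the 1/C_out + 1/K² ≈ 0.12 factor, while the 401,408-element intermediate map illustrates the buffering cost that layer-by-layer execution incurs.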

Accelerating Your Development: Simplify SoC I/O with a Single Multi-Protocol SerDes IP

Enter the Multi-Protocol SerDes (Serializer/Deserializer)—a flexible, reusable IP block that allows a single PHY to support multiple serial communication protocols, such as PCIe, SATA, Ethernet, USB, and more. This approach enables SoC vendors to meet diverse customer requirements and application needs without redesigning I/O for each target market.

Frequently asked questions about Edge AI Accelerator IP cores

What is Performance AI Accelerator for Edge Computing?

Performance AI Accelerator for Edge Computing is an Edge AI Accelerator IP core from AiM Future, Inc. listed on Semi IP Hub.

How should engineers evaluate this Edge AI Accelerator?

Engineers should review the overview, key features, supported foundries and nodes, maturity, deliverables, and provider information before shortlisting this Edge AI Accelerator IP.

Can this semiconductor IP be compared with similar products?

Yes. Buyers can compare this product with similar semiconductor IP cores or IP families based on category, provider, process options, and structured technical specifications.
