Panmnesia Kicks Off $30M Project to Redefine AI Infrastructure with Chiplets, Manycore Architectures, In-Memory Processing, and CXL

May 8, 2025 -- Panmnesia, a company developing solutions for AI infrastructure, is set to receive $30 million in funding for a project to revolutionize the architecture of AI data centers. As part of this effort, the company will develop chiplet-based modular accelerators for large-scale AI services, including large language models (LLMs), vector search, and recommendation systems.

Time to Rethink Infrastructure for Modern AI Workloads

As AI services become increasingly integrated into our daily lives, model accuracy has emerged as a critical factor influencing user engagement and overall revenue. To improve accuracy, many companies continue to scale up their models and datasets, and the infrastructure needed to support them is reaching unprecedented scales. The massive demand for computing and memory, often involving millions of GPUs in a single data center, is driving both costs and energy consumption to extreme levels. In addition, current AI infrastructure suffers from resource underutilization because it relies on GPUs, devices with a fixed ratio of compute to memory resources, regardless of the diverse resource demands of different AI workloads. To overcome these challenges, AI infrastructure must become significantly more efficient in terms of cost, power consumption, and resource utilization.

Overview of Panmnesia’s Solution

Project Overview

Panmnesia aims to address these challenges through this new R&D project. With $30M in support, the company plans to develop a next-generation chiplet-based AI accelerator and an integrated infrastructure system designed to efficiently support large-scale AI workloads. The accelerator can be flexibly configured to match workload demands, and it incorporates in-memory processing technology to minimize data movement. These components will be seamlessly integrated into Panmnesia's proprietary CXL full-system solution.

Concept Figure Representing Panmnesia’s Chiplet-based SoC.

Technology Highlights

Building on its existing expertise in memory and interconnect technologies such as CXL, Panmnesia is developing a next-generation system equipped with the following features to meet the demands of modern AI workloads:

1. Reusable and Flexible Chiplet-based Architecture

Panmnesia's AI accelerator consists of multiple chiplets, which serve as the building blocks of a system-on-chip (SoC). This chiplet-based architecture enables flexible scaling of compute and memory resources: compute and memory chiplets (also called 'tiles') are provisioned according to the demands of each AI workload, which optimizes resource utilization. The modular design also allows for faster development cycles through partial SoC modification: only the chiplets related to the required functionality need to be updated, while the rest can be reused without change. This significantly reduces development time, facilitating rapid adaptation to evolving industry trends.
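To make the provisioning idea concrete, here is a minimal sketch of how tile counts might be sized from a workload profile. Everything in it (the `WorkloadProfile` class, the `plan_tiles` helper, and the per-tile capacity constants) is a hypothetical illustration, not Panmnesia's actual design or tooling.

```python
from dataclasses import dataclass
from math import ceil

# Illustrative per-tile capacities; assumed values, not real specifications.
COMPUTE_PER_TILE_TFLOPS = 32
MEMORY_PER_TILE_GB = 16

@dataclass
class WorkloadProfile:
    """Hypothetical resource demands of one AI workload."""
    name: str
    compute_tflops: float  # required compute throughput
    memory_gb: float       # required memory capacity

def plan_tiles(w: WorkloadProfile) -> dict:
    """Size compute and memory tiles independently, so the SoC mix
    tracks the workload instead of a fixed compute-to-memory ratio."""
    return {
        "compute_tiles": ceil(w.compute_tflops / COMPUTE_PER_TILE_TFLOPS),
        "memory_tiles": ceil(w.memory_gb / MEMORY_PER_TILE_GB),
    }

# A memory-bound LLM and a lighter vector-search service end up with
# very different tile mixes built from the same blocks.
print(plan_tiles(WorkloadProfile("llm-inference", 256, 640)))  # 8 compute, 40 memory
print(plan_tiles(WorkloadProfile("vector-search", 64, 320)))   # 2 compute, 20 memory
```

The contrast with a GPU's fixed resource ratio is the point: sizing each tile type independently avoids overprovisioning one resource just to satisfy the other.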

2. Manycore Architecture and Vector Processor

Panmnesia's AI accelerator features two types of compute chiplets: (1) chiplets with a manycore architecture (called 'Core Tiles') and (2) chiplets with vector processors (called 'PE Tiles'). Customers can therefore optimize performance per watt by choosing the compute chiplet architecture that best fits their target workload. Core Tiles exploit massive parallelism and are well suited to highly parallel tasks, while PE Tiles are specialized for operations on vector data, such as distance calculations. Moreover, because the company plans to leverage advanced semiconductor process nodes for its SoCs, a single chip can integrate over a thousand cores, allowing it to meet the massive computational demands of modern AI applications efficiently.
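For context, the snippet below shows the kind of batched distance calculation, central to vector search, that maps naturally onto vector processors. Plain NumPy stands in for the hardware here; this implies nothing about Panmnesia's actual PE Tile instruction set or programming model.

```python
import numpy as np

def top_k_nearest(queries: np.ndarray, corpus: np.ndarray, k: int = 5) -> np.ndarray:
    """Batched squared-Euclidean distances followed by a top-k select.

    Each distance is an independent reduction over vector lanes, the
    streaming access pattern that vector processors are built for.
    """
    # ||q - c||^2 = ||q||^2 - 2 q.c + ||c||^2, evaluated for all pairs at once.
    d2 = (
        np.sum(queries**2, axis=1, keepdims=True)
        - 2.0 * (queries @ corpus.T)
        + np.sum(corpus**2, axis=1)
    )
    return np.argsort(d2, axis=1)[:, :k]  # indices of the k closest corpus vectors

rng = np.random.default_rng(0)
print(top_k_nearest(rng.normal(size=(4, 128)), rng.normal(size=(1000, 128))))
```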

3. In-Memory Processing

Data movement between memory and compute resources is one of the largest contributors to power consumption in AI infrastructure. To address this, Panmnesia incorporates in-memory processing technologies that eliminate unnecessary data movement, which significantly reduces the power consumed by data transfers during the processing of large-scale AI workloads.
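The toy model below captures only the general principle (ship the operation to the data rather than the data to the host), not Panmnesia's implementation; every number in it is an assumed illustration.

```python
# Toy model: bytes crossing the memory interface for a sum-reduction
# over N four-byte elements, under two execution styles.

N = 1_000_000_000   # one billion fp32 elements (assumed workload size)
ELEM_BYTES = 4

# Conventional: every element must be read out to the processor.
bytes_conventional = N * ELEM_BYTES

# In-memory processing: the reduction runs beside the data, so only the
# final result plus a small command packet crosses the interface
# (64 bytes of overhead is an arbitrary assumption).
bytes_in_memory = ELEM_BYTES + 64

print(f"conventional: {bytes_conventional / 1e9:.1f} GB moved")  # 4.0 GB
print(f"in-memory:    {bytes_in_memory} B moved")                # 68 B

# Transfer energy scales roughly with bytes moved, so eliminating the
# bulk readout is where the power savings come from.
```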

In addition to developing these new solutions, Panmnesia integrates its proprietary CXL full-system solutions, such as its CXL IP (intellectual property), into the system. Because CXL-based resource pooling enables on-demand expansion of computing and memory resources, customers can optimize the cost of their AI infrastructure.
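As a rough illustration of why pooling helps with cost, the hypothetical sketch below contrasts provisioning memory per node with drawing it on demand from a shared pool, such as one sitting behind a CXL switch. The `MemoryPool` class and all capacities are invented for illustration; this is not a real CXL API.

```python
class MemoryPool:
    """Hypothetical shared memory pool, e.g. reachable over CXL."""
    def __init__(self, total_gb: int):
        self.total_gb = total_gb
        self.used_gb = 0

    def borrow(self, gb: int) -> bool:
        """Grant memory on demand; fail only when the whole pool is exhausted."""
        if self.used_gb + gb > self.total_gb:
            return False
        self.used_gb += gb
        return True

# Fixed provisioning: 8 nodes x 512 GB each, bought up front even if
# most nodes never touch their full allotment.
fixed_total_gb = 8 * 512

# Pooled provisioning: size one pool for the aggregate peak instead.
pool = MemoryPool(total_gb=2048)
node_peaks_gb = [640, 128, 256, 64, 512, 96, 128, 192]  # assumed per-node peaks
assert all(pool.borrow(gb) for gb in node_peaks_gb)

print(f"fixed: {fixed_total_gb} GB provisioned; "
      f"pooled: {pool.total_gb} GB ({pool.used_gb} GB in use)")
```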

Panmnesia’s Solution for Large-Scale AI

A representative from Panmnesia stated, “We were able to secure this project thanks to the recognition of our existing expertise in memory and interconnect technologies, as well as our blueprint for revolutionizing AI infrastructure.” The representative added, “We expect this to be a cornerstone in developing unprecedented AI infrastructure solutions that will fundamentally transform data center architecture.”
