French Team Led by CEA-Leti Develops First Hybrid Memory Technology Enabling On-Chip AI Learning and Inference

‘Nature Electronics’ Paper Details System That Blends Best Traits Of Once-Incompatible Technologies—Ferroelectric Capacitors and Memristors

GRENOBLE, France, Sept. 25, 2025 – Breaking through a technological roadblock that has long limited efficient edge-AI learning, a team of French scientists has developed the first hybrid memory technology to support adaptive local training and inference of artificial neural networks.

In a paper titled “A Ferroelectric-Memristor Memory for Both Training and Inference” published in Nature Electronics, the team presents a new hybrid memory system that combines the best traits of two previously incompatible technologies, ferroelectric capacitors and memristors, into a single, CMOS-compatible memory stack. This novel architecture delivers a long-sought solution to one of edge AI’s most vexing challenges: how to perform both learning and inference on a chip without burning through energy budgets or exceeding hardware constraints.

Led by CEA-Leti and including scientists from several French microelectronics research centers, the project demonstrated that it is possible to perform on-chip training with competitive accuracy, sidestepping the need for off-chip updates and complex external systems. The team’s innovation enables edge systems and devices like autonomous vehicles, medical sensors, and industrial monitors to learn from real-world data as it arrives, adapting models on the fly while keeping energy consumption and hardware wear under tight control.

The Challenge: A No-Win Tradeoff

Edge AI demands both inference (reading data to make decisions) and learning (updating models based on new data). But until now, memory technologies could only do one well:

  • Memristors (resistive random access memories) excel at inference because they can store analog weights, are energy-efficient during read operations, and support in-memory computing.
  • Ferroelectric capacitors (FeCAPs) allow rapid, low-energy updates, but their read operations are destructive—making them unsuitable for inference.

As a result, hardware designers faced a choice: favor inference and outsource training to the cloud, or attempt on-chip training at the cost of high energy use and limited endurance.

Training at the Edge

The team’s guiding idea was that while the analog precision of memristors suffices for inference, it falls short for learning, which demands small, progressive weight adjustments.

“Inspired by quantized neural networks, we adopted a hybrid approach: Forward and backward passes use low-precision weights stored in analog in memristors, while updates are achieved using higher-precision FeCAPs. Memristors are periodically reprogrammed based on the most-significant bits stored in FeCAPs, ensuring efficient and accurate learning,” said Michele Martemucci, lead author of the paper.
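To make the scheme concrete, the following Python sketch illustrates this style of hybrid quantized training in software. It is not the authors’ implementation: the bit widths, conductance-level count, transfer period, and array sizes are assumptions chosen purely for illustration.

```python
# Conceptual sketch (not the authors' code): forward/backward passes use a
# low-precision weight copy (standing in for memristor conductances), while
# small updates accumulate in a higher-precision "hidden" copy (standing in
# for the digital weights stored in FeCAPs). All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

ANALOG_LEVELS = 16    # assumed number of memristor conductance levels
TRANSFER_EVERY = 100  # assumed period of the FeCAP -> memristor transfer
LR = 0.01             # learning rate for the high-precision updates

def quantize_msb(w, levels=ANALOG_LEVELS):
    """Keep only the most-significant information of the hidden weights,
    mimicking the periodic transfer of MSBs into memristor conductances."""
    scale = np.max(np.abs(w)) + 1e-12
    return np.round(w / scale * (levels // 2)) / (levels // 2) * scale

hidden_w = rng.normal(0.0, 0.1, size=(16, 4))  # high-precision accumulator (FeCAP-like)
analog_w = quantize_msb(hidden_w)              # low-precision copy (memristor-like)

x = rng.normal(size=(32, 16))                  # toy input batch
y = rng.normal(size=(32, 4))                   # toy regression targets

for step in range(1, 501):
    # Forward and backward passes use the low-precision (memristor-like) weights.
    pred = x @ analog_w
    grad = x.T @ (pred - y) / len(x)

    # Small, progressive updates land in the high-precision (FeCAP-like) weights.
    hidden_w -= LR * grad

    # Periodically reprogram the analog copy from the hidden weights' MSBs.
    if step % TRANSFER_EVERY == 0:
        analog_w = quantize_msb(hidden_w)
```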

The Breakthrough: One Memory, Two Personalities

The team engineered a unified memory stack made of silicon-doped hafnium oxide with a titanium scavenging layer. This dual-mode device can operate as a FeCAP or a memristor, depending on how it's electrically "formed."

  • The same memory unit can be used for precise digital weight storage (training) and analog weight expression (inference), depending on its state.
  • A digital-to-analog transfer method, requiring no dedicated DAC circuit, converts hidden weights stored in FeCAPs into conductance levels in memristors; an illustrative sketch of this mapping follows below.
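As a purely illustrative example of how most-significant bits could be mapped to conductance states without a dedicated DAC, the sketch below quantizes an 8-bit digital weight word onto a small set of evenly spaced conductance levels. The bit widths, conductance range, and function names are assumptions, not details taken from the paper.

```python
# Conceptual sketch (assumed values, not the published circuit): map the top
# bits of a digital weight stored in FeCAPs onto one of a few discrete
# memristor conductance levels.
G_MIN, G_MAX = 5e-6, 50e-6   # assumed conductance range, in siemens
TOTAL_BITS = 8               # assumed width of the hidden digital weight
MSB_BITS = 4                 # assumed number of most-significant bits transferred

def msbs(word: int) -> int:
    """Extract the most-significant bits of an unsigned digital weight word."""
    return word >> (TOTAL_BITS - MSB_BITS)

def target_conductance(word: int) -> float:
    """Map the MSBs onto one of 2**MSB_BITS evenly spaced conductance levels."""
    level = msbs(word)
    return G_MIN + (G_MAX - G_MIN) * level / (2**MSB_BITS - 1)

# Example: an 8-bit word 0b10110111 keeps MSBs 0b1011 (level 11 of 15).
print(f"{target_conductance(0b10110111):.2e} S")  # ~3.80e-05 S
```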

This hardware was fabricated and tested on an 18,432-device array using standard 130nm CMOS technology, integrating both memory types and their peripheral circuits on a single chip.

In addition to CEA-Leti, the research team included scientists from Université Grenoble Alpes, CEA-List, the French National Centre for Scientific Research (CNRS), the University of Bordeaux, Bordeaux INP, IMS France, Université Paris-Saclay, and the Center for Nanosciences and Nanotechnologies (C2N).

The team acknowledges funding support from the European Research Council (Consolidator Grant DIVERSE: 101043854) and from a France 2030 government grant (ANR-22-PEEL-0010).

Image caption: A single memory, which functions as both memristor and FeCAP, for neural network inference and training.

About CEA-Leti (France)

CEA-Leti, a technology research institute at CEA, is a global leader in miniaturization technologies enabling smart, energy-efficient and secure solutions for industry. Founded in 1967, CEA-Leti pioneers micro- and nanotechnologies, tailoring differentiating applicative solutions for global companies, SMEs and startups. CEA-Leti tackles critical challenges in healthcare, energy and digital migration. From sensors to data processing and computing solutions, CEA-Leti’s multidisciplinary teams deliver solid expertise, leveraging world-class pre-industrialization facilities. With a staff of more than 2,000 talents, a portfolio of 3,200 patents, 11,000 sq. meters of cleanroom space and a clear IP policy, the institute is based in Grenoble (France) and has offices in San Francisco (United States), Brussels (Belgium), Tokyo (Japan), Seoul (South Korea) and Taipei (Taiwan). CEA-Leti has launched 80 startups and is a member of the Carnot Institutes network. Follow us on www.leti-cea.com.

Technological expertise

CEA has a key role in transferring scientific knowledge and innovation from research to industry. This high-level technological research is carried out in particular in electronic and integrated systems, from microscale to nanoscale. It has a wide range of industrial applications in the fields of transport, health, safety and telecommunications, contributing to the creation of high-quality and competitive products.

For more information: www.cea.fr/english 
