IntelliProp First to Market with Memory Fabric Based on CXL; Driving Most Disruptive Technology to Hit Data Centers in Decades
Unveils IntelliProp Omega Memory Fabric Chips, Which Allow for Dynamic Allocation and Sharing of Memory Across Compute Domains – Both In and Out of the Server
Longmont, Colo. – September 21, 2022 – IntelliProp, a leading innovator of composable data center transformation technology, today announced its intent to deliver its disruptive Omega Memory Fabric chips. The chips incorporate the Compute Express Link™ (CXL) Standard, along with IntelliProp’s innovative Fabric Management Software and Network Attached Memory (NAM) system. In addition, the company announced the availability of three field-programmable gate array (FPGA) solutions built with its Omega Memory Fabric.
The Omega Memory Fabric eliminates memory bottlenecks and allows for dynamic allocation and sharing of memory across compute domains, both in and out of the server, delivering on the promise of Composable Disaggregated Infrastructure (CDI) and rack-scale architecture, an industry first. IntelliProp’s memory-agnostic innovation will lead to the adoption of composable memory and transform data center energy, performance, efficiency and cost.
As data continues to grow, database and AI applications are increasingly constrained by memory bandwidth and capacity. At the same time, billions of dollars are wasted on stranded and underutilized memory. According to a recent Carnegie Mellon / Microsoft report [1], Google stated that average DRAM utilization in its data centers is 40%, and Microsoft Azure reported that 25% of its server DRAM is stranded.
“IntelliProp’s efforts in extending CXL connectivity beyond simple memory expansion demonstrate what is achievable in scaled-out, composable data center resources,” said Jim Pappas, Chairman of the CXL Consortium. “Their advancements on both CXL and Gen-Z hardware and management software components have strengthened the CXL ecosystem.”
Experts agree that memory disaggregation increases memory utilization and reduces stranded or underutilized memory. Today’s remote direct memory access (RDMA)-based disaggregation has too much overhead for most workloads and virtualization solutions are unable to provide transparent latency management. The CXL standard offers low-overhead memory disaggregation and provides a platform to manage latency.
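To make the contrast concrete: because CXL memory is mapped into the host’s address space, an application reaches it with ordinary loads and stores rather than the explicit transfer requests RDMA requires. The sketch below is an illustration only, not IntelliProp’s software; it assumes the fabric-attached pool has been surfaced by Linux as a device-DAX region at a hypothetical path such as /dev/dax0.0.

```c
/*
 * Minimal sketch: touching CXL-attached memory with plain loads and stores,
 * assuming the pool is exposed as a Linux device-DAX character device.
 * The path /dev/dax0.0 is hypothetical and platform-dependent. Unlike RDMA,
 * there is no explicit transfer or completion queue; the mapping behaves
 * like ordinary memory, just with different latency.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t len = 2UL << 20;            /* 2 MiB, a typical devdax alignment */
    int fd = open("/dev/dax0.0", O_RDWR);    /* hypothetical CXL memory device */
    if (fd < 0) { perror("open"); return 1; }

    uint64_t *mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    mem[0] = 0xC0FFEE;                                       /* ordinary store */
    printf("read back: %#lx\n", (unsigned long)mem[0]);      /* ordinary load  */

    munmap(mem, len);
    close(fd);
    return 0;
}
```

From the program’s point of view, the only visible difference from local DRAM is access latency, which is why the standard leaves room for the latency management the paragraph above describes.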
“History tends to repeat itself. NAS and SAN evolved to solve the problems of over/under storage utilization, performance bottlenecks and stranded storage. The same issues are occurring with memory,” stated John Spiers, CEO, IntelliProp. “Our trailblazing approach to CXL technology unlocks memory bottlenecks and enables next-generation performance, scale and efficiency for database and AI applications. For the first time, high-bandwidth, petabyte-level memory can be deployed for vast in-memory datasets, minimizing data movement, speeding computation and greatly improving utilization. We firmly believe IntelliProp’s technology will drive disruption and transformation in the data center, and we intend to lead the adoption of composable memory.”
Omega Memory Fabric / NAM System, Powered by IntelliProp’s ASIC
IntelliProp’s Omega Memory Fabric and Management Software enable enterprise composability of memory and CXL devices, including storage. Powered by IntelliProp’s ASIC, the Omega Memory Fabric based NAM System and software expand the connection and sharing of memory inside and outside the server, placing memory pools where they are needed. The Omega NAM is well suited for AI, ML, big data, HPC, cloud and hyperscale/enterprise data center environments, specifically targeting applications that require large amounts of memory.
“In a survey IDC completed in early 2022, almost half of enterprise respondents indicated that they anticipate memory-bound limitations for key enterprise applications over time,” said Eric Burgener, research vice president, Infrastructure Systems, Platforms and Technologies Group, IDC. “New memory pooling technologies like what IntelliProp is offering with their NAM system will help to address this concern, enabling dynamic allocation and sharing of memory across servers with high performance and without hardware slot limitations. The composable disaggregated infrastructure market that IntelliProp is playing in is an exciting new market that is expected to grow at a 28.2 percent five-year compound annual growth rate to crest at $4.8 billion by 2025.”
With IntelliProp’s Omega Memory Fabric and Management Software, hyperscale and enterprise customers will be able to take advantage of multiple tiers of memory with predetermined latency. The system will enable large memory pools to be placed where needed, allowing multiple servers to access the same dataset. It also allows new resources to be added with a simple hot plug, eliminating server downtime and rebooting for upgrades.
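As a rough illustration of how such tiers look to software today, CXL-attached capacity is commonly exposed to Linux as a CPU-less NUMA node, so an application (or the allocator above it) can steer hot data to local DRAM and colder, capacity-hungry data to the fabric tier. The node numbers below are hypothetical, and this is standard libnuma usage, not IntelliProp’s Fabric Management Software.

```c
/*
 * Rough sketch of host-side tier placement, assuming the fabric/CXL-attached
 * pool appears in Linux as a CPU-less NUMA node. Node numbers are
 * hypothetical. Build with: cc tier.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define LOCAL_DRAM_NODE 0   /* hypothetical: directly attached DRAM   */
#define CXL_POOL_NODE   2   /* hypothetical: fabric/CXL-attached pool */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA support not available\n");
        return 1;
    }

    size_t sz = 64UL << 20;  /* 64 MiB working set */

    /* Hot data goes to the low-latency local tier ...                  */
    char *hot = numa_alloc_onnode(sz, LOCAL_DRAM_NODE);
    /* ... while colder, capacity-hungry data lands in the shared pool. */
    char *cold = numa_alloc_onnode(sz, CXL_POOL_NODE);
    if (!hot || !cold) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    memset(hot, 0xAA, sz);
    memset(cold, 0x55, sz);
    printf("placed %zu MiB on node %d and %zu MiB on node %d\n",
           sz >> 20, LOCAL_DRAM_NODE, sz >> 20, CXL_POOL_NODE);

    numa_free(hot, sz);
    numa_free(cold, sz);
    return 0;
}
```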
“IntelliProp is on to something big. CXL disaggregation is key, as half of the cost of a server is memory. With CXL disaggregation, they are taking memory sharing to a whole new level,” said Marc Staimer, Dragon Slayer analyst. “IntelliProp’s technology makes large pools of memory shareable between external systems. That has immense potential to boost data center performance and efficiency while reducing overall system costs.”
Omega Memory Fabric Features, incorporating the CXL Standard
- Scale and share memory outside the server
- Dynamic multi-pathing and allocation of memory
- End-to-end (E2E) security using AES-XTS 256, with added data integrity
- Supports non-tree topologies for peer-to-peer
- Direct path from GPU to memory
- Management scaling for large deployments using multi-fabrics/subnets and distributed managers
- Direct memory access (DMA) enables efficient data movement between memory tiers without tying up CPU cores
- Memory agnostic and up to 10x faster than RDMA
“AI is one of the world’s most demanding applications in terms of compute and storage. The prospect of using ML in genomics, for example, requires exascale compute and low-latency access to petabytes of storage. The ability to dynamically allocate shareable pools of memory over the network and across compute domains is a feature we are very excited about,” said Nate Hayes, Co-Founder and Board Member at RISC AI. “We think the fabric from IntelliProp provides the latency, scale and composable disaggregated infrastructure for the next-generation AI training platform we are developing at RISC AI, and this is why we are planning to integrate IntelliProp’s technology into the high-performance RISC-V processors that we will be manufacturing.”
Omega Memory Fabric Solutions Bring Future CXL Advantages to Data Centers
IntelliProp unveiled three FPGA solutions as part of its Omega Fabric product suite. The solutions connect CXL devices to CXL hosts, allowing data centers to increase performance, scale across dozens to thousands of host nodes, consume less energy because data travels over fewer hops, and mix shared DRAM (fast memory) with shared SCM (slow memory) for a lower total cost of ownership (TCO).
Omega Memory Fabric Solutions
- Omega Adapter
  - Enables the pooling and sharing of memory across servers
  - Connects to the IntelliProp NAM array
- Omega Switch
  - Enables the connection of multiple NAM arrays to multiple servers through a switch
  - Targeted for large deployments of servers and memory pools
- Omega Fabric Manager (open source)
  - Enables key fabric management capabilities:
    - End-to-end encryption over CXL, with data integrity, to prevent applications from seeing the contents of other applications’ memory
    - Dynamic multi-pathing for redundancy, with automatic failover if links go down
    - Support for non-tree topologies for peer-to-peer uses such as GPU-to-GPU computing and a GPU direct path to memory
    - Direct Memory Access (DMA) for data movement between memory tiers without using the CPU
Availability
The IntelliProp Omega Memory Fabric solutions are available as FPGA versions and will have the full features of the Omega Fabric architecture. The IntelliProp Omega ASIC, based on CXL technology, will be available in 2023.
Resources
More about IntelliProp Omega Fabric Solutions
[1] Source: Carnegie Mellon University, Microsoft Research and Microsoft Azure report, First-generation Memory Disaggregation for Cloud Platforms, March 2022.
About IntelliProp
IntelliProp is a Colorado-based company founded in 1999 to provide ASIC design and verification services for the data storage and memory industry. Today, IntelliProp is leading the composable data center transformation, fundamentally changing the performance, efficiency and cost of data centers. IntelliProp continues to gain recognition as a leading expert in the data storage industry and actively participates in standards groups developing next-generation memory infrastructure. www.intelliprop.com