Up to 50% main memory bandwidth acceleration
The Ziptilion Bandwidth IP accelerates main memory bandwidth by up to 50%.
Overview
The Ziptilion Bandwidth IP accelerates main memory bandwidth by up to 50%. The IP core packages a novel, proprietary technology that accelerates the limited off-chip bandwidth of main memory through real-time, general-purpose, on-the-fly memory data compression. The product benefit is significantly more main memory bandwidth at unmatched power efficiency.
Ziptilion Bandwidth IP is integrated into the memory subsystem of the SoC, close to the memory controller, so that it can intercept all memory traffic to/from DRAM and compress and decompress the data on-the-fly. The effect of compression is transparent to the CPU/accelerator subsystem as well as to the operating system and applications. Similarly, the memory controller is unaware that the transmitted/received memory data is compressed. In essence, data compression and decompression, compaction, and addressing of the compressed memory space are all handled automatically, transparently, and in hardware by the IP.
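The transparent interception described above can be illustrated with a minimal behavioral model (a sketch only, not the actual IP; the class and method names here are hypothetical). The requester-side read/write interface is unchanged, while the data stored in "DRAM" is compressed on the write path and decompressed on the read path — here using `zlib` purely as a stand-in compressor:

```python
import zlib


class CompressedMemory:
    """Behavioral sketch of inline compression sitting between the
    interconnect and the memory controller. The requester-side API is
    unchanged; compression happens invisibly on the way to 'DRAM'."""

    def __init__(self):
        self._dram = {}  # line address -> compressed payload

    def write(self, addr: int, data: bytes) -> None:
        # Write path: compress the cache line before it reaches
        # the memory controller.
        self._dram[addr] = zlib.compress(data)

    def read(self, addr: int) -> bytes:
        # Read path: fetch the compressed payload and decompress
        # on-the-fly; the requester sees plain data.
        return zlib.decompress(self._dram[addr])

    def stored_bytes(self) -> int:
        # Bytes actually held behind the memory controller.
        return sum(len(v) for v in self._dram.values())


mem = CompressedMemory()
line = bytes(64)  # a zero-filled 64-byte cache line compresses well
mem.write(0x1000, line)
assert mem.read(0x1000) == line        # transparent to the requester
assert mem.stored_bytes() < len(line)  # fewer bytes cross the DRAM pins
```

The point of the sketch is the contract, not the compressor: the requester and the backing store never need to know about each other's data representation, which is why the scheme is transparent to the OS and applications.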
Key features
- Bandwidth acceleration: 25-50%
- Performance acceleration: 10-25%
- Compression ratio: 2-3x across diverse data sets
- Frequency: DDR4/DDR5 DRAM speed
- IP area: Starting at 0.3 mm² (@ 5nm TSMC)
- Memory technologies supported: (LP)DDR4, (LP)DDR5, HBM
- Ziptilion Bandwidth IP is compatible with all DRAM technologies and supports standard interfaces such as AXI and CHI. Other proprietary interfaces can be supported upon request.
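A back-of-envelope model shows how a compression ratio translates into effective bandwidth (an illustrative calculation only, with a hypothetical function name; it ignores metadata and translation overheads and is not the vendor's own model). Traffic that compresses moves proportionally fewer bytes over the DRAM pins, while incompressible traffic is unchanged:

```python
def effective_bandwidth(phys_gbps: float, compression_ratio: float,
                        compressible_fraction: float) -> float:
    """Illustrative model: the compressible fraction of traffic shrinks
    by compression_ratio; the rest transfers at full size. Overheads
    (metadata, address translation) are ignored."""
    compressed = compressible_fraction / compression_ratio
    uncompressed = 1.0 - compressible_fraction
    return phys_gbps / (compressed + uncompressed)


# Example: 64 GB/s physical bandwidth, half the traffic compressing 2x
gain = effective_bandwidth(64.0, 2.0, 0.5)
print(gain)  # about 85.3 GB/s, i.e. a ~33% effective gain
```

This also shows why a 2–3x compression ratio yields a 25–50% bandwidth gain rather than 2–3x: only part of the traffic compresses, and the system is not bandwidth-bound all the time.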
Benefits
- High-performance, low-latency main memory bandwidth acceleration: 25% on average, with peaks of 50%
- Unmatched power efficiency
- Real-time compression, super-fast compaction and transparent memory management
- Operating at main memory speed and throughput
- Compatible with AXI4/CHI, with both 128-bit and 256-bit bus interfaces
- Intelligent real-time analysis and tuning of the IP Block
Applications
- Server CPUs, smart devices, and embedded systems all face the same challenge: memory bandwidth limits system scaling, and the many cores and accelerators compete to serve their memory access requests. A wide range of data sets from these applications has been evaluated, and the results consistently confirm that bandwidth acceleration is an efficient and effective way to utilize the full memory potential.
What’s Included?
- Synthesizable Verilog RTL (encrypted)
- Implementation constraints
- UVM testbench (self-checking)
- Vectors for testbench and expected results
- User Documentation
Files
Note: some files may require an NDA depending on provider policy.
Silicon Options
| Foundry | Node | Process | Maturity |
|---|---|---|---|
| TSMC | 7nm | N7+ | — |
Frequently asked questions about Data Compression IP
What is "Up to 50% main memory bandwidth acceleration"?
It is a Data Compression IP core from ZeroPoint Technologies AB listed on Semi IP Hub, with listed support for TSMC process nodes.
How should engineers evaluate this Data Compression IP?
Engineers should review the overview, key features, supported foundries and nodes, maturity, deliverables, and provider information before shortlisting this Data Compression IP.
Can this semiconductor IP be compared with similar products?
Yes. Buyers can compare this product with similar semiconductor IP cores or IP families based on category, provider, process options, and structured technical specifications.