Overview
The Cache MX IP compresses on-chip L2 and L3 SRAM caches, enabling 2x effective capacity. SRAM caches can take up to 30-50% of an SoC or xPU's silicon real estate and a significant share of its power budget, which grows with physical dimensions. While digital logic scales effectively with each process-node shrink, SRAM essentially stopped scaling between the 5nm and 3nm nodes. Growing compute-core counts demand higher SRAM capacity to scale IPC performance, yet increasing SRAM area hurts both die cost and die yield. Cache MX offers a power-, area- and cost-effective alternative that enables performance scaling with single-digit latency.
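Cache MX's internal algorithm is not described here, but the general idea behind cache compression can be sketched: many cache lines contain runs of zeros or small values, so a simple frequent-pattern encoding can often roughly halve their stored size. The tag scheme below is a hypothetical illustration, not the product's actual format.

```python
# Hypothetical illustration of cache-line compression (not Cache MX's
# actual, proprietary scheme). Each 4-byte word of a 64-byte line is
# encoded as a 1-byte tag plus 0-4 payload bytes:
#   tag 0 -> all-zero word (no payload)
#   tag 1 -> word fits in one byte (1-byte payload)
#   tag 2 -> uncompressed word (4-byte payload)

def compress_line(line: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(line), 4):
        word = line[i:i + 4]
        value = int.from_bytes(word, "little")
        if value == 0:
            out.append(0)                 # zero word: tag only
        elif value < 256:
            out.append(1)                 # small word: tag + 1 byte
            out.append(value)
        else:
            out.append(2)                 # incompressible: tag + word
            out.extend(word)
    return bytes(out)

def decompress_line(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        tag = data[i]
        i += 1
        if tag == 0:
            out.extend(b"\x00" * 4)
        elif tag == 1:
            out.extend(bytes([data[i]]) + b"\x00" * 3)
            i += 1
        else:
            out.extend(data[i:i + 4])
            i += 4
    return bytes(out)

# A sparse line (one small value, the rest zeros) compresses well:
line = (5).to_bytes(4, "little") + b"\x00" * 60
packed = compress_line(line)
assert decompress_line(packed) == line
print(len(line), "->", len(packed))  # 64 -> 17
```

In hardware, the compressed line would be stored alongside metadata indicating its size, letting two compressible lines share one physical slot; incompressible lines fall back to tag-2 encoding with only modest overhead.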
Data compression plays a critical role in modern computing, enabling efficient storage and faster transmission of information. Among lossless data compression algorithms, GZIP, ZSTD, LZ4, and Snappy have emerged as prominent contenders, each offering unique trade-offs in terms of compression ratio, speed, and resource utilization. This white paper evaluates these algorithms and their corresponding hardware cores, providing an in-depth comparison to help developers and system architects choose the optimal solution for their specific use case.
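The ratio-versus-speed trade-off the white paper evaluates can be seen even from Python's standard library. The sketch below uses stdlib `zlib`, which implements DEFLATE (the algorithm behind GZIP); ZSTD, LZ4, and Snappy would require third-party packages (`zstandard`, `lz4`, `python-snappy`), so only GZIP-style compression levels are shown here as an assumption-free stand-in.

```python
# Illustrating the compression-ratio vs. speed trade-off with stdlib
# zlib (DEFLATE, the algorithm used by GZIP). The other algorithms the
# paper compares (ZSTD, LZ4, Snappy) need third-party packages.
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 2000

for level in (1, 6, 9):  # fast, default, and best-ratio settings
    start = time.perf_counter()
    packed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(packed)
    print(f"level {level}: ratio {ratio:.1f}x in {elapsed * 1000:.2f} ms")

# Round-trip check: lossless compression must reproduce the input.
assert zlib.decompress(packed) == data
```

Hardware cores shift this trade-off rather than eliminate it: a fixed-function pipeline can sustain line-rate throughput at a given ratio, which is why the choice of algorithm still matters for the target use case.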
Part three of this three-part series explains how JPEG and MPEG compression work.
The term “IoT” (Internet of Things) has expanded to cover a wide range of applications and diverse devices with very different requirements. Most observers, however, agree that low energy consumption is a key requirement for IoT, as many of these devices must run on batteries or harvest energy from their environment.
This paper describes an FPGA-based high-definition video processing platform. The platform supports a wide range of applications including flat-panel TV, projection TV and video monitor.
This paper presents the development of an IP core for an H.264 decoder. This state-of-the-art video compression standard helps reduce the enormous bandwidth and storage demands of multimedia applications. The IP is CoreConnect compliant and implements the modules with the tightest performance constraints.
In this paper, we present a new concept, and its circuit implementation, for high-speed associative memories based on Hamming distance.
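The paper's circuit is analog/hardware, but the behavior of a Hamming-distance associative memory can be modeled in a few lines: given a query word, return the stored word that differs from it in the fewest bit positions. The word values below are arbitrary examples, not data from the paper.

```python
# Software model of a Hamming-distance associative memory (the paper
# describes a circuit implementation; this only models the behavior).

def hamming(a: int, b: int) -> int:
    """Number of differing bits, i.e. the popcount of a XOR b."""
    return bin(a ^ b).count("1")

def nearest(memory: list[int], query: int) -> tuple[int, int]:
    """Return (best_match, distance) over all stored words."""
    return min(((w, hamming(w, query)) for w in memory),
               key=lambda pair: pair[1])

words = [0b10110010, 0b01101100, 0b11110000]  # arbitrary example data
match, dist = nearest(words, 0b10110011)
print(f"{match:08b} at distance {dist}")  # 10110010 at distance 1
```

The appeal of a dedicated circuit is that the distance computation and minimum search happen in parallel across all stored words, whereas this software model scans them sequentially.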