SoC design: When a network-on-chip meets cache coherency
By Andy Nightingale, Arteris
EDN (January 24, 2024)
Many people have heard the term cache coherency without fully understanding what it involves in the context of system-on-chip (SoC) devices, especially those using a network-on-chip (NoC). To appreciate the issues at hand, it is first necessary to understand the role of the cache in the memory hierarchy.
Cache in the memory hierarchy
Inside a CPU is a relatively small number of extremely fast registers, each of which can be read or written in a single clock cycle. Their storage capacity, however, is tiny. Accessing main memory, in contrast, takes many clock cycles, which can leave the CPU stalled, waiting on memory, for much of the time.
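This latency gap is easy to observe in software. The following C sketch times the same byte-per-line walk over a buffer small enough to stay resident in a first-level cache and over one far larger than any on-chip cache; the 32 KB and 256 MB buffer sizes, the 64-byte line size, and the iteration counts are assumptions chosen for illustration, and the measured gap depends on the compiler (build with optimizations, e.g. gcc -O2), hardware prefetchers, and the memory system.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Touch one byte per (assumed) 64-byte cache line, 'passes' times over. */
static long long walk(volatile const char *buf, size_t len, int passes)
{
    long long sum = 0;
    for (int p = 0; p < passes; p++)
        for (size_t i = 0; i < len; i += 64)
            sum += buf[i];
    return sum;
}

/* Return the average time per access, in nanoseconds. */
static double time_walk(size_t len, int passes)
{
    char *buf = malloc(len);
    if (!buf)
        return -1.0;
    for (size_t i = 0; i < len; i++)
        buf[i] = (char)i;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    long long sum = walk(buf, len, passes);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (double)(t1.tv_sec - t0.tv_sec)
                + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
    double accesses = (double)passes * (double)(len / 64);

    free(buf);
    if (sum == 42)                 /* keep the compiler from deleting the walk */
        puts("unexpected");
    return secs / accesses * 1e9;
}

int main(void)
{
    /* 32 KB fits in a typical L1 data cache; 256 MB spills out to DRAM.
       Both runs perform roughly the same total number of accesses. */
    printf("cache-resident buffer: %.2f ns per access\n",
           time_walk(32 * 1024, 100000));
    printf("DRAM-sized buffer:     %.2f ns per access\n",
           time_walk(256u * 1024 * 1024, 12));
    return 0;
}
```

On typical hardware, the per-access time for the large buffer is several times higher than for the small one, because most of its accesses must wait on main memory rather than being served from a nearby cache.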
In 1965, the British computer scientist Maurice Wilkes introduced the concept of cache memory: a small amount of fast memory, called a cache, placed adjacent to the CPU. The word “cache” comes from the French verb “cacher,” meaning “to hide” or “to conceal,” the idea being that the cache hides the slower main memory, and its latency, from the CPU.
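To make the idea concrete, the sketch below models the simplest form this structure can take: a direct-mapped cache, in which each main-memory line may live in exactly one slot of a small, fast array next to the CPU. This is a minimal sketch, assuming a 32 KB cache with 64-byte lines and 32-bit physical addresses; real caches add associativity, write handling, and replacement policies.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_BYTES 64                         /* bytes per cache line (assumed)   */
#define NUM_LINES  512                        /* 512 x 64 B = 32 KB cache (assumed) */

struct cache_line {
    bool     valid;
    uint32_t tag;
    uint8_t  data[LINE_BYTES];
};

static struct cache_line cache[NUM_LINES];

/* Split a physical address into its offset, index, and tag fields. */
static void split(uint32_t addr, uint32_t *tag, uint32_t *index, uint32_t *offset)
{
    *offset = addr % LINE_BYTES;
    *index  = (addr / LINE_BYTES) % NUM_LINES;
    *tag    = addr / LINE_BYTES / NUM_LINES;
}

/* Read one byte: serve it from the cache on a hit; on a miss, fetch the
 * whole line from (simulated) main memory first, then serve it. */
static uint8_t cache_read(uint32_t addr, const uint8_t *main_memory)
{
    uint32_t tag, index, offset;
    split(addr, &tag, &index, &offset);
    struct cache_line *line = &cache[index];

    if (!line->valid || line->tag != tag) {   /* miss: the slow path */
        memcpy(line->data, &main_memory[addr - offset], LINE_BYTES);
        line->tag   = tag;
        line->valid = true;
    }
    return line->data[offset];                /* hit: the fast path  */
}

int main(void)
{
    static uint8_t main_memory[1 << 20];      /* 1 MB of simulated DRAM */
    main_memory[12345] = 0xAB;

    printf("first read : 0x%02X (miss, line filled from main memory)\n",
           cache_read(12345, main_memory));
    printf("second read: 0x%02X (hit, served from the cache)\n",
           cache_read(12345, main_memory));
    return 0;
}
```

The key point the model captures is that once a line has been fetched, subsequent accesses to nearby addresses are served from the small, fast array rather than from main memory, which is exactly how the cache conceals main-memory latency from the CPU.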