IP's a slow burner
Cadence Design Systems has decided that semiconductor intellectual property (IP) is important to chip design and has bet fairly big on it, with a $300m cash acquisition of Denali together with a plan to pre-integrate IP from other suppliers.
Let’s have a look at the rationale. Design costs keep going up, roughly in proportion to the number of gates you can squeeze onto an integrated circuit (IC).
“The only thing that scales is IP. You can’t move to the stratosphere in terms of abstraction. You have to use IP…The SoC battle is won or lost over the quality of IP…IP reuse will go up to and beyond 90 per cent of the die. But having multiple IP vendors for one chip design brings challenges, so you will qualify IP vendors rather than individual IP cores. Everything points to consolidation.”
Oh no, wait. That wasn’t Cadence. Those were the words of Raul Camposano, then Synopsys CTO, in mid-2004. Sounds a lot like the Cadence rationale, doesn’t it? But it doesn’t say a lot for Cadence’s newfound strategy of rewriting the rules of EDA. It’s easy to see how an EDA-plus-IP strategy can develop: you only have to look at what has happened at Synopsys, the only major EDA vendor so far to have made a living from IP. Mentor Graphics had a go, but it did not turn out well and the company got out of the business.