Algorithms Outpace Moore's Law for AI
Moore's Law continues to change the world. But algorithmic advances have been every bit as critical in driving progress in electronics.
How confident are we that tomorrow's algorithms will be a good fit for existing semiconductor chips, or for the new computational fabrics now under development? With algorithmic advances outpacing hardware advances, it is conceivable that even the most advanced deep-learning models could one day run on a chip as inexpensive as a $5 Raspberry Pi.
Which solves a problem faster: a top modern algorithm on a 1980s processor, or a 1980s algorithm running on a top modern processor? The surprising answer is that, often, it's the new algorithm on the old processor.
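A back-of-the-envelope calculation shows why. The figures below are illustrative assumptions, not benchmarks: roughly 10^6 simple operations per second for a 1980s processor, roughly 10^10 for a modern core, an O(n^2) sort standing in for the old algorithm, and an O(n log n) sort standing in for the new one.

```python
import math

# Illustrative throughput assumptions, not measurements:
OLD_CPU_OPS_PER_SEC = 1e6    # ~1 MIPS, a rough figure for a 1980s processor
NEW_CPU_OPS_PER_SEC = 1e10   # a rough figure for a modern core

n = 1e8  # problem size: number of elements to sort

old_algorithm_ops = n ** 2            # O(n^2), e.g. a naive quadratic sort
new_algorithm_ops = n * math.log2(n)  # O(n log n), e.g. mergesort

# 1980s algorithm on a modern processor vs. modern algorithm on a 1980s processor
t_old_alg_new_cpu = old_algorithm_ops / NEW_CPU_OPS_PER_SEC
t_new_alg_old_cpu = new_algorithm_ops / OLD_CPU_OPS_PER_SEC

print(f"O(n^2) sort on a modern CPU:    {t_old_alg_new_cpu / 86400:.1f} days")
print(f"O(n log n) sort on a 1980s CPU: {t_new_alg_old_cpu / 3600:.1f} hours")
```

Under these assumptions the new algorithm on the old machine finishes in well under an hour, while the old algorithm on the modern machine needs more than a week: the asymptotic improvement swamps four orders of magnitude of raw hardware speed.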
While Moore's Law gets a lot of attention as the driver of the rapid advance of electronics, it is only one of the drivers. We often forget that algorithmic advances have beaten Moore's Law in many cases.
Professor Martin Groetschel observed that a linear programming problem that would have taken 82 years to solve in 1988 could be solved in about one minute in 2003. Hardware accounted for a factor of roughly 1,000 of that speedup, while algorithmic advances accounted for a factor of roughly 43,000. Similarly, MIT professor Dimitris Bertsimas showed that between 1991 and 2013 the algorithmic speedup for mixed-integer solvers was 580,000 times, while the hardware speedup of peak supercomputers was a comparatively modest 320,000 times. Similar results have reportedly been observed in other classes of constrained-optimization problems and in prime-number factorization.
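The two factors in the Groetschel example should multiply to the overall gain, and a quick sanity check (in Python, for illustration) confirms that the cited numbers are mutually consistent:

```python
# Sanity check of the linear-programming example: 82 years down to ~1 minute,
# decomposed into a hardware factor and an algorithm factor.
MINUTES_PER_YEAR = 365.25 * 24 * 60

total_speedup = 82 * MINUTES_PER_YEAR          # 82 years -> 1 minute
hw_speedup, alg_speedup = 1_000, 43_000        # factors cited above

print(f"Overall speedup (82 years / 1 minute): {total_speedup:,.0f}x")
print(f"Hardware x algorithm:                  {hw_speedup * alg_speedup:,}x")
```

The product of the two cited factors is 43 million, matching the ratio of 82 years to one minute to within the rounding of the original figures.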
What does that mean for AI?