Algorithms Outpace Moore's Law for AI
Moore's Law continues to change the world. But algorithmic advances have been every bit as critical in driving progress in electronics.
How confident are we that tomorrow's algorithms will be a good fit for existing semiconductor chips, or for the new computational fabrics under development? If algorithmic advances keep outpacing hardware advances, even a state-of-the-art deep-learning model might one day be deployed on a chip as inexpensive as a $5 Raspberry Pi.
Which solves a given problem faster: a state-of-the-art modern algorithm on a 1980s processor, or a 1980s algorithm on a state-of-the-art modern processor? The surprising answer is that, often, it is the new algorithm on the old processor.
While Moore's Law gets most of the attention as the driver of the rapid advance of electronics, it is only one of the drivers. We regularly forget that algorithmic advances have beaten Moore's Law in many cases.
Professor Martin Groetschel observed that a linear programming problem that would have taken 82 years to solve in 1988 could be solved in about one minute by 2003. Hardware accounted for a factor of 1,000 of that speedup; algorithmic advances accounted for a factor of 43,000. Similarly, MIT professor Dimitris Bertsimas showed that the algorithmic speedup for mixed-integer solvers between 1991 and 2013 was 580,000 times, while the peak hardware speedup of supercomputers over the same period was only 320,000 times. Similar gains are said to have occurred in other classes of constrained-optimization problems and in prime-number factorization.
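The compounding of hardware and algorithmic gains can be made concrete with a little arithmetic. The sketch below uses only the figures quoted above, and assumes the two gains are independent, so the combined speedup is simply their product:

```python
# Combined speedup = hardware speedup * algorithmic speedup,
# treating the two gains as independent multiplicative factors.

# Linear programming, 1988 -> 2003 (Groetschel's figures)
hw_lp = 1_000        # hardware speedup factor
alg_lp = 43_000      # algorithmic speedup factor
total_lp = hw_lp * alg_lp
print(f"LP combined speedup: {total_lp:,}x")     # 43,000,000x

# A solve that took 82 years in 1988 becomes:
seconds_1988 = 82 * 365.25 * 24 * 3600
seconds_2003 = seconds_1988 / total_lp
print(f"LP solve time in 2003: {seconds_2003:.1f} s")  # ~60 s, i.e. about one minute

# Mixed-integer programming, 1991 -> 2013 (Bertsimas's figures)
hw_mip = 320_000     # peak supercomputer hardware speedup
alg_mip = 580_000    # solver (algorithmic) speedup
print(f"MIP: algorithms beat hardware by {alg_mip / hw_mip:.1f}x")
```

The reassuring detail is that the numbers check out: a 43,000,000-fold speedup turns 82 years into roughly one minute, exactly as the anecdote states.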
What does that mean for AI?