The rise of parallel computing: Why GPUs will eclipse NPUs for edge AI

By Dennis Laudick, Vice President of Product Management, Imagination Technologies
eeNews Europe | May 30, 2025

Artificial Intelligence (AI) isn’t just a technological breakthrough — it’s a permanent evolution in how software is written, understood, and executed. Traditional software development, built on deterministic logic and largely sequential processing, is giving way to a new paradigm: probabilistic models, trained behaviours, and data-driven computation. This isn’t a fleeting trend. AI represents a fundamental and irreversible shift in computer science — from rule-based programming to adaptive, learning-based systems that are increasingly integrated into a wider range of computing problems and capabilities.

This transformation demands a corresponding change in the hardware that powers it. The old model of building highly specialised chips for narrowly defined tasks no longer scales in a world where AI architectures and algorithms are, and will remain, in constant flux. To meet the evolving needs of AI — especially at the edge — we need compute platforms that are as dynamic and adaptable as the workloads they run.

That’s why general-purpose parallel processors (GPUs) are emerging as the future of edge AI, displacing specialised processors such as Neural Processing Units (NPUs). It’s not just a question of performance — it’s about flexibility, scalability, and alignment with the future of software itself.

