Architectures Battle for Deep Learning
Linley Gwennap, The Linley Group
10/31/2017 03:41 PM EDT
Chip vendors typically implement new applications on CPUs first. If the workload suits GPUs or DSPs, it may move to those processors next; over time, companies develop ASICs and ASSPs for it. Is deep learning moving through the same sequence?
In the brief history of deep neural networks (DNNs), users have tried several hardware architectures to increase their performance. General-purpose CPUs are the easiest to program but are the least efficient in performance per watt. GPUs are optimized for parallel floating-point computation and provide several times better performance than CPUs. As GPU vendors discovered a sizable new customer base, they began to enhance their designs to further improve DNN throughput. For example, Nvidia’s new Volta architecture adds dedicated matrix-multiply units, accelerating a common DNN operation.
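To see why dedicated matrix-multiply hardware matters, consider a minimal NumPy sketch of a fully connected DNN layer. The layer sizes and random data below are arbitrary illustrations of the general technique, not Nvidia's implementation or a benchmark.

```python
# Minimal NumPy sketch: a fully connected DNN layer is essentially one
# matrix multiply, which is the operation Volta's matrix units accelerate.
# Layer sizes and data here are arbitrary illustrations, not benchmarks.
import numpy as np

batch, in_features, out_features = 32, 1024, 1024

x = np.random.randn(batch, in_features).astype(np.float32)         # activations
w = np.random.randn(in_features, out_features).astype(np.float32)  # weights
b = np.zeros(out_features, dtype=np.float32)                       # biases

# Forward pass: one (batch x in) @ (in x out) matrix multiply plus a bias add.
y = x @ w + b
# The ReLU nonlinearity is cheap by comparison; the multiply dominates the work.
y = np.maximum(y, 0.0)

print(y.shape)  # (32, 1024)
```

Nearly all the arithmetic in this layer sits inside the single matrix multiply, which is why accelerating that one operation pays off across both training and inference.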
Even these enhanced GPUs remain burdened by their graphics-specific logic. Furthermore, the recent trend is to use integer math for DNN inference, although most training continues to use floating-point computations. Nvidia also enhanced Volta’s integer performance, but it still recommends using floating point for inference. Chip designers, however, are well aware that integer units are considerably smaller and more power efficient than floating-point units, a benefit that increases when using 8-bit (or smaller) integers instead of 16-bit or 32-bit floating-point values.
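The appeal of small integers is easy to illustrate. The sketch below shows a simplified symmetric 8-bit quantization scheme for inference; the quantize_int8 helper and per-tensor scaling are generic illustrations of the technique, not any particular vendor's or framework's method.

```python
# Simplified sketch of symmetric 8-bit quantization for inference.
# This illustrates the general technique, not a specific vendor's scheme.
import numpy as np

def quantize_int8(t: np.ndarray):
    """Map float32 values to int8 using a per-tensor scale factor."""
    max_abs = np.abs(t).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(t / scale), -127, 127).astype(np.int8)
    return q, scale

x = np.random.randn(32, 1024).astype(np.float32)    # activations
w = np.random.randn(1024, 1024).astype(np.float32)  # trained fp32 weights

xq, x_scale = quantize_int8(x)
wq, w_scale = quantize_int8(w)

# The matrix multiply runs on 8-bit values, accumulating into int32; one
# floating-point rescale at the end recovers an approximation of the fp32
# result using far smaller, more power-efficient arithmetic units.
y_int32 = xq.astype(np.int32) @ wq.astype(np.int32)
y_approx = y_int32.astype(np.float32) * (x_scale * w_scale)

y_ref = x @ w
print(np.max(np.abs(y_approx - y_ref)) / np.max(np.abs(y_ref)))  # small relative error
```

The accuracy loss from this kind of rounding is often tolerable for inference, which is why chip designers see 8-bit integer datapaths as such an attractive trade-off.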