Imagination Technologies' Upgraded GPUs, New Neural Network Core Provide Deep Learning Processing Options
Graphics IP supplier Imagination Technologies has long advocated accelerating edge-based deep learning inference by combining the company's GPU and ISP cores. Its latest-generation graphics architectures continue this trend, improving performance while reducing memory bandwidth and capacity requirements in entry-level and mainstream SoCs and the systems built around them. And for more demanding deep learning applications, the company has introduced its first neural network coprocessor core family.
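As a general illustration of GPU-accelerated edge inference (not Imagination-specific and not drawn from the full article), the sketch below shows how an application might route a TensorFlow Lite model's supported operations to a mobile GPU via the GPU delegate; the delegate library name and model file are placeholder assumptions.

```python
# Minimal sketch: offloading inference to a mobile GPU with TensorFlow Lite's GPU delegate.
# The delegate shared-library name and model path are placeholders that vary by platform/build.
import numpy as np
import tensorflow as tf

# Load the GPU delegate (assumed library name; adjust for your target platform).
gpu_delegate = tf.lite.experimental.load_delegate("libtensorflowlite_gpu_delegate.so")

interpreter = tf.lite.Interpreter(
    model_path="mobilenet_v2.tflite",        # placeholder model file
    experimental_delegates=[gpu_delegate],   # route supported ops to the GPU
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class:", int(np.argmax(scores)))
```

On SoCs that include a dedicated neural network accelerator, the same model would typically be dispatched to that core through the vendor's own delegate or driver stack rather than the GPU; the delegate mechanism above simply illustrates the offload pattern.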
To read the full article, click here
Related Semiconductor IP
- E-Series GPU IP: Arm's most performant and efficient GPU to date, offering unparalleled mobile gaming and ML performance
- 3D OpenGL ES 1.1 GPU IP core
- 2.5D GPU
- 2D GPU Hardware IP Core
Related Blogs
- Synopsys Fields Processor Core for Neural Network Computer Vision Applications
- Synopsys Broadens Neural Network Engine IP Core Family
- NVIDIA Previews Open-source Processor Core for Deep Neural Network Inference
- Imagination's neural network accelerator and Visidon's denoising algorithm prove to be perfect partners