Imagination Technologies' Upgraded GPUs, New Neural Network Core Provide Deep Learning Processing Options
Graphics IP supplier Imagination Technologies has long advocated accelerating edge-based deep learning inference by combining the company's GPU and ISP cores. The company's latest-generation graphics architectures continue this trend, enhancing performance and reducing memory bandwidth and capacity requirements for entry-level and mainstream SoCs and the systems built around them. And for more demanding deep learning applications, the company has introduced its first neural network coprocessor core family.
To read the full article, click here
Related Semiconductor IP
- E-Series GPU IP
- Arm's most performant and efficient GPU to date, offering unparalleled mobile gaming and ML performance
- 3D OpenGL ES 1.1 GPU IP core
- 2.5D GPU
- 2D GPU Hardware IP Core
Related Blogs
- Synopsys Fields Processor Core for Neural Network Computer Vision Applications
- Synopsys Broadens Neural Network Engine IP Core Family
- NVIDIA Previews Open-source Processor Core for Deep Neural Network Inference
- Imagination's neural network accelerator and Visidon's denoising algorithm prove to be perfect partners