Imagination Technologies' Upgraded GPUs, New Neural Network Core Provide Deep Learning Processing Options
Graphics IP supplier Imagination Technologies has long advocated accelerating edge-based deep learning inference by combining the company's GPU and ISP cores. The company's latest-generation graphics architectures continue this trend, boosting performance while reducing memory bandwidth and capacity requirements in entry-level and mainstream SoCs and the systems built around them. And for more demanding deep learning applications, the company has introduced its first neural network coprocessor core family.
To read the full article, click here
Related Semiconductor IP
- Arm's highest-performance and most efficient GPU to date, offering unparalleled mobile gaming and ML performance
- Highest performance automotive GPU IP, with revolutionary functional safety technology
- 3D OpenGL ES GPU (Graphics Processing Unit)
- High performance GPU for cloud gaming with DirectX support
- Arm’s latest flagship GPU is based on the new 5th Gen GPU architecture, bringing the next generation of visual computing to mobile
Related Blogs
- Synopsys Fields Processor Core for Neural Network Computer Vision Applications
- Synopsys Broadens Neural Network Engine IP Core Family
- NVIDIA Previews Open-source Processor Core for Deep Neural Network Inference
- Imagination's neural network accelerator and Visidon's denoising algorithm prove to be perfect partners