Imagination Technologies' Upgraded GPUs, New Neural Network Core Provide Deep Learning Processing Options
Graphics IP supplier Imagination Technologies has long advocated accelerating edge-based deep learning inference with a combination of the company's GPU and ISP cores. Its latest-generation graphics architectures continue this trend, improving performance while reducing memory bandwidth and capacity requirements in entry-level and mainstream SoCs and the systems built on them. For more demanding deep learning applications, the company has also introduced its first family of neural network coprocessor cores.