Imagination Technologies' Upgraded GPUs, New Neural Network Core Provide Deep Learning Processing Options
Graphics IP supplier Imagination Technologies has long advocated accelerating edge-based deep learning inference by combining the company's GPU and ISP cores. The company's latest-generation graphics architectures continue this trend, enhancing performance while reducing memory bandwidth and capacity requirements in entry-level and mainstream SoCs and the systems built around them. And, for more demanding deep learning applications, the company has introduced its first neural network coprocessor core family.
To read the full article, click here
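To give a rough sense of what GPU-offloaded edge inference looks like from the application side, the sketch below uses the TensorFlow Lite runtime with a generic GPU delegate and falls back to the CPU when no delegate is available. This is an illustrative example only, not Imagination's own driver stack or API; the model file name and delegate library name are placeholders that will vary by platform and vendor.

```python
# Minimal sketch (assumptions: tflite_runtime is installed, a GPU delegate
# shared library is provided by the platform vendor, and "mobilenet_v2.tflite"
# is a placeholder model file).
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL_PATH = "mobilenet_v2.tflite"                       # placeholder model
GPU_DELEGATE_LIB = "libtensorflowlite_gpu_delegate.so"   # vendor/OS dependent

try:
    # Route supported ops to the GPU via the delegate library.
    delegate = load_delegate(GPU_DELEGATE_LIB)
    interpreter = Interpreter(model_path=MODEL_PATH,
                              experimental_delegates=[delegate])
except (ValueError, OSError):
    # No usable GPU delegate on this device: run everything on the CPU.
    interpreter = Interpreter(model_path=MODEL_PATH)

interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy frame shaped like the model's input (e.g. 1x224x224x3).
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
print("output shape:", interpreter.get_tensor(out["index"]).shape)
```

The delegate mechanism shown here is one common way an application hands inference work to whatever accelerator the SoC exposes, whether that is a GPU or a dedicated neural network coprocessor of the kind the article describes.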