Reduced Operation Set Computing (ROSC) for Flexible, Future-Proof, High-Performance Inference
The versatility and power of deep learning mean that modern neural networks have found myriad applications in areas as diverse as machine translation, action recognition, task planning, sentiment analysis and image processing. As the field matures, specialisation is both deepening and accelerating, which makes it a challenge to keep up with the current state of the art, let alone predict the future compute needs of neural networks.
Designers of neural network accelerator (NNA) IP have a Herculean task on their hands: making sure that their product is sufficiently general to apply to a very wide range of current and future applications, whilst guaranteeing high performance. In the mobile, automotive, data centre and embedded spaces targeted by Imagination’s cutting-edge IMG Series4 NNAs, there are even more stringent constraints on bandwidth, area and power consumption. The engineers at Imagination have found innovative ways to address these daunting challenges and deliver ultra-high-performance and future-proof IP.