NeuReality Boosts AI Accelerator Utilization With NAPU
By Sally Ward-Foxton, EETimes (April 4, 2024)
Startup NeuReality wants to replace the host CPU in data center AI inference systems with dedicated silicon that can cut total cost of ownership and power consumption. The Israeli startup developed a class of chip it calls the network addressable processing unit (NAPU), which implements in hardware functions typically handled by the host CPU, such as the hypervisor. NeuReality's aim is to increase AI accelerator utilization by removing bottlenecks caused by today's host CPUs.
NeuReality CEO Moshe Tanach told EE Times its NAPU enables 100% utilization of AI accelerators.
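The utilization argument can be illustrated with a simple pipeline model. The sketch below is hypothetical and not NeuReality's method; the timing numbers are invented for illustration. It shows how a serial host-CPU stage paces the pipeline and caps accelerator utilization, and how shrinking that stage, as a NAPU-style offload aims to do, lets the accelerator approach 100%.

# Minimal sketch (hypothetical, not NeuReality's implementation):
# models how a serial host-CPU stage caps AI-accelerator utilization.
# All per-request timings below are invented for illustration.

def accelerator_utilization(host_ms_per_req: float, accel_ms_per_req: float) -> float:
    """Fraction of time the accelerator does useful work when every
    request must first pass through a serial host-CPU stage."""
    # The slower stage paces the pipeline; the accelerator idles
    # whenever the host stage is the bottleneck.
    cycle_ms = max(host_ms_per_req, accel_ms_per_req)
    return accel_ms_per_req / cycle_ms

# Host-CPU-bound pipeline: 2.0 ms of host work per 1.0 ms of accelerator work.
print(f"CPU-hosted:  {accelerator_utilization(2.0, 1.0):.0%}")  # 50%
# NAPU-style offload: host-side work shrinks to 0.1 ms, so the accelerator paces.
print(f"NAPU-style:  {accelerator_utilization(0.1, 1.0):.0%}")  # 100%

The model assumes strictly serial stages; real pipelines overlap work via batching and queuing, but the bottleneck logic is the same.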
To read the full article, click here