NeuReality Boosts AI Accelerator Utilization With NAPU
By Sally Ward-Foxton, EETimes (April 4, 2024)
Startup NeuReality wants to replace the host CPU in data center AI inference systems with dedicated silicon that can cut total cost of ownership and power consumption. The Israeli startup developed a class of chip it calls the network addressable processing unit (NAPU), which includes hardware implementations of functions typically handled by the host CPU, such as the hypervisor. NeuReality’s aim is to increase AI accelerator utilization by removing bottlenecks caused by today’s host CPUs.
NeuReality CEO Moshe Tanach told EE Times its NAPU enables 100% utilization of AI accelerators.
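The utilization claim is, at bottom, throughput arithmetic: an accelerator can run no faster than its host pipeline feeds it. A minimal Python sketch of that relationship, using hypothetical rates (the 6,000 and 10,000 requests per second below are illustrative examples, not NeuReality figures):

```python
# Sketch of the host-bottleneck argument, with hypothetical numbers.
# If the host pipeline (network I/O, pre/post-processing, hypervisor
# overhead) can prepare at most `host_rps` inference requests per second,
# the accelerator can never run faster than the host feeds it.

def accelerator_utilization(host_rps: float, accel_rps: float) -> float:
    """Fraction of accelerator capacity used when the host feed rate
    is the limiting factor."""
    return min(host_rps / accel_rps, 1.0)

# A CPU host feeding 6,000 req/s to an accelerator capable of
# 10,000 req/s leaves it 60% utilized.
print(accelerator_utilization(6_000, 10_000))   # 0.6

# Offloading scheduling and data movement to dedicated hardware (the
# NAPU's claimed role) lets the feed rate match the accelerator: 100%.
print(accelerator_utilization(10_000, 10_000))  # 1.0
```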
To read the full article, click here