RaiderChip unveils its fully Hardware-Based Generative AI Accelerator: The GenAI NPU
The new embedded accelerator boosts inference speed by 2.4x and combines complete privacy and autonomy with a groundbreaking innovation: it eliminates the need for a CPU.
Spain, January 24th, 2024 -- RaiderChip has officially launched the GenAI NPU, a fully hardware-based accelerator that sets new standards for efficiency and scalability in Generative AI. The GenAI NPU retains the key features of its predecessor, the GenAI v1: offline operation and autonomous functionality.
Additionally, it becomes fully stand-alone by embedding all Large Language Model (LLM) operations directly into its hardware, thereby eliminating the need for a CPU.
RaiderChip GenAI NPU running the Llama 3.2 1B LLM model and streaming its output to a terminal
Thanks to its fully hardware-based design, the GenAI NPU achieves levels of efficiency unattainable by hybrid designs. According to RaiderChip CTO Victor Lopez: “By eliminating latency caused by hardware-software communication, we achieve superior performance while removing external dependencies, such as CPUs. The performance that you see is what you will get, regardless of the target electronic system where the accelerator is integrated. This improves energy efficiency and ensures fully predictable performance—advantages that make the GenAI NPU the ideal solution for embedded systems.”
Furthermore, the new design optimizes token generation speed per unit of available memory bandwidth, improving it by 2.4x, and enables the use of more cost-efficient memories such as DDR or LPDDR instead of expensive options like HBM, while still achieving excellent performance. It also delivers equivalent results with fewer components, reducing size, cost, and energy consumption. These features allow for the development of more affordable and sustainable generative AI solutions, with faster return on investment and seamless integration into a variety of products tailored to different needs.
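To see why tokens-per-bandwidth matters, recall that single-stream LLM token generation is typically memory-bandwidth bound: every generated token requires streaming the model's weights from memory. The sketch below is a back-of-envelope estimate only — the bandwidth, weight precision, and efficiency figures are illustrative assumptions, not RaiderChip's published numbers.

```python
# Illustrative roofline-style estimate of LLM token throughput.
# Assumption: decoding one token reads all model weights once, so
# tokens/s <= effective memory bandwidth / bytes of weights.

def tokens_per_second(bandwidth_gb_s: float, params_billions: float,
                      bytes_per_param: float, efficiency: float) -> float:
    """Estimated token rate given memory bandwidth and model size."""
    weight_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 * efficiency / weight_bytes

# Hypothetical scenario: Llama 3.2 1B with 16-bit weights on a single
# 25.6 GB/s LPDDR channel, with 30% of peak bandwidth usable.
baseline = tokens_per_second(25.6, 1.0, 2.0, efficiency=0.30)

# A design that extracts 2.4x more tokens per unit of bandwidth
# raises the effective bandwidth utilization by the same factor.
improved = tokens_per_second(25.6, 1.0, 2.0, efficiency=0.30 * 2.4)

print(f"baseline: {baseline:.2f} tokens/s")
print(f"improved: {improved:.2f} tokens/s")
```

Under these assumed numbers, the improved design reaches the same token rate as the baseline would need roughly 2.4x the memory bandwidth to achieve — which is what makes commodity DDR/LPDDR viable where HBM might otherwise be required.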
With this innovation, RaiderChip strengthens its strategy of offering optimized solutions based on affordable hardware, designed to bring generative AI to the Edge. These solutions ensure complete privacy and security for applications thanks to their ability to operate entirely offline and on-premises, while eliminating dependence on the cloud and recurring monthly subscriptions.