RaiderChip NPU for LLM at the Edge supports DeepSeek-R1 reasoning models
The rise of optimized reasoning models, capable of matching the performance of massive solutions like ChatGPT, strengthens RaiderChip’s commitment to AI acceleration through its affordable and high-performance edge devices.
Spain, February 21, 2025 -- RaiderChip, a fabless semiconductor company specializing in hardware acceleration for Generative Artificial Intelligence, has added the DeepSeek-R1 family of reasoning LLMs to the growing list of models supported on its GenAI NPU accelerator. Thanks to the flexibility of its hardware design, the accelerator lets users swap LLM models on the fly. This integration marks a significant step forward in local Generative AI inference, combining RaiderChip’s architecture, optimized for affordable devices, with the outstanding computational efficiency of DeepSeek-R1.
The new DeepSeek-R1 LLM family, developed in China, has recently shaken up the industry and stands out for its exceptional balance between operational cost and cognitive performance. Despite its compact design, it matches or outperforms larger models in efficiency and capability, challenging the traditional strategy of massive proprietary LLMs that rely on cloud-based infrastructure.
The future of Artificial Intelligence is moving toward more compact, optimized, and specialized models that can run at the Edge, reducing the high costs of inference. Víctor López, CTO of RaiderChip, highlights: “By combining our stand-alone hardware NPU semiconductors with all of DeepSeek-R1’s distilled models, we provide our customers with exceptional performance without relying on costly cloud infrastructure. Additionally, we offer greater independence, security, and privacy for their solutions, guaranteeing AI-service availability and low latency, supporting the customization of extraordinarily intelligent models, and ultimately enabling the highest-performing AI Agents at the Edge.”