Leveraging RISC-V as a Unified, Heterogeneous Platform for Next-Gen AI Chips

By Akeana

Introduction

The demand for high-performance AI computation is reshaping the semiconductor industry, requiring innovative solutions that optimize performance, power efficiency, and flexibility. RISC-V, an open-source instruction set architecture (ISA), has emerged as a key enabler for next-generation AI chips by providing a unified framework for both hardware and software. This open and extensible architecture allows companies to develop customized solutions tailored to specific AI workloads, making it a compelling choice for heterogeneous computing while reducing cost and power consumption.

Akeana, a leader in RISC-V-based processor IP, is at the forefront of this transformation. By leveraging its extensive processor and interconnect design expertise, Akeana is developing high-performance RISC-V cores that support AI computation at multiple levels. Akeana’s solutions provide an integrated and scalable approach to AI chip development, from general-purpose compute arrays to highly specialized AI accelerators and data compute arrays.

The Rise of RISC-V in AI Computation

The AI landscape is evolving rapidly, with companies seeking greater control over their hardware to optimize for specific workloads. Traditional processor architectures, such as ARM and x86, have long dominated AI computation, but RISC-V is now gaining traction due to its open and flexible nature. Major technology firms, including Meta (accelerators) and NVIDIA (GPUs), have announced their adoption of RISC-V in chips used for AI applications, recognizing its potential to standardize processing across diverse computing needs.

One of the key advantages of RISC-V is its ability to support heterogeneous computing. AI workloads are highly varied, requiring different levels of processing power, memory bandwidth, and energy efficiency. RISC-V provides a standardized ISA that spans multiple types of processing cores, from lightweight microcontrollers to high-performance vector processors. This flexibility simplifies software development, allowing AI models and frameworks to be optimized across a range of hardware implementations without requiring significant code modifications.

Akeana’s Approach to AI Processing with RISC-V

Akeana is building a comprehensive suite of RISC-V processor IP designed specifically for AI workloads. The company’s portfolio includes three primary categories of compute cores:

  1. General-Purpose Compute Arrays: These scalable, multi-core systems handle front-end software tasks, including AI framework execution, memory management, and workload synchronization. They form the backbone of AI processing pipelines, efficiently managing data flow between different processing elements.
  2. AI-Accelerated Compute Arrays: Optimized for high-throughput machine learning tasks, these specialized compute blocks integrate multiple AI engines, such as vector processors and matrix multipliers, to accelerate neural network inference and training. These accelerators are designed for maximum efficiency, balancing performance with power consumption.
  3. Data Movement and Interconnect Solutions: AI computation is not just about processing power—it also requires efficient data movement between compute elements. Akeana has developed high-bandwidth interconnects and data movement engines that minimize latency and optimize memory access, ensuring that AI accelerators operate at peak efficiency.
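The rationale behind the third category can be made concrete with a toy double-buffering schedule, a common technique data movement engines use to hide transfer latency behind computation. This is a simplified illustrative sketch, not Akeana's actual engine design; the function name and tile layout are hypothetical:

```python
# Toy model of double buffering: a DMA engine fills one tile buffer
# while the compute unit consumes the other, hiding transfer latency.

def double_buffered_sum(tiles):
    """Process a stream of tiles, overlapping 'DMA' fills and 'compute'."""
    buffers = [None, None]        # two on-chip tile buffers (ping-pong)
    total = 0
    buffers[0] = list(tiles[0])   # prefetch the first tile ("DMA" fill)
    for i in range(len(tiles)):
        cur, nxt = i % 2, (i + 1) % 2
        # DMA: fetch the next tile into the idle buffer...
        if i + 1 < len(tiles):
            buffers[nxt] = list(tiles[i + 1])
        # ...while compute consumes the current buffer.
        total += sum(buffers[cur])
    return total

print(double_buffered_sum([[1, 2], [3, 4], [5, 6]]))  # -> 21
```

In real hardware the fill and the consume happen on the same cycle; the sequential simulation above only models which buffer each engine touches at each step.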

By combining these three compute domains into a unified RISC-V-based platform, Akeana enables customers to develop highly efficient AI chips that meet the demands of modern workloads. The company’s approach ensures that different processing elements work seamlessly together, allowing for dynamic workload distribution and optimized power efficiency.

Unified Software Stack for AI Acceleration

Akeana’s RISC-V-based architecture is supported by a unified software stack that streamlines AI development. This includes:

● Optimized AI Libraries: Pre-tuned libraries and kernel routines that maximize the performance of neural networks, ensuring efficient execution across RISC-V cores and accelerators.
● Standardized Toolchains: Support for LLVM compilers, profiling tools, and debugging frameworks to simplify software development and optimization.
● Flexible Operating System Support: Compatibility with Linux, real-time operating systems (RTOS), and bare-metal implementations, providing developers with multiple deployment options.

This software ecosystem allows AI developers to focus on algorithmic innovation without worrying about low-level hardware details. By providing a consistent environment across heterogeneous processors, Akeana accelerates time-to-market for AI solutions. Furthermore, this software stack can draw on industry ecosystem collaboration efforts such as the RISE (RISC-V Software Ecosystem) project.
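As a generic illustration of the toolchain support described above (standard upstream LLVM usage, not an Akeana-specific workflow; the source file name is hypothetical), a stock clang can already target 64-bit RISC-V with the vector extension enabled:

```shell
# Cross-compile for a 64-bit RISC-V Linux target with the vector (V) extension.
# -march=rv64gcv = RV64 base + general (G) + compressed (C) + vector (V).
clang --target=riscv64-unknown-linux-gnu -march=rv64gcv -O2 \
      -c kernel.c -o kernel.o

# Inspect the generated vector instructions.
llvm-objdump -d kernel.o | grep -E 'vsetvli|vle|vse'
```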

The Role of RISC-V in AI Performance Optimization

Akeana’s RISC-V processors integrate several key technologies that enhance AI performance:

  • Vector Extensions for AI Acceleration: Akeana’s RISC-V cores support vector processing, which is crucial for machine learning workloads. These vector units execute multiple AI instructions in parallel, significantly improving inference and training speeds.
  • Systolic Arrays for Matrix Computation: Matrix multiplications form the foundation of deep learning computations. Akeana’s hardware accelerators leverage systolic array architectures to maximize throughput and minimize inference latency, achieving high TOPS-per-watt efficiency.
  • Memory Bandwidth Optimization: AI applications require seamless data access to prevent bottlenecks. Akeana’s architecture incorporates shared multi-bank memory and intelligent caching mechanisms to reduce memory contention and improve overall system efficiency.
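To make the systolic-array idea above concrete, here is a minimal cycle-by-cycle simulation of an output-stationary array performing a matrix multiplication. This is a pedagogical sketch of the general technique, not Akeana's actual accelerator microarchitecture:

```python
def systolic_matmul(A, B):
    """Output-stationary systolic matmul: PE (i, j) accumulates C[i][j].

    Row i of A is fed from the left with an i-cycle skew; column j of B
    is fed from the top with a j-cycle skew, so matching operands meet
    at PE (i, j) exactly when k = t - i - j is a valid inner index.
    """
    n, k_dim, m = len(A), len(A[0]), len(B[0])
    C = [[0] * m for _ in range(n)]
    # The last PE (n-1, m-1) finishes at cycle (n-1) + (m-1) + (k_dim-1).
    for t in range(n + m + k_dim - 2):
        for i in range(n):
            for j in range(m):
                k = t - i - j              # operand pair arriving this cycle
                if 0 <= k < k_dim:
                    C[i][j] += A[i][k] * B[k][j]  # one MAC per PE per cycle
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # -> [[19, 22], [43, 50]]
```

The key property the simulation exposes is that each processing element performs at most one multiply-accumulate per cycle while operands flow past it, which is why systolic designs achieve high TOPS-per-watt: data is reused across the array instead of being refetched from memory.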

By integrating these capabilities into its RISC-V solutions, Akeana ensures that AI applications can scale efficiently across different performance levels, from edge devices to high-performance computing environments.
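The vector-processing model mentioned above is typically programmed with RVV-style strip-mining, where the hardware grants an active vector length each loop iteration. The following pure-Python sketch models that pattern; `vlen` stands in for the hardware-granted length and its value here is arbitrary:

```python
def strip_mined_axpy(a, x, y, vlen=8):
    """Model of an RVV-style strip-mined loop computing y += a * x.

    Real RVV code asks the hardware for the active vector length on each
    pass (vsetvli); here min(vlen, remaining) plays that role.
    """
    n = len(x)
    i = 0
    while i < n:
        vl = min(vlen, n - i)              # "vsetvli": elements this pass
        # One "vector instruction" operates on vl elements at once.
        for lane in range(vl):
            y[i + lane] += a * x[i + lane]
        i += vl                            # advance by the granted length
    return y

print(strip_mined_axpy(2, [1, 2, 3, 4, 5], [0, 0, 0, 0, 0], vlen=2))
# -> [2, 4, 6, 8, 10]
```

Because the loop adapts to whatever length the hardware grants, the same binary runs correctly on cores with different vector register widths, which is one reason a single ISA can span the range of implementations described above.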

Industry Adoption and Future Outlook

The adoption of RISC-V in AI processing is accelerating as more companies recognize its advantages. Leading AI firms are incorporating RISC-V-based solutions into their designs, leveraging the architecture’s flexibility to optimize performance for specific workloads. With increasing investment in RISC-V software ecosystems, toolchains, and standardization efforts, the technology is well-positioned to become a dominant force in AI hardware development.

Akeana continues to push the boundaries of RISC-V-based AI computation, working with industry partners to refine and enhance its processor IP. As AI workloads evolve, the need for specialized yet flexible hardware solutions will only grow, and Akeana is committed to providing the most advanced and efficient RISC-V platforms to meet these demands.

Akeana’s Product Lines

Akeana offers a suite of RISC-V processor IP tailored for different applications, ensuring optimized performance across various market segments. The Akeana 100 series consists of 32-bit cores, making them ideal for embedded and consumer applications. The Akeana 1000 series delivers highly efficient, high-performance computation with up to four-wide issue in both in-order and out-of-order architectures. The Akeana 5000 series delivers ultra-high performance with six-wide to ten-wide issue, out-of-order architectures. These processors can be vector-extended, support AI acceleration instructions, and handle multi-threaded workloads with up to four threads, making them ideal for complex AI and data-intensive applications. The cores can be made RVA23 Profile compatible if they need to support an OS, or they can be stripped down to be more power and area efficient while remaining RV64/RVV1.0 compatible. This customization flexibility, accompanied by quick delivery from Akeana’s single design database platform, is ideal when AI SoC architects make trade-offs for their chips.

Conclusion

The convergence of RISC-V and AI computing represents a paradigm shift in hardware design, offering unprecedented levels of flexibility, efficiency, and scalability. By leveraging a unified, heterogeneous hardware and software platform, Akeana is driving the next wave of AI innovation, empowering companies to build cutting-edge AI chips that meet the needs of an increasingly data-driven world.

With a robust portfolio of RISC-V processor IP, AI acceleration engines, and optimized software stacks, Akeana is setting a new standard for AI hardware development. As the industry embraces open and extensible architectures, Akeana’s solutions will play a pivotal role in shaping the future of AI computation, enabling breakthroughs in efficiency, performance, and scalability.

About Akeana

Akeana was founded to bring maximum performance and capabilities to the RISC-V ecosystem. The team is leveraging its vast experience, including contributions to successful projects like ThunderX at Cavium/Marvell, to deliver best-in-class CPU and system IP solutions.

Akeana licenses a complete suite of RISC-V Core IP—including microcontrollers, Big-Little application cores, high-performance data center cores, and multi-threaded cores for networking and other high-throughput applications. The company’s highly configurable design methodology allows for cores optimized for specific vertical markets and applications.

In addition to cores, Akeana provides a complete suite of system IP, including cluster caches, non-coherent interconnects (AXI), coherent interconnects (CHI), RISC-V interrupt controllers, RISC-V IOMMU, security IP, and AI accelerators. With a commitment to innovation and performance, Akeana is shaping the future of RISC-V-based AI computing.
