Cassia.ai Achieves Breakthrough in AI Accelerator Technology with Successful Tapeout of Two Test Chips

San Jose, CA – December 17, 2025 – Cassia.ai, a pioneer in improved mathematical hardware functions for Artificial Intelligence (AI) and Machine Learning (ML), today announced the successful tapeout of a pair of AI accelerator test chips designed to demonstrate that its improved math functions deliver significant gains in performance, power efficiency, and cost for AI/ML workloads.

State-of-the-art AI/ML GPU and xPU chips can perform quadrillions of mathematical operations per second (PetaFLOPs) per chip, making the power and performance of these mathematical functions critically important to Artificial Intelligence. Cassia.ai's patent-pending advanced mathematics can improve the performance of mathematical functions by 10x, or reduce their power by 10x, at lower cost.

Cassia.ai's improved math test chip adds another proof point for the company's approach to accelerating AI and ML applications, supplementing simulation and FPGA demonstrations of Cassia.ai's methods, which have been shown to improve the performance of a number of neural network models, including Generative Pre-trained Transformer (GPT) models, Convolutional Neural Network (CNN) models, and AI recommendation engines. By incorporating Cassia.ai's methods, the improved math test chip achieves a substantial boost in performance-per-watt, making it an attractive solution for everything from the datacenter to the edge, where performance and power efficiency are paramount. A second test chip using the same architecture, but with traditional mathematical functions, has been taped out to allow a direct comparison between traditional mathematics and Cassia.ai's improved mathematical methods.

“Our team has made a significant breakthrough in AI accelerator technology with the successful tapeout of our test chips,” said Dr. James Tandon, CEO of Cassia.ai. “By embracing accelerated mathematics, we've opened up new possibilities for improving the efficiency and scalability of AI systems, which is critical for the next generation of AI applications, and that's why we say, ‘We do math better.’ We're thrilled to have partnered with The Hoang Laboratory at the University of Electro-Communications in Tokyo for these test chips; their team provided the ideal combination of performance, power efficiency, and feature density.”

The successful tapeout of these test chips is a crucial step towards the development of next-generation AI accelerators that can efficiently handle the complex and computationally intensive workloads associated with AI and ML. Cassia.ai is committed to continuing its research and development efforts in AI acceleration, with the goal of bringing this technology to market and enabling a new class of AI-enabled applications.

The test chips follow a Neural Processing Unit (NPU) architecture similar to Google's recently open-sourced Coral NPU architecture, and they are expected to run models optimized for the Coral NPU through a software translation layer, with higher efficiency thanks to Cassia.ai's accelerated math functions. The test chips can serve as a proof of concept for future AI accelerator products and will be made available for evaluation and testing by select partners and customers, with the goal of licensing the IP.

Cassia.ai’s improved mathematical functions are available now as an RTL foundation library for AI/ML chips with a full range of collateral including Verilog®, C++, PyTorch®, CUDA®, and FPGA reference designs.

For more information, please visit www.cassia.ai.

