Wave Computing Unveils New Licensable 64-Bit AI IP Platform to Enable High-Speed Inferencing and Training in Edge Applications
Wave’s TritonAI™ 64 Platform Provides a Scalable, Programmable Solution for AI System on Chip (SoC) Designers Targeting Automotive, Enterprise and Other High-Growth AI Edge Markets
CAMPBELL, Calif., April 10, 2019 – Wave Computing®, the Silicon Valley company accelerating artificial intelligence (AI) from the datacenter to the edge, today announced its new TritonAI™ 64 platform, which integrates a triad of powerful technologies into a single, future-proof, licensable intellectual property (IP) solution. Wave’s TritonAI 64 platform delivers 8-to-32-bit integer-based support for high-performance AI inferencing at the edge today, with bfloat16 and 32-bit floating-point support for edge training to follow.
Wave’s TritonAI 64 platform is an industry-first solution, enabling customers to address a broad range of AI use cases with a single platform. It delivers efficient edge inferencing and training performance to support today’s AI algorithms, while giving customers the flexibility to future-proof their investment for emerging AI algorithms. The platform combines a leading-edge MIPS® 64-bit SIMD engine with Wave’s configurable dataflow and tensor-based technologies. Additional features include access to Wave’s MIPS integrated development environment (IDE), as well as a Linux-based TensorFlow programming environment.
The global market for AI products is projected to grow dramatically to over $170B by 2025, according to technology analyst firm Tractica. The total addressable market (TAM) for AI at the edge accounts for over $100B of this figure and is being driven primarily by the need for more efficient inferencing, new AI workloads and use cases, and training at the edge.
“Wave Computing is achieving another industry first by delivering a licensable IP platform that enables both AI inferencing and training at the edge,” said Derek Meyer, Chief Executive Officer of Wave Computing. “The tremendous growth of edge-based AI use cases is exacerbating the challenges of SoC designers who continue to struggle with legacy IP products that were not designed for efficient AI processing. Our TritonAI solution provides them with the investment protection of a programmable platform that can scale to support the AI applications of both today and tomorrow. TritonAI 64 enhances our overall AI offerings that span datacenter to edge and is another company milestone enabled by our acquisition of MIPS last year.”
Details of Wave’s TritonAI 64 Platform:
- MIPS 64-bit + SIMD Technology: An open instruction set architecture (MIPS Open™), coupled with a mature integrated development environment (IDE), provides an ideal software platform for developing AI applications, stacks and use cases. The MIPS IP subsystem in the TritonAI 64 platform enables SoCs to be configured with up to six MIPS 64 CPUs, each with up to four hardware threads. The MIPS subsystem hosts the execution of Google’s TensorFlow framework on a Debian-based Linux operating system, enabling the development of both inferencing and edge learning applications (a generic example of such an application appears after this list). Additional AI frameworks, such as Caffe2, can be ported to the MIPS subsystem, and a wide variety of AI networks can be supported through ONNX conversion.
- WaveTensor™ Technology: The WaveTensor subsystem can scale up to a PetaOP of 8-bit integer operations in a single core instantiation by combining extensible slices of 4×4 or 8×8 kernel matrix multiplier engines for highly efficient execution of today’s key Convolutional Neural Network (CNN) algorithms. CNN execution performance can scale up to 8 TOPS/W and over 10 TOPS/mm² in industry-standard 7nm process nodes, using standard libraries at typical voltage and process corners (a back-of-the-envelope reading of these figures follows the list).
- WaveFlow™ Technology: Wave Computing’s highly flexible, linearly scalable fabric is adaptable to any number of complex AI algorithms, as well as conventional signal processing and vision algorithms. The WaveFlow subsystem features low-latency, single-batch-size AI network execution and can be reconfigured to execute multiple AI networks concurrently. The patented WaveFlow architecture also supports algorithm execution without intervention or support from the MIPS subsystem.
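Because the MIPS subsystem hosts TensorFlow on a Debian-based Linux operating system, an edge inferencing application targeting TritonAI 64 would, in principle, look like ordinary TensorFlow code. The sketch below is illustrative only: it assumes a generic TensorFlow 2.x installation and a pretrained MobileNetV2 model, neither of which is specified by Wave, and it is not Wave-specific code.

```python
# Illustrative only: a generic TensorFlow 2.x single-image inference script of the
# kind the release says could run on the Debian-based Linux environment hosted by
# the MIPS subsystem. The MobileNetV2 model and image path are hypothetical choices.
import numpy as np
import tensorflow as tf

# Load a pretrained CNN (hypothetical choice; any TensorFlow model would do).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Prepare a single input image (single-batch, matching the edge-inference use case).
image = tf.keras.preprocessing.image.load_img("example.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(image)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x[np.newaxis, ...])

# Run inference and print the top prediction.
preds = model.predict(x)
_, description, score = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=1)[0][0]
print(f"{description}: {score:.3f}")
```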
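As a rough reading of the WaveTensor figures above, a maximal 1 PetaOP (1,000 TOPS) INT8 instantiation combined with the quoted 8 TOPS/W and 10 TOPS/mm² densities would imply a budget on the order of 125 W and 100 mm². The short calculation below is only that back-of-the-envelope arithmetic; the derived power and area numbers are inferences, not figures published by Wave.

```python
# Back-of-the-envelope arithmetic from the figures quoted in the release.
# The derived power and area values are inferences, not published specifications.
peak_tops = 1000.0             # 1 PetaOP of INT8 ops = 1,000 TOPS (maximal instantiation)
efficiency_tops_per_w = 8.0    # quoted CNN execution efficiency (7nm, typical corner)
density_tops_per_mm2 = 10.0    # quoted area density (7nm, typical corner)

power_w = peak_tops / efficiency_tops_per_w    # ~125 W at peak throughput
area_mm2 = peak_tops / density_tops_per_mm2    # ~100 mm^2 at peak throughput

print(f"Implied power at peak: {power_w:.0f} W")
print(f"Implied area at peak:  {area_mm2:.0f} mm^2")
```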
Additional information about Wave Computing’s new TritonAI™ 64 platform, along with details on Wave’s complete portfolio of IP solutions, can be found at https://wavecomp.ai.
About Wave Computing
Wave Computing, Inc. is revolutionizing artificial intelligence (AI) with its dataflow-based systems and solutions that deliver orders of magnitude performance improvements over legacy architectures. The company’s vision is to bring deep learning to customers’ data wherever it may be—from the datacenter to the edge—helping accelerate time-to-insight. Wave is powering the next generation of AI by combining its dataflow architecture with its MIPS embedded RISC multithreaded CPU cores and IP. Wave Computing was named Frost & Sullivan’s 2018 “Machine Learning Industry Technology Innovation Leader” and is recognized by CIO Applications magazine as one of the “Top 25 Artificial Intelligence Providers.” Wave now has over 400 granted and pending patents and hundreds of customers worldwide. More information about Wave Computing can be found at https://wavecomp.ai.