Arteris IP and Wave Computing Collaborate on Reference Architecture for Enterprise Dataflow Platform
The Arteris FlexNoC Artificial Intelligence (AI) Package, Coupled with Wave Computing’s AI Systems and IP Technology, Creates a Unified Platform Optimized for AI Data Processing
CAMPBELL, Calif. – May 21, 2019 – Arteris IP, the world’s leading supplier of innovative silicon-proven network-on-chip (NoC) interconnect intellectual property (IP), and Wave Computing®, the Silicon Valley company accelerating artificial intelligence (AI) from the datacenter to the edge, are collaborating to create a blueprint that can help customers overcome compute-to-memory design challenges. Additionally, Wave Computing is licensing Arteris IP’s Ncore Cache Coherent Interconnect, FlexNoC interconnect IP, and its accompanying FlexNoC AI Package for use in the AI-enabled chips that fuel Wave Computing’s data center systems products. By integrating each other’s technologies, Wave Computing and Arteris IP can ensure the seamless flow of information enterprise-wide, helping speed time-to-insight.
“Wave and Arteris have complementary compute and networking technologies that, when packaged together, address some of the key challenges facing system-on-chip designers today such as shorter product cycles and rapidly increasing product complexity,” said Steve Brightfield, senior director, Strategic AI IP Marketing, Wave Computing. “The world of AI demands greater compute power. Working with Arteris allows us to design a scalable data platform with blazing-fast performance at a cost-effective price that helps customers accelerate insight from the edge to the data center.”
The key to a successful AI-enabled, system-on-chip (SoC) design is effectively managing the flow of information across the chip. By linking Arteris’ NoC interconnect and AI package IP technology with Wave Computing’s TritonAI 64 dataflow processing elements and cores, customers can successfully reduce latency and optimize the flow of information across their SoC platforms.
“Arteris IP has developed unique on-chip interconnect capabilities that facilitate the rapid assembly of complex machine learning SoCs with cache coherent, non-coherent and regular AI structures to provide a competitive advantage to engineering teams designing the next generation of AI and machine learning chips,” said K. Charles Janac, President and CEO of Arteris IP. “The combination of the TritonAI 64 IP platform and Arteris IP’s portfolio of interconnect technologies helps customers significantly boost performance and enable the seamless flow of data across a wide variety of compute-intensive, AI-enabled automotive, enterprise and networking applications.”
For more information on Wave Computing’s complete portfolio of IP and systems products, visit www.wavecomp.ai. For additional details on Arteris IP’s line of AI-enabled network computing solutions, visit www.arteris.com.
About Wave Computing
Wave Computing, Inc. is revolutionizing artificial intelligence (AI) with its dataflow-based systems and solutions. The company’s vision is to bring deep learning to customers’ data wherever it may be—from the datacenter to the edge—helping accelerate time-to-insight. Wave Computing is powering the next generation of AI by combining its dataflow architecture with its MIPS embedded RISC multithreaded CPU cores and IP. Wave Computing received Frost & Sullivan’s 2018 “Machine Learning Industry Technology Innovation Leader” award and was recognized as one of the “Top 25 Artificial Intelligence Providers” by CIO Applications magazine. More information about Wave Computing can be found at https://wavecomp.ai.
Wave Computing, the Wave Computing logo, MIPS Open, MIPS32, microAptiv, TritonAI 64 and MIPS are trademarks of Wave Computing, Inc. and its applicable affiliates. All other trademarks are used for identification purposes only and are the property of their respective owners.
About Arteris IP
Arteris IP provides network-on-chip (NoC) interconnect IP to accelerate system-on-chip (SoC) semiconductor assembly for a wide range of applications from AI to automobiles, mobile phones, IoT, cameras, SSD controllers, and servers for customers such as Baidu, Mobileye, Samsung, Huawei / HiSilicon, Toshiba and NXP. Arteris IP products include the Ncore® cache coherent and FlexNoC® non-coherent interconnect IP, the CodaCache® standalone last level cache, and optional Resilience Package (ISO 26262 functional safety), FlexNoC AI Package, and PIANO® automated timing closure capabilities. Customer results obtained by using Arteris IP products include lower power, higher performance, more efficient design reuse and faster SoC development, leading to lower development and production costs. For more information, visit www.arteris.com.