LeapMind's "Efficiera" Ultra-low Power AI Inference Accelerator IP Was Verified RTL Design for ASIC/ASSP Conversion
May 20th, 2021, Tokyo, Japan - LeapMind Inc., a creator of the standard in edge AI (Shibuya-ku, Tokyo; CEO: Soichi Matsuda), today announced that the company’s proprietary ultra-low power AI inference accelerator IP “Efficiera” has been verified as an RTL design for ASIC/ASSP conversion.
“Through this design verification, we were able to confirm the PPA (Power/Performance/Area) expected at the time of IP configuration,” said Katsutoshi Yamazaki, VP of Business at LeapMind. “This is a big step toward the future LSI commercialization of Efficiera.”
Efficiera is an ultra-low power AI inference accelerator IP specialized for CNN inference arithmetic processing that operates as a circuit on FPGA or ASIC/ASSP devices. For more information, visit https://leapmind.io/business/ip/.
About LeapMind
LeapMind Inc. was founded in 2012 with the corporate philosophy of "bringing new devices that use machine learning to the world". Total investment in LeapMind to date has reached 4.99 billion yen (as of May 2021). The company's strength lies in extremely low bit quantization for compact deep learning solutions, and it has a proven track record with over 150 companies, centered on manufacturing, including the automotive industry. It is also developing its Efficiera semiconductor IP, drawing on its experience in both software and hardware development.
Head office: Shibuya Dogenzaka Sky Building 5F, 28-1 Maruyama-cho, Shibuya-ku, Tokyo 150-0044
Representative: Soichi Matsuda, CEO
Established: December 2012
URL: https://leapmind.io/en/