LeapMind's "Efficiera" Ultra-low Power AI Inference Accelerator IP Was Verified RTL Design for ASIC/ASSP Conversion
Tokyo, Japan, May 20, 2021 - LeapMind Inc. (Shibuya-ku, Tokyo; CEO: Soichi Matsuda), a creator of the standard in edge AI, today announced that the RTL design of the company’s proprietary ultra-low power AI inference accelerator IP “Efficiera” has been verified for ASIC/ASSP conversion.
“By conducting this design verification, we were able to confirm the PPA (Power/Performance/Area) expected at the time of IP configuration,” said Katsutoshi Yamazaki, VP of Business at LeapMind. “This is a big step toward the future LSI commercialization of Efficiera.”
Efficiera is an ultra-low power AI inference accelerator IP specialized for CNN inference arithmetic processing that operates as a circuit on FPGA or ASIC/ASSP devices. For more information, visit https://leapmind.io/business/ip/.
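To illustrate the kind of arithmetic such an accelerator targets, below is a minimal NumPy sketch of a CNN convolution using extremely low-bit quantization (a technique LeapMind highlights in the company profile below). The bit widths, the symmetric uniform quantizer, and the helper names `quantize` and `conv2d_int` are illustrative assumptions for this sketch, not LeapMind's actual Efficiera implementation.

```python
# Minimal sketch: low-bit quantized convolution with integer multiply-accumulate.
# Bit widths and the quantization scheme are assumed for illustration only.
import numpy as np

def quantize(x, bits):
    """Symmetric uniform quantization of a float tensor to `bits` bits."""
    levels = 2 ** (bits - 1) - 1 if bits > 1 else 1
    scale = float(np.max(np.abs(x))) / levels or 1.0
    q = np.clip(np.round(x / scale), -levels, levels)
    return q.astype(np.int32), scale

def conv2d_int(x_q, w_q):
    """Valid 2-D convolution performed entirely in integer arithmetic."""
    H, W = x_q.shape
    kh, kw = w_q.shape
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=np.int64)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x_q[i:i + kh, j:j + kw] * w_q)
    return out

# Example bit widths (assumed): 2-bit activations, 1-bit-style weights.
rng = np.random.default_rng(0)
activations = rng.standard_normal((8, 8)).astype(np.float32)
weights = rng.standard_normal((3, 3)).astype(np.float32)

a_q, a_scale = quantize(activations, bits=2)
w_q, w_scale = quantize(weights, bits=1)   # values restricted to {-1, 0, +1} here

acc = conv2d_int(a_q, w_q)                 # cheap integer multiply-accumulate
result = acc * (a_scale * w_scale)         # rescale back to float once per layer
print(result.shape, result.dtype)
```

The point of the sketch is the design trade-off: heavy multiply-accumulate work is done on very narrow integers, and floating-point rescaling happens only once per layer, which is what makes such inference hardware small and power-efficient.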
About LeapMind
LeapMind Inc. was founded in 2012 with the corporate philosophy of "bringing new devices that use machine learning to the world". Total investment in LeapMind to date has reached 4.99 billion yen (as of May 2021). The company's strength lies in extremely low-bit quantization for compact deep learning solutions. It has a proven track record with more than 150 companies, centered on manufacturing, including the automotive industry. It is also developing its Efficiera semiconductor IP, drawing on its experience in both software and hardware development.
Head office: Shibuya Dogenzaka Sky Building 5F, 28-1 Maruyama-cho, Shibuya-ku, Tokyo 150-0044
Representative: Soichi Matsuda, CEO
Established: December 2012
URL: https://leapmind.io/en/