LeapMind's "Efficiera" Ultra-Low Power AI Inference Accelerator IP Verified as an RTL Design for ASIC/ASSP Conversion
May 20th, 2021, Tokyo, Japan - LeapMind Inc., a creator of the standard in edge AI (Shibuya-ku, Tokyo; CEO: Soichi Matsuda), today announced that the company's proprietary ultra-low power AI inference accelerator IP "Efficiera" has been verified as an RTL design for ASIC/ASSP conversion.
"By conducting this design verification, we were able to confirm the PPA (Power/Performance/Area) expected at the time of IP configuration," said Katsutoshi Yamazaki, VP of Business at LeapMind. "This is a big step forward toward future LSI commercialization of Efficiera."
Efficiera is an ultra-low power AI inference accelerator IP specialized for CNN inference arithmetic processing, operating as a circuit on FPGA or ASIC/ASSP devices. For more information, visit https://leapmind.io/business/ip/.
About LeapMind
LeapMind Inc. was founded in 2012 with the corporate philosophy of "bringing new devices that use machine learning to the world". Total investment in LeapMind to date has reached 4.99 billion yen (as of May 2021). The company's strength is extremely low bit quantization for compact deep learning solutions. It has a proven track record with more than 150 companies, centered on manufacturing, including the automotive industry. Drawing on its experience in both software and hardware development, it is also developing its Efficiera semiconductor IP.
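The "extremely low bit quantization" mentioned above refers to representing network weights with very few bits to shrink model size and power consumption. As a rough illustration of the general idea only (a minimal sketch, not LeapMind's proprietary scheme), a uniform quantizer that snaps floating-point weights to 2-bit codes might look like this:

```python
import numpy as np

def quantize_uniform(w, bits=2):
    """Uniformly quantize an array to `bits` bits (illustrative only).

    Maps each value to one of 2**bits evenly spaced levels spanning
    the array's range, then returns the dequantized approximation.
    """
    levels = 2 ** bits
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min) / (levels - 1)
    codes = np.round((w - w_min) / scale)   # integer codes in [0, levels-1]
    return codes * scale + w_min            # reconstructed low-bit weights

weights = np.array([-0.9, -0.3, 0.1, 0.4, 0.8])
wq = quantize_uniform(weights, bits=2)
print(wq)  # every value snapped to one of 4 representable levels
```

At 2 bits, only four weight levels exist, so multiply-accumulate hardware can be dramatically simplified; production schemes typically add per-channel scaling and quantization-aware training, which are omitted here.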
Head office: Shibuya Dogenzaka Sky Building 5F, 28-1 Maruyama-cho, Shibuya-ku, Tokyo 150-0044
Representative: Soichi Matsuda, CEO
Established: December 2012
URL: https://leapmind.io/en/