LeapMind Releases Beta Version of Ultra-Low Power AI Inference Accelerator IP
Tokyo, Japan - October 5, 2021 -- LeapMind Inc., a creator of the standard in edge AI (Shibuya-ku, Tokyo; CEO: Soichi Matsuda), today announced the beta release of its ultra-low power AI inference accelerator IP “Efficiera” version 2 (v2), ahead of its commercial launch by the end of this year. Efficiera, originally developed and licensed by LeapMind Inc., has been highly valued since its launch in October 2020 for its low power consumption, high performance, small footprint, and performance scalability, which allow it to fit into small FPGAs and, through support for mass-produced boards, shorten the development period of end users’ final products. With the release of the v2 beta version, LeapMind welcomes trial use and feedback from SoC vendors, end-user product designers, and others. To obtain the v2 beta version, please contact us at business@leapmind.io.
Main specifications/features of Efficiera v2
- Greatly improved hardware performance: power efficiency raised from 27 TOPS/W to 100 TOPS/W, with faster execution of functions such as skip connection, convolution, and pixel embedding
- Flexible system development utilizing ASIC/ASSP as well as FPGA
- Provision of “Efficiera NDK (Network Development Kit)”, an extremely low bit quantization model development environment for end users who want to develop their own models on the Efficiera IP
“It is a pleasure for us to release the beta version while development of Efficiera v2 proceeds on schedule,” commented Katsutoshi Yamazaki, LeapMind’s VP of Business. “Efficiera v2 is designed for use in ASIC/ASSP as well as FPGA, and its hardware performance enables machine learning based image processing at the edge. The market continues to debate whether cloud or edge processing is better for complex inference workloads that require large amounts of data, weighing conditions and constraints such as network load, power efficiency, and real-time performance. Through our core technology, extremely low bit quantization, which was developed with inference on large volumes of complex data on edge devices in mind, we aim to promote device development that lets users choose edge inference without hesitation. Efficiera v2 achieves dramatically improved processing capability, especially for image processing, and our development department has confirmed its strong performance in situations that require high-definition AI image processing on edge devices. We hope that Efficiera v2 will significantly expand the conventional scope of machine learning applications at the edge.”
About Efficiera
Efficiera is an ultra-low power AI inference accelerator IP specialized for CNN inference processing that runs as a circuit on FPGA or ASIC devices. Its extremely low bit quantization technology reduces the number of quantization bits to 1-2 bits, maximizing the power and area efficiency of convolution, which accounts for most of the inference processing, without requiring advanced semiconductor manufacturing processes or special cell libraries. This product makes it possible to incorporate deep learning functions, previously difficult for technical reasons, into a variety of edge devices: consumer electronics such as home appliances, industrial equipment such as construction machinery, surveillance cameras, broadcasting equipment, and small machines and robots constrained by power, cost, and heat dissipation. Visit the product website at https://leapmind.io/business/ip/
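As a rough illustration of the idea behind extremely low bit quantization, the following Python/NumPy sketch binarizes convolution weights to 1 bit with a per-channel scale and compares the result against the full-precision convolution. It is a minimal conceptual example under assumed conventions; the function names, the scaling scheme, and the naive convolution loop are illustrative and are not part of Efficiera or the Efficiera NDK.

```python
# Minimal sketch of 1-bit (binary) weight quantization for a convolution.
# All names and choices here are illustrative, not Efficiera/NDK APIs.
import numpy as np

def binarize_weights(w):
    """Quantize float weights to {-1, +1} with a per-output-channel scale (mean |w|)."""
    scale = np.abs(w).mean(axis=(1, 2, 3), keepdims=True)
    return np.sign(np.where(w == 0, 1.0, w)) * scale

def conv2d(x, w):
    """Naive 'valid' 2D convolution: x is (C, H, W), w is (O, C, kH, kW)."""
    O, C, kH, kW = w.shape
    _, H, W = x.shape
    out = np.zeros((O, H - kH + 1, W - kW + 1))
    for o in range(O):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(x[:, i:i + kH, j:j + kW] * w[o])
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8, 8))           # input feature map
w = rng.standard_normal((4, 3, 3, 3)) * 0.1  # full-precision convolution weights
w_q = binarize_weights(w)                    # sign weights plus one float scale per channel

# With 1-bit weights, the multiply-accumulate reduces to additions/subtractions
# plus a single multiply per output channel, which is what saves power and area.
err = np.abs(conv2d(x, w) - conv2d(x, w_q)).mean()
print(f"mean absolute difference: {err:.4f}")
```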
About LeapMind
LeapMind Inc. was founded in 2012 with the corporate philosophy of "bringing new devices that use machine learning to the world". Total investment in LeapMind to date has reached 4.99 billion yen (as of May 2021). The company's strength is extremely low bit quantization for compact deep learning solutions. It has a proven track record with over 150 companies, centered on manufacturing, including the automotive industry. It also develops the Efficiera semiconductor IP, building on its experience in both software and hardware development.