AImotive Expands Into Silicon IP for Deep Learning Inference Acceleration
AImotive has been developing its aiDrive software suite for advanced driver assistance systems (ADAS) and autonomous vehicles for nearly a decade. As the computing demands of its algorithms continue to increase, the company is finding that conventional processor approaches aren't keeping pace. In response, and with an eye on both vehicle autonomy and other deep learning opportunities, the company began developing its own inference acceleration engine, aiWare, at the beginning of last year. An FPGA-based evaluation kit for the IP is now available, with an initial ASIC implementation scheduled to follow it to market early next year.
To read the full article, click here