AImotive Expands Into Silicon IP for Deep Learning Inference Acceleration
AImotive has been developing its aiDrive software suite for advanced driver assistance systems (ADAS) and autonomous vehicles for nearly a decade. As the computing demands of its algorithms continue to grow, the company has found that conventional processor approaches aren't keeping pace. In response, and with an eye on both vehicle autonomy and other deep learning opportunities, the company began developing its own inference acceleration engine, aiWare, at the beginning of last year. An FPGA-based evaluation kit for the IP is now available, with an initial ASIC implementation scheduled to follow it to market early next year.