AImotive Expands Into Silicon IP for Deep Learning Inference Acceleration
AImotive has been developing its aiDrive software suite for advanced driver assistance systems (ADAS) and autonomous vehicles for nearly a decade. As the computing demands of its algorithms continue to grow, the company is finding that conventional processor approaches aren't keeping pace. In response, and with an eye on both vehicle autonomy and other deep learning opportunities, the company began developing its own inference acceleration engine, aiWare, at the beginning of last year. An FPGA-based evaluation kit for the IP is now available, with an initial ASIC implementation scheduled to follow it to market early next year.
To read the full article, click here