AImotive Expands Into Silicon IP for Deep Learning Inference Acceleration
AImotive has been developing its aiDrive software suite for advanced driver assistance systems (ADAS) and autonomous vehicles for nearly a decade. As the computing demands of its algorithms continue to grow, the company is finding that conventional processor approaches aren't keeping pace. In response, and with an eye on both vehicle autonomy and other deep learning opportunities, the company began developing its own inference acceleration engine, aiWare, at the beginning of last year. An FPGA-based evaluation kit for the IP is now available, with an initial ASIC implementation scheduled to follow it to market early next year.
To read the full article, click here
Related Semiconductor IP
- CANsec Acceleration Engine
- High-performance 32-bit multi-core processor with AI acceleration engine
- High-performance 64-bit RISC-V architecture multi-core processor with AI vector acceleration engine
Related Blogs
- Deep learning inference performance on the Yitian 710
- Scaling Out Deep Learning (DL) Inference and Training: Addressing Bottlenecks with Storage, Networking with RISC-V CPUs
- Silicon Hive CTO: How Transaction-Based Acceleration Speeds IP Verification And Prevents TV "Crashes"
- CEVA Software Framework Brings Deep Learning to Embedded Vision Systems