AImotive's aiWare3 Hardware IP Helps Drive Autonomous Vehicles To Production

Latest technology enables scalable, low-power automotive inference engines with >50 TMAC/s NN processing power

MOUNTAIN VIEW, Calif., Oct. 31, 2018 -- AImotive™, the global provider of full-stack, vision-first self-driving technology, today announced the release of aiWare3™, the company's third-generation scalable, low-power hardware Neural Network (NN) acceleration core. Designed for the rigorous requirements of automotive embedded solutions, aiWare3's patented IP core delivers unprecedented scalability and flexibility. By delivering new levels of performance in both central processing and sensor fusion units, aiWare3 enables automotive OEMs and Tier One suppliers to bring L3 autonomy to production in the shortest possible timescales.

The scalable aiWare3 architecture facilitates low-power continuous operation for autonomous vehicles (AVs) with 12 or more high-resolution cameras, LiDARs and/or radars. aiWare3 delivers up to 50 TMAC/s (> 100 TOPS) per chip at more than 2 TMAC/s (4 TOPS) per W¹. This makes it well suited to real-time, embedded inference engines with strict power, thermal and latency constraints, and ideal for the most processing-intensive NN tasks, such as low-latency, high-frame-rate segmentation, perception and classification.
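
For readers comparing the quoted figures, the sketch below (our own arithmetic, not AImotive data beyond the numbers quoted above) shows how the TMAC/s and TOPS values relate under the usual convention that one multiply-accumulate (MAC) counts as two operations; the implied chip power is a derived estimate only.

```python
# Back-of-the-envelope check of the quoted throughput figures (illustrative
# arithmetic only; the inputs are the press-release numbers, the outputs are
# derived estimates, not vendor measurements).

PEAK_TMAC_S = 50.0             # quoted peak throughput per chip, TMAC/s
EFFICIENCY_TMAC_S_PER_W = 2.0  # quoted efficiency, TMAC/s per W (7nm estimated)
OPS_PER_MAC = 2                # one MAC = one multiply + one add

peak_tops = PEAK_TMAC_S * OPS_PER_MAC                    # -> 100 TOPS
tops_per_w = EFFICIENCY_TMAC_S_PER_W * OPS_PER_MAC       # -> 4 TOPS per W
implied_power_w = PEAK_TMAC_S / EFFICIENCY_TMAC_S_PER_W  # -> ~25 W at peak (derived)

print(f"{peak_tops:.0f} TOPS peak, {tops_per_w:.0f} TOPS/W, ~{implied_power_w:.0f} W implied at peak")
```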

Building on the award-winning² aiWare v2 core, aiWare3's highly configurable and scalable architecture enables OEMs to implement a variety of NN acceleration strategies in their hardware platforms, ranging from centralized NN resources shared among multiple workloads within a powerful central processing unit, to pre-processing integrated into each sensor or group of sensors. This gives OEMs far greater choice in how they approach production AV timescales, while shrinking the electronic control units (ECUs) needed to support high-performance NN processing. The highly autonomous, accelerator-based approach used by aiWare also enables customers to maximize re-use of their significant investment in existing hardware and software designs.

"Our portfolio of hardware technologies enables the implementation of high-performance, AI-based ECUs that allow our automotive partners to move beyond demonstrators and into volume AV production," said Marton Feher, senior vice president of hardware engineering at AImotive. "aiWare3 makes NN processing for multi HD sensor configurations achievable, which is essential for AVs. It answers the question of how best to implement hardware platforms capable of delivering the real-time, high performance results required by future AVs' NN-based AI systems; and it does so at low power and to full ASIL-D standards if required."

Many NN accelerators claim superior power consumption and performance, but such figures are often measured under ideal simulated conditions. Unlike other solutions, aiWare3's architecture has been optimized to deliver the highest efficiency when integrated into realistic production ECU designs: the aiWare3 core can deliver more than 2 TMAC/s per W (7nm estimated) while sustaining >95% efficiency under continuous operation.
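
As a concrete illustration of the efficiency claim, a minimal sketch (again our own arithmetic, assuming the 50 TMAC/s peak quoted above) of the sustained throughput that utilization would imply:

```python
# Illustrative only: sustained throughput implied by ">95% efficiency" at the
# quoted peak rate. Inputs are press-release figures; the result is a derived
# estimate, not a measured benchmark.

peak_tmac_s = 50.0   # quoted per-chip peak, TMAC/s
efficiency = 0.95    # quoted lower bound on sustained utilization

sustained_tmac_s = peak_tmac_s * efficiency  # >= 47.5 TMAC/s under continuous operation
print(f"Sustained: >= {sustained_tmac_s:.1f} TMAC/s at {efficiency:.0%} efficiency")
```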

AImotive's aiWare3 IP core can be implemented on-chip as part of an SoC (system on chip) for central processing, sensor processing or sensor fusion gateway subsystems. It can also form the basis of one or more accelerator chips working alongside the SoC in a highly autonomous NN acceleration subsystem.

aiWare3's IP core is supported by a comprehensive software development kit (SDK) that uses The Khronos Group's NNEF™ standard. aiWare3 will ship to lead customers in Q1 2019.

About aiWare3

  • Delivers >95% efficiency for a wide range of NN applications
  • Scalable from 1 TMAC/s to > 50 TMAC/s
  • Designed for continuous real-time inference engine operation
  • Configurable for a wide range of ASIL-B to ASIL-D approaches
  • Low power – >2 TMAC/s per W (7nm estimated)
  • Highly deterministic low-level architecture – no caches


Notes:
¹ 7nm estimated
² Vision Product of the Year, Embedded Vision Alliance, May 2018
