ARC EV Processors are fully programmable and configurable IP cores that are optimized for embedded vision applications

Overview

Machine vision and deep learning are being embedded in highly integrated SoCs and expanding into high-volume applications such as automotive ADAS, surveillance, and augmented reality. A major challenge in enabling mass adoption of embedded vision is providing the required processing capability at a power and cost point low enough for embedded applications, while maintaining sufficient flexibility to cater to rapidly evolving markets.

The Synopsys ARC EV Processors are fully programmable and configurable IP cores that are optimized for embedded vision applications, combining the flexibility of software solutions with the low cost and low power consumption of hardware. For fast, accurate execution of convolutional neural networks (CNNs) or recurrent neural networks (RNNs), the EV Processors integrate an optional high-performance deep neural network (DNN) accelerator.

The EV Processors are designed to integrate seamlessly into an SoC, can be used with any host processor, and operate in parallel with the host. To speed application software development, the EV Processors are supported by Synopsys' ARC MetaWare EV Development Toolkit, a comprehensive software programming environment based on existing and emerging embedded vision and neural network standards including OpenCV, OpenVX™, OpenCL™ C, and Caffe.
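To illustrate the programming model these standards define, the following minimal sketch builds and runs an OpenVX graph containing a single Gaussian 3x3 filter node. It uses only standard OpenVX 1.x C API calls; the image dimensions are placeholders, and the example is not specific to the MetaWare EV Development Toolkit, which layers its own project setup and optimized kernel libraries on top of these standards.

    #include <VX/vx.h>
    #include <stdio.h>

    /* Minimal OpenVX sketch: build, verify, and execute a graph with a
     * single Gaussian 3x3 filter node. Image sizes are assumptions. */
    int main(void)
    {
        vx_context context = vxCreateContext();
        vx_graph   graph   = vxCreateGraph(context);

        /* 640x480 8-bit grayscale input and output images. */
        vx_image input  = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
        vx_image output = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);

        /* Add a standard Gaussian 3x3 node to the graph. */
        vxGaussian3x3Node(graph, input, output);

        /* Verification lets the implementation validate the graph and map
         * its nodes onto the available vision hardware. */
        if (vxVerifyGraph(graph) == VX_SUCCESS) {
            vxProcessGraph(graph);   /* Execute the graph once. */
            printf("Graph executed\n");
        } else {
            printf("Graph verification failed\n");
        }

        vxReleaseImage(&input);
        vxReleaseImage(&output);
        vxReleaseGraph(&graph);
        vxReleaseContext(&context);
        return 0;
    }

In a real application, the input image would typically be filled from a camera or frame buffer (for example via vxCopyImagePatch or vxMapImagePatch) before the graph is processed.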

Key Features

  • ARC processor cores are optimized to deliver the best performance/power/area (PPA) efficiency in the industry for embedded SoCs. Designed from the start for power-sensitive embedded applications, ARC processors implement a Harvard architecture for higher performance through simultaneous instruction and data memory access, and a high-speed scalar pipeline for maximum power efficiency. The 32-bit RISC engine offers a mixed 16-bit/32-bit instruction set for greater code density in embedded systems.
  • ARC's high degree of configurability and instruction set architecture (ISA) extensibility contribute to its best-in-class PPA efficiency. Designers can add or omit hardware features to optimize the core's PPA for their target application, with no wasted gates. ARC users can also add their own custom instructions and hardware accelerators to the core, as well as tightly couple memory and peripherals, enabling dramatic improvements in performance and power efficiency at both the processor and system levels.
  • Complete and proven commercial and open source tool chains, optimized for ARC processors, give SoC designers the development environment they need to efficiently develop ARC-based systems that meet all of their PPA targets.

Benefits

  • ARC processors are highly configurable, allowing designers to optimize the performance, power and area of each processor instance on their SoC by implementing only the hardware needed.
  • The ARChitect wizard enables drag-and-drop configuration of the core, including options for:
    ◦ Instruction, program counter and loop counter widths
    ◦ Register file size
    ◦ Timers, reset and interrupts
    ◦ Byte ordering
    ◦ Memory type, size, partitioning, base address
    ◦ Power management, clock gating
    ◦ Ports and bus protocol
    ◦ Multipliers, dividers and other hardware features
    ◦ Licensable components such as a Memory Protection Unit (MPU), Floating Point Unit (FPU) and Real-Time Trace (RTT)
    ◦ Adding/removing instructions

Technical Specifications

Maturity: Available on request
Availability: Available