Cadence Unveils Industry's First Neural Network DSP IP for Automotive, Surveillance, Drone and Mobile Markets
Complete, standalone DSP offers 1TMAC/sec computational capacity
EMBEDDED VISION SUMMIT, SANTA CLARA, Calif., May 1, 2017 -- Cadence Design Systems, Inc. (NASDAQ: CDNS) today unveiled the Cadence® Tensilica® Vision C5 DSP, the industry’s first standalone, self-contained neural network DSP IP core optimized for vision, radar/lidar and fused-sensor applications with high-availability neural network computational needs. Targeted for the automotive, surveillance, drone and mobile/wearable markets, the Vision C5 DSP offers 1TMAC/sec computational capacity to run all neural network computational tasks. For more information, visit www.cadence.com/go/visionc5.
As neural networks get deeper and more complex, the computational requirements are increasing rapidly. Meanwhile, neural network architectures are changing regularly, with new networks appearing constantly and new applications and markets continuing to emerge. These trends are driving the need for a high-performance, general-purpose neural network processing solution for embedded systems that not only requires little power, but also is highly programmable for future-proof flexibility and lower risk.
Neural Network DSP vs. a Neural Network Accelerator
Camera-based vision systems in automobiles, drones and security systems require two fundamental types of vision-optimized computation. First, the input from the camera is enhanced using traditional computational photography/imaging algorithms. Second, neural-network-based recognition algorithms perform object detection and recognition. Existing neural network accelerator solutions are hardware accelerators attached to imaging DSPs, with the neural network code split between running some network layers on the DSP and offloading convolutional layers to the accelerator. This combination is inefficient and consumes unnecessary power.
Architected as a dedicated neural-network-optimized DSP, the Vision C5 DSP accelerates all neural network computational layers (convolution, fully connected, pooling and normalization), not just the convolution functions. This frees up the main vision/imaging DSP to run image enhancement applications independently while the Vision C5 DSP runs inference tasks. By eliminating extraneous data movement between the neural network DSP and the main vision/imaging DSP, the Vision C5 DSP provides a lower power solution than competing neural network accelerators. It also offers a simple, single-processor programming model for neural networks.
“Many of our customers are in the difficult position of selecting a neural network inference platform today for a product that may not ship for a couple of years or longer,” said Steve Roddy, senior group director, Tensilica marketing at Cadence. “Not only must neural network processors for always-on embedded systems consume low power and be fast on every image, but they should also be flexible and future proof. All of the current alternatives require undesirable tradeoffs, and it was clear a new solution was needed. We architected the Vision C5 DSP as a general-purpose neural network DSP that is easy to integrate and very flexible, while offering better power efficiency than CNN accelerators, GPUs and CPUs.”
“The applications for deep learning in real-world devices are tremendous and diverse, and the computational requirements are challenging,” said Jeff Bier, founder of the Embedded Vision Alliance. “Specialized programmable processors like the Vision C5 DSP enable deployment of deep learning in cost- and power-sensitive devices.”
Vision C5 DSP Features and Performance
The Vision C5 DSP offers class-leading neural network performance in a self-contained engine:
- 1TMAC/sec computational capacity (4X greater throughput than the Vision P6 DSP) in less than 1mm² silicon area provides very high computation throughput on deep learning kernels
- 1024 8-bit MACs or 512 16-bit MACs for exceptional performance at both 8-bit and 16-bit resolutions
- VLIW SIMD architecture with 128-way, 8-bit SIMD or 64-way, 16-bit SIMD
- Architected for multi-core designs, enabling a multi-teraMAC solution in a small footprint
- Integrated iDMA and AXI4 interface
- Uses the same proven software toolset as the Vision P5 and P6 DSPs
- Compared to commercially available GPUs, the Vision C5 DSP is up to 6X faster in the well-known AlexNet CNN performance benchmark and up to 9X faster in the Inception V3 CNN performance benchmark
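The 1TMAC/sec headline figure follows directly from the MAC count once a clock rate is assumed. As a rough sanity check (the release does not state a clock frequency; ~1 GHz is assumed here purely for illustration, as is typical for embedded DSP IP of this class):

```python
# Back-of-the-envelope check of the quoted peak throughput.
# Assumption (not stated in the release): a ~1 GHz clock.

def peak_macs_per_second(mac_units: int, clock_hz: float) -> float:
    """Peak multiply-accumulate throughput: one MAC per unit per cycle."""
    return mac_units * clock_hz

# 1024 8-bit MACs and 512 16-bit MACs, per the feature list above.
peak_8bit = peak_macs_per_second(1024, 1.0e9)
peak_16bit = peak_macs_per_second(512, 1.0e9)

print(f"8-bit peak:  {peak_8bit / 1e12:.3f} TMAC/s")   # ~1 TMAC/s
print(f"16-bit peak: {peak_16bit / 1e12:.3f} TMAC/s")  # ~0.5 TMAC/s
```

At an assumed 1 GHz, the 1024 8-bit MAC units yield roughly 1.024 TMAC/s, consistent with the quoted 1TMAC/sec capacity; 16-bit operation halves the MAC count and thus the peak rate.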
The Vision C5 DSP is a flexible and future-proof solution that supports variable kernel sizes, depths and input dimensions. It also accommodates several different coefficient compression/decompression techniques, and support for new layers can be added as they evolve. In contrast, hardware accelerators are rigid solutions with limited reprogrammability.
The Vision C5 DSP also comes with the Cadence neural network mapper toolset, which will map any neural network trained with tools such as Caffe and TensorFlow into executable and highly optimized code for the Vision C5 DSP, leveraging a comprehensive set of hand-optimized neural network library functions.
Active engagements with select early customers are currently underway. Customers interested in the Vision C5 DSP should contact their Cadence sales representative.
About Cadence
Cadence enables electronic systems and semiconductor companies to create the innovative end products that are transforming the way people live, work and play. Cadence software, hardware and semiconductor IP are used by customers to deliver products to market faster. The company’s System Design Enablement strategy helps customers develop differentiated products—from chips to boards to systems—in mobile, consumer, cloud datacenter, automotive, aerospace, IoT, industrial and other market segments. Cadence is listed as one of Fortune Magazine’s 100 Best Companies to Work For. Learn more at cadence.com.