AI Startup Deep Vision Powers AI Innovation at the Edge
LOS ALTOS, Calif., November 19, 2020 – Deep Vision has exited stealth mode and launched its ARA-1 inference processor to enable a new generation of AI vision applications at the edge. The processor provides an optimal balance of compute, memory, energy efficiency (2 W typical), and ultra-low latency in a compact form factor, making it a strong fit for endpoints such as cameras and sensors, as well as edge servers, where high compute, model flexibility, and energy efficiency are paramount.
“Today’s complex AI workloads require not only low power but also low latency to deliver real-time intelligence at the edge,” said Ravi Annavajjhala, CEO of Deep Vision. “No more making tradeoffs between performance and efficiency. Developers now have access to higher accuracy outcomes and rich data insights, all on one processor.”
Groundbreaking High-Efficiency Architecture
Deep learning models are growing in complexity and driving increased compute demand for AI at the edge. The Deep Vision ARA-1 processor is based on a patented Polymorphic Dataflow Architecture, which handles varied dataflows to minimize on-chip data movement. The architecture supports instructions within each neural network model, allowing any dataflow pattern in a deep learning model to be mapped optimally. Keeping data close to the compute engines minimizes data movement, ensuring high inference throughput, low latency, and greater power efficiency. The compiler automatically evaluates multiple dataflow patterns for each layer in a neural network and chooses the pattern with the highest performance and lowest power.
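As a rough illustration of that per-layer selection, the Python sketch below shows a compiler-style cost-model search choosing a dataflow pattern for a single layer. The pattern names, cost figures, and selection rule are hypothetical stand-ins, not Deep Vision's actual compiler internals.

```python
# Hypothetical sketch: pick a dataflow pattern for one layer from a set of
# candidates using a simple cost model. Names and numbers are illustrative
# only and do not reflect Deep Vision's compiler.
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    pattern: str        # e.g. "weight-stationary", "output-stationary"
    latency_us: float   # estimated layer execution time
    energy_uj: float    # estimated layer energy (proxy for data movement)

def pick_dataflow(candidates: List[Candidate]) -> Candidate:
    # Prefer the lowest-latency pattern; break ties on energy so that
    # on-chip data movement (and power) stays low.
    return min(candidates, key=lambda c: (c.latency_us, c.energy_uj))

layer_candidates = [
    Candidate("weight-stationary", latency_us=120.0, energy_uj=35.0),
    Candidate("output-stationary", latency_us=110.0, energy_uj=40.0),
    Candidate("row-stationary",    latency_us=110.0, energy_uj=32.0),
]
best = pick_dataflow(layer_candidates)
print(f"selected {best.pattern}: {best.latency_us} us, {best.energy_uj} uJ")
```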
With simultaneous multi-model processing, the Deep Vision ARA-1 processor can also run multiple models effectively without a performance penalty, generating results faster and more accurately. It consumes less system power than the Edge TPU and the Movidius Myriad X, and runs deep learning models such as ResNet-50 with 6x lower latency than the Edge TPU and 4x lower latency than the Myriad X.
Software-Centric Approach Breaks Down Complexity Barriers
Deep Vision’s software development kit (SDK) and hardware are tightly integrated to work seamlessly together, ensuring optimal model accuracy with the lowest power consumption. With a built-in quantizer, simulator, and profiler, developers have all the tools they need to design and execute computationally complex inference applications. Migrating models to production without extensive code development has historically been challenging; Deep Vision’s SDK provides a frictionless, low-code, automated workflow that takes a trained model all the way to a production application. The SDK cuts expensive development time, dramatically increasing productivity and reducing overall time to market.
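To give a sense of what such a low-code workflow looks like, the self-contained Python sketch below walks a trained model through quantization, profiling, and deployment. Every function in it is a hypothetical placeholder chosen for illustration; it is not Deep Vision's actual SDK API.

```python
# Hypothetical, self-contained sketch of a low-code quantize -> profile ->
# deploy flow. None of these functions are Deep Vision's real API; they are
# placeholders that show the shape of the workflow.

def quantize(model_path: str, calibration_dir: str) -> dict:
    # Stand-in for the built-in quantizer: converts a trained FP32 model to
    # a fixed-point representation using calibration images.
    return {"model": model_path, "precision": "int8", "calib": calibration_dir}

def profile(quantized_model: dict) -> dict:
    # Stand-in for the simulator/profiler: estimates latency and power
    # before running on silicon. The numbers below are placeholders.
    return {"latency_ms": 3.0, "power_w": 2.0}

def deploy(quantized_model: dict, target: str) -> None:
    # Stand-in for compiling and loading the model onto the accelerator.
    print(f"deploying {quantized_model['model']} "
          f"({quantized_model['precision']}) to {target}")

qmodel = quantize("resnet50.onnx", calibration_dir="calib_images/")
report = profile(qmodel)
print(f"estimated latency: {report['latency_ms']} ms, power: {report['power_w']} W")
deploy(qmodel, target="ara-1")
```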
Paving the Path for New Markets
The Deep Vision ARA-1 processors are designed to accelerate neural network performance for smart retail, robotics, industrial automation, smart cities, autonomous vehicles, and more. Deep Vision is currently running proofs of concept with customers across a variety of these industries.
Pricing and Availability
The processor offers developers great flexibility in hardware integration, with three form factors including high-speed USB and PCIe interface options. The Deep Vision ARA-1 processors are now shipping. For pricing and availability, please contact sales@deepvision.io.
About Deep Vision:
Founded by Dr. Rehan Hameed and Dr. Wajahat Qadeer in 2015, Deep Vision enables rich data insights to better optimize real-time actions at the edge. Our AI inference solutions deliver the optimum balance of compute, memory, low latency, and energy efficiency for the demands of today’s latency-sensitive AI-based applications. Deep Vision has raised $19 million and is backed by multiple investors, including Silicon Motion, Western Digital, Stanford, Exfinity Ventures, and Sinovation Ventures. www.deepvision.io