Cadence Collaboration with Kudan and Visionary.ai Enables Rapid Deployment of VSLAM and AI ISP-Based Solutions
Do you ever feel disoriented when navigating a new environment, especially in less-than-optimal lighting?
I do, and the problem gets worse when the light is poor and GPS connectivity is weak! In those situations, I fall back on landmarks to find my way to my destination.
Robots and self-driving vehicles do much the same thing: they use simultaneous localization and mapping (SLAM) and digital imaging to navigate unknown environments, even in the worst lighting conditions. SLAM and digital imaging act as embedded vision for these machines, building a map of an unfamiliar environment (especially indoors) while simultaneously navigating through it. Such systems combine several kinds of sensors with computer vision algorithms for SLAM, 3D object detection, tracking, and trajectory estimation. These computational imaging algorithms are highly complex, demand extensive compute resources, and can drive up latency and power consumption. Digital imaging faces its own hurdles: low light, high/wide dynamic range (HDR/WDR) scenes, and fast-moving objects remain the most significant barriers to clear, crisp images.

To address these challenges, Cadence has brought Kudan and Visionary.ai into its Tensilica software partner ecosystem, adding SLAM and an AI image signal processor (ISP) to its processor cores. The collaboration strengthens the Tensilica vision and AI software partner ecosystem and targets best-in-class performance across segments such as advanced automotive, mobile, consumer and IoT, and drones. The early results speak for themselves: the Tensilica Vision Q7 DSP delivered a nearly 15% speedup of Kudan's proprietary SLAM pipeline compared with a CPU-based implementation, while the Tensilica NNA110 accelerator lets customers run a camera pipeline at better-than-full-HD resolution at over 30fps.
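For readers new to the concept, here is a minimal, illustrative Python sketch of the SLAM loop described above: dead-reckon the robot's pose from odometry, then use landmark re-observations to correct accumulated drift while refining the map. The `TinySlam2D` class, its `gain` parameter, and the landmark names are purely hypothetical; this is not Kudan's implementation, which uses far more sophisticated pipelines optimized for the Tensilica Vision Q7 DSP.

```python
# Conceptual sketch only: localization and mapping in one loop.
import math

class TinySlam2D:
    def __init__(self):
        self.x, self.y, self.theta = 0.0, 0.0, 0.0   # estimated robot pose
        self.landmarks = {}                           # id -> (x, y) map estimate

    def predict(self, distance, turn):
        """Dead-reckoning step: integrate odometry (drifts over time)."""
        self.theta += turn
        self.x += distance * math.cos(self.theta)
        self.y += distance * math.sin(self.theta)

    def observe(self, landmark_id, rel_x, rel_y, gain=0.5):
        """Correction step: a landmark seen at (rel_x, rel_y) in the robot frame."""
        # Where this observation places the landmark in world coordinates.
        lx = self.x + rel_x * math.cos(self.theta) - rel_y * math.sin(self.theta)
        ly = self.y + rel_x * math.sin(self.theta) + rel_y * math.cos(self.theta)
        if landmark_id not in self.landmarks:
            # First sighting: add the landmark to the map.
            self.landmarks[landmark_id] = (lx, ly)
            return
        # Re-sighting: nudge the pose toward the stored map position
        # (localization) and the map toward the new observation (mapping).
        mx, my = self.landmarks[landmark_id]
        self.x += gain * (mx - lx)
        self.y += gain * (my - ly)
        self.landmarks[landmark_id] = (mx + (1 - gain) * (lx - mx),
                                       my + (1 - gain) * (ly - my))

if __name__ == "__main__":
    slam = TinySlam2D()
    slam.predict(1.0, 0.0)
    slam.observe("door", rel_x=2.0, rel_y=0.0)     # map the landmark
    slam.predict(1.0, 0.05)                        # odometry with a little drift
    slam.observe("door", rel_x=1.0, rel_y=-0.05)   # re-observation corrects the pose
    print(f"pose: ({slam.x:.2f}, {slam.y:.2f}), map: {slam.landmarks}")
```

As a side note on the ISP figure quoted above, full HD at 30fps already implies a sustained rate of roughly 1920 × 1080 × 30 ≈ 62 million pixels per second through the camera pipeline, which is why a dedicated accelerator such as the NNA110 matters for this workload.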
How Do Robots/Self-Driving Vehicles Navigate? And Why Do We Need AI-Based Image Signal Processors (ISPs)?