Cadence Collaboration with Kudan and Visionary.ai Enables Rapid Deployment of VSLAM and AI ISP-Based Solutions
Do you get confused navigating new environments, especially in poor lighting? I do, and the problem only worsens when the light is lousy and GPS connectivity is weak! That's when I try to find my destination by looking for landmarks.
Likewise, robots and self-driving vehicles rely on simultaneous localization and mapping (SLAM) and digital imaging to navigate unknown environments, even in the worst lighting conditions. Together, SLAM and digital imaging act as embedded vision for robots and self-driving vehicles: the system builds a map of an unknown environment (especially while working indoors) as it navigates through it (a toy sketch of this idea appears below). These solutions combine different kinds of sensors with computer vision algorithms for SLAM, 3D object detection, tracking, and trajectory estimation. Such computational imaging algorithms are very complex, demand extensive computational resources, and can drive up latency and power consumption.

Digital imaging is another major trend plagued by challenges such as low light, high/wide dynamic range (HDR/WDR) scenes, and fast-moving objects. These are the most significant barriers to clear, crisp imaging.

To best address these challenges, Cadence has added Kudan and Visionary.ai to its Tensilica software partner ecosystem, bringing SLAM and an AI image signal processor (ISP) to its processor cores. The partnership targets best-in-class performance across segments such as advanced automotive, mobile, consumer and IoT, and drones. The proof is in the pudding: Cadence strengthens its Tensilica vision and AI software partner ecosystem for advanced automotive, mobile, consumer, and IoT applications. The Tensilica Vision Q7 DSP achieved a nearly 15% speedup of Kudan's proprietary SLAM pipeline compared to a CPU-based implementation, while the Tensilica NNA110 accelerator helps customers implement a camera pipeline at better-than-full-HD resolution at over 30fps.
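To make the "localize while mapping" idea concrete, here is a minimal, purely illustrative Python sketch. It is not Kudan's pipeline and uses no Tensilica API; the ToySlam class, its dead-reckoning motion model, and the running-average landmark update are simplifications invented for this example. Real visual SLAM tracks hundreds of image features per frame, which is exactly the compute load the DSP offloads.

```python
# Toy SLAM loop: integrate odometry to track the robot's own pose
# (localization) while estimating landmark positions from range/bearing
# observations (mapping). Didactic sketch only, not a production pipeline.
import math

class ToySlam:
    def __init__(self):
        self.x, self.y, self.theta = 0.0, 0.0, 0.0  # estimated robot pose
        self.landmarks = {}  # landmark id -> (x, y, observation count)

    def move(self, distance, turn):
        """Dead-reckon the pose from odometry (localization step)."""
        self.theta += turn
        self.x += distance * math.cos(self.theta)
        self.y += distance * math.sin(self.theta)

    def observe(self, lid, rng, bearing):
        """Fold a range/bearing observation into the map (mapping step)."""
        lx = self.x + rng * math.cos(self.theta + bearing)
        ly = self.y + rng * math.sin(self.theta + bearing)
        if lid in self.landmarks:
            px, py, n = self.landmarks[lid]
            # Running average smooths noisy repeated observations.
            self.landmarks[lid] = ((px * n + lx) / (n + 1),
                                   (py * n + ly) / (n + 1), n + 1)
        else:
            self.landmarks[lid] = (lx, ly, 1)

slam = ToySlam()
slam.move(1.0, 0.0)                # drive 1 m straight ahead
slam.observe("door", 2.0, 0.5)     # landmark 2 m away, 0.5 rad to the left
slam.move(1.0, math.pi / 2)        # turn and drive again
slam.observe("door", 1.3, -0.4)    # re-observation refines the map estimate
print(slam.x, slam.y, slam.landmarks["door"])
```

Even in this toy form, every observation touches trigonometry and per-landmark state; scale that to dense image features at 30fps and the motivation for running the pipeline on a vision DSP rather than a CPU becomes clear.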
How Do Robots/Self-Driving Vehicles Navigate? And Why Do We Need AI-Based Image Signal Processors (ISPs)?