Key Safety Design Overview in AI-driven Autonomous Vehicles

By Vikas Vyas 1, Zheyuan Xu 2
1 Autonomous Driving Department, Mercedes-Benz Research and Development North America, Sunnyvale, CA, USA
2 Independent Researcher, University of Washington, Sunnyvale, CA, USA 

Abstract

With the increasing presence of SAE Level 3 and Level 4 autonomous vehicles, which incorporate artificial intelligence software, and the complex technical challenges they present, it is essential to maintain a high level of functional safety and robust software design. This paper explores the safety architecture and systematic approach required for automotive software and hardware, including fail-soft handling at Automotive Safety Integrity Level (ASIL) D, the highest level of safety integrity, and the integration of artificial intelligence (AI) and machine learning (ML) into automotive safety architectures. To address the unique challenges posed by increasingly AI-based automotive software, we propose techniques such as mitigation strategies and safety failure analysis to ensure the safety and reliability of automotive software, and we examine the role of AI in software reliability throughout the data lifecycle.

Index Terms: Safety Design, Automotive Software, Performance Evaluation, Advanced Driver Assistance Systems (ADAS) Applications, Automotive Software Systems, Electronic Control Units.

I. INTRODUCTION

The automotive industry continues to change due to self-driving technologies and increasingly complex software systems. The growth of vehicle automation with minimal driver intervention creates a critical need for safety guidelines and standardized implementation. This paper focuses mainly on the safety design features of complex automotive software; at the SAE levels considered here, the vehicle can perform most of the driving task but still requires driver intervention in many scenarios [1].

The continued increase in autonomous vehicle features adds complexity and raises concerns about compliance with safety standards and guidelines. The growing number of artificial intelligence algorithms used for ADAS functionality, such as mapping, perception, and sensor functions, creates further issues around performance, predictability, and reliability, and affects the ability to handle unexpected situations [2].

This paper covers current research on key safety design in electric and autonomous vehicles, together with the associated safety strategies and processes. It also addresses the most important issues and the preventive actions used to overcome them, such as driver intervention performance [1], the fail-operational approach [6], [7], and the role of artificial intelligence in safety-critical systems [2], [4]. In addition, it examines the safety standards and guidelines, as well as the development, verification, and validation approaches required to meet the requirements of autonomous driving systems in battery-electric vehicles (BEVs) [5], [9], [12].
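As a brief, purely illustrative aid (not taken from the referenced works), the sketch below shows one way a fail-operational degradation policy can be expressed as a small state machine: when the primary channel fails, the system falls back to a redundant channel in a degraded mode, and it requests a minimal risk maneuver only when no healthy channel remains. The state names, the HealthReport fields, and the transition rules are hypothetical simplifications, not the architecture proposed in this paper.

// Illustrative sketch of a fail-operational degradation state machine.
// All identifiers (DriveState, HealthReport, nextState) are hypothetical.
#include <cstdint>
#include <iostream>

enum class DriveState : std::uint8_t {
    FullOperation,       // primary channel healthy, full feature set
    DegradedOperation,   // redundant channel active, reduced speed/features
    MinimalRiskManeuver  // bring the vehicle to a safe stop
};

struct HealthReport {
    bool primary_channel_ok;
    bool redundant_channel_ok;
};

// Decide the next operating state from the current state and a health report.
DriveState nextState(DriveState current, const HealthReport& h) {
    // Once a minimal risk maneuver has started, do not re-enter automated driving.
    if (current == DriveState::MinimalRiskManeuver) {
        return current;
    }
    if (h.primary_channel_ok) {
        return DriveState::FullOperation;
    }
    if (h.redundant_channel_ok) {
        // Primary channel failed but redundancy is available: continue degraded.
        return DriveState::DegradedOperation;
    }
    // No healthy channel left: transition to a minimal risk maneuver.
    return DriveState::MinimalRiskManeuver;
}

int main() {
    DriveState state = DriveState::FullOperation;
    HealthReport report{false, true};  // primary failed, redundant channel healthy
    state = nextState(state, report);
    std::cout << "state = " << static_cast<int>(state) << '\n';  // 1: DegradedOperation
    return 0;
}

Even in this simplified form, the sketch highlights the design intent of a fail-operational approach: a single fault degrades functionality rather than terminating it, and the system exits automation only through a controlled minimal risk maneuver.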

The paper is organized as follows: Section II provides comprehensive details of the safety standards and regulations to be implemented in electric vehicle systems (hardware and software). Section III covers driver intervention performance and its role in complying with safety system state transitions. Sections IV and V review the fail-operational and fail-safe methodologies to be implemented in complex automotive software systems. Section VI covers testing, validation, and verification methods for guaranteeing safety. Finally, Section VII summarizes the paper and suggests future research directions.
