Cars are undergoing an evolution, moving from an electro-mechanical device under the control of a human driver to a completely autonomous vehicle. Today we are getting close to the tipping point, with most new cars equipped with ADAS (Advanced Driver Assistance Systems), such as lane tracking, autonomous emergency braking, enhanced vision systems and more, while experimental fully autonomous cars are racking up millions of miles of test driving.
The systems that provide these functions are built from sensors, actuators, radar and lidar systems, communicating through networks and controlled by microcontrollers, so one description of a car is an internet on wheels. Cars also communicate with other cars (Vehicle to Vehicle communication, or V to V), with the infrastructure – traffic lights and road signs (V to I) – and with satellites for navigation and reporting. Underneath all this is, of course, software – more than 100 million lines of code. As well as the code for the applications, there are operating systems and middleware, such as network communication stacks and interfaces to the sensors, actuators and the driver's display.
With this increased complexity, security and safety have become pressing concerns. With the growth of V to X communication, cars become open to outside attack: a third party has already taken remote control of a Jeep, overriding the driver.
A further vulnerability can be added by the car user. Car manufacturers all use On-Board Diagnostics (OBD) to monitor various engine parameters, for fault finding and for diagnostics at servicing. The connector interface, OBD II, is publicly documented, and if you Google OBD2 you will find a mass of Bluetooth OBD connectors that let a driver monitor engine health on a cell phone. This could also open the engine control system to an attacker. A recent paper from the University of Michigan describes using a direct laptop connection to the OBD port to override driver instructions on a large truck and on a school bus.
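Part of what makes this interface so accessible is that the data formats are public. Decoding standard OBD II data needs nothing more than the scaling formulas published in SAE J1979; the sketch below (the function names are my own, but the formulas are the published ones) decodes two common Mode 01 responses:

```c
#include <stdint.h>

/* Decode engine speed from an SAE J1979 Mode 01, PID 0x0C response.
 * The two data bytes A and B encode RPM as ((A * 256) + B) / 4. */
static uint16_t obd_decode_rpm(uint8_t a, uint8_t b)
{
    return (uint16_t)((((uint16_t)a * 256u) + (uint16_t)b) / 4u);
}

/* Decode vehicle speed from a Mode 01, PID 0x0D response:
 * a single byte, directly in km/h. */
static uint8_t obd_decode_speed_kmh(uint8_t a)
{
    return a;
}
```

Anyone with a cheap Bluetooth dongle and these formulas can read the engine's state; the security question is what else the same connector allows them to write.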
With such large quantities of code, safety is also critical. The Toyota unintended acceleration court case demonstrated that much legacy code is not of a high standard. New code must be developed to a much higher standard.
It was only five years ago that a specific safety standard for cars was issued. ISO 26262 is an adaptation of the IEC 61508 functional safety standard that focuses on the needs of electrical and electronic systems installed in series-production passenger cars, and applies to all activities within the safety lifecycle of these safety-related systems. This includes requirements on the quality of software.
The standard uses Automotive Safety Integrity Levels (ASILs) to provide a measure of the risk associated with a sub-system. They range from A to D, where A is the lowest safety integrity level and D the highest – that is, the strictest, with the most requirements. In addition to these ASILs, the class QM (quality management) denotes no requirement to comply with ISO 26262, leaving it to the discretion of the development organisation to warrant quality. Three parameters – severity of risk, probability of exposure and controllability – determine the ASIL.
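The mapping from these three parameters to an ASIL can be sketched in code. A commonly cited shortcut – summing the numeric classes (severity S1–S3, exposure E1–E4, controllability C1–C3) and reading ASIL D off a total of 10, C off 9, B off 8, A off 7 and QM off anything lower – reproduces the classification table in ISO 26262-3. This is illustrative only; a real hazard assessment works from the standard's table directly:

```c
typedef enum { ASIL_QM = 0, ASIL_A, ASIL_B, ASIL_C, ASIL_D } asil_t;

/* severity 1..3 (S1..S3), exposure 1..4 (E1..E4),
 * controllability 1..3 (C1..C3). The additive shortcut below
 * reproduces the ISO 26262-3 classification table. */
static asil_t asil_classify(int severity, int exposure, int controllability)
{
    int total = severity + exposure + controllability;
    asil_t level;
    if (total >= 10)     { level = ASIL_D;  }
    else if (total == 9) { level = ASIL_C;  }
    else if (total == 8) { level = ASIL_B;  }
    else if (total == 7) { level = ASIL_A;  }
    else                 { level = ASIL_QM; }
    return level;
}
```

Only the worst combination – life-threatening severity, high exposure, difficult to control (S3, E4, C3) – yields ASIL D, which is why the controllability assessment discussed next matters so much.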
The controllability parameter requires special attention. It is assumed the driver is in an appropriate condition to drive, has the appropriate driver training (a driver's licence) and is complying with all applicable legal regulations, including due care requirements to avoid risks to other traffic participants.
Laws will need adapting so when an automated driving system is in operation the driver will not have to pay attention unless the system asks for driver intervention. Correct operation of driver notification and fall-back to human control is crucial. If the notification fails, the human driver may not be paying attention and won’t be able to avoid harm, as may have happened with the recent Tesla accident. If the fall-back fails, the system may stay in control instead of allowing the driver to intervene and avoid harm. Such situations must always be assigned the highest controllability class (C3), meaning less than ninety per cent of all drivers or other traffic participants are usually able, or barely able, to avoid harm.
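The notification-and-fall-back logic described above can be sketched as a small state machine: while in automated mode the system may request a takeover, and if the driver does not confirm within a deadline it must move to a minimal-risk condition (such as a controlled stop) rather than silently retain control. The state names and the deadline value here are illustrative, not taken from any standard:

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum {
    MODE_AUTOMATED,          /* system driving, no takeover pending   */
    MODE_TAKEOVER_REQUESTED, /* driver has been asked to intervene    */
    MODE_DRIVER,             /* driver has confirmed and has control  */
    MODE_MINIMAL_RISK        /* fall-back: e.g. a controlled stop     */
} drive_mode_t;

typedef struct {
    drive_mode_t mode;
    uint32_t     request_time_ms; /* when the takeover was requested */
} fallback_t;

#define TAKEOVER_DEADLINE_MS 10000u /* illustrative value */

/* Called periodically. If the driver has not confirmed takeover
 * before the deadline expires, fall back to a minimal-risk
 * condition instead of staying in automated control. */
static void fallback_step(fallback_t *fb, uint32_t now_ms,
                          bool driver_confirmed)
{
    if (fb->mode == MODE_TAKEOVER_REQUESTED) {
        if (driver_confirmed) {
            fb->mode = MODE_DRIVER;
        } else if ((now_ms - fb->request_time_ms) >= TAKEOVER_DEADLINE_MS) {
            fb->mode = MODE_MINIMAL_RISK;
        }
    }
}
```

Both failure modes in the text map onto this sketch: a failed notification leaves the driver unaware while the deadline runs out, and a failed fall-back is a transition to MODE_MINIMAL_RISK that never happens.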
Part 6 of ISO 26262 is devoted to the software development process needed to produce code that is reliable enough, when running in a system, to meet the required ASIL.
The Society of Automotive Engineers (SAE) standard J3016 breaks driving automation into six classes, from no automation to fully automatic. Automated driving systems, defined as SAE level three or higher, rely on software to gather data from sensors, to create a model of the environment and then, based on the goal, to decide how to assist the driver or control the vehicle. The software also has other critical tasks, such as determining whether sensors are functioning correctly, when to alert the driver and when to trigger fall-back to human control.
It is vital that this software behaves reliably. Other software tasks, such as modelling the sensor data, may be less critical, but even these will need risk analysis.
Traffic laws will need to change to accommodate automated driving systems, particularly in the area of liability and privacy. Each country has its own traffic laws and there are legislative initiatives in many jurisdictions.
At the national level in the USA, the National Highway Traffic Safety Administration has proposed a formal classification system of five levels, ranging from the driver completely controlling the vehicle at all times to the vehicle performing all safety-critical functions for the entire trip, with the driver not expected to control the vehicle at any time.
Individual states vary in their approach: Nevada was the first state to authorise the operation of autonomous vehicles, to test autonomous driving technology on public roads, in 2011, followed by California, Florida, Michigan, North Dakota, Tennessee and Washington DC.
A European research project, Automated Driving Applications & Technologies for Intelligent Vehicles, began in January 2014. It is developing various automated driving functions for daily traffic by dynamically adapting the level of automation to the situation and driver status. The project also addresses legal issues that might impact successful market introduction.
Vehicle & Road Automation (VRA) is a support action funded by the European Union to create a collaboration network of experts and stakeholders working on the deployment of automated vehicles and their related infrastructure. VRA partners with some OEMs and suppliers, but most partners are research institutes and universities. VRA has identified a list of legal and regulatory issues in the EU.
Volkswagen has appealed for collective European legal action, including progressive amendment of ECE Regulation 79 (also a UN rule) on steering equipment, which demands that the driver can override the function at any time and remains in primary control.
The Japanese government plans to develop laws to govern use of driverless cars. The government also created a classification of automated driving into four classes, including one for completely autonomous driving.
In China, Baidu (often called China’s Google) is also working on a self-driving car with BMW. China’s legislation is quite flexible so the government has more power to put the required changes in place. However, they will have to deal with the same complex issues as other countries.
India is also thinking about autonomous driving, but there are major challenges, among them slow-moving legislation and the difficulty of imposing the expected rules on a very different road infrastructure.
In this context, how do you create code that is both safe and secure? As mentioned, ISO 26262 puts forward a process for software development, which includes use of coding standards and code checking tools.
System security starts with designing in features that will contribute to a secure result, such as: application separation – in particular, using firewalls to segregate safety-critical applications (such as steering and brakes) from less critical ones, especially those that communicate with the outside world (such as infotainment); limiting communication; checking and validating data that is communicated; and more.
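The "checking and validating" point deserves a concrete illustration. A safety-critical subsystem should range-check every field of a message arriving from a less-trusted domain before acting on it. The message layout, field names and limits below are entirely hypothetical; the pattern of rejecting implausible values at the boundary is the point:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* A hypothetical command received from a less-trusted domain,
 * e.g. over the network shared with infotainment. */
typedef struct {
    uint16_t target_speed_kmh;
    int16_t  steering_angle_deci_deg; /* tenths of a degree */
} external_cmd_t;

#define MAX_SPEED_KMH      130u  /* illustrative limit */
#define MAX_STEER_DECI_DEG 450   /* +/- 45.0 degrees, illustrative */

/* Reject any command whose fields fall outside a physically
 * plausible range before it may influence steering or braking. */
static bool validate_external_cmd(const external_cmd_t *cmd)
{
    if (cmd == NULL) {
        return false;
    }
    if (cmd->target_speed_kmh > MAX_SPEED_KMH) {
        return false;
    }
    if ((cmd->steering_angle_deci_deg > MAX_STEER_DECI_DEG) ||
        (cmd->steering_angle_deci_deg < -MAX_STEER_DECI_DEG)) {
        return false;
    }
    return true;
}
```

Validation of this kind is a last line of defence: it does not stop an intruder reaching the network, but it limits what a forged or corrupted message can make the vehicle do.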
As most software in this area is written in C, a good starting point for safe and secure code is MISRA C:2012 (MISRA 3). This provides a set of guidelines for writing C programs which, as well as avoiding undefined behaviour, includes rules that improve the maintainability, testability, portability and readability of the source code. There is also a large overlap between the MISRA rules and the ISO 26262-6 compliance tables, making MISRA a compelling choice when ISO 26262 compliance is required. MISRA has recently published Amendment 1 to MISRA 3, adding 14 new rules that extend still further MISRA's coverage of the development of secure systems.
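To see what avoiding undefined behaviour means in practice, consider signed integer arithmetic: overflow of a signed type is undefined behaviour in C, so a bare addition of values of uncertain range is exactly the kind of construct this style of coding replaces. A typical defensive alternative (my own sketch, not a prescribed MISRA pattern) checks the operands first and saturates instead of overflowing:

```c
#include <limits.h>
#include <stdint.h>

/* Signed overflow is undefined behaviour in C, so check the
 * operands before adding and saturate at the type limits
 * instead of computing an out-of-range result. */
static int32_t add_saturate_i32(int32_t a, int32_t b)
{
    int32_t result;
    if ((b > 0) && (a > (INT32_MAX - b))) {
        result = INT32_MAX;   /* would overflow upwards */
    } else if ((b < 0) && (a < (INT32_MIN - b))) {
        result = INT32_MIN;   /* would overflow downwards */
    } else {
        result = a + b;
    }
    return result;
}
```

Static analysis tools can flag the unguarded addition automatically, which is why coding standards and checking tools are treated as a pair.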
Tools are an important part of developing in accordance with ISO 26262. Static code analysis tools help manage code quality, providing both a quality control on the code and a measure of its adherence to coding standards such as MISRA. Test tools provide further confidence in the software, while verification tools measure how well the software does what the designer intended.
It is possible to develop safe and secure systems for vehicles, and organisations that have remodelled their development processes to conform to ISO 26262 have discovered that, after the initial introduction and learning phase, they are also reaping gains in productivity.
by Dr. Frank van den Beuken
Programming Research Ltd | www.programmingresearch.com