Calling cars ‘autonomous’ is dangerous, say industry experts


Assistance technology

Drivers could be too trusting of assist technology such as lane-keep assist

ABI and Thatcham want car makers to stop labelling assistance technology as autonomous

Car manufacturers that describe driver assistance technology as ‘autonomous’ are endangering motorists, the Association of British Insurers and Thatcham Research said.

The two organisations believe the use of the word ‘autonomous’ can lead to drivers “over-relying on technology”, potentially stopping them from reacting in time to prevent otherwise avoidable accidents.

Thatcham has produced a report entitled ‘Assisted and Automated Driving Definition and Assessment’ in which it identifies “dangerous grey areas associated with some driver support technologies”. The report cites terms such as Autopilot and ProPilot, used by Tesla and Nissan respectively for their driver assist technology, as potentially misleading drivers into believing their car can take full control in all circumstances.

Head of research at Thatcham, Matthew Avery, said: “We are starting to see real-life examples of the hazardous situations that occur when motorists expect the car to drive and function on its own. Specifically, where the technology is taking ownership of more and more of the driving task, but the motorist may not be sufficiently aware that they are still required to take back control in problematic circumstances.”

Although driver assistance technology has come on leaps and bounds in recent months, with models such as the Mercedes-Benz S-Class capable of taking control of the accelerator and brakes while steering autonomously for periods of up to 30 seconds, Avery said that “fully automated vehicles that can own the driving task from A to B, with no need for driver involvement whatsoever, won’t be available for many years to come”.

He added: “Until then, drivers remain criminally liable for the safe use of their cars and, as such, the capability of current road vehicle technologies must not be oversold.”

Most manufacturers are currently working on Level 2 (defined as ‘hands off’), Level 3 (‘eyes off’) or Level 4 (‘mind off’) autonomy. Toyota Research Institute boss Gill Pratt recently told Autocar that the nuanced differences between the three levels are clouding the development race.

“It’s extremely important not to confuse the levels of driving with a gauge of where different companies are,” he said. “Level 4 autonomy depends on where you’re doing it [due to the need for cars to communicate with sensors], so you have to go a stage deeper.”

Tesla was the first major car maker to come under fire when a fatal accident involving a Model S was linked to a driver mistaking its Autopilot system for fully autonomous technology back in 2016. More recently, several accidents involving cars with autonomous technology have occurred, including an incident in which an Uber test car hit and killed a pedestrian in Arizona, US.

To reduce the chances of a similar incident happening in Britain, Thatcham said it will test the autonomous technology fitted to production cars and give consumers guidelines on the level of capability each offers.

“The ambition is to keep people safe and ensure that drivers do not cede more control over their vehicles than the manufacturer intended,” said Avery. “How car makers name assisted systems will be a key focus, with any premature inference around automated capabilities being marked down. Automated functions that allow the driver to do other things and let the car do the driving will come — just not yet.”


Source: Autocar