Driver assistance systems & Self-driving cars: technical issues and ethical dilemmas

The car accident which on May 7 caused the death of former Navy SEAL Joshua Brown is the first known fatal crash involving a Tesla Model S with the semi-automatic guidance system “Autopilot” activated. The NHTSA, the US road-safety authority, has opened an investigation to verify the functioning of the software, which did not “see” a truck crossing the road and, consequently, did not apply the car’s brakes. Despite the general bewilderment and some journalistic simplifications, it is worth clarifying the differences between driver assistance systems (such as the Tesla Autopilot) and true self-driving cars.

Tesla and the Autopilot

Driver assistance systems are not automatic pilots: they are software systems that simply allow drivers to drive better and more safely. The NHTSA classifies them into five levels of complexity. Most of them merely recognize road markings and sudden obstacles. Others, through dedicated sensors and control of the accelerator, brakes and steering, are able to keep or change the car’s lane autonomously, brake, vary the speed, and enter and exit a parking spot (“summon”). The Tesla Autopilot system includes all of these features and is ranked at level 2. Besides Tesla, currently only Mercedes-Benz offers advanced semi-autonomous driving systems.
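The five-level scale referred to above is NHTSA’s original classification, which runs from level 0 (no automation) to level 4 (full self-driving). A minimal sketch of that scale as a Python enum — the level names are paraphrased from NHTSA’s preliminary policy, not official identifiers:

```python
from enum import IntEnum

class NhtsaLevel(IntEnum):
    """NHTSA's five-level vehicle-automation scale (names paraphrased)."""
    NO_AUTOMATION = 0          # the driver does everything
    FUNCTION_SPECIFIC = 1      # a single function is automated (e.g. cruise control)
    COMBINED_FUNCTION = 2      # two or more functions work together (e.g. steering + speed)
    LIMITED_SELF_DRIVING = 3   # the car drives itself, but the driver must be ready to take over
    FULL_SELF_DRIVING = 4      # the car performs the entire trip unaided

# As described above, Tesla's Autopilot sits at level 2:
autopilot_level = NhtsaLevel.COMBINED_FUNCTION
```

The key point of the scale is that everything up to level 2 still assumes an attentive human driver; only levels 3 and 4 shift real responsibility to the machine.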


The accident that claimed the life of Joshua Brown was the result of a combination of human and technological errors. The human error was giving up control of the vehicle to the Autopilot. It even seems that Brown was watching a DVD while in the car – a practice that is obviously prohibited: Tesla itself recommends never taking one’s hands off the steering wheel. In short, despite its name, the “Autopilot” is NOT an autopilot, and it does not allow the driver to ignore the road. The technological error lay in Tesla’s automatic detection systems: the trailer of the truck that killed Brown was the same color as the sky, and this misled the recognition software. But, as we said, if Brown had been at the wheel, the accident probably would not have killed him.

The Self-Driving Cars

Unlike advanced driver assistance systems, autonomous cars are still in a long process of study and experimentation, and they aim to provide fully autonomous vehicles, able to take us to our destination without the aid of a driver. Many manufacturers are experimenting with driverless-car prototypes, but the company that is today at the forefront of self-driving vehicle development is Google. In this case too, the system is based on a network of sensors that allow the car to figure out where it is, where to go and, above all, how to behave in traffic or at a stoplight. Google’s prototype cars use LIDAR (Light Detection and Ranging) technology, which uses lasers to map the surroundings and prevent crashes. The laser light sent from the sensors reflects off people and objects and returns, giving precise information on the location of those elements. The onboard computer then uses “premonition algorithms” to predict the movements of people and other vehicles.
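The two ideas in that paragraph — ranging by reflected laser light and predicting where things will move — can be sketched in a few lines. This is an illustrative simplification, not Google’s actual software: the function names are hypothetical, and the prediction shown is the simplest possible constant-velocity extrapolation.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_echo(round_trip_seconds: float) -> float:
    """Time-of-flight ranging: the laser pulse travels out and back,
    so the one-way distance is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def polar_to_xy(distance_m: float, bearing_deg: float) -> tuple[float, float]:
    """Convert a single range/bearing return into map coordinates
    relative to the sensor, building up a picture of the surroundings."""
    theta = math.radians(bearing_deg)
    return (distance_m * math.cos(theta), distance_m * math.sin(theta))

def predict_position(pos: tuple[float, float],
                     velocity: tuple[float, float],
                     dt_seconds: float) -> tuple[float, float]:
    """Naive 'premonition': assume the tracked object keeps its
    current velocity for the next dt_seconds."""
    return (pos[0] + velocity[0] * dt_seconds,
            pos[1] + velocity[1] * dt_seconds)

# A pulse that returns after one microsecond hit something ~150 m away.
d = distance_from_echo(1e-6)
# A pedestrian 10 m ahead, walking 1.5 m/s to the right, half a second from now:
future = predict_position((10.0, 0.0), (0.0, 1.5), 0.5)
```

Real systems replace the constant-velocity guess with far richer motion models, but the pipeline — range, locate, predict — is the same shape.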


Road tests of Google’s cars began last year in California: the prototypes in circulation do not exceed 40 kilometers per hour and always have at least one person on board, who can intervene if something goes wrong. Google says it hopes to put autonomous vehicles on the market by 2019. Before then, however, many safety issues remain to be resolved. The onboard computer will have to learn to prevent accidents intelligently and in a “human” way, avoiding harm to people and property and limiting collateral damage. A classic example is the car that, to avoid a pedestrian, causes a serious accident involving other vehicles or even the death of its own passengers: should autonomous cars aim to save as many lives as possible in a crash, even if that means sacrificing the passenger? As we can imagine, the issue is not just about road safety but also about artificial intelligence, ethics, freedom and individual responsibility. The debate has only just begun, and we would bet that it will take many years to reach a shared conclusion.