AI Apocalypse

Imagine you are in a car that drives and thinks for itself. You are sitting behind the wheel, not necessarily alert to the situation on the road. There is no reason to keep a vigilant eye out your window, because the self-driving car came with the promise that all you would have to do is lie back and enjoy the ride. Keeping the car company's promise, the vehicle moves smoothly, makes all the right turns, brakes at red lights and even changes lanes of its own accord.

However, it faces a dilemma when a little boy abruptly jumps into the road with his mother chasing after him. The car advances quickly. The vehicle is programmed to steer you right, away from the two pedestrians. But it detects a wall to your right and hesitates. "Uh-oh." At this point, there is no way to save everyone. The car's next move can save either one life or several. Does it sacrifice you and minimize the number of deaths? Or does it save the boy and his mother? The same conundrum would persist even if you were in charge of the wheel, but the car faces a harder problem: it is a mechanical device lacking the ability to think and judge ethically. Such dilemmas make self-driving cars problematic. Under the current legal system, who would be at fault? The car company? The owner? The pedestrian, for running into the street?

The driver was behind the wheel, but the autonomous car was the one that steered it and chose who got hurt. Technology giants such as Google and Tesla Motors, which are spearheading the development of self-driving cars, currently lack clear answers to such accidents despite their rapid advances. Tesla recently released its first automated driving system, called Autopilot, in its Model S electric sedans. As more people purchase such cars from companies like Tesla, the questions of accountability only multiply.

Tesla’s Autopilot allows the car “to steer, switch lanes and manage speed on its own,” according to MIT Technology Review. But shortly following the program's release, customers were listing numerous problems that pertained to the issue of accountability.

In a YouTube video, a Tesla Model S driver described a near collision that occurred when the car failed to switch lanes fast enough. He commented, "Had I not reacted quickly and jerked the steering wheel in the opposite direction, a devastating head-on collision would have occurred." This portends the many accidents that could occur if drivers do not remain vigilant behind the wheel. The near-accident not only exposed liability problems but also negated the whole point of such vehicles: the driver might as well be in charge of the steering wheel the entire time if such accidents are foreseeable.

Companies like Google are also developing models that will be fully "self-driving" and will even be sold without a steering wheel. In Google's Self-Driving Car, the only control offered to human occupants is a single button that stops the car. Google declares that its autonomous cars will dramatically reduce vehicular accidents, since "94 percent of accidents in the U.S. involve human error," but this does not resolve the quandary of accountability if an accident were to happen because of a malfunction in the program. A stop button will not be a uniform solution to every type of car accident, and if the button itself fails, the car would have complete autonomy with no way for the occupant to control its actions.

Moreover, anyone could sit behind the wheel of a fully autonomous car and still reach a destination. If so, what will the legal driving age be? The legal age is likely to drop if human drivers are no longer held responsible for accidents. If that happens, an immature child could use the car for mischief and escape blame. Learning to drive with your hands on the wheel is a "rite of passage," because it teaches one to be responsible for one's own actions behind the wheel, not the car's.

Autonomous cars will also devalue the importance of etiquette. Once fully autonomous cars roam the world, we will quickly forget what it was like to interact with other drivers on the road, let alone face to face. Human interaction is becoming increasingly hard to find in this century of advancing machines, and self-driving cars are perpetuating the trend. In my eyes, self-driving vehicles are ushering in an era with a diminished sense of human co-existence.

One possible advantage of advancing vehicular Artificial Intelligence (AI) is a reduced number of accidents caused by human error, such as drunk driving. While this advantage may be realized in a future of flawless self-driving cars, today's AI models are prone to accidents even with sober drivers. For now, deaths caused by computers, accidental and unaccountable, will only climb as self-driving cars proliferate.

Currently, autonomous cars are driving toward a cliffhanger of unfathomable questions. The far future could be different, perhaps offering feasible answers. Yet through the present window, I can only see human convenience advancing toward a sticky pile of ethical conundrums.
