By Edward Niedermeyer
Wednesday, May 11, 2016
In every age of human civilization, our ability to create
transformative new technology has consistently outpaced our ability to
understand its full impact. As a result, we tend to project ourselves onto our
potent but baffling creations, imagining them to be the heralds of utopian
bliss or dehumanizing decline depending on our established values. Thus
predictions about and discussions of the future effects of new technologies
tend to tell us far more about ourselves than about the technologies
themselves.
Today, no new technology elicits such revelatory
projections as artificial intelligence. From factory jobs to internet traffic,
robots seem to be replacing humans in critical aspects of our economy and
culture at a blinding pace, opening a profound debate about whether replacing
human minds and bodies with software and steel will benefit or harm our
societies and ourselves. Many of these debates are purely economic, reducing
perceptions of robots to the likelihood that one could take your job. But some,
like the debate over self-driving cars, open vistas that extend all the way to
our moral cores.
Thus far artificial intelligence has been largely
confined to the digital world, limiting its negative impacts to annoyance,
snooping, learned racism, and other relatively minor ills. But autonomous cars don’t
just bring bots into the corporeal realm; they place them at the center of one
of our most consequential and dangerous daily tasks: driving. Even though fully
autonomous vehicles are still years away from commercial deployment, the public
is already riven with debates about their possible ethical impact.
The Trolley Problem Isn’t Really About the Cars
The most common of these debates centers on the so-called
“trolley problem,” a hypothetical situation in which an autonomous vehicle (or
trolley operator) faces a choice between two negative outcomes and must choose
the lesser of the two evils. Countless thinkpieces use this problem to
illustrate the complexity of the moral choices—especially in the life-or-death
situations that regularly occur on the road—to argue that superior sensor and
processing power alone can’t make robots better drivers than humans. It’s a
compelling dilemma, which delivers an affirming conclusion: robots will never
truly replicate our ability to weigh ethical and moral values.
But the “trolley problem,” like all moral hypotheticals,
ultimately operates only as a warning to our fellow humans: those developing
autonomous vehicles would do well to deeply consider the moral problems their
inventions raise and take pains to mitigate them. Like most critiques of new
technologies, the “trolley problem” reveals far more about ourselves than it
does about autonomous vehicles.
Although zero-sum choices can never be entirely avoided,
self-driving cars are programmed to default to a state of rest. That means most
real-world “trolley problems” will be solved by the vehicle simply stopping,
rather than barreling headlong into one of two negative outcomes. More
importantly, they will do so far more effectively than any human could, thanks
to their superior situational awareness and reaction time.
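To see why stopping resolves most of these dilemmas, consider a minimal sketch of such a default-to-rest policy, written in Python with invented names and made-up risk numbers; it illustrates the general logic described above, not any manufacturer’s actual planning code.

# A minimal sketch of the "default to rest" logic described above.
# The maneuver names and risk numbers are invented for illustration;
# this is not any real vehicle's planning code.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    collision_risk: float  # estimated probability of harm, between 0.0 and 1.0
    feasible: bool         # whether the vehicle can physically execute it in time

def choose_maneuver(options: list) -> Maneuver:
    """Pick the feasible maneuver with the lowest estimated risk.

    A controlled stop is always included as the fail-safe default, so a
    "trolley problem" choice between two harms only arises when stopping
    is itself infeasible or riskier than the alternatives.
    """
    controlled_stop = Maneuver("controlled stop", collision_risk=0.01, feasible=True)
    candidates = [m for m in options if m.feasible] + [controlled_stop]
    return min(candidates, key=lambda m: m.collision_risk)

options = [
    Maneuver("swerve left", collision_risk=0.40, feasible=True),
    Maneuver("swerve right", collision_risk=0.25, feasible=False),
]
print(choose_maneuver(options).name)  # prints "controlled stop"

In this toy model, the zero-sum swerve-or-swerve choice simply never comes up as long as a controlled stop remains available and lower-risk, which is the point the paragraph above makes about real-world “trolley problems.”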
Can’t Do Much Worse Than Humans
Although we are still learning the true capabilities of
various autonomous car AI systems, it seems entirely plausible they will be
capable of clearing the very low bar humans have set as drivers. Some 1.3 million humans, 33,000 of them Americans, die on the road each year, a toll that eclipses those of many diseases and activities that seem far riskier. The vast majority of these so-called “accidents” are caused by some
form of operator error, and whereas the mere prospect of a robot driver
inspires no end of ethical hand-wringing, we actually code in a kind of
cognitive helplessness about the road deaths we cause by calling them
“accidents.”
This is no coincidence. Precisely because driving carries
such a massive and regular risk of injuring or killing ourselves and others, we
tend to cultivate a sense of detachment from it. If we fully appreciated the risks
we subject ourselves to each time we get behind the wheel, the psychological
stress would be unbearable.
Instead, we cope by affecting a nonchalance about the
dangers of driving: we talk, text, eat, lose ourselves in music and podcasts,
and generally treat driving as if it were the easiest thing in the world.
Ironically, this false confidence dramatically drives up the risk of death and
injury for the people who are most afraid of driving and least able to handle it.
We Handle This by Avoiding It
This phenomenon does not reflect well on humans as moral
creatures. Given the life-and-death consequences of driving, we should be more
than willing to give it our complete attention, actively seek out training to
improve our skills, and reinforce social values that reflect what’s at stake
every time we get behind the wheel.
Yet driver’s education in the United States is on the
decline, and has never been on par with the intensive, mandatory training
regimes in most developed countries. In a country that is literally built around automobiles, our inability to prioritize driver training all but forces our citizens into a state of moral abandon on a daily basis, incurring huge costs
in blood and treasure.
Compared to this broad abdication of moral
responsibility, fretting about the moral decisions of robot cars seems
incredibly solipsistic. Yes, someday a human will die as a result of an
autonomous car’s failure or error, and when this happens a massive moral panic
will likely ensue. But this inevitable outrage will be no more the result of
informed and effective moral reasoning than the current lack of outrage at our
willingness to sacrifice our fellow humans for a sense of psychological ease in
our daily commutes.
The rise of robot drivers won’t introduce new hazards
onto the public road so much as it will give us a scapegoat that we can condemn
without implicating ourselves in the culture of carelessness that kills more
than a million of our fellow humans each year.
We Can Only Be Honest When Facing an Alternative
For many of the proponents of autonomous vehicles, the
emerging ability to deploy alert, well-trained, distraction-proof artificial
intelligence represents an opportunity to save humans from our monstrously
lethal nonchalance about the dangers of driving. On a practical level it seems
entirely likely that artificial intelligences will be able to clear the low bar
human drivers have set, but on a moral level embracing robot drivers as the
solution to our cynical disregard for human life is hardly a satisfactory
response to the situation. Morality is not simply about driving better
outcomes, but about understanding and embracing our responsibilities as humans
and continuously improving our sense of right and wrong.
In messianic religions like Christianity, the coming of a
savior does not mean an end to moral struggle but rather a redefinition of
moral struggle. The mere existence of a messiah does not save souls; rather,
the moral and spiritual example they provide demonstrates a path towards
redemption through which the faithful must still struggle. By creating
intelligences capable of surpassing our meager abilities of attention,
awareness, and driving technique we have cast a light on shortcomings whose
moral effects are so profound that we have thus far been unable to directly
confront them.
The task autonomous vehicles set before us is not simply
to try to match their capabilities as drivers, but to struggle against the
deeply immoral attitudes about driving we’ve buried beneath layers of
self-justification and psychologically convenient ideology. The most meaningful
moral questions raised by the advent of the autonomous car are not about their
internal codes, but about our own.