Unmanned vehicles are a new and evolving industry, with implications not just for various job sectors, but for the public's everyday life. Each year we grow closer to fully automated vehicles, with companies vying for an early foothold in the new market, releasing increasingly sophisticated and polished models to fight it out in the automotive arena. Most projections put full saturation of the market at 2070, and we can expect that around 50% of vehicle sales and 30% of all vehicles in use will be fully autonomous by 2040, the impact of which will be profound.
There is no doubt that autonomous vehicles will be a huge boon to the economy. Current estimates by KPMG show that, by as early as 2030, level 2 and 3 autonomous vehicles will have opened new revenue opportunities of £51 billion from software, hardware and sensors (KPMG, Connected and Autonomous Vehicles - The UK Economic Opportunity, 1st ed., 2015, p. 12), in an industry which already accounts for 4% of UK GDP (£60.5 billion) and provides employment for more than 700,000 people (KPMG, The UK Automotive Industry and the EU, 1st ed., 2014, p. 6). However, we must, as always, temper innovation and progress with ethical considerations. Are we ready for autonomous vehicles? What effects could they have on everyday working people? And what are some of the hard ethical questions engineers are having to ask themselves when designing these wondrous machines?
It will seem obvious to anyone who has driven before, and perhaps even those who have not, that driving and our roads are, statistically, not safe. In 2014 the UK alone suffered 194,477 casualties of all severities, of whom 22,807 were seriously injured and 1,775 died (Department for Transport, 2015, pp. 1-2). Driver error is cited as the cause of 94% of incidents (National Highway Traffic Safety Administration, 2015, p. 1). The economic cost of this is staggering: road traffic accidents cost the UK 2% of its GDP and, in turn, cost the Government and the NHS £16.3 billion.
Now compare this to Google's self-driving car, which has covered over 1.7 million miles in six years. So far it has been involved in only 14 minor accidents, all of which were reportedly the fault of manually driven cars rather than the automated car itself, and suddenly the argument for self-driving cars looks rather healthy. Further to this, KPMG predict that self-driving cars could save more than 2,500 lives and prevent over 25,000 accidents a year by 2030 (KPMG, Connected and Autonomous Vehicles - The UK Economic Opportunity, 2015, p. 12).
And yet they can never be perfectly safe, which poses some difficult questions. How should the car be programmed in the event of an unavoidable accident? Should we minimise the loss of life, even if it means the occupants are sacrificed, or should we protect the occupants at all costs, regardless of any other variables? Should the car choose somewhere between these two extremes on some logical basis, or should the outcome be entirely random? These are important ethical questions currently faced by engineering teams around the globe, and they could very well have a significant impact on the way self-driving cars are viewed and accepted by society.
What can initially seem like sensible programming can easily run into complicated and confusing ethical dilemmas. To give an example, suppose that an autonomous car is faced with the terrible decision to crash into one of two vehicles, a Volvo XC90 or a Smart car. Do we target the heavier vehicle, which is better able to absorb the impact, or the one with better passenger safety? In both cases the XC90 is the selected target. Either way, we now have intentional discrimination against a particular type of vehicle, whose owners must bear the burden through no fault of their own beyond caring about safety or needing a large car for work or family. Suddenly a logical solution to a problem presents more problems than it solves.
To delve into this further, consider crash optimisation: ideally, we should program a car to crash into whatever can best survive the collision. While this sounds good in theory, it becomes confusing under closer inspection. Take, for instance, a cyclist. If the algorithm prioritises whoever can best survive a crash, it will account for the much higher odds of surviving a collision while wearing a helmet, and so we now have a car that, in certain situations, would deliberately target cyclists with helmets over those without, as the result of a statistically and logically sound decision process, effectively punishing cyclists for being responsible and thinking about their own safety. Further, if self-driving cars become a sizeable proportion of the vehicles on the road, we may inadvertently encourage cyclists not to wear helmets, so as not to stand out as a favoured target. In our previous scenario, we may also inadvertently affect sales of highly safety-rated cars, such as the Volvo, as people choose less safe models so as not to become a target of our autonomous car.
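To make the dilemma concrete, here is a minimal sketch, in Python, of the crash-optimisation rule described above. The targets, survivability estimates and function names are illustrative assumptions made for this essay, not part of any real vehicle's software.

    # Hypothetical "crash optimisation" rule: rank possible collision targets
    # by estimated survivability and pick whoever is most likely to survive.
    from dataclasses import dataclass

    @dataclass
    class Target:
        label: str
        survival_probability: float  # assumed estimate, 0.0 to 1.0

    def choose_target(targets: list[Target]) -> Target:
        # Select the target judged most likely to survive the impact.
        return max(targets, key=lambda t: t.survival_probability)

    options = [
        Target("cyclist without helmet", 0.40),
        Target("cyclist with helmet", 0.65),
        Target("Smart car occupant", 0.80),
        Target("Volvo XC90 occupant", 0.95),
    ]

    print(choose_target(options).label)  # prints "Volvo XC90 occupant"

The numbers are invented, but the structure exposes the problem: whichever party invested most in their own safety scores highest and is therefore chosen, which is exactly the perverse incentive described above.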
Though who is to say that our car should choose between one extreme or the other at all? Why not assign a number to each option and let a random number generator determine the outcome, removing any calculated choice altogether and with it any possibility that the car's programming is discriminatory in any fashion, whether against large vehicles, good safety records, cyclists or anything else? This presents us with another problem: by better mimicking human behaviour, we overlook the fact that autonomous vehicles are supposed to be better than us at making choices. Moreover, while we can forgive human drivers for a split-second decision made during a collision, we cannot allow that same freedom to our robot cars, for whom a split second is more than enough time to calculate and process millions of different outcomes and options.
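Expressed as a sketch under the same illustrative assumptions as before, the random alternative is trivially simple, and that simplicity is precisely what makes it uncomfortable:

    # Hypothetical "random choice" alternative: ignore every attribute of the
    # possible targets and pick one uniformly at random. Labels are illustrative.
    import random

    def choose_target_randomly(targets: list[str]) -> str:
        # No scoring, no preference: every target is equally likely.
        return random.choice(targets)

    options = ["cyclist without helmet", "cyclist with helmet",
               "Smart car", "Volvo XC90"]
    print(choose_target_randomly(options))

The appeal is that no attribute of any target can ever be blamed for the outcome; the cost is that the machine chooses not to use information it demonstrably has the time to evaluate.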
Some might think the case is closed once we answer a few of the questions above. However, we also have to contend with the level of detail these algorithms require. Do we become ageist and discriminate against the elderly, who, compared with a young child, have already led a full and healthy life? Or do car manufacturers look at average settlement costs and decide, by area, where and into whom to crash, opening ourselves up to socio-economic discrimination against the poor, who either do not pursue legal cases or settle for less than the wealthier in society? That would increase traffic in poorer districts while lessening it in more affluent ones, placing considerable strain on already underdeveloped infrastructure. There is also the question of how you sell a car that prioritises minimising total loss of life to an individual driver, who will almost always be in the minority and whose car will therefore usually decide against them.
Expanding our view beyond purely engineering and robot-ethics concerns, we have to ask what impact autonomous cars will have on society as a whole. Once we no longer have to worry about manual driving, because our cars can take us safely home in a fully autonomous mode, will this encourage a culture of increased alcohol consumption as the risk of drunk driving diminishes? Despite the growth in jobs and economic output the technology could bring, sectors such as taxi services and haulage could see hundreds of thousands of jobs lost almost overnight as transport becomes autonomous and the need for human drivers disappears, potentially removing the sole income of many families and condemning them to poverty, if only temporarily.
Cars may be one of the most iconic technologies ever developed, forever changing our social, cultural and economic landscapes. They have made possible types of work once thought impossible and accelerated the pace of business. They help us rush countless people to hospital and deliver medical goods to rural areas. They allow friends and family to be closer to one another, yet also take us further away from each other. They kill over 30,000 people in the USA alone each year and waste our time sat in rush-hour traffic. They are a major contributor to pollution and global warming, placing greater and greater strain on the resources of our planet.
Automated cars promise both great benefits and unintended, difficult-to-predict side effects, but the technology is coming regardless. Change is inescapable. The deep, intrinsic complexity of forecasting the potential problems these vehicles may run into is a great challenge, and one we must rise to.
