
Ethics of the innovation of automation in the aeronautical industry

Ethics and Engineering for Aerospace Engineering

Fábio Fernandes [4921437], Frans Loekito [4776887], and Tiago Costa [4832507]

Delft University of Technology

Abstract. Automation is advancing rapidly in the aeronautical industry, and fully automated aircraft raise ethical questions that should be anticipated before the technology arrives. This essay discusses the emergence of fly-by-wire flight control, the risks of full automation in the commercial and military sectors, the attribution of responsibility between pilots, manufacturers and airlines in case of failure, and the conditions under which fully automated flight could be developed responsibly and accepted by the public.

Keywords: automation · aviation · fly-by-wire · artificial intelligence · responsibility · ethics

1 Introduction
Automation has been an unstoppable wave in recent transportation technologies. Nowadays, its benefits are palpable: it helps us perform previously tedious tasks, such as parallel parking and maintaining speed on highways, by letting artificial intelligence take over those actions completely. However, complete automation in transportation is still unheard of, and will likely remain so for at least the next five years. Even in the aeronautical sector, where avant-garde advancements are made on a yearly basis, the idea of a crewless flight might seem a little far-fetched for the time being. Yet, as man cannot stop the tide of progress, complete automation in aircraft is bound to happen soon. For this reason, it is important to anticipate the potential ethical debates that will arise. Various ethical questions can be raised; we choose to focus on the following. Is it acceptable to have unmanned cockpits in commercial and military aircraft? What are the risks of automation? Who is responsible in case of failure? How far should automated systems be in control of an aircraft? No means of transportation is fully automated nowadays; why should the aeronautical industry be the first? This essay delves into the topics of responsibility and social acceptance concerning aircraft automation in both the commercial and the military aeronautical sector.

2 The emergence of fly-by-wire


To gain a holistic view of the ethical problem, we must first understand automation in aeronautical transportation, more commonly known as the "fly-by-wire" system. A fly-by-wire flight control system is an electrical primary flight control system employing feedback such that vehicle motion is the controlled parameter, completely replacing the mechanical linkages between the pilot's stick and the actuators with electrical signal wires [2]. Shortly after World War II, fully powered controls (partial automation) came into being, disconnecting the direct connection between pilot and actuators. Examples of aircraft using fully powered controls are the F-86, F-4C, F-104, F-105, and the 727 rudder. One of the primary reasons for using fully powered controls is that the forces on the control surfaces in the transonic region vary greatly and are highly nonlinear; fully powered controls are unaffected by this non-linearity.
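To make the feedback principle concrete, here is a minimal sketch in Python of a fly-by-wire control loop. It is purely illustrative, not any real flight control law: the stick deflection is interpreted as a commanded pitch rate, and a proportional-integral feedback law drives the elevator so that the measured rate tracks the command. All names, gains and limits are assumptions.

```python
# Minimal illustrative fly-by-wire loop: the stick commands a pitch rate,
# and feedback drives the actuator so the measured rate tracks the command.
# All gains and limits below are hypothetical, not from any real aircraft.

DT = 0.01              # control loop period [s]
KP, KI = 2.0, 0.5      # proportional and integral gains (assumed)
MAX_DEFLECTION = 0.35  # elevator deflection limit [rad] (assumed)

def stick_to_rate_command(stick: float) -> float:
    """Map a normalized stick input [-1, 1] to a pitch-rate command [rad/s]."""
    MAX_RATE = 0.2  # assumed pitch-rate limit
    return max(-1.0, min(1.0, stick)) * MAX_RATE

def fbw_step(stick: float, measured_rate: float, integrator: float):
    """One control-loop iteration: returns (elevator command, new integrator)."""
    error = stick_to_rate_command(stick) - measured_rate
    integrator += error * DT
    command = KP * error + KI * integrator
    # Saturate the command: an electrical signal, not a mechanical linkage,
    # is what finally reaches the actuator.
    command = max(-MAX_DEFLECTION, min(MAX_DEFLECTION, command))
    return command, integrator
```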
This breakthrough, however, raised a critical question: what is the limit of automation in flight? One might argue that, since automation in flight reduces dependency on the pilot, it will in fact increase safety. Accidents attributable to the pilot fall into two major modes: pilot error and pilot suicide. EgyptAir Flight 990 and Germanwings Flight 9525 are two examples of the accidents caused by the latter [6]. The former has also caused its share of devastating aircraft disasters: Korean Air Flight 801 [12] and the collision between KLM Flight 4805 and Pan Am Flight 1736 [10] are two infamous accidents caused by errors in the crew's judgement and a lack of communication. Indeed, automation can avert both accident modes; according to the International Air Transport Association, the airline industry trade body, there were 1.25 accidents per million jet flights in 2017, lower than the five-year average of 1.46 (which was already a very low level) [3]. Using the same accident-avoidance technology found in some automated cars, an aircraft could simply override the pilot if, for whatever reason, the pilot heads the airplane towards the ground. Automation can also avert pilot errors such as those related to navigating bad weather and poor visibility [6], increase flight-path efficiency, and reduce the pilot's workload in repetitive tasks. Last but not least, it can even increase passenger comfort, as the flight will be "smoother" [7].
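To illustrate how such an override might be structured, the sketch below implements a crude ground-collision protection check. The look-ahead window, the safety floor, and the full nose-up recovery command are all invented for the example; certified terrain-awareness systems are far more sophisticated.

```python
# Illustrative ground-collision protection: if the projected altitude within
# a short look-ahead window falls below a safety floor, the automation
# overrides the pilot's input with a recovery command. Thresholds are assumed.

LOOKAHEAD_S = 10.0      # how far ahead the trajectory is projected [s] (assumed)
MIN_ALTITUDE_M = 150.0  # safety floor above terrain [m] (assumed)

def projected_altitude(altitude_m: float, vertical_speed_ms: float) -> float:
    """Naive constant-rate projection of altitude over the look-ahead window."""
    return altitude_m + vertical_speed_ms * LOOKAHEAD_S

def select_command(pilot_cmd: float, altitude_m: float,
                   vertical_speed_ms: float) -> float:
    """Pass the pilot's command through unless a crash is projected."""
    if projected_altitude(altitude_m, vertical_speed_ms) < MIN_ALTITUDE_M:
        return 1.0  # hypothetical full nose-up recovery command
    return pilot_cmd

# Example: descending at 40 m/s from 500 m projects to 100 m in 10 s,
# so the automation overrides the pilot's nose-down input.
assert select_command(-1.0, 500.0, -40.0) == 1.0
```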
On the other hand, many also claim that artificial intelligence used in flight cannot be fully trusted, because the interaction between an automated system and a pilot can prove fatal when it goes wrong, as in the case of Air France Flight 447. There, the plane's faulty sensor system confused the pilots and led to one of the world's worst aviation disasters [14]. Even if the crew was at fault for trusting the faulty data, what happens when there is no crew on the plane at all? Pilots can be trained to identify faulty data and prevent an accident; without them, faulty data can lead straight to catastrophe. Moreover, some in-flight emergencies simply cannot be handled by a fully automated aircraft system: in the case of unruly or sick passengers, for instance, crew members are still needed to override the aircraft's artificial intelligence [3].
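A standard defence against the kind of sensor failure seen on Flight 447 is redundancy with voting. The toy sketch below takes the median of three airspeed readings and flags any channel that deviates beyond an assumed threshold; it illustrates the principle only, not the monitoring logic of any real avionics suite.

```python
# Toy triplex-sensor voter: with three redundant airspeed readings, take the
# median and flag any channel that disagrees with it beyond a threshold.
# The threshold is an assumption for illustration.

from statistics import median

DISAGREE_KT = 15.0  # maximum tolerated deviation from the median [knots]

def vote_airspeed(readings: list[float]) -> tuple[float, list[int]]:
    """Return (voted airspeed, indices of suspect channels)."""
    voted = median(readings)
    suspects = [i for i, r in enumerate(readings) if abs(r - voted) > DISAGREE_KT]
    return voted, suspects

# Example: one pitot tube ices over and under-reads badly.
speed, bad = vote_airspeed([272.0, 268.0, 90.0])
print(speed, bad)  # 268.0 [2] -> channel 2 flagged, voted value still usable
```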
More importantly, security is also an issue. As terrorism is one of aviation's greatest challenges, full automation will also give rise to cyber threats to flights. Even though this type of terrorism has not yet happened, the FAA recently found "holes" in avionics security. In 2014, it warned that the screens displaying critical information to the pilots in more than 1,300 Boeing airplanes could flicker or even go blank during takeoffs and landings. These screens, manufactured by Honeywell, were vulnerable both to WiFi and cellphones used in the passenger cabins and to electronic interference from outside, including from satellite communications and weather radar [8]. Terrorists could exploit these vulnerabilities and "hack" the plane from the ground using, for example, a satellite connection, one of the channels to which aircraft avionics are particularly vulnerable.
These claims form the basis of the controversy over full automation in flight. So far, however, we have only discussed commercial aviation; automation in the military sector can be just as controversial.

3 The application of automation and its risks

3.1 Military

When it comes to military aircraft applications, the topic of automation is even more controversial due to the added risks involved, such as the presence of guided weapons that could be wrongly used to kill many people or cause massive destruction.
Nevertheless, “flying military combat aircraft requires an exceptional amount
of decision making in a very short window with lots of distractions” [16], making
it impossible to pilot a modern military jet without any help from the fly-by-wire
system. With the advancement of technology, it is certain that increasingly complex airplanes will be developed, making it necessary to improve the capabilities of current fly-by-wire systems and to extend their control over the aircraft.
Human pilots are subject to pilot workload and, like everyone else, to decision fatigue. "Every decision that a person has to make has a cost in terms of mental resources. The more decisions a person has to make in a short period of time, the more likely that the decision won't be quite right" [16]. For a pilot, decision fatigue can reduce focus and situational awareness, alter perceptions of risk, and add to distraction, decreasing the ability to properly fly the airplane and carry out the mission. Machines, on the other hand, do not have these neurological limitations, making them more capable than human pilots in certain situations, for example when flying close to exact limits.
At the moment, one of the objectives for automatic flight systems is to make them communicate more effectively with human pilots, because having too much automatic flight control in the cockpit can make flying more dangerous. Poorly designed autopilots are more likely to increase the pilot workload than to decrease it, for example by giving unclear information to the pilot [11].
The main goal of the fly-by-wire system is not so much to remove or reduce complex decisions as to remove or reduce the complex manoeuvring that goes along with them, allowing pilots to make better decisions than they would without the autopilot's help.
Having automated aircraft that do not need a pilot on board, for example UCAVs (Unmanned Combat Aerial Vehicles), has many advantages: it avoids putting pilots' lives at risk, reduces the workload on the pilot, and extends the time an aircraft can remain in loiter (a phase of flight where the aircraft remains over a designated target area). However, it also involves many risks, which need to be taken into account when using such aircraft.
Eventually, with the advancement of technology, fully autonomous airplanes
will be built. But allowing fully automated killing machines to make decisions
when human lives are at stake raises all sorts of ethical questions. For instance,
how can one guarantee that the machine has moral autonomy? Can we trust
that it will always make the right decision, according to human standards?
The concept of responsible innovation is of the utmost importance when designing fully automated military aircraft, since the security risks involved in developing such technology are very serious. For instance, given that there is no pilot on board, if the autopilot malfunctions there is no one to regain control of the airplane. Or, an even more serious threat: if a security breach in the software allows terrorists to hack it and take control of the airplane, they will be able to use it for their own purposes.

3.2 Some crucial examples

In the past, some malfunctions of fly-by-wire systems have occurred, some with more severe consequences than others. One example is the 1989 crash of a Swedish Gripen jet, caused by a software fault. The movement of the stick exceeded the limits predicted when the flight control system was designed, making the airplane unstable in the landing phase: uncontrollable pitch oscillations developed and the airplane finally crashed into the runway [4].
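This failure mode is closely tied to how flight control software bounds pilot inputs. As a hedged illustration of the general idea, the sketch below shows a generic command rate limiter, one common way of keeping commands inside the envelope the control law was validated for; the limit value is invented and the Gripen's actual control law is not reproduced here.

```python
# Generic command rate limiter: the output may follow the input, but never
# faster than MAX_RATE units per second. If inputs outside the designed
# envelope are not bounded like this, the control law can be driven into
# regions it was never validated for. The limit value is assumed.

MAX_RATE = 0.5  # maximum command change per second (assumed)

def rate_limit(previous: float, requested: float, dt: float) -> float:
    """Move from `previous` towards `requested`, by at most MAX_RATE * dt."""
    max_step = MAX_RATE * dt
    step = requested - previous
    if step > max_step:
        step = max_step
    elif step < -max_step:
        step = -max_step
    return previous + step

# Example: an abrupt full-scale stick reversal is smoothed over several steps.
cmd = 0.0
for _ in range(5):
    cmd = rate_limit(cmd, 1.0, dt=0.1)
print(round(cmd, 2))  # 0.25 after 0.5 s, still ramping towards 1.0
```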
Even these relatively simple flight control systems had faults in their design, and with the increasing complexity of the flight control systems needed to pilot autonomous aircraft, such faults become more likely. To minimise the risk of malfunction, development must therefore be carried out very carefully and the validation processes must be rigorous.

3.3 Risk acceptance

4 Responsibility with automation

By now, we have seen several examples of how automation has contributed to the aeronautical industry, but also some examples of previously unforeseen problems. These situations have no easy solution, as responsibility for eventual failures falls on different parties.
Let us start with two easy examples. If, today, a plane crashes due to the negligence of one of the pilots (supposing, for example, that the airline had no way to predict this behaviour and did not impose the poor working conditions that can stimulate it), the pilot's actions make him accountable for any casualties as well as for the damage caused to airline property. Pilots are trained to act in every kind of scenario, and have protocols to do so, which can prevent undesired outcomes in critical situations. If, however, that plane crashes due to a series of critical mechanical or electrical failures that, combined, do not allow either of the pilots to prevent the crash, the manufacturer is mainly to blame, as it has the responsibility to ensure the normal functioning of every aircraft bought by the airline. These two examples are just a small sample of what can happen nowadays. But in case of a failure of automated controls, the problem can propagate outside these two spheres of pilots and manufacturers.
In these kinds of failures, pilots can override the controls of the aircraft and assume command, which prevents possible catastrophes. In a more problematic scenario, they can even perform an emergency landing if the failures are likely to increase the chances of a crash. But if the pilots cannot take over the controls of the aircraft, the manufacturer is, again, responsible for not ensuring the normal functioning of the aircraft.
Let us now imagine a fully automated aircraft, not controlled by any pilot, carrying several passengers on board. If the automated system fails (excluding any other possible mechanical, electrical or electronic malfunction), two parties must assume responsibility. First, the manufacturer, for not ensuring that an automated system failure alone cannot cause a crash. Second, the airline, which now carries more responsibility than in the previous cases: lacking safety measures for these events and allowing a computer to take actions that can put the lives of its customers at risk can be considered negligence.
As will be discussed in Section 5, Artificial Intelligence (AI) is one of the tools predicted to have a big impact on the aeronautical industry (as stated by Rodin Lyasoff, CEO of A³ by Airbus¹). This obviously brings most of the ethical questions surrounding AI into the panorama of automation. At first glance, a failure of such a system must be considered critical whenever the aircraft's controls depend on it to work properly.
This can be considered the 'tip of the iceberg' dilemma of AI-guided airplanes. Drastic situations can occur during flights, and decisions must be taken by the AI systems. Foreseen situations can be programmed for, but in unforeseen situations these systems are designed to decide automatically, based on a prediction of what offers the best chances of safety for everyone involved [15]. This gives computers the autonomy to take morally debatable actions in critical situations. And when such situations do happen, who shall be held accountable for the negative consequences?
Imagine, for example, that an airplane crash is imminent. In this scenario, the airplane can either crash on water, most likely destroying itself and everything on board, or attempt a landing on a beach, which could increase the chances of survival but risk the lives of several innocent bystanders. How should the AI be programmed, and how will it act? Should it have the moral autonomy to make these decisions? What if the airplane is filled with merchandise only? What if there are only non-human animals inside? What if the airplane has several passengers on board?
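To make the dilemma concrete, the sketch below shows how naively such a choice could be encoded as an expected-harm comparison. Every number and the utility model itself are invented; the point is that someone would have to choose and justify them in advance, which is exactly the moral-autonomy question raised above.

```python
# A deliberately naive encoding of the ditching dilemma: pick the option with
# the lowest expected casualties. Every probability below is invented; the
# example exists to show that such numbers would have to be chosen by someone.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    occupants: int
    p_occupant_death: float   # assumed probability each occupant dies
    bystanders: int
    p_bystander_death: float  # assumed probability each bystander dies

    def expected_deaths(self) -> float:
        return (self.occupants * self.p_occupant_death
                + self.bystanders * self.p_bystander_death)

options = [
    Option("ditch on water", occupants=150, p_occupant_death=0.9,
           bystanders=0, p_bystander_death=0.0),
    Option("land on beach", occupants=150, p_occupant_death=0.3,
           bystanders=40, p_bystander_death=0.5),
]

choice = min(options, key=Option.expected_deaths)
print(choice.name)  # "land on beach" (65 vs 135 expected deaths)

# Should occupants and bystanders weigh equally? Should cargo-only flights
# use a different rule? The code cannot answer this; its authors must.
```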

¹ See https://www.forbes.com/sites/quora/2018/08/07/how-artificial-intelligence-will-impact-the-aviation-industry/

These hypothetical questions can easily become realistic once AI is implemented in aircraft.
There may also be some unpredictable situations which can effectively put these systems under stress. Some scenarios, such as ill passengers or even terrorism, require immediate intervention. Emergency landings usually help to solve these problems, but the automated system must know how to detect such events. It can also be argued that hijacking an airplane becomes much harder for terrorists, as in an automated system there may be no direct controls in the aircraft, and the AI can even lock out all access if it detects that it is under attack. But if the automated systems fail to respond, who shall take responsibility?
It is first important to note that, as stated in a recent paper released by the AI Now Institute at New York University, the public's right to know which systems impact their lives should be respected, by publicly listing and describing automated decision systems that significantly affect individuals and communities [13]. This allows, for example, the general public to know how automated an airplane is before actually boarding. In undesired scenarios, such as emergency landings or, worse, plane crashes, both the manufacturer and the airline should take responsibility for any negative consequences whenever the decision process is carried out by the automated systems. This follows from the fact that, even though the public may know how automated the airplane is, they have no control over it and trust the airline with their safety on any booked flight. The airline must then ensure, along with the manufacturer, that the automated system is capable of performing in every kind of situation as well as a fully trained pilot would. The public's knowledge of these systems should serve only to corroborate their acceptance of the risk, and should not shift accountability onto them.

5 Future development of automation

As discussed earlier, AI is a step that must be embraced in the aeronautical industry. This should, however, follow certain rules that, taken together, allow the replacement of pilots without any major concerns. With this, AI is expected to become a tool that is cheaper than, and at least as reliable as, human pilots. There are several options for implementing AI in commercial flights that provide a gradual adaptation to these systems, such that fully automated commercial flights eventually become realistic.
First, there is the concern of public acceptance. Our group conducted a small survey with (...). Even though this is not the most representative study, it can give some realistic and useful results. Today, as the public is not in daily contact with AI systems, there is little trust in them, which drives airlines away from investing in AI [9]. Implementation of AI must therefore be gradual. For example, always having a responsible pilot either on board or on the ground can enhance the general acceptance of this implementation [3]. This new generation of pilots, however, must be trained differently.

Pilots nowadays are trained to be able to perform any possible task in the cockpit. This training must be adapted to cover the interaction between pilot and AI. As AI will perform most of the tasks, a pilot must know how to interpret signals from the computer and must be ready to act when something goes wrong; this is more of a supervisory task [1].
Second, there is the decision-making process. The replacement of human pilots by computers must also account for the way pilots manage decisions in every scenario. It should therefore follow some of the points presented in the AI Code from the House of Lords of the United Kingdom [15]: it should be developed for the common good and benefit of humanity, and it should operate on principles of intelligibility and fairness. Only in this way can automated systems take over manual pilot controls and make decisions that command the broadest consensus.
Finally, as stated in the earlier sections, these systems must be foolproof against any kind of attack. The only way to verify this is through several years of development. We therefore call for a responsible innovation approach: rather than rushing towards AI airplanes (which are more cost-effective than paying pilots), AI-guided systems should be developed over the coming years and then slowly integrated into conventional aircraft. This should happen progressively and with certainty of the public's acceptance. As with every engineering system, risks will surely be taken, such as possible failures and a reduction in flight traffic due to trust issues, but in this way they are mitigated as much as possible.
Only when this transition is well implemented will a fully automated aircraft for commercial flights become a realistic idea that can be accepted within the aeronautical industry. This idea should also be accompanied by the responsibility of the several parties involved, manufacturers and airlines: they should ensure that the public's rights are preserved and that information about the aircraft and flights is publicly listed. In this way, security is preserved in the aeronautical industry while groundbreaking technology is put into action.

6 Conclusion
Aviation should be the leading example of security and innovation. It is important to note that these systems pursue not only cost reduction: flight safety is also improved, as the human factor in plane accidents is still worrisome. As Green puts it: "As our technology advances, we should not sit idly by and let human decisions and actions — or indecisions and inactions — kill more people." [6]

References
1. AeroSafety World, The Journal of Flight Safety Foundation (June 2010). http://flightsafety.org/asw/jun10/asw_jun10.pdf [Visited 14/12/2018].
2. Air Force Flight Dynamics Laboratory, Wright-Patterson Air Force Base, Ohio. Fly-By-Wire: Flight Control Systems (September 1968), Clearinghouse. https://apps.dtic.mil/dtic/tr/fulltext/u2/679158.pdf [Visited 10/12/2018].
3. Collinson, P. Pilotless planes: what you need to know. The Guardian (August 2017). https://www.theguardian.com/business/2017/aug/07/pilotless-planes-what-you-need-to-know [Visited 09/12/2018].
4. Gains, M. Software fault caused Gripen crash. Flight International (March 1989). https://www.flightglobal.com/FlightPDFArchive/1989/1989%20-%200734.PDF [Visited 12/12/2018].
5. Gaskell, A. Automation, Ethics And Accountability Of AI Systems. Forbes (April 2018). https://www.forbes.com/sites/adigaskell/2018/04/18/automation-ethics-and-accountability-of-ai-systems/ [Visited 09/12/2018].
6. Green, B. P. Ethics of flying: Germanwings raises question whether humans are weakest link in air safety. The Mercury News (March 2015). https://www.mercurynews.com/2015/03/27/ethics-of-flying-germanwings-raises-question-whether-humans-are-weakest-link-in-air-safety/ [Visited 09/12/2018].
7. Institute for Cognitive Science, University of California, San Diego. The 'Problem' with Automation: Inappropriate Feedback and Interaction, not 'Over-Automation' (July 1989), Royal Society. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19900004678.pdf [Visited 13/12/2018].
8. Irving, C., and Cox, J. Could Terrorists Hack an Airplane? The Government Just Did. The Daily Beast (November 2017). https://www.thedailybeast.com/could-terrorists-hack-an-airplane-the-government-just-did [Visited 13/12/2018].
9. Markoff, J. Planes Without Pilots. The New York Times (April 2015). https://www.nytimes.com/2015/04/07/science/planes-without-pilots.html?_r=0 [Visited 14/12/2018].
10. Ministerio de Fomento, Spain. Final Report: Collision between B747 KLM and B747 PAN AM, Los Rodeos, Tenerife, Spain, 27/03/1977 (July 1978). https://skybrary.aero/bookshelf/books/313.pdf [Visited 10/12/2018].
11. NASA Ames Research Center. Single-Pilot Workload Management in Entry-Level Jets (September 2013), Federal Aviation Administration, Office of Aerospace Medicine, Washington, DC. https://human-factors.arc.nasa.gov/publications/Single_pilot_workload_man_62413.pdf [Visited 15/12/2018].
12. National Transportation Safety Board. Aircraft Accident Report: Controlled Flight into Terrain, Korean Air Flight 801, Boeing 747-300, HL7468, Nimitz Hill, Guam, August 6, 1997 (January 2000). https://www.ntsb.gov/investigations/AccidentReports/Reports/AAR0001.pdf [Visited 10/12/2018].
13. Reisman, D., et al. Algorithmic Impact Assessments: a Practical Framework for Public Agency Accountability. AI Now Institute, New York University (April 2018). https://ainowinstitute.org/aiareport2018.pdf [Visited 12/12/2018].
14. Ross, N., and Tweedie, N. Air France Flight 447: 'Damn it, we're going to crash'. The Telegraph (April 2012). https://www.telegraph.co.uk/technology/9231855/Air-France-Flight-447-Damn-it-were-going-to-crash.html [Visited 10/12/2018].
15. Select Committee on Artificial Intelligence. AI in the UK: ready, willing and able? (April 2018), House of Lords. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf [Visited 12/12/2018].
16. Tucker, P. The Next Step Toward Autopilot in Combat. Defense One (April 2014). https://www.defenseone.com/technology/2014/04/next-step-toward-automated-air-force/83029/ [Visited 14/12/2018].
