
NATIONAL UNIVERSITY OF SINGAPORE

EG2401 Engineering
Professionalism
How Safe is Safe?
A0111151E Muhammad Rias
A0119681Y Muhammad Firdaus
A0094621H Roshan Kumar
A0111231H Lukmanul Hakim

Contents
1. Introduction
   1.1 Safety, Risks and Innovation
   1.2 Conceptualizing Safety and Risk: An Engineer's Approach
       Risk-Benefit Analysis
   1.3 Understanding Accidents
       1.3.1 Procedural Accidents
       1.3.2 Engineered Accidents
       1.3.3 Systemic Accidents
   1.4 Causes of Engineering Accidents
       1.4.1 Technical Design
       1.4.2 Human Factors
       1.4.3 Organisational System
       1.4.4 Socio-Cultural
   1.5 Safety and Ethical Consideration
       1.5.1 Ethical Theories and Tools
2. Case Studies
   2.1 Boeing 787
       2.1.1 Background
       2.1.2 Engineered Problem
       2.1.3 Procedural Problem
       2.1.4 Safety/Ethical Issues
       2.1.5 How the matter was resolved
   2.2 Tacoma Bridge
       2.2.1 Introduction
       2.2.2 Type of Accident
       2.2.3 Safety & Ethical Issues
       2.2.4 The Collapse & Aftermath
       2.2.5 Conclusion
   2.3 Apollo 13
       2.3.1 Summary
       2.3.2 Type of Accident
       2.3.3 Causes of Accident
       2.3.4 Resolving Safety Issues
3. Challenges in Testing
   3.1 Limitation of Resources
       3.1.1 Inadequate time to test and evaluate new design
       3.1.2 Limitation due to technologies
   3.2 Tight Coupling & Complex Interactions
       3.2.1 Collaboration of complex systems
   3.3 Uncertainty
       3.3.1 Acceptable Risk
       3.3.2 Designing for Safety
4. Conclusion
   Criteria for Safe Design
5. References

1. Introduction
"No duty of the engineer is more important than her duty to protect the safety and well-being of the public." - Fleddermann

Safety is a primary concern for engineers in product design because product failure can have far-reaching and disastrous consequences on people's lives. Often, engineering disasters or fatal product failures are attributed to unethical engineering practices, which call into question the competency of engineers as professionals. In some cases, product failures are unavoidable, resulting in "normal accidents" (Harris et al., 2013) which the engineers could never have anticipated. Nevertheless, it is the responsibility of an engineer to hold paramount the safety and well-being of the public, insofar as he is able to, and to design products which are adequately tested and proven to be safe.
1.1 Safety, Risks and Innovation
Safety ∝ 1 / Risk (eq. 1)

Safety and risk are inversely related (Harris et al., 2013): minimizing risk essentially maximizes safety.
Innovation is an integral part of engineering practice and invariably comes with new risks. The novelty of a new design means that there is no prior experience with the risks associated with the product. When engineers launch a novel product, they may not be able to gauge all the implications it may have on society. It is not humanly possible to account for and eliminate every risk related to a new product; indeed, it would be paradoxical to account for risks which the engineer is not aware of to begin with. It may also not be economically feasible to spend the money and effort to eliminate risks which are highly unlikely. Therefore, in light of these limitations, an engineer has to take on an acceptable level of risk such that safety is maximized without hindering innovative progress, a crucial but difficult task.

1.2 Conceptualizing Safety and Risk: An Engineer's Approach


Risk-Benefit Analysis
A prudent engineer is one who holds paramount the safety of the public and makes ethically sound decisions when taking risks. To mitigate risk, an engineer has to be able to assess risks accurately and identify their causes, from which he can take the necessary safety measures.

Traditionally, risk-benefit analysis has been used extensively by engineers and other professionals alike in making ethical decisions with regard to safety. In risk-benefit analysis, risk is defined as the product of the likelihood and magnitude of harm (eq. 2):

Risk = Likelihood of Harm x Magnitude of Harm (eq. 2)

The likelihood and magnitude of harm are quantified in terms of monetary cost and then multiplied to give an estimate of risk. Benefit is quantified similarly. The risk is considered acceptable if the value of the benefit is greater than that of the risk. However, this utilitarian approach to risk assessment is of limited use because it is difficult to assign a monetary value to risk and benefit. Moreover, harm is easily underestimated because the broader and more indirect impact on society is often overlooked.
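As a minimal sketch of how this acceptability rule might be applied in code, consider the following Python snippet; all figures in it are hypothetical, chosen only to illustrate the test described above:

```python
# A minimal sketch of the risk-benefit rule above (eq. 2).
# All figures are hypothetical and for illustration only.

def risk(likelihood_of_harm: float, magnitude_of_harm: float) -> float:
    """Risk = Likelihood of Harm x Magnitude of Harm (eq. 2)."""
    return likelihood_of_harm * magnitude_of_harm

# Hypothetical design option: a cheaper component with a known failure mode.
harm_likelihood = 0.02    # assumed probability of harm over the product's life
harm_magnitude = 200_000  # assumed monetary cost of one incident ($)
benefit = 5_000           # assumed saving from the cheaper design ($)

r = risk(harm_likelihood, harm_magnitude)
print(f"Risk = ${r:,.0f}, Benefit = ${benefit:,.0f}")
# Under the utilitarian rule, the risk is deemed acceptable if benefit > risk,
# which is precisely where the indirect harms discussed above get overlooked.
print("Acceptable under risk-benefit rule:", benefit > r)
```

Note that the verdict flips entirely with different monetary estimates, which is exactly the limitation noted above.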
1.3 Understanding Accidents
It is important engineers to study accidents which have occurred in the past so as prevent
them from occurring again. Fleddermann (2012) has categorised accidents as procedural,
engineered and systemic.
1.3.1 Procedural Accidents
These are the most common type of accidents. They occur as the result of engineers not
following regulations such building standards. It also includes situations where the engineers
approve faulty designs due to negligence. The Hyatt Regency walkway collapse in 1981 is a
well-documented example of a procedural accident (Feld & Carper, 1997). Engineers from
Dillum and Associates Ltd, the company which was responsible for overseeing the
construction of the suspended walkways, neglected their duty as they blindly approved the
new design but faulty design proposed by their sub-contractor. As a result, the poorly
designed walkways collapsed on 17th of July 1981, killing 114 and injuring 185.

1.3.2 Engineered Accidents

Engineered accidents are caused by design flaws. These flaws may be elusive during the testing phase and may sometimes lead to overestimating the performance of the designed product. Alternatively, the testing itself may not be rigorous enough, owing to the limitation of not being able to simulate every possible condition. The key to avoiding such accidents, as Fleddermann suggests, is to gain knowledge by studying similar cases and to have a thorough system of testing.
1.3.3 Systemic Accidents
These accidents are the most difficult to evade because of the tight coupling and complex interactions of systems (Harris et al., 2013). Tight coupling describes processes which are so closely related that it is difficult to completely isolate a product failure to one part of the system. Complex machines such as aircraft are good examples of tightly coupled systems because a failure in one part almost immediately affects others. Complex interactions refer to interactions within the system which are difficult to anticipate due to the sheer complexity of the design. Moreover, the organisations which run these complex systems also interact in unpredictable ways. Ultimately, these accidents happen when a series of minor failures cumulatively leads to a catastrophic accident, even though each failure by itself might not have caused it.
1.4 Causes of Engineering Accidents
1.4.1 Technical Design
Engineering accidents usually occur for the following design-related reasons:
1. Faulty design (usually the result of unethical practices)
2. Defective equipment used in design and manufacturing
3. Defective materials procured and used that do not meet industry standards
4. Faulty testing procedures

It is important to note that it is impossible to design and build a product that will never fail. But an engineer has the responsibility to ensure that when a product fails, it fails safely, without causing damage to the user, society or the environment. Therefore, an engineer must be able to design a product that complies with the law, meets the standard of acceptable practice, and is safe to use for a reasonable period of time.
1.4.2 Human Factors
Human factors contributing to product failure refer to the limitations of engineers, as human beings, in operating ideally according to prescribed standards, whether procedural or moral. For instance, design failure may result from misjudgement, ignorance or unethical practices. Misjudgement occurs through lack of experience in dealing with a particular product, whereby the engineer either overestimates or underestimates its performance capabilities. Misjudgement may also result when engineers follow unconventional ways of dealing with product design or testing, which can increase risk and compromise safety. Their decisions on acceptable risk may deviate from what is commonly accepted and, if not properly evaluated, may result in product failure. Unethical practices, on the other hand, are intentional behaviour and are not excusable. The moral values and ethics the engineer holds can largely influence critical decisions with regard to safety issues. When these values are not upheld or do not meet professional moral standards, the competency of the engineer as a professional is called into question.
1.4.3 Organisational System
The organisational system is a key component of both good safety cultures and high-reliability organisations. But learning can be thwarted by well-known difficulties in handling information: too much information, inappropriate communication channels, incomplete or inappropriate information sources, or failure to connect available data. These difficulties can pose acute challenges for safety. For example, an incomplete or inaccurate problem representation might develop at the level of the organisation as a whole and thus influence the interpretations and decisions of the organisation's individual members. Such a representation may arise through organisational rigidity of beliefs about what is and is not to be considered a hazard (National Academy of Engineering, 2012).
1.4.4 Socio-Cultural

1.5 Safety and Ethical Consideration

"Safety doesn't sell." - Lee Iacocca, President of Ford Motor Company, 1970s

Since safety issues can be very subjective, ethical considerations often have to be made in deciding the right course of action. Encountering ethical dilemmas in the course of designing a new product is common in the engineering profession. For instance, there may be a conflict of interest between designing a cost-effective product and a safe one. To ensure a product design is safe, more expensive parts might be necessary and money must be spent on adequately testing the product. For example, in the Ford Pinto case (Samuel & Weir, 1999), the Ford Motor Company chose to manufacture the Pinto with a flawed fuel tank design. Although Ford came to know of the flaw during the testing phase, it was already too late in the production phase: any changes from then on would have been very expensive, and the company would have failed to keep its 25-month cradle-to-launch schedule. Ford justified its decision to continue manufacturing the Pinto with an amoral cost-benefit analysis and even argued that automobile accidents are caused not by the car but by the driver and the road conditions. This shows that it is not sufficient merely to have tools to analyse safety and risk; they must be applied in a morally right manner in order to effectively mitigate risks and provide safety for the end user.
1.5.1 Ethical Theories and Tools
Ethical theories can be applied to solving ethical problems in engineering. It is very common for engineers to encounter situations which require them to adopt some kind of moral standard before making a decision. There are different types of ethical theory, each providing its own dimension to a framework which has to be used cautiously. The theories are not universally applicable, and it is up to the engineer's discretion how to apply them. The ethical theories of Utilitarianism, Duty and Rights Ethics, and Virtue Ethics will be used in this paper, together with ethical tools such as line drawing and flow-charting.

2. Case Studies
2.1 Boeing 787
2.1.1 Background
In 2004, Boeing Commercial Airplanes launched the 787 Dreamliner, an all-new, super-efficient airplane whose advanced design was unparalleled by its competitors. However, soon after entering service the aircraft ran into numerous problems, which eventually led to its grounding.
2.1.2 Engineered Problem
The implementation of lithium-ion batteries previously adapted from cars led to a critical design flaw. Within aircraft compartments, operating temperatures were significantly higher than in other vehicles, causing fluid to leak from within the cells and ignite, resulting in a complete burnout of the battery system.
2.1.3 Procedural Problem
The shift in management style from a safety-first design approach to a high-supply/low-cost approach led to engineers putting together a plane lacking in structural integrity, with entire sections of its wiring missing, inadequate testing of the system against safety requirements, and a general lack of standby safety features that left the plane vulnerable if anything went wrong.
2.1.4 Safety/Ethical Issues
1. Outsourcing essential parts to save on manufacturing costs
Safety
By outsourcing critical components such as the electrical and power panels, Boeing received lower-quality products because of weaker oversight of quality and less stringent manufacturing processes overseas. As a result, the 787 suffered recurrent electrical fires in operation, which forced the plane into emergency landings several times. Such outsourcing led to the malfunctioning of critical components, endangering the safety of the passengers on board.

With the rapid onset of globalization, Boeing faced increasing competition from other aircraft manufacturers such as Airbus. The company had to find ways to remain economically competitive and provide new air travel solutions to stay relevant. For the 787, it decided to outsource its production lines so as to save costs.

According to one of its engineers involved in the project, low-quality products become a significant problem when dealing with key components such as the power panels and the electrical system: "However, what is very different on the 787 is the structure of the outsourcing. You only know what's going on with your tier-1 supplier. You have no visibility, no coordination, no real understanding of how all the pieces fit together." (The Seattle Times, 2013)
Ethical
Utilitarian Thinking (Cost-Benefit Approach)
With hindsight of what occurred to the Boeing 787 after its launch, we can adopt a cost-benefit approach to understand Boeing's actions. Cost-benefit analysis is sometimes referred to as risk-benefit analysis because much of the analysis requires estimating the probability of certain benefits and harms. Using this approach, Boeing evaluated its available options, such as in-house production versus outsourcing. It assessed the costs and benefits of each action and finally chose to outsource, as outsourcing would significantly reduce its manufacturing costs. It predicted that such an action would raise only slight uncertainty over the quality of its products; hence the decision made financial sense. However, the approach did not work out, owing to two fundamental problems.

1. First, in order to know what we should do from the utilitarian perspective, we must know which course of action will produce the most good in both the short and the long term. In this case, Boeing had just started its radical approach to outsourcing, in which it monitored only the complete structure of the products and not the pieces or how they were assembled, as mentioned earlier. This led to its inability to adequately predict what economic benefit outsourcing would produce in the short versus the long run.

2. Second, the utilitarian aim is to make choices that promise to bring about the greatest amount of good. We refer to the population over which the good is maximized as the audience, and the problem is determining the scope of this audience. As an aviation company, Boeing held its values of safety and reliability to be of paramount importance. Outsourcing to bring down manufacturing costs may have benefitted the company financially, but it ultimately cost public safety heavily, and even the reliability of the aviation industry in general.
2. Disconnect between engineers and managers
Safety
The disconnect between engineers using a safety approach and managers adopting a cost approach led to conflicting design considerations for the aircraft. Required to design with the most cost-effective approach, the engineers could not implement any additional safeguards and had to complete the aircraft with as few resources and as little time as possible. This compromised the structural integrity of the aircraft, causing multiple problems such as a jammed doorway and cracks on the wing tips which seriously threatened to break up the plane in flight, with massive casualties.

This critical aspect of communication was lost late in Boeing's restructuring programme, after its new leadership took hold. With the incoming CEO McNerney came key changes such as trimming staff to the essentials, cutting all costs and squeezing suppliers. Under such a directive, job priorities diverged, resulting in ineffective communication between the engineers and managers. With the engineers' demands for key safety design elements met with objections over additional cost expenditure, the engineers had to develop a very complex system with cost limitations, rather than safety and sound engineering, at its core. Of 15 Boeing aerospace workers asked at random, 10 said they would not fly on the Dreamliner. Eventually, trust broke down, and engineers, under pressure, delivered what the management had set in place. Incidents such as incomplete aircraft parts, low wages (especially for overtime work) and unrealistic expectations led to a product that was simply not ready to enter the industry (Huff Post Business, 2014).
Ethical
Rights and Responsibility of Engineers (Conflict of interest)

In the case study presented above, there is a clear conflict of interest between the managers and the engineers. This conflict of interest in the key design and manufacture of the aircraft directly affects its safety and reliability. Since these design decisions bear directly on public safety, it is essential that the engineers act ethically. First, engineers should, regardless of directive, follow the company's safety policy. However, if the company safety policy is felt to be insufficient for public safety, they can look to the statements in the professional ethics codes that describe safety and how it explicitly overrides any such conflict of interest. Such actions provide an objective way to design a product that is in line with the responsibilities of the engineer, and compel managers to accept such a design or face criminal charges. Often, for such complex systems, there are no clear directives in safety codes that prescribe an exact design; there are, however, directives for actions that can be taken to ensure that the design meets the safety requirements, such as submitting the design to standard engineering checks against the specification (Fleddermann, 2012, p. 105).
3. Inadequate testing for new lithium batteries
Safety
The implementation of lithium-ion batteries previously adapted from cars led to a critical design flaw. Within aircraft compartments, operating temperatures were significantly higher than in other vehicles, causing fluid to leak from within the cells and ignite, resulting in a complete burnout of the battery system. This caused an emergency landing and could easily have resulted in a power failure in flight, with grave consequences for those on board.

The 787 is equipped with lithium-ion batteries, which are lighter and more powerful than conventional batteries, but which have been known to cause fires in cars, computers and mobile phones. Shortly after, all 50 of the Dreamliners that had been delivered to airlines were grounded (The Economist, 2013). The lithium-ion batteries consistently set the entire battery system aflame because of the fluid's high tendency to leak from one cell of the battery to another at excessively high temperatures; the leaked fluid ignites outside the chambers, and as a result the whole system catches fire. Even with this prior knowledge, Boeing decided to adopt lithium-ion batteries. Battery fires duly occurred, forcing Boeing to ground its entire fleet of 787s, which resulted in massive financial losses and led to a complete redesign of the lithium-ion batteries.
Ethics
Ethical Tool: Line Drawing (Boeing's point of view)
We can set out a negative paradigm (NP) and a positive paradigm (PP) and place Boeing's actions and suitable scenarios between them to get a better idea of the problem.

NP: Use the existing lithium-ion batteries without any prior testing.
PP: Completely redesign the batteries and test them rigorously to show that failure under operating conditions would be extremely unlikely, and develop a standby battery and flame-extinguisher system.
P1: Tailor the lithium-ion batteries to meet the minimum benchmark requirements. (5/10)
P2: Modify the batteries to meet standards and conduct extensive testing to ensure they work. (7/10)
SC1: Redesign the battery to ensure it is very stable in operating conditions; test it extensively before release. (8.5/10)
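As a minimal sketch of how such a line-drawing scale can be tabulated, the snippet below assumes NP sits at 0 and PP at 10, using the scores given above; the acceptability threshold is a hypothetical choice for illustration, not part of the original tool:

```python
# A minimal sketch of the line-drawing scale above. NP is assumed to sit at
# 0 and PP at 10; the acceptability threshold is a hypothetical illustration.

options = {
    "NP: use existing batteries, no prior testing": 0.0,
    "P1: tailor batteries to minimum benchmark": 5.0,
    "P2: modify to standards, extensive testing": 7.0,
    "SC1: redesign for stability, test before release": 8.5,
    "PP: full redesign, rigorous tests, standby systems": 10.0,
}

ACCEPTABLE = 7.0  # assumed: an action must sit well toward PP to be defensible

# Rank each action by its position on the NP-PP line and flag acceptability.
for action, score in sorted(options.items(), key=lambda kv: kv[1], reverse=True):
    verdict = "acceptable" if score >= ACCEPTABLE else "not acceptable"
    print(f"{score:4.1f}  {verdict:14s}  {action}")
```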

Such considerations had to be taken into account for both ethical and safety reasons: first, the failure of a critical system can affect the lives of many passengers in flight; second, as a commercial aviation company representing the aviation industry, Boeing has an ethical responsibility to make safety and reliability its top priorities.
4. Pressure to rush entire manufacturing/assembly lines
Safety
With fast production now the top priority, engineers were forced to compromise safety standards to meet production requirements. Safety design parameters were not strictly adhered to, and this subsequently presented itself as major safety problems in flight and on the ground.

An engineer from the South Carolina plant said that it was far behind schedule and that management insisted on sending unfinished planes to Seattle in order to keep up the planned rate. The introduction to the line of the longer 787-9 model, which is slightly different from the 787-8, caused production to dip from 10 to seven a month as workers struggled with new instructions and techniques. In addition, he said that around the time the experienced workers were let go, the company demanded a higher level of output, which caused workers to rush jobs. Boeing plans to increase production of the Dreamliner to 10 per month, then 12 per month in 2016 and 14 in 2020 (The International Business Times, 2014).
Ethics
Causes of technological disaster (four-factor approach)

Technical design factors (faulty testing procedures)
The pressure for speed led several production sites to test only the critical functioning of the system, as their requirements demanded, before sending it on to the next plant. Other substitute components were therefore never tested to ensure they worked with the mainboard system.

Human factors (unethical/wilful acts)
Along with rushing production, the firm kept wages and overtime pay very low in order to save costs. This practice led to poor professional conduct when servicing equipment, especially when staff were required to work overtime to meet deadlines. As a result, entire sections of the aircraft were found missing when it reached the next assembly line.

Organizational system factors (communication failure/policy failure)
When engineers who had received parts with missing sections from a prior assembly line complained to the managers, the managers instructed them to fix those problems on top of their standard line of work. This short-term handling led to a systematic failure of the supply chain for aircraft assembly, with severe deterioration in assembly quality as engineers rushed simply to deliver the aircraft on schedule.

Socio-cultural factors (attitudes towards risk/value placed on safety)
With the management's main objective being to deliver large numbers as fast as possible, other priorities such as safety had to realign themselves to that objective. The resulting cost-cutting measures, inadequate testing and evaluation, and disconnect between engineers and managers led to a huge compromise of safety.

2.1.5 How the matter was resolved


There were several problems with the Boeing 787, the most significant being the lithium-ion batteries. These eventually led to the grounding of the fleet, with the FAA requiring Boeing to fix the problems before the planes could fly again.

Redesigning the batteries
Most aircraft use hydraulics or pneumatics to operate on-board systems; the Boeing 787 instead relies on a large array of lithium batteries. The fix required over 300,000 hours of engineering time. First, the problem with the original battery was identified: at the elevated temperatures reached in operation, one cell could rupture and spill flammable material into the battery system, causing other cells to rupture as well and the whole system to burn up.

The problem was solved in-house by experienced engineers using a triple-pronged approach. First, the cell and battery build process was enhanced, so that cells would not break open as easily at high temperature, and the testing procedure for manufactured cells was revised. Second, the design of the complete battery pack was altered to operate in a narrower voltage range to reduce heat, and a new charging system was developed to prevent over-charging damage. Lastly, Boeing developed a battery enclosure to protect the aircraft in the event of a failure.

To convince the FAA that the new battery was safe, Boeing carried out extensive testing in which batteries were intentionally driven to failure. The new battery never came close to the temperatures seen in the original incidents, and when it did eventually fail, only two cells vented; there was no fire, and the situation was brought under control easily.
2.2 Tacoma Bridge
2.2.1 Introduction

The Tacoma Narrows Bridge at Puget Sound in the state of Washington was completed and opened to traffic on 1st July 1940. It was built at a total cost of $6.4 million to carry traffic across the Tacoma Narrows between the city of Tacoma and the Kitsap Peninsula, and was the third-longest suspension bridge in the world at the time. During its construction the bridge exhibited large vertical oscillations, earning it the nickname "Galloping Gertie" among the people of Tacoma. It collapsed dramatically on 7th November of the same year under high wind conditions.
2.2.2 Type of Accident

Engineered Accident
The Tacoma Narrows Bridge collapse is an example of an engineered accident. A proposed new, cost-effective and slender bridge design greatly reduced the stiffness of the structure. A lack of knowledge, coupled with inadequate testing and modelling, resulted in its failure.
Causes of Failure
Technical design factors
In the case of the Tacoma Narrows Bridge collapse, the bridge engineer Leon Moisseiff deviated from conventional bridge designs and introduced a new design that lacked theoretical and experiential backing. This increased the risk and pushed the boundaries of acceptable risk; yet nothing was done to mitigate the risks, and this contributed to the structural failure. Furthermore, theoretical analysis was used as the basis for design decisions when there was no adequately recognised theory on which to base the design of the bridge.

In the absence of such knowledge and experience, the engineers could have:
1. Relied on experiential knowledge and worked with established, proven and conventional designs, with slight modifications that would not have compromised the structural integrity.
2. Conducted detailed testing and modelling to determine the structural capability and capacity under the imposed conditions that could be expected. This would have allowed a better understanding of the new design and structure, so that remedial actions could have followed where necessary to meet the required standards.
Organizational system factors
The Public Works Administration (PWA) made a faulty group decision in approving Moisseiff's design on the strength of its cost savings and his reputation. They failed to analyse the pros and cons of each proposed design and instead selected the design that incurred the least cost, rather than placing emphasis on a safe and reliable structure.

Though they faced cost pressure from a federal government that was not keen to finance the project, they had the responsibility to make the right decision based on the available information and resources, without taking unnecessary risks that would place human lives in danger. Their decision to go with Moisseiff's design resulted in a great reduction in the stiffness of the bridge, which caused the structural failure.
2.2.3 Safety & Ethical Issues

Insufficient experiential knowledge in design process
The design of the Tacoma Narrows Bridge was based on a theory of elastic distribution described in a paper published by Leon Moisseiff and Frederick Lienhard, a Port of New York Authority engineer. The paper theorized that the stiffness of the main cables (via the suspenders) would absorb up to one half of the static wind pressure pushing a suspended structure laterally, and that this energy would then be transmitted to the anchorages and towers. This theory went beyond the conventional deflection theory developed by the Austrian engineer Josef Melan.

Based on this theory, Moisseiff proposed stiffening the bridge with a set of eight-foot-deep plate girders rather than the 25-foot-deep trusses proposed by the Washington Department of Highways. This change contributed substantially to the difference in the estimated cost of the project. Additionally, because fairly light traffic was projected, the bridge was designed with only two opposing lanes and a total width of only 39 feet, narrow relative to its length. With only the eight-foot-deep plate girders providing depth, the bridge's roadway section was substantially reduced.

The use of such shallow and narrow girders proved to be the undoing of the bridge. With such thin roadway support girders, the deck was insufficiently rigid and was easily moved about by winds. The bridge became known for its movement: a modest wind could cause alternate halves of the centre span to visibly rise and fall several feet over four- to five-second intervals.

Figure 2.2.3: Construction of Tacoma Narrows Bridge


Inadequate Testing and Modelling
The Tacoma Narrows Bridge was unusually long and narrow compared with previously built suspension bridges. The original design called for stiffening the suspended structure with trusses; however, funds were not available, and a cheaper stiffening was adopted using eight-foot-tall girders running the length of the bridge on each side. Unfortunately, the stiffening was inadequate. The theory of the aerodynamic stability of suspension bridges had not yet been worked out, and wind-tunnel facilities were not readily available because of the pre-war military effort.

Because of the oscillation of the bridge, the Washington Toll Bridge Authority hired engineering professor Frederick Burt Farquharson of the University of Washington to undertake wind-tunnel tests and develop solutions to reduce the oscillations. The studies and tests produced proposals to modify the bridge. However, they were never carried out: the bridge collapsed five days after the studies concluded.
2.2.4 The Collapse & Aftermath
On 7th November 1940, the wind was blowing through the Narrows at a steady speed of about 42 miles per hour. At about 10 am, the bridge began to oscillate severely due to aero-elastic flutter in the torsional mode, and it was closed to traffic. At 11:10 am, the centre span collapsed. Fortunately, there were no human casualties, though a dog died in the collapse.

As a result of the disaster, more testing and modelling were conducted to study and analyse the aerodynamics of bridges. This allowed a better understanding of such structures, and engineers could take remedial action to modify designs where required so that structural integrity and safety were not compromised. The next generation of large suspension bridges featured deep and rigid stiffening trusses, and in 1950 a new suspension bridge with an improved design was erected across the Narrows and opened to the public.
2.2.5 Conclusion
It is impossible for an engineer to anticipate every technical problem which can result in failure; there is always uncertainty involved with new products. A failure mode is a way in which a structure, mechanism or process can malfunction (Harris et al., 2013). There are so many ways a new product can fail under various conditions that it is impossible to anticipate or predict them all. An engineer can mitigate this uncertainty by applying useful tools that can guide him (e.g. fault tree analysis). This reduces the risk and uncertainty surrounding the new product; however, it does not guarantee that the product is 100% safe and will never fail in use.

Therefore, it is important for an engineer to produce designs that are backed by sound technical knowledge and expertise, with adequate testing done to ensure remedial actions are in place to support the new product.
2.3 Apollo 13
"Within less than a minute we had this cascade of systems failures throughout the
spacecraft It was all at one time - a monstrous failure." - Liebergot, NASA flight controller
2.3.1 Summary
The Apollo 13 spacecraft was launched on the 11th April 1970 on a mission to land on the
Moon. Unfortunately, the mission was aborted halfway due to an explosion in the craft 56
hours later. The explosion was attributed to the malfunction of the second oxygen tank in the
service module (figure #). Despite the dangerous situation, the flight controllers at NASA
mission control room, together with team of engineers and designers, worked tirelessly
around the clock to get the astronauts safely back to Earth.

Table 1: Parties Involved
- NASA Flight Control: Kranz (lead) and team, in charge of the Apollo 13 mission
- North American Rockwell (NR): engineers in charge of the oxygen tank design and testing
- Beech Aircraft Corp. (BAC): company which assembled the oxygen tank

Figure #: The Apollo 13 Spacecraft


Table 2: Parts Description of Apollo 13
- Service Module (SM): houses the oxygen tank and fuel
- Command Module (CM): where the astronauts ride for most of the journey
- Lunar Module (LM): used to land on the Moon and meant to be left behind

2.3.2 Type of Accident

Engineered
The Apollo 13 accident was an engineered accident because it happened due to a design flaw. NR specified that the tank be designed to run at both 28V and 65V direct current (DC): 28V was the voltage used in the spacecraft, while 65V was used on the ground to carry out tank pressurization. However, BAC designed every part of the tank for dual-voltage operation except the thermostat switches, which was a serious oversight. The thermostat switches were a crucial safety mechanism to prevent the tank temperature from exceeding 80°F.
Procedural
The Apollo 13 accident can also be classified as a procedural accident because the NR engineers testing the tank did encounter indications that it was faulty, yet still approved the design for installation. When detanking (removing oxygen from the tank), the NR engineers found it difficult to do so because gas was leaking from a displaced, loose-fitting tube in the tank. Unaware of the underlying problem, the engineers improvised an unconventional method to detank: they subjected the tank to extended heating to vaporize the liquid oxygen so that they could simply vent the gas out.

When the tank heater was turned on at 65V, the incompatible thermostat switches (designed for 28V) melted shut and could no longer act as a safety mechanism. The temperature in the tank therefore rose steadily to about 1000°F during the boil-off, damaging the internal tank wire insulation. Unbeknown to the engineers, the tank was now effectively a bomb when filled with oxygen, because any spark from the damaged wires would cause an explosion. The tank was then filled with oxygen and mounted on Apollo 13.
Systemic Accident
The Apollo 13 incident is also a systemic accident because it was the result of complex interactions among the engineers and technicians handling the oxygen tank, from its design to its testing and finally to equipping Apollo 13 with it. Individually, the minor failures of the engineers and technicians would not have caused the explosion, but collectively they set the stage for an accident.

The first failure was in the handling of the oxygen tank by BAC technicians, who accidentally dropped the tank about 2 inches while transferring it from a spacecraft. This displaced the filling tubes in the tank. The tank underwent acceptance testing; no exterior damage was detected, and there were no problems detanking the equipment at that stage.

The displaced filling tubes later became a problem when NR was detanking, motivating the engineers to use unconventional methods and thereby damage the electrical wire insulation. This was the second failure, because the difficulty in detanking should itself have been sufficient indication of a tank fault. Instead of raising concerns over the safety of using the tank, the engineers found a way around the problem to get the job done.
2.3.3 Causes of Accident

Technical design factors:
- Flawed design
- Defective equipment
- Ineffective testing procedures

Organisational system factors:
- Cost/schedule pressures
- Communication failure
- Policy failure

Human factors:
- Negligence/carelessness
- Misjudgement
- Unethical behaviour

Socio-cultural factors:
- Attitude towards risk
- Value of safety with respect to other factors
- Institutional mechanisms

2.3.4 Resolving Safety Issues

The Situation
When the astronauts switched on the power fans for a routine cryo-stir of the oxygen tank, the damaged wiring created sparks which set off an explosion in the oxygen-rich tank. The explosion of the oxygen tank, housed in the service module, took out the spacecraft's main supply of fuel and oxygen. This effectively crippled the command module, where the astronauts were.
Crisis Control

"Suddenly and unexpectedly we may find ourselves in a role where our performance has ultimate consequences." - Gene Kranz, NASA Flight Director, 1970

Crisis control takes preserving safety to a whole new level. With the added pressure of time and limited resources, engineers have to be able to make the right call, be it based on experience or ethics, at the right time, in order to protect the safety of other human beings. This was demonstrated by the engineers and flight controllers of NASA, who worked round the clock in the flight control room to devise a way home for the Apollo 13 astronauts.

Risk Assessment by Kranz
Lead flight controller Gene Kranz made the call to abort the mission immediately when the astronauts reported the explosion from space. Kranz had to weigh his options carefully, as they would mean life or death for the astronauts stuck in Apollo 13, which was now drifting in space with a heavily damaged service module.

Kranz and his team assessed the situation and instructed the astronauts to shut down the command module to save power and to power up the lunar module before the oxygen supply ran out. The lunar module had its own supply of power and oxygen; the plan was to use the LM as a lifeboat for the moment.
Table 3: Risk Assessment of Abort Routes

Direct abort:
- Route: abort on the front side of the Moon and be back on Earth in 1.5 days; requires the main engine propulsion and perfect execution of the spacecraft manoeuvre.
- Possible situation/harm: engine failure due to the explosion; the spacecraft crashes into the Moon's surface.
- Magnitude of harm: 10 (definitely fatal, leaving absolutely no chance of survival).
- Likelihood of harm: 8 (uncertainty over whether the engine was damaged, as it was very close to the explosion).
- Risk: Magnitude x Likelihood = 10 x 8 = 80.
- Benefit: faster return if executed successfully.

Circumlunar:
- Route: travel around the Moon, which would take 4-5 days, following a free-return trajectory (does not require propulsion); resources in the lunar module were meant for 2 people for 2 days.
- Possible harm: running out of oxygen, power and food.
- Magnitude of harm: 9 (running out of oxygen meant certain death).
- Likelihood of harm: 7 (there was a small chance of figuring out a way to conserve the resources for 4-5 days).
- Risk: Magnitude x Likelihood = 9 x 7 = 63.
- Benefit: slower return buys more time to devise a survival plan.

Kranz had to choose between these two already high-risk return plans (refer to Table 3). Under such pressure and in such a dilemma, Kranz decided on the circumlunar route, buying his team more time to devise a strategy to keep the astronauts alive.
Safety Issues
Rising CO2 Levels
The CO2 levels in the LM rose to dangerous levels while the astronauts were residing in the lifeboat. They had spare CO2 filter canisters from the CM; however, the canisters were cubic and could not fit into the circular inlets of the LM's air purifier. This was another design flaw in the spacecraft, as the parts within each compartment differed from one another; standardizing the parts would have made the emergency easier to cope with.

Engineers in flight control immediately improvised a way to modify the canister fittings using materials that would be available on board the craft. They communicated the ad-hoc design to the astronauts, who were able to reconstruct the canisters as specified and thus make them fit into the circular canister sockets.
Restarting the Command Module in Mid-Flight
For re-entry into the Earth's atmosphere, the engineers in flight control faced another safety issue: the CM had to be restarted from shutdown in mid-flight, which had never been done before. Moreover, the engineers had to figure out a new way to separate the LM to a safe distance from the CM during re-entry, because the SM, which would normally have been used for this, was damaged in the explosion. A team of six engineers from the University of Toronto worked on the strategy and found a solution within a day. The method was accurately calculated and later executed successfully by the astronauts.
Ethical Issues
Next we will demonstrate the application of ethical theory and ethical problem-solving tools
in our analysis of the Apollo 13 accident.
Applying Ethical Theory

Ethics Category: Duty Ethics

Action 1: NR not recognising the faulty tank as a safety hazard and using unconventional detanking methods.
- North American Rockwell (NR): Duty ethics was violated because it was NR's responsibility to identify all hazards with the equipment. Instead, they resorted to unconventional methods to detank when the normal method was not working properly, misjudging the method as flawed rather than the equipment as faulty.
- Beech Aircraft Corp. (BAC): The unconventional detanking was done by NR on the assumption that the thermostat switches were compatible and functioning. Beech marred the trust NR had placed in it by failing to assemble the tank according to specification; duty ethics was violated.

Action 2: Beech's failure to change the thermostat switches to a 65V-compatible version as specified by NR.
- North American Rockwell (NR): Duty ethics was violated because, despite Beech's oversight, NR's tests should have been rigorous enough to detect the faulty equipment; they should specifically have included a thermostat switch test.
- Beech Aircraft Corp. (BAC): This was a serious violation of duty ethics through negligence. It also highlights loopholes in testing, which was not thorough enough to detect the flaw in the assembly.

Ethical Tool: Line Drawing

Ethics line drawing from the point of view of North American Rockwell (NR). Each point's score gives its location on the line from the negative paradigm (0/10, far left) to the positive paradigm (10/10, far right):

- Positive Paradigm (PP): NR runs thorough testing on the tank before accepting it from Beech, detects the incompatible thermostats and loose tubes, and rectifies them. (10/10)
- Negative Paradigm (NP): NR does not run any tests on the tank and equips it onto Apollo 13. (0/10)
- Point under study 1 (P1): NR does not identify the hazard and unintentionally makes matters worse by conducting unconventional detanking. (1/10)
- Point under study 2 (P2): NR trusts Beech to have properly assembled the tank and does not adequately test the equipment itself. (3/10)
- Scenario 1 (SC1): NR identifies the difficulty with conventional detanking as being due to faulty equipment, runs further tests on the tank, and rectifies the loose filling tubes. (8/10)

Resulting order on the line: NP - P1 - P2 - SC1 - PP.

Ethical Tool: Flow Chart

3. Challenges in Testing
When testing the quality, performance and reliability of new products, engineers often face limitations in determining acceptable risks and subsequently devising safety measures.
3.1 Limitation of Resources
3.1.1 Inadequate time to test and evaluate new design
When engineers come up with new designs for a product or structure, there is always an element of uncertainty in its functional capabilities and safety aspects. To mitigate such factors, risk assessment and proper testing have to be carried out to ensure the design is reliable and safe for human use. This requires time, effort and money. In real-life scenarios, time is of the essence: it provides the competitive advantage over other companies. Thus, to stay ahead of their competitors, companies tend to rush new products into manufacturing so that they can be released to the market at an earlier date. This puts time pressure on engineers to deliver a design that meets the standards of acceptable practice. As a result, inadequate testing and modelling are conducted, leading to product failure through a lack of understanding and evaluation of the design.

Furthermore, safety considerations are compromised for the sake of releasing the product earlier. Potentially safer alternative designs may be overlooked under time pressure; the company thus fails to select the best solution and implement it in its design.

3.1.2 Limitation due to technologies

An engineer may have ideas or concepts that would, through innovation, result in a new design. However, that innovation may not be supported by the technology available at the time. For example, there may be no adequate and relevant testing procedures that can be performed on new designs to learn and understand the complexity and working principles of the product. Technology thus plays an important role in ensuring that a design is robust, reliable and safe.
3.2 Tight Coupling & Complex Interactions
3.2.1 Collaboration of complex systems
The successful creation of a complex system requires coordination not only between multi-disciplinary fields of engineering but also effective communication between the different segment groups of a project, such as the manufacturing line, the designers, the engineers and even the managers. For a product to pass successfully from the ideation to the creation phase, people from various backgrounds must at any one time simultaneously solve problems and communicate their solutions to one another.

One way to achieve effective management of work and communication is through "tight coupling" and "complex interactions". Tight coupling here means building technological processes such that a system requirement or change in one place is immediately reflected in the relevant adjacent or connected systems. For example, in the integrated systems engineering of aircraft, when a certain part is modified, all existing systems affected by it are automatically updated to reflect the modification. This mode of technological connection reduces human error and miscommunication, and allows almost instantaneous passing of information.
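A minimal sketch of this propagation idea is given below; the subsystem names and the notification scheme are illustrative assumptions, not a description of any real aerospace tool:

```python
# A minimal sketch of 'tight coupling' as described above: a change in one
# part is pushed immediately to every dependent system. Names are invented.

class Subsystem:
    def __init__(self, name: str):
        self.name = name
        self.dependents: list["Subsystem"] = []

    def couple(self, other: "Subsystem") -> None:
        """Register another subsystem to be notified whenever this one changes."""
        self.dependents.append(other)

    def modify(self, change: str) -> None:
        print(f"{self.name}: applied change '{change}'")
        # Tight coupling: the change is reflected immediately downstream.
        for dep in self.dependents:
            dep.on_upstream_change(self.name, change)

    def on_upstream_change(self, source: str, change: str) -> None:
        print(f"{self.name}: re-validating against '{change}' from {source}")

wing = Subsystem("wing structure")
wiring = Subsystem("electrical wiring")
fuel = Subsystem("fuel system")
wing.couple(wiring)
wing.couple(fuel)
wing.modify("increase spar thickness")
```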
Complex interactions, in turn, refer to the process of systematically organizing information systems to facilitate discussion of problems on a common platform. For example, when an engineer has a design query from the assembly line, he can access the manufacturing details on an integrated information system and extract the information in the format he needs to understand the problem and communicate effectively with the manufacturer.

Together, these technological and information systems help direct communications clearly and provide a standardized platform for people from different fields to come together and solve a problem.

3.3 Uncertainty
3.3.1 Acceptable Risk
Determining the acceptable level of risk is a difficult task for an engineer. Ethical tools and theories can only serve as guides; the right decision is not always clear. The engineer has to make a value judgement based on his knowledge, experience and moral values when dealing with risk. For example, when using cost-benefit analysis, he should understand the method's inherent limitations and integrate ethics into the decision to the best of his ability.
3.3.2 Designing for Safety
As mentioned earlier, innovation sometimes means dealing with a product the engineer has little or no experience with. One of the tools developed for this involves identifying failure modes, the possible ways in which a product can malfunction (Harris et al., 2013). By constructing a fault tree (Figure 1), an engineer can systematically analyse possible risks and subsequently devise safety mechanisms to avoid product failure.

Figure 1: Fault Tree Analysis for a Nuclear Plant (Harris et al., 2013)
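As a minimal sketch of how a fault tree like Figure 1 can be evaluated numerically, the snippet below assumes independent basic events; the tree shape and probabilities are hypothetical, not taken from Harris et al.:

```python
# A minimal sketch of fault-tree evaluation. Basic events are assumed
# independent; the tree and probabilities are hypothetical.

def or_gate(*probs: float) -> float:
    """Output occurs if ANY input occurs: 1 - product of (1 - p)."""
    out = 1.0
    for p in probs:
        out *= 1.0 - p
    return 1.0 - out

def and_gate(*probs: float) -> float:
    """Output occurs only if ALL inputs occur: product of the p's."""
    out = 1.0
    for p in probs:
        out *= p
    return out

# Hypothetical tree: coolant loss requires pump failure AND backup failure;
# the top event is coolant loss OR a control-system failure.
p_pump, p_backup, p_control = 1e-3, 5e-2, 1e-4
p_coolant_loss = and_gate(p_pump, p_backup)
p_top_event = or_gate(p_coolant_loss, p_control)
print(f"P(coolant loss) = {p_coolant_loss:.2e}")
print(f"P(top event)    = {p_top_event:.2e}")
```

Working down such a tree makes explicit which combinations of minor failures dominate the overall risk, which is exactly what a designer needs in order to place safety mechanisms where they matter most.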

4. Conclusion
Criteria for Safe Design
According to Fleddermann (2012), four criteria must be met to ensure a safe design:
1. The design must comply with the ethical law.
2. The design must meet the standard of acceptable practice.
3. Potentially safer alternatives must have been considered.
4. An attempt must have been made to foresee possible misuse of the product, and the design should help to avoid such misuse.

The criteria listed above serve as a rough guideline for assessing the safety of a product design, but they are not exhaustive. Moreover, terms such as "ethical law" and "standards" are sometimes loosely defined, and their interpretation is very subjective. Therefore, it is imperative for an engineer to apply his own moral judgement when it comes to issues regarding safety. Ethical theories and tools are essential in the engineering profession, especially when the decision taken has ethical implications.

5. References

Encyclopedia Astronautica. (1970). Apollo 13 Review Board publishes result of investigation. Retrieved from http://www.astronautix.com/details/apo27567.htm

Feld, J., & Carper, K. L. (1997). Construction Failure. Wiley.

Fleddermann, C. B. (2012). Engineering Ethics. Prentice Hall.

Fuller, C. R. L., & Lang, R. H. Twin Views of the Tacoma Narrows Bridge Collapse. Retrieved from https://www.aapt.org/Store/upload/tacoma_narrows2.pdf

Guyer, J. P. Ethical Issues from the Tacoma Narrows Bridge Collapse. Retrieved from https://www.cedengineering.com/upload/Ethical%20Issues%20Tacoma%20Narrows.pdf

Harris, C. E., et al. (2013). Engineering Ethics: Concepts and Cases. Cengage Learning.

Huff Post Business. (2014). At Boeing, a Disconnect Between Engineers and Executives. Retrieved from http://www.huffingtonpost.com/will-jordan/boeing-dreamliner-engineers-executives_b_5797414.html

NASA. (2009). The Apollo 13 Accident. Retrieved from http://nssdc.gsfc.nasa.gov/planetary/lunar/ap13acc.html

National Academy of Engineering. (2012). Fall Issue of The Bridge on Social Sciences and Engineering Practice. Retrieved from https://www.nae.edu/Publications/Bridge/62556/62560.aspx

Samuel, A., & Weir, J. (1999). Introduction to Engineering Design. Elsevier Science.

The Economist. (2013). Boeing's 787 Dreamliner: Going nowhere. Retrieved from http://www.economist.com/blogs/gulliver/2013/02/boeings-787-dreamliner

The International Business Times. (2014). Boeing's Internal War: Seattle Engineers Point Finger At South Carolina's Shoddy Work On The 787 Dreamliner. Retrieved from http://www.ibtimes.com/boeings-internal-war-seattle-engineers-point-finger-south-carolinas-shoddy-work-787-dreamliner

The Seattle Times. (2013). Boeing 787's problems blamed on outsourcing, lack of oversight. Retrieved from http://www.seattletimes.com/business/boeing-787rsquos-problems-blamed-on-outsourcing-lack-of-oversight/

Whitwam, R. (2013). How Boeing fixed the 787 Dreamliner. Retrieved from http://www.geek.com/science/how-boeing-fixed-the-787-dreamliner-1552766/

Williamson, M. (2002). Aiming for the Moon: the engineering challenge of Apollo. Engineering Science and Education Journal, 11(5), 164-172. doi:10.1049/esej:20020501
