EG2401 Engineering
Professionalism
How Safe is Safe?
A0111151E Muhammad Rias
A0119681Y Muhammad Firdaus
A0094621H Roshan Kumar
A0111231H Lukmanul Hakim
Contents
1. Introduction............................................................................................................ 3
1.1 Safety, Risks and Innovation..................................................................................3
1.2 Conceptualizing Safety and Risk: An Engineer's Approach..........................................4
Risk-Benefit Analysis............................................................................................ 4
1.3 Understanding Accidents....................................................................................... 4
1.3.1 Procedural Accidents...................................................................................... 4
1.3.2 Engineered Accidents..................................................................................... 5
1.3.3 Systemic Accidents........................................................................................ 5
1.4 Causes of Engineering Accidents.............................................................................5
1.4.1 Technical Design........................................................................................... 5
1.4.2 Human Factors.............................................................................................. 6
1.4.3 Organisational System.................................................................................... 6
1.4.4 Socio-Cultural.............................................................................................. 6
1.5 Safety and Ethical Consideration.............................................................................7
1.5.1 Ethical Theories and Tools...............................................................................7
2. Case Studies........................................................................................................... 8
2.1 Boeing 787........................................................................................................ 8
2.1.1 Background.................................................................................................. 8
2.1.2 Engineered Problem....................................................................................... 8
2.1.3 Procedural Problem........................................................................................ 8
2.1.4 Safety/Ethical Issues....................................................................................... 8
2.1.5 How the matter was resolved..........................................................................13
2.2 Tacoma Bridge................................................................................................. 14
2.2.1 Introduction.................................................................................................. 14
2.2.2 Type of Accident.......................................................................................... 14
2.2.3
3. Challenges in Testing.............................................................................................. 24
3.1 Limitation of resources....................................................................................... 24
3.1.1 Inadequate time to test and evaluate new design...................................................24
3.1.2 Limitation due to technologies........................................................................24
3.2 Tight Coupling Complex Interactions.....................................................................24
3.2.1 Collaboration of complex systems....................................................................24
3.3 Uncertainty.................................................................................................. 25
3.3.1 Acceptable Risk.......................................................................................... 25
3.3.2 Designing for Safety..................................................................................... 25
4. Conclusion........................................................................................................... 26
Criteria for Safe Design........................................................................................... 26
5. References........................................................................................................... 27
1. Introduction
"No duty of the engineer is more important than her duty to protect the safety and well-being
of the public." - Fleddermann
Safety is a primary concern for engineers in product design because product failure can have
far-reaching and disastrous consequences on people's lives. Often, engineering disasters or
fatal product failures are attributed to unethical engineering practices, which call into
question the competency of engineers as professionals. In some cases, product failures are
unavoidable, resulting in "normal accidents" (Harris et al., 2013), which the engineers could
never have anticipated. Nevertheless, it is the responsibility of an engineer to hold paramount
the safety and well-being of the public, insofar as he is able to, and to design products which
are adequately tested and proven to be safe.
1.1 Safety, Risks and Innovation
Safety ∝ 1 / Risk
Safety and Risk are inversely related (Harris et al., 2013). Hence, minimal risk will
essentially maximize safety.
Innovation is an integral part of engineering practice and invariably comes with new risks.
The novelty of a new design suggests that there is no prior experience with the risks
associated with the product. When engineers launch a novel product, they may not be able to
gauge all the implications that it may have on society. It is not humanly possible to account
for and eliminate every risk related to a new product; indeed, it would be paradoxical to
account for risks which the engineer is not aware of to begin with. Also, it may not be
economically feasible to spend the money and effort to eliminate risks which are highly
unlikely. Therefore, in light of these limitations, an engineer has to take acceptable risks such
that safety is maximized without hindering innovative progress, which is a crucial but
difficult task.
fail safely without causing any damage to the user, society or the environment. Therefore, an
engineer must be able to design a product that complies with the law, meets standards of
acceptable practice and is safe for usage for a period of time.
1.4.2 Human Factors
Human factors contributing to product failures refer to the limitations of engineers as human
beings in operating ideally according to the standards prescribed, whether procedural or
moral. For instance, design failure may result from misjudgement, ignorance or unethical
practices. Misjudgement occurs due to a lack of experience in dealing with a particular
product, whereby the engineer either overestimates or underestimates the performance
capabilities of his product. Moreover, misjudgement might result when engineers follow
unconventional ways of dealing with product design or testing, which may increase risk and
compromise safety. Their decisions on acceptable risks may deviate from what is commonly
accepted and may thus result in product failure if not properly evaluated. On the other hand,
unethical practices are intentional behaviour which is not excusable. The moral values and
ethics the engineer holds may largely influence critical decisions with regard to safety issues.
When these values are not upheld or do not meet professional moral standards, the
competency of the engineer as a professional is called into question.
1.4.3 Organisational System
It is clear that the organizational system is a key component of both good safety cultures and
high-reliability organizations. But learning can be thwarted by well-known difficulties in
handling information (too much information, inappropriate communication channels,
incomplete or inappropriate information sources, or failure to connect available data), and
these difficulties can pose acute challenges for safety. For example, an incomplete or
inaccurate problem representation might develop at the level of the organization as a whole
and thus influence the interpretations and decisions of the organization's individual members.
Such a representation may arise through organizational rigidity of beliefs about what is and is
not to be considered a hazard (National Academy of Engineering, 2012).
1.4.4 Socio-Cultural
2. Case Studies
2.1 Boeing 787
2.1.1 Background
Boeing Commercial Airplanes launched the 787 Dreamliner in 2004, an all-new,
super-efficient airplane. Its advanced design was unparalleled by its competitors. However,
soon after its launch, it ran into numerous problems which eventually led to its grounding.
2.1.2 Engineered Problem
The implementation of lithium-ion batteries previously adapted from cars led to a critical
design flaw. Within aircraft compartments, operating temperatures were significantly higher
than in other vehicles, causing the fluid from within the cells to leak and ignite, resulting in a
complete burnout of the battery system.
2.1.3 Procedural Problem
The shift in management style from a safety-driven design approach to a high-supply/low-cost
approach led to engineers putting together a plane lacking in structural integrity, with
entire sections of its wiring missing, inadequate testing of the system to ensure it met the
safety requirements, and a general lack of standby safety features that left the plane
completely vulnerable in case anything went wrong.
2.1.4 Safety/Ethical Issues
1. Outsourcing essential parts to save on manufacturing costs
Safety
By outsourcing critical components such as the electrical and power panels, Boeing received
lower-quality products owing to less oversight on quality and less stringent manufacturing
processes overseas. As a result, Boeing faced many recurrent electrical fires during
operation, which forced the plane to land several times. Such outsourcing led to the
malfunctioning of critical components, endangering the safety of the passengers on board.
With the rapid onset of globalization, Boeing started to face increasing competition from
other aircraft manufacturers such as Airbus. This meant the company had to find ways to
remain economically competitive and provide new air travel solutions to stay relevant. It
therefore launched the Boeing 787 and decided to outsource its production lines so as to save
costs.
According to one of the engineers involved in the project, getting low-quality products
becomes a significant problem when dealing with key components such as the power panels
and the electrical system: "However, what is very different on the 787 is the structure of the
outsourcing. You only know what's going on with your tier 1 supplier. You have no
visibility, no coordination, no real understanding of how all the pieces fit together." (The
Seattle Times, 2013).
Ethical
Utilitarian-thinking (Cost Benefit Approach)
With hindsight of what occurred to the Boeing 787 after its launch, we can adopt a
cost-benefit approach to understand Boeing's actions. Cost-benefit analysis is sometimes
referred to as risk-benefit analysis because much of the analysis requires estimating the
probability of certain benefits and harms. Using this approach, Boeing evaluated the
available options it had, such as in-house production or outsourcing. It then assessed the
costs and benefits of each action and finally chose to outsource, as outsourcing would
significantly reduce its manufacturing costs. In taking this course of action, it predicted that
doing so would only raise slight uncertainty over the quality of its products. Hence, it made
financial sense to adopt such an approach. However, this approach did not work out, due to
two relevant fundamental problems.
1. First, in order to know what we should do from the utilitarian perspective, we must know
   which course of action will produce the most good in both the short and the long term. In
   this case, Boeing had just started its radical approach to outsourcing, which was to
   monitor only the complete structure of the products, and not their pieces or how they were
   assembled, as mentioned earlier. This led to its inability to adequately predict what
   economic benefit outsourcing would produce in the short versus the long run.
2. Second, the utilitarian aim is to make choices that promise to bring about the greatest
   amount of good. We refer to the population over which the good is maximized as the
   audience. The problem is determining the scope of this audience. With Boeing
   categorized as an aviation company, its company values of safety and reliability were of
   paramount importance. While outsourcing to bring down its manufacturing costs may
   have benefitted the company financially, it ultimately cost public safety heavily, and even
   the reliability of the aviation industry in general.
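The cost-benefit reasoning described above can be sketched as an expected-value comparison. All figures below are hypothetical, invented purely for illustration (they are not Boeing's actual numbers); the sketch shows how outsourcing can look better in expectation while an underestimated failure probability quietly reverses the conclusion.

```python
def expected_value(outcomes):
    """Expected payoff of an option given (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical options; payoffs in arbitrary cost units (negative = cost).
in_house   = [(0.95, -100), (0.05, -150)]  # higher cost, small quality risk
outsourced = [(0.90, -60),  (0.10, -400)]  # cheaper, but severe-failure risk

print(expected_value(in_house))    # -102.5
print(expected_value(outsourced))  # -94.0 -- looks better in expectation

# If the severe-failure probability was underestimated (say it is really 0.3):
outsourced_revised = [(0.70, -60), (0.30, -400)]
print(expected_value(outsourced_revised))  # -162.0 -- now clearly worse
```

The second utilitarian problem, the scope of the audience, corresponds to what the payoff column leaves out: harms to passengers and to the reliability of the industry never appear in the company's own cost units.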
2. Disconnect between engineers and managers
Safety
The disconnect between engineers using a safety approach and managers adopting a cost
approach led to different design considerations for the aircraft. With the engineers having to
design with the most cost-effective approach, they could not implement any additional
safeguards and had to complete the aircraft with as few resources and as little time as
possible. This compromised the structural integrity of the aircraft, causing multiple problems
such as a jammed doorway and cracks on the tips of the wings, which seriously threatened to
dismantle the plane in flight and result in massive casualties.
This critical aspect of communication was lost late into Boeing's restructuring programme
after its new leadership took hold. With the incoming CEO McNerney came key changes
such as trimming staff to key essentials only, cutting all costs and squeezing suppliers.
Under such a directive, there was a key difference in job priorities which did not match,
resulting in ineffective communication between the engineers and managers. With
engineers' demands for key safety design elements being met with the possibility of
additional cost expenditure, engineers were faced with having to develop a very complex
system with cost limitations in place instead of its key safety and engineering elements.
Of 15 Boeing aerospace workers asked at random, 10 said they would not fly on the
Dreamliner. Eventually, trust broke down between them, resulting in engineers, under
pressure, delivering what the management had set in place. Incidents like incomplete aircraft
parts, low wage schemes (especially for overtime work) and unrealistic expectations led to a
product being created that was simply not ready to enter the industry (Huff Post Business,
2014).
Ethical
Rights and Responsibility of Engineers (Conflict of interest)
In the case study presented above, there is a clear conflict of interest between the managers
and engineers. This conflict of interest in the key design and manufacturing of the aircraft
directly affects its safety and reliability. With the key design decisions leading up to the
safety of the public, it is crucial that the engineers are ethical in their actions. Firstly,
engineers should, regardless of directive, follow the company's safety policy. However, if
the company's safety policy is felt to be insufficient for public safety, they can look to the
statements in professional ethics codes that describe safety and how it explicitly overrides
any such conflict of interest. Such actions provide an objective way to design a product that
is in line with the responsibilities of the engineer and compel managers to accept such a
design or face criminal charges. Often enough, for such complex systems, there are no clear
directives under safety codes that prescribe an exact design; however, there are directives for
actions that can be taken to ensure that the design meets the safety requirements, such as
submitting the design to standard engineering checks to meet the specification marks
(Fleddermann, Engineering Ethics, p. 105).
3. Inadequate testing for new lithium batteries
Safety
The implementation of lithium-ion batteries previously adapted from cars led to a critical
design flaw. Within aircraft compartments, operating temperatures were significantly higher
than in other vehicles, causing the fluid from within the cells to leak and ignite, resulting in a
complete burnout of the battery system. This caused the emergency landing of the aircraft
and could easily have resulted in a power failure in flight, with grave consequences for those
on board.
The 787 is equipped with lithium-ion batteries, which are lighter and more powerful than
conventional batteries, but which have been known to cause fires in cars, computers and
mobile phones. Shortly after, all 50 of the Dreamliners that had been delivered to airlines
were grounded (The Economist, 2013). Lithium-ion batteries consistently caused
inflammation of the entire battery system due to the high tendency for fluid to leak from one
cell of the battery to another under excessively high temperatures. The leaking fluid ignites
outside the chambers and, as a result, the whole system catches fire. Even with this prior
knowledge, Boeing decided to adopt the lithium-ion batteries. This resulted in a similar
occurrence of battery fires, forcing Boeing to ground its entire fleet of 787s, which resulted in
massive financial losses and led to a complete redesign of the lithium-ion batteries.
Ethics
Ethics line drawing (Boeing's point of view)
We can define the negative and positive paradigms and categorize Boeing's actions and
suitable scenarios to get a better idea of the problem.
Negative Paradigm (NP): Usage of existing lithium-ion batteries without any prior testing.
Positive Paradigm (PP): Completely redesign the batteries and test them rigorously to show
they would be extremely unlikely to fail under operating conditions, and come up with a
standby battery and flame-extinguisher system.
P1: Tailor the lithium-ion batteries to meet the minimum benchmark requirements. (5/10)
P2: Modify to meet standards and conduct extensive testing to ensure it works. (7/10)
SC1: Redesign the battery to ensure it is very stable under operating conditions; test it
extensively before release. (8.5/10)
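The line-drawing placement above can be sketched as a simple lookup. The scores for P1, P2 and SC1 are those assigned in the text; anchoring NP at 0 and PP at 10 is an assumption following the usual convention for the tool.

```python
# Scores on the 0-10 line between the negative (NP) and positive (PP) paradigms.
# NP = 0 and PP = 10 are conventional anchors, assumed here.
options = {
    "NP: use existing lithium-ion batteries without prior testing": 0.0,
    "P1: tailor batteries to minimum benchmark requirements": 5.0,
    "P2: modify to meet standards, test extensively": 7.0,
    "SC1: redesign for stability, test extensively before release": 8.5,
    "PP: full redesign, rigorous testing, standby battery and extinguisher": 10.0,
}

# The paradigms are idealised anchors, not candidate actions.
candidates = {k: v for k, v in options.items() if not k.startswith(("NP", "PP"))}
best = max(candidates, key=candidates.get)
print(best)  # the SC1 option, closest to the positive paradigm
```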
Such an action had to be taken for both ethical and safety considerations: firstly, the failure
of a critical system can affect the lives of many passengers in flight; and secondly, being a
commercial aviation company which represents the aviation industry, Boeing has an ethical
responsibility to ensure safety and reliability are its top priorities.
4. Pressure to rush entire manufacturing/assembly lines
Safety
With fast production now the top priority, the engineers ended up compromising safety
standards to meet the requirements. As such, safety design parameters were not strictly
adhered to, which subsequently manifested as major safety problems in flight and on the
ground.
However, an engineer from the South Carolina plant said it was far behind schedule and that
management insisted on sending unfinished planes to Seattle in order to keep up the planned
rate. The introduction to the line of the longer 787-9 model, which is slightly different from
the 787-8, caused production to dip from 10 to seven a month as workers struggled with new
instructions and techniques. In addition, he said that around the time the experienced
workers were let go, the company demanded a higher level of output, which caused workers
to rush jobs. Boeing plans to increase production levels of the Dreamliner to 10 per month,
then 12 per month in 2016, and 14 in 2020 (The International Business Times, 2014).
Ethics
Causes of technological disaster (4 factor approach)
Technical design factors (Faulty testing procedures)
The pressure for speed led to several production sites testing only the critical functioning
of the system before sending it to the next plant, as required by their requirements.
Hence, other substitute components were never tested to ensure they worked with the
mainboard system.
Human Factors (Unethical/Wilful Acts)
Along with rushing production, the firm kept wages and overtime pay very low in order to
save costs. Such practices led to poor professional conduct when servicing equipment,
especially when workers were required to work overtime to meet deadlines. As such, entire
sections of the aircraft were found missing when it was sent to the next assembly line.
Organizational system factors (Communication failure/Policy failure)
When engineers who had received parts with missing sections from a prior assembly line
complained to the managers, the managers instructed the engineers to fix those problems in
addition to their standard line of work. The way the management handled this in the short
term led to a systematic failure of the supply chain of the assembly section of the aircraft,
leading to severe quality deterioration in aircraft assembly as engineers rushed simply to
deliver the aircraft on schedule.
2.2 Tacoma Bridge
2.2.1 Introduction
The Tacoma Narrows Bridge at Puget Sound in the state of Washington was completed and
opened to traffic on 1st July 1940. It was built at a total cost of $6.4 million to facilitate the
crossing of the Tacoma Narrows, between the city of Tacoma and the Kitsap Peninsula. It
was the third-longest suspension bridge in the world at that time. During its construction, the
bridge exhibited large vertical oscillations and was nicknamed "Galloping Gertie" by the
people of Tacoma due to its oscillatory behaviour. It dramatically collapsed on 7th
November of the same year due to high wind conditions.
2.2.2 Type of Accident
Engineered Accident
The Tacoma Narrows Bridge collapse is an example of an engineered accident. A proposed
new, cost-effective and slender bridge design greatly reduced the stiffness of the structure. A
lack of knowledge, coupled with inadequate testing and modelling, resulted in the failure of
the structure.
Causes of Failure
Technical design factors
In the case of the Tacoma Narrows Bridge collapse, the bridge engineer Moisseiff deviated
from conventional bridge designs and introduced a new design that lacked theoretical and
experiential backing. This increased the risk and pushed the boundaries of acceptable risk.
However, nothing was done to mitigate the risks, and this contributed to the structural
failure. Furthermore, theoretical analysis was used as the basis for design decisions when
there was inadequate recognised theory to rely upon for the design of the bridge.
In the absence of such knowledge and experience, the engineers could have:
1. Relied on experiential knowledge and worked with established, proven and conventional
   designs, with slight modifications that would not have compromised the structural
   integrity.
2. Conducted detailed testing and modelling to determine the structural capability and
   capacity under the imposed conditions that could be expected. This would have allowed
   for a better understanding of the new design and structure, so that remedial actions could
   have been followed up if necessary to meet the required standards.
Organizational System Factors
The Public Works Administration (PWA) made a faulty group decision in approving
Moisseiff's design based on his cost-saving approach and his reputation. They failed to
analyse the pros and cons of each proposed design and instead selected the design that
incurred the least cost, rather than placing emphasis on a safe and reliable structure.
Though they faced cost pressures from the federal government, which was not too keen on
financing the project, they had the responsibility of making the right decision based on the
available information and resources, without taking unnecessary risks that would place
human lives in danger. Their decision to go with Moisseiff's design resulted in a great
reduction in the stiffness of the bridge, which caused the structural failure.
2.2.3
structural integrity and safety were not compromised. The next generation of large
suspension bridges featured deep and rigid stiffening trusses. Thus, in 1950, a new
suspension bridge with an improved design was erected and opened to the public.
2.2.5 Conclusion
It is impossible for an engineer to anticipate all of the technical problems which can result in
failure. There is always uncertainty involved with new products. A failure mode is a way in
which a structure, mechanism or process can malfunction (Harris et al., 2013). There are so
many ways a new product can fail under various kinds of conditions that it is impossible to
anticipate or predict exactly how it will fail. An engineer can mitigate this uncertainty by
applying useful tools that can guide him (e.g. fault tree analysis). This reduces the risk and
uncertainty involving the new product; however, it does not guarantee that the product is
100% safe and will not fail during its usage.
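As a brief illustration of a fault tree of the kind mentioned above, the probability of a top event can be computed by combining independent basic events through AND and OR gates. The events and probabilities below are hypothetical and are not data from the Tacoma Narrows investigation.

```python
def and_gate(*probs):
    """All input events must occur (independent events): multiply probabilities."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs):
    """At least one input event occurs: complement of 'none occur'."""
    p_none = 1.0
    for p in probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Hypothetical basic events for a structural failure:
p_flawed_design  = 0.01  # design deviates from proven practice
p_sustained_wind = 0.20  # wind strong enough to excite oscillation
p_no_mitigation  = 0.50  # no damping or remedial action in place

# Top event: failure requires all three to coincide.
p_failure = and_gate(p_flawed_design, p_sustained_wind, p_no_mitigation)
print(round(p_failure, 6))  # 0.001
```

Even a rough tree like this makes the uncertainty explicit: the answer is only as good as the estimated leaf probabilities, which is why such tools reduce, but do not eliminate, risk.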
Therefore, it is important for an engineer to come up with designs that are backed by sound
technical knowledge and expertise, with adequate testing done to ensure remedial actions are
in place to support the new product.
2.3 Apollo 13
"Within less than a minute we had this cascade of systems failures throughout the
spacecraft... It was all at one time - a monstrous failure." - Liebergot, NASA flight controller
2.3.1 Summary
The Apollo 13 spacecraft was launched on 11th April 1970 on a mission to land on the
Moon. Unfortunately, the mission was aborted halfway through due to an explosion in the
craft 56 hours after launch. The explosion was attributed to the malfunction of the second
oxygen tank in the service module (figure #). Despite the dangerous situation, the flight
controllers at the NASA mission control room, together with a team of engineers and
designers, worked tirelessly around the clock to get the astronauts safely back to Earth.
[Table: Organisations and their roles in the oxygen tank's history, including Beeches Aircraft Corp. (BAC)]
going to cause an explosion. The tank was then filled with oxygen and mounted on Apollo
13.
Systemic Accident
The Apollo 13 incident is also a systemic accident because it was the result of complex
interactions involving the engineers and technicians handling the oxygen tank, from its
design to its testing and finally to equipping Apollo 13 with it. The minor failures of the
engineers and technicians would not individually have caused the explosion, but collectively
the stage was set for an accident.
The first failure was in the handling of the oxygen tank by BAC technicians. They
accidentally dropped the tank 2 inches when transferring it from a spacecraft. This caused
the displacement of the filling tubes in the tank. The tank underwent acceptance testing; no
exterior damage was detected, and they did not have any problems detanking the equipment.
The displacement of the filling tubes later became a problem when NR was doing its
detanking, which motivated them to use unconventional ways to detank, damaging the
electrical wire insulation. This was the second failure, because the difficulty in detanking
should already have been sufficient indication of a tank fault. The engineers found a way
around it to get the job done instead of raising concerns over the safety issues of using the
tank.
2.3.3 Causes of Accident
Technical Design Factors
Flawed design
Defective Equipment
Ineffective Testing Procedures
Organisational System Factors
Cost/Schedule Pressures
Communication Failure
Policy Failure
Human Factors
Negligence/Carelessness
Misjudgement
Unethical behaviour
Socio-Cultural
Attitude towards Risk
Value of Safety with respect to other factors
Institutional Mechanism
out the spacecraft's main supply of fuel and oxygen. This effectively crippled the command
module, where the astronauts were.
Crisis Control
"Suddenly and unexpectedly we may find ourselves in a role where our performance has
ultimate consequences." - Gene Kranz, NASA Flight Director, 1970
Crisis control brings preserving safety to a whole new level. With the added pressure of time
and limited resources, engineers have to be able to make the right call, be it based on
experience or ethics, at the right time, in order to protect the safety of other human beings.
This was demonstrated by the engineers and flight controllers of NASA in the flight control
room, who worked round the clock to devise a way of return for the Apollo 13 astronauts.
Risk Assessment by Kranz
Lead flight controller Gene Kranz made the call to abort the mission immediately when the
astronauts reported the explosion from space. Kranz had to weigh his options carefully, as it
would mean life or death for the astronauts stuck in Apollo 13, which was now drifting in
space with a heavily damaged service module.
Kranz and his team assessed the situation and instructed the astronauts to shut down the
Command Module to save power, and to power up the lunar module before the oxygen
supply ran out. The lunar module had its own supply of power and oxygen. The plan was to
use the LM as a lifeboat for the moment.
Table 1: Risk Assessment

Abort route: Direct
Description: Abort on the front side of the moon and be back on Earth in 1.5 days. Requires
the main engine propulsion and perfect execution of the spacecraft manoeuvre.
Possible harm: Engine failure due to the explosion; spacecraft crashes into the moon surface.
Magnitude of harm (1-10): 10. Definitely fatal and leaves absolutely no chance of survival.
Likelihood of harm (1-10): 8. Uncertainty whether the engine was damaged, as it was very
close to the explosion.
Risk: Magnitude x Likelihood = 10 x 8 = 80
Benefit: Faster return if executed successfully.

Abort route: Circumlunar
Description: Travel around the moon, which will take between 4-5 days. Follows a
free-return trajectory (does not require propulsion).
Possible harm: Resources in the lunar module were meant for 2 people for 2 days; running
out of oxygen, power and food.
Magnitude of harm (1-10): 9. Running out of oxygen definitely meant death.
Likelihood of harm (1-10): 7. There was a small chance of figuring out a way to conserve
the resources for 4-5 days.
Risk: Magnitude x Likelihood = 9 x 7 = 63
Benefit: Slower return buys more time to devise a survival plan.
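The risk scores in Table 1 follow the simple product rule risk = magnitude x likelihood, which can be reproduced in a short sketch:

```python
# Reproducing the risk scores in Table 1: risk = magnitude x likelihood,
# with both factors scored on a 1-10 scale as in the text.
routes = {
    "direct":      {"magnitude": 10, "likelihood": 8},
    "circumlunar": {"magnitude": 9,  "likelihood": 7},
}

risks = {name: r["magnitude"] * r["likelihood"] for name, r in routes.items()}
print(risks)  # {'direct': 80, 'circumlunar': 63}

# All else being equal, the route with the lower risk score is preferred.
chosen = min(risks, key=risks.get)
print(chosen)  # circumlunar
```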
Kranz had to make a decision between two already high-risk return plans (refer to Table 1).
Faced with such a dilemma and pressure, Kranz decided to go with the circumlunar route,
buying more time for him and his team to devise a strategy to keep the astronauts alive.
Safety Issues
Rising CO2 Levels
The CO2 levels in the LM were rising to dangerous levels during the time the astronauts
were residing in the lifeboat. They had spare CO2 filter canisters from the CM. However,
the canisters were cubic and could not fit into the circular inlets of the air purifier. This was
another design flaw in the spacecraft, because the parts within each compartment were
different from the others. Standardizing the parts would have made the emergency situation
easier to cope with.
Engineers in flight control immediately improvised a way to modify the canister fittings with
materials which would be available on board the craft. The engineers communicated the
ad-hoc design to the astronauts, who were able to reconstruct the canisters as specified and
thus make them fit into the circular canister sockets.
Restarting the Command Module in Mid-Flight
For the re-entry into the Earth's atmosphere, the engineers in flight control faced another
safety issue. The CM had to be restarted from its shut-down state in mid-flight, which had
never been done before. Moreover, the engineers had to figure out a new way to separate the
LM to a safe distance from the CM during re-entry, because the SM which was normally
required for this had been damaged in the explosion. A team of six engineers from the
University of Toronto worked on the strategy and were able to find a solution within a day.
The method was accurately calculated and later executed successfully by the astronauts.
Ethical Issues
Next we will demonstrate the application of ethical theory and ethical problem-solving tools
in our analysis of the Apollo 13 accident.
Applying Ethical Theory
Action/Choice: NR not recognising the faulty tank as a safety hazard and using unconventional detanking methods.

Ethics Category: Duty Ethics. The unconventional detanking was done by NR on the assumption that the thermostat switches were compatible and functioning. Beeches marred the trust NR had in it by failing to assemble the tank according to specification, so duty ethics is violated. This was a serious violation of duty ethics due to negligence, and it also highlights loopholes in testing, which was not thorough enough to detect the flaw in assembly.
Ethics Line Drawing from the point of view of North American Rockwell (NR)

Positive Paradigm (PP): NR runs thorough testing on the tank before accepting it from Beeches, detects the incompatible thermostats and loose tubes, and rectifies them. (Location from left: 10/10)

Negative Paradigm (NP): NR does not run any test on the tank and equips the tank onto Apollo 13. (Location from left: 0/10)

Point Under Study 1 (P1): NR does not identify the hazard and unintentionally makes matters worse by conducting unconventional detanking. (Location from left: 1/10)

Point Under Study 2 (P2): NR trusts Beeches to have properly assembled the tank and does not adequately test the equipment itself. (Location from left: 3/10)

Scenario 1 (SC1): NR identifies the difficulty with conventional detanking as being due to faulty equipment, runs further tests on the tank, and rectifies the loose filling tubes. (Location from left: 8/10)

NP --- P1 -------- P2 --------------------- SC1 -------- PP
0/10   1/10        3/10                     8/10        10/10
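The line-drawing positions above can also be laid out programmatically; this small sketch simply orders the points by the report's own 0-10 locations:

```python
# Sketch of the ethics line drawing: each point's position is the
# report's 0-10 rating (0 = negative paradigm, 10 = positive paradigm).

positions = {"NP": 0, "P1": 1, "P2": 3, "SC1": 8, "PP": 10}

# Order the points from the negative end of the line to the positive end.
line = sorted(positions, key=positions.get)
print(" -- ".join(line))  # NP -- P1 -- P2 -- SC1 -- PP
```

Since P1 and P2 sit much closer to the negative paradigm than to the positive one, the drawing supports the conclusion that NR's conduct violated duty ethics.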
3. Challenges in Testing
When testing the quality, performance and reliability of new products, engineers often face limitations in determining the acceptable risks and in subsequently devising safety measures.
3.1 Limitation of resources
3.1.1 Inadequate time to test and evaluate new design
When engineers come up with new designs for a product or structure, there is always an element of uncertainty in its functional capabilities and safety aspects. To mitigate such uncertainty, risk assessment and proper testing have to be carried out to ensure the design is reliable and safe for human use. This requires time, effort and money. In real-life scenarios, time is of the essence: it provides a competitive advantage. Thus, to stay ahead of their competitors, companies tend to rush new products into manufacturing so that they can be released to the market at an earlier date. This puts time pressure on engineers to deliver a design that meets the standards of acceptable practice. As a result, inadequate testing and modelling are conducted, which leads to failure of the product due to a lack of understanding and evaluation of the design.
Furthermore, safety considerations are compromised for the sake of releasing the product earlier. Potentially safer alternative designs could be overlooked due to time pressure. Hence, the company fails to select the best solution and implement it in its design.
3.3 Uncertainty
3.3.1 Acceptable Risk
Determining the acceptable level of risk is a difficult task for an engineer. Ethical tools and theories can only serve as guides, and the right decision to make is not always clear. The engineer has to make a value judgement based on his knowledge, experience and moral values when dealing with risk. For example, when using cost-benefit analysis, he should understand the inherent limitations of the method and should integrate ethics into his decision-making to the best of his ability.
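A cost-benefit comparison can be reduced to a simple ratio. The sketch below uses purely hypothetical dollar figures, and the limitation noted above still applies: a favourable ratio alone does not settle the ethical question.

```python
# Hypothetical cost-benefit sketch; the dollar figures are illustrative.

def benefit_cost_ratio(benefit, cost):
    """Return the ratio of expected benefit to cost (> 1 favours acting)."""
    return benefit / cost

# e.g. a safety feature costing $2M that averts an expected $5M in losses
ratio = benefit_cost_ratio(5_000_000, 2_000_000)
print(ratio)  # 2.5
```

The method's inherent limitation is visible here: it prices everything, including harm to people, in the same units, which is exactly why the engineer's moral judgement cannot be replaced by the arithmetic.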
3.3.2 Designing for Safety
As mentioned earlier, innovation sometimes means dealing with a product that the engineer has little or no experience with. One of the tools developed for this involves identifying failure modes, which are any possible ways in which a product can malfunction (Harris et al., 2013). By constructing a fault tree (Figure 1), an engineer can systematically analyse possible risks and subsequently devise safety mechanisms to avoid product failure.
Figure 1: Fault Tree Analysis for a Nuclear Plant (Harris et al., 2013)
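The fault-tree idea can be sketched numerically. Assuming independent basic events with purely illustrative probabilities (this is not the Figure 1 tree), an AND gate multiplies failure probabilities, while an OR gate combines their complements:

```python
# Minimal fault-tree sketch with illustrative probabilities.
# Basic events are assumed to be statistically independent.

def and_gate(p1, p2):
    # Both inputs must fail for the gate output to fail.
    return p1 * p2

def or_gate(p1, p2):
    # The output fails unless neither input fails.
    return 1 - (1 - p1) * (1 - p2)

# Top event: pump failure OR (valve failure AND backup-valve failure)
p_top = or_gate(0.01, and_gate(0.05, 0.02))
print(round(p_top, 6))  # 0.01099
```

Working from the top event down through such gates lets the engineer see which basic events dominate the overall risk and therefore where safety mechanisms are best placed.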
4. Conclusion
Criteria for Safe Design
According to Fleddermann (2012), there are four criteria to meet to ensure a safe design:
1. The design must comply with the ethical law.
2. The design must meet the standard of acceptable practice.
3. Potentially safer alternatives must be considered.
4. An attempt must have been made to foresee possible misuse of the product, and the design should help to avoid such misuse.
The criteria listed above serve as a rough guideline for assessing the safety of a product design, but they are not exhaustive. Also, terms such as "ethical law" and "standards" are sometimes loosely defined, and their interpretation is very subjective. Therefore, it is imperative for an engineer to apply his own moral judgement when it comes to issues regarding safety. Ethical theories and tools are essential in the engineering profession, especially when the decision taken has ethical implications.
5. References