
Fault avoidance Development techniques are used that either minimise the possibility of mistakes or trap mistakes before they result in the introduction of system faults.
Fault tree analysis
Fault tolerance Run-time techniques are used to
ensure that system faults do not result in system
errors and/or that system errors do not lead to
system failures.
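A common run-time fault-tolerance technique is majority voting over redundant channels. A minimal sketch (the channel results below are hypothetical, for illustration only):

```python
from collections import Counter

def majority_vote(results):
    """Return the value most channels agree on, masking a minority fault."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: the fault could not be masked")
    return value

# Three independently computed results; the one faulty value is outvoted,
# so an internal fault does not become a system failure.
assert majority_vote([42, 42, 41]) == 42
```

Triple modular redundancy applies the same voting idea at the hardware level.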
Fault detection and removal Verification and
validation techniques that increase the probability
of detecting and correcting errors before the
system goes into service are used.
Software reliability
Software failure Software fails due to errors in its
specification, design or implementation.
Software measurement
System reliability is measured by counting the
number of operational failures and, where
appropriate, relating these to the demands made
on the system and the time that the system has
been operational.
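The measurement described above can be sketched as two small helpers that relate failure counts to demands and to operational time (the figures are illustrative, not from the text):

```python
def pofod(failed_demands, total_demands):
    """Probability of failure on demand: failed demands / total demands."""
    return failed_demands / total_demands

def rocof(failures, operational_hours):
    """Rate of occurrence of failure per operational hour."""
    return failures / operational_hours

# 2 failed demands out of 1000, observed over 500 operational hours
assert pofod(2, 1000) == 0.002
assert rocof(2, 500) == 0.004
```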
Attack assessment Decompose threats into possible attacks on the system and the ways that these may occur.
Denial of service attacks on a system are intended to make it unavailable.
Vulnerability avoidance The system is designed so that vulnerabilities do not occur. For example, if there is no external network connection then external attack is impossible.
Attack detection The system is designed so that attacks on vulnerabilities are detected and neutralised before they result in an exposure. For example, virus checkers find and remove viruses before they infect a system.
Exposure limitation The system is designed so that the adverse consequences of a successful attack are minimised. For example, a backup policy allows damaged information to be restored.
Asset identification Identify the key system
assets (or services) that have to be protected.
Asset value assessment Estimate the value of
the identified assets.
Exposure assessment Assess the potential
losses associated with each asset.
Threat identification Identify the most probable threats to the system assets.

Security Systems should protect themselves and their data from external interference.
Security policy sets out the conditions that must
be maintained by a security system and so helps
identify system security requirements.
Safety Systems should not behave in an unsafe way.
Maintainability Reflects the extent to which the
system can be adapted to new requirements;
Reliability The probability of failure-free system
operation over a specified time in a given
environment for a given purpose
Availability The probability that a system, at a point in time, will be operational and able to deliver the requested services.
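Availability is often estimated from mean time to failure (MTTF) and mean time to repair (MTTR); a sketch with illustrative figures:

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability: fraction of time the system is operational."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A system that fails on average every 999 hours and takes 1 hour
# to repair is available 99.9% of the time.
assert abs(availability(999.0, 1.0) - 0.999) < 1e-12
```

Note that shrinking the repair time raises availability even when MTTF is unchanged, which is why reparability matters.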
Reparability The disruption caused by system
failure can be minimized if the system can be
repaired quickly.
Functional reliability
Survivability Reflects the extent to which the
system can deliver services whilst under hostile
attack
Error tolerance Reflects the extent to which user
input errors can be avoided and tolerated.
Standards

Control identification Propose the controls that may be put in place to protect an asset.
Feasibility assessment Assess the technical
feasibility and cost of the controls.
Hazard avoidance The system is designed so that
some classes of hazard simply cannot arise.
Hazard assessment The process is concerned
with understanding the likelihood that a risk will
arise and the potential consequences if an accident
or incident should occur.
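Hazard assessment combines likelihood and consequence; a minimal classification sketch (the numeric scales, boundaries, and class names here are assumptions for illustration):

```python
def risk_class(likelihood, severity):
    """Classify a hazard from its likelihood (0..1) and severity (1..10)."""
    score = likelihood * severity
    if score >= 5.0:
        return "intolerable"
    if score >= 1.0:
        return "ALARP"  # as low as reasonably practicable
    return "acceptable"

assert risk_class(0.9, 8) == "intolerable"  # likely and severe
assert risk_class(0.1, 2) == "acceptable"   # rare and minor
```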
Damage limitation The system includes
protection features that minimise the damage that
may result from an accident.
Recovery
Risk analysis Assess the seriousness of each risk.
Risk identification = Hazard identification
Identify the hazards that may threaten the system.
Risk decomposition = Hazard analysis
concerned with discovering the root causes of risks
in a particular system.
Hardware failure Hardware fails because of design and manufacturing errors or because components have reached the end of their natural life.
Operational failure Human operators make
mistakes. Now perhaps the largest single cause of
system failures in socio-technical systems.
Dependability
Embedded software whose failure can cause the
associated hardware to fail and directly threaten
people. An example is the insulin pump control system.
Quality
Quality plan
Quality team
Product quality
Quality software
Preliminary risk Identifies risks from the systems
environment. Aim is to develop an initial set of
system security and dependability requirements.
Life cycle risk Identifies risks that emerge during design and development, e.g. risks that are associated with the technologies used for system construction. Requirements are extended to protect against these risks.
Operational risk Risks associated with the
system user interface and operator errors. Further
protection requirements may be added to cope
with these.
Quality management
Quality documentation
Organizational level
Software quality management
Program inspection
Risk-driven This approach has been widely used in safety- and security-critical systems.
Formal specification is part of a more general
collection of techniques that are known as formal
methods.
Software metric
Static metric
System metric
Product metric
Dynamic metric
Quality metric
Quality review

Software standard
Clerical work
Encapsulation
Inspections
Error checklist
ISO 9001
ISO 9000
System components
Extreme programming
Agile methodology
Dependability and reliability
System failures may have widespread effects
with large numbers of people affected by the
failure.
Functional requirements to define error
checking and recovery facilities and protection
against system failures.
Non-functional requirements defining the
required reliability and availability of the system.
Excluding requirements that define states and
conditions that must not arise

By analysis of the fault tree, the root causes of these hazards related to software are arithmetic error and algorithmic error.
Probability of failure on demand (POFOD) is the most appropriate metric for systems where demands for service are made intermittently.
Rate of occurrence of failure (ROCOF) Reflects the rate of occurrence of failure in the system.
Mean time to failure (MTTF) = reciprocal of ROCOF. Relevant for systems with long transactions, i.e. where system processing takes a long time (e.g. CAD systems). MTTF should be longer than the expected transaction length.
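The MTTF/ROCOF relationship above, with illustrative figures:

```python
failures = 4
operational_hours = 2000.0
rocof = failures / operational_hours   # 0.002 failures per hour
mttf = 1.0 / rocof                     # about 500 hours between failures

transaction_hours = 3.0                # e.g. one long CAD editing session
assert abs(mttf - 500.0) < 1e-9
assert mttf > transaction_hours        # MTTF exceeds the transaction length
```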
