
Chapter 3
Program Security

Charles P. Pfleeger & Shari Lawrence Pfleeger, Security in Computing, 4th Ed., Pearson Education, 2007

- In this chapter
  - Programming errors with security implications: buffer overflows, incomplete access control
  - Malicious code: viruses, worms, Trojan horses
  - Program development controls against malicious code and vulnerabilities: software engineering principles and practices
  - Controls to protect against program flaws in execution: operating system support and administrative controls
3.1. Secure Programs
- Security implies some degree of trust that the program enforces expected confidentiality, integrity, and availability.
- An assessment of security can also be influenced by someone's general perspective on software quality.
  - E.g., if your manager's idea of quality is conformance to specifications, then she might consider the code secure if it meets security requirements, whether or not the requirements are complete or correct.

- IEEE Terminology for Quality
  - A bug can be a mistake in interpreting a requirement, a syntax error in a piece of code, or the (as-yet-unknown) cause of a system crash.
  - When a human makes a mistake, called an error, in performing some software activity, the error may lead to a fault, an incorrect step, command, process, or data definition in a computer program.
  - A failure is a departure from the system's required behavior.
  - A fault is an inside view of the system, as seen by the eyes of the developers, whereas a failure is an outside view: a problem that the user sees.
- Fixing Faults
  - A module in which 100 faults were discovered and fixed is better than another in which only 20 faults were discovered and fixed, suggesting that more rigorous analysis and testing had led to the finding of the larger number of faults (?)
  - Early work in computer security was based on the paradigm of "penetrate and patch," in which analysts searched for and repaired faults.

- Fixing Faults (Cont'd)
  - However, the patch efforts were largely useless, making the system less secure rather than more secure because they frequently introduced new faults.
    - The pressure to repair a specific problem encouraged a narrow focus on the fault itself and not on its context.
    - The fault often had nonobvious side effects in places other than the immediate area of the fault.
    - Fixing one problem often caused a failure somewhere else.
    - The fault could not be fixed properly because system functionality or performance would suffer as a consequence.
- Unexpected Behavior
  - To understand program security, we can examine programs to see whether they behave as their designers intended or users expected.
  - Such unexpected behavior is a program security flaw; it is inappropriate program behavior caused by a program vulnerability.
  - Program security flaws can derive from any kind of software fault.
  - Divide program flaws into two separate logical categories: inadvertent human errors versus malicious, intentionally induced flaws.
- Regrettably, we do not have techniques to eliminate or address all program security flaws.
  - Security is fundamentally hard, security often conflicts with usefulness and performance, there is no "silver bullet" to achieve security effortlessly, and false security solutions impede real progress toward more secure programming.
- There are two reasons for this distressing situation.
  1. Program controls apply at the level of the individual program and programmer.
  2. Programming and software engineering techniques change and evolve far more rapidly than do computer security techniques.

- Types of Flaws
  - validation error (incomplete or inconsistent): permission checks
  - domain error: controlled access to data
  - serialization and aliasing: program flow order
  - inadequate identification and authentication: basis for authorization
  - boundary condition violation: failure on first or last case
  - other exploitable logic errors
3.2. Nonmalicious Program Errors
- Buffer Overflows
  - A buffer (or array or string) is a space in which data can be held.
  - A buffer's capacity is finite.

- Suppose a C language program contains the declaration:

    char sample[10];

- Now we execute the statement:

    sample[10] = 'B';

  The valid subscripts run from 0 to 9, so this assignment writes past the end of the buffer; a compiler or careful reviewer could catch it from the constant subscript alone.
- However, if the statement were

    sample[i] = 'B';

  we could not identify the problem until i was set during execution to a too-big subscript.
- Suppose each of the ten elements of the array sample is filled with the letter A and the erroneous reference uses the letter B, as follows:

    for (i=0; i<=9; i++)
        sample[i] = 'A';
    sample[10] = 'B';


- Security Implication
  - Two buffer overflow attacks that are used frequently:
    1. The attacker may replace code in the system space. By replacing a few instructions right after returning from his or her own procedure, the attacker regains control from the operating system, possibly with raised privileges.
    2. On the other hand, the attacker may make use of the stack pointer or the return register. Subprocedure calls are handled with a stack, a data structure in which the most recent item inserted is the next one removed (last arrived, first served).
- An alternative style of buffer overflow occurs when parameter values are passed into a routine, especially when the parameters are passed to a web server on the Internet. Parameters are passed in the URL line, with a syntax similar to

    http://www.somesite.com/subpage/userinput.asp?parm1=(808)555-1212&parm2=2009Jan17

  The attacker might question what the server would do with a really long telephone number, say, one with 500 or 1000 digits.

- Incomplete Mediation
  - Consider the example

    http://www.somesite.com/subpage/userinput.asp?parm1=(808)555-1212&parm2=2009Jan17

  - What would happen if parm2 were submitted as 1800Jan01? Or 1800Feb30? Or 2048Min32? Or 1Aardvark2Many?
- Security Implication
  - Consider this example:

    http://www.things.com/order.asp?custID=101&part=555A&qy=20&price=10&ship=boat&shipcost=5&total=205

  - A malicious attacker may decide to exploit this peculiarity by supplying instead the following URL, where the price has been reduced from $205 to $25:

    http://www.things.com/order.asp?custID=101&part=555A&qy=20&price=1&ship=boat&shipcost=5&total=25

- Time-of-Check to Time-of-Use Errors
  - The time-of-check to time-of-use (TOCTTOU) flaw concerns mediation that is performed with a "bait and switch" in the middle. It is also known as a serialization or synchronization flaw.
- Time-of-Check to Time-of-Use Errors (Cont'd)
  - Suppose a request to access a file were presented as a data structure, with the name of the file and the mode of access presented in the structure.
  - To carry out this authorization sequence, the access control mediator would have to look up the file name in tables. The mediator could compare the names in the table to the file name in the data structure to determine whether access is appropriate. More likely, the mediator would copy the file name into its own local storage area and compare from there.

- Time-of-Check to Time-of-Use Errors (Cont'd)
  - While the mediator is checking access rights for the file my_file, the user could change the file name descriptor to your_file.
  - The problem is called a time-of-check to time-of-use flaw because it exploits the delay between the two times. That is, between the time the access was checked and the time the result of the check was used, a change occurred, invalidating the result of the check.
- Security Implication
  - Pretty clear.
  - Checking one action and performing another is an example of ineffective access control.
  - There are ways to prevent exploitation of the time lag.
    - One way is to ensure that critical parameters are not exposed during any loss of control.
    - Another way is to ensure serial integrity; that is, to allow no interruption (loss of control) during the validation.

3.3. Viruses and Other Malicious Code
- Malicious Code Can Do Much (Harm)
  - Malicious code runs under the user's authority.
  - Thus, malicious code can touch everything the user can touch, and in the same ways.
  - Users typically have complete control over their own program code and data files; they can read, write, modify, append, and even delete them.
  - But malicious code can do the same, without the user's permission or even knowledge.
[Chart: number of malware signatures per year, 2002-2008, rising toward 1,800,000 - Symantec report 2009]

[Timeline: almost 30 years of malware]
- From Malware: Fighting Malicious Code
  [Chart: attack sophistication vs. intruder technical knowledge, 1980-2004. As attack tools grew more sophisticated - from password guessing and self-replicating code, through exploiting known vulnerabilities, password cracking, session hijacking, back doors, sniffers, packet spoofing, and stealth/advanced scanning techniques, to automated probes/scans, GUI tools, www attacks, distributed attack tools, staged denial of service, cross-site scripting, and coordinated automated tools - the technical knowledge required of intruders declined.]

- Kinds of Malicious Code
  - Malicious code or a rogue program is the general name for unanticipated or undesired effects in programs or program parts, caused by an agent intent on damage.
  - A virus is a program that can replicate itself and pass on malicious code to other nonmalicious programs by modifying them.
    - A transient virus has a life that depends on the life of its host.
    - A resident virus locates itself in memory.
- Kinds of Malicious Code (Cont'd)
  - A Trojan horse is malicious code that, in addition to its primary effect, has a second, nonobvious malicious effect.
  - A logic bomb is a class of malicious code that "detonates" or goes off when a specified condition occurs.
  - A time bomb is a logic bomb whose trigger is a time or date.
  - A trapdoor or backdoor is a feature in a program by which someone can access the program other than by the obvious, direct call.

- Kinds of Malicious Code (Cont'd)
  - A worm is a program that spreads copies of itself through a network.
  - A rabbit is a virus or worm that self-replicates without bound, with the intention of exhausting some computing resource.
- How Viruses Attach
  - Appended Viruses

- How Viruses Attach (Cont'd)
  - Viruses That Surround a Program
- How Viruses Attach (Cont'd)
  - Integrated Viruses and Replacements

- How Viruses Attach (Cont'd)
  - Document Viruses
    - Implemented within a formatted document, such as a written document, a database, a slide presentation, a picture, or a spreadsheet.
- How Viruses Gain Control

- Homes for Viruses
  - One-Time Execution - the majority of viruses
  - Boot Sector Viruses
- Homes for Viruses (Cont'd)
  - Memory-Resident Viruses
  - Other Homes for Viruses
    - Application programs
    - Libraries
    - Data files - need a startup program

- Virus Signatures
  - A signature - a telltale pattern
  - E.g., signature for the Code Red worm:

    /default.ida?NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
    NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
    NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
    NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN
    NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN %u9090%u6858%ucbd3
    %u7801%u9090%u6858%ucdb3%u7801%u9090%u6858 %ucbd3%u7801%u9090
    %u9090%u8190%u00c3%u0003%ub00%u531b%u53ff %u0078%u0000%u00=a
    HTTP/1.0
- Virus Signatures (Cont'd)
  - Storage Patterns - Most viruses attach to programs that are stored on media such as disks. The attached virus piece is invariant, so the start of the virus code becomes a detectable signature.

- Virus Signatures (Cont'd)
  - Execution Patterns - A virus writer may want a virus to do several things at the same time, namely, spread infection, avoid detection, and cause harm.
  - Transmission Patterns - A virus is effective only if it has some means of transmission from one location to another.
- Polymorphic Viruses
  - A virus that can change its appearance
  - Encrypting viruses
    - Use encryption under various keys to make the stored form of the virus different.
    - Contain three distinct parts:
      - a decryption key,
      - the (encrypted) object code of the virus,
      - the (unencrypted) object code of the decryption routine.

- Prevention of Virus Infection
  - The only way to prevent the infection of a virus is not to receive executable code from an infected source.
- Prevention of Virus Infection (Cont'd)
  - Several techniques for building a reasonably safe community for electronic contact, including the following:
    - Use only commercial software acquired from reliable, well-established vendors.
    - Test all new software on an isolated computer.
    - Open attachments only when you know them to be safe.
    - Make a recoverable system image and store it safely.
    - Make and retain backup copies of executable system files.
    - Use virus detectors (often called virus scanners) regularly and update them daily.

3.4. Targeted Malicious Code
- Trapdoors - an undocumented entry point to a module
  - Examples
    - A system is composed of modules or components.
    - Programmers first test each small component of the system separate from the other components, in a step called unit testing, to ensure that the component works correctly by itself.
    - Then, developers test components together during integration testing, to see how they function as they send messages and data from one to the other.

- Trapdoors - an undocumented entry point to a module (Cont'd)
  - Examples
    - Hardware processor design
      - The undefined opcodes sometimes implement peculiar instructions, either because of an intent to test the processor design or because of an oversight by the processor designer.
      - Undefined opcodes are the hardware counterpart of poor error checking for software.
- Causes of Trapdoors
  - forget to remove them
  - intentionally leave them in the program for testing
  - intentionally leave them in the program for maintenance of the finished program, or
  - intentionally leave them in the program as a covert means of access to the component after it becomes an accepted part of a production system

- Salami Attack
  - merges bits of seemingly inconsequential data to yield powerful results.
- Why Salami Attacks Persist
  - Computer computations are notoriously subject to small errors involving rounding and truncation, especially when large numbers are to be combined with small ones.
- Privilege Escalation - a means for malicious code to be launched by a user with lower privileges but run with higher privileges.

- Interface Illusions - a spoofing attack in which all or part of a web page is false.

- Keystroke Logging - retains a surreptitious copy of all keys pressed.

- Man-in-the-Middle Attacks - interjects itself between two other programs.

- Covert Channels: Programs That Leak Information

- Storage Channels - pass information by using the presence or absence of objects in storage.
  [Figure: file existence channel used to signal 100]

3.5. Controls Against Program Threats
- Three types of controls:
  - Developmental
  - Operating system
  - Administrative
- Developmental Controls
  - The Nature of Software Development
    - Collaborative effort, involving people with different skill sets who combine their expertise to produce a working product.
    - Development requires people who can specify, design, implement, test, review, document, manage, and maintain the system.

- Developmental Controls (Cont'd)
  - Modularity, Encapsulation, and Information Hiding
    - A key principle of software engineering is to create a design or code in small, self-contained units, called components or modules.
    - If a component is isolated from the effects of other components, then it is easier to trace a problem to the fault that caused it and to limit the damage the fault causes. This isolation is called encapsulation.
    - Information hiding is another characteristic of modular software.
- Developmental Controls (Cont'd)
  - Modularization is the process of dividing a task into subtasks.

- Developmental Controls (Cont'd)
  - Modularization
    - The goal is to have each component meet four conditions:
      1. single-purpose: performs one function
      2. small: consists of an amount of information for which a human can readily grasp both structure and content
      3. simple: is of a low degree of complexity so that a human can readily understand the purpose and structure of the module
      4. independent: performs a task isolated from other modules
- Developmental Controls (Cont'd)
  - Modularization
    - Several advantages to having small, independent components:
      - Maintenance. If a component implements a single function, it can be replaced easily with a revised one if necessary.
      - Understandability
      - Reuse
      - Correctness
      - Testing

- Developmental Controls (Cont'd)
  - Modularization
    - A modular component usually has high cohesion and low coupling.
      - By cohesion, we mean that all the elements of a component have a logical and functional reason for being there.
      - Coupling refers to the degree to which a component depends on other components in the system.
- Developmental Controls (Cont'd)
  - Encapsulation
    - Encapsulation hides a component's implementation details, but it does not necessarily mean complete isolation.
    - Berard [BER00] notes that encapsulation is the "technique for packaging the information [inside a component] in such a way as to hide what should be hidden and make visible what is intended to be visible."

- Developmental Controls (Cont'd)
  - Information Hiding
    - Think of a component as a kind of black box, with certain well-defined inputs and outputs and a well-defined function.
    - Other components' designers do not need to know how the module completes its function; it is enough to be assured that the component performs its task in some correct manner.
    - This concealment is the information hiding.
    - Information hiding is desirable because developers cannot easily and maliciously alter the components of others if they do not know how the components work.
- Developmental Controls (Cont'd)
  - Information Hiding (Cont'd)

- Developmental Controls (Cont'd)
  - Mutual Suspicion
    - Mutually suspicious programs operate as if other routines in the system were malicious or incorrect.
    - A calling program cannot trust its called subprocedures to be correct, and a called subprocedure cannot trust its calling program to be correct.
    - Each protects its interface data so that the other has only limited access.
- Developmental Controls (Cont'd)
  - Confinement
    - A confined program is strictly limited in what system resources it can access. If a program is not trustworthy, the data it can access are strictly limited.
  - Genetic Diversity
    - Tight integration of products is a concern.
    - A vulnerability in one of these can also affect the others.
    - Fixing a vulnerability in one can have an impact on the others.

- Developmental Controls (Cont'd)
  - Pfleeger et al. [PFL01] recommend several key techniques for building what they call "solid software":
    - peer reviews
    - hazard analysis
    - testing
    - good design
    - prediction
    - static analysis
    - configuration management
    - analysis of mistakes
- Developmental Controls (Cont'd)
  - Peer review
    - Review: The artifact is presented informally to a team of reviewers; the goal is consensus and buy-in before development proceeds further.
    - Walk-through: The artifact is presented to the team by its creator, who leads and controls the discussion. Here, education is the goal, and the focus is on learning about a single document.
    - Inspection: The artifact is checked against a prepared list of concerns. The creator does not lead the discussion, and the fault identification and correction are often controlled by statistical measurements.

- Developmental Controls (Cont'd)
  - Peer review (Cont'd)
    - A wise engineer who finds a fault can deal with it in at least three ways:
      1. by learning how, when, and why errors occur
      2. by taking action to prevent mistakes
      3. by scrutinizing products to find the instances and effects of errors that were missed
- Developmental Controls (Cont'd)
  - Peer review (Cont'd)
    [Figure: Fault Discovery Rate Reported at Hewlett-Packard]

- Developmental Controls (Cont'd)
  - Peer review (Cont'd)

    Discovery Activity     | Faults Found (Per Thousand Lines of Code)
    -----------------------|------------------------------------------
    Requirements review    | 2.5
    Design review          | 5.0
    Code inspection        | 10.0
    Integration test       | 33.0
    Acceptance test        | 2.0
- Developmental Controls (Cont'd)
  - Hazard analysis
    - A set of systematic techniques intended to expose potentially hazardous system states.
    - Usually involves developing hazard lists, as well as procedures for exploring "what if" scenarios to trigger consideration of nonobvious hazards.
    - A variety of techniques support the identification and management of potential hazards. Among the most effective are
      - Hazard and operability studies (HAZOP)
      - Failure modes and effects analysis (FMEA)
      - Fault tree analysis (FTA)

- Developmental Controls (Cont'd)
  - Hazard analysis (Cont'd)

                     | Known Cause                 | Unknown Cause
    -----------------|-----------------------------|------------------------------
    Known effect     | Description of system       | Deductive analysis, including
                     | behavior                    | fault tree analysis
    Unknown effect   | Inductive analysis,         | Exploratory analysis,
                     | including failure modes     | including hazard and
                     | and effects analysis        | operability studies
- Developmental Controls (Cont'd)
  - Testing
    - A process activity that homes in on product quality: making the product failure free or failure tolerant.

- Developmental Controls (Cont'd)
  - Testing (Cont'd)
    - Usually involves several stages.
      - Unit testing is done in a controlled environment whenever possible so that the test team can feed a predetermined set of data to the component being tested and observe what output actions and data are produced.
      - Integration testing is the process of verifying that the system components work together as described in the system and program design specifications.
- Developmental Controls (Cont'd)
  - Testing (Cont'd)
    - Usually involves several stages. (Cont'd)
      - A function test evaluates the system to determine whether the functions described by the requirements specification are actually performed by the integrated system.
      - A performance test compares the system with the remainder of these software and hardware requirements.
      - An acceptance test, in which the system is checked against the customer's requirements description.

- Developmental Controls (Cont'd)
  - Testing (Cont'd)
    - Usually involves several stages. (Cont'd)
      - A final installation test is run to make sure that the system still functions as it should.
      - After a change is made to enhance the system or fix a problem, regression testing ensures that all remaining functions are still working and that performance has not been degraded by the change.
- Developmental Controls (Cont'd)
  - Testing (Cont'd)
    - Each of the types of tests listed here can be performed from two perspectives:
      - Black-box testing treats a system or its components as black boxes; testers cannot "see inside" the system.
      - Clear-box testing (a.k.a. white box) - testers can examine the design and code directly, generating test cases based on the code's actual construction.

- Developmental Controls (Cont'd)
  - Testing (Cont'd)
    - Olsen [OLS93] describes the development at Contel IPC of a system containing 184,000 lines of code, with faults discovered during various activities tracked; differences were found:
      - 17.3 percent of the faults were found during inspections of the system design
      - 19.1 percent during component design inspection
      - 15.1 percent during code inspection
      - 29.4 percent during integration testing
      - 16.6 percent during system and regression testing
      - Only 0.1 percent of the faults were revealed after the system was placed in the field.
- Developmental Controls (Cont'd)
  - Testing (Cont'd)
    - From a security standpoint, independent testing is desirable.
    - Penetration testing is unique to computer security - the testers try to see if the software does what it is not supposed to do, which is to fail or fail to enforce security.

- Developmental Controls (Cont'd)
  - Good Design
    - Designers should try to anticipate faults and handle them in ways that minimize disruption and maximize safety and security.
    - Passive fault detection - construct the system so that it reacts in an acceptable way to a failure's occurrence.
    - Active fault detection - adopting a philosophy of mutual suspicion. Instead of assuming that data passed from other systems or components are correct, we always check that the data are within bounds and of the right type or format.
    - We can also use redundancy, comparing the results of two or more processes to see that they agree, before we use their result in a task.
- Developmental Controls (Cont'd)
  - Good Design (Cont'd)
    - Fault tolerance: isolating the damage caused by the fault and minimizing disruption to users.
    - Typically, failures include
      - failing to provide a service
      - providing the wrong service or data
      - corrupting data

- Developmental Controls (Cont'd)
  - Good Design (Cont'd)
    - We can build into the design a particular way of handling each problem:
      1. Retrying: restoring the system to its previous state and performing the service again, using a different strategy
      2. Correcting: restoring the system to its previous state, correcting some system characteristic, and performing the service again, using the same strategy
      3. Reporting: restoring the system to its previous state, reporting the problem to an error-handling component, and not providing the service again
- Developmental Controls (Cont'd)
  - Static Analysis - examine the design and code to locate and repair security flaws before a system is up and running
    - Several aspects of the design and code:
      - control flow structure - the sequence in which instructions are executed, including iterations and loops
      - data flow structure - follows the trail of a data item as it is accessed and modified by the system
      - data structure - the way in which the data are organized, independent of the system itself

- Developmental Controls (Cont'd)
  - Configuration Management - the process by which we control changes during development and maintenance
    - It is important to know who is making which changes to what and when:
      - corrective changes: maintaining control of the system's day-to-day functions
      - adaptive changes: maintaining control over system modifications
      - perfective changes: perfecting existing acceptable functions
      - preventive changes: preventing system performance from degrading to unacceptable levels
- Developmental Controls (Cont'd)
  - Configuration Management (Cont'd)
    - Four activities are involved in configuration management:
      - configuration identification
      - configuration control and change management
      - configuration auditing
      - status accounting

- Developmental Controls (Cont'd)
  - Configuration Management (Cont'd)
    - Configuration identification
      - Sets up baselines to which all other code will be compared after changes are made; that is, building and documenting an inventory of all components that comprise the system.
      - "Freeze" the baseline and carefully control what happens to it.
      - When a change is proposed and made, it is described in terms of how the baseline changes.
- Developmental Controls (Cont'd)
  - Configuration Management (Cont'd)
    - Configuration control and configuration management - ensure we can coordinate separate, related versions.
      - Three ways to control the changes:
        - Separate files - have different files for each release or version.
        - Delta - designate a particular version as the main version of a system and then define other versions in terms of what is different.
        - Conditional compilation, whereby a single code component addresses all versions, relying on the compiler to determine which statements to apply to which versions.

- Developmental Controls (Cont'd)
  - Configuration Management (Cont'd)
    - A configuration audit confirms that the baseline is complete and accurate, that changes are recorded, that recorded changes are made, and that the actual software (that is, the software as used in the field) is reflected accurately in the documents.
    - Finally, status accounting records information about the components: where they came from (for instance, purchased, reused, or written from scratch), the current version, the change history, and pending change requests.
- Developmental Controls (Cont'd)
  - Configuration Management (Cont'd)
    - All activities are performed by a configuration and change control board, or CCB.
      - The CCB contains representatives from all organizations with a vested interest in the system.
      - The board reviews all proposed changes and approves changes based on need, design integrity, future plans for the software, cost, and more.

- Developmental Controls (Cont'd)
  - Proofs of Program Correctness
    - Program verification can demonstrate formally the "correctness" of certain specific programs.
      - Making initial assertions about the inputs and then checking to see if the desired output is generated.
      - Each program statement is translated into a logical description about its contribution to the logical flow of the program.
      - Finally, the terminal statement of the program is associated with the desired output.
- Developmental Controls (Cont'd)
  - Proofs of Program Correctness (Cont'd)
    - Proving program correctness is hindered by several factors:
      - Correctness proofs depend on a programmer or logician to translate a program's statements into logical implications.
      - Deriving the correctness proof from the initial assertions and the implications of statements is difficult, and the logical engine to generate proofs runs slowly.
      - The current state of program verification is less well developed than code production.
