Software Quality Engineering
CSE302
Dr. Munam Ali Shah
PhD: University of Bedfordshire
MS: University of Surrey
M.Sc: University of Peshawar
n What is software?
n What is Quality?
n What is Engineering?
Objectives
Software Engineering (Roger S. Pressman)
Characteristics of software:
1. Software is developed or engineered (in engineering we make things from scratch; we don't have an existing model); it is not manufactured in the classical sense.
2. Software doesn't "wear out." (Wear out means things get old or obsolete with the passage of time and performance decreases, like computer hardware or the human body; software, by contrast, improves with time, like Windows NT and now Windows 10.)
Difference b/w Engineering and Manufacturing
Engineering refers to planning or designing, whereas manufacturing refers to using machines and raw materials to physically make the thing.
Software is engineered because it involves the designing and planning phases.
Manufacturing does not refer to the making of civil structures; that is called construction.
For example, Company A makes the blueprint of a building: it is in the engineering business. Company B makes cement and bricks for the building: it is in the manufacturing business. Company C takes raw material from B and the blueprint from A and constructs the building: it is in the construction business.
Engineering
Manufacturing
Construction
What is Quality
n What is software?
Characteristics of Software
n What is Engineering?
What is the difference between engineering, manufacturing, and construction?
n What is Quality?
Outlines
Quality Assurance
Testing
Perspectives and Expectations
Quality Perspective
Quality Expectations
Quality Expectations (contd.)
ISO-9126 Quality Framework
Summary of Today’s Lecture
Testing Overview
Fault Tolerance Overview
Safety Assurance Overview
Formal Method Overview
Inspection Overview
The End
Software Quality
Engineering
Lecture No. 4
Part- 1
Overview and Basics
Summary of the previous lecture
n QA Classification
n Overview
Defect prevention
Testing
Fault
Safety
Objectives
n Error blocking
error: missing/incorrect actions
direct intervention to block errors ⇒ fault injections prevented
rely on technology/tools/etc.
n Error source removal
root cause analysis ⇒ identify error sources
removal through education/training/etc.
n Systematic defect prevention via process improvement.
Formal Method Overview
n Motivation
fault present: revealed through testing/inspection/etc.
fault absent: formally verify (formal methods ⇒ fault absent)
n Basic ideas
behavior formally specified:
– pre/post conditions, or
– as mathematical functions
verify "correctness": intermediate states/steps, axioms and compositional rules
Approaches: axiomatic/functional/etc.
Inspection Overview
n Product/Process characteristics:
object: product type, language, etc.
scale/order: unit, component, system, . . .
who: self, independent, 3rd party
n What to check:
verification vs. validation
external specifications (black-box)
internal implementation (white/clear-box)
n Criteria: when to stop?
coverage of specs/structures
reliability ⇒ usage-based testing
Fault Tolerance Overview
n Motivation
fault present but removal infeasible/impractical
fault tolerance ⇒ contain defects
n FT techniques: break the fault-failure link
recovery: rollback and redo
NVP: N-version programming – fault blocked/out-voted
Safety Assurance Overview
n Defect Handling
n QA in Software Processes
n V&V Perspective
n QA: Defect View vs V&V View
The End
Software Quality
Engineering
Lecture No. 5
Part- 1
Overview and Basics
Summary of the previous lecture
n QA Classification
n Overview
Defect prevention
Testing
Fault
Safety
Outlines
n QA Context
n Quality in software processes
n V & V View
n DC View
n QA: Defect View vs V&V View
Objectives
n Mega-process:
initiation, development, maintenance, termination.
n Development process components:
requirement, specification, design, coding, testing,
release.
QA in Software Process
n Process variations:
waterfall development process
iterative development process
spiral development process
lightweight/agile development processes and XP
(extreme programming)
maintenance process too
mixed/synthesized/customized processes
n QA important in all processes
QA in Waterfall Process
V&V
V-model
V&V in Software Process
n V&V vs DC View
l Two views of QA:
V&V view
DC (defect-centered)
Interconnected: mapping possible?
V&V vs DC View
n QA in Software Processes
n We explored some V&V models
n We compared and contrasted the V&V and DC views
Overview of Next lecture
n QA to SQE
n Key SQE Activities
n SQE in Software Process
n SQE and QIP (quality improvement paradigm)
The End
Software Quality
Engineering
Lecture No. 6
Part- 1
Overview and Basics
Summary of the previous lecture
n QA to SQE
n Key SQE Activities
n V & V model
n SQE in Software Process
Outlines
n QIP support:
overall support: experience factory
measurement/analysis: GQM (goal-question-metric
paradigm)
n SQE as expanding QA to include QIP ideas
Pre-QA Planning
n Pre-QA planning:
Quality goal
Overall QA strategy:
QA activities to perform?
measurement/feedback planning
n Setting quality goal(s):
Identify quality views/attributes
Select direct quality measurements
Assess quality expectations vs. cost
Setting Quality Goals
n Identify quality views/attributes
customer/user expectations,
market condition,
product type, etc.
n Select direct quality measurements
direct: reliability
defect-based measurement
other measurements
n Assess quality expectations vs. cost
cost-of-quality/defect studies
economic models: COCOMO etc
Forming QA Strategy
n QA activity planning
evaluate individual QA alternatives
strength/weakness/cost/applicability/etc.
match against goals
integration/cost considerations
n Measurement/feedback planning:
define measurements (defect & others)
planning to collect data
preliminary choices of models/analyses
feedback & follow-up mechanisms, etc.
Measurement Analysis and Feedback
n Measurement:
defect measurement as part of defect handling
process
other data and historical baselines
n Analyses: quality/other models
input: above data
output/goal: feedback and follow-up
focus on defect/risk/reliability analyses
n Feedback and follow-up:
frequent feedback: assessments/predictions
possible improvement areas
project management and improvement
SQE in Software Processes
q SQE activities ⊂ development activities:
q quality planning ⊂ product planning
q QA activities ⊂ development activities
q analysis/feedback ⊂ project management
q Fitting SQE in software processes:
q different start/end time
q different sets of activities, sub-activities, and
focuses
q In waterfall process: more staged (planning,
execution, analysis/feedback)
q In other processes: more iterative or other
variations
Quality engineering in the waterfall process
SQE Effort Profile
n QE activity/effort distribution/dynamics:
different focus in different phases
different levels (qualitatively)
different build-up/wind-down patterns
impact of product release deadline (deadline-driven activities)
n planning: front heavy
n QA: activity mix (early vs. late; peak variability?
deadline?)
n analysis/feedback: tail heavy (often deadline-driven or
decision-driven)
SQE Effort in Waterfall Process
n Effort profile
n planning/QA/analysis of total effort
n general shape/pattern only (actual data would not be as smooth)
n in other processes: similar but more evenly distributed
Summary of Today’s Lecture
n QA to SQE
n Key SQE Activities
n SQE in Software Process
n SQE and QIP (quality improvement paradigm)
Last Lecture of Part- 1:
Overview and Basics
Outlines
n More on:
Quality concepts
Quality control
Cost of Quality
n Statistical Quality Assurance
n The SQA Plan
Objectives
n Causes of errors:
- incomplete or erroneous specification (IES)
- misinterpretation of customer communication (MCC)
- intentional deviation from specification (IDS)
- violation of programming standards (VPS)
- error in data representation (EDR)
- inconsistent module interface (IMI)
- error in design logic (EDL)
- incomplete or erroneous testing (IET)
- inaccurate or incomplete documentation (IID)
- error in programming language translation of design (PLT)
- ambiguous or inconsistent human-computer interface (HCI)
- miscellaneous (MIS)
The SQA Plan
n The SQA plan provides a road map for instituting software quality assurance.
n Basic items:
- purpose of the plan and its scope
- management: organization structure, SQA tasks and their placement in the process, roles and responsibilities related to product quality
- documentation: project documents, model, technical documents, user documents; standards, practices, and conventions
- reviews and audits; test plan and procedure
- problem reporting and corrective actions
- tools; code control; media control; supplier control
- records collection, maintenance, and retention
- training; risk management
Summary of Today’s Lecture
n More on:
Quality concepts
Quality control
Cost of Quality
n Statistical Quality Assurance
n The SQA Plan
Part- 2 (a):
Software Testing
Outlines
n What is testing
n Why testing is needed
n History of testing
n White box testing
n Black box testing
Objectives
Unit testing
Integration testing
System testing
Unit Testing
System Testing
What is Software Testing
Several definitions:
Major goals:
uncover the errors (defects) in the software, including errors in:
- requirements from requirement analysis
- design documented in design specifications
- coding (implementation)
- system resources and system environment
- Development Engineers
- Only perform unit tests and integration tests
- Test Planning
Define a software test plan by specifying:
- a test schedule for a test process and its activities, as
well as assignments
- test requirements and items
- test strategy and supporting tools
- Problem Reporting
Report program errors in a systematic way.
- Test Automation
- Define and develop software test tools
- Adopt and use software test tools
- Write software test scripts and facilities
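As a minimal sketch of such a test script (the divide function and its tests are hypothetical illustrations, not from the lecture; Python's built-in unittest is used):

import unittest

def divide(a, b):
    # Function under test: raises ValueError on division by zero.
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class DivideTests(unittest.TestCase):
    def test_normal_case(self):
        self.assertEqual(divide(10, 2), 5)
    def test_error_case(self):
        with self.assertRaises(ValueError):
            divide(1, 0)

if __name__ == "__main__":
    unittest.main()  # runs all tests and reports pass/fail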
Boehm [BOE81]:
GUI Testing
Object-Oriented Software Testing
Component Testing and Component-based Software
Testing
Domain-specific Feature Testing
Testing Web-based Systems
Summary of Today’s Lecture
. Defect: error/fault/failure.
. Defect prevention/removal/containment.
. Map to major QA activities
• Defect prevention:
Error blocking and error source removal.
• Defect removal:
. Inspection, etc.
• How? Run-observe-follow-up
• Refinement
⇒ generic process below
Testing: Activities & Generic Process
• Major testing activities:
. test planning and preparation
. execution (testing)
. analysis and follow-up
• Test planning:
• Test preparation:
• Follow-up activities:
. basic questions
. testing technique questions
. activity/management questions
• Basic questions addressed here:
• Resource-based criteria:
• Quality-based criteria:
n Testing Activities
n Test Management
n Testing Automation
Outlines
n Checklist-Based Testing
n OP Development: Procedures/Examples
Objectives
• Explicit checklists:
. Function/features (external)
. Implementation (internal)
. Standards, etc.
. Mixed or combined checklists
Function Checklists
• Function/feature (external) checklists:
. Black-box in nature
. List of major functions expected
• Possible solutions:
. specialized checklists ⇒ partitions.
. alternatives to checklists: FSMs
Partitions: Ideas and Definitions
• Partitions: a special type of checklists
• General steps:
. Information collection.
. OP construction.
. UBST under OP.
. Analysis (reliability!) and follow-up.
• Linkage to development process
. Construction: Requirement/specification, and spill over to
later phases.
. Usage: Testing techniques and SRE
• Procedures for OP construction necessary
UBST: Primary Benefit
• Primary benefit:
. Overall reliability management.
. Focus on high leverage parts
⇒ productivity and schedule gains:
– same effort on most-used parts
– reduced effort on lesser-used parts
– reduction of 56% system testing cost
– or 11.5% overall cost (Musa, 1993)
• Gains vs. savings situations
. Savings situation:
– reliability goal within reach; avoid over-testing lesser-used parts
. Gains situation: more typical
– re-focusing testing effort
– constrained reliability maximization
Developing OP
• Obtaining OP information:
. identify distinct operations as disjoint alternatives.
. assign associated probabilities
– occurrences/weights ⇒ probabilities.
. in two steps or via an iterative procedure
• OP information sources:
. actual measurement.
. customer surveys.
. expert opinion.
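A minimal sketch of usage-based test selection driven by an OP; the operations and their probabilities below are hypothetical placeholders:

import random

operational_profile = {
    "enroll": 0.50,   # the most-used operation receives most of the test runs
    "change": 0.30,
    "drop":   0.15,
    "report": 0.05,
}

def next_test_operation(profile):
    # Pick the next operation to test, weighted by its usage probability.
    ops = list(profile)
    weights = list(profile.values())
    return random.choices(ops, weights=weights, k=1)[0]

counts = {op: 0 for op in operational_profile}
for _ in range(1000):
    counts[next_test_operation(operational_profile)] += 1
print(counts)   # test effort roughly mirrors the usage probabilities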
Developing OP
• Customer surveys:
. Less accurate and less costly than measurement.
. But without the related difficulties.
. Key to statistical validity:
– large enough participation
– “right” individuals completing surveys
. More important to cross-validate
• Expert opinion:
. Least accurate and least costly.
. Ready availability of internal experts.
. Use as a rough starting point.
Developing OP
• Who should develop OP?
. System engineers
– requirement ⇒ specification
. High-level designers
– specification ⇒ product design
. Planning and marketing
– requirement gathering
. Test planners (testing)
– users of OP
. Customers (implicitly assumed)
– as the main information source
• Key: those who can help us
. identify distinct alternatives (operations)
. assign associated probabilities
OP Construction: A Case Study
• Background:
. Former CSE 302 student
. Course project: SQE OP development
. Application:
• Problem and key decisions:
. Product: name of the product
. Product characteristics ⇒ OP type
– menu selection/classification type
– flat instead of Markovian
. Resulting OP, validation, and application
OP Construction: A Case Study
• Participants:
. Software Product Manager
. Test Engineers
. Systems Engineers
. Customers
. Asad: pulling it together
. Tahir: technical advising
. Malik: documentation
• Information gathering
. Interview Software Product Manager to identify target customers
. Customer survey/questionnaire to obtain customer usage
information
. Preparation, OP construction and follow-up
• Use similar profiles/customers
OP case study
n Checklist-Based Testing
n OP Development: Procedures/Examples
Overview of Next lecture
n Checklist-Based Testing
n OP case study
Part- 2 (b):
Other Models & Techniques of
Testing
Outlines
n Control Flow Testing (CFT)
l Steps
l Construction
l Issues
l Methods
Objectives
• Notational conventions:
. “Pi” for processing node “i”
. “Ji” for junction node “i”
. “Ci” for condition/branching node “i”
CFT Technique
• Test preparation:
. Build and verify the model (CFG)
. Test cases: CFG ⇒ path to follow
. Outcome checking: what to expect and how to
check it
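A small sketch of this preparation step over a hypothetical acyclic CFG, using the Pi/Ci/Ji notation above (a real CFG with loops would need a bound on path length):

# Edges map each node to its successors.
cfg = {
    "P1": ["C1"],
    "C1": ["P2", "P3"],   # condition node: true/false branches
    "P2": ["J1"],
    "P3": ["J1"],         # junction node J1 rejoins the branches
    "J1": ["P4"],
    "P4": [],             # exit node
}

def all_paths(graph, node, path=None):
    # Enumerate entry-to-exit paths; each path is one CFT test case to sensitize.
    path = (path or []) + [node]
    if not graph[node]:
        return [path]
    paths = []
    for succ in graph[node]:
        paths.extend(all_paths(graph, succ, path))
    return paths

for p in all_paths(cfg, "P1"):
    print(" -> ".join(p))   # P1 -> C1 -> P2 -> J1 -> P4, and the P3 variant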
l Steps
l Construction
l Issues
l Methods
Outlines
n Data Dependency and Sequencing
The links in DDGs represent the D-U (definition-use) relation, or "is used by". That is, a link from A to B means that the data defined in A is used to define the data in B.
Example DDG Elements
. Conditional definition example with a data selector node
. parallel conditional assignment
. multi-valued data selector predicate
. match control and data in-link values
The three possible values for the result r can be marked as r1, r2, and r3. The final result r will be selected from among these three values based on the condition on d. Therefore, we can place r in a data selector node, connect r1, r2, and r3 to r as data in-links, and the condition node "d?0" to r as the control in-link. We distinguish the control in-link from data in-links by using a dotted instead of a solid link. Only one value will be selected for r from the candidate values r1, r2, and r3, by matching their specific conditions to the control in-link evaluation result.
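A sketch of the parallel conditional assignment just described; the candidate computations for r1, r2, and r3 are hypothetical, and the selection mirrors the control in-link predicate on d:

def select_r(d, x, y):
    # Candidate values computed "in parallel" (placeholder computations):
    r1 = x + y   # selected when d > 0
    r2 = x - y   # selected when d == 0
    r3 = x * y   # selected when d < 0
    # Control in-link: the multi-valued predicate "d?0" selects exactly one
    # candidate by matching its condition to the evaluation result.
    if d > 0:
        return r1
    elif d == 0:
        return r2
    else:
        return r3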
DDG Characteristics and Construction
• Characteristics of DDG:
. Multiple inlinks at most non-terminal nodes.
. Focus: output variable(s)
– usually one or just a few
. More input variables and constants.
. Usually more complex than CFG
– usually contains more information
(omit non-essential sequencing info.)
• Source of modeling:
. White box: design/code (traditionally).
. Black box: specification (new usage).
. Backward data resolution
(often used as construction procedure.)
Building DDG
• Basic steps
. Synchronization.
. OO systems: abstraction hierarchies.
. Integration testing:
– communication/connections,
– call graphs.
Techniques
Key characteristics
n Integration Testing
n Regression Testing
n Beta Testing
n Testing (sub-phases)
Integration Testing
n Key characteristics:
. Object: interface and interaction among multiple components or subsystems.
. Component as a black-box (assumed).
. System as a white-box (focus).
. Exit: coverage (sometimes reliability).
n Commonly used testing techniques:
. FSM-based coverage testing.
. Other techniques may also be used.
. Sometimes treated as ⊂ system testing ⇒ see
system testing techniques in next slide.
System Testing
n Key characteristics:
. Object: whole system and the overall operations,
typically from a customer’s perspective.
. No implementation detail ⇒ BBT.
. Customer perspective ⇒ UBST.
. Exit: reliability (sometimes coverage).
n Commonly used testing techniques:
. UBST with Musa or Markov OPs.
. High-level functional checklists.
. High-level FSM, possibly CFT & DFT.
. Special case: as part of a “super”-system in
embedded environment ⇒ test interaction with
environment.
Acceptance Testing
n Key characteristics:
. Object: whole system.
– but defect fixing no longer allowed.
. Customer acceptance in the market.
. Exit: reliability.
n Commonly used testing techniques:
. Repeated random sampling without defect fixing.
. UBST with Musa OPs.
. External testing services/organizations may be used for system “certification”.
Beta Testing
n Key characteristics:
. Object: whole system
. Normal usage by customers.
. Exit: reliability.
n Commonly used testing techniques:
. Normal usage.
. Ad hoc testing by customers. (trying out different
functions/features)
. Diagnosis testing by testers/developers to fix
problems observed by customers.
Testing Sub-Phases: Comparison
n Key characteristics for comparison:
. Object and perspectives.
. Exit criteria.
. Who is performing the test.
. Major types of specific techniques.
n “Who” question not covered earlier:
. Dual role of programmers as testers in unit testing and component testing.
. Customers as testers in beta testing.
. Professional testers in other sub-phases.
. Possible 3rd party (IV&V) to test reusable components & system acceptance.
Testing Sub-Phases: Summary
Specialized Testing
n Equivalence Partitioning and Integration
n Equivalence Partitioning
[Use-case diagram: actor Student with use cases Enroll, Change, Drop]
Step 2
Step 3: Identify Use Case Scenarios
• A use case scenario is an instance of a use case, or a complete “path”
through the use case.
• End users of a system can go down many paths as they execute the
functionality specified in the use case.
• The basic (or normal) path is illustrated by the dotted lines.
Step 4: Generate Test Case
n EP Example:
n Consider a requirement for a software system:
[Figure: valid partition from 18 to 56 on a number line]
Errors are more common at boundary values, either just below, just above
or specifically on the boundary value.
Boundary Analysis #1
n “The customer is eligible for a life assurance discount if they are at least
18 and no older than 56 years of age.”
Test values would be: 17, 18, 19, 55, 56 and 57.
This assumes that we are dealing with integers, so the least significant digit is 1 on either side of the boundary.
Boundary Analysis #2
n For each boundary we test +/- 1 in the least significant digit on either side of the boundary.
Boundary Limit - 1 | Boundary Limit | Boundary Limit + 1
Might also test: 0, -5, 200, "Fred", 0.00000001, and some typical mid-range values: 21, 32, 47. Note that boundary values are tested +/- the least significant recorded digit.
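A sketch of these boundary tests as executable checks, assuming the rule is implemented as a simple predicate (the function name is hypothetical; non-integer probes such as "Fred" are omitted):

def eligible_for_discount(age):
    # Hypothetical implementation of "at least 18 and no older than 56".
    return 18 <= age <= 56

# (value, expected) pairs: +/- 1 on each boundary, plus invalid-partition
# and mid-range probes suggested above.
cases = [(17, False), (18, True), (19, True), (55, True), (56, True), (57, False),
         (0, False), (-5, False), (200, False), (21, True), (32, True), (47, True)]

for value, expected in cases:
    assert eligible_for_discount(value) == expected, value
print("all boundary tests passed")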
Invalid Partitions
n Testing Vs Debugging
What to test?
Any working product which forms part of the software
application has to be tested. Both data and programs must be
tested.
Who tests?
Programmer, Tester and Customer
Software Development Lifecycle (SDLC)
n Inception
n Requirements
n Design
n Coding
n Testing
n Release
n Maintenance
Inception
n By compiling and correcting the errors, all syntax errors are removed.
Testing Levels
n Unit Testing
Programs will be tested at unit level
The same developer will do the test
Integration Testing
When all the individual program units are tested in the unit testing phase and all units
are clear of any known bugs, the interfaces between those modules will be tested
Ensure that data flows from one piece to another piece
System Testing
After all the interfaces are tested between multiple modules, the whole set of
software is tested to establish that all modules work together correctly as an
application.
Put all pieces together and test
Acceptance Testing
The client will test it, at their own site, in a near-real or simulated environment.
Release to Production and Warranty Period
Bug fixing
Upgrade
Enhancement
v After some time, the software may become obsolete and will reach a point where it cannot be used. At that time, it will be replaced by superior software. This is the end of the software's life.
v We do not use FoxPro or Windows 3.1 now as they are gone!
Development Models
n Waterfall Model – do one phase at a time for all requirements given by the customer
Development Models
n Incremental Model – take smaller set of requirements and build slowly
Development Models
n What is to be tested ?
l Configuration – check all parts for existence
l Security – how the safety measures work
l Functionality – the requirements
l Performance – with more users and more data
l Environment – keep product same but other settings different
Detailed Test Cases
The test cases will have a generic format as
below.
Test Case Id
Test Case Description
Test Prerequisite
Test Inputs
Test Steps
Expected Results
Detailed Test Case (DTC)
Install Tests
n Auto install in default mode
n Does the installer check for the prerequisites?
n Does the installer check for the system user privileges?
n Does the installer check for disk and memory space?
n Does the installer check for the license agreement?
n Does the installer check for the right product key?
n Does the installer install in the default path?
n Do we have different install types like custom, full, and compact?
Install Tests continued..
Cancel the installation halfway through.
Uninstall the software.
Cancel halfway through the uninstall.
Reinstall on the same machine. Repair an existing install on the same machine.
Does the installer create folders, icons, shortcuts, files, database, registry entries?
Does uninstall remove any other files that do not belong to this product?
Actual Test Execution
Navigation Tests
Once the install is complete, start the application
Move to every possible screen using menus, toolbar icons, shortcut keys, or links.
Check that the respective screen titles and screen fields exist.
Move back and forth between various screens and forms in an ad hoc manner
Exit and restart the application many times
Core Functional Test
Build Verification Tests (BVT)
A set of test scenarios/cases must be identified as critical priority such that, if these tests do not pass, the product does not get acceptance from the test team.
Build Acceptance Tests (BAT)
This starts once the BVT is done. It involves feeding the values to the program as per the test input and then performing the other actions (like clicking specific buttons or function keys, etc.) in the sequence given in the test steps.
Test Problem Report or Fault Report or Bug
Report
Problem Severity: Show stopper/High/Medium/Low. This will be agreed upon by the lead tester and the development project manager.
Problem Detailed Description: A description of what was tested and what happened. This will be filled in by the tester.
Problem Resolution: After fixing the problem, the developer fills in this section with details about the fix.
n Testing Vs Debugging
• Error blocking
• Process improvement
Objectives
Error blocking
Process improvement
Overview of the next lecture
• Error blocking
• Process improvement
Outlines
n Software Inspection
Fagan Inspection
Objectives
Fagan Inspection
Outlines
n Software Inspection
Scope Inspection
Other Issues
Objectives
n Code reading
. focus on code
. optional meetings
n Code reading by stepwise abstraction
. basis: program comprehension studies
. variation to code reading
– formalized code reading technique
. top-down decomposition and bottom-up abstraction
. recent evidence of effectiveness
Formal Inspection: ADR & Correctness
§ Active design reviews (ADR)
§ Another formal inspection, for designs
§ Inspector active vs. passive
§ Author prepares questionnaires
§ More than one meeting
§ Scenario based (questionnaires)
§ Overall ADR divided into small ones
§ 2-4 persons (for each smaller ADR)
n • Key advantages:
. Wide applicability and early availability
. Complementary to testing/other QA
. Many techniques/processes to follow/adapt
. Effective under many circumstances
n • Key limitations:
. Human intensive
. Dynamic/complex problems and interactions:
Hard to track/analyze.
. Hard to automate.
Summary of Today’s Lecture
Scope Inspection
Other Issues
Outlines
Verification of software.
n • Basic approaches:
. Floyd/Hoare axiomatic
. Dijkstra/Gries weakest precondition (WP)
. Mills’ program calculus/functional approach
n • Basis for verification:
. logic (axiomatic and WP)
. mathematical function (Mills)
. other formalisms
n • Procedures/steps used:
. bottom-up (axiomatic)
. backward chaining (WP)
. forward composition (Mills), etc.
Object and General Approach
n • Basic block: statements
. block (begin/end)
. concatenation (S1; S2)
. conditional (if-then/if-then-else)
. loop (while)
. assignment
n • Formal verification
. rules for above units
. composition
. connectors (logical consequences)
Axiomatic Approach
n • Floyd axioms/flowchart
. Annotation on flowchart
. Logical relations
. Verification using logic
n • Hoare axioms/formalization
. Pre/Post conditions
. Composition (bottom-up)
. Loops and functions/parameters
. Invariants (loops, functions)
. Basis for many later approaches
Axiomatic Correctness
n • Notations
. Statements: Si
. Logical conditions: {P} etc.
. Schema: {P} S {Q}
. Axioms/rules: conditions or schemas as premises, yielding a conclusion
n • Axioms:
. Schema for assignment
. Basic statement types
. “Connectors”
. Loop invariant
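A sketch of the {P} S {Q} schema and a loop invariant rendered as runtime assertions on a hypothetical summation program; a formal proof would discharge these conditions statically rather than checking them at run time:

def sum_first_n(n):
    assert n >= 0                          # precondition {P}
    total, i = 0, 0
    while i < n:
        assert total == i * (i + 1) // 2   # loop invariant: holds on each entry
        i += 1
        total += i
    assert total == n * (n + 1) // 2       # postcondition {Q}
    return total

print(sum_first_n(10))   # -> 55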
Functional Approach
n • Functional approach
. Mills’ program calculus
. Symbolic execution extensively used.
. Code reading/chunking/cognition ideas.
n • Functional approach elements
. Mills box notation
. Basic function associated with individual
statements
. Compositional rules
. Forward flow/symbolic execution
. Comparison with Dijkstra's wp
Functional Approach: Symbolic Execution
n if x ≥ 0 then y ← x else y ← −x
n Trace 1 (x ≥ 0, y = x) and Trace 2 (x < 0, y = −x): both traces used in verification
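A sketch of the two traces, with the combined function checked against the specification y = |x| (the runtime assertion is illustrative, not part of Mills' notation):

def abs_value(x):
    # Trace 1: path condition x >= 0, symbolic result y = x
    # Trace 2: path condition x < 0,  symbolic result y = -x
    if x >= 0:
        y = x
    else:
        y = -x
    assert y == abs(x)   # the two traces together compute y = |x|
    return y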
Formal Verification: Limitations
n • Algebraic specification/verification:
. Specify and verify data properties
. Behavior specification
. Base case
. Constructions
. Domain/behavior mapping
. Use in verification
n • Stack example
. newstack
. push
. pop
. Canonical form
Formal Verification: Other
n • Model checking:
. Behavioral specification via FSMs.
. Proposition: property of interest expressed as a
suitable formula.
. Model checker: algorithm/program to check
proposition validity.
» Proof: positive result.
» Counterexample: negative result.
n • Other approaches and discussions:
. Algorithm analysis.
. Petri-net modeling and analysis.
. Tabular/semi-formal method.
. Formal inspection based.
. Limited aspects ⇒ easier to perform.
Formal Verification: Summary
n • Basic features:
. Axioms/rules for all language features
. Ignore some practical issues: Size, capacity, side
effects, etc.?
. Forward/backward/bottom-up procedure.
. Develop invariants: key, but hard.
n • General assessment:
. Difficult, even on small programs
. Very hard to scale up
. Inappropriate for non-mathematical problems
. Hard to automate
– manual process ⇒ errors↑
. Worthwhile for critical applications
Summary of Today’s Lecture
n • FT with NVP:
. NVP: N-Version Programming
. Multiple independent versions
. Dynamic voting/decision ⇒ FT.
Fault Tolerance: N-Version Programming
n Multiple independent versions
. Multiple: parallel vs backup?
. How to ensure independence?
n Support environment:
. concurrent execution
. switching
. voting/decision algorithms
n Correction/recovery?
. p-out-of-n reliability
. in conjunction with RB
. dynamic vs. off-line correction
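A minimal sketch of NVP-style majority voting; the three versions are hypothetical stand-ins for independently developed implementations of one specification, with a seeded fault to show out-voting:

from collections import Counter

def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x if x != 3 else -1   # seeded fault for the demo

def nvp_execute(x, versions=(version_a, version_b, version_c)):
    # Run all versions (conceptually in parallel) and out-vote faulty results.
    results = [v(x) for v in versions]
    value, votes = Counter(results).most_common(1)[0]
    if votes >= len(versions) // 2 + 1:   # require a strict majority
        return value
    raise RuntimeError("no majority: fault not tolerated")

print(nvp_execute(3))   # faulty version_c is out-voted; prints 9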
FT and Safety
n • Extending FT idea for safety:
. FT: tolerate fault
. Extend: tolerate failure
. Safety: accident free
. Weaken error-fault-failure-accident link
n • FT in SSE (software safety engineering):
. Too expensive for regular systems
. As hazard reduction technique in SSE
. Other related SSE techniques:
» general redundancy
» substitution/choice of modules
» barriers and locks
» analysis of FT
What Is Safety?
n • Safety: The property of being accident-free for
(embedded) software systems.
. Accident: failures with severe consequences
. Hazard: condition for accident
. Special case of reliability
. Specialized techniques
n • Software safety engineering (SSE):
. Hazard identification/analysis techniques
. Hazard resolution alternatives
. Safety and risk assessment
. Qualitative focus
. Safety and process improvement
Safety Analysis & Improvement
n • Hazard analysis:
. Hazard: condition for accident
. Fault trees: (static) logical conditions
. Event trees: dynamic sequences
. Combined and other analyses
. Generally qualitative
. Related: accident analysis and risk assessment
n • Hazard resolution
. Hazard elimination
. Hazard reduction
. Hazard control
. Related: damage reduction
Hazard Analysis: Fault Tolerance Analysis
n • Fault tree idea:
. Top event (accident)
. Intermediate events/conditions
. Basic or primary events/conditions
. Logical connections
. Form a tree structure
n • Elements of a fault tree:
. Nodes: conditions and sub-conditions – terminal vs. non-terminal
. Logical relations among sub-conditions – AND,
OR, NOT
. Other types/extensions possible
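A sketch of evaluating such a static fault tree over truth values of basic events; the tree and event names are hypothetical:

def evaluate(node, basic_events):
    if isinstance(node, str):              # terminal node: a basic event
        return basic_events[node]
    op, *children = node                   # non-terminal: logical connection
    values = [evaluate(c, basic_events) for c in children]
    if op == "AND":
        return all(values)
    if op == "OR":
        return any(values)
    if op == "NOT":
        return not values[0]
    raise ValueError(op)

# Top event (accident) if the sensor fails AND (backup fails OR operator errs).
tree = ("AND", "sensor_failure", ("OR", "backup_failure", "operator_error"))
print(evaluate(tree, {"sensor_failure": True,
                      "backup_failure": False,
                      "operator_error": True}))   # -> True: hazard present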
Hazard Analysis: FTA Example
n • TFM: Two-Frame-Model
. Physical frame
. Logical frame
. Sensors: physical ⇒ logical
. Actuators: logical ⇒ physical
n • TFM characteristics and comparison:
. Interaction between the two frames
. Nondeterministic state transitions and
encoding/decoding functions
. Focuses on symmetry/consistency between the
two frames.
TFM Example
n • TFM Example:
. physical frame: nuclear reactor
. logical frame: computer controller
Usage of TFM
Refined Quality Engineering Process: Management, Analysis and Feedback for Quantifiable Improvement
Outlines
Classification of quality assessment models
Generalized Models: Overall
n • Key characteristics
. Industrial averages/patterns ⇒ (single) rough estimate.
. Most widely applicable.
. Low cost of use.
n • Examples: Defect density.
. Estimate total defects with a sizing model (see the sketch below).
. Variation: QI in IBM (counting in-field unique defects only)
n • Non-quantitative overall models:
. As extension to quantitative models.
. Examples: 80:20 rule, and other general observations.
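A minimal sketch of the defect density estimator mentioned above; the density baseline and size figure are hypothetical placeholders:

industry_defect_density = 5.0   # defects per KLOC (assumed historical baseline)
estimated_size_kloc = 120       # from a sizing/effort model

estimated_total_defects = industry_defect_density * estimated_size_kloc
print(f"rough estimate: {estimated_total_defects:.0f} defects")   # -> 600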
Product-Specific Models (PSM)
n • Product-specific models (PSMs):
. Product-specific info. used (vs. none used in
generalized models)
. Better accuracy/usefulness at cost ↑
. Three types:
» semi-customized
» observation-based
» measurement-driven predictive
n • Connection to generalized models (GMs):
. Customize GMs to PSMs with new/refined models and
additional data.
. Generalize PSMs to GMs with empirical evidence and
general patterns.
PSM: Semi-Customized
n • Semi-customized models:
. Project level model based on history.
. Data captured by phase.
. Both projections and actual.
. Linear extrapolation.
n Related extensions to Defect Removal Model (DRM):
. Defect dynamics model in Lecture 28,
. Orthogonal Defect Classification (ODC) defect
analyses in Lecture 28:
» 1-way distribution/trend analysis
» 2-way analysis of interaction.
PSM: Observation-Based
n • Observation-based models:
. Detailed observations and modeling
. Software reliability growth models
. Other reliability/safety models
n • Model characteristics
. Focus on the effect/observations
. Assumptions about the causes
. Assessment-centric
PSM: Predictive
n • Measurement-driven predictive models
. Establish predictive relations
. Modeling techniques: regression, TBM, NN,
OSR etc.
. Risk assessment and management
n • Model characteristics:
. Response: chief concern
. Predictors: observable/controllable
. Linkage quantification
Model Summary
Relating Models to Measurements
n Data required by quality models (Lecture 26):
. Direct quality measurements
» to be assessed/predicted/controlled
. Indirect quality measurements
» – means to achieve the goal
» – environmental, activity, product-internal
. Data requirement by models summarized below
Relating Models to Measurements
Model/Measurement Selection
n • Customize Goal-Question-Metric (GQM) into 3 steps
n • Step 1: Quality goals
. Restricted, not general goals
n • Step 2: Quality models
. Model characteristics/taxonomy
. Model applicability/usefulness
. Data requirement/affordability
n • Step 3: Quality measurements
. Model-measurements relations
. Detailed model information
Selection Example A
n • Goal: rough quality estimates
n • Situation 1:
. No product specific data
. Industrial averages/patterns
. Commercial tools: SLIM etc.
. Product planning stage
. Defect profile in lifecycle
. Use generalized models
n • Situation 2:
. Data from related products
. DRM for legacy products
. ODC profile for IBM products
. Semi-customized models
Summary of Today’s Lecture
In today’s lecture, we talked about quality models.
We discussed some generalized models and their
characteristics and we also discussed PSMs.
Certain models are appropriate for specific types of software.
Overview of the next lecture
n • Other attributes:
. Impact
. Severity: low-high
. Detection time, etc.
ODC Attributes: Cause/Fault-View
n • Defect type:
. Associated with development process.
. Missing or incorrect.
. Collected by developers.
. May be adapted for other products.
n • Other attributes:
. Action: add, delete, change.
. Number of lines changed, etc.
ODC Attributes: Cause/Error-View
n • Key attributes:
. Defect source: vendor/base/new code.
. Where injected.
. When injected.
n • Characteristics:
. Associated to additional causal analysis.
. (May not be performed.)
. Much subjective judgment involved
(evolution of ODC philosophy)
n • Phase injected: rough “when”.
Adapting ODC for Web Error Analysis
n • Continuation of web testing/QA study.
n • Web error = observed failures, with causes
already recorded in access/error logs.
n • Key attributes mapped to ODC:
. Error type = defect impact.
» response code (4xx) in access logs
. Referring page = defect trigger.
» individual pages with embedded links
» classified: internal/external/empty
» focus on internal problems
. Missing file type = defect source – different fixing
actions to follow.
n • May include other attributes for different kinds
of web sites.
ODC Analysis: Attribute Focusing
n • General characteristics
. Graphical in nature
. 1-way or 2-way distribution
. Phases and progression
. Historical data necessary
. Focusing on big deviations
n • Representation and analysis
. 1-way: histograms
. 2-way: stack-up vs. multiple graphics
. Support with analysis tools
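A sketch of 1-way and 2-way attribute focusing over a hypothetical list of classified defect records:

from collections import Counter

defects = [
    {"impact": "reliability", "severity": "high"},
    {"impact": "usability",   "severity": "low"},
    {"impact": "reliability", "severity": "high"},
    {"impact": "capability",  "severity": "medium"},
    {"impact": "usability",   "severity": "low"},
]

# 1-way analysis: distribution of a single attribute (histogram material).
print(Counter(d["impact"] for d in defects))

# 2-way analysis: interaction of two attributes (stack-up material).
two_way = Counter((d["impact"], d["severity"]) for d in defects)
for (impact, severity), count in sorted(two_way.items()):
    print(f"{impact:12s} x {severity:8s}: {count}")
# Big deviations from historical baselines are the areas to focus on.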
ODC Analysis Examples
n • 1-way analysis:
. Defect impact distribution for an IBM product.
. Uneven distribution of impact areas!
⇒ risk identification and focus.
ODC Analysis Examples
n • 2-way analysis:
. Defect impact-severity analysis.
. IBM product study continued.
. Huge contrast: severity of reliability and usability
problems!
ODC Process and Implementation
n • ODC process:
. Human classification
» defect type: developers,
» defect trigger and effect: testers,
» other information: coordinator/other.
. Tie to inspection/testing processes.
. Analysis: attribute focusing.
. Feedback results: graphical.
n • Implementation and deployment:
. Training of participants.
. Data capturing tools.
. Centralized analysis.
. Usage of analysis results.
Linkage to Other Topics
n • Development process
. Defect prevention process/techniques.
. Inspection and testing.
n • Testing and reliability:
. Expanded testing measurement
» Defects and other information:
» Environmental (impact)
» Test case (trigger)
» Causal (fault)
. Reliability modeling for ODC classes
Summary of Today’s Lecture
n In today’s lecture, we talked about Defect
Classification and Analysis. We also focused on:
General Types of Defect Analyses.
ODC: Orthogonal Defect Classification.
Analysis of ODC Data.
Overview of the next lecture
n • TBDM example: defect prediction for IBM-NS
. 11 design/size/complexity metrics
New Technique: OSR
n • OSR: optimal set reduction
. pattern matching idea
. clusters and cluster analysis
. similar to TBM but different in: pattern extraction vs. partition
The following are the steps of the OSR algorithm:
New Technique: OSR
n • Organization/modeling results:
. no longer a tree, see example above
. general subsets, may overlap
. details and some positive results:
Risk Identification: Comparison
n • Comparison: cost-benefit analysis ≈ comparing QA
alternatives (Last lecture of Part III).
n • Comparison area: benefit-related
. accuracy
. early availability and stability
. constructive information and guidance for (quality)
improvement
n • Comparison area: cost-related
. simplicity
. ease of result interpretation
. availability of tool support
Comparison: Accuracy
n • Accuracy in assessment:
. model fits data well
» use various goodness-of-fit measures
. avoid over-fitting
. cross validation by review etc.
n • Accuracy in prediction:
. over-fitting ⇒ bad predictions
. prediction: training and testing sets
» within project: jackknife
» across projects: extrapolate
. minimize prediction errors
Comparison: Usefulness
n • Early availability and stability
. to be useful must be available early
. focus on control/improvement
. apply remedial/preventive actions early
. track progress: stability
n • Constructive information and guidance
. what: assessment/prediction
. how to improve?
» constructive information
» guidance on what to do
. example of TBRMs
Comparison Summary
Summary of Today’s Lecture
Risk Identification.
n • Using TBRMs:
. Reliability for partitioned subsets.
. Use both input and timing information.
. Monitoring changes in trees.
. Enhanced exit criteria.
. Integrate into the testing process.
TBRMs: Interpretation & Usage
n • Interpretation of trees:
. Predicted response: success rate. (Nelson reliability
estimate.)
. Time predictor: reliability change.
. State predictor: risk identification.
n • Implementation:
. Passive tracking and active guidance.
. Periodic and event-triggered.
. S/W tool support
Tool Support
n • Types of tool support:
. Data capturing
» mostly existing logging tools
» modified to capture new data
. Analysis and modeling
» SMERFS modeling tool
» S-PLUS and related programs
. Presentation/visualization and feedback
» S-PLUS and Tree-Browser
Implementation Support
n Implementation of tool support:
. Existing tools: minimize cost
» internal as well as external tools
. New tools and utility programs
. Tool integration
» loosely coupled suite of tools
» connectors/utility programs
SRE Perspectives
n • New models and applications
. Expand from “medium-reliable” systems.
New models for new application domains.
Data selection/treatment
n • Reliability improvement
. Followup to TBRMs
Predictive (early!) modeling for risk identification
and management
n • Other SRE frontiers:
. Coverage/testing and reliability
. Reliability composition and maximization
Summary of Today’s Lecture
n In today’s lecture, we talked about the Software
Reliability Engineering (SRE)
Concepts and Approaches
Existing Approaches: SRGMs & IDRMs
Assessment & Improvement with TBRMs
Overview of the next lecture
Software Engineering (Roger S. Pressman)
Characteristics of software:
1. Software is developed or engineered, it is not
manufactured in the classical sense.
2. Software doesn't "wear out.“
Quality Assurance
Testing
Correctness, Defect and Quality
Defining Quality in SQE
ISO-9126 Quality Framework
Other Quality Frameworks
Quality Assurance
n Quality Assurance mainly deals with
1. Dealing with Defects
2. Defect Prevention
3. Defect Detection and Removal
§ QA focuses on the correctness aspect of quality
§ QA as dealing with defects
§ – post-release: impact on consumers
§ – pre-release: what the producer can do
§ what: testing & many others
§ when: earlier ones desirable (lower cost) but may not be
feasible
§ how ⇒ classification below
QA Classification
n dealing with errors, faults, or failures
n removing or blocking defect sources
n preventing undesirable consequences
Overview of some topics related to SQE
n Mega-process:
initiation, development, maintenance, termination.
n Development process components:
requirement, specification, design, coding, testing,
release.
QA in Software Process
n Process variations:
waterfall development process
iterative development process
spiral development process
lightweight/agile development processes and XP
(extreme programming)
maintenance process too
mixed/synthesized/customized processes
n QA important in all processes
QA in Waterfall Process
V&V
V-model
SQE Process
n Error blocking
error: missing/incorrect actions
direct intervention to block errors ⇒ fault injections prevented
rely on technology/tools/etc.
n Error source removal
root cause analysis ⇒ identify error sources
removal through education/training/etc.
n Systematic defect prevention via process improvement.
Formal Method Overview
n Motivation
fault present: revealed through testing/inspection/etc.
fault absent: formally verify (formal methods ⇒ fault absent)
n Basic ideas
behavior formally specified:
– pre/post conditions, or
– as mathematical functions
verify "correctness": intermediate states/steps, axioms and compositional rules
Approaches: axiomatic/functional/etc.
Inspection Overview
n Product/Process characteristics:
object: product type, language, etc.
scale/order: unit, component, system, . . .
who: self, independent, 3rd party
n What to check:
verification vs. validation
external specifications (black-box)
internal implementation (white/clear-box)
n Criteria: when to stop?
coverage of specs/structures
reliability ⇒ usage-based testing
Fault Tolerance Overview
n Motivation
fault present but removal infeasible/impractical
fault tolerance ⇒ contain defects
n FT techniques: break the fault-failure link
recovery: rollback and redo
NVP: N-version programming – fault blocked/out-voted
Safety Assurance Overview
n Unit Testing
Programs will be tested at unit level
The same developer will do the test
Integration Testing
When all the individual program units are tested in the unit testing phase and all units
are clear of any known bugs, the interfaces between those modules will be tested
Ensure that data flows from one piece to another piece
System Testing
After all the interfaces are tested between multiple modules, the whole set of
software is tested to establish that all modules work together correctly as an
application.
Put all pieces together and test
Acceptance Testing
The client will test it, at their own site, in a near-real or simulated environment.
Testing Vs Debugging
n • FT with NVP:
. NVP: N-Version Programming
. Multiple independent versions
. Dynamic voting/decision ⇒ FT.
Hazard Elimination
n • Hazard sources identification ⇒ elimination
(Some specific faults prevented or removed.)
n • Traditional QA (but with hazard focus):
. Fault prevention activities:
» education/process/technology/etc
» formal specification & verification
. Fault removal activities:
» rigorous testing/inspection/analyses
n • “Safe” design: More specialized techniques:
. Substitution, simplification, decoupling.
. Human error elimination.
. Hazardous material/conditions↓
Hazard Reduction
n • Hazard identification ⇒ reduction (Some specific
system failures prevented or tolerated.)
n • Traditional QA (but with hazard focus):
. Fault tolerance
. Other redundancy
n • “Safe” design: More specialized techniques:
. Creating hazard barriers
. Safety margins and safety constraints
. Locking devices
. Reducing hazard likelihood
. Minimizing failure probability
. Mostly “passive” or “reactive”
Hazard Control
n • Hazard identification ⇒ control
. Key: failure severity reduction.
. Post-failure actions.
. Failure-accident link weakened.
. Traditional QA: not much, but good design principles
may help.
n • “Safe” design: More specialized techniques:
. Isolation and containment
. Fail-safe design & hazard scope↓
. Protection system
. More “active” than “passive”
. Similar techniques to hazard reduction,
» but focus on post-failure severity↓ vs. pre-failure
hazard likelihood↓.
Software Safety Program (SSP)
n • Leveson’s approach (Leveson, 1995) — Software
safety program (SSP)
n • Process and technology integration
. Limited goals
. Formal verification/inspection based
. But restricted to safety risks
. Based on hazard analyses results
. Safety analysis and hazard resolution
. Safety verification:
» few things carried over
Software Safety Program (SSP)
n • In overall development process:
. Safety as part of the requirement
. Safety constraints at different levels/phases
. Verification/refinement activities
. Distribution over the whole process
TFM: Two-Frame-Model
n • TFM: Two-Frame-Model
. Physical frame
. Logical frame
. Sensors: physical ⇒ logical
. Actuators: logical ⇒ physical
n • TFM characteristics and comparison:
. Interaction between the two frames
. Nondeterministic state transitions and
encoding/decoding functions
. Focuses on symmetry/consistency between the
two frames.
Comparison of the QA Alternatives: Applicability
n • Objects QA activities applied on:
. Mostly on specific objects, e.g., testing executable
code
. Exception: defect prevention on (implementation
related) dev. activities
Comparison: Applicability
n • Applicability to development phases:
. In waterfall or V-model: implementation
(req/design/coding) & testing/later.
. Inspection in all phases.
. Other QA in specific sets of phases.
Comparison: Applicability/Expertise
n • General expertise levels: mostly in ranges,
depending on specific techniques used.
n • Specific background knowledge
Classification of quality assessment models
Quality Model Summary
Defect Analysis
n Goal: (actual/potential) defect↓ or quality↑ in current
and future products.
n General defect analyses:
. Questions: what/where/when/how/why?
. Distribution/trend/causal analyses.
n Analyses of classified defect data:
. Prior: defect classification.
. Use of historical baselines.
. Attribute focusing in 1-way and 2-way analyses.
. Tree-based defect analysis (Lecture 29)
Risk Identification Techniques
n • New statistical techniques:
. PCA: principal component analysis
. DA: discriminant analysis
. TBM: tree-based modeling
n • AI-based new techniques:
. NN: artificial neural networks.
. OSR: optimal set reduction.
. Abductive-reasoning, etc.
What Is SRE
n • Reliability: Probability of failure-free operation for a
specific time period or input set under a specific environment
. Failure: behavioral deviations
. Time: how to measure?
. Input state characterization and Environment: OP
n • Software reliability engineering:
. Engineering (applied science) discipline
. Measure, predict, manage reliability
. Statistical modeling
. Customer perspective:
» – failures vs. faults
» – meaningful time vs. development days
» – customer operational profile
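A minimal sketch of the success-rate (Nelson-style) reliability estimate implied by this customer perspective; the run counts are hypothetical:

def reliability_estimate(total_runs, failures):
    # R = 1 - f/n over n runs sampled from the operational profile.
    return 1.0 - failures / total_runs

print(reliability_estimate(total_runs=1000, failures=12))   # -> 0.988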
The End of the Course