Can we test everything? Is exhaustive testing possible?
- No, sorry: time and resources make it impractical! Instead:
- We must understand the risk to the client's business of the software not functioning correctly
- We must manage and reduce risk, carrying out a risk analysis of the application
- Prioritise tests to focus them (time and resources) on the main areas of risk
The main goals of testing:
- Find defects
- Assess the level of quality of the software product and provide related information to the stakeholders
- Prevent defects
- Reduce the risk of operational incidents
- Increase the product quality
Different viewpoints and objectives:
- Unit & integration testing: find as many defects as possible
- Acceptance testing: confirm that the system works as specified and that the quality is good enough
- Testing metrics gathering: provide information to the project manager about the product quality and the risks involved
- Designing tests early and reviewing requirements: help prevent defects
A programmer (or analyst) can make an error (mistake), which produces a defect (fault, bug) in the program's code. If such a defect in the code is executed, the system will fail to do what it should do (or will do something it should not do), causing a failure.
- Error (mistake) = a human action that produces an incorrect result
- Defect (bug) = a flaw that can cause the component or system to fail to perform its required function
- Failure = deviation of the component or system from its expected delivery, service or result
- Anomaly = any condition that deviates from expectations based on requirements specifications, design documents, user documentation, standards, or someone's perceptions or expectations
- Defect masking = an occurrence in which one defect prevents the detection of another
1.1.3 Why is testing necessary - The role of testing in the software life cycle
Testers cooperate with:
- Analysts, to review the specifications for completeness and correctness, and to ensure that they are testable
- Designers, to improve interface testability and usability
- Programmers, to review the code and assess structural flaws
- The project manager, to estimate, plan, develop test cases, perform tests and report bugs, and to assess the quality and risks
- Quality assurance staff, to provide defect metrics
Interactions with these project roles are very complex.
RACI matrix (Responsible, Accountable, Consulted, Informed)
Testing means not only to Verify (the thing is done right) but also to Validate (the right thing is done)!
Software quality includes: reliability, usability, efficiency, maintainability and portability.
RELIABILITY: The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.
USABILITY: The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
EFFICIENCY: The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions.
MAINTAINABILITY: The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.
PORTABILITY: The ease with which the software product can be transferred from one hardware or software environment to another.
Measuring product quality:
- Functional compliance: functional software requirements testing
- Non-functional requirements
- Test coverage criteria
- Defect count or defect trend criteria
The five basic criteria often used to decide when to stop testing are:
- Previously defined coverage goals have been met
- The defect discovery rate has dropped below a previously defined threshold
- The cost of finding the "next" defect exceeds the expected loss from that defect
- The project team reaches consensus that it is appropriate to release the product
- The manager decides to deliver the product
All these criteria are risk based. It is important not to depend on only one stopping criterion. Software Reliability Engineering can also help determine when to stop testing, by taking into consideration aspects like failure intensity.
There are two basic types of testing: execution and non-execution based
Other definitions:
(IEEE) Testing = the process of analyzing a software item to detect the differences between existing and required conditions and to evaluate its features
(Myers (b3)) Testing = the process of executing a program with the intent of finding errors
(Craig & Jaskiel (b5)) Testing = a concurrent lifecycle process of engineering, using and maintaining test-ware in order to measure and improve the quality of the software being tested
Factory School - testing is a way to measure progress, with emphasis on cost and repeatable standards
- Testing must be managed & cost effective
- Testing validates the product & measures development progress
- Key questions: How can we measure whether we're making progress? When will we be done?
- Requires clear boundaries between testing and other activities (start/stop criteria)
- Encourages standards (V-model), best practices, and certification
Context-Driven School - emphasizes people, setting out to find the bugs that will be most important to stakeholders
- Software is created by people. People set the context
- Testing finds bugs, acting as a skilled, mental activity
- Key question: What testing would be most valuable right now?
- Expect changes. Adapt testing plans based on test results
- Testing research requires empirical and psychological study
Context dependence: test design and execution are context dependent (desktop, web applications, real-time, ...)
Verification and Validation: discovering defects cannot help a product that is not fit for the users' needs
- Operability: the better it works, the more efficiently it can be tested
- Observability: what we see is what we test
- Controllability: the better we control the software, the more the testing process can be automated and optimized
- Decomposability: by controlling the scope of testing, we can quickly isolate problems and perform effective and efficient testing
- Simplicity: the less there is to test, the more quickly we can test it
- Stability: the fewer the changes, the fewer the disruptions to testing
- Understandability: the more information we have, the smarter we test
- Suitability: the more we know about the intended use of the software, the better we can organize our testing to find important bugs
-Test Planning & Test control -Test Analysis & Design -Test Implementation & Execution -Evaluating exit criteria & Reporting -Test Closure activities
- Measure and analyze results
- Monitor testing progress, coverage, exit criteria
- Assign or reallocate resources, update the test plan schedule
- Initiate corrective actions
- Make decisions
- Reviewing the test basis (such as requirements, architecture, design, interfaces)
- Identifying test conditions or test requirements and the required test data, based on analysis of the test items and the specification
- Designing the tests:
  - Choose test techniques
  - Identify test scenarios, pre-conditions, expected results, post-conditions
  - Identify possible test oracles
- Evaluating testability of the requirements and system
- Designing the test environment set-up and identifying any required infrastructure and tools
Test oracle = a source to determine the expected result; a principle or mechanism to recognize a problem. The test oracle can be:
- an existing system (the old version)
- a document (specification, user manual)
- a competent client representative
- but never the source code itself!
Oracles in use = simplification of risk: do not assess "pass/fail", but "problem/no problem".
Problem: oracles and automation. Our ability to automate testing is fundamentally constrained by our ability to create and use oracles. Possible issues: false alarms, missed bugs.
What to watch?
- Severity of possible defects
- Probability of possible defects
- Visibility of possible defects
- Client requirement importance
- Business or technical criticality of a feature
- Frequency of changes applied to a module
- Scenario complexity
Evaluate exit criteria:
- Check test logs against the exit criteria specified in the test mission definition
- Assess if more tests are needed
- Check if the testing mission should be changed
Test reporting:
- Write the test summary report for the stakeholders' use
- The test summary report should include:
  - Test case execution coverage (% executed)
  - Test case pass/fail %
  - Active bugs, sorted according to their severity
(see Rex Black (b4) & RUP Test discipline (s5))
2.1.1 The V testing model - Verification & Validation
Verification = confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled.
Validation = confirmation by examination and through the provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
Verification is the dominant activity in the unit, integration and system testing levels; validation is a mandatory activity in the acceptance testing level.
Waterfall weaknesses:
- Linear: any attempt to go back two or more phases to correct a problem or deficiency results in major increases in cost and schedule
- Integration problems usually surface too late. Previously undetected errors or design deficiencies will emerge, adding risk with little time to recover
- Users can't see quality until the end. They can't appreciate quality if the finished product can't be seen
- Deliverables are created for each phase and are considered frozen. If the deliverable of a phase changes, which often happens, the project will suffer schedule problems
- The entire software product is worked on at one time. There is no way to partition the system for delivery of pieces of the system
Rapid Prototype Model weaknesses:
- In the rush to create a working prototype, overall software quality or long-term maintainability may be overlooked
- Tendency for difficult problems to be pushed to the future, causing the initial promise of the prototype not to be met by subsequent products
- Developers may fall into a code-and-fix cycle, leading to expensive, unplanned prototype iterations
- Customers become frustrated without knowing the exact number of iterations that will be necessary
- Users may tend to add to the list of items to be prototyped until the scope of the project far exceeds the feasibility study
Incremental Model weaknesses:
- Definition of a complete, fully functional system must be done early in the life cycle to allow for the definition of the increments
- The model does not allow for iterations within each increment
- Because some modules will be completed long before others, well-defined interfaces are required
- Requires good planning and design: management must take care to distribute the work; the technical staff must watch dependencies
Spiral Model weaknesses:
- The model is complex, and developers, managers, and customers may find it too complicated to use
- Considerable risk assessment expertise is required
- Hard to define objective, verifiable milestones that indicate readiness to proceed to the next iteration
- May be expensive: time spent planning, resetting objectives, doing risk analysis, and prototyping may be excessive
For each software activity, there must be a corresponding testing activity; the objectives of the testing are specific to the tested activity
Planning, analysis and design of a testing activity should be done during the corresponding development activity
Reviews and inspections must be considered part of the testing activities
2.2.1 Test levels - Component testing
Target: single software modules, components that are separately testable
Access to the code being tested is mandatory; usually involves the programmer
May consist of:
- Functional tests
- Non-functional tests (e.g. stress tests)
- Structural tests (statement coverage, branch coverage)
Test cases follow the low-level specification of the module
Can be automated (test-driven software development), as sketched below:
- First, develop the test code
- Then, write the code to be tested
- Execute until the tests pass
Good programming style (design by contract, respecting Demeter's law) enhances the efficiency of unit testing
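A minimal test-first sketch in Python (the function and test names are illustrative, not from the course): the test is written before the implementation, then the code is written and the tests are run until they pass.

import unittest

def add(x, y):
    # the code under test, written after the test below existed
    return x + y

class AddTest(unittest.TestCase):
    # this test is written first, before add() is implemented
    def test_add(self):
        self.assertEqual(add(2, 3), 5)
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    unittest.main()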
Target: the interfaces between components, and interfaces with other parts of the system
We focus on the data exchanged, not on the tested functionalities
Understanding the product's software architecture is critical
May consist of:
- Functional tests
- Non-functional tests (e.g. performance tests)
- Component integration testing (after component testing)
- System integration testing (after system testing)
The test strategy may be bottom-up, top-down or big-bang
2.2.2 Test levels - Component integration testing
Component integration testing (done after component testing):
- Linking a few components to check that they communicate correctly
- Iteratively linking more components together
- Verify that data is exchanged between the components as required
- Increase the number of components, create and test subsystems, and finally the complete system
Drivers and stubs should be used when necessary (see the sketch below):
- Driver: a software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system
- Stub: a skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
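A minimal Python sketch of both roles (all names are illustrative): a stub stands in for a called component that is not yet available, and a driver calls the component under test.

class PaymentGatewayStub:
    # stub: skeletal replacement for the real, called component
    def charge(self, amount):
        return "OK"  # canned answer, no real processing

def place_order(gateway, amount):
    # component under test: calls (depends on) the gateway
    return "confirmed" if gateway.charge(amount) == "OK" else "rejected"

def driver():
    # driver: takes care of calling the component and checking the result
    assert place_order(PaymentGatewayStub(), 100) == "confirmed"

driver()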
2.2.3 Test levels - System testing
System testing = the process of testing an integrated system to verify that it meets specified requirements
- Black-box testing techniques may be used (e.g. business rule decision tables)
- The test strategy may be risk based
- Test coverage is monitored
2.2.4 Test levels - Acceptance testing
Acceptance testing = formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system
The main goals: establish confidence in the system; is the product good enough to be delivered to the client?
The main focus is not to find defects, but to assess the readiness for deployment
It is not necessarily the final testing level; a final system integration testing session can be executed after the acceptance tests
May also be executed after component testing (component usability acceptance)
Usually involves client representatives
Typical forms:
- User acceptance: business-aware users verify the main features
- Operational acceptance testing: backup/restore, security, maintenance
- Alpha and beta testing: performed by customers or potential users
  - Alpha: at the developer's site
  - Beta: at the customer's site
2.3.1 Test types - Functional testing
Target: test the functionalities (features) of a product
- Specification based: uses test cases derived from the specifications (use cases)
- Business process based: uses business scenarios
Focused on checking the system against the specifications
Can be performed at all test levels
Considers the external behavior of the system
Non-functional testing = testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability
Targeted at the product quality attributes:
- Performance testing
- Load testing (how much load can be handled by the system?)
- Stress testing (evaluate system behavior at and beyond the limits)
- Usability testing
- Reliability testing
- Portability testing
- Maintainability testing
Usability testing = used to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions
People selected from the potential users may be involved, to study how they use the system
A quick and focused beta-test may be a cheap way of doing Usability testing
There is no simple way to examine how people will use the system
Easy to understand is not the same as easy to learn, easy to use or easy to operate
Installability testing = the process of testing the installability of a software product. Does the installation work? How easy is it to install the system?
2.3.2 Test types - Non-functional testing: Load, Stress, Performance, Volume testing
Load test = a test type concerned with measuring the behavior of a component or system under increasing load (e.g. number of parallel users and/or number of transactions), to determine what load can be handled by the component or system
Stress test = testing conducted to evaluate a system or component at or beyond the limits of its specified requirements
Performance test = the process of testing to determine the performance of a software product. Performance can be measured watching:
- Response time
- Throughput
- Resource utilization
Spike test = keeping the system, periodically and for short amounts of time, beyond its specified limits
Endurance test = a load test performed over a long time interval (weeks)
Volume test = testing where the system is subjected to large volumes of data
A minimal load-test driver sketch follows.
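In this sketch, handle_request is a hypothetical stand-in for the system under test and the numbers are arbitrary; the point is only the shape of a load test: increase parallel users and watch throughput.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    # hypothetical system under test; the sleep simulates processing time
    time.sleep(0.01)
    return i

def load_test(n_requests, n_parallel_users):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_parallel_users) as pool:
        list(pool.map(handle_request, range(n_requests)))
    elapsed = time.perf_counter() - start
    print(f"{n_parallel_users} users: {n_requests / elapsed:.0f} req/s")

# increase the load step by step and watch throughput and response time
for users in (1, 10, 50):
    load_test(200, users)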
2.3.3 Test types - Structural testing
Targeted to test:
- internal structure (component)
- architecture (system)
Confirmation testing = re-testing of a module or product to confirm that a previously detected defect was fixed
Implies the use of a bug tracking tool
Confirmation testing is not the same as debugging (debugging is a development activity, not a testing activity)
Regression testing = re-testing of a previously tested program following modification, to ensure that defects have not been introduced or uncovered as a result of the changes made
It is performed when the software or its environment is changed
Can be performed at all test levels
Can be automated (for cost and schedule reasons)
Maintenance testing = testing the changes to an operational system, or the impact of a changed environment on an operational system
Done on an existing operational system, triggered by modification, retirement or migration of the software
Includes:
- Release-based changes
- Corrective changes
- Database upgrades
Regression testing is also involved
Impact analysis = determining how the existing system may be affected by changes (used to help decide how much regression testing to do)
3.1 Reviews and the testing process
Static testing = testing of a component or system at specification or implementation level without execution of that software, e.g. reviews (manual) or static code analysis (automated)
Reviews
Why review?
- To identify errors as soon as possible in the development lifecycle
- Reviews offer the chance to find omissions and errors in the software specifications
The target of a review is a software deliverable:
- Specification
- Use case
- Design
- Code
- Test case
- Manual
3.1 Reviews and the testing process
When to review? As soon as a software artifact is produced, before it is used as the basis for the next step in development
Benefits include:
- Early defect detection
- Reduced testing costs and time
- Can find omissions
Risks:
- If misused, reviews can lead to friction between project team members; the errors and omissions found should be regarded as a positive outcome, and the author should not take them personally
- No follow-up is made to ensure correction has been made
- Witch-hunts, when things are going wrong
Formal review phases:
- Planning: define scope, select participants, allocate roles, define entry & exit criteria
- Kick-off: distribute documents, explain objectives and process, check entry criteria
- Individual preparation: each participant studies the documents, takes notes, raises questions and comments
- Review meeting: participants discuss and log defects, make recommendations
- Rework: fixing defects (by the author)
- Follow-up: verify again, gather metrics, check exit criteria
- Manager: schedules the review, monitors entry and exit criteria
- Moderator: distributes the documents, leads the discussion, mediates conflicting opinions
- Author: owner of the deliverable to be reviewed
- Reviewer: technical domain expert, identifies and notes findings
- Scribe: records and documents the discussions during the meeting
Inspection:
- Formal process, based on checklists, entry and exit criteria
- Dedicated, precise roles
- Led by the moderator
- Metrics may be used in the assessment
- Reports and lists of findings are mandatory
- Follow-up process
- Main purpose: find defects
3.2.4 Success factors for reviews
- A clear objective is set
- Appropriate experts are involved
- Issues are identified, not fixed on-the-spot
- Adequate psychological handling (the author is not punished for the defects found)
- The level of formalism is adapted to the concrete situation
- Minimal preparation and training
- Management encourages learning and process improvement
- Time-boxing is used to determine the time allocated to each part of the document to be reviewed
- Use of effective checklists
3.3 Static analysis by tools
Performed without executing the examined software, but assisted by tools
The approach may be data-flow or control-flow based
Benefits:
- early defect detection
- early warnings about unwanted code complexity
- detects missing links
- improved maintainability of code and design
Test condition = item, event, attribute of a module or system that could be verified (ex: feature, structure element, transaction, quality attribute)
Test data = data that affects or is affected by the execution of the specific module
Test case [IEEE] = a set of input values, execution preconditions, expected results and execution post-conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement Test case specification [IEEE] = a document specifying a set of test cases for a test condition Test procedure (suite) specification = a document specifying a sequence of actions for the execution of a series of test cases
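The IEEE elements of a test case can be pictured as a simple record; this is only a sketch, with field names chosen for illustration rather than taken from any standard schema.

test_case = {
    "id": "TC-042",
    "test_condition": "login with a valid account",
    "preconditions": ["user 'alice' exists", "login page is reachable"],
    "inputs": {"username": "alice", "password": "secret"},
    "expected_result": "main page is displayed",
    "postconditions": ["an audit log entry is written"],
}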
- Use cases are used as input
- Test cases will cover all possible paths of the execution flow graph
- Test data should be specified if necessary
- Priorities of test cases should be assigned
- A traceability matrix (use cases × test cases) should be maintained
3. Group test cases into execution schedules. Factors to be considered:
a. Prioritization
b. Logical dependencies
c. Regression tests
4.2 Test design - categories of test design techniques
- Black box: no knowledge of the internal structure is used
- White box: based on the analysis of the internal structure
- Static: without running the software; exercised on specific project artifacts
4.3.1 Black box techniques - equivalence partitioning
To minimize testing, partition the input (output) values into groups of equivalent values (equivalent from the test outcome perspective)
Select a value from each equivalence class as a representative value
If an input is a continuous range of values, then there is typically one class of valid values and two classes of invalid values, one below the valid class and one above it.
Example: the rule for hiring a person according to their age:
0-15 = do not hire
16-17 = part time
18-54 = full time
55-99 = do not hire
Which are the valid equivalence classes? And the invalid ones? Give examples of representative values! (A sketch follows below.)
(see Lee Copeland b2 chap.3, Cem Kaner c1, Paul Jorgensen b7 chap.2.2,6.3)
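A Python sketch of the hiring rule above, with one representative test per equivalence class (the function name and the chosen representatives are illustrative):

def hiring_decision(age):
    if age < 0 or age > 99:
        raise ValueError("age out of range")  # the two invalid classes
    if age <= 15 or age >= 55:
        return "do not hire"
    if age <= 17:
        return "part time"
    return "full time"

# one representative value per valid class
assert hiring_decision(7) == "do not hire"    # class [0, 15]
assert hiring_decision(16) == "part time"     # class [16, 17]
assert hiring_decision(30) == "full time"     # class [18, 54]
assert hiring_decision(70) == "do not hire"   # class [55, 99]

# one representative per invalid class: below 0 and above 99
for bad_age in (-1, 100):
    try:
        hiring_decision(bad_age)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass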
4.3.1 Black box techniques - all-pairs testing
In practice, there are situations when a great number of combinations must be tested. For example, a web site must operate correctly:
- with different browsers: Internet Explorer 5.0, 5.5, and 6.0; Netscape 6.0, 6.1, and 7.0; Mozilla 1.1; and Opera 7
- using different plug-ins: RealPlayer, MediaPlayer, or none
- running on different client operating systems: Windows 95, 98, ME, NT, 2000, and XP
- receiving pages from different servers: IIS, Apache, and WebLogic
- running on different server operating systems: Windows NT, 2000, and Linux
Test environment combinations: 8 browsers × 3 plug-ins × 6 client operating systems × 3 servers × 3 server operating systems = 1,296 combinations!
All-pairs testing is the solution: it tests a significant subset that still covers every pair of variable values. A rough greedy sketch follows.
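This is only an illustrative greedy algorithm, not how dedicated all-pairs tools work internally: it repeatedly adds the full combination that covers the most not-yet-covered value pairs.

from itertools import combinations, product

def pairwise_tests(params):
    names = list(params)
    # every pair of values from any two parameters must occur in some test
    uncovered = {((n1, v1), (n2, v2))
                 for n1, n2 in combinations(names, 2)
                 for v1 in params[n1] for v2 in params[n2]}
    tests = []
    while uncovered:
        # greedily pick the combination covering the most uncovered pairs
        best = max((dict(zip(names, combo))
                    for combo in product(*params.values())),
                   key=lambda t: sum(t[a] == va and t[b] == vb
                                     for (a, va), (b, vb) in uncovered))
        tests.append(best)
        uncovered = {((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
                     if not (best[a] == va and best[b] == vb)}
    return tests

env = {
    "browser": ["IE5.0", "IE5.5", "IE6.0", "NS6.0", "NS6.1", "NS7.0",
                "Mozilla1.1", "Opera7"],
    "plugin": ["RealPlayer", "MediaPlayer", "none"],
    "client_os": ["Win95", "Win98", "WinME", "WinNT", "Win2000", "WinXP"],
    "server": ["IIS", "Apache", "WebLogic"],
    "server_os": ["WinNT", "Win2000", "Linux"],
}
print(len(pairwise_tests(env)), "tests instead of 1296")  # roughly 50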
4.3.2 Black box techniques - boundary value analysis
Boundaries = edges of the equivalence classes. Boundary values = values at the edge and nearest to the edge.
The steps for using boundary values:
1. Identify the equivalence classes
2. Identify the boundaries of each equivalence class
3. Create test cases for each boundary value: one at the boundary, one just below it and one just above it
(A sketch for the hiring rule above follows.)
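Continuing the hiring_decision sketch from the equivalence partitioning example above, the boundary values are the edges of each age class plus the values just outside the valid range:

for age, expected in [
    (0, "do not hire"), (15, "do not hire"),   # edges of [0, 15]
    (16, "part time"), (17, "part time"),      # edges of [16, 17]
    (18, "full time"), (54, "full time"),      # edges of [18, 54]
    (55, "do not hire"), (99, "do not hire"),  # edges of [55, 99]
]:
    assert hiring_decision(age) == expected

# just outside the valid range: -1 and 100 must both be rejected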
4.3.3 Black box techniques - decision tables
- Actions are the actions taken depending on the various combinations of input conditions
- Each rule defines a unique combination of conditions that results in the execution of the actions associated with that rule
- Actions do not depend on the condition evaluation order, but only on the condition values
- Actions do not depend on any previous input conditions or system state
4.3.3 Black box techniques - decision tables - example: are a, b, c the sides of a triangle? (a sketch follows)
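A Python sketch of the triangle decision table: the three conditions are the triangle inequalities, and each rule pairs one combination of condition outcomes with the expected action.

def is_triangle(a, b, c):
    # conditions: each pair of sides must sum to more than the third side
    return a + b > c and b + c > a and a + c > b

rules = [
    # a, b, c    expected  (which condition fails, if any)
    (3, 4, 5,    True),    # all three conditions hold
    (1, 2, 3,    False),   # a + b > c fails (3 is not > 3)
    (10, 2, 3,   False),   # b + c > a fails
    (2, 10, 3,   False),   # a + c > b fails
]
for a, b, c, expected in rules:
    assert is_triangle(a, b, c) == expected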
4.3.4 Black box techniques - state transition tables
Allow the tester to interpret the system in terms of:
- States
- Transitions between states
4.3.4 Black box techniques - state transition tables - example: buying a ticket in a web application (a sketch follows)
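A minimal state transition table for a hypothetical ticket-buying flow (the states and events are assumptions for illustration), plus a test that walks one path through it:

TRANSITIONS = {
    ("browsing", "select_ticket"): "selected",
    ("selected", "enter_payment"): "paying",
    ("paying", "payment_ok"): "confirmed",
    ("paying", "payment_fail"): "selected",
    ("confirmed", "download"): "done",
}

def next_state(state, event):
    # invalid (state, event) pairs are defects that tests should probe too
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"invalid transition: {event} in state {state}")
    return TRANSITIONS[(state, event)]

state = "browsing"
for event in ("select_ticket", "enter_payment", "payment_fail",
              "enter_payment", "payment_ok", "download"):
    state = next_state(state, event)
assert state == "done"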
Good scenario attributes:
- Is based on a real story
- Is motivating for the tester
- Is credible
- Involves a sufficiently complex use of environment and data
- Is easy to evaluate (no need for an external oracle)
How to create good test scenarios:
Steps: 1. Identify the use-case scenarios. 2. For each scenario, identify one or more test cases. 3. For each test case, identify the conditions that will cause it to execute. 4. Complete the test case by adding data values. (see example)
Most common test case mistakes: 1. Making cases too long 2. Incomplete, incorrect, or incoherent setup 3. Leaving out a step 4. Naming fields that changed or no longer exist 5. Unclear whether tester or system does action 6. Unclear what is a pass or fail result 7. Failure to clean up
Syntax testing = uses a model of the formally defined syntax of the inputs to a component
The syntax is represented as a number of rules, each of which defines the possible means of production of a symbol in terms of sequences of, iterations of, or selections between other symbols
Here is a representation of the syntax of a floating point number, float, in Backus-Naur Form (BNF):
float = int "e" int.
int = ["+"|"-"] nat.
nat = {dig}.
dig = "0"|"1"|"2"|"3"|"4"|"5"|"6"|"7"|"8"|"9".
Syntax testing is the only black box technique without a coverage metric assigned.
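A sketch of syntax-derived test inputs for this grammar, transcribed into a Python regular expression (assuming {dig} here means one or more digits): valid strings follow the production rules, invalid ones mutate them.

import re

FLOAT_RE = re.compile(r"^[+-]?[0-9]+e[+-]?[0-9]+$")

valid = ["1e2", "+12e-3", "-0e0"]          # produced from the grammar
invalid = ["e2", "1e", "1.5e2", "++1e2"]   # mutations: missing/extra symbols

for s in valid:
    assert FLOAT_RE.match(s), s
for s in invalid:
    assert not FLOAT_RE.match(s), s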
A decision point is a point in the module at which the control flow can change. Most decision points are binary and are implemented by if-then-else statements. Multi-way decision points are implemented by case statements. They are represented by a bubble with one entry and multiple exits.
4.4.1 White box techniques - statement coverage
Statement coverage = executed statements / total executable statements
Example:
a;
if (b) {
    c;
}
d;
In case b is TRUE, executing the code will result in 100% statement coverage
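The same example as a Python sketch: one test with b set to TRUE executes every statement (100% statement coverage) but exercises only one of the two decision outcomes.

def f(b):
    x = 1          # executed by every test
    if b:          # decision point
        x += 1     # executed only when b is True
    return x       # executed by every test

f(True)   # 100% statement coverage, but only 50% decision coverage
f(False)  # this second test is needed for 100% decision coverage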
(Flow graph exercise, figure not reproduced: two decisions with conditions x and y, truth-value combinations T/T, T/F, F/T and F/F, and statement blocks a, b, c, d, e.)
How many test cases are needed to get 100% statement coverage?
4.4.2 White box techniques - branch & decision coverage - glossary
Branch = a conditional transfer of control from a statement to any other statement, or an unconditional transfer of control from a statement to any other statement except the next statement
Decision coverage = executed decision outcomes / total decision outcomes
For components with one entry point, 100% branch coverage is equivalent to 100% decision coverage
Decisions = B2, B3, B5, each with 2 outcomes: 3 × 2 = 6
Branches (how many arrows?) = 10
Q1. What are the decision and branch coverage for the path (B1, B2, B9)?
LCSAJ = Linear Code Sequence And Jump
Defined by a triple, conventionally identified by line numbers in a source code listing:
- the start of the linear code sequence
- the end of the linear code sequence
- the target line to which control flow is transferred
LCSAJ coverage = executed LCSAJ sequences / total number of LCSAJ sequences
4.4.3 White box techniques - data flow coverage
Just as one would not feel confident about a program without executing every statement in it as part of some test, one should not feel confident about a program without having seen the effect of using the value produced by each and every computation.
All p(redicate)-uses = number of exercised definition-to-p-use pairs / number of definition-to-p-use pairs
All uses = number of exercised definition-to-use pairs / number of definition-to-use pairs
Branch condition = Boolean operand values executed / total Boolean operand values
Branch condition combination = Boolean operand value combinations executed / total Boolean operand value combinations
(see Lee Copeland b2, chap. 11)
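A small def-use sketch in Python: y has one definition, a p-use in a predicate and a c-use in a computation; all p-uses coverage needs tests for both outcomes of the predicate that uses y.

def example(x):
    y = x + 1          # definition of y
    if y > 10:         # p-use of y (in a predicate)
        return y * 2   # c-use of y (in a computation)
    return 0

example(20)  # exercises the definition together with the True outcome
example(1)   # exercises the False outcome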
4.5 Exploratory testing
Exploratory testing = concurrent test design, test execution, test logging and learning, based on a quick test charter containing objectives, and executed within delimited time intervals
Uses a structured approach to error guessing, based on experience, available defect data and domain expertise
On-the-fly design of tests that attack these potential errors
Skill, intuition and previous experience are vital
The test strategy is built around:
- The project environment
- Quality criteria defined for the project
- Elements of the product
- Risk factors
5.1.2 Tasks of the test leader
- Plans and estimates the test effort, collaborates with the project manager
- Elaborates the test strategy
- Initiates test specification, implementation and execution
5.1.2 Tasks of the tester
Test analyst:
- Identify test objectives (targets)
- Review product requirements and software specifications
- Review test plans and test cases
- Verify requirements-to-test-case traceability
- Define test scenario details
Tester:
- Define the test approach (procedure)
- Write test cases
- Review test cases (peer review)
- Implement and execute tests
Elaborates test case lists and writes main test cases Assesses testability
Determine scope:
- Study project documents, the software life cycle used, the product's desired quality attributes
- Identify and communicate with other stakeholders
- Clarify test process expectations
Determine risks:
- Choose a quality risk analysis method (e.g. FMEA)
- Document the list of risks with probability, impact and priority; identify mitigation actions
Refine the plan:
- Define detailed role responsibilities
- Select the test strategy and test levels. Test strategy issues (alternatives):
  - Preventive approach
  - Reactive approach
  - Risk based
  - Model (standard) based
Estimate the testing effort, determine costs, develop the schedule:
- Define the necessary roles
- Decompose the test project into phases and tasks (WBS)
- Schedule tasks, assign resources, set up dependencies
- Develop a budget
- Obtain commitment for the plan from the stakeholders
Define entry and exit criteria. Exit criteria:
- Coverage measures
- Defect density or trend measures
- Cost
- Residual risk estimation
- Time or market based
5.2.4 Test estimation
Two approaches:
- based on metrics (historical data)
- made by domain experts
Testing effort depends on:
- product characteristics (complexity, specification)
- development process (team skills, tools, time factors)
- defects discovered and the rework involved
- failure risk of the product (likelihood, impact)
Time for confirmation testing and regression testing must be considered too
Monitoring - test metrics used:
- Test cases (% passed / % failed)
- Defects (found, fixed/found, density, trends)
- Test coverage (% of executed test cases)
Control: identify and implement corrective actions for the testing process, for other software life cycle activities, and for identified risks. Possible corrective actions:
- Assign extra resources
- Re-allocate resources
- Adjust the test schedule
- Arrange for extra test environments
- Refine the completion criteria
5.4 Configuration management
IEEE definition of Configuration Management: a discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements
Configuration Management:
- identifies the current configuration (hardware, software) in the life cycle of the system, together with any changes that are in the course of being implemented
- provides traceability of changes through the life cycle of the system
- permits the reconstruction of a system whenever necessary
Only persistent objects are subject to Configuration Management; therefore, the transient data processed by a system is not placed under Configuration Management.
Related to Version Control and Change Control
5.4 Configuration management - activities:
- Configuration identification = selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation
- Configuration control = evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification
- Status accounting = recording and reporting of information needed to manage a configuration effectively, including: a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of the approved changes
- Configuration auditing = checking that the software product matches the previously identified configuration items
In testing, Configuration Management must:
- Identify all test-ware items
- Establish and maintain the integrity of the testing deliverables (test plans, test cases, documentation) through the project life cycle
5.5 Risk & Testing Risk = a factor that could result in future negative consequences, expressed as likelihood and impact
Product risks: defects delivered, poor quality attributes (reliability, usability, performance)
The identified risks can be used to:
- Define the test strategy and the techniques to be used
- Define the extent and depth of testing
- Prioritize test cases and procedures (find important defects early)
Incident = any significant, unplanned event that occurs during testing that requires subsequent investigation and/or correction:
- The system does not function as expected
- Actual results differ from expected results
- Required features are missing
Incident reports can be raised against:
- documents placed under review
- product defects related to functional & non-functional requirements
- documentation anomalies (manuals, on-line help)
- test-ware defects (errors in test cases or test procedures)
Incident reports raised against product defects are also called bug reports.
5.6 Incident management - recommended bug report format:
- Defect ID
- Component name and build version
- Bug status, e.g.:
  - Issued = has just been reported
  - Opened = a programmer is working to solve it
(Exercise)
6.1.1 Test tool classification
Management of testing:
- Test management
- Requirements management
- Bug tracking
- Configuration management
Static testing:
- Review support
- Static analysis
- Modeling
Test specification:
- Test design
- Test data preparation
Test execution:
- Record and play
- Unit test framework
- Result comparators
- Coverage measurement
- Security
Performance and monitoring:
- Dynamic analysis
- Load and stress testing
- Monitoring
6.1.2 Tool support - management of testing
Test management:
- Manage testing activities
- Manage test-ware traceability
- Configuration management
Individual support:
- Version and change control
- Builder
- Project related; department or company related
6.1.3 Tool support - static testing
Review support:
- Process support
- Communications support
- Team support
Static analysis:
- Coding standards
Test design:
- From requirements
- From design models
- Test stub and driver generators
- Record and play
- Scripting
- Unit test frameworks
- Test harness frameworks
- Result comparators
- Coverage measurement
Dynamic analysis:
- Time dependencies
- Memory leaks
Load testing
Stress testing
Monitoring
- Repetitive work is reduced (e.g. running regression tests, re-entering the same test data, and checking against coding standards)
- Greater consistency and repeatability (e.g. tests executed by a tool, and tests derived from requirements)
- Objective assessment (e.g. static measures, coverage and system behavior)
- Ease of access to information about tests or testing (e.g. statistics and graphs about test progress, incident rates and performance)
- Unrealistic expectations for the tool (including functionality and ease of use)
- Underestimating the time, cost and effort for the initial introduction of a tool (including training and external expertise)
- Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the need for changes in the testing process and continuous improvement of the way the tool is used)
- Underestimating the effort required to maintain the test assets generated by the tool
- Over-reliance on the tool (as a replacement for test design, or where manual testing would be better)
- Lack of a dedicated test automation specialist
- Lack of good understanding of, and experience with, the issues of test automation
- Lack of stakeholder commitment for the implementation of such a tool
Test execution tools:
- Significant implementation effort
- Record-and-play tools are unstable when changes occur
- Technical expertise is mandatory
Performance testing tools:
- Expertise in design and in results interpretation is mandatory
Tester:
- manage test cases
- manage test suites
- manage manual testing
Tool selection process:
- Identify requirements
- Identify constraints
- Check available tools on the market (feature evaluation)
- Evaluate a short list (feature comparison): demos, quick pilots
- Select a tool
Note: there are many free testing tools available, some of them also online (www.testersdesk.com)
K1: The candidates will recognize, remember and recall a term or concept.
K2: The candidates can select the reasons or explanations for statements related to the topic. They can summarize, compare, classify and give examples for concepts of testing.
K3: The candidates can select the correct application of a concept or technique and/or apply it to a given context.
Example (see others): Which statement regarding testing is correct?
a) Testing is planning, specifying and executing a program with the aim of finding defects
b) Testing is the process of correcting defects identified in a developed program
c) Testing is to localize, analyze and correct the direct defect cause
d) Testing is independently reviewing a system against its requirements