
SOFTWARE VALIDATION & TESTING

Verification and Validation


Verification and validation are independent procedures that are used together to check that a product, service, or system meets requirements and specifications and that it fulfills its intended purpose. They are critical components of a quality management system such as ISO 9000. The terms are sometimes preceded by "independent" (as in independent verification and validation, or IV&V) to indicate that the verification and validation are performed by a disinterested third party.

Dynamic verification (Test, experimentation)


Dynamic verification is performed during the execution of software and dynamically checks its behaviour; it is commonly known as the test phase. Verification is a review process. Depending on the scope of the tests, we can categorize them into three families:

Test in the small: a test that checks a single function or class (unit test)
Test in the large: a test that checks a group of classes, such as a module test (a single module), an integration test (more than one module), or a system test (the entire system)
Acceptance test: a formal test defined to check the acceptance criteria for a software product, comprising functional tests and non-functional tests (performance, stress tests)

Software verification is often confused with software validation. Software verification asks the question, "Are we building the product right?"; that is, does the software conform to its specification. Software validation asks the question, "Are we building the right product?"; that is, does the software do what the user really requires. The aim of software verification is to find the errors introduced by an activity, i.e., to check that the output of the activity is consistent with what was correct at its start.
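The "test in the small" category above can be illustrated with a minimal unit test. The add function and its tests are hypothetical, not taken from the text; they simply stand in for any single unit under test:

```python
import unittest

# Hypothetical function under test -- a stand-in for any single unit.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    # A "test in the small": exercises one function in isolation.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

# Run the suite programmatically; in a real project this would normally be
# invoked as "python -m unittest" instead.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```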

Static verification (Analysis)


Static verification is the process of checking that software meets its requirements by inspecting the code and related artifacts without executing them. Examples include:

Code conventions verification
Software metrics calculation
Formal verification
Verification by Analysis - The analysis verification method applies to verification by investigation, mathematical calculations, logical evaluation, and calculations using classical textbook methods or accepted general use computer methods. Analysis includes sampling and correlating measured data and observed test results with calculated expected values to establish conformance with requirements.
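A small sketch of static verification: the code below inspects source text with the standard ast module, without ever executing it. The checked convention (a maximum function length) and the sample source are illustrative assumptions, not from the text:

```python
import ast

# Sample source to analyze statically (never executed).
SOURCE = """
def short():
    return 1

def long_function():
    a = 1
    b = 2
    c = 3
    d = 4
    e = 5
    return a + b + c + d + e
"""

MAX_BODY_STATEMENTS = 4  # assumed convention, not a standard value

def check(source):
    # Parse the source into a syntax tree and flag over-long functions.
    tree = ast.parse(source)
    violations = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and len(node.body) > MAX_BODY_STATEMENTS:
            violations.append(node.name)
    return violations

print(check(SOURCE))  # -> ['long_function']
```

The same pattern extends to real convention checkers and metrics tools, which likewise operate on the parsed program rather than its runtime behaviour.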

V&V Goals
Verification and validation should establish confidence that the software is fit for purpose. This does NOT mean the software is completely free of defects; rather, it must be good enough for its intended use, and the type of use will determine the degree of confidence that is needed.

Verification Vs Validation
M.sc (IT) Page


Verification: "Are we building the product right?" The software should conform to its specification.
Validation: "Are we building the right product?" The software should do what the user really requires.

Black-box testing
The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure. It is also termed data-driven, input/output-driven, or requirements-based testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing, a testing method that emphasizes executing the functions and examining their input and output data. The tester treats the software under test as a black box: only the inputs, outputs, and specification are visible, and the functionality is determined by observing the outputs for the corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate correctness. All test cases are derived from the specification; no implementation details of the code are considered.

It is obvious that the more of the input space we cover, the more problems we will find, and therefore the more confident we can be about the quality of the software. Ideally we would be tempted to test the input space exhaustively. But exhaustively testing the combinations of valid inputs is impossible for most programs, let alone considering invalid inputs, timing, sequencing, and resource variables. Combinatorial explosion is the major roadblock in functional testing.

To make things worse, we can never be sure whether the specification is correct or complete. Due to limitations of the language used in specifications (usually natural language), ambiguity is often inevitable. Even if we use some type of formal or restricted language, we may still fail to write down all the possible cases in the specification. Sometimes the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words.
And people can seldom specify clearly what they want; they usually can only tell whether a prototype is, or is not, what they want after it has been built. Specification problems contribute approximately 30 percent of all bugs in software.

Research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured by the number of test cases. It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of it. Partitioning is one of the common techniques: if we partition the input space and assume that all the input values in a partition are equivalent, then we only need to test one representative value in each partition to cover the whole input space sufficiently. Domain testing partitions the input domain into regions and considers the input values in each domain an equivalence class. Domains can be exhaustively covered by selecting one or more representative values in each domain.

Boundary values are of special interest. Experience shows that test cases that explore boundary conditions have a higher payoff than test cases that do not. Boundary value analysis requires one or more boundary values to be selected as representative test cases. The difficulties with domain testing are that incorrect domain definitions in the specification cannot be efficiently discovered, and that good partitioning requires knowledge of the software structure. A good testing plan will contain not only black-box testing but also white-box approaches, and combinations of the two.
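Equivalence partitioning and boundary value analysis can be sketched concretely. The specification below is hypothetical (a ticket-price rule with three age partitions); the test cases probe each side of every partition boundary:

```python
# Hypothetical specification: ages 0-12 pay 5, ages 13-64 pay 10,
# ages 65-120 pay 7; any other age is invalid. Prices are illustrative.
def ticket_price(age):
    if not 0 <= age <= 120:
        raise ValueError("age out of range")
    if age <= 12:
        return 5
    if age <= 64:
        return 10
    return 7

def run_case(age):
    # Wrap the call so an expected rejection can be compared like any output.
    try:
        return ticket_price(age)
    except ValueError:
        return "error"

# Boundary value analysis: one test case on each side of every boundary.
boundary_cases = [(-1, "error"), (0, 5), (12, 5), (13, 10),
                  (64, 10), (65, 7), (120, 7), (121, "error")]

print(all(run_case(age) == expected for age, expected in boundary_cases))  # -> True
```

Eight test cases cover all four partitions (two invalid regions, three valid ones) and every boundary between them, instead of trying all 123 possible ages.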

White-box testing
Contrary to black-box testing, software is viewed as a white box (or glass box) in white-box testing, as the structure and flow of the software under test are visible to the tester. Testing plans are made according to the details of the software implementation, such as programming language, logic, and style, and test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing, or design-based testing. There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of, and attention to, the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch (branch coverage), or covering all possible combinations of true and false condition predicates (multiple condition coverage).
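Branch coverage, mentioned above, can be made concrete with a small sketch. The classify function is hypothetical; the point is that two inputs suffice to take both outcomes of each of its two decisions:

```python
# Hypothetical function with two decision points.
def classify(n):
    # Decision 1: sign of n.
    if n < 0:
        sign = "negative"
    else:
        sign = "non-negative"
    # Decision 2: parity of n.
    if n % 2 == 0:
        parity = "even"
    else:
        parity = "odd"
    return sign, parity

# Branch coverage: each decision must take both its true and false outcome.
# The two inputs below achieve that; statement coverage follows as well.
assert classify(-1) == ("negative", "odd")
assert classify(2) == ("non-negative", "even")
```

Note that branch coverage is weaker than multiple condition coverage: the two inputs above never exercise the ("negative", "even") or ("non-negative", "odd") path combinations.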



Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software onto a directed graph. Test cases are carefully selected so that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code: code that is of no use or never gets executed at all, and which cannot be discovered by functional testing.

In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness in failing the mutants: the more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use in practice.

The boundary between the black-box approach and the white-box approach is not clear-cut. Many of the testing strategies mentioned above may not be safely classified as either black-box testing or white-box testing; the same is true for transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of a specification itself is broad: it may contain any requirement, including the structure, programming language, and programming style, as part of its content.

We may be reluctant to consider random testing a testing technique, since the test case selection is simple and straightforward: test cases are chosen randomly. Yet studies indicate that random testing is more cost-effective for many programs; some very subtle errors can be discovered at low cost, and its coverage is not inferior to that of other, carefully designed testing techniques. One can also obtain a reliability estimate from random testing results based on operational profiles.
Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.
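The mutation-testing idea described above can be sketched in a few lines. Everything here is a toy: the "program" is max_of, the "suite" is two assertions, and a single mutant is written out by hand (a real tool would generate mutants automatically):

```python
# Original program under test.
def max_of(a, b):
    return a if a > b else b

# One hand-written mutant: the comparison ">" perturbed to "<".
def mutant_max_of(a, b):
    return a if a < b else b

def suite_passes(fn):
    # The test suite, packaged so it can be run against original or mutant.
    try:
        assert fn(3, 1) == 3
        assert fn(1, 3) == 3
        return True
    except AssertionError:
        return False

# The suite passes on the original and "kills" the mutant, which is the
# evidence mutation testing uses to judge test-case quality.
print(suite_passes(max_of), suite_passes(mutant_max_of))  # -> True False
```

A mutant that no test case kills either reveals a gap in the suite or is an "equivalent mutant" that behaves identically to the original, which is one source of mutation testing's expense.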

Gray-Box Testing
Gray-box testing is a combination of white-box testing and black-box testing. Its aim is to search for defects caused by improper structure or improper usage of applications. Gray-box testing is also known as translucent testing. A black-box tester is unaware of the internal structure of the application under test, while a white-box tester knows the internal structure. A gray-box tester partially knows the internal structure, including access to internal data structures and to the algorithms used for defining the test cases. Gray-box testers require both an overall and a detailed description of the application, together with its supporting documents, and use this information to define test cases. Gray-box testing is beneficial because it applies the straightforward techniques of black-box testing while leveraging the code-targeted techniques of white-box testing. Gray-box testing is often based on requirements-driven test case generation: it presets all the conditions before the program is tested, using assertions. A requirements specification language can be used to state the requirements, which makes them easier to understand and to verify for correctness; the predicates and verification conditions expressed in that language then serve as the input for requirement test case generation.

Performance testing
Not all software systems have explicit specifications on performance, but every system has implicit performance requirements: the software should not take infinite time or infinite resources to execute. "Performance bugs" sometimes refers to design problems in software that cause the system performance to degrade. Performance has always been a great concern and a driving force of computer evolution. Performance evaluation of a software system usually includes resource usage, throughput, stimulus-response time, and queue lengths detailing the average or maximum number of tasks waiting to be serviced by selected resources. Typical resources that need to be considered include network bandwidth, CPU cycles, disk space, disk access operations, and memory usage. The goal of performance testing can be performance bottleneck identification, performance comparison and evaluation, and so on. The typical method of performance testing is to use a benchmark: a program, workload, or trace designed to be representative of typical system usage.
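A minimal benchmark in the sense described above can be written with the standard timeit module. The workload (sorting a 10,000-element list) and the response-time budget are illustrative assumptions, not figures from the text:

```python
import timeit

# Representative workload: sort a hypothetical 10,000-element list.
setup = "import random; data = [random.random() for _ in range(10_000)]"

# Average wall-clock time per call over 100 repetitions.
seconds = timeit.timeit("sorted(data)", setup=setup, number=100) / 100

BUDGET_SECONDS = 0.1  # assumed performance requirement, purely illustrative
print(f"mean time per call: {seconds:.6f}s, within budget: {seconds < BUDGET_SECONDS}")
```

Real performance testing would also vary load, measure throughput and queue lengths, and run against a production-like environment; the benchmark above only shows the basic measure-and-compare loop.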

Reliability testing



Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult, so testing is used as an effective sampling method to measure it. Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can then be applied to the data to estimate the present reliability and predict future reliability. Based on the estimation, the developers can decide whether to release the software, and the users can decide whether to adopt and use it. The risk of using the software can also be assessed from reliability information.

Some advocate that the primary goal of testing should be to measure the dependability of the tested software. There is agreement on the intuitive meaning of dependable software: it does not fail in unexpected or catastrophic ways. Robustness testing and stress testing are variants of reliability testing based on this simple criterion. The robustness of a software component is the degree to which it can function correctly in the presence of exceptional inputs or stressful environmental conditions. Robustness testing differs from correctness testing in that the functional correctness of the software is not of concern; it only watches for robustness problems such as machine crashes, process hangs, or abnormal termination. The oracle is relatively simple, so robustness testing can be made more portable and scalable than correctness testing; this line of research has drawn growing interest, much of it using commercial operating systems as its target. Stress testing, or load testing, is often used to test the whole system rather than the software alone.
In such tests the software or system is exercised at or beyond its specified limits. Typical stresses include resource exhaustion, bursts of activity, and sustained high loads.
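The simple robustness oracle described above ("doesn't crash, doesn't hang") can be sketched as follows. The parse_port function and the input list are hypothetical; the point is that the harness checks only for uncontrolled failures, not functional correctness:

```python
# Hypothetical component under test: a small input parser.
def parse_port(text):
    value = int(text)  # may raise ValueError or TypeError on junk input
    if not 0 <= value <= 65535:
        raise ValueError("port out of range")
    return value

# Exceptional inputs typical of robustness testing.
exceptional_inputs = ["", "abc", "-1", "99999", "80", None]

def robust(fn, arg):
    # Oracle: a controlled, documented exception counts as robust behaviour;
    # any other exception type would count as a robustness failure.
    try:
        fn(arg)
        return True
    except (ValueError, TypeError):
        return True
    except Exception:
        return False

print(all(robust(parse_port, x) for x in exceptional_inputs))  # -> True
```

Because the oracle never asks whether the returned value is correct, the same harness can be pointed at many components with almost no per-component effort, which is why the text calls robustness testing portable and scalable.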

Security testing
Software quality, reliability, and security are tightly coupled. Flaws in software can be exploited by intruders to open security holes. With the development of the Internet, software security problems are becoming even more severe. Many critical software applications and services have integrated security measures against malicious attacks. The purposes of security testing of these systems include identifying and removing software flaws that may potentially lead to security violations, and validating the effectiveness of security measures. Simulated security attacks can be performed to find vulnerabilities.

Testing automation
Software testing can be very costly, and automation is a good way to cut down time and cost. Software testing tools and techniques, however, usually suffer from a lack of generic applicability and scalability. The reason is straightforward: in order to automate the process, we have to have some way to generate oracles from the specification, and to generate test cases to test the target software against the oracles to decide their correctness. Today we still do not have a full-scale system that has achieved this goal; in general, a significant amount of human intervention is still needed in testing, and the degree of automation remains at the automated test script level. The problem is lessened in reliability testing and performance testing. In robustness testing, the simple specification and oracle ("doesn't crash, doesn't hang") suffice, and similarly simple metrics can be used in stress testing.

Testing Phase
"The testing phase" refers to the phase in which comprehensive testing begins to be carried out on the development products (usually by designated testers). For discussion purposes, comprehensive testing runs from the integration stages up to the system test. Many refer to this stage as the "QA stage"; because of the complexity and breadth of the concept "QA", this article adheres to the term "testing phase".

Objectives of the Testing Phase
Before we examine the contribution of the testing phase to the project, it is important to understand the declared objectives of this phase. The accepted answers to this question are as follows:


"To allow parties interested in the product to receive a measure of its quality and compliance with its requirements."
"To prove that the software does what it is supposed to do" (a partial and problematic definition).
"To find the gaps between what the software is supposed to do/not do and what it actually does/does not do" (a slightly more complete definition).

This is indeed the main purpose of the testing phase - an important and critical phase for the success of the project.

Static testing
Static testing is a form of white-box testing in which developers verify or review code, often with the help of checklists, to find faults in it. This type of testing is completed without executing the application being developed. Code reviews, inspections, and walkthroughs are mostly completed at this stage.

Dynamic testing
Dynamic testing is performed by executing the real application with valid inputs and verifying the expected results. Examples of dynamic testing methodologies are unit testing, integration testing, system testing, and acceptance testing. Some differences between static and dynamic testing are:

Static testing is more cost-effective than dynamic testing, because it finds defects at an early stage. In terms of statement coverage, static testing covers more areas than dynamic testing in less time. Static testing is performed before the code is executed, whereas dynamic testing is performed by executing the code. Static testing belongs to the verification phase, whereas dynamic testing belongs to the validation phase.

Testing Team
The average development life cycle of each of these components is between 3 and 4 months. Since multiple components are developed simultaneously, the average team size is 2, which includes 1 developer and 1 tester. The business functionalities in these components are not similar in nature, as every component is developed to meet the requirement of one financial transaction. Testing these applications involves verifying the data or information generated by each component under different business scenarios.

The project has an independent testing team led by one Quality Assurance Manager, who coordinates the testing activities for the entire project. Each member of the testing team is motivated to play a role beyond that of just a tester: apart from testing, each tester also works on process improvements and automation, and plays the role of a business expert within the project. The core competency required to test this application is understanding the business functionality of the components under test. In order to create more scope to acquire new business knowledge, the testing team works consistently to improve testing productivity. This enables them to play the role of business analysts as well, leading to an opportunity for growth of the individuals.

One of the major challenges faced by this team is creating standardized processes and automated testing tools for the mainframe testing environment. Since there is no tool available in the market to test the application in the desired environment, the team has taken the initiative to build the tools and work toward consistent improvements. One of the many driving factors for these initiatives is the need to reduce manual work and rework during the testing life cycle.

Test Automation & Tool


Although manual tests may find many defects in a software application, it is a laborious and time-consuming process. In addition, it may not be effective in finding certain classes of defects. Test automation is the process of writing a computer program to do testing that would otherwise need to be done manually. Once tests have been automated, they can be run quickly and repeatedly. This is often the most cost-effective method for software products that have a long maintenance life, because even minor patches over the lifetime of the application can cause features that were working at an earlier point in time to break. There are two general approaches to test automation:

Code-driven testing: the public (usually) interfaces to classes, modules, or libraries are tested with a variety of input arguments to validate that the results returned are correct.
Graphical user interface testing: a testing framework generates user interface events such as keystrokes and mouse clicks, and observes the resulting changes in the user interface, to validate that the observable behavior of the program is correct.
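A minimal code-driven regression script might look as follows. The slugify function stands in for any public interface under test, and the case table is hypothetical; the point is the automated loop of exercising the interface and validating the returned results:

```python
import re

# Hypothetical public interface under test.
def slugify(title):
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Regression table: (input, expected output) pairs captured from the spec.
regression_cases = [
    ("Hello World", "hello-world"),
    ("  Already--slugged  ", "already-slugged"),
    ("C++ in 2024!", "c-in-2024"),
]

# Drive the interface and collect any mismatches automatically.
failures = [(inp, slugify(inp), want)
            for inp, want in regression_cases if slugify(inp) != want]
print("PASS" if not failures else failures)
```

Because the script can be rerun unchanged after every patch, it is exactly the kind of cheap, repeatable check that makes automation pay off in long-lived regression testing.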

Test automation tools can be expensive, and automation is usually employed in combination with manual testing. It can be made cost-effective in the longer term, especially when used repeatedly in regression testing. What to automate, when to automate, and even whether automation is really needed are crucial decisions that the testing (or development) team must make. Selecting the correct features of the product for automation largely determines the success of the automation; automating unstable features or features that are undergoing change should be avoided.

Testing Tools

Tool name                          Produced by                          Latest version
HP QuickTest Professional          HP Software Division                 11.0
HTTP Test Tool                     Open source                          2.0.8
IBM Rational Functional Tester     IBM Rational                         8.2.1
LabVIEW                            National Instruments                 2011
Maveryx                            Maveryx                              1.2.0
QF-Test                            Quality First Software GmbH          3.4.3
Ranorex                            Ranorex GmbH                         3.2.1
Rational Robot                     IBM Rational                         2003
Selenium                           Open source                          2.11
SilkTest                           Micro Focus                          2010 R2 WS2
SOAtest                            Parasoft                             9.0
TestComplete                       SmartBear Software                   8.6
Testing Anywhere                   Automation Anywhere                  7.0
TestPartner                        Micro Focus                          6.3
TPT                                PikeTec GmbH                         3.4.3
TOSCA Testsuite                    TRICENTIS Technology & Consulting    7.3.1
Visual Studio Test Professional    Microsoft                            2010

Functional Testing
Functional testing is a type of black-box testing that bases its test cases on the specifications of the software component under test. Functions are tested by feeding them input and examining the output; internal program structure is rarely considered (unlike in white-box testing). Functional testing differs from system testing in that functional testing "verif[ies] a program by checking it against ... design document(s) or specification(s)", while system testing "validate[s] a program by checking it against the published user or system requirements" (Kaner, Falk, Nguyen 1999, p. 52). Functional testing typically involves five steps:
1. The identification of functions that the software is expected to perform
2. The creation of input data based on the function's specifications
3. The determination of output based on the function's specifications
4. The execution of the test case
5. The comparison of actual and expected outputs
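The five steps can be sketched for one hypothetical function, an absolute-value routine; the function and its cases are illustrative only:

```python
# Step 1: identified function the software is expected to perform.
def abs_value(x):
    return x if x >= 0 else -x

# Step 2: input data created from the specification.
inputs = [-5, 0, 7]

# Step 3: expected outputs determined from the specification, not the code.
expected = [5, 0, 7]

# Step 4: execute the test cases.
actual = [abs_value(x) for x in inputs]

# Step 5: compare actual and expected outputs.
print(actual == expected)  # -> True
```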

Non-Functional Testing
Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance characteristics, behavior under certain constraints, or security. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of its suitability from the users' perspective. Non-functional testing examines characteristics of the software such as how fast it responds or how long it takes to perform an operation. Non-functional testing includes:

Baseline testing
Compatibility testing
Compliance testing
Documentation testing
Load testing
Performance testing
Recovery testing
Security testing
Scalability testing
Stress testing
Usability testing
Volume testing

TEST CASE DESIGN


The design of tests for software and other engineered products can be as challenging as the initial design of the product itself. Yet, for reasons that we have already discussed, software engineers often treat testing as an afterthought, developing test cases that may "feel right" but have little assurance of being complete. Recalling the objectives of testing, we must design tests that have the highest likelihood of finding the most errors with a minimum amount of time and effort. A rich variety of test case design methods have evolved for software. These methods provide the developer with a systematic approach to testing. More important, methods provide a mechanism that can help to ensure the completeness of tests and provide the highest likelihood for uncovering errors in software. Any engineered product (and most other things) can be tested in one of two ways:



1. Knowing the specified function that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function.
2. Knowing the internal workings of a product, tests can be conducted to ensure that "all gears mesh," that is, that internal operations are performed according to specifications and all internal components have been adequately exercised.

The first test approach is called black-box testing and the second is called white-box testing.

When computer software is considered, black-box testing alludes to tests that are conducted at the software interface. Although they are designed to uncover errors, black-box tests are also used to demonstrate that software functions are operational, that input is properly accepted and output is correctly produced, and that the integrity of external information (e.g., a database) is maintained. A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software.

White-box testing of software is predicated on close examination of procedural detail. Logical paths through the software are tested by providing test cases that exercise specific sets of conditions and/or loops. The "status of the program" may be examined at various points to determine whether the expected or asserted status corresponds to the actual status.

At first glance it would seem that very thorough white-box testing would lead to "100 percent correct programs": all we need do is define all logical paths, develop test cases to exercise them, and evaluate the results, that is, generate test cases to exercise the program logic exhaustively. Unfortunately, exhaustive testing presents certain logistical problems. Even for small programs, the number of possible logical paths can be very large. For example, consider a 100-line program in the language C. After some basic data declarations, the program contains two nested loops that execute from 1 to 20 times each, depending on conditions specified at input. Inside the interior loop, four if-then-else constructs are required. There are approximately 10^14 possible paths that may be executed in this program! To put this number in perspective, assume that a magic test processor ("magic" because no such processor exists) has been developed for exhaustive testing. The processor can develop a test case, execute it, and evaluate the results in one millisecond. Working 24 hours a day, 365 days a year, the processor would work for about 3,170 years to test the program. This would, undeniably, cause havoc in most development schedules. Exhaustive testing is impossible for large software systems.
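The back-of-the-envelope estimate is easy to check: 10^14 paths at one millisecond per test case works out to roughly 3,170 years of continuous execution.

```python
# 10**14 paths, one millisecond per test case, running around the clock.
paths = 10 ** 14
seconds = paths / 1000                  # 1 ms per test case
years = seconds / (365 * 24 * 60 * 60)  # non-leap years, as in the text
print(round(years))  # -> 3171
```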

Requirement Document
The requirements document is the official statement of what is required of the system. Developers should include both a definition and a specification of requirements. The document should:

Specify external system behavior
Specify implementation constraints
Be easy to change (but changes must be managed)
Serve as a reference tool for maintenance
Record forethought about the life cycle of the system (i.e., predict changes)
Characterize responses to unexpected events

It is not a design document: it should state what the system should do rather than how it should do it.

Requirements Document Types


BRD - Business Requirements Document
Also known as: Business Needs Specification, Business Requirements
Usage: To define the business needs of any given project
Written by: Product Marketing Managers; Product Managers; Project Manager; Business Analyst; or Business Executive



Contents:
Introduction - purpose; audience; background; business goals/objectives; benefits/rationale; stakeholders; existing systems; references; assumptions
Project scope/Requirements scope
Functional requirements
Data requirements
Interface requirements
Non-functional requirements

MRD - Market Requirements Document

Also known as: Sometimes combined with the PRD into one document
Usage: Used to define the customer needs more accurately, rather than the business needs
Written by: Product Marketing Managers; Product Managers; Project Manager; Business Analyst; or Business Executive

Contents:
Executive summary
Purpose
Goals/objectives
Target market & customer
Overall position
Competition
Use cases/use model
Customer needs & corresponding features
Systems & technical requirements
Quality assurance & testing

PRD - Product Requirements Document

Also known as: Sometimes combined with the MRD into one document
Usage: Used to identify the product requirements
Written by: Product Managers; Project Manager; Business Analyst

Contents:
Purpose
Stakeholders
Project scope/Requirements scope
Market overview
Product overview
Use cases/use models
Functional requirements
Data requirements
Interface requirements
Support requirements
Usability requirements
Non-functional requirements

The final three documents are largely interchangeable. They are all used to provide a clear and accurate description of the technical specification and functionality of the system being developed.


FSD - Functional Specifications Document; PSD - Product Specifications Document; SRS - Software Requirements Specification
Also known as: Functional spec, Specs, Software specification
Usage: Used to identify the detailed requirements to aid in designing and developing the software
Written by: Engineering Lead; Product Analyst; Program Manager

Contents:
Introduction - purpose; product overview; scope; references
Current system summary
Proposed methods and procedures
Detailed characteristics
Use cases
Design considerations
Environment
Security

Requirement Types
Requirements types are logical groupings of requirements by common functions, features and attributes. There are four requirement types within three distinct requirement levels:

(A) Business Requirements Level


(1) Business Requirement Type. The business requirement is written from the sponsor's point of view. It defines the objective of the project (goal) and the measurable business benefits for doing the project. The following sentence format is used to represent the business requirement and helps to increase consistency across project definitions: "The purpose of the [project name] is to [project goal -- that is, what the team is expected to implement or deliver] so that [measurable business benefit(s) -- the sponsor's goal]."

(B) User Requirements Level


(2) User Requirement Type. The user requirements are written from the user's point of view. User requirements define the information or material that is input into the business process, and the expected information or material as the outcome of interacting with the business process (system), specific to accomplishing the user's business goal. The following sentence format is used to represent the user requirement: "The [user role] shall [describe the interaction (inputs and outputs of information or materials) with the system to satisfy the user's business goal]." or "The [user role] shall provide (input) / receive (output)."

(C) System Requirements Level


(3) Functional Requirement Type. The functional requirements define what the system must do to process the user inputs (information or material) and provide the user with their desired outputs (information or material). Processing the inputs includes storing them for use in calculations or for later retrieval by the user, editing them to ensure accuracy, handling erroneous inputs properly, and using them to perform the calculations necessary for producing the expected outputs. The following sentence format is used to represent the functional requirement: "The [specific system domain] shall [describe what the system does to process the user inputs and provide the expected user outputs]" or "The [specific system domain/business process] shall (do) when (event/condition)."

(4) Nonfunctional Requirement Type. The nonfunctional requirements define the attributes of the user and the system environment. They identify standards (for example, business rules) that the system must conform to, and attributes that refine the system's functionality with regard to use. Because of the standards and attributes that must be applied, nonfunctional requirements often appear to be limitations on designing an optimal solution. Nonfunctional requirements are also at the System level in the requirements hierarchy and follow a sentence format similar to that of the functional requirements: "The [specific system domain] shall [describe the standards or attributes that the system must conform to]."
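To make the "shall (do) when (event/condition)" form concrete, here is a hypothetical sketch of one functional requirement for the pizza example used later in this chapter: the order domain shall reject an order when the quantity is not a positive whole number. The function name and messages are invented for illustration.

```python
# Hypothetical functional requirement, in the form
# "The [system domain] shall (do) when (event/condition)":
# the order domain shall reject an order when the pizza quantity
# is not a positive whole number (erroneous-input handling).
def accept_order(quantity):
    """Validate the user's input and report acceptance or an error."""
    if not isinstance(quantity, int) or quantity < 1:
        return (False, "error: quantity must be a positive whole number")
    return (True, f"order accepted for {quantity} pizza(s)")

print(accept_order(3))
print(accept_order(0))
```

Note how the condition clause of the requirement sentence maps directly onto the guard at the top of the function, while the "shall (do)" clause maps onto the two return paths.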

Pattern for Requirement Itemization


The pattern
Take the following steps to develop a pattern for business stakeholder requirements descriptions:
1. Identify the business processes.
2. Identify the IT processes that support each of the business processes.
3. Identify the activities within each of the IT processes.
4. Identify the functions within each of the activities.
5. Identify the use cases for one or more of the functions.
Using an online pizza ordering system as an example, the rest of this section walks through the steps in the pattern.
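The five steps produce a strict hierarchy: business process, IT process, activity, function. A minimal sketch of that hierarchy as nested Python dicts (the data layout is an assumption for illustration; the pattern itself does not prescribe one):

```python
# Sketch of the BP -> ITP -> Activity -> Function hierarchy as nested
# dicts: each level keys into the next, and the leaves are function lists.
pattern = {
    "BP1: Order automation process": {
        "ITP1: User management process": {
            "A1: Membership activity": [
                "F1: Create member",
                "F2: Update member",
            ],
        },
    },
}

def count_functions(bp_tree):
    """Walk the hierarchy and count the leaf functions."""
    total = 0
    for itps in bp_tree.values():
        for activities in itps.values():
            for functions in activities.values():
                total += len(functions)
    return total

print(count_functions(pattern))  # 2
```

Because each step only refines the level above it, filling in the tree top-down mirrors executing steps 1 through 4 of the pattern; step 5 then reads the use cases off the leaves.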

Requirements description
A corner gourmet pizza vendor, who has operated a traditional pizza delivery service using telephone orders, wants to automate the ordering process by developing an online system. The customers need to be able to:
1. Select the pizza toppings, size, and number of pizzas.
2. Log in and enter the delivery address.
3. Specify the time of delivery.
4. Revise or delete their orders.
A store associate should be able to emulate a member login and perform the corresponding member functions on their behalf.

Apply the pattern


This section shows what happens when you apply the pattern steps to the stakeholder requirements.
BP = business process; ITP = IT process; A = Activity; F = Function

1. Identify the business processes
BP1: Order automation process

2. Identify the IT processes that support each of the business processes
BP1: Order automation process
  ITP1: User management process
  ITP2: Inventory management process
  ITP3: Order management process

3. Identify the activities within each of the IT processes
BP1: Order automation process
  ITP1: User management process
    A1: Membership activity
  ITP2: Inventory management process
    A1: Set up pizza toppings activity
    A2: Set up pizza sizes activity
  ITP3: Order management process
    A1: Order activity

4. Identify the functions within each of the activities
BP1: Order automation process
  ITP1: User management process
    A1: Membership activity
      F1: Create member
      F2: Update member
      F3: Delete member
      F4: View members
      F5: Reset password
      F6: Create store associate password
  ITP2: Inventory management process
    A1: Set up pizza toppings activity
      F1: Add pizza topping and price
      F2: Update pizza topping
      F3: Delete pizza topping
      F4: View pizza toppings
    A2: Set up pizza sizes activity
      F1: Add pizza size and price
      F2: Update pizza size
      F3: Delete pizza size
      F4: View pizza sizes
  ITP3: Order management process
    A1: Order activity
      F1: Enter order
      F2: View order
      F3: Submit order
      F4: Revise order
      F5: Delete order

5. Identify the use cases for one or more of the functions


The use cases can now be identified directly using the functions in the steps above. Figure 1 shows the list of use cases. The add, update, and delete functions are grouped into one "Manage" use case. For the sake of brevity, the login use case is not shown; it's assumed to be part of the manage use case.
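The grouping rule described above (add/update/delete functions for an entity collapse into one "Manage" use case, other functions map one-to-one) can be sketched mechanically. The function below is illustrative; the text does not prescribe an automated derivation.

```python
# Sketch of the use-case grouping rule: create/add/update/delete
# functions for the same entity collapse into one "Manage <entity>"
# use case; all other functions become use cases one-to-one.
CRUD_VERBS = ("Add", "Create", "Update", "Delete")

def derive_use_cases(functions):
    """Map a flat list of function names to a deduplicated use-case list."""
    use_cases = []
    for name in functions:
        verb, _, entity = name.partition(" ")
        use_case = f"Manage {entity}" if verb in CRUD_VERBS else name
        if use_case not in use_cases:   # keep first occurrence only
            use_cases.append(use_case)
    return use_cases

print(derive_use_cases([
    "Create member", "Update member", "Delete member",
    "View members", "Reset password",
]))
```

Applied to the membership activity, the five functions above reduce to three use cases: "Manage member", "View members", and "Reset password", matching the grouping shown in Figure 1.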

Figure 1. Pizza order system


There you have it: all essential use cases are identified. To show how easily the pattern can be applied to any new problem situation, let's use it step by step to develop a solution for the stakeholder requirements in a case study.
