
APPLICATION TESTING BASICS:
A PRACTICAL GUIDE TO IMPROVING SOFTWARE QUALITY

by Paul Conte, Picante Software

CONTENTS

Introduction
Software Management, Quality Assurance, and Benefits of Testing
  Software Management
  Quality Assurance
  Benefits of Testing
Choosing the Right Type of Testing
  Identifying the Scope of Tests
  Identifying Test Cases
  First-Time or Regression Testing
  Other Types of Testing to Consider
Testing Principles
  Demonstrating is Not Testing
  Tests Should Be Repeatable
  Testing Should Provide Adequate Coverage
Key Testing Tactics
  Adopt a Tester Mentality
  Add Code to Facilitate Testing
  Structure Interactive Applications to Simplify Testing
  Consider Some Form of Test-Driven Development
  Plan Your Test Cases
  Keep Track of Your Results
Automation
  Test Drivers and Results Comparators
  Tools for Data Extraction, Capture, and Generation
Getting Started
Where to Learn More



INTRODUCTION

IT managers and professionals may have varied opinions about many software development principles, but most agree on one thing above all: the software you deliver must be accurate and reliable. And successful software development groups have long recognized that effective testing is essential to meeting this goal. Despite testing's well-established benefits, however, many IT organizations still give the practice short shrift, perhaps because they consider thorough testing impractical. The limited testing they do conduct is often haphazard and painfully cumbersome, an effort with little if any gain.

If your IT organization is in a similar position, and you'd like to improve your testing practices to increase the quality of your software, this whitepaper will help. I'll explain different types of testing and the purpose of each, and I'll guide you toward establishing a well-defined strategy that exploits automation so your testing process is routine, efficient, and delivers maximum benefit. As you'll also see, you can take an incremental approach to testing so that over time your testing becomes more extensive and your software more reliable.

SOFTWARE MANAGEMENT, QUALITY ASSURANCE, AND BENEFITS OF TESTING

Before we dive into the details of testing, let's look at where testing fits in the larger world of software management and why it is a valuable practice.
SOFTWARE MANAGEMENT

Software management is a set of practices that attempt to achieve the following goals:
- Deliver software products with the expected functionality and quality
- Deliver in the expected timeframe
- Deliver for the expected cost
- Meet expected levels of service during the software's use

Effective software management is a matter of setting and meeting expectations, which requires a software development process that's predictable and that produces consistent outcomes. Testing is one of the software management practices that helps improve predictability and consistency.


(If you're just starting to explore ways to improve your IT organization's software management practices, you may want to read the Software Development Survival Guide: Five Steps from Chaos to Control whitepaper. See Where to Learn More at the end of this whitepaper.)
QUALITY ASSURANCE

Testing is a technique you use to measure, and thereby improve, software quality. Testing fits in the broader category of software management practices known as quality assurance (QA), along with other practices, such as defect tracking and design and code inspections. The overall goal of QA is to deliver software that minimizes defects and meets specified levels of function, reliability, and performance. The single most important QA practice you can follow is to record all identified software defects and other types of problems. Tracking problems is key to improving software quality, but discovering problems only after an application is put in production is disruptive and expensive.
BENEFITS OF TESTING

Effective testing before production deployment achieves two major benefits:


- Discovering defects before an application is deployed allows you to fix them before they impact business operations. This reduces business disruptions from software failure or errors and reduces the cost of fixing the defects.
- You can estimate the extent of remaining, undiscovered defects in software and use such estimates to decide when the software meets reliability criteria for production deployment. Test results also help you identify strengths and deficiencies in your development processes and make process improvements that improve delivered software.

Considering the first benefit, your own experience probably confirms what software researchers have discovered: the later a bug or other defect is found, the more troublesome and expensive it is. Defects in production software can severely disrupt business operations by causing downtime, customer complaints, or errors. Software researchers have found that defects discovered in production can be as much as one hundred times more expensive to fix than those found early in the design and implementation process. Excessive numbers of defects also disrupt the software development process, because the development group thrashes as they try to manage the cycle of fixing defects in production code while adding new features or making other planned changes required by the business.

Testing is just one way to discover defects earlier in the development process. Various studies indicate that well-conducted testing by itself may identify somewhere around 50 or 60 percent of the defects present in an application. Design and code inspections are another technique for discovering defects; and, interestingly, research indicates inspections can achieve error detection rates as high as 90 percent. Whether or not your organization considers inspections, testing is still beneficial. For one thing, testing and inspections find different types of errors, so neither technique is sufficient by itself. But the main issue for most organizations is that inspections require substantial training and intensive team effort and are consequently a much bigger undertaking than improving your testing processes.

The second major benefit of testing (which applies to code inspections, as well) is that you gain some measure of how buggy a piece of software is before you decide to deploy it. For example, if your experience with testing reveals that, on average, about as many defects are found after deployment as are found during testing, then you can project a similar relationship for future projects. Many programmers find it hard to believe that even thorough testing may still miss half the defects that are eventually discovered. But, as we shall see, it's impossible to test any nontrivial application completely. This is why, as helpful as testing is in finding defects before software is deployed, you still shouldn't view testing as a foolproof filter that can remove nearly all undesirable elements from application code, the way an air filter cleans impurities from the air.

If your software development process is producing a high rate of defects, you need to work on some part of your development practices to lower those rates, not just increase the amount of time spent testing. Testing gives you the data necessary to know how effective your other design and coding practices are at achieving software quality targets. Without consistent testing, not only will your production software be less accurate and reliable, but you also won't have a solid basis for deciding which changes to make in your development process.


CHOOSING THE RIGHT TYPE OF TESTING

As you plan how to test your applications, you should consider various testing approaches. There isn't any one best way to test applications, and an effective testing strategy should include at least several ways to test new or modified applications. In this section, I provide a concise description of several important dimensions to testing, including the scope of what's tested, how test cases are chosen, whether the software to be tested is newly created or modified code, and the purpose of the tests.
IDENTIFYING THE SCOPE OF TESTS

Well-designed applications are composed of building blocks, such as subroutines, procedures, modules, classes, and other elements. These building blocks can be tested at different stages of their implementation and assembly into a complete application.
Unit testing

Unit testing is typically performed by the same developer who writes (or modifies) a subroutine, procedure, module, class, or other building block of an application. The purpose of this testing is to be sure the unit correctly performs its specified function(s). Unit testing is far more manageable in applications that are well-structured as a hierarchy of classes, modules, or other building blocks. Testing lower-level building blocks can catch many (but rarely all) defects in the building block before it's combined with other units to form a larger module, subsystem, or application. Using automated tools for unit testing also greatly improves your ability to fix defects in a unit or modify it for other purposes (for example, to improve performance) and to ensure that the unit still works correctly by performing regression tests (discussed below).
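To make this concrete, here is a minimal automated unit test sketch in Java with JUnit 4 (the same idea applies in the ILE languages mentioned later); the LineTotal routine and its 10 percent discount rule are hypothetical, invented only for illustration:

```java
import java.math.BigDecimal;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical unit under test: an order line total with a 10% discount
// on quantities of 100 or more. Invented for illustration only.
class LineTotal {
    static BigDecimal compute(BigDecimal unitPrice, int qty) {
        if (qty <= 0) throw new IllegalArgumentException("qty must be positive");
        BigDecimal total = unitPrice.multiply(BigDecimal.valueOf(qty));
        return qty >= 100 ? total.multiply(new BigDecimal("0.90")) : total;
    }
}

public class LineTotalTest {
    @Test public void normalQuantity() {
        assertEquals(new BigDecimal("50"), LineTotal.compute(new BigDecimal("5"), 10));
    }
    @Test public void discountBoundary() {
        // 99 units: no discount; 100 units: the discount kicks in
        assertEquals(new BigDecimal("495"), LineTotal.compute(new BigDecimal("5"), 99));
        assertEquals(new BigDecimal("450.00"), LineTotal.compute(new BigDecimal("5"), 100));
    }
    @Test(expected = IllegalArgumentException.class)
    public void invalidQuantityRejected() {
        LineTotal.compute(BigDecimal.ONE, 0);
    }
}
```

Because the tests are code, they can be re-run unchanged after every modification to the unit, which is what makes the regression testing discussed below practical.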
Integration testing

Integration testing tests an assemblage of units to be sure they work properly together. The assemblage might be a set of Java classes in a package or a set of ILE RPG or ILE COBOL modules in an OS/400 program or service program, for example. Integration testing can occur in various ways. When an application is being implemented or revised by a single developer, the developer can run integration tests after each new or changed unit passes unit tests and is added to (or replaced in) the assemblage. With team development, integration tests are typically scheduled on a periodic basis (such as daily or weekly) so that new or changed units from multiple developers can be tested together.

Integration testing lies on the continuum between unit and system testing. Small assemblages created by a single developer are often tested by the developer using tools and techniques comparable to unit testing. Larger assemblages may represent complex subsystems of an application and require tools and techniques comparable to system testing.
System testing

System testing tests an entire application (or all the available and assembled units of an application) on the same (or a comparable) platform on which the application will be deployed. Like integration testing, system testing checks that all the application's units work together properly. What distinguishes system testing from merely comprehensive integration testing is that the application is tested on a platform (including hardware system, Web application server, and other middleware) that resembles the final deployment platform as closely as possible. Also, the environment and application are configured as closely to the final production settings as possible (for example, security settings and the like). System testing can discover problems that arise from configuration errors and platform dependencies.

Because system testing requires deployment, it's often scheduled as a one-shot group of tests after all development has been completed and the final integration tests have been passed. With this approach, the implicit hope is that the system tests won't reveal serious problems and the deployed application can be made live after a single round of system testing. One-shot system testing is much better than no post-deployment testing, but it misses the valuable opportunity to catch deployment- and platform-related problems early in the development cycle. With appropriate application build and deployment tools, along with automated testing tools, a development team can efficiently schedule periodic system tests along with periodic integration tests. Note that this iterative approach to system testing does not require that all units in the application be completed before testing the application. Once a significant part of the application's functionality has been implemented and has passed integration testing, the partially completed application can be deployed and put through system tests.
IDENTIFYING TEST CASES

Testing generally requires some way of identifying test cases: configuration settings, input values, action sequences, and other elements that determine the behavior of the tested software. There are two main approaches you can take to identifying test cases: structural testing and functional testing.
Structural testing

With structural testing, the person who creates test cases uses his or her knowledge of the code to create test cases. This includes creating test cases that will provide broad coverage, as discussed in a later section. It also may include creating test cases with boundary values for conditional statements found in the code. For example, suppose a data validation routine tests for a Customer ID value that is greater than zero. The test cases might include Customer ID values of 0 and 1. Structural testing is also referred to as "clear box" or "white box" testing (in contrast to "black box" or functional testing, discussed next).

Structural testing is well-suited to unit testing by the developer who implements the unit. It can be more difficult to apply structural testing to integration or system tests for applications that are developed by a team, because it's likely that no single person has adequate knowledge of all the code to be tested.
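A minimal Java sketch of this boundary-value idea (the validator class is invented for illustration); the values 0 and 1 come directly from reading the condition in the code:

```java
// Hypothetical validation routine from the text: accepts Customer IDs
// greater than zero. Structural (white box) testing picks values that
// sit on either side of the boundary found in the code: 0 and 1.
class CustomerValidator {
    static boolean isValidId(int customerId) {
        return customerId > 0;   // the boundary sits at zero
    }
}

public class CustomerValidatorTest {
    public static void main(String[] args) {
        // Boundary cases chosen by reading the condition in the code
        assert !CustomerValidator.isValidId(0) : "0 must be rejected";
        assert  CustomerValidator.isValidId(1) : "1 must be accepted";
        // A representative value well away from the boundary
        assert  CustomerValidator.isValidId(12345);
        System.out.println("boundary tests passed (run with java -ea)");
    }
}
```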
Functional testing

With functional testing, the person who creates test cases uses requirements or routine specifications to determine test cases. This includes creating cases to test every specified function, including subcategories of behavior for each function. For example, if the specification for an inventory posting routine states that it supports single-unit and case-lot quantities, then there should be test cases that cover both alternatives. Functional testing is also referred to as "black box" testing. Functional testing is a good approach for testers other than the original developer of a unit or application to use, because it doesn't require knowledge of the internal workings of the software.
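A sketch in Java 16+ of black box cases for the inventory posting example; the interface, the twelve-units-per-case assumption, and all names are hypothetical, since only the two quantity categories come from the specification:

```java
import java.util.List;

// Black box cases derived from a (hypothetical) specification:
// "the posting routine supports single-unit and case-lot quantities."
// The tester works from the spec and never reads postReceipt's code.
public class InventoryPostingFunctionalTest {
    interface InventoryPoster {                       // assumed interface
        int postReceipt(String itemId, int qty, boolean caseLot);
    }

    static void run(InventoryPoster poster) {
        record Case(String itemId, int qty, boolean caseLot, int expectedUnits) {}
        List<Case> cases = List.of(
            new Case("A100", 1, false, 1),    // one single unit
            new Case("A100", 5, false, 5),    // several single units
            new Case("A100", 1, true, 12),    // one case lot (assume 12 per case)
            new Case("A100", 2, true, 24)     // multiple case lots
        );
        for (Case c : cases) {
            int posted = poster.postReceipt(c.itemId(), c.qty(), c.caseLot());
            System.out.printf("%s -> %s%n", c,
                posted == c.expectedUnits() ? "PASS" : "FAIL (got " + posted + ")");
        }
    }

    public static void main(String[] args) {
        // Stub standing in for the real routine, so the sketch runs
        run((item, qty, caseLot) -> caseLot ? qty * 12 : qty);
    }
}
```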


FIRST-TIME OR REGRESSION TESTING

Testing can be done as an integral, continuous part of developing new applications or entirely new application functions.
First-time testing

First-time testing for new development essentially compares test results against specifications to ensure that the unit or application does what's expected.
Regression testing

Perhaps the most cost-effective type of testing, however, is known as regression testing. With regression tests, the results of a series of successful tests on all or part of an application are saved. Then, when any part of the application is subsequently modified, the tests are rerun and the new results are compared to the saved results of previous tests. Variances in new test results are analyzed to catch introduced errors or validate that application revisions have produced expected changes in the test results. With proper application design and the right test tools, regression testing can be highly automated. This provides an efficient way to protect against the introduction of new defects when code is modified.
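A minimal Java sketch of the golden-file pattern behind this idea: the first run records results as the baseline, and later runs are compared line by line against it. The file name and the stand-in test output are illustrative:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class RegressionRunner {
    static List<String> runAllTests() {
        // Placeholder for invoking the real test suite and collecting
        // one line of output per test case
        return List.of("price A100 qty 1 -> 5.00",
                       "price A100 qty 100 -> 450.00");
    }

    public static void main(String[] args) throws Exception {
        Path baseline = Path.of("baseline.txt");
        List<String> current = runAllTests();
        if (!Files.exists(baseline)) {
            Files.write(baseline, current);          // first run: save the results
            System.out.println("baseline recorded");
            return;
        }
        List<String> saved = Files.readAllLines(baseline);
        for (int i = 0; i < Math.max(saved.size(), current.size()); i++) {
            String was = i < saved.size() ? saved.get(i) : "<missing>";
            String now = i < current.size() ? current.get(i) : "<missing>";
            if (!was.equals(now))                    // variance: analyze it
                System.out.printf("variance at case %d: was [%s], now [%s]%n", i, was, now);
        }
    }
}
```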
OTHER TYPES OF TESTING TO CONSIDER

The focus of this whitepaper is on testing to discover functional and reliability problems with an application, but testing is also important for measuring other aspects of an application. Among the other types of testing to consider are:
- Performance testing: Before an application is deployed in production, it should be tested under a simulated load to be sure it performs as expected.
- Usability testing: An interactive application should undergo tests by end users to evaluate ease-of-use, potentially error-prone elements of the interface, and other human factors.
- Security testing: This type of testing evaluates the authentication, authorization, and data protection of the application.

As I mentioned in the introduction to this section, an effective testing process generally includes several approaches to testing. For example, you might use a structural approach for unit testing new routines and re-run functional integration tests as part of your regression testing.


TESTING PRINCIPLES

There are several core principles that form the foundation for effective testing of any kind. As you develop your own testing strategy and plan, be sure to keep these in mind.

DEMONSTRATING IS NOT TESTING

Let's suppose you've been assigned to implement a pricing routine that can be called from various other places in your company's applications. This pricing routine accepts input parameters, such as item and customer IDs, quantity, and perhaps other information that may determine the price of an item for a particular transaction. The routine includes some hard-coded logic and accesses the database for information used in calculating the price. The routine returns the calculated unit price to the caller. (I've kept this example simple here; a real-world pricing routine might do much more.)

Once you've written the routine, you combine it with the rest of an application, such as Order Entry, in a test environment. Then you use the Order Entry screens to enter some item numbers and observe the displayed prices. Have you tested the pricing routine? Emphatically, NO! You've demonstrated the pricing routine. Yes, the routine will return a correct price, at least under some conditions. Beyond that, this limited exercising of the routine provides little data. Effective testing should follow a plan that identifies the type and number of cases to test, with careful consideration given to valid and invalid inputs, boundary conditions, code coverage, and other dimensions.
TESTS SHOULD BE REPEATABLE

Consider the pricing routine again. An effective testing strategy for this routine should be repeatable. Testing one day for some set of valid inputs, doing some more development work, and then testing the next day for a set of invalid inputs (assuming that you've already tested for valid cases) will likely miss some defects. Each iteration of a unit, assemblage, or full application should be fully tested using the same test cases. This not only ensures that you use the full range of tests on any new or revised code in the latest iteration, it also allows efficient regression testing to detect errors introduced when code is modified.


TESTS SHOULD PROVIDE ADEQUATE COVERAGE

Effective testing should exercise all or most of the code being examined. Testing our example pricing routine by passing it thousands of test cases that all flow through the same thirty percent of the routine's statements provides no indication at all of the correctness of the rest of the routine. There are two measures of coverage to consider:

- Percentage of statements executed
- Percentage of logic paths executed

Designing tests that execute most or all of the statements in a piece of software is relatively straightforward, although some defensive code to catch impossible conditions may be impossible to test unless special coding techniques are used (see below). A more challenging, but equally important, goal is to run tests that flow through different sequences of statements in the software, because this type of testing is more effective at uncovering logic errors among a related set of statements.

Using automated tools that generate test cases and measure both types of coverage, it's reasonable to achieve close to complete coverage for unit tests. For larger, more complex assemblages and complete applications, there may be so many different potential logic paths that complete coverage is impractical. However, using a coverage analysis tool lets you ensure that the most important code (that is, code that is most widely used, high-risk, or has other properties) is fully covered.
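A tiny Java sketch of the difference between the two measures (the routine is hypothetical): two test cases can execute every statement below yet exercise only two of its four logic paths:

```java
// Why statement coverage is weaker than path coverage: two test cases
// can execute 100% of the statements below while covering only 2 of
// the 4 possible logic paths.
public class CoverageExample {
    // Hypothetical discount logic with two independent branches
    static double price(double base, boolean preferred, boolean bulk) {
        double p = base;
        if (preferred) p *= 0.95;  // branch A
        if (bulk)      p *= 0.90;  // branch B
        return p;
    }

    public static void main(String[] args) {
        // These two cases execute every statement...
        price(100, true, false);   // path: A taken, B skipped
        price(100, false, true);   // path: A skipped, B taken
        // ...but leave two paths untested, where an interaction bug could hide:
        //   (true, true)   both discounts combined
        //   (false, false) no discount at all
    }
}
```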
KEY TESTING TACTICS

In addition to the principles that underlie effective testing, there are several tactics that can help you get the most out of your testing process.
ADOPT A TESTER MENTALITY

Being a good tester requires developing an almost perverse pleasure in finding errors in someone's code. This may not be too hard when the code is someone else's, but many programmers do a poor job of unit testing their own code because they a) don't believe their code has any defects to begin with, and b) finding an error in one's own code has no immediate benefit; in fact, there's a bit of punishment as a result.


Nevertheless, accomplished programmers recognize that their code almost certainly has defects, and it's better to catch them yourself, early on, than to have them be discovered later by another developer or by an end user. The tester mentality adopts the perspective that the software to be tested has errors, many of which are probably not obvious. Successfully ferreting out these hidden nuggets is then seen as a demonstration of the tester's determination, insight, and cleverness. The satisfaction of uncovering subtle bugs can be as rewarding as writing good code (well, almost).
ADD CODE TO FACILITATE TESTING

During application development or modification, you can add code to help with subsequent testing. Much of this code should be conditionally compiled so it's incorporated only in pre-production test versions. Other test-related code can be included in the production version, but its execution should be enabled only when needed.

One of the most useful additions is code that emits runtime log entries with internal data values and/or execution points (for example, routine names). Logging can be enabled by passing an argument when the application's main program is called, or by using entries in an application configuration file that the application reads during startup. Log entries can be directed to a file for subsequent analysis, such as comparison with previous log data. With this type of logging code in place, test runs can generate not only standard application results, but also a record of internal data that provides a more thorough indication of how the software is functioning.

Another type of added code to consider is conditionally compiled code that lets you test impossible application conditions. (Conditional compilation is supported in several iSeries languages, including C++ and ILE RPG.) By compiling test versions of software with this code included, you can run unique test cases (by setting Customer ID equal to 999999, for example) that cause execution to flow through statements you otherwise wouldn't be able to test.
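Here is a minimal Java sketch of such logging, enabled at runtime rather than by conditional compilation (which Java lacks); the property name, file name, and log format are assumptions for illustration:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;

// Runtime enabled trace logging. Enable with: java -Dapp.trace=true ...
// (or set the flag from an application configuration file at startup).
public class Trace {
    private static final boolean ENABLED = Boolean.getBoolean("app.trace");
    private static PrintWriter out;

    static synchronized void log(String routine, String data) {
        if (!ENABLED) return;                        // a no-op in production runs
        try {
            if (out == null)
                out = new PrintWriter(new FileWriter("trace.log", true), true);
            out.printf("%s | %s%n", routine, data);  // execution point + internal values
        } catch (IOException ignored) { }
    }
}

// Usage inside application code:
//   Trace.log("calcPrice", "itemId=" + itemId + " qty=" + qty + " price=" + price);
// The resulting trace.log can then be compared with the log from a previous run.
```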
STRUCTURE INTERACTIVE APPLICATIONS TO SIMPLIFY TESTING

Callable routines that involve no user interface (such as 5250 displays or Web pages) can be tested with a test driver program or tool that calls the routine with a series of test values and records the returned values. Software that handles user input and output is harder to test.

One powerful strategy is to structure application code to separate the user interface code from the business function and database access code. This technique results in callable routines for most of the business logic and database access, and these routines can be tested both as units and as integrated assemblies using a fairly simple test driver, as described above. Separating the user interface from business logic and database access is also a necessary first step to modernizing applications by providing a browser-based interface.

Efficiently testing software through its interactive user interface requires an automated UI driver, and even with such a tool this approach is more cumbersome than testing callable routines. Manually entering test data into an input screen, waiting for the application to respond, and manually recording results is simply too slow and too error-prone to be an effective testing technique.
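A small Java sketch of the payoff: because the (hypothetical) pricing service below has no user-interface dependency, a plain driver can exercise it directly, with no screens involved. All names and values are illustrative:

```java
import java.util.Map;

public class PricingDriverExample {
    // Business/database layer: callable, with no user interface code
    interface PricingService {
        double unitPrice(String itemId, String customerId, int qty);
    }

    // A UI layer (5250 display file, Web page, ...) would also call
    // PricingService, but the driver below never touches the UI.
    public static void main(String[] args) {
        PricingService svc = (item, cust, qty) -> qty >= 100 ? 4.50 : 5.00; // stub

        Map<Integer, Double> cases = Map.of(1, 5.00, 99, 5.00, 100, 4.50);
        cases.forEach((qty, expected) -> {
            double actual = svc.unitPrice("A100", "C001", qty);
            System.out.printf("qty=%d expected=%.2f actual=%.2f %s%n",
                qty, expected, actual, expected == actual ? "OK" : "FAIL");
        });
    }
}
```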
CONSIDER SOME FORM OF TEST-DRIVEN DEVELOPMENT

Several newer, agile development methodologies incorporate two testing practices:


- Implement tests before you write the code to be tested.
- Use very short cycles for integration testing.

The idea behind the first practice is this: if you're going to write the test eventually, it's better to have it available as soon as you write the code. That way, you can test immediately after writing the code and catch errors at the earliest possible point. The second practice also helps you catch integration errors earlier by not allowing a large amount of time to elapse, and more development to occur, between integration tests. These practices can be adopted informally to some degree, even if your IT organization doesn't follow one of the agile methodologies. At the very least, the discipline of thinking through tests before writing code can encourage developers to program more carefully.
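As a small illustration in Java with JUnit 4 (all names hypothetical), the tests below would be written first; the CreditCheck class is added afterward, as the simplest implementation that makes them pass:

```java
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Step one: write the tests before creditLimitOk exists. They fail
// (or fail to compile) until the routine is implemented, which is
// exactly the point of the practice.
public class CreditCheckTest {
    @Test
    public void orderWithinLimitIsAccepted() {
        assertTrue(CreditCheck.creditLimitOk(1000.00, 250.00)); // limit, order
    }
    @Test
    public void orderOverLimitIsRejected() {
        assertFalse(CreditCheck.creditLimitOk(1000.00, 1000.01));
    }
}

// Step two, written only after the tests above: the simplest
// implementation that makes them pass.
class CreditCheck {
    static boolean creditLimitOk(double limit, double orderAmount) {
        return orderAmount <= limit;
    }
}
```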
PLAN YOUR TEST CASES

You can both simplify testing and improve the likelihood of discovering errors if you plan your test cases, rather than just using some random approach to creating test data. As explained above, for structural testing, you want to create test cases that will give you good statement and logic path coverage. Also consider test cases that:

- Produce normal results.
- Trigger anticipated error conditions.
- Represent boundary conditions (that is, minimal/maximal values that result in a true/false value for a program condition).
- Represent impossible conditions.
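As a sketch, a planned case list for the earlier pricing-routine example might enumerate one or more entries per category above (Java 16+; all values are illustrative):

```java
// A planned case list for the (hypothetical) pricing routine, with at
// least one entry per category from the list above.
public class PricingTestPlan {
    record PlannedCase(String category, String itemId, int qty, String expectation) {}

    static final PlannedCase[] CASES = {
        new PlannedCase("normal",     "A100",    10, "standard unit price"),
        new PlannedCase("error",      "NO-SUCH", 10, "item-not-found error"),
        new PlannedCase("boundary",   "A100",     1, "minimum valid quantity"),
        new PlannedCase("boundary",   "A100",     0, "rejected: below minimum"),
        new PlannedCase("impossible", "999999",  10, "forces a defensive code path")
    };
}
```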

You can apply similar thinking to functional testing, even though functional testers work from a specification rather than from program code.
KEEP TRACK OF YOUR RESULTS

As explained earlier, regression testing is one of the most cost-effective types of testing, especially when you use automated tools to run regression tests. Regression tests, of course, require that you store previous test results for comparison with later test results. Because testing is a pivotal practice for measuring the effect of changes in your other software development practices, you also need to keep a history of defect rates as determined by your testing.

If you're wondering how well you're doing, consider that one estimate of the software industry average is 15 to 50 errors per 1,000 lines of deployed software (even at the low end of that range, a 40,000-line application would ship with roughly 600 latent defects). Among the best development groups, following rigorous QA practices that include testing, the rate drops to one or fewer errors per 1,000 lines of deployed software.
FIGURE 1: MEASURING SOFTWARE QUALITY. [Chart: errors per 1,000 lines of deployed software, ranging from roughly 50 for very poor quality down to near zero for exceptional quality. Rigorous QA practices can reduce error rates to fewer than 1 per 1,000 lines of code.]

AUTOMATION

Testing is so demanding that automation is essential to accomplish it effectively. Test case data must be extracted or created; test sequences defined and invoked; and test results captured, presented, and compared, to name a few of the tasks that aren't easily managed without automated tools. And the increasing complexity of applications and systems makes automated testing tools more important than ever. As in other software development areas, of course, automation by itself is not a silver bullet. I've already described test coverage tools, which measure which statements and logic paths a test run covers. In this section, I introduce some additional testing tools to consider.

TEST DRIVERS AND RESULTS COMPARATORS

A test driver is a tool that runs tests, using data and/or scripts as input and invoking the software to be tested by various means, including making calls to routines or simulating user data entry for interactive software. Test drivers also record results, including routines' return values and data displayed by interactive software. Obviously, one of the most important aspects of a test driver is what type of interface it can drive, including:
- Procedure or method calls in various high-level languages (ILE RPG, ILE COBOL, Java, C++, ...)
- Command-line program calls (OS/400, Windows, Linux)
- 5250 displays (or other block-mode terminal displays)
- Graphical interfaces (Windows and other platforms)
- Web browser interfaces
- Web services
- Message-oriented middleware (MQ Series)
- Other inter-program communication protocols

Test drivers generally provide a way of defining individual or small sets of tests and then assembling these into more comprehensive test suites. Some test drivers also allow conditional execution of groups of tests based on the results of prior tests. A scripting language (or equivalent interactive scripting tool) provides invaluable flexibility in programming a series of tests. A test driver must store test results, including some log of each individual test performed and a record of all software output. Flexibility in filtering and formatting the stored results can ease subsequent analysis.
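As a rough illustration of that structure, here is a toy Java driver that assembles named tests into a suite and logs pass/fail results; real tools add scripting, UI simulation, conditional execution, and result storage, and every name below is invented:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class MiniTestDriver {
    // Individual tests are named units of work that can be grouped into a suite
    private final Map<String, Supplier<Boolean>> tests = new LinkedHashMap<>();

    void add(String name, Supplier<Boolean> test) { tests.put(name, test); }

    void runSuite(String suiteName) {
        System.out.println("== suite: " + suiteName);
        tests.forEach((name, test) -> {
            boolean passed;
            try { passed = test.get(); }
            catch (RuntimeException e) { passed = false; }  // a crash counts as a failure
            System.out.printf("%-30s %s%n", name, passed ? "PASS" : "FAIL");
        });
    }

    public static void main(String[] args) {
        MiniTestDriver driver = new MiniTestDriver();
        driver.add("price: normal qty", () -> true);  // stand-ins for real test calls
        driver.add("price: zero qty",   () -> true);
        driver.runSuite("pricing regression");
    }
}
```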


Test drivers are typically integrated with a tool that can compare results from separate test runs (as for regression testing). A very important aspect of a test driver/results comparator is the ability to efficiently handle minor changes in the expected output. For example, when you add a new test case in the middle of a sequence of previous test cases, there may be additional output data in the middle of new test results. A good results comparator will note the new data, but resynchronize the comparison after that section of output.

When software under test changes database file contents, a data file comparison tool is also necessary to evaluate changes made by a test against a previously produced reference file of expected results. For this type of tool, the ability to adapt to small variances in the output results is also an important capability.

To test concurrent operations (as when multiple Web browsers access an application at the same time) or to simulate multi-user performance loads, a test driver must be able to simulate multiple users or connections.
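The following Java sketch shows the resynchronization idea in miniature: when lines are inserted in the new output, they are reported as additions and the comparison realigns instead of flagging everything afterward as changed. It is an illustration, not a production diff algorithm:

```java
import java.util.List;

public class ResyncComparator {
    static void compare(List<String> expected, List<String> actual) {
        int e = 0, a = 0;
        while (e < expected.size() && a < actual.size()) {
            if (expected.get(e).equals(actual.get(a))) { e++; a++; continue; }
            // Try to resync: does the expected line reappear further down?
            int found = actual.subList(a, actual.size()).indexOf(expected.get(e));
            if (found >= 0) {                       // lines were inserted in new output
                for (int i = a; i < a + found; i++)
                    System.out.println("added:   " + actual.get(i));
                a += found;                         // realigned; keep comparing
            } else {
                System.out.println("changed: " + expected.get(e) + " -> " + actual.get(a));
                e++; a++;
            }
        }
        while (e < expected.size()) System.out.println("missing: " + expected.get(e++));
        while (a < actual.size())   System.out.println("added:   " + actual.get(a++));
    }

    public static void main(String[] args) {
        compare(List.of("case 1: 5.00", "case 2: 450.00"),
                List.of("case 1: 5.00", "case 1b: 9.00", "case 2: 450.00"));
        // prints only: added:   case 1b: 9.00  (nothing after it is flagged)
    }
}
```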
TOOLS FOR DATA EXTRACTION, CAPTURE, AND GENERATION

Testing requires both test case input data and, in many cases, sample database file data. There are several types of tools that can help produce and manage this data.
Test driver input

One set of tools generates input data for test drivers. This can be done by specifying ranges and distributions of pseudo-random values or by capturing live input. Live input can come either from user input to interactive interfaces or from arguments captured on procedure calls as an application runs.
Test database contents

Another type of tool produces test database contents. These tools can often extract specified selections or samples from production databases and/or generate artificial data according to specified ranges and distributions.

There are two significant challenges with test databases that some automated tools address. The first problem arises in extracting test data from a set of inter-related production files. For example, if the data extraction process selects a tenth of the records in the Customer file, which records should be extracted from the Orders file? Merely selecting every tenth record in the Orders file may result in orphan Order records (Orders with no matching Customer record) in the test database. To address this problem, some tools recognize and maintain dependencies among the data in different files.

The second issue that arises is the need to manage the test database so tests can be repeated and/or run concurrently. In both cases, some automation tools provide alternative ways to reset the database to its initial state and/or make copies to isolate concurrent test runs.
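A sketch of this dependency-aware extraction in Java over SQL; the library, table, and column names (PRODLIB, TESTLIB, CUST_ID) and the MOD-based sampling are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Dependency-aware extraction: sample the parent file first, then select
// child rows BY their sampled parents, so the test library contains no
// orphan Order records.
public class TestDataExtractor {
    static void extractTenPercent(Connection conn) throws SQLException {
        try (Statement s = conn.createStatement()) {
            // Roughly every tenth customer (MOD sampling is illustrative)
            s.executeUpdate("INSERT INTO TESTLIB.CUSTOMER " +
                            "SELECT * FROM PRODLIB.CUSTOMER WHERE MOD(CUST_ID, 10) = 0");
            // Orders are chosen via the sampled customers, not independently,
            // which is what preserves referential integrity in the sample
            s.executeUpdate("INSERT INTO TESTLIB.ORDERS " +
                            "SELECT O.* FROM PRODLIB.ORDERS O WHERE O.CUST_ID IN " +
                            "(SELECT CUST_ID FROM TESTLIB.CUSTOMER)");
        }
    }
}
```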
Test Managers

For organizations with multiple projects and/or developers, setting up and conducting test suites and then keeping track of the results can be a complex management task. A tool that provides an interactive environment for these and related tasks can be especially helpful in getting long-term benefits out of your testing process.
GETTING STARTED

Testing covers many areas and can take considerable time. The payoff is better quality software and, over time, a more predictable software development process. Fortunately, you can introduce and improve testing processes incrementally.

The first step is to set your quality assurance objective. To get started, you might consider the following goal: We will use testing to reduce the defect rate in deployed software and to reduce the amount of time we spend removing defects. To achieve this goal, you must begin by recording defects and other software problems, both those found in production and those found by pre-production QA activities, such as testing. This step should be tied in with your change management process (which I discuss in more detail in the Software Development Survival Guide white paper).

The best place to introduce or improve testing depends somewhat on the nature of the application(s) and your current practices. If your development activity involves a substantial amount of changes to existing applications, and you're not already using automated regression tests, that's likely to be the best place to concentrate. Before you tackle the next set of application changes, create test database files and run a basic set of normal and exception test cases using the production version of the application. This way, you'll have reference test cases and results to use for regression testing after making changes and before putting modified code into production. This effort is a relatively painless way to significantly reduce one of the major sources of errors.

As your next step, consider requiring that a test case be created for each new application change that's planned, before the change is made. These tests can be added to the set of tests you create for your basic regression testing and will be ready when the application change has been implemented.

In many iSeries environments, where monolithic interactive and batch applications are common, it may be easier to expand the use of automated testing at the system or integration level using a 5250 display (or Web browser) test driver. Although it's preferable to do unit testing before integration and system testing, that's not so practicable with monolithic programs, so work on building out your suite of tests at the most accessible point. For new development, enabling unit testing is just one of many reasons to design your applications in a modular fashion. With new or existing modular applications, you can exploit automated unit testing along with some of the logging techniques described in this white paper.

Over time, you'll be able to build up a solid portfolio of test cases at all three levels: unit, integration, and system. The thoroughness of your testing will greatly increase while the effort (and cost) decreases. To go beyond the ideas presented in this white paper, you can peruse the resources listed in Where to Learn More.


WHERE TO LEARN MORE

Beck, Kent. Test Driven Development: By Example. Reading, MA: Addison-Wesley Publishing Company, 2002.

This book provides insight into the agile development practice of test-driven development. The examples focus on elementary, unit-level development and testing and leave unaddressed many of the more difficult questions regarding large-scale development. However, the book provides individual developers with fresh perspectives on coding and testing.

Conte, Paul. Software Development Survival Guide: Five Steps from Chaos to Control. A SoftLanding Systems white paper available at http://www.softlanding.com/swmanagement/index.htm

This white paper explains the benefits of effective software management practices and lays out five steps you can take to help your organization improve its practices.

Conte, Paul. Making the Case for Software Management. A SoftLanding Systems white paper available at http://www.softlanding.com/swmanagement/index.htm

This white paper lays out a down-to-earth roadmap for IT managers to build a solid case for improving their company's software management infrastructure so IT can more effectively help the company meet its business goals.

Glass, Robert L. Facts and Fallacies of Software Engineering. Reading, MA: Addison-Wesley Publishing Company, 2003.

Glass provides a valuable collection of provocative observations and research data to support or debunk various beliefs about software development and management, including testing. One of my favorite books.

Kaner, Cem, et al. Testing Computer Software, 2nd Edition. Wiley, 1999.

One of the classics in practical approaches to testing. Beware, however, that the 2nd edition leaves in place many woefully outdated examples from the original 1993 version.


Marick, Brian. The Craft of Software Testing. Prentice Hall, 1995.

A somewhat formal, but nevertheless informative book that focuses on white box testing.

SoftLanding Systems Web site http://www.softlanding.com/swmanagement/index.htm

The SoftLanding Systems Web site is a source of iSeries-specific software management information and provides links to useful articles and Web sites.

Whittaker, James A. What Is Software Testing? And Why Is It So Hard? IEEE Software, January/February 2000. Available on the Web at: www.computer.org/software/so2000/pdf/s1070.pdf

Excellent insights into difficult testing issues.


84 Elm Street, Peterborough, NH 03458 | 603/924-8818 | 800/545-9485 | fax: 603/924-6348 | www.softlanding.com | info@softlanding.com | © 2004 SoftLanding Systems, Inc.
