
What is Agile model advantages, disadvantages and when to use it?

by ISTQB Guide, in Testing throughout the testing life cycle

The Agile development model is a type of incremental model. Software is developed in incremental, rapid cycles. This results in small incremental releases, with each release building on previous functionality. Each release is thoroughly tested to ensure software quality is maintained. The model is used for time-critical applications. Extreme Programming (XP) is currently one of the best-known agile development life cycle models.

Diagram of Agile model:

Advantages of Agile model:


- Customer satisfaction by rapid, continuous delivery of useful software.
- People and interactions are emphasized rather than process and tools. Customers, developers and testers constantly interact with each other.
- Working software is delivered frequently (weeks rather than months).
- Face-to-face conversation is the best form of communication.
- Close, daily cooperation between business people and developers.
- Continuous attention to technical excellence and good design.
- Regular adaptation to changing circumstances.
- Even late changes in requirements are welcomed.

Disadvantages of Agile model:

- For some software deliverables, especially large ones, it is difficult to assess the effort required at the beginning of the software development life cycle.
- There is a lack of emphasis on necessary design and documentation.
- The project can easily get taken off track if the customer representative is not clear about the final outcome they want.
- Only senior programmers are capable of making the kinds of decisions required during development, so the model leaves little room for inexperienced programmers unless they are paired with experienced resources.

When to use Agile model:

When new changes need to be implemented. The freedom agile gives to change is very important. New changes can be implemented at very little cost because of the frequency of the increments that are produced; to implement a new feature, the developers need to lose only the work of a few days, or even hours, to roll back and implement it. Unlike the waterfall model, the agile model requires very limited planning to get started with the project. Agile assumes that end users' needs are constantly changing in a dynamic business and IT world. Changes can be discussed, and features can be added or removed based on feedback. This effectively gives the customer the finished system they want or need. System developers and stakeholders alike find that they also get more freedom of time and options than if the software were developed in a more rigid, sequential way. Having options gives them the ability to leave important decisions until more or better data, or even entire hosting programs, are available; this means the project can continue to move forward without fear of reaching a sudden standstill.

Testing methods
Static vs. dynamic testing
There are many approaches to software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can be omitted, and in practice often is. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code; it is applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment. Static testing involves verification, whereas dynamic testing involves validation. Together they help improve software quality.

The box approach


Software testing methods are traditionally divided into white- and black-box testing. These two approaches describe the point of view that a test engineer takes when designing test cases.

White-box testing
Main article: White-box testing
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) tests internal structures or workings of a program, as opposed to the functionality exposed to the end user. In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT). While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements. Techniques used in white-box testing include:

- API testing: testing of the application using public and private APIs
- Code coverage: creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
- Fault injection methods: intentionally introducing faults to gauge the efficacy of testing strategies
- Mutation testing methods
- Static testing methods

Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[21] Code coverage as a software metric can be reported as a percentage for:

- Function coverage, which reports on functions executed
- Statement coverage, which reports on the number of lines executed to complete the test

100% statement coverage ensures that every statement has been executed at least once, but it does not guarantee that every branch or code path (in terms of control flow) has been taken. It is helpful in checking functionality, but it is not sufficient, since the same code may process different inputs correctly or incorrectly.
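To illustrate the gap, here is a minimal Python sketch (the `discount` function and its single test are hypothetical illustrations, not from the original text): one test executes every statement, yet the false branch of the `if` is never taken, so a defect affecting non-members would slip through a suite reporting full statement coverage.

```python
import unittest

def discount(price, is_member):
    rate = 0.0
    if is_member:
        rate = 0.1
    return price * (1 - rate)

class DiscountTest(unittest.TestCase):
    def test_member(self):
        # This single test executes every statement in discount()
        # (both assignments, the if check, and the return):
        # statement coverage is 100%.
        self.assertEqual(discount(100, True), 90.0)
    # There is no test with is_member=False, so the fall-through
    # branch is never exercised; branch (decision) coverage would
    # flag this missing case.

if __name__ == "__main__":
    unittest.main()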

Black-box testing
Main article: Black-box testing
Black-box testing treats the software as a "black box", examining functionality without any knowledge of internal implementation. The tester is only aware of what the software is supposed to do, not how it does it.[22] Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing and specification-based testing.

Specification-based testing aims to test the functionality of software according to the applicable requirements.[23] This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though usually functional. Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[24]

One advantage of the black-box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight."[25] Because testers do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case, or leaves some parts of the program untested. This method of test can be applied to all levels of software testing: unit, integration, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing.
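As a sketch of two of the black-box techniques listed above, equivalence partitioning and boundary value analysis, consider a hypothetical validator that accepts ages 0 through 120 (the function and its range are invented for illustration):

```python
import unittest

def is_valid_age(age):
    # Hypothetical function under test: accepts ages 0..120 inclusive.
    return 0 <= age <= 120

class BlackBoxAgeTests(unittest.TestCase):
    def test_equivalence_partitions(self):
        # One representative value per partition:
        # below range, within range, above range.
        self.assertFalse(is_valid_age(-5))
        self.assertTrue(is_valid_age(35))
        self.assertFalse(is_valid_age(200))

    def test_boundary_values(self):
        # Boundary value analysis: test at and just outside each edge,
        # where off-by-one defects typically hide.
        self.assertFalse(is_valid_age(-1))
        self.assertTrue(is_valid_age(0))
        self.assertTrue(is_valid_age(120))
        self.assertFalse(is_valid_age(121))

if __name__ == "__main__":
    unittest.main()
```

Note that the tests reference only inputs and expected outputs; nothing about the implementation of `is_valid_age` is assumed.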
Visual testing

The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information he or she requires, and the information is expressed clearly.[26][27]

At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing therefore requires recording the entire test process, capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.

Visual testing provides a number of advantages. The quality of communication is increased dramatically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it, and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed. Visual testing is particularly well-suited for environments that deploy agile methods in their development of software, since agile methods require greater communication between testers and developers and collaboration within small teams.[citation needed]

Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement, while the important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of a test tool to visually record everything that occurs on a system becomes very important.[clarification needed][citation needed]

Visual testing is gaining recognition in customer acceptance and usability testing, because the test can be used by many individuals involved in the development process.[citation needed] For the customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture at the time of software failure for the developer. Further information: Graphical user interface testing

Grey-box testing
Main article: Gray box testing
Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests, while executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code.[28][not in citation given] Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, tests that require modifying a back-end data repository, such as a database or a log file, do qualify as grey-box, as the user would not normally be able to change the data repository in normal production operations.[citation needed] Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions, such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios based on limited information. This particularly applies to data type handling, exception handling, and so on.[29]

Testing levels
Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test. The main levels during the development process, as defined by the SWEBOK guide, are unit, integration, and system testing, which are distinguished by the test target without implying a specific process model.[30] Other test levels are classified by the testing objective.[30]

Unit testing
Main article: Unit testing
Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.[31]

These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to ensure that the building blocks of the software work independently of each other.

Unit testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replacing traditional QA focuses, it augments them. Unit testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process. Depending on the organization's expectations for software development, unit testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, code coverage analysis and other software verification practices.
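A minimal sketch of such developer-written unit tests at the class level, using Python's standard unittest module (the `Account` class is a hypothetical unit invented for illustration):

```python
import unittest

class Account:
    """Hypothetical unit under test: a minimal bank account."""
    def __init__(self, balance=0):
        if balance < 0:
            raise ValueError("balance cannot be negative")
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount

class AccountTest(unittest.TestCase):
    def test_constructor_default(self):
        # Minimal unit tests include the constructor.
        self.assertEqual(Account().balance, 0)

    def test_constructor_rejects_negative(self):
        # Corner case: invalid initial state must be rejected.
        with self.assertRaises(ValueError):
            Account(-1)

    def test_deposit(self):
        acct = Account(10)
        acct.deposit(5)
        self.assertEqual(acct.balance, 15)

if __name__ == "__main__":
    unittest.main()
```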

Integration testing
Main article: Integration testing Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be located more quickly and fixed. Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.[32]
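A sketch of one interface-level integration test, assuming a hypothetical `OrderService` and payment-gateway interface; a stub (here, `unittest.mock.Mock`) stands in for the component that is not yet integrated, so the interface contract itself can be verified:

```python
import unittest
from unittest.mock import Mock

class OrderService:
    """Hypothetical component whose interface to a gateway is under test."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        # The interface contract: pass the amount through to the
        # gateway and report its success flag.
        return self.gateway.charge(amount)["success"]

class OrderGatewayIntegrationTest(unittest.TestCase):
    def test_interface_contract(self):
        # Stub the not-yet-integrated gateway with a canned response.
        gateway = Mock()
        gateway.charge.return_value = {"success": True}
        service = OrderService(gateway)
        self.assertTrue(service.place_order(100))
        # Verify the call crossed the interface exactly as designed.
        gateway.charge.assert_called_once_with(100)

if __name__ == "__main__":
    unittest.main()
```

When the real gateway is integrated ("iteratively" rather than "big bang"), the same test can be rerun against it to locate interface issues quickly.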

System testing
Main article: System testing System testing tests a completely integrated system to verify that it meets its requirements.[33] In addition, the software testing should ensure that the program, as well as working as expected, does not also destroy or partially corrupt its operating environment or cause other processes within that environment to become inoperative (this includes not corrupting shared memory, not consuming or locking up excessive resources and leaving any parallel processes unharmed by its presence).[citation needed]

Acceptance testing
Main article: Acceptance testing
Finally, the system is delivered to the user for acceptance testing.

Testing Types
Installation testing
Main article: Installation testing
An installation test ensures that the system is installed correctly and works on the customer's actual hardware.

Compatibility testing

Main article: Compatibility testing
A common cause of software failure (real or perceived) is a lack of compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a web application, which must render in a web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library.

Smoke and sanity testing


Sanity testing determines whether it is reasonable to proceed with further testing. Smoke testing is used to determine whether there are serious problems with a piece of software, for example as a build verification test.

Regression testing
Main article: Regression testing Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, or old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. They can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.
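A common way to check that previously fixed faults have not re-emerged is to pin each fix with a test that stays in the suite permanently. A minimal sketch (the parser and the old fault are hypothetical):

```python
import unittest

def parse_csv_line(line):
    # Hypothetical function that once crashed on trailing newlines;
    # the fix strips them before splitting.
    return line.rstrip("\n").split(",")

class RegressionTests(unittest.TestCase):
    def test_trailing_newline_regression(self):
        # Regression test pinning an old, fixed fault: re-run after
        # every change so the bug cannot silently come back.
        self.assertEqual(parse_csv_line("a,b\n"), ["a", "b"])

if __name__ == "__main__":
    unittest.main()
```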

Acceptance testing
Main article: Acceptance testing
Acceptance testing can mean one of two things:
1. A smoke test is used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, often in their lab environment on their own hardware, is known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.[citation needed]

Alpha testing
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.[34]

Beta testing
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.[citation needed]

Functional vs non-functional testing


Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work." Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such as scalability or other performance, behavior under certain constraints, or security. Testing will determine the breaking point, the point at which extremes of scalability or performance lead to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users.

Destructive testing
Main article: Destructive testing Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing the robustness of input validation and error-management routines.[citation needed] Software fault injection, in the form of fuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from the software fault injection page; there are also numerous open-source and free software tools available that perform destructive testing. Further information: Exception handling and Recovery testing
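A crude sketch of such a fuzzing loop in Python (the `target` callable, the input alphabet, and the run count are arbitrary assumptions, not a real fuzzing framework):

```python
import random
import string

def fuzz(target, runs=1000):
    """Throw random strings at `target` (a hypothetical function under
    test) and collect inputs that crash it: a crude form of fuzzing."""
    failures = []
    for _ in range(runs):
        data = "".join(
            random.choices(string.printable, k=random.randint(0, 64))
        )
        try:
            target(data)
        except Exception as exc:
            # Any unhandled exception signals a robustness defect in
            # input validation or error management.
            failures.append((data, exc))
    return failures

# Example: fuzz(int) reports every string that int() cannot parse.
```

Real fuzzers additionally mutate known-valid inputs and minimize failing cases, but the robustness goal is the same.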

Software performance testing

Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.

- Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users. This is generally referred to as software scalability. The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing.
- Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size.
- Stress testing is a way to test reliability under unexpected or rare workloads.
- Stability testing (often referred to as load or endurance testing) checks whether the software can continuously function well in or above an acceptable period.

There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing are often used interchangeably. Real-time software systems have strict timing constraints; to test whether timing constraints are met, real-time testing is used.
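A rough sketch of measuring responsiveness under concurrent load (the user count, call count, and `request` callable are illustrative assumptions; real load tools add ramp-up, think time, and error accounting):

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def measure_under_load(request, users=50, calls_per_user=20):
    """Simulate concurrent users calling `request` (a hypothetical
    client function) and report response-time statistics."""
    timings = []  # list.append is safe to call from worker threads

    def user_session():
        for _ in range(calls_per_user):
            start = time.perf_counter()
            request()
            timings.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(user_session)
        # Leaving the with-block waits for all sessions to finish.

    timings.sort()
    return {
        "median_s": statistics.median(timings),
        "p95_s": timings[max(0, int(len(timings) * 0.95) - 1)],
    }
```

Tracking the 95th percentile rather than only the average is the usual way to expose instability that appears near the breaking point.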

Usability testing
Usability testing is needed to check if the user interface is easy to use and understand. It is concerned mainly with the use of the application.

Accessibility
Accessibility testing may include compliance with standards such as:

- Americans with Disabilities Act of 1990
- Section 508 Amendment to the Rehabilitation Act of 1973
- Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C)

Security testing
Security testing is essential for software that processes confidential data to prevent system intrusion by hackers.

Internationalization and localization


The general ability of software to be internationalized and localized can be automatically tested without actual translation, by using pseudolocalization. It will verify that the application still works, even after it has been translated into a new language or adapted for a new culture (such as different currencies or time zones).[35]
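A toy sketch of pseudolocalization (the substitution map and bracket markers are arbitrary choices, not a standard scheme): accented substitutions make hard-coded, untranslatable strings stand out, and the added markers expose layouts that cannot absorb longer translations.

```python
# Map plain vowels to accented look-alikes so pseudo-translated text
# stays readable while clearly differing from the source language.
ACCENTED = str.maketrans("aeiouAEIOU", "àéîöüÀÉÎÖÜ")

def pseudolocalize(message):
    """Return a pseudo-translated message, padded with markers that
    lengthen the string and flag truncation or clipped layouts."""
    return "[!! " + message.translate(ACCENTED) + " !!]"

print(pseudolocalize("Save file"))  # [!! Sàvé fîlé !!]
```

Any string that still appears unaccented in the pseudolocalized build was never routed through the translation layer.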

Actual translation to human languages must be tested, too. Possible localization failures include:

- Software is often localized by translating a list of strings out of context, and the translator may choose the wrong translation for an ambiguous source string.
- Technical terminology may become inconsistent if the project is translated by several people without proper coordination or if the translator is imprudent.
- Literal word-for-word translations may sound inappropriate, artificial or too technical in the target language.
- Untranslated messages in the original language may be left hard coded in the source code.
- Some messages may be created automatically at run time and the resulting string may be ungrammatical, functionally incorrect, misleading or confusing.
- Software may use a keyboard shortcut which has no function on the source language's keyboard layout, but is used for typing characters in the layout of the target language.
- Software may lack support for the character encoding of the target language.
- Fonts and font sizes which are appropriate in the source language may be inappropriate in the target language; for example, CJK characters may become unreadable if the font is too small.
- A string in the target language may be longer than the software can handle. This may make the string partly invisible to the user or cause the software to crash or malfunction.
- Software may lack proper support for reading or writing bi-directional text.
- Software may display images with text that was not localized.
- Localized operating systems may have differently named system configuration files and environment variables and different formats for date and currency.

Development testing
Main article: Development Testing
Development testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replacing traditional QA focuses, it augments them. Development testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process.

Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software verification practices.

A/B testing
Main article: A/B testing
A/B testing is a method of comparing two variants of a product, shown to different users, to determine which of the two performs better.

Testing process
Traditional CMMI or waterfall development model
A common practice of software testing is that testing is performed by an independent group of testers after the functionality is developed, before it is shipped to the customer.[36] This practice often results in the testing phase being used as a project buffer to compensate for project delays, thereby compromising the time devoted to testing.[37] Another practice is to start software testing at the same moment the project starts and it is a continuous process until the project finishes.[38] Further information: Capability Maturity Model Integration and Waterfall model

Agile or Extreme development model


In contrast, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first, by the software engineers (often with pair programming in the extreme programming methodology). Of course these tests fail initially, as they are expected to. Then as code is written it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process). The ultimate goal of this test process is to achieve continuous integration, where software updates can be published to the public frequently.[39][40] This methodology increases the testing effort done by development before reaching a formal testing team. In a more traditional model, most of the test execution occurs after the requirements have been defined and the coding process has been completed.
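A compressed sketch of that red-green cycle (the `slugify` feature is hypothetical; in practice the failing test is written and run before the implementation exists, usually in a separate file and commit):

```python
import unittest

# Step 1 (red): the test is written first. Run on its own, it fails,
# because slugify does not exist yet.
class SlugifyTest(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Step 2 (green): just enough code is written to make the test pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Step 3: new failure conditions and corner cases (punctuation,
# unicode, empty input) are added to the suite as they are found,
# and the code is refactored while the growing suite keeps it honest.
if __name__ == "__main__":
    unittest.main()
```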

Top-down and bottom-up


Bottom-up testing is an approach to integration testing where the lowest-level components (modules, procedures, and functions) are tested first, then integrated and used to facilitate the testing of higher-level components. After the integration testing of lower-level integrated modules, the next level of modules is formed and can be used for integration testing. The process is repeated until the components at the top of the hierarchy are tested. This approach is helpful only when all or most of the modules of the same development level are ready.[citation needed] This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.[citation needed]

Top-down testing is an approach to integration testing where the top integrated modules are tested first, and the branch of the module is tested step by step until the end of the related module. In both approaches, method stubs and drivers are used to stand in for missing components and are replaced as the levels are completed.
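A minimal sketch of a stub in a top-down setting (the invoice and tax modules are hypothetical): the high-level `invoice_total` is tested while a stub returns a canned value for the unwritten tax module; a driver is the mirror-image scaffold used bottom-up, calling a finished low-level module before its callers exist.

```python
# Top-down: the top-level invoice logic is real; the lower-level tax
# module is not written yet, so a stub stands in for it.
def tax_stub(amount):
    # Stub: returns a canned value so higher-level logic can be tested.
    return 0.0

def invoice_total(items, tax_fn=tax_stub):
    subtotal = sum(items)
    return subtotal + tax_fn(subtotal)

# With the stub in place, the top level is testable today; the stub is
# replaced by the real tax module once that level is completed.
assert invoice_total([10.0, 5.0]) == 15.0
```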

A sample testing cycle


Although variations exist between organizations, there is a typical cycle for testing.[41] The sample below is common among organizations employing the Waterfall development model. The same practices are commonly found in other development models, but might not be as clear or explicit.

- Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work to determine what aspects of a design are testable and with what parameters those tests will work.
- Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
- Test development: Test procedures, test scenarios, test cases, test datasets, and test scripts to use in testing the software.
- Test execution: Testers execute the software based on the plans and test documents, then report any errors found to the development team.
- Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
- Test result analysis (or defect analysis): Done by the development team, usually along with the client, in order to decide what defects should be assigned, fixed, rejected (i.e. found software working properly) or deferred to be dealt with later.
- Defect retesting (also known as resolution testing): Once a defect has been dealt with by the development team, it is retested by the testing team.
- Regression testing: It is common to have a small test program built of a subset of tests, for each integration of new, modified, or fixed software, in order to ensure that the latest delivery has not ruined anything, and that the software product as a whole is still working correctly.
- Test closure: Once the test meets the exit criteria, activities such as capturing the key outputs, lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.

Automated testing
Main article: Test automation
Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks in which to write tests, and continuous integration software will run tests automatically every time code is checked into a version control system. While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed suite of testing scripts in order to be truly useful.

Testing tools
Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools include features such as:

- Program monitors, permitting full or partial monitoring of program code, including:
  - Instruction set simulator, permitting complete instruction-level monitoring and trace facilities
  - Program animation, permitting step-by-step execution and conditional breakpoints at source level or in machine code
  - Code coverage reports
- Formatted dump or symbolic debugging, tools allowing inspection of program variables on error or at chosen points
- Automated functional GUI testing tools, used to repeat system-level tests through the GUI
- Benchmarks, allowing run-time performance comparisons to be made
- Performance analysis (or profiling) tools that can help to highlight hot spots and resource usage

Some of these features may be incorporated into an Integrated Development Environment (IDE).

A regression testing technique is to have a standard set of tests, which cover existing functionality that results in persistent tabular data, and to compare pre-change data to post-change data, where there should not be differences, using a tool like diffkit. Differences detected indicate unexpected functionality changes, or "regression".
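A stand-in sketch of that comparison (the CSV layout, key field, and file paths are assumptions; a dedicated tool such as diffkit would replace this in practice):

```python
import csv

def regression_diff(before_path, after_path, key_field):
    """Compare two CSV snapshots of persistent tabular data, keyed by
    `key_field`; any difference is a candidate regression."""
    def load(path):
        with open(path, newline="") as f:
            return {row[key_field]: row for row in csv.DictReader(f)}

    before, after = load(before_path), load(after_path)
    diffs = []
    # Union of keys catches changed, added, and deleted rows alike.
    for key in before.keys() | after.keys():
        if before.get(key) != after.get(key):
            diffs.append((key, before.get(key), after.get(key)))
    return diffs

# Usage (hypothetical snapshot files):
# print(regression_diff("pre_change.csv", "post_change.csv", "id"))
```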

Measurement in software testing


Main article: Software quality Usually, quality is constrained to such topics as correctness, completeness, security,[citation needed] but can also include more technical requirements as described under the ISO standard ISO/IEC 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability. There are a number of frequently used software metrics, or measures, which are used to assist in determining the state of the software or the adequacy of the testing.

Testing artifacts
The software testing process can produce several artifacts.

Test plan
A test specification is called a test plan. The developers are well aware of what test plans will be executed, and this information is made available to management and the developers. The idea is to make them more cautious when developing their code or making additional changes. Some companies have a higher-level document called a test strategy.

Traceability matrix
A traceability matrix is a table that correlates requirements or design documents to test documents. It is used to change tests when related source documents are changed, and to select test cases for execution when planning for regression tests by considering requirement coverage.

Test case
A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result.[42] This can be as pragmatic as 'for condition x your derived result is y', whereas other test cases describe the input scenario and expected results in more detail. A test case can occasionally be a series of steps (but often steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or expected outcome. The optional fields are a test case ID, test step or order of execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate those results. These past results would usually be stored in a separate table.

Test script
A test script is a procedure, or programming code, that replicates user actions. Initially, the term was derived from the product of work created by automated regression test tools. A test case is the baseline from which test scripts are created, using a tool or a program.

Test suite
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases, and a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests.

Test fixture or test data
In most cases, multiple sets of values or data are used to test the same functionality of a particular feature. All the test values and changeable environmental components are collected in separate files and stored as test data. It is also useful to provide this data to the client along with the product or project.

Test harness
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.
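As an illustration of the test case fields just described, here is one way such a record might be represented in code; the schema and field names are invented for illustration, not a standard format:

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseRecord:
    # Fields mirror the description above: identifier, requirement
    # references, preconditions, steps, expected and actual results,
    # plus an automation flag.
    identifier: str
    requirement_ref: str
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    input_data: str = ""
    expected_result: str = ""
    actual_result: str = ""
    automated: bool = False

tc = TestCaseRecord(
    identifier="TC-042",
    requirement_ref="REQ-7.3",
    preconditions=["user is logged in"],
    steps=["open settings", "change password", "save"],
    expected_result="password updated, confirmation shown",
)
```

Such records map directly onto rows in the spreadsheet or database repositories mentioned above.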

Certifications
Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. No certification now offered actually requires the applicant to show their ability to test software. No certification is based on a widely accepted body of knowledge. This has led some to declare that the testing field is not ready for certification.[43] Certification itself cannot measure an individual's productivity, their skill, or practical knowledge, and cannot guarantee their competence or professionalism as a tester.[44]

Software testing certification types:
- Exam-based: formalized exams, which need to be passed; can also be learned by self-study [e.g., for ISTQB or QAI][45]
- Education-based: instructor-led sessions, where each course has to be passed [e.g., International Institute for Software Testing (IIST)]

Testing certifications:
- Certified Associate in Software Testing (CAST), offered by the QAI[46]
- CATe, offered by the International Institute for Software Testing[47]
- Certified Manager in Software Testing (CMST), offered by the QAI[46]
- Certified Test Manager (CTM), offered by the International Institute for Software Testing[47]
- Certified Software Tester (CSTE), offered by the Quality Assurance Institute (QAI)[46]
- Certified Software Test Professional (CSTP), offered by the International Institute for Software Testing[47]
- CSTP (TM) (Australian version), offered by K. J. Ross & Associates[48]
- ISEB, offered by the Information Systems Examinations Board
- ISTQB Certified Tester, Foundation Level (CTFL), offered by the International Software Testing Qualifications Board[49][50]
- ISTQB Certified Tester, Advanced Level (CTAL), offered by the International Software Testing Qualifications Board[49][50]
- TMPF TMap Next Foundation, offered by the Examination Institute for Information Science[51]
- TMPA TMap Next Advanced, offered by the Examination Institute for Information Science[51]

Quality assurance certifications:
- CMSQ, offered by the Quality Assurance Institute (QAI)[46]
- CSQA, offered by the Quality Assurance Institute (QAI)[46]
- CSQE, offered by the American Society for Quality (ASQ)[52]
- CQIA, offered by the American Society for Quality (ASQ)[52]

Controversy
Some of the major software testing controversies include:

- What constitutes responsible software testing? Members of the "context-driven" school of testing[53] believe that there are no "best practices" of testing, but rather that testing is a set of skills that allow the tester to select or invent testing practices to suit each unique situation.[54]
- Agile vs. traditional: Should testers learn to work under conditions of uncertainty and constant change, or should they aim at process "maturity"? The agile testing movement has received growing popularity since 2006, mainly in commercial circles,[55][56] whereas government and military[57] software providers use this methodology but also the traditional test-last models (e.g. in the Waterfall model).[citation needed]
- Exploratory vs. scripted:[58] Should tests be designed at the same time as they are executed, or should they be designed beforehand?
- Manual vs. automated: Some writers believe that test automation is so expensive relative to its value that it should be used sparingly.[59] In particular, test-driven development states that developers should write unit tests, such as those of XUnit, before coding the functionality. The tests then can be considered as a way to capture and implement the requirements.
- Software design vs. software implementation:[60] Should testing be carried out only at the end or throughout the whole process?
- Who watches the watchmen? The idea is that any form of observation is also an interaction: the act of testing can itself affect that which is being tested.[61]

Related processes
Software verification and validation
Main article: Verification and validation (software) Software testing is used in association with verification and validation:[62]

- Verification: Have we built the software right? (i.e., does it implement the requirements?)
- Validation: Have we built the right software? (i.e., do the requirements satisfy the customer?)

The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms defined incorrectly. According to the IEEE Standard Glossary of Software Engineering Terminology:

- Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
- Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

According to the ISO 9000 standard:

- Verification is confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
- Validation is confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

Software quality assurance (SQA)


Software testing is a part of the software quality assurance (SQA) process.[4] In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called "defect rate". What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies.[citation needed]

Software testing is a task intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place.

What is Agile Testing? Advantages of Agile Methodology.


Thursday, March 29, 2012, by Prashant Vadher

What is Agile Testing?

Agile, as the name implies, refers to doing something quickly. Agile testing is used whenever customer requirements change dynamically; it aims to validate the client requirements as soon as possible and make the product customer friendly. As soon as a build is out, testing is expected to start, and any bugs found are reported quickly. As a tester, you need to provide your thoughts on the client requirements rather than just being the audience at the other end. Emphasis is laid on the quality of the deliverable in spite of the short timeframe, which further helps reduce the cost of development; feedback is implemented in the code, which avoids defects coming from the end user. If there is no SRS or BRS but there are test cases, do not execute them blindly; a test case should contain detailed steps for what the application is supposed to do. To gain more knowledge of the application, look at:
1. The functionality of the application.
2. The backend, i.e., the database.

Introduction to Agile Testing

While the application under test is still evolving, depending on customer needs, the mindset of the end user and current market conditions, it is highly impractical to use the usual standard SDLC models like Waterfall, the V-model, etc. Such models are most suitable for applications that are stable and non-volatile. The concept of time-to-market is the key word in today's IT business, and it compels software vendors to come up with new strategies to save time and resources, cut down the costs involved and, at the same time, deliver a reliable product that meets the user requirements. In this case, a reasonably good amount of end-to-end testing is carried out, and the product may be acceptable with known issues/defects at the end of an intermediate release, provided these defects are harmless to the application's usability. To adopt such a process in a systematic way, we have a concept called the Agile methodology. This methodology continuously strives to overcome the issues of dynamically changing requirements while still trying to maintain a well-defined process. The process is as follows:

1. The customer prepares the business requirements, and the business analyst or the engineering team reviews them. Ideally, the quality assurance/testing team is also involved in reviewing these requirements in order to be able to plan further stages accordingly.
2. During the design and implementation stages, the engineering team writes user stories and the analysis of issues at various stages. The customer reviews these on a regular basis and updates the requirement specifications accordingly. The testing team follows up on a regular basis at every stage until consolidated documentation is prepared. This ensures that the customer, the engineering team and the testing team are always on the same page, and thus ensures complete test coverage.
3. While the engineering team starts the implementation, the testing team starts on test planning, test strategies and test case preparation. These are properly documented and handed over to the customer and the engineering team for review, to ensure complete test coverage and avoid unnecessary or redundant test cases.
4. As and when the developer implements the code, the testing team checks whether the application can be built with this code for quick testing. This identifies defects at an early stage, so that the developer can fix them in the next round on a priority basis and continue with further development. This iteration continues until the end of the code implementation. Once the testing cycle starts, the test team can focus more on major test items such as integration, usability and system testing.

Process followed at various stages in the product life cycle:

Every intermediate release of the product is divided into two short cycles, usually of about 40 days each. Each cycle is executed in the following stages, with the roles and responsibilities of every individual and the team clearly defined for each stage.

- Design specifications: The testing team's efforts focus on performing any tool or process improvements and on reviewing, understanding, and contributing to the nascent specifications.
- Implementation: While the engineering/development team is implementing the code, testers develop a complete testing plan and test sets (sets of test cases) for each of the features included in the cycle. Engineering features must be included; they will likely require some level of collaboration with the engineering feature developer. All test sets should be ready to execute by the end of the implementation period of the respective cycle. After test set preparation, estimate and prioritize the test set execution time based on the complexity and expected execution time of each test suite. While test execution time estimation is notoriously difficult, this number provides the customer with a starting point for benchmarking.
- Testing/QA: Test set execution, raising defects and following up with the engineering team; end-to-end validation of the defects. Simultaneously, focus on improving the quality of test cases, watching out for and adding new cases as testing proceeds, testing the software end-to-end to discover regressions and subtle systemic issues, and learning to use the available time to uncover the largest number of the most important bugs. Any deviation from the estimated time should be communicated well in advance, so that the schedule can be reworked depending on the priority of the pending tasks. If certain test cases are blocked by unknown errors, they are deferred until the beginning of the next Testing/QA cycle.
- Before acceptance: Follow up on ad-hoc requests and changes in requirements on a regular basis, besides trying to complete the defined tasks.

The role of testing within agile projects:
1. Testing is the headlights of the project: where are you now? Where are you headed?
2. Testing provides information to the team, allowing the team to make informed decisions.
3. A bug is anything that could bug a user; testers don't make the final call.
4. Testing does not assure quality; the team does (or doesn't).
5. Testing is not a game of "gotcha": find ways to set goals, rather than focusing on mistakes.

The key challenges for a tester on an agile project are:
1. There are no traditional-style business requirements or functional specification documents. There are small documents (story cards developed from 4x4 inch cards) which each detail only one feature. Any additional details about the feature are captured via collaborative meetings and discussions.
2. You will be testing as early as practical and continuously throughout the lifecycle, so expect that the code won't be complete and is probably still being written.
3. Your acceptance test cases are part of the requirements analysis process, as you are developing them before the software is developed.
4. The development team has a responsibility to create automated unit tests which can be run against the code every time a build is performed.
5. With multiple code deliveries during the iteration, your regression testing requirements have significantly increased, and without test automation support your ability to maintain a consistent level of regression coverage will significantly decrease.

The role of a tester in an Agile project requires a wider variety of skills:
1. Domain knowledge about the system under test.
2. The ability to understand the technology being used.
3. A level of technical competency to be able to interact effectively with the development team.

Advantages offered by Agile methodology:

The very first advantage is the saving of time and money. Less documentation is required; documents help a great deal in verifying and validating the requirements, but considering the time frame of the project, this approach focuses more on the application than on documenting things. Since it is iterative in form, it gathers regular feedback from the end user so that it can be implemented as soon as possible. And because all phases of the SDLC need to be completed very quickly, each individual working on the project has transparency into the status of each phase.

Another advantage Agile offers over other available approaches is that if a change request or enhancement comes in during any phase, it can be implemented without budget constraints, though some adjustment is needed in the already-allotted time frame, which is not a difficult task for projects following Agile tactics.

Daily meetings and discussions for a project following the Agile approach help to determine issues well in advance and work on them accordingly. Quick coding and testing make management aware of gaps in either the requirements or the technology used, so that workarounds can be found.

Hence, with quicker development and testing and constant feedback from the user, the Agile methodology becomes the appropriate approach for projects to be delivered in a short span of time.

Principles behind the Agile Manifesto:
- Our highest priority is to satisfy the customer through early and continuous delivery of high-quality software.
- Welcome changing requirements, even late in testing. Agile processes harness change for the customer's competitive advantage.
- Deliver high-quality software frequently, from a couple of weeks to a couple of months, with a preference for the shorter timescale.
- Business people, developers, and testers must work together daily throughout the project.
- Build test projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.

- The most efficient and effective method of conveying information to and within a test team is face-to-face conversation.
- Working, high-quality software is the primary measure of progress.
- Agile processes promote sustainable development and testing. The sponsors, developers, testers, and users should be able to maintain a constant pace indefinitely.
- Continuous attention to technical excellence and good test design enhances agility.
- Simplicity, the art of maximizing the amount of work not done, is essential.
- The best architectures, requirements, and designs emerge from self-organizing teams.
- At regular intervals, the test team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Agile Testing: Example

Agile methodology with Extreme Programming and test-driven development was used to develop the Smart Client Offline Application Block. The following are highlights of the approach taken on the project:
1. The test team and the development team were not formally separated. The developers worked in pairs, with one person developing the test cases and the other writing the functionality for the module.
2. There was much more interaction among team members than there is when following a traditional development model. In addition to using the informal chat-and-develop mode, the team held a 30-minute daily standup meeting, which gave team members a forum for asking questions and resolving problems, and weekly iterative review meetings to track the progress of each iterative cycle.
3. Project development began without any formal design document. The specifications were in the form of user stories that were agreed upon by the team members. In the weekly iterative review meetings, team members planned how to complete these stories and how many iterations to assign to each story.
4. Each story was broken down into several tasks. All of the stories and corresponding tasks were written down on small cards that served as the only source of design documentation for the application block.
5. While developing each task or story, NUnit test suites were written to drive the development of features.

6. No formal test plans were developed. The testing was primarily based on the tasks or stories for feature development. The development team got immediate feedback from the test team. Having the test team create the quick start samples gave the development team a perspective on the real-life usage of the application block.
7. After a task or story passed all of its NUnit test cases and was complete, quick start samples were developed to showcase the functionality. The quick start samples demonstrated the usage of the application block and were useful for further testing the code in the traditional way (functional and integration tests). Any discrepancies found at this stage were reported immediately and fixed on a case-by-case basis. The modified code was tested again with the automated test suites and then handed over to be tested again with the quick start samples.

When is the Agile testing approach followed?

When requirements change frequently on the client side and it is difficult to accommodate them each time in the documentation as well as the test assets, it is advisable to follow the Agile testing approach, which is often used when there is dynamic change in the requirements from the client. Though Agile is useful with any programming language or technology, it is especially advisable for Web 2.0 projects or projects that are new to the media.

Comparison of Agile with other methods:

Here is a look at the Agile advantages over methods now considered traditional:
- Development of an application in Agile is incremental rather than sequential, as in methods such as the waterfall model, which has phase-wise development. This enables quick testing and results in small incremental releases, each tested in depth to meet the requirements. If a further requirement is introduced, accommodating it is not difficult in Agile, whereas in waterfall it would have to be traced back to the beginning of the phase to make the appropriate changes.
- Rather than tools and technologies talking with each other, it is individuals who communicate with each other more often in the Agile testing approach. In waterfall, Extreme Programming or the V-model, it is the tools, processes and technologies that often decide the relevant outcome, so human communication is weaker in those methods compared to the Agile method.
- Working with, or rather testing, the application is always the top priority for projects following the Agile methodology, whereas other methods give documentation an edge over other critical tasks.
- Agile communicates directly with the user of the application rather than through an intermediary party such as the client. This enables the team to quickly get feedback from the customer and implement changes in the application appropriately.
- When an enhancement or change request comes in, it is directly taken into account for implementation in the application, rather than requiring further planning and revisiting of budget and time constraints, as used to happen in other methods.

Compared to other methods, the Agile methodology offers advantages that help save time and money in the testing phase, which are important elements of any project.
