
MANUAL TESTING FAQS

1. What is Acceptance Testing?
Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.

2. What is Accessibility Testing?
Verifying that a product is accessible to people having disabilities (deaf, blind, mentally disabled, etc.).

3. What is Ad Hoc Testing?
A testing phase where the tester tries to break the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

4. What is Agile Testing?
Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

5. What is an Application Binary Interface (ABI)?
A specification defining requirements for portability of applications in binary form across different system platforms and environments.

6. What is an Application Programming Interface (API)?
A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

7. What is Automated Software Quality (ASQ)?
The use of software tools, such as automated testing tools, to improve software quality.

8. What is Automated Testing?
Testing that employs software tools to execute tests without manual intervention; it can be applied to GUI, performance, API and other kinds of testing. It covers the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

9. What is a Basic Block?
A sequence of one or more consecutive, executable statements containing no branches.

10. What is Basis Path Testing?
A white box test case design technique that uses the algorithmic flow of the program to design tests.

11. What is a Basis Set?
The set of tests derived using basis path testing.

12. What is a Baseline?
The point at which some deliverable produced during the software engineering process is put under formal change control.

13. What will you do during your first day on the job? What would you like to be doing five years from now?

14. What is Beta Testing?
Testing of a pre-release version of a software product, conducted by customers.

15. What is Binary Portability Testing?
Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

16. What is Black Box Testing?
Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

17. What is Bottom Up Testing?
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

18. What is Boundary Testing?
Tests that focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

19. What is a Bug?
A fault in a program that causes the program to perform in an unintended or unanticipated manner.

20. What is a Defect?
Nonconformance of the software to its requirements; for example, the software misses a feature or function that the requirements call for.

21. What is Boundary Value Analysis?
BVA is similar to Equivalence Partitioning, but focuses on "corner cases": values at and just outside the range defined by the specification. This means that if a function expects all values in the range of -100 to +1000, test inputs would include -101 and +1001. (A worked sketch follows this group of definitions.)

22. What is Branch Testing?
Testing in which all branches in the program source code are tested at least once.

23. What is Breadth Testing?
A test suite that exercises the full functionality of a product but does not test features in detail.

24. What is CAST?
Computer Aided Software Testing.

25. What is a Capture/Replay Tool?
A test tool that records test input as it is sent to the software under test. The stored input can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

26. What is the CMM?
The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.

27. What is a Cause Effect Graph?
A graphical representation of inputs and their associated output effects, which can be used to design test cases.

28. What is Code Complete?
The phase of development where functionality is implemented in its entirety and bug fixes are all that are left. All functions found in the Functional Specification have been implemented.

29. What is Code Coverage?
An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

30. What is Code Inspection?
A formal testing technique where the programmer reviews source code with a group who ask questions, analyzing the program logic, analyzing the code against a checklist of historically common programming errors, and analyzing its compliance with coding standards.

31. What is a Code Walkthrough?
A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

32. What is Coding?
The generation of source code.

33. What is Compatibility Testing?
Testing whether software is compatible with the other elements of the system in which it should operate, e.g. browsers, operating systems, or hardware.
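To make definition 21 concrete, here is a minimal sketch of boundary value analysis for the -100 to 1000 range mentioned above. The function under test and the use of pytest are illustrative assumptions, not part of the original FAQ:

    import pytest

    def accepts_value(n):
        # Hypothetical function under test: valid range is -100 to 1000 inclusive.
        return -100 <= n <= 1000

    # Boundary value analysis picks the edges and the values just outside them.
    @pytest.mark.parametrize("value,expected", [
        (-101, False),  # just below the lower boundary
        (-100, True),   # the lower boundary itself
        (1000, True),   # the upper boundary itself
        (1001, False),  # just above the upper boundary
    ])
    def test_boundaries(value, expected):
        assert accepts_value(value) == expected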

34. What is a Component?
A minimal software item for which a separate specification is available.

35. What is Component Testing?
Testing of individual software components (Unit Testing).

36. What is Concurrency Testing?
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. It identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

37. What is Conformance Testing?
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

38. What is Context Driven Testing?
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

39. What is Conversion Testing?
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

40. What is Cyclomatic Complexity?
A measure of the logical complexity of an algorithm, used in white-box testing.

41. What is a Data Dictionary?
A database that contains definitions of all data items defined during analysis.

42. What is a Data Flow Diagram?
A modeling notation that represents a functional decomposition of a system.

43. What is Data Driven Testing?
Testing in which the actions of a test case are parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing. (See the sketch after this group of definitions.)

44. What is Debugging?
The process of finding and removing the causes of software failures.

45. What is a Defect?
Nonconformance to requirements or to the functional/program specification.

46. What is Dependency Testing?
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

47. What is Depth Testing?
A test that exercises a feature of a product in full detail.

48. What is Dynamic Testing?
Testing software through executing it. See also Static Testing.

49. What is an Emulator?
A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

50. What is Endurance Testing?
Checks for memory leaks or other problems that may occur with prolonged execution.

51. What is End-to-End Testing?
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

52. What is an Equivalence Class?
A portion of a component's input or output domains for which the component's behaviour is assumed, from the component's specification, to be the same.

53. What is Equivalence Partitioning?
A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
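A minimal sketch of the data-driven testing described in definition 43, with the test logic written once and the input/expected pairs kept in an external CSV file. The helper name and file layout are illustrative assumptions:

    import csv

    def run_data_driven_tests(function_under_test, data_file):
        # Each CSV row holds one test case: input value, expected output.
        failures = []
        with open(data_file, newline="") as f:
            for row in csv.reader(f):
                test_input, expected = row[0], row[1]
                actual = str(function_under_test(test_input))
                if actual != expected:
                    failures.append((test_input, expected, actual))
        return failures  # an empty list means every data row passed

    # Usage: failures = run_data_driven_tests(str.upper, "test_data.csv")

Keeping the data outside the code means new cases can be added without touching the test logic.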

54. What is Exhaustive Testing?
Testing that covers all combinations of input values and preconditions for an element of the software under test.

55. What is Functional Decomposition?
A technique used during planning, analysis and design; it creates a functional hierarchy for the software.

56. What is a Functional Specification?
A document that describes in detail the characteristics of the product with regard to its intended features.

57. What is Functional Testing?
Testing the features and operational behavior of a product to ensure they correspond to its specifications. Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions. Also known as Black Box Testing.

58. What is Glass Box Testing?
A synonym for White Box Testing.

59. What is Gorilla Testing?
Testing one particular module or piece of functionality heavily.

60. What is Gray Box Testing?
A combination of the Black Box and White Box testing methodologies: testing a piece of software against its specification, but using some knowledge of its internal workings.

61. What are High Order Tests?
Black-box tests conducted once the software has been integrated.

62. What is an Independent Test Group (ITG)?
A group of people whose primary responsibility is software testing.

63. What is an Inspection?
A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).

64. What is Integration Testing?
Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

65. What is Installation Testing?
Confirms that the application under test installs correctly on the supported platforms and configurations and is operational once installed.

66. What is Load Testing?
See Performance Testing.

67. What is Localization Testing?
This term refers to adapting software for a specific locality.

68. What is Loop Testing?
A white box testing technique that exercises program loops.

69. What is a Metric?
A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per line of code.

70. What is Monkey Testing?
Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.

71. What is Negative Testing?
Testing aimed at showing that software does not work; also known as "test to fail". See also Positive Testing. (A sketch follows this group of definitions.)
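A minimal sketch of the negative testing described in definition 71, assuming a pytest environment; the withdraw function is an illustrative stand-in for the software under test:

    import pytest

    def withdraw(balance, amount):
        # Illustrative function under test: rejects invalid withdrawals.
        if amount <= 0 or amount > balance:
            raise ValueError("invalid withdrawal amount")
        return balance - amount

    def test_withdraw_more_than_balance_fails():
        # Negative test: deliberately invalid input must be rejected,
        # not silently processed.
        with pytest.raises(ValueError):
            withdraw(100, 200)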

72. What is Path Testing?
Testing in which all paths in the program source code are tested at least once.

73. What is Performance Testing?
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

74. What is Positive Testing?
Testing aimed at showing that software works; also known as "test to pass". See also Negative Testing.

75. What is Quality Assurance?
All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

76. What is a Quality Audit?
A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements, and whether these arrangements are implemented effectively and are suitable to achieve objectives.

77. What is a Quality Circle?
A group of individuals with related interests who meet at regular intervals to consider problems or other matters related to the quality of outputs of a process, to the correction of problems, or to the improvement of quality.

78. What is Quality Control?
The operational techniques and activities used to fulfill and verify requirements of quality.

79. What is Quality Management?
The aspect of the overall management function that determines and implements the quality policy.

80. What is a Quality Policy?
The overall intentions and direction of an organization as regards quality, as formally expressed by top management.

81. What is a Quality System?
The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

82. What is a Race Condition?
A cause of concurrency problems: multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access. (A sketch follows this group of definitions.)

83. What is Ramp Testing?
Continuously raising an input signal until the system breaks down.

84. What is Recovery Testing?
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions.

85. What is Regression Testing?
Retesting a previously tested program following modification, to ensure that faults have not been introduced or uncovered as a result of the changes made.

86. What is a Release Candidate?
A pre-release version that contains the desired functionality of the final version but still needs to be tested for bugs (which ideally should be removed before the final version is released).

87. What is Sanity Testing?
A brief test of the major functional elements of a piece of software to determine whether it is basically operational. See also Smoke Testing.
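A minimal sketch reproducing the race condition of definition 82: two threads perform unsynchronized read-modify-write updates on a shared counter, so increments can be lost. The scenario is hypothetical, and the loss may or may not appear on any given run:

    import threading

    counter = 0

    def increment_many():
        global counter
        for _ in range(100_000):
            counter += 1  # unsynchronized read-modify-write on shared state

    threads = [threading.Thread(target=increment_many) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 200000; an unlucky interleaving may print less, because
    # no lock moderates the simultaneous access.
    print(counter)

Concurrency testing (definition 36) aims to expose exactly this kind of timing-dependent defect.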

88. What is Scalability Testing?
Performance testing focused on ensuring the application under test gracefully handles increases in workload.

89. What is Security Testing?
Testing that confirms the program can restrict access to authorized personnel, and that authorized personnel can access the functions available to their security level.

90. What is Smoke Testing?
A quick-and-dirty test that the major functions of a piece of software work. The name originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire. (A sketch follows this group of definitions.)

91. What is Soak Testing?
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

92. What is a Software Requirements Specification?
A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for the software.

93. What is Software Testing?
A set of activities conducted with the intent of finding errors in software.

94. What is Static Analysis?
Analysis of a program carried out without executing the program.

95. What is a Static Analyzer?
A tool that carries out static analysis.

96. What is Static Testing?
Analysis of a program carried out without executing the program.

97. What is Storage Testing?
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This concerns external storage as opposed to internal storage.

98. What is Stress Testing?
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

99. What is Structural Testing?
Testing based on an analysis of the internal workings and structure of a piece of software. See also White Box Testing.

100. What is System Testing?
Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

101. What is Testability?
The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

102. What is Testing?
The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (ref. IEEE Std 829). The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

103. What is Test Automation?
It is the same as Automated Testing.

104. What is a Test Bed?
An execution environment configured for testing. It may consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test beds to be used.
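A minimal sketch of a smoke suite in the spirit of definition 90: a handful of quick checks that the major functions work at all, run before detailed testing begins. The app interface is a hypothetical assumption:

    def smoke_test(app):
        # Each check exercises one major function; any failure stops the
        # build from entering detailed testing. `app` is hypothetical.
        checks = [
            ("starts up", lambda: app.start()),
            ("opens main screen", lambda: app.open("main")),
            ("saves a record", lambda: app.save({"id": 1})),
            ("shuts down", lambda: app.stop()),
        ]
        for name, check in checks:
            try:
                check()
            except Exception as exc:
                return "SMOKE FAILED at '{}': {}".format(name, exc)
        return "SMOKE PASSED"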

105. What is a Test Case?
Test Case is a commonly used term for a specific test, usually the smallest unit of testing. A Test Case consists of information such as the requirements to be tested, test steps, verification steps, prerequisites, outputs, test environment, etc. More formally: a set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

106. What is Test Driven Development?
A testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. a number of lines of test code roughly equal to the size of the production code. (A sketch follows this group of definitions.)

107. What is a Test Driver?
A program or test tool used to execute tests. Also known as a Test Harness.

108. What is a Test Environment?
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.

109. What is Test First Design?
Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

110. What is a Test Harness?
A program or test tool used to execute tests. Also known as a Test Driver.

111. What is a Test Plan?
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

112. What is a Test Procedure?
A document providing detailed instructions for the execution of one or more test cases.

113. What is a Test Script?
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

114. What is a Test Specification?
A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results and execution conditions for the associated tests.

115. What is a Test Suite?
A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

116. What are Test Tools?
Computer programs used in the testing of a system, a component of the system, or its documentation.

117. What is Thread Testing?
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

118. What is Top Down Testing?
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.

119. What is Total Quality Management?
A company commitment to develop a process that achieves high quality products and customer satisfaction.
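A minimal sketch of the test-first discipline from definitions 106 and 109: the unit test is written before the production code exists, fails, and then just enough code is written to make it pass. The leap-year example is illustrative:

    import unittest

    # Step 1: write the test first. It fails until leap_year() is written.
    class TestLeapYear(unittest.TestCase):
        def test_century_years(self):
            self.assertTrue(leap_year(2000))   # divisible by 400
            self.assertFalse(leap_year(1900))  # divisible by 100, not 400

        def test_ordinary_years(self):
            self.assertTrue(leap_year(2024))
            self.assertFalse(leap_year(2023))

    # Step 2: write just enough production code to make the tests pass.
    def leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    if __name__ == "__main__":
        unittest.main()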

120. What is a Traceability Matrix?
A document showing the relationship between Test Requirements and Test Cases.

121. What is Usability Testing?
Testing the ease with which users can learn and use a product.

122. What is a Use Case?
The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

123. What is Unit Testing?
Testing of individual software components.

124. How do companies expect defect reporting to be communicated by the tester to the development team? Can an Excel sheet template be used for defect reporting? If so, what are the common fields to be included? Who assigns the priority and severity of a defect?
To report bugs in Excel, use columns such as: S.No., Module, Screen/Section, Issue detail, Severity, Priority, Issue status; then set filters on the column attributes. Most companies, however, report bugs through a defect management system (for example one built on SharePoint). When a project comes in for testing, module-wise details of the project are entered into the defect management system. A defect record contains the following fields (see the sketch after this answer):
1. Date
2. Issue brief
3. Issue description (used by the developer to reproduce the issue)
4. Issue status (active, resolved, on hold, suspended, not able to reproduce)
5. Assigned to (names of the members allocated to the project)
6. Priority (high, medium, low)
7. Severity (major, medium, low)
Typically the tester assigns the severity, while the test lead or project manager assigns the priority.

125. How do you plan test automation?
1. Prepare the automation test plan
2. Identify the scenarios
3. Record the scenarios
4. Enhance the scripts by inserting checkpoints and conditional loops
5. Incorporate an error handler
6. Debug the script
7. Fix the issues
8. Rerun the script and report the results

126. Does automation replace manual testing?
There can be functionality that cannot be tested with an automated tool, so it may have to be tested manually; therefore manual testing can never be fully replaced. (Scripts can be written for negative testing as well, but it is a hectic task.) In a real environment, negative testing is usually done manually.

127. How will you choose a tool for test automation?
The choice of tool depends on many things:
1. The application to be tested
2. The test environment
3. The scope and limitations of the tool
4. The features of the tool
5. The cost of the tool
6. Whether the tool is compatible with your application, i.e. whether the tool can interact with your application
7. Ease of use

128. How will you evaluate the tool for test automation?
Concentrate on the features of the tool and how they could benefit your project. Additional new features and enhancements of existing features will also help.
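A minimal sketch of the defect record from question 124, expressed as a data structure so the required fields are explicit. The class and sample values are illustrative, not a prescribed format:

    from dataclasses import dataclass

    @dataclass
    class DefectReport:
        date: str           # when the issue was raised
        issue_brief: str    # one-line summary
        description: str    # steps the developer needs to reproduce it
        status: str         # active / resolved / on hold / suspended / not reproducible
        assigned_to: str    # member allocated to the project
        priority: str       # high / medium / low (usually set by the lead)
        severity: str       # major / medium / low (usually set by the tester)

    bug = DefectReport("2024-01-15", "Login fails", "1. Open login page ...",
                       "active", "A. Developer", "high", "major")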

129. How will you describe testing activities?
Testing activities start from the elaboration phase. The various testing activities are: preparing the test plan, preparing test cases, executing the test cases, logging the bugs, validating the bugs and taking appropriate action for each bug, and automating the test cases.

130. What testing activities might you want to automate?
Automate all the high priority test cases that need to be executed as part of regression testing for each build cycle.

131. Describe common problems of test automation.
The common problems are:
1. Maintenance of old scripts when there is a feature change or enhancement
2. A change in the technology of the application will affect the old scripts

132. What types of scripting techniques for test automation do you know?
Five types of scripting techniques: Linear, Structured, Shared, Data Driven, and Keyword Driven.

133. What are memory leaks and buffer overflows?
A memory leak is incomplete deallocation of memory; such bugs happen very often. A buffer overflow means data sent as input to the server overflows the boundaries of the input area, causing the server to misbehave; buffer overflows can be exploited.

134. What are the major differences between stress testing, load testing and volume testing?
Stress testing means steadily increasing the load and checking the performance at each level. Load testing means applying the expected (or greater) load at one time and checking the performance at that level. Volume testing means starting with an initial volume of data and increasing it, to check how the system handles large volumes.

---------------------------------------------------------------

1. What is the MAIN benefit of designing tests early in the life cycle?
It helps prevent defects from being introduced into the code.

2. What is risk-based testing?
Risk-based testing is the term used for an approach to creating a test strategy that is based on prioritizing tests by risk. The basis of the approach is a detailed risk analysis and a prioritizing of risks by risk level. Tests to address each risk are then specified, starting with the highest risk first.

3. A wholesaler sells printer cartridges. The minimum order quantity is 5. There is a 20% discount for orders of 100 or more printer cartridges. You have been asked to prepare test cases using various values for the number of printer cartridges ordered. Which of the following groups contains three test inputs that would be generated using Boundary Value Analysis?
4, 5, 99. (A worked sketch follows.)
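A worked sketch of question 3: the specification puts boundaries at 5 (minimum order) and 100 (discount threshold), so boundary value analysis yields inputs such as 4, 5 and 99, plus 100 itself. The pricing function is an illustrative model of the stated rules, with an assumed unit price:

    def order_price(quantity, unit_price=10.0):
        # Illustrative model: minimum order is 5; 20% discount at 100+.
        if quantity < 5:
            raise ValueError("minimum order quantity is 5")
        discount = 0.20 if quantity >= 100 else 0.0
        return quantity * unit_price * (1 - discount)

    def test_boundaries():
        try:
            order_price(4)                    # 4: just below the minimum
            assert False, "4 should be rejected"
        except ValueError:
            pass
        assert order_price(5) == 50.0         # 5: smallest valid order
        assert order_price(99) == 990.0       # 99: largest order, no discount
        assert order_price(100) == 800.0      # 100: discount boundary itself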

4. What is the KEY difference between preventative and reactive approaches to testing?
Preventative tests are designed early; reactive tests are designed after the software has been produced.

5. What is the purpose of exit criteria?
To define when a test level is complete.

6. What determines the level of risk?
The likelihood of an adverse event and the impact of the event.

7. When is decision table testing used?
Decision table testing is used for testing systems for which the specification takes the form of rules or cause-effect combinations. In a decision table the inputs are listed in a column, with the outputs in the same column but below the inputs. The remainder of the table explores combinations of inputs to define the outputs produced.

8. What is the MAIN objective when reviewing a software deliverable?
To identify defects in any software work product.

9. Which of the following defines the expected results of a test: test case specification or test design specification?
Test case specification.

10. Which is a benefit of test independence?
It avoids author bias in defining effective tests.

11. As part of which test process do you determine the exit criteria?
Test planning.

12. What is beta testing?
Testing performed by potential customers at their own locations.

13. Given the following fragment of code, how many tests are required for 100% decision coverage?

    if width > length then
        biggest_dimension = width
        if height > width then
            biggest_dimension = height
        end_if
    else
        biggest_dimension = length
        if height > length then
            biggest_dimension = height
        end_if
    end_if

Four.

14. You have designed test cases to provide 100% statement and 100% decision coverage for the following fragment of code:

    if width > length then
        biggest_dimension = width
    else
        biggest_dimension = length
    end_if

The following has been added to the bottom of the code fragment above:

    print "Biggest dimension is " & biggest_dimension
    print "Width: " & width
    print "Length: " & length

How many more test cases are required?
None; the existing test cases can be used.

15. What is Rapid Application Development?
Rapid Application Development (RAD) is formally a parallel development of functions with subsequent integration. Components/functions are developed in parallel as if they were mini projects; the developments are time-boxed, delivered, and then assembled into a working prototype. This can very quickly give the customer something to see and use and to provide feedback regarding the delivery and their requirements. Rapid change and development of the product is possible using this methodology. However, the product specification will need to be developed for the product at some point, and the project will need to be placed under more formal controls prior to going into production.

16. What is the difference between Testing Techniques and Testing Tools?
Testing technique: a process for ensuring that some aspect of the application system or unit functions properly; there may be few techniques but many tools. Testing tool: a vehicle for performing a test process. The tool is a resource to the tester, but by itself it is insufficient to conduct testing.

17. We use the output of the requirement analysis, the requirement specification, as the input for writing:
User Acceptance Test Cases.
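A sketch of the fragment from questions 13 and 14 translated into Python, together with the four test cases that achieve 100% decision coverage (each nested decision is evaluated both true and false across the set):

    def biggest_dimension(width, length, height):
        if width > length:
            biggest = width
            if height > width:
                biggest = height
        else:
            biggest = length
            if height > length:
                biggest = height
        return biggest

    # Four cases: every decision taken both ways.
    assert biggest_dimension(3, 2, 5) == 5  # width > length, height > width
    assert biggest_dimension(3, 2, 1) == 3  # width > length, height <= width
    assert biggest_dimension(2, 3, 5) == 5  # width <= length, height > length
    assert biggest_dimension(2, 3, 1) == 3  # width <= length, height <= length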

18. Repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in another related or unrelated software component:
Regression Testing.

19. What is component testing?
Component testing, also known as unit, module or program testing, searches for defects in, and verifies the functioning of, software items (e.g. modules, programs, objects, classes, etc.) that are separately testable. Component testing may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Most often, stubs and drivers are used to replace the missing software and simulate the interfaces between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls the component to be tested.

20. What is functional system testing?
Testing the end-to-end functionality of the system as a whole.

21. What are the benefits of Independent Testing?
Independent testers see other and different defects, and are unbiased.

22. In a REACTIVE approach to testing, when would you expect the bulk of the test design work to be begun?
After the software or system has been produced.

23. What are the different methodologies in the Agile Development Model?
There are currently seven different Agile methodologies that I am aware of:
1. Extreme Programming (XP)
2. Scrum
3. Lean Software Development
4. Feature-Driven Development
5. Agile Unified Process
6. Crystal
7. Dynamic Systems Development Model (DSDM)

24. Which activity in the fundamental test process includes evaluation of the testability of the requirements and system?
Test analysis and design.

25. What is typically the MOST important reason to use risk to drive testing efforts?
Because testing everything is not feasible.

26. Which is the MOST important advantage of independence in testing?
An independent tester may be more effective at finding defects missed by the person who wrote the software.

27. Which of the following are valid objectives for incident reports?
i. Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary.
ii. Provide ideas for test process improvement.
iii. Provide a vehicle for assessing tester competence.
iv. Provide testers with a means of tracking the quality of the system under test.
Valid objectives: i, ii and iv.

28. Consider the following techniques. Which are static and which are dynamic?
i. Equivalence Partitioning.
ii. Use Case Testing.
iii. Data Flow Analysis.
iv. Exploratory Testing.
v. Decision Testing.
vi. Inspections.
Data Flow Analysis and Inspections are static; Equivalence Partitioning, Use Case Testing, Exploratory Testing and Decision Testing are dynamic.

29. Why are static testing and dynamic testing described as complementary?
Because they share the aim of identifying defects, but differ in the types of defect they find.

30. What are the phases of a formal review?
In contrast to informal reviews, formal reviews follow a formal process. A typical formal review process consists of six main steps:
1. Planning
2. Kick-off
3. Preparation
4. Review meeting
5. Rework
6. Follow-up

31. What is the role of the moderator in the review process?
The moderator (or review leader) leads the review process. He or she determines, in cooperation with the author, the type of review, the approach and the composition of the review team. The moderator performs the entry check and follows up on the rework, in order to control the quality of the input and output of the review process. The moderator also schedules the meeting, disseminates documents before the meeting, coaches other team members, paces the meeting, leads possible discussions and stores the data that is collected.

32. What is an equivalence partition (also known as an equivalence class)?
An input or output range of values such that only one value in the range becomes a test case.

33. When should configuration management procedures be implemented?
During test planning.

34. A type of functional testing that investigates the functions relating to the detection of threats, such as viruses, from malicious outsiders:
Security Testing.

35. Testing in which we subject the target of the test to varying workloads, to measure and evaluate the performance behaviors and the ability of the target to continue to function properly under these different workloads:
Load Testing.

36. The testing activity that is performed to expose defects in the interfaces and in the interaction between integrated components is:
Integration Level Testing.

37. What are the structure-based (white-box) testing techniques?
Structure-based testing techniques (which are also dynamic rather than static) use the internal structure of the software to derive test cases. They are commonly called white-box or glass-box techniques (implying you can see into the system), since they require knowledge of how the software is implemented, that is, how it works. For example, a structural technique may be concerned with exercising loops in the software: different test cases may be derived to exercise the loop once, twice, and many times. This may be done regardless of the functionality of the software. (A sketch follows this group of questions.)

38. When should regression testing be performed?
After the software has changed, or when the environment has changed.

39. When should testing be stopped?
It depends on the risks for the system being tested.

40. What is the purpose of a test completion criterion?
To determine when to stop testing.

41. What can static analysis NOT find?
For example, memory leaks.

42. What is the difference between re-testing and regression testing?
Re-testing ensures the original fault has been removed; regression testing looks for unexpected side effects.

43. What are the experience-based testing techniques?
In experience-based techniques, people's knowledge, skills and background are a prime contributor to the test conditions and test cases. The experience of both technical and business people is important, as they bring different perspectives to the test analysis and design process. Due to previous experience with similar systems, they may have insights into what could go wrong, which is very useful for testing.

44. What type of review requires formal entry and exit criteria, including metrics?
Inspection.

45. Could reviews or inspections be considered part of testing?
Yes, because both help detect faults and improve quality.

46. An input field takes a year of birth between 1900 and 2004. What are the boundary values for testing this field?
1899, 1900, 2004, 2005.
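A minimal sketch of the loop-exercising idea from question 37: separate test cases drive a loop zero times, once and many times. The function and values are illustrative:

    def total(values):
        # Simple loop under test.
        result = 0
        for v in values:
            result += v
        return result

    # Structure-based loop tests: zero, one and many iterations.
    assert total([]) == 0          # loop body never executes
    assert total([7]) == 7         # loop body executes exactly once
    assert total([1, 2, 3]) == 6   # loop body executes many times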

47. Which of the following tools would be involved in the automation of regression tests?
a. Data tester
b. Boundary tester
c. Capture/Playback
d. Output comparator
d. Output comparator.

48. To test a function, what must a programmer write in order to call the function to be tested and pass it test data?
A driver.

49. What is the one key reason why developers have difficulty testing their own work?
Lack of objectivity.

50. "How much testing is enough?"
The answer depends on the risks for your industry, contract and special requirements.

51. When should testing be stopped?
It depends on the risks for the system being tested.

52. Which of the following is the main purpose of the integration strategy for integration testing in the small?
To specify which modules to combine when, and how many at once.

53. What is the purpose of a test completion criterion?
To determine when to stop testing.

54. Given the following code, which statement is true about the minimum number of test cases required for full statement and branch coverage?

    Read p
    Read q
    IF p+q > 100 THEN
        Print "Large"
    ENDIF
    IF p > 50 THEN
        Print "p Large"
    ENDIF

One test for statement coverage, two for branch coverage. (A sketch follows.)
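A sketch of the fragment from question 54 in Python, showing why a single test can execute every statement while a second test is needed for full branch coverage (each condition must also evaluate false):

    def classify(p, q):
        messages = []
        if p + q > 100:
            messages.append("Large")
        if p > 50:
            messages.append("p Large")
        return messages

    # One test executes every statement (both conditions true):
    assert classify(60, 60) == ["Large", "p Large"]
    # A second test is needed for branch coverage (both conditions false):
    assert classify(10, 10) == []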

55. What is the difference between re-testing and regression testing?
Re-testing ensures the original fault has been removed; regression testing looks for unexpected side effects.

56. Which review is normally used to evaluate a product to determine its suitability for intended use and to identify discrepancies?
Technical Review.

57. Why do we use decision tables?
The techniques of equivalence partitioning and boundary value analysis are often applied to specific situations or inputs. However, if different combinations of inputs result in different actions being taken, this can be more difficult to show using equivalence partitioning and boundary value analysis, which tend to be more focused on the user interface. The other two specification-based techniques, decision tables and state transition testing, are more focused on business logic or business rules. A decision table is a good way to deal with combinations of things (e.g. inputs). This technique is sometimes also referred to as a cause-effect table, because there is an associated logic diagramming technique called cause-effect graphing that was sometimes used to help derive the decision table.

58. Faults found should be originally documented by whom?
By testers.

59. Which is the current formal worldwide recognized documentation standard?
There isn't one.

60. Which of the following is the review participant who has created the item to be reviewed?
The author.

61. A number of critical bugs are fixed in the software. All the bugs are in one module, related to reports. The test manager decides to do regression testing only on the reports module.
Regression testing should be done on the other modules as well, because fixing one module may affect other modules.

62. Why does boundary value analysis provide good test cases?
Because errors are frequently made during programming of the different cases near the 'edges' of the range of values.

63. What makes an inspection different from other review types?
It is led by a trained leader, and uses formal entry and exit criteria and checklists.

64. Why can the tester be dependent on configuration management?
Because configuration management assures that we know the exact version of the testware and of the test object.

65. What is a V-Model?
A software development model that illustrates how testing activities integrate with software development phases.

66. What is maintenance testing?
Testing triggered by modifications, migration or retirement of existing software.

67. What is test coverage?
Test coverage measures, in some specific way, the amount of testing performed by a set of tests (derived in some other way, e.g. using specification-based techniques). Wherever we can count things and can tell whether or not each of those things has been tested by some test, we can measure coverage.

68. Why is incremental integration preferred over "big bang" integration?
Because incremental integration has better early defect screening and isolation ability.

69. When do we prepare the RTM (Requirement Traceability Matrix): before test case designing or after test case designing?
Before. Requirements should already be traceable from review activities, since you should have traceability in the Test Plan already. This also depends on the organization: if an organization tests after development has started, then requirements must already be traceable to their source. To make life simpler, use a tool to manage requirements.

70. What is the process that starts with the terminal modules called?
Bottom-up integration.

71. During which test activity could faults be found most cost-effectively?
During test planning.

72. The purpose of the requirement phase is:
To freeze requirements, to understand user needs, and to define the scope of testing.

73. How much testing is enough?
The answer depends on the risks for your industry, contract and special requirements.

74. Why do we split testing into distinct stages?
Each test stage has a different purpose.

75. Which of the following is likely to benefit most from the use of test tools providing test capture and replay facilities?
a) Regression testing
b) Integration testing
c) System testing
d) User acceptance testing
a) Regression testing.

76. How would you estimate the amount of re-testing likely to be required?
Metrics from previous similar projects and discussions with the development team.

77. What does data flow analysis study?
The use of data on paths through the code.

78. What is Alpha testing?
Pre-release testing by end user representatives at the developer's site.

79. What is a failure?
Failure is a departure from specified behavior.

80. What are test comparators?
Is it really a test if you put some inputs into some software but never look to see whether the software produces the correct result? The essence of testing is to check whether the software produces the correct result, and to do that we must compare what the software produces to what it should produce. A test comparator helps to automate aspects of that comparison.

81. Who is responsible for documenting all the issues, problems and open points that were identified during the review meeting?
The scribe.

82. What is the main purpose of an informal review?
It is an inexpensive way to get some benefit.

83. What is the purpose of test design techniques?
Identifying test conditions and identifying test cases.

84. When testing a grade calculation system, a tester determines that all scores from 90 to 100 will yield a grade of A, but scores below 90 will not. This analysis is known as:
Equivalence partitioning. (A sketch follows this group of questions.)

85. A test manager wants to use the resources available for the automated testing of a web application. The best choice is:
Tester, test automater, web specialist, DBA.

86. During the testing of a module, tester 'X' finds a bug and assigns it to a developer. But the developer rejects it, saying that it is not a bug. What should 'X' do?
Send the developer detailed information about the bug encountered and check its reproducibility.

87. A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages:
Big-Bang Testing.

88. In practice, which life cycle model may have more, fewer or different levels of development and testing, depending on the project and the software product? For example, there may be component integration testing after component testing, and system integration testing after system testing.
V-Model.

89. Which technique can be used to achieve input and output coverage? It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
Equivalence partitioning.

90. "This life cycle model is basically driven by schedule and budget risks." This statement is best suited for:
V-Model.

91. In which order should tests be run?
The most important tests first.

92. The later in the development life cycle a fault is discovered, the more expensive it is to fix. Why?
The fault has been built into more documentation, code, tests, etc.
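A sketch of the equivalence partitioning from question 84: scores from 90 to 100 form one partition (grade A) and scores below 90 another, so one representative per partition suffices. The grading function is an illustrative model of the specification:

    def grade(score):
        # Illustrative model of the rule in question 84.
        return "A" if 90 <= score <= 100 else "not A"

    # One representative value per equivalence partition:
    assert grade(95) == "A"      # representative of the 90-100 partition
    assert grade(50) == "not A"  # representative of the below-90 partition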

93. What is coverage measurement?
It is a partial measure of test thoroughness.

94. What is boundary value testing?
Testing boundary conditions on, below and above the edges of input and output equivalence classes.

95. What is fault masking?
One error condition hiding another error condition.

96. What does COTS stand for?
Commercial Off The Shelf.

97. The purpose of which item is to allow specific tests to be carried out on a system or network that resembles as closely as possible the environment where the item under test will be used upon release?
Test Environment.

98. What can be thought of as being based on the project plan, but with greater amounts of detail?
Phase Test Plan.

99. What is exploratory testing?
Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The planning involves the creation of a test charter, a short declaration of the scope of a short (1 to 2 hour) time-boxed test effort, the objectives, and the possible approaches to be used. The test design and test execution activities are performed in parallel, typically without formally documenting the test conditions, test cases or test scripts. This does not mean that other, more formal testing techniques will not be used; for example, the tester may decide to use boundary value analysis but will think through and test the most important boundary values without necessarily writing them down. Some notes will be written during the exploratory-testing session, so that a report can be produced afterwards.

100. What is failure?
Deviation from the expected result to the actual result.

---------------------------------------------------------------

Q: How do you introduce a new software QA process?
A: It depends on the size of the organization and the risks involved. For large organizations with high-risk projects, serious management buy-in is required and a formalized QA process is necessary. For medium size organizations with lower risk projects, management and organizational buy-in and a slower, step-by-step process are required. Generally speaking, QA processes should be balanced with productivity, in order to keep bureaucracy from getting out of hand. For smaller groups or projects, an ad hoc process is more appropriate. A lot depends on team leads and managers; feedback to developers and good communication are essential among customers, managers, developers, test engineers and testers. Regardless of the size of the company, the greatest value for effort is in managing requirement processes, where the goal is requirements that are clear, complete and testable.

Q: What is the role of documentation in QA?
A: Documentation plays a critical role in QA. QA practices should be documented so that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports and user manuals should all be documented. Ideally, there should be a system for easily finding and obtaining documents and for determining which document will have a particular piece of information. Use documentation change management, if possible.

Q: What makes a good test engineer?
A: Good test engineers have a "test to break" attitude. We, good test engineers, take the point of view of the customer, have a strong desire for quality and pay attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, as is an ability to communicate with both technical and non-technical people. Previous software development experience is also helpful, as it provides a deeper understanding of the software development process, gives the test engineer an appreciation for the developers' point of view and reduces the learning curve in automated test tool programming. Rob Davis is a good test engineer because he has a "test to break" attitude, takes the point of view of the customer, has a strong desire for quality and an attention to detail. He is also tactful and diplomatic and has good communication skills, both oral and written. And he has previous software development experience, too.

Q: What is a test plan?
A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will be able to read it.

Q: What is a test case?
A: A test case is a document that describes an input, action, or event and its expected result, in order to determine whether a feature of an application is working correctly. A test case should contain particulars such as:
Test case identifier;
Test case name;
Objective;
Test conditions/setup;
Input data requirements/steps; and
Expected results.
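A minimal sketch of those particulars captured as a record, which some teams keep in a spreadsheet or test management tool; the layout and values are illustrative, not a prescribed format:

    test_case = {
        "id": "TC-042",                       # test case identifier
        "name": "Login with valid password",  # test case name
        "objective": "Verify a registered user can log in",
        "setup": "User 'demo' exists; application is running",
        "steps": ["Open login page", "Enter 'demo' and a valid password",
                  "Click Login"],
        "expected_result": "User lands on the dashboard page",
    }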

Please note, the process of developing test cases can help find problems in the requirements or design of an application, since it requires you to think completely through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle, if possible.

Q: What should be done after a bug is found?
A: When a bug is found, it needs to be communicated and assigned to a developer who can fix it. After the problem is resolved, the fix should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing, to check that the fix didn't create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.

Q: What is configuration management?
A: Configuration management (CM) covers the tools and processes used to control, coordinate and track code, requirements, documentation, problems, change requests, designs, tools, compilers, libraries, patches, the changes made to them, and who makes the changes. Rob Davis has had experience with a full range of CM tools and concepts, and can easily adapt to your software tool and process needs.

Q: What if the software is so buggy it can't be tested at all?
A: In this situation the best bet is to have test engineers go through the process of reporting whatever bugs or problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing, insufficient integration testing, poor design, or improper build or release procedures), managers should be notified and provided with some documentation as evidence of the problem.

Q: What if there isn't enough time for thorough testing?
A: Since it is rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate for most software development projects. Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience. The checklist should include answers to the following questions:
Which functionality is most important to the project's intended purpose?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which aspects of the application can be tested early in the development cycle?
Which parts of the code are most complex and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
Which aspects of similar/related previous projects caused problems?
Which aspects of similar/related previous projects had large maintenance expenses?
Which parts of the requirements and design are unclear or poorly thought out?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Which tests will have the best high-risk-coverage to time-required ratio?

Q: What if the project isn't big enough to justify extensive testing?
A: Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed, and the considerations listed under "What if there isn't enough time for thorough testing?" apply. The test engineer then should do "ad hoc" testing, or write up a limited test plan based on the risk analysis.

Q: What can be done if requirements are changing continuously?
A: Work with management early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance. It is helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch. Additionally, try to:
Ensure the code is well commented and well documented; this makes changes easier for the developers.
Use rapid prototyping whenever possible; this will help customers feel sure of their requirements and minimize changes.
In the project's initial schedule, allow for some extra time to accommodate probable changes.
Move new requirements to a Phase 2 version of the application and use the original requirements for the Phase 1 version.
Negotiate to allow only easily implemented new requirements into the project.
Ensure customers and management understand the scheduling impacts, inherent risks and costs of significant requirements changes; then let management or the customers decide whether the changes are warranted; after all, that is their job.
Balance the effort put into setting up automated testing with the expected effort required to redo the tests to deal with changes.
Design some flexibility into automated test scripts.
Focus initial automated testing on application aspects that are most likely to remain unchanged.
Devote appropriate effort to risk analysis of changes, in order to minimize regression-testing needs.
Design some flexibility into test cases; this is not easily done, so the best bet is to minimize the detail in the test cases, or set up only higher-level, generic-type test plans.
Focus less on detailed test plans and test cases and more on ad hoc testing, with an understanding of the added risk this entails.

Q: How do you know when to stop testing?
A: This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
Deadlines, e.g. release deadlines or testing deadlines;
Test cases completed with a certain percentage passed;
The test budget has been depleted;
Coverage of code, functionality, or requirements reaches a specified point;
The bug rate falls below a certain level; or
The beta or alpha testing period ends.

Q: What if the application has functionality that wasn't in the requirements?
A: It may take serious effort to determine if an application has significant unexpected or hidden functionality, which would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If it is not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, it may not be a significant risk.

Q: How can software QA processes be implemented without stifling productivity?
A: Implement QA processes slowly over time. Use consensus to reach agreement on processes, and adjust and experiment as the organization grows and matures. Productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection. Panics and burnout will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize the time required in meetings, and promote training as part of the QA process. However, no one, especially talented technical types, likes bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug fixing and calming of irate customers.

Q: What if the organization is growing so fast that fixed QA processes are impossible?
A: This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than:
Hire good people (i.e. hire Rob Davis);
Ruthlessly prioritize quality issues and maintain focus on the customer;
Make sure everyone in the organization is clear on what quality means to the customer.

Q: Why do you recommend that we test during the design phase? A: Because testing during the design phase can prevent defects later on. We recommend verifying three things:

1. Verify the design is good, efficient, compact, testable and maintainable.
2. Verify the design meets the requirements and is complete (specifies all relationships between modules, how to pass data, what happens in exceptional circumstances, the starting state of each module and how to guarantee the state of each module).
3. Verify the design incorporates enough memory and I/O devices, and a quick enough runtime, for the final product.

Q: What is software quality assurance? A: Software Quality Assurance, when Rob Davis does it, is oriented to "prevention". It involves the entire software development process. Prevention means monitoring and improving the process, making sure any agreed-upon standards and procedures are followed, and ensuring problems are found and dealt with. Software Testing, when performed by Rob Davis, is oriented to "detection". Testing involves the operation of a system or application under controlled conditions and evaluating the results. Rob Davis can provide QA/testing services. This document details some aspects of how he can provide software testing/QA services. For more information, e-mail rob@robdavispe.com.

Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams, which include a mix of test engineers, testers and developers, who work closely together, with overall QA processes monitored by project managers. Software quality assurance depends on what best fits your organization's size and business structure.

Q: How is testing affected by object-oriented designs? A: A well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white box testing can be oriented to the application's objects. If the application was well designed, this can simplify test design.

Q: What is quality assurance? A: Quality Assurance ensures all parties concerned with the project adhere to the process and procedures, standards and templates, and test readiness reviews. Rob Davis' QA service depends on the customers and projects. A lot will depend on team leads or managers, feedback to developers, and communications among customers, managers, developers, test engineers and testers.

Q: What is black box testing? A: Black box testing is functional testing, not based on any knowledge of internal software design or code. Black box tests are based on requirements and functionality.

Q: What is white box testing? A: White box testing is based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths and conditions.

Q: What is unit testing? A: Unit testing is the first level of dynamic testing and is first the responsibility of developers and then that of the test engineers. Unit testing is complete after the expected test results are met or differences are explainable/acceptable.

Q: What is functional testing? A: Functional testing is a black-box type of testing geared to the functional requirements of an application. Test engineers should perform functional testing.
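To make the unit-testing idea above concrete, here is a minimal sketch using Python's standard unittest module. The apply_discount function and its expected values are hypothetical examples, not taken from this document; the boundary checks echo the boundary value analysis technique defined earlier.

    import unittest

    # Hypothetical function under test (not from this document).
    def apply_discount(price, percent):
        """Return price reduced by the given percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (100 - percent) / 100, 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_value(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_boundary_values(self):
            # Boundary value analysis: 0% and 100% are the edges of the range.
            self.assertEqual(apply_discount(50.0, 0), 50.0)
            self.assertEqual(apply_discount(50.0, 100), 0.0)

        def test_out_of_range_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(50.0, 101)

    if __name__ == "__main__":
        unittest.main()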

Q: What is usability testing? A: Usability testing is testing for user-friendliness. Clearly this is subjective and depends on the targeted end-user or customer. User interviews, surveys, video recording of user sessions and other techniques can be used. Programmers and developers are usually not appropriate as usability testers.

Q: What is incremental integration testing? A: Incremental integration testing is continuous testing of an application as new functionality is added. This may require that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. Incremental testing may be performed by programmers, software engineers, or test engineers.

Q: What is parallel/audit testing? A: Parallel/audit testing is testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.

Q: What is integration testing? A: Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team. Integration testing is considered complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

Q: What is system testing? A: System testing is black box testing performed by the test team; at the start of system testing the complete system is configured in a controlled environment. The purpose of system testing is to validate an application's accuracy and completeness in performing the functions as designed. System testing simulates real-life scenarios in a "simulated real life" test environment and tests all functions of the system that are required in real life. System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input. Upon completion of integration testing, system testing is started. Before system testing, all unit and integration test results are reviewed by Software QA to ensure all problems have been resolved. For a higher level of testing it is important to understand unresolved problems that originate at the unit and integration test levels.
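As a rough illustration of integration testing's focus on component interfaces, the sketch below wires two hypothetical components together and checks that the output of one is handled correctly by the other. Both functions and the expected total are invented for illustration only.

    # A minimal integration-test sketch exercising the interface
    # between two hypothetical components (not from this document).
    def parse_order(text):
        """Component A: turn 'item,qty,unit_price' into a dict."""
        item, qty, price = text.split(",")
        return {"item": item, "qty": int(qty), "unit_price": float(price)}

    def order_total(order):
        """Component B: compute the total from Component A's output."""
        return order["qty"] * order["unit_price"]

    def test_parse_and_total_work_together():
        # The point of integration testing: data produced by one
        # component must be accepted and handled by the next.
        order = parse_order("widget,3,2.50")
        assert order_total(order) == 7.50

    test_parse_and_total_work_together()
    print("integration check passed")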

Q: What is end-to-end testing? A: Similar to system testing, end-to-end testing involves the "macro" end of the test scale: testing a complete application in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems.

Q: What is regression testing? A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify that changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for before testing proceeds to the next level.

Q: What is sanity testing? A: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Q: What is performance testing? A: Although performance testing is described as a part of system testing, it can be regarded as a distinct level of testing. Performance testing verifies loads, volumes and response times, as defined by requirements.

Q: What is load testing? A: Load testing is testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system's response time will degrade or fail.

Q: What is installation testing? A: Installation testing is testing full, partial, upgrade, or install/uninstall processes. The installation test for a release is conducted with the objective of demonstrating production readiness. This test includes the inventory of configuration items, performed by the application's System Administration, the evaluation of data readiness, and dynamic tests focused on basic system functionality. When necessary, a sanity test is performed following installation testing.

Q: What is security/penetration testing? A: Security/penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.
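Looking back at the regression-testing answer above, the following minimal sketch illustrates its baseline-comparison idea; the baseline keys and values are hypothetical stand-ins for results captured from a known-good release.

    import json

    # Hypothetical baseline: expected outputs captured from the last good release.
    BASELINE = {"login_page_title": "Welcome", "max_items_per_page": 25}

    def run_current_build():
        # Stand-in for executing the baseline scripts against the new build.
        return {"login_page_title": "Welcome", "max_items_per_page": 20}

    def regression_check(baseline, current):
        """Return every result that differs from the baseline."""
        return {key: {"expected": baseline[key], "actual": current.get(key)}
                for key in baseline if current.get(key) != baseline[key]}

    discrepancies = regression_check(BASELINE, run_current_build())
    print(json.dumps(discrepancies, indent=2))  # each entry must be accounted for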

Q: What is recovery/error testing? A: Recovery/error testing is testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Q: What is compatibility testing? A: Compatibility testing is testing how well software performs in a particular hardware, software, operating system, or network environment.

Q: What is comparison testing? A: Comparison testing is testing that compares the software's weaknesses and strengths to those of competitors' products.

Q: What is acceptance testing? A: Acceptance testing is black box testing that gives the client/customer/project manager the opportunity to verify the system's functionality and usability prior to the system being released to production. The acceptance test is the responsibility of the client/customer or project manager; however, it is conducted with the full support of the project team. The test team also works with the client/customer/project manager to develop the acceptance criteria.

Q: What is alpha testing? A: Alpha testing is testing of an application when development is nearing completion. Minor design changes can still be made as a result of alpha testing. Alpha testing is typically performed by a group that is independent of the design team, but still within the company, e.g. in-house software test engineers or software QA engineers.
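Returning to the load-testing answer a few questions above, here is a minimal sketch that issues concurrent requests and reports failures and the worst response time. The URL, request count, worker count and timeout are all hypothetical; real load testing is normally done with dedicated tools.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8000/"  # hypothetical system under test
    REQUESTS = 50                   # hypothetical load level

    def timed_request(_):
        start = time.perf_counter()
        try:
            urllib.request.urlopen(URL, timeout=5).read()
            return time.perf_counter() - start
        except OSError:
            return None  # count failures separately from slow responses

    with ThreadPoolExecutor(max_workers=10) as pool:
        results = list(pool.map(timed_request, range(REQUESTS)))

    failures = results.count(None)
    times = [t for t in results if t is not None]
    if times:
        print(f"{failures} failures, worst response {max(times):.2f}s")
    else:
        print(f"all {failures} requests failed")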

Q: What is beta testing? A: Beta testing is testing an application when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically performed by end-users or others, not by programmers, software engineers, or test engineers.

Q: What is a Test/QA Team Lead? A: The Test/QA Team Lead coordinates the testing activity, communicates testing status to management and manages the test team.

Q: What testing roles are standard on most testing projects? A: Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Lead, Test/QA Manager, System Administrator, Database Administrator, Technical Analyst, Test Build Manager and Test Configuration Manager. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager.

Q: What is a Test Engineer? A: We, test engineers, are engineers who specialize in testing. We create test cases, procedures and scripts, and generate data. We execute test procedures and scripts, analyze standards of measurement, and evaluate results of system/integration/regression testing. We also:

- Speed up the work of the development staff;
- Reduce your organization's risk of legal liability;
- Give you the evidence that your software is correct and operates properly;
- Improve problem tracking and reporting;
- Maximize the value of your software;
- Maximize the value of the devices that use it;
- Assure the successful launch of your product by discovering bugs and design flaws before users get discouraged, before shareholders lose their cool and before employees get bogged down;
- Help the work of your development staff, so the development team can devote its time to building up your product;
- Promote continual improvement;
- Provide documentation required by the FDA, FAA, other regulatory agencies and your customers;
- Save money by discovering defects early in the design process, before failures occur in production or in the field;
- Save the reputation of your company by discovering bugs and design flaws before they damage that reputation.

Q: What is a Test Build Manager? A: Test Build Managers deliver current software versions to the test environment, install the application's software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Test Build Manager.

Q: What is a System Administrator? A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application's software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a System Administrator.

Q: What is a Database Administrator? A: Test Build Managers, System Administrators and Database Administrators deliver current software versions to the test environment, install the application's software and apply software patches to both the application and the operating system, and set up, maintain and back up test environment hardware. Depending on the project, one person may wear more than one hat. For instance, a Test Engineer may also wear the hat of a Database Administrator.

Q: What is a Technical Analyst? A: Technical Analysts perform test assessments and validate system/functional test requirements. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Technical Analyst.

Q: What is a Test Configuration Manager? A: Test Configuration Managers maintain test environments, scripts, software and test data. Depending on the project, one person may wear more than one hat. For instance, Test Engineers may also wear the hat of a Test Configuration Manager.

Q: What is a test schedule? A: The test schedule identifies all tasks required for a successful testing effort: a schedule of all test activities and the resource requirements.

Q: What is software testing methodology? A: One software testing methodology is the use of a three-step process of:
1. Creating a test strategy;
2. Creating a test plan/design; and
3. Executing tests.

This methodology can be used and molded to your organization's needs. Rob Davis believes that using this methodology is important in the development and ongoing maintenance of his clients' applications.

Q: What is the general testing process? A: The general testing process is the creation of a test strategy (which sometimes includes the creation of test cases), the creation of a test plan/design (which usually includes test cases and test procedures) and the execution of tests.

Q: How do you create a test plan/design? A: Test scenarios and/or cases are prepared by reviewing functional requirements of the release and preparing logical groups of functions that can be further broken into test procedures. Test procedures define test conditions, the data to be used for testing and the expected results, including database updates, file outputs and report results. Generally speaking:

- Test cases and scenarios are designed to represent both typical and unusual situations that may occur in the application.
- Test engineers define unit test requirements and unit test cases. Test engineers also execute unit test cases.
- It is the test team that, with the assistance of developers and clients, develops test cases and scenarios for integration and system testing.
- Test scenarios are executed through the use of test procedures or scripts.
- Test procedures or scripts define a series of steps necessary to perform one or more test scenarios.
- Test procedures or scripts include the specific data that will be used for testing the process or transaction.
- Test procedures or scripts may cover multiple test scenarios.
- Test scripts are mapped back to the requirements, and traceability matrices are used to ensure each test is within scope.
- Test data is captured and baselined prior to testing. This data serves as the foundation for unit and system testing and is used to exercise system functionality in a controlled environment. Some output data is also baselined for future comparison. Baselined data is used to support future application maintenance via regression testing.
- A pretest meeting is held to assess the readiness of the application, the environment and the data to be tested. A test readiness document is created to indicate the status of the entrance criteria of the release.
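The traceability matrices mentioned above can be as simple as a mapping from requirements to the test cases that cover them, so uncovered requirements stand out. A minimal sketch, with hypothetical requirement and test-case identifiers:

    # Hypothetical requirement and test-case identifiers (illustration only).
    traceability = {
        "REQ-001 login": ["TC-01", "TC-02"],
        "REQ-002 password reset": ["TC-03"],
        "REQ-003 audit logging": [],  # no coverage yet
    }

    for requirement, tests in traceability.items():
        # A requirement with no mapped tests is a gap in scope.
        status = ", ".join(tests) if tests else "NOT COVERED"
        print(f"{requirement}: {status}")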

Inputs for this process:

- Approved Test Strategy Document.
- Test tools, or automated test tools, if applicable.
- Previously developed scripts, if applicable.

- Test documentation problems uncovered as a result of testing.
- A good understanding of software complexity and module path coverage, derived from general and detailed design documents, e.g. the software design document, source code and software complexity data.

Outputs for this process:

- Approved documents of test scenarios, test cases, test conditions and test data.
- Reports of software design issues, given to software developers for correction.

Q: How do you execute tests? A: Execution of tests is completed by following the test documents in a methodical manner. As each test procedure is performed, an entry is recorded in a test execution log to note the execution of the procedure and whether or not the test procedure uncovered any defects. Checkpoint meetings are held throughout the execution phase; they are held daily, if required, to address and discuss testing issues, status and activities. The output from the execution of test procedures is known as test results. Test results are evaluated by test engineers to determine whether the expected results have been obtained. All discrepancies/anomalies are logged, discussed with the software team lead, hardware test lead, programmers and software engineers, and documented for further investigation and resolution. Every company has a different process for logging and reporting bugs/defects uncovered during testing. Pass/fail criteria are used to determine the severity of a problem, and results are recorded in a test summary report. The severity of a problem found during system testing is defined in accordance with the customer's risk assessment and recorded in their selected tracking tool. Proposed fixes are delivered to the testing environment based on the severity of the problem. Fixes are regression tested, and flawless fixes are migrated to a new baseline. Following completion of the test, members of the test team prepare a summary report. The summary report is reviewed by the Project Manager, Software QA Manager and/or Test Team Lead. After a particular level of testing has been certified, it is the responsibility of the Configuration Manager to coordinate the migration of the release software components to the next test level, as documented in the Configuration Management Plan. The software is only migrated to the production environment after the Project Manager's formal acceptance. The test team reviews test document problems identified during testing and updates documents where appropriate.
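A test execution log like the one described above can be kept in something as simple as a CSV file. A minimal sketch, with hypothetical field names and sample entries:

    import csv
    from datetime import date

    # Hypothetical log format: one row per executed test procedure.
    log_fields = ["date", "procedure_id", "tester", "result", "defect_id"]
    entries = [
        {"date": date.today().isoformat(), "procedure_id": "TP-014",
         "tester": "A. Tester", "result": "fail", "defect_id": "BUG-231"},
        {"date": date.today().isoformat(), "procedure_id": "TP-015",
         "tester": "A. Tester", "result": "pass", "defect_id": ""},
    ]

    with open("test_execution_log.csv", "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=log_fields)
        writer.writeheader()
        writer.writerows(entries)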

Inputs for this process:

- Approved test documents, e.g. Test Plan, Test Cases, Test Procedures.
- Test tools, including automated test tools, if applicable.
- Developed scripts.
- Changes to the design, i.e. Change Request Documents.
- Test data.

- Availability of the test team and project team.
- General and Detailed Design Documents, i.e. the Requirements Document and Software Design Document.
- Software that has been migrated to the test environment, i.e. unit-tested code, via the Configuration/Build Manager.
- Test Readiness Document.
- Document updates.

Outputs for this process:

- Log and summary of the test results. Usually this is part of the Test Report. This needs to be approved and signed off with revised testing deliverables.
- Changes to the code, also known as test fixes.
- Test document problems uncovered as a result of testing. Examples are Requirements Document and Design Document problems.

- Reports on software design issues, given to software developers for correction. Examples are bug reports on code issues.
- Formal record of test incidents, usually part of problem tracking.
- Baselined package, also known as tested source and object code, ready for migration to the next level.

Q: How do you create a test strategy? A: The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team. The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment.

Inputs for this process:

- A description of the required hardware and software components, including test tools. This information comes from the test environment, including test tool data.
- A description of the roles and responsibilities of the resources required for the test, and schedule constraints. This information comes from man-hours and schedules.
- Testing methodology. This is based on known standards.
- Functional and technical requirements of the application. This information comes from requirements, change requests, and technical and functional design documents.
- Requirements that the system cannot provide, e.g. system limitations.

Outputs for this process:

- An approved and signed-off test strategy document and test plan, including test cases.
- Testing issues requiring resolution. Usually this requires additional negotiation at the project management level.

Q: What is security clearance? A: Security clearance is a process of determining your trustworthiness and reliability before granting you access to national security information.

Q: What are the levels of classified access? A: The levels of classified access are confidential, secret, top secret and sensitive compartmented information, of which top secret is the highest.

Q: What's a 'test plan'? A: A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:

- Title
- Identification of software, including version/release numbers
- Revision history of the document, including authors, dates, approvals
- Table of Contents
- Purpose of document, intended audience
- Objective of testing effort
- Software product overview
- Relevant related document list, such as requirements, design documents, other test plans, etc.
- Relevant standards or legal requirements
- Traceability requirements
- Relevant naming conventions and identifier conventions
- Overall software project organization and personnel/contact info/responsibilities
- Test organization and personnel/contact info/responsibilities
- Assumptions and dependencies
- Project risk analysis
- Testing priorities and focus

- Scope and limitations of testing
- Test outline: a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
- Outline of data input equivalence classes, boundary value analysis, error classes
- Test environment: hardware, operating systems, other required software, data configurations, interfaces to other systems
- Test environment validity analysis: differences between the test and production systems and their impact on test validity
- Test environment setup and configuration issues
- Software migration processes
- Software CM processes
- Test data setup requirements

- Database setup requirements
- Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
- Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
- Test automation: justification and overview
- Test tools to be used, including versions, patches, etc.
- Test script/test code maintenance processes and version control
- Problem tracking and resolution: tools and processes
- Project test metrics to be used
- Reporting requirements and testing deliverables
- Software entrance and exit criteria
- Initial sanity testing period and criteria
- Test suspension and restart criteria
- Personnel allocation
- Personnel pre-training needs
- Test site/location

- Outside test organizations to be utilized, and their purpose, responsibilities, deliverables, contact persons and coordination issues
- Relevant proprietary, classified, security and licensing issues
- Open issues
- Appendix: glossary, acronyms, etc.

Q: What's a 'test case'? A: A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.

Q: What should be done after a bug is found? A: The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that the fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the Tools section for web resources with listings of such tools). The following are items to consider in the tracking process (a minimal sketch of such a bug record appears after this list):

- Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary
- Bug identifier (number, ID, etc.)
- Current bug status (e.g., 'Released for Retest', 'New', etc.)
- The application name or identifier and version
- The function, module, feature, object, screen, etc. where the bug occurred
- Environment specifics: system, platform, relevant hardware specifics
- Test case name/number/identifier
- One-line bug description
- Full bug description
- Description of steps needed to reproduce the bug, if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool

- Names and/or descriptions of files/data/messages/etc. used in the test
- File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
- Severity estimate (a 5-level range such as 1-5 or 'critical' to 'low' is common)
- Was the bug reproducible?
- Tester name
- Test date
- Bug reporting date
- Name of developer/group/organization the problem is assigned to
- Description of problem cause
- Description of fix
- Code section/file/module/class/method that was fixed
- Date of fix
- Application version that contains the fix
- Tester responsible for retest
- Retest date
- Retest results
- Regression testing requirements
- Tester responsible for regression tests
- Regression testing results

A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.

Q: What if the software is so buggy it can't really be tested at all? A: The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.), managers should be notified and provided with some documentation as evidence of the problem.
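As noted above, here is a minimal sketch of a bug record carrying a subset of the tracking fields just listed; the field selection and sample values are hypothetical, since every organization defines its own record.

    from dataclasses import dataclass, field

    # Hypothetical subset of the tracking fields listed above.
    @dataclass
    class BugReport:
        bug_id: str
        status: str            # e.g. "New", "Released for Retest"
        app_version: str
        one_line_description: str
        steps_to_reproduce: list = field(default_factory=list)
        severity: int = 3      # 1 (critical) .. 5 (low)
        reproducible: bool = True
        assigned_to: str = ""

    bug = BugReport(
        bug_id="BUG-231",
        status="New",
        app_version="2.4.1",
        one_line_description="Save button disabled after validation error",
        steps_to_reproduce=["Open order form", "Enter invalid date", "Correct it"],
        severity=2,
        assigned_to="dev-team",
    )
    print(bug)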

Q: How can it be known when to stop testing? A: This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:

- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with a certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends

Q: What if there isn't enough time for thorough testing? A: Use risk analysis to determine where testing should be focused. Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense and experience. (If warranted, formal methods are also available.) Considerations can include:

- Which functionality is most important to the project's intended purpose?
- Which functionality is most visible to the user?
- Which functionality has the largest safety impact?
- Which functionality has the largest financial impact on users?
- Which aspects of the application are most important to the customer?
- Which aspects of the application can be tested early in the development cycle?
- Which parts of the code are most complex, and thus most subject to errors?
- Which parts of the application were developed in rush or panic mode?
- Which aspects of similar/related previous projects caused problems?
- Which aspects of similar/related previous projects had large maintenance expenses?
- Which parts of the requirements and design are unclear or poorly thought out?
- What do the developers think are the highest-risk aspects of the application?
- What kinds of problems would cause the worst publicity?
- What kinds of problems would cause the most customer service complaints?

- What kinds of tests could easily cover multiple functionalities?
- Which tests will have the best high-risk-coverage to time-required ratio?

Q: What if the project isn't big enough to justify extensive testing? A: Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed and the same considerations as described previously in "What if there isn't enough time for thorough testing?" apply. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.

Q: What can be done if requirements are changing continuously? A: This is a common problem and a major headache.

- Work with the project's stakeholders early on to understand how requirements might change, so that alternate test plans and strategies can be worked out in advance, if possible.
- It's helpful if the application's initial design allows for some adaptability, so that later changes do not require redoing the application from scratch.
- If the code is well commented and well documented, this makes changes easier for the developers.
- Use rapid prototyping whenever possible to help customers feel sure of their requirements and minimize changes.
- The project's initial schedule should allow for some extra time commensurate with the possibility of changes.
- Try to move new requirements to a Phase 2 version of the application, while using the original requirements for the Phase 1 version.
- Negotiate to allow only easily implemented new requirements into the project, while moving more difficult new requirements into future versions of the application.
- Be sure that customers and management understand the scheduling impacts, inherent risks and costs of significant requirements changes. Then let management or the customers (not the developers or testers) decide if the changes are warranted; after all, that's their job.
- Balance the effort put into setting up automated testing with the expected effort required to re-do it to deal with changes.
- Try to design some flexibility into automated test scripts.
- Focus initial automated testing on application aspects that are most likely to remain unchanged.
- Devote appropriate effort to risk analysis of changes to minimize regression testing needs.
- Design some flexibility into test cases (this is not easily done; the best bet might be to minimize the detail in the test cases, or set up only higher-level, generic-type test plans).
- Focus less on detailed test plans and test cases and more on ad hoc testing (with an understanding of the added risk that this entails).

Q: What if the application has functionality that wasn't in the requirements? A: It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.

Q: How can QA processes be implemented without stifling productivity? A: By implementing QA processes slowly over time, using consensus to reach agreement on processes, and adjusting and experimenting as an organization grows and matures, productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection, panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize the time required in meetings, and promote training as part of the QA process. However, no one (especially talented technical types) likes rules or bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug fixing and calming of irate customers. (See the Books section's 'Software QA', 'Software Engineering' and 'Project Management' categories for useful books with more information.)

Q: What if an organization is growing so fast that fixed QA processes are impossible? A: This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than:

- Hire good people
- Management should ruthlessly prioritize quality issues and maintain focus on the customer
- Everyone in the organization should be clear on what quality means to the customer

Q: How does a client/server environment affect testing? A: Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing. (See the Tools section for web resources with listings that include these kinds of test tools.)

Q: How can World Wide Web sites be tested? A: Web sites are essentially client/server applications, with web servers and browser clients. Consideration should be given to the interactions between HTML pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, JavaScript, plug-in applications) and applications that run on the server side (such as CGI scripts, database interfaces, logging applications, dynamic page generators, ASP, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:

- What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
- Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
- What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
- Will down time for server and content maintenance/upgrades be allowed? How much?
- How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
- What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking and controlling page content, graphics, links, etc.?
- Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
- Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?

- How will internal and external links be validated and updated? How often?
- Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities and real-world Internet traffic congestion problems to be accounted for in testing?
- How extensive or customized are the server logging and reporting requirements? Are they considered an integral part of the system and do they require testing?
- How are CGI programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled and tested?
- Pages should be 3-5 screens max, unless content is tightly focused on a single topic. If larger, provide internal links within the page.
- The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within the site.
- Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser type.
- All pages should have links external to the page; there should be no dead-end pages.
- The page owner, revision date, and a link to a contact person or organization should be included on each page.

Q: What is Extreme Programming and what's it got to do with testing? A: Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck, who described the approach in his book 'Extreme Programming Explained' (see the Softwareqatest.com Books page). Testing ('extreme testing') is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first, before the application is developed. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating and re-prioritizing is expected.
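The test-first practice at the heart of XP can be shown in a few lines. In the minimal sketch below, the test is written before the function it exercises, fails until the function is implemented, and then stays in the suite as a regression check; word_count and its expected values are hypothetical examples, not taken from this document.

    # Hypothetical example, written test-first.
    # Step 1: write the test first; running it now would fail because
    # word_count does not exist yet.
    def test_word_count():
        assert word_count("") == 0
        assert word_count("one") == 1
        assert word_count("tic tac toe") == 3

    # Step 2: write the simplest code that makes the test pass.
    def word_count(text):
        return len(text.split())

    # Step 3: rerun the test after every change; it stays in the suite.
    test_word_count()
    print("test-first check passed")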
