
Test Scenarios & Test Cases:

A test case consists of a set of input values, execution preconditions, expected results, and execution postconditions, developed to cover a certain test condition. A test scenario, by contrast, is essentially a test procedure. A test scenario has a one-to-many relationship with test cases: one scenario can have multiple test cases, and we write test cases for each test scenario. So when starting testing, first prepare the test scenarios, then create the different test cases for each scenario. Test cases are derived (written) from test scenarios, and the scenarios themselves are derived from use cases. A test scenario represents a series of actions that are associated together, while a test case represents a single (low-level) action by the user. A scenario is a thread of operations, whereas a test case is a set of inputs given to the system together with the expected outputs.

For example: Checking the functionality of the Login button is a test scenario, and the test cases for this test scenario are: 1. Click the button without entering a user name or password. 2. Click the button after entering only the user name. 3. Click the button after entering a wrong user name and wrong password. And so on. In short,

Test Scenario is What to be tested and Test Case is How to be tested.
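To make the scenario/test-case split concrete, here is a minimal pytest sketch. The login() function and its credentials are invented stand-ins (with "mercury" borrowed from the flight reservation example later in this document), not a real API:

# login() is a dummy stand-in for the application under test.
def login(username, password):
    return username == "admin" and password == "mercury"

# Each function implements one of the test cases listed above; run with pytest.
def test_login_with_empty_fields():
    assert login("", "") is False

def test_login_with_username_only():
    assert login("admin", "") is False

def test_login_with_wrong_credentials():
    assert login("wrong", "wrong") is False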

Bug Life Cycle:

From discovery to resolution, a defect moves through a definite lifecycle called the defect lifecycle. Let's walk through it. Suppose a tester finds a defect. The defect is assigned the status New. The defect is then assigned to the development project manager, who analyzes it and checks whether it is a valid defect. Consider that in the flight reservation application, the only valid password is "mercury", but you test the application with some random password, which causes a logon failure, and you report it as a defect. Defects like this, caused by corrupted test data, misconfigurations in the test environment, invalid expected results, and so on, are assigned the status Rejected. If the defect is valid, it is next checked for scope. Suppose you find a problem with the email functionality, but that functionality is not part of the current release; such defects are Postponed. Next, the manager checks whether a similar defect was raised earlier; if yes, the defect is assigned the status Duplicate. If not, the defect is assigned to a developer, who starts fixing the code. During this stage the defect has the status In Progress. Once the code is fixed, the defect is assigned the status Fixed. Next, the tester re-tests the code. If the test case passes, the defect is Closed. If the test case fails again, the defect is Reopened and assigned back to the developer.

Consider a situation where, during the first release of Flight Reservation, a defect was found in Fax Order, which was fixed and assigned the status Closed. If the same defect resurfaces during the second (upgrade) release, the closed defect is reopened. That is the bug life cycle in a nutshell.

Severity & Priority:

1) Severity: Severity is the extent to which the defect can affect the software; in other words, it defines the impact that a given defect has on the system. For example: if an application or web page crashes when a remote link is clicked, clicking that remote link is rare for a user, but the impact of the application crashing is severe, so the severity is high but the priority is low. Severity can be of the following types:

Critical: The defect results in the termination of the complete system, or of one or more components of the system, and causes extensive corruption of data. The failed function is unusable and there is no acceptable alternative method to achieve the required results.

Major: The defect results in the termination of the complete system, or of one or more components of the system, and causes extensive corruption of data. The failed function is unusable, but an acceptable alternative method exists to achieve the required results.

Moderate: The defect does not result in termination, but causes the system to produce incorrect, incomplete, or inconsistent results.

Minor: The defect does not result in termination and does not damage the usability of the system, and the desired results can easily be obtained by working around the defect.

Cosmetic: The defect is related to the enhancement of the system, where the changes concern the look and feel of the application.

2) Priority: Priority defines the order in which we should resolve defects: should we fix a defect now, or can it wait? The priority is set by the tester for the developer, indicating the time frame in which to fix the defect. If high priority is set, the developer has to fix it at the earliest. Priority is set based on the customer requirements. For example: if the company name is misspelled on the home page of the website, the priority to fix it is high but the severity is low. Priority can be of the following types:

Low: The defect is an irritant which should be repaired, but the repair can be deferred until after more serious defects have been fixed.

Medium: The defect should be resolved in the normal course of development activities; it can wait until a new build or version is created.

High: The defect must be resolved as soon as possible because it affects the application or the product severely; the system cannot be used until the repair has been done.

A few important severity/priority combinations that are often asked about in interviews:

High Priority & High Severity: An error in the basic functionality of the application that does not allow the user to use the system. (E.g. in a site maintaining student details, if saving a record does not work, this is a high priority, high severity bug.)

High Priority & Low Severity: Spelling mistakes on the cover page, heading, or title of an application.

High Severity & Low Priority: An error in the functionality of the application (for which there is no workaround) that does not allow the user to use the system, but which occurs on a link that is rarely used by the end user.

Low Priority & Low Severity: Any cosmetic or spelling issue within a paragraph or a report (not on the cover page, heading, or title).

Verification:

Verification makes sure that the product is designed to deliver all functionality to the customer. Verification is done at the start of the development process. It includes reviews, meetings, walkthroughs, inspections, etc. to evaluate documents, plans, code, requirements, and specifications. Verification answers questions like: Am I building the product right? Am I accessing the data right (in the right place, in the right way)? Verification is a low-level activity, performed during development on key artifacts through walkthroughs, reviews and inspections, mentor feedback, training, checklists, and standards. It is a demonstration of the consistency, completeness, and correctness of the software at each stage, and between stages, of the development life cycle.

Validation:

Validation determines whether the system complies with the requirements, performs the functions for which it is intended, and meets the organization's goals and user needs. Validation is done at the end of the development process and takes place after verification is completed. Validation answers questions like: Am I building the right product? Am I accessing the right data (in terms of the data required to satisfy the requirements)? Validation is a high-level activity, performed after a work product is produced, against established criteria, ensuring that the product integrates correctly into the environment. It is a determination of the correctness of the final software product with respect to the user needs and requirements.

Test Strategy: A Test Strategy document is a high-level document, normally developed by the project manager. This document defines the software testing approach used to achieve the testing objectives. The test strategy is normally derived from the Business Requirement Specification (BRS) document.

The Test Strategy document is a static document, meaning it is not updated very often. It sets the standards for testing processes and activities, and other documents, such as the Test Plan, draw their contents from the standards set in the Test Strategy document. Some companies include the test approach or strategy inside the Test Plan, which is fine and is usually the case for small projects. For larger projects, however, there is one Test Strategy document and a number of Test Plans, one for each phase or level of testing.

Components of the Test Strategy document:

Scope and objectives
Business issues
Roles and responsibilities
Communication and status reporting
Test deliverables
Industry standards to follow
Test automation and tools
Testing measurements and metrics
Risks and mitigation
Defect reporting and tracking
Change and configuration management
Training plan

Test Plan: A test plan is a document which includes an introduction, assumptions, the list of test cases, the list of features to be tested, the approach, deliverables, resources, risks, and scheduling. A test plan is a systematic approach to testing a system such as a machine or software; the plan typically contains a detailed understanding of what the eventual workflow will be. The Test Plan document, unlike the Test Strategy, is derived from the Product Description, the Software Requirement Specification (SRS), or Use Case documents. The Test Plan document is usually prepared by the Test Lead or Test Manager, and the focus of the document is to describe what to test, how to test, when to test, and who will do which test. It is not uncommon to have one Master Test Plan as a common document for all test phases, with each test phase having its own Test Plan document. There is much debate as to whether the Test Plan document should be a static document like the Test Strategy document mentioned above, or whether it should be updated often to reflect changes in the direction of the project and its activities. My own view is that when a testing phase starts and the Test Manager is controlling the activities, the test plan should be updated to reflect any deviation from the original plan; after all, planning and control are continuous activities in the formal test process.

Components of the Test Plan document:

Test plan ID
Introduction
Test items
Features to be tested
Features not to be tested
Test techniques
Testing tasks
Suspension criteria
Features pass or fail criteria
Test environment (entry criteria, exit criteria)
Test deliverables
Staff and training needs
Responsibilities
Schedule

Boundary Value Analysis: It is widely recognized that input values at the extreme ends of the input domain cause more errors in a system: more application errors occur at the boundaries of the input domain. The boundary value analysis technique is used to identify errors at the boundaries rather than in the center of the input domain. Boundary value analysis is a natural extension of equivalence partitioning, in which test cases are selected at the edges of the equivalence classes. Test cases for an input box accepting numbers between 1 and 1000, using boundary value analysis: 1) Test cases with test data exactly on the boundaries of the input domain, i.e. the values 1 and 1000 in our case. 2) Test data with values just below the boundaries, i.e. the values 0 and 999. 3) Test data with values just above the boundaries, i.e. the values 2 and 1001. Boundary value analysis is often considered part of stress and negative testing. Note: There is no hard-and-fast rule to test only one value from each equivalence class you created for the input domain. You can select multiple valid and invalid values from each equivalence class according to your needs and prior judgment. E.g. if you put the input values 1 to 1000 in the valid-data equivalence class, you can select test case values like 1, 11, 100, 950, etc., and the same goes for the test cases drawing on the invalid data classes. This is a very basic and simple example for understanding the boundary value analysis and equivalence partitioning concepts.
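A minimal pytest sketch of these boundary cases, assuming a hypothetical is_valid() validator for the 1-1000 input box:

import pytest

def is_valid(value):
    # Hypothetical validator for the 1-1000 input box discussed above.
    return 1 <= value <= 1000

# Boundary values: exactly on each boundary, just below, and just above.
@pytest.mark.parametrize("value,expected", [
    (1, True), (1000, True),    # exactly on the boundaries
    (0, False), (999, True),    # just below each boundary
    (2, True), (1001, False),   # just above each boundary
])
def test_boundaries(value, expected):
    assert is_valid(value) == expected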

Equivalence Partitioning: In this method the input domain data is divided into different equivalence data classes. The method is typically used to reduce the total number of test cases to a finite set of testable cases while still covering maximum requirements. In short, it is the process of taking all possible test cases and placing them into classes; one test value is picked from each class while testing.

E.g.: If you are testing an input box accepting numbers from 1 to 1000, there is no point in writing a thousand test cases for all 1000 valid input numbers plus further test cases for invalid data. Using the equivalence partitioning method, the test cases above can be divided into three sets of input data, called classes; each test case is a representative of its class. So in the above example we can divide our test cases into three equivalence classes of valid and invalid inputs. Test cases for an input box accepting numbers between 1 and 1000, using equivalence partitioning: 1) One input data class with all valid inputs: pick a single value from the range 1 to 1000 as a valid test case. If you select other values between 1 and 1000, the result is going to be the same, so one test case for valid input data is sufficient. 2) An input data class with all values below the lower limit, i.e. any value below 1, as an invalid input test case. 3) An input data class with any value greater than 1000, representing the third, invalid, input class. So using equivalence partitioning you have categorized all possible test cases into three classes; test cases with other values from any class should give you the same result. We have selected one representative from each input class to design our test cases. Test case values are selected in such a way that the largest number of attributes of the equivalence class can be exercised. Equivalence partitioning uses the fewest test cases to cover the maximum requirements.
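The same hypothetical validator can illustrate equivalence partitioning, with one representative value per class (the specific representatives are arbitrary choices):

import pytest

def is_valid(value):
    # Same hypothetical validator for the 1-1000 input box.
    return 1 <= value <= 1000

# One representative per equivalence class; any value from the same class
# should give the same result.
@pytest.mark.parametrize("value,expected", [
    (500, True),    # valid class: 1 to 1000
    (-5, False),    # invalid class: below 1
    (1500, False),  # invalid class: above 1000
])
def test_equivalence_classes(value, expected):
    assert is_valid(value) == expected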

STLC: Software Testing Life Cycle (STLC) defines the steps/stages/phases in testing of software.

The different stages in Software Testing Life Cycle:


1. Requirement Analysis
2. Test Planning
3. Test Case Design / Development
4. Test Environment Setup
5. Test Execution / Reporting / Defect Tracking
6. Test Cycle Closure / Retrospective Study

1) Requirement Analysis
During this phase, the test team studies the requirements from a testing point of view to identify the testable requirements. The QA team may interact with various stakeholders (client, business analyst, technical leads, system architects, etc.) to understand the requirements in detail. Requirements may be functional (defining what the software must do) or non-functional (defining system performance, security, availability). Automation feasibility for the given testing project is also assessed in this stage.

Activities:
1. Identify the types of tests to be performed.
2. Gather details about testing priorities and focus.
3. Prepare the Requirement Traceability Matrix (RTM).
4. Identify the test environment details where testing is to be carried out.
5. Automation feasibility analysis (if required).

Deliverables:
1. RTM
2. Automation feasibility report (if applicable)

2) Test Planning
This phase is also called the Test Strategy phase. Typically in this stage, a senior QA manager determines effort and cost estimates for the project, and prepares and finalizes the Test Plan.

Activities:
1. Preparation of the test plan/strategy document for the various types of testing
2. Test tool selection
3. Test effort estimation
4. Resource planning and determining roles and responsibilities
5. Training requirements

Deliverables:
1. Test plan/strategy document
2. Effort estimation document

3) Test Case Design / Development


This phase involves the creation, verification, and rework of test cases and test scripts. Test data is identified/created, reviewed, and then reworked as well.

Activities:
1. Create test cases and automation scripts (if applicable)
2. Review and baseline test cases and scripts
3. Create test data (if the test environment is available)

Deliverables:
1. Test cases/scripts
2. Test data

4) Test Environment Setup


The test environment determines the software and hardware conditions under which a work product is tested. Test environment set-up is one of the critical aspects of the testing process and can be done in parallel with the test case development stage. The test team may not be involved in this activity if the customer/development team provides the test environment, in which case the test team is required to do a readiness check (smoke test) of the given environment.

Activities:
1. Understand the required architecture and environment set-up, and prepare a hardware and software requirement list for the test environment
2. Set up the test environment and test data
3. Perform a smoke test on the build

Deliverables:
1. Environment ready with test data set up
2. Smoke test results

5) Test Execution / Reporting / Defect Tracking


During this phase the test team carries out the testing based on the test plans and the test cases prepared. Bugs are reported back to the development team for correction, and retesting is performed.

Activities:
1. Execute tests as per the plan
2. Document test results and log defects for failed cases
3. Map defects to test cases in the RTM
4. Retest the defect fixes
5. Track the defects to closure

Deliverables:
1. Completed RTM with execution status
2. Test cases updated with results
3. Defect reports

6) Test Cycle Closure / Retrospective study


The testing team meets, discusses, and analyzes the testing artifacts to identify strategies that should be adopted in future, taking lessons from the current test cycle. The idea is to remove process bottlenecks for future test cycles and to share best practices for similar projects in future.

Activities:
1. Evaluate cycle completion criteria based on time, test coverage, cost, software quality, and critical business objectives
2. Prepare test metrics based on the above parameters
3. Document the learnings from the project
4. Prepare the test closure report
5. Qualitative and quantitative reporting of the quality of the work product to the customer
6. Test result analysis to find the defect distribution by type and severity

Deliverables:
1. Test closure report
2. Test metrics

Traceability Matrix: A Traceability Matrix (also known as a Requirement Traceability Matrix, RTM) is a table used to trace the requirements during the software development life cycle. It can be used for forward tracing (i.e. from requirements to design or coding) or backward tracing (i.e. from coding back to requirements). There are many user-defined templates for the RTM. Each requirement in the RTM document is linked with its associated test case, so that testing can be done as per the stated requirements. Furthermore, the Bug ID is also included and linked with its associated requirement and test case. The main goals of this matrix are to: make sure the software is developed as per the stated requirements; help in finding the root cause of any bug; and help in tracing the developed documents through the different phases of the SDLC.
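As an illustration, an RTM can be kept as a simple mapping from requirements to test cases and defects; the requirement, test case, and bug IDs below are invented for the sketch:

# Requirement, test case, and bug IDs below are invented for the sketch.
rtm = {
    "REQ-001": {"test_cases": ["TC-101", "TC-102"], "defects": ["BUG-11"]},
    "REQ-002": {"test_cases": ["TC-103"], "defects": []},
    "REQ-003": {"test_cases": [], "defects": []},  # no coverage yet
}

# A requirement with no linked test cases is a coverage gap.
uncovered = [req for req, row in rtm.items() if not row["test_cases"]]
print("Requirements with no test coverage:", uncovered)  # ['REQ-003']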

Alpha Testing: This is the first stage of testing and is performed within the teams (developer and QA teams). Unit testing, integration testing, and system testing, when combined, are known as alpha testing. During this phase, the following will be tested in the application: spelling mistakes, broken links, and unclear (cloudy) directions. The application will also be tested on machines with the lowest specification, to test loading times and any latency problems.

Beta Testing: This test is performed after alpha testing has been completed successfully. In beta testing, a sample of the intended audience tests the application; beta testing is also known as pre-release testing. Beta test versions of software are ideally distributed to a wide audience on the web, partly to give the program a "real-world" test and partly to provide a preview of the next release. In this phase, users will install and run the application and send their feedback to the project team, reporting typographical errors, confusing application flow, and even crashes. With this feedback, the project team can fix the problems before releasing the software to the actual users. The more issues you fix that solve real user problems, the higher the quality of your application will be, and a higher-quality application at public release increases customer satisfaction.

QC & QA:

Quality Assurance:
1. Quality Assurance helps us to build processes.
2. It is the duty of the complete team.
3. QA comes under the category of verification.
4. Quality Assurance is considered a process-oriented exercise.
5. It prevents the occurrence of issues, bugs, or defects in the application.
6. It does not involve executing the program or code.
7. It is done before Quality Control.
8. It can catch errors and mistakes that Quality Control cannot catch, and is considered a low-level activity.
9. It is human-based checking of documents or files.
10. Quality Assurance means planning done for executing a process.
11. Its main focus is on preventing defects or bugs in the system.
12. It is not considered a time-consuming activity.
13. Quality Assurance makes sure that you are doing the right things in the right way, which is why it always comes under the category of verification.
14. QA is proactive: it identifies weaknesses in the processes.

Quality Control:
1. Quality Control helps us to implement the built processes.
2. It is the duty of the testing team only.
3. QC comes under the category of validation.
4. Quality Control is considered a product-oriented exercise.
5. It detects, corrects, and reports the bugs or defects in the application.
6. It always involves executing the program or code.
7. It is done only after the Quality Assurance activity is completed.
8. It can catch errors that Quality Assurance cannot catch, and is considered a high-level activity.
9. It is computer-based execution of the program or code.
10. Quality Control means action taken on the process by executing it.
11. Its main focus is on identifying defects or bugs in the system.
12. It is considered a time-consuming activity.
13. Quality Control makes sure that whatever we have done is as per the requirements, i.e. as per what we expected, which is why it comes under the category of validation.
14. QC is reactive: it identifies the defects and also corrects them.

Regression Testing

Regression Testing: Definition, Elaboration, Details, Analogy

DEFINITION
Regression testing is a type of software testing that intends to ensure that changes (enhancements or defect fixes) to the software have not adversely affected it.

ELABORATION
There is always a likelihood that a code change will impact functionality not directly associated with that code, so it is essential to conduct regression testing to make sure that fixing one thing has not broken another. During regression testing, new test cases are not created; previously created test cases are re-executed.

LEVELS APPLICABLE TO
Regression testing can be performed during any level of testing (unit, integration, system, or acceptance), but it is mostly relevant during system testing.

EXTENT
In an ideal case a full regression test is desirable, but often there are time/resource constraints. In such cases it is essential to do an impact analysis of the changes, identify the areas of the software with the highest probability of being affected by the change and with the highest impact on users in case of malfunction, and focus testing around those areas. Due to the scale and importance of regression testing, more and more companies and projects are adopting regression test automation tools.

LITERAL MEANING OF REGRESSION
Regression [noun]: the act of going back to a previous place or state; return or reversion.
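Picking up the automation point under EXTENT: one common approach, sketched here with pytest's marker feature, is to tag regression tests so the subset chosen by impact analysis can be run on its own. The Fax Order function is a hypothetical stand-in mirroring the bug life cycle example earlier:

import pytest

def compute_fax_order_total(tickets, price):
    # Hypothetical stand-in for the Fax Order feature mentioned earlier.
    return tickets * price

# Tagging lets the team run only the regression subset: pytest -m regression
# (register the marker in pytest.ini to avoid an unknown-marker warning).
@pytest.mark.regression
def test_fax_order_total():
    assert compute_fax_order_total(2, 150.0) == 300.0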

ANALOGY
You can think of regression testing as similar to the moonwalk or backslide, a dance technique (popularized by Michael Jackson) that gives the illusion of the dancer being pulled backwards while attempting to walk forward. Many will not agree with this analogy, but what the heck, let's have some fun!

Acceptance Testing

DEFINITION
Acceptance testing is a level of the software testing process where a system is tested for acceptability. The purpose of this test is to evaluate the system's compliance with the business requirements and assess whether it is acceptable for delivery.

ANALOGY
During the manufacture of a ballpoint pen, the cap, the body, the tail and clip, the ink cartridge, and the ballpoint are produced separately and unit tested separately. When two or more units are ready, they are assembled and integration testing is performed. When the complete pen is integrated, system testing is performed. Once system testing is complete, acceptance testing is performed to confirm that the ballpoint pen is ready to be made available to the end users.

METHOD
Usually, the black box testing method is used in acceptance testing. The testing does not normally follow a strict procedure and is not scripted, but is rather ad hoc.

TASKS
Acceptance Test Plan: prepare, review, rework, baseline.
Acceptance Test Cases/Checklist: prepare, review, rework, baseline.
Acceptance Test: perform.

When is it performed?
Acceptance testing is performed after system testing and before making the system available for actual use.

Who performs it?
Internal acceptance testing (also known as alpha testing) is performed by members of the organization that developed the software but who are not directly involved in the project (development or testing); usually it is members of product management, sales, and/or customer support. External acceptance testing is performed by people who are not employees of the organization that developed the software. Customer acceptance testing is performed by the customers of the organization that developed the software: the ones who asked the organization to develop it. (This is in the case of the software not being owned by the organization that developed it.) User acceptance testing (also known as beta testing) is performed by the end users of the software; they can be the customers themselves or the customers' customers.

Definition by ISTQB: acceptance testing is formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers, or other authorized entity to determine whether or not to accept the system.

Smoke Testing

Smoke Testing: Definition, Elaboration, Advantages, Details

DEFINITION
Smoke testing, also known as build verification testing, is a type of software testing that comprises a non-exhaustive set of tests aiming to ensure that the most important functions work. The result of this testing is used to decide whether a build is stable enough to proceed with further testing. The term smoke testing, it is said, came to software testing from a similar type of hardware testing, in which the device passed the test if it did not catch fire (or smoke) the first time it was turned on.

ELABORATION
Smoke testing covers most of the major functions of the software, but none of them in depth. The result of this test is used to decide whether to proceed with further testing: if the smoke test passes, go ahead with further testing; if it fails, halt further tests and ask for a new build with the required fixes. If an application is badly broken, detailed testing might be a waste of time and effort. A smoke test helps expose integration and other major problems early in the cycle. It can be conducted on both newly created and enhanced software. A smoke test is performed manually or with the help of automation tools/scripts; if builds are prepared frequently, it is best to automate smoke testing. As an application matures, with the addition of more functionality, the smoke test needs to be made more expansive. Sometimes it takes just one incorrect character in the code to render an entire application useless.

ADVANTAGES
It exposes integration issues. It uncovers problems early. It provides some level of confidence that changes to the software have not adversely affected the major areas (the areas covered by the smoke test, of course).

LEVELS APPLICABLE TO
Smoke testing is normally used at the integration testing, system testing, and acceptance testing levels.
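A minimal sketch of an automated smoke suite in pytest; the App class is an assumed in-process stand-in, where a real smoke suite would drive the deployed build:

# App is a hypothetical stand-in for the application under test.
class App:
    def start(self):
        return True

    def get_page(self, path):
        return "<html>Login</html>" if path == "/login" else ""

app = App()

# Each smoke test touches one critical function: broad, not deep.
def test_application_starts():
    assert app.start() is True

def test_login_page_loads():
    assert "Login" in app.get_page("/login")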

NOTE
Do not consider smoke testing a substitute for functional/regression testing.

Defect Density

DEFINITION
Defect density is the number of confirmed defects detected in a software/component during a defined period of development/operation, divided by the size of the software/component.

ELABORATION
The defects counted are those confirmed and agreed upon (not just reported); dropped defects are not counted. The period might be: a duration (say, the first month, a quarter, or a year); each phase of the software life cycle; or the whole software life cycle. The size is measured in Function Points (FP) or Source Lines of Code.

DEFECT DENSITY FORMULA
Defect Density = Number of confirmed defects / Size of the software (in KLOC or function points)
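Applied in code, the formula is a one-liner; the numbers in the example are invented for illustration:

def defect_density(confirmed_defects, size_kloc):
    # Defects per thousand lines of code; size in function points works the
    # same way, only the unit of the result changes.
    return confirmed_defects / size_kloc

# Example: 30 confirmed defects in a 15 KLOC component.
print(defect_density(30, 15))  # 2.0 defects per KLOC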

USES
For comparing the relative number of defects in various software components, so that high-risk components can be identified and resources focused on them. For comparing software/products, so that the quality of each software/product can be quantified and resources focused on those with low quality.

What do you like the most about testing?
There are several answers you can give for this question. Here are a few examples: you enjoy the process of hunting down bugs; your experience and background have been focused on enhancing testing techniques; you like being in the last phase of work before the product reaches the customer; you consider your contribution to the whole development process to be very important.

UAT

User acceptance testing validates end-to-end business processes, system transactions, and user access; it confirms that the system or application is functionally fit for use and behaves as expected. It also identifies areas where user needs are not included in the system, or where the needs are incorrectly specified or interpreted in the system.

The specific focus during UAT should be the exact real-world usage of the application. UAT is done in an environment that simulates the real-world or production environment, and the test cases are written using real-world scenarios for the application: the test team develops test scenarios, and scenario-based testing is used to conduct user acceptance testing.

The key deliverables of user acceptance testing:
The Test Plan: outlining the testing strategy
The User Acceptance Test Cases: helping the team to effectively test the application
The Test Log: a log of all the test cases executed and the actual results
User Sign-Off: customer buy-in, indicating the customer finds the product delivered to their satisfaction

Definition: Testing an application prior to customer delivery, for functionality and usability, using real-world scenarios that resemble how the application will be employed by the end user. Test results are documented, as are any modifications made to fix problems discovered during the test.

What is a Test Suite?
A test suite is a collection of tests used to validate the behavior of a product. The scope of a test suite varies from organization to organization; there may be several test suites for a particular product, for example. In most cases, however, a test suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

Equivalence partitioning:
Dividing the test input data into ranges of values and selecting one input value from each range is called equivalence partitioning. This is a black box test design technique used to design effective test cases, and it can be applied at all levels of testing: unit, integration, system, and so forth. We cannot test all possible values in the input domain, because if we attempted this the number of test cases would be too large. In this method, input data is divided into different classes, each class representing the input criteria of an equivalence class, and we then select one input from each class. The technique is used to reduce an effectively infinite number of test cases to a finite number, while ensuring that the selected test cases are still effective and cover all possible scenarios.

Let's take a very basic and simple example to understand the equivalence partitioning concept. If an application accepts an input range from 1 to 100, using equivalence classes we can divide the inputs into classes, for example one for valid input and others for invalid input, and design one test case from each class. In this example the test cases are chosen as follows: one for the valid input class, i.e. select any value between 1 and 100 (we are not writing a hundred test cases for each value, since any one value from this equivalence class should give the same result); one for invalid data below the lower limit, i.e. any value below 1; and one for invalid data above the upper limit, i.e. any value above 100.

Boundary value analysis:
For the most part, errors are observed at the extreme ends of the input values, so these extreme values, like start/end or lower/upper values, are called boundary values, and the analysis of these boundary values is called boundary value analysis. It is also sometimes known as range checking. Boundary value analysis is another black box test design technique, used to find errors at the boundaries of the input domain rather than in the center of the input. Equivalence partitioning and boundary value analysis are linked to each other and can be used together at all levels of testing. Test cases can be derived based on the edges of the equivalence classes. Each boundary has a valid boundary value and an invalid boundary value, and test cases are designed for both; typically we choose one test case from each boundary. Finding defects using the boundary value analysis technique is very effective, and it can be used at all test levels. You can select multiple test cases from the valid and invalid input domains based on your needs or previous experience, but remember to select at least one test case from each input domain.

Let's take the same example as above to understand the boundary value analysis concept: one test case for the exact boundary values of the input domain, i.e. 1 and 100; one test case for the values just below each boundary, i.e. 0 and 99; and one test case for the values just above each boundary, i.e. 2 and 101.

1. Equivalence partitioning divides the input domain into classes of data from which test cases are derived. E.g. for an input range from 1 to 1000 there are three classes of inputs: below 1, above 1000, and between 1 and 1000.
2. Boundary value analysis is used to check the application's behavior at its boundaries. E.g. if a field accepts a value range from 1 to 1000, boundary value analysis gives the following inputs for testing that field: 0, 1, 2 at the lower boundary and 999, 1000, 1001 at the upper boundary.

What is functional testing? Explain it with an example.
Functional testing means testing the application against the business requirements. It is executed using the functional specifications given by the client, or the design specifications according to the use cases given by the design team. The role of functional testing is to validate the behavior of an application. Functional testing is important because it verifies that your system is fit for release: the functional tests define your working system in a useful manner. In functional testing the tester has to validate that all the specified requirements of the client, whatever is stated in the SRS or BRS, have been incorporated. Functional testing concentrates on customer requirements, whereas non-functional testing concentrates on customer expectations: functional test cases target business goals, and non-functional test cases target performance, resource utilization, usability, compatibility, etc. Functional testing is a part of system testing. As an example, if you are functionally testing a word processing application, a partial list of checks you would perform minimally includes creating, saving, editing, spell-checking, and printing documents.

Why do we use stubs and drivers?
Stubs are dummy modules that are referred to as "called programs"; they are used in integration testing with the top-down approach, when lower-level sub-programs are still under construction. Stubs are dummy modules that simulate the low-level modules. Drivers are also a form of dummy module, referred to as "calling programs"; they are used in bottom-up integration testing, when the main programs are still under construction.

Drivers are dummy modules that simulate the high-level modules. An example of stubs and drivers (see the code sketch after this section): suppose we have three modules, Login, Home, and User. The Login module is ready and needs to be tested, but it calls functions from Home and User, which are not ready. To test the selected module, we write a short dummy piece of code which simulates Home and User and returns values to Login; this piece of dummy code is called a stub, and it is used in top-down integration. Considering the same example the other way around: if the Home and User modules are ready and the Login module is not, and Home and User need values returned from Login, then we write a short piece of dummy code for Login which returns values for Home and User; this piece of code is called a driver, and it is used in bottom-up integration. Conclusion: stubs act as called functions in top-down integration, and drivers act as calling functions in bottom-up integration.

What is bidirectional traceability?
Bidirectional traceability needs to be implemented both forward and backward (i.e. from requirements to end products, and from end products back to requirements). When the requirements are managed well, traceability can be established from a source requirement to its lower-level requirements, and from the lower-level requirements back to their source. Such bidirectional traceability helps determine that all source requirements have been completely addressed and that all lower-level requirements can be traced to a valid source.

Difference between STLC and SDLC?
STLC is the software test life cycle. It starts with:
Preparing the test strategy.
Preparing the test plan.
Creating the test environment.
Writing the test cases.
Creating test scripts.
Executing the test scripts.
Analyzing the results and reporting the bugs.
Doing regression testing.
Test exiting.

SDLC is the software (or system) development life cycle. Its phases are:
Project initiation.
Requirement gathering and documenting.
Designing.
Coding and unit testing.
Integration testing.
System testing.
Installation and acceptance testing.
Support or maintenance.
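Here is the promised sketch of the Login/Home/User stub-and-driver example; all names and the "mercury" check are invented to mirror the example above, not taken from a real system:

def home_stub(user):
    # Stub: a dummy "called program" standing in for the unfinished Home module.
    return "home page for " + user

def login(user, password, load_home=home_stub):
    # The module under test; it calls Home, so the stub is injected here.
    if password == "mercury":
        return load_home(user)
    return "login failed"

# Driver: a dummy "calling program" that exercises login() before any real
# caller exists (this is the role a driver plays in bottom-up integration).
if __name__ == "__main__":
    print(login("alice", "mercury"))  # home page for alice
    print(login("alice", "wrong"))    # login failed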

Coverage is the measurement of how much of the code is exercised by the test cases. It includes:
Statement coverage
Branch coverage / decision coverage
Condition coverage
Path coverage

The techniques used for writing test cases are:
1. Boundary value analysis.
2. Equivalence class partitioning.
3. Error guessing.
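A small example of the difference between statement and branch coverage; the classify() function is invented for illustration:

def classify(x):
    result = "non-negative"
    if x < 0:
        result = "negative"
    return result

# classify(-1) alone executes every line above (full statement coverage) but
# exercises only the True branch of the if. Branch/decision coverage also
# requires a case where the condition is False, e.g. classify(5).
def test_negative():
    assert classify(-1) == "negative"

def test_non_negative():
    assert classify(5) == "non-negative"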
What is End-to-End Testing? End-to-end testing is the type of testing used to validate the flow of the application from start to end without a glitch. A real-time environment is created for this testing process. It helps in assessing the interaction of the software with the database, with other hardware, applications, or systems, or within the software itself. In this process, the entire application is tested to find whether there is a glitch in any part of the software.

When a defect is detected in the system and fixed, end-to-end testing is carried out to ensure that no new defects have been introduced into the system. In a number of organizations, end-to-end testing is the testing carried out from the first stage of software testing to the last.

What is System Testing? System testing is used to validate that the software developed is in accordance with the requirements of the end user. Different modules of the software are validated in the process, taking into account the performance of the system as a whole: the entire system is validated as a whole. Defects are unearthed in the system as a whole, not in the individual components which make up the system. The aim of system testing is to test the system from a user perspective, taking into account how the user will feel about the system and how convenient it is to use the system as a whole.

End-to-End Testing vs. System Testing: When end-to-end testing is carried out, it is the flow of activities in the system, from the start to the end of the system, that is tested. In system testing, on the other hand, the system as a whole is tested to find defects, if any. In most cases, end-to-end testing is carried out after changes have been made to the system, while system testing is carried out towards the end of software development, where the application is validated against the requirements of the end user. To explain the difference further, we can take an example: if an email page is being tested, the starting point in end-to-end testing is logging into the page, while the end point is when you log out of the page. System testing, on the other hand, is when you work through the entire system: logging into the system, sending an email, opening an email, replying to an email, forwarding an email, etc., and finally logging out of the system. Any defect in moving from one component to another, and in the working of the components themselves, is the target of system testing. Therefore, end-to-end testing is often considered to be a subset of system testing. End-to-end testing and system testing are often considered to be the same, but from the above discussion it is clear that they differ. More than the difference between them, it is important to note that both have an important role to play in the overall process of software testing; using both, a number of defects can be unearthed. Read more at Buzzle: http://www.buzzle.com/articles/end-to-end-testing-vs-systemtesting.html
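A sketch of the email example as an automated end-to-end test; MailApp and its "secret" password are hypothetical stand-ins for the real application:

class MailApp:
    # Hypothetical in-memory stand-in for the email application under test.
    def __init__(self):
        self.logged_in = False
        self.outbox = []

    def login(self, user, password):
        self.logged_in = (password == "secret")
        return self.logged_in

    def send(self, to, body):
        if not self.logged_in:
            raise RuntimeError("not logged in")
        self.outbox.append((to, body))

    def logout(self):
        self.logged_in = False

# One end-to-end test walks the whole flow, from login through logout.
def test_end_to_end_email_flow():
    app = MailApp()
    assert app.login("alice", "secret")
    app.send("bob@example.com", "hi")
    assert ("bob@example.com", "hi") in app.outbox
    app.logout()
    assert not app.logged_in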
