
QUALITY TESTING

Types, Test Driven Development (TDD) and JUnit

Dr Simon Spacey

B.Sc. Hons. (York), M.Sc. (Lancaster), M.B.A. (Cass), J.L.P. (Keio), D.E.A. (Montpellier), D.CSC. (Cambridge), D.I.U. (Imperial), Ph.D. (Imperial)


20/05/2013


Types of Testing
!! Main Types of Pass-Fail Testing:
1. Unit Testing: Development Unit Level, e.g. Class Methods
2. Integration Testing: Interfaces Between Developed Units
3. System Testing: Performance, Scalability, Reliability, Security...
4. User Acceptance Testing (UAT): Functional and Non-Functional

!! Cost of Fixing Errors (Adapted from [2]):


When Identified \ When Introduced   Requirements   Design    Development
Requirements (Req.)                 1              -         -
Design                              3              1         -
Unit/Integration                    5-10           10        1
System                              10             15        10
UAT                                 10-100         25-100    10-25

[1] T. Hammell, Test-Driven Development: A J2EE Example, Apress, 2004.

[2] S. McConnell, Code Complete 2, Microsoft Press, 2004.


Test Driven Development


!! Process:
1. Create Unit Tests for a Required Behaviour
2. Run Tests on No Code to Prove they Fail
3. Develop Code to Pass Tests
4. Refactor Code
5. Move to Next Behaviour at Step 1
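As a rough sketch of steps 1-3 (the Calculator class, its add method and the test name are hypothetical, not from the slides): the test below is written first and fails while Calculator does not exist, then the minimal class underneath is developed to make it pass, ready for refactoring in step 4.

// Hypothetical TDD sketch: the test is written first (step 1) and fails (step 2),
// then Calculator is developed just far enough to make it pass (step 3).
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CalculatorTest {

    @Test
    public void testAddReturnsSumOfTwoNumbers() {
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.add(2, 3));
    }
}

// Step 3 code, kept minimal; step 4 refactors it without changing the test.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}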

[Embedded figure: first page of reference [3], "Predicting Regression Test Failures Using Genetic Algorithm-Selected Dynamic Performance Analysis Metrics" by Michael Mayo and Simon Spacey, SSBSE 2013]

!! Positives and Negatives:
!! Writing Tests Before Developing Increases Clarity Before Coding
!! Reliable Progress Metrics
!! Reduces Chance of Feature Bloat and Provides Examples of Usage
!! Explicit Refactoring Stage Encourages Good Code Documentation and Styling
!! Unit Tests Can be Written by a Different Person than the Developer
! Overhead: Writing Unit Tests and Running them during Regression Testing [3] (a Smoke Suite can keep this down)
! Assumes Quality: Who Tests the Tester? Are Tests Reliable? How do we Know Code Coverage? (Use OpenPAT)

[3] Mayo and Spacey, PATEK, SSBSE, St. Petersburg, 2013.

JUnit: Overview
!! JUnit Overview:
!! A Test Harness to Test Java Functional Outputs and State Changes with Defined Inputs
!! Typically Used for Bottom-Up Testing before Code is Developed (Ideal) or after Code Review
!! Test Cases Can be Written by a Technical User/BA/Experienced Tester/PM (Ideal), an Interface User (without source = Black Box) or a Pair Programmer (with source = White Box)
!! Historic Tests are Used for Regression Testing so we Know New Features Don't Break Old Features
!! Can be Incorporated into Smoke Tests: A Subset of Tests to Quickly Confirm Main Features
!! To Access Privates, Need to use Reflection, Getters/Setters, Nested Classes or Package Scope (see the sketch below)
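A minimal sketch of the Reflection option, assuming a hypothetical Account class with a private balance field; Account is nested here only to keep the sketch self-contained, and the same reflection calls apply to a top-level class.

// Hypothetical sketch: reaching a private field from a JUnit test via reflection.
import static org.junit.Assert.assertEquals;

import java.lang.reflect.Field;

import org.junit.Test;

public class AccountReflectionTest {

    // Minimal class under test, assumed for this sketch only.
    static class Account {
        private long balance = 0L;
    }

    @Test
    public void testPrivateBalanceStartsAtZero() throws Exception {
        Account account = new Account();

        // Look up the private field and open it up for the test.
        Field balanceField = Account.class.getDeclaredField("balance");
        balanceField.setAccessible(true);

        // Read the private value and check the expected starting state.
        assertEquals(0L, balanceField.getLong(account));
    }
}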

!! Normally Expect 100% Test Coverage, Meaning:

!! Every Basic Block Should be Tested (= Assembly/Bytecode Coverage of 100%)
!! Need to Prove Positive Normal Use Cases Pass and...
!! ... Prove Negative Abnormal Cases Do Actually Fail (see the sketch below)
Use OpenPAT to prove this coverage [2]
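A minimal sketch of the positive/negative point above, using a hypothetical divide helper: one test proves the normal case passes and one proves the abnormal case really is rejected.

// Hypothetical sketch: one positive (normal) case and one negative (abnormal) case.
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import org.junit.Test;

public class DivisionTest {

    // Helper under test, assumed for this sketch: integer division.
    private int divide(int a, int b) {
        return a / b;  // throws ArithmeticException when b == 0
    }

    @Test
    public void testNormalDivisionPasses() {
        assertEquals(4, divide(8, 2));
    }

    @Test
    public void testDivisionByZeroIsRejected() {
        try {
            divide(8, 0);
            fail("expected an ArithmeticException for division by zero");
        } catch (ArithmeticException expected) {
            // the abnormal case does actually fail, as required
        }
    }
}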

JUnit: Details
!! Structure:
!! Separate Class Called <Class Being Tested>Test
!! Import org.junit.* and (statically) org.junit.Assert.*
!! Test Methods for Each Class Method, usually named test<Method>

Note: do not rely on tests running in definition order; JUnit does not guarantee method ordering (use @FixMethodOrder if order matters).

!! Key Annotations:
!! @Test: a test method to run on each test run; takes optional parameters (expected = <Exception>.class) and (timeout = <ms>L)
!! @Before/@After: run before/after each test
!! @BeforeClass/@AfterClass: run once per test set (must be static)
!! @RunWith(Suite.class) with @Suite.SuiteClasses({<class>[,...]}): run a suite of test classes

Note: JUnit creates a new test class instance for each test method (JUnit's Design Decision), so per-test state belongs in @Before.
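A minimal sketch pulling the structure and annotations above together; the StackTest name and the ArrayDeque under test are assumptions for illustration, not part of the original slide.

// Hypothetical sketch of the lifecycle annotations and the @Test parameters.
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.NoSuchElementException;

import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class StackTest {

    private Deque<Integer> stack;

    @BeforeClass
    public static void setUpClass() {
        // runs once before the whole test set (must be static)
    }

    @AfterClass
    public static void tearDownClass() {
        // runs once after the whole test set (must be static)
    }

    @Before
    public void setUp() {
        // runs before each test; JUnit also creates a fresh StackTest instance per test
        stack = new ArrayDeque<Integer>();
    }

    @After
    public void tearDown() {
        // runs after each test
        stack.clear();
    }

    @Test
    public void testPushThenPop() {
        stack.push(7);
        assertEquals(Integer.valueOf(7), stack.pop());
        assertTrue(stack.isEmpty());
    }

    @Test(expected = NoSuchElementException.class)
    public void testPopOnEmptyStackThrows() {
        stack.pop();  // negative case: must throw for the test to pass
    }

    @Test(timeout = 1000L)
    public void testManyPushesCompleteWithinOneSecond() {
        for (int i = 0; i < 1000; i++) {
            stack.push(i);
        }
    }
}

// In a separate file, a suite class can run several test classes together:
//
//   @RunWith(Suite.class)
//   @Suite.SuiteClasses({ StackTest.class })
//   public class AllTests { }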

!! Return Test Results With:

!! fail(["message"]): fail with an optional message
!! assertEquals(["message",] exp, act): fail if act != exp
!! assertSame(["message",] o1, o2): fail if o1 and o2 are not the same instance
!! assertTrue(["message",] b): fail if b is not true
There are corresponding Not versions too (e.g. assertNotSame, assertFalse).
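A minimal sketch of these assertion forms, including two of the Not counterparts; the string values are arbitrary examples.

// Hypothetical sketch of the assertion forms, including two "Not" counterparts.
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotSame;
import static org.junit.Assert.assertSame;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;

import org.junit.Test;

public class AssertionFormsTest {

    @Test
    public void testAssertionForms() {
        String original = "junit";
        String sameReference = original;
        String copy = new String(original);

        assertEquals("values should match", "junit", copy);            // value equality
        assertSame("same instance expected", original, sameReference);
        assertNotSame("distinct instances expected", original, copy);
        assertTrue("copy should equal original", copy.equals(original));
        assertFalse("copy should not be empty", copy.isEmpty());

        if (copy.length() != 5) {
            fail("unexpected length: " + copy.length());                // explicit failure
        }
    }
}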

Real-Time In-Lecture Demo

