
GENERATING TEST CASES FROM THE GUI MODEL

Izzat Alsmadi, Kenneth Magel

Department of Computer Science
North Dakota State University
{kenneth.magel, izzat.alsmadi}@ndsu.edu

ABSTRACT
Software testing is an expensive stage of the software project lifecycle, and test automation is expected to reduce its
cost. Graphical User Interface (GUI) testing is taking an increasingly large role in overall project validation, and GUI
test automation is a major challenge for test automation in general. Most current GUI test automation tools are only
partially automated: they require the involvement of users or testers at several stages of the testing process. This
paper presents research on generating GUI test cases, and a supporting framework, by first transforming the GUI
implementation into a tree model. The transformation makes it easier to automate the generation and execution of
test cases. GUI code has characteristics that distinguish it from the rest of a project's code, so generating test cases
from it requires algorithms different from those usually applied in test case generation. We developed several
automated GUI test generation algorithms that need no user involvement and that ensure branch coverage in the
generated test cases. GUI execution and verification are accomplished by simulating user interactions and then
comparing the output of the execution to the input or test suite files.

KEYWORDS
GUI testing, GUI modeling, test automation, test case generation algorithms.

INTRODUCTION
Software testing is the process of executing a program with the intent of finding errors. Through software testing, we
can check whether the developed software meets its requirements. Testing helps expose errors, missing
requirements, and other quality problems.
Manual testing describes the situation where a person initiates each test, interacts with it, and interprets, analyzes,
and reports the results. Software testing is automated when there is a mechanism for running test cases without a
tester [1]. Test automation does not only cover execution; it also includes automating the generation of test cases and
the verification of the results. Automated tests should run with the least possible user involvement.

GUIs are taking a larger and more complex portion of the overall project design, code, and testing resources. In many
applications, one of the major improvements promised with each new release is a better user interface. User
interfaces have steadily grown richer, more sophisticated, and more interactive, with enhanced window controls. For
users this means friendlier and better GUIs; for testers and developers it means more testing and work. GUI test
automation is chosen when tests are repeated several times [2], which makes it cost effective to build or use a test
automation tool. Software test automation tasks include selecting and generating test cases, building the test oracle,
and executing tests and validating the results. Graphical user interfaces manage controls. Controls such as buttons,
labels, textboxes, and lists are reusable objects with which users can interact. We use the word "control" as a generic
term for any graphical object that an application may contain and that is relevant to GUI testing. Controls have an
organizational hierarchy: every control has a parent (except the main entry), and every control may have child
controls. In their parent-child relations, controls form a tree, which makes them searchable in depth: start from a root
control and search among its descendants. Test case generation follows the tree when choosing the next control in a
test scenario. Controls are distinguished by their names and their unique locations in the GUI tree. We generate the
GUI tree automatically from the application executable, which contains all the detailed information about the GUI
controls. Unlike other approaches, we select only specific properties to be serialized. The selected properties are
more critical than the others and are the ones relevant to the GUI model and testing. This agrees with the fact that it
is impossible to automate testing of everything; in fact, it is impossible to test everything.
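As a minimal illustration of this tree model (the GuiControl type and its members are illustrative for this sketch, not the tool's actual API), a control tree can be represented and searched depth-first as follows:

using System.Collections.Generic;

// Illustrative node type for the GUI control tree.
public class GuiControl
{
    public string Name;                            // unique full name within the assembly
    public List<GuiControl> Children = new List<GuiControl>();

    // Depth-first search: start from this control and descend into its children.
    public GuiControl FindByName(string name)
    {
        if (Name == name) return this;
        foreach (var child in Children)
        {
            var found = child.FindByName(name);
            if (found != null) return found;
        }
        return null;                               // not in this subtree
    }
}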

We designed a tool in C# that uses reflection to parse the GUI control components. Certain control properties, those
relevant to user interface testing, are selected to be serialized. The application then uses the produced XML file to
build the GUI tree or the event flow graph and to generate the test cases. Test case generation and selection take the
tree structure and its branches into consideration.

The next section introduces the related work. Section 3 lists the goals of this research and describes the work done
toward those goals. Section 4 presents the conclusion and future work.

RELATED WORK

Software plays an increasingly important role in industry and society, and it is a highly important source of
innovation. Industry estimates are that, over the years, 70-80% of the cost of business systems lies in the development
and maintenance of the software these companies write themselves [3]. Software is also the vehicle for implementing
the other key elements of a knowledge economy [4]. Software test automation is beneficial if it is properly
implemented. Testing is an important stage of the software development life cycle; the United States loses an
estimated $59.5 billion each year due to software bugs not detected by current testing practices [5].

Several papers have been presented on GUI test automation using the object data model [6] [7] [8] [9] [10] [11]
[12] [13] [14] [15] [16] [17] [18]. The overall goals and approach of this work are similar to theirs: developing a
data-driven GUI test automation framework, or part of one. The GUI testing framework described in some of those
references is a general GUI test automation structure that includes test case generation, selection, execution, and
verification, and it can be applied to any testing or GUI testing model.

Other approaches to GUI test automation are semi-automated techniques [19][20] that use capture/replay tools such
as WinRunner, QuickTest Pro, Segue SilkTest, QARun, Rational Robot, JFCUnit, Abbot, and Pounder to create unit
tests for the GUI. Capture/replay tools have existed and been used for years, which may make them more reliable
and practical at this time, since they have been tested and improved over several generations. However, there are
several problems and issues in using record/playback tools [21]: the need to reapply all test cases, the complexity of
editing the script code, and the lack of error handling are some examples. Test oracles are also of limited reuse value
when a capture/replay tool is used [22].

Goga [23] introduces an algorithm based on a probabilistic approach, suggesting that test generation and test
execution be combined into one phase. Tretmans [24] studies test case generation algorithms for implementations
that communicate via inputs and outputs, based on specifications using Labeled Transition Systems (LTS). In the
MulSaw project [25], the team uses two complementary frameworks, TestEra and Korat, for specification-based test
automation. To test a method, TestEra and Korat automatically generate all non-isomorphic test cases from the
method's pre-condition and check its correctness using its post-condition as a test oracle. Several papers have been
published on the MulSaw project. We take a similar approach, focused on GUI testing. As explained earlier, one goal
of our automatic generation of test scenarios is to produce non-isomorphic test scenarios using the GUI tree. We also
plan to check the results of the tests by comparing the outputs with expected results prepared in advance by the
testers; those prepared files list each test scenario's outputs and expected results. Williams [26] presents an overview
of model-based software testing using UML and suggests a framework for using the various UML diagrams in the
testing process. As in model-based software testing, prior to test case generation we develop an XML model tree that
represents the actual GUI, serialized from the implementation; test cases are then generated from the XML model.
Turner and Robson [27] suggest a new technique for the validation of Object Oriented Programming Systems
(OOPS) that emphasizes the interaction between features and the object's state: each feature maps its starting or
input state to its resultant or output state, as affected by any stimuli. Tse, Chan, and Chen [28] [29] introduce normal
forms for an axiom-based test case selection strategy for OOPSs, and equivalent sequences of operations as an
integration approach for object-oriented test case generation. Orso and Silva [30] discuss some of the challenges that
object-oriented technologies add to software testing. Encapsulation and information hiding make it impossible for
the tester to check what happens inside an object during testing; due to data abstraction, there is no visibility into the
internals of objects, so their state cannot be examined directly. Encapsulation, the converse of visibility, means that
in the worst case objects can be difficult or even impossible to test.

The AI planner [31] finds the best way to reach given goal states from the current state. One issue with this research
is that it does not address the huge number of states that the GUI of even a small application can have, and hence it
may generate too many test cases. The idea of defining the GUI state as the collection of the states of all its controls,
such that a change in a single property of one control leads to a new state, is valid, but it is the reason for the huge
number of possible GUI states. The Planning Assisted Tester for grapHical Systems (PATHS) takes test goals from
the test designer as inputs and generates sequences of events automatically. These sequences of events, or plans,
become test cases for the GUI. PATHS first performs an automated analysis of the hierarchical structure of the GUI
to create hierarchical operators that are then used during plan generation. The test designer describes the
preconditions and effects of these planning operators, which subsequently become the input to the planner. Each
planning operator has two controls that represent a valid event sequence; File_Save, File_SaveAs, Edit_Cut, and
Edit_Copy are examples. The test designer begins the generation of particular test cases by identifying a task
consisting of initial and goal states, and then codes those states or uses a tool that automatically produces the code
(such a tool has not yet been developed). Defining the current and goal states automatically in a generic way can be
very challenging. In our approach, we decided to generate the test cases independently and allow the user, at a later
stage, to define pre- and post-conditions for the verification process.

GOALS AND APPROACHES

The purpose of the GUI modeler, a component of the tool we developed, is to transform the user interface into a
model that is easier to test with an automated tool. The tool developed as part of this research has two parts. The first
part serializes the GUI components of the application under test (AUT), using reflection, into an XML file that
contains all GUI controls and their selected properties. The second part uses the XML file as input to generate a call
graph tree that represents the GUI's hierarchical structure. It also generates test cases using several algorithms:
random test case generation that respects the hierarchical structure, semi-random generation in which the user can
select certain controls to override the random process and give them more emphasis, and other, more intelligent
algorithms that generate test cases simulating actual scenarios. We used the tool to generate many test cases, and it
produced them within a well-accepted time and with good branch coverage. All generated files use universal formats
(.csv or .xml), which allows the tool and its outputs to be used in other applications.
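A simplified sketch of the first part is shown below. The actual tool reflects over the AUT executable; this sketch instead walks a live Windows Forms control hierarchy, which yields the same parent-child structure, and the serialized properties here (Name, Type, Enabled, Visible) are an illustrative selection, not the tool's exact set.

using System.Windows.Forms;
using System.Xml.Linq;

static class GuiSerializer
{
    // Recursively serialize a control and its children to XML, keeping
    // only a few GUI-relevant properties.
    public static XElement Serialize(Control control)
    {
        var element = new XElement("Control",
            new XAttribute("Name", control.Name),
            new XAttribute("Type", control.GetType().Name),
            new XAttribute("Enabled", control.Enabled),
            new XAttribute("Visible", control.Visible));
        foreach (Control child in control.Controls)
            element.Add(Serialize(child));         // nesting mirrors the parent-child relation
        return element;
    }
}

// Usage: new XDocument(GuiSerializer.Serialize(mainForm)).Save("gui-model.xml");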

For branch coverage in the generated control graph, each path of the GUI tree should be tested, or listed at least once
in the test cases. We define a test case or scenario as a sequence of three or more controls; a partial test case can have
two. Generation has to take the hierarchical structure into consideration and select, for example, an object from the
next level that is within reach (a child) of the currently selected control. Test case selection should also take into
consideration the number of controls in each level.

A limitation of GUIs that should be considered is that controls in lower levels may not be accessible directly or
randomly; they have to be reached through their upper-level, or parent, controls. Also, since we pull all controls
together, names must be unique (which is true in the assembly when full names are considered) so that each object
can be uniquely identified. Full names avoid cases where the same control names are used in different forms.
The main criterion used to locate each control is the parent property that each control carries in the assembly.
Through these child-parent relations we build the XML tree. As a secondary method of locating controls, we use two
coordinates that are encoded during the generation of the GUI tree:
1. Control-level: the vertical level of the control. The main GUI is considered the highest-level control, at level zero;
other forms or controls that can be accessed directly are also level zero. As controls go down the hierarchy, their
control-level value increases.
2. Control-unit: the horizontal position of the control. For example, in Notepad, the file menu with all its sub-units
has unit value one, the edit menu is unit two, format is unit three, and so on.
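A sketch of how the two coordinates might be assigned while walking the tree (reusing the illustrative GuiControl type from the earlier sketch; the tool's actual encoding may differ):

using System.Collections.Generic;

static class TreeEncoder
{
    // Control-level: vertical depth (0 for the main GUI).
    // Control-unit: index of the top-level ancestor (file = 1, edit = 2, ...).
    public static void Encode(GuiControl node, int level, int unit,
                              Dictionary<string, (int Level, int Unit)> table)
    {
        table[node.Name] = (level, unit);
        for (int i = 0; i < node.Children.Count; i++)
            Encode(node.Children[i], level + 1,
                   level == 0 ? i + 1 : unit, table);  // units are fixed at the top level
    }
}

// Usage: TreeEncoder.Encode(root, 0, 0, table);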

Using a model checker tool for GUI model verification


Since our test cases are generated from the implementation model, we considered using a formal model checker
(LTSA) to verify the implementation model against the design model for properties such as safety, progress, and
deadlock freedom. This part is partially completed; finishing the use of LTSA as a GUI model checker is left as a
future goal of this research. Using LTSA, we define requirement properties to be checked in the generated model.
Verifying the implementation model rather than the design model is expected to expose different issues: while the
design model is closer to the requirements, it is more abstract and generally harder to test, whereas the
implementation model is closer to testing and is expected to be easier to test and to expose more relevant errors.
Those errors could reflect requirement, design, or implementation problems.

A formal definition of an event in LTSA is S1 = (e1 -> S2), where S1 is the initial state, S2 is the next state, and e1 is
the event that causes the transition. We formalize the names of the states to automate the generation of the LTS file.
For example:
FILE = (save -> ok -> saveokFILE)
FILE = (save -> cancel -> savecancelFILE)
where the next state's name is the combination of the event(s) and the initial state. This means that the same event on
the same state transits to the same next state. When saving data to a file, for example, the saved data may differ, yet
the effect of the event on the affected objects is the same. After generating the LTS file for the GUI model, we can
apply some of the available checking properties to the model. The current tool generates the LTS states file
automatically from the GUI tree.
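The sketch below shows how such LTS process definitions could be emitted, following the naming convention above (the next state's name is the event name prepended to the current state's name). It is illustrative only, not the tool's exact generator.

using System.Text;

static class FspEmitter
{
    // Emit one FSP line per state, e.g. FILE = (save -> SAVEFILE | open -> OPENFILE).
    public static string EmitState(string state, string[] events)
    {
        var sb = new StringBuilder(state.ToUpper() + " = (");
        for (int i = 0; i < events.Length; i++)
        {
            if (i > 0) sb.Append(" | ");
            sb.Append(events[i] + " -> " + (events[i] + state).ToUpper());
        }
        return sb.Append(").").ToString();
    }
}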

We can use LTSA to check for violations of safety or progress properties. Figure 1 shows an
edit-cut-copy-paste-undo LTSA demonstration example.

set EditEvents = {cut, copy, paste, undo}

EDIT = (cut -> CUTEDIT | copy -> COPYEDIT | {paste, undo} -> EDIT),
CUTEDIT = (undo -> EDIT | {cut, copy} -> CUTEDIT | paste -> PASTECUTEDIT),
COPYEDIT = (undo -> EDIT | {cut, copy} -> COPYEDIT | paste -> PASTECOPYEDIT),
PASTECOPYEDIT = (undo -> EDIT | paste -> PASTECOPYEDIT),
PASTECUTEDIT = (undo -> EDIT | paste -> PASTECUTEDIT).
property PASTE = ({cut, copy} -> paste -> PASTE).

We compose the edit process with the check property PASTE to make sure that the application does not perform a
paste action before a copy or a cut.
Fig. 1. Edit-Cut-Copy-Paste LTSA model.

GUI state definition


By using XML to store the GUI tree, we introduce a new definition of a GUI state. Rather than assuming that the
GUI state depends on every property of every control in the whole GUI, we define the GUI state as the hierarchy
embedded in the XML tree. A GUI state change then means a change in any parent-child relation, or a change in any
of the specific properties that are parsed. This new definition produces an effective reduction in GUI states. For
example, a small application like Notepad can have more than 200 controls, each of which may have more than 40
properties, giving 200 * 40 = 8,000 state variables, any one of which triggers a state change. Under our definition,
200 controls have at most 200 parent-child relations, and each control has fewer than 10 GUI-relevant properties, so
the total is reduced to approximately 2,000 state variables, a 75% reduction.

The test case generation algorithms we created are heuristic. The goal is to generate unique test cases that represent
legal test scenarios in the GUI tree. The developed algorithms are listed below.

1. Random legal sequences


In this algorithm, the tool randomly selects a first-level control, then randomly selects a child of that control, and so
on. For example, with Notepad as the AUT, if the Notepad main menu is randomly selected as the first-level control,
its children are file, edit, format, view, and help. If the file control is then randomly selected from those children, the
children of file (save, saveas, exit, close, open, print, etc.) are the valid next-level controls, from which one control is
randomly selected, and so on. Figure 2 is a sample output from the random legal sequence algorithm; a code sketch
follows the figure.

0,NOTEPADMAIN,LABEL2,,,
1,NOTEPADMAIN,BUTTON1,,,
2,NOTEPADMAIN,BUTTON1,,,
3,NOTEPADMAIN,EDIT,GOTO,,
4,NOTEPADMAIN,TXTBODY,,,
5,NOTEPADMAIN,ABOUT,ABOUTHELPLABEL2,,
6,NOTEPADMAIN,VIEW,STATUS BAR,,
7,NOTEPADMAIN,PRINTER,PRINTERLABEL1,,
8,NOTEPADMAIN,TEXT2,,,
9,NOTEPADMAIN,FORMAT,WORD WRAP,,
10,NOTEPADMAIN,LABEL1,,,
11,NOTEPADMAIN,SAVE,SAVEFILEBUTTON1,,
12,NOTEPADMAIN,HELPTOPICSFORM,LINKLABEL1,,
13,NOTEPADMAIN,FILE,EXIT,,
14,NOTEPADMAIN,FONT,FONTLABEL4,,
15,NOTEPADMAIN,FIND,TABCONTROL1,TABREPLACE,
Fig. 2. A sample output from random legal sequence test case generation.
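A minimal sketch of the random walk, again using the illustrative GuiControl type from earlier (this sketch walks down to a leaf control, whereas the tool may also stop at intermediate controls):

using System;
using System.Collections.Generic;

static class RandomGenerator
{
    // Random legal sequence: start at a root control, then repeatedly pick a
    // random child, so every generated scenario is a legal path in the GUI tree.
    public static List<string> RandomLegalSequence(GuiControl root, Random rng)
    {
        var scenario = new List<string> { root.Name };
        var current = root;
        while (current.Children.Count > 0)
        {
            current = current.Children[rng.Next(current.Children.Count)];
            scenario.Add(current.Name);
        }
        return scenario;
    }
}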

2. Random less previously selected controls


In this algorithm, controls are randomly selected as in the previous algorithm, with one difference: if the current
control was selected previously (e.g., in the test case just before this one), it is excluded from the current selection.
This forces the tool to always look for a different control to select. Figure 3 is a sample output from the random legal
sequence, less previously selected controls algorithm; a sketch of the filtered pick follows the figure.

0,NOTEPADMAIN,BUTTON2,,,
1,NOTEPADMAIN,SAVEAS,SAVEFILELABEL8,,
2,NOTEPADMAIN,BUTTON2,,,
3,NOTEPADMAIN,SAVEAS,SAVEFILELABEL8,,
4,NOTEPADMAIN,BUTTON2,,,
5,NOTEPADMAIN,SAVEAS,SAVEFILELABEL8,,
6,NOTEPADMAIN,BUTTON2,,,
7,NOTEPADMAIN,SAVEAS,SAVEFILELABEL8,,
8,NOTEPADMAIN,BUTTON2,,,
9,NOTEPADMAIN,LABEL2,,,
10,NOTEPADMAIN,FORMAT,FONT,FONTLABEL3,
11,NOTEPADMAIN,LABEL2,,,
12,NOTEPADMAIN,FORMAT,FONT,FONTLABEL3,
13,NOTEPADMAIN,LABEL2,,,
14,NOTEPADMAIN,FORMAT,FONT,FONTLABEL3,
15,NOTEPADMAIN,LABEL2,,,

Fig. 3. A sample output from random legal sequence less previously selected controls algorithm.
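The only change this algorithm needs over the previous sketch is a filtered pick, roughly as follows (illustrative):

using System;
using System.Collections.Generic;

static class ExclusionPicker
{
    // Exclude the control chosen at this position in the previous test case
    // before picking randomly; fall back to the full list if nothing else exists.
    public static GuiControl PickExcludingPrevious(List<GuiControl> candidates,
                                                   GuiControl previous, Random rng)
    {
        var pool = candidates.FindAll(c => c != previous);
        if (pool.Count == 0) pool = candidates;
        return pool[rng.Next(pool.Count)];
    }
}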

3. Excluding previously generated scenarios algorithm


Rather than excluding the previously selected control as in the second algorithm, this algorithm excludes all
previously generated test cases or scenarios and hence guarantees a new, unique test case every cycle. A scenario is
generated; if the test suite already contains it, it is discarded and the process starts again. Under this scheme, the
application may stop before reaching the requested number of test cases if there are no more unique scenarios to
create: the algorithm is given limited resources and is expected to find a solution within them, or it stops and is
considered to have failed. Figure 4 is a sample output from the excluding previously generated scenarios algorithm.

1,NOTEPADMAIN,FORMAT,WORD WRAP,,,
3,NOTEPADMAIN,OPEN,OPENFILELABEL4,,,
5,NOTEPADMAIN,FORMAT,FONT,FONTLABEL1,,
7,NOTEPADMAIN,LABEL1,,,,
9,NOTEPADMAIN,BUTTON1,,,,
11,NOTEPADMAIN,HELPTOPICSFORM,HELPTOPICS,INDEX,LABEL3,
13,NOTEPADMAIN,OPEN,OPENFILELABEL8,,,
15,NOTEPADMAIN,FILE,OPEN,OPENFILELABEL9,,
17,NOTEPADMAIN,FIND,TABCONTROL1,TABFIND,FINDTABTXTFIND,
19,NOTEPADMAIN,FIND,TABCONTROL1,TABGOTO,GOTOTABTXTLINE,
21,NOTEPADMAIN,SAVE,SAVEFILEBUTTON1,,,
23,NOTEPADMAIN,SAVEAS,SAVEFILECOMBOBOX4,,,
25,NOTEPADMAIN,FONT,FONTLISTBOX1,,,

Fig. 4. A sample output from excluding previously generated scenarios algorithm.

As can be seen from the gaps in the sequence numbers, some of the generated test cases were canceled because they
had been generated previously. A sketch of the uniqueness check follows.
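This sketch reuses RandomLegalSequence from the first algorithm's sketch; the retry budget models the algorithm's limited resources (both the budget and the comma-joined scenario key are assumptions for illustration):

using System;
using System.Collections.Generic;

static class UniqueSuiteGenerator
{
    // Keep only scenarios not generated before; stop when the budget runs out.
    public static List<string> Generate(GuiControl root, int wanted, int maxAttempts)
    {
        var rng = new Random();
        var seen = new HashSet<string>();
        var suite = new List<string>();
        for (int tries = 0; suite.Count < wanted && tries < maxAttempts; tries++)
        {
            string scenario = string.Join(",", RandomGenerator.RandomLegalSequence(root, rng));
            if (seen.Add(scenario))                // Add returns false for duplicates
                suite.Add(scenario);
        }
        return suite;                              // may fall short if unique paths run out
    }
}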

4. Weight selection algorithm


In this algorithm, rather than giving the same selection probability, or weight, to all of a control's candidate children
as in the previous algorithms, any child that is randomly selected from the current node has its weight, and hence its
probability of being selected next time, reduced by a certain percentage. If the same control is selected again, its
weight is reduced further, and so on. Figure 5 is a sample output from the weight selection algorithm; a sketch of the
weighted pick follows the figure.

1,NOTEPADMAIN,HELPTOPICSFORM,HELPTOPICS,INDEX,LABEL4,
3,NOTEPADMAIN,HELP,ABOUT NOTEPAD,,,
5,NOTEPADMAIN,LABEL2,,,,
7,NOTEPADMAIN,OPEN,OPENFILEBUTTON1,,,
9,NOTEPADMAIN,VIEW,STATUS BAR,,,
11,NOTEPADMAIN,OPEN,OPENFILECOMBOBOX2,,,
13,NOTEPADMAIN,SAVEAS,SAVEFILELABEL4,,,
15,NOTEPADMAIN,TXTBODY,,,,

Fig. 5. A sample output from the weight selection algorithm.
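A sketch of the weighted pick (the 50% decay factor is an assumption; the algorithm only specifies reduction by a certain percentage):

using System;

static class WeightedPicker
{
    // Pick a child index in proportion to its weight, then decay the winner's
    // weight so it is less likely to be picked next time.
    public static int Pick(double[] weights, Random rng, double decay = 0.5)
    {
        double total = 0;
        foreach (var w in weights) total += w;
        double r = rng.NextDouble() * total;
        for (int i = 0; i < weights.Length; i++)
        {
            r -= weights[i];
            if (r <= 0) { weights[i] *= decay; return i; }
        }
        weights[weights.Length - 1] *= decay;      // numeric edge case
        return weights.Length - 1;
    }
}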

Algorithms three and four are both designed to ensure branch coverage and reduce redundancy in the generated test
suite.

To evaluate the algorithms, we define a test suite effectiveness measure that the tool can calculate automatically: the
ratio of the number of tree edges covered by the generated test suite to the total number of edges in the AUT. Figure
6 shows test effectiveness for the four algorithms explained earlier.

[Line chart "Test Generation Effectiveness": algorithms' % effectiveness (0-100) plotted against the number of test
cases generated (10 to 40,000), one series per algorithm (AI1-AI4).]

Fig. 6. Test suite effectiveness for the four algorithms explained earlier.

As shown above, the last two algorithms reach about 100% effectiveness with fewer than 300 generated test cases.
We also developed two algorithms to locate critical paths automatically in the AUT; evaluating their effectiveness is
part of the future work.

1. Locate critical paths using node weights


In this approach, each control is given a metric weight that represents the count of its children. For example, if the
children of file are save, saveas, close, exit, open, page setup, and print, then its weight is seven (an alternative is to
count all descendants). For each generated scenario, the weight of the scenario is calculated as the sum of the weights
of its individual controls. To achieve coverage with test reduction, the algorithm randomly selects one scenario from
among those that share the same weight value, as a representative of all of them. An experiment should be done to
test whether scenarios that have the same weight can indeed be represented by one test case.
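A sketch of the weight computation and representative selection (using the illustrative GuiControl type; the child-count metric follows the description above):

using System;
using System.Collections.Generic;

static class CriticalPathSelector
{
    // A scenario's weight is the sum of its controls' child counts.
    public static int ScenarioWeight(List<GuiControl> scenario)
    {
        int weight = 0;
        foreach (var control in scenario) weight += control.Children.Count;
        return weight;
    }

    // Group scenarios by weight and keep one random representative per group.
    public static List<List<GuiControl>> Representatives(
        List<List<GuiControl>> scenarios, Random rng)
    {
        var groups = new Dictionary<int, List<List<GuiControl>>>();
        foreach (var s in scenarios)
        {
            int w = ScenarioWeight(s);
            if (!groups.ContainsKey(w)) groups[w] = new List<List<GuiControl>>();
            groups[w].Add(s);
        }
        var result = new List<List<GuiControl>>();
        foreach (var g in groups.Values) result.Add(g[rng.Next(g.Count)]);
        return result;
    }
}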

2. Critical path level reduction through selecting representatives


This technique approaches test selection reduction by choosing representative test scenarios, much as representatives
are elected from different categories, classes, or areas to best represent a whole country. The algorithm randomly
selects a test scenario, which includes controls from the different levels. Starting from the lowest-level control, the
algorithm excludes from future selection all controls that share a parent with a selected control. This reduction should
not exceed half of the tree depth; for example, if the tree is four levels deep, the algorithm should exclude controls
from levels three and four only. We assume that three controls are the minimum for a test scenario (like Notepad -
File - Exit). We repeatedly select five test scenarios using the reduction process described above; the choice of five
is heuristic. The idea is to select the smallest number of test scenarios that can best represent the whole GUI, and this
number depends on the complexity and depth of the GUI tree.
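One possible reading of the reduction step is sketched below (treating a scenario's index positions as tree levels; a simplified sketch of the intended behavior, not a verified implementation):

using System.Collections.Generic;

static class LevelReducer
{
    // After a scenario is selected, exclude from future selection the siblings
    // of its controls in the lower half of the tree only.
    public static void ExcludeLowerLevelSiblings(List<GuiControl> scenario,
                                                 int treeDepth,
                                                 HashSet<GuiControl> excluded)
    {
        int cutoff = treeDepth / 2;                // do not reduce above half depth
        for (int level = scenario.Count - 1; level >= cutoff && level > 0; level--)
        {
            var parent = scenario[level - 1];
            foreach (var sibling in parent.Children)
                if (sibling != scenario[level])
                    excluded.Add(sibling);         // shares a parent with a selected control
        }
    }
}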

CONCLUSION AND FUTURE WORK

Several techniques were tested and applied in this research to make GUI test automation more effective and
practical. We will continue refining our approach and extending the test case generation algorithms to include more
effective ones. Some test verification techniques are explained in principle in this research: a logging procedure is
implemented to compare the executed suite with the generated one, and another track of verification is suggested that
requires building templates for events, where each event's preconditions, postconditions, and expected results are
included. More elaboration and verification are required to prove the effectiveness of the suggested approach.
Automating the first few test cases is expensive; beyond that, test cases become much cheaper to develop. In GUIs it
is difficult to reach a high level of test adequacy by generating test cases that cover all possible combinations; GUI
state reduction is the major contribution of this research.

In our approach, state reduction comes from selecting specific properties to parse, those that are more critical than
the rest for the testing process; fewer than 10 properties are selected in total. In this paper, we presented and
described some GUI test generation algorithms and critical path test selection techniques, and we studied test
effectiveness mathematically by measuring the proportion of discovered edges to the total. Future work includes
measuring the performance of the developed algorithms.

We will also study the fault detection effectiveness of the created test case selection techniques. One proposed
extension of this work is to expand the use of the model checker (e.g., LTSA) to verify the implementation model
against properties such as safety and progress.

REFERENCES

1. Hoffman Douglas. Test automation architecture: planning for test automation. Software quality methods.
International quality week. 1999.
2. George Nistorica. Automated GUI testing. http://www.perl.com/pub/a/2005/08/11/win32guitest.html. 2005.
3. Bar, Avron and Shirley Tessler. An overview of the software industry study. Stanford Computer Industry Project.
1995. http://www.stanford.edu/group/scip/sirp/swi.overview.html.
4. Tessler, Shirley, Avron Bar, and Nagy Hanna. National Software Industry Development: Considerations for
Government Planners. 2003. http://www.aldo.com/Publications/Papers/National_SWI_Development_050303.pdf.
5. Li Kanglin and Mengqi Wu. Effective GUI test automation. Sybex. 2005. Alameda, CA. USA.
6. A. M. Memon. A Comprehensive Framework For Testing Graphical User Interfaces. Ph.D. thesis, Department of
Computer Science, University of Pittsburgh, July 2001.
7. Q. Xie. Developing Cost-Effective Model-Based Techniques for GUI Testing. In Proceedings of the International
Conference of Software Engineering 2006 (ICSE’06). 2006.
8. A. M. Memon and Q. Xie . Studying the fault detection effectiveness of GUI test cases for rapidly evolving
software. IEEE Transactions on Software Engineering, 31(10):884-896, 2005.
9. A. M. Memon, I. Banerjee, and A. Nagarajan. GUI Ripping: Reverse Engineering Of Graphical User Interfaces
For Testing. In Proceedings of the 10th Working Conference on Reverse Engineering (WCRE'03), 1095-1350/03.
2003.
10. A. K. Ames and H. Jie. Critical Paths for GUI Regression Testing. University of California, Santa Cruz.
http://www.cse.ucsc.edu/~sasha/proj/gui_testing.pdf. 2004.
11. A. M. Memon. Developing Testing Techniques for Event-driven Pervasive Computing Applications.
Department of Computer Science. University of Maryland.
12. A. M. Memon. GUI testing: Pitfall and Process. Software Technologies. August 2002. Pages 87-88.
13. A. Mitchell and J. Power. An approach to quantifying the run-time behavior of Java GUI applications.
14. A. M. Memon, and M. Soffa. Regression Testing of GUIs. In Proceedings of ESEC/FSE’03. Sep. 2003.
15. L. White, H. AlMezen, and N. Alzeidi. User-based testing of GUI sequences and their interactions. In
Proceedings of the 12th International Symposium on Software Reliability Engineering, 2001.
16. L. White. Regression testing of GUI event interactions. In Proceedings of the International Conference on
Software Maintenance, pages 350-358, Washington, Nov. 4-8, 1996.
17. Q. Xie and A. M. Memon. Model-based testing of community-driven open-source GUI
applications. In Proceedings of The International Conference on Software Maintenance
2006 (ICSM'06), Philadelphia, PA, USA, Sept. 2006.
18. Pettichord, Bret. Homebrew test automation. ThoughtWorks. Sep. 2004.
www.io.com/~wazmo/papers/homebrew_test_automation_200409.pdf.
19. L. White and H. Almezen. Generating test cases from GUI responsibilities using complete interaction
sequences. In Proceedings of the International Symposium on Software Reliability Engineering, pages 110-121, Oct
2000.
20. A. K. Ames and H. Jie. Critical Paths for GUI Regression Testing. University of California, Santa Cruz.
http://www.cse.ucsc.edu/~sasha/proj/gui_testing.pdf. 2004.
21. Saket Godase. An introduction to software test automation.
http://www.qthreads.com/articles/testing/an_introduction_to_software_test_automation.html. 2005.
22. Brian Marick. When should a test be automated. http://www.testing.com/writings/automate.pdf. (Presented at
Quality Week '98.).
23. Goga, N. A probabilistic coverage for on-the-fly test generation algorithms. Jan. 2003.
fmt.cs.utwente.nl/publications/Files/398_covprob.ps.gz.
24. Jan Tretmans. Test Generation with Inputs, Outputs, and Quiescence. TACAS 1996: 127-146.
25. Software Design Group. MIT. Computer Science and Artificial Intelligence Laboratory. 2006.
http://sdg.csail.mit.edu/index.html.
26. Williams, Clay. Software testing and the UML. ISSRE 1999. http://www.chillarege.com/fastabstracts/issre99/.
27. Turner, C.D. and D.J. Robson. The State-based Testing of Object-Oriented Programs. Proceedings of the 1993
IEEE Conference on Software Maintenance (CSM- 93), Montreal, Quebec, Canada, Sep. 1993.
28. T.H. Tse, F.T. Chan, H.Y. Chen. An Axiom-Based Test Case Selection Strategy for Object-Oriented Programs.
University of Hong Kong, Hong Kong. 1994.
29. T.H. Tse, F.T. Chan, H.Y. Chen. In Black and White: An Integrated Approach to Object-Oriented Program
Testing. University of Hong Kong, Hong Kong. 1996.
30. Orso, Alessandro, and Sergio Silva. Open issues and research directions in Object Oriented testing. Italy.
AQUIS98.
31. Memon, Atif M. Hierarchical GUI Test Case Generation Using Automated Planning. IEEE Transactions on
Software Engineering, vol. 27, 2001.
