A Risk Catalog for Mobile Applications

By
Ajay Kumar Jha

A thesis submitted to Florida Institute of Technology in partial fulfillment of the requirements for the degree of

Master of Science
in
Software Engineering

Melbourne, Florida
February 2007

Copyright by Ajay Kumar Jha 2007


All Rights Reserved

The author grants permission to make single copies _______________________

We the undersigned committee hereby recommend that the attached document be accepted as fulfilling in part the requirements for the degree of Master of Science in Software Engineering.
A Risk Catalog for Mobile Applications,
a thesis by Ajay Kumar Jha, February 2007

_________________________
Cem Kaner, J.D., Ph.D.
Professor and Thesis Advisor
Computer Science

_________________________
Pat Bond, Ph.D.
Associate Professor
Computer Science

_________________________
Ivica Kostanic, Ph.D.
Professor and Director
Electrical and Computer Engineering

_________________________
William Shoaff, Ph.D.
Associate Professor and Head
Computer Science

Abstract
TITLE: A Risk Catalog for Mobile Applications
AUTHOR: Ajay Kumar Jha
MAJOR ADVISOR: Cem Kaner, J.D., Ph.D.

Testing mobile applications that run on handheld devices is challenging because the field is relatively new and problems show up in new ways. This thesis presents a risk
catalog that is populated with known problems and potential problems in mobile
applications. The primary objective of this structured risk catalog is to serve as a
test idea generator for software testers new to testing mobile applications, and to
broaden the risk analysis that guides the testing of this new breed of application. A
risk-based software tester can choose the categories of interest from this structured
risk profile, explore the mobile application under test, and create more powerful
tests designed to detect potential failures.

I used a sample mobile application in the education domain to build and refine the
risk catalog. Thereafter, I conducted a pilot study to assess the effectiveness of the risk catalog by using it as a guide to test a real-time financial mobile application.
Finally, I updated and enriched the risk catalog by building, analyzing and testing a
mobile application utilizing Web services.

Table of Contents
Abstract .................................................................................................................... iii
Table of Contents ..................................................................................................... iv
Keywords ................................................................................................................. xii
List of Figures ........................................................................................................ xiii
List of Tables .......................................................................................................... xiv
Acknowledgments ................................................................................................... xv
Dedication .............................................................................................................. xvi
Chapter 1: Introduction ............................................................................................. 1
1.1 Mobile Applications ........................................................................................ 2
1.1.1 What is a Mobile Application? ................................................................. 2
1.1.2 Types of Mobile Applications Considered ............................................... 6
1.2 Goal of the Thesis ............................................................................................ 6
1.2.1 Challenges in Testing Mobile Applications ............................................. 6
1.2.2 Solution Approach .................................................................................... 8
1.3 Organization of the Thesis .............................................................................. 9
Chapter 2: Risk-Based Testing ................................................................................ 12
2.1 Software Risks ............................................................................................... 12
2.1.1 Risk ......................................................................................................... 12
2.1.2 Types of Software Risks......................................................................... 13
2.1.3 Concept of Risk in Software Testing ..................................................... 14
2.2 Risk Analysis Methods .................................................................................. 15
2.2.1 Heuristic Analysis .................................................................................. 17
2.2.2 Hazard and Operability Study ................................................................ 17
2.2.3 Failure Mode and Effects Analysis ........................................................ 18
2.2.4 Fault Tree Analysis ................................................................................ 22
2.2.5 Event Tree Analysis ............................................................... 25
2.3 Combining Risk Analysis Methodologies ..................................................... 27
2.3.1 Forward and Backward Search Techniques ........................................... 27
2.3.2 Cause-Consequence Analysis................................................................. 27
2.3.3 Combining SFMEA and SFTA .............................................................. 29
2.4 Test Techniques ............................................................................................. 30
2.5 Risk Based Test Management ....................................................................... 31
2.6 Risk Based Testing ........................................................................................ 33
2.7 Summary of risk-based testing ...................................................................... 34
Chapter 3: Heuristics and Risk Catalogs ................................................................. 35
3.1 Heuristics for Testing .................................................................................... 35
3.1.1 Understanding Heuristics ....................................................................... 35
3.1.2 Heuristics in Computer Science ............................................................. 39
3.1.3 Risk Heuristics ....................................................................................... 42
3.1.4 Heuristic Test Strategy Model ................................................................ 45
3.2 Risk Catalogs ................................................................................................. 46
3.2.1 Understanding Taxonomies .................................................................... 46
3.2.2 How Bug Catalogs Can Guide Testing .................................................. 49
Chapter 4: Building and Using a Risk Catalog ....................................................... 51
4.1 Error, Fault and Failure ................................................................................. 51
4.2 Scope of the Risk Catalog for Mobile Applications...................................... 53
4.3 Using Heuristics to Organize the Failure Lists.............................................. 54
4.4 Developing a Risk Catalog ............................................................................ 56
4.4.1 Collecting Failures and Risks ................................................................. 56
4.4.2 Organizing Failures and Risks ............................................................... 56
4.5 Overview of the Risk Catalog ....................................................................... 57
Chapter 5: Mobile Application Risk Catalog .......................................................... 66
5.1 Product Elements ........................................................................................... 66
5.1.1 Structure ................................................................................. 66
5.1.1.1 Code................................................................................................. 67
5.1.1.2 Interfaces ......................................................................................... 67
5.1.1.3 Hardware ......................................................................................... 68
5.1.1.4 Non-Executable Files ...................................................................... 68
5.1.2 Functions ................................................................................................ 69
5.1.2.1 User Interface .................................................................................. 69
5.1.2.2 System Interface .............................................................................. 69
5.1.2.3 Calculation....................................................................................... 70
5.1.2.4 Startup/ Shutdown ........................................................................... 71
5.1.2.5 Error Handling ................................................................................. 72
5.1.2.6 Interactions ...................................................................................... 72
5.1.3 Data ........................................................................................................ 73
5.1.3.1 Input ................................................................................................. 73
5.1.3.2 Output .............................................................................................. 74
5.1.3.3 Big and Little ................................................................................... 74
5.1.3.4 Noise ................................................................................................ 75
5.1.4 Platform .................................................................................................. 76
5.1.4.1 External Hardware ........................................................................... 76
5.1.4.2 External Software ............................................................................ 77
5.1.4.3 Internal Components ....................................................................... 80
5.1.5 Operations .............................................................................................. 81
5.1.5.1 Environment .................................................................................... 81
5.1.5.2 Common Use ................................................................................... 83
5.1.6 Time........................................................................................................ 83
5.1.6.1 Input/ Output ................................................................................... 83
5.1.7 Synchronization ...................................................................................... 84
5.1.7.1 Software Interface ........................................................................... 84
5.1.7.2 Hardware Interface .......................................................... 87
5.1.7.3 Wireless Synchronization ................................................................ 87
5.2 Operational Quality Criteria .......................................................................... 88
5.2.1 Capability ............................................................................................... 88
5.2.1.1 Suitability ........................................................................................ 89
5.2.1.2 Accuracy .......................................................................................... 90
5.2.1.3 Interoperability ................................................................................ 91
5.2.1.4 Compliance ...................................................................................... 92
5.2.2 Dependability ......................................................................................... 93
5.2.2.1 Fault Tolerance ................................................................................ 93
5.2.2.2 Maturity ........................................................................................... 94
5.2.2.3 Recoverability ................................................................................. 95
5.2.2.4 Reliability ........................................................................................ 95
5.2.3 Usability ................................................................................................. 96
5.2.3.1 Learnability ..................................................................................... 96
5.2.3.2 Efficiency ........................................................................................ 97
5.2.3.3 Satisfaction ...................................................................................... 98
5.2.3.4 Memorability ................................................................................... 98
5.2.3.5 Accessibility .................................................................................... 98
5.2.3.6 Error Messages ................................................................................ 99
5.2.4 Security ................................................................................................... 99
5.2.4.1 Authentication ............................................................................... 100
5.2.4.2 Access control and authorization .................................................. 100
5.2.4.3 Privacy and confidentiality ............................................................ 101
5.2.4.4 Data Integrity ................................................................................. 102
5.2.4.5 Wireless Network Security ............................................................ 102
5.2.4.6 Availability .................................................................................... 107
5.2.5 Scalability ............................................................................................. 108
5.2.5.1 Horizontal scalability .................................................... 108
5.2.5.2 Vertical scalability ......................................................................... 108
5.2.6 Performance.......................................................................................... 108
5.2.7 Installability .......................................................................................... 111
5.2.7.1 System Requirements .................................................................... 111
5.2.7.2 Configuration................................................................................. 112
5.2.7.3 Uninstallation ................................................................................ 113
5.2.7.4 Upgrades ........................................................................................ 113
5.2.8 Compatibility ........................................................................................ 114
5.2.8.1 Application Compatibility ............................................................. 114
5.2.8.2 Operating System Compatibility ................................................... 114
5.2.8.3 Hardware Compatibility ................................................................ 114
5.2.8.4 Backward Compatibility................................................................ 114
5.2.8.5 Resource Usage ............................................................................. 114
5.2.9 Quality of service ................................................................................. 115
5.3 Development Quality Criteria ..................................................................... 116
5.3.1 Supportability ....................................................................................... 116
5.3.2 Testability ............................................................................................. 116
5.3.2.1 Visibility ........................................................................................ 117
Field failures .............................................................................................. 117
5.3.2.2 Control ........................................................................................... 117
5.3.3 Maintainability ..................................................................................... 117
5.3.3.1 Analyzability ................................................................................. 118
5.3.3.2 Changeability................................................................................. 118
5.3.3.3 Stability ......................................................................................... 119
5.3.4 Portability ............................................................................................. 119
5.3.4.1 Adaptability ................................................................................... 120
5.3.4.2 Conformance ................................................................................. 120
5.3.4.3 Replaceability ................................................................ 120
5.3.5 Localizability ........................................................................................ 121
5.3.6 Scalability ............................................................................................. 121
5.4 Project Environment .................................................................................... 121
5.4.1 Customers ............................................................................................. 122
5.4.2 Information ........................................................................................... 123
5.4.3 Developer Relations ............................................................................. 124
5.4.4 Team ..................................................................................................... 124
5.4.5 Equipment and tools ............................................................................. 125
5.4.6 Schedules .............................................................................................. 126
5.4.7 Test Items ............................................................................................. 127
5.4.8 Deliverables .......................................................................................... 127
Chapter 6: Refining the Risk Catalog.................................................................... 128
6.1 Overview of Mobile Computing in Education ............................................ 128
6.1.1 Mobile Technology in Education ......................................................... 128
6.1.2 Common Patterns of Using Handheld Devices in Education............... 129
6.2 Educational Mobile Applications Tested .................................................... 130
6.2.1 PAAM and Cells .................................................................................. 131
6.2.2 Installation and Testing ........................................................................ 134
6.3 Modifying the Risk Catalog ........................................................................ 136
6.3.1 Preliminary Evaluation of the Risk Catalog ......................................... 136
6.3.2 Restructuring the Risk Categories in the Catalog................................. 137
6.3.3 Refining and Enhancing the Catalog with PAAM and Cells ............... 138
6.3.4 Finalizing the First Version of the Risk Catalog .................................. 140
Chapter 7: Using the Risk Catalog to Test Enterprise........................................... 141
7.1 Business Context of the Enterprise Mobile Application ............................. 141
7.1.1 Common Scenarios of Usage ............................................................... 141
7.1.2 Enterprise Mobile Applications............................................................ 142
7.2 Testing a Mobile Application Using the Catalog ........................ 143
7.2.1 MidCast ................................................................................................ 143
7.2.2 Installation and Test Environment ....................................................... 145
7.3 Methodology Used to Test MidCast ........................................................... 147
7.3.1 Pilot Experiment Carried Out With Testers ......................................... 147
7.4 Results of the Experiment ........................................................................... 148
7.4.1 Issues Discovered in MidCast .............................................................. 148
7.4.2 Feedback on the Risk Catalog .............................................................. 149
Chapter 8: Testing Mobile Web Services .............................................................. 150
8.1 Overview of Service Oriented Architecture ................................................ 150
8.1.1 Introduction to Services Oriented Architecture .................................... 151
8.1.2 Motivation and Requirements for Service Oriented Architecture ........ 152
8.1.3 Web Services ........................................................................................ 155
8.1.4 Web Service Extensions ....................................................................... 157
8.1.5 Web Services and Service Oriented Architecture ................................ 159
8.2 Mobile Web Services .................................................................................. 160
8.2.1 Common Paradigms of Using Web Services in Mobile Applications . 160
8.2.2 Challenges in Using Mobile Web Services .......................................... 162
8.3 Architecture of a Mobile Application Utilizing Web Services ................... 163
8.3.1 A Sample .Net Mobile Application ...................................................... 163
8.3.2 Mobile Book Catalog Client................................................................. 164
8.3.3 Consuming Book Catalog Web Service ............................................... 166
8.4 Building and Testing the Smartphone Application Using the Risk Catalog 169
8.4.1 Development Technologies and Environment ..................................... 169
8.4.2 Issues Encountered during Development and Debugging.................... 170
8.4.3 Deploying the Application on a Pocket PC based Smartphone ........... 171
8.5 Enhancing the Catalog with Risks in Mobile Web Services ....................... 171
8.5.1 Enhancing the Risk Catalog with Risks in Mobile Web Services ....... 171
8.5.2 Testing the Smartphone Application Using the Risk Catalog .............. 172
Chapter 9: Conclusions and Closing Remarks ...................................................... 175
9.1 Usage and Utility of the Risk Catalog ......................................................... 175
9.2 Next Steps with the Risk Catalog ................................................................ 175
Appendix A: Overview of Mobile Computing Technology ................................. 177
Mobile Applications ......................................................................................... 177
Mobile Content Delivery and Middleware ....................................................... 178
Client-Side Devices .......................................................................................... 183
Wireless Networking Infrastructure ................................................................. 185
Glossary and Acronyms.................................................................................... 189
Appendix B............................................................................................................ 190
Issues Faced During Testing............................................................................. 190
Feedback Forms ................................................................................................ 201
Appendix C: Institutional Research Board Forms ................................................ 209
Student Application Research Involving Human Subjects............................... 209
Consent Form ................................................................................................... 214
Appendix D: Source Code Listing ........................................................................ 219
Form1.cs ........................................................................................................... 219
WSDL ............................................................................................................... 228
Service.cs .......................................................................................................... 233
References ............................................................................................................. 237


Keywords
Failure mode and effects analysis
Failure mode catalog
Heuristics
Mobile applications
Mobile application failures
Risk analysis
Risk-based testing
Risk catalog
Service-oriented architecture
Software testing
Web services
Wireless network failures
Wireless technology


List of Figures
Figure 1-1: Framework for mobile applications........................................................ 4
Figure 2-1: FTA on mobile book catalog ................................................................ 24
Figure 2-2: ETA on mobile book catalog ................................................................ 26
Figure 2-3: Cause consequence analysis on mobile book catalog .......................... 28
Figure 3-1: Heuristic test strategy model ................................................................ 46
Figure 4-1: Overview of Risk Catalog .................................................................... 55
Figure 6-1: Screenshot of Cells ............................................................................. 132
Figure 6-2: Summation of a data range using Cells .............................................. 133
Figure 6-3: PAAM on an instructor's device ........................................... 134
Figure 7-1: MidCast launching splash screens ...................................................... 144
Figure 7-2: Day chart window............................................................................... 145
Figure 8-1: Service oriented architecture .............................................................. 154
Figure 8-2: Web services framework .................................................................... 158
Figure 8-3: Mobile book catalog application layers .............................................. 164
Figure 8-4: User interface of the mobile book catalog .......................................... 165
Figure 8-5: Populated list-view with book information ........................................ 168
Figure 8-6: Overview of the risk catalog using mindmap ..................................... 173
Figure 8-7: Heuristic risk analysis for product elements ...................................... 174
Figure 9-1 Schematic of Microsoft.NET compact framework .............................. 179


List of Tables
Table 2-1: FMEA on the mobile Web services application .................................... 21
Table 3-1: Heuristics in "How to Solve It" .............................................. 37
Table 3-2: Diversity Among Taxonomies ............................................................... 47
Table 4-1: Categorization of the Risk Catalog ........................................................ 57


Acknowledgments

I am deeply grateful to my advisor Dr. Cem Kaner for his guidance and
unfailing support through the trials of my graduate study at Florida Institute
of Technology. He has always been available for discussions and has
provided very valuable and insightful inputs.

I thank James Bach, who provided me with valuable input to my thesis and
guided me whenever I needed his help.

I thank my committee members Dr. Pat Bond and Dr. Ivica Kostanic for
their useful suggestions and comments on my thesis.

I thank Sam Oswald and Georgi Nikolov for their help with testing some
sample applications and providing very valuable feedback on the strengths
and weaknesses of the risk catalog.

I thank my family, lab mates at the Center for Software Testing and Research,
and friends for the constant encouragement and support they provided. I
thank Amy Bowman for providing feedback on my work.

I thank Shagun Kumar for helping me with the printing of this thesis.

Finally, I would like to thank my wife, Deepika, who was always available
to assist me by providing feedback, suggestions for improvement, and help
in all ways possible.

Dedication
This thesis is dedicated to the English author Aldous Leonard Huxley (July 26,
1894 – November 22, 1963), whose writings have been the greatest source of
inspiration and intellectual quest for me over the years
(http://somaweb.org/index.html, last accessed January 29, 2007).

I also dedicate this thesis to my best friend and wife, Deepika Gupta, who was a
constant source of motivation and without whose support this work would have
been impossible.


Chapter 1: Introduction
Handheld devices are evolving and becoming increasingly complex with the
continuous addition of features and functionalities. The rapid proliferation of the
Internet Protocol (IP)-based wireless networks, the maturation of cellular
technology, and the business value discovered in deploying mobile solutions in
different sectors like education, enterprise, entertainment, and personal productivity
are some of the drivers of these changes. Computing and communication
technologies are converging, as with communications-enabled Personal Digital
Assistants (PDAs) and smart phones, and the mobile landscape is getting swamped
with devices having a variety of form factors.

Mobile applications are a natural extension of the current wired infrastructure.
Traditional mobile applications like email and Personal Information Management
(PIM) have been widely adopted in the enterprise and consumer arenas. A plethora
of applications targeting the consumer is now available in the market. Mobile
applications enabling Business to Business (B2B) and Business to Consumer (B2C)
transactions are rapidly becoming mainstream along with other shrink-wrap
software products. In the enterprise, a variety of people, including road warriors
and sales and service professionals, are being equipped with on-the-go computing
capabilities, using mobile technologies in the entertainment, education,
communication, and work sectors, among others.

Testing is challenging in the handheld, wireless world because problems are new,
or they show up in new ways. Not many software testers have experience testing
this new breed of applications; consequently, they run out of test ideas and test
cases. To facilitate risk-based testing in this area, I present and organize a catalog
of a broad set of risks that are publicly reported and potential failures that could
occur with mobile applications running on handheld devices. A software tester of
mobile applications can derive test ideas for more-focused testing that explores the
risks outlined in this risk catalog and, accordingly, can achieve better test coverage
by executing the tests based on different categories of risks.

1.1 Mobile Applications


To develop a test approach for a mobile application requires that a software tester
know what such an application is. This section discusses different mobile
applications and their respective challenges.

1.1.1 What is a Mobile Application?


Definitions of mobile applications vary. In this thesis, a mobile application is any
application that runs on a handheld device, like a personal digital assistant or a
smart phone, and connects to the network wirelessly. Giguere (1999) provided a
way to categorize mobile applications on the basis of the connectivity model of the
application to the backend system. The following is a model for categorizing
mobile applications, inspired by Giguere's work, and includes additional categories
to account for recent changes in wireless technology.

Applications that are stand-alone: These applications run on the handheld
device itself without connecting to the network. An example of a stand-alone
application is a calculator running on a Windows Pocket PC.

Applications that connect to the backend through synchronization software:
These applications use synchronization software like Microsoft ActiveSync
to connect to a parent computer or network. An example of such an
application is Microsoft Outlook for Pocket PC, which synchronizes data
between the handheld device and the host computer through
synchronization software.

Applications that connect to the backend through a wide-area wireless
network: These applications use either circuit-switched or packet-switched
wide-area wireless networks to connect to a data source or other network
resource. An example of such an application is a stock-ticker application
that streams real-time stock rates to handheld devices using cellular data
transfer.

Applications that connect to the backend using special networks: These
applications connect to the backend through special networks like
Specialized Mobile Radio (SMR) or paging networks.

Other applications: These include applications that connect to the
backend using short-range wireless networks, such as Bluetooth or
infrared.

Another way to categorize mobile applications is by the layering of the system,
which reflects the underlying software and hardware infrastructure.
Varshney (2002) proposed a framework for mobile commerce application
development to separate the responsibilities and functionalities provided by
different entities, and to implement mobile systems.

Figure 1-1: Framework for mobile applications


Source: (Varshney & Vetter, 2002)

The framework developed by Varshney and Vetter (2002), shown in the figure, has
two planes: the user plane, and the developer and provider plane. The framework
has four layers in the user plane: m-commerce applications, user infrastructure,
middleware, and network infrastructure. Each layer has a well-defined
responsibility and provides a standard interface to the adjoining layers. For
example, the user infrastructure layer shows that the design of new mobile
applications should take into consideration the general capabilities of the mobile
device, and should not be device-specific. Similarly, the middleware layer hides the
details of the underlying wireless network from the application layer. In the
developer and provider plane, this framework has separation of responsibilities
between the application developer, content providers, and the service providers.
Varshney and Vetter (2002) state that: "Each one of these could build its products
and services using the functionalities provided by others. A content provider can
build its service using applications from multiple application developers. They can
also aggregate content from other content providers and can supply the aggregated
content to a network operator or service provider. Service providers can also act as
content aggregators, but are unlikely to act as either an application or content
provider due to their focus on the networking and service aspects of m-commerce.
A service provider can also act as a clearing house for content and application
providers in advertising and distributing their products to its customers. In any
case, the developer and provider plane in our framework is likely to have multiple
layers."

I have identified four layers of abstraction, inspired by the classification scheme of
Varshney and Vetter (2002), to strategize the testing process. They are as follows:

Mobile application layer: This layer includes the application software that
is responsible for user authentication and privacy, for establishing the
communication partners, and for determining the constraints on data and
other application services.

Client-side devices: This layer constitutes the hardware, with varying
capabilities, on which a mobile application executes.

Mobile content delivery and middleware: This layer includes mobile
middleware that integrates the heterogeneous wireless software and
hardware environment, and that hides the disparities to expedite
development at the application layer. There is a rich set of content delivery
and application programming interfaces available from Microsoft, Sun, and
other leading companies in the mobile application domain that developers
can use out of the box for rapid application development.

Wireless networking infrastructure: Wireless networks can be broadly
divided into wide area networks (WANs), local area networks (LANs), and
personal area networks (PANs), on the basis of network coverage.

1.1.2 Types of Mobile Applications Considered


In this thesis, I will focus primarily on mobile applications that connect to the
backend through a WAN, using circuit-switched or Internet Protocol (IP)-based
wireless connectivity. However, the information will be a good base for other
types of mobile applications as well. Furthermore, I will focus on the application
and the middleware layer of mobile systems. Failures in the wireless network
infrastructure and the hardware are included wherever required, but my intent is to
analyze and categorize the risks and failures at the application level. This thesis
includes both Java Micro Edition (Java ME) and Microsoft Compact Framework-based
applications running on Windows Mobile 5.0 as sample applications to
validate and enrich the risk catalog.

1.2 Goal of the Thesis


1.2.1 Challenges in Testing Mobile Applications
Multiple challenges are associated with testing mobile applications. First, it is
difficult to reproduce the production environment. Most testing must be performed
using simulators and emulators. However, even if we can simulate some aspects of
the application (the handset, for example), we can't be sure what happens when we
try it over a real wireless network. This results in many field failures (Jha & Kaner,
2003).
Also, with diverse mobile applications available, and rapidly evolving mobile
technology, testers find themselves at a loss to identify and create a risk profile
because of a lack of experience in this domain. Therefore, test cases may not be
sufficiently powerful, focused, or comprehensive (Jha & Kaner, 2003).

Test automation often expedites test execution by reducing the manual input
required; it saves time by enabling the execution of repetitive tests with the help
of computers. In the case of mobile applications, it is difficult to automate even
the mundane tests because of the inherent constraints of the hardware on which
these applications execute, such as limited memory and processing power. These
tests have to be executed manually, which demands more manual testing
resources and time. Test case prioritization based on risk
becomes increasingly important, to minimize the number of tests, and isolate the
more powerful tests from the weak ones. A risk catalog helps in test case
prioritization, by allowing the software tester to focus on the failure categories of
interest and map the risks in the application under test from a pre-structured risk
profile.

Another testing challenge is that, as mobile technology is emerging, the market,
including developers and customers, is still figuring out what makes mobile
applications great or merely adequate. Quality criteria are in flux and will stay that
way until the market matures. Despite the uncertainty, software testers must look
carefully at the application under test, asking whether this is as good a product as it
should be (Jha & Kaner, 2003).

1.2.2 Solution Approach


To address these test challenges, I present a risk catalog of mobile applications that
contains real and potential failures that could occur in mobile applications. These
failures and risks are categorized on the basis of the Satisfice Heuristic Test
Strategy Model (Bach, 2006a).

In risk-based testing, explained in more detail in chapter 2 of this thesis, testers
imagine the ways a program could fail and write tests to explore those risks. Risk
catalogs provide testers with a prestructured risk profile and outline the ways a
program could fail, thereby facilitating risk-based testing. A risk catalog serves as a
test idea generator and allows testers to apply techniques like failure mode and
effects analysis to many or all elements of the product and to the product's key
quality attributes (Kaner & Bach, 2005).

One of my goals in developing a risk catalog is to broaden the risk analysis that
testers use to guide their testing. A catalog provides a wider range of examples (and
categories of risk) than any one person is likely to think of while designing her or
his tests. It also provides training material for testers new to wireless mobile
applications to come up to speed quickly (Jha & Kaner, 2003).

I have used this catalog to test three mobile applications in different sectors like
education, enterprise, and mobile Web service, to refine and enrich the failure
categories, and to provide more examples of potential and known faults and failures
that could occur in these domains. The risk catalog is presented in chapter 5 of this
thesis; its usage in testing mobile applications is described in chapters 6, 7, and 8.

1.3 Organization of the Thesis


Chapter 2: Risk Based Testing outlines different risk analysis methodologies
used in the software testing domain. It draws a distinction between risk-based test
management and risk-based testing according to the kind of software risk being
addressed.

Chapter 3: Heuristics and Risk Catalogs introduces the concept of heuristics and
an application of risk-based heuristics testing in a popular model to strategize
testing. It describes and contrasts errors, faults, and failures,
and clarifies the kind of examples that populate the risk catalog for mobile
applications. It briefly outlines the previous work researchers have done on
taxonomies and bug catalogs.

Chapter 4: Building and Using a Risk Catalog contains the experience report in
using heuristics to organize the failure lists by restructuring the bug taxonomies
created earlier. Kaner, Falk, and Nguyen (1999) published a taxonomy of common
software errors as an appendix in the book, Testing Computer Software. This
chapter also provides a high-level overview of key categories and structure of the
risk catalog for testing mobile applications.

Chapter 5: Mobile Application Risk Catalog contains the risk catalog for mobile
applications. The risk catalog contains the known and potential problems in mobile
applications, categorized by quality attributes and product elements. This catalog is
used to test three different types of applications to refine, validate, and enhance the
initial version of the risk catalog developed in 2003 (Jha & Kaner, 2003).

Chapter 6: Refining the Risk Catalog describes the process of refining the risk
catalog. Two mobile applications in education were tested using the initial risk list,
and the catalog developed during the testing of these applications. Cells and Palm
OS Artifact and Assessment Manager (PAAM) were the applications tested.

Chapter 7: Using the Risk Catalog to Test an Enterprise Application describes
the pilot experiments carried out to verify the utility of such risk catalogs. Two
software testers new to the wireless domain tested an enterprise wireless data
application. Issues found by the testers, and their feedback on the risk catalog, are
presented as an appendix in this thesis.

Chapter 8: Testing Mobile Web Services contains the experience report on
building and testing a .NET mobile application utilizing a Web service. This
application's architecture is different from those tested before using the catalog.
Examples of failures, specifically to test an application's development quality
attributes, are presented to enrich the catalog. An overview of the service-oriented
architecture and Web services is provided.

Chapter 9: Conclusions and Closing Remarks summarizes the usage of the risk
catalog and the possible next steps for the development of the risk catalog.

Appendix A: Contains an overview of mobile wireless technology.

Appendix B: Contains the issues discovered in a sample wireless application. This
application is described in chapter 7 of this thesis.

Appendix C: Contains the institutional research board forms for conducting
research involving human subjects.
Appendix D: Contains the source code listing for the sample application developed
utilizing mobile Web service. This application is described in chapter 8 of this
thesis.

Chapter 2: Risk-Based Testing


2.1 Software Risks
Software testers must understand different types of risk present in software
lifecycle phases and product elements. This section defines the terms related to risk
analysis and describes the types of risks in software applications. These concepts
are used to elucidate risk-based testing.

2.1.1 Risk
The American Heritage Dictionary of the English Language (2000) defines risk
as:

1. The possibility of suffering harm or loss; danger.


2. A factor, thing, element, or course involving uncertain danger; a hazard: the
usual risks of the desert: rattlesnakes, the heat, and lack of water.

In simple terms, a risk is a potential future loss or harm that can occur if proper
remedial action is not taken.

Poyhonen (2001) states that there are many aspects of risk, like:

A loss associated with the event.

The likelihood that the event would occur.

The degree to which we can change the outcome (p. 2).

Risks can also be quantified as the product of two factors: the severity of the
potential failure and the probability of its occurrence (Rosenberg, Stapko, & Gallo,
1999, p. 1).

Risk = sum over i of p(Ei) * c(Ei)

where i = 1, 2, ..., n; n is the number of unique failure events; Ei are the possible
failure events; p is the probability; and c is the cost.
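As a quick sketch of how this formula plays out, the snippet below sums probability-weighted costs over a set of failure events. The event names, probabilities, and costs are hypothetical illustrations (not data from this thesis), and Python is used here only for brevity.

```python
# Risk = sum of p(Ei) * c(Ei) over the unique failure events Ei.
# Events, probabilities, and costs are hypothetical illustrations.
failure_events = {
    "items do not download": (0.05, 1000.0),      # (probability, cost)
    "application does not open": (0.01, 5000.0),
}

def total_risk(events):
    """Aggregate risk as probability-weighted cost across all failure events."""
    return sum(p * c for p, c in events.values())

print(total_risk(failure_events))  # 0.05*1000 + 0.01*5000, approximately 100.0
```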

2.1.2 Types of Software Risks


Software risks can be of various types depending on the context. Gerrard and
Thompson (2002) identify three kinds of software risks:

Project Risk: These risks relate to the project in its own context. Factors
such as availability of skills, suppliers, contractual agreements, availability
of tools, schedules, budget, and so on determine the fate of the project.
Project management takes on the responsibility of handling these issues.
There can be many risks associated with a project, and the most severe one
is project chaos. The Software Engineering Institute at Carnegie Mellon
University published a method of risk identification at the project level.
This is available at
http://www.sei.cmu.edu/pub/documents/93.reports/pdf/tr06.93.pdf, last
accessed December 11, 2006 (Carr, 1993).

Process Risk: These risks primarily relate to the project internally.
Examples of process risk include use of the waterfall model instead of
following an iterative life cycle on a project where requirements are
changing frequently.

Product Risk: These risks relate to the product specifications and
functionalities. Areas to consider are the product stability and quality of use
from the end-user perspective.

2.1.3 Concept of Risk in Software Testing


Bach (2003) states that there are many ways to analyze and identify risks. In risk-based
testing, quite often, risk lists are generated through brainstorming or other team
activities. Sometimes these risk lists get muddled and are not easily actionable. It
is important to understand how to identify risks that directly relate to the product
under test.

Bach (2003) proposes following a generic chain of cause and effect to maintain the
frame of reference while identifying risks. He defines the following terms:

Victim: Someone that experiences the impact of a problem. Ultimately no
bug can be important unless it victimizes a human.

Problem: Something the product does that we wish it wouldn't do. (You can
also call this "failure," but I can imagine problems that aren't failures,
strictly speaking.)

Vulnerability: Something about the product that causes or allows it to
exhibit a problem, under certain conditions (also called a fault).

Threat: Some condition or input external to the product that, were it to
occur, would trigger a problem in a vulnerable product (p. 6).

A threat exploits a vulnerability in a product, resulting in a problem that
victimizes someone. Bach (2003) states that "In terms of this chain, we can say that
someone may be hurt or annoyed because of something that might go wrong while
operating the product, due to some vulnerability in the product that is exploited by
some threat. This is essentially a mini-story about risk. Whatever risk idea comes to
your mind, find its place in the story, then try to flesh out the other parts of the
story" (p. 6).

There are many well-known risk analysis techniques used in multiple disciplines.
Some techniques relevant to this thesis are explained in the following section.

2.2 Risk Analysis Methods


Risk analysis is a technique that allows us to identify, assess, manage, and
communicate unwanted events that could have negative consequences. Risk
analysis may be carried out for risk-based test management or risk-based testing to
uncover project risks or product risks respectively. Project risks are explained in
detail in Kaner and Bachs (2005) course notes. In this thesis, my focus is on
product risks, therefore the risk analysis methods discussed will primarily be for
product risks.

Product risk analysis methods are broadly divisible into qualitative and quantitative
categories (McGary, 2005).

Qualitative risk analysis is based on the experience and intuition of the person
carrying out the risk analysis. It uses techniques like brainstorming, surveys,
interviews, and polling to prioritize risks.

Quantitative risk analysis applies statistical techniques to evaluate the effect of risk
events on the project objectives.

There are advantages and disadvantages in both quantitative and qualitative risk
analysis methodologies. Quantitative risk analysis uses familiar mathematical
language of probability and statistics. It is easier to communicate the outcome of
analysis because it uses concrete numbers, and thus supports statistical analysis of
risks, which may assist in cost-benefit analysis to be used by management to make
decisions. The disadvantage of quantitative analysis is the uncertainty in the risk
results, owing to assignment of a numerical score to items that are intrinsically
qualitative. Quantitative analysis also requires complex calculations and skilled
manpower for gathering input data and for successful computation (McGary,
2005).

Qualitative risk analysis, however, has a simple approach, and as long as it is
conducted by a skilled team of people, it yields a fairly good estimate of risk. On
the downside, this approach is influenced by the experience and opinions of the
individuals conducting the analysis, thereby resulting in possible inconsistency in
the results of the risk analysis.

Effective risk analysis utilizes a combination of qualitative and quantitative
techniques, and incorporates elements of both approaches, based on context and
availability of data.
Sections 2.2.1 through 2.2.5 describe five different risk analysis methods. I have
provided examples of how to carry out the risk analysis using a sample mobile
Web services application. Details of the application are provided in section 8.3.
The sample application runs on a Windows Mobile device. It displays a list of
books and their prices on the handheld screen. The user clicks a button on the
application, namely Get Items, and a list of books and their prices downloads from
a database server using a Web service. The books, along with their prices, are
visible in the grid view widget of the Windows Mobile handheld.

2.2.1 Heuristic Analysis


Heuristics are a powerful qualitative tool for risk analysis and identification. They
are very important in the context of this thesis and will be discussed in greater
detail in Chapter 3.

2.2.2 Hazard and Operability Study


The hazard and operability (HAZOP) technique was developed in the early 1970s
by Imperial Chemical Industries Ltd (Sutton, 1991). In this technique, the possible
hazards are identified by critical examination of the process and product, the
consequences for the stakeholders are analyzed, and a severity rating is assigned to
each hazard.

This technique is usually performed using a set of guidewords: NO/NOT, MORE
OF/LESS OF, AS WELL AS, PART OF, REVERSE, and OTHER THAN. From
these guidewords, a scenario that may result in a hazard or an operational problem
is identified. Considering the possible flow problems in a process line, the
guideword MORE OF would correspond to a high flow rate, while LESS OF would
correspond to a low flow rate. The consequences of the hazard, and measures to
reduce the frequency with which the hazard will occur, are then discussed. This
technique has gained wide acceptance in the process industries as an effective tool
for plant safety and operability improvements (Jouko & Veikko, 1993).
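The guideword step can be sketched as a simple generator of deviation scenarios for discussion. The parameter under study below is a hypothetical example, and the sketch is illustrative only:

```python
# HAZOP-style deviation generation: apply each guideword to a process
# parameter to enumerate candidate hazard scenarios for review.
GUIDEWORDS = [
    "NO/NOT",      # the intention never happens (e.g., no data flow)
    "MORE OF",     # quantitatively more (e.g., high flow rate)
    "LESS OF",     # quantitatively less (e.g., low flow rate)
    "AS WELL AS",  # something extra happens alongside the intention
    "PART OF",     # only part of the intention happens
    "REVERSE",     # the opposite of the intention happens
    "OTHER THAN",  # something entirely different happens
]

def deviation_scenarios(parameter):
    """Pair every guideword with the parameter under study."""
    return [f"{guideword} -- {parameter}" for guideword in GUIDEWORDS]

for scenario in deviation_scenarios("data flow from the Web service to the handheld"):
    print(scenario)
```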

McDermid and Pumfrey (1994) applied the HAZOP technique to software by
exploring the selection of guidewords. The report by McDermid, Nicholson,
Pumfrey and Fenelon (1995) summarizes their experiences after practical
application of the method to a computer-assisted braking system.

2.2.3 Failure Mode and Effects Analysis


Failure Mode and Effects Analysis (FMEA) is a risk analysis technique originally
developed by the US military in 1949 to classify failures according to their impact
on mission success and personnel/equipment safety. It is more often and more
formally applied to the manufacturing industry than to the software industry, but
FMEA-like analyses are the foundation of the risk-based software testing described
in this thesis.

Numerous standards have been published for carrying out FMEA in hardware
engineering. IEC 60812, SAE J 1739, and MIL-STD-1629A are the most
frequently encountered standards. There is no explicit standard for software FMEA
(SFMEA), but IEC 60812, published in 1985, is often referenced when
carrying out FMEA for software-based systems. Lutz (1997) reported that a
technique similar to FMEA, called SEEA (software error effects analysis), was used
for some modules of the Columbus free flyer. For ideas on applying the full FMEA
discipline to software, see the discussions at
http://www.stuk.fi/julkaisut/tr/stuk-yto-tr190.pdf. Computer-aided FMEA is
discussed in the paper by Hecht (Hecht, Xuegao, & Hecht, 2003).

A failure mode is, essentially, a way that the product can fail. A failure mode's
effect is the consequence of the failure. The purpose of the FMEA is to identify
possible failures, to rate them according to priority, and to take actions to
eliminate or reduce failures, starting with those having the highest priority.

The detailed steps involved in carrying out FMEA, as described by the American
Society for Quality (source: http://www.asq.org/learn-about-quality/process-analysis-tools/overview/fmea.html,
last accessed August 11, 2006), are as follows:

1. Identify the system or system components and their functions.
2. For each function, identify all the ways failure could happen. These are
potential failure modes...
3. For each failure mode, identify all the consequences, or effects, on the system,
related systems, process, related processes, product, service, customer, or
regulations. These are potential effects of failure.
4. Determine how serious each effect is. This is the severity rating, or S. Severity
is usually rated on a scale from 1 to 10, where 1 is insignificant and 10 is
catastrophic.
5. For each failure mode, determine all the potential root causes. Causal Analysis
tools may be used to assist. List all possible causes for each failure mode.
6. For each cause, determine the occurrence rating, or O. This rating estimates the
probability of failure occurring for that reason during the lifetime of your scope.
Occurrence is usually rated on a scale from 1 to 10, where 1 is extremely
unlikely and 10 is inevitable.
7. For each cause, identify current process controls. These are tests, procedures or
mechanisms that are currently in place to keep failures from reaching the
customer.
8. For each control, determine the detection rating, or D. This rating estimates
how well the controls can detect either the cause or its failure mode after they
have happened but before the customer is affected. Detection is usually rated on
a scale from 1 to 10, where 1 means the control is absolutely certain to detect
the problem and 10 means the control is certain not to detect the problem (or no
control exists).
9. Calculate the risk priority number, or RPN, which equals S × O × D.
10. Identify recommended actions. These actions may include design or process
changes to lower severity or occurrence. There may be additional controls to
improve detection. Also note who is responsible for the actions and target
completion dates.
11. Follow-up and update the risks after the recommended controls are
implemented.
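Steps 4 through 9 above reduce to a simple computation: rate each failure mode, multiply the ratings, and sort so that the highest-priority failure modes are addressed first. The sketch below uses hypothetical ratings, not values from this thesis:

```python
# FMEA prioritization: for each failure mode, RPN = S * O * D, where
# severity (S), occurrence (O), and detection (D) are rated 1-10.
# All ratings here are hypothetical examples.
failure_modes = [
    {"mode": "Items do not download", "S": 9, "O": 4, "D": 5},
    {"mode": "Application does not open", "S": 10, "O": 2, "D": 5},
    {"mode": "Wrong prices downloaded", "S": 7, "O": 3, "D": 4},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

# Address the highest-RPN failure modes first.
prioritized = sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True)
for fm in prioritized:
    print(f'{fm["mode"]}: RPN = {fm["RPN"]}')
```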

The following example illustrates the concept of FMEA more clearly.

Example: Table 2-1 identifies risks, along with the severity, occurrence, and
detection ratings of the risks. Risk analysts use or test the application, gain some
experience, and make these estimates subjectively. They calculate the risk priority
number of each risk as RPN = S * O * D. The numbers given in the table provide
an example, and will vary with the application and the organization.

Table 2-1: FMEA on the mobile Web services application

The table rates each risk for severity (S, 1-10), occurrence (O, 1-10), and
detection (D, 1-10), and records the resulting risk priority number (RPN).
The failure modes analyzed are:

Items do not download

Application does not open

Application does not connect to database

Application does not download the correct prices

Application does not download all the books if it crosses one page

The application can then be tested, or the failures corrected, based on the RPN.
For this thesis, I provide a risk catalog that assists in identifying failure modes or
risks, so that, if required, the FMEA process can be applied to the software. FMEA
allows for improved test management (explained in detail in Section 2.5).

2.2.4 Fault Tree Analysis


Fault Tree Analysis (FTA) is a top-down or backward-analysis technique that is
usually performed in conjunction with FMEA. It requires an in-depth knowledge of
a system, system components, and the component relationships. It is a deductive
approach in which the analysts first identify the failure, and then explore and
analyze the possible causes leading to the failure. The causes of those causes are
then identified, and this cause chain, viewed diagrammatically, leads to a fault tree
diagram. The chain of causes is linked by OR and AND gates, and the failure
analysts estimate the probability of each cause at the leaf level. The analysts then
calculate the probability of the top-level failure and take appropriate action. The
process is further explained below.

Bell Telephone Laboratories developed the concept of fault tree analysis in 1962
for the U.S. Air Force for use with the Minuteman system. Mission-critical systems
have used FTA for a long time to determine a system's reliability and safety, by
identifying the probability of each top-level failure
(http://reliasoft.com/newsletter/2q2003/fta.htm, last accessed March 20, 2004).

Lyu (1995) defined a fault tree model as "a graphical representation of logical
relationships between events (usually failure events)."

Steps involved in carrying out an FTA are as follows:

Identify the top-level failure event (system or subsystem failure)

Decompose the failure into possible contributing events or causes.

Continue resolving the possible causes or failures and create a cause chain
until the root or leaf level is reached.
Connect the causes using AND, OR, and M-out-of-N logic gates.

Estimate and assign probabilities to each failure event at the lowest level.

Calculate the probability of the failure event at a higher level, and continue
going up until the probability of the top-level failure has been calculated.

The following example illustrates the concept of fault tree analysis more clearly.

Example: Figure 2-1 examines the probability of the failure: the application does
not download the list of books along with their prices. The probabilities of the leaf
level nodes are estimated subjectively, and with the help of the AND and OR gates,
the probability for the top level risk is calculated.

Figure 2-1: FTA on mobile book catalog
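Assuming independent leaf events, the gate arithmetic in the steps above can be sketched directly: an AND gate multiplies its children's probabilities, and an OR gate combines them as 1 - prod(1 - p). The tree fragment and probabilities below are hypothetical illustrations, not the contents of Figure 2-1:

```python
# Minimal fault-tree evaluation: leaf events carry subjectively estimated
# probabilities; gates combine them assuming the events are independent.
def and_gate(*probs):
    """All child events must occur: multiply their probabilities."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs):
    """Any child event suffices: 1 minus the product of the complements."""
    complement = 1.0
    for p in probs:
        complement *= (1.0 - p)
    return 1.0 - complement

# Leaf-level estimates (hypothetical).
p_network_down = 0.10
p_web_service_error = 0.05
p_db_unreachable = 0.02
p_client_parse_bug = 0.01

# Books fail to download if the network is down, OR the backend fails
# (web service error AND database unreachable), OR the client has a bug.
p_backend = and_gate(p_web_service_error, p_db_unreachable)
p_top = or_gate(p_network_down, p_backend, p_client_parse_bug)
print(round(p_top, 4))
```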

Further reading on FTA, as used in other industries, can be accessed from:

Newsletter available at
http://www.theiet.org/publicaffairs/health/hsb26c.pdf, last accessed August
11, 2006

Newsletter available at http://reliasoft.com/newsletter/2q2003/fta.htm, last
accessed August 11, 2006

2.2.5 Event Tree Analysis


Event tree analysis is a forward-analysis technique in which the events that could
occur in a system are represented visually (Henley & Kumamoto, 1992). It is an
inductive approach in which a list of possible failures, or causes, is identified, and
the effects of each failure are identified. These effects may have further effects,
and a cause-effect chain is generated. The probabilities of these failures are
estimated subjectively. It is similar to FTA, except that the leaf-level failure modes
or causes are identified first, and the top-level failure modes are identified based
on the effects of the leaf-level failure modes. This allows the risk analyst to
develop a model of outcome events based on an initiating event.

ETA has seen application in the nuclear industry, for the operability analysis of
nuclear power plants as well as the accident sequence in the Three Mile Island-2
reactor accident. (Source:
http://home1.pacific.net.sg/~thk/risk.html#2.3%20%20%20%20Event%20tree,
last accessed July 4, 2006)

The following example illustrates the concept of event tree analysis more clearly.
Example: Figure 2-2 examines the probability of the failure: the application does
not download the list of books along with their prices. Starting from an initiating
event, the probabilities of the branch events are estimated subjectively, and the
probability of each outcome is calculated along the corresponding event path.

Figure 2-2: ETA on mobile book catalog
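The path arithmetic can be sketched as follows: each outcome's probability is the product of the branch probabilities along its path from the initiating event. All probabilities and branch names below are hypothetical illustrations, not the contents of Figure 2-2:

```python
# Event-tree sketch: starting from an initiating event, each branch point
# splits the scenario, and an outcome's probability is the product of the
# branch probabilities along its path. All values are hypothetical.
p_initiating = 0.10        # "Get Items" is tapped while the network is degraded
p_retry_succeeds = 0.70    # a built-in retry recovers the connection
p_cache_shown = 0.50       # otherwise, a cached book list is displayed

p_recovered = p_initiating * p_retry_succeeds
p_degraded = p_initiating * (1 - p_retry_succeeds) * p_cache_shown
p_failed = p_initiating * (1 - p_retry_succeeds) * (1 - p_cache_shown)

print(round(p_recovered, 4), round(p_degraded, 4), round(p_failed, 4))
```

Note that the outcome probabilities partition the initiating event's probability, which is a useful sanity check on any event tree.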

Further reading on ETA, as used in other industries, can be accessed from:

Newsletter at http://www.theiet.org/publicaffairs/health/hsb26b.pdf, last
accessed August 11, 2006

2.3 Combining Risk Analysis Methodologies


In this section, I explain the concept of forward and backward search with respect
to risk analysis. In the second part of this section I discuss combining inductive and
deductive risk analysis techniques.

2.3.1 Forward and Backward Search Techniques


A forward search technique is a method of analysis in which the software tester
examines the unwanted event or risk to evaluate the impact or negative
consequence it could have on the system. A backward search technique is a causal
analysis technique in which the risk or unwanted event is examined to determine
the circumstances or the root-cause that led to the failure in the system.

2.3.2 Cause-Consequence Analysis


Cause-consequence analysis combines the deductive and inductive methods for risk
analysis. It uses the top-down deductive techniques, such as fault tree analysis, to
determine the cause of the unwanted event and inductive techniques, such as event
tree analysis, to determine the consequences that may arise due to the occurrence of
the initiating event. Cause-consequence analysis can be described as an expanded
event tree analysis in which each cause in the event tree is analyzed further, using
the fault tree analysis technique. (Source:
http://www.sverdrup.com/safety/cause.pdf, last accessed August 11, 2006)

Figure 2-3 shows a typical cause-consequence analysis in which the cause portion
of the analysis is represented by a fault tree. Cause occurrence is expressed as a
probability score. The figure then shows the different paths that the initial event
may trigger and their associated consequence score. This accounts for the
consequence portion of the analysis. Each subsequent event occurring in the system
has a probability attached to it based on the possible set of behaviors available
within the system.

Figure 2-3: Cause consequence analysis on mobile book catalog
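The scoring described above can be sketched as follows: the fault tree supplies
the probability of the initiating event, and each event-tree path carries a
branch probability and a consequence score. The initiating-event probability,
branch probabilities, and consequence scores below are hypothetical, not the
values in Figure 2-3.

```python
# Sketch of the scoring in a cause-consequence analysis. All names and
# numbers are hypothetical illustrations.

# Probability of the initiating event, as computed from the fault tree:
p_initiating = 0.08

# Hypothetical event-tree paths: (branch probability given the
# initiating event, consequence score for that outcome).
paths = [
    (0.6, 10),   # user retries and succeeds: minor annoyance
    (0.3, 40),   # user gives up on this session: lost sale
    (0.1, 90),   # user uninstalls the application: lost customer
]

# The branch probabilities should cover the whole behavior set.
assert abs(sum(p for p, _ in paths) - 1.0) < 1e-9

# Expected consequence = initiating probability x weighted consequences.
expected = p_initiating * sum(p * c for p, c in paths)
print(round(expected, 2))  # -> 2.16
```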


2.3.3 Combining SFMEA and SFTA


SFMEA (Software Failure Mode and Effects Analysis) is often used in conjunction
with SFTA (Software Fault Tree Analysis) to discover errors. The Jet Propulsion
Laboratory, California Institute of Technology used this approach to discover errors
in software modules of the spacecraft systems Cassini and Galileo (Lutz &
Woodhouse, 1996). SFMEA was used to detect abnormal software behavior and
unexpected data. It was then complemented with Software Fault Tree analysis to
examine the causes that produced the failure mode.

Lutz and Woodhouse (1996) further described the integration of SFMEA and SFTA
in a two-step process:

The SFMEA used forward searching to identify cause/effect relationships
in which unexpected data or software behavior (causes) could result in
failure modes (effects). For example, outdated sensor data (cause) can
prevent the software from commanding a needed hardware reconfiguration.

A backward search technique was then used to examine the possibility of
occurrence of each anomaly (cause) that produced a failure mode (effect).
In the example above, the root node for the backward search was outdated
sensor data. In this case our backward search for circumstances that could
lead to outdated sensor data found a situation in which failed hardware
continued to provide (inaccurate) data to the software. This bad data, due to
the voting logic in the software, could veto a needed recovery action. By
demonstrating the possibility of a new failure mode (obsolete data
preventing required actions), the requirements specifications were
improved. The failure mode was eliminated by a change to the software
requirements.

Czerny, D'Ambrosio, Murray, and Sundaram (2005) reported successful application
of software FMEA on embedded automotive control systems. They applied a tightly
coupled software FTA and software FMEA technique to analyze potential failures
in the embedded automotive control system. Primary causes of these failures
were categorized as hardware failures, software logic errors, and support
software (e.g., compiler) errors. In their approach a preliminary hazard
analysis (PHA) was conducted in the conceptual design and requirements phase of
the project. Thereafter, FTA and FMEA were used in conjunction to mitigate
potential hazards.

Software testers can utilize the failure modes described in chapter 5 in a deductive
as well as inductive fashion. They can identify failure modes using the catalog and
carry out further analysis using risk analysis techniques such as those described
above.

The next sections describe software testing techniques and how risk-based testing is
done.

2.4 Test Techniques


Kaner identified some general techniques for testing, on the basis of which many
specific techniques for testing can be generated (Kaner & Bach, 2005).


A summary of the ten general techniques is as follows:

1. Function Testing: Test each feature on its own.
2. Specification-Based Testing: Test every claim made in the specifications
document.
3. Domain Testing: Focus on variables, such as inputs and outputs.
4. Risk-Based Testing: Imagine how the program may fail, and test for those
failures.
5. Scenario Testing: Test combinations of functions as they would be used in
real life.
6. Regression Testing: Repeat tests after a change in the program.
7. Stress Testing: Overwhelm the product.
8. User Testing: Let a user run or test the product.
9. State-Model-Based Testing: Test the program as it moves from state to state
in response to events.
10. High-Volume Automatic Testing: Run a large series of tests.

For more details on the different testing techniques, refer to the course notes on
black box software testing by Kaner (Kaner & Bach, 2005). Testing results are best
when testers combine different techniques during testing. This thesis focuses on
risk-based testing for mobile applications.

2.5 Risk Based Test Management


Kaner, Bach, and Pettichord (2001) explained that there are two different
approaches and objectives in carrying out risk analysis. In the first approach,
risk analysis is done to determine what features, modules, or components to
test next. In this approach, some cost is attached to the effect of each
failure, and the expensive failures are prioritized for the purpose of testing.
This type of risk analysis leads to risk-based test management. The second type
of risk analysis is done to find errors, and it leads to risk-based testing. I
will focus on risk-based test management in this section and discuss risk-based
testing in the next section.

Risk-based test management focuses on managing the testing activities at the
project level. It involves analyzing and addressing project-level and
process-level risks such as new resources, new tools or technology, or ambiguous
requirements. Risk-based test management determines the risk analysis techniques
to be used by the software test team for the purpose of testing, and identifies
the components to be tested and the focus areas of testing, based on inputs such
as the effects of failures, cost of failure, cost of testing, or cost of fixing
the failure. Kaner's course notes (Kaner & Bach, 2005) detail the test
management concepts.

Amland (1999) discussed risk-based test management in which testing is
prioritized to concentrate on the areas to test next. The risk analysis activity
model that Amland presented identifies the following distinct steps:

Risk Identification

Risk Strategy

Risk Assessment

Risk Mitigation

Risk Reporting

Risk Prediction

Amland (1999) stated that the main objective of risk analysis is to identify the
potential problems that can affect the cost or outcome of a project. He
identified three main sources for the risk analysis:

Quality of the function (area) to be tested, i.e., the quality of a program or a
module. This is used as an indication of the probability of a fault. The
assumption is that a function suffering from poor design, an inexperienced
programmer, complex functionality, etc. is more exposed to faults than a
function based on better design quality, a more experienced programmer, etc.

The consequences of a fault in the function as seen by the customer in a
production situation, i.e., the probability of a legal threat, losing market
share, not fulfilling government regulations, etc. because of faults. This
consequence represents a cost to the customer.

The consequences of a fault in the function as seen by the vendor, i.e., the
probability of negative publicity, high software maintenance cost, etc. because
of a function with faults. This consequence represents a cost to the vendor.
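One common way to turn Amland's three inputs into a test order is to compute a
risk exposure score for each function (the subjective fault probability times
the combined customer and vendor cost) and test the highest-scoring functions
first. The sketch below illustrates this; the function names and numbers are
hypothetical, not taken from Amland's model.

```python
# Sketch of prioritizing functions to test from Amland-style inputs:
# a subjective fault probability plus consequence costs to the customer
# and the vendor. The function names and values are hypothetical.

functions = [
    # (name, fault probability, customer cost, vendor cost)
    ("checkout/payment", 0.30, 9, 8),
    ("book search",      0.20, 6, 4),
    ("help screens",     0.40, 2, 1),
]

def exposure(prob, customer_cost, vendor_cost):
    """Risk exposure: probability of a fault times its total cost."""
    return prob * (customer_cost + vendor_cost)

# Test the highest-exposure functions first.
ranked = sorted(functions,
                key=lambda f: exposure(f[1], f[2], f[3]),
                reverse=True)
for name, p, cc, vc in ranked:
    print(name, round(exposure(p, cc, vc), 2))
```

Note that a frequently failing but low-cost area (the help screens here) can
rank below a rarer but expensive failure, which is the point of weighing
probability against consequence.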

2.6 Risk Based Testing


In his research, Kaner et al. (2001) explained that the objective of the second
type of risk analysis is to find errors in software. Risk analysis methods help
to identify possible risks or failures in a software application. The software
is then tested using these risks as the basis for test cases, to uncover the
corresponding failures. This process is known as risk-based testing.

Bach (1999) explained risk analysis for the purpose of finding software errors.
The steps in his approach are as follows:

Make a prioritized list of risks;

Perform tests to explore each risk;

As risks evaporate and new ones emerge, adjust your test effort to stay
focused on the current risk set.

2.7 Summary of risk-based testing


The approach used for research in this thesis has primarily been a qualitative
one, focusing on using the skills of the testers as opposed to a
well-documented, process-oriented method.

I started off by defining some key terms related to risk and risk analysis. The
HAZOP technique is more applicable where the requirements are clear and failures
are easily identified. In this method, guidewords such as MORE THAN or LESS
THAN are applied to help identify risks. In software, requirements are
often unclear. The FTA and ETA methods, although highly effective in identifying
risks in the context of an engineering project, require a formalized process for
identifying failure modes and the cause chain of these failure modes. FMEA, on the
other hand, is a simplified process in which failure modes are identified and their
effects weighed to decide what needs to be tested first. Heuristic analysis is a very
simple, yet very effective, technique to identify risks. It is explained in detail in
Chapter 3 and is the basis for the creation and refinement of the risk catalog.

I introduced the concept of risk-based test management and risk-based testing.
The primary focus of this thesis is risk-based testing. Bach (1999) provides a
method for risk-based testing using heuristics, as explained in sections 3.1.3
and 3.1.4.


Chapter 3: Heuristics and Risk Catalogs

3.1 Heuristics for Testing
3.1.1 Understanding Heuristics
The word heuristic originates from the Greek word heuriskein, which means "to
find" (http://whatis.techtarget.com/definition/0,,sid9_gci212246,00.html, last
accessed December 12, 2006). A heuristic is a fallible technique for directing
one's attention in learning, discovery, or problem-solving. It is a simple,
efficient rule of thumb that may be used as an aid to finding and solving
problems. The application and usage of a heuristic method or process is known
as heuristics.

The American Heritage Dictionary of the English Language (2000) defines
heuristic to be:

Of or relating to a usually speculative formulation serving as a guide in the
investigation or solution of a problem: "The historian discovers the past by
the judicious use of such a heuristic device as the ideal type."

Of or constituting an educational method in which learning takes place through
discoveries that result from investigations made by the student.

Computer Science: Relating to or using a problem-solving technique in which the
most appropriate solution of several found by alternative methods is selected
at successive stages of a program for use in the next step of the program.

A problem is an obstacle in achieving a goal, objective, or purpose. The first
step to eliminating a problem is to find the problem; the next step is to solve
it. Heuristics may be used as an aid in both steps. The focus of heuristics is
to guide the brain to think in new ways, to provide reasonable guidelines, and
to direct the thinking process in order to generate new ideas.

Heuristics may be used in a wide variety of disciplines. These rules mostly work
well, but at times lead to biases. The following are examples of heuristics used in
different disciplines:

In psychology: Dreams often reflect your desires. For instance, if you dream
of winning a race against your brother, you probably desire to out-do him in
real life.

In computer science: Heuristics can be used in artificial intelligence
algorithms, model checking, genetic algorithms, and expert systems. For a
more detailed explanation of heuristics as applied to computer science, see
section 3.1.2 of this chapter.

General: If you want to ask for something from someone, ask when the
person is in a good mood. You are more likely to get it then.

The mathematician George Polya first published How to Solve It in 1945. How to
Solve It is a collection of ideas about heuristics that he taught to math
students. The book provides ways of looking at problems and formulating
solutions (Polya, 2004). He states in his book that "Heuristic reasoning is not
regarded as final and strict but as provisional and plausible only, whose
purpose is to discover the solution to the present problem."

Another book on heuristics, Theory of Inventive Problem Solving, also known as
TRIZ, was developed by a Russian researcher, Genrich Saulovich Altshuller. In
his book, Altshuller (1997) talks about problem solving and inventive
techniques.

Polya (2004) suggested the following steps when solving a mathematical problem:

Understanding the problem

Devising a plan: Find the connection between the data and the unknown. You may
be obliged to consider auxiliary problems if an immediate connection cannot be
found. You should obtain a plan of the solution.

Carrying out the plan

Looking back: Examine the solution obtained and consider ways to improve it.

Table 3-1 summarizes some of Polya's heuristics.

Table 3-1: Heuristics in How to Solve It
(Source: http://en.wikipedia.org/wiki/How_to_Solve_It, last accessed August
13, 2006)

Analogy: Can you find a problem analogous to your problem and solve that?
(Formal analogue: Map)

Generalization: Can you find a problem more general than your problem? This
draws on the inventor's paradox: the more ambitious plan may have more chances
of success. (Formal analogue: Generalization)

Induction: Can you solve your problem by deriving a generalization from some
examples? (Formal analogue: Induction)

Variation of the problem: Can you vary or change your problem to create a new
problem (or set of problems) whose solution(s) will help you solve your
original problem? (Formal analogue: Search)

Auxiliary problem: Can you find a subproblem or side problem whose solution
will help you solve your problem? (Formal analogue: Subgoal)

Here is a problem related to yours and solved before: Can you find a problem
related to yours that has already been solved and use that to solve your
problem? (Formal analogue: Pattern recognition, Pattern matching)

Specialization: Can you find a problem more specialized? (Formal analogue:
Specialization)

Decomposing and recombining: Can you decompose the problem and "recombine its
elements in some new manner"? (Formal analogue: Divide and conquer)

Working backward: Can you start with the goal and work backwards to something
you already know? (Formal analogue: Backward chaining)

Draw a figure: Can you draw a picture of the problem? (Formal analogue:
Diagrammatic reasoning)

Auxiliary elements: Can you add some new element to your problem to get closer
to a solution? (Formal analogue: Extension)

3.1.2 Heuristics in Computer Science


One of the finest examples of the application of heuristic evaluation in
computer science is the use of heuristics in the usability inspection process.
Nielsen and Molich (1990) developed the first set of heuristics to examine user
interfaces for usability. Nielsen recommends conducting heuristic evaluation by
having the evaluators examine the user interface and judge its compliance with
usability principle heuristics. Nielsen's heuristics on usability testing are
outlined below. More information on the application of heuristics in
human-computer interaction is available at http://www.useit.com/papers/heuristic/
(last accessed September 5, 2006) or in his book on usability inspection
methods (Nielsen & Mack, 1994).

1. Visibility of system status: The system should always keep users informed
about what is going on, through appropriate feedback within reasonable time.
2. Match between system and the real world: The system should speak the user's
language, with words, phrases and concepts familiar to the user, rather than
system-oriented terms. Follow real-world conventions, making information
appear in a natural and logical order.
3. User control and freedom: Users often choose system functions by mistake and
will need a clearly marked "emergency exit" to leave the unwanted state
without having to go through an extended dialogue. Support undo and redo.
4. Consistency and standards: Users should not have to wonder whether different
words, situations, or actions mean the same thing. Follow platform conventions.
5. Error prevention: Even better than good error messages is a careful design
which prevents a problem from occurring in the first place. Either eliminate
error-prone conditions or check for them and present users with a confirmation
option before they commit to the action.

6. Recognition rather than recall: Minimize the user's memory load by making
objects, actions, and options visible. The user should not have to remember
information from one part of the dialogue to another. Instructions for use of the
system should be visible or easily retrievable whenever appropriate.
7. Flexibility and efficiency of use: Accelerators (unseen by the novice user)
may often speed up the interaction for the expert user such that the system can
cater to both inexperienced and experienced users. Allow users to tailor
frequent actions.
8. Aesthetic and minimalist design: Dialogues should not contain information
which is irrelevant or rarely needed. Every extra unit of information in a
dialogue competes with the relevant units of information and diminishes their
relative visibility.
9. Help users recognize, diagnose, and recover from errors: Error messages should
be expressed in plain language (no codes), precisely indicate the problem, and
constructively suggest a solution.
10. Help and documentation: Even though it is better if the system can be used
without documentation, it may be necessary to provide help and documentation.
Any such information should be easy to search, focused on the user's task, list
concrete steps to be carried out, and not be too large.
(http://www.useit.com/papers/heuristic/heuristic_list.html, last accessed
September 5, 2006)

Apart from human-computer interaction, where the term heuristic is used as a
rule of thumb, the term heuristic has two well-defined technical meanings in
computer science. They are described below:


1. Heuristic Algorithms
Two fundamental goals in computer science are finding algorithms with good run
times and with optimal solution quality. A heuristic is an algorithm that gives up
one or both of these goals; for example, it usually finds pretty good solutions, but
there is no proof the solutions could not get arbitrarily bad; or it usually runs
reasonably quickly, but there is no argument that this will always be the case. For
many practical problems, a heuristic algorithm may be the only way to get good
solutions in a reasonable amount of time.

There is a class of general heuristic strategies called metaheuristics, which
often use randomized search, for example. They can be applied to a wide range
of problems, but good performance is never guaranteed.

2. Heuristics in shortest path problems


For shortest path problems, the term has a different meaning. Here, a heuristic is a
function, h(n) defined on the nodes of a search tree, which serves as an estimate of
the cost of the cheapest path from that node to the goal node. Heuristics are used by
informed search algorithms such as Greedy best-first search and A* to choose the
best node to explore. Greedy best-first search will choose the node that has the
lowest value for the heuristic function. A* search will expand nodes that have the
lowest value for g(n) + h(n), where g(n) is the (exact) cost of the path from the
initial state to the current node. If h(n) is admissible (that is, if h(n) never
overestimates the cost of reaching the goal), then A* will always find an
optimal solution. (Source: http://en.wikipedia.org/wiki/Heuristics, last
accessed August 13, 2006)
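The search behavior described above can be illustrated with a minimal A*
implementation. The graph and the heuristic values below are hypothetical, but
the heuristic is admissible (it never exceeds the true remaining cost), so the
search returns an optimal path.

```python
# Minimal A* search illustrating f(n) = g(n) + h(n). The graph and the
# heuristic estimates are hypothetical; h is admissible here, so A*
# returns an optimal path.
import heapq

graph = {            # node -> list of (neighbor, edge cost)
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 3)],
    "G": [],
}
h = {"S": 4, "A": 3, "B": 3, "G": 0}  # estimated cost to the goal

def a_star(start, goal):
    # Priority queue ordered by f(n) = g(n) + h(n).
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbor, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = g2
                heapq.heappush(
                    frontier,
                    (g2 + h[neighbor], g2, neighbor, path + [neighbor]))
    return None, float("inf")

path, cost = a_star("S", "G")
print(path, cost)  # -> ['S', 'A', 'B', 'G'] 6
```

A greedy best-first search would order the queue by h(n) alone; keeping g(n)
in the sum is what lets A* trade a cheap-looking first step against total
path cost.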

3.1.3 Risk Heuristics


Risk heuristics are heuristics as applied in risk analysis processes. Heuristics can be
used to create methods of risk analysis or, after the method has been decided, can
be used to identify risks. I will be discussing and analyzing heuristics used in
improving risk analysis methods as described by various experts.

Kaner & Bach (2005) described three classes of heuristics of identifying risks in
their course notes on risk-based testing
(http://www.testingeducation.org/k04/documents/bbstRisk2005.pdf, last accessed
January 29, 2007):

Recognize common project warning signs (and test things associated with
the risky aspects of the project).

Apply failure mode and effects analysis to (many or all) elements of the
product and to the products key quality criteria.

Apply common techniques ("quicktests" or "attacks") to take advantage of
common errors. These are universal tests that are easy to perform and have
some value in most software applications. Examples of quicktests are
provided in:
o Participants at the 7th Los Altos Workshop on Software Testing
(Exploratory Testing, 1999) pulled together a collection of these.
o James Whittaker published another collection in How to Break
Software.
o Elisabeth Hendrickson teaches courses on bug hunting techniques
and tools, many of which are quicktests or tools that support them.


Bach (1999) observed in his course notes that heuristics are a method of
generating solutions quickly. He suggested that heuristics are a guide, not a
checklist:

"Heuristics are often presented as a checklist of open-ended questions,
suggestions, or guidewords. A heuristic checklist is not the same as a
checklist of actions such as you might include as steps to reproduce in a bug
report. Its purpose is not to control your actions, but help you consider more
possibilities and interesting aspects of the problem." (p. 2)

Bach provides some examples of different types of heuristics (Bach, 2006b):

Guideword Heuristics: Words or labels that help you access the full
spectrum of your knowledge and experience as you analyze something.

Trigger Heuristics: Ideas associated with an event or condition that help
you recognize when it may be time to take an action or think a particular
way. Like an alarm clock for your mind.

Subtitle Heuristics: Help you reframe an idea so you can see alternatives
and bring out assumptions during a conversation.

Heuristic Model: A representation of an idea, object, or system that helps
you to explore, understand, or control it.

Heuristic Procedure or Rule: Plans of action that may help solve a class of
problems. (p. 46)

These heuristics encourage software developers and testers to use their skills
and to think in ways that identify the maximum number of risks in the minimum
time.

Bach (1999) suggests two ways to analyze risks. A summary is provided:



Inside-Out: In this method, the product is observed and studied, the
developers and other stakeholders are interviewed, and, based on inputs from
these, risks are identified. Questions such as how the product functionality
works, how the components interface, and what happens if certain correct or
incorrect inputs are given to the system are asked, and, based on the
observations recorded by analyzing the product, potential risks are
identified.

Outside-In: In this method a predefined set of risks is an input to testing,
and is used to test the product:

o Quality Criteria Categories: These categories are defined keeping
different sets of requirements or "ilities" in mind, such as Capability,
Reliability, Usability, Performance, Installability, Compatibility,
Supportability, Testability, Maintainability, Portability, and
Localizability.
o Generic Risk List: Generic risks are risks that are universal to any
system.
o Risk Catalogs: A risk catalog is a set of possible risks that belong to
a particular domain. Risk catalogs are motivated by testing the same
technology pattern over and over again. A risk catalog can be
organized into different categories of possible risks that have been
detected during testing.

(Source: http://www.satisfice.com/articles/hrbt.pdf, last accessed August 11, 2006)


3.1.4 Heuristic Test Strategy Model


Of the heuristic approaches I've described, the Outside-In approach explained by
Bach assists in uncovering a large number of failures in an application in a
relatively short time (Bach, 1999). Bach pointed out that risk analysts work
more effectively with risk catalogs whose structure is immediately obvious to
the reader (Bach, 2003). Analysts especially benefit from a clear structure when
trying to decide what area(s) to focus on at a given point in a project.

However, testing is most effective when a combination of testing strategies is
applied. Bach suggested that the initial risk list should be created without the
use of guidewords or catalogs, which is similar to the Inside-Out approach
(Bach, 2003). After analysts have generated the initial risk list, the catalog
can be used to guide the risk analysis process.

Bach's Heuristic Test Strategy Model (Bach, 2006a), as shown in Figure 3-1,
provides a clear structure that we can fit failure modes into. Therefore, this
is the methodology used for this thesis. I have subdivided the quality criteria
into operational quality criteria and development quality criteria subsections
and presented the model in Figure 4-1.


Figure 3-1: Heuristic test strategy model


Source: Bach (2006)

3.2 Risk Catalogs


3.2.1 Understanding Taxonomies
The American Heritage Dictionary of the English Language: Fourth Edition 2000
defines taxonomy to be:

The classification of organisms in an ordered system that indicates natural
relationships.

Division into ordered groups or categories.


Taxonomy is an effective way of structuring and classifying information or data
(Bishop & Bailey, 1996).

A few examples of well-known taxonomies are the following:

The science of systematics, which classifies animals and plants into groups
showing the relationship between each (Bishop & Bailey, 1996).

Bloom's taxonomy (Bloom, 1956) of levels of knowledge, which educators
use to develop teaching goals and assessments (such as exam questions).

Beizer's taxonomy (Beizer, 1990) of software errors, which classifies bugs
in terms of the software life cycle phase in which they were introduced
(design, implementation, and so on).

Taxonomies have been developed for many different objectives. Table 3-2 below
lists some common taxonomies with their attributes and year of publication.

Table 3-2: Diversity Among Taxonomies
(Source: NISTIR 6137,
http://hissa.nist.gov/publications/nistir6137/index.html#2, last accessed March
23, 2004)

Simple, few elements: Rubey (late 1970's); Glass (1981); Weiss & Basili (1985);
Knuth (1989); Grady (1992); Fenton & Pfleeger (1996)

Security-oriented: Landwehr (1995); Aslam (1995)

CMU experiment, need data from complex projects: Greenberg & Siewiorek (1996)

Detailed, life cycle oriented: IEEE Standard 1044 (1993); Beizer (1990);
Hecht & Wallace (1996)

Many taxonomies related to software testing have been developed. Some important
ones are:

Kaner, Falk, and Nguyen (1999) published a catalog of common software
errors in Appendix A of Testing Computer Software. Some of our thinking
about how testers can use a failure mode catalog came from experience with
many readers' reports of their uses of this bug list.

Vijayaraghavan published a useful taxonomy of e-commerce failures
(Vijayaraghavan, 2003).

For examples of interest to software readers, see Vijayaraghavan's thesis
(Vijayaraghavan, 2003). He has done a useful literature survey on different
taxonomies applicable to software testing.


3.2.2 How Bug Catalogs Can Guide Testing


Vijayaraghavan has listed extensive usage of taxonomies in computer science.
However, as he cautioned, disputes exist over whether the structured risk lists
(e.g., in security), structured bug lists, and many other structured lists can
be called taxonomies if they do not provide orthogonal classifications. I am
sidestepping that issue by calling the structured list in this thesis a
"catalog" instead of a "taxonomy."

Testing Computer Software contains a broad-level bug catalog that lists almost 500
common bugs. Kaner suggests using the list as follows:

1. Test idea generation

Find a defect in the list

Ask whether the software under test could have this defect

If it is theoretically possible that the program could have the defect, ask
how you could find the bug if it was there.

Ask how plausible it is that this bug could be in the program and how
serious the failure would be if it was there.

If appropriate, design a test or series of tests for bugs of this type.

2. Test plan auditing

Pick categories to sample from

From each category, find a few potential defects in the list

For each potential defect, ask whether the software under test could have
this defect


If it is theoretically possible that the program could have the defect, ask
whether the test plan could find the bug if it was there.

3. Getting unstuck

Look for classes of problem outside of your usual box

4. Training new staff

Expose them to what can go wrong, and challenge them to design tests that
could trigger those failures. (Kaner & Bach, 2005, Risk-Based Testing,
p. 17)
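The test idea generation steps above can be sketched as a simple filter over a
structured catalog: pick categories of interest, then keep only the defects
that are plausible for the application under test. The categories, defect
entries, and plausibility judgment below are hypothetical illustrations, not
entries from the Testing Computer Software list.

```python
# Sketch of test idea generation over a structured bug catalog. The
# categories, entries, and plausibility test are hypothetical.

catalog = {
    "user interface": [
        "truncated text on small screens",
        "focus lost after screen rotation",
    ],
    "networking": [
        "no timeout on a stalled download",
        "partial data treated as complete",
    ],
    "storage": [
        "crash when device memory is full",
    ],
}

def test_ideas(catalog, categories, is_plausible):
    """Yield (category, defect) pairs worth designing tests for."""
    for category in categories:
        for defect in catalog.get(category, []):
            if is_plausible(defect):
                yield category, defect

# Hypothetical plausibility judgment: our application downloads data
# and stores it on the device.
ideas = list(test_ideas(catalog,
                        ["networking", "storage"],
                        lambda d: "download" in d or "memory" in d))
for category, defect in ideas:
    print(category, "->", defect)
```

The same structure supports test plan auditing: sample a few entries per
category and ask whether the existing plan would catch each one.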


Chapter 4: Building and Using a Risk Catalog

4.1 Error, Fault and Failure
The term "defect" in software - often used interchangeably to mean error,
fault, or failure in a software system - refers to a problem with the software
work product. This flaw could be either in the software's behavior or in its
internal structure. The terms error, fault, and failure, however, have
different meanings, which IEEE Standard 610.12 defines as follows (IEEE, 1991):

Failure: The inability of a system or component to perform its required
functions within specified requirements. It is an unacceptable departure of
the program operation from the requirements. For example, the user
complaint "My favorite Web-based e-mail doesn't let me log in" is an
example of failure because the e-mail system is apparently not operating
according to (and so has deviated from) its specification.

Fault: An incorrect step, process, or data definition in a computer program.
Faults lead to one or many failures in the system. An infinite loop in a
computer program is an example of a fault because, rather than completing a
program sequence, the program loops endlessly and so could lead to failures.

Error: A human action that produces an incorrect result. Errors may lead to
one or many faults in the system. Designing a software application without
considering all possible program states is an example of error because it is
caused by a problem in a software engineer's thought process.

The term failure refers to a behavioral deviation from the user requirement or the
product specification; fault refers to an underlying condition within software that
causes certain failure(s) to occur; error refers to a missing or incorrect human
action resulting in certain fault(s) being injected into software. Sometimes error is
also used to refer to human misconceptions or other misunderstandings or
ambiguities that are the root cause for the missing or incorrect actions
(Tian, 2001).

A causal relationship exists between the three kinds of software defects. An error
injects faults into the software that, when executed, result in failures. A specific
failure may be caused by several faults; some faults may never cause a failure.
Similarly, an error can lead to single or multiple faults injected into software.
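A hypothetical minimal example makes the chain concrete: a programmer's error
(forgetting that list indices start at zero) injects a fault (an off-by-one
loop bound) that, when executed, produces a failure (the first item is never
displayed). The book-list code below is an invented illustration, not part of
the sample application.

```python
# Hypothetical illustration of the error -> fault -> failure chain.
# ERROR: the programmer forgets that list indices start at zero.
# FAULT: the resulting off-by-one range in the code below.
# FAILURE: the first book never appears in the displayed list.

def visible_books_faulty(books):
    # FAULT: the range starts at 1, so books[0] is silently skipped.
    return [books[i] for i in range(1, len(books))]

def visible_books_fixed(books):
    # Corrected loop bound: every book is included.
    return [books[i] for i in range(0, len(books))]

books = ["Algebra", "Biology", "Chemistry"]
print(visible_books_faulty(books))  # FAILURE: "Algebra" is missing
print(visible_books_fixed(books))
```

Note that the fault produces no failure at all when the list is empty or has
one element, which is why some faults never surface as failures in testing.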

As explained in chapter 3, researchers have developed many taxonomies and
catalogs on the basis of failure analysis and classification. Moreover,
researchers have made significant efforts to develop taxonomies that address
the problem of software testing and design at the fault level. Mantyla
published a master's thesis on a taxonomy of code smells (Mantyla, 2003). Code
smell is a term used in agile programming; it means that there could be
something wrong with the code's design. To remove code smells, agile
programmers use refactoring techniques. In simple terms, refactoring improves
the code design and structure to enhance readability and maintainability,
without changing the program's behavior. Mantyla's work is applicable at the
software fault level, where he classifies the problems in object-oriented
design into five high-level groups inspired by work done in this field by
Fowler (Fowler, Beck, Brant, Opdyke, & Roberts, 2003). These categories are
presented below:

1. The bloaters
2. The object orientation abusers
3. The change preventers
4. The dispensables
5. The couplers

More explanation of these categories can be found at http://www.soberit.hut.fi/mmantyla/BadCodeSmellsTaxonomy.htm (last accessed September 5, 2006).

Marick has published a catalog of test ideas applicable at the fault level. It is available at http://www.testing.com/writings/shortcatalog.pdf (last accessed September 5, 2006). More details on the topic are available in his book (Marick, 1995).

The following section clarifies the scope of the risk catalog for mobile applications and the type of examples presented in it.

4.2 Scope of the Risk Catalog for Mobile Applications

In this thesis, I intend to create a risk catalog for mobile applications to assist testers with risk analysis for the purpose of uncovering failure modes and faults. The catalog contains risk heuristics that help developers and testers consider possible risks. The scope of the risk catalog for mobile applications is limited to failures that can happen in mobile applications and, to some extent, to the faults that can cause potential failures. While working on creating and refining the risk catalog, I tested some mobile applications used in the education and enterprise domains. This effort enabled me to imagine how applications can fail in different ways.

4.3 Using Heuristics to Organize the Failure Lists

I reworked Vijayaraghavan's taxonomy into the structure that Bach laid out in his Heuristic Test Strategy Model (Bach, 2006a), and I extended it to mobile and handheld applications. The model outlines the various aspects that need to be considered when testing. The model has four high-level categories:

- Project Environment
- Operational Quality
- Development Quality
- Product Elements

Figure 4-1 gives an overview of the risk catalog.

Figure 4-1: Overview of Risk Catalog

Product Elements considers the product's different attributes and dimensions. Operational Quality concerns details about how the product should function, and Development Quality is about how the product is built. Project Environment concerns aspects of the project's non-technical environment that may affect the product and that need to be addressed. I adapted the model slightly by splitting Bach's qualitative failure categories into operational and developmental subcategories (Bach, 2006a).

Steps taken to structure the risk catalog for mobile applications are presented in section 6.3.

4.4 Developing a Risk Catalog

In this section, I explain the steps that I took to develop the risk catalog.

4.4.1 Collecting Failures and Risks

Over the last three years, I have been using 25 to 50 broad categories to classify failures that can occur in mobile applications. Most of the real-life failures were collected using online resources such as bug databases, magazines, and the trade press. I identified the potential problems outlined in this thesis using the sample applications that I tested, as well as the generic risks associated with each category.

4.4.2 Organizing Failures and Risks

Table 4-1 provides the categories of the risk catalog. For each category, I provide examples of publicly reported failures, with links to the press reports, and potential failures: problems I expect to happen and would want to look for as a software tester, but which haven't been discussed in public bug reports. The public and potential failures together make up a set of failure modes within a category.


4.5 Overview of the Risk Catalog

One goal in developing a broad risk catalog is to broaden the risk analysis that software testers use to guide their testing. A well-structured catalog provides a wider range of examples and categories of risk than any one person is likely to think of while designing his or her tests. Another goal is to provide training material, and to help software testers who are new to wireless mobile applications start from a prestructured risk profile and so come up to speed more quickly.

The catalog is based on Bach's Heuristic Test Strategy Model (Bach, 2006a); its categories and subcategories are shown in Table 4-1. I have deleted some sections that were not very relevant or useful in testing mobile applications, and added some sections that were required in the context of mobile application testing. Bach's original model can be found at http://www.satisfice.com/tools/satisfice-tsm-4p.pdf (last accessed January 24, 2007).
Table 4-1: Categorization of the Risk Catalog
(Descriptions adapted from Bach, 2006a. Each categorization heuristic is followed by its test idea heuristics; mobile-specific test ideas appear in parentheses after the heuristic they extend.)

Product Elements

Structure
- Code: The code structures that comprise the product, from executables to individual routines.
- Interfaces: Points of connection and communication between sub-systems.
- Hardware: Any hardware component that is integral to the product.
- Non-Executable Files: Any files other than multimedia or programs, like text files, sample data, or help files.

Functions
- User Interface: Any functions that mediate the exchange of data with the user (e.g., navigation, display, data entry).
- System Interface: Any functions that exchange data with something other than the user, such as other programs, hard disk, network, printer, etc.
- Calculations: Any arithmetic functions or arithmetic operations embedded in other functions.
- Startup/Shutdown: Each method and interface for invocation and initialization, as well as exiting the product.
- Error Handling: Any functions that detect and recover from errors, including all error messages.
- Interactions: Any interactions or interfaces between functions within the product.

Data
- Input: Any data that is processed by the product.
- Output: Any data that results from processing by the product.
- Big and Little (Data Handling): Variations in the size and aggregation of data.
- Noise (Memory Management, Memory Leaks): Any data or state that is invalid, corrupted, or produced in an uncontrolled or incorrect fashion.

Platform
- External Hardware (Mobile Switching Center Failures, Hardware Failures): Hardware components and configurations that are not part of the shipping product but are required (or optional) for the product to work, for example CPUs, memory, keyboards, peripheral boards, etc.
- External Software (Third-Party Software, Micro-browser Failures, Wireless Network Failures, Location Registers, Software Upgrade Errors): Software components and configurations that are not part of the shipping product but are required (or optional) in order for the product to work: operating systems, concurrently executing applications, drivers, fonts, etc.
- Internal Components (Mobile Database, Database Server, Mobile Middleware Interface Failures): Libraries and other components that are embedded in your product but are produced outside your project. Since you don't control them, you must determine what to do in case they fail.

Operations
- Environment (Mobility and Resource Management Failures, Location Management): The physical environment in which the product operates, including such elements as noise, light, and distractions.
- Common Use (Transaction Errors): Patterns and sequences of input that the product will typically encounter. This varies by user.

Time
- Input/Output (Real-Time Failure): When input is provided, when output is created, and any timing relationships (delays, intervals, etc.) among them.

Synchronization
- Software Interface: Errors in the synchronization software.
- Hardware Interface: Errors in the hardware accessories.
- Wireless Synchronization: Errors in the proxy server or wireless connectivity.

Operational Criteria

Capability
- Suitability: Presence and appropriateness of a set of functions for specified tasks.
- Accuracy: Agreed results or effects.
- Interoperability: Ability to interact with specified systems.
- Compliance: Adherence to application-related standards, conventions, or regulations.

Dependability
- Fault Tolerance: Ability to maintain a specified level of performance in cases of software faults.
- Maturity: Frequency of failure by faults in the software.
- Recoverability: Capability of a system or application to maintain services during attack or when not all resources are available.
- Reliability: Ability of the system to work well and resist failure in all required situations.

Usability
- Learnability: The operation of the product can be rapidly mastered by the intended user.
- Efficiency: Once the user has learned the system, a high level of productivity is possible.
- Satisfaction: Users should like the system.
- Memorability: Easy to remember.
- Accessibility: The product meets relevant accessibility standards and works with O/S accessibility features.
- Error Messages: Is error handling for usability issues available?

Security
- Authentication: The ways in which the system verifies that a user is who she says she is.
- Access Control and Authorization: The rights that are granted to authenticated users at varying privilege levels.
- Privacy and Confidentiality: The ways in which customer or employee data is protected from unauthorized people.
- Data Integrity: The system should not corrupt the data as compared to its original state.
- Wireless Network Security: Security failures that could occur in different wireless networks.

Availability: Duration of time a system is available for use by its intended users.

Scalability
- Horizontal: Adding resources to an existing device.
- Vertical: Adding more resources to form a group of resources.

Performance: How speedy and responsive is it?

Installability
- System Requirements: Does the product recognize if some necessary component is missing or insufficient?
- Configuration: What parts of the system are affected by installation? Where are files and resources stored?
- Uninstallation: When the product is uninstalled, is it removed cleanly?
- Upgrades: Can new modules or versions be added easily? Do they respect the existing configuration?

Compatibility
- Application Compatibility: The product works in conjunction with other software products.
- Operating System Compatibility: The product works with a particular operating system.
- Hardware Compatibility: The product works with particular hardware components and configurations.
- Backward Compatibility: The product works with earlier versions of itself.
- Resource Usage: The product doesn't unnecessarily hog memory, storage, or other system resources.

Quality of Service: Can you configure and depend on the network's service?

Development Criteria

Supportability: How economical will it be to provide support to users of the product?

Testability
- Visibility (Field Failures): Ability to observe the states, outputs, resource usage, and other side effects.
- Control: Ability to apply input.

Maintainability
- Analyzability: Effort needed for diagnosis of deficiencies or causes of failures.
- Changeability: Risks of the unexpected effects of modifications.
- Stability: Effort needed for validating the modified software.

Portability
- Adaptability: Adaptation to different specified environments.
- Conformance: Adherence to standards or conventions relating to portability.
- Replaceability: Opportunity and effort of using it in place of specified other software.

Project Environment
- Customers: Anyone who is a client of the testing and development project.
- Information: Information about the product or project that is needed for testing.
- Developer Relations: How you get along with the programmers, so that information about possible errors can be obtained from them.
- Test Team: Anyone who will perform or support testing and development.
- Equipment and Tools: Hardware, software, or documents required to administer testing and development.
- Schedule: The sequence, duration, and synchronization of events.
- Test Items: The product to be tested.
- Deliverables: The observable products of the test project.

Chapter 5: Mobile Application Risk Catalog

This chapter provides the detailed risk catalog for mobile applications. Bach (2006a) has defined many of the categories and heuristics used in this catalog. I have filtered and presented the categories that are relevant to mobile applications.

5.1 Product Elements

"Product Elements are things that you intend to test" (Bach, 2003, p. 1). In the case of mobile applications, the most relevant product dimensions are the mobile software and hardware interfaces and external dependencies like the micro-browser, mobile switching center, location registers, and wireless network. Also, there can be failures in mobility and resource management. Since mobile devices have fewer resources than desktop machines, they are more prone to memory-related errors, and they are subject to harsher network conditions like high latency and low bandwidth.

5.1.1 Structure
Everything that comprises the physical product.


5.1.1.1 Code
Typical failures in this category arise from not taking into account the limited processing power and memory of a mobile device.

Failure Modes
- No optimization of algorithms to run on a mobile device.
- Incorrect usage of data structures, thereby imposing more memory requirements to execute a particular routine or procedure.
- Assuming the existence of components and tools typically available only on a desktop machine while designing code for a mobile application.
- Designing network-centric routines that require large network data streams for proper execution.

5.1.1.2 Interfaces
Wireless Application Protocol (WAP) Gateway Failures: A WAP gateway is a server that is responsible for converting a Wireless Transport Protocol (WTP) request made by a smart phone into an HTTP request to be processed by a Web server. The WAP gateway also translates an HTML Web page into Wireless Markup Language (WML) if required.

Failure Modes
- Failure in transcoding HTML to WML, resulting in errors serving a page to the cell phone.
- Problems arising because WML cards or fonts are not supported on a device.
- Problems arising due to the deck size of the WML exceeding the device limit.
- Failure arising due to usage of non-standard HTML tags.
- Failure arising due to usage of client-side JavaScript and scripts that are available only in a Web browser running on desktop machines.
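A tester can probe the deck-size failure mode with a simple pre-flight check on the compiled WML. The sketch below is illustrative only: the 1,400-byte limit is an assumed value for a hypothetical handset, not a universal constant, and real limits must come from the device's specification.

```python
# Sketch: reject a WML deck that would exceed a device's deck-size limit.
# The limit below is a hypothetical value; real limits vary by handset.

DECK_SIZE_LIMIT_BYTES = 1400  # assumed limit for the target handset

def check_deck(deck_wml: str, limit: int = DECK_SIZE_LIMIT_BYTES) -> bool:
    """Return True if the encoded deck fits within the device limit."""
    size = len(deck_wml.encode("utf-8"))
    return size <= limit

small_deck = "<wml><card id='c1'><p>Hello</p></card></wml>"
assert check_deck(small_deck)       # fits comfortably
assert not check_deck("x" * 2000)   # oversized deck would fail on the device
```

A gateway or build step could run such a check before serving each page, turning a hard-to-diagnose device-side failure into an early, visible one.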

5.1.1.3 Hardware
Client-side devices play a more important role in mobile applications than in their desktop counterparts. There is a diverse set of devices available with varying capabilities. Mobile application developers and testers should take into consideration the hardware platform on which the application will execute.

Failure Modes
- Screen resolution of the mobile device not taken into consideration while designing the mobile application.
- High level of coupling between the mobile application and the mobile hardware, making it difficult to port the application to a different mobile device.

5.1.1.4 Non-Executable Files

Some non-executable files of interest in the mobile computing domain are help files, multimedia demonstrations of the application, and samples.

Failure Modes
- No help system present for the mobile application.
- Help system not rendering correctly on the mobile device.
- Help system does not respond to the keyboard button or device-specific help button.
- No desktop help system present to supplement the online help present on the device.

5.1.2 Functions
Everything that the product does.

5.1.2.1 User Interface

Failure Modes
- Non-intuitive placement of navigation buttons on the screen.
- Users reaching a deadlock while navigating; the only way to proceed is to return to the main screen.
- Does the application allow data input from the touch screen as well as the device keyboard?
- Deviation from the user interface guidelines.
- Inconsistencies, such as usage of different fonts and colors between related screens of the mobile application.

5.1.2.2 System Interface

Mobile middleware is an integral part of most mobile applications and provides interfaces and methods used by the application layer. A middleware is defined as "an enabling layer of software that resides between the business application and the networked layer of heterogeneous (diverse) platforms and protocols. It decouples the business applications from any dependencies on the plumbing layer, which consists of heterogeneous operating systems, hardware platforms and communication protocols" (Source: International Systems Group).

A mobile middleware is a layer of software that is used by application developers to connect their applications with disparate mobile (wireless and wired) networks and operating systems.

Failure Modes
- Optimized device-specific interface not used in the mobile application.
- Failure of the mobile middleware in connecting to the wireless network.
- Inadequate support in the middleware for different types of wireless networks.
- Inadequate user interface controls and widgets provided to the application developer.
- User interface widgets in the middleware not optimized for mobile devices.

5.1.2.3 Calculation
One of the sample applications I tested is Cells, which is described in more detail in chapter 6. This application provides basic arithmetic operations to calculate sums, averages, and other operations available in a spreadsheet. Some calculation-specific potential failure modes are listed below.

Failure Modes
- Improper calculation of basic arithmetic functions like average, minimum, maximum, etc.
- Character encoding of the mobile device interprets numeric values in an unexpected way.
- Improper calculation at boundary values due to limitations of the data storage capability of the mobile device.
- ASCII values are calculated in case the user enters a character other than a number.
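The boundary-value risk can be made concrete with a small simulation. Python integers do not overflow, so this sketch models the wrap-around that a device using fixed-width storage (here, an assumed signed 16-bit cell) would produce for a spreadsheet-style SUM:

```python
# Sketch: simulate a sum stored in a signed 16-bit cell, a hypothetical
# storage limit for a constrained device. Python integers themselves do
# not overflow; the wrap-around below models the device's behavior.

def to_int16(value: int) -> int:
    """Wrap an integer into the signed 16-bit range [-32768, 32767]."""
    return (value + 2**15) % 2**16 - 2**15

def cell_sum(values):
    """Spreadsheet-style SUM as a 16-bit device might compute it."""
    total = 0
    for v in values:
        total = to_int16(total + v)
    return total

print(cell_sum([30000, 3000]))  # expected 33000, but prints -32536
print(sum([30000, 3000]))       # correct mathematical result: 33000
```

A tester probing the Cells application would feed exactly such near-boundary values and compare the on-device result against the mathematically correct one.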

5.1.2.4 Startup/Shutdown

Startup and shutdown of a mobile application are very different because of differences in the process and threading models of mobile operating systems.

Failure Modes
- Mobile application does not terminate all open connections to the wireless network when shutting down.
- Mobile application worker process continues to hold memory even after the application exits.
- Mobile application does not save data or state after an unexpected shutdown.
- Mobile application assumes the existence of a keyboard button or touch screen to allow the user to close the application.


5.1.2.5 Error Handling

Error handling in mobile applications is challenging and very important because the end user is not connected via a high-bandwidth network that can provide additional information in case of failure. Application developers and testers have to ensure that error messages are terse and to the point, as mobile devices usually do not have big screens for presenting error details. Error-logging routines have to decide what to log based on the severity of the message, as local storage space on the device is limited.
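One way to reconcile diagnostic detail with scarce on-device storage is a severity threshold in the logging routine. This is a minimal illustrative sketch; the severity levels and the threshold value are my assumptions, not part of any particular mobile platform:

```python
# Sketch: log only messages at or above a severity threshold so that
# scarce on-device storage is spent on the most important errors.
# The levels and threshold are illustrative assumptions.

SEVERITIES = {"debug": 0, "info": 1, "warning": 2, "error": 3, "fatal": 4}

class DeviceLog:
    def __init__(self, threshold: str = "warning"):
        self.threshold = SEVERITIES[threshold]
        self.entries = []

    def log(self, severity: str, message: str) -> None:
        """Record the message only if it meets the severity threshold."""
        if SEVERITIES[severity] >= self.threshold:
            self.entries.append((severity, message))

log = DeviceLog(threshold="warning")
log.log("debug", "entered sync routine")     # dropped: below threshold
log.log("error", "Sync failed: no network")  # kept: terse and to the point
print(log.entries)  # [('error', 'Sync failed: no network')]
```

A tester could then check both sides of the trade-off: that low-severity chatter does not fill the device, and that genuine failures are never silently discarded.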

5.1.2.6 Interactions
With the advent of service-oriented architecture, many application interfaces now reside on a server. Mobile applications have to connect to the server that contains the methods and interfaces to get or set data and carry out tasks. The mobile book catalog described in chapter 8 uses this mechanism to get the list of books from a backend database.

Failure Modes
- Inability to connect to the remote server due to problems in the proxy objects on the client device.
- Problems in retrieving data on the client device due to data corruption over the wireless network.
- Client device not informed of errors arising in the server component.
- Data payload too big over the wireless network, resulting in delay in receiving a response from the server.
- Problems in data serialization and de-serialization over the wireless network, resulting in data truncation and incomplete transmission from the client to the server and vice versa.

5.1.3 Data
Everything that the product processes.

5.1.3.1 Input
A well-designed mobile application allows only valid input across its subsystem boundaries and interfaces. Processing invalid input is expensive because it requires memory and processing power. Data input on mobile devices is tough for the end user: either there is no keyboard, or, if the device has one, it is not as good as those available for desktop machines. This requires that the data input to the application be minimized and optimized to provide maximum efficiency to the end user.

Failure Modes
- Crash due to unhandled invalid input.
- Corruption of the file system due to invalid input not handled at the user interface.
- No default values for the common fields in the user interface of the application under test.
- No auto-correct feature to fix common mistakes in input.
- No status information provided to the user. For example, repeated clicking of a button that fetches data from the backend may crash the application due to multiple connections open with the backend.
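Validating input at the user-interface boundary, before it can reach the file system or the network layer, can be sketched as a small validator. The field name and the 1-99 range are hypothetical values chosen for illustration:

```python
# Sketch: validate input at the UI boundary so invalid data never reaches
# the file system or the network layer. The field rules are hypothetical.

def validate_quantity(raw: str) -> int:
    """Parse a quantity field, rejecting non-numeric and out-of-range input."""
    if not raw.strip().isdigit():
        raise ValueError("quantity must be a whole number")
    value = int(raw)
    if not 1 <= value <= 99:  # assumed range for a small-screen form
        raise ValueError("quantity must be between 1 and 99")
    return value

print(validate_quantity(" 7 "))  # accepted: 7
for bad in ("abc", "0", "100"):
    try:
        validate_quantity(bad)
    except ValueError as exc:
        print(bad, "rejected:", exc)  # invalid input stopped at the boundary
```

Rejecting bad input this early is cheap; letting it propagate costs the memory and processing power the section above warns about.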

5.1.3.2 Output
This category highlights failures that can arise due to improper output of data after processing. Some failure modes in this category are from the sample application (mobile book catalog) described in chapter 8.

Failure Modes
- User interface does not support scrolling to display all available data.
- Book titles do not download with the right price on the mobile device.
- Missing columns in the data grid of the application.
- Missing title if there is no price information.

5.1.3.3 Big and Little (Data Handling)

Failure Modes
- Varying treatment of characters by different mobile content delivery technologies not taken into consideration.
- Application vulnerable to failures at boundary values.
- In the case of the book catalog (mobile Web service), are special characters handled appropriately if present in the book price?

5.1.3.4 Noise
Memory Management
Mobile devices are highly resource-constrained with respect to the amount of primary and secondary storage. Special attention is required while developing and testing to avoid memory leaks and wild pointers.

Memory Leaks

Failure Modes
- Failures due to memory leaks on the client device.
- Failures due to memory leaks on the server side.

Related Bugs and Links
- Memory leak in Pocket Internet Explorer: http://support.microsoft.com/default.aspx?scid=kb;en-us;315028
- Memory leak due to SOAP exception: https://www.alphaworks.ibm.com/forum/wstkmd.nsf/0/001C8DD95F6E7CF2633A6F05F9275483?OpenDocument
- Crash when establishing connection with dead Web Services: http://www.alphaworks.ibm.com/forum/wstkmd.nsf/0/896C1B509F0130669C5E7D90F1D9812E?OpenDocument (IBM, 2005)
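Leaks of this kind often take the form of a long-lived collection that never releases old entries. The sketch below contrasts an unbounded cache (the leak pattern) with a bounded one; the class names and the cache budget are illustrative assumptions, not drawn from any of the products above:

```python
# Sketch: an unbounded cache is a classic leak on a memory-constrained
# device -- every response ever fetched stays reachable forever.

class LeakyClient:
    def __init__(self):
        self._cache = {}  # grows without bound: nothing is ever evicted

    def fetch(self, url: str, payload: bytes) -> bytes:
        self._cache[url] = payload  # leak: old entries are never released
        return payload

class BoundedClient:
    def __init__(self, max_entries: int = 8):  # assumed memory budget
        self._cache = {}
        self._max = max_entries

    def fetch(self, url: str, payload: bytes) -> bytes:
        if len(self._cache) >= self._max:
            self._cache.pop(next(iter(self._cache)))  # evict oldest entry
        self._cache[url] = payload
        return payload

leaky, bounded = LeakyClient(), BoundedClient(max_entries=8)
for i in range(1000):
    leaky.fetch(f"/page/{i}", b"data")
    bounded.fetch(f"/page/{i}", b"data")
print(len(leaky._cache), len(bounded._cache))  # 1000 8
```

A tester can expose the leaky variant the same way: drive the application through many repetitions of one operation and watch whether memory use keeps climbing.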

5.1.4 Platform
Everything on which the product depends (and that is outside your project).

5.1.4.1 External Hardware

Mobile switching center failures
In the case of a cellular wireless network, establishing a communication session means making a wireless channel available between a mobile host (cell phone) and a mobile support station. The mobile host sends a request to the mobile support station in its cell. There are many different channel allocation algorithms to avoid channel interference and efficiently utilize the limited frequencies. All these functions are carried out at the mobile switching center (Cao, 2000).

Hardware failures

Related Bugs and Links
- Problems with Casio Cassiopeia Pocket PC 2002: http://www.pdastreet.com/forums/showthread.php?threadid=779
- Problems in Compaq hardware: http://www.cewindows.net/bugs/pocketpc2002-compaq.htm
- Problems in HP hardware: http://www.cewindows.net/bugs/pocketpc2002-hewlettpackard.htm
- Problems in Toshiba hardware: http://www.cewindows.net/bugs/pocketpc2002-toshiba.htm
- HP Jornada with Pocket Internet Explorer for Windows CE saves cookies when "Save my password" is not selected: http://support.microsoft.com/default.aspx?scid=kb;en-us;303676

5.1.4.2 External Software

Third-party software failures
Mobile application architectures utilize various third-party software applications. In the case of location-based services, mapping is carried out with the help of content providers who have the geographic data. Potential problems in the third-party applications may lead to failures in the application under test.

Micro-browser failures
A micro-browser offers the same basic functionality as a desktop browser. It is used to submit user requests, receive and interpret results, and allow users to surf Web pages using their handheld (Nguyen, 2003).

Failure Modes
- Application not tested for correct functionality on a text-only browser.
- Application fails to work on a Palm-based browser.
- Micro-browser does not support a security mechanism like SSL.
- Micro-browser does not support tables and images.
- Micro-browser does not support cHTML, so it is not able to serve i-mode pages.
- AvantGo-specific problems. AvantGo supports Web-channel-formatted pages.

Related Bugs and Links
- Known issues in Pocket Internet Explorer on a Handheld PC: http://support.microsoft.com/default.aspx?scid=kb;en-us;190307
- Pocket Internet Explorer quits when you connect to an SSL site with the DES56 cipher: http://support.microsoft.com/default.aspx?scid=kb;en-us;320894
- PRB: You receive an unknown error when you call a method of the MFC ActiveX control: http://support.microsoft.com/default.aspx?scid=kb;en-us;310566
- Error message: 500 java.lang.IllegalArgumentException: [object]: http://support.microsoft.com/default.aspx?scid=kb;en-us;258910
- Pocket Internet Explorer: "Bad MIME Format" viewing JPEG images: http://support.microsoft.com/default.aspx?scid=kb;en-us;187608
- Cannot download .wav files in Pocket Internet Explorer 1.1: http://support.microsoft.com/default.aspx?scid=kb;en-us;167923
- Unable to personalize a user who is using Pocket Internet Explorer: http://support.microsoft.com/default.aspx?scid=kb;en-us;284151

Wireless network failures

This category targets failures in typical wireless networks.

802.11 Failure Modes
- Configuration problems in the 802.11 BSS (Basic Service Set).
- Configuration problems in the 802.11 ESS (Extended Service Set).
- Location of an access point not appropriate, leading to fading of the signal.
- Interference with another access point, leading to loss of signal.

Bluetooth Failure Modes
- Failure in setting up a Bluetooth piconet.
- Failure to build a Bluetooth scatternet from piconets.

Cellular Network Failure Modes
- Loss of signal.
- Coverage issues.
- Loss of bandwidth due to mobility.

Home Location Register / Visitor Location Register / Location database

In cellular systems, the home location register (HLR) is the database that holds the permanent registry for the service profile, i.e., information about the subscriber. The visitor location register (VLR), on the other hand, serves as a temporary repository for the profile information. Many different algorithms are in use to manage mobility. The most common strategy in use in North America is IS-41, which utilizes a two-tier system of HLR and VLR to keep track of the mobile node (Biaz & Vaidya, 1997).

Failure Modes
- Failure to update the HLR on the status of the mobile host after it enters a new VLR (Biaz & Vaidya, 1998).
- Excessive load on the network signaling resources due to the mobility of the mobile hosts.
- Excessive load on the database due to frequent updates resulting from the mobility of the node.

Software upgrade errors

Failure Modes
- Application does not work with the upgraded operating system.
- Application or device freezes due to a firmware upgrade.

Related Bugs and Links
- Problem faced due to firmware upgrade on the Samsung T series: http://www.reviewcentre.com/post64347.html

5.1.4.3 Internal Components

Mobile Database
There are two different approaches to database connectivity on a handheld wireless device. In the thin-client model, technology like WAP enables users to view information that has been extracted from the database and displayed as a Web page with the help of a micro-browser. Since availability of the wireless network still has some issues, this model is not very suitable for data-intensive applications. The alternative model makes significant data reside on the handheld (a local relational database on the handheld).

Database Server
A database server is software that manages data in a database. Database management functions, such as locating the actual record being requested and updating, deleting, and protecting the data, are performed by the database server. A database server also provides access control and concurrency control. So, while testing a mobile application that connects to the database, if some erratic data is encountered, the database server could be the culprit and should be tested.

5.1.5 Operations
How the product will be used.

5.1.5.1 Environment
Mobility and Resource Management Failures
This category targets failures that occur due to the mobility of the node and improper resource management in offering uninterrupted wireless connectivity to the user.

Failure Modes
- Frequent disconnection due to the mobility of the node.
- Disruption during hand-off between different networks.
- Depletion of the IPv4 addresses.

Location Management
Location management is an extremely important functionality in location-based mobile applications. A location-based mobile application utilizes knowledge of the location of the mobile node to serve location-specific information. It is used in telematics, route directions, call routing, billing, and several other applications.

Failure Modes
- Change in the logical identity of the device or the owner. A logical identity could be a MAC address, an IP address, or anything else used to identify a mobile node.
- Problems arising due to not updating the location database server.
- Problems arising due to the mobile node not re-registering with the base station.
- Failure in receiving GPS data, in case GPS is used to locate mobile nodes.
- Failure to translate the geocode (latitude, longitude) into a map by the content provider.
- Failures in the triangulation mechanism used to determine location.

5.1.5.2 Common Use


Transaction errors
These are the errors that occur in carrying out a typical transaction. Transaction will
depend on the functionality that the application offers and is highly contextdependent.

Failure Modes

Failure of the base unit to transmit information to the handhelds.

Failure in completing an assigned task due to missing information on how to
transmit data.

5.1.6 Time
Any relationship between the product and time.

5.1.6.1 Input/ Output


Real-time failure
Real-time applications or services not only have to carry out the right
computation; carrying out a specific task or providing a service within a
specified amount of time is also of considerable importance. Real-time
applications can be broadly categorized as hard real-time or soft real-time,
depending on the real-time constraints they have to satisfy (Dreamtech, 2002).

Failure Modes

Delay in sending localized / personalized content to a user.

Data instance failure

Related Bugs and Links

Problem with the number of records causing the system to stop:

http://support.microsoft.com/default.aspx?scid=kb;en-us;Q317698

5.1.7 Synchronization
How the data will be synchronized.
Synchronization is a feature that exchanges, transforms, and synchronizes data
between two different applications or data stores. Synchronization can be either
cradle-based or wireless. The SyncML Consortium is on a mission to get mobile
application developers and handheld device makers to use a common, XML-based
data synchronization technology. This category lists the different failures that
could be encountered while synchronizing data between two applications.
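The core decision in two-way synchronization can be sketched as a comparison of each record's modification times against the time of the last sync: a record changed on both sides is a conflict, while a record changed on one side only is copied in the newer direction. The names and timestamps below are illustrative.

```python
def sync_action(last_sync, device_mtime, desktop_mtime):
    """Decide what to do with one record during a two-way sync."""
    device_changed = device_mtime > last_sync
    desktop_changed = desktop_mtime > last_sync
    if device_changed and desktop_changed:
        return "conflict"            # needs a merge or a user decision
    if device_changed:
        return "copy device -> desktop"
    if desktop_changed:
        return "copy desktop -> device"
    return "no action"

print(sync_action(100, 150, 90))     # copy device -> desktop
print(sync_action(100, 150, 160))    # conflict
print(sync_action(100, 90, 95))      # no action
```

Many of the failures listed in this category amount to this decision being made with corrupt timestamps or applied to corrupt data.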

5.1.7.1 Software Interface


Failure Modes

Corruption of data files during HotSync.

Problem in the SyncML server.

ActiveSync problems in the case of Pocket PC devices.

Problems encountered while establishing partnerships with more than one machine.

Failure in synchronizing data due to interference with another application.

Related Bugs and Links

Office-Palm link: http://www.earthv.com/articles.asp?ArticleID=579

Intellisync error:
http://www.pdastreet.com/forums/showthread.php?threadid=779

Installation of other software over ActiveSync:


http://support.microsoft.com/default.aspx?scid=kb;en-us;263450

Problem with the USB driver of Mac:


http://www.palminfocenter.com/view_Story.asp?ID=703

Problem with HotSync synchronizing Datebook:


http://www.geocities.com/Heartland/Acres/3216/faq_pg7.htm

HotSync problem with some Windows XP systems:


http://www.computing.net/pda/wwwboard/forum/278.html

Database access errors during synchronization:


http://support.microsoft.com/default.aspx?scid=kb;en-us;294213

Synchronization failure with Outlook and ActiveSync:


http://support.microsoft.com/default.aspx?scid=kb;en-us;276563

Problem synchronizing third party applications and software:


http://support.microsoft.com/default.aspx?scid=kb;en-us;271980

Problem with AvantGo synchronization:


http://support.microsoft.com/default.aspx?scid=kb;en-us;259938


Problems in continuing message interchange over the SyncML server:


http://lists.axialys.net/pipermail/syncml/2003-April/000010.html

Problems with multiple inboxes with a single partnership:


http://support.microsoft.com/default.aspx?scid=kb;en-us;269217

Palm HotSync troubleshooting: http://www.palm.com/support/hotsync.html

Problem with HotSync due to a discrepancy in the registry entries:


http://www.geocities.com/Heartland/Acres/3216/faq_pg6.htm

Error due to CDO collaboration object model not installed:


http://support.microsoft.com/default.aspx?scid=kb;en-us;299625

Error in ActiveSync due to missing files:


http://support.microsoft.com/default.aspx?scid=kb;en-us;281598

Soft Reset: http://www.mobile-andwireless.com/Articles/Index.cfm?ArticleID=27164

Problem in the synchronization port (COM / USB) of the computer:


http://support.microsoft.com/default.aspx?scid=kb;en-us;185750

Problem synchronizing an application like Money due to unexpected actions
during the synchronization process:
http://support.microsoft.com/default.aspx?scid=kb;en-us;263988

No reconnection after log off and logging on again:


http://support.microsoft.com/default.aspx?scid=kb;en-us;q321935&

Synchronization problem due to error in the server:


http://support.microsoft.com/default.aspx?scid=kb;en-us;318450

Corrupted Data File May Prevent Mobile Application From Opening Up:
http://www.filemaker.com/ti/108092.html

ActiveSync Reports Unresolved Items When Device Resources Are Low:


http://support.microsoft.com/default.aspx?scid=kb;en-us;295001


5.1.7.2 Hardware Interface


In the case of local synchronization, a cradle connected to either the serial or
USB port is used to mount the mobile device and synchronize files and other data
between a desktop and the mobile device. This category lists the failures that
could occur in interfacing the cradle with a desktop due to failures in the
serial or USB port of the desktop.

Failure Modes

Failure in the USB driver installed on the desktop.

Failure in the serial port communication.

BIOS errors in the desktop resulting in breakdown of communication between the
external device and the desktop.

Failure in the infrared port of the mobile device or partner machine.

Related Bugs and Links

Problem with the USB driver of Mac:


http://www.palminfocenter.com/view_Story.asp?ID=703

5.1.7.3 Wireless Synchronization


Wireless modems are also used to synchronize data. In this wireless
synchronization method, the mobile device connects to a proxy server using a
wireless modem, and preformatted Web pages stored on the Web server are
transferred to the device. The user can then view the pages offline (Nguyen, 2003).

Infrared synchronization, in which devices are synchronized locally using line
of sight, is also in use.

Failure Modes

Problems in the proxy server.

Non-compliance with the guidelines for preformatting the Web pages for
mobile devices.

Failure in infrared synchronization.

5.2 Operational Quality Criteria


Quality criteria are "the rules, values, and sources that allow you as a tester
to determine if the product has problems. Quality criteria are multidimensional,
and often hidden or self-contradictory." ((Bach, 2003), p. 1)

Operational quality criteria are criteria that relate to the product in use. We
distinguish them from development criteria, which relate to the product as a static
object under development.

5.2.1 Capability
Can it perform the required functions?
ISO 9126 defines functionality as "a set of attributes that bear on the existence of
a set of functions and their specified properties. The functions are those that satisfy
stated or implied needs. This set of attributes characterizes what the software does
to fulfill needs, whereas the other sets mainly characterize when and how it does
so." (Source: http://www.issco.unige.ch/ewg95/node14.html)

ISO 9126 further subdivides Functionality into Suitability, Accuracy,
Interoperability, and Compliance. These subcategories are discussed as individual
categories in this qualitative fault modeling of mobile applications.

5.2.1.1 Suitability
"Attributes of software that bear on the presence and appropriateness of a set of
functions for specified tasks." (ISO9126, 1991)

Failure Modes

No implementation, or malfunction, of renaming in the Cells spreadsheet.

Unable to beam the Cells spreadsheet.

File size too big to be served and viewed on a handheld.

No admin button to carry out administrative tasks in PAAM.

Failure to add an individual handheld to the existing subgroup of PAAM.

Failure to delete, add or move files from one subgroup to another.

Failure in the filter function of PAAM (right click for their suggestion).

Failure to archive files in PAAM.

Related Bugs and Links

Incomplete functional implementation in Pocket EXCEL:


http://www.earthv.com/articles.asp?ArticleID=579

Pocket PC 2002: Contact Does Not Beam:


http://support.microsoft.com/default.aspx?scid=kb;en-us;323011

Pocket PC Calendar Does Not Correct Time Zone Changes:


http://support.microsoft.com/default.aspx?scid=kb;en-us;268249

5.2.1.2 Accuracy
"Attributes of software that bear on the provision of right or agreed results or
effects." (ISO9126, 1991)

Failure Modes

Data that is beamed is inaccurate due to a transmission problem.

Data not stored in the appropriate folder.

Not able to add a student's name to a subgroup of PAAM.

Application designed without taking screen size into consideration.

Misalignment of image on the screen.

Multiple copies of a file with the same name but different content exist on
the handheld.

Related Bugs and Links

Known Issues in Pocket Excel on a Handheld PC:


http://support.microsoft.com/default.aspx?scid=kb;en-us;189502

Known Issues in Pocket Word on a Handheld PC:


http://support.microsoft.com/default.aspx?scid=kb;en-us;188782


Graphics Issues in Pocket PowerPoint Presentations:


http://support.microsoft.com/default.aspx?scid=kb;en-us;186757

5.2.1.3 Interoperability
"Attributes of software that bear on its ability to interact with specified systems."
(ISO9126, 1991)

Failure Modes

Specific make of handheld not able to send information to PAAM.

Problem running the software due to an incompatible version of the handheld's
operating system. For example, certain applications developed for Palm OS
require an earlier version of Palm OS.

Application not available for all the leading handheld platforms like Palm
OS, Pocket PC, Blackberry and Symbian OS.

Application not able to run on a dirty configuration. A dirty configuration is
any configuration that is not supported but is very common. (Collard, 2003)

Application can run only on one kind of network. For example, if CDMA or
1XRTT is used for voice and data, it can only work in North America and
places where CDMA is in use.

Beaming of files or data between handhelds having different operating systems
not possible.

Failure to interoperate between differing screen resolutions. Some of the
common resolutions available are 160x160, 320x240, 95x65, 120x130, and
1024x768.


Failure to interoperate between differing colors. Some of the common color
schemes found on handheld devices are 2-bit, 16-bit, 32-bit, and varying gray
levels.

Application not compatible across the different microbrowsers available.

Application does not comply with differing graphics formats.

Related Bugs and Links

Incompatibility bugs wireless:


http://australianit.news.com.au/articles/0,7204,6459892%5e15397%5e%5en
bv%5e,00.html

Problems When You Convert Files Between Excel and Pocket Excel:
http://support.microsoft.com/default.aspx?scid=kb;en-us;185921

5.2.1.4 Compliance
"Attributes of software that make the software adhere to application related
standards or conventions or regulations in laws and similar prescriptions."
(ISO9126, 1991)

Failure Modes

Non-compliance with the wireless network standard.

Application does not comply with the UI guidelines of leading development
environments like Palm OS, MSDN, or BREW.


5.2.2 Dependability
Will it work well and resist failure in all required situations?
Dependability is a term encompassing many notions, such as reliability,
recoverability, availability, and safety (Malloy, Varshney, & Snow, 2002).
ISO 9126 divides reliability into three separate categories: Fault Tolerance,
Maturity, and Recoverability.

Applications running on wired infrastructures have very high dependability
because of the nature of such infrastructures. Current and emerging wireless
networks and the applications running on them do not offer such levels of
dependability. Therefore, it is extremely important to concentrate on this
category while testing any wireless or mobile application.

Reliability within the mobile context could be defined as the "ability of the
wireless and mobile networks to perform their designated set of functions under
certain conditions for a certain operational time." (Malloy et al., 2002) I have
included the listing of failure modes with respect to Fault Tolerance, Maturity,
and Recoverability in this paper, as that seemed the most appropriate way to deal
with the issues concerning the dependability of wireless applications and networks.

5.2.2.1 Fault Tolerance


"Attributes of software that bear on its ability to maintain a specified level of
performance in cases of software faults or of infringement of its specified
interface." (ISO9126, 1991)


Failure Modes

Unable to resume transmission of data, or corruption of data, due to failure of
an access point.

No autoconfiguration of addressing based on context. (Sterbenz et al., 2002)

No autoconfiguration of signaling and routing based on mission.
(Sterbenz et al., 2002)

Problem in the channel allocation algorithm for the cellular network.

Failure in borrowing a channel due to delay in communication. (Cao, 2000)

Failure to establish a channel due to network congestion.

Failure to establish a channel due to communication link failure.

Failure to establish a channel due to mobile switching center failure.

5.2.2.2 Maturity
"Attributes of software that bear on the frequency of failure by faults in the
software." (ISO9126, 1991)

Failure Modes

Mean time between failures (MTBF) in carrying out a transaction very low.

Frequent stalls in transmissions.

Low power of transmission of the radio waves.
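MTBF itself is easy to estimate from a failure log gathered during a soak test: average the gaps between consecutive failures. The timestamps below are invented for the example.

```python
# Times (in seconds) at which failures were observed during a soak test.
failure_times = [120, 480, 900, 1500]

gaps = [t2 - t1 for t1, t2 in zip(failure_times, failure_times[1:])]
mtbf = sum(gaps) / len(gaps)
print(mtbf)    # (360 + 420 + 600) / 3 = 460.0 seconds
```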


5.2.2.3 Recoverability
Recoverability is the capability of a system or application to maintain services
during an attack or when not all resources are available.

Failure Modes

Application does not switch to offline mode when there is a loss in network
connectivity.

Delayed recovery after timeouts. Normally, recovery should be in milliseconds
or microseconds. General Packet Radio Service (GPRS) networks have recovery
times in seconds after timeouts due to link stalls. (Chakravorty & Pratt, 2002)

No adaptation of the transmission power of the signals.

No reconnection attempt after the device fails to establish the wireless link.
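The last failure mode suggests an obvious behavior to test for: repeated reconnection attempts with exponential backoff and jitter, rather than a single try. The policy below is a generic sketch, not the documented behavior of any particular device or stack.

```python
import random

def backoff_delays(max_attempts=5, base=0.5, cap=30.0, seed=None):
    """Delays (in seconds) before each reconnection attempt."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(max_attempts):
        # Delay doubles each attempt, capped; jitter avoids many devices
        # retrying in lockstep after a shared outage.
        delay = min(cap, base * (2 ** attempt))
        delays.append(delay * rng.uniform(0.5, 1.0))
    return delays

print([round(d, 2) for d in backoff_delays(seed=1)])
```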

5.2.2.4 Reliability
This checks if the product will work well and resist failure in all required situations.
It includes the following:

Error handling: the product resists failure in the case of errors, is graceful
when it fails, and recovers readily.

Data Integrity: the data in the system is protected from loss or corruption.

Safety: the product will not fail in such a way as to harm life or property.


5.2.3 Usability
How easy is it for a real user to use the product?
Usability is the "effectiveness, efficiency and satisfaction with which a specified set
of users can achieve a specified set of tasks in a particular environment."
(ISO9241-11, 1998) According to Jakob Nielsen, usability subsumes the notions of
learnability, memorability, efficiency, error rate / recovery, and satisfaction
(Nielsen & Mack, 1994). ISO 9126 divides usability into understandability,
learnability, and operability.

Usability issues are highly pronounced on handheld devices due to the limitations
of wireless handheld devices (Passani, 2000). They have a limited form-factor and
the display units are smaller than their desktop counterparts. Designing for such
small screen size needs more thinking and better navigation structures. Another
factor worth taking into consideration is the data input in a PDA or smart phone.
The keyboard or the soft input panel of these devices is not very spacious. This
warrants new and innovative ways of reducing the amount of the data that a user is
made to enter. In this paper, usability failure modes and risks are divided into
learnability, efficiency, memorability, error recovery and satisfaction.

5.2.3.1 Learnability
"The system should be easy to learn so that the user can rapidly start getting some
work done with the system." (Nielsen & Mack, 1994)


Failure Modes

Inconsistent layout.

Information not arranged in hierarchical or tree-like structure.

Related bugs and links

Inconsistent user interface:


http://www.cewindows.net/commentary/userinterface.htm

5.2.3.2 Efficiency
"The system should be efficient to use, so that once the user has learned the
system, a high level of productivity is possible." (Nielsen & Mack, 1994)

Failure modes

No priority given to the main activities of portable users; all functionalities
implemented indiscriminately.

Main activities of the user not implemented in the fastest possible manner.

Verbose text on a small screen.

No implementation of Back functionality.

Application directly transcoded to a wireless markup language from HTML.

Users not given proper feedback when they commit errors.

Users do not understand what to do when they commit mistakes.


5.2.3.3 Satisfaction
"The system should be pleasant to use, so that users are subjectively satisfied when
using it. Users should like the system." (Nielsen & Mack, 1994)

Failure Modes

Users found it difficult to use the application.

Application needs a lot of data entry.

5.2.3.4 Memorability
"The system should be easy to remember so that the casual user is able to return to
the system after some period of not having used it without having to learn
everything all over again." (Nielsen & Mack, 1994)

Failure Modes

No customization of the application for the specific microbrowser. This might
result in content not being rendered properly on a microbrowser.

Difficulty in remembering the actions needed to perform a task.

5.2.3.5 Accessibility
Can it be used by everyone?
A system is said to be accessible when it can be used by anyone, irrespective of
their physical or technical capabilities. With respect to people with physical
disabilities, accessibility means providing supporting tools or assistive
technologies, such as screen readers, adapted keyboards, or head-mounted
pointers, to enable use of the product.

Failure Modes

Red / green color blindness not taken into consideration.

User not able to press a button on the handheld device due to improper
placement of the button.

User not able to use biometric security feature of the handheld due to strict
requirement of the hand movement.

5.2.3.6 Error Messages


Error handling is typically provided for obvious errors. It is a better aid for
the user, however, if error handling is also provided for the errors a user
might make in terms of the functionality of the product.

5.2.4 Security
How well is the product protected against unauthorized use or intrusion?
Security issues could be subdivided into six subcategories: privacy and
confidentiality, access control and authorization, authentication, data integrity,
wireless network security, and availability.


5.2.4.1 Authentication
Authentication implies establishing the identity of users, processes, or
hardware components.

Failure Modes
No authentication mechanism is used.
Weak passwords that can be easily broken.
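"Weak" can be given a rough number: the entropy of the secret in bits, under the generous assumption that every character is chosen uniformly at random.

```python
import math

def entropy_bits(alphabet_size, length):
    """Entropy of a uniformly random secret, in bits."""
    return length * math.log2(alphabet_size)

print(round(entropy_bits(10, 4), 1))    # 4-digit PIN: 13.3 bits
print(round(entropy_bits(62, 8), 1))    # 8-char alphanumeric: 47.6 bits
```

A 4-digit PIN offers only 10,000 possibilities, which is trivially brute-forced; real human-chosen passwords have even less entropy than this model suggests.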

Related Bugs and Links

Palm OS Authentication Bypass Vulnerability:


http://www.securityfocus.com/bid/5538/info/

Palm OS Weak Encryption Vulnerability:


http://www.securityfocus.com/bid/1715/info/

5.2.4.2 Access control and authorization


Access control concerns who may have access to the physical device or data.

Related Bugs and Links

Security analysis of the Palm operating system:
http://www.usenix.org/events/sec01/full_papers/kingpin/kingpin_html/index.html


5.2.4.3 Privacy and confidentiality


Confidentiality and privacy mean the protection of information about a user or
process.

Failure Modes

Disclosure of passwords.

Disclosure of private data like credit card details.

Related Bugs and Links

Yahoo! Mobile service discloses random sensitive information to


unauthorized users: http://xforce.iss.net/xforce/xfdb/11352

Stolen credit card information:


http://www.cewindows.net/commentary/userinterface.htm

Handhelds are a hacker's delight:


http://www.ciol.com/content/developer/2003/103080301.asp

Physical access to a stolen device: http://www.kb.cert.org/vuls/id/789985

Linux PDA Security Hole:


http://www.pdacenter.net/news/static/102643633846024.shtml

Palm Password bypass error: http://www.securityfocus.com/bid/2429



World-readable permissions in Palm Desktop:


http://www.securityfocus.com/bid/2398/discussion/

Palm Desktop for Mac OS X HotSync Insecure Backup Permissions Vulnerability:
http://www.securityfocus.com/bid/3863

Mouse hole in device security: http://pcw.vnunet.com/News/1123233

Unauthorized access due to stolen device:


http://www.computerworld.com/securitytopics/security/story/0,10801,7812
7,00.html

5.2.4.4 Data Integrity


Integrity means that the system should not corrupt data compared to its
original state.

Related Bugs and Links

Corruption of file system using Zaurus vulnerability:


http://www.internetnews.com/dev-news/article.php/1402491

5.2.4.5 Wireless Network Security


This category targets the security failures that could occur in different wireless
networks. Networks covered under this category are Bluetooth, 802.11, and the
cellular network. Wireless networks are currently the biggest security hole in
the IT infrastructure and expose even the wired infrastructure to attacks. A brief
explanation of the networking technologies and links to relevant literature can be
found in the appendices at the end of the paper. Many of these failure modes are
explained in detail in a special publication by NIST (Karygiannis & Owens, 2002).

A new security specification called Wi-Fi Protected Access (WPA) is being
adopted, as it fixes most of the fundamental flaws in Wired Equivalent Privacy
(WEP). The key features of WPA are: Extensible Authentication Protocol (EAP),
Temporal Key Integrity Protocol (TKIP), Message Integrity Check (MIC), and
802.1X for authentication and dynamic key exchange. WPA is a standards-based
solution and offers a higher level of interoperability.

Bluetooth Failure Modes

No external boundary protection.

Bluetooth devices not set to the lowest sufficient power level to keep
transmissions within bounds.

Unit keys used instead of combination keys.

Weak Personal Identification Numbers (PINs) chosen.

No alternative protocol used for the exchange of PINs.

No established "minimal key size" for any key negotiation process.

No application-level (on top of the Bluetooth stack) encryption used for highly
sensitive data.

Security patches and upgrades not up-to-date.

Man-in-the-middle attack.

Break of stream cipher: DES/RC4 are weaker than 3DES and AES.

802.11 Failure Modes

Eavesdropping from the parking lot within the AP range.

Radio interference resulting in DoS.

No encryption mechanism followed by the AP.

Shared key authentication.

Media Access Control (MAC) address could be spoofed even if WEP is enabled,
allowing access to the Wireless Local Area Network (WLAN).

Security patches and upgrades not up-to-date.

Algorithms using shorter keys.

Inherent weaknesses in WEP: it could be easily cracked.

If the plaintext message is known and the attacker has a copy of the
ciphertext, the key could be obtained by getting the IV and using a dictionary
attack.

Cipher stream reuse: the key stream could be recovered from the WEP packet.

Fluhrer-Mantin-Shamir exploit of the weakness in the key scheduling algorithm
(KSA), which resulted in two tools: WEPCrack and AirSnort.

Static noise attack resulting in DoS.

Weak AP password.

If the reset option is enabled, encryption settings could revert to the default
of no encryption.

No physical barriers, the first line of defense.

Network name (SSID) stored in clear text.

No authentication mechanism in place: EAP is not enabled.

Man-in-the-middle attack.

Address Resolution Protocol (ARP) poisoning.

Cache poisoning.

Rogue access points.

No Access Control List (ACL) or Virtual Private Network (VPN) in place.

Access to the wired network is not protected.

IPSec or VPN or secure shell traffic is not used.
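The fragility of WEP follows partly from its 24-bit initialization vector (IV) space alone. By the birthday bound, about 1.18 times the square root of the IV space gives a roughly 50% chance of an IV collision, and a collision means keystream reuse; on a busy WLAN this takes only minutes of traffic.

```python
import math

iv_space = 2 ** 24                # WEP's 24-bit IV space
frames_for_50pct = 1.1774 * math.sqrt(iv_space)
print(round(frames_for_50pct))    # about 4823 frames
```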

Related Bugs and Links

Problems with WEP and adoption of WPA: InformationWeek, Wireless Fidelity:
Deploying WPA Today, June 30, 2003

Wi-Fi users do not use security features: SecurityFocus News, Study: Wi-Fi
users still don't encrypt

Problem in the wireless router: SecurityFocus discussion, SMC Wireless Router
Malformed PPTP Packet; SecurityFocus Vulns Info, Buffalo WBRG54 Wireless
Broadband Router

Configuration problem: SecurityFocus, Netgear FM114P ProSafe Wireless Router
Rule Bypass

Information disclosure due to configuration error: SecurityFocus, Netgear
FM114P ProSafe Wireless Router UPnP Inform

Input validation error: SecurityFocus, Netgear FM114P Wireless Firewall File
Disclosure Vulnerability

Microsoft wireless and mobile security resources: Wireless and Mobile Security
Technical Resources


New WLAN Attacks Identified: http://www.wifiplanet.com/news/article.php/2246081

Cellular network Failure Modes


This subcategory lists the most common attacks on a cellular system. Some of the
failure modes in this category are inspired by Ko's paper on attacks on cellular
systems. (Ko, 1996)

Phone cloning due to the Electronic Serial Number (ESN) and Mobile
Identification Number (MIN) being read by attackers.

Cell phone not equipped with a PIN.

Hijacking of the voice channel by increasing the power level of the cellular
phone.

No encryption of voice during transmission.

Denial of service occurring due to jamming of the RF channel using an RF
attack.

Related Bugs and Links

Handspring VisorPhone vulnerable to DoS via SMS image transfer:


http://www.kb.cert.org/vuls/id/222739

SMS denial of service on Siemens: SecurityFocus, Siemens Mobile Phone SMS
Denial of Service Vulnerability

Verizon Wireless bug allows SMS tapping:


http://www.threezee.com/modules.php?op=modload&name=Sections&file
=index&req=viewarticle&artid=6&page=1

Cellular companies fight fraud: http://www.decodesystems.com/mt/97dec/

5.2.4.6 Availability
System availability is the duration of time a system is available for use by its
intended users. The opposite of availability is denial of service (DoS), when
the system's services are either fully or partially unavailable.
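Availability is commonly quantified from mean time between failures (MTBF) and mean time to repair (MTTR) as the fraction of time the system is usable. The figures below are illustrative.

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: uptime / (uptime + downtime)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

a = availability(999, 1)
print(f"{a:.3%}")    # 99.900%
```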

Failure Modes

DoS attack due to firmware problems.

Failure initiating and maintaining the wireless link due to interference with
external devices.

Related Bugs and Links

Airborne Viruses:
http://www.networkmagazine.com/article/NMG20001130S0001

Palm OS - Phage virus: http://doc.advisor.com/doc/07194

New email virus bombards mobile phone users: http://news.com.com/21001023-241489.html?legacy=cnet

Palm HotSync Manager Remote Denial of Service Vulnerability:


http://www.securityfocus.com/bid/6673/info/

PalmOS TCP Scan Remote Denial of Service Vulnerability:


http://www.securityfocus.com/bid/3847

Mobile virus threat looms large:


http://news.bbc.co.uk/2/hi/technology/2690253.stm

5.2.5 Scalability
How well does the deployment of the product scale up or down?
Scalability can be sub-divided into two categories: horizontal scalability and
vertical scalability.

5.2.5.1 Horizontal scalability


Horizontal scalability is achieved by adding more resources to form a group of
resources, such as adding another computer to an existing one.

5.2.5.2 Vertical scalability


Vertical scalability is achieved by adding resources to an existing device, such
as more memory.

5.2.6 Performance

Mobile applications run on wireless links that have high latency and low
bandwidth. A data packet needs multiple hops before communicating with another
device or application. Special consideration is required while designing the system
to enhance the performance of such applications.
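One of the failure modes listed here, usage of HTTP/1.0 instead of HTTP/1.1, can be motivated with a rough latency model: over a high-RTT link, HTTP/1.0 pays a connection setup per object, while a persistent HTTP/1.1 connection pays it once. The RTT and object count below are assumptions, not measurements.

```python
RTT = 0.6      # seconds; assumed round-trip time of a cellular data link
OBJECTS = 10   # assumed page: HTML plus nine embedded images

# One RTT for TCP setup plus one RTT per request/response, per object.
http10 = OBJECTS * (RTT + RTT)
# One TCP setup, then one RTT per request/response on the same connection.
http11 = RTT + OBJECTS * RTT
print(http10, http11)    # roughly 12 s versus 6.6 s
assert http11 < http10
```

Pipelining and caching reduce this further; the point is that protocol choice alone can dominate perceived performance on such links.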

Failure Modes

Usage of HTTP/1.0 instead of HTTP/1.1. (Cheng, Lai, & Baker, 1999)



Problem in the Wireless Markup Language (WML) timer function.

Time taken to load a page on a microbrowser is very high.

Time taken to beam a file to another handheld is very high.

No caching mechanism is used to enhance the performance.

Performance problems arising after loading third-party software applications.

Firmware problems resulting in low performance.

Performance degrades when loading multiple pages.

Problems arising because of poor mobile client server architecture, i.e., fat
client vs. thin client approaches. (Yang, Nieh, Krishnappa, Mohla, &
Sajjadpour, 2003)

Usage pattern of the wireless network is not taken into consideration.

Performance-critical features or functions of an application not identified and
optimized.

Response time after a complex calculation is very high.

Throughput of the system is very low when many users log in and try to use
a feature.

Occurrence of buffer overflows and memory leaks after exposing the system to
load for an extended period of time. (Collard, 2002)

Weak point in the system detected after applying load to a specific portion of
the system considered not so robust. This is also known as hot spot testing.
(Collard, 2002)

System performance suffers when varying the load from low to high, following
some pattern of load fluctuation. (Collard, 2002)

Problem occurs when exposing the system to abrupt load. This is also
known as spike or bounce testing. Load balancing and resource reallocation
problems surface during such tests. (Collard, 2002)

Breakpoint of the system found to be less than expected or designed. The
breakpoint of a system is defined as the load or mix of loads at which the
system fails, and the manifestation of the failures. (Collard, 2002)

Reduced performance when using a network for data transfer in conjunction with
data transmission.

Delay in loading bitmaps and other graphics more than expected.

Wireless link stalls resulting in spurious TCP timeouts. (Chakravorty & Pratt,
2002)

Delay caused due to high RTT and slow startup phase for the system to
utilize the wireless link. (Chakravorty & Pratt, 2002)

Excessive queuing over the downlink results in a higher probability of timeouts
during the initial requests for connection. (Chakravorty & Pratt, 2002)
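The load shapes named above (varying load from low to high, and spike or bounce patterns) can be expressed as simple schedules for a load driver to replay. The shapes and numbers below are illustrative.

```python
def ramp(start, stop, steps):
    """Load varying steadily from low to high, in users per interval."""
    step = (stop - start) / (steps - 1)
    return [round(start + i * step) for i in range(steps)]

def spike(baseline, peak, length, at):
    """Abrupt load jump at one interval, then back to baseline."""
    load = [baseline] * length
    load[at] = peak
    return load

print(ramp(10, 100, 5))          # [10, 32, 55, 78, 100]
print(spike(20, 500, 6, at=3))   # [20, 20, 20, 500, 20, 20]
```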

Related Bugs and Links

Microsoft ActiveSync 3.x slows down the system.

Dell Halts Axim Shipments Over Software Problem:
http://stickyminds.com/news.asp?Function=NEWSDETAIL&ObjectType=NEWS&ObjectId=6549


5.2.7 Installability
How easily can it be installed onto its target platform(s)?
"Attributes of software that bear on the effort needed to install the software in a
specified environment." (ISO9126, 1991) Many of the failure modes described in
this section are inspired by Agruss's article on installation testing. (Agruss, 2000)

5.2.7.1 System Requirements


This ensures that all components have been included.

Failure Modes

Installation of the wrong version.

Failure to install the application locally by using synchronization software.

Failure in online installation. Many applications offer wireless online
installation using the Internet or a local area network.

Failure to install the application on the emulator or simulator, thereby
resulting in inadequate unit testing.

Unwanted results when user aborts the installation midway.

Problems in the input field while installing the software.

Interrogation of the device or the desktop settings not done correctly.

Interpretation of environment variables for installation not correct.

Status of installation not displayed anywhere on the screen.

Forced reboot in the middle of installation.



5.2.7.2 Configuration
This identifies the configuration, or the resources required, during installation.

Failure Modes

Installation under minimum condition fails.

Installation fails under default condition.

Installation fails if the default condition is changed.

Installation does not work if the directory where the program is to be
installed is changed.

Installation of upgrades not possible.

Failure to install the application at the desired location in spite of the
location being valid.

Failure to install the application on all the supported configurations.

Failure to install the software due to interference with another application.

Failure to customize the installation when the user has that option.

Configuration of the device and environment variables not set correctly.

Intermittent network connection during a wireless install.

Background applications interfering during the installation process.

Shortcut in the programs folder not created after the install.

Desktop PC's registry clobbered with user configuration data and not
cleaned up after the installation procedure.

DLLs required for the installation stored in the wrong directory.

Errors in the synchronization software during the installation process.

Keyboard shortcuts do not work during the installation process.


Related Bugs and Links

Services may not start after installation:


http://www.securityfocus.com/bid/7282/discussion/

Palm install could not handle a valid path:


http://www.faughnan.com/palm.html#Installation

"Cannot Find Pocket Streets" Error Message When You Try to Install
Pocket Streets: http://support.microsoft.com/default.aspx?scid=kb;en-us;319689

5.2.7.3 Uninstallation
This checks whether all parts of the product have been removed from the system
after uninstallation.

Failure Modes

Uninstallation failures resulting in residual components on the device.
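One way to detect residual components is to snapshot the device's file list before installation and compare it with the list after uninstallation. A minimal sketch (the file names here are invented for illustration):

```python
def residual_components(before, after):
    """Files present after uninstall that were not there before install."""
    return sorted(set(after) - set(before))

# Hypothetical file lists from before installation and after uninstallation.
before = ["contacts.pdb", "memo.pdb"]
after = ["contacts.pdb", "memo.pdb", "cells_prefs.pdb"]  # leftover preference file
print(residual_components(before, after))  # → ['cells_prefs.pdb']
```

The same comparison can be applied to registry keys or preference databases on platforms that expose them.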

5.2.7.4 Upgrades
This ensures that the product can be upgraded smoothly, maintaining current user
configuration.

Failure Modes

Installation of upgrades not possible.

Newer version cannot detect and remove older version of the software.


5.2.8 Compatibility
How well does it work with external components & configurations?

5.2.8.1 Application Compatibility


This includes how the product works in conjunction with other software products.

5.2.8.2 Operating System Compatibility


This includes how the product works with a particular operating system.

5.2.8.3 Hardware Compatibility


This includes how the product works with particular hardware components and
configurations.

5.2.8.4 Backward Compatibility


This checks if the products work with earlier versions of itself.

5.2.8.5 Resource Usage


This checks that the product doesn't unnecessarily hog memory, storage, or other
system resources.


5.2.9 Quality of service


A network provides high Quality of Service (QoS) if it delivers traffic consistently
across a network, provides high transmission rates, low error rates and supports
designated usage patterns. Quality of Service (QoS) refers to the capability of a
network to provide better service to selected network traffic over various
technologies, including Frame Relay, Asynchronous Transfer Mode (ATM),
Ethernet and 802.1 networks, SONET, and IP-routed networks that may use any or
all of these underlying technologies. The primary goal of QoS is to provide priority
including dedicated bandwidth, controlled jitter and latency (required by some real-time and interactive traffic), and improved loss characteristics. Also important is
making sure that providing priority for one or more flows does not make other
flows fail. QoS technologies provide the elemental building blocks that will be used
for future business applications in campus, WAN, and service provider networks.
(Cisco, at
http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/qos.htm#1020563, last
accessed July 23, 2006.)

Failure Modes

Inability to predict the capabilities of the wireless network. Because of
this, exchange of parameters between the application and the network is not
possible.

Loss of bandwidth or attenuation of the signal strength due to movement of
the mobile node.

Loss of communication during handover between cells. This is not
significant for voice applications, but even milliseconds of a broken link
can result in undesirable consequences for data applications.

Cost per unit of data not economical for customers.

High cost to establish a connection or to access a resource.

High bit error rate due to mobility of the node connected to the wireless
network.

Low quality of multimedia application due to limited bandwidth.

Limitations imposed by the portability requirement of the system. Battery
life is limited, and mobile computing technology requires significant power
for effective transmission.

Alteration of the QoS parameters in the wireless networks not taken into
account while designing the system.

Context management or the user environment not taken into consideration.
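Several of the QoS parameters above can be quantified from simple RTT samples. The sketch below computes mean latency and a crude jitter figure (the mean absolute difference between consecutive samples, a simplification of the smoothed interarrival jitter used in real QoS monitoring); the RTT values are invented for the example.

```python
def latency_stats(rtts_ms):
    """Mean latency and a simple jitter measure: the mean absolute
    difference between consecutive RTT samples."""
    mean = sum(rtts_ms) / len(rtts_ms)
    diffs = [abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])]
    jitter = sum(diffs) / len(diffs)
    return mean, jitter

# Hypothetical RTT samples (ms); the 900 ms spike models a cell handover.
samples = [220, 260, 900, 240, 250]
print(latency_stats(samples))  # → (374.0, 337.5)
```

A tester could gather such samples while moving between cells and check whether the application's behavior (timeouts, buffering, retries) degrades gracefully as jitter rises.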

5.3 Development Quality Criteria


Development quality criteria focus on the product under development and its
relationship to the development organization rather than to the user.

5.3.1 Supportability
The user may have suggestions for feature enhancement or bugs. Guidelines need
to be established for support to be provided for such user needs.

5.3.2 Testability
Testability of mobile applications can be subdivided into visibility, control, and
field failures.

5.3.2.1 Visibility
Visibility is our ability to observe the states, outputs, resource usage and other side
effects of the software under test. (Pettichord, 2002)

Field failures
These are the failures that escape the unit testing stage or any other kind of testing
done using the simulators or emulators. Errors and failures are encountered when
the application runs on the actual device or on the actual wireless network in the
production environment.

Failure Modes

Application fails to connect to the backend on the real network.

Application fails to load and function on the actual device.

5.3.2.2 Control
Control is our ability to apply inputs to the software under test.

5.3.3 Maintainability
Will it be easy to maintain?
Maintainability is defined as the ease with which changes can be made to a
software system. These changes may be necessary for the correction of faults,
adaptation of the system to meet a new requirement, addition of new functionality,
or removal of existing functionality. Maintainability can be either a static form of
testing, i.e. carried out by inspections and reviews, or a dynamic form, i.e.
measuring the effort required to execute maintenance activities.

(Source: http://www.testingstandards.co.uk/maintainability_guidelines.htm)

ISO 9126 divides maintainability into analyzability, changeability, stability and
adaptability.

5.3.3.1 Analyzability
Attributes of software that bear on the effort needed for diagnosis of deficiencies
or causes of failures, or for identification of parts to be modified. (ISO9126, 1991)

Failure Modes

Components like mobile middleware, programming language, etc., used in
the development of the application not very clear or poorly chosen.

Improper documentation of the design and architecture.

5.3.3.2 Changeability
Attributes of software that bear on the effort needed for modification, fault
removal or for environmental change. (ISO9126, 1991)

Failure Modes

Unable to change the network configuration or type with ease.



Unable to add more features to the application.

Failure to personalize the application.

5.3.3.3 Stability
Attributes of software that bear on the risk of unexpected effects of
modifications. (ISO9126, 1991)

Failure Modes

Addition of features makes the application difficult to use.

Application becomes unstable after changes in the language or tool used.

5.3.4 Portability
How easy will it be to change the environment?
The ease with which a system or component can be transferred from one hardware
or software environment to another. (IEEE, 1991)

ISO 9126 sub-categorizes Portability into Adaptability, Installability, Conformance
and Replaceability. Installability is discussed under operational quality criteria in
this thesis.


5.3.4.1 Adaptability
Attributes of software that bear on the opportunity for its adaptation to different
specified environments without applying other actions or means than those
provided for this purpose for the software considered. (ISO9126, 1991)

Failure Modes

Application not customizable to the user environment.

Application does not behave well on limited bandwidth.

Application does not adapt to network latency.
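For the latency item above, one adaptation worth probing is whether the application scales its timeouts to observed network conditions instead of using a fixed value. A sketch of such a policy (the multiplier and floor values are invented for illustration):

```python
def adaptive_timeout(recent_rtts_ms, multiplier=3, floor_ms=1000):
    """Timeout scaled to the worst recently observed RTT, never
    dropping below a floor suited to slow wireless links."""
    return max(floor_ms, multiplier * max(recent_rtts_ms))

print(adaptive_timeout([200, 250, 400]))  # → 1200 (3 * 400 ms)
print(adaptive_timeout([100, 120]))       # → 1000 (the floor applies)
```

An application with a hard-coded timeout tuned for a LAN would fail the first case on a high-latency cellular link; tests can inject RTT spikes and watch for premature aborts.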

5.3.4.2 Conformance
Attributes of software that make the software adhere to standards or conventions
relating to portability. (ISO9126, 1991)

Failure Modes

Non-conformance with the carrier's (cellular service provider's) standards
and guidelines.

5.3.4.3 Replaceability
Attributes of software that bear on the opportunity and effort of using it in the
place of specified other software in the environment of that software. (ISO9126,
1991)

5.3.5 Localizability
Can I adapt the application to serve a bigger market?
Internationalization (sometimes shortened to "I18N," meaning "I - eighteen letters - N") is the process of planning and implementing products and services so that they
can easily be adapted to specific local languages and cultures, a process called
localization. The internationalization process is sometimes called translation or
localization enablement. (Source:
http://whatis.techtarget.com/definition/0,,sid9_gci212303,00.html)

5.3.6 Scalability
Can I increase the capacity with ease?
The ease with which a system or component can be modified to fit the problem
area. (IEEE, 1991)

Failure Modes

Problems due to a large number of users.

5.4 Project Environment


Elements of the project environment define how the project is being run, rather
than what it contains. There are really two projects for a tester to consider: the
broad development project and the testing sub-project. Aspects of either can be a
source of constraints, problems, or opportunities for the tester.

This section was written with input from Bach (Bach, 2006a) and Bach
((Bach, 2003), p. 2), with his permission.

Project Environment includes resources, constraints, and other forces in the project
that enable us to test, while also keeping us from doing a perfect job. Make sure
that you make use of the resources you have available, while respecting your
constraints. ((Bach, 2003) p. 1) Creating and executing tests is the heart of the test
project. However, there are many factors in the project environment that are critical
to your decision about what particular tests to create. In each category, below,
consider how that factor may help or hinder your test design process. Try to exploit
the resources you have available while minimizing the impact of constraints.

5.4.1 Customers
Stakeholders of the project determine what kinds of tests they want to run.
These may include the end customer, the project manager, the test manager or the
business analyst. Usage of the failure mode catalog will depend on the expectations
of the clients. If it is possible to have a discussion with the customers, then do so.

Failure Modes

Are the customers clear about what they want?

What do the customers like/ dislike in terms of features or test cases?

Do the customers have different or conflicting likes and dislikes?



How frequently do the requirements change? Regression testing may need
to be increased if so.

Are the customers willing to test along with you?

5.4.2 Information
Mobile applications are used in a variety of horizontal and vertical industries. Some
of the vertical applications are stock trading, airline reservation, healthcare
solutions, and warehouse inventory solutions. Among the horizontal
applications, the most prominent are wireless e-mail and personal information
management, wireless office data solutions, and sales force automation. Testing
will depend on the context in which the application will be used.

Failure Modes

Do you have previous experience testing wireless mobile applications? If
not, more effort may need to be spent on this.

Are you comfortable with the functionality of the application? If not,
training sessions or more time may be required to become comfortable.

Is your understanding of the subject matter of the application sufficient? For
example, if the application is about airline reservations, gather enough
understanding of how the process works.

Are there user manuals/ guides/ documents available?

Does the product have a history of customer complaints?

Are there some sections with very little information?


5.4.3 Developer Relations


How well the tester gets along with the programmers gives an insight into how
much information the tester will be able to gather from the programmers about
risks.

Failure Modes

Does the programmer have any input on the possible risks, and is he
forthcoming about them?

Does the programmer talk about the difficulties they faced? If not, more
care needs to be taken when testing.

Does the programmer refute the validity of bugs, or insist that bugs are
features? In that case test cases will need to be designed such that refutation
becomes difficult.

Is communication with developers quick and available when needed? If not, a
communication channel could be established.

Does the programmer gloss over certain sections when explaining or talking
about them? These sections may need to be looked at more closely.

5.4.4 Team
Experience, skills and expertise in special test techniques of the people responsible
for carrying out testing should be considered while formulating a test strategy.


Failure Modes

How well or badly do the team members work together?

Are the reviews effective? Which sections have been more thoroughly
reviewed?

Is the development environment healthy? If not, more bugs may have been
introduced in the application.

Is training required for the test team?

Is there any team member with experience testing similar products? Input
may be taken from him/ her if so.

Is there any team member with a skill in testing in a particular way? Testing
may be assigned accordingly.

Has a team member undergone a personal problem recently? The parts (s)he
coded might need to be tested more thoroughly.

Has there been a long weekend or have long leaves been taken? Work done
around that time might need more careful testing.

5.4.5 Equipment and tools


This category lists the problems and links specific to the leading software
development environments for mobile application development.

Wireless JAVA: http://wireless.java.sun.com/j2me/index.html

Bugs in J2ME:
http://search.java.sun.com/search/java/index.jsp?qt=%2Bcategory%3Amidprofile+%2Bstate%3Aopen&nh=10&qp=&rf=1&since=&country=&language=&charset=&variant=&col=javabugs

Palm OS Application Development: http://www.palmos.com/dev/start/

Microsoft Mobile development:


http://www.microsoft.com/windowsmobile/information/devprograms/defaul
t.mspx

Problems in BREW:
http://www.qualcomm.com/brew/developer/resources/ds/faq/techfaq14.html

List of known bugs in OpenWave SDK:


http://developer.openwave.com/support/bug_form.html

5.4.6 Schedules
The schedules of the development and testing teams, and the sequence or
process in which the activities are carried out, affect the application and should be
taken into consideration when testing.

Failure Modes

Was the schedule of the development team too tight? If so, testing will need
to be more rigorous.

Is the schedule of the test team too tight? If so, test prioritization will need
to be carried out.

Are the completed work products, such as requirements or design
specifications, user documentation, and base code, ready on time? If not,
both development and testing may be ineffective, or may be delayed.


5.4.7 Test Items


The product to be tested has characteristics that affect the testing.

Failure Modes

Does the product have new features that haven't been tested before?

Does the application need to be tested for features that will allow for
compatibility with future releases of the same product?

5.4.8 Deliverables
The deliverables of the project include work products such as the test cases or test
reports. For example, test cases and test reports will need to be prepared as
required, following the standard guidelines. If someone else needs to run the test
cases again, they will need to be documented accordingly.


Chapter 6: Refining the Risk Catalog


6.1 Overview of Mobile Computing in Education
Mobile computing is very useful in a sector like education. Mobile-based solutions
generate benefits for educational institutions at many levels. K-12 educational
institutions utilize the power of mobile computing to enable students any-time
access to reading material and assignments. Institutions of higher education use
mobility solutions to improve administration, to enhance student learning,
and to improve the classroom experience. Handhelds are integrated into the curriculum
to deliver notes or assignments to students, to help the teacher monitor what the
student is doing, to collect partial or completed assignments, and to obtain feedback
from the students.

This chapter explains the process through which I refined the risk catalog. I tested
two mobile applications used in education using the initial risk list. This exercise
conducted on the mobile applications generated further test heuristics which I then
added to the catalog. The following sections of this chapter describe the sample
applications and refinement of the risk catalog.

6.1.1 Mobile Technology in Education


Many kinds of wireless networks, handheld devices, and mobile applications are
used in education.


The typical wireless networks in use are the Ethernet-based 802.11 family, which
provide high-bandwidth data transfer. Wireless network hardware designed
exclusively for classrooms utilizes infra-red wireless technology and works as a
beaming station. These beaming stations distribute reading material and
assignments to the students.

Among the handhelds, Palm handhelds (http://www.palm.com/us/education/, last
accessed August 24, 2006), Windows-powered Pocket PC
(http://www.microsoft.com/education/pocketpc.mspx, last accessed August 24,
2006), and Blackberry
(http://www.blackberry.com/products/handhelds/index.shtml, last accessed August
24, 2006) are the most common devices. Specialized handhelds such as the Texas
Instruments TI 83 and TI 89 (http://education.ti.com/educationportal/index.jsp)
focus exclusively on educational use.

The tablet PC is also an interesting addition to the mix of devices used in
institutions of higher education.

6.1.2 Common Patterns of Using Handheld Devices in Education


The technology of mobile applications used in education depends on the level of
the educational institution, on the student profiles, and on the technical needs of the
educators.

Handheld devices are commonly used in K-12 educational institutions, in
elementary and middle schools. Many custom applications allow educational
institutions to integrate Web and mobile technologies for better communication
between students, teachers and parents. Research conducted to analyze the impact
of handhelds in the classroom suggests significant gains in student learning in math
and science, as compared to traditional methods of teaching the same subjects
(Luchini, 2004).

Handheld devices simplify collaboration and out-of-school learning for students.
With the beaming capability of these devices, students can easily exchange notes
and merge their work into single documents that are easy to manage. For educators
in elementary and middle schools, administering students' progress and keeping
track of student assignments becomes easier due to the ready availability of data.
Administrators and teachers can send pertinent information to parents, and can
receive comments and requests from the parents more easily.

In institutions of higher education, mobile computing is used to enhance the
classroom and laboratory experience. 802.11-family-based wireless networks are used with
tablet PCs in universities. Handheld computers can be used to make lectures more
interactive or for data collection while carrying out experiments.

The following section describes the educational handheld applications that I tested
to refine the risk catalog.

6.2 Educational Mobile Applications Tested


Some educational handheld applications can be installed on the students'
handhelds, and some integrate the handheld applications with a Web-based tool
controlled by the instructors. Some examples of student applications in use are
Cells, Handysheets, and PicoMap from the University of Michigan's Center for Highly
Interactive Computing in Education, and CellSheet and LearningCheck from Texas
Instruments. Instructors can integrate these applications with students' devices

using applications like the Palm OS Artifact and Assessment Manager (PAAM) and
TI-NAVIGATOR from Texas Instruments.

Two applications, Cells and PAAM (GoKnow, 2004), were tested. Cells
version 1.1 was used for the experiment, and was available from the University of
Michigan's Center for Highly Interactive Computing in Education at
http://www.handhelds.hice-dev.org/beta.php, last accessed July 21, 2003. The
current version of Cells is 1.2, and is available from GoKnow Inc. at
http://www.goknow.com/Products/Cells/, last accessed November 28, 2006.
The following sections describe the applications and the tests conducted on them.

6.2.1 PAAM and Cells


Cells provides common spreadsheet functions on handhelds, like sums, maximums
and minimums of a data range, and averages. The following screenshots were obtained
from the documentation guides for v1.1 of Cells, at
http://www.handheld.hice-dev.org/beta/nb/Cells%20Quick%20Start.pdf, last accessed July 21, 2003. The
updated guide, for v1.2, is available at
http://www.goknow.com/GuideDownloads/Cells12.pdf, last accessed November
28, 2006.
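The range functions just described suggest simple oracle-based tests: compute the expected result independently and compare it with what Cells displays. A sketch of such oracles (these Python functions are stand-ins for manual comparison, not the actual Cells API):

```python
# Reference oracles for spreadsheet-style range functions; during testing,
# the same inputs would be entered into Cells and the results compared by hand.
def range_sum(values):
    return sum(values)

def range_avg(values):
    return sum(values) / len(values)

data = [1.5, 2.5, -4, 0]  # mixes fractions, a negative value, and zero
assert range_sum(data) == 0.0
assert range_avg(data) == 0.0
assert max(data) == 2.5 and min(data) == -4
```

Boundary cases worth adding to such tests: an empty range, a single cell, and values near the handheld's numeric precision limits.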


Figure 6-1 shows a screenshot of Cells.

Figure 6-1: Screenshot of Cells


Source: Cells 1.1 Quick Start Guide (http://www.handheld.hice-dev.org/beta/nb/Cells%20Quick%20Start.pdf, last accessed July 21, 2003)


Figure 6-2 depicts the summation of a data range using Cells.

Figure 6-2: Summation of a data range using Cells


Source: Cells 1.1 Quick Start Guide (http://www.handheld.hice-dev.org/beta/nb/Cells%20Quick%20Start.pdf, last accessed July 21, 2003)

PAAM is a Web-based application that manages multiple handhelds in a
classroom. It enables instructors to send and receive files from handhelds used by
students. Old files on a student's handheld can be archived, or the handheld
restored to a previous state, using this application. Individualized comments and
feedback can be sent to the student's handheld. PAAM operates in two modes:
groups and individuals. In the group mode, teachers can beam reading material or
assignments to all the handhelds that belong to a particular group. In the individual
mode, a specific student's handheld can be manipulated to add or delete files or
applications. PAAM provides role-based security and allows only the
administrators to create new groups or delete old ones.


Figure 6-3 shows a screenshot of PAAM on an instructor's desktop. The figure
shows a list of files beamed to a student in 6th grade. The instructor can move,
copy, archive or delete files on the student's handheld. Archived copies can be
viewed and feedback can be sent to the student.

Figure 6-3: PAAM on an instructor's device.


Source: Walkthrough: PAAM-Palm OS Artifact and Assessment Manager
(GoKnow, 2003b)

6.2.2 Installation and Testing


I tested PAAM and Cells with the help of the first version of the risk catalog
developed for mobile applications. To test the applications, I installed Cells on a
Palm handheld. PAAM was used to distribute assignments and read the file
contents on the handheld. In classrooms, PAAM communicates with the students'
handhelds using a beaming station that sends and receives data wirelessly. This was
not possible in my experiment. Therefore, I tested those features of the application
that did not require setting up a beaming station.

The device configuration of the handheld on which Cells was installed is as
follows:

Handheld Device: Palm Tungsten W

Operating System: PalmOS 4.2

Memory: 16 MB SDRAM (14.8 MB actual storage capacity)

Processor: Motorola Dragonball VZ 33

Screen Display: Transflective TFT 320 x 320 color display supports more
than 65,000 colors

Cradle: USB Cradle and Battery Charger Included

Battery Support: Rechargeable Lithium Battery

I tested PAAM using a trial account provided by GoKnow Inc. I installed PAAM
on a Pocket PC and ran it in the browser Internet Explorer, version 6.0. I faced
some installation issues and logged them in the risk catalog under the category
Installation Failure. In the initial stage of testing, my primary focus was on the
functional categories of failures like suitability, accuracy and calculation. These
were the features that I was able to test without setting up the wireless network,
and they were mostly confined to the standalone device. At the end of the initial phase of
testing, I started exploring other categories, like usability, compatibility and security,
for test ideas. At this stage, the risk catalog came in very handy as it provided me with
high-level risk heuristics to channel my thinking process and direct my testing. I
was able to focus on a particular class of problems and drill deeper into related
potential issues and risks. I tested PAAM and its communication with the Pocket
PC, and added more failures and potential problems to the risk catalog in the
appropriate categories. After exhausting the categories of the risk catalog, I started
looking for additional categories and ways in which I could restructure the risk catalog.
These are described in the subsequent sections.

6.3 Modifying the Risk Catalog


6.3.1 Preliminary Evaluation of the Risk Catalog
As Bach (2003) points out, there are many ways to analyze risks. At the end of any
risk analysis, we create a list that relates to a single kind of activity. The first risk
list that I created was not based on Bach's heuristic test strategy model
(Bach, 2006a). This was a prototype version. I felt that the prototype was a bit
muddled. It was not intuitive enough for someone not very experienced with risk
analysis techniques and with risk-based testing. The primary reason for the risk
categories being muddled was the lack of categorization by type of failure. For
instance, I had a product element attribute like WAP Gateway failures defined,
with real-life and potential failures listed under that category, but the very next
category was something like Calculation, which, although it falls under the product
element domain, is not very well related to the previous category. This
observation led to the restructuring of the risk catalog, which is described in the next
section.

Another problem was the lack of empirical data on the usage of risk catalogs for
mobile applications. I had created the catalog and populated the failure categories
with generic problems that could occur in mobile applications, but had not put the
catalog to the test. My advisor, Dr. Cem Kaner, suggested that I test a mobile
application and enrich the risk categories with the problems that I encountered while
testing the application. I have described the process that I followed in section 6.3.3.

6.3.2 Restructuring the Risk Categories in the Catalog


Testing is often conducted on a model of the application under test. A model is
essentially a simplified representation of the application that is easier to visualize
and work with than the original application. The American Heritage Dictionary of
the English Language (2000) defines a model as

A schematic description of a system, theory, or phenomenon that accounts for its
known or inferred properties and may be used for further study of its
characteristics.

Models can be represented either explicitly or implicitly, depending on their type.
For example, in operational modeling, a state transition diagram can be drawn
explicitly, either using a word processor or using more sophisticated tools like Rational
XDE or Microsoft Visio. On the other hand, models can be implicit, that is, not
expressed as a diagram or in a document. In risk-based testing, when a tester tries
to develop a risk profile of the application, the guiding factor is the way the
application could fail. A risk-based tester writes tests that expose risks in the
application by following risk heuristics. These risk heuristics help to draw a mental
picture or model of how a problem could occur in the application.

As mentioned in the previous section of this thesis, while working on developing
the risk catalog for mobile applications, I had categorized the risks based on quality
and component lists. This was a good first step as it allowed me to focus on one
kind of problem at a time. When the lists grew and I had more examples of failures
and potential problems, I realized that my risk catalog was difficult to read through
and apply. This problem stemmed from the lack of categorization heuristics. To
overcome this issue, I followed Bach's Heuristic Test Strategy Model (Bach, 2006a),
which provides a risk-based tester some guidance on how to categorize test ideas
into more actionable lists. Bach suggests three high-level categories to analyze and
test a product. These top-level categories are:

Product Elements

Quality Criteria

Project Environment

I subdivided the quality criteria further into operational and development quality
lists to assist more focused risk exploration and test design. Bach has more
guideword heuristics under each top-level category in his test model, which I used
to organize my failure lists. There are some categories, like synchronization, that I
thought deserved a higher level in the failure categorization scheme because of
mobile-application-specific technology.

6.3.3 Refining and Enhancing the Catalog with PAAM and Cells
As mentioned in the second paragraph of section 6.3.1, to gather empirical data on
the usage of the risk catalog, I tested the applications PAAM and Cells. Apart from
filtering and refining the failure categories identified for the catalog, my secondary
objective in carrying out this exercise was to enrich the risk catalog for mobile
applications with more examples of risks and failures. This activity assisted in

populating the sub-categories of the risk catalog with risk heuristics. Many risks
and failures provided in the catalog were inspired by the tests executed on these
applications. The risks identified were either failed tests or potential problems
imagined during testing. The process I followed to refine and enrich the risk
catalog was as follows:

1. In the first step, I allocated a 3 by 5 color card to each risk heuristic in the risk
catalog by writing the name of the heuristic on the card.
2. I then brainstormed possible failures and problems that could occur in the
applications under test (PAAM and Cells) and wrote them on 3 by 5 cards of a
different color, to differentiate them from the cards with risk heuristics.
3. Next, I checked for patterns of failures in the cards containing the failure modes
and placed them under the cards with the matching risk heuristics.
4. There were some cards with failure modes that I could not place under a
predefined failure category, so I created my own subcategories for them. Some
examples of these are WAP gateway failures and mobility management.
5. Finally, I made a pass through all the cards with risk heuristics and removed the
thinly populated categories.
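Steps 3 through 5 above can be modeled in a few lines: group the failure-mode cards under their heuristic cards, then prune thinly populated categories. A sketch (the category names and the pruning threshold here are illustrative, not the catalog's actual contents):

```python
from collections import defaultdict

def build_catalog(cards, min_cards=2):
    """Group (heuristic, failure_mode) pairs and drop categories with
    fewer than min_cards entries, mirroring the pruning in step 5."""
    catalog = defaultdict(list)
    for heuristic, failure in cards:
        catalog[heuristic].append(failure)
    return {h: fs for h, fs in catalog.items() if len(fs) >= min_cards}

cards = [
    ("Installability", "Install fails under minimum configuration"),
    ("Installability", "Forced reboot mid-install"),
    ("WAP gateway", "Gateway drops the session"),  # lone card: pruned
]
print(build_catalog(cards))
```

On this toy input, only the Installability category survives; a lone card would instead prompt the step-4 decision of whether it deserves its own subcategory.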


6.3.4 Finalizing the First Version of the Risk Catalog


The first version of the risk catalog was finalized after restructuring the prototype
version of the catalog and enhancing it on the basis of tests conducted on the
handheld applications. The overall structure and categorization mechanism for
failures in the updated version of the risk catalog is based on the Heuristic
Test Strategy Model (Bach, 2006a). I reworked the structure and categories in
August 2006 to keep them consistent with the changes made by Bach in version 4.8 of
the testing model.


Chapter 7: Using the Risk Catalog to Test an Enterprise Application

7.1 Business Context of the Enterprise Mobile Application
Mobile applications have made deep inroads into the enterprise sector. People whose job profile mandates a lot of travel and commuting use mobile applications extensively. These include sales and service personnel, consultants, and back-office workers (Lee, Schneider, & Schell, 2004). Lee et al. identify sales and service personnel as customer service representatives involved in selling items face-to-face in the field, and back-office workers as individuals involved with the administration of systems and warehouse personnel.

This chapter describes the tests that were conducted by two software testers at
Florida Institute of Technology. The testers tested an enterprise wireless
application, MidCast, using the risk catalog. The primary objective in testing this
application was to evaluate the effectiveness and efficiency of the risk catalog in
discovering failures in mobile applications.

7.1.1 Common Scenarios of Usage


The current crop of mobile applications allows information exchange between a central server in the organization and mobile handhelds in the field. Information exchanged using this client-server paradigm includes email, instant messages, news, schedules, tasks, assignments, flight information, geographic information, orders, bills, package information, delivery information, medical data, and other work-related information (Lee et al., 2004). In the financial and banking sectors the value of real-time wireless data is particularly significant because of the need to carry out transactions remotely. Similarly, trading firms have mobile solutions that enable their employees to carry out certain functions while on the move.

MidCast, the application tested using the risk catalog, is a stock-quote application
that streams real-time stock price and news to the handhelds through a central
server.

7.1.2 Enterprise Mobile Applications


Wireless applications in the enterprise can directly increase the productivity and efficiency of an organization's employees. With the help of mobile applications, employees have access to current and accurate data in the field. If a careful analysis is carried out to design the best mobile solution, real-time wireless access to the organization's data can effectively streamline the business process and result in better revenue for the enterprise. Security and privacy are a matter of concern for the enterprise, however, because of the risk of misplacing mobile devices and making customer and private data available to unwanted parties.

Mobilization obviates the need to re-enter the same data on multiple systems and increases the efficiency of the business process. For example, field personnel can enter data directly into the system using the mobile application; they do not need to enter the same data twice, once in the field and again in the office.


The next section describes an experiment conducted with two software testers at Florida Institute of Technology, using the risk catalog to test an enterprise wireless application.

7.2 Testing a Mobile Application Using the Catalog


MidCast, a financial application that obtains streaming, real-time data from the New York Stock Exchange (NYSE), was installed on a handheld and tested by two undergraduate students who had little previous software-testing experience. The following sections describe the application and the tests conducted on it.

7.2.1 MidCast
MidCast runs on any Java-enabled handheld device. The client application of MidCast running on the handheld communicates directly with the MidCast server using a wireless Wide Area Network (WAN) or Local Area Network (LAN) over the Internet. Real-time information on quotes, charts, graphs, and news is pushed to the handheld. More information on MidCast is available at http://www.hillcast.com/Website/products/midcast/index.asp, last accessed on March 9, 2004.


Figure 7.1 shows snapshots of MidCast as it launches.

Figure 7-1: MidCast launching splash screens


Source: (FastTrack Midcast User Manual for Palm OS handhelds)

The MidCast client communicates wirelessly with the MidCast server, which runs on J2EE technology, using a wireless WAN. The software is available for several handheld operating systems, including Motion, Palm OS, Windows Mobile, and Motorola's iDEN handhelds. Either a wireless LAN, like the 802.11 family, or a wireless WAN, like GPRS or 1xRTT, can be used.


Figure 7.2 shows a day-chart window of the MidCast client. In the first screenshot the user selects the MSFT stock and clicks on day chart. The loaded chart can be seen in the third screenshot.

Figure 7-2: Day chart window


Source: (FastTrack Midcast User Manual for Palm OS handhelds)

7.2.2 Installation and Test Environment


Testing was conducted on the following device configuration:

Handheld Device: Palm Tungsten W

Operating System: PalmOS 4.2

Memory: 16 MB SDRAM (14.8 MB actual storage capacity)

Processor: Motorola Dragonball VZ 33

Screen Display: Transflective TFT 320 x 320 color display supports more
than 65,000 colors

Cradle: USB Cradle and Battery Charger Included

Battery Support: Rechargeable Lithium Battery

Application under test:


MidCast 2.99 (Real-time wireless financial data on handheld devices)

Wireless Network:
AT&T Wireless GPRS network.

There was a problem using AT&T Wireless's GPRS network within the Olin Engineering building at Florida Institute of Technology, resulting from reduced signal strength inside the building. Testing was conducted at a place where the signal strength was approximately 75% of the peak value. Mobility of the device was minimal, localized within 10 meters.

Additional Environments:
Java HQ version 1.0 from Hillcast Technologies

Preferences settings for the Java environment were as follows:

Colors: Thousands

Drawing speed: Fast

App Memory: 64kb

Networking: Enabled

HTTP Proxy: N/A

Many issues were discovered during the testing. Some of the important ones are presented in detail.

7.3 Methodology Used to Test MidCast


7.3.1 Pilot Experiment Carried Out With Testers
A pilot study was conducted with two human subjects to determine the
effectiveness of the catalog in finding bugs in mobile applications.

Two individuals, undergraduate students in the Department of Computer Science at Florida Institute of Technology in Melbourne, Florida, were selected for the experiment. Dr. Cem Kaner, Professor of Software Engineering at Florida Institute of Technology, interviewed them to assess their testing skills. The students had some experience in testing desktop applications, but had no experience in testing wireless mobile applications.

The two software testers signed a consent form to be human subjects for a study that had been approved by the Human Subjects Institutional Research Board at Florida Institute of Technology.


The steps involved in conducting the experiment were:

Introduction to risk-based testing using the risk catalog: I explained the concept of risk-based testing to the students and provided them with a copy of the research paper for further reading.

Familiarization with the technologies in the mobile computing domain: I gave them a brief overview of the wireless technologies.

Introduction to the risk catalog for mobile applications: I explained the concepts of the risk catalog and explained the failure categories.

I then asked the students to design test cases for MidCast and execute them using the actual device and network. Details of the hardware, software, and wireless network used are described in section 7.2.2. They spent approximately 10 hours each testing MidCast and discovered around 35 issues of varying severity.

The two testers then filled out a survey describing their experience and provided feedback on using the risk catalog. The survey questions are listed in section 7.4.2.

7.4 Results of the Experiment


7.4.1 Issues Discovered in MidCast
The issues discovered in MidCast are presented in Appendix B of this thesis.


7.4.2 Feedback on the Risk Catalog


The feedback form had the following questions.

Did you have any prior experience testing mobile applications?

How did you use the risk catalog while testing the sample applications?

Which portion of the catalog was most useful while designing and executing your tests? You could say something like: "I found operational qualities criteria to be more useful than any other category of failure."

Which portion of the catalog was least useful?

What are some ways in which the risk catalog could be made more useful?

What additional information would you find useful regarding the form of
testing that you carried out based on bug taxonomies?

How much coverage of the risk catalog did you achieve while testing the
sample applications?

Was the distribution of time appropriate with respect to
o Familiarization with the testing technique
o Familiarization with the wireless technology
o Browsing through the failure mode catalog
o Time spent on testing the sample application

Do you have any other comments or criticisms?

Responses from the two testers are included in appendix B of this thesis.


Chapter 8: Testing Mobile Web Services
This chapter describes the third mobile application that I tested. I developed a mobile application using mobile Web services and the Microsoft .NET Compact Framework, deployed it on a Pocket PC running Windows Mobile 2003, and tested it using the risk catalog. The applications tested in chapters 6 and 7 were finished software products, and I had limited information on their internal components and source code. My primary objective in this experiment was to populate the development-quality categories of the risk catalog. By creating an application utilizing mobile Web services, I could imagine problems that can occur in the internal interfaces and design of a mobile application. Section 8.1 provides an overview of service-oriented architecture. Section 8.2 discusses mobile Web services, and section 8.3 describes the architecture, design, and functionality of the mobile application. Sections 8.4 and 8.5 provide information on the development tools and technologies used, and an experience report on enhancing the risk catalog by designing, building, and testing the mobile application.

8.1 Overview of Service Oriented Architecture


This section briefly describes the service-oriented architectural paradigm and
explains the role of Web services in enabling the service-oriented architecture.


8.1.1 Introduction to Service Oriented Architecture


The OASIS SOA Reference Model group defines SOA as follows: "Service Oriented Architecture is a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. It provides a uniform means to offer, discover, interact with and use capabilities to produce desired effects consistent with measurable preconditions and expectations." (http://www.oasis-open.org/committees/download.php/18486/pr2changes.pdf, last accessed July 9, 2006.)

In any distributed system, discrete software agents work together to perform some
tasks. In service-oriented architecture, a group of autonomous services co-operate
to carry out a task. There are some common elements in all the definitions and
representations of service-oriented architecture. Some core traits of service
orientation are:

Loosely coupled services: Loose coupling minimizes the impact of change in the business or application technology domains. As per the W3C, coupling is the dependency between interacting systems. This dependency is of two types: real and artificial. Real dependency is the state in which a system depends on another for a set of functionality or services. Artificial dependency is the set of factors that a system has to adhere to in order to consume the services or functionality provided by another system, such as language, platform, and API dependencies. In loosely coupled systems the artificial dependency is reduced to the minimum possible level. To achieve this, each service is defined with an explicit interface and a contract.

Service description: Each service is described with metadata so that it is discoverable, allowing integration with an application or system at design time as well as run time.

Network-based: Services are exposed to the network to allow easy consumption by different clients. This enhances reusability.

There are three typical roles in service-oriented systems: the service provider, the service requestor, and the service infrastructure. A service provider is the software module that implements a service and publishes its interface, along with the service contract, to the service infrastructure.

The service infrastructure is the broker that provides core facilities, like service information, contracts, and interfaces, to service requestors. This infrastructure is also responsible for maintaining standards across different services and implementing quality-of-service and security protocols.

A service requestor is the software module that consumes a service by invoking the service published by the service provider. It binds itself to the service infrastructure and adheres to the policies and protocols that the infrastructure requires. A service requestor completes a desired business task after consuming the appropriate services required to fulfill the business flow.
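The three roles described above can be sketched as a minimal program, assuming an in-process registry standing in for the service infrastructure. The "quote" contract and its behavior are invented for the example.

```java
import java.util.*;
import java.util.function.Function;

// Minimal sketch of the three SOA roles: a provider publishes a capability
// under a contract name to the service infrastructure (a registry), and a
// requestor discovers and invokes it by contract name only, never by
// implementation. All names here are illustrative.
public class SoaRoles {

    // The service infrastructure: brokers contracts between the two parties.
    static class Registry {
        private final Map<String, Function<String, String>> services = new HashMap<>();

        // Provider side: implement a service and publish its contract.
        void publish(String contract, Function<String, String> service) {
            services.put(contract, service);
        }

        // Requestor side: bound only to the contract, not the implementation.
        String invoke(String contract, String message) {
            Function<String, String> service = services.get(contract);
            if (service == null) {
                throw new NoSuchElementException("no provider for " + contract);
            }
            return service.apply(message);
        }
    }

    static String demo() {
        Registry infrastructure = new Registry();
        infrastructure.publish("quote", symbol -> symbol + ": 27.50");
        return infrastructure.invoke("quote", "MSFT");
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Because the requestor holds only the contract name, the provider's implementation can be swapped without touching the requestor, which is the artificial-dependency reduction described earlier.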

8.1.2 Motivation and Requirements for Service Oriented Architecture
Software architects believe that service orientation allows systems to evolve quickly with changing requirements and conditions. It is easier to maintain and enhance systems with service-oriented architecture, as this paradigm enables reusability at a higher level than the traditional object-oriented paradigm. In an object-oriented system, objects that encapsulate data and behavior are the typical unit of reuse. In service-oriented systems, reuse is promoted at the service level, which is a task or set of tasks that the system is required to perform.

There are several key components of a service-oriented framework. They are briefly explained below, on the basis of an analysis presented by Erl (2005).

Message: A message represents the data required to complete some or all parts of a unit of work. Messages are autonomous and carry enough information to be self-governing. A message holds the information required by an operation within a service to send a useful response back to the requestor.

Operation: An operation represents the logic required to complete a task by processing a message. It is thus a unit of processing logic that acts on the data provided by the message to carry out a task. An operation is largely defined by the messages it receives and sends.

Service: The W3C defines a service as an abstract resource that represents a capability of performing tasks that form a coherent functionality from the point of view of providers and requesters. In service-oriented architecture, services are autonomous yet not isolated from each other. They can evolve independently while still maintaining some level of commonality and standardization. A service is a group of related operations.

Business process: A business process is a set of rules that governs how a task is completed. In service-oriented architecture, a business process is accomplished when a set of operations within services collaborate to form the logic and process flow and to complete a unit of automation.
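The building blocks just listed can be modeled in a few lines of code. This is a sketch under invented assumptions: the order/billing services, their operations, and the field values are all illustrative, with a plain map standing in for a message.

```java
import java.util.*;
import java.util.function.UnaryOperator;

// Sketch of the building blocks above: a message is a self-contained bundle
// of data, an operation is logic that processes a message, a service groups
// related operations, and a business process chains operations across
// services. All names and values are illustrative.
public class ServiceModel {

    static class Service {
        private final Map<String, UnaryOperator<Map<String, String>>> operations = new HashMap<>();

        Service operation(String name, UnaryOperator<Map<String, String>> logic) {
            operations.put(name, logic);
            return this;
        }

        Map<String, String> call(String name, Map<String, String> message) {
            return operations.get(name).apply(message);
        }
    }

    // A tiny "business process": two services collaborate via messages.
    static Map<String, String> demo() {
        Service orders = new Service().operation("price", m -> {
            Map<String, String> out = new HashMap<>(m);
            out.put("price", "9.99");     // operation enriches the message
            return out;
        });
        Service billing = new Service().operation("invoice", m -> {
            Map<String, String> out = new HashMap<>(m);
            out.put("invoice", "INV-1");
            return out;
        });
        return billing.call("invoice", orders.call("price", Map.of("item", "book")));
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Each operation takes a message in and returns a message out, so a business process is simply a composition of operation calls across services.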


The following figure demonstrates the interactions between the operations of services A and B via message exchange. This diagram is inspired by the analysis presented by Erl (2005).


Figure 8-1: Service oriented architecture

Some problems that this model of application architecture tries to solve are
(Chande, 2005):

Simplify application integration while developing complex business systems.
Quick response to changes.
Communication across heterogeneous environments.
Reuse of components.
Programmatic access and reuse of legacy systems.


8.1.3 Web Services


With the evolution of Web technologies, the concept of the Semantic Web was introduced. (http://www.w3.org/2001/sw/, last accessed July 28, 2006.) In this vision, information and data available on the Web are organized and linked in such a way that they can be used not just for display, but for automation, integration, and reuse of data across various applications. Out of the need to distribute computing tasks across different nodes, to expedite processing, and to share resources on a network, communication-protocol models were created to give distributed applications a communication mechanism between two concurrent processes. Remote procedure call was the earliest mechanism developed to call a procedure running in a different process and address space. As distributed computing grew, there was a need for distributed components to support tiered client-server architectures. Common Object Request Broker Architecture (CORBA), Distributed Component Object Model (DCOM), Java Remote Method Invocation (RMI), and .NET Remoting were developed for different platforms. The combination of the need for distributed computing and the movement towards making services available on the network, not only to a human user but also to other applications, resulted in the realization of Web services.

Web services are defined as reusable software components that are published, located, and invoked over a network, and that encapsulate the business logic required to complete a task. Web services are applications that use standard transports, encodings, and protocols to allow systems to communicate over the network in a secure, transacted, and reliable manner. (http://msdn.microsoft.com/Webservices/default.aspx?pull=/library/en-us/dnWebsrv/html/wsmsplatform.asp#wsmsplat_topic2, last accessed July 28, 2006.)

Web services use XML for data representation, have mechanisms to describe the service, and provide features like:

Pervasive, open standards for distributed computing interface descriptions and document exchange via messages.

Independence from the underlying execution technology and application platforms.

Extensibility for enterprise qualities of service such as security, reliability, and transactions.

Support for composite applications such as business process flows, multi-channel access, and rapid integration. (IBM, 2005)

Web services are created on the basis of the following core technologies:

Extensible Markup Language (XML) is the markup language that structures data in a format independent of any programming language, software system, or development environment. XML has its own syntax, and documents adhering to the syntax are called well-formed XML documents.

Simple Object Access Protocol (SOAP) is a protocol for communication between the service requestor and provider. It is a network- and platform-neutral protocol based on XML. The message format for SOAP is divided into header and body sections that are enclosed in a top-level root element called the envelope. There can be zero or more header and body sections in a SOAP message. The SOAP header carries meta-information about the message, like routing, security, context, and correlation data, that enables the message to be self-governing and extensible. The SOAP body houses the actual content, or payload, of the message.

Web Services Description Language (WSDL) is an XML-based language that describes a Web service. WSDL provides the point of contact for the service provider, also known as the service endpoint, and allows the service provider to specify the operations, parameters, and data types of a Web service. A WSDL description can be divided into two parts: the abstract description defines the service interface, and the concrete description outlines the location and transport information.

Service registry is the repository that allows service providers to publish information about their services so that service requestors can discover it. It is sometimes realized through Universal Description, Discovery and Integration (UDDI).
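The envelope, header, and body layout described above can be seen in a minimal SOAP 1.1 request. The GetItems operation name and the target namespace below are hypothetical, chosen only to illustrate the structure:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <!-- optional meta-information: routing, security, correlation data -->
  </soap:Header>
  <soap:Body>
    <!-- the payload: here, a hypothetical request for book items -->
    <GetItems xmlns="http://example.org/bookservice"/>
  </soap:Body>
</soap:Envelope>
```

The header is optional and extensible, which is what later allows the WS-* specifications to layer security, reliability, and addressing metadata onto the same message shape.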

8.1.4 Web Service Extensions


Service-oriented architecture demands loose coupling, interoperability, discoverability, extensibility, and quality of service. A Web service in its native form does not address all these requirements. Web-service extensions, also known as WS-*, are the second-generation group of Web-service specifications that allow applications to comply with the requirements of an effective service-oriented implementation. Figure 8.2 depicts a Web-services framework that enables service requestors and providers to interact in a secure, coordinated, and consistent manner (Newcomer & Lomow, 2004).


Figure 8-2: Web services framework


Source: Newcomer & Lomow, 2004

WS-*, or the Web-service extended specifications, are extensions to the SOAP and
Web-services infrastructure, primarily in the areas of:

Metadata management: WS-Addressing, WS-MessageDelivery, WS-Policy, Web Services Policy Language, and WS-MetadataExchange for defining ways in which cooperating Web services can discover the features each supports and interoperate.

Security: WS-Security, XML Signature, XML Encryption, and other specifications for ensuring privacy, message integrity, authentication, and authorized access.

Reliability: WS-Reliability and WS-ReliableMessaging for ensuring that messages are delivered and processed.

Notification: WS-Eventing and WS-Notification for defining additional message exchange patterns such as publish/subscribe.

Transactions: the WS-Transactions family and WS-Composite Application Framework for coordinating the work of multiple independent Web services into a larger unit.

Orchestration: WS-BPEL and WS-CDL for combining multiple Web services to perform a larger unit of work. (Newcomer & Lomow, 2004)

The SOAP header block provides a way to enhance the messages sent and received by service providers and requestors, and to include additional metadata that helps in implementing WS-*, as required by the application. The extensions to implement for the framework are usually chosen on the basis of the context and specific requirements of the system.

8.1.5 Web Services and Service Oriented Architecture


After analyzing the anatomy and requirements of a service-oriented architecture, it becomes evident that Web services inherently satisfy many requirements of a successful service-oriented architecture.

Web services require explicit contracts for communication through the use of a description language. They provide loose coupling and hide the details of implementation from the service requestor. They are platform- and programming-language-neutral; hence the service provider and consumer can be implemented in different environments. All Web services adhere to standards and are published, located, and invoked on a network. These characteristics make Web services a very good fit for service-oriented architecture, and for these reasons they have become a dominant paradigm in its implementation.

8.2 Mobile Web Services


Service-oriented application development works well with mobile applications, as most of the processing logic resides on the network rather than on the resource-constrained mobile device. Applications can be rolled out targeting different form factors, using the same set of functionality and business logic located and invoked on the network. The convergence of mobile and Web-services technology enables new business and application technology models.

8.2.1 Common Paradigms of Using Web Services in Mobile Applications
There are primarily two ways in which Web services are used in mobile applications. In the first model, a mobile application consumes a Web service hosted on the network, thereby acting as a Web-service consumer. In the second model, a Web service is hosted on a computationally capable mobile device itself, offering services to other mobile devices or networks; here the mobile application running on the device acts as the service provider.


Consuming services hosted on a network from a mobile client is becoming increasingly popular and is a dominant paradigm of mobile application development. Mobile Web services are available and rolled out for domains like entertainment, consumer, and enterprise services. The end user can now use the same services on a personal computer and a mobile device. In the past there were interoperability problems arising from the usage of non-standard technologies like cHTML, WML, and custom APIs. With the adoption of Web standards like XML, XHTML, and SOAP over HTTP, Web services have become the primary technology for mobile development that is not tied to any particular platform, vendor, or mobile device. More time and resources can be spent on the design of the user interface to make it intuitive and easy to use, as the backend services take care of all the business-logic processing. Using the network-hosted service model, mobile applications have access to backend databases that enable powerful customer relationship management, inventory management, and remote diagnostic applications (Hirsch & Kemp, 2006).

In the second model, mobile devices hosting services can offer Web services to other service providers, such as providing information stored on the device, like contacts, calendar entries, or other personal information. Service providers can also leverage information provided by the mobile infrastructure, like obtaining the geographic location of a mobile device from a mobile-infrastructure Web service. This can be used to provide customized information, such as weather, mapping, or other location-specific information, in or near the device user's current geographic location (Hirsch & Kemp, 2006).


8.2.2 Challenges in Using Mobile Web Services


One of the biggest challenges in designing Web services with mobile application consumers in mind is the inherent latency of the wireless network, which calls for better-performing Web services. Since most of the processing logic in the network-hosted Web-service model is not deployed on the mobile device itself but accessed over the wireless network, the service must be architected so that it is well decomposed into transactions and does not carry a very large payload. In addition to latency, there are issues surrounding the diversity and availability of network coverage. Although the risks associated with the limited resources of the mobile device, like smaller screen real estate, low memory and storage, keyboard limitations, and low battery power, are mitigated to a large extent by consuming services on the network, significant forethought is nevertheless required while designing Web services to seamlessly support a wide variety of mobile devices along with regular personal computers that are not resource constrained (Chatterjee & Webber, 2003).

Developing effective and usable mobile Web services also requires a services infrastructure that addresses issues related to identity management, security, machine-readable description of Web services, and metadata management (Hirsch & Kemp, 2006). Effort is ongoing between software companies and mobile-software stakeholders to define consistent standards and models and to enable service-oriented architecture across different mobile middleware and platforms.


8.3 Architecture of a Mobile Application Utilizing Web Services

8.3.1 A Sample .NET Mobile Application
I developed a mobile application that utilizes Web services, based on a mobile book catalog available from the Microsoft Developer Network (MSDN) for mobile development. (Source: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnppcgen/html/mobile_book_catalog_vsnet2003.asp, last accessed August 6, 2006.)

The application's user interface is built with Microsoft .NET Windows Forms controls for smart devices, and it utilizes a Web service created to fetch book titles and prices from a database running on Microsoft SQL Server 2000. The following diagram (Figure 8-3) illustrates the tiered architecture of the mobile application consuming a Web service. In the figure, the mobile application (the mobile book catalog) consumes the book Web service hosted on the Web server, which returns a list of titles and prices stored on the database server. Communication between the service provider (the book Web service) and the consumer (the mobile book catalog) occurs over HTTP and SOAP. The book Web service returns a dataset (a Microsoft .NET data type) that is used to populate the view on the mobile device.

Failure modes arising from design and architectural errors can be imagined and visualized more easily if the mobile-application tester is aware of the underlying design and internal components of the mobile system. I chose to develop and test this application to get better insight into these kinds of problems and failure modes.

Figure 8-3: Mobile book catalog application layers

8.3.2 Mobile Book Catalog Client


Figure 8.4 depicts the user interface of the mobile application, which consists of a Windows Form with mobile controls. It has a listbox control that displays the titles and prices of the books after the user clicks the button labeled Get Items. Apart from the listbox and the button, the user interface has a label beside the button that provides the heading for the application, with its text property set to Mobile Book Catalog. Just below the listbox control is a textbox, with the multiline property set to true, that tells the user about the action needed to download the book catalog.


Figure 8-4: User interface of the mobile book catalog

The mobile client uses a Web service called BookWebService that runs the business component necessary to get the required information stored in the database. When the user clicks the Get Items button, the method attached to the button is invoked. In this method, data from the Web service is extracted and assigned to a temporary dataset. If the DataSet downloads successfully from the Web service, the temporary dataset TempDS is assigned to the book catalog dataset, BookCatalogDS. The following code fragment shows the method fired after the user clicks the Get Items button.

private void button1_Click(object sender, EventArgs e)
{
    // code to get the dataset from the database using a web service
    // code to data bind the mobile control to the dataset received
}

8.3.3 Consuming Book Catalog Web Service


BookWebService is an ASP.NET Web service that, when invoked, is responsible for extracting data from the database, loading it into an ADO.NET dataset (a .NET data type), and returning the result to the service consumer as XML over HTTP. It has just one operation defined, called GetItems, as shown in the code fragment below.

[WebMethod]
public DataSet GetItems()
{
    return BookCatalog;
}
Whenever a client wants to invoke a Web service, it has to create a proxy object on
the client side to call the methods on the remote object. In the case of Mobile Book
Catalog, when the user clicks the Get Items button on the user interface of the
mobile application, the Web service operation that carries out the task is invoked
over the network through the proxy created on the client side. The Web service
proxy was generated by adding a Web reference to the mobile application client
project, which parses the Web Services Description Language (WSDL) of the
service. A local copy of the proxy class and dataset is created on the device after
parsing the XML response of the service, for use in the client application. The
dataset returned from the service is then used to populate the list-view control on
the mobile device's Windows Form. The dataset contains a DataTable with multiple
rows of book information. The list-view control is populated by iterating through
the DataRows and taking each DataItem from a particular DataRow to insert into the
list view.
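That iteration can be sketched as follows; the table name ("titles", matching the table read by GetDataSet() below), the column names ("title", "price"), and the control name (listView1) are assumptions for illustration.

```csharp
// Sketch: populate the list view row by row from the returned DataSet.
// Requires: using System.Data; using System.Windows.Forms;
DataTable books = BookCatalogDS.Tables["titles"];
foreach (DataRow row in books.Rows)
{
    // One ListViewItem per book: title in the first column,
    // price in the second.
    ListViewItem item = new ListViewItem(row["title"].ToString());
    item.SubItems.Add(row["price"].ToString());
    listView1.Items.Add(item);
}
```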

The GetItems() Web method returns a dataset that is populated from the database
using the method GetDataSet(). The GetDataSet() method connects to a SQL Server
2000 database and executes a SQL query to fetch the data rows. The following code
fragment depicts how the dataset to be returned by the Web service is populated.

private DataSet GetDataSet()
{
    // Open connection to the database
    // Get dataset from the database
    // Read the required table named titles
    // Return the dataset
}
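A sketch of what GetDataSet() might look like using ADO.NET's SqlDataAdapter; the connection string and SQL query are placeholders, not the thesis's actual values (Appendix C contains the real listing).

```csharp
// Sketch only: connection string and query are placeholder assumptions.
// Requires: using System.Data; using System.Data.SqlClient;
private DataSet GetDataSet()
{
    string connStr =
        "server=localhost;database=pubs;Integrated Security=SSPI";
    using (SqlConnection conn = new SqlConnection(connStr))
    {
        // SqlDataAdapter.Fill opens and closes the connection as
        // needed, and loads the query results into the named table.
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT title, price FROM titles", conn);
        DataSet ds = new DataSet();
        adapter.Fill(ds, "titles");
        return ds;
    }
}
```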

This method is called in the constructor of the Web service class, as shown below,
to populate the BookCatalog dataset declared in the BookWebService class.
Appendix C contains the complete source code listing for the Web service.

public BookWebService()
{
    InitializeComponent();
    BookCatalog = GetDataSet();
}

Figure 8-5 shows the state of the mobile application after the user clicks the
button to get the list of titles and prices.

Figure 8-5: Populated list-view with book information


8.4 Building and Testing the Smartphone Application Using the Risk Catalog
8.4.1 Development Technologies and Environment
I used Visual Studio .NET 2003 as the integrated development environment for
developing the Web service and mobile application client code in C#. Visual
Studio .NET provided the extensions to develop applications for smart devices. I
utilized the Smart Device Programmability (SDP) features of Visual Studio .NET,
which allow writing mobile applications that take advantage of the Microsoft .NET
Compact Framework.

I installed the Software Development Kit (SDK) for Windows Mobile 2003 Pocket
PC and Smartphone to write Mobile Book Catalog. This SDK provided the required
functionality to write managed code in C# or VB.NET for smart devices, supporting
either connected or disconnected modes of communication. The SDK also provided
support for seamlessly calling the APIs of the .NET Compact Framework, which is a
trimmed-down version of the .NET Framework designed for mobile devices. The
SDK for mobile devices also contained an emulator that uses a virtual machine to
run the Pocket PC 2003 and Smartphone software independent of the operating
system on the development machine. I utilized the emulator images to deploy and
test Mobile Book Catalog. Apart from Visual Studio .NET and the SDK for mobile
devices, I used Microsoft ActiveSync 3.8.0 as the synchronization software between
the Windows Mobile based smartphone and the Windows desktop development
workstation.


The following list outlines the configuration and development tools installed on
the development workstation for Mobile Book Catalog.

Development operating system: Windows 2000

Development IDE: Visual Studio .NET 2003

Microsoft.NET Framework 1.1

Microsoft.NET Compact Framework 1.0

Microsoft SQL Server 2000

Development synchronization software: ActiveSync 3.8

Development Emulator: Pocket PC 2003 SDK Emulator

Pocket PC device: Siemens Smartphone

8.4.2 Issues Encountered during Development and Debugging


Most of the problems that I encountered while developing the application involved
the configuration of the development environment and the debuggers. The emulator
used to test the builds was generally reliable in deploying the application to the
virtual machine, but it produced deployment failures every now and then without
giving any specific error messages or warnings. I faced some minor issues setting
up the development environment with the correct versions of the SDKs and
components. However, after the initial configuration and setup there were not many
inexplicable problems in the development tools and emulators. The next step was to
ensure unit testing of the application under construction, and I encountered some
testability issues during the development of the Web service. On the local
development machine the .NET Web service provided a user interface to test the
response returned from the database, but when the service was deployed on the
server, it required modifications in the Web configuration file to allow requests
from a remote machine. Actual and potential problems encountered while developing
and testing the mobile application are listed under the testability and
supportability categories in the risk catalog.
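A typical ASP.NET 1.1 change of this kind (the thesis does not show the actual file, so this fragment is an illustrative assumption) is re-enabling the HttpGet and HttpPost protocols in web.config, which are disabled for remote callers by default:

```xml
<!-- web.config fragment: typical ASP.NET 1.1 example, not the thesis's actual file -->
<configuration>
  <system.web>
    <webServices>
      <protocols>
        <add name="HttpGet"/>
        <add name="HttpPost"/>
      </protocols>
    </webServices>
  </system.web>
</configuration>
```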

8.4.3 Deploying the Application on a Pocket PC based Smartphone


I used Microsoft ActiveSync 3.8 to deploy the application on a Windows Mobile
based Pocket PC smartphone. I conducted initial testing of the application on the
emulator bundled with the development environment. With this setup it was not
possible to test the wireless communication between the mobile device and the
Web server housing the Web service for the book catalog. Testing the offline
functionality was also difficult using the emulator, despite the emulator's ability
to save the state of the application after communicating with the Web server to
fetch the dataset. To test these scenarios that were not possible on the emulator,
I deployed the mobile application on a real Pocket PC and tested it on an 802.11
wireless network. I came across some risks and failures that are listed under the
field failures category of the risk catalog in chapter 5.

8.5 Enhancing the Catalog with Risks in Mobile Web Services

8.5.1 Enhancing the Risk Catalog with Risks in Mobile Web Services
The risk catalog developed in chapter 5 was enhanced further with faults and issues
encountered in mobile Web services. As I was working on designing, developing,
and testing the mobile application, I used mind-mapping software to organize my
thoughts and list the issues. To enhance the risk catalog with these kinds of
risks, I searched bug databases, online reports of failures in Web services, and
trade press articles, and inserted failures and possible problems into the risk
catalog after finishing work on Mobile Book Catalog. A detailed report on the
process that I followed to test Mobile Book Catalog is presented in the next
section.

8.5.2 Testing the Smartphone Application Using the Risk Catalog


After developing the application I began black-box testing it using the risk
catalog. First, I thought of risks without using the catalog and jotted down all
the risks I could think of on a piece of paper. This was a very random list of
risks without any categorization. I was able to come up with an initial list of
potential problems and failures, but it lacked coverage across product dimensions
and quality categories. Most of the potential problems fell in the functionality
and usability categories, while categories like security, performance, and
compatibility were neglected. To resolve this issue, I decided to use mind maps
to carry out further risk analysis. I used MindManager from Mindjet
(www.mindjet.com, last accessed October 22, 2006).

Mind maps are diagrammatic representations of potentially confusing concepts that
assist in brainstorming and idea generation. In a mind map, a central idea leads
to sub-ideas, which may lead to a hierarchy of further sub-ideas. It supports
visual thinking about the problem at hand. The initial diagram that I created is
shown in figure 8-6. This mind map gave me a bird's-eye view of the entire failure
mode catalog and helped me focus on specific risk areas within the catalog.
After defining the top four categories I created the associated branches for each
risk heuristic. This allowed me to widen the coverage of failure categories that I
had not thought of when I was designing test cases without the risk catalog. I
expanded the risk analysis of Mobile Book Catalog by imagining failures that can
happen in quality risk categories like security, scalability, and performance, as
well as in product dimensions like operations and data. Since I was the developer
of the mobile application, during the initial stages of this experiment I was
focusing more on design and the ways in which the product would work. This limited
my thought process and risk analysis in imagining the ways the application could
fail when an end user is using it. With the help of the risk catalog, I could now
think of problems like: what will happen if I click the button to get the list of
books while other applications are running on the handheld simultaneously?

Figure 8-6: Overview of the risk catalog using mindmap

Next I created further associative branches for each subcategory of the catalog
and hyperlinked them to the original diagram. This level was granular enough to
start thinking about specific failures and risks for each failure category. Using
the combination of the source code and the application running on the device, I
then populated these categories with specific potential risks and failures. At
this stage I was able to model user behavior and was more empirical in my testing.
I eventually used these risks to update the risk catalog in chapter 5.

To give an example, the diagram for Product Elements, detailing the categories and
the risks identified against each category, is shown below:

Figure 8-7: Heuristic risk analysis for product elements


Chapter 9: Conclusions and Closing Remarks
9.1 Usage and Utility of the Risk Catalog
The risk catalog proved quite useful with the sample applications tested as
described in chapters 6, 7, and 8. Participants in the pilot study stated that the
risk catalog was useful in removing blind spots while they were testing the sample
application. While developing the sample application described in chapter 8, I
found the listings in the development quality attributes very helpful. There are
instances where the risk catalog becomes very specific to the applications under
test and could confuse a reader who does not know the application. This is common
to risk lists, and to some extent the details of a risk list are context
dependent. There are, however, enough examples of generic failures and risks in
the risk catalog that we should be looking for in any mobile application under
test. Chapter 2 describes the risk analysis methods that are the basis of the risk
catalog. That chapter provides additional information on forward and backward risk
analysis that can be performed starting with the failure modes described in the
risk catalog.

9.2 Next Steps with the Risk Catalog


The consistent theme that emerged from communicating and using the risk catalog
was the difficulty of navigating through the risk categories. Readers of the risk
catalog suggested that a bird's-eye view of the catalog, with navigable links to
drill down into the failure categories, would be helpful. I plan to build a risk
modeling tool on the basis of this risk catalog. The primary functionality this
tool will provide is better navigation through the risk categories. Another
helpful feature will be to allow users of the catalog to enter their own risks and
failure modes under the categories. This will help populate the failure categories
even further and help in diversifying or focusing the catalog for users based on
the application under test and its context.


Appendix A: Overview of Mobile Computing Technology

The software/hardware architecture of a typical mobile application can best be
visualized as a layered framework for strategizing the testing process. There are
typically four levels of abstraction that can be envisioned. Many different types
of devices, wireless networks, and content delivery technologies are in use. Some
of the wireless networks and client-side applications are briefly explained in
this appendix. The glossary lists some of the content delivery technologies and
associated acronyms.

Mobile applications.

Mobile content delivery and middleware.

Client-side devices.

Wireless networking infrastructure.

Mobile Applications
Many mobile solutions are already in use in vertical industries like healthcare,
education, and delivery services. Some of the most commonly encountered mobile
applications are mobile e-mail and personal information management (PIM). Other
applications in use and emerging are: mobile financial applications like banking
and stock trading; mobile advertising, which is location specific; mobile
entertainment services and games; mobile office products, like Pocket Word/Excel;
mobile education software; enterprise wireless data applications; and mobile
healthcare solutions (Varshney, 2002).

Mobile Content Delivery and Middleware

Numerous technologies are available for content delivery and the supporting
middleware. Some of these are listed in the glossary of mobile computing terms
and acronyms.

Microsoft Compact Framework

One of the sample applications that we chose to develop and test is based on the
Microsoft .NET Compact Framework. The following section briefly describes the
technologies and development tools used while developing and testing the sample
application, based on Windows Mobile 2003, that consumes a Web service. The
Microsoft .NET Compact Framework is a slimmed-down version of the Microsoft .NET
Framework designed and optimized for mobile devices. Mobile devices usually have
a small form factor with limited storage space, memory, and processing power,
thereby requiring a runtime with a smaller footprint and limited application
programming interfaces.

The .NET Compact Framework provides the following key functionality:

Runs programs that are independent of hardware and operating systems.

Supports common network protocols, and connects seamlessly with XML Web services.

Provides developers with a model for targeting their applications and components
to either a wide range or a specific category of devices.

Provides benefits of design and optimization for limited system resources.

Obtains optimal performance by generating native code using just-in-time (JIT)
compilation.

(http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dv_evtuv/html/etconNETCompactFramework.asp,
last accessed August 15, 2006)

The following figure demonstrates the architecture of the Compact Framework and
the way it interacts with the native code of the machine. More information on the
Microsoft .NET Compact Framework is available at the MSDN URL cited above.

Figure 9-1: Schematic of the Microsoft .NET Compact Framework (Salmre, 2005)


Java ME
Sun Microsystems offers a highly optimized runtime environment targeted at
handheld devices with limited resources. Java ME provides core APIs, classes,
emulators, and technologies for wireless programming under its wireless toolkit.
It also follows a community process to define and allow implementers to create new
combinations of runtimes optimized for different devices. Java ME is divided into
configurations, profiles, and optional packages. More information is available at
the Sun Microsystems Website: http://java.sun.com/javame/index.jsp

WAP / WML
The Wireless Markup Language (WML) and the Wireless Application Protocol (WAP)
are closely tied. They are used to display information on narrowband wireless
clients like cell phones and pagers. WML is used for creating Web pages for
handheld devices; WAP is the application communication protocol used to access
services and information. A consortium consisting of Unwired Planet, Motorola,
Ericsson, and Nokia was responsible for the creation of WAP and WML. More
information can be obtained at: http://www.wapforum.org/.


Figure 9-2: WAP protocol stack. (WAPforum, 2000)

HDML
HDML stands for Handheld Device Markup Language. HDML and its accompanying
protocol, HDTP, were created by Unwired Planet in 1997. Unwired Planet also
introduced a microbrowser, UP.Browser, that runs on cell phones and similar
devices.

cHTML
Compact HTML is a subset of HTML 2.0, 3.2, and 4.0. The goal of the language is
quite similar to that of WML. The cHTML standard exists only as a W3C note rather
than a well-established standard. Compact HTML strips normal HTML down to the
bare bones, making it suitable for narrowband and constrained devices. It uses
normal HTTP for data transfer, making it easier to serve content to the handheld
devices that support it.

VoiceXML
VoiceXML is an application of XML, so it possesses the same structure,
restrictions, and benefits as XML. It is designed for creating audio dialogues
with human beings. It allows for a combination of synthesized speech and digitized
audio (output from the server side), recognition of spoken and DTMF key input, and
recording of spoken input. VoiceXML minimizes client/server interactions by
specifying multiple interactions per document. Its major goal is to bring the
advantages of Web-based development and content delivery to interactive voice
response applications.

Simplified HTML
This is a simplified version of HTML. PQA (Palm Query Application) uses a
subset of HTML and is one of the main browsing languages in the Palm handheld
market.

XHTML
XHTML is the W3C-recommended replacement for HTML as the Web browser language.
XHTML 1.0 was a reformulation of HTML 4.01 in XML. XHTML Basic is defined as a
proper subset of XHTML for mobile application presentation, including Web phones.
More information is available at: http://www.w3.org/TR/xhtml-basic/

i-mode
i-mode is a wireless data service developed by NTT DoCoMo. It is packet based, as
opposed to circuit-switched voice systems. cHTML is used to write i-mode pages.
More information is available at: http://www.ai.mit.edu/people/hqm/imode/

SyncML
SyncML is a mobile data synchronization protocol that synchronizes data between a
network or desktop and a mobile device. It supports a variety of transport
protocols and applications, thereby enhancing interoperability.

Client-Side Devices
The first handheld computing device that acquired a significant market share was
the Apple Newton. Since then, the handheld space has evolved to the point where
there are literally thousands of different combinations of hardware devices,
software capabilities, and wireless networking features. The following are some of
the devices in use in the commercial, industrial, and personal sectors.


Smart phones
Smart phones are cellular phones with the display hardware and software for
wireless Internet connectivity. They have a microbrowser and an amount of memory
that is continuously being expanded. There are many different names for such
phones depending on the technology used for Internet services and information: in
Japan it is known as an i-mode phone, in Europe it is called a WAP phone, and in
many places it is known as a Web phone (Beaulieu, 2002).

PDA
A PDA is a miniature computer with a special OS, storage, a keyboard or soft input
panel, and a display. In general, PDAs have much more computing power than a smart
phone. They go by different names, such as handheld, palmtop, and communicator.
There are two different kinds of handheld: the industrial and the consumer
handheld (Beaulieu, 2002); the main difference is in the packaging. The PDAs used
in the consumer market are mostly based on Palm OS, Microsoft Pocket PC OS, and
Blackberry OS. Some manufacturers of industrial handhelds are Symbol, Intermec,
Itronix, and Husky, among others. Industrial handhelds mostly connect to a
wireless LAN rather than a WAN.

Pagers
A pager is a handheld wireless device that uses a paging network for data
communication (Beaulieu, 2002). Pagers can be one-way, two-way, or uplink. An
uplink pager is used to transmit telemetry or location information, normally for
asset management. Pagers are more cost effective and time sensitive, and have
longer battery life, than a cell phone.

Appliances
Internet appliances (iAppliances) is the generic name for the class of devices
with a specialized purpose and limited Internet or wireless data connectivity.
Some examples of such devices are e-book readers, e-mail stations, and Internet
radios.

The Hybrids
A series of handheld-compatible phones has been rolled out. They could be called
communication devices that can compute, or computing devices that can communicate.
They can run high-level applications and still work as cellular phones. Java
phones are the early devices in this category, delivering voice as well as data.
The trend is toward a Swiss-army-knife kind of device that combines all the
benefits of the above-mentioned devices into a single ideal handheld device
(Beaulieu, 2002).

Wireless Networking Infrastructure


Wireless networks can be broadly divided into the Wide Area Network (WAN), the
Local Area Network (LAN), and the Personal Area Network (PAN), based on coverage.


Wide Area Network (WAN)

Cellular Networks: This is the network with the maximum coverage area. It is a
licensed public wireless network used by Web cell phones and by private radio
frequency digital modems in handhelds. WAN cellular towers come in three power
configurations: macro cell, micro cell, and pico cell. There are two network
architectures for the communicating devices: circuit-switched and packet-switched.

A circuit-switched network builds up a circuit for a call and establishes a
dedicated connection of circuits between points. Examples of circuit-switched
devices are telephones, cellular phones, Web phones, and dial-up modems.

In a packet-switched network, IP-addressed data packets are routed between points
on demand. Packet-switched networks exchange variable amounts of data or voice
packets. Data can be transferred almost immediately, as the network is always on.
WANs can be further subdivided into voice-oriented and data-oriented networks.
Some of the most widely used technologies for data transmission are GSM/GPRS,
1xRTT CDMA, and EDGE. GPRS and EDGE overlay a packet-based air interface on the
existing circuit-switched voice network. A new generation of wireless wide area
technology known as 3G is deployed in Japan and Europe. It offers data speeds
starting from 2 Mbps in the fixed wireless environment, 384 Kbps at low mobility,
and 128 Kbps while moving in a car. 3G systems operate in the 2 GHz frequency band
and are intended to provide a wide range of services including telephony, paging,
messaging, Internet, and broadband data.


Figure 9-3: Cellular network


Paging network: Paging networks are of two types: one-way paging and two-way
paging. They are the earliest form of networks used to send messages to mobile
workers.

Local Area Network (LAN)

IEEE 802.11 family

These are wireless local area networks operating in unlicensed spectrum. In 1997,
the IEEE adopted IEEE Std. 802.11-1997, the first wireless LAN (WLAN) standard.
This standard defines the media access control (MAC) and physical (PHY) layers for
a LAN with wireless connectivity; it addresses local area networking where the
connected devices communicate over the air. 802.11b, operating in the 2.4 GHz
spectrum, has a physical layer data rate of 11 Mbps. 802.11g is a newer IEEE
standard for wireless LANs that can support a 54 Mbps raw data rate. It is
backward compatible with 802.11b because it works in the same 2.4 GHz band, and
OFDM (Orthogonal Frequency Division Multiplexing) enables the high data transfer
rate. Another standard of interest is 802.11a. More information is available at:
http://www.wi-fi.org/OpenSection/index.asp?TID=1.

HiperLAN
In European countries the set of wireless network communication standards is known
as HiperLAN. There are two specifications adopted by ETSI (the European
Telecommunications Standards Institute): HiperLAN/1 and HiperLAN/2.

Personal Area Network

Bluetooth: Bluetooth is a wireless technology used in the personal connectivity
market, linking mobile computers, mobile phones, and portable handheld devices,
and providing connectivity to the Internet. It is a low-power personal wireless
voice and data network with a range of 10 meters. A Bluetooth network, called a
piconet, can connect eight Bluetooth devices. It works in the 2.4 GHz frequency
band and does not suffer from interference by obstacles like a wall. It supports
both point-to-point wireless connections without cables between mobile phones and
personal computers, as well as point-to-multipoint connections to enable ad hoc
local wireless networks (source: http://bluetooth.com/tech/works.asp).

Infrared: Infrared was standardized in 1994 with the publication of the Infrared
Data Association's standard. Infrared devices use line of sight, exchanging data
by lining up their infrared lenses, and have a typical range of 2 meters. They are
mainly used to manually exchange information using a point-to-point connection.


Glossary and Acronyms


Some of the commonly used terms and acronyms are described in the glossary
below. A more comprehensive glossary is available at
http://www.devx.com/wireless/Door/11271. (Last accessed August 13, 2006)

CDPD: Cellular Digital Packet Data

GPRS: General Packet Radio Service

PIM: Personal Information Management

B2B: Business to Business

B2C: Business to Customer

CDMA: Code Division Multiple Access

TDMA: Time Division Multiple Access

FDMA: Frequency Division Multiple Access

GPS: Global Positioning System

GSM: Global System for Mobile Communication

SMS: Short Message Service

MMS: Multimedia Message Service

OFDM: Orthogonal Frequency Division Multiplexing

WISP: Wireless Internet Service Provider

WTP: Wireless Transaction Protocol

IDEN: Integrated Digital Enhanced Network


Appendix B
Issues Faced During Testing
The following issues were faced during testing of MidCast:

Issue # 1
Connection Lost During Connection Process
1. Begin Midcast
2. If prompted, connect and enable mobile
3. If the connection is lost at any point during this process, the following message
is displayed: Service Connection in Progress "Error: PPP timeout (0x1231)"

Issue # 2
Cannot navigate during connection hang-ups - REAL-TIME FAILURE
1. Get MidCast running and connected
2. During periods in which the connection is slow, the user is unable to navigate
using the onscreen prompts.


Usability Issues

Issue #3
When deleting a stock, you are NOT prompted to verify that the correct action is
being performed.

Issue #4
No way to look up stock names.

Issue #5
Cannot cycle back and forth through action options; only forward movement is
possible.

Issue #6
Menus are left-justified to cells; however, "ID" is centered over the stock names.

Issue #7
The "Action" button appears to have no relationship to the other button, but it
actually controls it.

Issue #8
Cannot re-sort the list of stock names

Issue #9
Each time you start MidCast, a null stock appears, even after deletion

Issue # 10
Could not handle some NYSE stocks

1. Berkshire CL (NYSE: BRKA) was considered an invalid stock in MidCast, perhaps
because the closing trade price was 91100, which may be too large for MidCast to
display
2. Kramont PR (NYSE: KRT_pd) was considered an invalid stock in MidCast
3. Entering (NYSE: S&P 500) takes the input as two new stocks, and reads
"S&P_500" as an invalid stock
4. Would not accept NYSE as a valid input
5. Displays the volume for unknown stocks as "0"
6. The word "chart" in "Day chart" is not capitalized, but it is in "RT Chart"

Issue # 11
Real-Time Refreshing - REAL-TIME FAILURE
After some refreshes, the stock change was (e.g.) "+2.00" but the trade would still
show a red down arrow, meaning that the stock was down from its opening price

Issue # 12
Heap Memory Error - DATA INSTANCE ERROR

1. Begin MidCast and connect
2. Once inside the main console, continuously hit the "detail" action prompt,
which due to processor delay, will continue to accept it as a user selection
3. If done continuously, an "Out of heap memory" prompt will appear, which
clicking "Ok" to will close MidCast
4. If this same procedure is repeated with the "Day chart" or "RT Chart", the
message "Uncaught exception java/lang/OutOfMemoryError" appears. Hitting
"OK" restarts MidCast, but then the same error message is repeated, and will
continue to do this same loop until it eventually goes into an infinite loop of
trying to connect with no prompts or updates.

Issue # 13
Reappearing Stocks

1. Begin MidCast
2. From the main console, tap on "Day chart" 3 or 4 times continuously
3. Hit "cancel" once it displays "obtaining data" and it will then exit and reenter a
Day Chart
4. Return to the main console, and then repeat steps 2 and 3 with "RT Chart"
5. Repeat 2,3 and 4 a few times, and Stocks that have been deleted reappear, even
if they were deleted in previous sessions or if you have powered off

Issue # 14
Buffer Overrun and Memory Leak Requiring a Soft Reboot

1. Start MidCast
2. Once at the main console, tap on the "Day chart" or "RT Chart" options
continuously until "Uncaught exception java/lang/OutOfMemoryError" appears
3. Repeat this two or three times
4. MidCast will run progressively slower until it eventually locks up

Issue # 15
Connection Errors

1. Start Midcast with mobile off


2. When prompted to turn mobile on, hit "cancel"
3. When prompted to connect to the GPRS network, hit "cancel"
4. Error message "Net.lib interface error: 0x00002F37"
5. Repeat step 1, and then turn mobile on when prompted
6. As soon as it begins to initialize the connection, hit "cancel"
7. When prompted to connect to the GPRS network, hit "cancel"
8. Error message "Net.lib interface error: 0x0000121F"

Issue # 16
Daytime and Real-time Chart Overload

1. Begin MidCast
2. Click on "Day chart" continuously until "uncaught exception java/lang/..."
message appears

3. Click okay, and then immediately begin clicking on "Day chart" again about ten
times.

MidCast will now enter and exit the daytime charts 10 times before locking up and
requiring a soft reboot

Issue # 17
Strange IP Value Error

1. Strange value of this IP, causing program crash

Issue # 18
Null Pointer Exception

1. Start MidCast
2. Tap on "RT Chart" 5 times continuously
3. Once in the RT Chart, click back
4. Error message: "Uncaught exception java/lang/NullPointerException" causing
Crash

Issue # 19
Fatal Alert

1. Start MidCast
2. Click on "Day chart" continuously until "uncaught exception..." error appears

3. Click okay, then when MidCast says "obtaining data", hit cancel
4. Repeat steps 2 and 3 until "Fatal Exception" error asks you to restart

Issue # 20
No news button as advertised in the description.

Issue # 21
There is no scroll bar, just a dotted line, which does not allow scrolling

1. Add many stocks


2. Try to view them all

The only way to view them is with the hardware button on the Palm, but not all
PDAs have hardware keyboards.

Issue # 22
Pressing Action modifies the second button instead of bringing up a menu, as a
user would expect

ACCURACY

Issue # 23
On page 16 of
http://www.hillcast.com/Website/support/user_manuals/pdf/MidCastGuide_PalmOS.pdf
the arrows for wick, volume, and candlestick point to the wrong locations (they
are shifted one inch to the left)

Issue # 24
The application claims that its day chart "displays the intraday highs and lows for a
specific stock in 20-minute time intervals throughout the trading day"

1. Add stock R
2. Select day chart

At 1:46 PM the last time the chart was updated was 9:30 AM
At 2:45 PM the last price information was from 9:30 AM; the volume info
was from 12:30 PM

Issue # 25
For nonexistent stocks that start with characters like ' , the displayed information is
different from the information displayed for the rest of the nonexistent stocks.

Issue # 26
Spelling error on the Day Chart button

1. Press Action several times until you see the Day Chart button on the right.

EFFICIENCY

Issue # 27
To perform an action the user has to click through all possible actions

Issue # 28
The user is not given proper feedback. Creating a day chart for a nonexistent stock
makes the application send and receive data, but does not show any error message
or graph (just a blank screen with a back button).

1. Add stock W
2. Click action button until the button on the right is Day Chart
3. Click on Day Chart
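The fix this report suggests can be sketched as a check on the server response before rendering. This is a hypothetical illustration in Java (MidCast is a Java application, but its source is unavailable); the class and method names are invented.

```java
// Hypothetical feedback helper: when the server returns no chart data for a
// nonexistent stock, show an explicit message rather than a blank screen
// with only a back button. All names here are invented for illustration.
public class ChartFeedback {
    public static String statusFor(int dataPoints) {
        if (dataPoints <= 0) {
            // Explicit feedback instead of silently rendering nothing
            return "No chart data available for this symbol.";
        }
        return "Loaded " + dataPoints + " data points.";
    }
}
```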

Issue # 29
Sometimes, for no apparent reason, the application quits and brings the user to
the mobile panel. This is probably because of a problem with the network, but the
user is not given any feedback.

RECOVERABILITY

Issue # 30
Pressing Day Chart many times crashes the application. The displayed error is
either "Error: Uncaught exception java/lang/OutOfMemoryError" or "Error: Out of
heap memory!"

1. Select a stock
2. Press Day Chart many times

The fault is probably that the application queues all user requests without
checking the size of the queue, so if there are too many requests, at some point the
application simply runs out of memory.
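The hypothesized fix, a bounded request queue, can be sketched as follows. This is an illustration in desktop Java, not MidCast's actual code; the class and method names are invented, and a real MIDP fix would use a `Vector` (CLDC has no generics or `java.util.Queue`) with the same size check.

```java
import java.util.LinkedList;
import java.util.Queue;

// Hypothetical fix sketch: cap the number of queued chart requests so that
// rapid repeated taps on "Day Chart" cannot exhaust the heap. All names
// are invented for illustration; MidCast's real source is unavailable.
public class BoundedRequestQueue {
    private final Queue<String> pending = new LinkedList<String>();
    private final int capacity;

    public BoundedRequestQueue(int capacity) {
        this.capacity = capacity;
    }

    // Drop the request (so the UI can show "busy" feedback) instead of
    // queueing without limit until an OutOfMemoryError is thrown.
    public synchronized boolean offer(String request) {
        if (pending.size() >= capacity) {
            return false;
        }
        return pending.add(request);
    }

    public synchronized String poll() {
        return pending.poll();
    }

    public synchronized int size() {
        return pending.size();
    }
}
```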

WIRELESS NETWORK FAILURES

Issue # 31
It is impossible to connect in some rooms.

ERROR MESSAGES

Issue # 32
If the Palm is not connected to GPRS, the application asks the user to connect. If the
user selects cancel, and cancel again at the next prompt, then instead of quitting,
the application displays an error ("Error: Net.lib interface error: 0x00002F37")
and still tries to connect.

Issue # 33
There is no way to know whether the stock market is not working or the application
is not working: the data is not updated at all, but no error message or information
is given to the user.

AUTHENTICATION

Issue # 34
It is possible to make a stock appear to not exist when it does.

1. For example, stock K exists. Click Add Stock.


2. Type K, followed by many tabs, followed by an arbitrary letter
3. Click OK.

The result is that the screen shows stock K, which exists, but the information shows
that it doesn't.
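One defensive fix for this class of bug is to validate the entered symbol before the lookup, rejecting whitespace and control characters (such as the tabs above) rather than passing them to the server. A sketch with invented names follows; real ticker grammars also allow characters like '.' (e.g. BRK.A), so this is deliberately simplified.

```java
// Hypothetical input validator: refuses symbols containing whitespace or
// control characters, so input like "K\t\t\tx" is rejected up front instead
// of producing a misleading "stock does not exist" screen for stock K.
// Names are invented for illustration.
public class SymbolValidator {
    public static boolean isPlausibleSymbol(String raw) {
        if (raw == null || raw.length() == 0) {
            return false;
        }
        for (int i = 0; i < raw.length(); i++) {
            char c = raw.charAt(i);
            // Tabs, spaces, and control characters never belong in a symbol
            if (Character.isWhitespace(c) || Character.isISOControl(c)) {
                return false;
            }
        }
        return true;
    }
}
```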

Issue # 35
An error message ("Error: Strange value of this IP") appeared without obvious
reason.

Feedback Forms
Undergraduate Student 1

1. Did you have any prior experience testing mobile applications?


No.

2. How useful was the appendix introducing wireless technology? Is there


anything that could be added / deleted to make this portion more useful?
It was useful in the sense that it provided very basic information about the wireless
technology. I think it would be better if there were more information in this appendix
(preserving the same format). Links to Web sites and references to books that
discuss each category in detail might also be very helpful.

3. How did you use the failure mode catalog while testing the sample
applications?
I started with Operational Quality Criteria -> Functionality -> Suitability. I read the
general description of the category. This focused my thinking in this area. Then I
read the failure modes, and for each of them I tried to imagine how I can apply the
same idea to my current application under test. Then I moved to the next category,
and so on. I traversed the categories in a sequential order (so I wouldn't miss
anything).

4. Which portion of the catalog was most useful while designing and
executing your tests? You could say something like: I found operational
qualities criteria to be more useful than any other category of failure.
I found operational qualities criteria to be more useful than any other category of
failure for a few reasons. First of all, it was perfectly suitable for the type of testing I
was doing (black box testing). Second, I didn't need much understanding of the
wireless technology to perform the tests suggested by this category.

5. Which portion of the catalog was least useful?


The Development Quality Criteria part of the catalog wasn't very useful for me, but
I can imagine that if I were developing an application it would be really useful. On
the other hand, I believe that the Project Environment part would not be useful for
anyone, since it is not directly related to testing or development. Moreover, most of
its content is common sense anyway.

6. What are some ways in which the failure mode catalog could be made
more useful?
For me the core of the paper is the failure modes, and everything else is auxiliary.
That is why I believe that the most important thing would be to add more failure
modes in each category. For example, currently Suitability has 8, Accuracy 6, and
so on. It would be better if there were something like 15-20 in each category.

Also, it would be better if the general structure of the paper were improved. I saw the
HTML version, where each category was a link so the tester can navigate very
quickly. That was a great idea. The problem with the PDF version of the paper is that
navigation is hard. Maybe page numbers would be a little more helpful, so the chart
with the catalog could serve as a table of contents.

The purpose of the Sample Application section that immediately follows the
catalog chart is unclear. It looks like an introduction, and if so, it would be
better placed in the introduction section.

7. What additional information would you find useful regarding the form of
testing that you carried out based on bug taxonomies?
Generally, more relevant categories, and more failure modes in each one, would be
useful.

8. How much coverage of the risk catalog did you achieve while testing the
sample applications?
I spent 90% of the time on the Operational Quality Criteria. Still, I went through
the whole catalog, though rather quickly because I was aware of the time
constraints.

9. Was the distribution of time in:

Familiarization with the testing technique

Familiarization with the wireless technology

Browsing through the failure mode catalog

Time spent on testing the sample application appropriate?

How would you prefer to split the time if 30 hours are allocated in total to
finish all these tasks?
I believe that the distribution of my time was appropriate.

a) I assume that you mean familiarization with testing using a fault model catalog.
In this case 1 hour to read the paper would be enough (maybe one more hour to
read through other relevant papers).

b) I would say no more than 5% of the time, which in this case is an hour and a half.
Note: if at some point I realized that I needed additional information to perform a
test, I would spend additional time browsing the Web and reading.

c) and d) It is hard to distinguish between the two, because while I am testing I
would be browsing the catalog at the same time. This would take the rest of the
time.

10. Do you have any other comments or criticisms?


As a summary my two most important points are:

Expand the categories and add more failure modes in each of them

Improve the structure of the paper

Undergraduate Student #2

1. Did you have any prior experience testing mobile applications?


No, I actually did not have any formal experience with testing at all. However,
testing mobile applications appeared to be a great place to start because of the
limited amount of data that the applications can run on and collect.

2. How useful was the appendix introducing wireless technology? Is there


anything that could be added / deleted to make this portion more useful?
The appendix was extremely useful. As someone who had no formal experience in
testing, especially none in mobile applications (I had never even held a palm pilot
before), the appendix was perfect in creating multiple starting points that I could
use as I felt comfortable with them. Perhaps the best feature about it is that it allows
testers to divide up paths/tasks to follow, allowing them to work with what they are
comfortable with and realize what they are tackling.

3. How did you use the failure mode catalog while testing the sample
applications?
I used the catalog mainly as a checklist and a starting point. I would find something
that I was either ready or comfortable to test, then dive into the application with
that in mind. Once inside the application test, I would check back with the catalog
to see whether I had found something new, or else to get some idea of what I
should look for next. The catalog was a great resource for focusing the testing.

4. Which portion of the catalog was most useful while designing and
executing your tests? You could say something like: I found operational
qualities criteria to be more useful than any other category of failure.
Not to throw out a general response, but I found the entire catalog very useful. As I
said earlier, each section of the catalog was just a focal point to tackle. I certainly
made use of some more than others, but that in no way detracts from the usefulness
of all of them. I feel there may have been some areas that should have been
sub-classified a little more, and some other areas that should have been broadened,
but on the whole the catalog functioned much like a thorough task-managing
solution.

5. Which portion of the catalog was least useful?


If forced to answer this, I would say that Project Environment was the least useful,
but that was due solely to the fact that I didn't make use of it. Since I was on a
one-man testing team, and since that part of the catalog was not actually completed
as I did the testing, I would have to declare it useless. However, looking over the
criteria that reside within that section, I have no doubt of its projected usefulness
and its ability to organize and manage testing.

6. What are some ways in which the failure mode catalog could be made
more useful?
Certainly more examples for each of the sections would provide more usability.
Similarly, more sub-classifications so that there is a broader range of focal points
(which I assume will simply come as the catalog develops over time). With my
experience, I found the catalog more than sufficient. Perhaps someone with more
experience might find shortcomings, but I did not. The only constructive suggestion
I would make would be to include more detail in the examples, such that there is
more of an idea on how to locate certain bugs, or where it is that they arise (such as
heap memory failures, etc.). Also, perhaps definitions of the bugs, so that there is a
clear and precise understanding of what a tester is working with.

7. What additional information would you find useful regarding the form of
testing that you carried out based on bug taxonomies?
Again, definitions would be fantastic. Rather than just simply links with the bug's
name, more explanation on what the bug is and, especially, what damage it can
cause. Additionally, I'm not sure it would be beneficial to organize the bugs in a
priority manner. The current version of the catalog was not organized that way, and
I feel that it should stay as such.

8. How much coverage of the risk catalog did you achieve while testing the
sample applications?
Actually, a fair amount. What I found was that many of the bugs either did not exist
or did not apply in the program I was testing. However, a lot of the bugs I did find I
was not expecting, and probably would not have found without the catalog.

9. Was the distribution of time in:

Familiarization with the testing technique

Familiarization with the wireless technology

Browsing through the failure mode catalog


Time spent on testing the sample application appropriate?

How would you prefer to split the time if 30 hours are allocated in total to
finish all these tasks?
I would have enjoyed more time testing some sample applications. The application
I did test thoroughly did not cover a lot of the listed bugs in the catalog, and for that
reason I feel that I did not get a chance to fully familiarize myself with many of the
catalog's features. If I had to make one request, it would be more time testing,
because even a crash course in testing provides ample (and some of the best) time
spent familiarizing oneself with the catalog. Perhaps 5 hours towards familiarization
with testing techniques, wireless technology, and a general overview of the
catalog, and then the rest of the time simply allowing the user to get right into the
test application. In this way, you can really see where shortcomings in the catalog
would be just by any confusion that may arise in the tester once they get going.

10. Do you have any other comments or criticisms?


No, keep it up. I can see where the project is heading, and I feel very confident in
what it hopes to accomplish. It is an outstanding resource for introducing
testers to new tests, keeping software coders aware of some of the mistakes that slip
through, and keeping experienced testers from overlooking some of the simpler
tests.

Appendix C: Institutional Research


Board Forms
Student Application Research Involving Human
Subjects
Name: Ajay Jha

Date: Feb 24, 2004

Major: Software Engineering

Course: Masters Thesis

Title of Project: Failure Mode Catalogs

Directions: Please provide the information requested below (please type). Use a
continuation sheet if necessary, and provide appropriate supporting documentation.
Please sign and date this form and then return it to your major advisor. You should
consult the university's document "Principles, Policy and Applicability for
Research Involving Human Subjects" prior to completing this form. Copies may be
obtained from the Office of the Vice President for Research.

1. List the objectives of the proposed project.


Develop a failure mode catalog for testing mobile applications.
Use the failure mode catalog to test mobile applications, and present the evidence
of its effectiveness or ineffectiveness as part of my thesis.
Publish the failure mode catalog and master's thesis at testingeducation.org and in
software testing journals, in order to improve the testing of mobile applications.
2. Briefly describe the research project design/methodology.


An experiment will last approximately 20 hours. The initial five hours will be
used to familiarize the students with the failure mode catalog. Testers will then
carry out testing of either one or two sample applications using the failure mode
catalog.

The essence of the project design is that

We will provide the tester with the online version of the failure mode
catalog for mobile applications.

Then we will provide the tester with some examples of bugs that have
occurred in a similar application falling under a failure category.

Testers will then spend around 12 hours testing a sample application.

Testers will then fill out a survey describing their experience and provide
feedback on their experience using the risk catalog.

To evaluate the results of the experiments, we will have some context-setting at the
start of the experiment and some results analysis left to do at the end:

Everyone will get a demographic questionnaire

Some testers may get a testing experience questionnaire or another


questionnaire related to their job and academic experience.

Everyone will take an oral pretest that will give us an indication of the skills
they already have in the type of testing they are required to carry out.

After they go through the training and testing of the sample application,
everyone will fill out a survey stating their experience using the risk catalog.
3. Describe the characteristics of the subject population, including number,


age, sex, etc.
A closed invitation to students of Florida Tech who have taken SWE (or an
equivalent course) and MTH 2051. Outsiders who have a software testing
background might also be recruited.

Anticipated number: 20-60 (20 initially plus 10 for pilot studies)

Age Range: 19 and above

Sex: Male or female; gender doesn't matter

Ethnic background: doesn't matter

Health status: not obviously ill

Use of special classes (pregnant women, prisoners, etc): N/A -- We


will not reject women just because they are pregnant. We are not recruiting
specifically for subject characteristics other than a minimum level of
computing background and age of maturity.

4. Describe any potential risks to the subjects -- physical, psychological,


social, legal, etc. and assess their likelihood and seriousness.
The usual minor risks associated with working and sitting in a classroom or
conference room for several hours.

5. Describe the procedures you will use to maintain confidentiality for your
research subjects and project data. What problems, if any, do you
anticipate in this regard?
We will not use any personal data collected (name, etc.) in the thesis or
elsewhere.

Subjects will be assigned numbers, which they will use on all materials they hand
in. We will keep a list matching subject name and number, primarily for auditing
purposes. We will keep this list in an offsite file (Kaner's house) and will not share
it with others unless they have a lawful need to know. We explain this in the
consent form.

6. Describe your plan for obtaining informed consent (attach proposed form).
Florida Tech IRB: 2/96
Consent will be sought using the attached consent form.

7. Discuss what benefits will accrue to your subjects and the importance of
the knowledge that will result from your study.

Training that is relevant to the participant's field of interest.

Pay by the hour (typically $150 for the experiment or $10 per hour)

8. Explain how your proposed study meets the criteria for exemption from
Institutional Review Board review.
The following protocol applies to my research, which should be exempted because
it meets the criterion:

1. Research conducted in established or commonly accepted educational settings,


involving normal educational practices, such as:

research on regular and special education instructional strategies, or

research on the effectiveness of or the comparison among instructional


techniques, curricula, or classroom management methods.

I understand Florida Institute of Technology's policy concerning research involving


human subjects and I agree:

1. to accept responsibility for the scientific and ethical conduct of this research
study.
2. to obtain prior approval from the Institutional Review Board before amending
or altering the research protocol or implementing changes in the approved
consent form.
3. to immediately report to the IRB any serious adverse reactions and/or
unanticipated effects on subjects which may occur as a result of this study.
4. to complete, on request by the IRB, a Continuation Review Form if the study
exceeds its estimated duration.

Signature:

Ajay K Jha

Date:

Feb 25, 2004

This is to certify that I have reviewed this research protocol and that I attest to the
scientific merit of the study, the necessity for the use of human subjects in the study
to the student's academic program, and the competency of the student to conduct
the project.

Major Advisor _____________________________ Date _______________


This is to certify that I have reviewed this research protocol and that I attest to the
scientific merit of this study and the competency of the investigator(s) to conduct
the study.

Academic Unit Head __________________________Date ______________

IRB Approval _______________________________ Date ______________


Name
______________________________________________
Title

Consent Form
CONSENT FORM: FAILURE MODE CATALOG EXPERIMENT BY AJAY
JHA
We are seeking your participation in a research project in which we are developing
a failure mode catalog for testing mobile applications. We will use data from this
experiment to improve the risk catalog and adapt it to be more useful.

If you agree to participate, we may ask you to attend one or more lectures, read
materials, complete practice exercises, and take written tests at various points in the
study.

When you do an exercise or take a test, you will fill out an answer sheet. To
preserve your privacy, you will identify yourself on answer sheets with an
experimenter-assigned number. You might review answers written by other
students and they might review your answers.

You may be assigned to work on an exercise with another student, and if so, each
of you will fill out your own answer sheet.

The experiment will require several hours of your participation. If you cannot
participate for all of the scheduled hours, please do not begin this experiment. We
cannot use partial data and we cannot compensate you for partial participation.

We will split the experiment into sessions, which will last between 15 and 18
hours.

We estimate that your participation will require

sessions of __ hours each.

Participants in most of the phases of this work will be paid. Some of our colleagues
will serve as volunteers during the exploratory phases of the experiment and will
not be paid.

You will be paid $__ for your participation in this study.

You will be paid at the completion of your role in the experiment (all sessions
assigned to you). We can afford to pay you only if you attend all the assigned
sessions and complete the required assignments/exercises/quizzes. We cannot use
your data if you skip any session.

If we have to terminate your role in the experiment prematurely because of an error


that we made in running the experiment, we will pay you for the time you have
spent in the experiment to date of termination, at a rate of $10 per hour, rounded up
to the nearest whole hour.

Your participation will not subject you to any physical pain or risk. We do not
anticipate that you will be subject to any stress or embarrassment.

We will ask you to fill out one or more questionnaires that give us demographic
information about you and/or that give us insight into how you learn.

Your name will not be recorded on any answer sheet. You will be assigned an
anonymous code number. You will use that code number on your answer sheet.
Your responses will be tracked under that code number, not under your name. Any
reports about this research will contain only data of an anonymous or statistical
nature. Your name will not be used.

For auditing purposes, the experimenter will keep a list of all people who
participated in the experiment and the anonymous code assigned to them. That list
might be reviewed by the student experimenter, Ajay Jha, the project's Principal
Investigator, Cem Kaner, or by anyone designated by the Florida Institute of
Technology or an agency of the Government of the United States, including the
National Science Foundation, as having such legitimate administrative interests in
the project as analysis of the treatment of the subjects, the legitimacy of the data, or
the financial management of the project. We will file this list in a place we consider
safe and secure and take what we consider to be reasonable measures to protect its
confidentiality. We will treat it with the same (or greater) care as we would treat
our own confidential materials.

Any questions you have regarding this research may be directed to the
experimenter (Ajay Jha) or to Cem Kaner at Florida Tech's Department of
Computer Sciences, 321-674-7137. Information involving the conduct and review
of research involving humans may be obtained from the Chairman of the
Institutional Review Board of the Florida Institute of Technology, Dr. Ronald
Hansrote at 321-674-8120.

Your signature (below) indicates that you agree to participate in this research and
that:

You have read and understand the information provided above.

You understand that participation is voluntary and that refusal to participate


will involve no penalty or loss of benefits to which you are otherwise
entitled; and,

You understand that you are free to discontinue participation at any time
without penalty or loss of benefits to which you are otherwise entitled
except that you won't be entitled to be paid for the experiment since you did
not attend all the sessions.

___________________________________________
__________________

Participant

Date

I have explained the research procedures in which the


subject has consented to participate.

___________________________________________

__________________

Experimenter

Date

Appendix D: Source Code Listing


Form1.cs
using System;
using System.ComponentModel;
using System.Data;
using System.IO;
using System.Net;
using System.Runtime.InteropServices;
using System.Text;
using System.Windows.Forms;
using System.Xml;
using BookCatalogAppCS.BookCatalogWS;

namespace BookCatalogAppCS
{
public class Form1 : Form {

private Label label1;


private Button button1;
private MainMenu mainMenu1;
private DataSet BookCatalogDS;
private DataTable BookCatalogTable;
private Service1 ws = new Service1();
private static int hourGlassCursorID = 32514;
private static string DataDirectory = @"\Program Files\BookCatalogApp\Data\";
private TextBox textBox1;
private ListView listView1;
private static string DataFile = @"\Program Files\BookCatalogApp\Data\Catalog.xml";

[DllImport("coredll.dll")]
private static extern int LoadCursor (int zeroValue, int cursorID);
[DllImport("coredll.dll")]
private static extern int SetCursor(int cursorHandle);

public Form1()
{
InitializeComponent();
}

protected override void Dispose( bool disposing ) {


base.Dispose( disposing );
}
#region Windows Form Designer generated code

private void InitializeComponent() {


this.label1 = new System.Windows.Forms.Label();
this.button1 = new System.Windows.Forms.Button();
this.mainMenu1 = new
System.Windows.Forms.MainMenu();
this.textBox1 = new System.Windows.Forms.TextBox();


this.listView1 = new System.Windows.Forms.ListView();
//
// label1
//
this.label1.Font = new System.Drawing.Font("Arial", 9.75F,
System.Drawing.FontStyle.Bold);
this.label1.Location = new System.Drawing.Point(8, 11);
this.label1.Size = new System.Drawing.Size(152, 16);
this.label1.Text = "Mobile Book Catalog";
//
// button1
//
this.button1.Location = new System.Drawing.Point(160, 8);
this.button1.Text = "Get Items";
this.button1.Click += new
System.EventHandler(this.button1_Click);
//
// textBox1
//
this.textBox1.Font = new System.Drawing.Font("Arial", 9F,
System.Drawing.FontStyle.Regular);
this.textBox1.Location = new System.Drawing.Point(8,
200);
this.textBox1.Multiline = true;
this.textBox1.ReadOnly = true;
this.textBox1.ScrollBars =
System.Windows.Forms.ScrollBars.Vertical;
this.textBox1.Size = new System.Drawing.Size(224, 56);


this.textBox1.Text = "Welcome to the Mobile Book Catalog!
Click the \"Get Items\" button to download the " +
"catalog.";
//
// listView1
//
this.listView1.FullRowSelect = true;
this.listView1.Location = new System.Drawing.Point(8, 33);
this.listView1.Size = new System.Drawing.Size(224, 159);
this.listView1.SelectedIndexChanged += new
System.EventHandler(this.listView1_SelectedIndexChanged);
//
// Form1
//
this.Controls.Add(this.textBox1);
this.Controls.Add(this.listView1);
this.Controls.Add(this.button1);
this.Controls.Add(this.label1);
this.Menu = this.mainMenu1;
this.Text = "Book Catalog";
this.Closing += new
System.ComponentModel.CancelEventHandler(this.Form1_Closing);
this.Load += new System.EventHandler(this.Form1_Load);

}
#endregion
/// <summary>
/// The main entry point for the application.
/// </summary>
static void Main() {
Application.Run(new Form1());
}

private void button1_Click(object sender, EventArgs e) {


DataSet TempDS = new DataSet();

ShowWaitCursor(true);

try

{
//Get the data from the Web service and assign it to a

temporary DataSet.
//If the DataSet downloads successfully from the
Web service,
//assign TempDS to BookCatalogDS
TempDS = ws.GetItems();
BookCatalogDS = TempDS;
BookCatalogTable =
BookCatalogDS.Tables["Titles"];
AddDataToListView();
}
catch (WebException we) {
MessageBox.Show("Unable to connect. Error: " +
we.Message, "Connection Failed");
}
ShowWaitCursor(false);
}

private void listView1_SelectedIndexChanged(object sender,


EventArgs e) {
DataRow row;

if (listView1.SelectedIndices.Count > 0) {
row =
BookCatalogTable.Rows[listView1.SelectedIndices[0]];
textBox1.Text = "Description:\r\n" +
row["notes"].ToString();

try {
//pictureBox1.Image = new Bitmap(new
MemoryStream((byte[])row["Image"]));
}
catch (InvalidCastException ne) {
MessageBox.Show("Could not load image.
Exception: " + ne.Message);
}
}
}

private void Form1_Closing(object sender, CancelEventArgs e) {


DirectoryInfo dir;
FileInfo CatalogFile;
XmlWriter Writer;

ShowWaitCursor(true);

//Check to see if the application directory exists. If not,


create the directory.
dir = new DirectoryInfo(DataDirectory);
if (!dir.Exists) {
dir.Create();
}

//Check to see if the file already exists, and if so delete it.


CatalogFile = new FileInfo(DataFile);
if (CatalogFile.Exists) {
CatalogFile.Delete();
}

//Check to see whether or not there is any information in


BookCatalogTable
//(ie: Was the information downloaded) If so, save it to a
file.
if (BookCatalogDS != null) {
if (BookCatalogDS.Tables.Count != 0) {
Writer = new XmlTextWriter(DataFile,
Encoding.Unicode);

BookCatalogDS.WriteXml(Writer,XmlWriteMode.WriteSchema);
Writer.Close();
}
}

ShowWaitCursor(false);
}

private void AddImageToPictureBox() {

//pictureBox1
//pictureBox1.Image = new
System.Drawing.Bitmap(Assembly.GetExecutingAssembly().GetManifestResourc
eStream("BookCatalogAppCS.logo.gif"));
// pictureBox1.Size = pictureBox1.Image.Size;
}

private void AddDataToListView() {


ListViewItem item;

listView1.Clear();
listView1.Columns.Add("Title", listView1.Width - 60, HorizontalAlignment.Left);

listView1.Columns.Add("Price",45,HorizontalAlignment.Right);
listView1.View = View.Details;

foreach (DataRow row in BookCatalogTable.Rows) {


item = new ListViewItem(row["title"].ToString());
if (row["price"] != DBNull.Value) {
item.SubItems.Add(String.Format("{0:F2}",(decimal)row["price"]));

}
listView1.Items.Add(item);
}

textBox1.Text = "Click on a book title to see an image of the


book cover and a description.";
}

private void LoadCatalogFromFile() {


FileInfo CatalogFile = new FileInfo(DataFile);

if (CatalogFile.Exists) {
try {
BookCatalogDS = new DataSet();
BookCatalogDS.ReadXml(DataFile);
BookCatalogTable =
BookCatalogDS.Tables["Titles"];
}
catch (Exception ex) {
MessageBox.Show(ex.Message);
}
AddDataToListView();
}
}
private static void ShowWaitCursor (bool value) {
SetCursor (value ? LoadCursor(0, hourGlassCursorID) : 0);


}

private void Form1_Load(object sender, EventArgs e)


{
ShowWaitCursor(true);
AddImageToPictureBox();
LoadCatalogFromFile();
ShowWaitCursor(false);

}
}
}

WSDL
<?xml version="1.0" encoding="utf-8"?>
<definitions xmlns:http="http://schemas.xmlsoap.org/wsdl/http/"
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:s0="http://tempuri.org/"
xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/"
xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"

targetNamespace="http://tempuri.org/"
xmlns="http://schemas.xmlsoap.org/wsdl/">
<types>
<s:schema elementFormDefault="qualified"
targetNamespace="http://tempuri.org/">
<s:import namespace="http://www.w3.org/2001/XMLSchema" />
<s:element name="GetItems">
<s:complexType />
</s:element>
<s:element name="GetItemsResponse">
<s:complexType>
<s:sequence>
<s:element minOccurs="0" maxOccurs="1" name="GetItemsResult">
<s:complexType>
<s:sequence>
<s:element ref="s:schema" />
<s:any />
</s:sequence>
</s:complexType>
</s:element>
</s:sequence>
</s:complexType>
</s:element>
<s:element name="DataSet" nillable="true">
<s:complexType>
<s:sequence>
<s:element ref="s:schema" />
<s:any />
</s:sequence>
</s:complexType>
</s:element>
</s:schema>
</types>
<message name="GetItemsSoapIn">
<part name="parameters" element="s0:GetItems" />
</message>
<message name="GetItemsSoapOut">
<part name="parameters" element="s0:GetItemsResponse" />
</message>
<message name="GetItemsHttpGetIn" />
<message name="GetItemsHttpGetOut">
<part name="Body" element="s0:DataSet" />
</message>
<message name="GetItemsHttpPostIn" />
<message name="GetItemsHttpPostOut">
<part name="Body" element="s0:DataSet" />
</message>
<portType name="Service1Soap">
<operation name="GetItems">
<input message="s0:GetItemsSoapIn" />
<output message="s0:GetItemsSoapOut" />
</operation>
</portType>
<portType name="Service1HttpGet">
<operation name="GetItems">
<input message="s0:GetItemsHttpGetIn" />
<output message="s0:GetItemsHttpGetOut" />
</operation>
</portType>
<portType name="Service1HttpPost">
<operation name="GetItems">
<input message="s0:GetItemsHttpPostIn" />
<output message="s0:GetItemsHttpPostOut" />
</operation>
</portType>
<binding name="Service1Soap" type="s0:Service1Soap">
<soap:binding transport="http://schemas.xmlsoap.org/soap/http"
style="document" />
<operation name="GetItems">
<soap:operation soapAction="http://tempuri.org/GetItems" style="document"
/>
<input>
<soap:body use="literal" />
</input>
<output>
<soap:body use="literal" />
</output>
</operation>
</binding>
<binding name="Service1HttpGet" type="s0:Service1HttpGet">
<http:binding verb="GET" />
<operation name="GetItems">
<http:operation location="/GetItems" />
<input>
<http:urlEncoded />
</input>
<output>
<mime:mimeXml part="Body" />
</output>
</operation>
</binding>
<binding name="Service1HttpPost" type="s0:Service1HttpPost">
<http:binding verb="POST" />
<operation name="GetItems">
<http:operation location="/GetItems" />
<input>
<mime:content type="application/x-www-form-urlencoded" />
</input>
<output>
<mime:mimeXml part="Body" />
</output>
</operation>
</binding>
<service name="Service1">
<port name="Service1Soap" binding="s0:Service1Soap">
<soap:address
location="http://apps.gotdotnet.com/netcf/BookCatalogWS/Service1.asmx" />
</port>
<port name="Service1HttpGet" binding="s0:Service1HttpGet">
<http:address
location="http://apps.gotdotnet.com/netcf/BookCatalogWS/Service1.asmx" />
</port>
<port name="Service1HttpPost" binding="s0:Service1HttpPost">


<http:address
location="http://apps.gotdotnet.com/netcf/BookCatalogWS/Service1.asmx" />
</port>
</service>
</definitions>

Service.cs
using System;
using System.Collections;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.Web;
using System.Web.Services;
using System.Data.SqlClient;
using System.Net;
using System.IO;
using System.Drawing;

namespace BookCatalogWS
{
/// <summary>
/// Summary description for Service1.
/// </summary>
public class Service1 : System.Web.Services.WebService
{
DataSet BookCatalog;

public Service1()
{
// CODEGEN: This call is required by the ASP.NET Web Services Designer
InitializeComponent();
BookCatalog = GetDataSet();
}

#region Component Designer generated code

// Required by the Web Services Designer
private IContainer components = null;

/// <summary>
/// Required method for Designer support - do not modify
/// the contents of this method with the code editor.
/// </summary>
private void InitializeComponent()
{
}

/// <summary>
/// Clean up any resources being used.
/// </summary>
protected override void Dispose( bool disposing )
{
if(disposing && components != null)
{
components.Dispose();
}
base.Dispose(disposing);
}

#endregion

private DataSet GetDataSet()
{
    SqlConnection myConnection = new SqlConnection(
        "User ID=ajaytest;Pwd=ajaytest;Initial Catalog=pubs;Data Source=scclab05;");
    SqlDataAdapter myCommand = new SqlDataAdapter(
        "select * from Titles", myConnection);

    DataSet ds = new DataSet();
    myCommand.Fill(ds, "Titles");

    DataTable dt = ds.Tables["Titles"];
    DataColumn dc;

    // Add a new column to the dataset
    dc = new DataColumn();        // Create the new column
    dc.DataType = typeof(byte[]); // Field holds binary image data
    dc.ColumnName = "Image";      // Column name is 'Image'
    dc.Unique = false;            // Not unique
    dc.ReadOnly = false;          // Read/write
    dt.Columns.Add(dc);           // Add column

    ds.AcceptChanges();

    return ds;
}

[WebMethod]
public DataSet GetItems()
{
return BookCatalog;
}
}
}


References
Agruss, C. (2000). Software Installation Testing. Software Testing & Quality
Engineering, 2(4), from
http://www.stickyminds.com/stickyfile.asp?i=1866150&j=29860&ext=.pdf.
Altshuller, G. (1997). 40 Principles: TRIZ Keys to Technical Innovation (Vol. 1).
Worcester, MA, USA: Technical Innovation Center.
Amland, S. (1999). Risk Based Testing and Metrics. Paper presented at the 5th
International Conference, EuroSTAR '99.
Bach, J. (1999). Risk-Based Testing. Software Testing & Quality Engineering,
1(6), 22-29, from http://www.satisfice.com/articles/hrbt.pdf.
Bach, J. (2003). Troubleshooting Risk-Based Testing. Software Testing & Quality
Engineering, 5(3), 28-33, from http://www.satisfice.com/articles/rbttrouble.pdf.
Bach, J. (2006a). Heuristic Test Strategy Model (4.8 ed., pp. 1-5).
http://www.satisfice.com: Satisfice, Inc.
Bach, J. (2006b). Rapid Software Testing - Course Notes (1.9.8.3 ed.).
http://www.satisfice.com: Satisfice, Inc.
Beizer, B. (1990). Software Testing Techniques (2 ed.). New York, NY, USA: Van
Nostrand Reinhold Co.
Biaz, S., & Vaidya, N. H. (1997). Tolerating location register failures in mobile
environments. Texas, USA: Tech. Rep. No. 97-015, Texas A&M
University, Department of Computer Science.
Biaz, S., & Vaidya, N. H. (1998). Tolerating visitor location register failures in
mobile environments. Paper presented at the The 17th IEEE Symposium on
Reliable Distributed Systems.
Bishop, M., & Bailey, D. (1996). A Critical Analysis of Vulnerability Taxonomies.
Davis, CA, USA: University of California at Davis.
Bloom, B. S. (1956). Taxonomy of Educational Objectives, Handbook I: The
Cognitive Domain. Longman, New York, USA: Addison Wesley Publishing
Company.
Cao, G. (2000). Designing Efficient Fault-Tolerant Systems on Wireless Networks.
Paper presented at the Proceedings of the Third IEEE Information
Survivability Workshop.
Chakravorty, R., & Pratt, I. (2002). Performance Issues with General Packet Radio
Service. Journal of Communications and Networks, 4(2), 266-281, from
cl.cam.ac.uk/users/rc277/jcn02.ps.
Chande, S. (2005). Mobile Web Services. University of Helsinki from
http://www.cs.helsinki.fi/u/chande/courses/cs/MWS/.
Chatterjee, S., & Webber, J. (2003). Developing Enterprise Web Services: An
Architect's Guide. East Patchogue, NY, USA: Prentice Hall PTR.
Cheng, S., Lai, K., & Baker, M. (1999). Analysis of HTTP/1.1 Performance on a
Wireless Network. Stanford, CA, USA: Computer Systems Laboratory,
Stanford University from
http://citeseer.ist.psu.edu/cache/papers/cs/2053/http:zSzzSzgunpowder.stanf
ord.eduzSz~laikzSzprojectszSzwireless_httpzSzpublicationszSztech_report
zSzwireless_http.pdf/cheng99analysis.pdf.
Collard, R. (2002). Performance, Load and Stress Testing. Software Productivity
Center Inc.
Czerny, B. J., D'Ambrosio, J. G., Murray, B. T., & Sundaram, P. (2005). Effective
Application of Software Safety Techniques for Automotive Embedded
Control Systems. Unpublished SAE Technical Paper Series. SAE
International.
Dreamtech Software Team. (2002). Programming for Embedded Systems: Cracking the
Code (Bk & CD-ROM ed.). New York, NY, USA: Wiley Publishing Inc.
Erl, T. (2005). Service-Oriented Architecture (SOA): Concepts, Technology, and
Design. Upper Saddle River, New Jersey, USA: Prentice Hall Professional
Technical Reference.
Fowler, M., Beck, K., Brant, J., Opdyke, W., & Roberts, D. (2003). Refactoring:
Improving the design of existing code. Boston, MA, USA: Addison-Wesley
Longman Inc.
Gerrard, P., & Thompson, N. (2002). Risk-Based E-Business Testing (1 ed.).
Boston, MA, USA: Artech House Publishers.
Giguere, E. (1999). Palm database programming: The complete developer's guide.
Indianapolis, IN, USA: John Wiley & Sons.
GoKnow, Inc. (2004). Palm OS PAAM Conduit for Windows. Ann Arbor, MI, USA:
GoKnow, Inc. from
http://paam.goknow.com/files/PAAMWalkthrough_021403.pdf.
Hecht, H., Xuegao, A., & Hecht, M. (2003). Computer aided software FMEA for
Unified Modeling Language based software. Paper presented at the
Reliability and Maintainability, 2004 Annual Symposium - RAMS.
Henley, E. J., & Kumamoto, H. (1992). Probabilistic Risk Assessment: Reliability
Engineering, Design, and Analysis. New York, NY, USA: IEEE Press.
Hirsch, F., & Kemp, J. (2006). Mobile web services: Architecture and
implementation. West Sussex, England: John Wiley & Sons.
IBM. (2005). WebSphere Version 5.1 Application Developer 5.1.1 Web Services
Handbook. IBM WebSphere Software, Redbooks from
http://www.redbooks.ibm.com/redbooks/pdfs/sg246891.pdf.
IEEE. (1991). IEEE standard computer dictionary: A compilation of IEEE
standard computer glossaries. New York, NY, USA: IEEE Press.
ISO9126. (1991). Information technology - Software product evaluation - Quality
characteristics and guidelines for their use. Geneva, Switzerland:
International Standard ISO/IEC 9126.
ISO9241-11. (1998). Ergonomic requirements for office work with visual display
terminals: Guidance on Usability: American National Standards Institute.
Jha, A., & Kaner, C. (2003). Bugs in the brave new unwired world. Paper presented
at the Pacific Northwest Software Quality Conference.
Jouko, S., & Veikko, R. (1993). Quality Management of Safety and Risk Analysis.
Tampere, Finland: Elsevier Science Publishers Co.
Kaner, C., & Bach, J. (2005). Black box software testing. Unpublished Course
Notes. Florida Institute of Technology from
http://www.testingeducation.org/BBST/index.html.
Kaner, C., Bach, J., & Pettichord, B. (2001). Lessons learned in software testing (1
ed.). New York, NY, USA: John Wiley & Sons.
Kaner, C., Falk, J., & Nguyen, H. Q. (1999). Testing Computer Software (2 ed.).
New York, NY, USA: John Wiley and Sons.
Karygiannis, T., & Owens, L. (2002). Wireless network security. National Institute
of Standards and Technology from
http://www.csrc.nist.gov/publications/nistpubs/800-48/NIST_SP_80048.pdf.
Ko, H.-P. (1996). Attacks on cellular systems. GTE Laboratories Incorporated from
http://seclab.cs.ucdavis.edu/projects/cmad/4-1996/pdfs/Ko.PDF.
Lee, V., Schneider, H., & Schell, R. (2004). Mobile applications: Architecture,
design and development. Indianapolis, Indiana, USA: Prentice Hall
Professional Technical Reference.
Luchini, K., Quintana, C., & Soloway, E. (2004). Evaluating the Impact of Small
Screens on the Use of Scaffolded Handheld Learning Tools. Paper presented
at the American Educational Research Association Annual Meeting, 2004.
University of Michigan.
Lutz, R. R., & Woodhouse, R. M. (1996, April 15-16, 1996). Experience report:
Contributions of SFMEA to requirements analysis. Paper presented at the
Second IEEE International Conference on Requirements Engineering,
Colorado Springs, CO, U.S.A.
Lutz, R. R., & Woodhouse, R. M. (1997). Requirements Analysis Using Forward
and Backward Search. Annals of Software Engineering, Special Volume on
Requirements Engineering(3).
Lyu, M. R. (1995). Handbook of software reliability engineering. New York, NY,
USA: IEEE Computer Society Press and McGraw-Hill Book Company.
Malloy, A. D., Varshney, U., & Snow, A. P. (2002). Supporting mobile commerce
applications using dependable wireless networks. Mobile Networks and
Applications, 7(3), 225 - 234.
Mäntylä, M. (2003). Bad smells in software - A taxonomy and empirical study.
Helsinki University of Technology, Helsinki, Finland.
Marick, B. (1995). The Craft of Software Testing. Upper Saddle River, New Jersey,
USA: Prentice Hall.

Carr, M. J., Konda, S. L., Monarch, I., Ulrich, F. C., & Walker, C. F. (1993).
Taxonomy-Based Risk Identification. Pittsburgh, PA, USA: Software
Engineering Institute, Carnegie Mellon University.
McDermid, J. A., Nicholson, M., Pumfrey, D. J., & Fenelon, P. (1995). Experience
with the application of HAZOP to computer-based systems. Heslington,
York, U.K.: British Aerospace Dependable Computing Systems Centre and
High Integrity Systems Engineering Group, Department of Computer
Science, University of York from
http://citeseer.ist.psu.edu/cache/papers/cs/16867/ftp:zSzzSzftp.cs.york.ac.uk
zSzhise_reportszSzsafetyzSzexperience.pdf/mcdermid95experience.pdf.
McDermid, J. A., & Pumfrey, D. J. (1994). Towards Integrated Safety Analysis
and Design. Paper presented at the COMPASS 94: Proceedings of the
Ninth Annual Conference on Computer Assurance, Gaithersburg, MD,
USA.
McGary, R. (2005). Passing the PMP Exam: How to Take It and Pass It
(Bk&CD-Rom edition ed.). Indianapolis, Indiana, USA: Prentice Hall
Professional Technical Reference.
Newcomer, E., & Lomow, G. (2004). Understanding SOA with Web Services.
Boston, MA, USA: Addison-Wesley Professional.
Nguyen, H. Q. (2003). Testing applications on the web (2 ed.). New York, NY,
USA: John Wiley and Sons.
Nielsen, J., & Mack, R. L. (1994). Usability Inspection Methods. New York, NY,
USA: John Wiley & Sons Inc.
Nielsen, J., & Molich, R. (1990). Heuristic evaluation of user interfaces. Paper
presented at the Proc. ACM CHI'90 Conf, 249 - 256.

Passani, L. (2000). Building 'Usable' WAP Applications. Cell Network AS from
http://www.topxml.com/conference/wrox/wireless_2000/lucatext.pdf.
Pettichord, B. (2002, October 2002). Design for Testability. Paper presented at the
Pacific Northwest Software Quality Conference, Anaheim, California.
Polya, G. (2004). How to solve it: a new aspect of mathematical method. Princeton,
NJ, USA: Princeton University Press.
Pöyhönen, E. (2001). Risk-based Testing. Nokia Research Center, SW Technology
Lab from http://www.pcuf.fi/sytyke/kerhot/testaus/arkisto/2002-0114_risk.pdf.
Rosenberg, L. H., Stapko, R., & Gallo, A. (1999). Risk-based Object Oriented
Testing. Software Assurance Technology Center, Goddard Space Flight
Center - NASA from
http://sel.gsfc.nasa.gov/website/sew/1999/topics/rosenberg_SEW99paper.p
df.
Sterbenz, J. P. G., Krishnan, R., Hain, R. R., Jackson, A. W., Levin, D.,
Ramanathan, R., et al. (2002). Survivable Mobile Wireless Networks:
Issues, Challenges, and Research Directions. Paper presented at the ACM
Workshop on Wireless Security (WiSe).
Sutton, I. S. (1991). Process Reliability and Risk Management. Ardmore, PA,
USA: Van Nostrand Reinhold.
Tian, J. (2001). Quality Assurance Alternatives and Techniques: A Defect-Based
Survey and Analysis. Software Quality Professional, 3(3), 6-18.
Varshney, U., & Vetter, R. J. (2002). Mobile commerce: Framework, applications
and networking support. Mobile Networks and Applications, 7(3), 185-198.
Vijayaraghavan, G. (2003). A taxonomy of e-commerce risks and failures. Florida
Institute of Technology, Melbourne, Florida, USA.
Yang, S. J., Nieh, J., Krishnappa, S., Mohla, A., & Sajjadpour, M. (2003, 20-24
May, 2003). Web browsing performance of wireless thin-client computing.
Paper presented at the The Twelfth International World Wide Web
Conference, Budapest, Hungary. from
http://www.ncl.cs.columbia.edu/publications/www2003_fordist.pdf.
