
Client/Server Software Testing

Contents
Introduction

I: Introduction to Client/Server Architecture


1. What is Client/Server Computing?
2. Architectures for Client/Server System.
2.1. Client/Server 2-tiered architecture
2.2. Modified 2-tiered architecture
2.3. 3-tiered architecture
3. Critical Issues Involved in Client/Server System Management

II Client/Server Software Testing


1. Introduction to Client/Server Software Testing
2. Testing Plan for Client/Server Computing
3. Client/Server Testing in Different Layers
3.1. Testing on the Client Side—Graphic User Interface Testing
3.1.1. Complexity for Graphic User Interface Testing
3.1.2. GUI testing techniques
3.2. Testing on the Server Side---Application Testing
3.2.1. Client/Server loading testing
3.2.2. Volume testing
3.2.3. Stress testing
3.2.4. Performance testing
3.2.5. Other server side testing related to data storage
3.2.6. Examples for automated server testing tools
3.3. Networked Application Testing
4. Special Concerns for Internet Computing —Security Testing

Client/Server Software Testing

Introduction

The first part of this essay is an introduction to Client/Server architecture, comprising three
sections: What is Client/Server Computing?, Architectures for Client/Server System, and
Critical Issues Involved in Client/Server System Management.

Client/Server computing is a current reality for professional system developers and for
sophisticated departmental computing users. The section What is Client/Server Computing?
presents the definition and major characteristics of Client/Server computing. Netcentric (or
Internet) computing, as an evolution of the Client/Server model, has brought new technology to the
forefront. Hence, the major characteristics of, and differences between, Netcentric and traditional
Client/Server computing are also presented in this section.

Both traditional and Netcentric computing are tiered architectures. A brief introduction to three
popular architectures, namely the 2-tiered architecture, the modified 2-tiered architecture, and the
3-tiered architecture, is found in the section Architectures for Client/Server System.

The second part of this essay is about Client/Server software testing. There are four sections in this
part: Introduction to Client/Server Software Testing, Testing Plan for Client/Server Computing,
Client/Server Testing in Different Layers, and Special Concerns for Internet Computing—Security
Testing.

In the section Introduction to Client/Server Software Testing, we present some basic
characteristics of Client/Server software testing from different points of view.

Because of the differences between traditional and Client/Server software testing, a practical testing
plan based on application functionality is attached in section 2, Testing Plan for Client/Server
Computing. We also give detailed explanations of the different test plans that make up a
Client/Server testing plan, such as the system test plan, operational test plan, acceptance test plan,
and regression test plan.

As mentioned in Part I, a Client/Server system has several layers, which can be viewed
conceptually and physically. Viewed physically, the layers are client, server, middleware, and
network. In section 3 Client/Server Testing in Different Layers, specific concerns related to client,
server and network problems, testing techniques, testing tools and some activities are addressed
separately in Testing on the Client Side, Testing on the Server Side, and Network Testing.

For Internet-based Client/Server systems, security is one of the major concerns. Hence, this essay
also includes some security risks that need to be tested, in Part II, section 4, Special Concerns
for Internet Computing—Security Testing.


I: Introduction to Client/Server architecture:

Client/Server system development is the preferred method of constructing cost-effective
department- and enterprise-level strategic corporate information systems. It allows the rapid
deployment of information systems in end-user environments.

1: What is Client/Server Computing?

Client/Server computing is a style of computing involving multiple processors, one of which is
typically a workstation and across which a single business transaction is completed [1].
Client/Server computing recognizes that business users, and not a mainframe, are the center of a
business. Therefore, Client/Server is also called “client-centric” computing.

Today, Client/Server computing has been extended to the Internet as netcentric computing
(network-centric computing), and the concept of business users has expanded greatly. The
Forrester Report describes netcentric computing as “Remote servers and clients cooperating over
the Internet to do work” and says that Internet computing extends and improves the Client/Server
model [2].

The characteristics of Client/Server computing include:

1. There are multiple processors.
2. A complete business transaction is processed across multiple processors.

Netcentric computing, as an evolution of the Client/Server model, has brought new technology to
the forefront, especially in the areas of external presence and access, ease of distribution, and media
capabilities. Some of the new technologies are [3]:

a. Browser, which provides a “universal client”. In the traditional Client/Server environment,
distributing an application internally or externally for an enterprise requires that the application
be recompiled and tested for all specific workstation platforms (operating systems). It also
usually requires loading the application on each client machine. The browser-centric
application style offers an alternative to this traditional problem. The web browser provides a
universal client that offers users a consistent and familiar user interface. Using a browser, a
user can launch many types of applications and view many types of documents. This can be
accomplished on different operating systems and is independent of where the applications or
documents reside.
b. Direct supplier-to-customer relationships. The external presence and access enabled by
connecting a business node to the Internet has opened up a series of opportunities to reach an
audience outside a company’s traditional internal users.
c. Richer documents. Netcentric technologies (such as HTML documents, plug-ins, and Java)
and the standardization of media information formats enable support for complex documents,
applications, and even nondiscrete data types such as audio and video.
d. Application version checking and dynamic update. The configuration management of
traditional Client/Server applications, which tend to be stored on both the client and server
sides, is a major issue for many corporations. Netcentric computing can check and update
application versions dynamically.

2: Architectures for Client/Server System.

Both traditional Client/Server and netcentric computing are tiered architectures. In both
cases, there is a distribution of presentation services, application code, and data across clients and
servers. In both cases, there is a networking protocol that is used for communication between
clients and servers. In both cases, they support a style of computing where processes on different
machines communicate using messages. In this style, the “client” delegates business functions or
other tasks (such as data manipulation logic) to one or more server processes. Server processes
respond to messages from clients.

A Client/Server system has several layers, which can be visualized in either a conceptual or a
physical manner. Viewed conceptually, the layers are presentation, process, and database. Viewed
physically, the layers are server, client, middleware, and network.

2.1. Client/Server 2-tiered architecture:

The 2-tiered architecture is also known as the client-centric model, which implements a “fat” client.
Nearly all of the processing happens on the client, and the client accesses the database directly rather
than through any middleware. In this model, all of the presentation logic and the business logic are
implemented as processes on the client.

The 2-tiered architecture is the simplest one to implement. Hence, it is the simplest one to test. Also, it
is the most stable form of Client/Server implementation; most of the errors that testers find are
independent of the implementation. Direct access to the database makes it simpler to verify test
results.

The disadvantages of this model are its limited scalability and the difficulty of maintenance.
Because it doesn’t partition the application logic well, changes require reinstallation of the
software on all of the client desktops.

2.2. Modified 2-tiered architecture:

Because the maintenance of the 2-tiered Client/Server architecture is such a nightmare, the business
logic is moved to the database side and implemented using triggers and stored procedures. This
model is known as the modified 2-tiered architecture.

In terms of software testing, modified 2-tiered architecture is more complex than 2-tiered
architecture for the following reasons:
a. It is difficult to create a direct test of the business logic. Special tools are required to implement
and verify the tests.
b. It is possible to test the business logic from the GUI, but there is no way to determine the
number of procedures and/or triggers that fire and create intermediate results before the end
product is achieved.
c. Another complication is dynamic database queries. They are constructed by the application and
exist only when the program needs them. It is very difficult to be sure that the test generates a
query “correctly”, or as expected. Special utilities that show what is running in memory must
be used during the tests.
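
One common way to make such queries testable is to have the application expose the generated SQL so a test can inspect it before execution. A minimal sketch, assuming a hypothetical `build_query` helper and filter format (not from the original essay):

```python
# Sketch: exposing a dynamically built query so a test can verify it
# before it runs. build_query and its filter format are hypothetical.

def build_query(table, filters):
    """Construct a SELECT statement from a dict of column -> value."""
    clauses = [f"{col} = ?" for col in sorted(filters)]
    sql = f"SELECT * FROM {table}"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    params = [filters[col] for col in sorted(filters)]
    return sql, params

# A test can now assert on the generated SQL instead of guessing
# what the application actually sent to the database.
sql, params = build_query("orders", {"status": "open", "region": "EU"})
print(sql)     # SELECT * FROM orders WHERE region = ? AND status = ?
print(params)  # ['EU', 'open']
```

Asserting on the query text this way sidesteps the need for memory-inspection utilities in the simple cases, though it requires the application to be written with that hook in mind.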

2.3. 3-tiered architecture:

In the 3-tiered architecture, the application is divided into a presentation tier, a middle tier, and a data
tier. The middle tier is composed of one or more application servers distributed across one or more
physical machines. This architecture is also termed the “thin client—fat server” approach.

This model is very complicated to test because the business and/or data objects can be invoked
from many clients, and the objects can be partitioned across many servers. The characteristics that
make the 3-tiered architecture desirable as a development and implementation framework at the
same time make testing more complicated and tricky.

3: Critical Issues Involved in Client/Server System Management:

Hurwitz Consulting Group, Inc. has provided a framework for managing Client/Server systems
that identifies eight primary management issues [4]:

a. Performance management
b. Problem management
c. Software distribution
d. Configuration and administration
e. Data and storage management
f. Operations
g. Security
h. License management

II Client/Server Software Testing:

Software testing for Client/Server systems (Desktop or Webtop) presents a new set of testing
problems, but it also includes the more traditional problems testers have always faced in the
mainframe world. Atre describes the special requirements of Client/Server testing [5]:
a. The client’s user interface
b. The client’s interface with the server
c. The server’s functionality
d. The network (the reliability and performance of the network)

1. Introduction to Client/Server Software Testing:

We can view the Client/Server software testing from different perspectives:

a. From a “distributed processing” perspective: Since Client/Server is a form of distributed
processing, it is necessary to consider its testing implications from that point of view. The term
“distributed” implies that data and processes are dispersed across various and miscellaneous
platforms. Binder states several issues that need to be considered in Client/Server
environments [6]:
• Client GUI considerations
• Target environment and platform diversity considerations
• Distributed database considerations (including replicated data)
• Distributed processing considerations (including replicated processes)
• Nonrobust target environment
• Nonlinear performance relationships
b. From a cross-platform perspective: The networked, cross-platform nature of Client/Server
systems requires that we pay much more attention to configuration testing and compatibility
testing. The purpose of configuration testing is to uncover weaknesses of the system when it is
operated in the different known hardware and software environments. The purpose of
compatibility testing is to find any functional inconsistency of the interface across hardware
and software.
c. From a cross-window perspective: The current proliferation of Microsoft Windows
environments has created a number of problems for Client/Server developers. For
example, Windows 3.1 is a 16-bit environment, while Windows 95 and Windows NT are
32-bit environments. Mixing and matching 16-bit and 32-bit code, systems, and products
causes major problems. There now exist automated tools that can generate both 16-bit and
32-bit test scripts.

2. Testing Plan for Client/Server Computing:

In many instances, testing Client/Server software cannot be planned from the perspective of
traditional integrated testing activities, because this view either is not applicable at all or is too
narrow, and other dimensions must be considered. The following are some specific considerations
that need to be addressed in a Client/Server testing plan:
• Must include consideration of the different hardware and software platforms on which the
system will be used.
• Must take into account network and database server performance issues with which mainframe
systems did not have to deal.
• Must consider the replication of data and processes across networked servers.

See attached “Client/Server test plan based on application functionality” [7].

In the test plan, we may address or construct several different kinds of testing:
a. The system test plan: System test scenarios are a set of test scripts, which reflect user behaviors
in a typical business situation. It’s very important to identify the business scenarios before
constructing the system test plan.

See attached CASE STUDY: The business scenarios for the MFS imaging system

b. The user acceptance test plan: The user acceptance test plan is very similar to the system test
plan. The major difference is its direction: the user acceptance test is designed to demonstrate
the major system features to the user, as opposed to finding new errors.

See attached CASE STUDY: Acceptance test specification for the MFS imaging system

c. The operational test plan: It guides the single-user testing of the graphical user interface and of
the system functions. This plan should be constructed according to subsections A and B of
Section II in the testing plan template, Client/Server test plan based on application
functionality. (See attached Appendix I)

d. The regression test plan: Regression testing occurs at two levels. In Client/Server
development, regression testing happens between builds. Between system releases, regression
testing also occurs postproduction. Each new build/release must be tested for three aspects:
• To uncover errors introduced by a fix into previously correct functions.
• To uncover previously reported errors that remain.
• To uncover errors in the new functionality.
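
The first of those aspects can be automated by comparing current behavior against results recorded from the previous build. A minimal sketch, with an invented `tax` function and invented baseline values standing in for real application outputs:

```python
# Sketch: a minimal regression check comparing current outputs against
# a baseline recorded from the previous build. tax() and the baseline
# values are invented for illustration.

def tax(amount):
    """Function under regression test (current build's implementation)."""
    return round(amount * 0.07, 2)

# Results captured from the previous, known-good build.
baseline = {10.00: 0.70, 99.99: 7.00, 0.0: 0.0}

# Any mismatch means a fix broke previously correct behavior.
regressions = {arg: (expected, tax(arg))
               for arg, expected in baseline.items()
               if tax(arg) != expected}
print(regressions)  # an empty dict means nothing regressed
```

In a real Client/Server build pipeline the baseline would be stored alongside the build artifacts and refreshed only when a behavior change is intentional.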

e. Multiuser performance test plan: This testing must be performed in order to uncover any
unexpected system performance problems under load. This test plan should be constructed
from Section V of the testing plan template, Client/Server test plan based on application
functionality. (See attached Appendix I)

3. Client/Server Testing in Different Layers:

3.1. Testing on the Client Side—Graphic User Interface Testing:

3.1.1 The complexity for Graphic User Interface Testing is due to:
a. Cross-platform nature: The same GUI objects may be required to run transparently
(providing a consistent interface across platforms, with the cross-platform nature
unknown to the user) on different hardware and software platforms.
b. Event-driven nature: GUI-based applications have increased testing requirements
because they are in an event-driven environment where user actions are events that
determine the application’s behavior. Because the number of available user actions is
very high, the number of logical paths in the supporting program code is also very high.
c. The mouse, as an alternate method of input, also raises some problems. It is necessary
to assure that the application handles both mouse input and keyboard input correctly.
d. GUI testing also requires testing for the existence of files that provide supporting
data/information for text objects. The application must be sensitive to the existence, or
nonexistence, of those files.
e. In many cases, GUI testing also involves the testing of the function that allows end-
users to customize GUI objects. Many GUI development tools give the users the ability
to define their own GUI objects. The ability to do this requires the underlying
application to be able to recognize and process events related to these custom objects.

3.1.2 GUI testing techniques: Many traditional software testing techniques can be used in GUI
testing.

a. Review techniques such as walkthroughs and inspections [8]. These human testing
procedures have been found to be very effective in the prevention and early correction
of errors. It has been documented that two-thirds of all of the errors in finished
information systems are the results of logic flaws rather than poor coding [9].
Preventive testing approaches such as walkthroughs and inspections can eliminate the
majority of these analysis and design errors before they reach the production
system.

b. Data validation techniques: Some of the most serious errors in software systems have
been the result of inadequate or missing input validation procedures. Software testing
has powerful data validation procedures in the form of the Black Box techniques of
Equivalence Partitioning, Boundary Analysis, and Error Guessing. These techniques
are also very useful in GUI testing.

c. Scenario testing: It is a system-level Black Box approach that also assures good White
Box logic-level coverage for Client/Server systems.
d. The decision logic table (DLT): DLT represents an external view of the functional
specification that can be used to supplement scenario testing from a logic-coverage
perspective. In DLTs, each logical condition in the specification becomes a control path
in the finished system. Each rule in the table describes a specific instance of a pathway
that must be implemented. Hence, test cases based on the rules in a DLT provide
adequate coverage of the module’s logic independent of its coded implementation.
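
A DLT can be encoded directly as data, with each rule becoming one test case. The conditions below (a membership flag and an order total driving a discount) are invented purely to illustrate the technique:

```python
# Sketch: a decision logic table encoded as rules, each rule a
# (conditions, expected action) pair. The discount policy is invented.

def discount(is_member, order_total):
    """Implementation under test."""
    if is_member and order_total >= 100:
        return 0.10
    if is_member:
        return 0.05
    return 0.0

# Each row of the DLT becomes one test case, giving logic coverage
# independent of how discount() happens to be coded.
dlt_rules = [
    # (is_member, order_total, expected_discount)
    (True,  150, 0.10),  # rule 1: member, large order
    (True,   50, 0.05),  # rule 2: member, small order
    (False, 150, 0.00),  # rule 3: non-member, large order
    (False,  50, 0.00),  # rule 4: non-member, small order
]

for is_member, total, expected in dlt_rules:
    assert discount(is_member, total) == expected
print("all DLT rules pass")
```

Because the rules mirror the specification rather than the code, the same table remains valid if the implementation is restructured.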

In addition to these traditional testing techniques, a number of companies have begun
producing structured capture/playback testing tools that address the unique properties of
GUIs. The difference between the traditional and structured capture/playback paradigms is
that traditional capture/playback occurs at an external level. It records input as keystrokes
or mouse actions and output as screen images that are saved and compared against the inputs
and output images of subsequent test runs.

Structured capture/playback, by contrast, is based on an internal view of external activities. The
application program’s interactions with the GUI are recorded as internal “events” that can
be saved as “scripts” written in a scripting language.
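
The idea can be illustrated with a recorded script held as a list of internal events rather than raw keystrokes or pixels. The event names, widget identifiers, and replay loop below are hypothetical, not taken from any particular tool:

```python
# Sketch: a structured capture/playback script stored as internal GUI
# events rather than keystrokes or screen images. All names are invented.

recorded_script = [
    ("set_text", "username", "alice"),
    ("set_text", "password", "secret"),
    ("click", "login_button", None),
]

def replay(script, app_state):
    """Replay recorded events against a toy application model."""
    for event, widget, value in script:
        if event == "set_text":
            app_state[widget] = value
        elif event == "click" and widget == "login_button":
            app_state["logged_in"] = (
                app_state.get("username") == "alice"
                and app_state.get("password") == "secret"
            )
    return app_state

state = replay(recorded_script, {})
print(state["logged_in"])  # True
```

Because the script refers to widgets by name rather than screen position, it survives cosmetic GUI changes that would break a pixel-level recording, which is the main selling point of the structured approach.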

3.2 Testing on the Server Side---Application Testing:

Scripts can be designed to invoke several types of server tests: load tests, volume tests,
stress tests, performance tests, and data-recovery tests.

3.2.1 Client/Server loading tests:

Client/Server systems must undergo two types of testing: single-user, function-based
testing and multiuser load testing.
Multiuser load testing is the best method to gauge Client/Server performance. It is
necessary in order to determine the suitability of application server, database server, and
web server performance. Because a multiuser load test requires emulating a situation in
which multiple clients access a single server application, it is almost impossible to perform
without automation.

For Client/Server load testing, some common objectives include:
• Measuring the length of time to complete an entire task
• Discovering which hardware/software configuration provides optimal performance
• Tuning database queries for optimal response
• Capturing Mean-Time-To-Failure as a measure of reliability
• Measuring system capacity to handle loads without performance degradation
• Identifying performance bottlenecks

Based on the test objectives, a set of performance measurements should be described.
Typical measurements include:
• End-to-end response time
• Network response time
• GUI response time
• Server response time
• Middleware response time
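
A minimal multiuser load test can be sketched with threads, each emulating a client and recording end-to-end response times. The `server_call` stub below stands in for a real client/server transaction; a real test would replace it with actual network requests:

```python
# Sketch: emulating concurrent clients and collecting end-to-end
# response times. server_call is a stand-in for a real transaction.
import threading
import time

def server_call():
    """Stand-in for a client/server transaction."""
    time.sleep(0.01)  # pretend the server takes about 10 ms

def client(n_requests, timings):
    """One emulated client issuing n_requests transactions."""
    for _ in range(n_requests):
        start = time.perf_counter()
        server_call()
        timings.append(time.perf_counter() - start)

timings = []            # list.append is thread-safe in CPython
threads = [threading.Thread(target=client, args=(5, timings))
           for _ in range(10)]   # 10 emulated clients, 5 requests each
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"requests: {len(timings)}")
print(f"mean response: {sum(timings) / len(timings):.4f} s")
```

Commercial tools add what this sketch lacks: realistic pacing, think times, per-layer timing breakdowns, and coordination of hundreds of virtual users across machines.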

3.2.2 Volume testing:
The purpose of volume testing is to find weaknesses in the system with respect to its
handling of large amounts of data during extended time periods.

3.2.3 Stress testing:
The purpose of stress testing is to find defects in the system’s capacity to handle large
numbers of transactions during peak periods. For example, a script might require users to
log in and proceed with their daily activities while, at the same time, a series of
workstations emulating a large number of other systems are running recorded scripts that
add, update, or delete from the database.

3.2.4 Performance testing:
System performance is generally assessed in terms of response time and throughput rates
under differing processing and configuration conditions. To attack performance
problems, several questions should be asked first:
• How much application logic should be remotely executed?
• How much updating should be done to the database server over the network from
the client workstation?
• How much data should be sent to the client in each transaction?

According to Hamilton [10], the performance problems are most often the result of the
client or server being configured inappropriately.

The best strategy for improving client-server performance is a three-step process [11]. First,
execute controlled performance tests that collect data about volume, stress, and loading.
Second, analyze the collected data. Third, examine and tune the database queries and,
if necessary, provide temporary data storage on the client while the application is
executing.
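
The second step, analyzing the collected data, often means summarizing response-time distributions rather than looking at raw numbers. A sketch, using invented sample values:

```python
# Sketch: summarizing response-time samples gathered during controlled
# load tests. The sample values are invented for illustration.
import statistics

def summarize(samples):
    """Return mean, median and 95th-percentile response times."""
    ordered = sorted(samples)
    idx95 = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return {
        "mean": statistics.mean(ordered),
        "median": statistics.median(ordered),
        "p95": ordered[idx95],
    }

samples = [0.12, 0.10, 0.11, 0.13, 0.45, 0.12, 0.11, 0.10, 0.14, 0.12]
stats = summarize(samples)
print(stats)
```

Note how the single 0.45 s outlier dominates the 95th percentile while barely moving the median; this is why percentile measures, not just averages, matter when deciding which queries to tune.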

3.2.5 Other server-side testing related to data storage:
• Data recovery testing
• Data backup and restore testing
• Data security testing
• Replicated data integrity testing

3.2.6 Examples of automated server testing tools:
LoadRunner/XL, offered by Mercury Interactive, is a Unix-based automated server
testing tool that tests the server side of multiuser Client/Server applications.
LoadRunner/PC is a similar product for Windows environments.

SQL Inspector and ODBC Inspector are tools for testing the link between the client and the
server. These products monitor the database interface pipeline and collect information
about all database calls or a selected subset of them.

SQL Profiler is used for tuning database calls. It stores and displays statistics about SQL
commands embedded in Client/Server applications.

SQLEYE is an NT-based tool offered by Microsoft. It can track the information passed
between the SQL Server and its clients. Client applications connect indirectly to SQL Server
through SQLEYE, which allows users to view the queries sent to SQL Server, the returned
results, row counts, messages, and errors.

3.3 Networked Application Testing

Testing the network is beyond the scope of an individual Client/Server project, as the network
may serve more than a single project. Thus, network testing falls into the domain of the
network management group. As Robert Buchanan [12] said: “If you haven’t tested a network
solution, it’s hard to say if it works. It may ‘work’. It may execute all commands, but it may
be too slow for your needs.”

Nemzow blames the majority of network performance problems on insufficient network
capacity [13]. He views bandwidth and latency as the critical determinants of network speed
and capacity. He also sees interactions among intermediate network nodes (switches,
bridges, routers, and gateways) as adding to the problem.

Elements of network testing include:
• Application response time measures
• Application functionality
• Throughput and performance measurement
• Configuration and sizing
• Stress testing and performance testing
• Reliability

It is necessary to measure application response time while the application is completing a
series of tasks. This kind of measure reflects the user’s perception of the network and is
applicable throughout the entire network life cycle. Testing application functionality
involves testing shared functionality across workstations, shared data, and shared
processes. This type of testing is applicable during development and evolution.
Configuration and sizing tests measure the response of specific system configurations. This
is done for different network configurations until the desired performance level is reached.
The point of stress testing is to overload network resources such as routers or hubs.
Performance testing can be used to determine how many network devices will be required
to meet the network’s performance requirements. Reliability testing involves running the
network for 24-72 hours under a medium-to-heavy load. From a reliability point of view, it
is important that the network remain functional in the event of a node failure.
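
Application response time, the user-facing measure above, can be sampled at the client by timing a complete request/reply round trip. In the sketch below, a loopback echo server stands in for the real networked application, so the numbers only demonstrate the measurement technique:

```python
# Sketch: measuring application-level response time over a socket.
# A loopback echo server stands in for the real networked application.
import socket
import threading
import time

def echo_server(sock):
    """Accept one connection and echo whatever arrives."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

server = socket.socket()
server.bind(("127.0.0.1", 0))       # OS picks a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# Time one complete request/reply round trip, as the user perceives it.
start = time.perf_counter()
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"ping")
    reply = client.recv(1024)
elapsed = time.perf_counter() - start

print(f"reply: {reply!r}, round trip: {elapsed * 1000:.2f} ms")
```

Repeating this measurement across the network life cycle (and across network configurations) yields the trend data that the configuration-and-sizing tests described above rely on.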

4 Special Concerns for Internet Computing --- Security Testing:

For internet-based Client/Server systems, security testing for the web server is important. The
web server is your LAN’s window to the world and, conversely, is the world’s window to
your LAN.

The following excerpt is taken from the WWW Security FAQ [14]:

It’s a maxim in system security circles that buggy software opens up security holes. It’s a maxim in software
development circles that large, complex programs contain bugs. Unfortunately, web servers are large, complex
programs that can contain security holes. Furthermore, the open architecture of web servers allows arbitrary CGI
scripts to be executed on the server’s side of the connection in response to remote requests. Any CGI script
installed at your site may contain bugs, and every such bug is a potential security hole.

Three types of security risks have been identified [15]:

1. The primary risk is errors or misconfiguration on the web server side that would allow
remote users to:
• Steal confidential information
• Execute commands on the server host, thus allowing the users to modify the system
• Gain information about the server host that would allow them to break into the
system
• Launch attacks that will bring the system down.
2. The secondary risk occurs on the browser side:
• Active content that crashes the browser, damages your system, breaches your
company’s privacy, or creates an annoyance.
• The misuse of personal information provided by the end user.
3. The tertiary risk is data interception during data transfer.

The above risks are also the focus of web server security testing. As a tester, it is your
responsibility to test whether the security measures provided by the server meet the users’
expectations for network security.
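
A small piece of server-side risk testing can be automated: probing the server for paths that should never be served. The paths below are illustrative placeholders, not a real checklist, and the `fetch` callable abstracts over whatever HTTP client the test harness actually uses:

```python
# Sketch: probing a web server for paths that should not be served.
# SENSITIVE_PATHS is illustrative; a real test plan would use a
# site-specific checklist.
from urllib.parse import urljoin

SENSITIVE_PATHS = ["/admin/", "/.git/config", "/backup.sql"]

def check_paths(fetch, base_url):
    """fetch(url) -> HTTP status code; return paths that were served."""
    exposed = []
    for path in SENSITIVE_PATHS:
        status = fetch(urljoin(base_url, path))
        if status == 200:          # should be 403/404, never 200
            exposed.append(path)
    return exposed

# Stub fetch for demonstration: pretend /admin/ is world-readable.
def fake_fetch(url):
    return 200 if url.endswith("/admin/") else 404

print(check_paths(fake_fetch, "http://example.test"))  # ['/admin/']
```

Such a probe only covers the misconfiguration class of risk; browser-side active content and data interception during transfer require different test approaches.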

Summary:

Client/Server system development is the preferred method of constructing cost-effective
department- and enterprise-level strategic corporate information systems. It allows the rapid
deployment of information systems in end-user environments.

Both traditional Client/Server and netcentric computing are tiered architectures.
Currently, the three dominant types of Client/Server architectures are the 2-tiered
architecture, the modified 2-tiered architecture, and the 3-tiered architecture. The 2-tiered
architecture is the simplest one to implement, and the simplest one to test. The characteristics
of the 3-tiered architecture that make it desirable as a development and implementation
framework at the same time make testing more complicated.

Testing Client/Server software cannot be planned solely from the perspective of traditional
integrated testing activities. In a Client/Server testing plan, some specific considerations need
to be addressed, such as different hardware and software platforms, network and database
server performance issues, and the replication of data and processes across networked
servers.

The complexity of GUI (Graphic User Interface) testing is increased by certain
characteristics of GUIs, for instance their cross-platform nature, their event-driven nature,
and an additional input method, the mouse. Many traditional software testing techniques can
be used in GUI testing. Currently, a number of companies have begun producing structured
capture/playback tools that address the unique properties of GUIs.

Scripts can be designed to invoke several types of server tests: load tests, volume tests,
stress tests, performance tests, and data-recovery tests. These types of testing are nearly
impossible without automation. Some sophisticated testing tools for server-side testing have
already emerged in the market, such as LoadRunner/XL, SQL Inspector, SQL Profiler, and
SQLEYE.

Network testing is a necessary but difficult series of tasks. Its difficulty is compounded by the
fact that Client/Server development may be targeted for an existing network or for one that is
yet to be installed. Proactive network management and proper capacity planning will be very
helpful. In addition, performance and stress testing can ease the network testing burden.

For internet-based Client/Server systems, security testing for the web server is important. The
web server is your LAN’s window to the world and, conversely, is the world’s window to
your LAN. As a tester, it is your responsibility to find weaknesses in the system’s security.

