
Measuring Capacity Bandwidth of Targeted Path Segments
ABSTRACT
Accurate measurement of network bandwidth is important for network
management applications as well as flexible Internet applications and protocols which
actively manage and dynamically adapt to changing utilization of network resources.
Extensive work has focused on two approaches to measuring bandwidth:
measuring it hop-by-hop, and measuring it end-to-end along a path. Unfortunately, best-practice
techniques for the former are inefficient, and techniques for the latter are only
able to observe bottlenecks visible at end-to-end scope.
We develop end-to-end probing methods which can measure bottleneck capacity
bandwidth along arbitrary, targeted subpaths of a path in the network, including subpaths
shared by a set of flows.
We evaluate our technique through ns simulations, and then provide a
comparative Internet performance evaluation against hop-by-hop and end-to-end
techniques. We also describe a number of applications which we foresee as standing to
benefit from solutions to this problem, ranging from network troubleshooting and
capacity provisioning to optimizing the layout of application-level overlay networks, to
optimized replica placement.

INTRODUCTION
MEASUREMENT of network bandwidth is important for many Internet
applications and protocols, especially those involving the transfer of large files and those
involving the delivery of content with real-time QoS constraints, such as streaming
media. Some specific examples of applications which can leverage accurate bandwidth
estimation include end-system multicast and overlay network configuration protocols,
content location and delivery in peer-to-peer (P2P) networks, network-aware cache or
replica placement policies and flow scheduling and admission control policies at
massively-accessed content servers. In addition, accurate measurements of network
bandwidth are useful to network operators concerned with problems such as capacity
provisioning, traffic engineering, network troubleshooting and verification of service
level agreements (SLAs).

Leveraging shared bandwidth measurement for optimizing parallel downloads (left) and
overlay network organization (right). Numeric labels represent capacity bandwidth of
path segments in Mbps.

Bandwidth Measurement:
Two different measures used in end-to-end network bandwidth estimation are capacity
bandwidth, or the maximum transmission rate that could be achieved between two hosts
at the endpoints of a given path in the absence of any competing traffic, and available
bandwidth, the portion of the capacity bandwidth along a path that could be acquired by
a given flow at a given instant in time. Both of these measures are important, and each
captures different relevant properties of the network. Capacity bandwidth is a static
baseline measure that applies over long time-scales (up to the time-scale at which
network paths change), and is independent of the particular traffic dynamics at a time
instant. Available bandwidth provides a dynamic measure of the load on a path, or more
precisely, the residual capacity of a path. Additional application-specific information
must then be applied before making meaningful use of either measure. While measures of
available bandwidth are certainly more useful for control or optimization of processes
operating at short time scales, processes operating at longer time scales (e.g., server
selection or admission control) will find estimates of both measures to be helpful. On the
other hand, many network management applications (e.g., capacity provisioning) are
concerned primarily with capacity bandwidth. We focus on measuring capacity
bandwidth.
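
Although this project targets subpath capacity, the underlying quantity is easiest to see with the classic end-to-end packet-pair estimator: two equal-size probe packets sent back-to-back are spread apart by the bottleneck link, and the spacing observed at the receiver yields capacity as packet size divided by dispersion. The sketch below is a minimal receiver-side illustration of that idea only (not the subpath technique developed here); the port number and probe size are assumptions chosen for illustration.

import java.net.DatagramPacket;
import java.net.DatagramSocket;

/*
 * Receiver-side sketch of the classic packet-pair estimate of capacity
 * bandwidth: two equal-size probe packets sent back-to-back are spread
 * apart by the bottleneck link, so C ~= packet size / dispersion.
 * The port number and probe payload size are illustrative assumptions.
 */
public class PacketPairReceiver {
    public static void main(String[] args) throws Exception {
        final int PORT = 9876;          // assumed probe port
        final int PROBE_BYTES = 1472;   // assumed probe payload size
        try (DatagramSocket socket = new DatagramSocket(PORT)) {
            byte[] buf = new byte[PROBE_BYTES];
            DatagramPacket p = new DatagramPacket(buf, buf.length);

            socket.receive(p);                       // first probe of the pair
            long t1 = System.nanoTime();
            socket.receive(p);                       // second probe of the pair
            long t2 = System.nanoTime();

            double dispersionSec = (t2 - t1) / 1e9;  // spacing imposed by the bottleneck
            double capacityBps = (PROBE_BYTES * 8) / dispersionSec;
            System.out.printf("Estimated capacity: %.2f Mbps%n", capacityBps / 1e6);
        }
    }
}

In practice many pairs would be sent and a robust statistic (for example, the mode of the per-pair estimates) taken, since competing traffic can compress or expand the observed spacing.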

Catalyst Applications:
To exemplify the type of applications that can leverage the identification of shared
capacity bandwidth (or, more generally, the capacity bandwidth of an arbitrary, targeted
subpath), consider two scenarios. In the first, a client must select two out of three sources
from which to download data in parallel. This scenario may arise when downloading
content in parallel from a subset of mirror sites or multicast sources, or from a subset of
peer nodes in P2P environments. In the second scenario, an overlay network must be set
up between a single source and two destinations. This scenario may arise in ad-hoc
networks and end-system multicast systems.

Paper Scope, Contributions, and Organization:


In this project we propose an efficient end-to-end measurement technique that yields the
capacity bandwidth of an arbitrary subpath of a route between a set of end-points. By
subpath, we mean a sequence of consecutive network links between any two identifiable
nodes on that path. A node on a path between a source and a destination is identifiable if
it is possible to coerce a packet injected at the source to exit the path at that node. One
can achieve this by:
1) targeting the packet to the node itself (if the node's IP address is known),
2) forcing the packet to stop at the node through the use of the TTL field (if the hop count
from the source to the node is known), or
3) targeting the packet to an alternate destination, such that the path to that destination
and the original path are known to diverge at the node.
Our methods are much less resource-intensive than existing hop-by-hop methods for
estimating bandwidth along a path and much more general than end-to-end methods for
measuring capacity bandwidth. In particular, our method provides the following
advantages over existing techniques:
1) it can estimate bandwidth on links not visible at end-to-end scope,
2) it can measure the bandwidth of fast links following slow links, as long as the
ratio between the link speeds does not exceed the ratio between the largest and the
smallest possible packet sizes that could be transmitted over these links (for instance,
with 40-byte and 1500-byte probe packets that ratio is 37.5, so link-speed ratios of up to
about 37.5 can be handled).
The remainder of this paper is organized as follows: we review existing literature; we
develop a basic probing toolkit, comprising existing methods and our new ideas; and we
present results of simulation, controlled laboratory experiments and Internet validation
experiments, showing the effectiveness of our constructions.
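
The role of packet sizes in advantage 2 can be illustrated with variable-packet-size probing: across a single link of capacity C, a probe of size P takes P/C seconds to serialize, so differencing the round-trip times of a small and a large probe sent to a cooperating responder just beyond that link cancels propagation and processing delay and leaves (P_large - P_small)/C. The sketch below is only an illustration of that idea under the assumption of a cooperating UDP echo responder one hop across the link of interest and an otherwise idle path; the host, port and probe sizes are hypothetical.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

/*
 * Sketch of variable-packet-size probing against a cooperating UDP echo
 * responder assumed to sit directly across the link of interest.  The
 * difference in round-trip times between a small and a large probe leaves
 * only the extra serialization time (P_large - P_small) / C, from which
 * the link capacity C is estimated.  Host, port and probe sizes are
 * illustrative assumptions, not the subpath technique of this project.
 */
public class SizeDeltaProbe {
    static double rttSeconds(DatagramSocket s, InetAddress host, int port, int bytes)
            throws Exception {
        byte[] out = new byte[bytes];
        byte[] in = new byte[64];                       // responder echoes a short reply
        long t0 = System.nanoTime();
        s.send(new DatagramPacket(out, out.length, host, port));
        s.receive(new DatagramPacket(in, in.length));
        return (System.nanoTime() - t0) / 1e9;
    }

    public static void main(String[] args) throws Exception {
        InetAddress host = InetAddress.getByName("192.168.1.10"); // hypothetical responder
        int port = 9877;                                          // hypothetical probe port
        try (DatagramSocket s = new DatagramSocket()) {
            s.setSoTimeout(2000);
            double dSmall = rttSeconds(s, host, port, 64);
            double dLarge = rttSeconds(s, host, port, 1464);
            double capacityBps = (1464 - 64) * 8 / (dLarge - dSmall);
            System.out.printf("Estimated link capacity: %.2f Mbps%n", capacityBps / 1e6);
        }
    }
}

A single sample per size is shown for brevity; in practice the minimum of many samples per size is used to filter out queueing delay.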

Topology between a server and two clients.


MODULES
There are 2 modules in this project:
SERVER
CLIENT

CLIENT A

CLIENT B

CLIENT C

EXISTING SYSTEM
Bandwidth is important for many Internet applications and protocols, especially
those involving the transfer of large files and those involving the delivery of content with
real-time QoS constraints, such as streaming media. Some specific examples of
applications which can leverage accurate bandwidth estimation include end-system
multicast and overlay network configuration protocols, content location and delivery in
peer-to-peer networks, network-aware cache or replica placement policies, and flow
scheduling and admission control policies at massively-accessed content servers. In
addition, accurate measurements of network bandwidth are useful to network operators
concerned with problems such as capacity provisioning, traffic engineering, network
troubleshooting and verification of service level agreements.

PROPOSED SYSTEM
In the proposed system we present an efficient end-to-end measurement technique that
yields the capacity bandwidth of an arbitrary subpath of a route between a set of
end-points. By subpath, we mean a sequence of consecutive network links between any
two identifiable nodes on that path. A node on a path between a source and a destination
is identifiable if it is possible to coerce a packet injected at the source to exit the path at
that node.
We can achieve this by:
1) targeting the packet to the node itself (if the node's IP address is known),
2) forcing the packet to stop at the node through the use of the TTL field (if the hop
count from the source to the node is known), or
3) targeting the packet to an alternate destination, such that the path to that destination
and the original path are known to diverge at the node.
Our methods are much less resource-intensive than existing hop-by-hop methods for
estimating bandwidth along a path and much more general than end-to-end methods for
measuring capacity bandwidth.
Advantages
Our method provides the following advantages over existing techniques:
1) it can estimate bandwidth on links not visible at end-to-end scope,
2) it can measure the bandwidth of fast links following slow links as long as the
ratio between the link speeds does not exceed the ratio between the largest and
the smallest possible packet sizes that could be transmitted over these links.
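
As a rough illustration of the end-to-end measurement that the SERVER and CLIENT modules of this project perform, the sketch below times the transfer of a block of known size over a TCP connection and reports the achieved rate. The rate approximates the path's capacity bandwidth only when the path is otherwise idle, and the port and block size are assumptions for illustration.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

/*
 * Simplified sketch of the server/client measurement in this project: the
 * server streams a block of known size to the client, and the client times
 * the transfer and reports the achieved rate.  Port and block size are
 * illustrative assumptions.
 */
public class TransferTimingDemo {
    static final int PORT = 8585;
    static final int BLOCK_BYTES = 512 * 1024;

    public static void main(String[] args) throws Exception {
        try (ServerSocket ss = new ServerSocket(PORT)) {
            // Server role: accept one connection and push the block.
            Thread server = new Thread(() -> {
                try (Socket conn = ss.accept();
                     OutputStream out = conn.getOutputStream()) {
                    out.write(new byte[BLOCK_BYTES]);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            server.start();

            // Client role: read the whole block and time the transfer.
            try (Socket sock = new Socket("localhost", PORT);
                 InputStream in = sock.getInputStream()) {
                byte[] buf = new byte[8192];
                long received = 0;
                long t0 = System.nanoTime();
                int n;
                while ((n = in.read(buf)) != -1) {
                    received += n;
                }
                double seconds = (System.nanoTime() - t0) / 1e9;
                System.out.printf("Received %d KB in %.3f s -> %.2f Mbps%n",
                        received / 1024, seconds, received * 8 / seconds / 1e6);
            }
            server.join();
        }
    }
}

The sample client classes later in this document apply the same timing idea to the text file streamed from the server.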

2.1 STUDY OF THE SYSTEM


To provide flexibility to the users, interfaces have been developed that are accessible
through a browser. The GUIs at the top level have been categorized as:
1. Administrative user interface
2. The operational or generic user interface
The administrative user interface concentrates on the consistent information that is
practically part of the organizational activities and which needs proper authentication for
data collection. These interfaces help the administrators with all the transactional
states like data insertion, data deletion and data updation, along with extensive data
search capabilities.
The operational or generic user interface helps the end users of the system in
transactions through the existing data and required services. The operational user
interface also helps the ordinary users in managing their own information in a customized
manner as per the included flexibilities.
2.2 INPUT & OUTPUT REPRESENTATION
Input design is a part of overall system design. The main objectives during input
design are as given below:

To produce a cost-effective method of input.

To achieve the highest possible level of accuracy.

To ensure that the input is acceptable and understood by the user.

INPUT STAGES:
The main input stages can be listed as below:
Data recording
Data transcription
Data conversion
Data verification

Data control
Data transmission
Data validation
Data correction
INPUT TYPES:
It is necessary to determine the various types of inputs. Inputs can be categorized as
follows:
External inputs, which are prime inputs for the system.
Internal inputs, which are user communications with the system.
Operational, which are the computer department's communications to the
system.
Interactive, which are inputs entered during a dialogue.
INPUT MEDIA:
At this stage choice has to be made about the input media. To conclude about the
input media consideration has to be given to;
Type of input
Flexibility of format
Speed
Accuracy
Verification methods
Rejection rates
Ease of correction
Storage and handling requirements
Security
Easy to use
Portability

Keeping in view the above description of the input types and input media, it can be said
that most of the inputs are internal and interactive. As the
input data is to be directly keyed in by the user, the keyboard can be considered to be
the most suitable input device.
OUTPUT DESIGN:
Outputs from computer systems are required primarily to communicate the results of
processing to users. They are also used to provide a permanent copy of the results for
later consultation. The various types of outputs in general are:

External outputs, whose destination is outside the organization.

Internal outputs, whose destination is within the organization and which are the
users' main interface with the computer.

Operational outputs, whose use is purely within the computer department.

Interface outputs, which involve the user in communicating directly with the
system.

OUTPUT DEFINITION
The outputs should be defined in terms of the following points:

Type of the output

Content of the output

Format of the output

Location of the output

Frequency of the output

Volume of the output

Sequence of the output

It is not always desirable to print or display data as it is held on a computer. It should be
decided which form of the output is the most suitable.

For example:

Will decimal points need to be inserted?

Should leading zeros be suppressed?

OUTPUT MEDIA:
In the next stage it is to be decided which medium is the most appropriate for the
output. The main considerations when deciding about the output media are:

The suitability for the device to the particular application.

The need for a hard copy.

The response time required.

The location of the users

The software and hardware available.

Keeping in view the above description the project is to have outputs mainly
coming under the category of internal outputs. The main outputs desired according to the
requirement specification are:

The outputs need to be generated as hard copies as well as queries to be
viewed on the screen. Keeping in view these outputs, the format for the output is taken
from the outputs which are currently being obtained after manual processing. The
standard printer is to be used as the output medium for hard copies.

2.3 PROCESS MODEL USED WITH JUSTIFICATION


SDLC (Umbrella Model):
Figure: SDLC umbrella model. Umbrella activities (document control, assessment) span
the stages: business requirement documentation, feasibility study, team formation,
project specification preparation, requirements gathering, analysis & design, code, unit
test, integration & system testing, acceptance test, training, and delivery/installation.
SDLC stands for Software Development Life Cycle. It is a standard used by the
software industry to develop good software.

Stages in SDLC:

Requirement Gathering
Analysis

Designing
Coding
Testing
Maintenance
Requirements Gathering stage:

The requirements gathering process takes as its input the goals identified in the high-level
requirements section of the project plan. Each goal will be refined into a set of one or more
requirements. These requirements define the major functions of the intended application, define
operational data areas and reference data areas, and define the initial data entities. Major
functions include critical processes to be managed, as well as mission critical inputs, outputs and
reports. A user class hierarchy is developed and associated with these major functions, data areas,
and data entities. Each of these definitions is termed a Requirement. Requirements are identified
by unique requirement identifiers and, at minimum, contain a requirement title and
textual description.

These requirements are fully described in the primary deliverables for this stage: the
Requirements Document and the Requirements Traceability Matrix (RTM). The requirements

document contains complete descriptions of each requirement, including diagrams and references
to external documents as necessary. Note that detailed listings of database tables and fields are
not included in the requirements document.
The title of each requirement is also placed into the first version of the RTM, along with the
title of each goal from the project plan. The purpose of the RTM is to show that the product
components developed during each stage of the software development lifecycle are formally
connected to the components developed in prior stages.

In the requirements stage, the RTM consists of a list of high-level requirements, or goals,
by title, with a listing of associated requirements for each goal, listed by requirement title. In this
hierarchical listing, the RTM shows that each requirement developed during this stage is formally
linked to a specific product goal. In this format, each requirement can be traced to a specific
product goal, hence the term requirements traceability.
The outputs of the requirements definition stage include the requirements document, the
RTM, and an updated project plan.

Feasibility study is all about the identification of problems in a project.

The number of staff required to handle a project is represented as Team Formation; in this case,
modules or individual tasks will be assigned to the employees who are working on that project.

Project Specifications are all about representing the various possible inputs submitted to the
server and the corresponding outputs, along with the reports maintained by the administrator.
Analysis Stage:

The planning stage establishes a bird's eye view of the intended software product, and uses
this to establish the basic project structure, evaluate feasibility and risks associated with the
project, and describe appropriate management and technical approaches.

The most critical section of the project plan is a listing of high-level product requirements, also
referred to as goals. All of the software product requirements to be developed during the
requirements definition stage flow from one or more of these goals. The minimum information
for each goal consists of a title and textual description, although additional information and
references to external documents may be included. The outputs of the project planning stage are
the configuration management plan, the quality assurance plan, and the project plan and schedule,
with a detailed listing of scheduled activities for the upcoming Requirements stage, and high-level
estimates of effort for the later stages.

Designing Stage:

The design stage takes as its initial input the requirements identified in the approved
requirements document. For each requirement, a set of one or more design elements will be
produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe
the desired software features in detail, and generally include functional hierarchy diagrams,
screen layout diagrams, tables of business rules, business process diagrams, pseudo code, and a
complete entity-relationship diagram with a full data dictionary. These design elements are

intended to describe the software in sufficient detail that skilled programmers may develop the
software with minimal additional input.

When the design document is finalized and accepted, the RTM is updated to show that each
design element is formally associated with a specific requirement. The outputs of the design stage
are the design document, an updated RTM, and an updated project plan.

Development (Coding) Stage:

The development stage takes as its primary input the design elements described in the
approved design document. For each design element, a set of one or more software artifacts will
be produced. Software artifacts include but are not limited to menus, dialogs, data management
forms, data reporting formats, and specialized procedures and functions. Appropriate test cases
will be developed for each set of functionally related software artifacts, and an online help system
will be developed to guide users in their interactions with the software.

The RTM will be updated to show that each developed artifact is linked to a specific design
element, and that each developed artifact has one or more corresponding test case items. At this
point, the RTM is in its final configuration. The outputs of the development stage include a fully
functional set of software that satisfies the requirements and design elements previously
documented, an online help system that describes the operation of the software, an
implementation map that identifies the primary code entry points for all major system functions, a
test plan that describes the test cases to be used to validate the correctness and completeness of
the software, an updated RTM, and an updated project plan.

Integration & Test Stage:

During the integration and test stage, the software artifacts, online help, and test data are
migrated from the development environment to a separate test environment. At this point, all test
cases are run to verify the correctness and completeness of the software. Successful execution of
the test suite confirms a robust and complete migration capability. During this stage, reference
data is finalized for production use and production users are identified and linked to their

appropriate roles. The final reference data (or links to reference data source files) and production
user list are compiled into the Production Initiation Plan.

The outputs of the integration and test stage include an integrated set of software, an online
help system, an implementation map, a production initiation plan that describes reference data
and production users, an acceptance plan which contains the final suite of test cases, and an
updated project plan.

Installation & Acceptance Test:


During the installation and acceptance stage, the software artifacts, online help, and initial
production data are loaded onto the production server. At this point, all test cases are run to verify
the correctness and completeness of the software. Successful execution of the test suite is a
prerequisite to acceptance of the software by the customer.

After customer personnel have verified that the initial production data load is correct and
the test suite has been executed with satisfactory results, the customer formally accepts the
delivery of the software.

The primary outputs of the installation and acceptance stage include a production
application, a completed acceptance test suite, and a memorandum of customer acceptance of the
software. Finally, the PDR enters the last of the actual labor data into the project schedule and
locks the project as a permanent project record. At this point the PDR "locks" the project by
archiving all software items, the implementation map, the source code, and the documentation for
future reference.

Maintenance:

The outer rectangle represents the maintenance of a project. The maintenance team will start with
a requirement study and an understanding of the documentation; later, employees will be assigned
work and they will undergo training on that particular assigned category.

For this life cycle there is no end; it continues on like an umbrella (there is no ending point to the
umbrella's sticks).

SYSTEM REQUIREMENT SPECIFICATION

Hardware Requirements:

System       : Pentium IV 2.4 GHz
Hard Disk    : 40 GB
Floppy Drive : 1.44 MB
Monitor      : 15'' VGA Colour
Mouse        : Logitech
RAM          : 256 MB

Software Requirements:

Operating system : Windows XP Professional
Coding Language  : Java
Tool Used        : Eclipse

DataFlow Diagrams:
Level 0 DFD for Server:

The server connects to the clients.

Level0 DFD for client:

The client measures the bandwidth of a received file.

Level 1 DFD for Server:

The server starts, selects a data file, selects a destination, and sends the file.

Level 1 DFD for Client:

The client is identified by its IP address, port, and name.

FEASIBILITY STUDY
Preliminary investigation examines project feasibility: the likelihood that the system will
be useful to the organization. The main objective of the feasibility study is to test the
technical, operational and economical feasibility of adding new modules and
debugging the old running system. Any system is feasible if there are unlimited resources
and infinite time.
System analysis is conducted with the following objectives:

Identify the user's needs.

Evaluate the system concept for feasibility.

Perform technical and economic analysis.

Allocate functions to hardware, software, people, databases and other system
elements.

Establish cost and schedule constraints.

There are three aspects in the feasibility study portion of the preliminary investigation:

TECHNICAL FEASIBILITY

OPERATIONAL FEASIBILITY

ECONOMICAL FEASIBILITY

TECHNICAL FEASIBILITY
It is the most difficult area to assess because objectives, functions and performance are
somewhat hazy; anything seems to be possible if the right assumptions are made. The
considerations that are normally associated with technical feasibility include:
Development Risk:
The system is designed so that necessary functions and performance are achieved
within the constraints uncovered during the analysis.
Resource Availability:
It is important to determine the necessary resources to build the system.
Hardware feasibility:
This project is mainly concerned with the software, whereas the hardware needed
consists of simple computers that are available in the market. It is preferable to have a good
computer with at least a Pentium III processor and 128 MB of RAM, in order to make the
system run properly and to keep the efficiency acceptable.
Software feasibility:
The development environment of this system is Windows XP, so it is
compatible with the Windows platform. The language selected to implement this
project is Java; the reasons for choosing Java are:

Portability.
Popular.
Object oriented.
Efficiency.

Technology:
The proposed system will generate many kinds of reports depending on the
requirements. By automating all these activities the work is done effectively and in time.
There is also quick and good response for each operation.
OPERATIONAL FEASIBILITY
The proposed project is beneficial only if it can be turned into an information system
that will meet the organization's operating requirements. Simply stated, this test of
feasibility asks if the system will work when it is developed and installed. Are there
major barriers to implementation? Here are questions that will help test the operational
feasibility of a project:
Is there sufficient support for the project from management and from users? If the
current system is well liked and used to the extent that persons will not be able to see
reasons for change, there may be resistance.

Are the current business methods acceptable to the users? If they are not, users
may welcome a change that will bring about a more operational and useful
system.

Have the users been involved in the planning and development of the project?

Early involvement reduces the chances of resistance to the system in general
and increases the likelihood of a successful project.

Since the proposed system was to help reduce the hardships encountered in the
existing manual system, the new system was considered to be operationally
feasible.
ECONOMICAL FEASIBILITY
The Economic Feasibility is generally the bottom line considerations for most
systems. It is an obvious fact that the computerization of the project is economically
advantageous.
Firstly, it will increase the efficiency and decrease the man-hours required to
achieve the necessary result. Secondly, it will provide timely and up-to-date information
to the administrative and individual departments. Since all the information is available
within a few seconds, the system performance will be substantially increased.
System investment estimation
1. As system developers, estimating the cost involved in developing the system
is indeed important, especially so as not to cross the user's budget limit.
2. It is also our effort to develop the system within the time given, as well
as to keep the user's expenses on the system to a minimum and not to
exceed the estimated cost.
3. This will increase user confidence in the new system as well as
maintain our own reputation.
System operations cost estimation
4. In order to install and run the new system, the management has to
purchase and install several software packages.
5. It is very important to obtain official software licenses, especially for those
working as part of a corporate system, regarding the legal issues of the
software itself.
6. Besides, the management group will of course have to spend a
little time to study and use the new system, although it is designed based
on users' needs.

IMPLEMENTATION

4.1 INTRODUCTION TO JAVA


Initially the language was called Oak, but it was renamed Java in 1995.
The primary motivation of this language was the need for a platform-independent (i.e.,
architecture neutral) language that could be used to create software to be embedded in
various consumer electronic devices.
Java is a programmer's language.
Java is cohesive and consistent.
Except for those constraints imposed by the Internet environment, Java gives
the programmer full control.
Finally, Java is to Internet programming what C was to systems programming.
With most programming languages, you either compile or interpret a program so
that you can run it on your computer. The Java programming language is unusual in that a
program is both compiled and interpreted. With the compiler, first you translate a
program into an intermediate language called Java byte codes, the platform-independent codes interpreted by the interpreter on the Java platform. The interpreter
parses and runs each Java byte code instruction on the computer. Compilation happens
just once; interpretation occurs each time the program is executed. The following figure
illustrates how this works.

You can think of Java bytecodes as the machine code instructions for the Java
Virtual Machine (Java VM). Every Java interpreter, whether it's a development tool or a
Web browser that can run applets, is an implementation of the Java VM. Java bytecodes
help make "write once, run anywhere" possible. You can compile your program into
bytecodes on any platform that has a Java compiler. The bytecodes can then be run on
any implementation of the Java VM. That means that as long as a computer has a Java
VM, the same program written in the Java programming language can run on Windows
2000, a Solaris workstation, or on an iMac.
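
As a small concrete illustration of this compile-once, run-anywhere cycle (the class name here is just an example):

// HelloWorld.java
// Compile once:  javac HelloWorld.java   (produces HelloWorld.class byte code)
// Run anywhere:  java HelloWorld         (the local Java VM interprets the byte code)
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello from the Java platform");
    }
}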

The Java Platform


A platform is the hardware or software environment in which a program
runs. We've already mentioned some of the most popular platforms like Windows
2000, Linux, Solaris, and MacOS. Most platforms can be described as a
combination of the operating system and hardware. The Java platform differs
from most other platforms in that it's a software-only platform that runs on top of
other hardware-based platforms.
The Java platform has two components:

The Java Virtual Machine (Java VM)

The Java Application Programming Interface (Java API)

You've already been introduced to the Java VM. It's the base for the Java
platform and is ported onto various hardware-based platforms.
The Java API is a large collection of ready-made software components that
provide many useful capabilities, such as graphical user interface (GUI) widgets.
The Java API is grouped into libraries of related classes and interfaces; these
libraries are known as packages. The next section, "What Can Java Technology
Do?", highlights what functionality some of the packages in the Java API provide.
The following figure depicts a program that's running on the Java platform.
As the figure shows, the Java API and the virtual machine insulate the program
from the hardware.

Native code is code that, after you compile it, runs on a
specific hardware platform. As a platform-independent environment, the Java
platform can be a bit slower than native code. However, smart compilers, well-tuned
interpreters, and just-in-time bytecode compilers can bring performance
close to that of native code without threatening portability.
4.2 FEATURES OF JAVA :
Security
Every time you that you download a normal program, you are risking a viral
infection. Prior to Java, most users did not download executable programs frequently,
and those who did scanned them for viruses prior to execution. Most users still
worried about the possibility of infecting their systems with a virus. In addition,
another type of malicious program exists that must be guarded against. This type of
program can gather private information, such as credit card numbers, bank account
balances, and passwords. Java answers both of these concerns by providing a

firewall between a networked application and your computer. When you use a Java-compatible Web browser, you can safely download Java applets without fear of virus
infection or malicious intent.
Portability
For programs to be dynamically downloaded to all the various types of platforms
connected to the Internet, some means of generating portable executable code is
needed. As you will see, the same mechanism that helps ensure security also helps create
portability. Indeed, Java's solution to these two problems is both elegant and efficient.
The Byte code
The key that allows Java to solve the security and portability problems is that the
output of the Java compiler is byte code. Byte code is a highly optimized set of instructions
designed to be executed by the Java run-time system, which is called the Java Virtual
Machine (JVM). That is, in its standard form, the JVM is an interpreter for byte code.
Translating a Java program into byte code helps make it much easier to run a program in
a wide variety of environments. The reason is that once the run-time package exists for a
given system, any Java program can run on it.
Although Java was designed for interpretation, there is technically nothing about Java
that prevents on-the-fly compilation of byte code into native code. Sun has just completed
its Just In Time (JIT) compiler for byte code. When the JIT compiler is a part of JVM, it
compiles byte code into executable code in real time, on a piece-by-piece, demand basis.
It is not possible to compile an entire Java program into executable code all at once,
because Java performs various run-time checks that can be done only at run time. The JIT
compiles code, as it is needed, during execution.
Java Virtual Machine (JVM)
Beyond the language, there is the Java virtual machine. The Java virtual machine is an
important element of the Java technology. The virtual machine can be embedded within a
web browser or an operating system. Once a piece of Java code is loaded onto a machine,
it is verified. As part of the loading process, a class loader is invoked and performs byte code
verification, which makes sure that the code that has been generated by the compiler will not
corrupt the machine that it is loaded on. Byte code verification takes place at the end of
the compilation process to make sure that it is all accurate and correct. So byte code
verification is integral to the compiling and executing of Java code.

Figure: Java source code (.java) is compiled by the javac compiler into byte code (.class),
which is executed by the Java Virtual Machine.

The above picture shows the development process a typical Java programmer uses to
produce byte codes and execute them. The first box indicates that the Java source code is
located in a .java file that is processed with the Java compiler, javac. The Java
compiler produces a .class file, which contains the byte code. The class file is
then loaded across the network or locally on your machine into the execution
environment, the Java Virtual Machine, which interprets and executes the byte code.
Java Architecture
Java architecture provides a portable, robust, high performing environment for
development. Java provides portability by compiling the byte codes for the Java Virtual
Machine, which is then interpreted on each platform by the run-time environment. Java is
a dynamic system, able to load code when needed from a machine in the same room or
across the planet.
Compilation of Code
When you compile the code, the Java compiler creates machine code (called byte code)
for a hypothetical machine called Java Virtual Machine (JVM). The JVM is supposed to
execute the byte code. The JVM is created for overcoming the issue of portability. The
code is written and compiled for one machine and interpreted on all machines. This
machine is called Java Virtual Machine.
Compiling and interpreting Java Source Code

Figure: Java source code is compiled once into byte code; the byte code is then run by a
platform-specific Java interpreter (PC, Macintosh, SPARC).

During run-time the Java interpreter tricks the byte code file into thinking that it is
running on a Java Virtual Machine. In reality this could be an Intel Pentium running
Windows 95, a Sun SPARCstation running Solaris, or an Apple Macintosh running its own
system, and all could receive code from any computer through the Internet and run the applets.
Simple
Java was designed to be easy for the professional programmer to learn and to use
effectively. If you are an experienced C++ programmer, learning Java will be even easier,
because Java inherits the C/C++ syntax and many of the object-oriented features of C++.
Most of the confusing concepts from C++ are either left out of Java or implemented in a
cleaner, more approachable manner. In Java there are a small number of clearly defined
ways to accomplish a given task.

Object-Oriented
Java was not designed to be source-code compatible with any other language. This
allowed the Java team the freedom to design with a blank slate. One outcome of this was
a clean usable, pragmatic approach to objects. The object model in Java is simple and
easy to extend, while simple types, such as integers, are kept as high-performance non-objects.
Robust
The multi-platform environment of the Web places extraordinary demands on a program,
because the program must execute reliably in a variety of systems. The ability to create
robust programs was given a high priority in the design of Java. Java is a strictly typed
language; it checks your code at compile time and run time. Java virtually eliminates the
problems of memory management and de-allocation, which is completely automatic. In a
well-written Java program, all run time errors can and should be managed by your
program.
4.3 ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming
interface for application developers and database systems providers. Before ODBC
became a de facto standard for Windows programs to interface with database systems,
programmers had to use proprietary languages for each database they wanted to connect
to. Now, ODBC has made the choice of the database system almost irrelevant from a
coding perspective, which is as it should be. Application developers have much more
important things to worry about than the syntax that is needed to port their program from
one database to another when business needs suddenly change.
Through the ODBC Administrator in Control Panel, you can specify the particular
database that is associated with a data source that an ODBC application program is
written to use. Think of an ODBC data source as a door with a name on it. Each door will
lead you to a particular database. For example, the data source named Sales Figures
might be a SQL Server database, whereas the Accounts Payable data source could refer to
an Access database. The physical database referred to by a data source can reside
anywhere on the LAN.

The ODBC system files are not installed on your system by Windows 95. Rather,
they are installed when you setup a separate database application, such as SQL Server
Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a
file called ODBCINST.DLL. It is also possible to administer your ODBC data sources
through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit
version of this program, and each maintains a separate list of ODBC data sources.
From a programming perspective, the beauty of ODBC is that the application can
be written to use the same set of function calls to interface with any data source,
regardless of the database vendor. The source code of the application doesn't change
whether it talks to Oracle or SQL Server. We only mention these two as an example.
There are ODBC drivers available for several dozen popular database systems. Even
Excel spreadsheets and plain text files can be turned into data sources. The operating
system uses the Registry information written by ODBC Administrator to determine which
low-level ODBC drivers are needed to talk to the data source (such as the interface to
Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC
application program. In a client/server environment, the ODBC API even handles many
of the network issues for the application programmer.
The advantages of this scheme are so numerous that you are probably thinking
there must be some catch. The only disadvantage of ODBC is that it isn't as efficient as
talking directly to the native database interface. ODBC has had many detractors make the
charge that it is too slow. Microsoft has always claimed that the critical factor in
performance is the quality of the driver software that is used. In our humble opinion, this
is true. The availability of good ODBC drivers has improved a great deal recently. And
anyway, the criticism about performance is somewhat analogous to those who said that
compilers would never match the speed of pure assembly language. Maybe not, but the
compiler (or ODBC) gives you the opportunity to write cleaner programs, which means
you finish sooner. Meanwhile, computers get faster every year.
4.4 JDBC
In an effort to set an independent database standard API for Java, Sun
Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic

SQL database access mechanism that provides a consistent interface to a variety of


RDBMSs. This consistent interface is achieved through the use of plug-in database
connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he
or she must provide the driver for each platform that the database and Java run on.
To gain a wider acceptance of JDBC, Sun based JDBC's framework on ODBC.
As you discovered earlier in this chapter, ODBC has widespread support on a variety of
platforms. Basing JDBC on ODBC will allow vendors to bring JDBC drivers to market
much faster than developing a completely new connectivity solution.
JDBC was announced in March of 1996. It was released for a 90 day public
review that ended June 8, 1996. Because of user input, the final JDBC v1.0 specification
was released soon after.
The remainder of this section will cover enough information about JDBC for you to know
what it is about and how to use it effectively. This is by no means a complete overview of
JDBC. That would fill an entire book.
JDBC Goals
Few software packages are designed without goals in mind, and JDBC is no
exception; its many goals drove the development of the API. These goals, in conjunction
with early reviewer feedback, have finalized the JDBC class library into a solid
framework for building database applications in Java.
The goals that were set for JDBC are important. They will give you some insight as to
why certain classes and functionalities behave the way they do. The eight design goals for
JDBC are as follows:
1. SQL Level API
The designers felt that their main goal was to define a SQL interface for Java.
Although not the lowest database interface level possible, it is at a low enough level
for higher-level tools and APIs to be created. Conversely, it is at a high enough level
for application programmers to use it confidently. Attaining this goal allows for future
tool vendors to generate JDBC code and to hide many of JDBC's complexities
from the end user.

2. SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In an
effort to support a wide variety of vendors, JDBC will allow any query statement to
be passed through it to the underlying database driver. This allows the connectivity
module to handle non-standard functionality in a manner that is suitable for its users.
3. JDBC must be implementable on top of common database interfaces
The JDBC SQL API must sit on top of other common SQL level APIs. This
goal allows JDBC to use existing ODBC level drivers by the use of a software
interface. This interface would translate JDBC calls to ODBC and vice versa.
4. Provide a Java interface that is consistent with the rest of the Java system
Because of Java's acceptance in the user community thus far, the designers feel
that they should not stray from the current design of the core Java system.
5. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no
exception. Sun felt that the design of JDBC should be very simple, allowing for only
one method of completing a task per mechanism. Allowing duplicate functionality
only serves to confuse the users of the API.
6. Use strong, static typing wherever possible
Strong typing allows for more error checking to be done at compile time; also,
fewer errors appear at runtime.
7. Keep the common cases simple
Because more often than not, the usual SQL calls used by the programmer are
simple SELECTs, INSERTs, DELETEs and UPDATEs, these queries should be
simple to perform with JDBC. However, more complex SQL statements should also
be possible.

Finally, we decided to proceed with the implementation using Java networking,

and for dynamically updating the cache table we go for an MS Access database.
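
A minimal sketch of reading such a cache table through the JDBC-ODBC bridge that shipped with JDKs of this era is shown below; the data source name, table and column names are hypothetical and would have to match an ODBC DSN configured for the Access database in the ODBC Administrator.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/*
 * Sketch of reading a cache table from an MS Access database through the
 * JDBC-ODBC bridge.  The data source name "bandwidthDSN" and the table and
 * column names are hypothetical and must match an ODBC DSN configured in
 * the ODBC Administrator.
 */
public class CacheTableReader {
    public static void main(String[] args) throws Exception {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");            // load the bridge driver
        Connection con = DriverManager.getConnection("jdbc:odbc:bandwidthDSN");
        Statement st = con.createStatement();
        ResultSet rs = st.executeQuery("SELECT client, bandwidth FROM cache");
        while (rs.next()) {
            System.out.println(rs.getString("client") + " : " + rs.getFloat("bandwidth"));
        }
        rs.close();
        st.close();
        con.close();
    }
}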
Java has two things: a programming language and a platform.
Java is a high-level programming language that is all of the
following:

Simple
Object-oriented

Architecture-neutral
Portable

Distributed

High-performance

Interpreted

multithreaded

Robust

Dynamic

Secure

Java is also unusual in that each Java program is both compiled and
interpreted. With a compiler you translate a Java program into an intermediate
language called Java byte codes, the platform-independent code instructions that are
passed to and run on the computer.
Compilation happens just once; interpretation occurs each time the
program is executed. The figure illustrates how this works.

Figure: a Java program is compiled once into byte codes and then interpreted each time it is run.

You can think of Java byte codes as the machine code instructions for the
Java Virtual Machine (Java VM). Every Java interpreter, whether it's a Java
development tool or a Web browser that can run Java applets, is an
implementation of the Java VM. The Java VM can also be implemented in
hardware.

Java byte codes help make "write once, run anywhere" possible. You can
compile your Java program into byte codes on any platform that has a Java
compiler. The byte codes can then be run on any implementation of the Java VM.
For example, the same Java program can run on Windows NT, Solaris, and
Macintosh.

DESIGN ANALYSIS
5.1 INTRODUCTION

Software design sits at the technical kernel of the software engineering process
and is applied regardless of the development paradigm and area of application. Design is
the first step in the development phase for any engineered product or system. The
designer's goal is to produce a model or representation of an entity that will later be built.
Beginning, once system requirement have been specified and analyzed, system design is
the first of the three technical activities -design, code and test that is required to build and
verify software.
The importance can be stated with a single word: "Quality". Design is the place
where quality is fostered in software development. Design provides us with
representations of software that can be assessed for quality. Design is the only way that we can
accurately translate a customer's view into a finished software product or system.
Software design serves as a foundation for all the software engineering steps that follow.
Without a strong design we risk building an unstable system, one that will be difficult to
test, one whose quality cannot be assessed until the last stage.
During design, progressive refinement of data structure, program structure, and
procedural details are developed reviewed and documented. System design can be viewed
from either technical or project management perspective. From the technical point of
view, design is comprised of four activities: architectural design, data structure design,
interface design and procedural design.

5.2 UML DIAGRAMS

The Unified Modeling Language allows the software engineer to express an


analysis model using the modeling notation that is governed by a set of syntactic,
semantic and pragmatic rules.

A UML system is represented using five different views that describe the system from
distinctly different perspectives. Each view is defined by a set of diagrams, which are as
follows.

User Model View


i. This view represents the system from the user's perspective.
ii. The analysis representation describes a usage scenario from the
end-user's perspective.

Structural model view


i. In this model the data and functionality are arrived from inside the
system.
ii. This model view models the static structures.

Behavioral Model View


It represents the dynamic of behavioral as parts of the system, depicting
the interactions of collection between various structural elements
described in the user model and structural model view.

Implementation Model View


In this the structural and behavioral as parts of the system are represented
as they are to be built.

Environmental Model View


In this the structural and behavioral aspects of the environment in which
the system is to be implemented are represented.

UML is specifically constructed through two different domains; they are:

UML analysis modeling, which focuses on the user model and structural model
views of the system.
UML design modeling, which focuses on the behavioral modeling,
implementation modeling and environmental model views.

Use case diagrams represent the functionality of the system from a user's point of view.
Use cases are used during requirements elicitation and analysis to represent the
functionality of the system. Use cases focus on the behavior of the system from an
external point of view.
Actors are external entities that interact with the system. Examples of actors include users
like an administrator, a bank customer etc., or another system like a central database.

5.2.1 USE CASE DIAGRAM

Use Case: Use case describes the behavior of a system. It is used to structure things in a
model. It contains multiple scenarios, each of which describes a sequence of actions that
is clear enough for outsiders to understand.
Actor: An actor represents a coherent set of roles that users of a system play when
interacting with the use cases of the system. An actor participates in use cases to
accomplish an overall purpose. An actor can represent the role of a human, a device, or
any other systems.

Use case diagram: the server connects to the clients, selects a data file, selects a
destination, and sends the file; each client (client 1, client 2, client 3) calculates the
bandwidth and time delay according to the file size.

5.2.2 SEQUENCE DIAGRAM

This diagram is simple and visually logical, so it is easy to see the sequence of the
flow of control. It also clearly shows concurrent processes and activations in a design.
Object: Object can be viewed as an entity at a particular point in time with a specific
value and as a holder of identity that has different values over time. Associations among
objects are not shown. When you place an object tag in the design area, a lifeline is
automatically drawn and attached to that object tag.
Actor: An actor represents a coherent set of roles that users of a system play when
interacting with the use cases of the system. An actor participates in use cases to
accomplish an overall purpose. An actor can represent the role of a human, a device, or
any other systems.
Message: A message is a sending of a signal from one sender object to other receiver
object(s). It can also be the call of an operation on receiver object by caller object. The
arrow can be labeled with the name of the message (operation or signal) and its argument
values
Duration Message: A message that indicates an action will cause transition from one
state to another state.
Self Message: A message that indicates an action will perform at a particular state and
stay there.
Create Message: A message that indicates an action that will perform between two
states.

Sequence diagram: the server starts and connects to Client A, Client B and Client C
(messages 1-4); it selects a data file and a destination (messages 5-6) and sends the file
to each client (messages 7-9); each client then calculates the bandwidth and time delay
(messages 10-12).
5.2.3 CLASS DIAGRAM

Class: A Class is a description for a set of objects that shares the same attributes, and has
similar operations, relationships, behaviors and semantics.
Generalization: Generalization is a relationship between a general element and a more
specific kind of that element. It means that the more specific element can be used
whenever the general element appears. This relation is also known as specialization
or inheritance link.
Realization: Realization is the relationship between a specialization and its
implementation. It is an indication of the inheritance of behavior without the inheritance
of structure.
Association: Association is represented by drawing a line between classes. Associations
represent structural relationships between classes and can be named to facilitate model
understanding. If two classes are associated, you can navigate from an object of one class
to an object of the class.
Aggregation: Aggregation is a special kind of association in which one class represents
as the larger class that consists of a smaller class. It has the meaning of has-a
relationship.

Class diagram: the SERVER class (attributes: name, id, port, destination; operations:
start(), selectFile(), selectDest(), send()) is associated with the Client A, Client B and
Client C classes (attributes: ipaddress, port, name; operations: Receive(),
calculate bandwidth(), cal timedelay()).

SAMPLE CODE
Sample code for client A:
import java.awt.*;
import java.awt.event.*;
import java.io.*;
import java.net.*;
import javax.swing.*;

public class clientA implements ActionListener
{
public Font f0 = new Font("Verdana" , Font.BOLD , 35);
public Font f = new Font("Times New roman" , Font.BOLD , 23);
public Font f2 = new Font("Times New roman" , Font.BOLD , 18);
public Font f1 = new Font("Calibrie", Font.BOLD + Font.ITALIC, 25);
public JLabel l=new JLabel("Received File");
public JLabel c1=new JLabel("Client A ");
public JLabel l1=new JLabel("Bandwidth (Kbs/ps) :");
public JLabel l2=new JLabel("Time Delay(ms)      :");
public JLabel l3=new JLabel("File Size (Kbs)     :");
public JTextField Tl1 = new JTextField("");
public JTextField Tl2 = new JTextField("");
public JTextField Tl3 = new JTextField("");
public JTextField T1 = new JTextField("");
public JScrollPane pane = new JScrollPane();
public JTextArea tf = new JTextArea();
public JButton graph=new JButton("Graphical");
public JButton Sub=new JButton("Submit");
public JButton Exit=new JButton("Exit");
public JFrame jf;

public Container c;
ServerSocket server;
Socket connection;
DataOutputStream output;
BufferedInputStream bis;
BufferedOutputStream bos;
byte[] receivedData;
int in;
String strLine;
clientA()
{
jf = new JFrame("Client A");
c = jf.getContentPane();
c.setLayout(null);
jf.setSize(800,670);
//c.setBackground(new Color(33,26,103));
c.setBackground(Color.BLACK);
l.setBounds(650,100,200,50);
l1.setBounds(30,170,250,50);
l2.setBounds(30,270,250,50);
l3.setBounds(30,370,250,50);
c1.setBounds(400,30,200,50);
c1.setFont(f0);
l1.setFont(f);
l2.setFont(f);
l3.setFont(f);
l.setForeground(Color.GREEN);
l1.setForeground(Color.MAGENTA);
l2.setForeground(Color.MAGENTA);

l3.setForeground(Color.MAGENTA);
Tl1.setBounds(250,173,200,40);
Tl1.setFont(f);
Tl2.setBounds(250,273,200,40);
Tl2.setFont(f);
Tl1.setForeground(Color.RED);
Tl2.setForeground(Color.RED);
Tl3.setForeground(Color.RED);
Tl3.setBounds(250,373,200,40);
Tl3.setFont(f);
Tl1.setBackground(new Color(246,233,191));
Tl2.setBackground(new Color(246,233,191));
Tl3.setBackground(new Color(246,233,191));
pane.setBounds(550,170,400,360);
tf.setColumns(20);
tf.setRows(10);
tf.setForeground(Color.BLUE);
tf.setFont(f2);
tf.setBackground(new Color(246,233,191));
tf.setName("tf");
pane.setName("pane");
pane.setViewportView(tf);
l.setFont(f);
T1.setFont(f);
Sub.setFont(f);
Exit.setFont(f);
graph.setFont(f);
T1.setBounds(200,100,350,50);
Sub.setBounds(430,640,120,35);
Exit.setBounds(510,590,200,40);

Exit.setBackground(new Color(151,232,158));
graph.setBounds(220,590,200,40);
graph.setBackground(new Color(151,232,158));
T1.setBackground(Color.white);
T1.setForeground(Color.white);
Exit.setForeground(Color.BLACK);
c.add(l);
c.add(l1);
c.add(l2);
c.add(l3);
c.add(graph);
c.add(Tl1);
c.add(Tl2);
c.add(Tl3);
c.add(pane, BorderLayout.CENTER);
c1.setForeground(Color.RED);
Sub.setBackground(new Color(151,232,158));
jf.show();
c.add(c1);
c.add(Exit);
Sub.addActionListener(this);
Exit.addActionListener(this);
jf.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent win) {
System.exit(0);
}
});
int[] ports = new int[] { 8587 };

for (int i = 0; i < 1; i++) {


Thread t = new Thread(new PortListener(ports[i]));
t.setName("Listener-" + ports[i]);
t.start();
}
}
public static void main(String args[])
{
new clientA();
}
class PortListener implements Runnable {
ServerSocket server;
Socket connection;
BufferedReader br = null;
int port;
public PortListener(int port) {
this.port = port;
}
public void run() {
try {
server = new ServerSocket(port);
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
while (true) {
connection = server.accept();

long startTime =System.currentTimeMillis();


br = new BufferedReader(
new InputStreamReader(new BufferedInputStream(
connection.getInputStream())));
String strLine;
StringBuffer buffer = new StringBuffer();
System.out.println("hi");
while ((strLine = br.readLine()) != null) {
// Print the content on the console
System.out.println(strLine);
buffer.append(strLine + "\n");
}
br.close();
connection.close();
tf.setText(buffer.toString());
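// From the start/end timestamps and the received buffer, the lines below derive
// the file size in KB and the figures shown in the Time Delay and Bandwidth fields.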
long endTime =System.currentTimeMillis();
float filesize=buffer.length();
float tfilesize=filesize/1024;
String totalfilesize =
Float.toString(tfilesize);
Tl3.setText(totalfilesize);
float cal=(endTime-startTime);
System.out.println(cal);
float ttime=cal/1000;
float timedelay =(float)filesize/ttime;
String totaltimedelay =
Float.toString(timedelay);

Tl2.setText(totaltimedelay);
float bandwidth = buffer.capacity();
float tbandwidth=(bandwidth*ttime)/1024;
String totalbandwidth =
Float.toString(tbandwidth);
Tl1.setText(totalbandwidth);
}
} catch (IOException e) {
} finally {
}
}
}
public void actionPerformed(ActionEvent e)
{
if (e.getSource() == Exit)
{
System.exit(0);
}
if(e.getSource()== Sub)
{
try {
server = new ServerSocket( 8585 );
while ( true ) {
connection = server.accept();
output = new DataOutputStream(connection.getOutputStream());
//System.out.println( "Client message: " + input.readUTF() );
output.writeUTF( " ack 1" );
receivedData = new byte[8192];
/* bis reads the data sent from the client */
bis = new BufferedInputStream(connection.getInputStream());
// bos = new BufferedOutputStream(new FileOutputStream("C:/sss.txt"));
//PrintStream p = new PrintStream(output);
//DataInputStream in = new DataInputStream(output);
BufferedReader br = new BufferedReader(new InputStreamReader(bis));
bos = new BufferedOutputStream(new FileOutputStream("C:/sss.txt"));
String strLine;
//Read File Line By Line
StringBuffer buffer = new StringBuffer();
while ((strLine = br.readLine()) != null) {
// Print the content on the console
System.out.println (strLine);
buffer.append(strLine+ "\n");
}
tf.setText(buffer.toString());
// bos = new BufferedOutputStream(new FileOutputStream("C:/sss.txt"));
while ((in = bis.read(receivedData)) != -1)
{
bos.write(receivedData,0,in);

}
bos.close();
output.writeUTF( " ack 2" );
}
}
catch (IOException e1 ) { }
}
}
}

public class clientB implements ActionListener


{
public Font f0 = new Font("Verdana" , Font.BOLD , 35);
public Font f = new Font("Times New roman" , Font.BOLD , 23);
public Font f2 = new Font("Times New roman" , Font.BOLD , 18);
public Font f1 = new Font("Calibrie", Font.BOLD + Font.ITALIC, 25);
public JLabel l=new JLabel("Received File");
public JLabel c1=new JLabel("Client B ");
public JLabel l1=new JLabel("Bandwidth (Kbs/ps) :");
public JLabel l2=new JLabel("Time Delay(ms)
public JLabel l3=new JLabel("File Size (Kbs)
public JTextField Tl1 = new JTextField("");
public JTextField Tl2 = new JTextField("");
public JTextField Tl3 = new JTextField("");
public JTextField T1 = new JTextField("");
public JScrollPane pane = new JScrollPane();
public JTextArea tf = new JTextArea();
public JButton graph=new JButton("Graphical");

:");
:");

public JButton Sub=new JButton("Submit");


public JButton Exit=new JButton("Exit");
public JFrame jf;
public Container c;
clientB()
{
jf = new JFrame("Client B");
c = jf.getContentPane();
c.setLayout(null);
jf.setSize(800,670);
//c.setBackground(new Color(33,26,103));
c.setBackground(Color.BLACK);
l.setBounds(650,100,200,50);
l1.setBounds(30,170,250,50);
l2.setBounds(30,270,250,50);
l3.setBounds(30,370,250,50);
c1.setBounds(400,30,200,50);
c1.setFont(f0);
l1.setFont(f);
l2.setFont(f);
l3.setFont(f);
l.setForeground(Color.GREEN);
l1.setForeground(Color.CYAN);
l2.setForeground(Color.CYAN);
l3.setForeground(Color.CYAN);
Tl1.setBounds(250,173,200,40);

Tl1.setForeground(Color.BLUE);
Tl2.setForeground(Color.BLUE);
Tl3.setForeground(Color.BLUE);
Tl2.setBounds(250,273,200,40);
Tl3.setBounds(250,373,200,40);
Tl1.setFont(f2);
Tl2.setFont(f2);
Tl3.setFont(f2);
Tl1.setBackground(new Color(246,233,191));
Tl2.setBackground(new Color(246,233,191));
Tl3.setBackground(new Color(246,233,191));
pane.setBounds(550,170,400,360);
tf.setColumns(20);
tf.setRows(10);
tf.setForeground(Color.RED);
tf.setFont(f2);
tf.setBackground(new Color(246,233,191));
tf.setName("tf");
pane.setName("pane");
pane.setViewportView(tf);
l.setFont(f);
T1.setFont(f);
Sub.setFont(f);
Exit.setFont(f);
graph.setFont(f);
T1.setBounds(200,100,350,50);
Sub.setBounds(430,640,120,35);
Exit.setBounds(510,590,200,40);
Exit.setBackground(new Color(151,232,158));

graph.setBounds(220,590,200,40);
graph.setBackground(new Color(151,232,158));
T1.setBackground(Color.white);
T1.setForeground(Color.white);
Exit.setForeground(Color.BLACK);
c.add(l);
c.add(l1);
c.add(l2);
c.add(l3);
c.add(graph);
c.add(Tl1);
c.add(Tl2);
c.add(Tl3);
c.add(pane, BorderLayout.CENTER);
c1.setForeground(Color.RED);
Sub.setBackground(new Color(151,232,158));
jf.show();
c.add(c1);
c.add(Exit);
Sub.addActionListener(this);
Exit.addActionListener(this);

int[] ports = new int[] { 1111 };


for (int i = 0; i < 1; i++) {
Thread t = new Thread(new PortListener(ports[i]));

t.setName("Listener-" + ports[i]);
t.start();
}
jf.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent win) {
System.exit(0);
}
});

}
public static void main(String args[])
{
new clientB();
}
class PortListener implements Runnable {
ServerSocket server;
Socket connection;
BufferedReader br = null;
int port;
public PortListener(int port) {
this.port = port;
}

public void run() {


try {
server = new ServerSocket(port);

try {
Thread.sleep(2000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
while (true) {
connection = server.accept();
long startTime = System.currentTimeMillis();
br = new BufferedReader(
new InputStreamReader(new
BufferedInputStream(
connection.getInputStream())));
String strLine;
StringBuffer buffer = new StringBuffer();
System.out.println("hi");
while ((strLine = br.readLine()) != null) {
// Print the content on the console
System.out.println(strLine);
buffer.append(strLine + "\n");
}
br.close();

connection.close();
tf.setText(buffer.toString());
long endTime = System.currentTimeMillis();
float filesize=buffer.length();
float tfilesize=filesize/1024;
String totalfilesize =
Float.toString(tfilesize);
Tl3.setText(totalfilesize);

float cal=(endTime-startTime);
System.out.println(cal);
float ttime=cal/1000;

float timedelay =(float)filesize/ttime;


String totaltimedelay =
Float.toString(timedelay);
Tl2.setText(totaltimedelay);
float bandwidth = buffer.capacity();
float tbandwidth=(bandwidth*ttime)/1024;
String totalbandwidth =
Float.toString(tbandwidth);
Tl1.setText(totalbandwidth);

}
} catch (IOException e) {
} finally {
}
}
}
public void actionPerformed(ActionEvent e)
{
if (e.getSource() == Exit) {
jf.setVisible(false);
System.exit(0);
}
if(e.getSource()== Sub)
{
jf.setVisible(false);

}
}
}
public class clientC implements ActionListener

{
public Font f0 = new Font("Verdana" , Font.BOLD , 35);
public Font f = new Font("Times New roman" , Font.BOLD , 23);
public Font f2 = new Font("Times New roman" , Font.BOLD , 18);
public Font f1 = new Font("Calibrie", Font.BOLD + Font.ITALIC, 25);
public JLabel l=new JLabel("Received File");
public JLabel c1=new JLabel("Client C ");
public JLabel l1=new JLabel("Bandwidth (Kbs/ps) :");
public JLabel l2=new JLabel("Time Delay(ms)
public JLabel l3=new JLabel("File Size (Kbs)
public JTextField Tl1 = new JTextField("");
public JTextField Tl2 = new JTextField("");
public JTextField Tl3 = new JTextField("");
public JTextField T1 = new JTextField("");
public JScrollPane pane = new JScrollPane();
public JTextArea tf = new JTextArea();
public JButton graph=new JButton("Graphical");
public JButton Sub=new JButton("Submit");
public JButton Exit=new JButton("Exit");
public JFrame jf;
public Container c;
clientC()
{
jf = new JFrame("Client C");
c = jf.getContentPane();
c.setLayout(null);

:");
:");

jf.setSize(800,670);
//c.setBackground(new Color(33,26,103));
c.setBackground(Color.BLACK);
l.setBounds(650,100,200,50);
l1.setBounds(30,170,250,50);
l2.setBounds(30,270,250,50);
l3.setBounds(30,370,250,50);
c1.setBounds(400,30,200,50);
c1.setFont(f0);
l1.setFont(f);
l2.setFont(f);
l3.setFont(f);
l.setForeground(Color.GREEN);
l1.setForeground(Color.YELLOW);
l2.setForeground(Color.YELLOW);
l3.setForeground(Color.YELLOW);
Tl1.setBounds(250,173,200,40);
Tl1.setForeground(new Color(15,60,22));
Tl2.setBounds(250,273,200,40);
Tl2.setForeground(new Color(15,60,22));
Tl3.setBounds(250,373,200,40);
Tl3.setForeground(new Color(15,60,22));
Tl1.setFont(f2);
Tl2.setFont(f2);
Tl3.setFont(f2);
Tl3.setForeground(new Color(15,60,22));
Tl1.setBackground(new Color(246,233,191));

Tl2.setBackground(new Color(246,233,191));
Tl3.setBackground(new Color(246,233,191));
pane.setBounds(550,170,400,360);
tf.setColumns(20);
tf.setRows(10);
tf.setForeground(new Color(120,0,0));
tf.setFont(f2);
tf.setBackground(new Color(246,233,191));
tf.setName("tf");
pane.setName("pane");
pane.setViewportView(tf);
l.setFont(f);
T1.setFont(f);
Sub.setFont(f);
Exit.setFont(f);
graph.setFont(f);
T1.setBounds(200,100,350,50);
Sub.setBounds(430,640,120,35);
Exit.setBounds(510,590,200,40);
Exit.setBackground(new Color(151,232,158));
graph.setBounds(220,590,200,40);
graph.setBackground(new Color(151,232,158));
T1.setBackground(Color.white);
T1.setForeground(Color.white);
Exit.setForeground(Color.BLACK);
c.add(l);
c.add(l1);
c.add(l2);

c.add(l3);
c.add(graph);
c.add(Tl1);
c.add(Tl2);
c.add(Tl3);
c.add(pane, BorderLayout.CENTER);
c1.setForeground(Color.RED);
Sub.setBackground(new Color(151,232,158));
jf.show();
c.add(c1);
c.add(Exit);
Sub.addActionListener(this);
Exit.addActionListener(this);
int[] ports = new int[] { 2222 };
for (int i = 0; i < 1; i++) {
Thread t = new Thread(new PortListener(ports[i]));
t.setName("Listener-" + ports[i]);
t.start();
}

jf.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent win) {
System.exit(0);
}

});

}
public static void main(String args[])
{
new clientC();
}
class PortListener implements Runnable {
ServerSocket server;
Socket connection;
BufferedReader br = null;
int port;
public PortListener(int port) {
this.port = port;
}
public void run() {
try {

try {
Thread.sleep(3000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}

server = new ServerSocket(port);


while (true) {
connection = server.accept();
long startTime = System.currentTimeMillis();
br = new BufferedReader(new InputStreamReader(
new BufferedInputStream(connection.getInputStream())));
String strLine;
StringBuffer buffer = new StringBuffer();
System.out.println("hi");
while ((strLine = br.readLine()) != null) {
// Print the content on the console
System.out.println(strLine);
buffer.append(strLine + "\n");
}
br.close();
connection.close();
tf.setText(buffer.toString());
long endTime = System.currentTimeMillis();
float filesize=buffer.length();
float tfilesize=filesize/1024;
String totalfilesize = Float.toString(tfilesize);
Tl3.setText(totalfilesize);

float cal=(endTime-startTime);
System.out.println(cal);
float ttime=cal/1000;

float timedelay =(float)filesize/ttime;


String totaltimedelay = Float.toString(timedelay);
Tl2.setText(totaltimedelay);
float bandwidth = buffer.capacity();
float tbandwidth=(bandwidth*ttime)/1024;
String totalbandwidth = Float.toString(tbandwidth);
Tl1.setText(totalbandwidth);
}
} catch (IOException e) {
} finally {
}
}
}

public void actionPerformed(ActionEvent e)


{
if (e.getSource() == Exit) {
jf.setVisible(false);
System.exit(0);
}
if(e.getSource()== Sub)
{
jf.setVisible(false);

}
}
}

SCREENS

TESTING

8.1 TESTING CONCEPTS:


Testing is the process of finding differences between the expected behavior
specified by system models and the observed behavior of the system.

A component is a part of the system that can be isolated for testing. A component can be an object, a group of objects, or one or more subsystems.

A fault, also called bug or defect, is a design or coding mistake that may
cause abnormal component behavior.

An error is a manifestation of a fault during the execution of the system.

A failure is a deviation between the specification of a component and its behavior. A failure is triggered by one or more errors.

A test case is a set of inputs and expected results that exercise a component
with the purpose of causing failures and detecting faults.

TESTING ACTIVITIES:

Inspecting a component, which finds faults in an individual component through the manual inspection of its source code.

Unit testing, which finds faults by isolating an individual component using test stubs and drivers and by exercising the component using test cases.

Integration testing, which finds faults by integrating several components together.

System testing, which focuses on the complete system, its functional and
nonfunctional requirements and its target environment.

Testing is the phase where the errors remaining from all the previous phases must be detected. Hence, testing performs a very critical role for quality assurance and for ensuring the reliability of the software. Testing consists of providing the software with a set of designed test inputs and observing if the outputs and the behavior of the software are as expected. If the software fails to behave as expected, then the conditions under which a failure occurs are needed for debugging and correction.


The following terms are some commonly used terms associated with testing.

Error
The term error is used in two different ways. It refers to the discrepancy between a computed, observed, or measured value and the true, specified, or theoretically correct value. Error is also used to refer to human action that results in software containing a defect or fault. This definition is quite general and encompasses all the phases.

Fault
A fault is a condition that causes a system to fail in performing its required function. In other words, a fault is an incorrect intermediate state that may have been entered during program execution.

Failure
Failure is the inability of the system or component to perform a required function according to its specifications. In other words, a failure is a manifestation of an error. But the mere presence of an error may not cause a failure; the observation of a failure implies that a fault must be present in the system, while the presence of a fault does not imply that a failure must occur.
A test case is the triplet [i, s, o], where i stands for the data input to the system, s is the state of the system at which the data is input, and o is the expected output of the system. A test suite is the set of all test cases with which a given software product is to be tested.
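A minimal sketch of how this triplet might be represented in code is given below; the TestCase and TestSuite classes are hypothetical and serve only to illustrate the definitions above.

import java.util.ArrayList;
import java.util.List;

// Hypothetical representation of the [i, s, o] triplet described above.
class TestCase {
    String input;          // i: the data input to the system
    String systemState;    // s: the state of the system when the input is applied
    String expectedOutput; // o: the output the specification requires

    TestCase(String input, String systemState, String expectedOutput) {
        this.input = input;
        this.systemState = systemState;
        this.expectedOutput = expectedOutput;
    }
}

// A test suite is simply the set of all test cases for a given software product.
class TestSuite {
    List<TestCase> cases = new ArrayList<TestCase>();
}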

8.2 TESTING OBJECTIVES:


The main objective of testing is to uncover a host of errors, systematically and with minimum effort and time. Stated formally:
Testing is a process of executing a program with the intent of finding an error.
A successful test is one that uncovers an as yet undiscovered error.
A good test case is one that has a high probability of finding an error, if one exists.
If no errors are found, either the tests are inadequate to detect errors that may be present, or the software more or less conforms to the quality and reliability standards.

8.3 LEVELS OF TESTING


In order to uncover the errors present in different phases, we have the concept of levels of testing. The basic levels of testing, together with the development artifacts they validate, are:

Client Needs - Acceptance Testing
Requirements - System Testing
Design - Integration Testing
Code - Unit Testing

8.3.1 Unit testing:


Unit testing focuses verification effort on the smallest unit of software, i.e., the module. Using the detailed design and the process specifications, testing is done to uncover errors within the boundary of the module. All modules must pass unit testing before integration testing begins.
In this project, each of the client modules (Client A, Client B and Client C) and the file-transfer and measurement logic can be treated as a unit. Each module has been tested by giving different sets of inputs, both while developing the module and after finishing development, so that each module works without error. The inputs are validated when accepted from the user.
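One way such a unit-level check could look is sketched below; the toKiloBytes helper is hypothetical, extracted for illustration from the byte-to-KB conversion performed in the client code above.

// Hypothetical helper mirroring the byte-to-KB conversion performed in the clients.
class SizeUtil {
    static float toKiloBytes(float bytes) {
        return bytes / 1024;
    }

    // Minimal unit-test driver: exercise the unit with a known input and compare
    // the observed output against the expected result.
    public static void main(String[] args) {
        float observed = toKiloBytes(2048);
        float expected = 2.0f;
        System.out.println(Math.abs(observed - expected) < 1e-6 ? "PASS" : "FAIL");
    }
}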

8.3.2 Integration Testing:

After the unit testing we have to perform integration testing. The goal here is to
see if modules can be integrated properly, the emphasis being on testing interfaces
between modules. This testing activity can be considered as testing the design and hence
the emphasis on testing module interactions.
In this project, the main system is formed by integrating all the modules. When integrating the modules, I have checked whether the integration affects the working of any of the services by giving different combinations of inputs with which the individual services ran correctly before integration.

8.3.3 System Testing


Here the entire software system is tested. The reference document for this process is the requirements document, and the goal is to see if the software meets its requirements.
Here the entire system has been tested against the requirements of the project, and it has been checked whether all the requirements have been satisfied or not.

8.3.4 Acceptance Testing


Acceptance testing is performed with realistic data of the client to demonstrate that the software works satisfactorily. Testing here is focused on the external behavior of the system; the internal logic of the program is not emphasized.
In this project, realistic data has been collected and used to check whether the project works correctly.
Test cases should be selected so that the largest number of attributes of an equivalence class is exercised at once. The testing phase is an important part of software development. It is the process of finding errors and missing operations and also a complete verification to determine whether the objectives are met and the user requirements are satisfied.

8.3.5 White Box Testing

This is a unit testing method where a unit is taken at a time and tested thoroughly at the statement level to find the maximum possible errors.
I tested every piece of code step by step, taking care that every statement in the code is executed at least once. White box testing is also called glass box testing.
I have generated a list of test cases and sample data, which is used to check all possible combinations of execution paths through the code at every module level.

8.3.6 Black Box Testing


This testing method considers a module as a single unit and checks the unit at its interfaces and in its communication with other modules, rather than getting into details at the statement level. Here the module is treated as a black box that takes some input and generates some output. The outputs for a given set of input combinations are forwarded to the other modules.

CONCLUSION

We have described an end-to-end probing technique which is capable of inferring the capacity bandwidth along an arbitrary set of path segments in the network, or across
the portion of a path shared by a set of connections, and have presented results of
simulations and preliminary Internet measurements of our techniques. The constructions
we advocate are built in part upon packet-pair techniques, and the inferences we draw are
accurate under a variety of simulated network conditions and are robust to network
effects such as the presence of bursty cross-traffic.
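As a rough illustration of the packet-pair principle that these constructions build on, the sketch below estimates the capacity of the narrow link from the probe packet size and the measured dispersion between two back-to-back probe packets. It is a minimal sketch of the general idea only, not the specific probing construction developed in this work; the class and method names are illustrative.

// Minimal sketch of the basic packet-pair estimate, assuming two back-to-back
// probe packets of equal size and a measured arrival-time dispersion at the receiver.
class PacketPairSketch {
    // packetSizeBytes: size of each probe packet; dispersionSeconds: gap between
    // the arrivals of the two packets. Returns an estimate in bits per second.
    static double capacityEstimate(int packetSizeBytes, double dispersionSeconds) {
        return (packetSizeBytes * 8.0) / dispersionSeconds;
    }

    public static void main(String[] args) {
        // Example: 1500-byte probes spaced 1.2 ms apart suggest roughly 10 Mb/s.
        System.out.println(capacityEstimate(1500, 0.0012) / 1e6 + " Mb/s");
    }
}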
While the end-to-end probing constructions we proposed in this paper are geared
towards a specific problem, we believe that there will be increasing interest in techniques
which conduct remote probes of network-internal characteristics, including those across
arbitrary subpaths or regions of the network. We anticipate that lightweight mechanisms
to facilitate measurement of metrics of interest, such as capacity bandwidth, will see
increasing use as emerging network-aware applications optimize their performance via
intelligent utilization of network resources.

REFERENCES

[1] B. Ahlgren, M. Bjorkman, and B. Melander, "Network probing using packet trains," Swedish Inst., Technical Report, Mar. 1999.
[2] D. Andersen, H. Balakrishnan, M. F. Kaashoek, and R. Morris, "Resilient overlay networks," in Proc. SOSP 2001, Banff, Canada, Oct. 2001.
[3] S. Banerjee and A. Agrawala, "Estimating available capacity of a network connection," in Proc. IEEE Int. Conf. Networks (ICON 01), Bangkok, Thailand, Oct. 2001.
[4] J. C. Bolot, "End-to-end packet delay and loss behavior in the Internet," in Proc. ACM SIGCOMM 93, Sep. 1993, pp. 289-298.
[5] J. Byers, J. Considine, M. Mitzenmacher, and S. Rost, "Informed content delivery across adaptive overlay networks," in Proc. ACM SIGCOMM 02, Pittsburgh, PA, Aug. 2002.
[6] J. Byers, M. Luby, and M. Mitzenmacher, "Accessing multiple mirror sites in parallel: Using Tornado codes to speed up downloads," in Proc. IEEE INFOCOM 99, Mar. 1999.
[7] R. L. Carter and M. E. Crovella, "Measuring bottleneck link speed in packet-switched networks," Performance Evaluation.
[8] Y.-H. Chu, S. Rao, and H. Zhang, "A case for end-system multicast," in Proc. ACM SIGMETRICS 00, Santa Clara, CA, Jun. 2000.
[9] M. E. Crovella, R. Frangioso, and M. Harchol-Balter, "Connection scheduling in web servers," in Proc. 1999 USENIX Symp. Internet Technologies and Systems (USITS 99), Oct. 1999.
[10] C. Dovrolis, P. Ramanathan, and D. Moore, "What do packet dispersion techniques measure?," in Proc. IEEE INFOCOM 01, Anchorage, AK, Apr. 2001.
