ABSTRACT
Accurate measurement of network bandwidth is important for network
management applications as well as flexible Internet applications and protocols which
actively manage and dynamically adapt to changing utilization of network resources.
Extensive work has focused on two approaches to measuring bandwidth:
measuring it hop-by-hop, and measuring it end-to-end along a path. Unfortunately, best-practice techniques for the former are inefficient and techniques for the latter are only able to observe bottlenecks visible at end-to-end scope.
We develop end-to-end probing methods which can measure bottleneck capacity
bandwidth along arbitrary, targeted subpaths of a path in the network, including subpaths
shared by a set of flows.
We evaluate our technique through ns simulations, and then provide a
comparative Internet performance evaluation against hop-by-hop and end-to-end
techniques. We also describe a number of applications which we foresee as standing to
benefit from solutions to this problem, ranging from network troubleshooting and
capacity provisioning to optimizing the layout of application-level overlay networks, to
optimized replica placement.
INTRODUCTION
MEASUREMENT of network bandwidth is important for many Internet
applications and protocols, especially those involving the transfer of large files and those
involving the delivery of content with real-time QoS constraints, such as streaming
media. Some specific examples of applications which can leverage accurate bandwidth
estimation include end-system multicast and overlay network configuration protocols,
content location and delivery in peer-to-peer (P2P) networks, network-aware cache or
replica placement policies and flow scheduling and admission control policies at
massively-accessed content servers. In addition, accurate measurements of network
bandwidth are useful to network operators concerned with problems such as capacity
provisioning, traffic engineering, network troubleshooting and verification of service
level agreements (SLAs).
Bandwidth Measurement:
Two different measures used in end-to-end network bandwidth estimation are capacity
bandwidth, or the maximum transmission rate that could be achieved between two hosts
at the endpoints of a given path in the absence of any competing traffic, and available
bandwidth, the portion of the capacity bandwidth along a path that could be acquired by
a given flow at a given instant in time. Both of these measures are important, and each
captures different relevant properties of the network. Capacity bandwidth is a static
baseline measure that applies over long time-scales (up to the time-scale at which
network paths change), and is independent of the particular traffic dynamics at a time
instant. Available bandwidth provides a dynamic measure of the load on a path, or more
precisely, the residual capacity of a path. Additional application-specific information
must then be applied before making meaningful use of either measure. While measures of
available bandwidth are certainly more useful for control or optimization of processes
operating at short time scales, processes operating at longer time scales (e.g., server
selection or admission control) will find estimates of both measures to be helpful. On the
other hand, many network management applications (e.g., capacity provisioning) are
concerned primarily with capacity bandwidth. We focus on measuring capacity
bandwidth.
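Capacity bandwidth is often estimated from packet-pair dispersion: two back-to-back probes of L bits leave the bottleneck link spaced L/C seconds apart, so C can be recovered from the measured spacing. The following Java sketch of that arithmetic is illustrative only; the class name and numbers are hypothetical and it is not this document's probing tool.

```java
// Hypothetical sketch (not this project's probing tool): packet-pair capacity
// estimation. Two back-to-back packets of L bits are serialized by the
// bottleneck link at rate C, so they arrive L / C seconds apart; inverting
// the measured dispersion recovers C.
public class PacketPair {
    // packetSizeBytes: probe size; dispersionSeconds: measured arrival gap.
    static double capacityBps(int packetSizeBytes, double dispersionSeconds) {
        return (packetSizeBytes * 8.0) / dispersionSeconds;
    }

    public static void main(String[] args) {
        // 1500-byte probes arriving 0.12 ms apart imply roughly 100 Mb/s.
        System.out.printf("%.0f bits/s%n", capacityBps(1500, 0.00012));
    }
}
```

In practice many probe pairs are sent and the results filtered statistically, since competing traffic perturbs the measured dispersion.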
Catalyst Applications:
To exemplify the type of applications that can leverage the identification of shared capacity bandwidth (or, more generally, the capacity bandwidth of an arbitrary, targeted subpath), consider two scenarios. In the first, a client must select two out of three sources to use to download data in parallel. This scenario may arise when downloading content in parallel from a subset of mirror sites or multicast sources, or from a subset of peer nodes in P2P environments. In the second scenario, an overlay network must be set up between a single source and two destinations. This scenario may arise in ad-hoc networks and end-system multicast systems.
EXISTING SYSTEM
Bandwidth is important for many Internet applications and protocols, especially
those involving the transfer of large files and those involving the delivery of content with
real-time QoS constraints, such as streaming media. Some specific examples of
applications which can leverage accurate bandwidth estimation include end-system multicast and overlay network configuration protocols, and content location and delivery in peer-to-peer networks.
PROPOSED SYSTEM
In the proposed system, we propose an efficient end-to-end measurement technique that yields the capacity bandwidth of an arbitrary subpath of a route between a set of end-points. By subpath, we mean a sequence of consecutive network links between any two identifiable nodes on that path. A node on a path between a source and a destination is identifiable if it is possible to coerce a packet injected at the source to exit the path at that node.
We can achieve this by:
1) Targeting the packet to the node directly (if the node's IP address is known);
2) Forcing the packet to stop at the node through use of the TTL field (if the hop count from the source to the node is known);
3) Targeting the packet to an alternate destination, such that the paths from the source to the two destinations are known to diverge at the node.
Our methods are much less resource-intensive than existing hop-by-hop methods for
estimating bandwidth along a path and much more general than end-to-end methods for
measuring capacity bandwidth.
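Once per-link estimates are in hand, identifying a targeted subpath's capacity bandwidth reduces to taking the minimum over its consecutive links. A hypothetical Java sketch, with names and values assumed purely for illustration:

```java
import java.util.Arrays;

// Hypothetical sketch: the capacity bandwidth of a targeted subpath is the
// minimum capacity among its consecutive links. The per-link values here are
// assumed inputs; estimating them remotely is what the probing methods do.
public class SubpathCapacity {
    // linkMbps[i] is the capacity of link i; the subpath spans links from..to.
    static double bottleneckMbps(double[] linkMbps, int from, int to) {
        return Arrays.stream(linkMbps, from, to + 1).min().getAsDouble();
    }
}
```

For example, `bottleneckMbps(new double[]{100, 10, 45, 100}, 1, 2)` yields 10, the slow second link, even though the end-to-end bottleneck is the same value; a subpath excluding link 1 would report a different, otherwise invisible, bottleneck.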
Advantages
Our method provides the following advantages over existing techniques:
1) it can estimate bandwidth on links not visible at end-to-end scope,
2) it can measure the bandwidth of fast links following slow links as long as the
ratio between the link speeds does not exceed the ratio between the largest and
the smallest possible packet sizes that could be transmitted over these links.
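Advantage 2 amounts to a simple inequality: a fast link behind a slow link remains measurable while the speed ratio stays within the packet-size ratio. A hypothetical check of that condition:

```java
// Hypothetical check of the condition in advantage 2: a fast link following
// a slow link is measurable as long as the ratio of link speeds does not
// exceed the ratio of the largest to the smallest usable packet size.
public class SpeedRatioCheck {
    static boolean measurable(double slowMbps, double fastMbps,
                              int minPacketBytes, int maxPacketBytes) {
        return fastMbps / slowMbps <= (double) maxPacketBytes / minPacketBytes;
    }
}
```

With 40-1500-byte packets (ratio 37.5), a 100 Mb/s link behind a 10 Mb/s link satisfies the condition, while a 1 Gb/s link behind the same 10 Mb/s link does not.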
INPUT STAGES:
The main input stages can be listed as below:
Data recording
Data transcription
Data conversion
Data verification
Data control
Data transmission
Data validation
Data correction
INPUT TYPES:
It is necessary to determine the various types of inputs. Inputs can be categorized as
follows:
External inputs, which are prime inputs for the system.
Internal inputs, which are user communications with the system.
Operational, which are the computer department's communications to the system.
Interactive, which are inputs entered during a dialogue.
INPUT MEDIA:
At this stage a choice has to be made about the input media. To decide on the input media, consideration has to be given to:
Type of input
Flexibility of format
Speed
Accuracy
Verification methods
Rejection rates
Ease of correction
Storage and handling requirements
Security
Ease of use
Portability
Keeping in view the above description of the input types and input media, it can be said that most of the inputs are internal and interactive. As the input data is to be keyed in directly by the user, the keyboard can be considered the most suitable input device.
OUTPUT DESIGN:
Outputs from computer systems are required primarily to communicate the results of processing to users. They are also used to provide a permanent copy of the results for later consultation. The various types of outputs in general are:
Internal outputs, whose destination is within the organization and which are the user's main interface with the computer.
Interface outputs, which involve the user in communicating directly with the system.
OUTPUT DEFINITION
The outputs should be defined in terms of the following points:
OUTPUT MEDIA:
In the next stage it is to be decided which medium is the most appropriate for the output. The main considerations when deciding on the output media are:
Keeping in view the above description the project is to have outputs mainly
coming under the category of internal outputs. The main outputs desired according to the
requirement specification are:
The outputs need to be generated as hard copies, as well as queries to be viewed on the screen. Keeping in view these outputs, the format for the output is taken from the outputs which are currently being obtained after manual processing. A standard printer is to be used as the output medium for hard copies.
[Figure: SDLC process chart — preparation (business requirement documentation, feasibility study, team formation, project specification), requirements gathering, analysis & design, code, unit test, integration & system testing, acceptance test, training, delivery/installation, with umbrella activities (document control) spanning all stages]
SDLC stands for Software Development Life Cycle. It is a standard used by the software industry to develop good software.
Stages in SDLC:
Requirement Gathering
Analysis
Designing
Coding
Testing
Maintenance
Requirements Gathering stage:
The requirements gathering process takes as its input the goals identified in the high-level
requirements section of the project plan. Each goal will be refined into a set of one or more
requirements. These requirements define the major functions of the intended application, define
operational data areas and reference data areas, and define the initial data entities. Major
functions include critical processes to be managed, as well as mission critical inputs, outputs and
reports. A user class hierarchy is developed and associated with these major functions, data areas,
and data entities. Each of these definitions is termed a Requirement. Requirements are identified
by unique requirement identifiers and, at minimum, contain a requirement title and
textual description.
These requirements are fully described in the primary deliverables for this stage: the
Requirements Document and the Requirements Traceability Matrix (RTM). The requirements
document contains complete descriptions of each requirement, including diagrams and references
to external documents as necessary. Note that detailed listings of database tables and fields are
not included in the requirements document.
The title of each requirement is also placed into the first version of the RTM, along with the
title of each goal from the project plan. The purpose of the RTM is to show that the product
components developed during each stage of the software development lifecycle are formally
connected to the components developed in prior stages.
In the requirements stage, the RTM consists of a list of high-level requirements, or goals,
by title, with a listing of associated requirements for each goal, listed by requirement title. In this
hierarchical listing, the RTM shows that each requirement developed during this stage is formally
linked to a specific product goal. In this format, each requirement can be traced to a specific
product goal, hence the term requirements traceability.
The outputs of the requirements definition stage include the requirements document, the
RTM, and an updated project plan.
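The hierarchical goal-to-requirement listing that the RTM holds at this stage can be pictured as a simple map from goal titles to requirement titles. The sketch below is illustrative; the class and all titles are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the requirements-stage RTM: each high-level goal
// maps to the requirements refined from it, so any requirement title can be
// traced back to a specific product goal.
public class Rtm {
    private final Map<String, List<String>> goalToReqs = new LinkedHashMap<>();

    void add(String goalTitle, String... requirementTitles) {
        goalToReqs.put(goalTitle, List.of(requirementTitles));
    }

    // Requirements traceability: find the goal a requirement was refined from.
    Optional<String> traceToGoal(String requirementTitle) {
        return goalToReqs.entrySet().stream()
                .filter(e -> e.getValue().contains(requirementTitle))
                .map(Map.Entry::getKey)
                .findFirst();
    }
}
```

Later stages extend the same idea, linking design elements to requirements and test cases to design elements.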
Project specifications represent the various possible inputs submitted to the server and the corresponding outputs, along with the reports maintained by the administrator.
Analysis Stage:
The planning stage establishes a bird's eye view of the intended software product, and uses
this to establish the basic project structure, evaluate feasibility and risks associated with the
project, and describe appropriate management and technical approaches.
The most critical section of the project plan is a listing of high-level product requirements, also
referred to as goals. All of the software product requirements to be developed during the
requirements definition stage flow from one or more of these goals. The minimum information
for each goal consists of a title and textual description, although additional information and
references to external documents may be included. The outputs of the project planning stage are the configuration management plan, the quality assurance plan, and the project plan and schedule, with a detailed listing of scheduled activities for the upcoming Requirements stage and high-level estimates of effort for the later stages.
Designing Stage:
The design stage takes as its initial input the requirements identified in the approved
requirements document. For each requirement, a set of one or more design elements will be
produced as a result of interviews, workshops, and/or prototype efforts. Design elements describe
the desired software features in detail, and generally include functional hierarchy diagrams,
screen layout diagrams, tables of business rules, business process diagrams, pseudo code, and a
complete entity-relationship diagram with a full data dictionary. These design elements are
intended to describe the software in sufficient detail that skilled programmers may develop the
software with minimal additional input.
When the design document is finalized and accepted, the RTM is updated to show that each
design element is formally associated with a specific requirement. The outputs of the design stage
are the design document, an updated RTM, and an updated project plan.
Coding Stage:
The development stage takes as its primary input the design elements described in the
approved design document. For each design element, a set of one or more software artifacts will
be produced. Software artifacts include but are not limited to menus, dialogs, data management
forms, data reporting formats, and specialized procedures and functions. Appropriate test cases
will be developed for each set of functionally related software artifacts, and an online help system
will be developed to guide users in their interactions with the software.
The RTM will be updated to show that each developed artifact is linked to a specific design
element, and that each developed artifact has one or more corresponding test case items. At this
point, the RTM is in its final configuration. The outputs of the development stage include a fully
functional set of software that satisfies the requirements and design elements previously
documented, an online help system that describes the operation of the software, an
implementation map that identifies the primary code entry points for all major system functions, a
test plan that describes the test cases to be used to validate the correctness and completeness of
the software, an updated RTM, and an updated project plan.
Testing Stage:
During the integration and test stage, the software artifacts, online help, and test data are
migrated from the development environment to a separate test environment. At this point, all test
cases are run to verify the correctness and completeness of the software. Successful execution of
the test suite confirms a robust and complete migration capability. During this stage, reference
data is finalized for production use and production users are identified and linked to their
appropriate roles. The final reference data (or links to reference data source files) and production
user list are compiled into the Production Initiation Plan.
The outputs of the integration and test stage include an integrated set of software, an online
help system, an implementation map, a production initiation plan that describes reference data
and production users, an acceptance plan which contains the final suite of test cases, and an
updated project plan.
After customer personnel have verified that the initial production data load is correct and
the test suite has been executed with satisfactory results, the customer formally accepts the
delivery of the software.
The primary outputs of the installation and acceptance stage include a production
application, a completed acceptance test suite, and a memorandum of customer acceptance of the
software. Finally, the PDR enters the last of the actual labor data into the project schedule and
locks the project as a permanent project record. At this point the PDR "locks" the project by
archiving all software items, the implementation map, the source code, and the documentation for
future reference.
Maintenance:
The outer rectangle represents maintenance of a project. The maintenance team will start with a requirements study and an understanding of the documentation; later, employees will be assigned work and will undergo training in their particular assigned categories.
This life cycle has no end; it continues on like an umbrella (there is no ending point to the umbrella's sticks).
Hardware Requirements:
System
Hard Disk : 40 GB
Floppy Drive : 1.44 MB
Monitor : 15" VGA Colour
Mouse : Logitech
RAM : 256 MB
Software Requirements:
Operating System : Windows XP Professional
Coding Language : Java
Tool Used : Eclipse
Data Flow Diagrams:
Level 0 DFD for Server:
[Figure: Level 0 DFD — the server connects clients; a client starts, selects a file and a destination server, sends the IP address, client port and name, and measures the bandwidth of a file]
FEASIBILITY STUDY
Preliminary investigation examines project feasibility: the likelihood the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational and economical feasibility of adding new modules and debugging the old running system. Any system is feasible given unlimited resources and infinite time.
System analysis is conducted with these objectives in mind. There are three aspects in the feasibility study portion of the preliminary investigation:
TECHNICAL FEASIBILITY
OPERATIONAL FEASIBILITY
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
This is the most difficult area to assess because objectives, functions and performance are somewhat hazy; anything seems possible if the right assumptions are made. The considerations that are normally associated with technical feasibility include:
Development Risk:
The system is designed so that necessary functions and performance are achieved
within the constraints uncovered during the analysis.
Resource Availability:
It is important to determine the necessary resources to build the system.
Hardware feasibility:
This project is mainly concerned with the software, whereas the hardware needed consists of simple computers that are available in the market. It is preferable to have a good computer with at least a Pentium III processor and 128 MB of RAM in order to make the system run properly and to keep efficiency acceptable.
Software feasibility:
The development environment of this system is Windows XP, so it is compatible with the Windows platform. The language selected to implement this project is Java; the reasons for choosing Java are:
Portability.
Popularity.
Object orientation.
Efficiency.
Technology:
The proposed system will generate many kinds of reports depending on the
requirements. By automating all these activities the work is done effectively and in time.
There is also quick and good response for each operation.
OPERATIONAL FEASIBILITY
Proposed project is beneficial only if it can be turned into information systems
that will meet the organizations operating requirements. Simply stated, this test of
feasibility asks if the system will work when it is developed and installed. Are there
major barriers to Implementation? Here are questions that will help test the operational
feasibility of a project:
Is there sufficient support for the project from management and from users? If the current system is well liked and used to the extent that persons will not be able to see reasons for change, there may be resistance.
Are the current business methods acceptable to the users? If they are not, users may welcome a change that will bring about a more operational and useful system.
Have the users been involved in the planning and development of the project? Early involvement reduces the chances of resistance to the system in general and increases the likelihood of a successful project.
The proposed system was intended to help reduce the hardships encountered in the existing manual process.
IMPLEMENTATION
You can think of Java bytecodes as the machine-code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it's a development tool or a Web browser that can run applets, is an implementation of the Java VM. Java bytecodes help make "write once, run anywhere" possible. You can compile your program into bytecodes on any platform that has a Java compiler. The bytecodes can then be run on any implementation of the Java VM. That means that as long as a computer has a Java VM, the same program written in the Java programming language can run on Windows 2000, a Solaris workstation, or an iMac.
You've already been introduced to the Java VM. It's the base for the Java platform and is ported onto various hardware-based platforms.
The Java API is a large collection of ready-made software components that
provide many useful capabilities, such as graphical user interface (GUI) widgets.
The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. The next section, "What Can Java Technology Do?", highlights what functionality some of the packages in the Java API provide.
The following figure depicts a program that's running on the Java platform. As the figure shows, the Java API and the virtual machine insulate the program from the hardware.
Native code is code that, after you compile it, runs on a specific hardware platform. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time bytecode compilers can bring performance close to that of native code without threatening portability.
4.2 FEATURES OF JAVA :
Security
Every time you download a normal program, you are risking a viral
infection. Prior to Java, most users did not download executable programs frequently,
and those who did scanned them for viruses prior to execution. Most users still
worried about the possibility of infecting their systems with a virus. In addition,
another type of malicious program exists that must be guarded against. This type of
program can gather private information, such as credit card numbers, bank account
balances, and passwords. Java answers both of these concerns by providing a
firewall between a networked application and your computer. When you use a Java-compatible Web browser, you can safely download Java applets without fear of virus infection or malicious intent.
Portability
For programs to be dynamically downloaded to all the various types of platforms
connected to the Internet, some means of generating portable executable code is needed. As you will see, the same mechanism that helps ensure security also helps create portability. Indeed, Java's solution to these two problems is both elegant and efficient.
The Byte code
The key that allows Java to solve the security and portability problems is that the output of the Java compiler is byte code. Byte code is a highly optimized set of instructions designed to be executed by the Java run-time system, which is called the Java Virtual Machine (JVM). That is, in its standard form, the JVM is an interpreter for byte code. Translating a Java program into byte code makes it much easier to run the program in a wide variety of environments, because once the run-time package exists for a given system, any Java program can run on it.
Although Java was designed for interpretation, there is technically nothing about Java
that prevents on-the-fly compilation of byte code into native code. Sun has just completed
its Just In Time (JIT) compiler for byte code. When the JIT compiler is a part of JVM, it
compiles byte code into executable code in real time, on a piece-by-piece, demand basis.
It is not possible to compile an entire Java program into executable code all at once,
because Java performs various run-time checks that can be done only at run time. The JIT
compiles code, as it is needed, during execution.
Java Virtual Machine (JVM)
Beyond the language, there is the Java virtual machine. The Java virtual machine is an
important element of the Java technology. The virtual machine can be embedded within a
web browser or an operating system. Once a piece of Java code is loaded onto a machine,
it is verified. As part of the loading process, a class loader is invoked and performs byte code verification, which makes sure that the code that has been generated by the compiler will not corrupt the machine it is loaded on. Byte code verification takes place at the end of the compilation process to make sure that everything is accurate and correct. So byte code verification is integral to the compiling and executing of Java code.
[Figure: Java source (.java) → javac compiler → byte code (.class) → Java Virtual Machine]
The above picture shows the development process a typical Java program goes through to produce byte codes and execute them. The first box indicates that the Java source code is located in a .java file that is processed with the Java compiler, javac. The Java compiler produces a file called a .class file, which contains the byte code. The class file is then loaded across the network, or loaded locally on your machine, into the execution environment: the Java virtual machine, which interprets and executes the byte code.
Java Architecture
Java architecture provides a portable, robust, high performing environment for
development. Java provides portability by compiling the byte codes for the Java Virtual
Machine, which is then interpreted on each platform by the run-time environment. Java is
a dynamic system, able to load code when needed from a machine in the same room or
across the planet.
Compilation of Code
When you compile the code, the Java compiler creates machine code (called byte code)
for a hypothetical machine called Java Virtual Machine (JVM). The JVM is supposed to
execute the byte code. The JVM is created for overcoming the issue of portability. The
code is written and compiled for one machine and interpreted on all machines. This
machine is called Java Virtual Machine.
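The compile-once, interpret-everywhere split described above can be observed directly: the JDK's compiler API turns a .java source file into the .class byte code that the JVM interprets. This sketch assumes it runs on a full JDK (on a bare JRE, getSystemJavaCompiler() returns null); the class and method names are illustrative.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

// Sketch: compile Java source to byte code at run time with the JDK's
// compiler API, producing the .class file that the JVM then interprets.
public class CompileDemo {
    // Returns true if the source compiled and a .class file was produced.
    static boolean producesByteCode(String className, String source) {
        try {
            Path dir = Files.createTempDirectory("bytecode-demo");
            Path src = dir.resolve(className + ".java");
            Files.writeString(src, source);
            JavaCompiler javac = ToolProvider.getSystemJavaCompiler(); // null on a bare JRE
            return javac != null
                    && javac.run(null, null, null, src.toString()) == 0
                    && Files.exists(dir.resolve(className + ".class"));
        } catch (Exception e) {
            return false;
        }
    }
}
```

The produced .class file is the platform-independent artifact; only the interpreter that later executes it is platform-specific.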
Compiling and interpreting Java Source Code
[Figure: Java source code → Java compiler → Java byte code → per-platform Java interpreters (PC, Macintosh, SPARC)]
During run-time the Java interpreter tricks the byte code file into thinking that it is running on a Java Virtual Machine. In reality this could be an Intel Pentium machine running Windows 95, a Sun SPARCstation running Solaris, or an Apple Macintosh, and all could receive code from any computer through the Internet and run the applets.
Simple
Java was designed to be easy for the Professional programmer to learn and to use
effectively. If you are an experienced C++ programmer, learning Java will be even easier, because Java inherits the C/C++ syntax and many of the object-oriented features of C++. Most of the confusing concepts from C++ are either left out of Java or implemented in a cleaner, more approachable manner. In Java there are a small number of clearly defined ways to accomplish a given task.
Object-Oriented
Java was not designed to be source-code compatible with any other language. This
allowed the Java team the freedom to design with a blank slate. One outcome of this was
a clean, usable, pragmatic approach to objects. The object model in Java is simple and easy to extend, while simple types, such as integers, are kept as high-performance non-objects.
Robust
The multi-platform environment of the Web places extraordinary demands on a program,
because the program must execute reliably in a variety of systems. The ability to create
robust programs was given a high priority in the design of Java. Java is a strictly typed language; it checks your code at compile time and at run time. Java virtually eliminates the problems of memory management and de-allocation, which is completely automatic. In a well-written Java program, all run-time errors can and should be managed by your program.
4.3 ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming
interface for application developers and database systems providers. Before ODBC
became a de facto standard for Windows programs to interface with database systems,
programmers had to use proprietary languages for each database they wanted to connect
to. Now, ODBC has made the choice of the database system almost irrelevant from a
coding perspective, which is as it should be. Application developers have much more
important things to worry about than the syntax that is needed to port their program from
one database to another when business needs suddenly change.
Through the ODBC Administrator in Control Panel, you can specify the particular
database that is associated with a data source that an ODBC application program is
written to use. Think of an ODBC data source as a door with a name on it. Each door will
lead you to a particular database. For example, the data source named Sales Figures
might be a SQL Server database, whereas the Accounts Payable data source could refer to
an Access database. The physical database referred to by a data source can reside
anywhere on the LAN.
The ODBC system files are not installed on your system by Windows 95. Rather,
they are installed when you setup a separate database application, such as SQL Server
Client or Visual Basic 4.0. When the ODBC icon is installed in Control Panel, it uses a
file called ODBCINST.DLL. It is also possible to administer your ODBC data sources
through a stand-alone program called ODBCADM.EXE. There is a 16-bit and a 32-bit
version of this program, and each maintains a separate list of ODBC data sources.
From a programming perspective, the beauty of ODBC is that the application can
be written to use the same set of function calls to interface with any data source,
regardless of the database vendor. The source code of the application doesn't change whether it talks to Oracle or SQL Server. We only mention these two as an example.
There are ODBC drivers available for several dozen popular database systems. Even
Excel spreadsheets and plain text files can be turned into data sources. The operating
system uses the Registry information written by ODBC Administrator to determine which
low-level ODBC drivers are needed to talk to the data source (such as the interface to
Oracle or SQL Server). The loading of the ODBC drivers is transparent to the ODBC
application program. In a client/server environment, the ODBC API even handles many
of the network issues for the application programmer.
The advantages of this scheme are so numerous that you are probably thinking
there must be some catch. The only disadvantage of ODBC is that it isn't as efficient as
talking directly to the native database interface. ODBC has had many detractors make the
charge that it is too slow. Microsoft has always claimed that the critical factor in
performance is the quality of the driver software that is used. In our humble opinion, this
is true. The availability of good ODBC drivers has improved a great deal recently. And
anyway, the criticism about performance is somewhat analogous to those who said that
compilers would never match the speed of pure assembly language. Maybe not, but the
compiler (or ODBC) gives you the opportunity to write cleaner programs, which means
you finish sooner. Meanwhile, computers get faster every year.
4.4 JDBC
In an effort to set an independent database standard API for Java, Sun
Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of relational databases. Its design goals include:
2. SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In an
effort to support a wide variety of vendors, JDBC will allow any query statement to
be passed through it to the underlying database driver. This allows the connectivity
module to handle non-standard functionality in a manner that is suitable for its users.
3. JDBC must be implementable on top of common database interfaces
The JDBC SQL API must sit on top of other common SQL level APIs. This
goal allows JDBC to use existing ODBC level drivers by the use of a software
interface. This interface would translate JDBC calls to ODBC and vice versa.
4. Provide a Java interface that is consistent with the rest of the Java system
Because of Java's acceptance in the user community thus far, the designers felt
that they should not stray from the current design of the core Java system.
5. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no
exception. Sun felt that the design of JDBC should be very simple, allowing for only
one method of completing a task per mechanism. Allowing duplicate functionality
only serves to confuse the users of the API.
6. Use strong, static typing wherever possible
Strong typing allows more error checking to be done at compile time, so
fewer errors appear at runtime.
7. Keep the common cases simple
Because more often than not, the usual SQL calls used by the programmer are
simple SELECTs, INSERTs, DELETEs and UPDATEs, these queries should be
simple to perform with JDBC. However, more complex SQL statements should also
be possible.
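As a hedged sketch of the "keep the common cases simple" goal, the following shows the typical JDBC call shape for a simple parameterized SELECT. The connection URL, table name and column names here are illustrative assumptions, not taken from this project; with no driver installed, the program simply reports that no database is available.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CacheQuery {
    // Builds the parameterized SQL; kept separate so it can be
    // checked without a live database.
    static String selectSql(String table) {
        return "SELECT name, size FROM " + table + " WHERE size > ?";
    }

    public static void main(String[] args) {
        // Assumed URL: the installed driver determines the real form.
        String url = "jdbc:odbc:cacheDb";
        try (Connection con = DriverManager.getConnection(url);
             PreparedStatement ps = con.prepareStatement(selectSql("cache"))) {
            ps.setInt(1, 1024); // strong, static typing: a typed setter
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name") + " " + rs.getInt("size"));
                }
            }
        } catch (SQLException e) {
            // No driver/DSN available: the sketch still compiles and
            // shows the call sequence.
            System.out.println("no database available: " + e.getMessage());
        }
    }
}
```

The query text and the statement handling are separated, which also makes the SQL easy to unit test on its own.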
For dynamically updating the cache table, we use a Microsoft Access database.
Java has two components: a programming language and a platform.
Java is a high-level programming language that is all of the
following
Simple
Object-oriented
Architecture-neutral
Portable
Distributed
High-performance
Interpreted
Multithreaded
Robust
Dynamic
Secure
Java is also unusual in that each Java program is both compiled and
interpreted. The compiler first translates a Java program into an intermediate
language called Java bytecodes: platform-independent instructions that are
later interpreted and run on the computer.
Compilation happens just once; interpretation occurs each time the
program is executed. The figure illustrates how this works.
[Figure: the compiler translates "My Program" (Java source) into a bytecode
Java program, which the interpreter then executes.]
You can think of Java bytecodes as the machine code instructions for the
Java Virtual Machine (Java VM). Every Java interpreter, whether it's a Java
development tool or a Web browser that can run Java applets, is an
implementation of the Java VM. The Java VM can also be implemented in
hardware.
Java bytecodes help make "write once, run anywhere" possible. You can
compile your Java program into bytecodes on any platform that has a Java
compiler. The bytecodes can then be run on any implementation of the Java VM.
For example, the same Java program can run on Windows NT, Solaris, and
Macintosh.
DESIGN ANALYSIS
5.1 INTRODUCTION
Software design sits at the technical kernel of the software engineering process
and is applied regardless of the development paradigm and area of application. Design is
the first step in the development phase for any engineered product or system. The
designer's goal is to produce a model or representation of an entity that will later be built.
Once system requirements have been specified and analyzed, system design is
the first of the three technical activities (design, code and test) required to build and
verify software.
The importance of design can be stated with a single word: quality. Design is the place
where quality is fostered in software development. Design provides us with
representations of software that can be assessed for quality. Design is the only way that we can
accurately translate a customer's view into a finished software product or system.
Software design serves as a foundation for all the software engineering steps that follow.
Without a strong design we risk building an unstable system, one that will be difficult to
test and whose quality cannot be assessed until the last stage.
During design, progressive refinements of data structure, program structure, and
procedural detail are developed, reviewed and documented. System design can be viewed
from either a technical or a project management perspective. From the technical point of
view, design comprises four activities: architectural design, data structure design,
interface design and procedural design.
A UML system is represented using five different views that describe the system from
distinctly different perspectives. Each view is defined by a set of diagrams, as
follows.
UML Analysis modeling, which focuses on the user model and structural model
views of the system.
UML Design modeling, which focuses on the behavioral modeling, implementation
modeling and environmental model views.
Use case diagrams represent the functionality of the system from a user's point of view.
Use cases are used during requirements elicitation and analysis to represent the
functionality of the system. Use cases focus on the behavior of the system from an external
point of view.
Actors are external entities that interact with the system. Examples of actors include users
like administrator, bank customer etc., or another system like central database.
Use Case: Use case describes the behavior of a system. It is used to structure things in a
model. It contains multiple scenarios, each of which describes a sequence of actions that
is clear enough for outsiders to understand.
Actor: An actor represents a coherent set of roles that users of a system play when
interacting with the use cases of the system. An actor participates in use cases to
accomplish an overall purpose. An actor can represent the role of a human, a device, or
any other systems.
[Use case diagram: Clients 1, 2 and 3 connect to the Server; the Server selects a
destination and sends a file; bandwidth and time delay are calculated according
to file size.]
This diagram is simple and visually logical, so it is easy to see the sequence of the
flow of control. It also clearly shows concurrent processes and activations in a design.
Object: Object can be viewed as an entity at a particular point in time with a specific
value and as a holder of identity that has different values over time. Associations among
objects are not shown. When you place an object tag in the design area, a lifeline is
automatically drawn and attached to that object tag.
Message: A message is a sending of a signal from one sender object to other receiver
object(s). It can also be the call of an operation on receiver object by caller object. The
arrow can be labeled with the name of the message (operation or signal) and its argument
values
Duration Message: A message that indicates an action will cause transition from one
state to another state.
Self Message: A message that indicates an action will perform at a particular state and
stay there.
Create Message: A message that indicates an action that will perform between two
states.
[Sequence diagram: 1: start; 2-4: Clients A, B and C connect to the SERVER;
6: select destination; 7-9: the SERVER sends to each client.]
Class: A Class is a description for a set of objects that shares the same attributes, and has
similar operations, relationships, behaviors and semantics.
Generalization: Generalization is a relationship between a general element and a more
specific kind of that element. It means that the more specific element can be used
whenever the general element appears. This relation is also known as specialization
or inheritance link.
Realization: Realization is the relationship between a specialization and its
implementation. It is an indication of the inheritance of behavior without the inheritance
of structure.
Association: Association is represented by drawing a line between classes. Associations
represent structural relationships between classes and can be named to facilitate model
understanding. If two classes are associated, you can navigate from an object of one class
to an object of the other class.
Aggregation: Aggregation is a special kind of association in which one class represents
as the larger class that consists of a smaller class. It has the meaning of has-a
relationship.
[Class diagram: SERVER (attributes name, id, port, destination; operations
start(), selectFile(), selectDest(), send()) associated with Client A, Client B
and Client C (attributes ipaddress, port, name; operations Receive(),
calBandwidth(), calTimedelay()).]
SAMPLE CODE
Sample code for client A:
public class clientA implements ActionListener
{
public Font f0 = new Font("Verdana" , Font.BOLD , 35);
public Font f = new Font("Times New roman" , Font.BOLD , 23);
public Font f2 = new Font("Times New roman" , Font.BOLD , 18);
public Font f1 = new Font("Calibrie", Font.BOLD + Font.ITALIC, 25);
public JLabel l=new JLabel("Received File");
public JLabel c1=new JLabel("Client A ");
public JLabel l1=new JLabel("Bandwidth (Kbs/ps) :");
public JLabel l2=new JLabel("Time Delay(ms) :");
public JLabel l3=new JLabel("File Size (Kbs) :");
public JTextField Tl1 = new JTextField("");
public JTextField Tl2 = new JTextField("");
public JTextField Tl3 = new JTextField("");
public JTextField T1 = new JTextField("");
public JScrollPane pane = new JScrollPane();
public JTextArea tf = new JTextArea();
public JButton graph=new JButton("Graphical");
public JButton Sub=new JButton("Submit");
public JButton Exit=new JButton("Exit");
public JFrame jf;
public Container c;
ServerSocket server;
Socket connection;
DataOutputStream output;
BufferedInputStream bis;
BufferedOutputStream bos;
byte[] receivedData = new byte[1024];
int in;
String strLine;
clientA()
{
jf = new JFrame("Client A");
c = jf.getContentPane();
c.setLayout(null);
jf.setSize(800,670);
//c.setBackground(new Color(33,26,103));
c.setBackground(Color.BLACK);
l.setBounds(650,100,200,50);
l1.setBounds(30,170,250,50);
l2.setBounds(30,270,250,50);
l3.setBounds(30,370,250,50);
c1.setBounds(400,30,200,50);
c1.setFont(f0);
l1.setFont(f);
l2.setFont(f);
l3.setFont(f);
l.setForeground(Color.GREEN);
l1.setForeground(Color.MAGENTA);
l2.setForeground(Color.MAGENTA);
l3.setForeground(Color.MAGENTA);
Tl1.setBounds(250,173,200,40);
Tl1.setFont(f);
Tl2.setBounds(250,273,200,40);
Tl2.setFont(f);
Tl1.setForeground(Color.RED);
Tl2.setForeground(Color.RED);
Tl3.setForeground(Color.RED);
Tl3.setBounds(250,373,200,40);
Tl3.setFont(f);
Tl1.setBackground(new Color(246,233,191));
Tl2.setBackground(new Color(246,233,191));
Tl3.setBackground(new Color(246,233,191));
pane.setBounds(550,170,400,360);
tf.setColumns(20);
tf.setRows(10);
tf.setForeground(Color.BLUE);
tf.setFont(f2);
tf.setBackground(new Color(246,233,191));
tf.setName("tf");
pane.setName("pane");
pane.setViewportView(tf);
l.setFont(f);
T1.setFont(f);
Sub.setFont(f);
Exit.setFont(f);
graph.setFont(f);
T1.setBounds(200,100,350,50);
Sub.setBounds(430,640,120,35);
Exit.setBounds(510,590,200,40);
Exit.setBackground(new Color(151,232,158));
graph.setBounds(220,590,200,40);
graph.setBackground(new Color(151,232,158));
T1.setBackground(Color.white);
T1.setForeground(Color.white);
Exit.setForeground(Color.BLACK);
c.add(l);
c.add(l1);
c.add(l2);
c.add(l3);
c.add(graph);
c.add(Tl1);
c.add(Tl2);
c.add(Tl3);
c.add(pane, BorderLayout.CENTER);
c1.setForeground(Color.RED);
Sub.setBackground(new Color(151,232,158));
jf.show();
c.add(c1);
c.add(Exit);
Sub.addActionListener(this);
Exit.addActionListener(this);
jf.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent win) {
System.exit(0);
}
});
int[] ports = new int[] { 8587 };
for (int i = 0; i < ports.length; i++) {
Thread t = new Thread(new PortListener(ports[i]));
t.setName("Listener-" + ports[i]);
t.start();
}
// PortListener.run() (not shown for clientA) fills in the bandwidth and
// time-delay fields Tl1 and Tl2, as in clientC below.
}
public void actionPerformed(ActionEvent e)
{
if (e.getSource() == Exit)
{
System.exit(0);
}
if(e.getSource()== Sub)
{
try {
server = new ServerSocket( 8585 );
while ( true ) {
connection = server.accept();
output = new DataOutputStream(connection.getOutputStream());
bis = new BufferedInputStream(connection.getInputStream());
BufferedReader br = new BufferedReader(new InputStreamReader(bis));
bos = new BufferedOutputStream(new FileOutputStream("C:/sss.txt"));
String strLine;
//Read File Line By Line
StringBuffer buffer = new StringBuffer();
while ((strLine = br.readLine()) != null) {
// Print the content on the console
System.out.println (strLine);
buffer.append(strLine+ "\n");
}
tf.setText(buffer.toString());
while ((in = bis.read(receivedData)) != -1)
{
bos.write(receivedData,0,in);
}
bos.close();
output.writeUTF( " ack 2" );
}
}
catch (IOException e1 ) { }
}
}
}
Sample code for client B (the class declaration and field declarations parallel
those of clientA; the listing resumes inside the constructor):
Tl1.setForeground(Color.BLUE);
Tl2.setForeground(Color.BLUE);
Tl3.setForeground(Color.BLUE);
Tl2.setBounds(250,273,200,40);
Tl3.setBounds(250,373,200,40);
Tl1.setFont(f2);
Tl2.setFont(f2);
Tl3.setFont(f2);
Tl1.setBackground(new Color(246,233,191));
Tl2.setBackground(new Color(246,233,191));
Tl3.setBackground(new Color(246,233,191));
pane.setBounds(550,170,400,360);
tf.setColumns(20);
tf.setRows(10);
tf.setForeground(Color.RED);
tf.setFont(f2);
tf.setBackground(new Color(246,233,191));
tf.setName("tf");
pane.setName("pane");
pane.setViewportView(tf);
l.setFont(f);
T1.setFont(f);
Sub.setFont(f);
Exit.setFont(f);
graph.setFont(f);
T1.setBounds(200,100,350,50);
Sub.setBounds(430,640,120,35);
Exit.setBounds(510,590,200,40);
Exit.setBackground(new Color(151,232,158));
graph.setBounds(220,590,200,40);
graph.setBackground(new Color(151,232,158));
T1.setBackground(Color.white);
T1.setForeground(Color.white);
Exit.setForeground(Color.BLACK);
c.add(l);
c.add(l1);
c.add(l2);
c.add(l3);
c.add(graph);
c.add(Tl1);
c.add(Tl2);
c.add(Tl3);
c.add(pane, BorderLayout.CENTER);
c1.setForeground(Color.RED);
Sub.setBackground(new Color(151,232,158));
jf.show();
c.add(c1);
c.add(Exit);
Sub.addActionListener(this);
Exit.addActionListener(this);
int[] ports = new int[] { 8586 }; // listener port assumed; clientA uses 8587, clientC 2222
for (int i = 0; i < ports.length; i++) {
Thread t = new Thread(new PortListener(ports[i]));
t.setName("Listener-" + ports[i]);
t.start();
}
jf.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent win) {
System.exit(0);
}
});
}
public static void main(String args[])
{
new clientB();
}
class PortListener implements Runnable {
ServerSocket server;
Socket connection;
BufferedReader br = null;
int port;
public PortListener(int port) {
this.port = port;
}
public void run() {
try {
server = new ServerSocket(port);
try {
Thread.sleep(2000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
while (true) {
connection = server.accept();
long startTime = System.currentTimeMillis();
br = new BufferedReader(
new InputStreamReader(new
BufferedInputStream(
connection.getInputStream())));
String strLine;
StringBuffer buffer = new StringBuffer();
System.out.println("hi");
while ((strLine = br.readLine()) != null) {
// Print the content on the console
System.out.println(strLine);
buffer.append(strLine + "\n");
}
br.close();
connection.close();
tf.setText(buffer.toString());
long endTime = System.currentTimeMillis();
float filesize = buffer.length();
float tfilesize = filesize / 1024;
String totalfilesize = Float.toString(tfilesize);
Tl3.setText(totalfilesize);
float cal = (endTime - startTime);
System.out.println(cal);
float ttime = cal / 1000; // delay in seconds
String totaltimedelay = Float.toString(ttime);
Tl2.setText(totaltimedelay);
float bandwidth = buffer.capacity();
float tbandwidth = (bandwidth / ttime) / 1024; // KB per second
Tl1.setText(Float.toString(tbandwidth));
}
} catch (IOException e) {
} finally {
}
}
}
public void actionPerformed(ActionEvent e)
{
if (e.getSource() == Exit) {
jf.setVisible(false);
System.exit(0);
}
if(e.getSource()== Sub)
{
jf.setVisible(false);
}
}
}
public class clientC implements ActionListener
{
public Font f0 = new Font("Verdana" , Font.BOLD , 35);
public Font f = new Font("Times New roman" , Font.BOLD , 23);
public Font f2 = new Font("Times New roman" , Font.BOLD , 18);
public Font f1 = new Font("Calibrie", Font.BOLD + Font.ITALIC, 25);
public JLabel l=new JLabel("Received File");
public JLabel c1=new JLabel("Client C ");
public JLabel l1=new JLabel("Bandwidth (Kbs/ps) :");
public JLabel l2=new JLabel("Time Delay(ms) :");
public JLabel l3=new JLabel("File Size (Kbs) :");
public JTextField Tl1 = new JTextField("");
public JTextField Tl2 = new JTextField("");
public JTextField Tl3 = new JTextField("");
public JTextField T1 = new JTextField("");
public JScrollPane pane = new JScrollPane();
public JTextArea tf = new JTextArea();
public JButton graph=new JButton("Graphical");
public JButton Sub=new JButton("Submit");
public JButton Exit=new JButton("Exit");
public JFrame jf;
public Container c;
clientC()
{
jf = new JFrame("Client C");
c = jf.getContentPane();
c.setLayout(null);
jf.setSize(800,670);
//c.setBackground(new Color(33,26,103));
c.setBackground(Color.BLACK);
l.setBounds(650,100,200,50);
l1.setBounds(30,170,250,50);
l2.setBounds(30,270,250,50);
l3.setBounds(30,370,250,50);
c1.setBounds(400,30,200,50);
c1.setFont(f0);
l1.setFont(f);
l2.setFont(f);
l3.setFont(f);
l.setForeground(Color.GREEN);
l1.setForeground(Color.YELLOW);
l2.setForeground(Color.YELLOW);
l3.setForeground(Color.YELLOW);
Tl1.setBounds(250,173,200,40);
Tl1.setForeground(new Color(15,60,22));
Tl2.setBounds(250,273,200,40);
Tl2.setForeground(new Color(15,60,22));
Tl3.setBounds(250,373,200,40);
Tl3.setForeground(new Color(15,60,22));
Tl1.setFont(f2);
Tl2.setFont(f2);
Tl3.setFont(f2);
Tl1.setBackground(new Color(246,233,191));
Tl2.setBackground(new Color(246,233,191));
Tl3.setBackground(new Color(246,233,191));
pane.setBounds(550,170,400,360);
tf.setColumns(20);
tf.setRows(10);
tf.setForeground(new Color(120,0,0));
tf.setFont(f2);
tf.setBackground(new Color(246,233,191));
tf.setName("tf");
pane.setName("pane");
pane.setViewportView(tf);
l.setFont(f);
T1.setFont(f);
Sub.setFont(f);
Exit.setFont(f);
graph.setFont(f);
T1.setBounds(200,100,350,50);
Sub.setBounds(430,640,120,35);
Exit.setBounds(510,590,200,40);
Exit.setBackground(new Color(151,232,158));
graph.setBounds(220,590,200,40);
graph.setBackground(new Color(151,232,158));
T1.setBackground(Color.white);
T1.setForeground(Color.white);
Exit.setForeground(Color.BLACK);
c.add(l);
c.add(l1);
c.add(l2);
c.add(l3);
c.add(graph);
c.add(Tl1);
c.add(Tl2);
c.add(Tl3);
c.add(pane, BorderLayout.CENTER);
c1.setForeground(Color.RED);
Sub.setBackground(new Color(151,232,158));
jf.show();
c.add(c1);
c.add(Exit);
Sub.addActionListener(this);
Exit.addActionListener(this);
int[] ports = new int[] { 2222 };
for (int i = 0; i < 1; i++) {
Thread t = new Thread(new PortListener(ports[i]));
t.setName("Listener-" + ports[i]);
t.start();
}
jf.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent win) {
System.exit(0);
}
});
}
public static void main(String args[])
{
new clientC();
}
class PortListener implements Runnable {
ServerSocket server;
Socket connection;
BufferedReader br = null;
int port;
public PortListener(int port) {
this.port = port;
}
public void run() {
try {
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
server = new ServerSocket(port);
while (true) {
connection = server.accept();
long startTime = System.currentTimeMillis();
br = new BufferedReader(new InputStreamReader(new BufferedInputStream(
connection.getInputStream())));
String strLine;
StringBuffer buffer = new StringBuffer();
System.out.println("hi");
while ((strLine = br.readLine()) != null) {
// Print the content on the console
System.out.println(strLine);
buffer.append(strLine + "\n");
}
br.close();
connection.close();
tf.setText(buffer.toString());
long endTime = System.currentTimeMillis();
float filesize = buffer.length();
float tfilesize = filesize / 1024;
String totalfilesize = Float.toString(tfilesize);
Tl3.setText(totalfilesize);
float cal = (endTime - startTime);
System.out.println(cal);
float ttime = cal / 1000; // delay in seconds
String totaltimedelay = Float.toString(ttime);
Tl2.setText(totaltimedelay);
float bandwidth = buffer.capacity();
float tbandwidth = (bandwidth / ttime) / 1024; // KB per second
Tl1.setText(Float.toString(tbandwidth));
}
} catch (IOException e) {
} finally {
}
}
}
}
}
}
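The listings above compute three quantities around the socket read: file size in KB (bytes divided by 1024), time delay in seconds (from two System.currentTimeMillis() readings), and bandwidth as size divided by transfer time. A minimal, self-contained sketch of that arithmetic, with illustrative values:

```java
public class BandwidthCalc {
    // File size in KB, as in the listings: bytes / 1024.
    static float fileSizeKb(int bytes) {
        return bytes / 1024f;
    }

    // Delay in seconds from two System.currentTimeMillis() readings.
    static float delaySeconds(long startMs, long endMs) {
        return (endMs - startMs) / 1000f;
    }

    // Bandwidth in KB/s: size divided by transfer time.
    static float bandwidthKbps(int bytes, float seconds) {
        return fileSizeKb(bytes) / seconds;
    }

    public static void main(String[] args) {
        int bytes = 4096;                         // a 4 KB transfer (illustrative)
        float delay = delaySeconds(1000L, 3000L); // a 2-second transfer
        System.out.println(fileSizeKb(bytes));            // 4.0
        System.out.println(delay);                        // 2.0
        System.out.println(bandwidthKbps(bytes, delay));  // 2.0
    }
}
```

Keeping the arithmetic in small static methods, separate from the Swing and socket code, makes it straightforward to verify on its own.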
SCREENS
TESTING
A fault, also called bug or defect, is a design or coding mistake that may
cause abnormal component behavior.
A test case is a set of inputs and expected results that exercise a component
with the purpose of causing failures and detecting faults.
TESTING ACTIVITIES:
System testing, which focuses on the complete system, its functional and
nonfunctional requirements and its target environment.
Testing is the phase where the errors remaining from all the previous phases
must be detected; hence, testing performs a very critical role for quality
assurance and for ensuring the reliability of the software. Testing consists of
providing a set of designed inputs, observing if the software behaves as
expected and, if it does not, noting the conditions under which a failure occurs.
Error
The term error is used in two different ways. It refers to the discrepancy
between a computed value and the true value; it is also used to refer to the
human action that results in a fault.
Fault
A fault is a condition that causes a system to fail in performing its required
function.
[Figure: the V-model of testing. Client Needs, Requirements, Design and Code
are verified by Acceptance Testing, System Testing, Integration Testing and
Unit Testing, respectively.]
After the unit testing we have to perform integration testing. The goal here is to
see if modules can be integrated properly, the emphasis being on testing interfaces
between modules. This testing activity can be considered as testing the design and hence
the emphasis on testing module interactions.
In this project, the main system is formed by integrating all the modules.
When integrating the modules, I checked whether the integration affects the
working of any of the services by giving different combinations of inputs with
which the services ran correctly before integration.
This is a unit testing method where one unit is taken at a time and tested
thoroughly at the statement level to find the maximum possible errors.
I tested every piece of code step by step, taking care that every statement in
the code is executed at least once. White box testing is also called glass box testing.
I generated a list of test cases and sample data, which is used to check all
possible combinations of execution paths through the code at every module level.
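As an illustration of the statement-level coverage described above, here is a hypothetical sketch (the method and its 1024 KB threshold are invented for this example): a small method plus a pair of inputs chosen so that every statement executes at least once.

```java
public class WhiteBoxDemo {
    // Classifies a transfer by size; the 1024 KB threshold is illustrative.
    static String classify(int sizeKb) {
        if (sizeKb > 1024) {
            return "big";
        }
        return "small";
    }

    public static void main(String[] args) {
        // These two test cases together execute every statement in
        // classify at least once: one takes the if-branch, one falls through.
        System.out.println(classify(2048)); // big
        System.out.println(classify(10));   // small
    }
}
```

Choosing inputs per branch, rather than per statement, is usually how such coverage lists are built in practice.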
CONCLUSION