
Abstract

Resolving network performance problems is difficult in the current network
environment. To improve the performance of energy-oriented networking
applications, a new approach called Green Computing has been introduced.
Streaming analysis and measurement techniques are developed to limit the energy
consumed within the architecture. A traditional packet processing engine is used
to represent the most energy-consuming components of network devices and to
handle traffic overload. A data center resource management framework to
support green computing is proposed. This framework is composed of power
and workload management schemes and a capacity planning scheme. While
power and workload management actions are taken on a short-term basis (e.g., a
fraction of a minute), capacity planning decisions are made on a long-term basis
(e.g., every few months). This paper mainly addresses the capacity planning
problem. Given the power and workload management, we formulate capacity
planning as a stochastic programming optimization model whose solution is the
number of servers to be installed and deployed in a data center over multiple
periods. The objective of this optimization model is to minimize the long-term
cost under workload demand uncertainty. The performance evaluation shows
that, with the proposed optimization model for the capacity planning scheme, the
total long-term cost of operating the data center can be minimized while the job
waiting time and job blocking probability are kept below the target thresholds.

TABLE OF CONTENTS
CHAPTER NO.  TITLE

             ABSTRACT
             LIST OF FIGURES
             LIST OF ABBREVIATIONS

1            INTRODUCTION
             1.1 Energy-Aware Network Performance
             1.2 Capacity Planning
             1.3 The Proposed Queuing Model
             1.4 Resource Management Framework

2            LITERATURE SURVEY
             2.1 Green Computing Technologies
             2.2 Green Support for PC-based Software Router
             2.3 Energy-Aware Load Balancing for Parallel Packet Processing
             2.4 Load Balancing for Parallel Forwarding
             2.5 Existing System
             2.6 Proposed System

3            SYSTEM ORGANIZATION
             3.1 System Architecture Design
             3.2 Feature Extraction
             3.3 Data Flow Diagram

4            PROPOSED SYSTEM
             4.1 Modules
                 4.1.1 Network Module
                 4.1.2 Delay Tolerant Network
                 4.1.3 Dynamic Routing
                 4.1.4 Randomization Process
                 4.1.5 Routing Table Maintenance
                 4.1.6 Load on Throughput

5            REQUIREMENT SPECIFICATION
             5.1 System Requirements
                 5.1.1 Hardware Required
                 5.1.2 Software Required
             5.2 Java in Detail

6            IMPLEMENTATION RESULT
             6.1 Coding
             6.2 Screen Shots

7            CONCLUSION AND FUTURE WORK

             REFERENCES

CHAPTER 1
INTRODUCTION
In the last few years power consumption has shown a growing and alarming
trend in all industrial sectors, and particularly in Information and Communication
Technology (ICT). Public organizations, Internet Service Providers (ISPs) and
Telecom operators reported alarming statistics of network energy requirements and
of the related carbon footprint. The study of power-saving network devices has
in recent years been based on the possibility of adapting network energy
requirements to the actual traffic load. Indeed, it is well known that network links
and devices are generally provisioned for busy or rush-hour load, which typically
exceeds their average utilization by a wide margin. Although this margin is seldom
reached, network devices are designed on its basis and, consequently, their power
consumption remains more or less constant even in the presence of fluctuating
traffic loads.
Thus, the key of any advanced power saving criteria resides in dynamically
adapting resources, provided at the network, link or equipment level, to current
traffic requirements and loads. In this respect, current green computing approaches
have been based on numerous energy-related criteria, to be applied in particular to
network equipment and component interfaces. Despite the great progress of
optics in transmission and switching, it is well known that today's networks still
rely very strongly on electronics.
With the constant increase of power cost and awareness of global warming,
the green computing concept has been introduced to improve the efficiency of
computing resource usage. Power management is one of the major approaches in
green computing to minimize the power consumption. In a distributed system,
power management is generally implemented in the network level in which the

operation mode of the computing resource (e.g., server) can be controlled remotely
by the centralized controller (e.g., batch scheduler).
This power management has to operate jointly with the workload
management to achieve optimal performance. In this case, the workload (i.e.,
jobs) can be processed by the local data center (i.e., the in-house computing
resource) or sent to a third-party processing service (e.g., cloud computing) when
the computing resource of the data center is heavily occupied (e.g., due to an
instantaneous peak workload). With the power and workload management, not
only can the power consumption be reduced, but the performance requirements
of workload processing can also be met.
For the owner of a data center, capacity planning is an important issue.
Given the power and workload management, an optimal size of the data center
(e.g., the optimal number of servers) has to be determined so that the long-term
cost is minimized while the processing capacity remains sufficient for the
workload submitted by the users of the data center. One of the most challenging
issues in designing the power and workload management and capacity planning
schemes is the uncertainty of the workload: processing jobs are generated by
users randomly and are unknown a priori. Therefore, to achieve the objective
and to meet all constraints of the distributed system, a stochastic optimization
model is required.
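As a concrete illustration of this stochastic optimization idea, the sketch below enumerates candidate server counts and picks the one that minimizes the expected long-term cost over a small set of demand scenarios. This is a toy two-stage model, not the paper's actual formulation: the class name, the linear outsourcing-cost model, and all numeric values are illustrative assumptions.

```java
// Hypothetical two-stage capacity-planning sketch: choose the number of
// servers that minimizes expected cost under uncertain workload demand.
public class CapacityPlanner {
    public static int plan(double serverCost, double jobsPerServer,
                           double outsourceCostPerJob,
                           double[] demandScenarios, double[] probabilities,
                           int maxServers) {
        int best = 0;
        double bestCost = Double.MAX_VALUE;
        for (int n = 0; n <= maxServers; n++) {
            double cost = serverCost * n;                 // first-stage (deployment) cost
            for (int s = 0; s < demandScenarios.length; s++) {
                // Second-stage (recourse) cost: outsource jobs exceeding capacity.
                double overflow = Math.max(0, demandScenarios[s] - n * jobsPerServer);
                cost += probabilities[s] * overflow * outsourceCostPerJob;
            }
            if (cost < bestCost) { bestCost = cost; best = n; }
        }
        return best;
    }

    public static void main(String[] args) {
        // Three demand scenarios (jobs per period) with their probabilities.
        double[] demand = {100, 200, 400};
        double[] prob   = {0.5, 0.3, 0.2};
        int servers = plan(50.0, 10.0, 8.0, demand, prob, 50);
        System.out.println("optimal servers = " + servers); // prints "optimal servers = 10"
    }
}
```

With these numbers, in-house capacity costs 5 per job per period while outsourcing costs 8, so the planner provisions for the common-case demand and outsources the rare peaks.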

Fig. 1.1. System Model of a Data Center


1.1. ENERGY-AWARE NETWORK PERFORMANCE
The current generation of network devices does not support power scaling
functionalities. However, power management is a key feature in today's processors
across all market segments, and it is rapidly evolving in other hardware
technologies as well. In this section, we first introduce the ACPI (Advanced
Configuration and Power Interface) specification and how it makes Adaptive Rate
(AR) and Low Power Idle (LPI) capabilities accessible at the software (SW)
layer. In the second part, we then discuss the impact of AR and LPI on the
forwarding performance of a network device, and how these two capabilities may
interact with each other.

1.2 CAPACITY PLANNING


Capacity planning, or system planning, is an important issue in distributed
system management. Capacity planning is required to ensure that the
computing resource meets users' demand while the cost is minimized. Most of the
works related to capacity planning are based on the estimation of
workload. In earlier work, a resource management framework was proposed for a
high-performance computing system consisting of an in-house computing resource
and a utility computing service provider. In this system, the workload can be
processed by the in-house resource, and the extra capacity required to meet the
performance requirement can be purchased from the utility computing service
provider. Heuristic planning algorithms to assign workload to the available
resource were developed. A capacity planning scheme for grid computing, based
on advance resource reservation to support quality of service (QoS), was also
proposed.
Resource allocation based on a heuristic algorithm was proposed to
improve system utilization and meet QoS requirements, and a negotiation
protocol was introduced to support resource reservation. A capacity
management tool for computing resource pools was also introduced. This tool uses
traces of application workload to analyze and simulate the workload assignment
so that the performance requirement can be met. An optimization search-based
algorithm was developed to assign workload to the available computing resource to
achieve an objective of the system (e.g., maximum resource utilization). Finally, a
profile-driven analytical model was proposed to estimate performance
characteristics of processing workload, and a heuristic method was developed
to determine the number of resources for the workload.

However, the works in the literature related to capacity planning ignore
optimization under the uncertainty of workload demand. Although some works
proposed optimization algorithms for capacity planning of distributed systems,
the impact of power and workload management was not jointly considered. A
capacity planning optimization model with a power and workload management
scheme was formulated to determine the optimal number of servers required in
new data centers, to locate the new data centers in appropriate areas, and to
distribute computing jobs to the data centers where energy costs can be minimized.
However, that work did not take the uncertainty of workload demand into account.
1.3 THE PROPOSED QUEUING MODEL
Under the assumption that every packet of a batch is sent to a single pipeline, the
model we propose corresponds to an M^x/D/1/SET queuing system. For each pipeline,
packets arrive in batches with exponentially distributed inter-arrival times at a given
average rate, and are served at a fixed rate. In order to take the LPI transition periods into
account, the model considers deterministic server setup times. When the system becomes empty,
the server is turned off. The system becomes operational again only when a new batch of packets
arrives, and at that point service can begin only after the setup interval has elapsed. The next
three sub-sections introduce the analytical model and its specialization to our case. We derive
the probability generating function (PGF) and the stationary probabilities of the M^x/D/1/SET
queuing system, and we express the server's idle and busy periods. Then, we propose an
approximation for the packet loss probability in the case of a finite buffer of size N, and we
derive network- and energy-aware performance indexes.
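Before the analytical derivation, the qualitative behavior of such a queue can be checked with a small discrete-event simulation. The sketch below is an illustrative approximation only: it assumes a fixed batch size rather than a general batch-size distribution, and all parameter values are arbitrary.

```java
import java.util.Random;

// Toy simulation of an M^x/D/1 queue with deterministic server setup (SET):
// batches arrive with exponential inter-arrival times; the server sleeps when
// empty and pays a fixed setup (LPI wake-up) time on the next batch arrival.
public class MxD1SetSim {
    /** Returns the mean packet sojourn time (waiting + service). */
    public static double simulate(double batchRate, int batchSize,
                                  double serviceTime, double setupTime,
                                  int numBatches, long seed) {
        Random rng = new Random(seed);
        double clock = 0.0, busyUntil = 0.0, totalDelay = 0.0;
        long packets = 0;
        for (int b = 0; b < numBatches; b++) {
            clock += -Math.log(1.0 - rng.nextDouble()) / batchRate; // exponential inter-arrival
            if (clock >= busyUntil) {
                busyUntil = clock + setupTime;  // server was asleep: pay the setup time
            }
            for (int p = 0; p < batchSize; p++) {
                busyUntil += serviceTime;       // deterministic per-packet service
                totalDelay += busyUntil - clock;
                packets++;
            }
        }
        return totalDelay / packets;
    }

    public static void main(String[] args) {
        // Offered load 0.5 * 4 * 0.2 = 0.4, plus the setup overhead.
        double mean = simulate(0.5, 4, 0.2, 1.0, 100000, 42L);
        System.out.printf("mean sojourn time = %.3f%n", mean);
    }
}
```

Raising the setup time trades energy savings for a longer sojourn time, which is exactly the trade-off the analytical model captures.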

1.4 RESOURCE MANAGEMENT FRAMEWORK


To achieve optimal performance at minimum cost, the resource
management framework manages the servers in a data center. This framework is
composed of the power and workload management schemes and the capacity
planning scheme. The actions of the power and workload management scheme are
performed by the batch scheduler, which controls server mode switching and job
outsourcing on a short-term basis in every time slot (e.g., a fraction of a minute).
In contrast, the capacity planning decision is made by the data center owner, who
determines the size of the data center (i.e., the number of servers in the center) on
a long-term basis (e.g., every few months).

Fig. 1.4. Interaction between power and workload management, and capacity
planning schemes in the proposed resource management framework.
These power and workload management and capacity planning schemes
interact as follows. The power and workload management yields the optimal
policy for the short-term plan, and this optimal policy provides the total cost
information for operating the data center. The capacity planning scheme then
utilizes this total cost information to determine the optimal number of servers in
the data center.

CHAPTER 2
LITERATURE SURVEY
A literature survey is an important step in the software development process. Before
developing a tool, it is necessary to assess the time factor, resource requirements,
manpower, economy, and company strength. Once these considerations are satisfied, the
next step is to determine the software specifications of the system: which operating
system the project requires, and which language and supporting software are needed to
develop the tool and its associated operations. While building the tool, the programmers
may also need external support, which can be obtained from senior programmers, books, or
websites. The above considerations were taken into account in developing the proposed
system.

2.1 GREEN COMPUTING TECHNOLOGIES AND THE ART OF TRADING-OFF -
R. BOLLA, R. BRUSCHI
In this contribution, we focus on energy-aware devices able to reduce their
energy requirements by adapting their performance. We propose an analytical
model to accurately represent the impact of green computing technologies (i.e.,
low power idle and adaptive rate) on network- and energy-aware performance
indexes. The model has been validated with experimental results, performed by
using energy-aware software routers and real-world traffic traces. The achieved
results demonstrate how the proposed model can effectively represent energy- and
network-aware performance indexes. Moreover, an optimization procedure

based on the model has been proposed and experimentally evaluated. The
procedure aims at dynamically adapting the energy-aware device configuration to
minimize energy consumption, while coping with incoming traffic volumes and
meeting network performance constraints.
2.2 GREEN SUPPORT FOR PC-BASED SOFTWARE ROUTER: PERFORMANCE
EVALUATION AND MODELING - R. BOLLA, R. BRUSCHI, AND A. RANIERI
We consider a new generation of COTS software routers (SRs), able to
effectively exploit multi-Core/CPU HW platforms. Our main objective is to
evaluate and to model the impact of power saving mechanisms, generally included
in today's COTS processors, on the SR networking performance and behavior. To
this purpose, we separately characterized the roles of both HW and SW layers
through a large set of internal and external experimental measurements, obtained
with a heterogeneous set of HW platforms and SR setups. Starting from this
detailed measure analysis, we propose a simple model, able to represent the SR
performance with a high accuracy level in terms of packet throughput and related
power consumption. The proposed model can be effectively applied inside "green
optimization" mechanisms in order to minimize power consumption, while
maintaining a certain SR performance target.
2.3 ENERGY-AWARE LOAD BALANCING FOR PARALLEL PACKET
PROCESSING ENGINES - R. BOLLA AND R. BRUSCHI
In this approach, we consider energy-aware network devices (e.g. routers,
switches, etc.) able to trade their energy consumption for packet forwarding
performance by means of both low power idle and adaptive rate schemes. We focus
on state-of-the-art packet processing engines, which generally represent the most

energy-starving components of network devices, and which are often composed of


a number of parallel pipelines to divide and conquer the incoming traffic load.
Our goal is to control both the power configuration of pipelines, and the way to
distribute traffic flows among them, in order to optimize the trade-off between
energy consumption and network performance indexes. With this aim, we propose
and analyze a constrained optimization policy, which tries to find the best trade-off
between power consumption and packet latency times. In order to deeply
understand the impact of such policy, a number of tests have been performed by
using experimental data from SW router architectures and real-world traffic traces.
2.4 LOAD BALANCING FOR PARALLEL FORWARDING - W. SHI, M.
MACGREGOR, AND P. GBURZYNSKI
Workload distribution is critical to the performance of network processor
based parallel forwarding systems. Scheduling schemes that operate at the packet
level, e.g., round-robin, cannot preserve packet-ordering within individual TCP
connections. Moreover, these schemes create duplicate information in processor
caches and therefore are inefficient in resource utilization. Hashing operates at the
flow level and is naturally able to maintain per-connection packet ordering;
besides, it does not pollute caches. A pure hash-based system, however, cannot
balance processor load in the face of highly skewed flow-size distributions in the
Internet; usually, adaptive methods are needed. In this paper, based on
measurements of Internet traffic, we examine the sources of load imbalance in
hash-based scheduling schemes. We prove that under certain Zipf-like flow-size
distributions, hashing alone is not able to balance workload. We introduce a new
metric to quantify the effects of adaptive load balancing on overall forwarding
performance. To achieve both load balancing and efficient system resource
utilization, we propose a scheduling scheme that classifies Internet flows into two

categories: the aggressive and the normal, and applies different scheduling policies
to the two classes of flows. Compared with most state-of-the-art parallel
forwarding schemes, our work exploits flow-level Internet traffic characteristics.

2.5 EXISTING SYSTEM

The main challenge in the implementation of the existing schemes lies in how to
effectively deal with interruptions of network connectivity and node failures.
The existing schemes have thus been reported to suffer seriously from long
delivery delays and/or large message loss ratios. Although improved in terms of
performance, the previously reported stream-based routing schemes are subject
to the following problems and implementation difficulties.
DISADVANTAGES:
- These schemes inevitably consume a large amount of energy, transmission
  bandwidth, and nodal memory space, which can easily exhaust the network
  resources.
- They degrade under high traffic loads, when packet drops can result in a
  significant loss of performance and scalability.
- Visibility is limited under the guided measurements.
- Measurement wastage is high during the detection of latencies.
2.6 PROPOSED SYSTEM
A novel network routing technique is introduced which aims to
overcome the shortcomings of the previously reported network routing schemes
and provides efficient and cost-effective data transfer. The main goal is to
achieve strong applicability to network scenarios with densely distributed
hand-held devices. The main feature of this system is its strong capability to
adapt to fluctuations in network status, traffic patterns and characteristics, user
encounter behaviors, and user resource availability, so as to improve network
performance in terms of message delivery ratio, message delivery delay, and
number of transmissions.
The proposed scheme is characterized by its ability to adapt itself to the
observed network behavior, which is made possible by employing an efficient
time-window based update mechanism for selected network status parameters at
each node. A time-window based update strategy is used because it is simple to
implement and robust against parameter fluctuation; note that network
conditions can change very fast, which makes a completely event-driven model
impractical.
ADVANTAGES:
- This scheme consumes less energy, a small amount of transmission
  bandwidth, and less memory space, which simplifies the use of network
  resources and provides efficient data transmission.
- It provides good support under high traffic loads, resulting in significant
  performance and scalability.
- Visibility under the scalable measurements: packets can dynamically move
  according to the bandwidth circumstances.
- Measurement wastage is low during the detection of latencies.

CHAPTER 3
SYSTEM ORGANIZATION
3.1 SYSTEM ARCHITECTURE DESIGN

Figure 3.1.1 System Architecture Design

3.2 FEATURE EXTRACTION


The aim is to provide a new routing methodology that maintains the worst-case
complexities of the router architectures and real-world traffic traces while
empirically demonstrating only a moderate increase in average resource
utilization and power consumption. The proposed scheme is characterized by its
ability to adapt itself to the observed network behavior, which is made possible
by employing an efficient time-window based update mechanism for selected
network status parameters at each node. A time-window based update strategy is
used because it is simple to implement and robust against parameter fluctuation;
note that network conditions can change very fast, which makes a completely
event-driven model impractical.
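A minimal sketch of such a time-window based update mechanism follows. The class and method names, and the choice of a plain average over the window, are assumptions for illustration; the actual scheme may weight or aggregate observations differently.

```java
import java.util.ArrayDeque;

// Hypothetical sketch of a time-window based parameter update: each node
// keeps only observations from the last W time units and reports their
// average as the current estimate, so the estimate tracks recent behavior
// while remaining robust to momentary fluctuation.
public class TimeWindowEstimator {
    private final double window;                              // window length W
    private final ArrayDeque<double[]> samples = new ArrayDeque<>(); // {time, value}

    public TimeWindowEstimator(double window) { this.window = window; }

    public void observe(double time, double value) {
        samples.addLast(new double[]{time, value});
        // Drop observations that have fallen out of the window.
        while (!samples.isEmpty() && samples.peekFirst()[0] < time - window) {
            samples.removeFirst();
        }
    }

    public double estimate() {
        double sum = 0;
        for (double[] s : samples) sum += s[1];
        return samples.isEmpty() ? 0.0 : sum / samples.size();
    }
}
```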

3.3 DATA FLOW DIAGRAM


LEVEL 0

Client -> [Efficient data center support and optimization methods in new
routing methods via green computing] -> Server

Figure 3.3.1. Level-0 Data Flow Diagram

LEVEL 1

Client -> select the data to transmit -> split the data into packets ->
select the router to transmit the packets -> analyze the bandwidth

Figure 3.3.2. Level-1 Data Flow Diagram

LEVEL 2

Client: select the data to transmit -> split the data into packets ->
select the router to transmit the packets -> transmit the packets.
Router: receive the packets in equal proportion -> analyze the received
packet size and the capability of the sub-routers -> transmit the packets
to the sub-router.
Sub-router: transmit the received packets to the server.

Figure 3.3.3. Level-2 Data Flow Diagram

CHAPTER 4
PROPOSED SYSTEM
4.1 MODULES
4.1.1. Network Module
4.1.2. Delay Tolerant Network
4.1.3. Dynamic Routing
4.1.4. Randomization Process
4.1.5. Routing Table Maintenance
4.1.6. Load on Throughput
4.1.1 NETWORK MODULE
Client-server computing, or client-server networking, is a distributed application
architecture that partitions tasks or workloads between service providers (servers)
and service requesters, called clients. Often clients and servers operate over a
computer network on separate hardware. A server machine is a high-performance
host that runs one or more server programs which share their resources with
clients. A client, in contrast, does not share any of its resources; clients initiate
communication sessions with servers, which await (listen for) incoming requests.
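The pattern described above can be sketched with Java sockets. This is a minimal illustration, not the project's actual networking code: the one-shot echo protocol and all names are assumptions.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal client-server sketch: the server listens and shares a resource
// (here, just an echo service); the client initiates the session.
public class EchoDemo {
    /** Starts a one-shot echo server, connects a client, returns the reply. */
    public static String roundTrip(String msg) throws Exception {
        ServerSocket server = new ServerSocket(0);               // 0 = any free port
        Thread serverThread = new Thread(() -> {
            try (Socket s = server.accept();                     // server awaits a request
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println("echo: " + in.readLine());           // ... then answers
            } catch (IOException ignored) { }
        });
        serverThread.start();
        String reply;
        try (Socket client = new Socket("localhost", server.getLocalPort()); // client initiates
             PrintWriter out = new PrintWriter(client.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
            out.println(msg);
            reply = in.readLine();
        }
        serverThread.join();
        server.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello"));                  // prints "echo: hello"
    }
}
```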
4.1.2 DELAY TOLERANT NETWORK
A DTN is characterized by the lack of end-to-end paths for a given node pair
for extended periods, which poses a completely different design scenario from that
of conventional mobile ad hoc networks (MANETs). Due to the intermittent
connections in DTNs, a node is allowed to buffer a message and wait until a next-hop
node is found, continuing to store and carry the message. This process is
repeated until the message reaches its destination. This model of routing is
significantly different from that employed in MANETs. DTN routing is usually
referred to as encounter-based, store-carry-forward, or mobility-assisted routing,
since nodal mobility serves as a significant factor in the forwarding decision for
each message.
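The store-carry-forward behavior can be illustrated with a toy sketch. The class and method names are hypothetical, and a real DTN node would apply a forwarding policy instead of handing over its whole buffer on every contact.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative store-carry-forward sketch: with no end-to-end path, a DTN
// node buffers messages and hands them over only when a contact occurs.
public class DtnNode {
    final String id;
    final Queue<String> buffer = new ArrayDeque<>();

    DtnNode(String id) { this.id = id; }

    void store(String message) {
        buffer.add(message);                 // no path available: buffer and carry
    }

    /** Called when this node encounters another node (a "contact"). */
    void onContact(DtnNode other) {
        while (!buffer.isEmpty()) {
            other.store(buffer.poll());      // forward carried messages to the next hop
        }
    }
}
```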
4.1.3 DYNAMIC ROUTING
We propose a distance-vector based algorithm for dynamic routing to
improve the security of data transmission. We propose to rely on the existing
distance information exchanged among neighboring nodes (also referred to as
routers in this paper) for seeking routing paths. In many distance-vector-based
implementations, e.g., those based on RIP, each node maintains a routing table in
which each entry is associated with a tuple (t, cost, next hop), whose elements
denote a unique destination node, an estimated minimal cost to send a packet to t,
and the next node along the minimal-cost path to that destination, respectively.
4.1.4. RANDOMIZATION PROCESS
In order to minimize the probability that packets are eavesdropped over a
specific link, a randomization process is applied to packet deliveries. In the first
step of this process, the previous next-hop of the source node s is identified.
Then, the process randomly picks a neighboring node, excluding that previous
hop, as the next hop for the current packet transmission. The exclusion in the
next-hop selection avoids transmitting two consecutive packets over the same
link, and the randomized pick-up prevents attackers from easily predicting the
routing paths of upcoming packet transmissions.
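The two ingredients of the process, random pick-up and exclusion of the previous hop, can be sketched as follows. Names are illustrative; the real scheme would also restrict candidates to hops consistent with the routing table.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the randomization process: pick the next hop uniformly at
// random among the neighbours, excluding the hop used for the previous
// packet so that two consecutive packets never traverse the same link.
public class RandomNextHop {
    public static String pick(List<String> neighbours, String previousHop, Random rng) {
        List<String> candidates = new ArrayList<>(neighbours);
        if (candidates.size() > 1) {
            candidates.remove(previousHop);  // exclusion step (keep at least one option)
        }
        return candidates.get(rng.nextInt(candidates.size())); // randomized pick-up
    }
}
```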
4.1.5. ROUTING TABLE MAINTENANCE
Each node in the network is given a routing table and a link table. We assume
that the link table of each node is constructed by an existing link discovery
protocol, such as the Hello protocol. The construction and maintenance of
routing tables, on the other hand, are revised based on the well-known
Bellman-Ford algorithm.
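The core of the Bellman-Ford computation underlying such routing tables can be sketched as below. This is the textbook algorithm, not the revised protocol itself; the edge-list format and names are illustrative.

```java
import java.util.Arrays;

// Bellman-Ford sketch for routing-table construction: dist[v] converges to
// the minimal cost from the source after n-1 rounds of edge relaxation.
// Each edge {u, v, w} is treated as a bidirectional link of cost w.
public class DistanceVector {
    public static int[] shortestCosts(int n, int[][] edges, int source) {
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE / 2); // "infinity" that cannot overflow on add
        dist[source] = 0;
        for (int round = 0; round < n - 1; round++) {
            for (int[] e : edges) {
                // Relax the link in both directions.
                if (dist[e[0]] + e[2] < dist[e[1]]) dist[e[1]] = dist[e[0]] + e[2];
                if (dist[e[1]] + e[2] < dist[e[0]]) dist[e[0]] = dist[e[1]] + e[2];
            }
        }
        return dist;
    }
}
```

In a distributed setting each node performs the equivalent relaxation using only the distance vectors advertised by its neighbours, which is what RIP-style protocols do.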

4.1.6. LOAD ON THROUGHPUT


We investigate the effect of traffic load on throughput for our proposed DDRA.
The traffic is generated based on variable-bit-rate applications such as file
transfers over the Transmission Control Protocol (TCP). The average packet size
is 1,000 bytes, and source-destination pairs are chosen randomly with uniform
probabilities.

CHAPTER 5
REQUIREMENT SPECIFICATION

5.1 System Requirements


5.1.1 Hardware Requirements
System    : Pentium IV, 2.4 GHz
Hard Disk : 40 GB
RAM       : 512 MB

5.1.2 Software Requirements
Operating System : Windows XP
Technology Used  : Java
Coding Language  : Java (Swing)

5.2 JAVA IN DETAIL


Java Technology
Java technology is both a programming language and a platform.
The Java Programming Language
The Java programming language is a high-level language that can be
characterized by all of the following buzzwords:
Simple
Architecture neutral
Object oriented
Portable
Distributed
High performance
Interpreted
Multithreaded
Robust
Dynamic
Secure
With most programming languages, you either compile or interpret a
program so that you can run it on your computer. The Java programming language
is unusual in that a program is both compiled and interpreted. With the compiler,
you first translate a program into an intermediate language called Java bytecodes:
platform-independent codes interpreted by the interpreter on the Java platform.
The interpreter parses and runs each Java bytecode instruction on the computer.
Compilation happens just once; interpretation occurs each time the program is
executed. You can think of Java bytecodes as the machine-code instructions for the
Java Virtual Machine (Java VM). Every Java interpreter, whether it is a
development tool or a Web browser that can run applets, is an implementation of
the Java VM. Java bytecodes help make "write once, run anywhere" possible. You
can compile your program into bytecodes on any platform that has a Java
compiler, and the bytecodes can then be run on any implementation of the Java VM.
That means that as long as a computer has a Java VM, the same program written in
the Java programming language can run on Windows 2000, a Solaris workstation,
or an iMac.
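As a minimal worked example of this compile-then-interpret cycle (the file and class names are arbitrary):

```java
// Compile and run with:
//   javac Hello.java   -> produces Hello.class (platform-independent bytecodes)
//   java Hello         -> the Java VM interprets (and JIT-compiles) those bytecodes
public class Hello {
    static String greeting() {
        return "Hello from the JVM";
    }

    public static void main(String[] args) {
        System.out.println(greeting());   // prints "Hello from the JVM"
    }
}
```

The same Hello.class file runs unchanged on any platform with a Java VM.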
The Java Platform
A platform is the hardware or software environment in which a
program runs. We've already mentioned some of the most popular platforms,
like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be
described as a combination of the operating system and hardware. The Java
platform differs from most other platforms in that it is a software-only
platform that runs on top of other, hardware-based platforms.
The Java platform has two components:
- The Java Virtual Machine (Java VM)
- The Java Application Programming Interface (Java API)
You've already been introduced to the Java VM. It is the base for the Java
platform and is ported onto various hardware-based platforms.
The Java API is a large collection of ready-made software components
that provide many useful capabilities, such as graphical user interface (GUI)
widgets. The Java API is grouped into libraries of related classes and
interfaces; these libraries are known as packages. The next section, "What
Can Java Technology Do?", highlights the functionality some of the
packages in the Java API provide.
The following figure depicts a program running on the Java
platform. As the figure shows, the Java API and the virtual machine insulate
the program from the hardware.
Native code is code that, after compilation, runs on a specific hardware
platform. As a platform-independent environment, the Java platform can be a bit
slower than native code. However, smart compilers, well-tuned interpreters, and
just-in-time bytecode compilers can bring performance close to that of native
code without threatening portability.

What Can Java Technology Do?


The most common types of programs written in the Java programming
language are applets and applications. If you've surfed the Web, you're
probably already familiar with applets. An applet is a program that adheres
to certain conventions that allow it to run within a Java-enabled browser.
However, the Java programming language is not just for writing cute,
entertaining applets for the Web. The general-purpose, high-level Java
programming language is also a powerful software platform. Using the
generous API, you can write many types of programs.
An application is a standalone program that runs directly on the Java
platform. A special kind of application known as a server serves and
supports clients on a network. Examples of servers are Web servers, proxy
servers, mail servers, and print servers. Another specialized program is a
servlet. A servlet can almost be thought of as an applet that runs on the
server side. Java servlets are a popular choice for building interactive web
applications, replacing the use of CGI scripts. Servlets are similar to applets
in that they are runtime extensions of applications. Instead of working in
browsers, though, servlets run within Java Web servers, configuring or
tailoring the server.
How does the API support all these kinds of programs? It does so with
packages of software components that provide a wide range of
functionality. Every full implementation of the Java platform gives you the
following features:
- The essentials: objects, strings, threads, numbers, input and output,
  data structures, system properties, date and time, and so on.
- Applets: the set of conventions used by applets.
- Networking: URLs, TCP (Transmission Control Protocol) and UDP
  (User Datagram Protocol) sockets, and IP (Internet Protocol)
  addresses.
- Internationalization: help for writing programs that can be localized
  for users worldwide. Programs can automatically adapt to specific
  locales and be displayed in the appropriate language.
- Security: both low level and high level, including electronic
  signatures, public and private key management, access control, and
  certificates.
- Software components: known as JavaBeans, these can plug into
  existing component architectures.
- Object serialization: allows lightweight persistence and
  communication via Remote Method Invocation (RMI).
- Java Database Connectivity (JDBC): provides uniform access to
  a wide range of relational databases.
The Java platform also has APIs for 2D and 3D graphics, accessibility,
servers, collaboration, telephony, speech, animation, and more. The
following figure depicts what is included in the Java 2 SDK.
How Will Java Technology Change My Life?
We cant promise you fame, fortune, or even a job if you learn the Java
programming language. Still, it is likely to make your programs better and requires
less effort than other languages. We believe that Java technology will help you do
the following:
Get started quickly: Although the Java programming language is a
powerful object-oriented language, its easy to learn, especially for
programmers already familiar with C or C++.
Write less code: Comparisons of program metrics (class counts,
method counts, and so on) suggest that a program written in the Java
programming language can be four times smaller than the same
program in C++.
Write better code: The Java programming language encourages good
coding practices, and its garbage collection helps you avoid memory
leaks. Its object orientation, its JavaBeans component architecture,
and its wide-ranging, easily extendible API let you reuse other
people's tested code and introduce fewer bugs.

Develop programs more quickly: Your development time may be as
much as twice as fast as writing the same program in C++. Why?
You write fewer lines of code, and Java is a simpler programming
language than C++.
Avoid platform dependencies with 100% Pure Java: You can keep
your program portable by avoiding the use of libraries written in other
languages. The 100% Pure Java™ Product Certification Program has a
repository of historical process manuals, white papers, brochures, and
similar materials online.
Write once, run anywhere: Because 100% Pure Java programs are
compiled into machine-independent byte codes, they run consistently
on any Java platform.
Distribute software more easily: You can upgrade applets easily
from a central server. Applets take advantage of the feature of
allowing new classes to be loaded on the fly, without recompiling
the entire program.
ODBC
Microsoft Open Database Connectivity (ODBC) is a standard programming
interface for application developers and database systems providers. Before ODBC
became a de facto standard for Windows programs to interface with database
systems, programmers had to use proprietary languages for each database they
wanted to connect to. Now, ODBC has made the choice of the database system
almost irrelevant from a coding perspective, which is as it should be. Application
developers have much more important things to worry about than the syntax that is
needed to port their program from one database to another when business needs
suddenly change.

Through the ODBC Administrator in Control Panel, you can specify the
particular database that is associated with a data source that an ODBC application
program is written to use. Think of an ODBC data source as a door with a name on
it. Each door will lead you to a particular database. For example, the data source
named Sales Figures might be a SQL Server database, whereas the Accounts
Payable data source could refer to an Access database. The physical database
referred to by a data source can reside anywhere on the LAN.
The ODBC system files are not installed on your system by Windows 95.
Rather, they are installed when you set up a separate database application, such as
SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in
Control Panel, it uses a file called ODBCINST.DLL. It is also possible to
administer your ODBC data sources through a stand-alone program called
ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program and each
maintains a separate list of ODBC data sources. From a programming perspective,
the beauty of ODBC is that the application can be written to use the same set of
function calls to interface with any data source, regardless of the database vendor.
The source code of the application doesn't change whether it talks to Oracle or
SQL Server. We only mention these two as an example. There are ODBC drivers
available for several dozen popular database systems. Even Excel spreadsheets and
plain text files can be turned into data sources. The operating system uses the
Registry information written by ODBC Administrator to determine which low-level ODBC drivers are needed to talk to the data source (such as the interface to
Oracle or SQL Server). The loading of the ODBC drivers is transparent to the
ODBC application program. In a client/server environment, the ODBC API even
handles many of the network issues for the application programmer.

The advantages of this scheme are so numerous that you are probably
thinking there must be some catch. The only disadvantage of ODBC is that it isn't
as efficient as talking directly to the native database interface. ODBC has had
many detractors make the charge that it is too slow. Microsoft has always claimed
that the critical factor in performance is the quality of the driver software that is
used. In our humble opinion, this is true. The availability of good ODBC drivers
has improved a great deal recently. And anyway, the criticism about performance is
somewhat analogous to those who said that compilers would never match the
speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives
you the opportunity to write cleaner programs, which means you finish sooner.
Meanwhile, computers get faster every year.
JDBC
In an effort to set an independent database standard API for Java, Sun
Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a
generic SQL database access mechanism that provides a consistent interface to a
variety of RDBMSs. This consistent interface is achieved through the use of plug-in database connectivity modules, or drivers. If a database vendor wishes to have
JDBC support, he or she must provide the driver for each platform that the
database and Java run on.
To gain a wider acceptance of JDBC, Sun based JDBC's framework on
ODBC. As you discovered earlier in this chapter, ODBC has widespread support
on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring
JDBC drivers to market much faster than developing a completely new
connectivity solution.

JDBC was announced in March of 1996. It was released for a 90-day public
review that ended June 8, 1996. Because of user input, the final JDBC v1.0
specification was released soon after.
The remainder of this section will cover enough information about JDBC for you
to know what it is about and how to use it effectively. This is by no means a
complete overview of JDBC. That would fill an entire book.
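A minimal sketch of the typical JDBC usage pattern may help make the following goals concrete. The JDBC URL, table name, and column name below are hypothetical placeholders; a real program would substitute its own driver URL (for example, for SQL Server, Oracle, or the ODBC bridge):

```java
import java.sql.*;

public class JdbcSketch {
    // Hypothetical data-source URL -- the only vendor-specific part.
    static final String URL = "jdbc:odbc:SalesFigures";

    // Build the query text; keeping it in one place emphasizes that the
    // JDBC calls themselves are identical regardless of the database.
    static String listNamesSql(String table) {
        return "SELECT name FROM " + table;
    }

    public static void main(String[] args) throws SQLException {
        // The same three steps work against any vendor's driver:
        // connect, execute, iterate the ResultSet.
        try (Connection con = DriverManager.getConnection(URL);
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(listNamesSql("employees"))) {
            while (rs.next()) {
                System.out.println(rs.getString("name"));
            }
        }
    }
}
```

Switching databases means changing only the URL (and installing the matching driver); the `Connection`, `Statement`, and `ResultSet` code is untouched.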

JDBC Goals
Few software packages are designed without goals in mind. JDBC is no
exception; its many goals drove the development of the API. These goals, in
conjunction with early reviewer feedback, have finalized the JDBC class library
into a solid framework for building database applications in Java.
The goals that were set for JDBC are important. They will give you some
insight as to why certain classes and functionalities behave the way they do. The
eight design goals for JDBC are as follows:
1. SQL Level API
The designers felt that their main goal was to define a SQL interface for
Java. Although not the lowest database interface level possible, it is at a low
enough level for higher-level tools and APIs to be created. Conversely, it is at a
high enough level for application programmers to use it confidently. Attaining
this goal allows future tool vendors to generate JDBC code and to hide
many of JDBC's complexities from the end user.
2. SQL Conformance

SQL syntax varies as you move from database vendor to database vendor. In
an effort to support a wide variety of vendors, JDBC will allow any query
statement to be passed through it to the underlying database driver. This allows
the connectivity module to handle non-standard functionality in a manner that is
suitable for its users.
3. JDBC must be implementable on top of common database interfaces
The JDBC SQL API must sit on top of other common SQL level APIs.
This goal allows JDBC to use existing ODBC level drivers by the use of a
software interface. This interface would translate JDBC calls to ODBC and
vice versa.
4. Provide a Java interface that is consistent with the rest of the Java system
Because of Java's acceptance in the user community thus far, the designers
felt that they should not stray from the current design of the core Java system.
5. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no
exception. Sun felt that the design of JDBC should be very simple, allowing for
only one method of completing a task per mechanism. Allowing duplicate
functionality only serves to confuse the users of the API.
6. Use strong, static typing wherever possible
Strong typing allows for more error checking to be done at compile time;
also, fewer errors appear at runtime.
7. Keep the common cases simple
Because more often than not, the usual SQL calls used by the programmer
are simple SELECTs, INSERTs, DELETEs and UPDATEs, these queries
should be simple to perform with JDBC. However, more complex SQL
statements should also be possible.
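The common cases named in goal 7 map naturally onto `PreparedStatement`. In the sketch below, the table and column names are hypothetical, and the placeholder-counting helper is purely for illustration:

```java
import java.sql.*;

public class CommonSql {
    // Parameterized statements for the four common cases
    // (hypothetical table and columns).
    static final String INSERT = "INSERT INTO jobs (id, status) VALUES (?, ?)";
    static final String SELECT = "SELECT status FROM jobs WHERE id = ?";
    static final String UPDATE = "UPDATE jobs SET status = ? WHERE id = ?";
    static final String DELETE = "DELETE FROM jobs WHERE id = ?";

    // Count the '?' placeholders a statement expects, one per bound value.
    static int placeholders(String sql) {
        int n = 0;
        for (char c : sql.toCharArray()) {
            if (c == '?') n++;
        }
        return n;
    }

    // Typical usage: bind values by position, then execute.
    static void insertJob(Connection con, int id, String status) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(INSERT)) {
            ps.setInt(1, id);        // strong, static typing, per goal 6
            ps.setString(2, status);
            ps.executeUpdate();
        }
    }
}
```

The typed `setInt`/`setString` calls let the compiler catch many binding mistakes, which is exactly the intent of goal 6.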

CHAPTER 6
IMPLEMENTATION AND RESULTS
6.1 CODING
CLIENT
import java.awt.BorderLayout;
import java.awt.Color;
import java.awt.Container;
import java.awt.Font;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.BufferedReader;
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.UnknownHostException;
import java.util.Random;

import javax.swing.ImageIcon;
import javax.swing.JButton;
import javax.swing.JFileChooser;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JOptionPane;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
import javax.swing.JTextField;

import javax.swing.plaf.basic.BasicLookAndFeel;

public class client implements ActionListener {

String event;
String rr;
String rr1;
String rr2;

String firststr;
String secondstr;
String thirdstr;
String firstpckts;
String secondpckts;
String thirdpckts;
String text;
String text1;
String text2;
int i;
int j;
String ftf;
String ftf1;
String ftf2;
int lentf;
int lentf1;
int lentf2;

int p1;
int p2;
int p3;

int q1;
int q2;
int q3;

int r1;
int r2;
int r3;

String rrA;
String rrB;
String rrC;

String len;
int t1len;

String t1str;
String tft1;
String tft2;
String tft3;

String stf;

String ttf;
String one;
String ten;
String fifty;
String hundred;

String twohun;
String threehun;
String fourhun;
String fivehun;
String sixhun;
String sevenhun;
String eighthun;
String ninehun;
String tennhun;

public JButton b1 = new JButton("Browse");

public JButton b2 = new JButton("Split");


public JButton b3 = new JButton("Send");
public JLabel la1 = new JLabel("Select the file :");
public JLabel la2 = new JLabel("File path       :");

public JLabel la3 = new JLabel("File Size (Bits) :");


public JLabel la4 = new JLabel("CLIENT ");
public JLabel c1 = new JLabel();

public JLabel c2 = new JLabel("");


public JTextArea t2 = new JTextArea("");
public JScrollPane sc = new JScrollPane();
public JScrollPane sc1 = new JScrollPane();
public JTextArea t1 = new JTextArea("");
public Font l = new Font("Times New roman", Font.BOLD, 18);
public Font l2 = new Font("Times New roman", Font.BOLD, 13);
public Font l1 = new Font("Times New roman", Font.BOLD, 30);
public Font l3 = new Font("Times New roman", Font.BOLD, 16);
public Font l4 = new Font("tahoma", Font.BOLD, 27);

public JFrame jf;


public Container c;
public JLabel ftr = new JLabel("FILE TRANSFERRING STATUS");
public JLabel routerA = new JLabel();
public JLabel routerB = new JLabel();
public JLabel routerC = new JLabel();

public JLabel fileA = new JLabel();


public JLabel fileB = new JLabel();
public JLabel fileC = new JLabel();

public JLabel subrA = new JLabel();


public JLabel subrB = new JLabel();
public JLabel subrC = new JLabel();

public JLabel serA = new JLabel();


public JLabel serB = new JLabel();
public JLabel serC = new JLabel();

public JButton test = new JButton();


public JLabel test1 = new JLabel("Browse");
public JLabel sen = new JLabel("Send");

public JLabel backgr = new JLabel("Browse");


public JLabel splt = new JLabel("Split");
public JLabel title = new JLabel("GREEN NETWORKING - EFFICIENT PACKET PROCESSING");

client()
{
ImageIcon v1=new
ImageIcon(this.getClass().getResource("button.png"));

b1.setIcon(v1);
b2.setIcon(v1);
b3.setIcon(v1);
ImageIcon back=new
ImageIcon(this.getClass().getResource("background-blue.jpg"));
backgr.setBounds(0,-80,1035,900);
backgr.setIcon(back);

jf = new JFrame("Client");

c = jf.getContentPane();
c.setLayout(null);
jf.setSize(1035,900);

title.setForeground(Color.MAGENTA);
title.setBounds(135,10,900,50);
title.setFont(l4);
b1.setBounds(380,153,120,35);

b1.setForeground(Color.GREEN);
b2.setBounds(380,266,120,35);
splt.setBounds(420,267,120,35);
splt.setForeground(Color.BLACK);
splt.setFont(l);
b3.setBounds(480,630,120,35);
sen.setBounds(520,630,120,35);
sen.setForeground(Color.BLACK);
sen.setFont(l);

la1.setBounds(100,148,150,50);
la1.setForeground(Color.GREEN);
la2.setBounds(100,205,150,50);
la2.setForeground(Color.GREEN);
la3.setBounds(100,263,150,50);
la3.setForeground(Color.GREEN);

la4.setBounds(440,60,150,50);
la4.setForeground(Color.GREEN);
c1.setBounds(280,216,400,35);
c2.setBounds(280,266,100,35);
c2.setFont(l2);
c2.setForeground(Color.MAGENTA);
sc1.setBounds(400,370,100,200);

sc.setBounds(100,370,300,200);
t1.setColumns(20);
t1.setRows(10);
t1.setFont(l3);
t1.setForeground(Color.BLUE);
sc.setViewportView(t1);
t2.setBounds(100,300,300,200);
t2.setColumns(20);
t2.setRows(10);
t2.setFont(l3);
t2.setForeground(Color.BLUE);
sc1.setViewportView(t2);
ftr.setBounds(650,150,350,35);
ftr.setForeground(Color.CYAN);
ftr.setFont(l);
fileA.setBounds(550,230,150,35);
fileA.setForeground(Color.RED);
fileA.setFont(l2);

fileB.setBounds(550,300,150,35);
fileB.setForeground(Color.RED);
fileB.setFont(l2);
fileC.setBounds(550,370,150,35);
fileC.setForeground(Color.RED);
fileC.setFont(l2);

routerA.setBounds(600,230,150,35);
routerA.setForeground(Color.MAGENTA);
routerA.setFont(l2);
routerB.setBounds(600,300,150,35);
routerB.setForeground(Color.MAGENTA);
routerB.setFont(l2);
routerC.setBounds(600,370,150,35);
routerC.setFont(l2);
routerC.setForeground(Color.MAGENTA);
subrA.setBounds(680,230,150,35);
subrA.setForeground(Color.CYAN);
subrA.setFont(l2);
subrB.setBounds(680,300,150,35);
subrB.setForeground(Color.CYAN);
subrB.setFont(l2);
subrC.setBounds(680,370,150,35);
subrC.setForeground(Color.CYAN);
subrC.setFont(l2);
serA.setBounds(790,230,250,35);

serA.setForeground(Color.GREEN);
serA.setFont(l2);
serB.setBounds(790,300,250,35);
serB.setForeground(Color.GREEN);
serB.setFont(l2);
serC.setBounds(790,370,250,35);
serC.setFont(l2);
serC.setForeground(Color.GREEN);
c.add(ftr);

c.add(title);
c.add(fileA);
c.add(fileB);
c.add(fileC);

c.add(routerA);
c.add(routerB);
c.add(routerC);

c.add(subrA);
c.add(subrB);
c.add(subrC);

c.add(serA);
c.add(serB);
c.add(serC);

c.add(la1);
c.add(la2);
c.add(la3);
c.add(la4);
c.add(c1);
c.add(c2);

c.add(sc1,BorderLayout.CENTER);
c.add(sc,BorderLayout.CENTER);
la1.setFont(l);
la2.setFont(l);
la3.setFont(l);
la4.setFont(l1);

test1.setBounds(408,154,100,35);
test.setBounds(500,200,110,35);

test1.setForeground(Color.BLACK);
test1.setFont(l);
test.setFocusPainted(true);
c.add(splt);
b1.setBorderPainted(true);
b1.setContentAreaFilled(false);
b2.setBorderPainted(true);
b2.setContentAreaFilled(false);

b3.setBorderPainted(true);
b3.setContentAreaFilled(false);
c.add(test1);
c.add(sen);
c.add(b1);
c.add(b2);
c.add(b3);

b1.addMouseListener(new MouseAdapter()
{
public void mouseEntered(MouseEvent e)
{
ImageIcon v2=new
ImageIcon(this.getClass().getResource("button.png"));

b1.setIcon(v2);

}
public void mouseExited(MouseEvent e)
{
ImageIcon v=new
ImageIcon(this.getClass().getResource("button3.png"));

b1.setIcon(v);

}
public void mouseClicked(MouseEvent e)
{
}
});

b2.addMouseListener(new MouseAdapter()
{
public void mouseEntered(MouseEvent e)
{
ImageIcon v3=new
ImageIcon(this.getClass().getResource("button.png"));

b2.setIcon(v3);

}
public void mouseExited(MouseEvent e)
{
ImageIcon v5=new
ImageIcon(this.getClass().getResource("button3.png"));

b2.setIcon(v5);

}
public void mouseClicked(MouseEvent e)
{
}
});

b3.addMouseListener(new MouseAdapter()
{
public void mouseEntered(MouseEvent e)
{
ImageIcon v2=new
ImageIcon(this.getClass().getResource("button.png"));

b3.setIcon(v2);

}
public void mouseExited(MouseEvent e)
{
ImageIcon v=new
ImageIcon(this.getClass().getResource("button3.png"));

b3.setIcon(v);

}
public void mouseClicked(MouseEvent e)
{
    // NOTE: buffer, bos, totalbandwidthA, c1b, c2b, tflda, tfld1a,
    // tfld2a, tfldab, tfld1ab and tfld2ab are fields declared in the
    // full listing; only an excerpt is reproduced here.
    serC.setText("");
    c1b.setText(totalbandwidthA + "(Kbs/s)");

    String str = buffer.toString();
    int total = str.length();
    String substr = str.substring(0, 5);
    String subtf = str.substring(5, total);

    c1.setText("");
    c2.setText("");
    t1.setText("");

    tflda.setText("");
    tfld1a.setText("");
    tfld2a.setText("");
    t2.setText(subtf);

    Random r2 = new Random();
    int p1 = (Math.abs(r2.nextInt()) % 20) + 10;
    System.out.println(" P1 :" + p1);
    String p1text = Integer.toString(p1);

    //tfld1a.setText(p1text.toString());
    tfldab.setText("127.0.0.1");
    tfld1ab.setText(p1text);
    tfld2ab.setText("This Link have" + "\n" + "Destination Node");

    String lenstr = str.substring(0, str.length());
    int len = subtf.length();
    String strlen = Integer.toString(len);
    c2b.setText(strlen);

    String first = t2.getText();
    int lenf = first.length();
    String sep = first.substring(0, first.length());

    String rutname = p1text;
    rutname += "SUBROUTER A2";
    String rnstr = rutname + str;
    System.out.println(" Router name and bandwidth =" + rnstr);

    byte[] byteArray;
    Socket client = null;
    try
    {
        client = new Socket("127.0.0.1", 6000);
        bos = new BufferedOutputStream(client.getOutputStream());
        byteArray = rnstr.getBytes();
        bos.write(byteArray, 0, byteArray.length);
        bos.flush();
        bos.close();
        client.close();
    }
    catch (UnknownHostException e1)
    {
        e1.printStackTrace();
    }
    catch (IOException e1)
    {}
    finally
    {}

    String subA2 = "SUBROUTER A2";
    try
    {
        client = new Socket("127.0.0.1", 2233);
        bos = new BufferedOutputStream(client.getOutputStream());
        byteArray = subA2.getBytes();
        bos.write(byteArray, 0, byteArray.length);
        bos.flush();
        bos.close();
        client.close();
    }
    catch (UnknownHostException e1)
    {
        e1.printStackTrace();
    }
    catch (IOException e1)
    {}
    finally
    {}
}
});
}
public void actionPerformed(ActionEvent e)
{
    // 'b' in the original listing is undeclared; b1 is assumed here.
    if (e.getSource() == b1)
    {}
}
}

6.2 SCREEN SHOTS


6.2.1 Client:

Figure 6.2.1 Client

6.2.2 Router

Figure 6.2.2 Router

6.2.3 Sub Router

Figure 6.2.3 Sub Router

CHAPTER 7
CONCLUSION AND FUTURE WORK
In this approach, a resource management framework is proposed to reduce
the power consumption and minimize the total cost of a data center to support
green computing. This framework is composed of power and workload
management, and capacity planning schemes. The power and workload
management is used by a batch scheduler of the data center to switch the server
operation mode (i.e., active or sleep). With the power and workload management,
the capacity planning is performed to obtain the optimal number of servers in the
data center over multiple periods. To obtain the optimal decision in each period,
the deterministic and stochastic optimization models have been proposed. The
deterministic optimization model based on multi-stage assignment problem can be
applied when the exact information about the job processing demand (i.e., job
arrival rate) is known. On the other hand, the stochastic optimization model based
on multi-stage stochastic programming is applied when only the probability
distribution of the job processing demand is available. While the action of power
and workload management is performed on a short-term basis (e.g., a fraction of a
minute), the decision of capacity planning (i.e., expanding the size of the data
center) is made on a long-term basis (i.e., every few months). With the proposed joint short-term
and long-term optimization models, the performance evaluation has shown that the
total cost of the data center can be minimized while the performance requirements
are met.
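As a rough illustration of the capacity planning idea, the sketch below enumerates candidate server counts and keeps the one that minimizes expected cost over demand scenarios. The cost figures and the brute-force search are toy assumptions for illustration, not the stochastic programming formulation used in this work:

```java
public class CapacitySketch {
    // Toy parameters (assumptions, not from the thesis): per-server cost
    // per period, per-server service capacity (jobs per period), and
    // penalty per blocked job.
    static final double SERVER_COST = 100.0;
    static final double CAPACITY_PER_SERVER = 50.0;
    static final double BLOCKING_PENALTY = 5.0;

    // Expected cost of installing m servers under demand scenarios
    // (jobs per period) occurring with the given probabilities.
    static double expectedCost(int m, double[] demand, double[] prob) {
        double cost = SERVER_COST * m;
        for (int s = 0; s < demand.length; s++) {
            double blocked = Math.max(0.0, demand[s] - m * CAPACITY_PER_SERVER);
            cost += prob[s] * BLOCKING_PENALTY * blocked;
        }
        return cost;
    }

    // Enumerate candidate sizes and keep the cheapest -- a stand-in for
    // solving the stochastic program exactly.
    static int bestServerCount(int maxServers, double[] demand, double[] prob) {
        int best = 0;
        double bestCost = Double.MAX_VALUE;
        for (int m = 0; m <= maxServers; m++) {
            double c = expectedCost(m, demand, prob);
            if (c < bestCost) { bestCost = c; best = m; }
        }
        return best;
    }
}
```

Under these toy numbers, extra servers are added as long as the expected blocking penalty they avoid exceeds their cost, which mirrors the trade-off the optimization model balances over multiple periods.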
For future work, a stochastic optimization model for the resource
management framework with the virtualization technology and the dynamic
pricing of electricity in a smart grid will be developed. The computing resource
management and capacity planning for a cloud provider owning multiple data
centers will be considered.

CHAPTER 8
REFERENCES
[1] R. Harmon, H. Demirkan, N. Auseklis, and M. Reinoso, "From Green
Computing to Sustainable IT: Developing a Sustainable Service Orientation," in
Proceedings of the 43rd Hawaii International Conference on System Sciences
(HICSS), 2010.
[2] D. Niyato, S. Chaisiri, and B. S. Lee, "Optimal power management for server
farm to support green computing," in Proceedings of the 2009 9th IEEE/ACM
International Symposium on Cluster Computing and the Grid (CCGrid), pp. 84-91,
2009.
[3] M. L. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic
Programming, Wiley-Interscience, April 1994.
[4] J. R. Birge and F. Louveaux, Introduction to Stochastic Programming, Springer,
February 2000.
[5] Y. Chen, A. Das, W. Qin, A. Sivasubramaniam, Q. Wang, and N. Gautam,
"Managing server energy and operational costs in hosting centers," in ACM
SIGMETRICS Performance Evaluation Review, vol. 33, no. 1, pp. 303-314, June 2005.
[6] R. Nathuji and K. Schwan, "VirtualPower: coordinated power management in
virtualized enterprise systems," in Proceedings of the ACM SIGOPS Symposium on
Operating Systems Principles (SOSP), pp. 265-278, 2007.
[7] A. Verma, P. Ahuja, and A. Neogi, "Power-aware dynamic placement of HPC
applications," in Proceedings of the 22nd Annual International Conference on
Supercomputing (ICS), pp. 175-184, 2008.
