TABLE OF CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF ABBREVIATIONS

1  INTRODUCTION
   1.1  Energy-Aware Network Performance
   1.2  Capacity Planning
   1.3  The Proposed Queuing Model

2  LITERATURE SURVEY
   2.1  Green Computing Technologies
   2.2  Green Support for PC-based Software Router
   2.3  Energy-Aware Load Balancing for Parallel Packet Processing
   2.4  Load Balancing for Parallel Forwarding
   2.6  Existing System
   2.7  Proposed System

3  SYSTEM ORGANIZATION
   3.1  System Architecture Design
   3.2  Feature Extraction
   3.3  Data Flow Diagram

4  PROPOSED SYSTEM
   4.1  Modules
   4.1.1  Network Module
   4.1.2  Delay Tolerant Network
   4.1.3  Dynamic Routing

5  REQUIREMENT SPECIFICATION
   5.1  System Requirements
   5.1.1  Hardware Required
   5.1.2  Software Required
   5.2  Java in Detail

6  IMPLEMENTATION AND RESULTS
   6.1  Coding
   6.2  Screen Shots

REFERENCES
CHAPTER 1
INTRODUCTION
In the last few years power consumption has shown a growing and alarming
trend in all industrial sectors, and particularly in Information and Communication
Technology (ICT). Public organizations, Internet Service Providers (ISPs) and
telecom operators have reported alarming statistics of network energy requirements
and of the related carbon footprint. The study of power-saving network devices has
in recent years been based on the possibility of adapting network energy
requirements to the actual traffic load. Indeed, it is well known that network links
and devices are generally provisioned for busy or rush-hour load, which typically
exceeds their average utilization by a wide margin. Although this margin is seldom
reached, network devices are designed on its basis and, consequently, their power
consumption remains more or less constant even in the presence of fluctuating
traffic loads.
Thus, the key to any advanced power-saving criterion lies in dynamically
adapting the resources provided at the network, link or equipment level to current
traffic requirements and loads. In this respect, current green computing approaches
are based on numerous energy-related criteria, applied in particular to network
equipment and component interfaces. Despite the great progress of optics in
transmission and switching, it is well known that today's networks still rely very
strongly on electronics.
With the constant increase of power cost and awareness of global warming,
the green computing concept has been introduced to improve the efficiency of
computing resource usage. Power management is one of the major approaches in
green computing to minimize power consumption. In a distributed system, power
management is generally implemented at the network level, where the operation
mode of a computing resource (e.g., a server) can be controlled remotely by a
centralized controller (e.g., a batch scheduler).
This power management has to operate jointly with workload management to
achieve optimal performance. In this case, the workload (i.e., jobs) can be
processed by the local data center (i.e., the in-house computing resource) or sent
to a third-party processing service (e.g., cloud computing) when the computing
resource of the data center is heavily occupied (e.g., due to an instantaneous peak
workload). With joint power and workload management, not only can power
consumption be reduced, but the performance requirements of workload processing
can also be met.
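The offloading decision just described (process locally when an in-house server is free, otherwise send the job to a third-party service) can be sketched as follows. This is a minimal illustration, not the report's actual scheduler; the class and method names are hypothetical.

```java
// Hypothetical sketch: route a job to the local data center when a server
// is free, otherwise offload it to an outsourced (cloud) service.
public class WorkloadDispatcher {
    private final int localServers;   // number of in-house servers
    private int busyServers = 0;      // servers currently processing jobs

    public WorkloadDispatcher(int localServers) {
        this.localServers = localServers;
    }

    /** Returns "LOCAL" if an in-house server is free, otherwise "CLOUD". */
    public String dispatch() {
        if (busyServers < localServers) {
            busyServers++;
            return "LOCAL";
        }
        return "CLOUD";   // instantaneous peak workload is outsourced
    }

    /** Frees one in-house server when its job completes. */
    public void jobFinished() {
        if (busyServers > 0) busyServers--;
    }

    public static void main(String[] args) {
        WorkloadDispatcher d = new WorkloadDispatcher(2);
        System.out.println(d.dispatch()); // LOCAL
        System.out.println(d.dispatch()); // LOCAL
        System.out.println(d.dispatch()); // CLOUD (both servers busy)
    }
}
```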
For the owner of a data center, capacity planning is an important issue.
Given the power and workload management, an optimal size of the data center
(e.g., the optimal number of servers) has to be determined so that the long-term
cost is minimized while the processing capacity remains sufficient for the workload
submitted by the users of the data center. One of the most challenging issues in
designing the power and workload management as well as the capacity planning
schemes is the uncertainty of the workload: processing jobs are generated by the
users randomly and are unknown a priori. Therefore, to achieve the objective and
to meet all constraints of the distributed system, a stochastic optimization model
is required.
However, the works in the literature related to capacity planning have
ignored optimization under uncertainty of workload demand. Although some works
proposed optimization algorithms for capacity planning of distributed systems, the
impact of power and workload management was not jointly considered. In prior
work, a capacity planning optimization model with a power and workload
management scheme was formulated to determine the optimal number of servers
required in new data centers, to locate the new data centers in appropriate areas,
and to distribute computing jobs to the data centers so that energy costs are
minimized. However, that work did not take the uncertainty of workload demand
into account.
1.3 THE PROPOSED QUEUING MODEL
Under the assumption that every packet composing a batch is sent to a single
pipeline, the model we propose corresponds to an Mx/D/1/SET queuing system. For
each pipeline, packets arrive in batches with exponentially distributed inter-arrival
times at a given average rate, and are served at a fixed rate. In order to take the
LPI transition periods into account, the model considers deterministic server setup
times. When the system becomes empty, the server is turned off. The system
becomes operational again only when a batch of packets arrives; at that point in
time, service can begin only after a setup interval has elapsed. The next three
sub-sections introduce the analytical model and its specialization to our case. We
derive the PGF and the stationary probabilities of the Mx/D/1/SET queuing system,
and we express the server's idle and busy periods. Then, we propose an
approximation for the packet loss probability in the case of a finite buffer of size
N, and we derive network- and energy-aware performance indexes.
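The sleep/setup dynamics described above (server off when empty, deterministic setup paid before serving a newly arrived batch) can be illustrated with a toy trace-driven sketch. This is not the analytical Mx/D/1/SET model itself, only a deterministic walkthrough of its service timeline; all names and the trace are illustrative assumptions.

```java
// Toy sketch of the Mx/D/1/SET timeline: when the queue drains the server
// sleeps; a newly arriving batch must first wait out a deterministic setup
// time before service at a fixed per-packet service time resumes.
public class SetupQueueDemo {
    /** Returns the time at which the last packet of the trace finishes.
     *  arrival[i] = arrival time of batch i, size[i] = packets in batch i. */
    public static double finishTime(double[] arrival, int[] size,
                                    double serviceTime, double setupTime) {
        double serverFreeAt = 0.0;  // when the server can next start a packet
        boolean sleeping = true;    // server starts switched off
        for (int i = 0; i < arrival.length; i++) {
            if (sleeping || arrival[i] >= serverFreeAt) {
                // queue drained before this batch arrived: pay the setup time
                serverFreeAt = arrival[i] + setupTime;
            }
            serverFreeAt += size[i] * serviceTime; // serve batch at fixed rate
            sleeping = false;
            // the server falls asleep again if the next batch comes after drain
            if (i + 1 < arrival.length && arrival[i + 1] >= serverFreeAt) {
                sleeping = true;
            }
        }
        return serverFreeAt;
    }

    public static void main(String[] args) {
        // two batches; the second arrives after the queue drained,
        // so it pays the LPI setup time again:
        // batch 1: 0 + 0.5 setup + 3 packets = 3.5; batch 2: 10 + 0.5 + 2 = 12.5
        double t = finishTime(new double[]{0.0, 10.0}, new int[]{3, 2}, 1.0, 0.5);
        System.out.println(t); // 12.5
    }
}
```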
The power and workload management decisions are made on a short-term
basis for every time slot (e.g., a fraction of a minute). Alternatively, the
capacity planning decision is made by the data center owner to determine the size
of the data center (i.e., the number of servers in the center) on a long-term basis
(e.g., every few months).
CHAPTER 2
LITERATURE SURVEY
A literature survey is an important step in the software development process.
Before developing a tool it is necessary to determine the time factor, the economy
and the company's strength. Once these requirements are satisfied, the next step is
to determine which operating system and language can be used for developing the
tool. Once programmers start building the tool they need a great deal of external
support, which can be obtained from senior programmers, from books or from
websites. Before building the system, the above considerations are taken into
account for developing the proposed system.
The project development sector considers and fully surveys all the needs
required for developing the project. Before developing the tools and the associated
design, it is necessary to determine and survey the time factor, resource
requirements, manpower, economy and company strength. Once these things are
satisfied and fully surveyed, the next step is to determine the software
specifications of the respective system, such as what type of operating system the
project would require and what software is needed to proceed with the next steps
of developing the tools and the associated operations.
2.1 GREEN COMPUTING TECHNOLOGIES:
A procedure based on the model has been proposed and experimentally
evaluated. The procedure aims at dynamically adapting the energy-aware device
configuration to minimize energy consumption, while coping with incoming traffic
volumes and meeting network performance constraints.
2.2 GREEN SUPPORT FOR PC-BASED SOFTWARE ROUTER:
The scheme divides flows into two categories, the aggressive and the normal,
and applies different scheduling policies to the two classes of flows. Compared
with most state-of-the-art parallel forwarding schemes, our work exploits
flow-level Internet traffic characteristics.
CHAPTER 3
SYSTEM ARCHITECTURE DESIGN
[Data flow diagram.
Level 0 — the Client communicates with the Server.
Level 1 — the Client analyzes the bandwidth.
Level 2 — the Client transmits the packets to the Router, which analyzes the
received packet sizes and the capability of the sub-routers.]
CHAPTER 4
PROPOSED SYSTEM
4.1 MODULES
4.1.1. Network Module
4.1.2. Delay Tolerant Network
4.1.3. Dynamic Routing
4.1.4. Randomization Process
4.1.5. Routing Table Maintenance
4.1.6. Load on Throughput
4.1.1 NETWORK MODULE
Client-server computing or networking is a distributed application
architecture that partitions tasks or workloads between service providers (servers)
and service requesters, called clients. Often clients and servers operate over a
computer network on separate hardware. A server machine is a high-performance
host that runs one or more server programs which share its resources with clients.
A client, in contrast, does not share any of its resources; clients initiate
communication sessions with servers, which await (listen for) incoming requests.
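The pattern above (server listens, client initiates the session) can be shown with a minimal socket round trip, in the same spirit as the socket code in Chapter 6. This is an illustrative sketch only; the class name, port choice and "echo: " reply format are assumptions.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal client-server illustration: the server awaits a request,
// the client initiates the session and reads the reply.
public class EchoDemo {
    /** Starts a one-shot echo server, connects a client, returns the reply. */
    public static String roundTrip(String msg) throws Exception {
        ServerSocket server = new ServerSocket(0);            // any free port
        Thread t = new Thread(() -> {
            try (Socket s = server.accept();                  // server awaits
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println("echo: " + in.readLine());        // serve one request
            } catch (IOException ignored) { }
        });
        t.start();
        String reply;
        try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
             PrintWriter out = new PrintWriter(client.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
            out.println(msg);                                 // client initiates
            reply = in.readLine();
        }
        t.join();
        server.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello")); // prints "echo: hello"
    }
}
```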
4.1.2 DELAY TOLERANT NETWORK
A DTN is characterized by the lack of end-to-end paths for a given node pair
for extended periods, which poses a completely different design scenario from that
of conventional mobile ad hoc networks (MANETs). Due to the intermittent
connections in DTNs, a node is allowed to buffer a message and wait until a next-
hop node is found to continue storing and carrying the message. This process is
repeated until the message reaches its destination. This model of routing is
significantly different from that employed in MANETs. DTN routing is usually
referred to as encounter-based, store-carry-forward, or mobility-assisted routing,
because nodal mobility serves as a significant factor in the forwarding decision
for each message.
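The store-carry-forward behaviour described above can be sketched as a toy node that buffers messages and hands them over only on an encounter. This is a hypothetical illustration (class and message format are assumptions), not a real DTN routing protocol.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy store-carry-forward sketch: a node buffers messages and either
// delivers them when it meets the destination or hands them to the
// encountered node to carry further.
public class DtnNode {
    final String id;
    final List<String> buffer = new ArrayList<>(); // messages as "dest:payload"

    DtnNode(String id) { this.id = id; }

    void store(String dest, String payload) { buffer.add(dest + ":" + payload); }

    /** Called when this node encounters another node. */
    List<String> encounter(DtnNode other) {
        List<String> delivered = new ArrayList<>();
        Iterator<String> it = buffer.iterator();
        while (it.hasNext()) {
            String m = it.next();
            if (m.startsWith(other.id + ":")) {   // other node is the destination
                delivered.add(m.substring(other.id.length() + 1));
            } else {
                other.buffer.add(m);              // hand over: store-carry-forward
            }
            it.remove();
        }
        return delivered;
    }

    public static void main(String[] args) {
        DtnNode a = new DtnNode("A"), b = new DtnNode("B"), c = new DtnNode("C");
        a.store("C", "hello");              // destined for C, but A only meets B
        a.encounter(b);                     // B now carries the message
        System.out.println(b.encounter(c)); // B meets destination C: [hello]
    }
}
```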
4.1.3 DYNAMIC ROUTING
We propose a distance-vector-based algorithm for dynamic routing to improve
the security of data transmission. We propose to rely on existing distance
information exchanged among neighboring nodes (also referred to as routers in
this paper) to seek routing paths. In many distance-vector-based implementations,
e.g., those based on RIP, each node maintains a routing table in which each entry
is associated with a tuple (destination, cost, next hop), denoting some unique
destination node t, an estimated minimal cost to send a packet to t, and the next
node along the minimal-cost path to that destination node, respectively.
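The per-entry tuple described above can be sketched as a small data structure. This is a minimal, hypothetical illustration of a RIP-style table, not the project's actual implementation; the class and method names are assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a distance-vector routing table: each destination t maps to
// (estimated minimal cost, next hop on the minimal-cost path).
public class RoutingTable {
    static final class Entry {
        final int cost;          // estimated minimal cost to reach dest
        final String nextHop;    // next node along the minimal-cost path
        Entry(int cost, String nextHop) { this.cost = cost; this.nextHop = nextHop; }
    }

    private final Map<String, Entry> table = new HashMap<>();

    /** Accepts an advertised route only if it improves the estimated cost. */
    public boolean update(String dest, int cost, String nextHop) {
        Entry cur = table.get(dest);
        if (cur == null || cost < cur.cost) {
            table.put(dest, new Entry(cost, nextHop));
            return true;
        }
        return false;
    }

    public String nextHop(String dest) {
        Entry e = table.get(dest);
        return e == null ? null : e.nextHop;
    }

    public static void main(String[] args) {
        RoutingTable rt = new RoutingTable();
        rt.update("t", 5, "B");
        rt.update("t", 3, "C");              // cheaper path replaces the entry
        System.out.println(rt.nextHop("t")); // prints "C"
    }
}
```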
4.1.4. RANDOMIZATION PROCESS
In order to minimize the probability that packets are eavesdropped over a
specific link, a randomization process for packet deliveries is adopted. In this
process, the previous next hop for the source node s is identified in the first
step. Then, the process randomly picks a neighboring node as the next hop for the
current packet transmission. The exclusion in the next-hop selection avoids
transmitting two consecutive packets over the same link, and the randomized pick
prevents attackers from easily predicting the routing paths of upcoming packets.
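The two-step process above (exclude the previous hop, then pick uniformly at random among the remaining neighbors) can be sketched as follows. The class name, seed and single-neighbor fallback are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch of the randomized delivery step: pick the next hop uniformly from
// the neighbor set, excluding the hop used for the previous packet so that
// two consecutive packets never traverse the same link.
public class RandomizedForwarder {
    private final Random rng;

    public RandomizedForwarder(long seed) { this.rng = new Random(seed); }

    public String pickNextHop(List<String> neighbors, String previousHop) {
        List<String> candidates = new ArrayList<>(neighbors);
        candidates.remove(previousHop);               // exclusion step
        if (candidates.isEmpty()) {
            return previousHop;                       // single-neighbor fallback
        }
        return candidates.get(rng.nextInt(candidates.size()));
    }

    public static void main(String[] args) {
        RandomizedForwarder f = new RandomizedForwarder(42);
        List<String> nbrs = List.of("R1", "R2", "R3");
        String prev = null;
        for (int i = 0; i < 5; i++) {
            String next = f.pickNextHop(nbrs, prev);  // never equals prev
            System.out.println("packet " + i + " -> " + next);
            prev = next;
        }
    }
}
```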
4.1.5. ROUTING TABLE MAINTENANCE
Each node in the network is given a routing table and a link table. We assume
that the link table of each node is constructed by an existing link discovery
protocol, such as the Hello protocol. The construction and maintenance of routing
tables, on the other hand, are revised based on the well-known Bellman-Ford
algorithm.
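Routing-table construction over a link table can be sketched with classic Bellman-Ford relaxation. This is the textbook algorithm, not the project's revised variant; class and record names are illustrative.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: Bellman-Ford over a link table of (u, v, cost) edges; dist[t]
// converges to the minimal cost from the source to each destination t.
public class BellmanFordDemo {
    record Link(String u, String v, int cost) {}

    public static Map<String, Integer> distances(String src, List<Link> links) {
        Set<String> nodes = new HashSet<>();
        for (Link l : links) { nodes.add(l.u()); nodes.add(l.v()); }
        Map<String, Integer> dist = new HashMap<>();
        for (String n : nodes) dist.put(n, Integer.MAX_VALUE);
        dist.put(src, 0);
        for (int i = 1; i < nodes.size(); i++) {       // |V|-1 relaxation rounds
            for (Link l : links) {
                int du = dist.get(l.u());
                if (du != Integer.MAX_VALUE && du + l.cost() < dist.get(l.v())) {
                    dist.put(l.v(), du + l.cost());    // relax edge u -> v
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        List<Link> links = List.of(new Link("A", "B", 4), new Link("A", "C", 1),
                                   new Link("C", "B", 2), new Link("B", "D", 1));
        // minimal costs from A: A=0, C=1, B=3 (via C), D=4
        System.out.println(distances("A", links));
    }
}
```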
CHAPTER 5
REQUIREMENT SPECIFICATION
Hard Disk        : 40 GB
RAM              : 512 MB
Operating System : Windows XP
Technology Used  : Java
Coding Language  : Java Swing
5.2 JAVA IN DETAIL
Java source code is first compiled into byte codes, which are later executed.
You can think of Java byte codes as the machine code instructions for the Java
Virtual Machine (Java VM). Every Java interpreter, whether it is a development
tool or a Web browser that can run applets, is an implementation of the Java VM.
Java byte codes help make "write once, run anywhere" possible. You can compile
your program into byte codes on any platform that has a Java compiler. The byte
codes can then be run on any implementation of the Java VM. That means that as
long as a computer has a Java VM, the same program written in the Java
programming language can run on Windows 2000, a Solaris workstation, or an iMac.
The Java Platform
A platform is the hardware or software environment in which a
program runs. We've already mentioned some of the most popular platforms,
like Windows 2000, Linux, Solaris, and MacOS. Most platforms can be
described as a combination of the operating system and hardware. The Java
platform differs from most other platforms in that it's a software-only
platform that runs on top of other, hardware-based platforms.
The Java platform has two components:
The Java Virtual Machine (Java VM)
The Java Application Programming Interface (Java API)
You've already been introduced to the Java VM. It's the base for the Java
platform and is ported onto various hardware-based platforms.
The Java API is a large collection of ready-made software components
that provide many useful capabilities, such as graphical user interface (GUI)
widgets. The Java API is grouped into libraries of related classes and
interfaces; these libraries are known as packages. The next section, "What
Can Java Technology Do?", highlights the functionality some of the
packages in the Java API provide.
The following figure depicts a program that's running on the Java
platform. As the figure shows, the Java API and the virtual machine insulate
the program from the hardware.
Native code is code that, after compilation, runs on a specific hardware
platform. As a platform-independent environment, the Java platform can be a bit
slower than native code. However, smart compilers, well-tuned interpreters, and
just-in-time byte code compilers can bring performance close to that of native
code without threatening portability.
Object serialization: allows lightweight persistence and communication via
Remote Method Invocation (RMI).
Through the ODBC Administrator in Control Panel, you can specify the
particular database that is associated with a data source that an ODBC application
program is written to use. Think of an ODBC data source as a door with a name on
it. Each door will lead you to a particular database. For example, the data source
named Sales Figures might be a SQL Server database, whereas the Accounts
Payable data source could refer to an Access database. The physical database
referred to by a data source can reside anywhere on the LAN.
The ODBC system files are not installed on your system by Windows 95.
Rather, they are installed when you set up a separate database application, such as
SQL Server Client or Visual Basic 4.0. When the ODBC icon is installed in
Control Panel, it uses a file called ODBCINST.DLL. It is also possible to
administer your ODBC data sources through a stand-alone program called
ODBCADM.EXE. There is a 16-bit and a 32-bit version of this program and each
maintains a separate list of ODBC data sources. From a programming perspective,
the beauty of ODBC is that the application can be written to use the same set of
function calls to interface with any data source, regardless of the database vendor.
The source code of the application doesn't change whether it talks to Oracle or
SQL Server. We only mention these two as an example. There are ODBC drivers
available for several dozen popular database systems. Even Excel spreadsheets and
plain text files can be turned into data sources. The operating system uses the
Registry information written by ODBC Administrator to determine which low-level
ODBC drivers are needed to talk to the data source (such as the interface to
Oracle or SQL Server). The loading of the ODBC drivers is transparent to the
ODBC application program. In a client/server environment, the ODBC API even
handles many of the network issues for the application programmer.
The advantages of this scheme are so numerous that you are probably
thinking there must be some catch. The only disadvantage of ODBC is that it isn't
as efficient as talking directly to the native database interface. ODBC has had
many detractors make the charge that it is too slow. Microsoft has always claimed
that the critical factor in performance is the quality of the driver software that is
used. In our humble opinion, this is true. The availability of good ODBC drivers
has improved a great deal recently. And anyway, the criticism about performance is
somewhat analogous to those who said that compilers would never match the
speed of pure assembly language. Maybe not, but the compiler (or ODBC) gives
you the opportunity to write cleaner programs, which means you finish sooner.
Meanwhile, computers get faster every year.
JDBC
In an effort to set an independent database standard API for Java; Sun
Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a
generic SQL database access mechanism that provides a consistent interface to a
variety of RDBMSs. This consistent interface is achieved through the use of
plug-in database connectivity modules, or drivers. If a database vendor wishes to have
JDBC support, he or she must provide the driver for each platform that the
database and Java run on.
To gain a wider acceptance of JDBC, Sun based JDBC's framework on
ODBC. As you discovered earlier in this chapter, ODBC has widespread support
on a variety of platforms. Basing JDBC on ODBC will allow vendors to bring
JDBC drivers to market much faster than developing a completely new
connectivity solution.
JDBC was announced in March of 1996. It was released for a 90-day public
review that ended June 8, 1996. Because of user input, the final JDBC v1.0
specification was released soon after.
The remainder of this section will cover enough information about JDBC for you
to know what it is about and how to use it effectively. This is by no means a
complete overview of JDBC. That would fill an entire book.
JDBC Goals
Few software packages are designed without goals in mind. JDBC is one
whose many goals drove the development of the API. These goals, in conjunction
with early reviewer feedback, have finalized the JDBC class library into a solid
framework for building database applications in Java.
The goals that were set for JDBC are important. They will give you some
insight as to why certain classes and functionalities behave the way they do. The
eight design goals for JDBC are as follows:
1. SQL Level API
The designers felt that their main goal was to define a SQL interface for
Java. Although not the lowest database interface level possible, it is at a low
enough level for higher-level tools and APIs to be created. Conversely, it is at a
high enough level for application programmers to use it confidently. Attaining
this goal allows future tool vendors to generate JDBC code and to hide
many of JDBC's complexities from the end user.
2. SQL Conformance
SQL syntax varies as you move from database vendor to database vendor. In
an effort to support a wide variety of vendors, JDBC will allow any query
statement to be passed through it to the underlying database driver. This allows
the connectivity module to handle non-standard functionality in a manner that is
suitable for its users.
3. JDBC must be implementable on top of common database interfaces
The JDBC SQL API must sit on top of other common SQL level APIs.
This goal allows JDBC to use existing ODBC level drivers by the use of a
software interface. This interface would translate JDBC calls to ODBC and
vice versa.
4. Provide a Java interface that is consistent with the rest of the Java system
Because of Java's acceptance in the user community thus far, the designers
felt that they should not stray from the current design of the core Java system.
5. Keep it simple
This goal probably appears in all software design goal listings. JDBC is no
exception. Sun felt that the design of JDBC should be very simple, allowing for
only one method of completing a task per mechanism. Allowing duplicate
functionality only serves to confuse the users of the API.
6. Use strong, static typing wherever possible
Strong typing allows more error checking to be done at compile time;
also, fewer errors appear at runtime.
7. Keep the common cases simple
Because more often than not, the usual SQL calls used by the programmer
are simple SELECTs, INSERTs, DELETEs and UPDATEs, these queries
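The "common case" above (a simple SELECT passed through to whatever driver is registered) can be sketched as follows. This is a hedged illustration: the table and column names are hypothetical, and actually opening a Connection requires a vendor driver (e.g., via the ODBC bridge) on the classpath, so the main method only shows the SQL string that would be sent.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// Sketch of common-case JDBC usage: the same calls work against any
// vendor's database once its driver is registered.
public class JdbcSketch {
    /** Builds the plain SQL string that JDBC passes through to the driver. */
    public static String buildSelect(String column, String table) {
        return "SELECT " + column + " FROM " + table;
    }

    /** Runs the query over any JDBC connection, whatever the vendor. */
    public static List<String> fetchColumn(Connection con, String column,
                                           String table) throws SQLException {
        List<String> values = new ArrayList<>();
        try (Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(buildSelect(column, table))) {
            while (rs.next()) {
                values.add(rs.getString(column)); // read one row at a time
            }
        }
        return values;
    }

    public static void main(String[] args) {
        // no driver is assumed here; we only show the SQL that would be sent
        System.out.println(buildSelect("name", "customers"));
    }
}
```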
CHAPTER 6
IMPLEMENTATION AND RESULTS
6.1 CODING
CLIENT
import java.awt.BorderLayout;
import java.awt.Color;
import java.awt.Container;
import java.awt.Font;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.BufferedReader;
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.UnknownHostException;
import java.util.Random;
import javax.swing.ImageIcon;
import javax.swing.JButton;
import javax.swing.JFileChooser;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JOptionPane;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
import javax.swing.JTextField;
import javax.swing.plaf.basic.BasicLookAndFeel;
String event;
String rr;
String rr1;
String rr2;
String firststr;
String secondstr;
String thirdstr;
String firstpckts;
String secondpckts;
String thirdpckts;
String text;
String text1;
String text2;
int i;
int j;
String ftf;
String ftf1;
String ftf2;
int lentf;
int lentf1;
int lentf2;
int p1;
int p2;
int p3;
int q1;
int q2;
int q3;
int r1;
int r2;
int r3;
String rrA;
String rrB;
String rrC;
String len;
int t1len;
String t1str;
String tft1;
String tft2;
String tft3;
String stf;
String ttf;
String one;
String ten;
String fifty;
String hundred;
String twohun;
String threehun;
String fourhun;
String fivehun;
String sixhun;
String sevenhun;
String eighthun;
String ninehun;
String tennhun;
client()
{
ImageIcon v1=new
ImageIcon(this.getClass().getResource("button.png"));
b1.setIcon(v1);
b2.setIcon(v1);
b3.setIcon(v1);
ImageIcon back=new
ImageIcon(this.getClass().getResource("background-blue.jpg"));
backgr.setBounds(0,-80,1035,900);
backgr.setIcon(back);
jf = new JFrame("Client");
c = jf.getContentPane();
c.setLayout(null);
jf.setSize(1035,900);
title.setForeground(Color.MAGENTA);
title.setBounds(135,10,900,50);
title.setFont(l4);
b1.setBounds(380,153,120,35);
b1.setForeground(Color.GREEN);
b2.setBounds(380,266,120,35);
splt.setBounds(420,267,120,35);
splt.setForeground(Color.BLACK);
splt.setFont(l);
b3.setBounds(480,630,120,35);
sen.setBounds(520,630,120,35);
sen.setForeground(Color.BLACK);
sen.setFont(l);
la1.setBounds(100,148,150,50);
la1.setForeground(Color.GREEN);
la2.setBounds(100,205,150,50);
la2.setForeground(Color.GREEN);
la3.setBounds(100,263,150,50);
la3.setForeground(Color.GREEN);
la4.setBounds(440,60,150,50);
la4.setForeground(Color.GREEN);
c1.setBounds(280,216,400,35);
c2.setBounds(280,266,100,35);
c2.setFont(l2);
c2.setForeground(Color.MAGENTA);
sc1.setBounds(400,370,100,200);
sc.setBounds(100,370,300,200);
t1.setColumns(20);
t1.setRows(10);
t1.setFont(l3);
t1.setForeground(Color.BLUE);
sc.setViewportView(t1);
t2.setBounds(100,300,300,200);
t2.setColumns(20);
t2.setRows(10);
t2.setFont(l3);
t2.setForeground(Color.BLUE);
sc1.setViewportView(t2);
ftr.setBounds(650,150,350,35);
ftr.setForeground(Color.CYAN);
ftr.setFont(l);
fileA.setBounds(550,230,150,35);
fileA.setForeground(Color.RED);
fileA.setFont(l2);
fileB.setBounds(550,300,150,35);
fileB.setForeground(Color.RED);
fileB.setFont(l2);
fileC.setBounds(550,370,150,35);
fileC.setForeground(Color.RED);
fileC.setFont(l2);
routerA.setBounds(600,230,150,35);
routerA.setForeground(Color.MAGENTA);
routerA.setFont(l2);
routerB.setBounds(600,300,150,35);
routerB.setForeground(Color.MAGENTA);
routerB.setFont(l2);
routerC.setBounds(600,370,150,35);
routerC.setFont(l2);
routerC.setForeground(Color.MAGENTA);
subrA.setBounds(680,230,150,35);
subrA.setForeground(Color.CYAN);
subrA.setFont(l2);
subrB.setBounds(680,300,150,35);
subrB.setForeground(Color.CYAN);
subrB.setFont(l2);
subrC.setBounds(680,370,150,35);
subrC.setForeground(Color.CYAN);
subrC.setFont(l2);
serA.setBounds(790,230,250,35);
serA.setForeground(Color.GREEN);
serA.setFont(l2);
serB.setBounds(790,300,250,35);
serB.setForeground(Color.GREEN);
serB.setFont(l2);
serC.setBounds(790,370,250,35);
serC.setFont(l2);
serC.setForeground(Color.GREEN);
c.add(ftr);
c.add(title);
c.add(fileA);
c.add(fileB);
c.add(fileC);
c.add(routerA);
c.add(routerB);
c.add(routerC);
c.add(subrA);
c.add(subrB);
c.add(subrC);
c.add(serA);
c.add(serB);
c.add(serC);
c.add(la1);
c.add(la2);
c.add(la3);
c.add(la4);
c.add(c1);
c.add(c2);
c.add(sc1,BorderLayout.CENTER);
c.add(sc,BorderLayout.CENTER);
la1.setFont(l);
la2.setFont(l);
la3.setFont(l);
la4.setFont(l1);
test1.setBounds(408,154,100,35);
test.setBounds(500,200,110,35);
test1.setForeground(Color.BLACK);
test1.setFont(l);
test.setFocusPainted(true);
c.add(splt);
b1.setBorderPainted(true);
b1.setContentAreaFilled(false);
b2.setBorderPainted(true);
b2.setContentAreaFilled(false);
b3.setBorderPainted(true);
b3.setContentAreaFilled(false);
c.add(test1);
c.add(sen);
c.add(b1);
c.add(b2);
c.add(b3);
b1.addMouseListener(new MouseAdapter()
{
public void mouseEntered(MouseEvent e)
{
ImageIcon v2=new
ImageIcon(this.getClass().getResource("button.png"));
b1.setIcon(v2);
}
public void mouseExited(MouseEvent e)
{
ImageIcon v=new
ImageIcon(this.getClass().getResource("button3.png"));
b1.setIcon(v);
}
public void mouseClicked(MouseEvent e)
{
}
});
b2.addMouseListener(new MouseAdapter()
{
public void mouseEntered(MouseEvent e)
{
ImageIcon v3=new
ImageIcon(this.getClass().getResource("button.png"));
b2.setIcon(v3);
}
public void mouseExited(MouseEvent e)
{
ImageIcon v5=new
ImageIcon(this.getClass().getResource("button3.png"));
b2.setIcon(v5);
}
public void mouseClicked(MouseEvent e)
{
}
});
b3.addMouseListener(new MouseAdapter()
{
public void mouseEntered(MouseEvent e)
{
ImageIcon v2=new
ImageIcon(this.getClass().getResource("button.png"));
b3.setIcon(v2);
}
public void mouseExited(MouseEvent e)
{
ImageIcon v=new
ImageIcon(this.getClass().getResource("button3.png"));
b3.setIcon(v);
}
public void mouseClicked(MouseEvent e)
{
serC.setText("");
c1b.setText(totalbandwidthA+"(Kbs/s)");
String str=buffer.toString();
int total=str.length();
String
substr=str.substring(0,5);
String
subtf=str.substring(5,total);
c1.setText("");
c2.setText("");
t1.setText("");
tflda.setText("");
tfld1a.setText("");
tfld2a.setText("");
t2.setText(subtf.toString());
String p1text=Integer.toString(p1);
//tfld1a.setText(p1text.toString());
tfldab.setText("127.0.0.1");
tfld1ab.setText(p1text);
tfld2ab.setText("This Link have"+"\n"+"Destination Node");
String
lenstr=str.substring(0,str.length());
int len=subtf.length();
String
strlen=Integer.toString(len);
c2b.setText(strlen);
String first=t2.getText();
int lenf=first.length();
String
sep=first.substring(0, first.length());
String rutname=p1text;
rutname+="SUBROUTER A2";
String rnstr=rutname+str;
System.out.println(" Router name and bandwidth ="+rnstr);
byte[] byteArray;
Socket client = null;
try
{
client = new Socket("127.0.0.1", 6000);
bos = new BufferedOutputStream(client.getOutputStream());
byteArray = rnstr.getBytes();
bos.write(byteArray, 0, byteArray.length);
bos.flush();
bos.close();
client.close();
}
catch (IOException e1)
{
e1.printStackTrace();
}
finally
{}
try
{
client = new Socket("127.0.0.1", 2233);
bos = new BufferedOutputStream(client.getOutputStream());
byteArray = subA2.getBytes();
bos.write(byteArray, 0, byteArray.length);
bos.flush();
bos.close();
client.close();
}
catch (IOException e1)
{
e1.printStackTrace();
}
6.2.2 Router
CHAPTER 7
CONCLUSION AND FUTURE WORK
In this approach a resource management framework is proposed to reduce
the power consumption and to minimize the total cost of a data center in support
of green computing. This framework is composed of power and workload
management and capacity planning schemes. The power and workload
management is used by a batch scheduler of the data center to switch the server
operation mode (i.e., active or sleep). With the power and workload management,
capacity planning is performed to obtain the optimal number of servers in the
data center over multiple periods. To obtain the optimal decision in each period,
deterministic and stochastic optimization models have been proposed. The
deterministic optimization model, based on a multi-stage assignment problem, can
be applied when exact information about the job processing demand (i.e., the job
arrival rate) is known. On the other hand, the stochastic optimization model,
based on multi-stage stochastic programming, is applied when only the probability
distribution of the job processing demand is available. While the power and
workload management actions are performed on a short-term basis (e.g., every
fraction of a minute), the capacity planning decision (i.e., to expand the size of
the data center) is made on a long-term basis (i.e., every few months). With the
proposed joint short-term and long-term optimization models, the performance
evaluation has shown that the total cost of the data center can be minimized while
the performance requirements are met.
For future work, a stochastic optimization model for the resource
management framework with virtualization technology and the dynamic pricing of
electricity in a smart grid will be developed. The computing resource management
and capacity planning for a cloud provider owning multiple data centers will also
be considered.
CHAPTER 8
REFERENCES
[1] R. Harmon, H. Demirkan, N. Auseklis, and M. Reinoso, "From Green
Computing to Sustainable IT: Developing a Sustainable Service Orientation," in
Proceedings of the 43rd Hawaii International Conference on System Sciences
(HICSS), 2010.
[2] D. Niyato, S. Chaisiri, and B. S. Lee, "Optimal Power Management for Server
Farm to Support Green Computing," in Proceedings of the 9th IEEE/ACM
International Symposium on Cluster Computing and the Grid (CCGrid), pp. 84-91,
2009.
[3] M. L. Puterman, Markov Decision Processes: Discrete Stochastic Dynamic
Programming, Wiley-Interscience, April 1994.
[4] J. R. Birge and F. Louveaux, Introduction to Stochastic Programming,
Springer, February 2000.
[5] Y. Chen, A. Das, W. Qin, A. Sivasubramaniam, Q. Wang, and N. Gautam,
"Managing Server Energy and Operational Costs in Hosting Centers," ACM
SIGMETRICS Performance Evaluation Review, vol. 33, no. 1, pp. 303-314,
June 2005.
[6] R. Nathuji and K. Schwan, "VirtualPower: Coordinated Power Management in
Virtualized Enterprise Systems," in Proceedings of the ACM SIGOPS Symposium
on Operating Systems Principles (SOSP), pp. 265-278, 2007.