
Three-server swapping for access confidentiality

ABSTRACT:
We propose an approach to protect confidentiality of data and accesses to them
when data are stored and managed by external providers, and hence not under
direct control of their owner. Our approach is based on the use of distributed data
allocation among three independent servers and on a dynamic re-allocation of data
at every access. Dynamic re-allocation is enforced by swapping data involved in an
access across the servers in such a way that accessing a given node implies re-
allocating it to a different server, thus destroying the ability of servers to build
knowledge by observing accesses. The use of three servers provides uncertainty, in
the eyes of the servers, about the result of the swapping operation, even in the
presence of collusion among them.

INTRODUCTION

A recent trend in the IT scenario has been the increasing adoption of the cloud
computing paradigm.
Companies can rely on the cloud for data storage and management and then benefit
from low costs and high availability. End users can benefit from cloud storage for
enjoying availability of data anytime anywhere, even from mobile devices.
Together with such convenience, however, comes a loss of control over the data
(stored and managed by “the cloud”). The problem of ensuring data confidentiality
in outsourcing and cloud scenarios has received considerable attention by the
research and development communities in the last few years and several solutions
have been proposed. A simple solution for guaranteeing data confidentiality
consists in encrypting the data. Modern cryptographic algorithms offer high
efficiency and strong protection of data content. Simply protecting data content
with an encryption layer does not fully solve the confidentiality problem, as access
confidentiality, namely the confidentiality of the specific accesses performed on
the data, remains at risk. There are several reasons for which access confidentiality
may be demanded, among which the fact that breaches in access confidentiality
may leak information on access profiles of users and, in the end, even on the data
themselves, therefore causing breaches in data confidentiality.

The basic idea of our approach is to randomly partition data among three
independent storage servers, and, at every access, randomly move (swap) data
retrieved from a server to any of the other two so that data retrieved from a server
would not be at the same server after the access. Since nodes are randomly
allocated to servers, the path from the root to the leaf target of an access can
traverse nodes allocated at different servers. Then, to provide uniform visibility at
any access at every server (which should operate as if it were the only one serving
the client), every time the node to be accessed at a given level belongs to one
server, our approach also requires accessing one additional block (distributed
cover) at the same level at each of the other servers.
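As a concrete illustration, the following minimal Java sketch (our own illustrative code, not the authors' implementation; the class and method names, such as SwapPlanner, are hypothetical) shows how a client could pick the destination server of a swap: given the server currently holding a node, the new server is chosen uniformly at random between the other two, so the node never remains where it was read.

import java.security.SecureRandom;

// Illustrative sketch: choosing the destination of a swap among three servers.
// The three servers are identified simply as 0, 1, and 2.
public final class SwapPlanner {

    private static final int NUM_SERVERS = 3;
    private final SecureRandom random = new SecureRandom();

    // Returns the server to which a node read from currentServer is re-allocated.
    // The result is never currentServer: the node always moves, and each of the
    // other two servers is chosen with probability 1/2 (the source of the
    // uncertainty that survives even under collusion).
    public int pickSwapTarget(int currentServer) {
        int offset = 1 + random.nextInt(NUM_SERVERS - 1); // 1 or 2
        return (currentServer + offset) % NUM_SERVERS;
    }

    public static void main(String[] args) {
        SwapPlanner planner = new SwapPlanner();
        for (int server = 0; server < 3; server++) {
            System.out.println("node at server " + server
                    + " moves to server " + planner.pickSwapTarget(server));
        }
    }
}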

The reader may wonder why we are distributing the index structure among three
servers, and not two or four. The rationale behind the use of multiple servers is to
provide limited visibility, at each of the servers, of the data structure and of the
accesses to it. In this respect, even adopting two servers could work. However, an
approach using only two servers would remain too exposed to collusion between
the two that, by merging their knowledge, could reconstruct the node-block
correspondence and compromise access and data confidentiality. Also, the data
swapping we adopt, while providing better protection with respect to shuffling in
general, implies deterministic reallocation in the case of two servers and could then
cause exposure in case of collusion. The use of three servers provides instead
considerably better protection. Swapping ensures that data are moved out from a
server at every access, providing non determinism in data reallocation (as the data
could have moved to any of the other two servers), even in presence of collusion
among the three servers. While going from two servers to three servers provides
considerably higher protection guarantees, further increasing the number of servers
provides limited advantage, while increasing the complexity of the system.

Several approaches have been recently proposed to protect access confidentiality.
While with different variations, such approaches share the common observation
that the major problem to be tackled to provide access confidentiality is to break
the static correspondence between data and the physical location where they are
stored. Among such proposals, the shuffle index [1] provides a key-based
hierarchical organization of the data, supporting then an efficient and effective
access execution (e.g., including support of range operations). In this paper, we
build on such an indexing structure and on the idea of dynamically changing, at
every access, the physical location of data, and provide a new approach to access
confidentiality based on a combination of data distribution and swapping. The idea
of applying data distribution for confidentiality protection is in line with the
evolution of the market, with an increasing number of providers offering
computation and storage services, which represent an opportunity for providing
better functionality and security. In particular, our approach relies on data
distribution by allocating the data structure over three different servers, each of
which will then see only a portion of the data blocks and will similarly have a
limited visibility of the accesses to the data. Data swapping implies changing the
physical location of accessed data by swapping them among the three involved
servers. Swapping, in contrast to random shuffling, forces the requirement that
whenever a block is accessed, the data retrieved from it (i.e., stored in the block
before the access) should not be stored at the same block after the access. We
illustrate in this paper how the use of three servers (for distributed data allocation)
together with swapping (forcing data re-allocation across servers) provide nice
protection guarantees, typically outperforming the use of a random shuffling
assuming (as it is to be expected) no collusion among servers, and maintaining
sufficient protection guarantees even in the presence of collusions among two, or
even all three, of the involved servers.

The remainder of the paper is organized as follows. Section 2 recalls the basic
concepts of the shuffle index. Section 3 introduces the rationale of our approach.
Section 4 describes our index structure working on three servers. Section 5
presents the working of our approach, discussing protection techniques and data
access. Section 6 analyzes protection guarantees. Section 7 discusses the
motivations behind our choice of swapping and of three as the number of servers to
be used, and provides some performance and economic considerations for our
approach. Section 8 illustrates related works. Finally, Section 9 concludes the
paper.

LITERATURE SURVEY:
A Cumulative Study for Three Servers Swapping in Cloud

Cloud computing plays a vital role in today’s IT industry, as it offers tremendous
computing and storage facilities for outsourced tasks and data. To provide these
facilities, the cloud is powered by a complex integration of a huge number of
servers, which makes cloud computing a gigantic computing environment and one
of the most important driving forces of the computer industry. The huge number of
servers in the cloud, however, also brings poor security for the stored data;
applying cryptographic techniques solves this to a great extent. Even so, the threat
of illegitimate data access always remains, and it depends directly on the pattern
with which the data are stored across the different servers of the cloud. It is
therefore necessary to wipe out the traces of data movement to the destination
server, so as to prevent access through any illegal means. As a solution, data
swapping between servers draws attention because it successfully wipes out these
traces. There is thus a need for a better system that enforces re-allocation of data
by constant swapping among many servers, providing an unpredictable data route
to illegal seekers and making it practically impossible to access confidential cloud
data by illegal means.

Enhancing Security through Data Swapping and Shuffling Across the Servers
in Cloud

The shuffle technique has recently been used for organizing and accessing data in
the cloud. We use distributed data allocation among more than two independent
servers. Dynamic re-allocation is performed by swapping data across the servers in
such a way that accessing a given node implies re-allocating it to a different server.
More protection derives from the use of independent servers than from the use of a
single server. In this paper, we introduce a shuffling technique that uses multiple
servers for storing data, introduce a new protection technique (shadow copy), and
enhance the original ones by operating in a distributed system.

Role-based access control policy administration

The wide proliferation of the Internet has set new requirements for access control
policy specification. Due to the demand for ad-hoc cooperation between
organisations, applications are no longer isolated from each other; consequently,
access control policies face a large, heterogeneous, and dynamic environment.
Policies, while maintaining their main functionality, go through many minor
adaptations, evolving as the environment changes. In this thesis we investigate the
long-term administration of role-based access control (RBAC) – in particular
OASIS RBAC – policies. With the aim of encapsulating persistent goals of policies
we introduce extensions in the form of meta-policies. These meta-policies, whose
expected lifetime is longer than the lifetime of individual policies, contain extra
information and restrictions about policies. It is expected that successive policy
versions are checked at policy specification time to ensure that they comply with
the requirements and guidelines set by meta-policies.

Aggregating Privatized Medical Data for Secure Querying Applications

Data sharing enhances the utilisation of data and promotes competition of scientific
ideas as well as promoting collaboration. Sharing and reusing public data sets has
the potential to increase research efficiency and quality. Sharing data that contains
personally identifiable or sensitive information, such as medical records, always
has privacy and security implications. For research purposes, having access to
large sets of data, often from various regions, improves statistical outcomes of
analysis. However, shared data is usually considered to be sensitive and access to it
is restricted by law and regulation. This thesis employs privatization techniques to
provide an architecture which enables sharing of sensitive data. Utilization of our
architecture is demonstrated by means of a case study based on four medical data
sets. This thesis also provides a solution for sharing the sensitive data where large
numbers of data contributors publish their privatized data sets and aggregates on a
cloud so that data can be made available to anyone who wants access to it, for
whatever purpose. Additionally, our solution determines how aggregated data can
be efficiently and effectively queried, while retaining privacy not only of the data,
but also of the original data owner and of both the query and person querying.

Emotions and Performance in Virtual Worlds

In this work, we first investigate characteristics of virtual worlds and determine
important situational variables concerning virtual world usage. Moreover, we
develop a model which relates individual differences of virtual world users,
namely emotional and cognitive abilities, experiences with virtual worlds as a
child, and the level of cognitive absorption perceived during virtual world use, to
the users’ individual performance in virtual worlds. We further test our model with
observed data from 4,048 study participants. Our results suggest that cognitive
ability, childhood media experience, and cognitive absorption influence multiple
facets of emotional capabilities, which in turn have a varyingly strong effect on
virtual world performance among different groups. Notably, in the present study,
the effect of emotional capabilities on performance was stronger for users which
prefer virtual worlds that have more emotional content and require more social and
strategic skills, particularly related to human behavior. Interestingly, while
cognitive ability was positively related to various emotional capabilities, no
evidence for a direct path between cognitive ability to performance could be
identified. Similarly, cognitive absorption positively affected emotion perception,
yet did not influence performance directly. Our findings make the case for
abandoning the traditional perspective on IS–which mainly relies on mere usage
measures–and call for a more comprehensive understanding and clearer
conceptualizations of human performance in psychometric studies. Additionally,
our study treats missing data (an inherent property of the data underlying our
study), links their presence to theoretical and practical issues, and discusses
implications

EXISTING SYSTEM:
In particular, our approach relies on data distribution by allocating the data
structure over three different servers, each of which will then see only a portion of
the data blocks and will similarly have a limited visibility of the accesses to the data. Data
swapping implies changing the physical location of accessed data by swapping
them among the three involved servers. Swapping, in contrast to random shuffling,
forces the requirement that whenever a block is accessed, the data retrieved from it
(i.e., stored in the block before the access) should not be stored at the same block
after the access. We illustrate in this paper how the use of three servers (for
distributed data allocation) together with swapping (forcing data re-allocation
across servers) provide nice protection guarantees, typically outperforming the use
of a random shuffling assuming (as it is to be expected) no collusion among
servers, and maintaining sufficient protection guarantees even in the presence of
collusions among two, or even all three, of the involved servers.

DISADVANTAGES OF EXISTING SYSTEM:


Several approaches have been recently proposed to protect access
confidentiality. While with different variations, such approaches share the
common observation that the major problem to be tackled to provide access
confidentiality is to break the static correspondence between data and the
physical location where they are stored. Among such proposals, the shuffle
index provides a key-based hierarchical organization of the data, supporting
an efficient and effective access execution (e.g., including support of range
operations). In this paper, we build on such an indexing structure and on the
idea of dynamically changing, at every access, the physical location of data,
and provide a new approach to access confidentiality based on a
combination of data distribution and swapping. The idea of applying data
distribution for confidentiality protection is in line with the evolution of the
market.

PROPOSED SYSTEM:
The basic idea of our approach is to randomly partition data among three
independent storage servers, and, at every access, randomly move (swap) data
retrieved from a server to any of the other two so that data retrieved from a server
would not be at the same server after the access. Since nodes are randomly
allocated to servers, the path from the root to the leaf target of an access can
traverse nodes allocated at different servers. Then, to provide uniform visibility at
any access at every server (which should operate as if it was the only one serving
the client), every time the node to be accessed at a given level belongs to one
server, our approach also requests to access one additional block (distributed
cover) at the same level at each of the other servers.

ADVANTAGES OF PROPOSED SYSTEM:


As in the shuffle index, retrieval of a key value (or more precisely, of the data
indexed with that key value and stored in a leaf node) entails traversing the index
starting from the root and following, at every node, the pointer to the child in the
path to the leaf possibly containing the target value. Again, since the data are
encrypted, such a process needs to be performed iteratively, starting from the root
to the leaf, at every level decrypting (and checking the integrity of) the retrieved
node to determine the child to follow at the next level. Since our data structure is distributed among three servers
and the allocation of nodes to servers is independent from the topology of the
index structure, the path from the root to a target leaf may (and usually does)
involve nodes stored at different servers.
SYSTEM ARCHITECTURE:

System Configuration

H/W System Configuration:

Processor - Pentium –III

Speed - 1.1 GHz

RAM - 256 MB (min)

Hard Disk - 20 GB
Floppy Drive - 1.44 MB

Key Board - Standard Windows Keyboard

Mouse - Two or Three Button Mouse

Monitor - SVGA

S/W System Configuration:


 Operating System : Windows 7/8/10

 Application Server : Tomcat 5.0/6.x

 Front End : HTML, Java, JSP

 Scripts : JavaScript

 Server side Script : Java Server Pages

 Database : MySQL

 Database Connectivity : JDBC

Modules

Module Description:

Data Structure And Three-Server Allocation


At the abstract level, our structure is essentially the same as the shuffle index,
namely we consider an unchained B+-tree defined over candidate key K, with
fan-out F, and storing data in its leaves. However, we consider the root to have
three times the capacity of internal nodes. Since internal nodes and leaves will be
distributed to three different servers, assuming a three times larger root allows us
to conveniently split it among the different servers (instead of replicating it),
providing better access performance by potentially reducing the height of the tree.
In fact, a B+-tree having at most 3F children for the root node can store up to three
times the number of tuples/values stored in a traditional B+-tree of the same
height.
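A minimal Java sketch of this abstract structure follows. It is illustrative only (the paper does not prescribe an implementation, and names such as IndexNode are hypothetical); it captures the two facts stated above: internal nodes and leaves have fan-out at most F, while the root may hold up to 3F children so that it can be split across the three servers.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the abstract unchained B+-tree used by the approach.
public class IndexNode {
    static final int FAN_OUT = 4;                // F, chosen small for illustration
    static final int ROOT_FAN_OUT = 3 * FAN_OUT; // the root has 3x capacity

    final boolean isRoot;
    final List<Integer> keys = new ArrayList<>();        // candidate-key values K
    final List<IndexNode> children = new ArrayList<>();  // empty for leaves
    final List<String> values = new ArrayList<>();       // data stored in leaves

    int serverId;   // 0, 1, or 2: the server currently storing this node
    long blockId;   // physical block identifier at that server

    IndexNode(boolean isRoot) {
        this.isRoot = isRoot;
    }

    int capacity() {
        // Root children can be split one third per server instead of
        // replicating the root, which keeps the tree one level shorter.
        return isRoot ? ROOT_FAN_OUT : FAN_OUT;
    }
}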
RATIONALE OF THE APPROACH
Our approach builds on the shuffle index by borrowing from it the base data
structure (encrypted unchained B+- tree) and the idea of breaking the otherwise
static correspondence between nodes and physical blocks at every access. It differs
from the shuffle index in the management of the data structure, for both storage
and access (which exploit a distributed allocation) and in the way the node-block
correspondence is modified, applying swapping instead of random shuffling,
forcing the node involved in an access to change the block where it is stored (again
exploiting the distributed allocation). Also, it departs from the use of a cache, thus
not requiring any storage at the client side.
Distributed covers
As in the shuffle index, retrieval of a key value (or more precisely, of the data
indexed with that key value and stored in a leaf node) entails traversing the index
starting from the root and following, at every node, the pointer to the child
in the path to the leaf possibly containing the target value. Again, since the data are
encrypted, such a process needs to be performed iteratively, starting from the root
to the leaf, at every level decrypting (and checking the integrity of) the retrieved
node to determine the child to follow at the next level. Since our data structure is
distributed among three servers and the allocation of nodes to servers is
independent from the topology of the index structure, the path from the root to
a target leaf may (and usually does) involve nodes stored at different servers. For
instance, with reference to Figure 1, retrieval of a value d1 entails traversing path
⟨r1, d, d1⟩ and hence accessing blocks G01, Y12, and B24, each stored at a
different server.
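The per-level pattern just described can be sketched as follows (again our own simplified illustration, not the authors' code; readBlock stands for a hypothetical call that fetches and decrypts one block from one server). At every level, the client reads the block on the path to the target from whichever server holds it, and one randomly chosen cover block at the same level from each of the other two servers, so every server observes exactly one read per level.

import java.security.SecureRandom;
import java.util.List;

// Illustrative sketch: one level of a read with distributed covers.
public class CoverAccess {

    private final SecureRandom random = new SecureRandom();

    // Hypothetical placeholder for "fetch and decrypt block id from server".
    byte[] readBlock(int server, long blockId) {
        return new byte[0]; // network access and decryption omitted
    }

    // Reads the target block of the current level plus one cover block at the
    // same level from each of the other two servers.
    //   targetServer: server storing the node on the path to the target
    //   targetBlock:  block id of that node
    //   levelBlocks:  candidate block ids at this level, per server
    byte[] accessLevel(int targetServer, long targetBlock, List<List<Long>> levelBlocks) {
        byte[] targetContent = null;
        for (int server = 0; server < 3; server++) {
            if (server == targetServer) {
                targetContent = readBlock(server, targetBlock);
            } else {
                List<Long> candidates = levelBlocks.get(server);
                long cover = candidates.get(random.nextInt(candidates.size()));
                readBlock(server, cover); // distributed cover: content discarded
            }
        }
        return targetContent;
    }
}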
Modeling knowledge
The storage servers know (or can infer from their interactions with the client) the
following information: the total number of blocks (nodes) in the distributed index;
the height h of the tree structure; the identifier of each block b and its level in the
tree; the identifier of read and written blocks for each access operation. On the
contrary, they do not know nor can infer the content and the topology of the index
(i.e., the pointers between parent and children), thanks to the fact that nodes are
encrypted. For simplicity, but without loss of generality, we focus our analysis
only on leaf blocks/nodes, since leaves are considerably more exposed than
internal nodes. Internal nodes are more protected since they are accessed, and
hence involved in swapping operations, more often than leaf nodes.
Software Description
INTRODUCTION TO JAVA:

Java is a general-purpose computer programming language that is concurrent,
class-based, object-oriented, and specifically designed to have as few
implementation dependencies as possible. It is intended to let application
developers "write once, run anywhere" (WORA), meaning that compiled Java code
can run on all platforms that support Java without the need for recompilation. Java
applications are typically compiled to bytecode that can run on any Java virtual
machine (JVM) regardless of computer architecture. As of 2016, Java is one of the
most popular programming languages in use, particularly for client-server web
applications, with a reported 9 million developers. Java was originally developed
by James Gosling at Sun Microsystems (which has since been acquired by Oracle
Corporation) and released in 1995 as a core component of Sun Microsystems' Java
platform. The language derives much of its syntax from C and C++, but it has
fewer low-level facilities than either of them. The original and reference
implementation Java compilers, virtual machines, and class libraries were
originally released by Sun under proprietary licences. As of May 2007, in
compliance with the specifications of the Java Community Process, Sun relicensed
most of its Java technologies under the GNU General Public License. Others have
also developed alternative implementations of these Sun technologies, such as the
GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries),
and IcedTea-Web (browser plugin for applets). The latest version is Java 8, which
is the only version currently supported for free by Oracle, although earlier versions
are supported both by Oracle and other companies on a commercial basis.

INTRODUCTION TO JSP/SERVLET:

From Static to Dynamic WebPages

In the early days of the Web, most business Web pages were simply forms of
advertising tools and content providers offering customers useful manuals,
brochures and catalogues. In other words, these business-web-pages had static
content and were called "Static WebPages".

Later, the dot-com revolution enabled businesses to provide online services for
their customers; and instead of just viewing inventory, customers could also
purchase it. This new phase of the evolution created many new requirements; Web
sites had to be reliable, available, secure, and, if it was at all possible, fast. At this
time, the content of the web pages could be generated dynamically, and were
called "Dynamic WebPages".

The Static Web Pages contents are placed by the professional web developer
himself when developing the website; this is also called "design-time page
construction". Any changes needed have to be done by a web developer, this
makes Static WebPages expensive in their maintenance, especially when frequent
updates or changes are needed, for example in a news website.

On the other hand, dynamic Web pages have dynamically generated content that
depends on requests sent from the client's browser; this is called "constructed on
the fly". Updates and maintenance can be done by the client himself (doesn't need
professional web developers), this makes Dynamic WebPages considered to be the
best choice for web sites experiencing frequent changes.

Client-Side vs. Server-Side scripting

To generate dynamic content, scripting languages are needed. There are two types
of scripting languages:

1.Client-Side Scripting:

 It refers to the classes or programs embedded in a website and executed at
the client-side by the client's browser. These scripts can be embedded in
HTML code or placed in a separate file and referred to by the document
using it. On request, these files are sent to the user's browser where it gets
executed and the appropriate content is displayed.
 In client-side scripting, the user is able to view the source code (as it is
placed on the client's browser)
 Client-side scripts do not require additional software or an interpreter on the
server; however, they require that the user's web browser understands the
scripting language in which they are written. It is therefore important for the
developer to write scripts in a language that is supported by the web
browsers used by a majority of his or her users.
 Ideas of when to use client side scripts:
 Complementary form pre-processing (should not be relied upon!)
 To get data about the user's screen or browser.
 Online games.
 Customizing the display (without reloading the page)
 Examples: JavaScript, VBScript

Server-side Scripting:

 It refers to the technology in which the user's request runs a script, or a
program, on the web server, generating customized web content that is sent
to the client in a format understandable by the web browsers (usually
HTML).
 Users are not able to display the source code of these scripts (the scripts are
placed on the server), they can only display the customized HTML sent to
the browser.
 Server-side scripts require their language's interpreter to be installed on the
server, and produce the same output regardless of the client's browser,
operating system, or other system details
 Server-side scripting, unlike client-side scripting, has access to databases,
which makes it more flexible and secure.
 Ideas for when to use server side scripts:
 Password protection.
 Browser sniffing/customization.
 Form processing.
 Building and displaying pages created from a database
 Examples: ASP, JSP, PHP, Perl
Introduction to JDBC:

Java Database Connectivity (JDBC) is an Application Programming Interface
(API) used to connect Java applications with databases. JDBC is used to interact
with various types of databases such as Oracle, MS Access, MySQL, and SQL
Server.
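A minimal JDBC example in the spirit of this project is sketched below. The connection URL, credentials, and table name are hypothetical placeholders; the project's actual schema is not shown in this document.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Illustrative JDBC usage: connect to a MySQL database and run a simple query.
public class JdbcDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/sampledb"; // hypothetical database
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id, name FROM demo_table WHERE id = ?")) {
            ps.setInt(1, 1);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}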

Introduction to JavaScript

 JavaScript is an object-based scripting language.
 It gives the user more control over the browser.
 It handles dates and times.
 It can detect the user's browser and OS.
 It is lightweight.
 JavaScript is a scripting language; it is not Java.
 JavaScript is an interpreted scripting language.

JAVA Virtual Machine

Machine language consists of very simple instructions
that can be executed directly by the CPU of a computer. Almost all
programs, though, are written in high-level programming languages such as Java,
Pascal, or C++. A program written in a high-level language cannot be run directly
on any computer. First, it has to be translated into machine language. This
translation can be done by a program called a compiler. A compiler takes a high-
level-language program and translates it into an executable machine-language
program. Once the translation is done, the machine-language program can be run
any number of times, but of course it can only be run on one type of computer
(since each type of computer has its own individual machine language). If the
program is to run on another type of computer it has to be re-translated, using a
different compiler, into the appropriate machine language.

There is an alternative to compiling a high-level language program. Instead of
using a compiler, which translates the program all at once, you can use an
interpreter, which translates it instruction-by-instruction, as necessary. An
interpreter is a program that acts much like a CPU, with a kind of fetch-and-
execute cycle. In order to execute a program, the interpreter runs in a loop in which
it repeatedly reads one instruction from the program, decides what is necessary to
carry out that instruction, and then performs the appropriate machine-language
commands to do so.
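To make the fetch-and-execute idea concrete, here is a toy interpreter loop in Java. It is purely illustrative: it interprets a made-up two-instruction "machine language" of our own, not Java bytecode.

// Toy interpreter: a fetch-and-execute loop over a made-up instruction set.
public class ToyInterpreter {
    public static void main(String[] args) {
        // Program: PUSH 2, PUSH 3, ADD (encoded as opcode/operand pairs).
        int[] program = {1, 2, 1, 3, 2, 0};
        int[] stack = new int[16];
        int sp = 0; // stack pointer

        for (int pc = 0; pc < program.length; pc += 2) {          // fetch
            int opcode = program[pc];
            int operand = program[pc + 1];
            switch (opcode) {                                      // decode and execute
                case 1: stack[sp++] = operand; break;              // PUSH operand
                case 2: stack[sp - 2] += stack[sp - 1]; sp--; break; // ADD
                default: throw new IllegalStateException("unknown opcode " + opcode);
            }
        }
        System.out.println("result = " + stack[sp - 1]); // prints 5
    }
}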

One use of interpreters is to execute high-level language programs. For example,
the programming language Lisp is usually executed by an interpreter rather than a
compiler. However, interpreters have another purpose: they can let you use a
machine-language program meant for one type of computer on a completely
different type of computer. For example, there is a program called “Virtual PC”
that runs on Mac OS computers. Virtual PC is an interpreter that executes
machine-language programs written for IBM-PC-clone computers. If you run
Virtual PC on your Mac OS, you can run any PC program, including programs
written for Windows. (Unfortunately, a PC program will run much more slowly
than it would on an actual IBM clone. The problem is that Virtual PC executes
several Mac OS machine-language instructions for each PC machine-language
instruction in the program it is interpreting. Compiled programs are inherently
faster than interpreted programs.)
The designers of Java chose to use a combination of compilation and
interpretation. Programs written in Java are compiled into machine language, but it
is a machine language for a computer that doesn’t really exist. This so-called
“virtual” computer is known as the Java Virtual Machine, or JVM. The machine
language for the Java Virtual Machine is called Java bytecode. There is no reason
why Java byte code couldn’t be used as the machine language of a real computer,
rather than a virtual computer. But in fact the use of a virtual machine makes
possible one of the main selling points of Java: the fact that it can actually be used
on any computer. All that the computer needs is an interpreter for Java bytecode.
Such an interpreter simulates the JVM in the same way that Virtual PC simulates a
PC computer. (The term JVM is also used for the Java bytecode interpreter
program that does the simulation, so we say that a computer needs a JVM in order
to run Java programs. Technically, it would be more correct to say that the
interpreter implements the JVM than to say that it is a JVM.)

Of course, a different Java bytecode interpreter is needed for each type of
computer, but once a computer has a Java byte code interpreter, it can run any Java
bytecode program. And the same Java byte code program can be run on any
computer that has such an interpreter. This is one of the essential features of Java:
the same compiled program can be run on many different types of computers.

Why, you might wonder, use the intermediate Java bytecode at all? Why not just
distribute the original Java program and let each person compile it into the machine
language of whatever computer they want to run it on? There are many reasons.
First of all, a compiler has to understand Java, a complex high-level language. The
compiler is itself a complex program. A Java bytecode interpreter, on the other
hand, is a fairly small, simple program. This makes it easy to write a bytecode
interpreter for a new type of computer; once that is done, that computer can run
any compiled Java program. It would be much harder to write a Java compiler for
the same computer.

Furthermore, many Java programs are meant to be downloaded over a network.
This leads to obvious security concerns: you don’t want to download and run a
program that will damage your computer or your files. The bytecode interpreter
acts as a buffer between you and the program you download. You are really
running the interpreter, which runs the downloaded program indirectly. The
interpreter can protect you from potentially dangerous actions on the part of that
program.

When Java was still a new language, it was criticized for being slow: Since Java
bytecode was executed by an interpreter, it seemed that Java bytecode programs
could never run as quickly as programs compiled into native machine language
(that is, the actual machine language of the computer on which the program is
running). However, this problem has been largely overcome by the use of just-in-
time compilers for executing Java bytecode. A just-in-time compiler translates Java
bytecode into native machine language. It does this while it is executing the
program. Just as for a normal interpreter, the input to a just-in-time compiler is a
Java bytecode program, and its task is to execute that program. But as it is
executing the program, it also translates parts of it into machine language. The
translated parts of the program can then be executed much more quickly than they
could be interpreted. Since a given part of a program is often executed many times
as the program runs, a just-in-time compiler can significantly speed up the overall
execution time.

I should note that there is no necessary connection between Java and Java
bytecode. A program written in Java could certainly be compiled into the machine
language of a real computer. And programs written in other languages could be
compiled into Java bytecode. However, it is the combination of Java and Java
bytecode that is platform-independent, secure, and network-compatible while
allowing you to program in a modern high-level object-oriented
language. (In the past few years, it has become fairly common to create new
programming languages, or versions of old languages, that compile into Java
bytecode. The compiled bytecode programs can then be executed by a standard
JVM. New languages that have been developed specifically for programming the
JVM include Groovy, Clojure, and Processing. Jython and JRuby are versions of
older languages, Python and Ruby, that target the JVM. These languages make it
possible to enjoy many of the advantages of the JVM while avoiding some of the
technicalities of the Java language. In fact, the use of other languages with the
JVM has become important enough that several new features have been added to
the JVM in Java Version 7 specifically to add better support for some of those
languages.)

Sensor networks are used in many application domains, examples being cyber-physical
infrastructure, environmental monitoring, weather monitoring, power grids, etc.
Large volumes of data are generated by sensor node sources and processed
in-network on their way to a Base Station (BS), which performs the decision
making. The trustworthiness of the information matters to the decision process,
and data provenance is an effective method to assess data trustworthiness and the
actions performed on the data. Provenance in sensor networks, however, has not
been properly addressed so far. We investigate the problem of secure and efficient
provenance transmission and handling for sensor networks, and we use provenance
to detect packet loss attacks staged by malicious sensor nodes. In a multi-hop
sensor network, data provenance allows the Base Station to trace the source
and the forwarding path of a specific data packet; provenance must therefore be
recorded for each and every packet, but important challenges arise. The first is the
tight storage, energy, and bandwidth constraints of sensor nodes, which make it
necessary to devise a light-weight provenance solution with low overhead. Sensors
also operate in untrusted environments, where they may be subject to
attacks; it is therefore necessary to address security requirements such as
confidentiality, integrity, and freshness of provenance. Our goal is to design a
provenance encoding and decoding mechanism that satisfies such security and
performance needs. We propose a provenance encoding strategy in which
each node on the path of a data packet securely embeds provenance information
within a Bloom filter that is conveyed along with the data. On receiving the packet,
the Base Station extracts and verifies the provenance information. An extension of
the provenance encoding scheme allows the Base Station to detect whether a packet
drop attack was staged by a malicious node. We use fast Message Authentication
Codes and Bloom filters (BF), which are fixed-size data structures that efficiently
represent provenance. Recent developments in micro-sensor technology and
low-power analog and digital electronics have led to the development of
distributed, wireless networks of sensor devices. Sensor networks of the future are
intended to consist of hundreds of cheap nodes that can be readily deployed in
physical environments to collect useful information. Our motivation comes from the
class of distributed networking applications built on packet-header-size Bloom filters
to share some state between network nodes. The specific state carried in the Bloom
filter differs from application to application, ranging from secure credentials to IP
prefixes and link identifiers, with the shared requirement of a fixed-size packet
header data structure to efficiently verify set memberships. Bloom filters make
effective use of bandwidth, and they yield low error rates in practice. Our specific
contributions are:

 We formulate the problem of secure provenance transmission in sensor networks.
 We implement an in-packet Bloom filter provenance encoding scheme.
 We design efficient techniques for provenance decoding and verification at the base station.
 We design a mechanism that detects packet drop attacks staged by malicious forwarding sensor nodes.
 We perform a detailed security analysis and performance evaluation.
SYSTEM IMPLEMENTATION

We addressed the problem of securely transmitting provenance for sensor
networks, and proposed a light-weight provenance encoding and decoding scheme
based on Bloom filters. The scheme ensures confidentiality, integrity and freshness
of provenance. We propose an in-packet Bloom filter (IBF) provenance-encoding
scheme. We design efficient techniques for provenance decoding and verification
at the base station. We perform a detailed security analysis and performance
evaluation of the proposed provenance encoding scheme and packet loss detection
mechanism.

We investigate the problem of secure and efficient provenance transmission and
processing for sensor networks, and we use provenance to detect packet loss
attacks staged by malicious sensor nodes. We propose a provenance encoding
strategy whereby each node on the path of a data packet securely embeds
provenance information within a Bloom filter (BF) that is transmitted along with
the data. Upon receiving the packet, the BS extracts and verifies the provenance
information. We also devise an extension of the provenance encoding scheme that
allows the BS to detect if a packet drop attack was staged by a malicious node.
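The in-packet Bloom filter idea can be illustrated with the following simplified Java sketch. This is our own illustration rather than the referenced scheme itself: a real design would use per-node keyed hashes/MACs, whereas here a plain SHA-256 digest of a node identifier and the packet sequence number stands in for them.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.BitSet;

// Simplified illustration of embedding per-node provenance in a Bloom filter.
public class ProvenanceBloomFilter {
    private static final int M = 256; // filter size in bits (fits a packet header)
    private static final int K = 3;   // number of hash positions per element
    private final BitSet bits = new BitSet(M);

    // Derives K bit positions for a (nodeId, packetSeq) pair from SHA-256.
    private int[] positions(String nodeId, long packetSeq) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] h = md.digest((nodeId + ":" + packetSeq).getBytes(StandardCharsets.UTF_8));
        int[] pos = new int[K];
        for (int i = 0; i < K; i++) {
            pos[i] = ((h[2 * i] & 0xFF) << 8 | (h[2 * i + 1] & 0xFF)) % M;
        }
        return pos;
    }

    // Called by each forwarding node: embeds its identity for this packet.
    public void embed(String nodeId, long packetSeq) throws Exception {
        for (int p : positions(nodeId, packetSeq)) {
            bits.set(p);
        }
    }

    // Called by the base station: checks whether a node appears on the path.
    public boolean mightContain(String nodeId, long packetSeq) throws Exception {
        for (int p : positions(nodeId, packetSeq)) {
            if (!bits.get(p)) {
                return false; // definitely not embedded
            }
        }
        return true; // possibly embedded (false positives are possible)
    }
}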

We introduce the network, data and provenance models used. We also present the
threat model and security requirements. Finally, we provide a brief primer on
Bloom filters, their fundamental properties and operations.

JAVA
The Java architecture consists of:
• A high-level object-oriented programming language,
• A platform-independent representation of a compiled class,
• A pre-defined set of run-time libraries,
• A virtual machine.
This section is mainly concerned with the language aspects of Java and the
associated java.lang library package. Consequently, the remainder of this section
provides a brief introduction to the language. Issues associated with the other
components will be introduced as and when needed in the relevant sections. The
introduction is broken down into the following components:
• identifiers and primitive data types
• structured data types
• reference types
• blocks and exception handling
• control structures
• procedures and functions
• object-oriented programming, packages and classes
• inheritance
• interfaces
• inner classes
Identifiers and primitive data types
Identifiers: Java does not restrict the lengths of identifiers. Although the language
does allow the use of a “_” to be included in identifier names, the emerging style is
to use a mixture of upper and lower case characters. The following are example
identifiers:
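For instance, the following are legal Java identifiers (illustrative examples of our own, since the original list of examples is not reproduced in this document):

// Illustrative legal Java identifiers following the mixed-case convention.
public class IdentifierExamples {      // class name: initial capital
    static final int MAX_RETRIES = 3;  // constant: upper case with underscores

    public static void main(String[] args) {
        int accountBalance = 100;      // variable: lower camel case
        int total_count = 0;           // underscore is legal but discouraged
        System.out.println(accountBalance + MAX_RETRIES + total_count);
    }
}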
FIGURE 1. Part of the Java Predefined Throwable Class Hierarchy

SQL Server 2005 Enterprise Edition

Enterprise Edition includes the complete set of SQL Server data management and
analysis features and is uniquely characterized by several features that make it the
most scalable and available edition of SQL Server 2005. It scales to the
performance levels required to support the largest Web sites, Enterprise Online
Transaction Processing (OLTP) systems and Data Warehousing systems. Its
support for failover clustering also makes it ideal for any mission critical line-of-
business application.

Top 10 Features of SQL Server 2005


1. T-SQL (Transaction SQL) enhancements
T-SQL is the native set-based RDBMS programming language offering high-
performance data access. It now incorporates many new features including error
handling via the TRY and CATCH paradigm, Common Table Expressions (CTE),
which return a record set in a statement, and the ability to shift columns to rows
and vice versa with the PIVOT and UNPIVOT commands.
2. CLR (Common Language Runtime)
The next major enhancement in SQL Server 2005 is the integration of a .NET
compliant language such as C#, ASP.NET or VB.NET to build objects (stored
procedures, triggers, functions, etc.). This enables you to execute .NET code in the
DBMS to take advantage of the .NET functionality. It is expected to replace
extended stored procedures in the SQL Server 2000 environment as well as expand
the traditional relational engine capabilities.
3. Service Broker
The Service Broker handles messaging between a sender and receiver in a loosely
coupled manner. A message is sent, processed and responded to, completing the
transaction. This greatly expands the capabilities of data-driven applications to
meet workflow or custom business needs.

4. Data encryption
SQL Server 2000 had no documented or publicly supported functions to encrypt
data in a table natively. Organizations had to rely on third-party products to
address this need. SQL Server 2005 has native capabilities to support encryption of
data stored in user-defined databases.

5. SMTP mail
Sending mail directly from SQL Server 2000 is possible, but challenging. With
SQL Server 2005, Microsoft incorporates SMTP mail to improve the native mail
capabilities. Say "see-ya" to Outlook on SQL Server!
6. HTTP endpoints
You can easily create HTTP endpoints via a simple T-SQL statement exposing an
object that can be accessed over the Internet. This allows a simple object to be
called across the Internet for the needed data.
7. Multiple Active Result Sets (MARS)
MARS allows a persistent database connection from a single client to have more
than one active request per connection. This should be a major performance
improvement, allowing developers to give users new capabilities when working
with SQL Server. For example, it allows multiple searches, or a search and data
entry. The bottom line is that one client connection can have multiple active
processes simultaneously.
8. Dedicated administrator connection
If all else fails, stop the SQL Server service or push the power button. That
mentality is finished with the dedicated administrator connection. This
functionality will allow a DBA to make a single diagnostic connection to SQL
Server even if the server is having an issue.
9. SQL Server Integration Services (SSIS)
SSIS has replaced DTS (Data Transformation Services) as the primary ETL
(Extraction, Transformation and Loading) tool and ships with SQL Server free of
charge. This tool, completely rewritten since SQL Server 2000, now has a great
deal of flexibility to address complex data movement.
10. Database mirroring
It's not expected to be released with SQL Server 2005 at the RTM in November,
but I think this feature has great potential. Database mirroring is an extension of
the native high-availability capabilities. So, stay tuned for more details….

INFORMATION SUPER HIGHWAY:


A set of computer networks, made up of a large number of smaller networks, using
different networking protocols. The world's largest computing network, consisting
of over two million computers and supporting over 20 million users in almost 200
different countries. The Internet is growing at a phenomenal rate of between 10 and
15 percent, so any size estimates are quickly out of date.

The Internet was originally established to meet the research needs of the U.S.
defence industry, but it has grown into a huge global network serving universities,
academic researchers, commercial interests, and government agencies, both in the
U.S. and overseas. The Internet uses TCP/IP protocols and many of the Internet
hosts run the Unix operating system.
SOFTWARE REQUIREMENT SPECIFICATION
A software requirements specification (SRS) is a complete description of the
behavior of the software to be developed. It includes a set of use cases that
describe all of the interactions that the users will have with the software. In
addition to use cases, the SRS contains functional requirements, which define the
internal workings of the software: that is, the calculations, technical details, data
manipulation and processing, and other specific functionality that shows how the
use cases are to be satisfied. It also contains nonfunctional requirements, which
impose constraints on the design or implementation (such as performance
requirements, quality standards or design constraints).

The SRS phase consists of two basic activities:


1) Problem/Requirement Analysis:
This process is the harder and more nebulous of the two; it deals with understanding
the problem, the goals, and the constraints.
2) Requirement Specification:
Here, the focus is on specifying what has been found during analysis. Issues such as
representation, specification languages and tools, and checking of the specifications
are addressed during this activity.
The Requirement phase terminates with the production of the validated SRS
document. Producing the SRS document is the basic goal of this phase.
Role of SRS:
The purpose of the Software Requirement Specification is to reduce the
communication gap between the clients and the developers. The Software Requirement
Specification is the medium through which the client and user needs are accurately
specified. It forms the basis of software development. A good SRS should satisfy
all the parties involved in the system.

Data Flow Diagram

1. The DFD is also called a bubble chart. It is a simple graphical formalism
that can be used to represent a system in terms of the input data to the system,
the various processing carried out on this data, and the output data generated
by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It
is used to model the system components. These components are the system
process, the data used by the process, an external entity that interacts with
the system and the information flows in the system.
3. DFD shows how the information moves through the system and how it is
modified by a series of transformations. It is a graphical technique that
depicts information flow and the transformations that are applied as data
moves from input to output.
DFD is also known as bubble chart. A DFD may be used to represent a system at
any level of abstraction. DFD may be partitioned into levels that represent
increasing information flow and functional detail.
UML DIAGRAM

UML stands for Unified Modeling Language. UML is a standardized


general-purpose modeling language in the field of object-oriented software
engineering. The standard is managed, and was created by, the Object
Management Group.
The goal is for UML to become a common language for creating models of
object-oriented computer software. In its current form, UML is comprised of two
major components: a meta-model and a notation. In the future, some form of
method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying,
visualizing, constructing, and documenting the artifacts of software systems,
as well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that have
proven successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software
and the software development process. The UML uses mostly graphical notations
to express the design of software projects.

GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling Language so that
they can develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core
concepts.
3. Be independent of particular programming languages and development
process.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations,
frameworks, patterns and components.
7. Integrate best practices.

USE CASE DIAGRAM:

A use case diagram in the Unified Modeling Language (UML) is a type of
behavioral diagram defined by and created from a Use-case analysis. Its purpose is
to present a graphical overview of the functionality provided by a system in terms
of actors, their goals (represented as use cases), and any dependencies between
those use cases. The main purpose of a use case diagram is to show what system
functions are performed for which actor. Roles of the actors in the system can be
depicted.

Class Diagram

In software engineering, a class diagram in the Unified Modeling Language
(UML) is a type of static structure diagram that describes the structure of a system
by showing the system's classes, their attributes, operations (or methods), and the
relationships among the classes. It explains which class contains information.
SEQUENCE DIAGRAM

A sequence diagram in Unified Modeling Language (UML) is a kind of
interaction diagram that shows how processes operate with one another and in
what order. It is a construct of a Message Sequence Chart. Sequence diagrams are
sometimes called event diagrams, event scenarios, and timing diagrams.
INPUT DESIGN:

The input design is the link between the information system and the user. It
comprises the specifications and procedures for data preparation, that is, the steps
necessary to put transaction data into a usable form for processing. This can be
achieved by having the computer read data from a written or printed document, or
by having people key the data directly into the system. The design of input focuses
on controlling the amount of input required, controlling errors, avoiding delay,
avoiding extra steps, and keeping the process simple. The input is designed in such
a way that it provides security and ease of use while retaining privacy. Input design
considers the following things:

 What data should be given as input?


 How the data should be arranged or coded?
 The dialog to guide the operating personnel in providing input.
 Methods for preparing input validations and steps to follow when errors
occur.
OBJECTIVES

1. Input design is the process of converting a user-oriented description of the input
into a computer-based system. This design is important to avoid errors in the data
input process and to show the correct direction to the management for getting
correct information from the computerized system.

2. It is achieved by creating user-friendly screens for data entry to handle large
volumes of data. The goal of designing input is to make data entry easier and free
from errors. The data entry screen is designed in such a way that all data
manipulations can be performed. It also provides record viewing facilities.

3. When data is entered, it is checked for validity. Data can be entered with the
help of screens. Appropriate messages are provided as and when needed, so that
the user is not left confused. Thus the objective of input design is to create an
input layout that is easy to follow.

OUTPUT DESIGN:

A quality output is one which meets the requirements of the end user and presents
the information clearly. In any system, the results of processing are communicated
to the users and to other systems through outputs. In output design, it is determined
how the information is to be displayed for immediate need and also as hard-copy
output. It is the most important and direct source of information for the user.
Efficient and intelligent output design improves the system’s relationship with the
user and supports decision-making.

1. Designing computer output should proceed in an organized, well thought out
manner; the right output must be developed while ensuring that each output
element is designed so that people will find the system easy to use effectively.
When analysts design computer output, they should identify the specific output
that is needed to meet the requirements.

2. Select methods for presenting information.

3. Create document, report, or other formats that contain information produced by


the system.

The output form of an information system should accomplish one or more of the
following objectives.

 Convey information about past activities, current status, or projections of the
future.
 Signal important events, opportunities, problems, or warnings.
 Trigger an action.
 Confirm an action.
SYSTEM STUDY:

FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase and business proposal is put
forth with a very general plan for the project and some cost estimates. During
system analysis the feasibility study of the proposed system is to be carried out.
This is to ensure that the proposed system is not a burden to the company. For
feasibility analysis, some understanding of the major requirements for the system
is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the system will have on
the organization. The amount of fund that the company can pour into the research
and development of the system is limited. The expenditures must be justified. Thus
the developed system is well within the budget, and this was achieved because
most of the technologies used are freely available. Only the customized products
had to be purchased.

TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not have a high demand
on the available technical resources, as this would lead to high demands being
placed on the client. The developed system must have modest requirements, as
only minimal or no changes are required for implementing this system.

SOCIAL FEASIBILITY

This aspect of the study is to check the level of acceptance of the system by the user.
This includes the process of training the user to use the system efficiently. The user
must not feel threatened by the system, but must instead accept it as a necessity. The
level of acceptance by the users solely depends on the methods that are employed
to educate the user about the system and to make him familiar with it. His level of
confidence must be raised so that he is also able to make some constructive
criticism, which is welcomed, as he is the final user of the system.

SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies, and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests, and each test type addresses a specific testing requirement.
TYPES OF TESTS:

Unit testing

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application and is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
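
As a concrete illustration of the unit level, the following is a minimal sketch of a unit test in C# using NUnit. The class SwapTargetSelector and its method PickTarget are hypothetical helpers introduced only for this example; they stand in for the logic that selects, for a block currently held by one of the three servers, a random destination among the other two.

using System;
using NUnit.Framework;

// Hypothetical unit under test: picks a destination server for a swap.
public class SwapTargetSelector
{
    private readonly Random _random = new Random();

    // Returns the index (0, 1 or 2) of a server different from 'current'.
    public int PickTarget(int current)
    {
        if (current < 0 || current > 2)
            throw new ArgumentOutOfRangeException(nameof(current));
        int offset = _random.Next(1, 3);   // 1 or 2, chosen uniformly
        return (current + offset) % 3;     // never equal to 'current'
    }
}

[TestFixture]
public class SwapTargetSelectorTests
{
    [Test]
    public void TargetIsNeverTheCurrentServer()
    {
        var selector = new SwapTargetSelector();
        for (int current = 0; current < 3; current++)
            for (int i = 0; i < 100; i++)
                Assert.That(selector.PickTarget(current), Is.Not.EqualTo(current));
    }

    [Test]
    public void InvalidServerIndexIsRejected()
    {
        var selector = new SwapTargetSelector();
        Assert.Throws<ArgumentOutOfRangeException>(() => selector.PickTarget(3));
    }
}

Each test exercises a single unit in isolation with clearly defined inputs and expected results, matching the description above.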

Integration testing

Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that, although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
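
By way of illustration, the following minimal NUnit sketch combines two hypothetical components, an in-memory ServerStore and a BlockMover (both introduced only for this example), and checks their combined outcome rather than each unit in isolation.

using System;
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical component: an in-memory stand-in for one storage server.
public class ServerStore
{
    private readonly Dictionary<string, byte[]> _blocks = new Dictionary<string, byte[]>();
    public void Put(string id, byte[] data) => _blocks[id] = data;
    public byte[] Take(string id) { var data = _blocks[id]; _blocks.Remove(id); return data; }
    public bool Contains(string id) => _blocks.ContainsKey(id);
}

// Hypothetical component: moves a block from its current server to another one.
public class BlockMover
{
    private readonly Random _random = new Random();

    public int Move(ServerStore[] servers, int current, string id)
    {
        int target = (current + _random.Next(1, 3)) % 3;   // never equals 'current'
        servers[target].Put(id, servers[current].Take(id));
        return target;
    }
}

[TestFixture]
public class BlockMoverIntegrationTests
{
    [Test]
    public void MovedBlockLeavesTheOriginalServer()
    {
        var servers = new[] { new ServerStore(), new ServerStore(), new ServerStore() };
        servers[0].Put("b1", new byte[] { 1, 2, 3 });

        var mover = new BlockMover();
        int target = mover.Move(servers, 0, "b1");

        Assert.That(target, Is.Not.EqualTo(0));
        Assert.That(servers[0].Contains("b1"), Is.False);      // left the source server
        Assert.That(servers[target].Contains("b1"), Is.True);  // arrived at the target
    }
}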

Functional test
Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system
documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage of identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.
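
As an example of how the valid and invalid input classes above translate into concrete tests, the following NUnit sketch exercises a hypothetical BlockIdValidator; the identifier format it assumes (a non-empty string of at most 16 hexadecimal characters) is illustrative only and is not prescribed by the system.

using System;
using NUnit.Framework;

// Hypothetical validator used only for this example.
public static class BlockIdValidator
{
    public static bool IsValid(string id)
    {
        if (string.IsNullOrEmpty(id) || id.Length > 16)
            return false;
        foreach (char c in id)
            if (!Uri.IsHexDigit(c))
                return false;
        return true;
    }
}

[TestFixture]
public class BlockIdValidatorTests
{
    // Identified classes of valid input must be accepted.
    [TestCase("0a1b2c")]
    [TestCase("FFFF")]
    public void ValidIdsAreAccepted(string id)
    {
        Assert.That(BlockIdValidator.IsValid(id), Is.True);
    }

    // Identified classes of invalid input must be rejected.
    [TestCase("")]
    [TestCase("not-hex!")]
    [TestCase("0123456789abcdef0")]   // 17 characters: too long
    public void InvalidIdsAreRejected(string id)
    {
        Assert.That(BlockIdValidator.IsValid(id), Is.False);
    }
}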

System Test

System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.

White Box Testing


White Box Testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

Black Box Testing

Black Box Testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.

1. Unit Testing

Unit testing is usually conducted as part of a combined code and unit test phase of
the software lifecycle, although it is not uncommon for coding and unit testing to
be conducted as two distinct phases.

Test strategy and approach

Field testing will be performed manually and functional tests will be written in
detail.

Test objectives
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.

Features to be tested

 Verify that the entries are of the correct format.
 No duplicate entries should be allowed.
 All links should take the user to the correct page.
2. Integration Testing

Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.

The task of the integration test is to check that components or software applications, e.g. components in a software system or, one step up, software applications at the company level, interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

3. Acceptance Testing

User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
Conclusion

We have proposed an approach that protects both the confidentiality of data stored at external servers and the confidentiality of accesses to them. The approach is based on the use of a key-based, dynamically allocated data structure distributed over three independent servers. We have described our reference data structure and illustrated how our distributed allocation and swapping techniques operate at every access to protect access confidentiality. Our analysis illustrates the protection offered by our approach in two representative scenarios. We first considered a worst-case scenario in which the servers start with complete knowledge of the data they store, showing how swapping quickly degrades such knowledge. We then analyzed a scenario in which the servers have no initial knowledge but observe the individual accesses, and showed how our approach prevents knowledge accumulation. Our analysis confirms that distributed allocation and swapping provide strong protection guarantees, typically outperforming traditional shuffling, even in the presence of collusion.
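
To make the mechanism concrete, the following is a highly simplified, non-authoritative C# sketch of the re-allocation step; it is not our actual implementation, and the names ThreeServerSwapper and AccessAndSwap are illustrative only. It shows the core invariant: a node read from one server is immediately re-allocated, uniformly at random, to one of the other two servers.

using System;
using System.Collections.Generic;

public class Node
{
    public string Id { get; set; }
    public byte[] EncryptedContent { get; set; }
}

public class ThreeServerSwapper
{
    private readonly Random _random = new Random();

    // One in-memory dictionary stands in for each of the three servers.
    private readonly Dictionary<string, Node>[] _servers = new[]
    {
        new Dictionary<string, Node>(),
        new Dictionary<string, Node>(),
        new Dictionary<string, Node>()
    };

    public void Store(int server, Node node) => _servers[server][node.Id] = node;

    // Reads the node and immediately re-allocates it to a different server,
    // so that it never remains at the server it was read from.
    public Node AccessAndSwap(int currentServer, string nodeId)
    {
        Node node = _servers[currentServer][nodeId];
        _servers[currentServer].Remove(nodeId);

        // Pick one of the two remaining servers uniformly at random; a single
        // observing server cannot tell which of the other two received the node.
        int target = (currentServer + _random.Next(1, 3)) % 3;
        _servers[target][nodeId] = node;
        return node;
    }
}

The sketch deliberately omits encryption details and the additional protections applied at each access; it only illustrates why an observed node never remains at the server from which it was read.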

Future Work
A different, although related, line of work is represented by fragmentation-based approaches for protecting data confidentiality. These solutions are based on the idea of splitting sensitive data among different relations, possibly stored at different storage servers, so as to protect sensitive associations between attributes of the original relation. Although based on a similar principle, fragmentation-based approaches only protect content confidentiality; investigating how they can be combined with our distributed allocation and swapping techniques to also cover access confidentiality is a natural direction for future work.

SYSTEM IMPLEMENTATION
Implementation is the stage of the project when the theoretical design is turned into a working system. It can thus be considered the most critical stage in achieving a successful new system and in giving the user confidence that the new system will work and be effective.

The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, design of methods to achieve the changeover, and evaluation of the changeover methods.

Implementation is the process of converting a new system design into operation. It is the phase that focuses on user training, site preparation, and file conversion for installing a candidate system. The important factor to be considered here is that the conversion should not disrupt the functioning of the organization.
