
A Project Report On

SSL BACKEND FORWARDING SCHEME IN CLUSTER-BASED WEB SERVERS

This is submitted in partial fulfillment of the requirements for
the award of the degree of Bachelor of Technology (B.Tech)

In

COMPUTER SCIENCE & ENGINEERING

Under The Guidance Of

Sri N.V. Rajasekhar Reddy,


Assistant Professor

Submitted By

K.DURGA BHAVANI
K.RAMESH BABU M.YASWANTH KUMAR
R.MOHAN KRISHNA AYYAPPA P.SUNDARAM

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


G.V.R. & S COLLEGE OF ENGINEERING & TECHNOLOGY
(AFFILIATED TO ACHARYA NAGARJUNA UNIVERSITY)
Ganginenipuram, Near Budampadu, GUNTUR – 522013, A.P.
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
G.V.R. & S COLLEGE OF ENGINEERING & TECHNOLOGY
(AFFILIATED TO ACHARYA NAGARJUNA UNIVERSITY)

CERTIFICATE

This is to certify that the report entitled "SSL BACKEND FORWARDING SCHEME IN
CLUSTER-BASED WEB SERVERS" is the bonafide work of K.Durga Bhavani (y7cs1422),
K.Ramesh Babu (y7cs1428), R.Mohan Krishna Ayyappa (y7cs1439), M.Yaswanth Kumar
(y7cs1422), and P.Sundaram (y7cs1448), submitted in partial fulfillment of the
requirements for the award of the Degree of Bachelor of Technology (B.Tech) in
Computer Science & Engineering by Acharya Nagarjuna University, Nagarjunanagar,
Guntur, during the Academic Year 2010-2011.

Guide Head of the Department

(N.V.Rajasekhar Reddy) (A.Hanumath Prasad)


Assistant Professor Associate Professor
Department of Computer Science & Engineering    Department of Computer Science & Engineering
GVR&S College of Engineering & Technology GVR&S College of Engineering & Technology
Near Budampadu, Guntur Near Budampadu, Guntur

(Dr. N. Radha Krishna Murthy)


Principal
ACKNOWLEDGEMENT

We are very much thankful to Sri. N.V.Rajasekhar Reddy, Assistant Professor and
Guide, Dept. of Computer Science & Engineering, G.V.R.&S College of Engineering &
Technology, Near Budampadu, Guntur, for the encouragement and constant support to
carry out this work successfully.

We would like to take this opportunity to express our gratitude to our Chairman
Dr. G.Venkateswara Rao and our principal Dr. N. Radha Krishna Murthy for giving
us this opportunity to do the project work.

We would like to express our sincere thanks to Sri. A. Hanumath Prasad, Head
of the Department of Computer Science and Engineering, and Sri. G.Ramanjaiah, Head
of the PG Studies, for their encouragement.

We are also thankful to all our faculty members for their suggestions and the moral
support extended by them.

We take this opportunity to express our heartfelt appreciation to our beloved
TVU-SUD and e-Curve staff for their support and help rendered during the completion of
the project work.

We acknowledge the support of the Programmers, Lab technicians and other non-
teaching staff for their help in completion of this project work.

We place our gratitude to all our friends and well wishers who helped directly or
indirectly to complete this project work.

Finally we would like to extend our heartfelt thanks to our beloved parents whose
blessings and encouragement were always there as source of strength and inspiration.

(K.DURGA BHAVANI)

(K.RAMESH BABU) (M.YASWANTH KUMAR)

(R.MOHAN KRISHNA AYYAPPA) (P.SUNDARAM)


Index

Contents

1. Introduction
1.1 Overview of the System
1.2 Existing System
1.3 Proposed System
1.4 About the Organisation
1.5 System Environment
2. Feasibility Study
2.1 Technical Feasibility
2.2 Operational Feasibility
2.3 Economic Feasibility
3. Modules
3.1 Multimedia Objects Storing
3.2 Client Request to MetaDataServer
3.3 Request Indexing by MetaDataServer
3.4 Request Transferred to ObjectStorageServer
3.5 Load Balancing by ObjectStorageServer
3.6 Response Sending to Client
4. System Requirements
4.1 Hardware Requirements
4.2 Software Requirements
5. System Design
5.1 Use Case Diagram
5.2 Class Diagram
5.3 Block Diagram
5.4 Data Flow Diagram
5.5 Sequence Diagram
5.6 ER Diagram
6. Development of System and Testing
6.1 Unit Testing
6.2 Integration Testing
6.3 Validation Testing
7. Implementation
8. System Maintenance
9. Screen Shots
10. Conclusion & Future Enhancements
11. Bibliography
ABSTRACT

• State-of-the-art cluster-based data centers consisting of three tiers (Web server,
application server, and database server) are being used to host complex Web services
such as e-commerce applications. The application server handles dynamic and sensitive
Web contents that need protection from eavesdropping, tampering, and forgery.
• Although the Secure Sockets Layer (SSL) is the most popular protocol for providing
a secure channel between a client and a cluster-based network server, its high overhead
degrades server performance considerably and thus affects server scalability.
• Improving the performance of SSL-enabled network servers is therefore critical for
designing scalable and high-performance data centers. We examine the impact of SSL
offering and SSL-session-aware distribution in cluster-based network servers.
• We propose a back-end forwarding scheme, called ssl_with_bf, that employs a
low-overhead user-level communication mechanism such as the Virtual Interface
Architecture (VIA) to achieve a good load balance among server nodes.
• We compare three distribution models for network servers, Round Robin,
ssl_with_session, and ssl_with_bf, through simulation.
• The experimental results with 16-node and 32-node cluster configurations show
that, although the session reuse of ssl_with_session is critical to improving the
performance of application servers, the proposed back-end forwarding scheme can
further enhance performance due to better load balancing.
• The ssl_with_bf scheme can reduce the average latency by about 40 percent and
improve throughput across a variety of workloads.
INTRODUCTION
1. INTRODUCTION

1.1 OVERVIEW OF THE SYSTEM

Due to the growing popularity of the Internet, data centers/network servers are
anticipated to be the bottleneck in hosting network-based services, even though the
network bandwidth continues to increase faster than the server capacity. It has been
observed that network servers contribute to approximately 40 percent of the overall
delay, and this delay is likely to grow with the increasing use of dynamic Web contents.
For Web-based applications, a poor response time has significant financial implications.
For example, E-Biz reported about $1.9 billion loss in revenue in 1998 due to the long
response time resulting from the Secure Sockets Layer (SSL), which is commonly used
for secure communication between clients and Web servers. Even though SSL is the de
facto standard for transport layer security, its high overhead and poor scalability are two
major problems in designing secure large-scale network servers. Deployment of SSL can
decrease a server’s capacity by up to two orders of magnitude.
In addition, the overhead of SSL becomes even more severe in application
servers. Application servers provide dynamic contents and the contents require secure
mechanisms for protection. Generating dynamic content takes about 100 to 1,000 times
longer than simply reading static content. Moreover, since static content is seldom
updated, it can be easily cached. Several efficient caching algorithms have been proposed
to reduce latency and increase throughput of front-end Web services. However, because
dynamic content is generated during the execution of a program, caching dynamic
content is not an efficient option like caching static content. Recently, a multitude of
network services have been designed and evaluated using cluster platforms. Specifically,
the design of distributed Web servers has been a major research thrust to improve the
throughput and response time, including the first Web server model that exploits user-level
communication in a cluster-based Web server. Our previous work reduces the response
time in a cluster-based Web server using coscheduling schemes. In this paper, first, we
investigate the impact of SSL offering in cluster-based network servers, focusing on
application servers, which mainly provide dynamic content. Second, we show the
possible performance improvement when the SSL-session reuse scheme is utilized in
cluster based servers. The SSL-session reuse scheme has been tested on a single Web
server node and extended to a cluster system that consisted of three Web servers. In this
paper, we explore the SSL-session reuse scheme using 16-node and 32-node cluster
systems with various levels of workload. Third, we propose a back-end forwarding
mechanism by exploiting the low-overhead user-level communication to enhance the
SSL-enabled network server performance.
To this end, we compare three distribution models in clusters: Round Robin (RR),
ssl_with_session, and ssl_with_bf (backend_forwarding). The RR model, widely used in
Web clusters, distributes requests from clients to servers using the RR scheme.
ssl_with_session uses a more sophisticated distribution algorithm in which subsequent
requests of the same client are forwarded to the same server, avoiding expensive SSL
setup costs. The proposed ssl_with_bf uses the same distribution policy as the
ssl_with_session, but includes an intelligent load balancing scheme that forwards client
requests from a heavily loaded back-end node to a lightly loaded node to improve the
utilization across all nodes. This policy uses the underlying user-level communication for
fast communication. Extensive performance analyses with various workload and system
configurations are summarized as follows: First, schemes with reusable sessions,
deployed in the ssl_with_session and ssl_with_bf models, are essential to minimize the
SSL overhead. Second, the average latency can be reduced by 40 percent with the
proposed ssl_with_bf model compared to the ssl_with_session model, resulting in
improved throughput. Third, the proposed scheme provides high utilization and better
load balance across all nodes. The rest of this paper is organized as follows: Section 2
provides a brief overview of cluster-based network servers, user-level communication,
and SSL. Section 3 outlines three distribution models, including our proposed SSL back-
end forwarding scheme.
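To make the three models concrete, the following minimal C# sketch contrasts their dispatch decisions. It is an illustration only, not the simulator used for the results reported here; the session table, load counters, and threshold are assumptions made for the example.

using System.Collections.Generic;

// Sketch of the three request-distribution policies compared in this report.
class Dispatcher
{
    private readonly int nodeCount;
    private int rrNext;                                  // next node for Round Robin
    private readonly Dictionary<string, int> sessions =
        new Dictionary<string, int>();                   // client -> node holding its SSL session
    private readonly int[] load;                         // outstanding requests per node

    public Dispatcher(int nodes)
    {
        nodeCount = nodes;
        load = new int[nodes];
    }

    // Round Robin (RR): ignores SSL sessions entirely.
    public int RoundRobin()
    {
        return rrNext++ % nodeCount;
    }

    // ssl_with_session: subsequent requests of the same client go to the
    // node that already holds its SSL session, avoiding the setup cost.
    public int SslWithSession(string clientId)
    {
        int node;
        if (!sessions.TryGetValue(clientId, out node))
        {
            node = RoundRobin();                         // first request from this client
            sessions[clientId] = node;
        }
        return node;
    }

    // ssl_with_bf: same sticky distribution, but a heavily loaded node
    // forwards the request to the most lightly loaded back-end node
    // (over the user-level SAN in the real system).
    public int SslWithBackendForwarding(string clientId, int threshold)
    {
        int node = SslWithSession(clientId);
        if (load[node] > threshold)
        {
            int lightest = 0;
            for (int i = 1; i < nodeCount; i++)
                if (load[i] < load[lightest]) lightest = i;
            node = lightest;                             // forward to the lightest node
        }
        load[node]++;
        return node;
    }
}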
DESCRIPTION OF THE PROBLEM

1.2 EXISTING SYSTEM:

• In the existing system, the project was developed using the Round Robin (RR)
model and the SSL_with_Session model. Those models are not effective: they are
not able to deliver the output in time, and the throughput is also lower than the
expected output.
• These models suffer from a latency problem and minimal throughput. To overcome
these problems, the SSL_with_bf (back-end forwarding) model was introduced; we
implement the SSL_with_Backend Forwarding model in our proposed system.

1.3 PROPOSED SYSTEM:

• In our proposed system, we implement the SSL_with_Backend Forwarding model
(algorithm) to overcome the problems of the existing system.
• This model reduces the latency and increases the throughput compared to the
existing system (the Round Robin model and SSL_with_Session).
• The SSL_with_bf model is very helpful for load balancing of the server: it reduces
the load on a server while that server is busy. These are the advantages of our
proposed system.
• The ssl_with_bf scheme can reduce the average latency by about 40 percent and
improve throughput across a variety of workloads.
1.4 ABOUT THE ORGANISATION

COMPANY PROFILE

At Blue Chip Technologies, we go beyond providing software solutions. We work
with our clients' technologies and business processes that shape their competitive
advantage.

Founded in 2000, Blue Chip Technologies (P) Ltd. is a software and service
provider that helps organizations deploy, manage, and support their business-critical
software more effectively. Utilizing a combination of proprietary software, services and
specialized expertise, Blue Chip Technologies (P) Ltd. helps mid-to-large enterprises,
software companies and IT service providers improve consistency, speed, and
transparency of service delivery at lower costs. Blue Chip Technologies (P) Ltd. helps
companies avoid many of the delays, costs and risks associated with the distribution
and support of software on desktops, servers and remote devices.

Our automated solutions include rapid, touch-free deployments, ongoing software
upgrades, fixes and security patches, technology asset inventory and tracking, software
license optimization, application self-healing, and policy management. At Blue Chip
Technologies, we go beyond providing software solutions: we work with our clients'
technologies and business processes that shape their competitive advantage.

About The People

As a team, we have the prowess to have a clear vision and realize it too. As a
statistical evaluation, the team has more than 40,000 hours of expertise in providing
real-time solutions in the fields of Embedded Systems, Control Systems, Micro-
Controllers, C Based Interfacing, Programmable Logic Controllers, VLSI Design and
Implementation, Networking with C, C++, Java, Client Server Technologies in Java
(J2EE/J2ME/J2SE/EJB), VB & VC++, Oracle, and operating system concepts with
LINUX.
Our Vision
“Dreaming a vision is possible and realizing it is our goal”.

Our Mission
We have achieved this by creating and perfecting processes that are on par with
global standards, and we deliver high-quality, high-value services and reliable,
cost-effective IT products to clients around the world.
1.5 SYSTEM ENVIRONMENT:

FRONT END USED:

Microsoft Visual Studio .NET is used as the front end tool. The reasons for
selecting Visual Studio .NET as the front end tool are as follows:

• Visual Studio .NET has flexibility, allowing one or more languages to
interoperate to provide the solution. This cross-language compatibility allows
projects to be completed at a faster rate.
• Visual Studio .NET has the Common Language Runtime, which allows all
components to converge into one intermediate format and then interact.
• Visual Studio .NET provides excellent security when your application is
executed in the system.
• Visual Studio .NET has flexibility, allowing us to configure the working
environment to best suit our individual style. We can choose between single
and multiple document interfaces, and we can adjust the size and positioning
of the various IDE elements.
• Visual Studio .NET has the IntelliSense feature that makes coding easy, and
Dynamic Help keeps coding time very low.
• The working environment in Visual Studio .NET is often referred to as an
Integrated Development Environment because it integrates many different
functions, such as design, editing, compiling and debugging, within a common
environment. In most traditional development tools, each of these would be a
separate program, each with its own interface.
• The Visual Studio .NET language is quite powerful: if we can imagine a
programming task, it can be accomplished using Visual Basic .NET.
• After creating a Visual Studio .NET application, if we want to distribute it to
others, we can freely distribute any application to anyone who uses Microsoft
Windows. We can distribute our applications on disk, on CDs, across networks,
or over an intranet or the Internet.
• Toolbars provide quick access to commonly used commands in the
programming environment. We click a button on the toolbar once to carry out
the action represented by that button. By default, the standard toolbar is
displayed when we start Visual Basic. Additional toolbars for editing, form
design, and debugging can be toggled on or off from the Toolbars command
on the View menu.
• Many parts of Visual Studio are context sensitive. Context sensitive means we
can get help on these parts directly without having to go through the Help
menu. For example, to get help on any keyword in the Visual Basic language,
place the insertion point on that keyword in the code window and press F1.
• Visual Studio interprets our code as we enter it, catching and highlighting
most syntax or spelling errors on the fly. It's almost like having an expert
watching over our shoulder as we enter our code.

.NET (C Sharp)
INTRODUCTION:
The following literature has guided us throughout the execution of the project. It
deals with the environment in which the project is executed and the technology
required to perform the complete operations.
.Net
C#.NET:

SIMILARITIES BETWEEN C# AND JAVA:

C# is a programming language developed by taking all the important features of
various programming languages into consideration. Its similarity with various
programming languages is as follows:
Java: 70%

C++: 10%

Visual Basic: 5%

New Programming languages: 15%

Here are the common features that C# and Java share. It is very important to be aware
of these similarities, though we are not going to focus on them.

• Compiles into machine-independent, language-independent code which runs
in a managed execution environment.
• Garbage collection, coupled with the elimination of pointers (in C#, restricted
use is permitted within code marked unsafe).
• Powerful reflection capabilities.
• No header files; all code scoped to packages or assemblies; no problems
declaring one class before another with circular dependencies.
• Classes all descend from object and must be allocated on the heap with the
new keyword.
• Thread support by putting a lock on objects when entering code marked as
locked/synchronized.
• Interfaces, with multiple inheritance of interfaces and single inheritance of
implementations.
• Inner classes.
• No concept of inheriting a class with a specified access level.
• No global functions or constants; everything belongs to a class.
• Arrays and strings with lengths built in and bounds checking.
• The "." operator is always used; no more ->, :: operators.
• null and boolean/bool are keywords.
• All values are initialized before use.
• Can't use integers to govern if statements.
• Try blocks can have a finally clause.

CHARACTERISTICS OF C# :

Elegant object oriented design:

The concurrence with the golden principles of object orientation (encapsulation,
inheritance, and polymorphism) has made C# programming a great choice for
architecting a wide range of components, from high-level business objects to
system-level software applications. The C# language constructs convert these
components into XML Web services, which permits them to be invoked across the
Internet from any language running on any operating system.

Safety and Productivity:

In C#, unsafe code must be explicitly declared with the 'unsafe' modifier to
prevent accidental use. Moreover, the compiler and execution engine work hand in
hand to ensure that unsafe code is not executed in an unreliable environment.
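A minimal illustration of this rule, assuming the code is compiled with the /unsafe compiler option: pointer operations are permitted only inside members marked unsafe.

class UnsafeDemo
{
    static unsafe void Main()
    {
        int value = 42;
        int* p = &value;                    // pointers require the unsafe context
        System.Console.WriteLine(*p);       // prints 42
    }
}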

Name spaces:

C# does its job in a hierarchical namespace model. Namespaces are C# program
elements which help to organize programs. Objects are grouped into namespaces, and
a particular namespace has to be included in a program to access the classes and
objects within it.


Garbage collection:

The memory management feature manages the lifetime of all managed objects.
Garbage collection is a feature of .NET that C# uses during runtime.

Data types:

C# is a strongly typed language that sets rules to maintain the integrity of the data
stored in it. Its data types include value types, reference types, and boxing and
unboxing conversions. There are also simple types, namely integral types, the Boolean
type, the char type, floating-point types, the decimal type, structure types, and
enumeration types.

Versioning:

C# programming supports versioning. .NET solves the versioning problem and
enables the software developer to specify version dependencies between different
pieces of software.

Indexers:

C# has indexers, which help to access values in a class with an array-like syntax.
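A short sketch of an indexer; the WeekDays class is a hypothetical example.

class WeekDays
{
    private readonly string[] days =
        { "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat" };

    // The indexer: lets callers use array-like syntax on the object.
    public string this[int index]
    {
        get { return days[index]; }
        set { days[index] = value; }
    }
}

// Usage: WeekDays w = new WeekDays(); System.Console.WriteLine(w[1]); // "Mon"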

Exception handling:
.NET standardizes exception handling across languages. C# offers conditional
keywords to control the flow and make the code more readable.
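A small illustrative example of the standardized try/catch/finally pattern; the finally clause runs whether or not an exception is thrown.

using System;

class ExceptionDemo
{
    static void Main()
    {
        try
        {
            int zero = 0;
            Console.WriteLine(10 / zero);            // throws DivideByZeroException
        }
        catch (DivideByZeroException ex)
        {
            Console.WriteLine("Caught: " + ex.Message);
        }
        finally
        {
            Console.WriteLine("Cleanup always runs here.");
        }
    }
}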

Error Elimination:

C# programming eliminates costly software programming errors through garbage
collection and type-safe variables which are automatically initialized by the
environment. C# makes it simple for the software developer to write and maintain
programs that give solutions to complex business problems.

Flexibility & Power:

C# has the flexibility to permit typed, extensible metadata that can be applied to
any object. A project architect can define domain-specific attributes and apply them
to any language element: classes, interfaces, and so on.

Extensive inter-operability:

Almost all enterprise software applications can be managed easily in a type-safe
environment. This extensive interoperability makes C# the obvious choice for most
software developers.

To conclude, C# programming is a sophisticated language for a sophisticated
world: it is productive, object oriented and lessens development effort, while keeping
pace with the programming heritage of C++. This brand new language enables
programmers to develop fast and easy solutions for the .NET development
environment, while considerably reducing development costs and providing flexibility
and productivity.
OVERVIEW OF THE .NET FRAMEWORK

Visual Studio .NET is a complete set of development tools for building ASP Web
applications, XML Web services, desktop applications, and mobile applications. Visual
Basic .NET, Visual C++ .NET, Visual C# .NET, and Visual J# .NET all use the same
integrated development environment (IDE), which allows them to share tools and
facilitates the creation of mixed-language solutions. In addition, these languages
leverage the functionality of the .NET Framework, which provides access to key
technologies that simplify the development of ASP Web applications and XML Web
services.

The .NET Framework is an integral Windows component that supports building and
running the next generation of applications and XML Web services. The .NET
Framework is designed to fulfill the following objectives:

• To provide a consistent object-oriented programming environment whether object
code is stored and executed locally, executed locally but Internet-distributed, or
executed remotely.
• To provide a code-execution environment that minimizes software deployment
and versioning conflicts.
• To provide a code-execution environment that promotes safe execution of code,
including code created by an unknown or semi-trusted third party.
• To provide a code-execution environment that eliminates the performance
problems of scripted or interpreted environments.
• To make the developer experience consistent across widely varying types of
applications, such as Windows-based applications and Web-based applications.
• To build all communication on industry standards to ensure that code based on the
.NET Framework can integrate with any other code.
The .NET Framework has two main components: the common language runtime and
the .NET Framework class library. The common language runtime is the foundation of
the .NET Framework. You can think of the runtime as an agent that manages code at
execution time, providing core services such as memory management, thread
management, and remote processing, while also enforcing strict type safety and other
forms of code accuracy that promote security and robustness. In fact, the concept of code
management is a fundamental principle of the runtime. Code that targets the runtime is
known as managed code, while code that does not target the runtime is known as
unmanaged code. The class library, the other main component of the .NET Framework, is
a comprehensive, object-oriented collection of reusable types that you can use to develop
applications ranging from traditional command-line or graphical user interface (GUI)
applications to applications based on the latest innovations provided by ASP.NET, such
as Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components that load the common
language runtime into their processes and initiate the execution of managed code, thereby
creating a software environment that can exploit both managed and unmanaged features.
The .NET Framework not only provides several runtime hosts, but also supports the
development of third-party runtime hosts.

For example, ASP.NET hosts the runtime to provide a scalable, server-side
environment for managed code. ASP.NET works directly with the runtime to enable
ASP.NET applications and XML Web services, both of which are discussed later in this
topic.

Internet Explorer is an example of an unmanaged application that hosts the runtime (in
the form of a MIME type extension). Using Internet Explorer to host the runtime enables
you to embed managed components or Windows Forms controls in HTML documents.
Hosting the runtime in this way makes managed mobile code (similar to Microsoft®
ActiveX® controls) possible, but with significant improvements that only managed code
can offer, such as semi-trusted execution and isolated file storage.
The following illustration shows the relationship of the common language runtime and
the class library to your applications and to the overall system. The illustration also
shows how managed code operates within a larger architecture.

Fig 3.1: The .NET Framework in context (illustration)


The following sections describe the main components and features of the .NET
Framework in greater detail.

ADO.NET Overview

ADO.NET is an evolution of the ADO data access model that directly addresses user
requirements for developing scalable applications. It was designed specifically for the
web with scalability, statelessness, and XML in mind.

ADO.NET uses some ADO objects, such as the Connection and Command objects, and
also introduces new objects. Key new ADO.NET objects include the DataSet,
DataReader, and DataAdapter.

The important distinction between this evolved stage of ADO.NET and previous data
architectures is that there exists an object -- the DataSet -- that is separate and distinct
from any data stores. Because of that, the DataSet functions as a standalone entity. You
can think of the DataSet as an always disconnected recordset that knows nothing about
the source or destination of the data it contains. Inside a DataSet, much like in a
database, there are tables, columns, relationships, constraints, views, and so forth.

A DataAdapter is the object that connects to the database to fill the DataSet. Then, it
connects back to the database to update the data there, based on operations performed
while the DataSet held the data. In the past, data processing has been primarily
connection-based. Now, in an effort to make multi-tiered apps more efficient, data
processing is turning to a message-based approach that revolves around chunks of
information. At the center of this approach is the DataAdapter, which provides a bridge
to retrieve and save data between a DataSet and its source data store. It accomplishes this
by means of requests to the appropriate SQL commands made against the data store.
The XML-based DataSet object provides a consistent programming model that works
with all models of data storage: flat, relational, and hierarchical. It does this by having no
'knowledge' of the source of its data, and by representing the data that it holds as
collections and data types. No matter what the source of the data within the DataSet is, it
is manipulated through the same set of standard APIs exposed through the DataSet and
its subordinate objects.

While the DataSet has no knowledge of the source of its data, the managed provider has
detailed and specific information. The role of the managed provider is to connect, fill,
and persist the DataSet to and from data stores. The OLE DB and SQL Server .NET Data
Providers (System.Data.OleDb and System.Data.SqlClient) that are part of the .Net
Framework provide four basic objects: the Command, Connection, DataReader and
DataAdapter. In the remaining sections of this document, we'll walk through each part
of the DataSet and the OLE DB/SQL Server .NET Data Providers explaining what they
are, and how to program against them.

The following sections will introduce you to some objects that have evolved, and some
that are new. These objects are:

• Connections. For connecting to and managing transactions against a
database.
• Commands. For issuing SQL commands against a database.
• DataReaders. For reading a forward-only stream of data records from a
SQL Server data source.
• DataSets. For storing, remoting and programming against flat data, XML
data and relational data.
• DataAdapters. For pushing data into a DataSet, and reconciling data
against a database.

When dealing with connections to a database, there are two different options: SQL
Server .NET Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider
(System.Data.OleDb). In these samples we will use the SQL Server .NET Data Provider.
These are written to talk directly to Microsoft SQL Server. The OLE DB .NET Data
Provider is used to talk to any OLE DB provider (as it uses OLE DB underneath).

Connections

Connections are used to 'talk to' databases, and are represented by provider-specific
classes such as SqlConnection. Commands travel over connections, and result sets are
returned in the form of streams which can be read by a DataReader object or pushed
into a DataSet object.

Commands

Commands contain the information that is submitted to a database, and are represented
by provider-specific classes such as SqlCommand. A command can be a stored procedure
call, an UPDATE statement, or a statement that returns results. You can also use input
and output parameters, and return values, as part of your command syntax. The example
below shows how to issue an INSERT statement against the Northwind database.
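A sketch of that INSERT using a parameterized SqlCommand; the connection string is a placeholder for your own server, and Shippers is a standard Northwind table.

using System.Data.SqlClient;

class InsertDemo
{
    static void Main()
    {
        string connStr = "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI";
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Shippers (CompanyName, Phone) VALUES (@name, @phone)", conn))
        {
            cmd.Parameters.AddWithValue("@name", "Speedy Express II");
            cmd.Parameters.AddWithValue("@phone", "(503) 555-0100");
            conn.Open();
            int rows = cmd.ExecuteNonQuery();        // number of rows inserted
        }
    }
}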

DataReaders

The DataReader object is somewhat synonymous with a read-only/forward-only cursor
over data. The DataReader API supports flat as well as hierarchical data. A DataReader
object is returned after executing a command against a database. The format of the
returned DataReader object is different from a recordset. For example, you might use
the DataReader to show the results of a search list in a web page.
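A minimal sketch of such forward-only reading; the connection string and query are illustrative.

using System;
using System.Data.SqlClient;

class ReaderDemo
{
    static void Main()
    {
        string connStr = "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI";
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT CustomerID, CompanyName FROM Customers", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())                // forward-only traversal
                    Console.WriteLine("{0}: {1}",
                        reader["CustomerID"], reader["CompanyName"]);
            }
        }
    }
}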

DataSets and DataAdapters

DataSets
The DataSet object is similar to the ADO Recordset object, but more powerful, and with
one other important distinction: the DataSet is always disconnected. The DataSet object
represents a cache of data, with database-like structures such as tables, columns,
relationships, and constraints. However, though a DataSet can and does behave much
like a database, it is important to remember that DataSet objects do not interact directly
with databases, or other source data. This allows the developer to work with a
programming model that is always consistent, regardless of where the source data resides.
Data coming from a database, an XML file, from code, or user input can all be placed
into DataSet objects. Then, as changes are made to the DataSet they can be tracked and
verified before updating the source data. The GetChanges method of the DataSet object
actually creates a second DataSet that contains only the changes to the data. This DataSet

The DataSet has many XML characteristics, including the ability to produce and
consume XML data and XML schemas. XML schemas can be used to describe schemas
interchanged via WebServices. In fact, a DataSet with a schema can actually be
compiled for type safety and statement completion.

DataAdapters (OLEDB/SQL)

The DataAdapter object works as a bridge between the DataSet and the source data.
Using the provider-specific SqlDataAdapter (along with its associated SqlCommand
and SqlConnection) can increase overall performance when working with Microsoft
SQL Server databases. For other OLE DB-supported databases, you would use the
OleDbDataAdapter object and its associated OleDbCommand and OleDbConnection
objects.

The DataAdapter object uses commands to update the data source after changes have
been made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT
command; using the Update method calls the INSERT, UPDATE or DELETE command
for each changed row. You can explicitly set these commands in order to control the
statements used at runtime to resolve changes, including the use of stored procedures. For
ad-hoc scenarios, a CommandBuilder object can generate these at run-time based upon
a select statement. However, this run-time generation requires an extra round-trip to the
server in order to gather required metadata, so explicitly providing the INSERT,
UPDATE, and DELETE commands at design time will result in better run-time
performance.
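A minimal sketch of the Fill/Update cycle described above, using a run-time CommandBuilder for the ad-hoc case; the connection string and table are illustrative.

using System.Data;
using System.Data.SqlClient;

class AdapterDemo
{
    static void Main()
    {
        string connStr = "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI";
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Shippers", conn))
        using (SqlCommandBuilder builder = new SqlCommandBuilder(adapter))
        {
            DataSet ds = new DataSet();
            adapter.Fill(ds, "Shippers");                              // runs the SELECT command

            ds.Tables["Shippers"].Rows[0]["Phone"] = "(503) 555-0199"; // edit while disconnected

            adapter.Update(ds, "Shippers");                            // UPDATE for the changed row
        }
    }
}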
1. ADO.NET is the next evolution of ADO for the .Net Framework.
2. ADO.NET was created with n-Tier, statelessness and XML in the
forefront. Two new objects, the DataSet and DataAdapter, are provided for
these scenarios.
3. ADO.NET can be used to get data from a stream, or to store data in a
cache for updates.
4. There is a lot more information about ADO.NET in the documentation.
5. Remember, you can execute a command directly against the database in
order to do inserts, updates, and deletes. You don't need to first put data into a
DataSet in order to insert, update, or delete it.
6. Also, you can use a DataSet to bind to the data, move through the data,
and navigate data relationships.

BACK END USED:

SQL SERVER 2005

Microsoft SQL Server is a relational database management system (RDBMS)


produced by Microsoft. The code base for MS SQL Server originated in Sybase SQL
Server.

SQL Server 2005, released in October 2005, is the successor to SQL Server 2000. It
included native support for managing XML data, an ETL tool (SQL Server Integration
Services or SSIS), a Reporting Server, an OLAP and data mining server (Analysis
Services), and several messaging technologies, specifically Service Broker and
Notification Services.

Microsoft SQL Server 2005 includes a component named SQL CLR via
which it integrates with .NET Framework. Unlike most other applications that use .NET
Framework, SQL Server itself hosts the .NET Framework runtime, i.e., memory,
threading and resource management requirements of .NET Framework are satisfied by
SQLOS itself, rather than the underlying Windows operating system. SQLOS provides
deadlock detection and resolution services for .NET code as well. With SQL CLR, stored
procedures and triggers can be written in any managed .NET language, including C# and
VB.NET. Managed code can also be used to define UDTs (user-defined types), which
can persist in the database. Managed code is compiled to .NET assemblies and, after being
verified for type safety, registered at the database. After that, they can be invoked like
any other procedure.[26] However, only a subset of the Base Class Library is available
when running code under SQL CLR. Most APIs relating to user interface functionality
are not available.[26]

When writing code for SQL CLR, data stored in SQL Server databases
can be accessed using the ADO.NET APIs like any other managed application that
accesses SQL Server data. However, doing that creates a new database session, different
from the one in which the code is executing. To avoid this, SQL Server provides some
enhancements to the ADO.NET provider that allows the connection to be redirected to
the same session which already hosts the running code. Such connections are called
context connections and are set by setting context connection parameter to true in the
connection string. SQL Server also provides several other enhancements to the
ADO.NET API, including classes to work with tabular data or a single row of data as
well as classes to work with internal metadata about the data stored in the database.
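A sketch of a SQL CLR stored procedure that uses such a context connection; the procedure name and query are illustrative.

using Microsoft.SqlServer.Server;
using System.Data.SqlClient;

public class ClrProcedures
{
    [SqlProcedure]
    public static void CountOrders()
    {
        // "context connection=true" reuses the session that invoked this code
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();
            object count = cmd.ExecuteScalar();
            SqlContext.Pipe.Send("Order count: " + count);   // send a message to the caller
        }
    }
}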
FEASIBILITY STUDY
2. Feasibility Study
The next step in the analysis is to verify the feasibility of the proposed system. "All
projects are feasible given unlimited resources and infinite time." But in reality both
resources and time are scarce. Projects should conform to time bounds and should be
optimal in their consumption of resources. This places constraints on the approval of
any project.
Feasibility, as applied to the proposed system, pertains to the following areas:
• Technical feasibility
• Operational feasibility
• Economic feasibility

2.1 TECHNICAL FEASIBILITY:


To determine whether the proposed system is technically feasible, we should take into
consideration the technical issues involved in the system.
2.2 OPERATIONAL FEASIBILITY:
To determine the operational feasibility of the system, we should take into
consideration the awareness level of the users. This system is operationally feasible
since the users are familiar with the technologies, and hence there is no need to gear up
the personnel to use the system. Also, the system is very friendly and easy to use.
2.3. ECONOMIC FEASIBILITY
To decide whether a project is economically feasible, we have to consider various
factors such as:
• Cost benefit analysis
• Long-term returns
• Maintenance costs
MODULES
3. Modules
3.1 Multimedia Objects Storing
3.2 Client Request to MetaDataServer
3.3 Request Indexing by MetaDataServer
3.4 Request Transferred to ObjectStorageServer
3.5 Load Balancing by ObjectStorageServer
3.6 Response Sending to Client

3.1 Multimedia Objects Storing:

In this module we store the multimedia objects, such as audio, video and images,
in the server database according to the size, location and type of the object. The objects
are converted into binary format and then stored in the server database. At the time of
retrieval, they are converted back to the original format. The audio and video files are
stored as large objects; the datatype of the image is varbinary.
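A minimal sketch of this module, assuming a hypothetical ImageStore table with a varbinary(max) data column; the connection string and schema are assumptions.

using System.Data.SqlClient;
using System.IO;

class ObjectStore
{
    // Reads a file into binary form and saves it to the server database.
    static void SaveImage(string connStr, string path)
    {
        byte[] data = File.ReadAllBytes(path);               // object converted to binary
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO ImageStore (ImagePath, ImageSize, ImageData) VALUES (@p, @s, @d)", conn))
        {
            cmd.Parameters.AddWithValue("@p", path);
            cmd.Parameters.AddWithValue("@s", data.Length);
            cmd.Parameters.AddWithValue("@d", data);         // maps to varbinary(max)
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}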

3.2 Client Request to MetaDataServer:

This module contains the client request sending information. Initially the client
views the files in the server and then makes a request to the MetaDataServer with the
corresponding client IP address and port number. After that, the MetaDataServer stores
the requested file name, IP address, file size and file type. This process is called
request indexing.
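A minimal sketch of the request, assuming the MetaDataServer listens on a plain TCP socket and that the request record is a simple delimited string; the host, port, and message format are assumptions.

using System.Net.Sockets;
using System.Text;

class MdsClient
{
    // Sends the requested file name plus the client's own IP address and
    // port so the MetaDataServer can index the request.
    static void RequestFile(string mdsHost, int mdsPort, string fileName,
                            string clientIp, int clientPort)
    {
        using (TcpClient client = new TcpClient(mdsHost, mdsPort))
        {
            byte[] msg = Encoding.UTF8.GetBytes(
                fileName + "|" + clientIp + "|" + clientPort);
            client.GetStream().Write(msg, 0, msg.Length);
        }
    }
}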

3.3 Request Indexing by MetaDataServer:

In this module the centralized server, the MetaDataServer, gets the client request
and notes down the information: client IP address, file name, file size and file type.
After that, this server redirects the request to the PrimaryObjectStorageServer, which
gives the highest performance in the cluster of ObjectStorageServers. The
MetaDataServer is responsible only for the redirection of requests and for locating the
requested objects.

3.4 Request Transferred to PrimaryObjectStorageServer:

In this module the ObjectStorageServer handles the requests that come from the
MetaDataServer and also handles load balancing with the help of the secondary OSSs.
Initially it accepts all requests from the MetaDataServer; after that it checks whether it
will handle a request itself or transfer it to a SecondaryObjectStorageServer. This
decision is made using a count: if a request would exceed the count, it is transferred to
the SecondaryObjectStorageServers.
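A minimal sketch of this decision, assuming the count threshold of 3 shown in the data flow diagram and a simple rotation over the secondary OSSs.

using System.Collections.Generic;

class PrimaryOss
{
    private const int Threshold = 3;           // from the DFD: count <= 3 is served locally
    private int pendingCount;                  // a real server would decrement this on completion
    private readonly Queue<string> secondaries = new Queue<string>();

    public PrimaryOss(IEnumerable<string> secondaryAddresses)
    {
        foreach (string s in secondaryAddresses)
            secondaries.Enqueue(s);
    }

    // Returns the node that should serve the next request.
    public string Route()
    {
        pendingCount++;
        if (pendingCount <= Threshold)
            return "primary";                  // handle the request itself
        string next = secondaries.Dequeue();   // rotate through the secondary OSSs
        secondaries.Enqueue(next);
        return next;                           // transfer to a secondary OSS
    }
}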

3.5 Load Balancing by ObjectStorageServer:

If the load is full at the primary server, then the requests are transferred to other
servers within the cluster.

3.6 Response Sending to Client:

In this module the SecondaryObjectStorageServer handles the response that
comes from the PrimaryObjectStorageServer with the corresponding client IP address.
After sending the response, it waits for an acknowledgement from the client. Once it
gets the acknowledgement, the SecondaryObjectStorageServer sends a response to the
primary OSS, and finally the primary OSS sends an acknowledgement to the
MetaDataServer.
SYSTEM REQUIREMENTS
4. SYSTEM REQUIREMENTS

4.1 Hardware Requirements:

Processor : Pentium III / IV
Hard Disk : 80 GB
RAM : 1 GB
Monitor : 15" VGA Color
Mouse : Ball / Optical
CD-Drive : LG 52X
Keyboard : 108 Keys

4.2 Software Requirements:

Operating System : Windows XP Professional
.NET Framework : Version 4.0
Front End : Microsoft Visual Studio .NET 2008
Language : Visual C#.NET
Back End : SQL Server 2005
SYSTEM DESIGN
5. System Design
Design is concerned with identifying software components, specifying relationships
among components, specifying software structure, and providing a blueprint for the
implementation phase. Modularity is one of the desirable properties of large systems: it
implies that the system is divided into several parts in such a manner that the interaction
between the parts is minimal and clearly specified. Design explains the software
components in detail. This helps the implementation of the system. Moreover, this
guides further changes in the system to satisfy future requirements.
ARCHITECTURE:

The SSL protocol: (figure)

Cluster-Based Data Centers:
The figure depicts the typical architecture of a cluster-based data center or network server
consisting of three layers: front end Web server, mid-level application server, and back-
end database server. A Web server layer in a data center is a Web system architecture that
consists of multiple server nodes interconnected through a System Area Network (SAN).
The Web server presents the clients a single system view through a front-end Web
switch, which distributes the requests among the nodes. A request from a client goes
through a Web switch to initiate a connection between the client and the Web server.
When a request arrives at the Web switch, the Web switch distributes the request to one
of the servers using either a content-aware (Layer-7) or a content-oblivious (Layer-4)
distribution [11]. The front-end Web server provides static or simple dynamic services.
The Web resources provided by the first tier are usually open to the public and, thus, do
not require authentication or data encryption. Hence, the average latency of client
requests in this layer is usually shorter than in the application servers. The mid-tier, called
the application server, is located between the Web servers and the back-end database.
The application server has a separate load balancer and a security infrastructure such as a
firewall and should be equipped with a support for databases, transaction management,
communication, legacy data, and other functionalities [37]. After receiving a client’s
request, an application server parses and converts it to a query. Then, it sends the
generated query to a database and gets back the response from the database. Finally, it
converts the response into an HTML-based document and sends it back to the client. The
application server provides important functionalities for online business such as online
billing, banking, and inventory management. Therefore, the majority of the content here
is generated dynamically and requires an adequate security mechanism. The back-end
database layer houses the most confidential and secure data. The main communication
overhead of the database layer is the frequent disk access through the Storage Area
Network (SAN).

Multi-tier data center architecture: (figure)

UNIFIED MODELLING LANGUAGE (UML)


An Overview of UML:
The UML is a language for
♦ Visualizing
♦ Specifying
♦ Constructing
♦ Documenting

These are the artifacts of a software-intensive system.

A conceptual model of UML:


The three major elements of UML are
♦ The UML’s basic building blocks
♦ The rules that dictate how those building blocks may be put together.
♦ Some common mechanisms that apply throughout the UML.

Basic building blocks of the UML

The vocabulary of UML encompasses three kinds of building blocks:


♦ Things
♦ Relationships
♦ Diagrams

Things are the abstractions that are first-class citizens in a model.
Relationships tie these things together.
Diagrams group interesting collections of things.

Things in UML: There are four kinds of things in the UML:


1. Structural things
2. Behavioral things.
3. Grouping things
4. Annotational things
These things are the basic object-oriented building blocks of the UML. They are used to
write well-formed models.

STRUCTURAL THINGS
Structural things are the nouns of the UML models. These are mostly static parts of the
model, representing elements that are either conceptual or physical. In all, there are seven
kinds of Structural things.
Class:
A class is a description of a set of objects that share the same attributes, operations,
relationships, and semantics. A class implements one or more interfaces.
Graphically a class is rendered as a rectangle, usually including its name, attributes and
operations, as shown below.

RELATIONSHIPS IN THE UML:

There are four kinds of relationships in the UML:


1. Dependency
2. Association
3. Generalization
4. Realization

USE CASES
Use Case diagrams are one of the five diagrams in the UML for modeling the dynamic
aspects of systems (activity diagrams, sequence diagrams, statechart diagrams and
collaboration diagrams are the four other kinds of diagrams in the UML for modeling the
dynamic aspects of systems). Use Case diagrams are central to modeling the behavior of
the system, a sub-system, or a class. Each one shows a set of use cases and actors and
relationships.

Common Properties:
A Use Case diagram is just a special kind of diagram and shares the same common
properties as all other diagrams: a name and graphical contents that are a projection
into the model. What distinguishes a use case diagram from all other kinds of diagrams
is its particular content.
Contents:
Use Case diagrams commonly contain:
Use Cases
Actors
Dependency, generalization, and association relationships

Like all other diagrams, use case diagrams may contain notes and constraints.
Use Case diagrams may also contain packages, which are used to group elements of your
model into larger chunks. Occasionally, you will want to place instances of use cases in
your diagrams, as well, especially when you want to visualize a specific executing
system.

INTERACTION DIAGRAMS:

An interaction diagram shows an interaction, consisting of a set of objects and their
relationships, including the messages that may be dispatched among them.
Interaction diagrams are used for modeling the dynamic aspects of the system.
A sequence diagram is an interaction diagram that emphasizes the time ordering of the
messages. Graphically, a sequence diagram is a table that shows objects arranged along
the X-axis and messages, ordered in increasing time, along the Y-axis.

Contents:
Interaction diagrams commonly contain:
Objects
Links
Messages
Like all other diagrams, interaction diagrams may contain notes and constraints.
SEQUENCE DIAGRAMS:
A sequence diagram is an interaction diagram that emphasizes the time ordering of the
messages. Graphically, a sequence diagram is a table that shows objects arranged along
the X-axis and messages, ordered in increasing time, along the Y-axis.
Typically you place the object that initiates the interaction at the left, and increasingly
more subordinate objects to the right. Next, you place the messages that these objects
send and receive along the Y-axis, in order of increasing time from top to bottom.
This gives the reader a clear visual cue to the flow of control over time.

Sequence diagrams have two interesting features:


1. There is the object lifeline. An object lifeline is the vertical dashed line that
represents the existence of an object over a period of time. Most objects that
appear in interaction diagrams will be in existence for the duration of the
interaction, so these objects are all aligned at the top of the diagram, with their
lifelines drawn from the top of the diagram to the bottom.
2. There is the focus of control. The focus of control is a tall, thin rectangle that
shows the period of time during which an object is performing an action, either
directly or through a subordinate procedure. The top of the rectangle is aligned
with the start of the action; the bottom is aligned with its completion.

ACTIVITY DIAGRAM:

An activity diagram is essentially a flow chart showing the flow of control from activity
to activity. Activity diagrams are used to model the dynamic aspects of a system. They
can also be used to model the flow of an object as it moves from state to state at
different points in the flow of control.
An activity is an ongoing non-atomic execution within a state machine. Activities
ultimately result in some action, which is made up of executable atomic computations
that result in a change of state of the system or the return of a value.

5.1 Use Case Diagram: (figure)
5.2 Class Diagram: (figures)

The User class has a details attribute and Login() and Registration() operations;
separate class diagrams show the processing of images, audio and video.
5.3 Block Diagram: (figure)

Clients 1-4 send their requests to the MetaDataServer, which redirects all requests to
the primary OSS; the primary OSS communicates with the other OSSs.
5.4 Data Flow Diagram: (figures)

Step 1 (client process): user login -> view files in server -> request to MDS server.
Step 2 (MetaDataServer): MDS login -> get the client request -> redirect the request
to the primary OSS.
Step 3 (primary OSS process): OSS login -> get request from MDS -> check the count
of requests; if count < 3, send the response to the client; if count > 3, transfer the
request to a secondary OSS, which sends the response to the client.
Overview: the client uploads or downloads through ssl_with_bf across Server 1,
Server 2 and Server 3, and the response time is measured.
5.5 Sequence Diagram for Client: (figure)

After login, a valid user reaches the home page and can upload image, audio and video
files and download audio and video files; an invalid user is rejected.

Sequence Diagram for VIA: (figure)

After MDS login, a valid user's request is received and redirected to the primary
server; when needed, the primary server redirects it to a secondary server, and the
response is transferred to the client location.
5.6 ER Diagram: (figure)

An Admin/User logs in with a login id and password and can upload or download audio,
image and video files through the SSL backend forwarding scheme. The MDS receives
the client request and partitions the requests between the primary and secondary OSS
servers; the file is selected from the server giving the better response time.
Structure: (figure)

From the home page, the client browses and uploads a file, and the MDS login view
shows the IP addresses and the uploaded file names. Requests are sent to whichever of
Server 1, Server 2 or Server 3 is free, and the response records the response time and
the server from which the file is served.
TESTING
6. Development of System and Testing

SYSTEM TESTING:

Testing is done for each module. After testing all the modules, the modules are
integrated and the final system is tested with test data specially designed to show that
the system will operate successfully under all conditions. Procedure-level testing is
done first: by giving improper inputs, the errors that occur are noted and eliminated.
Thus system testing is a confirmation that all is correct and an opportunity to show the
user that the system works. The final step involves validation testing, which determines
whether the software functions as the user expects. The end user rather than the system
developer conducts this test; most software developers use a process called "Alpha and
Beta testing" to uncover errors that only the end user seems able to find.

This is the final step in the system life cycle. Here we implement the tested, error-free
system in a real-life environment and make the necessary changes so that it runs in an
online fashion. System maintenance is done every month or year based on company
policies, and the system is checked for errors like runtime errors and long-run errors,
and for other maintenance tasks like table verification and reports.
6.1 UNIT TESTING

Unit testing focuses verification efforts on the smallest unit of software design, the
module. This is known as "module testing". The modules are tested separately. This
testing is carried out during the programming stage itself. In this testing step, each
module is found to be working satisfactorily as regards the expected output from the
module.

6.2 INTEGRATION TESTING

Integration testing is a systematic technique for constructing tests to uncover errors
associated with the interface. In this project, all the modules are combined and then the
entire program is tested as a whole. In the integration testing step, all the errors
uncovered are corrected before the next testing steps.

6.3 VALIDATION TESTING

Validation testing is done to uncover functional errors, that is, to check whether the
functional characteristics conform to the specifications.
IMPLEMENTATION
7. IMPLEMENTATION

Implementation is the most crucial stage in achieving a successful system and giving
the users confidence that the new system is workable and effective. Here it means the
implementation of a modified application to replace an existing one. This type of
conversion is relatively easy to handle, provided there are no major changes in the
system.
Each program is tested individually at the time of development using test data, and it
has been verified that the programs link together in the way specified in the program
specifications. The computer system and its environment are tested to the satisfaction
of the user. The system that has been developed is accepted and proved to be
satisfactory for the user, and so the system is going to be implemented very soon. A
simple operating procedure is included so that the user can understand the different
functions clearly and quickly.
Initially, as a first step, the executable form of the application is created and loaded
onto the common server machine, which is accessible to all the users, and the server is
connected to a network. The final stage is to document the entire system, which
provides the components and the operating procedures of the system.
Implementation is the stage of the project when the theoretical design is turned into a
working system. Thus it can be considered the most critical stage in achieving a
successful new system and in giving the user confidence that the new system will work
and be effective.
The implementation stage involves careful planning, investigation of the existing
system and its constraints on implementation, designing of methods to achieve
changeover, and evaluation of changeover methods.
Implementation is the process of converting a new system design into
operation. It is the phase that focuses on user training, site preparation and file conversion
for installing a candidate system. The important factor that should be considered here is
that the conversion should not disrupt the functioning of the organization.
MAINTENANCE
8. SYSTEM MAINTENANCE:

The objective of this maintenance work is to make sure that the system works at all
times without any bugs. Provision must be made for environmental changes which may
affect the computer or software system; this is called the maintenance of the system.
Nowadays there is rapid change in the software world, and due to this rapid change the
system should be capable of adapting to these changes. In our project, processes can be
added without affecting other parts of the system. Maintenance plays a vital role: the
system is liable to accept any modification after its implementation. This system has
been designed to accommodate all new changes, and doing so will not affect the
system's performance or its accuracy.
SCREEN SHOTS
9. Screen Shots:

View Files From Server:
Request to MetaDataServer:
Waiting for Response:
Request from Client Page:
Redirect All Requests to Primary OSS:
ObjectStorageServer Login:
Request Handled by PrimaryObjectStorageServer:
Response Sent to Client:
Response Handled by Secondary OSS:
Response Time:
Acknowledgement from Client to OSS:

DATA TABLE STRUCTURE

Client Login:
Image Upload: image path (Primary Key)
Table for Audio Files:
MetaDataServer Login:
Table for Video Files:
ObjectStorageServer Login: