Submitted By
K. DURGA BHAVANI
K. RAMESH BABU
M. YASWANTH KUMAR
R. MOHAN KRISHNA AYYAPPA
P. SUNDARAM
CERTIFICATE
This is to certify that the report entitled “WEB BASED SQL INJECTION PREVENTER
(WASP)” is the bonafide work of K. Durga Bhavani (y7cs1422), K. Ramesh
Babu (y7cs1428), R. Mohan Krishna Ayyappa (y7cs1439), M. Yaswanth Kumar (y7cs1422),
P. Sundaram (y7cs1448), submitted in partial fulfillment of the requirements for the award
of the degree of Bachelor of Technology (B.Tech) in Computer Science & Engineering
by Acharya Nagarjuna University, Nagarjunanagar, Guntur, during the academic
year 2010-2011.
We are very much thankful to Sri. N.V.Rajasekhar Reddy, Assistant Professor and
Guide, Dept. of Computer Science & Engineering, G.V.R.&S College of Engineering &
Technology, Near Budampadu, Guntur, for the encouragement and constant support to
carry out this work successfully.
We would like to take this opportunity to express our gratitude to our Chairman
Dr. G.Venkateswara Rao and our principal Dr. N. Radha Krishna Murthy for giving
us this opportunity to do the project work.
We are also thankful to all our faculty members for their suggestions and the moral
support extended by them.
We acknowledge the support of the Programmers, Lab technicians and other non-
teaching staff for their help in completion of this project work.
We place our gratitude to all our friends and well wishers who helped directly or
indirectly to complete this project work.
Finally we would like to extend our heartfelt thanks to our beloved parents whose
blessings and encouragement were always there as a source of strength and inspiration.
(K.DURGA BHAVANI)
1. Introduction
1.1 Overview Of the System
1.2 Existing System
1.3 Proposed System
1.4 About the Organisation
1.5 System Environment
2. Feasibility Study
2.1 Technical Feasibility
2.2 Operational Feasibility
2.3 Economic Feasibility
3. Modules
3.1 Multimedia Objects Storing
4. System Requirements
4.1. Hardware Requirements
4.2 Software Requirements
5. System Design
5.1 Use Case Diagram
5.2 Class Diagram
5.3 Block Diagram
5.4 Dataflow Diagram
5.5 Sequence Diagram
5.6 ER Diagram
6. Development of System and Testing
6.1 Unit Testing
6.2 Integration Testing
6.3 Validation Testing
7. Implementation
8. System Maintenance
9. Screen Shots
10. Conclusion & Future Enhancements
11. Bibliography
ABSTRACT
Due to the growing popularity of the Internet, data centers/network servers are
anticipated to be the bottleneck in hosting network-based services, even though the
network bandwidth continues to increase faster than the server capacity. It has been
observed that network servers contribute to approximately 40 percent of the overall
delay, and this delay is likely to grow with the increasing use of dynamic Web contents.
For Web-based applications, a poor response time has significant financial implications.
For example, E-Biz reported about $1.9 billion loss in revenue in 1998 due to the long
response time resulting from the Secure Sockets Layer (SSL), which is commonly used
for secure communication between clients and Web servers. Even though SSL is the de
facto standard for transport layer security, its high overhead and poor scalability are two
major problems in designing secure large-scale network servers. Deployment of SSL can
decrease a server’s capacity by up to two orders of magnitude.
In addition, the overhead of SSL becomes even more severe in application
servers. Application servers provide dynamic contents and the contents require secure
mechanisms for protection. Generating dynamic content takes about 100 to 1,000 times
longer than simply reading static content. Moreover, since static content is seldom
updated, it can be easily cached. Several efficient caching algorithms have been proposed
to reduce latency and increase throughput of front-end Web services. However, because
dynamic content is generated during the execution of a program, caching dynamic
content is not an efficient option like caching static content. Recently, a multitude of
network services have been designed and evaluated using cluster platforms. Specifically,
the design of distributed Web servers has been a major research thrust to improve the
throughput and response time. Ours is the first Web server model that exploits user-level
communication in a cluster-based Web server. Our previous work reduces the response
time in a cluster-based Web server using coscheduling schemes. In this paper, first, we
investigate the impact of SSL offering in cluster-based network servers, focusing on
application servers, which mainly provide dynamic content. Second, we show the
possible performance improvement when the SSL-session reuse scheme is utilized in
cluster-based servers. The SSL-session reuse scheme has been tested on a single Web
server node and extended to a cluster system that consisted of three Web servers. In this
paper, we explore the SSL-session reuse scheme using 16-node and 32-node cluster
systems with various levels of workload. Third, we propose a back-end forwarding
mechanism by exploiting the low-overhead user-level communication to enhance the
SSL-enabled network server performance.
To this end, we compare three distribution models in clusters: Round Robin (RR),
ssl_with_session, and ssl_with_bf (backend_forwarding). The RR model, widely used in
Web clusters, distributes requests from clients to servers using the RR scheme.
ssl_with_session uses a more sophisticated distribution algorithm in which subsequent
requests of the same client are forwarded to the same server, avoiding expensive SSL
setup costs. The proposed ssl_with_bf uses the same distribution policy as the
ssl_with_session, but includes an intelligent load balancing scheme that forwards client
requests from a heavily loaded back-end node to a lightly loaded node to improve the
utilization across all nodes. This policy uses the underlying user-level communication for
fast communication. Extensive performance analyses with various workload and system
configurations are summarized as follows: First, schemes with reusable sessions,
deployed in the ssl_with_session and ssl_with_bf models, are essential to minimize the
SSL overhead. Second, the average latency can be reduced by 40 percent with the
proposed ssl_with_bf model compared to the ssl_with_session model, resulting in
improved throughput. Third, the proposed scheme provides high utilization and better
load balance across all nodes. The rest of this paper is organized as follows: a brief
overview of cluster-based network servers, user-level communication, and SSL is
provided. Section 3 outlines three distribution models, including our proposed SSL back-
end forwarding scheme.
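The three distribution models compared above can be sketched as a small simulation. This is a minimal illustrative sketch, not the paper's implementation; the class and method names, and the load threshold, are assumptions made for the example.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch of the three request-distribution models:
// RR, ssl_with_session, and ssl_with_bf (backend forwarding).
class Dispatcher
{
    private readonly int[] load;   // outstanding requests per back-end node
    private readonly Dictionary<string, int> sessions = new Dictionary<string, int>();
    private int rrNext;            // next node for round-robin

    public Dispatcher(int nodes) { load = new int[nodes]; }

    // RR model: cycle through the nodes regardless of SSL state or load.
    public int DispatchRR(string client)
    {
        int node = rrNext;
        rrNext = (rrNext + 1) % load.Length;
        load[node]++;
        return node;
    }

    // ssl_with_session: pin a client to the node that already holds its
    // SSL session, avoiding the expensive handshake on later requests.
    public int DispatchWithSession(string client)
    {
        if (!sessions.TryGetValue(client, out int node))
        {
            node = DispatchRR(client);   // first request: fall back to RR
            sessions[client] = node;
            return node;
        }
        load[node]++;
        return node;
    }

    // ssl_with_bf: same session affinity, but when the pinned node is
    // heavily loaded, forward the request to the least-loaded node.
    public int DispatchWithBackendForwarding(string client, int threshold)
    {
        int node = DispatchWithSession(client);
        if (load[node] > threshold)
        {
            load[node]--;                // undo, then forward elsewhere
            int target = Array.IndexOf(load, load.Min());
            load[target]++;
            return target;
        }
        return node;
    }
}
```

Note that the sketch only models where a request lands; the real benefit of session reuse comes from skipping the SSL handshake, and backend forwarding additionally relies on low-overhead user-level communication between nodes.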
DESCRIPTION OF THE PROBLEM
In the existing system, the project was developed using the Round Robin (RR)
model and the SSL_with_Session model. Those models are not effective: they are
not able to give the output in time, and the throughput is also lower than
expected. These models suffer from high latency and minimal throughput. To
overcome these problems, the SSL_with_bf (backend forwarding) model was
introduced. We are going to implement the SSL_with_Backend Forwarding model
in our proposed system.
COMPANY PROFILE
As a team we have the prowess to have a clear vision and realize it too. As
a statistical evaluation, the team has more than 40,000 hours of expertise in providing
real-time solutions in the fields of Embedded Systems, Control Systems, Micro-
controllers, C-based Interfacing, Programmable Logic Controllers, VLSI Design and
Implementation, Networking with C, C++, Java, Client-Server Technologies in Java
(J2EE/J2ME/J2SE/EJB), VB & VC++, Oracle, and operating system concepts with
Linux.
Our Vision
“Dreaming a vision is possible and realizing it is our goal”.
Our Mission
We have achieved this by creating and perfecting processes that are on par
with global standards, and we deliver high-quality, high-value services and reliable,
cost-effective IT products to clients around the world.
1.5 SYSTEM ENVIRONMENT:
Microsoft Visual Studio .NET is used as the front-end tool. The reasons for
selecting Visual Studio .NET as the front-end tool are as follows:
.NET (C Sharp)
INTRODUCTION:
The following literature has guided us throughout the execution of
the project. It deals with the environment in which the project is
executed and the technology required to perform the complete operations.
C#.NET:
Here are the common features that C# and Java share. It is very important to be aware
of these similarities, though we are not going to focus on these.
• Classes all descend from object and must be allocated on the heap with new
keyword
• Inner classes
CHARACTERISTICS OF C# :
Namespaces:
Security:
In C#, unsafe code must be explicitly declared with the unsafe
modifier, which prevents accidental use of unsafe features. Moreover, the compiler and
execution engine work hand in hand to ensure that unsafe code is not executed in an
untrusted environment.
Garbage collection:
Data types:
Versioning:
Indexers:
Exception handling:
.NET standardizes the exception handling across languages.
C# offers the conditional keyword to control the flow and make the code more
readable.
Error Elimination:
Extensive inter-operability:
Visual Studio .NET is a complete set of development tools for building ASP Web
applications, XML Web services, desktop applications, and mobile applications. Visual
Basic .NET, Visual C++ .NET, Visual C# .NET, and Visual J# .NET all use the same
integrated development environment (IDE), which allows them to share tools and
facilitates the creation of mixed-language solutions. In addition, these languages
leverage the functionality of the .NET Framework, which provides access to key
technologies that simplify the development of ASP Web applications and XML Web
services.
The .NET Framework is an integral Windows component that supports building and
running the next generation of applications and XML Web services. The .NET
Framework is designed to fulfill the following objectives:
The .NET Framework can be hosted by unmanaged components that load the common
language runtime into their processes and initiate the execution of managed code, thereby
creating a software environment that can exploit both managed and unmanaged features.
The .NET Framework not only provides several runtime hosts, but also supports the
development of third-party runtime hosts.
Internet Explorer is an example of an unmanaged application that hosts the runtime (in
the form of a MIME type extension). Using Internet Explorer to host the runtime enables
you to embed managed components or Windows Forms controls in HTML documents.
Hosting the runtime in this way makes managed mobile code (similar to Microsoft®
ActiveX® controls) possible, but with significant improvements that only managed code
can offer, such as semi-trusted execution and isolated file storage.
The following illustration shows the relationship of the common language runtime and
the class library to your applications and to the overall system. The illustration also
shows how managed code operates within a larger architecture.
ADO.NET Overview
ADO.NET is an evolution of the ADO data access model that directly addresses user
requirements for developing scalable applications. It was designed specifically for the
web with scalability, statelessness, and XML in mind.
ADO.NET uses some ADO objects, such as the Connection and Command objects, and
also introduces new objects. Key new ADO.NET objects include the DataSet,
DataReader, and DataAdapter.
The important distinction between this evolved stage of ADO.NET and previous data
architectures is that there exists an object -- the DataSet -- that is separate and distinct
from any data stores. Because of that, the DataSet functions as a standalone entity. You
can think of the DataSet as an always disconnected recordset that knows nothing about
the source or destination of the data it contains. Inside a DataSet, much like in a
database, there are tables, columns, relationships, constraints, views, and so forth.
A DataAdapter is the object that connects to the database to fill the DataSet. Then, it
connects back to the database to update the data there, based on operations performed
while the DataSet held the data. In the past, data processing has been primarily
connection-based. Now, in an effort to make multi-tiered apps more efficient, data
processing is turning to a message-based approach that revolves around chunks of
information. At the center of this approach is the DataAdapter, which provides a bridge
to retrieve and save data between a DataSet and its source data store. It accomplishes this
by means of requests to the appropriate SQL commands made against the data store.
The XML-based DataSet object provides a consistent programming model that works
with all models of data storage: flat, relational, and hierarchical. It does this by having no
'knowledge' of the source of its data, and by representing the data that it holds as
collections and data types. No matter what the source of the data within the DataSet is, it
is manipulated through the same set of standard APIs exposed through the DataSet and
its subordinate objects.
While the DataSet has no knowledge of the source of its data, the managed provider has
detailed and specific information. The role of the managed provider is to connect, fill,
and persist the DataSet to and from data stores. The OLE DB and SQL Server .NET Data
Providers (System.Data.OleDb and System.Data.SqlClient) that are part of the .Net
Framework provide four basic objects: the Command, Connection, DataReader and
DataAdapter. In the remaining sections of this document, we'll walk through each part
of the DataSet and the OLE DB/SQL Server .NET Data Providers explaining what they
are, and how to program against them.
The following sections will introduce you to some objects that have evolved, and some
that are new. These objects are:
When dealing with connections to a database, there are two different options: SQL
Server .NET Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider
(System.Data.OleDb). In these samples we will use the SQL Server .NET Data Provider.
These are written to talk directly to Microsoft SQL Server. The OLE DB .NET Data
Provider is used to talk to any OLE DB provider (as it uses OLE DB underneath).
Connections
Connections are used to 'talk to' databases, and are represented by provider-specific
classes such as SQLConnection. Commands travel over connections and resultsets are
returned in the form of streams which can be read by a DataReader object, or pushed
into a DataSet object.
Commands
Commands contain the information that is submitted to a database, and are represented by
provider-specific classes such as SQLCommand. A command can be a stored procedure
call, an UPDATE statement, or a statement that returns results. You can also use input
and output parameters, and return values as part of your command syntax. The example
below shows how to issue an INSERT statement against the Northwind database.
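The example itself is missing from this copy of the report; the following is a hedged sketch of what such a parameterized INSERT against the Northwind database might look like. The Shippers table and its columns exist in Northwind, but the connection string and the inserted values here are assumptions made for the example.

```csharp
using System.Data.SqlClient;

// Sketch: issue a parameterized INSERT against the Northwind database.
// The connection string is an assumption; adjust it for your server.
class InsertExample
{
    // Returns the number of rows inserted.
    public static int InsertShipper(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO Shippers (CompanyName, Phone) VALUES (@name, @phone)", conn))
        {
            // Parameters keep the values out of the SQL text itself.
            cmd.Parameters.AddWithValue("@name", "Speedy Express 2");
            cmd.Parameters.AddWithValue("@phone", "(503) 555-0100");
            conn.Open();
            return cmd.ExecuteNonQuery();
        }
    }
}
```

Using parameters rather than string concatenation also avoids SQL injection, which is what this project's title is concerned with.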
DataReaders
DataSets
The DataSet object is similar to the ADO Recordset object, but more powerful, and with
one other important distinction: the DataSet is always disconnected. The DataSet object
represents a cache of data, with database-like structures such as tables, columns,
relationships, and constraints. However, though a DataSet can and does behave much
like a database, it is important to remember that DataSet objects do not interact directly
with databases, or other source data. This allows the developer to work with a
programming model that is always consistent, regardless of where the source data resides.
Data coming from a database, an XML file, from code, or user input can all be placed
into DataSet objects. Then, as changes are made to the DataSet they can be tracked and
verified before updating the source data. The GetChanges method of the DataSet object
actually creates a second DataSet that contains only the changes to the data. This DataSet
is then used by a DataAdapter (or other objects) to update the original data source.
The DataSet has many XML characteristics, including the ability to produce and
consume XML data and XML schemas. XML schemas can be used to describe schemas
interchanged via WebServices. In fact, a DataSet with a schema can actually be
compiled for type safety and statement completion.
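The disconnected DataSet and the GetChanges behavior described above can be sketched with a purely in-memory DataSet; the table and column names below are made up for the example.

```csharp
using System.Data;

// Sketch: a DataSet works entirely in memory, with no connection to any
// data store. GetChanges extracts only the rows modified since the last
// AcceptChanges call.
class DataSetSketch
{
    public static DataSet BuildAndModify()
    {
        var ds = new DataSet("Shop");
        var table = ds.Tables.Add("Products");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));

        table.Rows.Add(1, "Keyboard");
        table.Rows.Add(2, "Mouse");
        ds.AcceptChanges();            // mark the current state as "unchanged"

        table.Rows.Add(3, "Monitor");  // this row is now a pending Add
        return ds;
    }
}
```

Calling GetChanges() on the returned DataSet yields a second DataSet holding only the newly added row, which a DataAdapter could then push back to the source data store.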
DataAdapters (OLEDB/SQL)
The DataAdapter object works as a bridge between the DataSet and the source data.
Using the provider-specific SqlDataAdapter (along with its associated SqlCommand
and SqlConnection) can increase overall performance when working with Microsoft
SQL Server databases. For other OLE DB-supported databases, you would use the
OleDbDataAdapter object and its associated OleDbCommand and OleDbConnection
objects.
The DataAdapter object uses commands to update the data source after changes have
been made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT
command; using the Update method calls the INSERT, UPDATE or DELETE command
for each changed row. You can explicitly set these commands in order to control the
statements used at runtime to resolve changes, including the use of stored procedures. For
ad-hoc scenarios, a CommandBuilder object can generate these at run-time based upon
a select statement. However, this run-time generation requires an extra round-trip to the
server in order to gather required metadata, so explicitly providing the INSERT,
UPDATE, and DELETE commands at design time will result in better run-time
performance.
1. ADO.NET is the next evolution of ADO for the .Net Framework.
2. ADO.NET was created with n-Tier, statelessness and XML in the
forefront. Two new objects, the DataSet and DataAdapter, are provided for
these scenarios.
3. ADO.NET can be used to get data from a stream, or to store data in a
cache for updates.
4. There is a lot more information about ADO.NET in the documentation.
5. Remember, you can execute a command directly against the database in
order to do inserts, updates, and deletes. You don't need to first put data into a
DataSet in order to insert, update, or delete it.
6. Also, you can use a DataSet to bind to the data, move through the data,
and navigate data relationships
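Point 6 above, navigating data relationships inside a DataSet, can be sketched as follows; the Orders/OrderDetails schema is an assumption made for the example.

```csharp
using System.Data;

// Sketch: a DataRelation lets you walk from a parent row to its child
// rows entirely in memory, without touching a database.
class RelationSketch
{
    public static int ChildCountOfFirstOrder()
    {
        var ds = new DataSet();
        var orders = ds.Tables.Add("Orders");
        orders.Columns.Add("OrderId", typeof(int));
        var details = ds.Tables.Add("OrderDetails");
        details.Columns.Add("OrderId", typeof(int));
        details.Columns.Add("Product", typeof(string));

        // Relate parent and child tables on OrderId.
        ds.Relations.Add("Order_Details",
            orders.Columns["OrderId"], details.Columns["OrderId"]);

        orders.Rows.Add(1);
        details.Rows.Add(1, "Keyboard");
        details.Rows.Add(1, "Mouse");

        // Navigate the relation from the parent row to its children.
        DataRow order = orders.Rows[0];
        return order.GetChildRows("Order_Details").Length;
    }
}
```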
SQL Server 2005, released in October 2005, is the successor to SQL Server 2000. It
included native support for managing XML data, an ETL tool (SQL Server Integration
Services or SSIS), a Reporting Server, an OLAP and data mining server (Analysis
Services), and several messaging technologies, specifically Service Broker and
Notification Services.
Microsoft SQL Server 2005 includes a component named SQL CLR via
which it integrates with .NET Framework. Unlike most other applications that use .NET
Framework, SQL Server itself hosts the .NET Framework runtime, i.e., memory,
threading and resource management requirements of .NET Framework are satisfied by
SQLOS itself, rather than the underlying Windows operating system. SQLOS provides
deadlock detection and resolution services for .NET code as well. With SQL CLR, stored
procedures and triggers can be written in any managed .NET language, including C# and
VB.NET. Managed code can also be used to define UDT's (user defined types), which
can persist in the database. Managed code is compiled to .NET assemblies and, after being
verified for type safety, registered at the database. After that, they can be invoked like
any other procedure. However, only a subset of the Base Class Library is available
when running code under SQL CLR. Most APIs relating to user interface functionality
are not available.
When writing code for SQL CLR, data stored in SQL Server databases
can be accessed using the ADO.NET APIs like any other managed application that
accesses SQL Server data. However, doing that creates a new database session, different
from the one in which the code is executing. To avoid this, SQL Server provides some
enhancements to the ADO.NET provider that allows the connection to be redirected to
the same session which already hosts the running code. Such connections are called
context connections and are set by setting context connection parameter to true in the
connection string. SQL Server also provides several other enhancements to the
ADO.NET API, including classes to work with tabular data or a single row of data as
well as classes to work with internal metadata about the data stored in the database.
FEASIBILITY STUDY
2. Feasibility Study
The next step in analysis is to verify the feasibility of the proposed system. “All
projects are feasible given unlimited resources and infinite time.” But in reality both
resources and time are scarce. A project should conform to time bounds and should be
optimal in its consumption of resources. This places a constraint on the approval of any
project.
Feasibility as applied to Digital Tune pertains to the following areas:
• Technical feasibility
• Operational feasibility
• Economical feasibility
3.3 MetaDataServer Indexing
This module contains the client request information. Initially, the client
views the files on the server and then makes a request to the MetaDataServer with the
corresponding client IP address and port number. The MetaDataServer then stores the
requested file, the IP address, the file size, and the file type. This process is called
request indexing.
In this module, the ObjectStorageServer handles the requests that come from the
MetaDataServer and also handles load balancing with the help of secondary
OSSs. Initially it accepts all requests from the MetaDataServer; after that it checks
whether it will handle a request itself or transfer it to a
SecondaryObjectStorageServer. This check is made using a count: when the requests
exceed the count, they are transferred to the SecondaryObjectStorageServers.
If the load is full at the primary server, the requests are transferred to other
servers within the cluster.
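The count-based hand-off described above can be sketched as follows; the class name and the capacity value are assumptions made for the example.

```csharp
using System.Collections.Generic;

// Sketch of the primary OSS's count-based overflow rule: requests up to
// a fixed count are handled locally, and any beyond that are handed off
// to the secondary OSSs.
class PrimaryOss
{
    private readonly int maxCount;   // assumed capacity of the primary OSS
    private int active;

    public List<string> Local = new List<string>();
    public List<string> ForwardedToSecondary = new List<string>();

    public PrimaryOss(int maxCount) { this.maxCount = maxCount; }

    public void Accept(string request)
    {
        if (active < maxCount)
        {
            active++;
            Local.Add(request);                   // handle locally
        }
        else
        {
            ForwardedToSecondary.Add(request);    // load full: hand off
        }
    }
}
```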
3.6 ResponseSendToClient:
4.1Hardware Requirements:
STRUCTURAL THINGS
Structural things are the nouns of the UML models. These are mostly static parts of the
model, representing elements that are either conceptual or physical. In all, there are seven
kinds of Structural things.
Class:
A class is a description of a set of objects that share the same attributes, operations,
relationships, and semantics. A class implements one or more interfaces.
Graphically a class is rendered as a rectangle, usually including its name, attributes and
operations, as shown below.
USE CASES
Use Case diagrams are one of the five diagrams in the UML for modeling the dynamic
aspects of systems (activity diagrams, sequence diagrams, state chart diagrams and
collaboration diagrams are the four other kinds of diagrams in the UML for modeling the
dynamic aspects of systems). Use Case diagrams are central to modeling the behavior of
the system, a sub-system, or a class. Each one shows a set of use cases and actors and
relationships.
Common Properties:
A Use Case diagram is just a special kind of diagram and shares the same common
properties, as do all other diagrams- a name and graphical contents that are a projection
into the model. What distinguishes a use case diagram from all other kinds of diagrams is
its particular content.
Contents:
Use Case diagrams commonly contain:
Use Cases
Actors
Dependency, generalization, and association relationships
Like all other diagrams, use case diagrams may contain notes and constraints.
Use Case diagrams may also contain packages, which are used to group elements of your
model into larger chunks. Occasionally, you will want to place instances of use cases in
your diagrams, as well, especially when you want to visualize a specific executing
system.
INTERACTION DIAGRAMS:
Contents:
Interaction diagrams commonly contains:
Objects
Links
Messages
Like all other diagrams, interaction diagrams may contain notes and constraints.
SEQUENCE DIAGRAMS:
A sequence diagram is an interaction diagram that emphasizes the time ordering of the
messages. Graphically, a sequence diagram is a table that shows objects arranged along
the X-axis and messages, ordered in increasing time, along the Y-axis.
Typically you place the object that initiates the interaction at the left, and increasingly
more subordinate objects to the right. Next, you place the messages that these objects
send and receive along the Y-axis, in order of increasing time from top to bottom.
This gives the reader a clear visual cue to the flow of control over time.
ACTIVITY DIAGRAM:
An Activity Diagram is essentially a flow chart showing flow of control from activity to
activity. They are used to model the dynamic aspects of a system. They can also be used
to model the flow of an object as it moves from state to state at different points in the
flow of control.
An activity is an ongoing non-atomic execution within a state machine. Activities
ultimately result in some action, which is made up of executable atomic computations
that result in a change of state of the system or the return of a value.
[Class diagram: User (+details; +Login(), +Registration()).]
[Block diagram: Clients 1-4 send requests to the MetaDataServer, which redirects all requests to the Primary OSS and the other OSSs.]
5.4 Dataflow Diagram:
Step 1 (client process): user login; request to the MetaDataServer.
Step 2 (MetaDataServer): MDS login; redirect the request to the Primary OSS.
Step 3 (Primary OSS process): OSS login; the OSS sends the response to the client.
[Diagram: the client uploads or downloads through ssl_with_bf, which distributes requests across Servers 1-3 and measures the response time.]
[Screen flow: Home; upload images, upload audio files, upload video; valid user / invalid user.]
Sequence diagram for VIA:
[Sequence diagram: MDS login; client request received; the request is redirected to the primary server and the response transferred to the client; on overload, the request is transferred to the secondary server. A valid user is served; an invalid user is rejected.]
5.6 ER Diagram:
[ER diagram: an Admin/User logs in with a login id and password, then selects a file (audio, image, or video) to upload or download. The MDS receives the client request, and SSL backend forwarding partitions the requests among the primary server, OSS server, and secondary server for better response time.]
Structure:
[Diagram: IP representation; the client sends a request, the system determines which server is free, transfers the request to the receiver, and the response reports the response time and which server handled it.]
TESTING
6. Development of System and Testing
SYSTEM TESTING:
Testing is done for each module. After all the modules are tested, they are
integrated, and the final system is tested with test data specially designed to
show that the system will operate successfully under all conditions. Procedure-level
testing is done first: improper inputs are given, and the errors that occur are noted and
eliminated. Thus system testing confirms that everything is correct and is an opportunity
to show the user that the system works. The final step involves validation testing, which
determines whether the software functions as the user expects. The end user, rather than
the system developer, conducts this test; most software developers use a process called
“alpha and beta testing” to uncover defects that only the end user seems able to find.
This is the final step in the system life cycle. Here we implement the tested, error-free
system in a real-life environment and make the necessary changes so that it runs in an
online fashion. System maintenance is done every month or year, based on company
policies, and the system is checked for errors such as runtime errors and long-run errors,
and for other maintenance tasks such as table verification and reports.
6.1 UNIT TESTING
The objective of this maintenance work is to make sure that the system keeps
working at all times without any bugs. Provision must be made for environmental changes
which may affect the computer or software system. This is called the maintenance of the
system. Nowadays there is rapid change in the software world, and the system should be
capable of adapting to these changes. In our project, processes can be added without
affecting other parts of the system. Maintenance plays a vital role. The system is liable
to accept any modification after its implementation. This system has been designed to
favor all new changes. Doing this will not affect the system's performance or its
accuracy.
SCREEN SHOTS
9. Screen Shots:
View Files From Server:
Request to MetaDataServer:
Client login:
Image upload:
Image path (Primary_Key):