
[Figure: system architecture diagram — register/login, node selection, probe data initiation, queuing delay, data loss, network delay, MBI transformation, link-quality inference, dominant congested link identification, best path selection]

BONAFIDE CERTIFICATE

Certified that this project report titled MODEL BASED IDENTIFICATION OF DOMINANT CONGESTED LINK is the bonafide work of K.LAVANYA (51708205004) and N.POONGODHAI (51708205008), who carried out the research under my supervision. Certified further, that to the best of my knowledge the work reported herein does not form part of any other project report or dissertation on the basis of which a degree or

award was conferred on an earlier occasion on this or any other candidate.

Supervisor
Mrs. K. MENAGA,
Asst. Professor,
Department of Information Technology,
THIRUMALAI ENGG. COLLEGE,
KILAMBI, KANCHIPURAM.

Head of the Department
Mr. NEELAKANDEN, B.E., M.E.,
Head of the Department,
Department of Information Technology,
THIRUMALAI ENGG. COLLEGE,
KILAMBI, KANCHIPURAM.

Submitted for the Project and Viva Voce Examination held on ___________

Internal Examiner

External Examiner

ABSTRACT

This project presents a model-based approach that uses periodic end-to-end probes to identify whether a dominant congested link exists along an end-to-end path. The dominant congested link is the link that incurs the most losses and significant queuing delays along the path. Queuing delays usually occur at a node when the outgoing link has the minimum capacity on the path or its bandwidth is too low. The approach to dominant congested link identification is based on interpreting probe loss as an unobserved delay. We begin by providing a formal yet intuitive definition of a dominant congested link and present two simple hypothesis tests to identify whether such a link exists. We develop parameter inference algorithms for a hidden Markov model and a Markov model with a hidden dimension to infer this virtual delay, and also derive an upper bound on the maximum queuing delay of that link, which is an important path characteristic and is complementary to other tools that estimate the available bandwidth or the minimum link capacity of a path. Identifying the existence of a dominant congested link is useful for traffic engineering. It also helps us understand and model the dynamics of the network, since the behavior of a network with a dominant congested link differs dramatically from one with multiple congested links.

ACKNOWLEDGEMENT

We profoundly thank our chairman and the trust members of Kanchi Krishna Educational Trust for providing adequate facilities. We acknowledge and thank our beloved principal, Mr. RAMU, for his continuous encouragement. We extend our thanks to the head of the department, Assistant Prof. Neelakanden, for his precious advice regarding the project. We would like to express our deep and unbounded gratitude to our project guide, Assistant Prof. Neelakanden, head of the department, for his valuable guidance and encouragement throughout the project. He has been a constant source of inspiration and has provided precious suggestions throughout this project. We are deeply grateful to Mr. xxxx of Veeken Technology Solution (P) Ltd., our external guide, for his consistent technical guidance and encouragement all along. We thank all faculty members and supporting staff for the help they extended in completing this project. We also express our sincere thanks to our parents, family members and all our friends for their continuous support.

TABLE OF CONTENTS

CHAPTER No.   TITLE                                          PAGE No.

              ABSTRACT                                       ii
              LIST OF TABLES                                 viii
              LIST OF FIGURES                                ix
              LIST OF ABBREVIATIONS                          xi
1.            INTRODUCTION                                   1
2.            LITERATURE SURVEY                              3
3.            SYSTEM ANALYSIS                                6
              3.1 Existing System                            6
              3.2 Proposed System                            7
4.            SYSTEM REQUIREMENTS                            8
              4.1 System Requirements                        8
              4.2 Hardware Requirements                      8
              4.3 Software Specification                     8
                  4.3.1 Visual Studio .NET: Features of .NET 8
                  4.3.2 SQL Server through the Server Explorer 8
                        4.3.2.1 Database Diagrams            9
                        4.3.2.2 Tables                       10
                        4.3.2.3 Views                        10
                        4.3.2.4 Stored Procedures            11
                        4.3.2.5 Functions                    12
5.            SYSTEM DESIGN                                  13
              5.1 System Architecture                        13
              5.2 UML Diagrams                               14
                  5.2.1 Use Case Diagram                     14
                  5.2.2 Dataflow Diagram                     15
              5.3 Module Design                              18
                  5.3.1 Input Design                         18
                  5.3.2 Output Design                        19
              5.4 Database Design                            23
                  5.4.1 Table Design                         23
                  5.4.2 HTML Server Control                  23
                  5.4.3 Web Server Control                   24
6.            TESTING                                        25
              6.1 System Testing                             25
              6.2 Unit Testing                               25
              6.3 Integration Testing                        26
              6.4 Validation Testing                         28
7.            SYSTEM IMPLEMENTATION                          30
              7.1 Modules                                    30
                  7.1.1 Sender/Receiver                      30
                  7.1.2 Admin                                30
                  7.1.3 Probe Packets                        31
                  7.1.4 Network Analysis                     31
                  7.1.5 Path Estimation                      31
                  7.1.6 Report                               32
8.            SAMPLE CODING                                  32
9.            SCREEN SHOTS                                   39
10.           CONCLUSION AND FUTURE ENHANCEMENTS             48
              10.1 Conclusion                                48
              10.2 Merits of the System

CHAPTER 1 INTRODUCTION
Identifying the existence of a dominant congested link is useful for traffic engineering. For example, when there are multiple paths from one host to another and all are congested, improving the quality along a path with one dominant congested link may require fewer resources than improving it along a path with multiple congested links. Identifying whether a path has a dominant congested link also helps us understand and model the dynamics of the network, since the behavior of a network with a dominant congested link differs dramatically from one with multiple congested links. When a dominant congested link exists, identifying it requires distinguishing its delay and loss characteristics from

those of the other links. Achieving this goal via direct measurements is only possible for the organization in charge of that network. However, commercial factors often prevent an organization from disclosing the performance of internal links. Furthermore, as the Internet grows in both size and diversity, one organization may only be responsible for a subset of links on an end-to-end path. Some measurement techniques obtain internal properties of a path by using ICMP messages to query internal routers. Traceroute and ping are two widely used tools in this category. Some more advanced techniques use ICMP messages to measure per-hop capacity or delay and pinpoint faulty links. These approaches, however, require cooperation of the routers (to respond to ICMP messages and treat them similarly to data packets). Contrary to direct measurements using responses from routers, a collection of network tomography techniques infers internal loss rate and delay characteristics using end-to-end measurements. Most tomography techniques, however, require

observations from multiple vantage points. Network tomography infers internal link properties through end-to-end measurements. A rich collection of network tomography techniques has been developed in the past. Many techniques rely on correlated measurements (through multicast or striped unicast probes). More recently, several studies use uncorrelated measurements to detect lossy links, estimate loss rates, or locate congested segments that have transient high delays. Most tomography techniques, however, require many vantage points, while we only need measurements between two end-hosts along a single path. The work closest in spirit to ours is the loss pair approach that is used to discover network properties. A loss pair is formed when two packets are sent close in time and only one of the packets is lost. Assuming that the two packets experience similar behaviors along the path, the packet not lost in a loss pair is used to provide insights on network conditions close to the time when loss occurs. Although our work also uses properties of lost packets, our objectives differ tremendously from those of traditional methods. More specifically, that study begins by assuming that

a bottleneck link exists along the path and uses loss pairs to determine the maximum queuing delay of the bottleneck link.
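To make the idea of interpreting a loss as an unobserved delay concrete, here is a minimal sketch (hypothetical Python, not part of the project's C#.NET implementation): losses are marked as missing values, and the delays observed immediately around a loss, when the queue was presumably near full, give a crude empirical estimate of the maximum queuing delay.

```python
def max_delay_bound(delays, window=1):
    """Crude empirical estimate of the maximum queuing delay (seconds).

    delays: per-probe one-way queuing delays in probe order;
            None marks a lost probe (an unobserved delay)."""
    observed = [d for d in delays if d is not None]
    if not observed:
        return None
    bound = max(observed)
    # Probes adjacent to losses likely saw the queue near full,
    # so their delays are the most informative samples.
    near_loss = [delays[j]
                 for i, d in enumerate(delays) if d is None
                 for j in range(max(0, i - window), min(len(delays), i + window + 1))
                 if delays[j] is not None]
    if near_loss:
        bound = max(bound, max(near_loss))
    return bound

probes = [0.01, 0.02, None, 0.05, 0.03, None, 0.04]
print(max_delay_bound(probes))  # 0.05
```

This heuristic only illustrates the intuition; the report's actual method infers the unobserved delays through model-based hypothesis tests rather than direct inspection.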

CHAPTER 2 LITERATURE SURVEY


End-to-End Internet Packet Dynamics

Using TCP, however, also incurs two serious analysis headaches. First, we need to distinguish between the apparently intertwined effects of the transport protocol and the network. To do so, we developed tcpanaly, a program that understands the specifics of the different TCP implementations in our study and thus can separate TCP behavior from network behavior. Tcpanaly also forms the basis for the analysis in this paper: after removing TCP effects, it then computes a wide range of statistics concerning network dynamics. Second, TCP packets are sent over a wide range of time scales, from milliseconds to many seconds between consecutive packets. Such irregular spacing greatly complicates correlational and frequency-domain analysis, because a stream of TCP packets does not give us a traditional time series of constant-rate observations to work with. Consequently, in this paper we do not attempt these sorts of analyses, though we hope to pursue them in future work. See also for previous work in applying frequency-domain analysis to Internet paths. Even though Internet routers employ FIFO queuing, any time a route changes, if the new route offers a lower delay than the old one, then reordering can occur. Since we recorded packets at both ends of each TCP connection, we can detect network reordering as follows. First, we remove from our analysis any trace pairs suffering packet filter errors [Pa97a]. Then, for each arriving packet pi, we check whether it was sent after the last non-reordered packet. If so, then it becomes the new such packet. Otherwise, we count its arrival as an instance of a network reordering.

End-to-End Available Bandwidth: Measurement Methodology, Dynamics, and Relation with TCP Throughput

The available bandwidth in a network path is of major importance in congestion control, streaming applications, QoS verification, server selection, and overlay networks. We describe an end-to-end methodology, called Self-Loading Periodic Streams (SLoPS), for measuring available bandwidth. The basic idea in SLoPS is that the one-way delays of a periodic packet stream show an increasing trend when the stream's rate is higher than the avail-bw. We implemented SLoPS in a tool called pathload. The accuracy of the tool has been evaluated with both simulations and experiments over real-world Internet paths. Pathload is non-intrusive, meaning that it does not cause significant increases in the network utilization, delays, or losses. We used pathload to evaluate the variability ('dynamics') of the avail-bw in some paths that cross the USA and Europe. The avail-bw becomes significantly more variable in heavily utilized paths, as well as in paths with limited capacity (probably due to a lower degree of statistical multiplexing). We finally examine the relation between avail-bw and TCP throughput. A persistent TCP connection can be used to roughly measure the avail-bw in a path, but TCP saturates the path and significantly increases the path delays and jitter. SLoPS has been implemented in a measurement tool called pathload. The tool has been verified experimentally, by comparing its results with MRTG utilization graphs for the path links. We have also evaluated pathload in a controlled and reproducible environment using NS simulations. The simulations show that pathload reports a range that includes the average avail-bw under a wide range of load conditions and path configurations.
The tool underestimates the avail-bw, however, when the path includes several tight links. The pathload measurements are nonintrusive, meaning that they do not cause significant increases in the network utilization, delays, or losses. Pathload is described in detail in a

different publication; here we describe the tool's salient features and show a few experimental and simulation results to evaluate the tool's accuracy.

A Measurement Study of Available Bandwidth Estimation Tools

Available bandwidth estimation is useful for route selection in overlay networks, QoS verification, and traffic engineering. Recent years have seen a surge in interest in available bandwidth estimation. A few tools have been proposed and evaluated in simulation and over a limited number of Internet paths, but there is still great uncertainty in the performance of these tools over the Internet at large. The probe rate model (PRM) is based on the concept of self-induced congestion; informally, if one sends probe traffic at a rate lower than the available bandwidth along the path, then the arrival rate of probe traffic at the receiver will match its rate at the sender. In contrast, if the probe traffic is sent at a rate higher than the available bandwidth, then queues will build up inside the network and the probe traffic will be delayed. As a result, the probes' rate at the receiver will be less than their sending rate. Thus, one can measure the available bandwidth by searching for the turning point at which the probe sending and receiving rates start matching. The probe gap model (PGM) exploits the information in the time gap between the arrivals of two successive probes at the receiver. A probe pair is sent with a time gap Δin and reaches the receiver with a time gap Δout. Assuming a single bottleneck and that the queue does not become empty between the departure of the first probe in the pair and the arrival of the second probe, Δout is the time taken by the bottleneck to transmit the second probe in the pair and the cross traffic that arrived during Δin. Thus, the time to transmit the cross traffic is Δout − Δin, and the rate of the cross traffic is ((Δout − Δin)/Δin) × C, where C is the capacity of the bottleneck. The available bandwidth is: A = C × (1 − (Δout − Δin)/Δin).

User-Level Internet Path Diagnosis

We focus on the problem of locating performance faults such as loss, reordering, and significant queuing at specific links, routers, or middle

boxes (e.g., firewalls) along Internet paths. We consider this problem from the point of view of an ordinary user, with no special privileges, in a general setting where paths cross multiple administrative domains. We refer to this as the problem of user-level path diagnosis. It is important that unprivileged users be able to diagnose their paths. Performance depends on the interaction of the properties of the entire path and the application. Since operators do not share the user's view of the network, they are not always well placed to even observe the problem. Even when they are, they may be little better off than users. Operators may have no more insight than unprivileged users into problems inside other administrative domains because of the distributed management of the Internet, and most Internet paths cross multiple domains. Of course, users must also be able to do something about the problems they observe. Often, detailed knowledge is enough. By mapping the faulty component to the ISP that owns it, the user can directly contact the responsible ISP, leading to faster problem resolution; operators are frequently not even aware of the problem. Further, we believe ISPs will better provision and manage their networks if their users can readily identify faults. For example, users can demand that their ISP provide additional capacity if upstream links are frequently overloaded. In the absence of fault localization, ISPs tend to blame poor performance on factors beyond their control. Finally, recent research has examined various ways to route around network problems, either through manipulating BGP policy choices or via overlay-level source routing. These techniques are more effective and scalable with fault localization than blindly trying all possibilities. Unfortunately, existing diagnosis tools have significant limitations because they are based on round-trip measurements to routers.
For instance, pathchar measures the queuing at each hop along the path by analyzing round-trip times to successive routers. But this approach has the fundamental disadvantage that it confuses the properties of the forward and reverse paths. The asymmetry of most Internet paths, with different paths to and from routers, makes it even harder to draw strong conclusions about per-hop behavior.
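Returning to the probe gap model surveyed above: under its assumptions (a single bottleneck of known capacity C, and a queue that never empties between the two probes), the arithmetic is direct. A small illustrative sketch in Python, with hypothetical numbers rather than values from any of the surveyed tools:

```python
def pgm_available_bandwidth(gap_in, gap_out, capacity):
    """Probe Gap Model (PGM) estimate of available bandwidth.

    gap_in:   probe-pair spacing at the sender (seconds)
    gap_out:  probe-pair spacing at the receiver (seconds)
    capacity: bottleneck link capacity (bits per second)"""
    # cross-traffic rate = ((gap_out - gap_in) / gap_in) * C
    cross_rate = (gap_out - gap_in) / gap_in * capacity
    # A = C * (1 - (gap_out - gap_in) / gap_in)
    return capacity - cross_rate

# 100 Mbit/s bottleneck; the pair's gap grows from 1.0 ms to 1.5 ms,
# so half the capacity is consumed by cross traffic.
print(pgm_available_bandwidth(0.001, 0.0015, 100e6))  # ~50 Mbit/s
```

Note that if the gap does not grow (gap_out ≈ gap_in), the estimate approaches the full capacity C, matching the model's intuition that no cross traffic was queued between the pair.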

Empirical Evaluation of Techniques for Measuring Available Bandwidth

Increasing the measurement timescale improves the accuracy of all available bandwidth estimation tools. This is to be expected: larger MTs imply that a larger number of probe packets interact with the cross-traffic and are able to better sample the AB process. However, the gain in accuracy is most significant at fine timescales. The gains are negligible beyond an MT of 50 ms. The impact of MT on PathChirp is lower than on the other tools. This is due to the exponential inter-packet spacing in the probe streams: the number of probes sent does not increase proportionally with MT. More importantly, by keeping the MT the same across different tools, the relative performance difference between the tools changes! Most significantly, Spruce is now the most accurate, while it was the least accurate with the default settings of MT. SI has a negligible impact on the AB estimation accuracy of the open-loop tools, Spruce and PathChirp. This result may seem contrary to the observations made in [20] that high values of SI lead to better sampling accuracy; it is important to note, however, that the AB estimation accuracy is also limited by the accuracy of the inference logic used by the respective tools. Our observations indicate that increasing the rate of probing the AB process is not likely to help improve the accuracy of current tools. The ability to measure end-to-end Available Bandwidth (AB) on a network path is useful in several domains, including overlay-routing infrastructure, network monitoring, and design of transport protocols. Several tools have, consequently, been proposed to estimate end-to-end AB. Unfortunately, existing evaluations of these tools are either not comprehensive or are biased by the current state of implementation technology. In this paper, we conduct a comprehensive empirical evaluation of algorithmic techniques for measuring AB.

CHAPTER 3 SYSTEM ANALYSIS

EXISTING SYSTEM

In existing systems, identifying the existence of a dominant congested link requires distinguishing its delay and loss characteristics from those of the other links. Achieving this goal via direct measurements is only possible for the organization in charge of that network. However, commercial factors often prevent an organization from disclosing the performance of internal links. Furthermore, as the Internet grows in both size and diversity, one organization may only be responsible for a subset of links on an end-to-end path. Some measurement techniques obtain internal properties of a path by using ICMP messages to query internal routers. Traceroute and ping are two widely used tools in this category. Some more advanced techniques use ICMP messages to measure per-hop capacity or delay and pinpoint faulty links. These approaches, however, require cooperation of the routers (to respond to ICMP messages and treat them similarly to data packets). Contrary to direct measurements using responses from routers, a collection of network tomography techniques infers internal loss rate and delay characteristics using end-to-end measurements. Most tomography techniques, however, require observations from multiple vantage points.

The disadvantages of this system are: high data loss, longer delay times, low available bandwidth, techniques that increase the sending rate, the need for many vantage points for a single path between two end hosts, low capability, reliance on the loss pair approach, and tight and narrow links.

PROPOSED SYSTEM

The system proposes a novel model-based approach to identify whether a dominant congested link exists along an end-to-end path using end-to-end measurements. We periodically send probes from one host to another so as to obtain a sequence of delay and loss values. The key insight in our approach is to utilize the queuing delay properties of the lost probes. We interpret a loss as an unobserved delay and discretize the delay values. Afterwards, we model the discretized delay sequence of all probes, including those with missing values, to infer whether a dominant congested link exists. Based on this model, we provide a statistical upper bound on the maximum queuing delay of a dominant congested link once we identify that such a link exists. The hypothesis tests utilize the queuing delays of the virtual probes with loss marks. This approach infers the properties of the lost packets by utilizing delay and loss observations jointly and the correlation in the entire observation sequence, instead of using direct measurements from loss pairs. The main advantages of the proposed system are as follows:

A simple hypothesis test, probe interpretation, full use of the information in the probing packets, fast identification, higher accuracy, joint use of delay and loss observations for inference, use of the correlation of the entire observation sequence, high security, and high bandwidth and capacity.
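The preprocessing the proposed system describes — discretizing the delay values and treating each loss as an unobserved ("virtual") delay — can be sketched as follows (hypothetical Python; the actual hidden Markov model parameter inference is not shown):

```python
def discretize(delays, bin_width):
    """Map each probe's delay to a bin index.

    A None entry (a lost probe) becomes the symbol 'LOST': an
    unobserved delay whose bin the model must later infer."""
    return ['LOST' if d is None else int(d // bin_width) for d in delays]

# Delays in seconds, 10 ms bins; None marks a lost probe.
obs = discretize([0.004, 0.012, None, 0.035], 0.010)
print(obs)  # [0, 1, 'LOST', 3]
```

The resulting symbol sequence, missing values included, is what the hidden Markov model in the report's approach is fitted to.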

Further, the throughput of the whole network communication will be calculated; this may help to determine the path capability and the data originality, data efficiency, path protection and more. The major objectives of this project can be defined as follows:

Identifying the bottleneck links, reducing the delay time, interpreting probes, and ensuring data security.
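The planned throughput calculation for the whole network communication can be sketched as follows (a hypothetical Python illustration, not the project's C#.NET code):

```python
def throughput_bps(bytes_delivered, start_time, end_time):
    """Average throughput in bits per second over a transfer."""
    elapsed = end_time - start_time
    if elapsed <= 0:
        raise ValueError("end_time must be after start_time")
    return bytes_delivered * 8 / elapsed

# 1 MB delivered in 2 seconds -> 4 Mbit/s
print(throughput_bps(1_000_000, 10.0, 12.0))  # 4000000.0
```

Comparing this measured throughput against the path's estimated available bandwidth is one way the path's capability could be assessed.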

CHAPTER 4 SYSTEM REQUIREMENTS


HARDWARE REQUIREMENTS:

Hard disk        : 40 GB
RAM              : 512 MB
Processor        : Pentium IV
Monitor          : 17" color monitor
Keyboard, Mouse  : Multimedia

SOFTWARE REQUIREMENTS:

Front End        : Visual Studio .NET 2005
Code Behind      : C#.NET
Back End         : SQL Server 2000
Operating System : Windows XP

LANGUAGE SPECIFICATION: FEATURES OF VISUAL STUDIO .NET

Visual Studio .NET is the single IDE that all the .NET languages can use. It makes everything available to all languages.

Visual Studio .NET is a great multilanguage development environment and offers a complete set of tools to build Windows Forms, ASP.NET Web applications, and XML Web services.

Start Page
The Start Page offers three tabs at the top of the window that enable you to modify Visual Studio .NET as well as find important information.
Projects tab: This tab is the one to start new projects and launch projects that already exist. It lets you create a new project or open an existing one.
Online Resources tab: This tab provides a number of online resources when connected to the Internet.
My Profile tab: This tab enables you to customize the Visual Studio .NET environment to resemble the structured environment that you are familiar with.

HTML Server Controls versus Web Server Controls
HTML Server Controls — when to use this control type: when converting traditional ASP 3.0 Web pages to ASP.NET Web pages and speed of completion is a concern (it is a lot easier to change your HTML elements to HTML server controls than to Web server controls); when you prefer a more HTML-type programming model; and when you wish to explicitly control the code that is generated for the browser.
Web Server Controls — when to use this control type: when you require a rich set of functionality to perform complicated page requirements; when you are developing Web pages that will be viewed by a multitude of browser types and that require different code based on those types; and when you prefer a more Visual Basic-type programming model based on the use of controls and control properties.

Server Explorer
This window enables you to perform a number of functions such as database connectivity, performance monitoring, and interacting with event logs. By using Server Explorer you can log on to a remote server and view database and system data about that server. Many of the functions that are performed with the Enterprise Manager in SQL Server can now be executed in the Server Explorer.

Solution Explorer
Solution Explorer provides an organized view of the projects in the application. The toolbar within the Solution Explorer enables you to:

View code page of the selected item. View design page of the selected item. Refresh the state of the selected item.

Copy the Web project between Web servers. Show all the files in the project, including the hidden files. See Properties of the selected item.

Class View
The Class View window can be viewed from the Start Page by clicking the Class View tab. The Class View shows all the classes that are contained within your solution. It shows the hierarchical relationship among the classes in your solution as well as a number of other items, including methods, enumerations, namespaces, unions, and events. It is possible to organize the view of these items within the window by right-clicking anywhere in the Class View area and choosing how the items are sorted.

Toolbox
The Toolbox window enables you to specify elements that will be part of the Windows Forms or Web Forms. It provides a drag-and-drop means of adding elements and controls to the pages or forms. Code snippets can also be stored within the Toolbox.

Properties window
This window provides the properties of an item that is part of the application. It enables you to control the style and behavior of the item selected for modification.

Dynamic Help
This window shows a list of help topics. The help topics change based on the item selected or the action being taken. For example, the Dynamic Help window shows the relevant help items when a Button control on the page is selected. After the item is selected, a list of targeted help topics is displayed. The topics are organized

as a list of links. Clicking one of the links in the Dynamic Help window opens the selected help topic in the Document window.

Document window
The Document window is the main window within Visual Studio .NET where the applications are built. The Document window shows open files in either Design or HTML mode. Each open file is represented by a tab at the top of the Document window. Any number of files can be kept open at the same time, and you can switch between the open files by clicking the appropriate tab.

Design mode versus HTML mode
Visual Studio .NET offers two modes for viewing and building files: Design and HTML. By clicking the Design tab at the bottom of the Document window, you can see how the page will appear to the user. The page is built in Design mode by dragging and dropping elements directly onto the design page or form; Visual Studio .NET automatically generates the appropriate code. When the page is viewed in HTML mode, it shows the code for the page. This enables you to directly modify the code to change the way in which the page is presented.

Working with SQL Server through the Server Explorer
Using Visual Studio .NET, there is no need to open the Enterprise Manager from SQL Server. Visual Studio .NET has the SQL Servers tab within the Server Explorer that gives a list of all the connected servers that have SQL Server on them. Opening up a particular server tab gives five options:

Database Diagrams
Tables
Views
Stored Procedures
Functions

Database Diagrams
To create a new diagram, right-click Database Diagrams and select New Diagram. The Add Tables dialog enables you to select any or all of the tables that you want in the visual diagram you are going to create. Visual Studio .NET looks at all the relationships between the tables and then creates a diagram that opens in the Document window. Each table is represented in the diagram along with a list of all the columns that are available in that particular table. Each relationship between tables is represented by a connection line between those tables. The properties of a relationship can be viewed by right-clicking the relationship line.

Tables
The Server Explorer allows you to work directly with the tables in SQL Server. It gives a list of tables contained in the particular database selected. By double-clicking one of the tables, the table is shown in the Document window. This grid of data shows all the columns and rows of data contained in the particular table. Data can be added to or deleted from the table grid directly in the Document window. To add a new row of data, move to the bottom of the table and type in a new row of data after selecting the first column of the first blank row. You can also delete a row of data from the table by right-clicking the gray box at the left end of the row and selecting Delete. By right-clicking the gray box at the far left end of the row, the primary key can be set for that particular column. The relationships to columns in other tables can be set by selecting the Relationships option. To create a new table, right-click the Tables section within the Server Explorer and select New Table. This gives the design view that enables you to start specifying the columns and column details of the table. To run queries against the tables in Visual Studio .NET, open the query toolbar by choosing View -> Toolbars -> Query. To query a specific table, open that table in the Document window. Then click the SQL button, which divides the Document window into two panes: one for the query and the other to show the results gathered from the query. The query is executed by clicking the Execute Query button, and the result is produced in the lower pane of the Document window.

Views
To create a new view, right-click the Views node and select New View. The Add Table dialog box enables you to select the tables from which the view is produced. The next pane enables you to customize the appearance of the data in the view.

CHAPTER 5 SYSTEM DESIGN

SYSTEM DIAGRAM

DATA FLOW DIAGRAM

INPUT DESIGN
Input design is the process of converting user-originated inputs to a computer-based format in the application forms. Input design is one of the most expensive phases of the operation of a computerized system and is often a major problem of a system.

OUTPUT DESIGN
Output design generally refers to the results and information that are generated by the system. For many end-users, output is the main reason for developing the system and the basis on which they evaluate the usefulness of the application. The output is designed in such a way that it is attractive, convenient and informative. Forms are designed in C#.NET with various features, which make the console output more pleasing. As the outputs are the most important source of information for the users, better design should improve the system's relationship with them and also help in decision-making. Form design elaborates the way output is presented and the layout available for capturing information.

DATABASE DESIGN
Database design is a must for any application developed, especially for data store projects. Since the chatting method involves storing the message in the table and producing it to the sender and receiver, proper handling of the table is a must. In the project, the login table is designed to be unique in accepting the username, and the length of the username and password should be greater than zero. Both the company and seeker usernames are stored in the same table with different flag values. The job and question table is common to all companies. Likewise, job apply details are stored in the common apply table. The different users view the data in different formats according to the privileges given. The complete listing of the tables and their fields is provided in the annexure under the title TABLE STRUCTURE.

CHAPTER 6 TESTING
SYSTEM TESTING
Testing is done for each module. After testing all the modules, the modules are integrated and testing of the final system is done with test data specially designed to show that the system will operate successfully under all conditions. Procedure-level testing is made first. By giving improper inputs, the errors that occur are noted and eliminated. Thus system testing is a confirmation that all is correct and an opportunity to show the user that the system works. The final step involves validation testing, which determines whether the software functions as the user expected. The end-user, rather than the system developer, conducts this test; most software developers use a process called Alpha and Beta testing to uncover errors that only the end user seems able to find.

This is the final step in system life cycle. Here we implement the tested error-free system into real-life environment and make necessary changes, which runs in an online fashion. Here system maintenance is done every months or year based on company policies, and is checked for errors like runtime errors, long run errors and other maintenances like table verification and reports.

UNIT TESTING: Unit testing focuses verification effort on the smallest unit of software design, the module; this is known as module testing. The modules are tested separately. This testing is carried out during the programming stage itself. In this testing step, each module is found to be working satisfactorily with regard to the expected output from the module.

INTEGRATION TESTING: Integration testing is a systematic technique for constructing tests to uncover errors associated with the interfaces. In the project, all the modules are combined and then the entire program is tested as a whole. In the integration testing step, all the errors uncovered are corrected before the next testing steps.

VALIDATION TESTING: Validation testing uncovers functional errors, that is, it checks whether the functional characteristics conform to the specification.

CHAPTER 7 SYSTEM IMPLEMENTATION


MODULES
1. Sender/Receiver
2. Admin
3. Probe packets
4. Network analysis
5. Path estimation
6. Reports

1. SENDER/RECEIVER
This module initiates the data transfer between the nodes in network communication. A sender node is the data source node, which transmits the data to a valid destination/receiver node. Before the data transfer is initiated, the sender sends probe data to avoid data failure. Once it sends the data, it receives an acknowledgement for the data from the corresponding destination node.

The receiver is the destination node that the sender's data has to reach. Receiver nodes are generally aware of their neighbor nodes in order to get the data from the sender. After it has received the data, the receiver node sends an acknowledgement to the sender to indicate that the data was received.

2. ADMIN
This module is the overall controller of the network; it authenticates all users and maintains the path link availability. Some of the admin's tasks are:

o User authentication
o User details
o Network link details
o Path verification
o Network analysis report, etc.

3. PROBE PACKETS
This module concerns the probe packets that are used for finding data loss and queuing delay. Probe packets are lightweight test packets used purely for measuring loss and delay. They travel through all possible links, calculating the link quality to forward to the admin. These packets are generally initiated from the sender side; in addition, when a network is constructed, probe testing is performed to check the link estimation.
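The delay measurement a probe performs can be reduced to a small sketch: take a timestamp when the probe is sent, another when its acknowledgement returns, and use the absolute difference as the delay sample, as in the sender probe code of Chapter 8. The ProbeTiming class and its names below are illustrative only, not part of the project code.

```csharp
using System;

static class ProbeTiming
{
    // Probe delay in milliseconds given the send and acknowledgement
    // timestamps; the absolute value mirrors the sign check used in
    // the sender probe code.
    public static double ProbeDelayMs(TimeSpan sent, TimeSpan acked)
    {
        double d = (acked - sent).TotalMilliseconds;
        return d < 0 ? -d : d;
    }

    static void Main()
    {
        var sent = TimeSpan.Parse("00:00:01.250");
        var acked = TimeSpan.Parse("00:00:01.410");
        Console.WriteLine(ProbeDelayMs(sent, acked)); // 160
    }
}
```

Working on full TimeSpan values avoids the fragile substring handling of the fractional part seen in the sample code.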

4. NETWORK ANALYSIS
Once the probe packets are initiated, the overall network link delay and data loss are calculated. This report helps to analyze the network: which link has the minimum traffic delay, which link reduces data loss, and which one is best for transferring the current data. All of this helps in the case of both the MBI and the hypothesis tests.
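As a sketch of this analysis step (hypothetical names, not the project code), per-link probe samples can be summarised into an average delay over delivered probes and a loss rate:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// One probe observation: the link probed, the measured delay in
// milliseconds, and whether the probe was lost.
record ProbeSample(string Link, double DelayMs, bool Lost);

static class NetworkAnalysis
{
    // Per-link summary: average delay over delivered probes and the
    // fraction of probes lost on that link.
    public static Dictionary<string, (double AvgDelay, double LossRate)>
        Summarise(IEnumerable<ProbeSample> samples) =>
        samples.GroupBy(s => s.Link).ToDictionary(
            g => g.Key,
            g => (g.Where(s => !s.Lost).Select(s => s.DelayMs)
                   .DefaultIfEmpty(0).Average(),
                  (double)g.Count(s => s.Lost) / g.Count()));

    static void Main()
    {
        var stats = Summarise(new[]
        {
            new ProbeSample("A-B", 40, false),
            new ProbeSample("A-B", 60, false),
            new ProbeSample("A-C", 0, true),
            new ProbeSample("A-C", 30, false),
        });
        Console.WriteLine(stats["A-B"].AvgDelay); // 50
        Console.WriteLine(stats["A-C"].LossRate); // 0.5
    }
}
```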

5. PATH ESTIMATION
The parameters from the MBI and the hypothesis tests are queuing delay and data loss, respectively. These are taken as input for the path estimation; the inference algorithm uses both parameters and gives the valid, best-quality link for the data transfer. It does not identify the best link alone; it also identifies the most congested and least congested links.

6. REPORT
This module is the final module, which reports the overall data transfers, node problems, link details and data sharing to the admin. If the sender and receiver nodes use probe checking, those details are reported to the admin as well. Some of the report details are:

o Link delay and loss
o Sender data transfers
o Probe packet calculations, etc.
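The link selection described in the path estimation module can be sketched as follows (a hypothetical helper, not the actual inference algorithm): prefer the link with the lowest loss rate, breaking ties by queuing delay.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class PathEstimation
{
    // Picks the best link from per-link (average delay, loss rate)
    // statistics: lowest loss rate first, ties broken by lower delay.
    public static string BestLink(
        IDictionary<string, (double AvgDelay, double LossRate)> stats) =>
        stats.OrderBy(kv => kv.Value.LossRate)
             .ThenBy(kv => kv.Value.AvgDelay)
             .First().Key;

    static void Main()
    {
        var stats = new Dictionary<string, (double, double)>
        {
            ["A-B"] = (50, 0.0),
            ["A-C"] = (30, 0.5),
            ["A-D"] = (20, 0.0),
        };
        Console.WriteLine(BestLink(stats)); // A-D
    }
}
```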

CHAPTER 8 SAMPLE CODING

Modules are units of code; here they are written in C#. We can write and use modules to automate and customize the application in very sophisticated ways.

CODING

Sender probe

public partial class Probe_Send : Form
{
    string fname, dest;

    public Probe_Send()
    {
        InitializeComponent();
    }

    public Probe_Send(string s1, string s2)
    {
        fname = s1;
        dest = s2;
        InitializeComponent();
    }

    private void Probe_Send_Load(object sender, EventArgs e)
    {
        try
        {
            NetworkBrowser nb = new NetworkBrowser();
            foreach (string pc in nb.getNetworkComputers())
            {
                listBox1.Items.Add(pc);
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show("An error occurred trying to access the network computers",
                "error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            Application.Exit();
        }

        int n = listBox1.Items.Count;
        for (int i = 0; i < n; i++)
        {
            byte[] data = new byte[1024];
            string input, strdata;
            string str1, str2;
            TcpClient Server1 = new TcpClient();
            int recv;

            // Timestamp taken just before the probe is sent.
            str1 = DateTime.Now.TimeOfDay.ToString();
            try
            {
                Server1 = new TcpClient(listBox1.Items[i].ToString(), 1111);
                NetworkStream ns = Server1.GetStream();
                int a = fname.LastIndexOf("\\");
                fname = fname.Substring(a + 1);
                input = dest + "&" + fname;
                if (input != "exit")
                {
                    ns.Write(Encoding.ASCII.GetBytes(input), 0, input.Length);
                    ns.Flush();
                    recv = ns.Read(data, 0, data.Length);
                    strdata = Encoding.ASCII.GetString(data, 0, recv);

                    // Timestamp taken when the acknowledgement arrives.
                    str2 = DateTime.Now.TimeOfDay.ToString();

                    // Keep only the fractional part of each timestamp and
                    // take the absolute difference as the probe delay.
                    int getindex1 = str1.LastIndexOf('.');
                    str1 = str1.Substring(getindex1 + 1);
                    int getindex2 = str2.LastIndexOf('.');
                    str2 = str2.Substring(getindex2 + 1);
                    double d1 = Convert.ToDouble(str1);
                    double d2 = Convert.ToDouble(str2);
                    double d3 = d2 - d1;
                    if (d3 < 0)
                    {
                        d3 = d3 * (-1);
                    }
                    textBox1.Text = d3.ToString();

                    ns.Close();
                    Server1.Close();

                    // Record the successful probe and its delay.
                    SqlConnection con = new SqlConnection("server=.;integrated security=true;database=mbi");
                    SqlCommand cmd = new SqlCommand("insert into conprobe values('" + Dns.GetHostName().ToString() + "','" + listBox1.Items[i].ToString() + "','" + DateTime.Now.ToShortDateString() + "','" + d3.ToString() + "')", con);
                    con.Open();
                    cmd.ExecuteNonQuery();
                    con.Close();
                }
                else
                {
                    MessageBox.Show("Disconnecting from Server...");
                }
            }
            catch (SocketException)
            {
                // The probe could not be delivered; record the failure.
                SqlConnection con = new SqlConnection("server=.;integrated security=true;database=mbi");
                SqlCommand cmd = new SqlCommand("insert into disconprobe values('" + Dns.GetHostName().ToString() + "','" + listBox1.Items[i].ToString() + "','" + DateTime.Now.ToShortDateString() + "','Time Elapsed')", con);
                con.Open();
                cmd.ExecuteNonQuery();
                con.Close();
            }
        }
    }
}

Receiver probe

public partial class Probe_Receive : Form
{
    string str;
    string packsize;

    public Probe_Receive()
    {
        InitializeComponent();
    }

    private void Probe_Receive_Load(object sender, EventArgs e)
    {
    }

    private void button1_Click(object sender, EventArgs e)
    {
        MessageBox.Show("Please wait a while to Receive Data");

        // First connection: receive the announced packet size and the
        // destination name, then acknowledge.
        try
        {
            int recv;
            byte[] data = new byte[1024];
            TcpListener newsock = new TcpListener(9999);
            newsock.Start();
            TcpClient client = newsock.AcceptTcpClient();
            NetworkStream ns = client.GetStream();
            recv = ns.Read(data, 0, data.Length);
            str = Encoding.ASCII.GetString(data, 0, recv);
            int aa1 = str.LastIndexOf("$");
            textBox1.Text = str.Substring(aa1 + 1);
            packsize = str.Substring(0, aa1);

            string welcome = "Ack";
            data = Encoding.ASCII.GetBytes(welcome);
            ns.Write(data, 0, data.Length);
            ns.Close();
            client.Close();
            newsock.Stop();
        }
        catch (Exception ex)
        {
        }

        // Second connection: receive the file name, compare the received
        // file's size with the announced size, and report the result.
        try
        {
            int recv;
            byte[] data = new byte[1024];
            TcpListener newsock = new TcpListener(5555);
            newsock.Start();
            TcpClient client = newsock.AcceptTcpClient();
            NetworkStream ns = client.GetStream();
            recv = ns.Read(data, 0, data.Length);
            str = Encoding.ASCII.GetString(data, 0, recv);
            textBox2.Text = str.ToString();

            FileInfo fl = new FileInfo("\\\\" + Dns.GetHostName().ToString() + "\\Recv\\" + textBox2.Text);
            long l = fl.Length;
            string packsize2 = l.ToString();
            string welcome;
            if (packsize == packsize2)
            {
                welcome = "Successful Delivery of Data.";
            }
            else
            {
                welcome = "The Received Data Got Affected by some Anonymous Factor.";
            }
            data = Encoding.ASCII.GetBytes(welcome);
            ns.Write(data, 0, data.Length);
            ns.Close();
            client.Close();
            newsock.Stop();
        }
        catch (Exception ex)
        {
        }
    }
}

USER REGISTRATION

SqlConnection con = new SqlConnection("server=.;integrated security=true;database=mbi");
SqlCommand cmd;

private void button1_Click(object sender, EventArgs e)
{
    if (textBox1.Text == "" || textBox2.Text == "" || textBox3.Text == "" || textBox4.Text == "")
    {
        MessageBox.Show("Empty columns will not be allowed for Registration");
    }
    else if (textBox2.Text == textBox3.Text)
    {
        cmd = new SqlCommand("insert into tbl_user values('" + textBox1.Text + "','" + textBox2.Text + "','" + textBox3.Text + "','" + textBox4.Text + "')", con);
        con.Open();
        cmd.ExecuteNonQuery();
        con.Close();

        // Fetch the generated user id and show it to the user.
        con.Open();
        cmd = new SqlCommand("select max(uid) from tbl_user", con);
        string s = Convert.ToString(cmd.ExecuteScalar());
        textBox5.Text = s;
        con.Close();

        MessageBox.Show("Registered Information Updated successfully");
        MessageBox.Show("User ID: " + textBox5.Text + " generated. Make a Note of This ID for Security purposes");
        this.Close();
        Log_User lus = new Log_User();
        lus.Show();
    }
    else
    {
        MessageBox.Show("Password Mismatch");
    }
}

Authentication module

private void button1_Click(object sender, EventArgs e)
{
    if (textBox1.Text == "" || textBox2.Text == "")
    {
        MessageBox.Show("User name and password should not be empty");
    }
    else
    {
        con.Open();
        cmd = new SqlCommand("select * from tbl_user where uname='" + textBox1.Text + "' and pass='" + textBox2.Text + "'", con);
        SqlDataReader dr = cmd.ExecuteReader();
        if (dr.Read())
        {
            Main_User mus = new Main_User(textBox1.Text);
            mus.Show();
            this.Close();
        }
        else
        {
            MessageBox.Show("Username or password Incorrect. Try Again later");
        }
        con.Close();
    }
}

Inference

SqlConnection con = new SqlConnection("server=.;integrated security=true;database=mbi");
SqlCommand cmd;

private void rpt_Inference_Load(object sender, EventArgs e)
{
    // Queuing delay samples.
    cmd = new SqlCommand("select * from queueingdelay", con);
    con.Open();
    SqlDataAdapter da = new SqlDataAdapter(cmd);
    DataSet ds = new DataSet();
    da.Fill(ds, "queue");
    dataGridView1.DataSource = ds.Tables[0].DefaultView;
    con.Close();

    // Data transfer records.
    cmd = new SqlCommand("select * from senddata", con);
    con.Open();
    da = new SqlDataAdapter(cmd);
    ds = new DataSet();
    da.Fill(ds, "senddata");
    dataGridView2.DataSource = ds.Tables[0].DefaultView;
    con.Close();
}

Hypothesis

SqlConnection con = new SqlConnection("server=.;integrated security=true;database=mbi");
SqlCommand cmd;

private void rpt_Hypothesis_Load(object sender, EventArgs e)
{
    // Number of distinct links with successful probes.
    cmd = new SqlCommand("select count(distinct(probelink)) from conprobe", con);
    con.Open();
    string s = Convert.ToString(cmd.ExecuteScalar());
    con.Close();

    // Number of distinct links with failed probes.
    cmd = new SqlCommand("select count(distinct(probelink)) from disconprobe", con);
    con.Open();
    string ss = Convert.ToString(cmd.ExecuteScalar());
    con.Close();

    cmd = new SqlCommand("select distinct(probelink) from conprobe", con);
    con.Open();
    SqlDataAdapter da = new SqlDataAdapter(cmd);
    DataSet ds = new DataSet();
    da.Fill(ds, "conp");
    dataGridView1.DataSource = ds.Tables[0].DefaultView;
    con.Close();

    cmd = new SqlCommand("select distinct(probelink) from disconprobe", con);
    con.Open();
    da = new SqlDataAdapter(cmd);
    ds = new DataSet();
    da.Fill(ds, "dconp");
    dataGridView2.DataSource = ds.Tables[0].DefaultView;
    con.Close();
}

CHAPTER 9 SCREEN SHOTS

We can register by entering the username, the password and the confirm password; the system then displays that the registered information was updated successfully.

This screenshot shows that the user identification number was successfully generated. The user should make a note of this ID for security purposes.

This login window is used to provide security to the system: only an authenticated user can enter the system. Whenever users want to enter, they must supply their username and password details. The system verifies all the details and allows entry into the system only if the details are valid; otherwise it will not allow entry into the system.

When the probe link is started, the sender probe is ready to send the message to the receiver side; it then shows the queuing delay time taken to deliver the data. The admin form controls and maintains the process. Finally, the hypothesis report shows the total number of probes and the number of failed probes.

CHAPTER 10 CONCLUSION AND FUTURE ENHANCEMENTS


CONCLUSION
In this project, we provided a formal yet intuitive definition of the dominant congested link and proposed two simple hypothesis tests for identifying whether a dominant congested link exists along a path. We then developed a novel model-based approach for dominant congested link identification from one-way end-to-end measurements. This project thus provides fast anomaly detection of the best links in the network, and the efficiency of the network communication will last longer than in other networks. Some of the conclusions from this project are as follows:

o Fast identification
o More accuracy
o High security
o Bandwidth and capacity

FUTURE ENHANCEMENT
This process can be implemented for multiple routing methods such as multicasting. With several destination processes, the route efficiency can be calculated easily by the MBI method, so it reduces time consumption and provides security in multicasting too. This method can additionally incorporate other criteria such as data capability and robustness. Further, we can add workgroup communication with the MBI functionalities to transfer data more securely. There are many possibilities for implementation in mobile ad hoc networks, which will help to send data more efficiently.

REFERENCES

N. Duffield, "Network tomography of binary network performance characteristics," IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5373-5388, Dec. 2006.

S. Floyd, R. Gummadi, and S. Shenker, "Adaptive RED: An algorithm for increasing the robustness of RED's active queue management," Tech. rep., Aug. 2001. [Online]. Available: http://www.icir.org/floyd/papers/adaptiveRed.pdf

K. Harfoush, A. Bestavros, and J. Byers, "Measuring bottleneck bandwidth of targeted path segments," in Proc. IEEE INFOCOM, Apr. 2001, vol. 3, pp. 2079-2089.

N. Hu, L. E. Li, Z. M. Mao, P. Steenkiste, and J. Wang, "Locating Internet bottlenecks: Algorithms, measurements, and implications," in Proc. ACM SIGCOMM, Aug. 2004, pp. 41-54.

N. Hu and P. Steenkiste, "Evaluation and characterization of available bandwidth probing techniques," IEEE J. Sel. Areas Commun., vol. 21, no. 6, pp. 879-894, Aug. 2003.

V. Jacobson, "Pathchar: A tool to infer characteristics of Internet paths," Apr. 1997. [Online]. Available: ftp://ftp.ee.lbl.gov/pathchar

M. Jain and C. Dovrolis, "End-to-end available bandwidth: Measurement methodology, dynamics, and relation with TCP throughput," in Proc. ACM SIGCOMM, Aug. 2002, pp. 295-308.

M. Jain and C. Dovrolis, "Pathload: A measurement tool for end-to-end available bandwidth," in Proc. PAM, Mar. 2002, pp. 14-25.

D. Katabi, I. Bazzi, and X. Yang, "A passive approach for detecting shared bottlenecks," in Proc. ICCCN, Oct. 2001, pp. 174-181.

J. Liu and M. Crovella, "Using loss pairs to discover network properties," in Proc. ACM SIGCOMM Internet Meas. Workshop, Nov. 2001, pp. 127-138.

J. Liu, I. Matta, and M. Crovella, "End-to-end inference of loss nature in a hybrid wired/wireless environment," in Proc. WiOpt, Mar. 2003.

B. A. Mah, "pchar: A tool for measuring Internet path characteristics," 2005. [Online]. Available: http://www.kitchenlab.org/www/bmah/Software/pchar/

R. Mahajan, N. Spring, D. Wetherall, and T. Anderson, "User-level Internet path diagnosis," in Proc. ACM SOSP, Oct. 2003, pp. 106-119.

B. Melander, M. Bjorkman, and P. Gunningberg, "A new end-to-end probing and analysis method for estimating bandwidth bottlenecks," in Proc. IEEE GLOBECOM, Nov. 2000, vol. 1, pp. 415-420.

H. X. Nguyen and P. Thiran, "The Boolean solution to the congested IP link location problem: Theory and practice," in Proc. IEEE INFOCOM, May 2007, pp. 2117-2125.

H. X. Nguyen and P. Thiran, "Network loss inference with second order statistics of end-to-end flows," in Proc. ACM SIGCOMM IMC, Oct. 2007, pp. 227-240.

V. N. Padmanabhan, L. Qiu, and H. J. Wang, "Server-based inference of Internet link lossiness," in Proc. IEEE INFOCOM, Mar.-Apr. 2003, vol. 1, pp. 145-155.

V. Paxson, "End-to-end Internet packet dynamics," IEEE/ACM Trans. Netw., vol. 7, no. 3, pp. 277-292, Jun. 1999.
