
Literature Survey

A literature survey is the most important step in the software development process.

Before developing the tool it is necessary to determine the time factor, the economy and the company strength. Once these things are satisfied, the next steps are to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool, they need a lot of external support. This support can be obtained from senior programmers, from books or from websites. Before building the system, the above considerations are taken into account for developing the proposed system.

Overview

In networking, a token is a special series of bits that travels around a token-ring network. As the token circulates, computers attached to the network can capture it. The token acts like a ticket, enabling its owner to send a message across the network. There is only one token for each network, so there is no possibility that two computers will attempt to transmit messages at the same time.

Congestion is a problem that occurs on shared networks when multiple users contend for access to the same resources (bandwidth, buffers, and queues). It is much like freeway congestion: many vehicles enter the freeway without regard for impending or existing congestion. As more vehicles enter the freeway, congestion gets worse. Eventually, the on-ramps may back up, preventing vehicles from getting on at all.

In packet-switched networks, packets move in and out of the buffers and queues of switching devices as they traverse the network. In fact, a packet-switched network is often referred to as a "network of queues." A characteristic of packet-switched networks is that packets may arrive in bursts from one or more sources. Buffers help routers absorb bursts until they can catch up. If traffic is excessive, buffers fill up and new incoming packets are dropped. Increasing the size of the buffers is not a solution, because excessive buffer size can lead to excessive delay.

Congestion typically occurs where multiple links feed into a single link, such as where internal LANs are connected to WAN links. Congestion also occurs at routers in core networks where nodes are subjected to more traffic than they are designed to handle. TCP/IP networks such as the Internet are especially susceptible to congestion because of their basic connectionless nature. There are no virtual circuits with guaranteed bandwidth. Packets are injected by any host at any time, and those packets are variable in size, which makes predicting traffic patterns and providing guaranteed service impossible. While connectionless networks have advantages, quality of service is not one of them.

Shared LANs such as Ethernet have their own congestion control mechanisms in the form of access controls that prevent multiple nodes from transmitting at the same time.
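To make the buffering behaviour described above concrete, the following Java sketch models a router port as a fixed-size FIFO queue with tail drop: bursts are absorbed while space remains, and new arrivals are discarded once the buffer is full. The class and its names (TailDropQueue, capacity) are purely illustrative and not taken from any particular router implementation.

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class TailDropQueue {
        private final int capacity;                    // maximum packets the buffer can hold
        private final Deque<byte[]> buffer = new ArrayDeque<>();

        public TailDropQueue(int capacity) {
            this.capacity = capacity;
        }

        // Called when a packet arrives; returns false if the packet is tail-dropped.
        public synchronized boolean enqueue(byte[] packet) {
            if (buffer.size() >= capacity) {
                return false;                          // buffer full: drop the new arrival
            }
            buffer.addLast(packet);
            return true;
        }

        // Called when the outgoing link is ready to transmit the next packet.
        public synchronized byte[] dequeue() {
            return buffer.pollFirst();                 // null when the queue is empty
        }
    }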
To control congestion, a set of rules is necessary: rules governing how data is transferred over networks, how it is compressed, how it is presented on the screen and so on. These sets of rules are called protocols. There are many protocols, each one governing the way a certain technology works. For example, the IP protocol defines a set of rules governing the way computers use IP packets to send data over the Internet or any other IP-based network. It also defines addressing in IP. Likewise, IP is often used together with the Transmission Control Protocol (TCP) and referred to interchangeably as TCP/IP. IP functions at layer 3 of the OSI model. It can therefore run on top of different data link interfaces, including Ethernet and Wi-Fi.

We also have other protocols, such as:

TCP: Transmission Control Protocol, used for the reliable transmission of data over a network.
HTTP: Hypertext Transfer Protocol, used for transmitting and displaying information in the form of web pages on browsers.
FTP: File Transfer Protocol, used for file transfer (uploading and downloading) over the Internet.

SMTP: Simple Mail Transfer Protocol, used for email.
Ethernet: Used for data transmission over a LAN.
Wi-Fi: One of the wireless protocols.

The TCP congestion avoidance algorithm is the primary basis for congestion control in the Internet. Problems occur when many concurrent TCP flows experience port queue buffer tail drops; then TCP's automatic congestion avoidance is not enough. All flows that experience a port queue buffer tail drop will begin a TCP retrain at the same moment, which is called TCP global synchronization.

Random early detection

One solution is to use random early detection (RED) on the network equipment's port queue buffer. On network equipment ports with more than one queue buffer, weighted random early detection (WRED) can be used if available. RED indirectly signals to sender and receiver by deleting some packets, e.g. when the average queue buffer length exceeds a lower threshold (e.g. 50% filled), and it deletes linearly more packets (or, according to the cited paper, cubically more) [10] up to a higher threshold (e.g. 100% filled). The average queue buffer length is computed over one second at a time.

Robust random early detection (RRED)

The Robust Random Early Detection (RRED) algorithm was proposed to improve TCP throughput against Denial-of-Service (DoS) attacks, particularly Low-rate Denial-of-Service (LDoS) attacks. Experiments have confirmed that the existing RED-like algorithms are notably vulnerable under LDoS attacks due to the oscillating TCP queue size caused by the attacks. The RRED algorithm can significantly improve the performance of TCP under Low-rate Denial-of-Service attacks.

Flow-based RED/WRED

Some network equipment is equipped with ports that can follow and measure each flow (flow-based RED/WRED) and is thereby able to signal a flow that takes too much bandwidth according to some QoS policy. A policy could, for example, divide the bandwidth among all flows by some criteria.

IP ECN

Another approach is to use IP Explicit Congestion Notification (ECN). ECN is only used when the two hosts signal that they want to use it. With this method, an ECN bit is used to signal that there is explicit congestion. This is better than the indirect packet-delete congestion notification performed by the RED/WRED algorithms, but it requires explicit support by both hosts to be effective. Some outdated or buggy network equipment drops packets with the ECN bit set, rather than ignoring the bit. More information on the status of ECN, including the version required for Cisco IOS, is available from Sally Floyd, one of the authors of ECN. When a router receives a packet marked as ECN-capable and anticipates congestion (using RED), it will set an ECN flag notifying the sender of congestion. The sender then ought to decrease its transmission bandwidth, e.g. by decreasing its TCP window size (sending rate) or by other means.

Cisco AQM: Dynamic buffer limiting (DBL)

Cisco has taken a step further in their Catalyst 4000 series with engines IV and V. Engines IV and V have the ability to classify all flows as either "aggressive" (bad) or "adaptive" (good). This ensures that no flow fills the port queues for a long time. DBL can utilize IP ECN instead of packet-delete signalling.
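The following Java sketch illustrates the RED dropping rule outlined in the "Random early detection" paragraph above, assuming the 50% lower and 100% upper thresholds mentioned there and a simple exponentially weighted moving average of the queue fill. The class name, the smoothing weight and the linear drop ramp are illustrative assumptions, not a definitive implementation.

    public class RedQueueSketch {
        private static final double LOWER_THRESHOLD = 0.5;  // start dropping at 50% average fill
        private static final double UPPER_THRESHOLD = 1.0;  // drop every packet at 100% average fill
        private static final double WEIGHT = 0.002;         // smoothing weight for the average

        private double averageFill = 0.0;                   // average queue fill, 0.0 .. 1.0
        private final java.util.Random random = new java.util.Random();

        // Update the average and decide whether the arriving packet should be dropped.
        public boolean shouldDrop(double instantaneousFill) {
            averageFill = (1 - WEIGHT) * averageFill + WEIGHT * instantaneousFill;
            if (averageFill < LOWER_THRESHOLD) {
                return false;                                // queue is short: accept the packet
            }
            if (averageFill >= UPPER_THRESHOLD) {
                return true;                                 // queue is (on average) full: drop
            }
            // Drop probability grows linearly between the two thresholds.
            double dropProbability =
                    (averageFill - LOWER_THRESHOLD) / (UPPER_THRESHOLD - LOWER_THRESHOLD);
            return random.nextDouble() < dropProbability;
        }
    }

With WRED, the same rule would simply be applied with different threshold pairs per queue or per traffic class.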
TCP Window Shaping

Congestion avoidance can also be achieved efficiently by reducing the amount of traffic flowing into a network. When an application requests a large file, graphic or web page, it usually advertises a "window" of between 32K and 64K. This results in the server sending a full window of data (assuming the file is larger than the window). When many applications simultaneously request downloads, this data creates a congestion point at an upstream provider by flooding the queue much faster than it can be emptied. By using a device to reduce the window advertisement, the remote servers will send less data, thus reducing the congestion and allowing traffic to flow more freely. This technique can reduce congestion in a network by a factor of 40.
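A hypothetical sketch of such a window-reduction device is given below: it rewrites the 16-bit advertised window field of a TCP header (bytes 14-15) so that servers send smaller bursts. The clamp value is arbitrary, and the necessary TCP checksum update and window-scale handling are only noted in comments; this is a sketch of the idea, not a production middlebox.

    public final class WindowShaper {
        // Offset of the 16-bit window field within a TCP header (bytes 14-15).
        private static final int WINDOW_OFFSET = 14;

        // Clamp the advertised window in place to at most maxWindowBytes (< 65536).
        public static void clampWindow(byte[] tcpHeader, int maxWindowBytes) {
            int advertised = ((tcpHeader[WINDOW_OFFSET] & 0xFF) << 8)
                           | (tcpHeader[WINDOW_OFFSET + 1] & 0xFF);
            if (advertised > maxWindowBytes) {
                tcpHeader[WINDOW_OFFSET]     = (byte) ((maxWindowBytes >> 8) & 0xFF);
                tcpHeader[WINDOW_OFFSET + 1] = (byte) (maxWindowBytes & 0xFF);
                // NOTE: a real device must also update the TCP checksum and respect
                // any window-scale option negotiated during the handshake.
            }
        }
    }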

Peer-to-peer is a communications model in which each party has the same capabilities and either party can initiate a communication session. Other models with which it might be contrasted include the client/server model and the master/slave model. In some cases, peer-to-peer communication is implemented by giving each communication node both server and client capabilities. In recent usage, peer-to-peer has come to describe applications in which users can use the Internet to exchange files with each other directly or through a mediating server.

IBM's Advanced Peer-to-Peer Networking (APPN) is an example of a product that supports the peer-to-peer communication model.

On the Internet, peer-to-peer (referred to as P2P) is a type of transient Internet network that allows a group of computer users with the same networking program to connect with each other and directly access files from one another's hard drives. Napster and Gnutella are examples of this kind of peer-to-peer software. Major producers of content, including record companies, have shown their concern about what they consider illegal sharing of copyrighted content by suing some P2P users.

Meanwhile, corporations are looking at the advantages of using P2P as a way for employees to share files without the expense involved in maintaining a centralized server, and as a way for businesses to exchange information with each other directly.

The user must first download and execute a peer-to-peer networking program. (Gnutellanet is currently one of the most popular of these decentralized P2P programs because it allows users to exchange all types of files.) After launching the program, the user enters the IP address of another computer belonging to the network. (Typically, the web page where the user got the download will list several IP addresses as places to begin.) Once the computer finds another network member on-line, it will connect to that user's connection (who has gotten their IP address from another user's connection, and so on). Users can choose how many member connections to seek at one time and determine which files they wish to share or password protect.

However, a key challenge in P2P multicast is robustness. Unlike routers in IP multicast or dedicated servers in an infrastructure-based content distribution network such as Akamai, peers or end hosts are inherently unreliable due to crashes, disconnections, or shifts in user focus (e.g., a user may hop between streaming sessions or launch other bandwidth-hungry applications).

Core-Stateless Fair Queueing (CSFQ)

In this section, we propose an architecture that approximates the service provided by an island of Fair Queueing routers, but has a much lower complexity in the core routers. The architecture has two key aspects. First, to avoid maintaining per-flow state at each router, we use a distributed approach in which only edge routers maintain per-flow state and carry it into the network as a label in each packet header. Second, we still employ a drop-on-input scheme, except that now we drop packets rather than bits. Because the rate estimation (described below) incorporates the packet size, the dropping probability is independent of the packet size and depends only, as above, on the rate ri(t) and the fair share rate α(t). We are left with two remaining challenges: estimating the rates ri(t) and the fair share rate α(t). We address these two issues in turn in the next two subsections, and then discuss the rewriting of the labels. We should note, however, that the main point of our paper is the overall architecture and that the detailed algorithm presented below represents only an initial prototype.
While it serves adequately as a proof of concept of our architecture, we fully expect that the details of this design will continue to evolve.

For a flow with a limited access token resource, there is a unique optimal solution for labelling the Token-Level of each sent packet to achieve the best throughput: setting it equal to the tkback value carried in the back-channel. With this labelling, a Bit-Torrent application with a limited access resource can achieve better throughput without hurting the performance of the network.
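As a small illustration of this labelling rule, the sketch below has the sender stamp each outgoing packet's Token-Level with the latest tkback value seen on the back-channel. The class and method names are assumptions made for the example.

    public class TokenLevelLabeler {
        private volatile int lastTkback = 0;   // latest tkback received on the back-channel

        // Called whenever a reverse packet carrying tkback arrives.
        public void onBackChannelUpdate(int tkback) {
            lastTkback = tkback;
        }

        // Called for every outgoing packet; returns the Token-Level to write into
        // the packet's extended header.
        public int labelOutgoingPacket() {
            return lastTkback;                 // the optimal label equals tkback
        }
    }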

Presently the Internet accommodates simultaneous audio, video, and data traffic. This requires the Internet to keep packet loss within bounds, which in turn depends very much on congestion control. A series of protocols have been introduced to supplement the insufficient TCP mechanism controlling network congestion. CSFQ was designed as an open-loop controller to provide a fair best-effort service by supervising per-flow bandwidth consumption, but it has become helpless now that P2P flows dominate the traffic of the Internet. Token-Based Congestion Control (TBCC) is based on a closed-loop congestion control principle; it restricts the token resources consumed by an end-user and provides a fair best-effort service with O(1) complexity. Like Self-Verifying CSFQ and Re-feedback, it experiences a heavy load when policing inter-domain traffic because of the lack of trust between domains. In this paper, Stable Token-Limited Congestion Control (STLCC) is introduced as a new protocol which appends inter-domain congestion control to TBCC and makes the congestion control system stable. STLCC is able to shape output and input traffic at the inter-domain link with O(1) complexity. STLCC produces a congestion index, pushes packet loss to the network edge and improves the network performance. Finally, a simple version of STLCC is introduced; this version is deployable in the Internet without any modification to the IP protocol and also preserves the packet datagram.

Modern IP network services provide for the simultaneous digital transmission of voice, video, and data. These services require congestion control protocols and algorithms which can ensure that the packet loss parameter is kept under control. Congestion control is therefore the cornerstone of packet-switching networks. It should prevent congestion collapse, provide fairness to competing flows and optimize transport performance indexes such as throughput, delay and loss. The literature abounds in papers on this subject; there are papers on high-level models of the flow of packets through the network, and on specific network architectures.

Despite this vast literature, congestion control in telecommunication networks struggles with two major problems that are not completely solved. The first one is the time-varying delay between the control point and the traffic sources. The second one is related to the possibility that the traffic sources do not follow the feedback signal. This latter may happen because some sources are silent, as they have nothing to transmit. Congestion control of the best-effort service in the Internet was originally designed for a cooperative environment. It is still mainly dependent on the TCP congestion control algorithm at terminals, supplemented with load shedding at congested links. This model is called the Terminal Dependent Congestion Control case.

Although routers equipped with Active Queue Management such as RED can improve transport performance, they are neither able to prevent congestion collapse nor to provide fairness to competing flows. In order to enhance fairness in high-speed networks, Core-Stateless Fair Queueing (CSFQ) sets up an open-loop control system at the network layer, which inserts a label with the flow arrival rate into the packet header at edge routers and drops packets at core routers based on the rate label if congestion happens. CSFQ is the first scheme to achieve approximately fair bandwidth allocation among flows with O(1) complexity at core routers.
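The core-router behaviour of CSFQ described above can be sketched as follows: each packet carries the label ri(t) set by the edge router, and the core router drops it with probability max(0, 1 - α/ri), where α is its current fair-share estimate. The iterative estimation of α itself is not shown, and the class name is illustrative.

    public class CsfqCoreDropSketch {
        private final java.util.Random random = new java.util.Random();

        // labelledRate: the flow arrival rate ri carried in the packet header.
        // fairShareRate: the current fair share estimate alpha at this core router.
        public boolean shouldDrop(double labelledRate, double fairShareRate) {
            if (labelledRate <= fairShareRate) {
                return false;                       // flow is within its fair share
            }
            double dropProbability = 1.0 - (fairShareRate / labelledRate);
            return random.nextDouble() < dropProbability;
        }
    }

Flows whose labelled rate stays below the fair share are never dropped by this rule, while faster flows are thinned until their forwarded rate approximates the fair share.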
According to a CacheLogic report, P2P traffic was 60% of all Internet traffic in 2004, of which Bit-Torrent was responsible for about 30%, although the report generated quite a lot of discussion around the real numbers. In networks with P2P traffic, CSFQ can provide fairness to competing flows, but unfortunately this is not what end-users and operators really want. Token-Based Congestion Control (TBCC) restricts the total token resource consumed by an end-user. So, no matter how many connections the end-user has set up, it cannot obtain extra bandwidth resources when TBCC is used.

Self-Verifying CSFQ tries to expand CSFQ across the domain border. It randomly selects a flow, re-estimates the flow's rate, and checks whether the re-estimated rate is consistent with the label on the flow's packets.

Consequently, Self-Verifying CSFQ puts a heavy load on the border router and makes weighted CSFQ null and void. The authors of Re-feedback present a congestion control architecture which aims to provide a fixed cost to end-users and bulk inter-domain congestion charging to network operators. Re-feedback not only demands very high complexity to identify the malignant end-user, but also finds it difficult to provide fixed congestion charging at the inter-domain interconnection with low complexity. There are three types of inter-domain interconnection policies: the Internet Exchange Points, the private peering and the transit. In the private peering policies, the Sender Keep All (SKA) peering arrangements are those in which traffic is exchanged between two domains without mutual charge. As Re-feedback is based on congestion charges to the peer domain, it is difficult for Re-feedback to support the requirements of SKA.

In this paper a new and better mechanism for congestion control, with application to packet loss in networks with P2P traffic, is proposed. In this new method the edge and the core routers write a measure of the quality of service guaranteed by the router, as a digital number in the Option Field of the packet's IP datagram. This number is called a token. The token is read by the routers on the path and interpreted, as its value gives a measure of the congestion, especially at the edge routers. Based on the token number, the edge router at the source's edge point shapes the traffic generated by the source, thus reducing the congestion on the path.

In Token-Limited Congestion Control (TLCC), the inter-domain router restricts the total output token rate to peer domains. When the output token rate exceeds the threshold, TLCC decreases the Token-Level of output packets, and then the output token rate decreases. Similarly to CSFQ and TBCC, TLCC also uses an iterative algorithm to estimate the congestion level of its output link, and requires a long period of time to reach a stable state. With a bad parameter configuration, TLCC may cause the traffic to fall into an oscillating process: the window size of TCP flows will always increase when acknowledgement packets are received, and the congestion level will increase at the congested link; at congestion times many flows will lose their packets; then the link will be idle and the congestion level will decrease. These two steps may repeat alternately, and the congestion control system will never reach stability.

To solve the oscillation problem, Stable Token-Limited Congestion Control (STLCC) is introduced. It integrates the algorithms of TLCC and XCP. In STLCC, the output rate of the sender is controlled according to the XCP algorithm, so there is almost no packet loss at the congested link. At the same time, the edge router allocates all the access token resource equally among the incoming flows. When congestion happens, the incoming token rate increases at the core router, and then the congestion level of the congested link also increases. Thus STLCC can measure the congestion level analytically, allocate network resources according to the access link, and keep the congestion control system stable.
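The TLCC behaviour at an inter-domain router, as described above, can be sketched roughly as follows: the router measures its output token rate once per interval and lowers the Token-Level of outgoing packets whenever that rate exceeds the configured threshold. The step size and all names are assumptions for illustration only.

    public class TlccOutputLimiterSketch {
        private final double tokenRateThreshold;   // maximum allowed output token rate
        private int currentTokenLevel;             // Token-Level currently written into packets

        public TlccOutputLimiterSketch(double tokenRateThreshold, int initialTokenLevel) {
            this.tokenRateThreshold = tokenRateThreshold;
            this.currentTokenLevel = initialTokenLevel;
        }

        // Called once per measurement interval with the token rate observed on the
        // outgoing inter-domain link; returns the Token-Level to use next interval.
        public int adjust(double measuredOutputTokenRate) {
            if (measuredOutputTokenRate > tokenRateThreshold && currentTokenLevel > 0) {
                currentTokenLevel--;               // too many tokens leaving: lower the level
            }
            return currentTokenLevel;
        }
    }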
This paper is organized as follows. The architecture of Token-Based Congestion Control (TBCC), which provides fair bandwidth allocation to end-users in the same domain, is introduced first, and two congestion control algorithms, CSFQ and TBCC, are evaluated. STLCC is then presented, and a simulation is designed to demonstrate its validity. The Unified Congestion Control Model, which is the abstract model of CSFQ, Re-feedback and STLCC, is presented next, followed by the simple version of STLCC, which can be deployed on the current Internet. Finally, conclusions are given.

To inter-connect two TBCC domains, the inter-domain router is added to the TBCC system as in Figure 8. To support the SKA arrangement, the inter-domain router should limit its output token rate to the rate of the other domains and police the incoming token rate from peer domains. To limit the output token rate, three elements, tkprev, tkdown and tkbackdown, are inserted into the extended header tkhead. At the source edge router, tkprev is set to the same value as tklevel and cannot be modified by routers. The sum of tkdown represents the decrements of Token-Level at all the inter-domain routers in the transmission path.

When the packet arrives at the destination, the sum of tkpath and tkdown is the Congestion-Index of the transmission path. In the reverse packet, tkbackdown is used to return the elements of tkdown in the forward packet header to the source edge router.

Although many randomized asynchronous protocols have been designed throughout the years, only recently has an implementation of a stack of randomized multicast and agreement protocols been reported: SINTRA. These protocols are built on top of a binary consensus protocol that follows a Rabin-style approach and in practice terminates in one or two communication steps. The protocols, however, depend heavily on public-key cryptography primitives like digital and threshold signatures. The implementation of the stack is in Java and uses several threads. RITAS uses a different approach, Ben-Or-style, and resorts only to fast cryptographic operations such as hash functions.

Randomization is only one of the techniques that can be used to circumvent the FLP impossibility result. Other techniques include failure detectors, partial synchrony and distributed wormholes. Some of these techniques have been employed in the past to build other intrusion-tolerant protocol suites.
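Returning to the inter-domain token mechanism discussed before the digression above, the sketch below groups the extended-header fields named in the text (tklevel, tkprev, tkpath, tkdown, tkbackdown) and computes the Congestion-Index as the sum of tkpath and tkdown. How tkdown accumulates when a router lowers the Token-Level is an assumption made for the example, as are the class and method names.

    public class TkHeadSketch {
        public int tklevel;      // Token-Level written by the source edge router
        public int tkprev;       // copy of the initial tklevel, never modified en route
        public int tkpath;       // congestion measure accumulated along the path
        public int tkdown;       // sum of Token-Level decrements at inter-domain routers
        public int tkbackdown;   // used in the reverse packet to return tkdown to the source

        // Assumed behaviour at an inter-domain router that lowers the Token-Level.
        public void decrementTokenLevel(int amount) {
            tklevel -= amount;
            tkdown += amount;
        }

        // At the destination: the Congestion-Index of the transmission path.
        public int congestionIndex() {
            return tkpath + tkdown;
        }
    }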

Hardware Requirements:

System       : Pentium IV 2.4 GHz
Hard Disk    : 40 GB
Floppy Drive : 1.44 MB
Monitor      : 15" VGA Colour
Mouse        : Logitech
RAM          : 256 MB

Software Requirements:

i.  Operating system : Windows XP Professional
ii. Front End        : JAVA, RMI, Swing (JFC)

EXISTING SYSTEM

In the existing system, the sender sends the packets without any intermediate station. Many data packets are lost and time is wasted, and retransmission of the lost data packets is difficult.

PROPOSED SYSTEM

Modern IP network services provide for the simultaneous digital transmission of voice, video, and data. These services require congestion control protocols and algorithms which can ensure that the packet loss parameter is kept under control. Congestion control is therefore the cornerstone of packet-switching networks. It should prevent congestion collapse, provide fairness to competing flows and optimize transport performance indexes such as throughput, delay and loss. The literature abounds in papers on this subject; there are papers on high-level models of the flow of packets through the network, and on specific network architectures.

MODULE DESCRIPTION:

NETWORK CONGESTION:

Congestion occurs when the number of packets being transmitted through the network approaches the packet-handling capacity of the network. Congestion control aims to keep the number of packets below the level at which performance falls off dramatically.

STABLE TOKEN-LIMITED CONGESTION CONTROL (STLCC):

STLCC is able to shape output and input traffic at the inter-domain link with O(1) complexity. STLCC produces a congestion index, pushes the packet loss to the network edge and improves the network performance. To solve the oscillation problem, Stable Token-Limited Congestion Control (STLCC) is introduced. It integrates the algorithms of TLCC and XCP [10]. In STLCC, the output rate of the sender is controlled according to the XCP algorithm, so there is almost no packet loss at the congested link. At the same time, the edge router allocates all the access token resource equally among the incoming flows. When congestion happens, the incoming token rate increases at the core router, and then the congestion level of the congested link also increases. Thus STLCC can measure the congestion level analytically, allocate network resources according to the access link, and keep the congestion control system stable.

TOKEN:

In this paper a new and better mechanism for congestion control, with application to packet loss in networks with P2P traffic, is proposed. In this new method the edge and the core routers write a measure of the quality of service guaranteed by the router, as a digital number in the Option Field of the packet's IP datagram. This number is called a token. The token is read by the routers on the path and interpreted, as its value gives a measure of the congestion, especially at the edge routers. Based on the token number, the edge router at the source shapes the traffic, thus reducing the congestion on the path.

CORE ROUTER:

A core router is a router designed to operate in the Internet backbone, or core. To fulfil this role, a router must be able to support multiple telecommunications interfaces of the highest speed in use in the core Internet and must be able to forward IP packets at full speed on all of them. It must also support the routing protocols being used in the core. A core router is distinct from an edge router.

EDGE ROUTER:

Edge routers sit at the edge of a backbone network and connect to core routers. The token is read by the path routers and interpreted, as its value gives a measure of the congestion, especially at the edge routers. Based on the token number, the edge router at the source shapes the traffic, thus reducing the congestion on the path.
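A minimal sketch of the token module is given below, assuming a plain Packet object whose tokenLevel field stands in for the token written into the IP Option Field; the congestion threshold and the shaping decision are illustrative assumptions, since Java's standard sockets do not expose IP options directly.

    public class EdgeRouterTokenSketch {
        public static class Packet {
            public int tokenLevel;             // stands in for the token in the IP Option Field
            public byte[] payload;
        }

        private final int congestionThreshold; // token value above which traffic is shaped

        public EdgeRouterTokenSketch(int congestionThreshold) {
            this.congestionThreshold = congestionThreshold;
        }

        // Edge router at the source: stamp the packet with the current token value.
        public void writeToken(Packet packet, int currentTokenValue) {
            packet.tokenLevel = currentTokenValue;
        }

        // A router reading the token: a high value indicates congestion, so the
        // source's traffic should be shaped (sent more slowly).
        public boolean shouldShapeTraffic(Packet packet) {
            return packet.tokenLevel > congestionThreshold;
        }
    }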

CONCLUSION

This paper is organized as follows. In Section II, the architecture of Token-Based Congestion Control (TBCC), which provides fair bandwidth allocation to end-users in the same domain, is introduced. Section III evaluates two congestion control algorithms, CSFQ and TBCC. In Section IV, STLCC is presented and the simulation is designed to demonstrate its validity. Section V presents the Unified Congestion Control Model, which is the abstract model of CSFQ, Re-feedback and STLCC. In Section VI, the simple version of STLCC is proposed, which can be deployed on the current Internet. Finally, conclusions are given. To inter-connect two TBCC domains, the inter-domain router is added to the TBCC system as in Figure 8. To support the SKA arrangement, the inter-domain router should limit its output token rate to the rate of the other domains and police the incoming token rate from peer domains.
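As a rough illustration of policing the incoming token rate from a peer domain, the sketch below meters incoming token usage with a standard token-bucket meter; the agreed rate, the bucket depth and the conforming/non-conforming decision are assumptions made for the example rather than part of STLCC's specification.

    public class IncomingTokenPolicer {
        private final double agreedTokenRate;   // tokens per second allowed from the peer domain
        private final double bucketDepth;       // burst tolerance, in tokens
        private double bucket;                  // tokens currently available in the bucket
        private long lastRefillNanos = System.nanoTime();

        public IncomingTokenPolicer(double agreedTokenRate, double bucketDepth) {
            this.agreedTokenRate = agreedTokenRate;
            this.bucketDepth = bucketDepth;
            this.bucket = bucketDepth;
        }

        // Called for each incoming packet carrying 'tokens' units of token resource;
        // returns true if the packet conforms to the agreed incoming token rate.
        public synchronized boolean conforms(double tokens) {
            long now = System.nanoTime();
            double elapsedSeconds = (now - lastRefillNanos) / 1e9;
            lastRefillNanos = now;
            bucket = Math.min(bucketDepth, bucket + elapsedSeconds * agreedTokenRate);
            if (bucket >= tokens) {
                bucket -= tokens;
                return true;                     // within the agreed rate
            }
            return false;                        // exceeds the agreed rate: police this packet
        }
    }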

SIGNATURE OF THE INTERNAL GUIDE
