CONGESTION

Congestion is a situation in which too many packets are present in (a part of) the subnet, so that performance degrades. Factors causing congestion:
- The input traffic rate exceeds the capacity of the output lines.
- The routers are too slow to perform bookkeeping tasks (queueing buffers, updating tables, etc.).
- The routers' buffers are too limited.

When too much traffic is offered, congestion sets in and performance degrades sharply.

Policies that affect congestion.

Congestion control is different from flow control:
1. Congestion control is a global issue, involving the behavior of all the hosts, all the routers, the store-and-forward processing within the routers, and so on.
2. Flow control relates to the point-to-point traffic between a given sender and a given receiver.

General principles of congestion control


Open loop solutions solve the problem by good design; in essence, they make sure the problem does not occur in the first place. Tools include deciding when to accept new traffic, when to discard packets and which ones, and how to schedule packets at various points in the network. Common to all open loop solutions is that they make decisions without regard to the current state of the network.

Closed Loop solution


Closed loop solutions are based on the concept of a feedback loop, which consists of the following three parts:
1. Monitor the system to detect when and where congestion occurs.
2. Pass this information to places where actions can be taken.
3. Adjust system operation to correct the problem.

Chief metrics for monitoring the subnet for congestion are: the percentage of all packets discarded for lack of buffer space, the average queue lengths, the number of packets that time out and are retransmitted, the average packet delay, and the standard deviation of packet delay.
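To make the bookkeeping concrete, the following is a minimal sketch of how a router might accumulate these metrics. The class and attribute names, and the periodic sampling scheme, are assumptions for illustration, not part of any particular router implementation.

import math

class CongestionMonitor:
    """Hypothetical per-router bookkeeping for the metrics listed above."""

    def __init__(self):
        self.received = 0        # packets that arrived at the router
        self.discarded = 0       # packets dropped for lack of buffer space
        self.retransmitted = 0   # packets that timed out and were resent
        self.queue_samples = []  # queue lengths sampled periodically
        self.delays = []         # per-packet delays (seconds)

    def discard_ratio(self):
        return self.discarded / self.received if self.received else 0.0

    def average_queue_length(self):
        samples = self.queue_samples
        return sum(samples) / len(samples) if samples else 0.0

    def delay_stats(self):
        """Return (average delay, standard deviation of delay)."""
        if not self.delays:
            return 0.0, 0.0
        mean = sum(self.delays) / len(self.delays)
        var = sum((d - mean) ** 2 for d in self.delays) / len(self.delays)
        return mean, math.sqrt(var)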

How to propagate the monitored congestion information?


- The router detecting the congestion sends a separate warning packet to the traffic source.
- A bit or field can be reserved in each packet; when a router detects a congested state, it fills in the field in all outgoing packets to warn the neighbors.
- Hosts or routers periodically send probe packets to explicitly ask about congestion and to route traffic around problem areas.
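As a toy illustration of the second technique (a reserved warning bit in each packet), a router's forwarding step might look like the sketch below. The attribute names (congested, congestion_warning, transmit) are invented for illustration.

def forward(packet, line):
    # Warning-bit sketch: when the output line is congested, set a reserved
    # header bit on every outgoing packet so downstream neighbors learn of
    # the congestion. All attribute names here are illustrative assumptions.
    if line.congested:
        packet.congestion_warning = True
    line.transmit(packet)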

How to correct the congestion problem?


Increase the resources:
- Using an additional line to temporarily increase the bandwidth between certain points.
- Splitting traffic over multiple routes.
- Using spare routers.

Decrease the load:
- Denying service to some users.
- Degrading service to some or all users.
- Having users schedule their demands in a more predictable way.

Congestion Control in Virtual-Circuit Subnets


One technique that is widely used to keep congestion that has already started from getting worse is admission control. The idea is simple: once congestion has been signaled, no more virtual circuits are set up until the problem has gone away.

Thus, attempts to set up new transport layer connections fail. Letting more people in just makes matters worse. While this approach is crude, it is simple and easy to carry out. In the telephone system, when a switch gets overloaded, it also practices admission control by not giving dial tones.
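A minimal sketch of this admission-control rule, assuming a hypothetical subnet object that tracks whether congestion has been signaled:

def setup_virtual_circuit(subnet, src, dst):
    # Admission control: once congestion has been signaled, refuse to set up
    # new virtual circuits until the problem has gone away, so the attempted
    # transport layer connection simply fails.
    if subnet.congestion_signaled:
        return None
    return subnet.establish_circuit(src, dst)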

An alternative approach is to allow new virtual circuits but carefully route all new virtual circuits around problem areas. Suppose that a host attached to router A wants to set up a connection to a host attached to router B. Normally, this connection would pass through one of the congested routers. The subnet can be redrawn, omitting the congested routers and all of their lines; the dashed line then shows a possible route for the virtual circuit that avoids the congested routers.

(a) A congested subnet. (b) A redrawn subnet that eliminates the congestion. A virtual circuit from A to B is also shown.

Another (open loop) strategy relating to virtual circuits is to negotiate an agreement between the host and subnet, so that the subnet can reserve resources along the path when the circuit is set up. Since all necessary resources are guaranteed to be available, congestion is unlikely to occur on the new virtual circuits. Reservation can be done all the time, or only when the subnet is congested.
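The reservation idea can be sketched as below, assuming each router on the chosen path exposes hypothetical reserve and release operations for bandwidth and buffer space:

def setup_circuit_with_reservation(path, bandwidth, buffers):
    # Try to reserve resources on every router along the path at setup time.
    # If any hop cannot commit, roll back and refuse the circuit, so that an
    # admitted circuit is guaranteed the resources it needs.
    reserved = []
    for router in path:
        if not router.reserve(bandwidth, buffers):
            for r in reserved:
                r.release(bandwidth, buffers)
            return False
        reserved.append(router)
    return True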

Traffic shaping
One of the main causes of congestion is that traffic is often bursty. Another open loop method is forcing the packets to be transmitted at a more predictable rate. This method is widely used in ATM networks and is called traffic shaping.

Leaky bucket algorithm


Figure: (a) A leaky bucket with water. (b) A leaky bucket with packets.

Each host is connected to the network by an interface containing a leaky bucket - a finite internal queue. The outflow is at a constant rate when there is any packet in the bucket, and zero when the bucket is empty. If a packet arrives at the bucket when it is full, the packet is discarded.
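A minimal sketch of this host interface follows. The queue capacity and drain rate are parameters, and all names are illustrative assumptions.

from collections import deque

class LeakyBucket:
    # Finite internal queue drained at a constant rate; packets arriving
    # when the bucket is full are discarded.

    def __init__(self, capacity, packets_per_tick):
        self.queue = deque()
        self.capacity = capacity          # maximum packets held
        self.rate = packets_per_tick      # constant outflow per clock tick

    def arrive(self, packet):
        if len(self.queue) >= self.capacity:
            return False                  # bucket full: packet is dropped
        self.queue.append(packet)
        return True

    def tick(self):
        # Called once per clock tick; releases at most `rate` packets,
        # and nothing when the bucket is empty.
        released = []
        while self.queue and len(released) < self.rate:
            released.append(self.queue.popleft())
        return released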

The token bucket algorithm


The leaky bucket algorithm enforces a rigid output pattern at the average rate, no matter how bursty the traffic is. For many applications, it is better to allow the output to speed up somewhat when large bursts arrive, so a more flexible algorithm is needed, preferably one that never loses data.

The bucket holds tokens, generated by a clock at the rate of one token every ΔT sec. For a packet to be transmitted, it must capture and destroy one token. The token bucket algorithm allows hosts to save up tokens, up to the maximum size of the bucket, n, which means that bursts of up to n packets can be sent at the maximum speed (for a certain period of time).

The token bucket algorithm. (a) Before. (b) After.
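A corresponding sketch of the token bucket, where one tick models one ΔT interval; the bucket size n and the line interface are assumptions.

class TokenBucket:
    # Tokens are generated at one per tick and saved up to the bucket size n,
    # so a burst of up to n packets can be sent at the maximum speed.

    def __init__(self, bucket_size):
        self.n = bucket_size
        self.tokens = 0

    def tick(self):
        # One token every delta-T seconds, but never more than n saved tokens.
        self.tokens = min(self.n, self.tokens + 1)

    def try_send(self, packet, line):
        # A packet must capture and destroy one token to be transmitted;
        # otherwise it simply waits (no data is lost).
        if self.tokens >= 1:
            self.tokens -= 1
            line.transmit(packet)
            return True
        return False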

Choke packet
This method can be used in both virtual circuit and datagram subnets. Each output line is associated with a variable u, whose value (0.0 - 1.0) reflects the recent utilization of that line; a common update is u_new = a * u_old + (1 - a) * f, where f is the instantaneous line utilization (0 or 1) and the constant a determines how fast the router forgets recent history. Whenever u moves above a threshold, the output line enters a "warning" state.
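A sketch of this per-line bookkeeping; the history weight and the threshold are assumed values, and the line attributes are illustrative.

A = 0.9                     # history weight (assumed value)
WARNING_THRESHOLD = 0.75    # utilization threshold (assumed value)

def update_line_state(line, busy_now):
    # Exponentially weighted update of the recent utilization u, followed by
    # the warning-state check.
    f = 1.0 if busy_now else 0.0          # instantaneous utilization sample
    line.u = A * line.u + (1 - A) * f
    line.warning = line.u > WARNING_THRESHOLD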

If a newly arriving packet is to be output on a line in the warning state, the router sends a choke packet back to the source host, giving it the destination found in the packet. When the source host gets the choke packet, it is required to reduce the traffic sent to that destination by a certain percentage.

The host should then ignore choke packets referring to the same destination for a fixed time interval, because further choke packets are probably already in transit, triggered by traffic sent before the reduction took effect. If no choke packets arrive during the subsequent listening period, the host may increase the flow again. Some variations on this congestion control algorithm exist.
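On the host side, the reaction to choke packets might look like the sketch below. The reduction factor, listening interval, and increase factor are assumed values, and state is kept per destination.

import time

class ChokeAwareSender:
    REDUCE_FACTOR = 0.5      # cut traffic to the destination in half (assumed)
    LISTEN_INTERVAL = 2.0    # seconds during which further chokes are ignored (assumed)
    INCREASE_FACTOR = 1.1    # cautious increase when no chokes arrive (assumed)

    def __init__(self, initial_rate):
        self.initial_rate = initial_rate
        self.rate = {}                   # destination -> allowed sending rate
        self.last_choke = {}             # destination -> time of last honored choke

    def on_choke_packet(self, destination):
        now = time.time()
        last = self.last_choke.get(destination)
        # Ignore choke packets for this destination during the listening
        # period; they refer to traffic sent before the reduction took effect.
        if last is not None and now - last < self.LISTEN_INTERVAL:
            return
        current = self.rate.get(destination, self.initial_rate)
        self.rate[destination] = current * self.REDUCE_FACTOR
        self.last_choke[destination] = now

    def on_quiet_listening_period(self, destination):
        # No choke packets arrived during the listening period, so the host
        # may increase the flow to this destination again.
        current = self.rate.get(destination, self.initial_rate)
        self.rate[destination] = current * self.INCREASE_FACTOR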

Figure 24.7: Choke packet. (a) A choke packet that affects only the source. (b) A choke packet that affects each hop it passes through.

Load shedding
When none of the above methods makes the congestion disappear, routers can bring out the heavy artillery: load shedding, i.e., simply throwing packets away. Which packet should be discarded? One option is to discard any packet at random.

Other options: discard the newer packets (the wine policy, "old is better than new", suitable for file transfer); discard the older packets (the milk policy, "fresh is better than old", suitable for multimedia transfer); or discard the less important packets (this intelligence requires the senders to mark their packets with priority classes to indicate how important they are).
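These discard policies can be sketched as a single victim-selection function. The packet priority attribute and the oldest-first queue ordering are assumptions for illustration.

import random

def choose_victim(queue, policy):
    # `queue` is a non-empty list of packets ordered oldest-first.
    if policy == "random":
        return random.choice(queue)
    if policy == "wine":        # old is better than new: drop the youngest
        return queue[-1]
    if policy == "milk":        # fresh is better than old: drop the oldest
        return queue[0]
    if policy == "priority":    # drop the least important packet
        return min(queue, key=lambda p: p.priority)
    raise ValueError("unknown discard policy: " + policy)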

Jitter control
For audio and video transmission, it is important to ensure a nearly constant delivery time. For example, the agreement between the host and the subnet might be that 99% of the packets are delivered with a delay in the range of 24.5 msec to 25.5 msec. Jitter can be controlled by computing the expected transit time for each hop along the path.

When a packet arrives at a router, the router checks to see how much the packet is behind or ahead of its schedule. If the packet is ahead of schedule, it is held just long enough to get it back on schedule. If it is behind schedule, the router tries to get it out the door quickly.

(a) High jitter. (b) Low jitter.
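A simplified sketch of this per-hop decision follows; the sleep-based hold and all names are simplifications for illustration.

import time

def forward_with_jitter_control(packet, arrival_time, expected_transit, line):
    # If the packet is ahead of its schedule, hold it just long enough to get
    # it back on schedule; if it is behind, send it out right away.
    ahead_by = expected_transit - (time.time() - arrival_time)
    if ahead_by > 0:
        time.sleep(ahead_by)             # hold the early packet
    line.transmit(packet)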
