
Design Considerations

This chapter explores important design considerations in networks and switching applications. First, the way in which delay and loss impact an application's design and performance is considered. Next, the tradeoff between efficiency and features is examined, listing considerations that influence the decision of what is best for your application. Finally, some practical implications of deciding whether to build a dedicated private network, use a shared public network, or adopt a hybrid private/public approach are discussed.

Application Impacts of Delay and Loss


This section reviews the impact of delay and loss on applications. Applications may be either bandwidth limited or latency limited. Loss reduces the usable throughput for most network and transport layer protocols.

Impact of Delay on Applications


Two situations occur when a source sends a burst of data at a certain transmission rate across a network with a certain delay, or latency: we call these bandwidth limited and latency limited. A bandwidth-limited application occurs when the receiver begins receiving data before the transmitter has completed transmission of the burst. A latency-limited application occurs when the transmitter finishes sending the burst of data before the receiver begins receiving any data.

Consider the consequence of sending a burst of length b equal to 100,000 bits at a peak rate of R Mbps across the domestic United States, with a propagation delay of 30 ms. It takes 30 ms for the bit stream to propagate from the originating station to the receiving station across approximately 4000 miles of fiber, since the speed of light in fiber is less than that in free space and fiber is usually not routed along the most direct path. When the peak rate between originator and destination is 1 Mbps, then after 30 ms only about one-third of the burst has entered the transmission medium; the remainder is still buffered in the transmitting terminal. This is called a bandwidth-limited application because the lack of bandwidth keeps the transmitter from releasing the entire message immediately.

Now let's look at the case where the transmitter has sent the entire transmission before the receiver has received any data. When the peak rate is increased to 10 Mbps, the situation changes significantly: the entire burst is sent by the workstation before the first bit even reaches the destination. Indeed, only about one-third of the bit positions propagating through the fiber transmission system are occupied by the burst! If the sending terminal must receive a response before the next burst is sent, then a significant reduction in throughput results. This type of situation is called latency limited because the latency of the response from the receiver limits additional transmission of information.
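
As a concrete illustration, here is a minimal sketch in Python (not from the original text; the constants mirror the example above) that classifies a transfer as bandwidth limited or latency limited by comparing the burst transmission time against the one-way propagation delay.

    def classify_burst(burst_bits, rate_bps, prop_delay_s):
        """Classify a burst transfer by comparing the time needed to
        transmit it against the one-way propagation delay."""
        tx_time = burst_bits / rate_bps  # time for the sender to clock out the burst
        if tx_time > prop_delay_s:
            # Receiver starts receiving before the sender finishes sending.
            return "bandwidth limited"
        # Sender finishes before the first bit arrives at the receiver.
        return "latency limited"

    BURST = 100_000   # bits, as in the example above
    DELAY = 0.030     # 30 ms coast-to-coast propagation delay

    for rate in (1e6, 10e6):  # the 1 Mbps and 10 Mbps cases from the text
        print(f"{rate / 1e6:>4.0f} Mbps -> {classify_burst(BURST, rate, DELAY)}")
    # 1 Mbps  -> bandwidth limited (~30,000 of 100,000 bits in flight after 30 ms)
    # 10 Mbps -> latency limited (the 10 ms burst fills ~1/3 of the 30 ms pipe)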

Impact of Loss on Applications


Loss can be another enemy of applications. For many applications, the loss of a single cell results in the loss of an entire packet, because the SAR sublayer in the AAL will fail in its attempt at reassembly. Loss can result in a time-out or negative acknowledgment in a higher-layer protocol, such as at the transport layer. If the round-trip time is long with respect to the application window size, then the achievable throughput can be markedly reduced. The amount of buffering required in the network is proportional to the delay-bandwidth product.
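
As a rough illustration of the delay-bandwidth product, the following sketch computes the buffering implied by an assumed trunk rate and round-trip time; both values are illustrative assumptions, not figures from the text.

    rate_bps = 150e6         # assumed OC-3-class trunk rate
    round_trip_s = 0.060     # assumed 60 ms round trip (2 x 30 ms one way)
    buffer_bits = rate_bps * round_trip_s   # delay-bandwidth product
    cells = buffer_bits / (53 * 8)          # expressed in 53-byte ATM cells
    print(f"~{buffer_bits / 8 / 1e3:.0f} KB of buffering, about {cells:.0f} cells")
    # -> ~1125 KB of buffering, about 21226 cells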

Higher-layer protocols recover from detected errors, or time-outs, by one of two basic methods: either all information that was sent after the detected error or time-out is retransmitted, or only the information that was actually in error or timed out is selectively retransmitted. Resending all of the information means that if N packets were sent after the detected error or time-out, then N packets are retransmitted, reducing the usable throughput. This scheme is often called a Go-Back-N retransmission strategy. In the second method, the packet that has a detected error, or causes a time-out, is explicitly identified by the higher-layer protocol; then only that packet need be retransmitted. This scheme is often called a selective-reject retransmission strategy.

A simple model of the performance of these two retransmission strategies is presented to illustrate the impact of cell loss on higher-layer protocols.

The number of packets in the retransmission window W is determined by the transmission rate R (in bps), the packet size p (in bytes), and the one-way propagation delay as follows:

W = (2 * propagation delay * R) / (8 * p)

The probability pi that an individual packet is lost is derived from the Cell Loss Ratio (CLR). A packet of p bytes occupies approximately p/48 ATM cells (48 payload bytes per cell), so the packet loss probability is approximately:

pi ≈ (p / 48) * CLR

In the Go-Back-N strategy, if a single packet is in error out of a window of W packets, then the entire window of W packets must be retransmitted. For the Go-Back-N retransmission strategy, the usable throughput is approximately the inverse of the average number of times the entire window must be sent, which is approximately

Go-Back-N throughput ≈ (1 - pi) / (1 + pi * W)

In the selective-reject strategy, if a single packet is in error, then only that packet is retransmitted. For the selective-reject retransmission strategy, the usable throughput is approximately the inverse of the average number of times any individual packet must be sent, which is

Selective reject throughput ≈ 1 - pi

This formula is valid for the case in which only one packet needs to be retransmitted within the round-trip delay window. It also applies to a more sophisticated protocol that can retransmit multiple packets, such as SSCOP.
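
The following sketch applies the three formulas above with assumed parameters (the rate, packet size, delay, and CLR values are illustrative); it shows how Go-Back-N throughput degrades with the window size W, while selective reject degrades only with the packet loss probability pi.

    def window_packets(prop_delay_s, rate_bps, pkt_bytes):
        # W = (2 * propagation delay * R) / (8 * p)
        return 2 * prop_delay_s * rate_bps / (8 * pkt_bytes)

    def packet_loss_prob(pkt_bytes, clr):
        # A packet of p bytes occupies ~p/48 cells, each lost with probability CLR.
        return (pkt_bytes / 48) * clr

    R, P, D, CLR = 150e6, 8192, 0.030, 1e-6  # assumed rate, pkt size, delay, CLR
    W = window_packets(D, R, P)
    pi = packet_loss_prob(P, CLR)
    go_back_n = (1 - pi) / (1 + pi * W)
    selective_reject = 1 - pi
    print(f"W = {W:.0f} packets, pi = {pi:.2e}")
    print(f"Go-Back-N ~ {go_back_n:.4f}, selective reject ~ {selective_reject:.6f}")
    # -> W = 137 packets, pi = 1.71e-04
    # -> Go-Back-N ~ 0.9769, selective reject ~ 0.999829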

Private versus Public Networking


This section gives some objective criteria to assist in making the sometimes difficult decision between private and public networking. One key element to consider is the overall cost, which should include planning, design, implementation, support, service, maintenance, and ongoing enhancements. These can require the dedication of significant resources for a private network, but are often included in public network services.

The current network will likely consist of mainframes, minicomputers, and/or LANs interconnected by bridges and routers. Almost every existing network contains some form of legacy SNA protocols. For a private network, it is important to estimate the capital and ongoing costs accurately. It is a common mistake to overlook, or underestimate, the planning, support, and upgrade costs of a private network.

If you have the site locations and the estimated traffic between sites, a carrier will often be able to respond with a proposal covering fixed and recurring costs. These proposals often offer both fixed and usage-based pricing options. The performance of a public service is guaranteed by the service provider, while in a private network it can be controlled to some extent by the network designer. In a public network, switches and trunks can be shared across multiple customers, reducing cost and achieving economies of scale that are difficult to match in a private network. Carriers will often implement shared trunks at speeds higher than any access line speed, and consequently can achieve lower delay and loss than a private network, due to the economies of scale inherent in the large traffic volumes required for statistical multiplexing gain. This decreases costs for the individual user while delivering performance that is suitable for most applications.

Figure: Dual-homed public network, with the LA, NYC, Chicago, and Dallas sites connected to a public ATM network.

              NYC    LA    Chicago   Dallas
  NYC          -     26      21        17
  LA          22      -      16         7
  Chicago     19     11       -         2
  Dallas      12      3       8         -

Traffic Matrix (Mbps)

Variations in Delay
When designing a network of multiple switches connected by trunks, there are several things to consider. Some of these considerations are common to private and public networks, while some are specific to each. This section focuses on the aspects that are common to both, and points out some unique considerations that occur in each.

Impact of Cell Delay Variation on Applications

Recall the cell delay variation (CDV) QoS parameter, which measures the clumping that can occur when cells are multiplexed together from multiple sources, either in the end system or at any switch, multiplexer, or intermediate system in the network. The resulting delay variation can accumulate when traversing multiple switches in the network. This becomes critical when transporting delay-sensitive applications such as video, audio, and interactive data over ATM. In order to support these applications, a buffer is needed to absorb the jitter introduced across the end-to-end network.

Jitter refers to the clumping or dispersion that occurs to cells that were nominally spaced prior to transfer across an ATM network. The accumulated jitter must be accommodated by a playback buffer. The playback buffer must be sized so that underrun and overrun events occur infrequently enough, according to the CDV (clumping) and cell dispersion that accrue across the network.
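
As a rough sketch of this sizing argument, the following assumes an accumulated end-to-end CDV and a cell time at an assumed link rate, then centers the playback buffer at half depth so it can absorb both clumped (early) and dispersed (late) arrivals; all constants are illustrative, not from the text.

    import math

    end_to_end_cdv_us = 50.0   # assumed accumulated jitter (one standard deviation)
    cell_time_us = 2.83        # one cell time at ~150 Mbps (424 bits / 150 Mbps)
    k = 6                      # cover +/- k sigma so underrun/overrun stays rare
    depth_cells = math.ceil(2 * k * end_to_end_cdv_us / cell_time_us)
    added_delay_us = depth_cells * cell_time_us / 2  # playout starts at half depth
    print(f"playback buffer ~ {depth_cells} cells, "
          f"added delay ~ {added_delay_us:.0f} us")
    # -> playback buffer ~ 213 cells, added delay ~ 301 us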

In the overrun scenario, cells arrive too closely clumped together until finally a cell arrives when there is no space left in the playback buffer, and cells are lost. This can be a serious event for a video-coded signal, because an entire frame may be lost due to the loss of one overrun cell.

Figure: Overrun example showing the impact on the input cell stream, the playback buffer occupancy, and the output cell stream over successive cell times.

The underrun scenario is the reverse: cells arrive too dispersed in time, such that when the time arrives for the next cell to leave the playback buffer, the buffer is empty. This too can have a negative consequence for a video application, because the continuity of motion or even the timing may be disrupted.

Estimating Cell Delay Variation in a Network

The accumulation of CDV across multiple switching nodes can be approximated by the square-root rule from the ATM Forum B-ICI specification. This states that the end-to-end CDV is approximately equal to the CDV of an individual switch times the square root of the number of switching nodes in the end-to-end connection. The reason that we don't simply add the variation per node is that the extremes in variation are unlikely to occur simultaneously and tend to cancel each other out somewhat; for example, while the variation is high in one node, it may be low in another.

Consider the probability distribution of the delay at various points in the network. The assumed delay distribution has a fixed delay of 25 μs per ATM switch; each ATM switch also adds a normally distributed delay with mean and standard deviation each equal to 25 μs. Therefore, the average delay added per node is 50 μs. Starting from the left-hand side at node A, the traffic has no variation. The first ATM switch adds the fixed delay plus a random delay, resulting in a modified distribution. Each subsequent ATM switch adds the same constant delay and an independent random delay. The resulting delay distribution after traversing four nodes is markedly different, as can be seen from the plots: not four times worse, but approximately twice as bad.

Prob[Delay >= x] ≈ Q( (x - N*a) / (sqrt(N) * b) )

where N is the number of nodes, a is the average delay added per node, b is the standard deviation of the delay per node, and Q is the tail probability of the standard normal distribution.

Figure: Delay distribution plotted after each of four ATM switches in tandem (x-axis: delay in μs).
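
A quick Monte Carlo check of this behavior uses the per-node delay model from the example above (25 μs fixed plus a normally distributed delay with mean and standard deviation of 25 μs); the simulation itself is an illustrative sketch, not from the source.

    import random
    import statistics

    def end_to_end_delays(n_nodes, trials=100_000, fixed=25.0, mu=25.0, sigma=25.0):
        """Sum the fixed plus random delay added by each of n_nodes switches."""
        return [sum(fixed + random.gauss(mu, sigma) for _ in range(n_nodes))
                for _ in range(trials)]

    for n in (1, 4):
        d = end_to_end_delays(n)
        print(f"N={n}: mean {statistics.mean(d):6.1f} us, "
              f"stdev {statistics.stdev(d):5.1f} us")
    # N=1: mean ~  50 us, stdev ~ 25 us
    # N=4: mean ~ 200 us, stdev ~ 50 us, i.e. twice (not four times) as spread
    # out, matching the sqrt(N)*b term in the formula above.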

Network Performance Modeling

Most users are concerned with modeling the performance of a network. There are two basic modeling approaches: simulation and analysis. A simulation is usually much more accurate, but can become a formidable computational task when simulating the performance of a large network. Analysis can be less computationally intensive, but is often inaccurate.

Simulation models are very useful in investigating the detailed operation of an ATM system, which can lead to key insights into equipment, network, or application design. However, simulations generally take too long to execute to be used as an effective network design tool.

A good way to bootstrap the analytical method is to simulate a detailed ATM node's performance under the expected mix of traffic inputs. Often an analytical approximation to the empirical simulation results can be developed as input to the analytical tool. An assumption often made in network modeling is that the nodes operate independently and that traffic mixes and splits independently and randomly.

7
The inputs and outputs of the network model are similar for any packet-switched network design problem. The inputs are the topology, the traffic, and the routing. The network topology must be defined, usually as a graph of nodes and links, and the characteristics of each node and link relevant to the simulation or analytical model must be described. Next, the pattern of traffic offered between the nodes must be defined; for point-to-point traffic this is commonly done via a traffic matrix. Finally, the routing, that is, the set of links that traffic follows from source to destination, must be defined.
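
The sketch below illustrates these three inputs with a hypothetical topology and routing, plus an excerpt of the traffic matrix shown earlier; the principal derived quantity here is the load on each link, obtained by summing the traffic routed over it.

    from collections import defaultdict

    # Topology: links as node pairs (hypothetical, for illustration only).
    links = {("NYC", "Chicago"), ("Chicago", "LA"),
             ("NYC", "Dallas"), ("Dallas", "LA")}

    # Traffic matrix excerpt (Mbps), taken from the matrix shown earlier.
    traffic = {("NYC", "LA"): 26, ("NYC", "Chicago"): 21, ("LA", "Chicago"): 16}

    # Routing: each source-destination pair maps to a path of links (assumed).
    routing = {("NYC", "LA"): [("NYC", "Chicago"), ("Chicago", "LA")],
               ("NYC", "Chicago"): [("NYC", "Chicago")],
               ("LA", "Chicago"): [("Chicago", "LA")]}

    # Output: offered load per link, the input to a delay/loss calculation.
    load = defaultdict(float)
    for pair, mbps in traffic.items():
        for link in routing[pair]:
            load[link] += mbps

    for link, mbps in sorted(load.items()):
        print(f"{link[0]:>7} - {link[1]:<7}: {mbps:.0f} Mbps")
    # -> Chicago - LA: 42 Mbps ; NYC - Chicago: 47 Mbps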

The principal outputs are measures of performance and cost. The principal performance
measures of a model are loss and delay statistics. A model will often produce an economic cost
to allow the network designer to select an effective price-performance tradeoff.

Reaction to Extreme Situations

Another important consideration in network design is the desired behavior of the network
under extreme situations. We will consider the extreme situations of significant failure, traffic
overload, and unexpected peak traffic patterns.

A network may have different performance objectives under failure situations than under normal circumstances. Also, you may desire that some traffic be preempted during a failure scenario so that support for mission-critical traffic is maintained. A failure effectively reduces either bandwidth or switching resources, and hence can be a cause of congestion.

Traffic overloads and unexpected traffic parameters can also cause congestion. For example, offered traffic in excess of the traffic contract may create congestion. Congestion can drive a switch, or multiple points in a network, into overload, reducing overall throughput significantly if congestion collapse occurs. If you expect this situation to occur, then a mechanism to detect congestion, correlate it with its cause, and provide feedback in order to isolate the offending traffic sources and achieve the required measures of fairness is desirable. If you are uncertain as to how long such overloads will persist, then slow-reacting feedback controls may actually reduce throughput, because the reaction may occur after the congestion has already abated (as we see with higher-layer protocols such as TCP).
