
Practical IP Network QoS

IP Network Traffic Control and Modeling


Held by: Leonardo Balliache.
2.0.- One UDP flow
Warning: this page contains a number of png and gif pictures; it may take a while to load if you have a slow Internet connection.
To continue with our study we are going to repeat the tests we did before with the TCP protocol in section 1.0.- One TCP flow, this time using the UDP
protocol. The topology is the same in this case, i.e., a simple link connecting two hosts, but the end applications have to be changed to work
with UDP:
Both hosts run UDP. Host A has a CBR (Constant Bit Rate) source that sends packets at a fixed
rate of 1.9 Mbps. The packets are 1000 bytes in size. Host B simply discards the packets it receives. The
link is a 2 Mbps link with a propagation delay of 10 milliseconds. The buffer size is 20 packets, so our
queue can hold a maximum of 20 packets. The queue is DropTail (just a FIFO queue). Most actual
routers and hosts in the Internet use this kind of queue.
The tcl code to feed the ns-2 simulator is in upd-1flow.tcl.
At time 0.1 seconds the CBR source in A starts sending packets, and at time 10.0 seconds it stops.
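The essentials of such a simulation look something like the following sketch. This is not the actual upd-1flow.tcl, just a minimal illustration; names like $A, $B and the trace file out.tr are our own:

    # Minimal sketch of the scenario: one CBR/UDP flow over a single link
    set ns [new Simulator]
    set tf [open out.tr w]
    $ns trace-all $tf                         ;# trace file, parsed later with awk

    set A [$ns node]                          ;# sending host
    set B [$ns node]                          ;# receiving host
    $ns duplex-link $A $B 2Mb 10ms DropTail   ;# 2 Mbps link, 10 ms delay, FIFO queue
    $ns queue-limit $A $B 20                  ;# buffer of 20 packets

    set udp [new Agent/UDP]                   ;# UDP at host A
    $ns attach-agent $A $udp
    set null [new Agent/Null]                 ;# host B just discards what it receives
    $ns attach-agent $B $null
    $ns connect $udp $null

    set cbr [new Application/Traffic/CBR]     ;# constant bit rate source
    $cbr attach-agent $udp
    $cbr set packetSize_ 1000                 ;# bytes
    $cbr set rate_ 1.9Mb                      ;# a little below the link capacity

    proc finish {} {                          ;# flush the trace before exiting
        global ns tf
        $ns flush-trace
        close $tf
        exit 0
    }
    $ns at 0.1  "$cbr start"
    $ns at 10.0 "$cbr stop"
    $ns at 10.5 "finish"
    $ns run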
To get information from the simulation we use, besides the ns-2 simulator, the awk program to parse the ns-2
output, and the gnuplot utility to plot the results.
For each test we are going to get the following information:
1. Instant throughput in kbps
2. System throughput in kbps
3. Latency in ms
4. Jitter in ms
5. Bytes transferred
6. Losses
To filter noise we use an EWMA (Exponentially Weighted Moving Average) with α = 0.9 to smooth the outputs
before plotting.
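As an illustration, the instant throughput could be extracted from the ns-2 trace and smoothed with an awk script along these lines. This is only a sketch: the field positions assume the standard ns-2 trace format, and the receiving node id 1, the smoothing recurrence s = α·s + (1 − α)·x, and the file names are our assumptions, not the published scripts:

    # throughput.awk -- instant throughput in kbps, smoothed with EWMA
    # ns-2 trace fields: event time from to type size flags fid src dst seq id
    BEGIN { alpha = 0.9; s = 0; prev = -1 }
    $1 == "r" && $4 == 1 && $5 == "cbr" {          # packets received by node 1
        if (prev >= 0 && $2 > prev) {
            inst = ($6 * 8 / 1024) / ($2 - prev)   # kbps, using 1 kbit = 1024 bits
            s = (s == 0) ? inst : alpha * s + (1 - alpha) * inst
            print $2, s                            # time vs. smoothed throughput
        }
        prev = $2
    }

Running awk -f throughput.awk out.tr > thr.dat would then produce a file that gnuplot can plot directly with: plot "thr.dat" with lines.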
The first graph obtained from the test is instant throughput in kbps:
Okay. After the initial and very fast rate increase, the steady-state throughput is reached and maintained very stably, just a horizontal line, a
little below our setting of 1.9 Mbps. The small difference is because ns-2 takes 1.9 Mbps as 1,900,000 bits/sec, sending
1000-byte packets at an interval of 4.210526 ms, while the udp1flow.tcl source file calculates throughput as 1,900,000 / 1024 kbps = 1855 kbps, just the
value you see in the graph above. Nevertheless, the important thing to observe is the very stable rate offered by the UDP protocol.
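For reference, the numbers work out like this:

    inter-packet interval = 1000 B x 8 / 1,900,000 bps = 4.210526 ms
    plotted rate          = 1,900,000 bps / 1024       = 1855.47 kbps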
Let's see now the system throughput. Exactly as it was for TCP, the instant throughput is calculated by dividing the bytes of each packet
(1000 bytes) by the time required to send the packet. The system throughput is calculated by dividing the total bytes sent up to a given time by the
total elapsed time in seconds. In fact, the system throughput is what really matters, because it tells you the power of your
transmission, that is, how many bytes you are transferring per unit of time.
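In formula form:

    instant throughput = packet bytes / time to send that packet
    system throughput  = total bytes sent so far / total elapsed time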
Okay, as expected, the system throughput increases a little more slowly until reaching a steady state of almost 1.855 Mbps.
Let's see how latency goes with UDP; here we have the graph:
Fine!! Very, very stable at 14 ms (the 10 ms propagation delay plus the 4 ms needed to transmit a 1000-byte packet over the 2 Mbps link). Don't be fooled by this graph: observe that the variation is negligible, from 13.999 ms to 14.001 ms. Really a
very stable and very low latency. Don't forget that TCP latency was almost 73 ms for the same environment. This is one of the reasons why
real-time traffic is normally transported on top of UDP instead of TCP. A little more about real-time traffic is here, where we talk about VoIP.
With this beautiful latency we would expect a jitter of almost zero; let's see:
Almost no jitter. UDP is the right choice when latency and jitter could be a problem for the transported traffic.
What about the total bytes transferred in these 10 seconds? Here we have:
KB sent: 2296.875000
Calculating the system throughput (st) from this data we have:
st = 2296.875 KB * 8 bits/byte / 9.9 seconds = 1856 Kbps
Almost the same value we calculated above.
Finally, let's see the losses. As we did for TCP, we calculate losses by dividing the number of packets dropped by the total number of
packets sent; then we have:
Packets sent: 2352
Packets dropped: 0
Total losses: 0.000000%
Great!! No losses at all. We adjusted our transmission rate to a little less than the link bandwidth and everything went right. This is a big difference from
TCP. When using TCP we did not have to worry about any transmission rate: the FTP source was always ready to send a packet whenever TCP was
ready to receive and transmit it. Using UDP we have to set the transmission rate of our CBR source to make sure that the transmission capacity of
the link will not be exceeded. Keep on reading to see what happens when we ignore this important consideration.
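By the way, these sent/dropped counters can be pulled directly from the ns-2 trace file with one-liners like the following (a sketch: the trace file name out.tr and the node ids come from our illustrative script above, not from the published ones):

    awk '$1 == "+" && $3 == 0' out.tr | wc -l   # packets enqueued by the source node
    awk '$1 == "d"' out.tr | wc -l              # packets dropped anywhere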
Okay, following the same procedure we used before with TCP, let's see what happens when we set the packet size to 100 bytes.
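In the tcl script this should be a one-line change (assuming the CBR application object is called $cbr, as in our sketch above):

    $cbr set packetSize_ 100                  ;# was 1000 bytes

Let's start by having a look at the instant throughput for this new scenario: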
Well, the graph is almost the same; or is it exactly the same? We would need hawk eyes to catch any difference. It seems that UDP does not have any
problem with the packet size. On the contrary, TCP was severely affected when the packet size was reduced: with 100-byte packets the TCP
system throughput dropped by almost one-half. Let's see how the system throughput goes here with UDP:
Again, the same. We have found an advantage of UDP over TCP: packet size does not matter in this case, or at least not for the two
tests we did.
Let's check the latency. With smaller packets the latency should be lower now. Here we have the graph:
Yes... Latency is now 10.4 ms, that is, practically the link propagation delay. The variation, again, is negligible. This latency is exactly as
expected: a 100-byte packet takes 0.4 ms to be transmitted over the 2 Mbps link, and adding this value to the 10 ms
link propagation delay we have a total latency of 10.4 ms.
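Working out the numbers:

    transmission time = 100 B x 8 / 2,000,000 bps = 0.4 ms
    expected latency  = 10 ms + 0.4 ms            = 10.4 ms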
Latency is very stable, so the jitter should be very close to zero; let's see:
No more than 0.002 ms. It couldn't be better for transporting real-time traffic.
Now let's have a look at the number of bytes transferred; because our system throughput did not suffer with the new packet size, we should
expect the same quantity of bytes transferred. Here we have:
KB sent: 2296.191406
Almost no de-rating. The UDP protocol has done a very nice and efficient job with these tiny packets. It's time to check losses:
Packets sent: 23513
Packets dropped: 0
Total losses: 0.000000%
Fine!! The number of packets is now almost ten times higher, but we have no losses at all.
Well, UDP completely ignored the packet size reduction and maintained its high score. A curious question, then: what happens if the
packet is as big as 4000 bytes? The first graph using this bigger packet size is the instant throughput:
What do we have here? Incredible... the same movie. UDP is not very concerned about the packet size; it simply does its job. Again, it is almost
impossible to find any difference from the tests using 1000-byte and 100-byte packet sizes.
The system throughput should be the same thing, just a copy of the previous one; let's see:
Exactly. Packet size does not matter in this case. Latency should be higher because of the bigger packet size; here we have:
Latency climbs up to 26 ms. Well, the time required to transmit a 4000-byte packet over the 2 Mbps link is 16 ms; adding the link
propagation delay of 10 ms we get our 26 ms latency:
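    transmission time = 4000 B x 8 / 2,000,000 bps = 16 ms
    expected latency  = 16 ms + 10 ms              = 26 ms

But this result is infinitely better than the TCP answer of more than 300 ms for this test. Again UDP shows us that it is the right choice when dealing with traffic requiring very low latency and jitter.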
Let's see if jitter is worse when using these bulky packets:
In spite of the higher latency, the new value is almost constant and jitter continues to be very close to zero.
The number of bytes transferred should not suffer when using bigger packets; let's check this assertion:
KB sent: 2296.875000
The answer is exactly the same one we got before when using 1000-byte packets.
Now it is time to see whether losses are present when using bigger packets; here we have the ns-2 output:
Packets sent: 588
Packets dropped: 0
Total losses: 0.000000%
Fewer packets, but no losses at all.
Well, UDP has passed all the tests so far with a very high score, quite a bit higher than TCP. But the competition is not over yet. It's time to check
how UDP reacts when the link bandwidth is reduced. Let's adjust the link bandwidth to 1 Mbps instead of the original value of 2 Mbps; the
propagation delay will be kept at 10 ms.
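In the tcl sketch this amounts to editing the duplex-link line:

    $ns duplex-link $A $B 1Mb 10ms DropTail   ;# was 2Mb

Here we have the instant throughput for this case: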
Well, as expected, the instant throughput dropped to one-half with this change. Nevertheless, the graph is very similar, with the same stable
steady-state behavior.
The system throughput should also adapt to this new link bandwidth capacity; let's see:
Nothing unexpected. Let's check the latency; it should be the same, since we have the same 1000-byte packets and 10 ms link propagation delay.
Here we have the graph:
Something terrible has happened here. Why does the latency climb to these much higher values? Immediately we suspect
the queue behavior. Let's have a look first at the queue graph when using the 2 Mbps link. With this
bandwidth the latency was 14 ms. The graph is here:
The maximum number of packets in the queue is just one, and the queue is empty most of the time. So queue length
does not contribute to latency in this case. Let's see now the queue when the link bandwidth is reduced to 1
Mbps:
Touché!! Here we have the reason for our increased latency. The queue length is now oscillating around 19 packets. We have 19,000 bytes
waiting to be served, adding an extra delay of 152 ms to the fixed portion of 10 ms due to the link propagation delay. The total
calculated latency is then 162 ms:
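    queueing delay = 19 pkt x 1000 B x 8 / 1,000,000 bps = 152 ms
    total latency  = 152 ms + 10 ms                      = 162 ms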
What is the reason for this bottleneck? Why does the queue fill practically up to its maximum capacity? The reason should be that we have a constant
bit rate (CBR) source generating 1.9 Mbps, but our link is only capable of transmitting 1 Mbps. If this is the case we must have losses
when packets leave the source and try to enter the link. We will check this assertion when we have a look at losses, just below.
This latency of about 170 ms destroys any possibility of transporting real-time traffic over UDP under these conditions. This is a very important consideration
when transmitting traffic using UDP: the packet source must not exceed the link bandwidth capacity if we want ideal
conditions for transporting real-time traffic. The real-time application has to be intelligent enough to control its own packet rate when
it observes losses, adapting itself to the network capacity. This is an advantage that the TCP protocol gives us for free, just because TCP was
designed to constantly probe network conditions, trying to utilize as much of the link bandwidth capacity as possible while controlling, at the same
time, any incipient congestion.
With this high latency we should also have problems with jitter; let's see the jitter graph for this environment:
Yes, we have an almost constant jitter of about 3.789 ms. This reflects the oscillation we see above in the latency graph and in the queue length
graph. This problem rules out using this environment to transport real-time traffic.
Let's see now the number of bytes transferred; they should be limited by the new link bandwidth capacity. Here we have:
KB sent: 1227.539062
The number of bytes transferred is now a little more than half of what was sent previously. The problem here is that, because we are using a constant
bit rate source, we must have high losses close to the source, just before the flow enters the link. Let's check the losses to verify this
assertion:
Packets sent: 2352
Packets dropped: 1095
Total losses: 46.556122%
Just as we thought. Observe that the number of packets sent is exactly the same (2352 packets), but now only 1257 of them survive the bottleneck and
travel through the network to the other end of the link.
Of course, we might think this problem would be the same if we used TCP since, instead of an FTP source that sends a packet just when
the network is ready to receive it, we set a constant bit rate (CBR) source that simply sends packets at a fixed rate without worrying
about network conditions. Let's check what happens in this case: using the same environment we used before for UDP, let's just change
the transport protocol to TCP. We need to replace the receiving application with a TCP sink that can acknowledge the packets it receives.
The tcl code to feed the ns-2 simulator is in tcp-1flow-cbr.tcl.
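Relative to our UDP sketch above, the agent setup changes along these lines (again just an illustrative sketch, not the published script):

    set tcp [new Agent/TCP]
    $ns attach-agent $A $tcp
    set sink [new Agent/TCPSink]              ;# acknowledges the packets it receives
    $ns attach-agent $B $sink
    $ns connect $tcp $sink

    set cbr [new Application/Traffic/CBR]     ;# the same CBR source, now on top of TCP
    $cbr attach-agent $tcp
    $cbr set packetSize_ 1000
    $cbr set rate_ 1.9Mb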
The instant throughput in kbps is:
Okay, this graph is very similar to the one we got before using an FTP source. Have a look here.
Let's see now the system throughput:
Again, the pattern is almost the same; have a look at the same graph using an FTP source here:
And now let's check what we think will be the "ultimate test" of TCP protocol performance using CBR sources; here we have the losses:
Packets sent: 1264
Packets dropped: 0
Total losses: 0.000000%
Great!! TCP controls the CBR source fairly well. Only 1264 packets are generated and none of them is lost. Observe that if we want or need to
use UDP, for whatever reason, then we must be very careful about the transmission rate of our sources. This is because UDP cannot
exercise any control, and when the source rate is higher than the network capacity we are surely going to have a lot of losses.
This is a great advantage of TCP over UDP. To be safe and avoid network congestion when using UDP, we should use some sort of admission control
before accepting a new UDP flow into our network. TCP does not have this restriction. We can use admission control with TCP, but if we don't, we
know that TCP will do its best to avoid congestion and losses in the network when the incoming flows surpass the network capacity.
Continuing with our UDP protocol study, we are now going to see the system response when we change the buffer size on the link. Our buffer is
currently 20 packets. What happens if we take this value to 50 packets?
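In the tcl sketch this is a one-line change:

    $ns queue-limit $A $B 50                  ;# was 20 packets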
Here we have the UDP instant throughput:
It is really difficult to catch any difference from the 20-packet buffer we saw above. Let's see the system throughput:
Again, any difference, if there is one, is imperceptible. UDP is responding very similarly to TCP when faced with an overbuffered network.
Let's see now the latency:
Exactly the same as in the case of the 20-packet buffer. We can suspect that, despite the bigger buffer, it is not being used at all.
To confirm our suspicion let's check the queue length; here we have:
Confirmed!! The queue behavior is the same as before with the 20-packet buffer; have a look above and you can see that both
graphs are almost identical (if not completely identical). The maximum number of packets in the queue is just one, and the queue is empty most of the time. So
queue length does not contribute to latency in this case either.
We would expect the same behavior for jitter; here we have:
Again, the response coincides with the previous one. Let's check now the number of bytes transferred and the losses. In these cases we have:
KB sent: 2296.875000
Exactly the same quantity of bytes transferred, and:
Packets sent: 2352
Packets dropped: 0
Total losses: 0.000000%
Again, the same number of packets sent, with no losses. So UDP has reacted the same as TCP when the link is overbuffered but not congested.
The next step is to verify how a buffer reduction to 5 packets, instead of 20, affects UDP behavior.
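Again a one-line change in the tcl sketch:

    $ns queue-limit $A $B 5                   ;# was 20 packets

The instant throughput graph is as follows: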
Well, this graph couldn't be more similar to the previous test with the 20-packet buffer. It seems that UDP is not very concerned about buffer
size. On the contrary, TCP felt the hit; check the TCP behavior for this case here. UDP responds better when facing an underbuffered
network. Let's check now the system throughput:
No perceptible difference, as you can verify. Latency shouldn't suffer in this scenario; let's see:
Just 14 ms, as before. UDP is not very worried about the reduced buffering capacity. Let's see the jitter response:
Practically no jitter at all. We should expect this behavior: when we checked the queue that forms under UDP with the 20- and 50-packet buffers
above, we saw that it never grew above one packet. However, to be sure, let's check the queue graph for this environment:
Exactly the same response. So it does not matter how large the link buffer is; under UDP it remains almost empty most of the time. To finish
this test let's check the bytes transferred and the losses:
KB sent: 2296.875000
This answer is exactly the same. And losses:
Packets sent: 2352
Packets dropped: 0
Total losses: 0.000000%
The same great response: no losses, and all packets sent were received at the other end.
Well, UDP had a better response than TCP when the network is underbuffered. The response was practically the same with 20, 50, or 5 packets of
buffering. The main problem with the UDP protocol is that we have to know in advance how much bandwidth the traffic it carries will require.
If we can solve this problem, perhaps using some sort of admission control, then UDP is really a great contender against TCP. If we have no
way to be sure that the incoming flows can be supported by the network, it is better to go with TCP; it is safer.
Our last test for one UDP flow will be to see what happens when the link is longer, i.e., the propagation delay is higher. The next tests will be done
using a propagation delay of 100 ms instead of 10 ms, a link almost ten times longer.
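In the tcl sketch this is again an edit to the duplex-link line:

    $ns duplex-link $A $B 2Mb 100ms DropTail  ;# was 10ms

The instant throughput is: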
Okay, in this environment the time required to reach the final throughput is a little longer, as you can see. This increased delay should also affect the
time necessary to reach a stable system throughput; let's check this graph:
Yes, the curve slope is a little less steep in this case. With the longer link propagation delay it takes a little more time to reach the
steady-state condition.
With this longer link propagation delay the latency should be higher; let's have a look at this graph:
Latency, as expected, is the link propagation delay (100 ms) plus the time required to transmit the packet onto the link (4 ms):
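    transmission time = 1000 B x 8 / 2,000,000 bps = 4 ms
    expected latency  = 100 ms + 4 ms              = 104 ms

Nevertheless, jitter should be almost zero because latency is very stable. The jitter is here: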
Nice, no problems here. Of course, the long link propagation delay is not the best for real-time traffic. Let's
see now the bytes transferred:
KB sent: 2296.875000
The bytes transferred are exactly the same; no de-rating here. Next let's check losses:
Packets sent: 2352
Packets dropped: 0
Total losses: 0.000000%
Excellent!! The same number of packets sent, with no losses. The longer link propagation delay does not affect UDP traffic too much, except for the
initial delay in reaching the steady-state condition and, of course, the longer latency, something that is impossible to avoid in this case.
Compared to TCP in this test, UDP surpasses the TCP response: TCP was deeply affected when the link propagation delay was increased from 10 ms
to 100 ms, as you can see here. The problem with selecting UDP instead of TCP as a viable transport protocol is how to control losses when the network gets
congested. If we have a controlled network where admission control can be implemented, and if we can shape UDP traffic to be sure it respects
the bandwidth it initially requested, then UDP would be a very nice solution for traffic transport. But because these ideal conditions cannot yet be
implemented in the Internet, and because the main goal is to protect the health of the network against the disease of congestion, the only solution is to run
with the safer TCP.
But UDP is in fact being used to transport real-time traffic in the Internet, competing with TCP for the available network resources. It is, then,
very interesting to verify how both protocols share the same network, and how the behavior of each one affects the other. First, though,
we are going to investigate a little how two flows of the same protocol compete with each other for network resources. Our next step will be to
study how Two TCP Flows share the same network; but that is a theme for our next section.
Copyright (c) 2012 Practical IP Network QoS
