Contents
1 Network Overview ....................................................................................................................1-1
About This Chapter ............................................................................................................................................. 1-1
1.1 General mobile networks architecture and usage of IP ................................................................................. 1-1
1.2 Comparison of PS and CS networks ............................................................................................................. 1-5
1.3 Identities (E.164, E.212, FQDN, etc) ............................................................................................. 1-6
1.4 Codecs (AMR, PCM, G.711, etc) and transcoding ....................................................................................... 1-7
1.5 Overview of redundancy and multi-homing.................................................................................................. 1-8
4 Applications ..............................................................................................................................4-35
4.1 Application Layer General ....................................................................................................................... 4-35
4.2 Dynamic Host Configuration Protocol DHCP ......................................................................................... 4-35
4.3 Domain Name System DNS ...................................................................................................................... 4-37
4.4 Internet Service Provider and its Services ................................................................................................... 4-41
4.5 Address translation, NAT ............................................................................................................................ 4-42
4.6 IP-based application providing user services: email, web browsing, ftp, etc .............................................. 4-44
Issue 06 (2006-03-01)
6 IP in GPRS/UMTS/EPS ...........................................................................................................6-58
6.1 Remote access and login procedures for mobile users: ............................................................................... 6-58
6.2 PDP Context Activation .............................................................................................................................. 6-60
6.3 Default Bearer set-up .................................................................................................................................. 6-61
6.4 User profile.................................................................................................................................................. 6-62
6.5 Access Point Name APN .......................................................................................................................... 6-62
6.6 QoS for the IP connectivity of a UE ............................................................................................................ 6-63
6.7 Roaming ...................................................................................................................................................... 6-63
6.8 The GPRS Tunneling Protocol GTP......................................................................................................... 6-65
7 Quality of Service.....................................................................................................................7-67
7.1 QoS definition in HSPA: Conversational, Streaming, Interactive and background .................................... 7-67
7.2 QoS definition in GPRS/EPS: MBR, GBR, ARP, THI and QCI ................................................................. 7-70
7.3 TFTs and their usage ..................................................................................................................... 7-73
7.4 IP QoS General Network Edge .............................................................................................................. 7-76
7.5 QoS provisioning: ....................................................................................................................................... 7-92
Figures
Figure 1-1 Mobile Network Overview (R99) ..................................................................................................... 1-2
Figure 1-2 LTE/EPS Network Overview ............................................................................................................ 1-3
Figure 1-3 Internet Standardization Bodies ........................................................................................................ 1-3
Figure 1-4 OSI vs IP Stack ................................................................................................................................. 1-4
Figure 1-5 Layered Communication in TCP/IP.................................................................................................. 1-5
Figure 1-6 Identities ........................................................................................................................................... 1-7
Figure 1-7 Codecs .............................................................................................................................................. 1-8
Figure 2-1 IPv4 Addressing, RFC 791 ............................................................................................................. 2-10
Figure 2-2 IPv6 Addressing, RFC 4291 ........................................................................................................... 2-10
Figure 2-3 IPv6 Abbreviation Choices ............................................................................................................. 2-11
Figure 2-4 IPv6 Address Structure ................................................................................................................... 2-11
Figure 2-5 Global Unicast Address Format (RFC 4291) .................................................................................. 2-11
Figure 2-6 Unique Local Address Space (ULA) (RFC 4193) .......................................................................... 2-11
Figure 2-7 Link Local Address Space (RFC 4862) .......................................................................................... 2-12
Figure 2-8 Multicast Address Space (RFC 4291) ............................................................................................. 2-12
Figure 2-9 IPv6 Address Structure ................................................................................................................... 2-12
Figure 2-10 Comparison between DHCPv4 and DHCPv6 ............................................................................... 2-13
Figure 2-11 IPv6 Address Allocation by IANA ................................................................................................ 2-13
Figure 2-12 IPv6 Private Addresses, Link Local Address (RFC 4862) ............................................................ 2-14
Figure 2-13 Direct and Indirect Routing principle ........................................................................................... 2-15
Figure 2-14 Example of a network ................................................................................................................... 2-15
Figure 2-15 IPv4 Header .................................................................................................................................. 2-16
Figure 2-16 IPv6 Header .................................................................................................................................. 2-19
Figure 3-1 Layered Communication ................................................................................................................ 3-25
Figure 3-2 Internet Service Provider (ISP) ....................................................................................................... 3-26
Figure 3-3 UDP Header .................................................................................................................................... 3-27
Tables
Table 1-1 Some key differences between switch and router ............................................................................... 1-5
Table 2-1 Example of a routing table ................................................................................................................ 2-16
Table 2-2 IPv4 and IPv6 main differences ........................................................................................................ 2-19
Table 3-1 Transport Protocols Summary of functions (general) ....................................................... 3-34
Table 7-1 End to end Delay Budget .................................................................................................................. 7-77
Table 7-2 Codec Type and Sample Size Effects on Bandwidth ........................................................................ 7-77
Table 7-3 Assured Forwarding (AF) Behavior Group ...................................................................................... 7-93
Network Overview
[Figure 1-1 shows the mobile network overview (R99): the GERAN access network (MS, BTS, BSC) and the UTRAN access network (UE, NB, RNC) connect to the Core Network. The CS domain (MSC, GMSC, HLR/HSS) interfaces the PSTN/CS networks, while the PS domain (SGSN, GGSN) interfaces Packet Data Networks such as the Internet, intranets and the IMS.]
- improving efficiency
- lowering costs
- reducing complexity
The outcome of this project was a set of specifications defining the functionality and
requirements of an evolved, packet-based radio access network and a new radio access
technology. The new radio access network is referred to as the Evolved UTRAN (E-UTRAN).
In parallel to, and coordinated with, the LTE project there was also a project for the core
network. This project, called System Architecture Evolution (SAE), standardized the new
Evolved Packet Core (EPC), aiming at a higher-data-rate, lower-latency, packet-optimized
system that supports multiple RATs.
Please note that EPC is a fully IP-based core network (all-IP) supporting access not only via
GERAN, UTRAN and E-UTRAN but also via WiFi, WiMAX and wired technologies such as
xDSL.
The combination of the E-UTRAN and the EPC is referred to as the Evolved Packet System
(EPS). In daily life, however, the term LTE is often used more or less synonymously with
Evolved UTRAN, and sometimes with the entire EPS.
The figure above shows the organization chart for standardization in the Internet community.
At the top, ISOC has overall responsibility for the Internet infrastructure. Below ISOC
is the IAB, which is responsible for three other organizations: ICANN, IRTF and IETF.
ICANN is responsible for making sure that IP addresses and web addresses are globally
unique, and it oversees five other organizations, namely ARIN, RIPE, APNIC, LACNIC and
AFRINIC.
The IRTF is responsible for research and manages Research Groups, while the IETF is
responsible for implementation and manages Working Groups.
Figure 1-4 OSI vs IP Stack
The figure above compares the OSI model with the TCP/IP protocol stack.
The OSI model is a theoretical reference model, not implemented as such in real systems.
Every protocol stack refers to it, since the OSI model gives a detailed description of the
responsibility and functionality of each layer. The Application layer in the TCP/IP protocol
stack corresponds to the top three layers of the OSI model, while the bottom two layers of the
OSI model correspond to the Network Access layer of the TCP/IP protocol stack.
In layered communication, two peer protocols exchange information logically, while the
packet physically passes down and up the stack. Each layer adds its own header to the
packet, and that header is opened and processed by the same layer on the corresponding
other side.
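As a toy illustration of this principle (the header strings are invented, not real protocol headers), each layer prepends its header on the way down, and the peer layer strips it again on the way up:

```python
# Hypothetical sketch of layered encapsulation: each layer prepends its own
# header on the way down; the peer layer strips it on the way up.
def encapsulate(payload: bytes) -> bytes:
    for header in (b"TCP|", b"IP|", b"ETH|"):    # transport, network, link
        payload = header + payload               # each layer adds its header
    return payload

def decapsulate(frame: bytes) -> bytes:
    for header in (b"ETH|", b"IP|", b"TCP|"):    # peers strip in reverse order
        assert frame.startswith(header)
        frame = frame[len(header):]
    return frame

frame = encapsulate(b"hello")
print(frame)                  # b'ETH|IP|TCP|hello'
print(decapsulate(frame))     # b'hello'
```

The logical exchange is between peer layers (TCP talks to TCP), while the physical path runs through every layer on both sides.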
                    Switch             Router
Address             Telephone Number   IP Address
Address Used in     Set-up             In each Packet
At Congestion       No Connection      Delay
Delay               Constant           Variable
Transport Network   Stateful           Stateless
Cost                Expensive          Less Expensive
Address Info: The address used by a switch is a telephone number (E.164 address),
while a router uses the IP address. The difference is that E.164 addresses are globally
unique and never change, while IP addresses change and may be private addresses.
Address Used in: In a switch, the address is basically used only during the set-up phase,
when the switches need to create circuits. After the set-up phase, all user-data traffic
passes through the circuit created during set-up, so the address is no longer needed.
Every IP packet, by contrast, carries IP addresses in its header: the sender's IP address
and the destination's IP address. A router must read the destination IP address of every
passing packet in order to route the packet to the right destination (see the section on
routing for more information). As a consequence, the router needs to read the IP address
of every packet it processes.
Delay: As a result of the characteristics described above, the delay in a switch is constant
(and low), while the delay in a router is variable, depending on how congested the router
is at that moment.
Transport Network: Since a switch sets up circuits and the user-plane traffic is
sent over the reserved circuit, a switch is considered stateful (it knows the state). A
router, however, looks up the destination IP address in its routing table for every
passing packet and decides an output for that packet at that point in time. This
characteristic is called stateless (it does not keep the state). Packets of the same flow
can therefore take different paths.
Cost (Price per MB): Cost refers to the cost/price per megabyte. Since a switch reserves a
circuit, that capacity stays reserved even when the users are not sending anything in the
user plane. For example, in a normal telephone conversation a full-duplex circuit is
reserved even though only part of the capacity is used, because one user talks while the
other listens and vice versa; statistics show that only around 35% of the capacity is used
in an average phone conversation. In a PS domain, with a router as the network element,
the codecs generate many IP packets when a user talks, but basically only small refresh
packets when a user just listens. As a result, a router makes better use of the overall
resources and gives a lower cost per megabyte.
1. IMSI (International Mobile Subscriber Identity) is an identity used within the network
(Core Network), following the E.212 numbering plan. The IMSI is 14 or 15 digits long
and resides on the (U)SIM card. For security reasons the network elements replace the
IMSI with a temporary number, the TMSI (Temporary Mobile Subscriber Identity) or,
in the PS domain, the P-TMSI (Packet TMSI).
2. MSISDN (Mobile Station ISDN Number) is an identity used primarily outside the mobile
network, for interconnection with legacy networks such as the PSTN. The MSISDN
follows the E.164 numbering plan, which is also used in the PSTN.
3. FQDN (Fully Qualified Domain Name) is an identity used in PS networks that points
out a machine or a domain, e.g. www.company.com. An FQDN is built up of a Top Level
Domain (com), a domain name (company) and a host or service name (www), separated
by dots.
4. The IP address is the identity of the device at the IP layer; it will be examined in detail in
the next chapter. The end-user device may use an IPv4 or an IPv6 address.
G.711 codec: G.711 is a voice codec; it defines how the 64 kbit/s output of PCM coding is
carried in a PCM channel with a capacity of exactly 64 kbit/s.
Codecs in the IP stack: Many codecs are used in commercial applications, and most of
them are company-specific, for example proprietary formats such as PDF, used by Adobe
products such as Adobe Acrobat Reader.
Figure 1-7 shows where some of the codecs are used in a mobile network.
Transcoding:
Transcoding is the operation of changing from one codec to another. In GSM, for example,
it is needed because one side uses a PCM codec producing 64 kbit/s, while the other side
uses the FR/HR/EFR codecs of the GSM RAN (Radio Access Network), which have other,
lower bit rates (FR 13, HR 5.6 and EFR 12.2 kbit/s). The transcoder in GSM was
traditionally placed in the BTS (Base Transceiver Station), later moved to the BSC (Base
Station Controller), and is nowadays placed in the MSC (Mobile Switching Center).
According to the specifications the transcoder is part of the GSM BSS (Base Station
System), but the GSM BSS in practice ends up in the MSC. In UMTS the transcoding is
done by the MSC, a network element belonging to the CS Core.
Figure 1-7 Codecs
Multi-homing variants:
Single Link, Multiple IP Addresses (spaces)
The host has multiple IP addresses (e.g. 2001:db8::1 and 2001:db8::2 in IPv6), but
only one physical upstream link. When the single link fails, connectivity is lost for all
addresses.
Internet Protocol, IP
2.1.2 IPv6:
Figure 2-2 IPv6 Addressing, RFC 4291
Abbreviation choices:
DHCPv6 gives the full 128-bit address to the client. It is not integrated with
DHCPv4 and uses different message types. The following configurations can be
used:
1. Auto allocation (A-DHCP) - a permanent IP address is assigned to the client
2. Manual allocation (M-DHCP) - a fixed IP address is assigned, based on the
client's MAC address
3. Dynamic allocation (D-DHCP) - an address is leased for a limited time
Figure 2-10 Comparison between DHCPv4 and DHCPv6
Combination
The address is autoconfigured by the device (stateless), while other parameters,
e.g. DNS and NTP (Network Time Protocol) servers, are obtained from
DHCPv6.
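The stateless half of this combination can be sketched in Python: a device forms an interface identifier from its MAC address using the modified EUI-64 rule (flip the universal/local bit, insert ff:fe in the middle) and appends it to the advertised /64 prefix. The prefix and MAC address below are illustrative only (2001:db8::/64 is the documentation prefix from RFC 3849).

```python
import ipaddress

# Modified EUI-64: flip the universal/local bit of the first MAC byte,
# then insert ff:fe between the two MAC halves.
def eui64_interface_id(mac: str) -> bytes:
    b = bytearray(bytes.fromhex(mac.replace(":", "")))
    b[0] ^= 0x02                                   # flip the universal/local bit
    return bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])

prefix = ipaddress.ip_network("2001:db8::/64")     # example prefix (RFC 3849)
iid = eui64_interface_id("00:1a:2b:3c:4d:5e")      # made-up MAC address
addr = ipaddress.ip_address(int(prefix.network_address) | int.from_bytes(iid, "big"))
print(addr)   # 2001:db8::21a:2bff:fe3c:4d5e
```

Note that many modern hosts instead use randomized (privacy) interface identifiers; EUI-64 is simply the classic stateless construction.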
10.0.0.0 - 10.255.255.255
172.16.0.0 - 172.31.255.255
192.168.0.0 - 192.168.255.255
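These three ranges are the RFC 1918 private blocks (10/8, 172.16/12, 192.168/16). As a small sketch, membership can be checked with Python's standard ipaddress module:

```python
import ipaddress

# The RFC 1918 private IPv4 ranges listed above, as CIDR networks.
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_NETS)

print(is_rfc1918("172.31.255.255"))  # True  (top of the 172.16/12 block)
print(is_rfc1918("172.32.0.1"))      # False (just outside it)
```

ipaddress also exposes an `is_private` property that covers these blocks among others; the explicit list above makes the mapping to the table visible.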
The Link Local Address Space is defined to be used as private IPv6 addressing. These
addresses are used only on a particular link, e.g. Ethernet, and are not routable.
The structure of the private IPv6 addresses is shown in Figure 2-12.
Figure 2-12 IPv6 Private Addresses, Link Local Address (RFC 4862)
Direct Routing:
Mapping of the IP address to the physical address, using Address Resolution Protocol,
ARP
Indirect Routing:
Forwarding of the packets by using the routing table
Let's have a look at the routing table stored in the router at the interconnection of the three
networks in the picture above, marked in red. The routing table may look as shown in Table
2-1:
Shown above is a possible configuration stored in the routing table of the router marked in
red. This router uses the routing table in the case of indirect routing to find the next
hop.
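The next-hop lookup can be sketched in Python: the router collects all routing-table entries whose network contains the destination and picks the most specific one (longest prefix match). The table entries below are illustrative only, using the RFC 5737 documentation prefixes.

```python
import ipaddress

# Illustrative routing table: (network, next hop). Not taken from Table 2-1.
ROUTING_TABLE = [
    ("0.0.0.0/0",       "192.0.2.1"),   # default route
    ("198.51.100.0/24", "direct"),      # directly connected network
    ("203.0.113.0/24",  "192.0.2.2"),   # reachable via next hop
]

def next_hop(dest: str) -> str:
    ip = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(net), hop)
               for net, hop in ROUTING_TABLE
               if ip in ipaddress.ip_network(net)]
    # The most specific (longest-prefix) matching entry wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("203.0.113.7"))   # 192.0.2.2
print(next_hop("8.8.8.8"))       # 192.0.2.1 (falls back to the default route)
```

Real routers implement this lookup with specialized data structures (tries, TCAM), but the rule is the same: longest matching prefix first.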
Version
The first header field in an IP packet is the four-bit version field. For IPv4, this has a
value of 4 (hence the name IPv4).
The second field (4 bits) is the Internet Header Length (IHL), which is the number of
32-bit words in the header. Since an IPv4 header may contain a variable number of
options, this field specifies the size of the header (this also coincides with the offset to
the data). The minimum value for this field is 5 (RFC 791), which gives a length of
5 × 32 = 160 bits = 20 bytes. Being a 4-bit value, the maximum length is 15 words
(15 × 32 = 480 bits) = 60 bytes.
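The Version/IHL arithmetic above can be checked with a small sketch that splits the first header byte into its two 4-bit fields:

```python
# Parse the Version and IHL fields from the first byte of an IPv4 header.
def version_and_ihl(first_byte: int) -> tuple:
    version = first_byte >> 4          # high 4 bits
    ihl_words = first_byte & 0x0F      # low 4 bits, counted in 32-bit words
    return version, ihl_words * 4      # header length in bytes

# 0x45 is the typical first byte: version 4, IHL 5 words = 20 bytes.
print(version_and_ihl(0x45))  # (4, 20)
print(version_and_ihl(0x4F))  # (4, 60)  maximum header with full options
```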
Differentiated Services Code Point (DSCP)
Originally defined as the Type of Service field, this field is now defined by RFC 2474
for Differentiated Services (DiffServ). New technologies that require real-time data
streaming make use of the DSCP field. An example is Voice over IP (VoIP), which is
used for interactive voice exchange.
Explicit Congestion Notification (ECN)
This field is defined in RFC 3168 and allows end-to-end notification of network
congestion without dropping packets. ECN is an optional feature that is only used
when both endpoints support it and are willing to use it, and it is only effective when
supported by the underlying network.
Total Length
This 16-bit field defines the entire packet (fragment) size, including header and data,
in bytes. The minimum-length packet is 20 bytes (20-byte header + 0 bytes of data)
and the maximum is 65,535 bytes, the maximum value of a 16-bit word. The largest
datagram that any host is required to be able to reassemble is 576 bytes, but most
modern hosts handle much larger packets. Sometimes subnetworks impose further
restrictions on the packet size, in which case datagrams must be fragmented.
In IPv4, fragmentation is handled in either the host or the router.
Identification
This field is primarily used for uniquely identifying the fragments of an original IP
datagram. Some experimental work has suggested using the ID field for other
purposes, such as adding packet-tracing information to help trace datagrams with
spoofed source addresses.
Flags
A three-bit field follows, used to control or identify fragments. The flags are (in
order, from high order to low order): Reserved, DF (Don't Fragment) and MF (More
Fragments).
If the DF flag is set, and fragmentation is required to route the packet, the packet
is dropped. This can be used when sending packets to a host that does not have
sufficient resources to handle fragmentation. It can also be used for Path MTU
Discovery, either automatically by the host IP software or manually using diagnostic
tools such as ping or traceroute.
For unfragmented packets, the MF flag is cleared. For fragmented packets, all
fragments except the last have the MF flag set. The last fragment has a non-zero
Fragment Offset field, differentiating it from an unfragmented packet.
Fragment Offset
The fragment offset field, measured in units of eight-byte blocks, is 13 bits long and
specifies the offset of a particular fragment relative to the beginning of the original
unfragmented IP datagram. The first fragment has an offset of zero. This allows a
maximum offset of (2^13 - 1) × 8 = 65,528 bytes, which would exceed the maximum
IP packet length of 65,535 bytes once the header length is included (65,528 + 20 =
65,548 bytes).
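The offset arithmetic above can be sketched in one line: the field counts 8-byte blocks, so the byte position of a fragment within the original datagram is the field value times 8.

```python
# The Fragment Offset field counts 8-byte blocks; the byte offset of a fragment
# within the original datagram is the field value times 8.
def fragment_byte_offset(field_value: int) -> int:
    return field_value * 8

print(fragment_byte_offset(0))          # 0      (first fragment)
print(fragment_byte_offset(2**13 - 1))  # 65528  (maximum expressible offset)
```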
Time to Live (TTL)
An eight-bit time to live field helps prevent datagrams from persisting (e.g. going in
circles) on an internet. This field limits a datagram's lifetime. It is specified in
seconds, but time intervals of less than 1 second are rounded up to 1. In practice, the
field has become a hop count: when the datagram arrives at a router, the router
decrements the TTL field by one. When the TTL field hits zero, the router discards
the packet and typically sends an ICMP Time Exceeded message to the sender.
The program traceroute uses these ICMP Time Exceeded messages to print the
routers used by packets to go from the source to the destination.
Protocol
This field defines the protocol used in the data portion of the IP datagram. The
Internet Assigned Numbers Authority maintains a list of IP protocol numbers, which
was originally defined in RFC 790.
Header Checksum
The 16-bit checksum field is used for error-checking of the header. When a packet
arrives at a router, the router calculates the checksum of the header and compares it to
the checksum field. If the values do not match, the router discards the packet. Errors
in the data field must be handled by the encapsulated protocol; both UDP and TCP
have checksum fields.
When a packet arrives at a router, the router decreases the TTL field and must
consequently calculate a new checksum. RFC 1071 defines the checksum
calculation:
The checksum field is the 16-bit one's complement of the one's complement sum of
all 16-bit words in the header. For purposes of computing the checksum, the value of
the checksum field is zero.
Source address
This field is the IPv4 address of the sender of the packet. Note that this address may
be changed in transit by a network address translation device.
Destination address
This field is the IPv4 address of the receiver of the packet. As with the source
address, it may be changed in transit by a network address translation device.
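The RFC 1071 rule quoted above can be sketched in Python. The sample header bytes are a 20-byte header with the checksum field (bytes 10-11) zeroed before computation:

```python
# RFC 1071 header checksum: one's-complement sum of all 16-bit words,
# with the checksum field itself taken as zero during computation.
def ipv4_checksum(header: bytes) -> int:
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]  # 16-bit big-endian words
    while total >> 16:                             # fold carries back into the sum
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF                         # one's complement of the sum

# Example 20-byte header with the checksum field (bytes 10-11) zeroed.
header = bytes.fromhex("4500003c1c46400040060000ac100a63ac100a0c")
print(hex(ipv4_checksum(header)))  # 0xb1e6
```

A receiver can verify a header by running the same sum over all 20 bytes including the stored checksum; a valid header yields 0.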
Options
The options field is not often used. Note that the value in the IHL field must include
enough extra 32-bit words to hold all the options (plus any padding needed to ensure
that the header contains an integral number of 32-bit words). The list of options may
be terminated with an EOL (End of Options List, 0x00) option; this is only necessary
if the end of the options would not otherwise coincide with the end of the header.
Table 2-2 summarizes the main differences between IPv4 and IPv6.
Table 2-2 IPv4 and IPv6 main differences
maintains separate link state databases for each area it serves and maintains summarized
routes for all areas in the network.
OSPF does not use a TCP/IP transport protocol (UDP, TCP), but is encapsulated directly in IP
datagrams with protocol number 89. This is in contrast to other routing protocols, such as the
Routing Information Protocol (RIP), or the Border Gateway Protocol (BGP). OSPF handles
its own error detection and correction functions.
OSPF uses multicast addressing for route flooding on a broadcast domain. For non-broadcast
networks, special configuration provisions facilitate neighbor discovery. OSPF multicast IP
packets never traverse IP routers (they never leave the broadcast domain); they never travel
more than one hop. OSPF reserves the multicast addresses 224.0.0.5 for IPv4 and FF02::5 for
IPv6 (all SPF/link-state routers, also known as AllSPFRouters) and 224.0.0.6 for IPv4 and
FF02::6 for IPv6 (all Designated Routers, AllDRouters), as specified in RFC 2328 and
RFC 5340.
For routing multicast IP traffic, OSPF supports the Multicast Open Shortest Path First
protocol (MOSPF) as defined in RFC 1584. PIM (Protocol Independent Multicast) in
conjunction with OSPF or other IGPs (Interior Gateway Protocols) is widely deployed.
The OSPF protocol, when running on IPv4, can operate securely between routers, optionally
using a variety of authentication methods to allow only trusted routers to participate in
routing. OSPFv3, running on IPv6, no longer supports protocol-internal authentication.
Instead, it relies on IPv6 protocol security (IPsec).
OSPF version 3 introduces modifications to the IPv4 implementation of the protocol. Except
for virtual links, all neighbor exchanges use IPv6 link-local addressing exclusively. The IPv6
protocol runs per link, rather than based on the subnet. All IP prefix information has been
removed from the link-state advertisements and from the Hello discovery packet, making
OSPFv3 essentially protocol-independent. Despite the expansion of IP addressing to 128 bits
in IPv6, area and router identifications are still based on 32-bit values.
directly, BGP is one of the most important protocols of the Internet. Compare this with
Signaling System 7 (SS7), which is the inter-provider core call setup protocol on the PSTN.
Very large private IP networks use BGP internally. An example would be the joining of a
number of large OSPF (Open Shortest Path First) networks where OSPF by itself would not
scale to size. Another reason to use BGP is multihoming a network for better redundancy,
either to multiple access points of a single ISP (RFC 1998), or to multiple ISPs.
- RIP [RFC 2453]: slow convergence; sends the whole routing table every 30 seconds; suited for small networks
- OSPF [RFC 2328]: uses multicasting (224.0.0.5); sends only the changes; fast convergence; suited for large networks
3.2
port number that network clients connect to for service initiation, after which communication
can be reestablished on other connection-specific port numbers.
The server has a well-known port open for the service it offers (1-1023).
interface level. Time-sensitive applications often use UDP because dropping packets is
preferable to waiting for delayed packets, which may not be an option in a real-time system.
Figure 3-3 shows the UDP header.
Figure 3-3 UDP Header
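As a sketch, the four 16-bit fields of the UDP header (source port, destination port, length, checksum) can be unpacked with Python's struct module. The sample values below are made up (a DNS query from an ephemeral port with a 12-byte payload):

```python
import struct

# The 8-byte UDP header is four 16-bit big-endian fields:
# source port, destination port, length (header + payload), checksum.
def parse_udp_header(data: bytes) -> dict:
    sport, dport, length, checksum = struct.unpack("!HHHH", data[:8])
    return {"src_port": sport, "dst_port": dport,
            "length": length, "checksum": checksum}

# Made-up header: DNS query from an ephemeral port, 12-byte payload.
hdr = struct.pack("!HHHH", 53124, 53, 8 + 12, 0)
print(parse_udp_header(hdr))
# {'src_port': 53124, 'dst_port': 53, 'length': 20, 'checksum': 0}
```

Note that the length field covers the header plus payload, so its minimum value is 8, and a checksum of 0 means "no checksum computed" in UDP over IPv4.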
If the SYN flag is set (1), then this is the initial sequence number. The sequence number
of the actual first data byte, and the acknowledgment number in the corresponding ACK,
are then this sequence number plus 1.
If the SYN flag is clear (0), then this is the accumulated sequence number of the first data
byte of this segment for the current session.
Acknowledgment number (32 bits) if the ACK flag is set then the value of this field is
the next sequence number that the receiver is expecting. This acknowledges receipt of all
prior bytes (if any). The first ACK sent by each end acknowledges the other end's initial
sequence number itself, but no data.
Data offset (4 bits) specifies the size of the TCP header in 32-bit words. The minimum
size header is 5 words and the maximum is 15 words thus giving the minimum size of 20
bytes and maximum of 60 bytes, allowing for up to 40 bytes of options in the header.
This field gets its name from the fact that it is also the offset from the start of the TCP
segment to the actual data.
CWR (1 bit) Congestion Window Reduced. This flag is set by the sending host to
indicate that it received a TCP segment with the ECE flag set and has responded with the
congestion control mechanism (added to the header by RFC 3168).
ECE (1 bit) ECN-Echo. If the SYN flag is set (1), it indicates that the TCP peer is ECN
capable. If the SYN flag is clear (0), it indicates that a packet with the Congestion
Experienced flag set in the IP header was received during normal transmission (added to
the header by RFC 3168).
ACK (1 bit) indicates that the Acknowledgment field is significant. All packets after the
initial SYN packet sent by the client should have this flag set.
PSH (1 bit) Push function. Asks to push the buffered data to the receiving application.
SYN (1 bit) Synchronize sequence numbers. Only the first packet sent from each end
should have this flag set. Some other flags change meaning based on this flag, and some
are only valid for when it is set, and others when it is clear.
Window size (16 bits) the size of the receive window, which specifies the number of
bytes (beyond the sequence number in the acknowledgment field) that the sender of this
segment is currently willing to receive (see Flow control and Window Scaling)
Checksum (16 bits) The 16-bit checksum field is used for error-checking of the header
and data
Urgent pointer (16 bits) if the URG flag is set, then this 16-bit field is an offset from the
sequence number indicating the last urgent data byte
Options (Variable 0-320 bits, divisible by 32) The length of this field is determined by
the data offset field. Options have up to three fields: Option-Kind (1 byte),
Option-Length (1 byte), Option-Data (variable). The Option-Kind field indicates the type
of option, and is the only field that is not optional. Depending on what kind of option we
are dealing with, the next two fields may be set: the Option-Length field indicates the
total length of the option, and the Option-Data field contains the value of the option, if
applicable. For example, an Option-Kind byte of 0x01 indicates that this is a No-Op
option used only for padding, and does not have an Option-Length or Option-Data byte
following it. An Option-Kind byte of 0 is the End Of Options option, and is also only one
byte. An Option-Kind byte of 0x02 indicates that this is the Maximum Segment Size
option, and will be followed by a byte specifying the length of the MSS field (should be
0x04). Note that this length is the total length of the given options field, including
Option-Kind and Option-Length bytes. So while the MSS value is typically expressed in
two bytes, the length of the field will be 4 bytes (+2 bytes of kind and length). In short,
an MSS option field with a value of 0x05B4 will show up as (0x02 0x04 0x05B4) in the
TCP options section.
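The Option-Kind/Option-Length/Option-Data layout described above can be sketched as a small parser. The loop below is an illustrative sketch of walking the options bytes, not part of a complete TCP implementation:

```python
def parse_tcp_options(data: bytes) -> list:
    """Parse the TCP options field into (kind, value) tuples."""
    options = []
    i = 0
    while i < len(data):
        kind = data[i]
        if kind == 0:            # End of Options: one byte, stop parsing
            options.append((0, b""))
            break
        if kind == 1:            # No-Op: one byte of padding, no length/data
            options.append((1, b""))
            i += 1
            continue
        length = data[i + 1]     # total length, incl. kind and length bytes
        value = data[i + 2:i + length]
        options.append((kind, value))
        i += length
    return options

# The MSS example from the text: kind 0x02, length 0x04, value 0x05B4 (1460)
opts = parse_tcp_options(bytes([0x02, 0x04, 0x05, 0xB4, 0x01, 0x00]))
```

Running this on the example bytes yields the MSS option (value 1460), one No-Op pad and the End Of Options marker.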
Padding The TCP header padding is used to ensure that the TCP header ends and data
begins on a 32 bit boundary. The padding is composed of zeros.
Figure 3-6 shows a general TCP traffic case and the usage of flags in the header.
In SCTP a 4-way handshake is used during the set-up phase and a 3-way handshake is
used for the termination of an SCTP association. Figure 3-8 shows an example of an SCTP
traffic case; Set-up (Initiation), Data Transfer and Termination are shown.
Figure 3-8 SCTP Traffic Case
In SCTP, chunk types are defined for different purposes. Figure 3-9 shows the
definition of the chunk types.
3.6 Functionalities and Comparison of Transport Protocols
Table 3-1 summarizes some main protocol functionalities and differences of the
Transport Layer protocols.
4 Applications
route and one or more DNS server addresses from a DHCP server. The DHCP client then uses
this information to configure its host. Once the configuration process is complete, the host is
able to communicate on the Internet.
The DHCP server maintains a database of available IP addresses and configuration
information. When it receives a request from a client, the DHCP server determines the
network to which the DHCP client is connected, and then allocates an IP address or prefix that
is appropriate for the client, and sends configuration information appropriate for that client.
Because the DHCP protocol must work correctly even before DHCP clients have been
configured, the DHCP server and DHCP client must be connected to the same network link.
In larger networks, this is not practical. On such networks, each network link contains one or
more DHCP relay agents. These DHCP relay agents receive messages from DHCP clients and
forward them to DHCP servers. DHCP servers send responses back to the relay agent, and the
relay agent then sends these responses to the DHCP client on the local network link.
DHCP servers typically grant IP addresses to clients only for a limited interval. DHCP clients
are responsible for renewing their IP address before that interval has expired, and must stop
using the address once the interval has expired, if they have not been able to renew it.
DHCP is used for IPv4 and IPv6. While both versions serve much the same purpose, the
details of the protocol for IPv4 and IPv6 are sufficiently different that they may be considered
separate protocols.
Hosts that do not use DHCP for address configuration may still use it to obtain other
configuration information. Alternatively, IPv6 hosts may use stateless address
autoconfiguration. IPv4 hosts may use link-local addressing to achieve limited local
connectivity.
Figure 4-2 shows a traffic case of DHCP.
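The allocation, renewal and expiry behaviour described above can be sketched in a few lines. The class below is a minimal illustration of a lease database, not a real DHCP server; the addresses, client identifier and lease interval are invented for the example:

```python
import time

class DhcpPool:
    """Minimal sketch of a DHCP server's lease database (illustrative only)."""
    def __init__(self, addresses, lease_seconds=3600):
        self.free = list(addresses)          # available IP addresses
        self.leases = {}                     # client_id -> (ip, expiry time)
        self.lease_seconds = lease_seconds

    def request(self, client_id, now=None):
        """Allocate (or renew) an address for a client."""
        now = time.time() if now is None else now
        if client_id in self.leases:         # renewal: extend the interval
            ip, _ = self.leases[client_id]
        else:
            ip = self.free.pop(0)            # allocate from the pool
        self.leases[client_id] = (ip, now + self.lease_seconds)
        return ip

    def expire(self, now=None):
        """Reclaim addresses whose lease interval has passed."""
        now = time.time() if now is None else now
        for cid, (ip, expiry) in list(self.leases.items()):
            if expiry <= now:
                del self.leases[cid]
                self.free.append(ip)

pool = DhcpPool(["192.0.2.10", "192.0.2.11"], lease_seconds=60)
ip = pool.request("aa:bb:cc:dd:ee:ff", now=0)      # -> "192.0.2.10"
```

A client that renews before the interval elapses keeps the same address; an expired lease returns the address to the pool, as described above.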
The Domain Name System also specifies the technical functionality of this database service.
It defines the DNS protocol, a detailed specification of the data structures and communication
exchanges used in DNS, as part of the Internet Protocol Suite.
The Internet maintains two principal namespaces, the domain name hierarchy[1] and the
Internet Protocol (IP) address spaces.[2] The Domain Name System maintains the domain
name hierarchy and provides translation services between it and the address spaces. Internet
name servers and a communication protocol implement the Domain Name System.[3] A DNS
name server is a server that stores the DNS records for a domain name, such as address (A)
records, name server (NS) records, and mail exchanger (MX) records (see also list of DNS
record types); a DNS name server responds with answers to queries against its database.
Figure 4-3 shows a traffic case of DNS as it is defined in RFC 1034 and RFC 1035. The client
tries to resolve the address of www.apis.se.
Figure 4-3 Domain Name System, RFC 1034, RFC 1035
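The query the client sends in Figure 4-3 is a small binary message whose layout is defined in RFC 1035. The sketch below builds such an A-record query for www.apis.se using only the standard library; the transaction ID is an arbitrary example value:

```python
import struct

def build_dns_query(name: str, txid: int = 0x1234) -> bytes:
    """Build a DNS A-record query message per RFC 1035 (a minimal sketch)."""
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed with its length, terminated by a zero byte
    qname = b"".join(bytes([len(l)]) + l.encode("ascii")
                     for l in name.split("."))
    qname += b"\x00"
    question = qname + struct.pack("!HH", 1, 1)   # QTYPE=A, QCLASS=IN
    return header + question

query = build_dns_query("www.apis.se")
```

Sending these 29 bytes over UDP to port 53 of a resolver would produce a response carrying the A record, following the exchange shown in the figure.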
Root DNS:
A root name server is a name server for the Domain Name System's root zone. It directly
answers requests for records in the root zone and answers other requests returning a list of the
designated authoritative name servers for the appropriate top-level domain (TLD). The root
name servers are a critical part of the Internet because they are the first step in translating
(resolving) human readable host names into IP addresses that are used in communication
between Internet hosts.
A combination of limits in the DNS and certain protocols, namely the practical size of
unfragmented User Datagram Protocol (UDP) packets, resulted in a limited number of root
server addresses that can be accommodated in DNS name query responses. This limit has
determined the number of name server installations at (currently) 13 clusters, serving the
needs of the entire public Internet worldwide.
Figure 4-4 shows Top Level Domains (TLDs) and different kinds of TLDs.
Below is a list of all Country Code Top Level Domains (ccTLDs), Figure 4-5.
In the mid-1990s NAT became a popular tool for alleviating the consequences of IPv4 address
exhaustion. It has become a common, indispensable feature in routers for home and
small-office Internet connections. Most systems using NAT do so in order to enable multiple
hosts on a private network to access the Internet using a single public IP address.
Network address translation has serious drawbacks in terms of the quality of Internet
connectivity and requires careful attention to the details of its implementation. In particular,
all types of NAT break the originally envisioned model of IP end-to-end connectivity across
the Internet and NAPT makes it difficult for systems behind a NAT to accept incoming
communications. As a result, NAT traversal methods have been devised to alleviate the issues
encountered.
Figure 4-9 shows two private networks behind a NAT. The NAT has a public IP address
towards the public network.
Figure 4-9 Network Address Translation NAT
Figure 4-10 shows the address translation and port generation done by the NAT and what
the packet looks like after translation. IP addresses and port numbers are shown.
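The translation and port generation shown in Figure 4-10 can be sketched as a pair of mapping tables. The class below is an illustrative NAPT sketch (the addresses and port range are invented); note how an inbound packet with no existing mapping cannot be delivered, which is exactly the incoming-communication problem mentioned above:

```python
import itertools

class Napt:
    """Sketch of NAPT: map (private IP, private port) to public-side ports."""
    def __init__(self, public_ip, port_base=40000):
        self.public_ip = public_ip
        self.next_port = itertools.count(port_base)   # public port generator
        self.out = {}   # (priv_ip, priv_port) -> public port
        self.back = {}  # public port -> (priv_ip, priv_port)

    def translate_out(self, priv_ip, priv_port):
        """Outgoing packet: rewrite source to (public_ip, allocated port)."""
        key = (priv_ip, priv_port)
        if key not in self.out:
            port = next(self.next_port)
            self.out[key] = port
            self.back[port] = key
        return self.public_ip, self.out[key]

    def translate_in(self, public_port):
        """Incoming packet: look up the private destination, if any."""
        return self.back.get(public_port)   # None if no mapping (dropped)

nat = Napt("203.0.113.1")
src = nat.translate_out("10.0.0.5", 5060)   # -> ("203.0.113.1", 40000)
```

Two private hosts using the same source port receive distinct public ports, which is how one public address serves many hosts.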
(IETF) rules define behavior for such applications. The voice and video stream
communications in SIP applications are carried over another application protocol, the
Real-time Transport Protocol (RTP). Parameters (port numbers, protocols, codecs) for these
media streams are defined and negotiated using the Session Description Protocol (SDP) which
is transported in the SIP packet body.
A motivating goal for SIP was to provide a signaling and call setup protocol for IP-based
communications that can support a superset of the call processing functions and features
present in the public switched telephone network (PSTN). SIP by itself does not define these
features; rather, its focus is call-setup and signaling. The features that permit familiar
telephone-like operations (dialing a number, causing a phone to ring, hearing ringback tones
or a busy signal) are performed by proxy servers and user agents. Implementation and
terminology are different in the SIP world, but to the end-user the behavior is similar.
SIP-enabled telephony networks can also implement many of the more advanced call
processing features present in Signaling System 7 (SS7), though the two protocols themselves
are very different. SS7 is a centralized protocol, characterized by a complex central network
architecture and dumb endpoints (traditional telephone handsets). SIP is a peer-to-peer
protocol, thus it requires only a simple (and thus scalable) core network with intelligence
distributed to the network edge, embedded in endpoints (terminating devices built in either
hardware or software). SIP features are implemented in the communicating endpoints (i.e. at
the edge of the network) contrary to traditional SS7 features, which are implemented in the
network.
Although several other VoIP signaling protocols exist (such as BICC, H.323, MGCP,
MEGACO), SIP is distinguished by its proponents for having roots in the IP community
rather than the telecommunications industry. SIP has been standardized and governed
primarily by the IETF, while other protocols, such as H.323, have traditionally been
associated with the International Telecommunication Union (ITU).
The first proposed standard version (SIP 1.0) was defined by RFC 2543. This version of the
protocol was further refined to version 2.0 and clarified in RFC 3261, although some
implementations are still relying on the older definitions.
User Agent
A SIP user agent (UA) is a logical network end-point used to create or receive SIP messages
and thereby manage a SIP session. A SIP UA can perform the role of a User Agent Client
(UAC), which sends SIP requests, and the User Agent Server (UAS), which receives the
requests and returns a SIP response. These roles of UAC and UAS only last for the duration of
a SIP transaction.
A SIP phone is a SIP user agent that provides the traditional call functions of a telephone,
such as dial, answer, reject, hold/unhold, and call transfer. SIP phones may be implemented as
a hardware device or as a softphone. As vendors increasingly implement SIP as a standard
telephony platform, often driven by 4G efforts, the distinction between hardware-based and
software-based SIP phones is being blurred and SIP elements are implemented in the basic
firmware functions of many IP-capable devices. Examples are devices from Nokia and
Research in Motion.
In SIP, as in HTTP, the user agent may identify itself using a message header field
'User-Agent', containing a text description of the software/hardware/product involved. The
User-Agent field is sent in request messages, which means that the receiving SIP server can
see this information. SIP network elements sometimes store this information, and it can be
useful in diagnosing SIP compatibility problems.
Proxy server
An intermediary entity that acts as both a server (UAS) and a client (UAC) for the purpose of
making requests on behalf of other clients. A proxy server primarily plays the role of routing,
which means its job is to ensure that a request is sent to another entity "closer" to the targeted
user. Proxies are also useful for enforcing policy (for example, making sure a user is allowed
to make a call). A proxy interprets, and, if necessary, rewrites specific parts of a request
message before forwarding it.
Registrar
A server that accepts REGISTER requests and places the information it receives in those
requests into the location service for the domain it handles. Registration binds one or more
IP addresses to a certain SIP URI, indicated by the sip: scheme, although other protocol
schemes are possible (such as tel:). More than one user agent can register at the same URI,
with the result that all registered user agents will receive a call to the SIP URI.
SIP registrars are logical elements, and are commonly co-located with SIP proxies. But it is
also possible and often good for network scalability to place this location service with a
redirect server.
Redirect server
A user agent server that generates 3xx (Redirection) responses to requests it receives,
directing the client to contact an alternate set of URIs. The redirect server allows proxy
servers to direct SIP session invitations to external domains.
Session border controller
Session border controllers serve as middle boxes between UAs and SIP servers for various
types of functions, including network topology hiding and assistance in NAT traversal.
Gateway
Gateways can be used to interface a SIP network to other networks, such as the public
switched telephone network, which use different protocols or technologies.
Figure 5-1 shows the SIP domain architecture and its network elements.
In the traffic case below, a SIP REGISTER and INVITE are shown, looking only at the
SIP methods; later in this chapter the headers are examined, including SDP.
Figure 5-2 SIP Traffic Case
REGISTER: Used by a UA to indicate its current IP address and the URIs for which it
would like to receive calls.
Below is a list of SIP methods as they are defined in RFC 3261, and also a list of SIP
methods defined for use in IMS by 3GPP, Figure 5-4.
SIP Headers
There are 6 mandatory headers defined in RFC 3261. SIP can be extended with new headers.
The format of a header is the header name followed by a colon and header-specific
information, e.g. Contact: <sip:192.154.78.23>
Below is a list of the mandatory SIP headers:
Figure 5-5 SIP headers
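As an illustration of the header format above, the sketch below composes a minimal REGISTER request carrying the six mandatory RFC 3261 headers (Via, Max-Forwards, To, From, Call-ID, CSeq). The addresses, tag and branch values are invented examples, not a complete SIP implementation:

```python
def build_register(aor, contact_ip, call_id="a84b4c76e66710", cseq=1):
    """Compose a minimal SIP REGISTER carrying the six mandatory headers.
    All concrete values here are illustrative placeholders."""
    lines = [
        f"REGISTER sip:{aor.split('@')[1]} SIP/2.0",
        f"Via: SIP/2.0/UDP {contact_ip};branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        f"To: <sip:{aor}>",
        f"From: <sip:{aor}>;tag=1928301774",
        f"Call-ID: {call_id}",
        f"CSeq: {cseq} REGISTER",
        f"Contact: <sip:{aor.split('@')[0]}@{contact_ip}>",
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

msg = build_register("alice@apis.se", "192.0.2.4")
```

Each header is the name, a colon, and the header-specific information, exactly the "name: value" format described above.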
SIP Responses
The SIP response types defined in RFC 3261 fall into one of the following categories:
Provisional (1xx): The request was received and is being processed.
Success (2xx): The action was successfully received, understood, and accepted.
Redirection (3xx): Further action needs to be taken (typically by the sender) to complete
the request.
Client Error (4xx): The request contains bad syntax or cannot be fulfilled at the server.
Server Error (5xx): The server failed to fulfill an apparently valid request.
Global Failure (6xx): The request cannot be fulfilled at any server.
The RTP standard defines a pair of protocols, RTP and RTCP. RTP is used for the transfer of
multimedia data, and RTCP is used to periodically send control information and QoS
parameters. Figure 5-10 shows the RTP header with the corresponding header fields.
Figure 5-10 RTP Header
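The fixed 12-byte RTP header shown in Figure 5-10 can be decoded with straightforward bit operations. The sketch below unpacks the fields following the RFC 3550 layout; the sample packet values are invented:

```python
import struct

def parse_rtp_header(data: bytes) -> dict:
    """Unpack the 12-byte fixed RTP header (RFC 3550 layout)."""
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", data[:12])
    return {
        "version": b0 >> 6,          # always 2 for current RTP
        "padding": (b0 >> 5) & 1,
        "extension": (b0 >> 4) & 1,
        "csrc_count": b0 & 0x0F,
        "marker": b1 >> 7,
        "payload_type": b1 & 0x7F,   # e.g. 0 = PCMU, 8 = PCMA
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

# Illustrative packet: V=2, PT=8 (PCMA), seq=1000, ts=160, SSRC=0xDEADBEEF
hdr = parse_rtp_header(struct.pack("!BBHII", 0x80, 0x08, 1000, 160, 0xDEADBEEF))
```

The sequence number and timestamp are the fields RTCP-based QoS reporting builds on (loss and jitter measurement).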
IP in GPRS/UMTS/EPS
6.1.1 GPRS/UMTS
In GPRS/UMTS the bearer is called a PDP context. A primary PDP context is activated at
the beginning of login, providing the user with an IP address. The user may also activate a
secondary PDP context, keeping the same IP address as the primary PDP context but having
a different QoS for the bearer. Figure 6-1 shows the general concept.
Subscriber's IP address
Subscriber's IMSI
Subscriber's
The Tunnel Endpoint ID (TEID) is a number allocated by the GSN which identifies the
tunnelled data related to a particular PDP context.
Several PDP contexts may use the same IP address. The Secondary PDP Context Activation
procedure may be used to activate a PDP context while reusing the PDP address and other
PDP context information from an already active PDP context, but with a different QoS
profile.[1] Note that it is the procedure that is called secondary, not the resulting PDP
contexts, which have no such hierarchical relationship with the one whose PDP address they
reuse.
A total of 11 PDP contexts (with any combination of primary and secondary) can co-exist.
NSAPIs are used to differentiate the different PDP contexts. Figure 6-4 shows a traffic case
used to activate a PDP context.
EPS Bearer:
Each EPS bearer context represents an EPS bearer between the UE and a PDN. EPS bearer
contexts can remain activated even if the radio and S1 bearers constituting the corresponding
EPS bearers between UE and MME are temporarily released. An EPS bearer context can be
either a default bearer context or a dedicated bearer context. A default EPS bearer context is
activated when the UE requests a connection to a PDN. The first default EPS bearer context
is activated during the EPS attach procedure. Additionally, the network can activate one or
several dedicated EPS bearer contexts in parallel. Figure 6-5 shows a traffic case for setting
up the default bearer in EPS.
wireless device, what security methods should be used, and how or if, it should be connected
to some private customer network.
More specifically, the APN identifies the packet data network (PDN), that a mobile data user
wants to communicate with. In addition to identifying a PDN, an APN may also be used to
define the type of service, (e.g. connection to wireless application protocol (WAP) server,
multimedia messaging service (MMS)), that is provided by the PDN. APN is used in 3GPP
data access networks, e.g. general packet radio service (GPRS), evolved packet core (EPC).
Structure of an APN
A structured APN consists of two parts as shown in Figure 6-7.
Network Identifier: Defines the external network to which the Gateway GPRS Support
Node (GGSN) is connected. Optionally, it may also include the service requested by the
user. This part of the APN is mandatory.
Operator Identifier: Defines the specific operator's packet domain network in which the
GGSN is located. This part of the APN is optional. The MCC is the Mobile Country
Code and the MNC is the Mobile Network Code, which together uniquely identify a
mobile network operator.
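A full APN is commonly written with the operator identifier appended as mncXXX.mccYYY.gprs labels after the network identifier. The helper below is an illustrative sketch of splitting such a string into the two parts described above; the example APN is invented:

```python
def split_apn(apn: str):
    """Split a full APN into (network identifier, operator identifier).
    The operator identifier, when present, has the form mncXXX.mccYYY.gprs."""
    labels = apn.split(".")
    if len(labels) >= 3 and labels[-1] == "gprs" \
            and labels[-2].startswith("mcc") and labels[-3].startswith("mnc"):
        return ".".join(labels[:-3]), ".".join(labels[-3:])
    return apn, None   # operator identifier is optional

net_id, op_id = split_apn("internet.mnc012.mcc345.gprs")
```

For the invented APN above, the network identifier is "internet" and the operator identifier carries MNC 012 and MCC 345, which together identify the operator.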
6.7 Roaming
In the case where the user is roaming, the gateway (GGSN or PGW) can be in the HPLMN or
the VPLMN, depending on network settings and policies. There is a flag in the HLR/HSS
indicating whether home routed traffic should apply. The flag is called VPLMN-Allowed and
can be set to yes or no. However, the IP address of the user is always allocated by the
gateway, which remains the same for the duration of the bearer.
Figure 6-8 shows the placement of network elements in the case of home routed traffic. As
shown, the gateway (GGSN/PGW) is in the home network; thus the settings (IP address, DNS,
etc.) are the same as if the user were in the home network.
Figure 6-8 Roaming and Home Routed Traffic
Local Breakout
In the case where operators (both home operator and the visited operator) allow local breakout,
the gateway can be in the visited network. The flag in the HLR/HSS should in this case be set
to yes. Figure 6-9 shows the general concept of local breakout. The visited gateway
GGSN/PGW is in this case responsible for the bearer settings, such as IP address, QoS, etc.
GTP-U: For the transfer of user data in separate tunnels for each Packet Data Protocol (PDP)
context; GTPv1 is used for GTP-U.
Figure 6-10 shows GTP and its usage in EPS for the tunneling of user data between the SGW
and the PGW. As shown previously, GTP is used in the core for UMTS/PS and EPS, both for
the control and the user plane (see the protocol stacks for UMTS and EPS).
For GTP-C, EPS uses GTPv2 while UMTS/GPRS uses GTPv1.
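The tunnels above carry each packet behind an 8-byte mandatory GTPv1 header (flags, message type, length, TEID). The sketch below unpacks it; the sample values are invented:

```python
import struct

def parse_gtpv1_header(data: bytes) -> dict:
    """Unpack the 8-byte mandatory GTPv1 header."""
    flags, msg_type, length, teid = struct.unpack("!BBHI", data[:8])
    return {
        "version": flags >> 5,              # 1 for GTPv1
        "protocol_type": (flags >> 4) & 1,  # 1 = GTP, 0 = GTP'
        "message_type": msg_type,           # e.g. 0xFF = G-PDU (user data)
        "length": length,                   # bytes following this header
        "teid": teid,                       # tunnel endpoint identifier
    }

# Illustrative G-PDU: version 1, PT=1 (flags 0x30), type 0xFF, TEID 0x11223344
hdr = parse_gtpv1_header(struct.pack("!BBHI", 0x30, 0xFF, 100, 0x11223344))
```

The TEID is the value, allocated by the GSN as described above, that ties the tunnelled packet to a particular PDP context or bearer.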
Quality of Service
Conversational class
Streaming class
Interactive class
Background class
The main distinguishing factor between these QoS classes is how delay sensitive the traffic is:
Conversational class is meant for traffic which is very delay sensitive while Background class
is the most delay insensitive traffic class.
Conversational and Streaming classes are mainly intended to be used to carry real-time traffic
flows. The main divider between them is how delay sensitive the traffic is. Conversational
real-time services, like video telephony, are the most delay sensitive applications and those
data streams should be carried in Conversational class.
Interactive class and Background are mainly meant to be used by traditional Internet
applications like WWW, Email, Telnet, FTP and News. Due to looser delay requirements
compared to the Conversational and Streaming classes, both provide a better error rate by
means of channel coding and retransmission. The main difference between the Interactive and Background
class is that Interactive class is mainly used by interactive applications, e.g. interactive Email
or interactive Web browsing, while Background class is meant for background traffic, e.g.
background download of Emails or background file downloading. Responsiveness of the
interactive applications is ensured by separating interactive and background applications.
Traffic in the Interactive class has higher priority in scheduling than Background class traffic,
so background applications use transmission resources only when interactive applications do
not need them. This is very important in a wireless environment where the bandwidth is low
compared to fixed networks.
However, these are only typical examples of usage of the traffic classes. There is in particular
no strict one-to-one mapping between classes of service (as defined in TS 22.105 [5]) and the
traffic classes defined in this TS. For instance, a service interactive by nature can very well
use the Conversational traffic class if the application or the user has tight requirements on
delay.
This scheme is one of the newcomers in data communication, raising a number of new
requirements in both telecommunication and data communication systems. It is characterised
by the requirement that the time relations (variation) between information entities (i.e.
samples, packets) within a flow shall be preserved, although it does not have any
requirements on low transfer delay.
The delay variation of the end-to-end flow shall be limited, to preserve the time relation
(variation) between information entities of the stream. But as the stream normally is time
aligned at the receiving end (in the user equipment), the highest acceptable delay variation
over the transmission media is given by the capability of the time alignment function of the
application. Acceptable delay variation is thus much greater than the delay variation given by
the limits of human perception.
Real time streams - fundamental characteristics for QoS:
A QCI (Figure 7-2) is a scalar that is used as a reference to access node-specific parameters
that control bearer level packet forwarding treatment (e.g. scheduling weights, admission
thresholds, queue management thresholds, link layer protocol configuration, etc.), and that
have been pre-configured by the operator owning the access node (e.g. eNodeB). A
one-to-one mapping of standardized QCI values to standardized characteristics is captured in
TS 23.203.
Figure 7-2 QCI and some examples
On the radio interface and on S1, each PDU (e.g. RLC PDU or GTP-U PDU) is indirectly
associated with one QCI via the bearer identifier carried in the PDU header. The same applies
to the S5 and S8 interfaces if they are based on GTP.
The ARP shall contain information about the priority level (scalar), the pre-emption capability
(flag) and the pre-emption vulnerability (flag). The primary purpose of ARP is to decide
whether a bearer establishment / modification request can be accepted or needs to be rejected
due to resource limitations (typically available radio capacity for GBR bearers). The priority
level information of the ARP is used for this decision to ensure that the request of the bearer
with the higher priority level is preferred. In addition, the ARP can be used (e.g. by the
eNodeB) to decide which bearer(s) to drop during exceptional resource limitations (e.g. at
handover). The pre-emption capability information of the ARP defines whether a bearer with
a lower ARP priority level should be dropped to free up the required resources. The
pre-emption vulnerability information of the ARP defines whether a bearer is applicable for
such dropping by a pre-emption capable bearer with a higher ARP priority value. Once
successfully established, a bearer's ARP shall not have any impact on the bearer level packet
forwarding treatment (e.g. scheduling and rate control). Such packet forwarding treatment
should be solely determined by the other EPS bearer QoS parameters: QCI, GBR and MBR,
and by the AMBR parameters. The ARP is not included within the EPS QoS Profile sent to
the UE.
The ARP should be understood as "Priority of Allocation and Retention"; not as "Allocation,
Retention, and Priority".
Video telephony is one use case where it may be beneficial to use EPS bearers with different
ARP values for the same UE. In this use case an operator could map voice to one bearer with
a higher ARP, and video to another bearer with a lower ARP. In a congestion situation (e.g.
cell edge) the eNodeB can then drop the "video bearer" without affecting the "voice bearer".
This would improve service continuity.
The ARP may also be used to free up capacity in exceptional situations, e.g. a disaster
situation. In such a case the eNodeB may drop bearers with a lower ARP priority level to free
up capacity if the pre-emption vulnerability information allows this.
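The admission and pre-emption logic described above can be sketched as a simple decision function. The capacity limit and bearer records below are invented example values, and a real eNodeB decision involves many more inputs; note that a lower ARP priority-level value means higher priority:

```python
CAPACITY = 2   # illustrative admission limit (e.g. available GBR capacity)

def admit(new_bearer, existing):
    """Sketch of ARP-based admission: accept if capacity allows; otherwise a
    pre-emption capable request may drop a lower-priority vulnerable bearer."""
    if len(existing) < CAPACITY:
        return True, None
    if not new_bearer["preemption_capable"]:
        return False, None
    # candidate victims: vulnerable bearers of lower priority (higher value)
    victims = [b for b in existing
               if b["preemption_vulnerable"]
               and b["priority"] > new_bearer["priority"]]
    if not victims:
        return False, None
    victim = max(victims, key=lambda b: b["priority"])  # drop lowest priority
    return True, victim

voice = {"priority": 2, "preemption_capable": True,
         "preemption_vulnerable": False}
video = {"priority": 6, "preemption_capable": False,
         "preemption_vulnerable": True}
```

With these example records, an incoming voice bearer can pre-empt an established video bearer at full capacity, but not vice versa, matching the voice/video use case above.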
Figure 7-3 EPS Bearers and QoS
Each GBR bearer is additionally associated with the following bearer level QoS parameters:
The GBR denotes the bit rate that can be expected to be provided by a GBR bearer. The MBR
limits the bit rate that can be expected to be provided by a GBR bearer (e.g. excess traffic may
get discarded by a rate shaping function).
Each APN access, by a UE, is associated with the following QoS parameter:
The APN-AMBR is a subscription parameter stored per APN in the HSS. It limits the
aggregate bit rate that can be expected to be provided across all Non-GBR bearers and across
all PDN connections of the same APN (e.g. excess traffic may get discarded by a rate shaping
function). Each of those Non-GBR bearers could potentially utilize the entire APN-AMBR,
e.g. when the other Non-GBR bearers do not carry any traffic. GBR bearers are outside the
scope of APN-AMBR. The P-GW enforces the APN-AMBR in downlink. Enforcement of
APN-AMBR in uplink is done in the UE and additionally in the P-GW.
All simultaneous active PDN connections of a UE that are associated with the same APN shall
be provided by the same PDN GW.
APN-AMBR applies to all PDN connections of an APN. For the case of multiple PDN
connections of an APN, if a change of APN-AMBR occurs due to local policy or the PGW is
provided the updated APN-AMBR for each PDN connection from the MME or PCRF, the
PGW initiates explicit signaling for each PDN connection to update the APN-AMBR value.
Each UE in state EMM-REGISTERED is associated with the following bearer aggregate level
QoS parameter:
The UE-AMBR is limited by a subscription parameter stored in the HSS. The MME shall set
the UE-AMBR to the sum of the APN-AMBR of all active APNs up to the value of the
subscribed UE-AMBR. The UE-AMBR limits the aggregate bit rate that can be expected to
be provided across all Non-GBR bearers of a UE (e.g. excess traffic may get discarded by a
rate shaping function). Each of those Non-GBR bearers could potentially utilize the entire
UE-AMBR, e.g. when the other Non-GBR bearers do not carry any traffic. GBR bearers are
outside the scope of UE AMBR. The E-UTRAN enforces the UE-AMBR in uplink and
downlink.
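The MME rule above, UE-AMBR set to the sum of the APN-AMBR of all active APNs capped by the subscribed UE-AMBR, amounts to a one-line computation; the numbers below are invented examples:

```python
def effective_ue_ambr(subscribed_ue_ambr, active_apn_ambrs):
    """UE-AMBR = min(sum of APN-AMBR over all active APNs,
    subscribed UE-AMBR). Rates here are in kbps (illustrative values)."""
    return min(sum(active_apn_ambrs), subscribed_ue_ambr)

# Two active APNs of 50 Mbps each, subscription capped at 80 Mbps
ambr = effective_ue_ambr(80_000, [50_000, 50_000])   # -> 80_000
```

The sum of the per-APN limits can exceed the subscription, in which case the subscribed value wins.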
The GBR and MBR denote bit rates of traffic per bearer while UE-AMBR/APN-AMBR
denote bit rates of traffic per group of bearers. Each of those QoS parameters has an uplink
and a downlink component. On S1_MME the values of the GBR, MBR, and AMBR refer to
the bit stream excluding the GTP-U/IP header overhead of the tunnel on S1-U.
The figure shows a general situation for a UE having multiple PDN connections with
multiple PGWs.
evaluation precedence index of the corresponding filter(s) (e.g., a "match all filter") need to be
set to a value that is higher than the values used for filters associated with dedicated bearers.
A TFT of a unidirectional EPS bearer is either associated with UL packet filter(s) or DL
packet filter(s) that match the unidirectional traffic flow(s), and a DL packet filter or a UL
packet filter that effectively disallows any useful packet flows (see clause 15.3.3.4 in
TS 23.060 for an example of such a packet filter).
The UE routes uplink packets to the different EPS bearers based on uplink packet filters in the
TFTs assigned to these EPS bearers. The UE evaluates for a match, first the uplink packet
filter amongst all TFTs that has the lowest evaluation precedence index and, if no match is
found, proceeds with the evaluation of uplink packet filters in increasing order of their
evaluation precedence index. This procedure shall be executed until a match is found or all
uplink packet filters have been evaluated. If a match is found, the uplink data packet is
transmitted on the EPS bearer that is associated with the TFT of the matching uplink packet
filter. If no match is found, the uplink data packet shall be sent via the EPS bearer that has not
been assigned any uplink packet filter. If all EPS bearers (including the default EPS bearer for
that PDN) have been assigned one or more uplink packet filters, the UE shall discard the
uplink data packet.
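The uplink selection procedure above can be sketched directly: evaluate filters in increasing evaluation precedence index, fall back to the bearer without any uplink filter, otherwise discard. The data structures below are invented for illustration:

```python
def select_uplink_bearer(packet, bearers):
    """Sketch of the UE's uplink mapping: match filters by increasing
    evaluation precedence index; fall back to the filterless bearer;
    otherwise return None (packet discarded)."""
    candidates = [(f["precedence"], b, f)
                  for b in bearers for f in b.get("ul_filters", [])]
    candidates.sort(key=lambda t: t[0])          # lowest index first
    for _, bearer, flt in candidates:
        if all(packet.get(k) == v for k, v in flt["match"].items()):
            return bearer
    for b in bearers:                            # bearer without UL filters
        if not b.get("ul_filters"):
            return b
    return None                                  # all filtered, no match

default = {"id": "default"}
dedicated = {"id": "voice",
             "ul_filters": [{"precedence": 1, "match": {"dst_port": 5060}}]}
bearers = [default, dedicated]
```

A packet to port 5060 rides the dedicated bearer; anything else falls back to the default bearer, and if every bearer carried a filter, an unmatched packet would be discarded, as stated above.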
The above algorithm implies that there is at most one EPS bearer without any uplink packet
filter (i.e. either EPS bearer without any TFT or an EPS bearer with only DL packet filter(s)).
Therefore, some UEs may expect that during the lifetime of a PDN connection (where only
the network has provided TFT packet filters) at most one EPS bearer exists without a TFT or
with a TFT without any uplink packet filter.
To ensure that at most one EPS bearer exists without a TFT or with a TFT without any uplink
packet filter, the PCEF (for GTP-based S5/S8) and the BBERF (for PMIP-based S5/S8)
apply the Session Management restrictions described in clause 9.2.0 of TS 23.060.
An EPS bearer is realized by the following elements:
In the UE, the UL TFT maps a traffic flow aggregate to an EPS bearer in the uplink
direction;
In the PDN GW, the DL TFT maps a traffic flow aggregate to an EPS bearer in the
downlink direction;
An S5/S8 bearer transports the packets of an EPS bearer between a Serving GW and
a PDN GW;
A UE stores a mapping between an uplink packet filter and a radio bearer to create
the mapping between a traffic flow aggregate and a radio bearer in the uplink;
A PDN GW stores a mapping between a downlink packet filter and an S5/S8 bearer
to create the mapping between a traffic flow aggregate and an S5/S8 bearer in the
downlink;
The PDN GW routes downlink packets to the different EPS bearers based on the downlink
packet filters in the TFTs assigned to the EPS bearers in the PDN connection. Upon reception
of a downlink data packet, the PDN GW evaluates for a match, first the downlink packet filter
that has the lowest evaluation precedence index and, if no match is found, proceeds with the
evaluation of downlink packet filters in increasing order of their evaluation precedence index.
This procedure shall be executed until a match is found, in which case the downlink data
packet is tunnelled to the Serving GW on the EPS bearer that is associated with the TFT of the
matching downlink packet filter. If no match is found, the downlink data packet shall be sent
via the EPS bearer that does not have any downlink packet filter assigned. If all EPS bearers
(including the default EPS bearer for that PDN) have been assigned a downlink packet filter,
the PDN GW shall discard the downlink data packet.
Figure 7-5 shows an example of TFT definitions and its usage.
Figure 7-5 QoS Concept and an example of TFT
The IP header and the UDP/TCP headers are used for defining a TFT. Source/destination IP
addresses, source/destination port numbers and the protocol field in the IP header are used to
create the TFTs in the example above. Figure 7-6 shows the IP and UDP headers.
Additional bandwidth
Congestion Avoidance
Traffic Engineering
After reviewing this table, you might be asking yourself why 24 kbps of bandwidth is
consumed when you're using an 8-kbps codec. This occurs due to a phenomenon called the
"IP tax". The G.729 codec using two 10-ms samples consumes 20 bytes per frame, which
works out to 8 kbps. The packet headers that include IP, RTP, and User Datagram Protocol
(UDP) add 40 bytes to each frame. This "IP tax" header overhead is twice the size of the payload.
Using G.729 with two 10-ms samples as an example, without RTP header compression, 24
kbps are consumed in each direction per call. Although this might not be a large amount for
T1 (1.544 Mbps), E1 (2.048 Mbps), or higher circuits, it is a large amount (42 percent) for a
56-kbps circuit.
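The arithmetic behind these figures can be checked directly. This is a worked example of the numbers quoted above; the helper function is purely illustrative.

```python
# Worked example of the "IP Tax" figures for G.729 (two 10-ms samples per packet).

def voip_bandwidth_bps(payload_bytes, header_bytes, packet_interval_s):
    """Layer-3-and-up bandwidth of one call direction, in bits per second."""
    return (payload_bytes + header_bytes) * 8 / packet_interval_s

payload = 20         # two 10-ms G.729 samples = 20 bytes of voice per packet
ip_rtp_udp = 40      # IP (20) + UDP (8) + RTP (12) headers = 40 bytes

codec_only = voip_bandwidth_bps(payload, 0, 0.020)          # 8000 bps  = 8 kbps
with_tax = voip_bandwidth_bps(payload, ip_rtp_udp, 0.020)   # 24000 bps = 24 kbps
```

The 60-byte packet every 20 ms is what turns an 8-kbps codec into a 24-kbps flow per direction.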
Also, keep in mind that the bandwidth in Table 8-1 does not include Layer 2 headers (PPP,
Frame Relay, and so on). It includes headers from Layer 3 (network layer) and above only.
Therefore, the same G.729 call can consume different amounts of bandwidth based upon
which data link layer is used (Ethernet, Frame Relay, PPP, and so on).
With cRTP, the amount of traffic per VoIP call is reduced from 24 kbps to 11.2 kbps. This is a
major improvement for low-bandwidth links. A 56-kbps link, for example, can now carry four
G.729 VoIP calls at 11.2 kbps each. Without cRTP, only two G.729 VoIP calls at 24 kbps can
be used.
To avoid the unnecessary consumption of available bandwidth, cRTP is used on a link-by-link
basis. This compression scheme reduces the IP/RTP/UDP header to 2 bytes when UDP
checksums are not used, or 4 bytes when UDP checksums are used.
cRTP uses some of the same techniques as Transmission Control Protocol (TCP) header
compression. In TCP header compression, the first factor-of-two reduction in data rate occurs
because half of the bytes in the IP and TCP headers remain constant over the life of the
connection.
The big gain, however, comes from the fact that the difference from packet to packet is often
constant, even though several fields change in every packet. Therefore, the algorithm can
simply add 1 to every value received. By maintaining both the uncompressed header and the
first-order differences in the session state shared between the compressor and the
decompressor, cRTP must communicate only an indication that the second-order difference is
zero. In that case, the decompressor can reconstruct the original header without any loss of
information, simply by adding the first-order differences to the saved, uncompressed header
as each compressed packet is received.
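A toy model of this idea, assuming a single integer field (such as an RTP timestamp) and ignoring the actual cRTP wire format: the compressor sends a full value until the packet-to-packet delta repeats, after which a tiny "second-order difference is zero" token suffices.

```python
# Toy model of cRTP delta compression (field values only, no real packet format).

class CrtpCompressor:
    def __init__(self):
        self.last = None    # last uncompressed value (shared session state)
        self.delta = None   # last first-order difference

    def compress(self, value):
        # Second-order difference is zero: send only a small token.
        if self.last is not None and self.delta == value - self.last:
            self.last = value
            return "SAME_DELTA"
        if self.last is not None:
            self.delta = value - self.last
        self.last = value
        return ("FULL", value)          # fall back to an uncompressed header

class CrtpDecompressor:
    def __init__(self):
        self.last = None
        self.delta = None

    def decompress(self, token):
        if token == "SAME_DELTA":
            self.last += self.delta     # add the saved first-order difference
        else:
            _, value = token
            if self.last is not None:
                self.delta = value - self.last
            self.last = value
        return self.last
```

With evenly spaced timestamps the compressor quickly settles into sending only `SAME_DELTA` tokens, mirroring the "about 98 percent of the time" behaviour described below.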
Just as TCP/IP header compression maintains shared state for multiple, simultaneous TCP
connections, this IP/RTP/UDP compression must maintain state for multiple session contexts.
A session context is defined by the combination of the IP source and destination addresses, the
UDP source and destination ports, and the RTP synchronization source (SSRC) field. A
compressor implementation might use a hash function on these fields to index a table of
stored session contexts.
The compressed packet carries a small integer, called the session context identifier, or CID, to
indicate in which session context that packet should be interpreted. The decompressor can use
the CID to index its table of stored session contexts.
cRTP can compress the 40 bytes of header down to 2 to 4 bytes most of the time. As
such, about 98 percent of the time the compressed packet will be sent. Periodically, however,
an entire uncompressed header must be sent to verify that both sides have the correct state.
Sometimes, changes occur in a field that is usually constant, such as the payload type field.
In such cases, the IP/RTP/UDP header cannot be compressed, so an uncompressed header
must be sent.
You should use cRTP on any WAN interface where bandwidth is a concern and a high portion
of RTP traffic exists.
You should not use cRTP on high-speed interfaces, as the disadvantages of doing so outweigh
the advantages. "High-speed network" is a relative term: usually anything higher than T1 or
E1 speed does not need cRTP. The need for compression is simply a comparison between
the costs of the transmission link versus the cost and overhead of the compression. If you are
willing to pay the extra cost of time for compression/decompression and the additional
hardware costs that might be involved, then compression can work on almost any
transmission link.
As with any compression, the CPU incurs extra processing duties to compress the packet.
This increases the amount of CPU utilization on the edge device. Therefore, you must weigh
the advantages (lower bandwidth requirements) against the disadvantages (higher CPU
utilization). An edge device with higher CPU utilization can experience problems running
other tasks. As such, it is usually a good rule of thumb to keep CPU utilization at less than 60
to 70 percent to keep your network running smoothly.
Today's networks, with their variety of applications, protocols, and users, require a way to
classify different traffic. Going back to the tollbooth example, a special lane is necessary to
enable some cars to get bumped forward in line. The New Jersey Turnpike, as well as many
other toll roads, has a carpool lane, or a lane that allows you to pay for the toll electronically,
for instance.
Likewise, Huawei has several queuing tools that enable a network administrator to specify
what type of traffic is special or important and to queue the traffic based on that information
instead of when a packet arrives. The most popular of these queuing techniques is known as
WFQ. If you have a Huawei router, it is highly likely that it is using the WFQ algorithm
because it is the default for any router interface slower than 2 Mbps.
traffic in a fair manner. Fair queuing offers reduced jitter and enables efficient sharing of
available bandwidth between all applications.
The weighting in WFQ is currently affected by two mechanisms: IP Precedence, and the
Frame Relay forward explicit congestion notification (FECN), backward explicit congestion
notification (BECN), and Discard Eligible (DE) bits.
The IP Precedence field has values between 0 (the default) and 7. As the precedence value
increases, the algorithm allocates more bandwidth to that conversation or flow. This enables
the flow to transmit more frequently. See the "Packet Classification" section later in this
chapter for more information on WFQ weighting.
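Purely as an illustration of "more bandwidth as precedence increases", one simple model gives each flow a share proportional to (precedence + 1). Actual WFQ weight formulas are implementation specific; this function is an assumption for illustration only.

```python
# Illustrative WFQ-style sharing: each flow's bandwidth share is
# proportional to (IP precedence + 1). Not a vendor formula.

def wfq_shares(precedences, link_bps):
    """Return the per-flow bandwidth share for each flow's precedence."""
    total = sum(p + 1 for p in precedences)
    return [link_bps * (p + 1) / total for p in precedences]

# One precedence-5 voice flow competing with three precedence-0 flows on 1 Mbps:
shares = wfq_shares([5, 0, 0, 0], 1_000_000)
```

In this model the precedence-5 flow receives 6/9 of the link, and each best-effort flow 1/9, which is the qualitative behaviour the text describes.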
In a Frame Relay network, FECN and BECN bits usually flag the presence of congestion.
When congestion is flagged, the weights the algorithm uses change such that the conversation
encountering the congestion transmits less frequently.
WFQ also is not intended to run on interfaces that are clocked higher than 2.048 Mbps.
Figure 7-9 shows a generic figure of WFQ mechanism.
Figure 7-9 Weighted Fair Queuing, WFQ
highly granular approach to queuing, which is what some customers prefer. Figure 7-10 shows
a generic flow of data sent through a CQ configuration.
Figure 7-10 Custom Queuing
Since the LLQ of CBQ can also be used to handle real-time services, it is recommended not to
use RTP queuing together with CBQ.
Figure 7-12 RTP queuing diagram
harm than good to your network. With PQ or CQ, it is imperative that you know your traffic
and your applications.
Many customers who deploy VoIP networks in low-bandwidth environments (less than 768
kbps) use IP RTP Priority or LLQ to prioritize their voice traffic above all other traffic flows.
When the queue length is less than the minimum threshold, no packets are dropped.
When the queue length exceeds the maximum threshold, all incoming packets are
dropped.
When the queue length is between the minimum and maximum thresholds, incoming
packets are dropped randomly. The longer the queue, the higher the dropping
probability, up to a configured maximum dropping probability.
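The three regions described above can be sketched as follows; the threshold and probability parameters are assumed values for illustration, not vendor defaults.

```python
import random

# Sketch of the three RED regions: below min never drop, above max always
# drop, in between drop with a probability that grows linearly up to max_p.

def red_drop(queue_len, min_th, max_th, max_p, rng=random.random):
    """Return True if the arriving packet should be dropped."""
    if queue_len < min_th:
        return False                  # below minimum threshold: never drop
    if queue_len >= max_th:
        return True                   # above maximum threshold: always drop
    p = max_p * (queue_len - min_th) / (max_th - min_th)
    return rng() < p                  # linear ramp, capped at max_p
```

The `rng` parameter exists only so the random decision can be pinned down in examples and tests.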
Unlike RED, WRED generates its random number based on priority. It uses IP precedence to
determine the dropping policy, so the dropping probability of packets with high priority is
relatively lower.
RED and WRED employ a random packet-dropping policy to avoid TCP global
synchronization: when a packet of one TCP connection is dropped and that connection
decreases its sending rate, other TCP connections still keep their higher sending rates. So
there are always some TCP connections sending at a higher rate, which makes network
bandwidth utilization more efficient.
maximum/minimum thresholds would treat bursting data streams unfairly and affect the
transmission of the data stream. WRED compares the average queue length with the
maximum/minimum thresholds to determine the dropping probability. The average queue
length is the result of low-pass filtering the instantaneous queue length; it reflects the trend
of the queue while being insensitive to bursty changes in queue length, preventing unfair
treatment of bursting data streams.
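The low-pass filtering can be illustrated with an exponentially weighted moving average; the weight `w` here is an assumed parameter, typically small.

```python
# Sketch of low-pass filtering the instantaneous queue length:
# avg = (1 - w) * avg + w * q, with a small weight w.

def average_queue(samples, w=0.02):
    """Return the running average queue length for a trace of samples."""
    avg = 0.0
    out = []
    for q in samples:
        avg = (1 - w) * avg + w * q
        out.append(avg)
    return out

# A short burst barely moves the average, so bursty streams are not punished:
trace = average_queue([0] * 50 + [100] * 5 + [0] * 50)
```

A 5-sample burst to queue length 100 lifts the average to only about 10, and it decays back toward zero afterwards, which is exactly the insensitivity to bursts described above.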
When WFQ is adopted, you can set the index used in calculating the average queue length,
the maximum threshold, the minimum threshold, and the packet-dropping probability
separately for queues with different priorities, so packets with different priorities have
different packet-dropping characteristics. When FIFO, PQ, or CQ is adopted, you can set the
same parameters for each queue.
By associating WRED with WFQ, flow-based WRED can be realized. Because each flow has
its own queue after packet classification, a flow with little traffic always has a short queue, so
its packet-dropping probability is small. A flow with heavy traffic has a longer queue and
drops more packets, which protects the interests of flows with little traffic.
address, source port number, destination IP address, destination port number and the Transport
Protocol), or can be all packets to a network segment.
In general, when packets are classified at the network border, the precedence bits in the
ToS byte of the IP header are set, so that IP precedence can be used as a direct packet
classification criterion within the network. The queuing technologies can use IP precedence
to handle the packets. A downstream network can selectively accept the packet classification
results from the upstream network, or re-classify the packets according to its own criteria.
Traffic classification is used to provide differentiated service, so it must be associated with
some traffic policing or resource-assignment mechanism. Which traffic policing action to
adopt depends on the current stage and load status of the network: for example, policing
packets according to the committed rate when they enter the network, shaping traffic before
it leaves a node, performing queue management in the event of congestion, and employing
congestion avoidance when congestion worsens.
Several priorities are described in Figure 7-14:
Figure 7-14 DS Field and ToS Byte
The ToS byte of the IP header contains 8 bits: the first three bits (0 to 2) indicate IP
precedence, valued in the range 0 to 7; the following 4 bits (3 to 6) indicate ToS priority,
valued in the range 0 to 15. In RFC 2474, the ToS field of the IP packet header is redefined
as the DS field, where the DiffServ code point (DSCP) is indicated by the first 6 bits (0 to 5),
valued in the range 0 to 63. The remaining 2 bits (6 and 7) are reserved.
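Extracting these fields from a ToS/DS byte is a matter of bit shifts:

```python
# Field extraction from the ToS/DS byte described above.

def ip_precedence(tos):
    """Bits 0-2 (most significant three bits) of the ToS byte."""
    return (tos >> 5) & 0b111

def dscp(tos):
    """First 6 bits of the DS field (RFC 2474)."""
    return (tos >> 2) & 0b111111

tos_ef = 0xB8   # binary 10111000: DSCP 46 (EF), IP precedence 5
```

The same byte yields precedence 5 under the old interpretation and DSCP 46 under DiffServ, which is why the two marking schemes can coexist.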
Whether the token quantity in the Token Bucket can satisfy packet forwarding is the basis
on which the Token Bucket measures the traffic specification. If enough tokens are available
for forwarding packets, the traffic is regarded as conforming to the specification (generally,
one token is associated with the forwarding of one bit); otherwise, it is non-conforming, or
excess.
When measuring the traffic with Token Bucket, these parameters are included:
Mean rate: the rate of putting tokens into the bucket, i.e. the average rate of the
permitted traffic. Generally set as the CIR (Committed Information Rate).
Burst size: the Token Bucket's capacity, i.e. the maximum traffic size of each burst.
Generally set as the CBS (Committed Burst Size); the burst size must be greater than
the maximum packet size.
A new evaluation is made when a new packet arrives. If there are enough tokens in the
bucket for an evaluation, the traffic is within bounds, and the number of tokens
corresponding to the packet's forwarding right is taken out of the bucket. Otherwise, too
many tokens have already been used, and the traffic specification is exceeded.
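A minimal token bucket following this evaluation, using the "one token per bit" convention stated above; the clocking is simplified and the class is illustrative rather than any vendor's implementation.

```python
import time

# Minimal token bucket: tokens accrue at the CIR up to the CBS capacity,
# and each packet evaluation removes one token per bit if it conforms.

class TokenBucket:
    def __init__(self, cir_bps, cbs_bits):
        self.rate = cir_bps        # mean rate: token-filling rate (CIR)
        self.capacity = cbs_bits   # burst size: bucket capacity (CBS)
        self.tokens = cbs_bits     # start with a full bucket
        self.last = time.monotonic()

    def conforms(self, packet_bits):
        """Evaluate one packet: True = conforming, False = excess."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False
```

A full-bucket burst up to the CBS conforms immediately; a second burst right behind it is declared excess until the bucket refills at the CIR.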
Complicated evaluation:
Two Token Buckets can be configured to evaluate conditions that are more complex and to
implement more flexible regulation policy. For example, Traffic Policing (TP) has three
parameters, as follows:
It uses two Token Buckets with the token-putting rate of each bucket set equally to the CIR,
but with different capacities: CBS and EBS (CBS < EBS; called the C bucket and E bucket),
representing different permitted burst classes. In each evaluation, you may use different
traffic control policies for different situations, such as "the C bucket has enough tokens",
"the tokens of the C bucket are deficient, but those of the E bucket are enough", or "the
tokens of both the C bucket and the E bucket are deficient".
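The two-bucket evaluation can be sketched as a three-way classification. This mirrors the single-rate three-color marker of RFC 2697; the token refill is omitted for brevity, so the function is a pure evaluation step.

```python
# Sketch of the two-bucket evaluation above: the three situations map
# naturally onto green / yellow / red outcomes (cf. RFC 2697).

def evaluate(packet_bits, c_tokens, e_tokens):
    """Return (color, new_c_tokens, new_e_tokens) for one packet."""
    if c_tokens >= packet_bits:
        # "The C bucket has enough tokens."
        return "green", c_tokens - packet_bits, e_tokens
    if e_tokens >= packet_bits:
        # "Tokens of the C bucket are deficient, but those of E are enough."
        return "yellow", c_tokens, e_tokens - packet_bits
    # "Tokens of both the C bucket and the E bucket are deficient."
    return "red", c_tokens, e_tokens
```

A policer would then attach a different action (forward, re-mark, drop) to each color.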
Traffic Policing:
Typically, traffic policing is used to monitor the specification of certain traffic entering the
network and keep it within a reasonable bound, penalizing the exceeding traffic so as to
protect network resources and the profits of carriers. For example, it can restrict HTTP
packets to no more than 50% of the network bandwidth. Once the traffic of a connection is
found to exceed the specification, it may drop the packets or reset the precedence of the
packets.
Traffic policing allows you to define match rules based on IP precedence or DiffServ code
point (DSCP). It is widely used by ISPs to police network traffic. TP also includes traffic
classification of the policed traffic and, depending on the evaluation result, implements one
of the pre-configured policing actions, which are described as the following:
Modifying the precedence of the packets whose evaluation result is conforming and
forwarding them.
Modifying the precedence of the packets whose evaluation result is conforming and
entering the next-rank TP.
Entering the next-rank policing (TP can be stacked rank by rank, with each rank
policing more specific objects).
Traffic Shaping:
Traffic shaping is an active way to adjust the traffic output rate. A typical application is to
control the output traffic with TP index based upon downstream network nodes.
The main difference between traffic shaping and traffic policing is that the packets that
would be dropped by traffic policing are stored during traffic shaping: generally, they are
put into buffers or queues (this is also called Traffic Shaping, TS; see Figure 7-16). Once
there are enough tokens in the Token Bucket, those stored packets are sent out evenly.
Another difference is that traffic shaping may increase delay, whereas traffic policing
seldom does.
For example, in the implementation shown in Figure 7-17, Router A sends packets to Router
B, and Router B implements TP on those packets, directly dropping exceeding traffic.
Figure 7-17 Traffic Shaping Implementation
To reduce packet dropping, GTS can be used for the packets on the egress interface of Router
A. Packets exceeding the traffic characteristics of GTS are stored in Router A. When the next
set of packets is to be sent, GTS takes those packets out of the buffer or queues and sends
them. Thus, all packets sent to Router B conform to the traffic regulation of Router B.
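The buffering behaviour that distinguishes shaping from policing can be sketched as follows. The class and method names are illustrative, and token refill is modeled as an explicit call rather than a clock for clarity.

```python
from collections import deque

# Sketch of a shaper: non-conforming packets are queued, not dropped,
# and are released in order as tokens accrue.

class Shaper:
    def __init__(self, tokens):
        self.tokens = tokens
        self.queue = deque()   # packets waiting for tokens
        self.sent = []         # packets released downstream

    def offer(self, packet_bits):
        self.queue.append(packet_bits)
        self._drain()

    def refill(self, tokens):
        self.tokens += tokens  # stands in for CIR-paced token generation
        self._drain()

    def _drain(self):
        # Release queued packets in arrival order while tokens last.
        while self.queue and self.tokens >= self.queue[0]:
            pkt = self.queue.popleft()
            self.tokens -= pkt
            self.sent.append(pkt)
```

A policer in the same situation would simply drop the second packet; the shaper holds it and sends it after the next refill, at the cost of added delay.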
On a physical interface, you can enforce a line rate below the physical line rate to limit the
overall transmission rate (including the rate of sending critical packets).
LR also uses a Token Bucket for traffic control. If LR is configured on an interface of a
router, all packets to be sent via that interface are first handled by the Token Bucket of LR.
If there are enough tokens in the Token Bucket, the packets are forwarded; otherwise, they
are put into QoS queues for congestion management. In this way, the packet traffic through
the physical port can be managed.
If a Token Bucket is used to control the traffic, packets can be sent in bursts while there are
tokens in the Token Bucket; if no tokens are available, packets are not sent until new tokens
are generated in the Token Bucket. Thus, the traffic is restricted to the generation rate of new
tokens, restricting the overall rate while still allowing bursts to pass.
Compared with TP, LR can restrict all packets on a physical port. TP is realized at the IP
layer and has no effect on packets that do not go through the IP layer. If you need to limit the
rate of all packets, LR is easier to implement.
Traffic Engineering is the process of arranging how traffic flows through the network so that
congestion caused by uneven network utilization can be avoided. Constraint Based Routing is
an important tool for making the Traffic Engineering process automatic.
Avoiding congestion and providing graceful degradation of performance in the case of
congestion are complementary. Traffic Engineering therefore complements Differentiated
Services.
While determining a route, Constraint Based Routing considers not only topology of the
network, but also the requirement of the flow, the resource availability of the links, and
possibly other policies specified by the network administrators. Therefore, Constraint Based
Routing may find a longer but lightly loaded path to be better than the heavily loaded
shortest path. Network traffic is thus distributed more evenly.
In order to do Constraint Based Routing, routers need to distribute new link state information
and to compute routes based on such information.
Some measure of priority and proportional fairness is defined between traffic in different
classes. Should congestion occur between classes, the traffic in the higher class is given
priority. Rather than using strict priority queuing, more balanced queue servicing algorithms
such as fair queuing or weighted fair queuing (WFQ) are likely to be used. If congestion
occurs within a class, the packets with the higher drop precedence are discarded first. To
prevent issues associated with tail drop, more sophisticated drop selection algorithms such as
random early detection (RED) are often used.
IETF agreed to reuse the TOS octet as the DS field for DiffServ networks. In order to
maintain backward compatibility with network devices that still use the Precedence field,
DiffServ defines the Class Selector PHB.
The Class Selector code points are of the form 'xxx000'. The first three bits are the IP
precedence bits. Each IP precedence value can be mapped into a DiffServ class. If a packet is
received from a non-DiffServ aware router that used IP precedence markings, the DiffServ
router can still understand the encoding as a Class Selector code point.
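The Class Selector mapping is a simple bit pattern:

```python
# Class Selector PHB: precedence p maps to the DSCP 'ppp000'.

def class_selector_dscp(precedence):
    """Map an IP precedence value (0-7) to its Class Selector DSCP."""
    return precedence << 3          # e.g. precedence 5 -> CS5 = DSCP 40

def is_class_selector(dscp_val):
    """Code points of the form 'xxx000' are Class Selector code points."""
    return (dscp_val & 0b000111) == 0
```

For instance, precedence 5 maps to CS5 (DSCP 40), while EF (DSCP 46) is not of the form 'xxx000' and so is not a Class Selector code point.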
An MPLS label stack entry contains:
a 20-bit label value;
a 3-bit Traffic Class field (formerly the experimental, EXP, field) for QoS (quality of
service) priority and ECN (Explicit Congestion Notification);
a 1-bit bottom-of-stack flag, which, if set, signifies that the current label is the last in
the stack;
an 8-bit TTL (time to live) field.
These MPLS-labeled packets are switched after a label lookup/switch instead of a lookup into
the IP table. As mentioned above, when MPLS was conceived, label lookup and label
switching were faster than a routing table or RIB (Routing Information Base) lookup because
they could take place directly within the switched fabric and not the CPU.
Routers that perform routing based only on the label are called label switch routers (LSRs).
The entry and exit points of an MPLS network are called label edge routers (LERs), which,
respectively, push an MPLS label onto an incoming packet and pop it off the outgoing
packet. Alternatively, under penultimate hop popping this function may instead be performed
by the LSR directly connected to the LER.
Labels are distributed between LERs and LSRs using the Label Distribution Protocol
(LDP).[6] LSRs in an MPLS network regularly exchange label and reachability information
with each other using standardized procedures in order to build a complete picture of the
network they can then use to forward packets. Label-switched paths (LSPs) are established by
the network operator for a variety of purposes, such as to create network-based IP virtual
private networks or to route traffic along specified paths through the network. In many
respects, LSPs are not different from permanent virtual circuits (PVCs) in ATM or Frame
Relay networks, except that they are not dependent on a particular layer-2 technology.
In the specific context of an MPLS-based virtual private network (VPN), LERs that function
as ingress and/or egress routers to the VPN are often called PE (Provider Edge) routers.
Devices that function only as transit routers are similarly called P (Provider) routers. See RFC
4364. The job of a P router is significantly easier than that of a PE router, so they can be less
complex and may be more dependable because of this.
When an unlabeled packet enters the ingress router and needs to be passed on to an MPLS
tunnel, the router first determines the forwarding equivalence class (FEC) the packet should
be in, and then inserts one or more labels in the packet's newly created MPLS header. The
packet is then passed on to the next hop router for this tunnel.
When a labeled packet is received by an MPLS router, the topmost label is examined. Based
on the contents of the label a swap, push (impose) or pop (dispose) operation can be
performed on the packet's label stack. Routers can have prebuilt lookup tables that tell them
which kind of operation to do based on the topmost label of the incoming packet so they can
process the packet very quickly.
In a swap operation the label is swapped with a new label, and the packet is forwarded along
the path associated with the new label.
In a push operation a new label is pushed on top of the existing label, effectively
"encapsulating" the packet in another layer of MPLS. This allows hierarchical routing of
MPLS packets. Notably, this is used by MPLS VPNs.
In a pop operation the label is removed from the packet, which may reveal an inner label
below. This process is called "decapsulation". If the popped label was the last on the label
stack, the packet "leaves" the MPLS tunnel. This is usually done by the egress router, but see
Penultimate Hop Popping (PHP) below.
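The three label-stack operations can be modeled on a plain list, with the top of the stack last; this is purely illustrative and ignores the other label fields.

```python
# Swap / push / pop on a label stack modeled as a list (top of stack last).

def swap(stack, new_label):
    """Replace the topmost label; the packet follows the new label's path."""
    stack[-1] = new_label

def push(stack, new_label):
    """Encapsulate the packet in another MPLS layer (e.g. for MPLS VPNs)."""
    stack.append(new_label)

def pop(stack):
    """Remove the topmost label; returns True if the packet left the tunnel."""
    stack.pop()                      # may expose an inner label below
    return len(stack) == 0
```

Popping with one label left ends the tunnel (the egress or, under PHP, the penultimate hop), while popping a deeper stack merely decapsulates one layer.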
During these operations, the contents of the packet below the MPLS label stack are not
examined. Indeed, transit routers typically need only to examine the topmost label on the
stack. The forwarding of the packet is done based on the contents of the labels, which allows
"protocol-independent packet forwarding" that does not need to look at a protocol-dependent
routing table and avoids the expensive IP longest prefix match at each hop.
At the egress router, when the last label has been popped, only the payload remains. This can
be an IP packet, or any of a number of other kinds of payload packet. The egress router must
therefore have routing information for the packet's payload, since it must forward it without
the help of label lookup tables. An MPLS transit router has no such requirement.
In some special cases, the last label can also be popped off at the penultimate hop (the hop
before the egress router). This is called penultimate hop popping (PHP). This may be
interesting in cases where the egress router has lots of packets leaving MPLS tunnels, and
thus spends inordinate amounts of CPU time on this. By using PHP, transit routers connected
directly to this egress router effectively offload it, by popping the last label themselves.
MPLS can make use of existing ATM network or Frame Relay infrastructure, as its labeled
flows can be mapped to ATM or Frame Relay virtual-circuit identifiers, and vice versa.
Figure 7-20 shows the general architecture of MPLS implemented in a network.
Network Security
IPsec offers a set of security services, including:
Access control
Connectionless integrity
These services are provided at the IP layer, offering protection in a standard fashion for all
protocols that may be carried over IP (including IP itself).
IPsec architecture defines the elements listed below:
1. Security Protocols:
- Authentication Header (AH)
- Encapsulating Security Payload (ESP)
2. Security Associations (SAs)
3. Key management (manual and automatic, e.g. IKE)
4. Cryptographic algorithms for integrity and encryption
Most of the security services are provided through use of two traffic security protocols, the
Authentication Header (AH) and the Encapsulating Security Payload (ESP), and through the
use of cryptographic key management procedures and protocols. The set of IPsec protocols
employed in a context, and the ways in which they are employed, will be determined by the
users/administrators in that context. It is the goal of the IPsec architecture to ensure that
compliant implementations include the services and management interfaces needed to meet
the security requirements of a broad user population.
When IPsec is correctly implemented and deployed, it ought not adversely affect users, hosts,
and other Internet components that do not employ IPsec for traffic protection. IPsec security
protocols (AH and ESP, and to a lesser extent, IKE) are designed to be cryptographic
algorithm independent. This modularity permits selection of different sets of cryptographic
algorithms as appropriate, without affecting the other parts of the implementation.
For example, different user communities may select different sets of cryptographic algorithms
(creating cryptographically-enforced cliques) if required.
To facilitate interoperability in the global Internet, a set of default cryptographic algorithms
for use with AH and ESP is specified in [Eas05] and a set of mandatory-to-implement
algorithms for IKEv2 is specified in [Sch05]. [Eas05] and [Sch05] will be periodically
updated to keep pace with computational and cryptologic advances. By specifying these
algorithms in documents that are separate from the AH, ESP, and IKEv2 specifications, these
algorithms can be updated or replaced without affecting the standardization progress of the
rest of the IPsec document suite. The use of these cryptographic algorithms, in conjunction
with IPsec traffic protection and key management protocols, is intended to permit system and
application developers to deploy high quality, Internet-layer, cryptographic security
technology.
Threats that are faced in an IP environment can be listed as:
IP Sniffing
IP Spoofing
IP Hijacking
DoS Attack
IPsec and IPsec architecture is defined to be able to avoid the listed threats or at least make
the network or the connection resistant to such attacks.
1. The IP Authentication Header (AH) [Ken05b] offers integrity and data origin
authentication, with optional (at the discretion of the receiver) anti-replay features.
2. The Encapsulating Security Payload (ESP) protocol [Ken05a] offers the same set of
services, and also offers confidentiality. Use of ESP to provide confidentiality without
integrity is NOT RECOMMENDED. When ESP is used with confidentiality enabled,
there are provisions for limited traffic flow confidentiality, i.e. provisions for concealing
packet length, and for facilitating efficient generation and discard of dummy packets.
This capability is likely to be effective primarily in virtual private network (VPN) and
overlay network contexts.
3. Both AH and ESP offer access control, enforced through the distribution of
cryptographic keys and the management of traffic flows as dictated by the Security
Policy Database (SPD, Section).
AH provides authentication for as much of the IP header as possible, as well as for next level
protocol data. However, some IP header fields may change in transit and the value of these
fields, when the packet arrives at the receiver, may not be predictable by the sender. The
values of such fields cannot be protected by AH. Thus, the protection provided to the IP
header by AH is piecemeal.
AH may be applied alone, in combination with the IP Encapsulating Security Payload (ESP)
[Ken-ESP], or in a nested fashion.
Security services can be provided between a pair of communicating hosts, between a pair of
communicating security gateways, or between a security gateway and a host. ESP may be
used to provide the same anti-replay and similar integrity services, and it also provides a
confidentiality (encryption) service. The primary difference between the integrity provided by
ESP and AH is the extent of the coverage. Specifically, ESP does not protect any IP header
fields unless those fields are encapsulated by ESP (e.g., via use of tunnel mode).
By using AH the following can be achieved:
Message authentication
Integrity
Replay avoidance
NO privacy
The anti-replay service may be selected for an SA only if the integrity service is selected for
that SA. The selection of this service is solely at the discretion of the receiver and thus need
not be negotiated. However, to make use of the Extended Sequence Number feature in an
interoperable fashion, ESP does impose a requirement on SA management protocols to be
able to negotiate this feature.
The traffic flow confidentiality (TFC) service generally is effective only if ESP is employed
in a fashion that conceals the ultimate source and destination addresses of correspondents,
e.g., in tunnel mode between security gateways, and only if sufficient traffic flows between
IPsec peers (either naturally or as a result of generation of masking traffic) to conceal the
characteristics of specific, individual subscriber traffic flows. (ESP may be employed as part
of a higher-layer TFC system, e.g., Onion Routing [Syverson], but such systems are outside
the scope of this standard.) New TFC features present in ESP facilitate efficient generation
and discarding of dummy traffic and better padding of real traffic, in a backward-compatible
fashion.
The following security aspects are achieved by ESP:
Message authentication
Integrity
Replay avoidance
Privacy
Figure 8-3 Applying ESP in Tunnel mode and Transport Mode Example: IMS
Each entry in the SA Database (SAD) (Section 4.4.2) must indicate whether the SA lookup
makes use of the destination IP address, or the destination and source IP addresses, in addition
to the SPI. For multicast SAs, the protocol field is not employed for SA lookups. For each
inbound, IPsec-protected packet, an implementation must conduct its search of the SAD such
that it finds the entry that matches the "longest" SA identifier. In this context, if two or more
SAD entries match based on the SPI value, then the entry that also matches based on
destination address, or destination and source address (as indicated in the SAD entry) is the
"longest" match. This implies a logical ordering of the SAD search as follows:
1.
Search the SAD for a match on the combination of SPI, destination address, and source
address. If an SAD entry matches, then process the inbound packet with that matching
SAD entry. Otherwise, proceed to step 2.
2.
Search the SAD for a match on both SPI and destination address. If the SAD entry
matches, then process the inbound packet with that matching SAD entry. Otherwise,
proceed to step 3.
3.
Search the SAD for a match on only SPI if the receiver has chosen to maintain a single
SPI space for AH and ESP, and on both SPI and protocol, otherwise. If an SAD entry
matches, then process the inbound packet with that matching SAD entry. Otherwise,
discard the packet and log an auditable event.
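The three-step search order above can be sketched as follows. This is a minimal illustration, not a real IPsec implementation: the `SadEntry` type, the string representation of addresses, and the convention that an unused identifier field is `None` are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SadEntry:
    """Hypothetical SAD entry; dst/src are None when they are not
    part of this SA's identifier (the three cases described above)."""
    spi: int
    dst: Optional[str] = None
    src: Optional[str] = None

def sad_lookup(sad, spi, dst, src):
    # Step 1: longest match -- SPI, destination address, and source address.
    for e in sad:
        if e.spi == spi and e.dst == dst and e.src == src:
            return e
    # Step 2: SPI and destination address only.
    for e in sad:
        if e.spi == spi and e.dst == dst and e.src is None:
            return e
    # Step 3: SPI alone (assuming a single SPI space for AH and ESP).
    for e in sad:
        if e.spi == spi and e.dst is None and e.src is None:
            return e
    return None  # caller discards the packet and logs an auditable event
```

A multicast SA with a 3-tuple identifier is found in step 1; an Any-Source Multicast SA in step 2; a unicast SA identified by SPI alone in step 3.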
In practice, an implementation may choose any method (or none at all) to accelerate this
search, although its externally visible behavior MUST be functionally equivalent to having
searched the SAD in the above order. For example, a software-based implementation could
index into a hash table by the SPI. The SAD entries in each hash table bucket's linked list
could be kept sorted to have those SAD entries with the longest SA identifiers first in that
linked list. Those SAD entries having the shortest SA identifiers could be sorted so that they
are the last entries in the linked list. A hardware-based implementation may be able to effect
the longest match search intrinsically, using commonly available Ternary
Content-Addressable Memory (TCAM) features.
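The software approach just described might look like the sketch below: a hash table indexed by SPI, with each bucket kept sorted so that entries with the longest SA identifiers come first. The entry layout and function names are hypothetical, chosen only for illustration.

```python
from collections import defaultdict, namedtuple

# Hypothetical entry: dst/src are None when not part of the SA identifier.
Entry = namedtuple("Entry", "spi dst src")

def id_length(e):
    # 3-tuple (SPI, dst, src) ranks above 2-tuple (SPI, dst), above SPI alone.
    return (e.dst is not None) + (e.src is not None)

def build_index(sad):
    index = defaultdict(list)
    for e in sad:
        index[e.spi].append(e)
    for bucket in index.values():
        bucket.sort(key=id_length, reverse=True)  # longest identifiers first
    return index

def lookup(index, spi, dst, src):
    for e in index.get(spi, ()):
        # Fields that are None in an entry act as wildcards for that entry.
        if (e.dst is None or e.dst == dst) and (e.src is None or e.src == src):
            return e  # first hit in a sorted bucket is the longest match
    return None
```

A TCAM-based implementation obtains the same longest-match behavior in hardware by masking out the fields an entry does not use.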
The indication of whether source and destination address matching is required to map inbound
IPsec traffic to SAs MUST be set either as a side effect of manual SA configuration or via
negotiation using an SA management protocol, e.g., IKE or Group Domain of Interpretation
(GDOI) [RFC3547]. Typically, Source-Specific Multicast (SSM) [HC03] groups use a 3-tuple
SA identifier composed of an SPI, a destination multicast address, and source address. An
Any-Source Multicast group SA requires only an SPI and a destination multicast address as an
identifier.
If different classes of traffic (distinguished by Differentiated Services Code Point (DSCP) bits
[NiBlBaBL98], [Gro02]) are sent on the same SA, and if the receiver is employing the
optional anti-replay feature available in both AH and ESP, this could result in inappropriate
discarding of lower priority packets due to the windowing mechanism used by this feature.
Therefore, a sender SHOULD put traffic of different classes, but with the same selector
values, on different SAs to support Quality of Service (QoS) appropriately. To permit this, the
IPsec implementation MUST permit establishment and maintenance of multiple SAs between
a given sender and receiver, with the same selectors. Distribution of traffic among these
parallel SAs to support QoS is locally determined by the sender and is not negotiated by IKE.
The receiver MUST process the packets from the different SAs without prejudice. These
requirements apply to both transport and tunnel mode SAs. In the case of tunnel mode SAs,
the DSCP values in question appear in the inner IP header. In transport mode, the DSCP value
might change en route, but this should not cause problems with respect to IPsec processing
since the value is not employed for SA selection and MUST NOT be checked as part of
SA/packet validation. However, if significant re-ordering of packets occurs in an SA, e.g., as a
result of changes to DSCP values en route, this may trigger packet discarding by a receiver
due to application of the anti-replay mechanism.
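A sender's local mapping from traffic class to parallel SA might look like the following sketch. The class names, DSCP values, and SPIs here are illustrative local policy, not mandated by the standard; the SAs are assumed to share identical selectors.

```python
# Hypothetical parallel SAs: identical selectors, one SA (identified here
# only by its SPI) per traffic class, each with its own anti-replay window.
PARALLEL_SAS = {"EF": 0x1001, "AF41": 0x1002, "BE": 0x1003}

# Illustrative local mapping from DSCP values to traffic classes.
DSCP_TO_CLASS = {46: "EF", 34: "AF41", 0: "BE"}

def select_sa(dscp):
    cls = DSCP_TO_CLASS.get(dscp, "BE")  # unknown DSCPs fall back to best effort
    return PARALLEL_SAS[cls]
```

Because each class travels on its own SA, reordering between classes cannot advance any single SA's anti-replay window past the lower-priority packets.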
An SA can be used in one of two modes:
- Transport mode
- Tunnel mode
IKE creates pairs of SAs, so for simplicity, we choose to require that both SAs in a pair be of
the same mode, transport or tunnel.
A transport mode SA is an SA typically employed between a pair of hosts to provide
end-to-end security services. When security is desired between two intermediate systems
along a path (vs. end-to-end use of IPsec), transport mode MAY be used between security
gateways or between a security gateway and a host. In the case where transport mode is used
between security gateways or between a security gateway and a host, transport mode may be
used to support in-IP tunneling (e.g., IP-in-IP [Per96] or Generic Routing Encapsulation
(GRE) tunneling [FaLiHaMeTr00] or dynamic routing [ToEgWa04]) over transport mode
SAs. To clarify, the use of transport mode by an intermediate system (e.g., a security gateway)
is permitted only when applied to packets whose source address (for outbound packets) or
destination address (for inbound packets) is an address belonging to the intermediate system
itself. The access control functions that are an important part of IPsec are significantly limited
in this context, as they cannot be applied to the end-to-end headers of the packets that traverse
a transport mode SA used in this fashion. Thus, this way of using transport mode should be
evaluated carefully before being employed in a specific context.
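The address restriction just described could be checked as in the sketch below. The address set and function name are hypothetical; a real implementation would derive the local address set from its interface configuration.

```python
# Hypothetical set of addresses owned by the intermediate system itself.
LOCAL_ADDRESSES = {"192.0.2.1", "198.51.100.1"}

def transport_mode_permitted(direction, src, dst, local=LOCAL_ADDRESSES):
    """An intermediate system may apply transport mode only to packets it
    originates (outbound) or terminates (inbound) itself."""
    if direction == "outbound":
        return src in local
    if direction == "inbound":
        return dst in local
    raise ValueError("direction must be 'inbound' or 'outbound'")
```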
In IPv4, a transport mode security protocol header appears immediately after the IP header
and any options, and before any next layer protocols (e.g., TCP or UDP). In IPv6, the security
protocol header appears after the base IP header and selected extension headers, but may
appear before or after destination options; it MUST appear before next layer protocols (e.g.,
TCP, UDP, Stream Control Transmission Protocol (SCTP)). In the case of ESP, a transport
mode SA provides security services only for these next layer protocols, not for the IP header
or any extension headers preceding the ESP header. In the case of AH, the protection is also
extended to selected portions of the IP header preceding it, selected portions of extension
headers, and selected options (contained in the IPv4 header, IPv6 Hop-by-Hop extension
header, or IPv6 Destination extension headers). For more details on the coverage afforded by
AH, see the AH specification.

A tunnel mode SA is essentially an SA applied to an IP tunnel, with the access controls
applied to the headers of the traffic inside the tunnel. Two hosts MAY establish a tunnel mode
SA between themselves. Aside from the two exceptions below, whenever either end of a
security association is a security gateway, the SA MUST be tunnel mode. Thus, an SA
between two security gateways is typically a tunnel mode SA, as is an SA between a host and
a security gateway.
The two exceptions are as follows:
- Where traffic is destined for a security gateway, e.g., Simple Network Management
  Protocol (SNMP) commands, the security gateway is acting as a host and transport mode
  is allowed. In this case, the SA terminates at a host (management) function within a
  security gateway and thus merits different treatment.
- As noted above, security gateways MAY support a transport mode SA to provide security
  for IP traffic between two intermediate systems along a path, e.g., between a host and a
  security gateway or between two security gateways.
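The mode rule and its two exceptions can be condensed into a small decision function. This is a sketch with hypothetical parameter names; a real implementation would derive these conditions from its security policy database rather than from boolean flags.

```python
def allowed_modes(either_end_is_gateway, management_traffic=False,
                  intermediate_protection=False):
    """Return the set of SA modes permitted by the rule above."""
    if not either_end_is_gateway:
        return {"transport", "tunnel"}   # two hosts MAY use either mode
    if management_traffic or intermediate_protection:
        return {"transport", "tunnel"}   # the two exceptions
    return {"tunnel"}                    # gateway SAs MUST otherwise be tunnel mode
```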
Several concerns motivate the use of tunnel mode for an SA involving a security gateway.
For example, if there are multiple paths (e.g., via different security gateways) to the same
destination behind a security gateway, it is important that an IPsec packet be sent to the
security gateway with which the SA was negotiated. Similarly, a packet that might be
fragmented en route must have all the fragments delivered to the same IPsec instance for
reassembly prior to IPsec processing.
In summary:
- A host implementation of IPsec MUST support both transport and tunnel mode. This is
  true for native, BITS, and BITW implementations for hosts.
- A security gateway MUST support tunnel mode and MAY support transport mode. If it
  supports transport mode, that should be used only when the security gateway is acting as
  a host, e.g., for network management, or to provide security between two intermediate
  systems along a path.
Figure 8-4 shows a general view of where in the protocol stack security can be applied, as
well as a general view of the IPsec architecture, including security gateways.
Manual key-management techniques often employ statically configured, symmetric keys,
though other options also exist.