
TCP/IP in mobile world


Contents
1 Network Overview ....................................................................................................................1-1
About This Chapter ............................................................................................................................................. 1-1
1.1 General mobile networks architecture and usage of IP ................................................................................. 1-1
1.2 Comparison of PS and CS networks ............................................................................................................. 1-5
1.3 Identities (e.164, e.212, FQDN, etc) ............................................................................................................. 1-6
1.4 Codecs (AMR, PCM, G.711, etc) and transcoding ....................................................................................... 1-7
1.5 Overview of redundancy and multi-homing.................................................................................................. 1-8

2 Internet Protocol, IP .................................................................................................................2-10


2.1 IP address structure: .................................................................................................................................... 2-10
2.2 Private addresses in IPv4 and IPv6 ............................................................................................................. 2-14
2.3 IP routing principles .................................................................................................................................... 2-14
2.4 Comparison of IPv4 and IPv6 functionalities ............................................................................................. 2-16
2.5 Overview of routing protocols: ................................................................................................................... 2-20

3 Transport Protocols: TCP, SCTP and UDP..........................................................................3-24


3.1 Functions of transport Protocols: ................................................................................................................ 3-24
3.2 Usage of port numbers ................................................................................................................................ 3-25
3.3 User Datagram Protocol, UDP .................................................................................................................... 3-26
3.4 Transmission Control Protocol, TCP ........................................................................................................... 3-27
3.5 Stream Control Transport Protocol, SCTP .................................................................................................. 3-30
3.6 Transport Protocols and comparison of functionalities ............................................................................... 3-33

4 Applications ..............................................................................................................................4-35
4.1 Application Layer General ....................................................................................................................... 4-35
4.2 Dynamic Host Configuration Protocol DHCP ......................................................................................... 4-35
4.3 Domain Name System- DNS ...................................................................................................................... 4-37
4.4 Internet Service Provider and its Services ................................................................................................... 4-41
4.5 Address translation, NAT ............................................................................................................................ 4-42
4.6 IP-based application providing user services: email, web browsing, ftp, etc .............................................. 4-44

5 Multimedia over IP (MoIP) ....................................................................................................5-45


5.1 Network architectures for SIP-domain (RFC 3261) .................................................................................... 5-45
5.2 Session Initiation Protocol-SIP ................................................................................................................... 5-49
5.3 Session Description Protocol SDP ........................................................................................................... 5-52


5.4 Real Time transport Protocol RTP and RTCP .......................................................................................... 5-54
5.5 Interworking between SIP-based networks and the PSTN/ISDN networks ................................................ 5-57

6 IP in GPRS/UMTS/EPS ...........................................................................................................6-58
6.1 Remote access and login procedures for mobile users: ............................................................................... 6-58
6.2 PDP Context Activation .............................................................................................................................. 6-60
6.3 Default Bearer set-up .................................................................................................................................. 6-61
6.4 User profile.................................................................................................................................................. 6-62
6.5 Access Point Name APN .......................................................................................................................... 6-62
6.6 QoS for the IP connectivity of a UE ............................................................................................................ 6-63
6.7 Roaming ...................................................................................................................................................... 6-63
6.8 The GPRS Tunneling Protocol GTP......................................................................................................... 6-65

7 Quality of Service.....................................................................................................................7-67
7.1 QoS definition in HSPA: Conversational, Streaming, Interactive and background .................................... 7-67
7.2 QoS definition in GPRS/EPS: MBR, GBR, ARP, THI and QCI ................................................................. 7-70
7.3 TFTs and its usage ..................................................................................................................................... 7-73
7.4 IP QoS General Network Edge .............................................................................................................. 7-76
7.5 QoS provisioning: ....................................................................................................................................... 7-92

8 Network Security .....................................................................................................................8-97


8.1 Security requirements faced by modern telecom networks, IPsec .............................................................. 8-97
8.2 Overview of security protocols ................................................................................................................... 8-98
8.3 Security Association (SA) ......................................................................................................................... 8-102
8.4 Security Association and Key Management .............................................................................................. 8-106
8.5 Cryptographic algorithms for authentication and encryption .................................................................... 8-107


Figures
Figure 1-1 Mobile Network Overview (R99) ..................................................................................................... 1-2
Figure 1-2 LTE/EPS Network Overview ............................................................................................................ 1-3
Figure 1-3 Internet Standardization Bodies ........................................................................................................ 1-3
Figure 1-4 OSI vs IP Stack ................................................................................................................................. 1-4
Figure 1-5 Layered Communication in TCP/IP.................................................................................................. 1-5
Figure 1-6 Identities ........................................................................................................................................... 1-7
Figure 1-7 Codecs .............................................................................................................................................. 1-8
Figure 2-1 IPv4 Addressing, RFC 791 ............................................................................................................. 2-10
Figure 2-2 IPv6 Addressing, RFC 4291 ........................................................................................................... 2-10
Figure 2-3 IPv6 Abbreviation Choices ............................................................................................................. 2-11
Figure 2-4 IPv6 Address Structure ................................................................................................................... 2-11
Figure 2-5 Global Unicast Address Format (RFC 4291) .................................................................................. 2-11
Figure 2-6 Unique Local Address Space (ULA) (RFC 4193) .......................................................................... 2-11
Figure 2-7 Link Local Address Space (RFC 4862) .......................................................................................... 2-12
Figure 2-8 Multicast Address Space (RFC 4291) ............................................................................................. 2-12
Figure 2-9 IPv6 Address Structure ................................................................................................................... 2-12
Figure 2-10 Comparison between DHCPv4 and DHCPv6 ............................................................................... 2-13
Figure 2-11 IPv6 Address Allocation by IANA ................................................................................................ 2-13
Figure 2-12 IPv6 Private Addresses, Link Local Address (RFC 4862) ............................................................ 2-14
Figure 2-13 Direct and Indirect Routing principle ........................................................................................... 2-15
Figure 2-14 Example of a network ................................................................................................................... 2-15
Figure 2-15 IPv4 Header .................................................................................................................................. 2-16
Figure 2-16 IPv6 Header .................................................................................................................................. 2-19
Figure 3-1 Layered Communication ................................................................................................................ 3-25
Figure 3-2 Internet Service Provider (ISP) ....................................................................................................... 3-26
Figure 3-3 UDP Header .................................................................................................................................... 3-27



Figure 3-4 TCP Header..................................................................................................................................... 3-27
Figure 3-5 Flags and Window in TCP .............................................................................................................. 3-29
Figure 3-6 TCP Traffic Case and the usage of flags ......................................................................................... 3-30
Figure 3-7 SCTP Header .................................................................................................................................. 3-31
Figure 3-8 SCTP Traffic Case .......................................................................................................................... 3-31
Figure 3-9 SCTP Chunk Types ......................................................................................................................... 3-32
Figure 3-10 SCTP-SACK Chunk ..................................................................................................................... 3-33
Figure 3-11 SACK usage example ................................................................................................................... 3-33
Figure 3-12 SACK header in the above figure ................................................................................................. 3-33
Figure 4-1 TCP/IP Applications ....................................................................................................................... 4-35
Figure 4-2 DHCP Traffic Case ......................................................................................................................... 4-37
Figure 4-3 Domain Name System, RFC 1034, RFC 1035 ............................................................................... 4-38
Figure 4-4 Top Level Domains TLD ............................................................................................................. 4-39
Figure 4-5 ccTLD (Part 1) ................................................................................................................................ 4-40
Figure 4-6 ccTLD (Part 2) ................................................................................................................................ 4-40
Figure 4-7 ccTLD (Part 3) ................................................................................................................................ 4-40
Figure 4-8 TCP/IP Applications and ISP .......................................................................................................... 4-42
Figure 4-9 Network Address Translation NAT .............................................................................................. 4-43
Figure 4-10 NAT Address Translation and Port Generation ............................................................................. 4-44
Figure 4-11 IP based applications ..................................................................................................................... 4-44
Figure 5-1 SIP Domain Architecture ................................................................................................................ 5-48
Figure 5-2 SIP Traffic Case .............................................................................................................................. 5-48
Figure 5-3 Syntax of SIP Method, INVITE ...................................................................................................... 5-50
Figure 5-4 SIP Methods and RFCs ................................................................................................................... 5-51
Figure 5-5 SIP headers ..................................................................................................................................... 5-51
Figure 5-6 SIP Responses ................................................................................................................................. 5-52
Figure 5-7 Session Description Protocol SDP ............................................................................................... 5-53
Figure 5-8 SDP General Protocol Description .............................................................................................. 5-53
Figure 5-9 SDP Media Description ............................................................................................................... 5-53
Figure 5-10 RTP Header ................................................................................................................................... 5-55
Figure 5-11 RTCP Sender Report ..................................................................................................................... 5-55
Figure 5-12 RTCP Receiver Report .................................................................................................................. 5-57
Figure 5-13 PSTN Breakout ............................................................................................................................. 5-57


Figure 6-1 IP Connectivity in GPRS/UMTS .................................................................................................... 6-59


Figure 6-2 IP connectivity in EPS .................................................................................................................... 6-59
Figure 6-3 Multiple PDN Connections ............................................................................................................. 6-60
Figure 6-4 PDP Context Activation in GPRS/UMTS ....................................................................................... 6-61
Figure 6-5 Default Bearer Activation in EPS ................................................................................................... 6-62
Figure 6-6 User profile for EPS user ................................................................................................................ 6-62
Figure 6-7 Access Point Name APN.............................................................................................................. 6-63
Figure 6-8 Roaming and Home Routed Traffic ................................................................................................ 6-64
Figure 6-9 Roaming and Local Breakout ......................................................................................................... 6-65
Figure 6-10 GPRS Tunneling Protocol GTP ................................................................................................. 6-66
Figure 7-1 QoS classes in UMTS ..................................................................................................................... 7-67
Figure 7-2 QCI and some examples ................................................................................................................. 7-70
Figure 7-3 EPS Bearers and QoS ..................................................................................................................... 7-71
Figure 7-4 EPS QoS Concept and AMBR ........................................................................................................ 7-73
Figure 7-5 QoS Concept and an example of TFT ............................................................................................. 7-75
Figure 7-6 EPS QoS Concept, IP and UDP headers ......................................................................................... 7-76
Figure 7-7 RTP Header Compression ............................................................................................................... 7-78
Figure 7-8 FIFO, First In First Out Queuing .................................................................................................... 7-80
Figure 7-9 Weighted Fair Queuing, WFQ ........................................................................................................ 7-81
Figure 7-10 Custom Queuing ........................................................................................................................... 7-82
Figure 7-11 Priority Queuing............................................................................................................................ 7-83
Figure 7-12 RTP queuing diagram ................................................................................................................... 7-84
Figure 7-13 Relation between WRED and queue mechanism .......................................................................... 7-86
Figure 7-14 DS Field and ToS Byte ................................................................................................................. 7-87
Figure 7-15 Measuring the traffic with Token Bucket ...................................................................................... 7-88
Figure 7-16 Traffic Shaping Diagram .............................................................................................................. 7-90
Figure 7-17 Traffic Shaping Implementation ................................................................................................... 7-90
Figure 7-18 Line Rate (LR) processing Diagram ............................................................................................. 7-91
Figure 7-19 Diffserv Architecture .................................................................................................................... 7-93
Figure 7-20 MPLS General architecture .......................................................................................................... 7-96
Figure 8-1 Authentication Header AH ........................................................................................................... 8-99
Figure 8-2 Encapsulation Security Payload - ESP ......................................................................................... 8-101
Figure 8-3 Applying ESP in Tunnel mode and Transport Mode Example: IMS ......................................... 8-102



Figure 8-4 Security Association and IPsec Architecture ................................................................................ 8-106


Tables
Table 1-1 Some key differences between switch and router ............................................................................... 1-5
Table 2-1 Example of a routing table ................................................................................................................ 2-16
Table 2-2 IPv4 and IPv6 main differences ........................................................................................................ 2-19
Table 3-1 Transport Protocols Summary of functions general....................................................................... 3-34
Table 7-1 End to end Delay Budget .................................................................................................................. 7-77
Table 7-2 Codec Type and Sample Size Effects on Bandwidth ........................................................................ 7-77
Table 7-3 Assured Forwarding (AF) Behavior Group ...................................................................................... 7-93


1 Network Overview

About This Chapter


This chapter gives a general overview of mobile network architecture and the usage of IP in such a network. A comparison between PS and CS networks is presented, as well as a comparison between a switch and a router, the main devices used to build up these networks. Identities and their usage in different networks are discussed, as are codecs and their usage in the CS domain. Furthermore, redundancy issues in the CS/PS networks are addressed.

1.1 General mobile networks architecture and usage of IP


As shown in Figure 1-1, a mobile network is built up of two subsystems: the Access Network and the Core Network. The access network is called GERAN in 2G and UTRAN in 3G. The end-user device is called MS in GSM and GPRS, while it is called UE in UMTS networks from R99 onwards. The core network for the CS domain is used to set up CS bearers, while the core network for the PS domain (SGSN and GGSN) provides the possibility to set up PS bearers. The assumption in this course is that the core network is a PS core. On the right side of the figure we see two examples of PDNs (Packet Data Networks): the PSTN representing a CS network and the Internet representing a PS PDN. The standardization organisations for these PDNs are the IETF for the PS domain (e.g. the Internet) and the ITU for the CS domain.


Figure 1-1 Mobile Network Overview (R99)

[Figure: the GERAN access network (MS, BTS, BSC) and the UTRAN access network (UE, NodeB, RNC) connect to the Core Network, whose CS domain (MSC, GMSC, HLR/HSS) interfaces the PSTN and whose PS domain (SGSN, GGSN) interfaces Packet Data Networks such as the Internet, intranets and the IMS.]

Figure 1-2 shows a general overview of the LTE/EPS network.


Long Term Evolution (LTE) was originally the name of a project within 3GPP to improve mobile system standards to cope with future requirements. Goals included:
- improving efficiency
- lowering costs
- reducing complexity
- improving service performance
- making use of new spectrum opportunities, and
- better integration with other open standards (e.g. WLAN).

The outcome of this project was a set of specifications defining the functionality and requirements of an evolved, packet-based radio access network and a new radio access technology. The new radio access network is referred to as the Evolved UTRAN (E-UTRAN).
In parallel to, and coordinated with, the LTE project there was also a project for the core network. This project, called System Architecture Evolution (SAE), standardized the new Evolved Packet Core (EPC), aiming at a packet-optimized system with higher data rates and lower latency that supports multiple RATs.
Please note that the EPC is a fully IP-based core network (all-IP) supporting access not only via GERAN, UTRAN and E-UTRAN but also via WiFi, WiMAX and wired technologies such as xDSL.
The combination of the E-UTRAN and the EPC is referred to as the Evolved Packet System (EPS). However, in daily life the term LTE is often used more or less synonymously with the Evolved UTRAN, and sometimes with the entire EPS.


Figure 1-2 LTE/EPS Network Overview

Figure 1-3 Internet Standardization Bodies

In the figure above we see the organization chart for standardization in the Internet community. At the top is ISOC, which has the overall responsibility for the Internet infrastructure. Below ISOC we find the IAB, which is responsible for three other organizations: ICANN, IRTF and IETF.


ICANN is responsible for making sure that IP addresses and web addresses are globally unique and manages five regional registries, namely ARIN, RIPE, APNIC, LACNIC and AFRINIC.
IRTF is responsible for research and manages Research Groups, while IETF is responsible for implementation and manages Working Groups.
Figure 1-4 OSI vs IP Stack

In the picture above we see a comparison between the OSI model and the TCP/IP protocol stack.
The OSI model is a theoretical reference model that is not implemented as such in reality. Every protocol stack refers to it, since the OSI model gives a detailed description of the responsibility and functionality of every layer. The Application Layer in the TCP/IP protocol stack corresponds to the top 3 layers of the OSI model, while the bottom 2 layers of the OSI model correspond to the Network Access layer in the TCP/IP protocol stack.


Figure 1-5 Layered Communication in TCP/IP

Layered communication means that two peer protocols exchange information logically, while the packet physically passes through the stack. Every layer adds its own header to the packet, and that header is opened and processed by the same layer on the corresponding other side.
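As a rough illustration of this encapsulation, here is a minimal sketch (the text prefixes stand in for real binary headers, which are defined by the respective RFCs) showing how each layer prepends its own header on the way down and strips it again on the way up:

    # Minimal sketch of layered encapsulation (illustrative only; real TCP/IP
    # headers are binary structures, not text prefixes).
    def send(app_data: bytes) -> bytes:
        segment = b"TCP|" + app_data          # transport layer adds its header
        packet = b"IP|" + segment             # network layer adds its header
        frame = b"ETH|" + packet              # link layer adds its header
        return frame                          # what is put on the wire

    def receive(frame: bytes) -> bytes:
        packet = frame.removeprefix(b"ETH|")  # link layer strips its header
        segment = packet.removeprefix(b"IP|") # network layer strips its header
        return segment.removeprefix(b"TCP|")  # transport layer strips its header

    assert receive(send(b"hello")) == b"hello"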

1.2 Comparison of PS and CS networks


Below is a table summarizing the main differences in characteristics between a switch and a
router.
Table 1-1 Some key differences between switch and router
Characteristic          Switch               Router
Address Info            Telephone Number     IP Address
Address Used in         Set-up               In each Packet
At Congestion           No Connection        Delay
Delay                   Constant             Variable
Transport Network       Stateful             Stateless
Cost (Price per MB)     Expensive            Less Expensive


Address Info: The address information in a switch is a telephone number (an E.164 address), while the addressing used by a router is the IP address. The difference is that E.164 addresses are globally unique and never change, while IP addresses change and may be private addresses.

Address Used in: In a switch the address is basically used only during the set-up phase, when the switches need to create circuits. After the set-up phase all user-data traffic is passed through the circuit created during set-up, so the address is no longer needed. Every IP packet, on the other hand, carries IP addresses in its header: the sender's IP address and the destination's IP address. A router needs to read the destination IP address of every passing IP packet in order to route the packet to the right destination (see the routing section for more information). As a consequence, the router has to read the IP address of every packet it processes.

At Congestion: A switch is characterized by processing capacity and the number of inputs/outputs. When the switch is congested it cannot set up more circuits and consequently cannot create new connections. A router, however, starts to buffer incoming packets when it is congested. A router is characterized by processing capacity and memory, so in a congested situation the buffer memory fills up; once the buffer is full, newly arriving packets are delayed or discarded.

Delay: As a result of the characteristics described above, the delay in a switch is constant (and low), while the delay in a router is variable, depending on how congested the router is at that moment in time.

Transport Network: Since circuits are set up in a switch and the user-plane traffic is sent over the reserved circuit, a switch is considered stateful (it knows the state). A router, however, looks up the destination IP address in its routing table for every passing packet and decides an output for that packet at that point in time. This characteristic is called stateless (it does not keep per-connection state). Packets belonging to the same flow can therefore take different paths.

Cost (Price per MB): Cost here refers to the cost/price per MByte. Since a switch reserves a circuit, that capacity is reserved even if the users are not sending anything in the user plane. For example, in a normal telephone conversation only part of the capacity of the reserved full-duplex circuit is used, because one user talks while the other listens and vice versa. Statistics show that only around 35% of the capacity is used in an average phone conversation. In the PS domain, with a router as the network element, the codecs generate many IP packets when a user talks, while they basically generate only small refresh packets when a user just listens. As a result, a router makes better use of the overall resources and gives a lower cost per MByte.

1.3 Identities (e.164, e.212, FQDN, etc)


A mobile device is identified by different identities depending on the subsystem. Below are some examples of identities for a mobile device:

1. IMSI (International Mobile Subscriber Identity) is an identity used within the network (Core Network) following the E.212 numbering plan. The IMSI is 14 or 15 digits long and resides inside the (U)SIM card. For security reasons the network elements replace the IMSI with a temporary number called TMSI (Temporary Mobile Subscriber Identity) or P-TMSI (Packet TMSI).


2. MSISDN (Mobile Station ISDN Number) is an identity used primarily outside the mobile network for interconnection with legacy networks such as the PSTN. The MSISDN follows the E.164 numbering plan, which is also used in the PSTN.

3. FQDN (Fully Qualified Domain Name) is an identity used in PS networks that points out a machine or a domain, e.g. www.company.com. An FQDN is built up of a Top Level Domain (com), a domain name (company) and a host/application label (www), with dots in between.

4. The IP address is the identity of the device at the IP layer and will be examined in detail in the next chapter. The end-user device may use an IPv4 or an IPv6 address.

Figure 1-6 Identities

1.4 Codecs (AMR, PCM, G.711, etc) and transcoding


A codec is a piece of software/hardware that turns a bit-stream into media; in other words, a media format is described by its codec. For example, MP3 is a codec that turns a stream of bits into music. There are hundreds of codecs on the market, and many of them are proprietary to the products they are used in, such as the Skype codec, the Viber codec, etc.
Below are some examples of codecs:

PCM codec:
A very simple and fast codec with a sampling rate of 8000 samples per second and every sample represented by 8 bits: 8000 × 8 = 64 kbit/s.
Codecs used in GSM:
Full Rate / Half Rate / Enhanced Full Rate: Since the 64 kbit/s produced by a PCM codec is too much to be sent over the GSM radio interface, three new codecs were introduced in GSM. These codecs give different, lower bit rates.

Adaptive Multi-Rate (AMR) and Wideband AMR:
AMR was introduced in R99. It defines 16 different frame types, of which 8 are used for user data, some are used for compatibility and some are reserved for future use. The idea is that the codec mode can be changed every 20 ms based on the radio conditions.
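As a small worked example, the sketch below reproduces the 64 kbit/s PCM figure and, assuming the commonly quoted AMR narrowband mode bit-rates (the mode list is an assumption for illustration, not taken from this document), the number of bits carried in one 20 ms speech frame:

    # PCM: 8000 samples/s, 8 bits per sample
    pcm_bitrate = 8000 * 8
    print(pcm_bitrate)          # 64000 bit/s = 64 kbit/s

    # Assumed AMR narrowband modes (kbit/s) and the bits in one 20 ms frame
    amr_modes_kbps = [4.75, 5.15, 5.90, 6.70, 7.40, 7.95, 10.2, 12.2]
    for rate in amr_modes_kbps:
        bits_per_frame = rate * 1000 * 0.020
        print(f"AMR {rate:5.2f} kbit/s -> {bits_per_frame:.0f} bits per 20 ms frame")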


G.711 codec: A codec used for voice; it describes how to carry the output of a PCM codec (64 kbit/s) in a PCM channel having a capacity of exactly 64 kbit/s.
Codecs/formats in the IP stack: There are many codecs used in commercial applications and most of them are company specific, for example the PDF format used by Adobe products such as Adobe Acrobat Reader.

Figure 1-7 shows where some of the codecs are used in a mobile network.

Transcoding:
Transcoding is the operation of changing from one codec to another. This operation is needed for example in GSM, where on one side we have a PCM codec giving 64 kbit/s and on the other side FR/HR/EFR used by the GSM RAN (Radio Access Network) with other bit rates (12.2, 9.6 and 4.75 kbit/s). The transcoder in GSM was traditionally placed in the BTS (Base Transceiver Station), was later moved to the BSC (Base Station Controller), and is nowadays placed in the MSC (Mobile Switching Center). According to the specifications the transcoder is part of the GSM BSS (Base Station System), but in practice this BSS function ends up in the MSC. In UMTS the transcoding is done by the MSC, which is a network element belonging to the CS Core.
Figure 1-7 Codecs

1.5 Overview of redundancy and multi-homing


In IP networks, redundancy refers to a configuration in which alternative paths exist in case one path goes down or breaks. One option for building a redundant system is to use multi-homing.
Multi-homing is generally used to eliminate network connectivity as a potential Single Point Of Failure (SPOF).


Multi-homing variants:
Single Link, Multiple IP addresses (spaces)
The host has multiple IP addresses (e.g. 2001:db8::1 and 2001:db8::2 in IPv6) but only one physical upstream link. When the single link fails, connectivity is down for all addresses.

Multiple Interfaces, Single IP address per interface

The host has multiple interfaces and each interface has one or more IP addresses. If one of the links fails, its IP address becomes unreachable, but the other IP addresses still work. Hosts that publish multiple IPv6 or IPv4 address records can then still be reached, at the penalty of having the client program time out and retry on the broken address. Existing connections cannot be taken over by the other interface, as TCP does not support this. To remedy this one could use SCTP, which does allow it; however, SCTP is not used very much in practice. A new protocol based on TCP, Multipath TCP, taking the form of a TCP extension, was also being actively worked on at the IETF as of March 2012. It would also remedy this issue, as well as provide better performance by making use of every available network interface.

Multiple Links, Single IP address (space)

This is what is generally meant by multi-homing. With the use of a routing protocol, in most cases BGP, the end site announces this address space over its upstream links. When one of the links fails, the protocol notices this on both sides and traffic is no longer sent over the failing link. This method is usually employed to multi-home a site and not single hosts.

Multiple Links, Multiple IP addresses (spaces)

This approach uses a specialized Link Load Balancer (or WAN Load Balancer) appliance between the firewall and the link routers. No special configuration is required in the ISP's routers. It allows all links to be used at the same time to increase the total available bandwidth, and it detects link saturation and failures in real time in order to redirect traffic. Algorithms allow traffic management. Incoming balancing is usually performed with real-time DNS resolution. Another common use of this variant is to control routing between the separate address spaces used by each interface. This is often used for PC-server-based firewalls.


2 Internet Protocol, IP

2.1 IP address structure:


2.1.1 IPv4:
Figure 2-1 IPv4 Addressing, RFC 791

2.1.2 IPv6:
Figure 2-2 IPv6 Addressing, RFC 4291

Abbreviation choices:


Figure 2-3 IPv6 Abbreviation Choices
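As a quick check of the abbreviation rules, Python's standard ipaddress module can convert between the full and the compressed textual forms of an IPv6 address (the address below is from the 2001:db8::/32 documentation prefix, chosen purely for illustration):

    import ipaddress

    # Full form with leading zeros and a run of all-zero groups
    addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")

    print(addr.compressed)   # 2001:db8::1  (leading zeros dropped, one :: allowed)
    print(addr.exploded)     # 2001:0db8:0000:0000:0000:0000:0000:0001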

IPv6 Address Structure


Figure 2-4 IPv6 Address Structure

IPv6 Address types


Type indicated by the first few bits of the address (the prefix):

- Unicast: one to one
- Anycast: one to many, where it is enough that one member of the group receives the packet
- Multicast: one to many (all members of the group receive the packet)

Global Unicast Address Format (RFC 4291)

Figure 2-5 Global Unicast Address Format (RFC 4291)

Unique Local Address Space (ULA) (RFC 4193)

Figure 2-6 Unique Local Address Space (ULA) (RFC 4193)


Link Local Address Space (RFC 4862)


Figure 2-7 Link Local Address Space (RFC 4862)

Used only on a particular link, e.g. Ethernet; not routable.

Multicast Address Space (RFC 4291)

Figure 2-8 Multicast Address Space (RFC 4291)

Identifies a group of interfaces, typically on different nodes

All group members share the same group ID

An interface may have multiple multicast addresses

E.g. FF01::2 = the node-local all-routers address
FF05::1:3 = the site-local all-DHCP-servers address

IPv6 Address Allocation

Figure 2-9 IPv6 Address Structure

Stateless allocation, RFC 2462

The device autoconfigures its address without external interaction.
NW prefix: obtained by taking the network prefix of the subnet to which the device is connected. The Neighbour Discovery Protocol (RFC 2461) can be used for this task, providing the following functions:
- detecting the presence of other IPv6 nodes
- identifying their link-layer addresses
- discovering routers
- performing DAD (Duplicate Address Detection)
Interface Id: derived from the device's MAC address.
Stateful allocation, RFC 3315


DHCPv6 assigns the full 128-bit address to the client. It is not integrated with DHCPv4 and uses different message types. The following configurations can be used:
1. Auto allocation (A-DHCP): a permanent IP address is assigned to the client
2. Manual allocation (M-DHCP): a fixed IP address is assigned, based on the client's MAC address
3. Dynamic allocation (D-DHCP): an address is assigned for a limited time

Figure 2-10 Comparison between DHCPv4 and DHCPv6

Combination
The address is autoconfigured by the device (stateless), while other parameters are obtained from DHCPv6, e.g. DNS and NTP (Network Time Protocol) servers.

IPv6 Address Allocation by IANA


Figure 2-11 IPv6 Address Allocation by IANA


2.2 Private addresses in IPv4 and IPv6


IPv4 private addresses (RFC 1918)
The following ranges of IPv4 addresses are reserved for private usage (not routable on the public Internet):
- 10.0.0.0 - 10.255.255.255
- 172.16.0.0 - 172.31.255.255
- 192.168.0.0 - 192.168.255.255

IPv6 private addresses (RFC 4862)

- The Link Local Address Space is defined to be used as private IPv6 addressing. These addresses are used only on a particular link, e.g. Ethernet, and are not routable.

The structure of the private IPv6 addresses is shown in Figure 2-12.

Figure 2-12 IPv6 Private Addresses, Link Local Address (RFC 4862)
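The ipaddress module in the Python standard library knows about these reserved ranges, so a quick way to check whether a given address falls inside them is the following sketch (the addresses are arbitrary examples):

    import ipaddress

    examples = ["10.1.2.3", "172.20.0.1", "192.168.1.10",   # RFC 1918 ranges
                "8.8.8.8",                                   # public IPv4
                "fe80::1"]                                   # IPv6 link-local

    for text in examples:
        addr = ipaddress.ip_address(text)
        print(f"{text:15} private={addr.is_private}  link_local={addr.is_link_local}")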

2.3 IP routing principles


Direct Routing Versus Indirect Routing
Figure 2-13 shows the principle for direct and indirect routing.

Direct Routing:
Mapping of the IP address to the physical address, using Address Resolution Protocol,
ARP

Indirect Routing:
Forwarding of the packets by using the routing table


Figure 2-13 Direct and Indirect Routing principle

Figure 2-14 shows an example of some networks connected through routers.


Figure 2-14 Example of a network

Let's have a look at the routing table stored in the router at the interconnection of the 3 networks in the picture above, marked in red. The routing table may look as shown in Table 2-1:


Table 2-1 Example of a routing table

Above is a possible configuration stored in the routing table of the router marked in red. This router uses the routing table, in the case of indirect routing, to find the next hop.
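The routing table itself is not reproduced here, but the lookup it supports can be sketched as follows: for each packet the router picks the most specific (longest) matching prefix and forwards towards the associated next hop. The table entries below are invented for illustration:

    import ipaddress

    # Hypothetical routing table: (destination prefix, next hop)
    routing_table = [
        (ipaddress.ip_network("10.1.0.0/16"), "192.168.0.2"),   # indirect route
        (ipaddress.ip_network("10.1.5.0/24"), "192.168.0.3"),   # more specific route
        (ipaddress.ip_network("0.0.0.0/0"),   "192.168.0.1"),   # default route
    ]

    def lookup(destination: str) -> str:
        dest = ipaddress.ip_address(destination)
        # Longest-prefix match: among matching entries, take the longest prefix
        matches = [(net, hop) for net, hop in routing_table if dest in net]
        return max(matches, key=lambda entry: entry[0].prefixlen)[1]

    print(lookup("10.1.5.7"))   # 192.168.0.3 (the /24 wins over the /16)
    print(lookup("10.1.9.9"))   # 192.168.0.2
    print(lookup("8.8.8.8"))    # 192.168.0.1 (default route)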

2.4 Comparison of IPv4 and IPv6 functionalities


Figure 2-15 IPv4 Header

The IPv4 header (RFC 791) is shown in Figure 2-15.

Version

The first header field in an IP packet is the four-bit version field. For IPv4, this has a value of 4 (hence the name IPv4).

Internet Header Length (IHL)

The second field (4 bits) is the Internet Header Length (IHL), which is the number of 32-bit words in the header. Since an IPv4 header may contain a variable number of options, this field specifies the size of the header (this also coincides with the offset to the data). The minimum value for this field is 5 (RFC 791), which is a length of 5 × 32 = 160 bits = 20 bytes. Being a 4-bit value, the maximum length is 15 words (15 × 32 bits), or 480 bits = 60 bytes.

Differentiated Services Code Point (DSCP)


Originally defined as the Type of Service field, this field is now defined by RFC 2474
for Differentiated services (DiffServ). New technologies are emerging that require
real-time data streaming and therefore make use of the DSCP field. An example is
Voice over IP (VoIP), which is used for interactive data voice exchange.

Explicit Congestion Notification (ECN), the last 2 bits of the ToS byte

This field is defined in RFC 3168 and allows end-to-end notification of network congestion without dropping packets. ECN is an optional feature that is only used when both endpoints support it and are willing to use it. It is only effective when supported by the underlying network.

Total Length

This 16-bit field defines the entire packet (fragment) size, including header and data, in bytes. The minimum-length packet is 20 bytes (20-byte header + 0 bytes data) and the maximum is 65,535 bytes, the maximum value of a 16-bit word. The largest datagram that any host is required to be able to reassemble is 576 bytes, but most modern hosts handle much larger packets. Sometimes subnetworks impose further restrictions on the packet size, in which case datagrams must be fragmented. Fragmentation is handled in either the host or a router in IPv4.

Identification


This field is an identification field and is primarily used for uniquely identifying
fragments of an original IP datagram. Some experimental work has suggested using
the ID field for other purposes, such as for adding packet-tracing information to help
trace datagrams with spoofed source addresses.

Flags

A three-bit field follows and is used to control or identify fragments. They are (in
order, from high order to low order):

bit 0: Reserved; must be zero.

bit 1: Don't Fragment (DF)

bit 2: More Fragments (MF)

If the DF flag is set, and fragmentation is required to route the packet, then the packet
is dropped. This can be used when sending packets to a host that does not have
sufficient resources to handle fragmentation. It can also be used for Path MTU
Discovery, either automatically by the host IP software, or manually using diagnostic
tools such as ping or traceroute.

For unfragmented packets, the MF flag is cleared. For fragmented packets, all
fragments except the last have the MF flag set. The last fragment has a non-zero
Fragment Offset field, differentiating it from an unfragmented packet.

Fragment Offset

The fragment offset field, measured in units of eight-byte blocks, is 13 bits long and specifies the offset of a particular fragment relative to the beginning of the original unfragmented IP datagram. The first fragment has an offset of zero. This allows a maximum offset of (2^13 - 1) × 8 = 65,528 bytes, which would exceed the maximum IP packet length of 65,535 bytes with the header length included (65,528 + 20 = 65,548 bytes).
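To make the interplay between Total Length, the MF flag and Fragment Offset concrete, the sketch below fragments a hypothetical 4,000-byte datagram (20-byte header + 3,980 bytes of data) for a link with a 1,500-byte MTU; the numbers are invented for illustration:

    # Fragmenting a 4000-byte IPv4 datagram (20-byte header) over a 1500-byte MTU
    MTU, HEADER = 1500, 20
    payload = 4000 - HEADER                  # 3980 bytes of data to carry

    # Each fragment carries a multiple of 8 bytes of data (except possibly the last)
    max_data = (MTU - HEADER) // 8 * 8       # 1480 bytes of data per fragment

    offset_bytes, fragments = 0, []
    while payload > 0:
        data = min(max_data, payload)
        more_fragments = payload > data      # MF flag set on all fragments but the last
        fragments.append((offset_bytes // 8, data + HEADER, more_fragments))
        offset_bytes += data
        payload -= data

    for frag_offset, total_length, mf in fragments:
        print(f"offset={frag_offset:4d} (x8 bytes)  total_length={total_length}  MF={int(mf)}")
    # offset=   0 (x8 bytes)  total_length=1500  MF=1
    # offset= 185 (x8 bytes)  total_length=1500  MF=1
    # offset= 370 (x8 bytes)  total_length=1040  MF=0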

Time To Live (TTL)


An eight-bit time to live field helps prevent datagrams from persisting (e.g. going in circles) on an internet. This field limits a datagram's lifetime. It is specified in seconds, but time intervals less than 1 second are rounded up to 1. In practice, the field has become a hop count: when the datagram arrives at a router, the router decrements the TTL field by one. When the TTL field hits zero, the router discards the packet and typically sends an ICMP Time Exceeded message to the sender.

The program traceroute uses these ICMP Time Exceeded messages to print the routers used by packets to go from the source to the destination.

Protocol

This field defines the protocol used in the data portion of the IP datagram. The Internet Assigned Numbers Authority maintains a list of IP protocol numbers, which was originally defined in RFC 790.

Header Checksum

The 16-bit checksum field is used for error-checking of the header. When a packet
arrives at a router, the router calculates the checksum of the header and compares it to
the checksum field. If the values do not match, the router discards the packet. Errors
in the data field must be handled by the encapsulated protocol. Both UDP and TCP
have checksum fields.

When a packet arrives at a router, the router decreases the TTL field. Consequently,
the router must calculate a new checksum. RFC 1071 defines the checksum
calculation:

The checksum field is the 16-bit one's complement of the one's complement sum of
all 16-bit words in the header. For purposes of computing the checksum, the value of
the checksum field is zero.
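The one's-complement checksum described above can be reproduced in a few lines. The sketch below computes it over a commonly cited 20-byte example header (source 192.168.0.1, destination 192.168.0.199); the header bytes are used purely for illustration:

    # RFC 1071 Internet checksum over an IPv4 header (checksum field set to zero)
    def ipv4_checksum(header: bytes) -> int:
        total = 0
        for i in range(0, len(header), 2):
            total += (header[i] << 8) | header[i + 1]   # 16-bit big-endian words
            total = (total & 0xFFFF) + (total >> 16)    # fold the carry back in
        return ~total & 0xFFFF                          # one's complement

    # Example header with the checksum bytes zeroed out
    header = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
    print(hex(ipv4_checksum(header)))   # 0xb861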

Source address

This field is the IPv4 address of the sender of the packet. Note that this address may
be changed in transit by a network address translation device.

Destination address

This field is the IPv4 address of the receiver of the packet. As with the source address, this may be changed in transit by a network address translation device.

Options

The options field is not often used. Note that the value in the IHL field must include
enough extra 32-bit words to hold all the options (plus any padding needed to ensure
that the header contains an integral number of 32-bit words). The list of options may
be terminated with an EOL (End of Options List, 0x00) option; this is only necessary
if the end of the options would not otherwise coincide with the end of the header.
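Putting the fields above together, a minimal parser for the fixed 20-byte part of an IPv4 header might look like the sketch below; it reuses the example header bytes from the checksum sketch (with the checksum filled in), uses Python's struct module, and ignores options:

    import struct
    from ipaddress import IPv4Address

    raw = bytes.fromhex("45000073000040004011b861c0a80001c0a800c7")

    def parse_ipv4_header(raw: bytes) -> dict:
        (ver_ihl, tos, total_length, ident, flags_frag,
         ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
        return {
            "version": ver_ihl >> 4,
            "ihl_bytes": (ver_ihl & 0x0F) * 4,
            "total_length": total_length,
            "ttl": ttl,
            "protocol": proto,                  # 6 = TCP, 17 = UDP
            "src": str(IPv4Address(src)),
            "dst": str(IPv4Address(dst)),
        }

    print(parse_ipv4_header(raw))
    # {'version': 4, 'ihl_bytes': 20, 'total_length': 115, 'ttl': 64,
    #  'protocol': 17, 'src': '192.168.0.1', 'dst': '192.168.0.199'}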

IPv6 Header is shown in Figure 2-16


Figure 2-16 IPv6 Header

An IPv6 packet has two parts: a header and a payload.

The header consists of a fixed portion with minimal functionality required for all packets, and it may be followed by optional extensions implementing special features.
The fixed header occupies the first 40 octets (320 bits) of the IPv6 packet. It contains the source and destination addresses, traffic classification options, a hop counter, and the type of the optional extension or payload that follows the header. This Next Header field tells the receiver how to interpret the data that follows the header. If the packet contains options, this field contains the option type of the next option. The Next Header field of the last option points to the upper-layer protocol that is carried in the packet's payload.
Extension headers carry options that are used for special treatment of a packet in the network, e.g. for routing, fragmentation, and for security using the IPsec framework.
Without special options, a payload must be less than 64 KB. With a Jumbo Payload option (in a Hop-By-Hop Options extension header), the payload must be less than 4 GB.
Unlike in IPv4, routers never fragment a packet. Hosts are expected to use Path MTU Discovery to make their packets small enough to reach the destination without needing to be fragmented.
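For comparison with the IPv4 parser earlier in this chapter, the 40-byte fixed IPv6 header can be packed and unpacked in the same way; the sketch below builds a small example header (addresses taken from the 2001:db8::/32 documentation prefix) and decodes it again:

    import struct
    from ipaddress import IPv6Address

    # Build an example fixed header: version 6, Next Header 17 (UDP), hop limit 64
    src = IPv6Address("2001:db8::1").packed
    dst = IPv6Address("2001:db8::2").packed
    header = struct.pack("!IHBB16s16s",
                         6 << 28,   # version (4 bits); traffic class and flow label zero
                         0,         # payload length (no payload in this example)
                         17,        # Next Header: UDP
                         64,        # hop limit
                         src, dst)

    first_word, payload_len, next_header, hop_limit, s, d = struct.unpack("!IHBB16s16s", header)
    print("version    :", first_word >> 28)           # 6
    print("payload len:", payload_len)                # 0
    print("next header:", next_header)                # 17 (UDP)
    print("hop limit  :", hop_limit)                  # 64
    print("src / dst  :", IPv6Address(s), "/", IPv6Address(d))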

The following table, Table 2-2, summarizes the main functional differences between IPv4 and IPv6.
Table 2-2 IPv4 and IPv6 main differences


2.5 Overview of routing protocols:


A routing protocol specifies how routers communicate with each other, disseminating
information that enables them to select routes between any two nodes on a computer network,
the choice of the route being done by routing algorithms. Each router has a priori knowledge
only of networks attached to it directly. A routing protocol shares this information first among
immediate neighbors, and then throughout the network. This way, routers gain knowledge of
the topology of the network.

Distance Vector (Bellman-Ford) Routing:


The term distance vector refers to a class of algorithms that routers use to propagate routing information. The idea is quite simple: the router keeps a list of all known routes in a table. When it boots, a router initializes its routing table with an entry for each directly connected network. Each entry in the table identifies a destination network and gives the distance to that network, usually measured in hops. An example of a distance-vector protocol is RIP:

2.5.2 Routing Information Protocol, RIP:


The Routing Information Protocol (RIP) is a distance-vector routing protocol, which
employs the hop count as a routing metric. RIP prevents routing loops by implementing a
limit on the number of hops allowed in a path from the source to a destination. The maximum
number of hops allowed for RIP is 15. This hop limit, however, also limits the size of
networks that RIP can support. A hop count of 16 is considered an infinite distance and used
to deprecate inaccessible, inoperable, or otherwise undesirable routes in the selection process.
RIP implements the split horizon, route poisoning and hold-down mechanisms to prevent incorrect routing information from being propagated. These are some of the stability features of RIP. It is also possible to use the so-called RMTI (Routing Information Protocol with Metric-based Topology Investigation) algorithm to cope with the count-to-infinity problem. With its help, it is possible to detect every possible loop with a very small computational effort.
Originally each RIP router transmitted full updates every 30 seconds. In the early
deployments, routing tables were small enough that the traffic was not significant. As
networks grew in size, however, it became evident there could be a massive traffic burst every
30 seconds, even if the routers had been initialized at random times. It was thought, as a result
of random initialization, the routing updates would spread out in time, but this was not true in
practice. Sally Floyd and Van Jacobson showed in 1994 that, without slight randomization of
the update timer, the timers synchronized over time. In most current networking
environments, RIP is not the preferred choice for routing as its time to converge and
scalability are poor compared to EIGRP, OSPF, or IS-IS (the latter two being link-state
routing protocols), and (without RMTI) a hop limit severely limits the size of network it can
be used in. However, it is easy to configure, because RIP does not require any parameters on a
router unlike other protocols (see here for an animation of basic RIP simulation visualizing
RIP configuration and exchanging of Request and Response to discover new routes).
RIP uses the User Datagram Protocol (UDP) as its transport protocol, and is assigned the
reserved port number 520.
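As a sketch of the distance-vector idea behind RIP (not of the RIP message format itself), the code below applies the Bellman-Ford update rule "my distance to D via neighbour N = 1 + N's advertised distance to D", treating 16 hops as infinity; the topology and prefixes are invented for illustration:

    INFINITY = 16   # in RIP a metric of 16 hops means "unreachable"

    # This router's current table: destination -> (metric, next hop)
    table = {"10.0.1.0/24": (1, "direct"),
             "10.0.2.0/24": (3, "routerB")}

    def process_update(table, neighbour, advertised):
        """Bellman-Ford update with the distances advertised by one neighbour."""
        for dest, metric in advertised.items():
            new_metric = min(metric + 1, INFINITY)
            old_metric, old_next_hop = table.get(dest, (INFINITY, None))
            # Accept if the route is better, or if it comes from the current next hop
            if new_metric < old_metric or old_next_hop == neighbour:
                table[dest] = (new_metric, neighbour)

    # Neighbour routerC advertises: 10.0.2.0/24 at 1 hop, 10.0.3.0/24 at 2 hops
    process_update(table, "routerC", {"10.0.2.0/24": 1, "10.0.3.0/24": 2})
    print(table)
    # {'10.0.1.0/24': (1, 'direct'),
    #  '10.0.2.0/24': (2, 'routerC'),   <- better route learned
    #  '10.0.3.0/24': (3, 'routerC')}   <- new destination learned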


Link-State (SPF) Routing:


The main disadvantage of the distance-vector algorithm is that it does not scale well. Besides the problem of slow response to changes, it requires the exchange of large messages. Because each routing update contains an entry for every possible network, the message size is proportional to the total number of networks in an internet. Furthermore, because a distance-vector algorithm requires every router to participate, the volume of information exchanged can be enormous. The primary alternative to the distance-vector algorithm is a class of algorithms known as Link State, Link Status, or Shortest Path First (SPF).
The SPF algorithm requires each participating router to have complete topology information. The easiest way to imagine this is to think of every router as having a map that shows all other routers and the networks to which they connect.
Instead of sending messages that contain lists of destinations, a router participating in an SPF algorithm performs two tasks:
- it actively tests the status of all neighbouring routers
- it periodically propagates the link status information to all other routers.
Below is an example of a link-state protocol, OSPF.

2.5.3 Open Shortest Path First, OSPF:


OSPF is an interior gateway protocol that routes Internet Protocol (IP) packets solely within a
single routing domain (autonomous system). It gathers link state information from available
routers and constructs a topology map of the network. The topology determines the routing
table presented to the Internet Layer which makes routing decisions based solely on the
destination IP address found in IP packets. OSPF was designed to support variable-length
subnet masking (VLSM) or Classless Inter-Domain Routing (CIDR) addressing models.
OSPF detects changes in the topology, such as link failures, very quickly and converges on a
new loop-free routing structure within seconds. It computes the shortest path tree for each
route using a method based on Dijkstra's algorithm, a shortest path first algorithm.
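The sketch below illustrates this shortest path first computation with Dijkstra's algorithm over a small, invented link-state database; it shows the principle only, not the actual OSPF LSDB structures, and the routers and link costs are example values.

import heapq

# Illustrative link-state database: router -> {neighbour: link cost}
lsdb = {
    "A": {"B": 10, "C": 5},
    "B": {"A": 10, "D": 1},
    "C": {"A": 5, "D": 20},
    "D": {"B": 1, "C": 20},
}

def spf(root):
    """Dijkstra shortest-path-first: returns cost and previous hop per node."""
    dist = {root: 0}
    prev = {}
    queue = [(0, root)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for neighbour, link_cost in lsdb[node].items():
            candidate = cost + link_cost
            if candidate < dist.get(neighbour, float("inf")):
                dist[neighbour] = candidate
                prev[neighbour] = node
                heapq.heappush(queue, (candidate, neighbour))
    return dist, prev

print(spf("A"))   # e.g. the cost from A to D is 11, reached via B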
The link-state information is maintained on each router as a link-state database (LSDB)
which is a tree-image of the entire network topology. Identical copies of the LSDB are
periodically updated through flooding on all OSPF routers.
The OSPF routing policies to construct a route table are governed by link cost factors
(external metrics) associated with each routing interface. Cost factors may be the distance of a
router (round-trip time), network throughput of a link, or link availability and reliability,
expressed as simple unitless numbers. This provides a dynamic process of traffic load
balancing between routes of equal cost.
An OSPF network may be structured, or subdivided, into routing areas to simplify
administration and optimize traffic and resource utilization. Areas are identified by 32-bit
numbers, expressed either simply in decimal, or often in octet-based dot-decimal notation,
familiar from IPv4 address notation.
By convention, area 0 (zero) or 0.0.0.0 represents the core or backbone region of an OSPF
network. The identifications of other areas may be chosen at will; often, administrators select
the IP address of a main router in an area as the area's identification. Each additional area
must have a direct or virtual connection to the backbone OSPF area. Such connections are
maintained by an interconnecting router, known as area border router (ABR). An ABR
maintains separate link state databases for each area it serves and maintains summarized
routes for all areas in the network.
OSPF does not use a TCP/IP transport protocol (UDP, TCP), but is encapsulated directly in IP
datagrams with protocol number 89. This is in contrast to other routing protocols, such as the
Routing Information Protocol (RIP), or the Border Gateway Protocol (BGP). OSPF handles
its own error detection and correction functions.
OSPF uses multicast addressing for route flooding on a broadcast domain. For non-broadcast
networks, special provisions for configuration facilitate neighbor discovery. OSPF multicast IP
packets never traverse IP routers (they never leave the broadcast domain); they never travel more
than one hop. OSPF reserves the multicast addresses 224.0.0.5 for IPv4 or FF02::5 for
IPv6 (all SPF/link state routers, also known as AllSPFRouters) and 224.0.0.6 for IPv4
or FF02::6 for IPv6 (all Designated Routers, AllDRouters), as specified in RFC 2328
and RFC 5340.
For routing multicast IP traffic, OSPF supports the Multicast Open Shortest Path First
protocol (MOSPF) as defined in RFC 1584. PIM (Protocol Independent Multicast), in
conjunction with OSPF or other IGPs (Interior Gateway Protocols), is widely deployed.
The OSPF protocol, when running on IPv4, can operate securely between routers, optionally
using a variety of authentication methods to allow only trusted routers to participate in
routing. OSPFv3, running on IPv6, no longer supports protocol-internal authentication.
Instead, it relies on IPv6 protocol security (IPsec).
OSPF version 3 introduces modifications to the IPv4 implementation of the protocol. Except
for virtual links, all neighbor exchanges use IPv6 link-local addressing exclusively. The IPv6
protocol runs per link, rather than based on the subnet. All IP prefix information has been
removed from the link-state advertisements and from the Hello discovery packet making
OSPFv3 essentially protocol-independent. Despite the expanded IP addressing to 128-bits in
IPv6, area and router Identifications are still based on 32-bit values.

2.5.4 Border Gateway Protocol, BGP:


Border Gateway Protocol (BGP) is the protocol which makes core routing decisions on the
Internet. It maintains a table of IP networks or "prefixes" which designate network
reachability among autonomous systems (AS). It is a path vector protocol, or a variant of a
distance-vector routing protocol. BGP does not use traditional Interior Gateway Protocol
(IGP) metrics, but makes routing decisions based on path, network policies and/or rule-sets.
For this reason, it is more appropriately termed a reachability protocol rather than a routing
protocol.
BGP was created to replace the Exterior Gateway Protocol (EGP) to allow fully decentralized
routing in order to transition from the core ARPAnet model to a decentralized system that
included the NSFNET backbone and its associated regional networks. This allowed the
Internet to become a truly decentralized system. Since 1994, version 4 of BGP has been
in use on the Internet. All previous versions are now obsolete. The major enhancement in
version 4 was support of Classless Inter-Domain Routing and use of route aggregation to
decrease the size of routing tables. Since January 2006, version 4 is codified in RFC 4271,
which went through more than 20 drafts based on the earlier RFC 1771 version 4. RFC 4271
version corrected a number of errors, clarified ambiguities and brought the RFC much closer
to industry practices.
Most Internet service providers must use BGP to establish routing between one another
(especially if they are multihomed). Therefore, even though most Internet users do not use it
directly, BGP is one of the most important protocols of the Internet. Compare this with
Signaling System 7 (SS7), which is the inter-provider core call setup protocol on the PSTN.
Very large private IP networks use BGP internally. An example would be the joining of a
number of large OSPF (Open Shortest Path First) networks where OSPF by itself would not
scale to size. Another reason to use BGP is multihoming a network for better redundancy,
either to multiple access points of a single ISP (RFC 1998), or to multiple ISPs.

Summary of Routing Protocols:

- RIP [RFC 2453]: slow, updates every 30 s, sends the whole routing table, small networks

- OSPF [RFC 2328]: multicasting (224.0.0.5), sends only the changes, fast, large networks

- BGP [RFC 4271, earlier RFC 1771]: reliable (runs over TCP), policies and link QoS


Transport Protocols: TCP, SCTP and UDP

3.1 Functions of transport Protocols:


In computer networking, the transport layer or layer 4 provides end-to-end communication
services for applications within a layered architecture of network components and protocols.
The transport layer provides convenient services such as connection-oriented data stream
support, reliability, flow control, and multiplexing.
Transport layers are contained in both the TCP/IP model (RFC 1122), which is the foundation
of the Internet, and the Open Systems Interconnection (OSI) model of general networking.
The definitions of the transport layer are slightly different in these two models. This chapter
primarily refers to the TCP/IP model, in which TCP largely provides a convenient application
programming interface to Internet hosts, as opposed to the OSI-model definition of the
transport layer.
The most well-known transport protocol is the Transmission Control Protocol (TCP). It lent
its name to the title of the entire Internet Protocol Suite, TCP/IP. It is used for
connection-oriented transmissions, whereas the connectionless User Datagram Protocol
(UDP) is used for simpler messaging transmissions. TCP is the more complex protocol, due to
its stateful design incorporating reliable transmission and data stream services. Other
prominent protocols in this group are the Datagram Congestion Control Protocol (DCCP) and
the Stream Control Transmission Protocol (SCTP).
In Figure 3-1 layered communication is shown. The transport layer is used for session handling
and allows multiple applications to run on a single machine.


Figure 3-1 Layered Communication

3.2 Usage of port numbers


In computer networking a port is an application-specific or process-specific software
construct serving as a communications endpoint in a computer's host operating system. A port
is associated with an IP address of the host, as well as the type of protocol used for
communication. In plain English, the purpose of ports is to uniquely identify different
applications or processes running on a single computer and thereby enable them to share a
single physical connection to a packet-switched network like the Internet.
The protocols that primarily use ports are the Transport Layer protocols, such as the
Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet
Protocol Suite. A port is identified for each address and protocol by a 16-bit number,
commonly known as the port number. The port number, added to a computer's IP address,
completes the destination address for a communications session. That is, data packets are
routed across the network to a specific destination IP address, and then, upon reaching the
destination computer, are further routed to the specific process bound to the destination port
number.
Note that it is the combination of IP address and port number together that must be globally
unique. Thus, different IP addresses or protocols may use the same port number for
communication; e.g., on a given host or interface UDP and TCP may use the same port
number, or on a host with two interfaces, both addresses may be associated with a port having
the same number.
Of the thousands of enumerated ports, about 250 well-known ports are reserved by
convention to identify specific service types on a host. In the client-server model of
application architecture, ports are used to provide a multiplexing service on each server-side
port number that network clients connect to for service initiation, after which communication
can be reestablished on other connection-specific port numbers.

Client and Server Model:


Figure 3-2 shows the general functions of an ISP; the client and server model is applied.

Client and Server Model:

The server has a well-known port open for the service it offers (1-1023).

The client opens a random unused, non-reserved port (1024-65535).

The port points out the application.

Figure 3-2 Internet Service Provider (ISP)
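The following Python sketch illustrates the client and server use of ports described above. The loopback address and port 8080 are arbitrary example values; a real well-known service would use a port below 1024, which normally requires special privileges.

import socket

# Server side: bind to a fixed, agreed-upon port so clients know where to connect.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8080))
server.listen(1)

# Client side: the operating system picks a random ephemeral source port.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8080))
print("client source (IP, port):", client.getsockname())   # ephemeral port
print("client destination     :", client.getpeername())    # ('127.0.0.1', 8080)

conn, peer = server.accept()
print("server sees client as  :", peer)                    # same ephemeral port
conn.close()
client.close()
server.close()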

3.3 User Datagram Protocol, UDP


The User Datagram Protocol (UDP) is one of the core members of the Internet protocol
suite, the set of network protocols used for the Internet. With UDP, computer applications can
send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol
(IP) network without prior communications to set up special transmission channels or data
paths. The protocol was designed by David P. Reed in 1980 and formally defined in RFC 768.
UDP uses a simple transmission model with a minimum of protocol mechanism. It has no
handshaking dialogues, and thus exposes any unreliability of the underlying network protocol
to the user's program. As this is normally IP over unreliable media, there is no guarantee of
delivery, ordering or duplicate protection. UDP provides checksums for data integrity, and
port numbers for addressing different functions at the source and destination of the datagram.
UDP is suitable for purposes where error checking and correction is either not necessary or
performed in the application, avoiding the overhead of such processing at the network
interface level. Time-sensitive applications often use UDP because dropping packets is
preferable to waiting for delayed packets, which may not be an option in a real-time system.
Figure 3-3 shows the UDP header.
Figure 3-3 UDP Header
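As an illustration of the header in Figure 3-3, the sketch below unpacks the four 16-bit UDP header fields (source port, destination port, length and checksum) with Python's struct module; the example bytes are fabricated.

import struct

# UDP header: source port, destination port, length, checksum (four 16-bit fields).
def parse_udp_header(segment: bytes):
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", segment[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "length": length, "checksum": checksum}

# Fabricated example: a client on ephemeral port 53124 sends to DNS port 53,
# total datagram length 40 bytes, checksum 0x1c46.
example = struct.pack("!HHHH", 53124, 53, 40, 0x1C46) + b"...payload..."
print(parse_udp_header(example))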

3.4 Transmission Control Protocol, TCP


The Transmission Control Protocol (TCP) is one of the core protocols of the Internet
Protocol Suite. TCP is one of the two original components of the suite, complementing the
Internet Protocol (IP), and therefore the entire suite is commonly referred to as TCP/IP. TCP
provides reliable, ordered delivery of a stream of octets from a program on one computer to
another program on another computer. TCP is the protocol used by major Internet applications
such as the World Wide Web, email, remote administration and file transfer. Other
applications, which do not require reliable data stream service, may use the User Datagram
Protocol (UDP), which provides a datagram service that emphasizes reduced latency over
reliability.
Figure 3-4 shows the TCP header with its fields.
Figure 3-4 TCP Header

Source port (16 bits) identifies the sending port

Destination port (16 bits) identifies the receiving port

Sequence number (32 bits) has a dual role:

If the SYN flag is set (1), then this is the initial sequence number. The sequence number
of the actual first data byte and the acknowledged number in the corresponding ACK are
then this sequence number plus 1.


If the SYN flag is clear (0), then this is the accumulated sequence number of the first data
byte of this segment for the current session.

Acknowledgment number (32 bits) if the ACK flag is set then the value of this field is
the next sequence number that the receiver is expecting. This acknowledges receipt of all
prior bytes (if any). The first ACK sent by each end acknowledges the other end's initial
sequence number itself, but no data.

Data offset (4 bits) specifies the size of the TCP header in 32-bit words. The minimum
size header is 5 words and the maximum is 15 words thus giving the minimum size of 20
bytes and maximum of 60 bytes, allowing for up to 40 bytes of options in the header.
This field gets its name from the fact that it is also the offset from the start of the TCP
segment to the actual data.

Reserved (3 bits) for future use and should be set to zero

Flags (9 bits) (aka Control bits) contains 9 1-bit flags

NS (1 bit) ECN-nonce concealment protection (added to header by RFC 3540).

CWR (1 bit) Congestion Window Reduced (CWR) flag is set by the sending host to
indicate that it received a TCP segment with the ECE flag set and had responded in
congestion control mechanism (added to header by RFC 3168).

ECE (1 bit) ECN-Echo indicates

If the SYN flag is set (1), that the TCP peer is ECN capable.

If the SYN flag is clear (0), that a packet with Congestion Experienced flag in IP header
set is received during normal transmission (added to header by RFC 3168).

URG (1 bit) indicates that the Urgent pointer field is significant

ACK (1 bit) indicates that the Acknowledgment field is significant. All packets after the
initial SYN packet sent by the client should have this flag set.

PSH (1 bit) Push function. Asks to push the buffered data to the receiving application.

RST (1 bit) Reset the connection

SYN (1 bit) Synchronize sequence numbers. Only the first packet sent from each end
should have this flag set. Some other flags change meaning based on this flag; some
are only valid when it is set, and others only when it is clear.

FIN (1 bit) No more data from sender

Window size (16 bits) the size of the receive window, which specifies the number of
bytes (beyond the sequence number in the acknowledgment field) that the sender of this
segment is currently willing to receive (see Flow control and Window Scaling)

Checksum (16 bits) The 16-bit checksum field is used for error-checking of the header
and data

Urgent pointer (16 bits) if the URG flag is set, then this 16-bit field is an offset from the
sequence number indicating the last urgent data byte

Options (variable, 0-320 bits, divisible by 32) The length of this field is determined by
the data offset field. Options have up to three fields: Option-Kind (1 byte),
Option-Length (1 byte), Option-Data (variable). The Option-Kind field indicates the type
of option, and is the only field that is not optional. Depending on what kind of option we
are dealing with, the next two fields may be set: the Option-Length field indicates the
total length of the option, and the Option-Data field contains the value of the option, if
applicable. For example, an Option-Kind byte of 0x01 indicates that this is a No-Op
option used only for padding, and does not have an Option-Length or Option-Data byte
following it. An Option-Kind byte of 0 is the End Of Options option, and is also only one
byte. An Option-Kind byte of 0x02 indicates that this is the Maximum Segment Size
option, and will be followed by a byte specifying the length of the MSS field (should be
0x04). Note that this length is the total length of the given options field, including
Option-Kind and Option-Length bytes. So while the MSS value is typically expressed in
two bytes, the length of the field will be 4 bytes (+2 bytes of kind and length). In short,
an MSS option field with a value of 0x05B4 will show up as (0x02 0x04 0x05B4) in the
TCP options section.

Padding The TCP header padding is used to ensure that the TCP header ends and data
begins on a 32 bit boundary. The padding is composed of zeros.
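The sketch below decodes the fixed 20-byte part of the TCP header described above, including the data offset and the flag bits; the sample SYN segment and its port numbers are fabricated for the example.

import struct

FLAG_NAMES = ["FIN", "SYN", "RST", "PSH", "ACK", "URG", "ECE", "CWR", "NS"]

def parse_tcp_header(segment: bytes):
    (src, dst, seq, ack,
     offset_flags, window, checksum, urg_ptr) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (offset_flags >> 12) & 0xF          # header length in 32-bit words
    flag_bits = offset_flags & 0x01FF                 # lowest 9 bits: NS..FIN
    flags = [name for i, name in enumerate(FLAG_NAMES) if flag_bits & (1 << i)]
    return {"src_port": src, "dst_port": dst, "seq": seq, "ack": ack,
            "header_bytes": data_offset * 4, "flags": flags,
            "window": window, "checksum": checksum, "urgent_ptr": urg_ptr}

# Fabricated SYN segment from an ephemeral client port to port 80,
# data offset 5 (20-byte header, no options), SYN flag set.
syn = struct.pack("!HHIIHHHH", 49512, 80, 1000, 0, (5 << 12) | 0x0002, 65535, 0, 0)
print(parse_tcp_header(syn))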

Figure 3-5 shows the flags and their definitions.


Figure 3-5 Flags and Window in TCP

Figure 3-6 shows a general TCP traffic case and the usage of flags in the header


Figure 3-6 TCP Traffic Case and the usage of flags

3.5 Stream Control Transmission Protocol, SCTP


In computer networking, the Stream Control Transmission Protocol (SCTP) is a transport
layer protocol, serving in a similar role to the popular protocols Transmission Control
Protocol (TCP) and User Datagram Protocol (UDP). It provides some of the same service
features of both: it is message-oriented like UDP and ensures reliable, in-sequence transport
of messages with congestion control like TCP.
The protocol was defined by the IETF Signaling Transport (SIGTRAN) working group in
2000,[1] and is maintained by the IETF Transport Area (TSVWG) working group. RFC 4960
defines the protocol. RFC 3286 provides an introduction. In the absence of native SCTP
support in operating systems it is possible to tunnel SCTP over UDP, as well as mapping TCP
API calls to SCTP ones.

Figure 3-7 shows the general format of the SCTP Header.


Figure 3-7 SCTP Header
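As an illustration of Figure 3-7, the sketch below unpacks the 12-byte SCTP common header (source port, destination port, verification tag, checksum) and the 4-byte header that starts every chunk, following RFC 4960; the sample packet and port numbers are fabricated.

import struct

def parse_sctp_common_header(packet: bytes):
    """SCTP common header: two ports, verification tag, checksum (12 bytes)."""
    src, dst, verification_tag, checksum = struct.unpack("!HHII", packet[:12])
    return {"src_port": src, "dst_port": dst,
            "verification_tag": verification_tag, "checksum": checksum}

def parse_chunk_header(chunk: bytes):
    """Each SCTP chunk starts with type (1 byte), flags (1 byte), length (2 bytes)."""
    chunk_type, flags, length = struct.unpack("!BBH", chunk[:4])
    return {"type": chunk_type, "flags": flags, "length": length}

# Fabricated example: an INIT chunk (chunk type 1) in a packet between two endpoints.
packet = struct.pack("!HHII", 2905, 2905, 0, 0x12345678) + struct.pack("!BBH", 1, 0, 20)
print(parse_sctp_common_header(packet))
print(parse_chunk_header(packet[12:]))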

In SCTP a 4-way handshake is used during the set-up phase and a 3-way handshake is
used for the termination of the SCTP association. Figure 3-8 shows an example of an SCTP traffic
case; set-up (initiation), data transfer and termination are shown.
Figure 3-8 SCTP Traffic Case

In SCTP chunk types are defined to be used for different purposes. Figure 3-9 shows the
definition of chunk types.


Figure 3-9 SCTP Chunk Types

SACK Chunk and example of usage:


The following figures show the SCTP SACK chunk (Figure 3-10) and its definition, and an
example of its usage:


Figure 3-10 SCTP-SACK Chunk

Figure 3-11 SACK usage example

Figure 3-12 SACK header in the above figure

3.6 Transport Protocols and comparison of functionalities

The following table, Table 3-1, summarizes some main protocol functionalities and differences
of the Transport Layer protocols.


Table 3-1 Transport Protocols Summary of functions general


Applications

4.1 Application Layer General


In TCP/IP, the application layer contains all protocols and methods that fall into the realm of
process-to-process communications across an Internet Protocol (IP) network. Application
layer methods use the underlying transport layer protocols to establish host-to-host
connections.
The Internet protocol suite (TCP/IP) and the Open Systems Interconnection model (OSI
model) of computer networking each specify a group of protocols and methods identified by
the name application layer.
In the OSI model, the definition of its application layer is narrower in scope, explicitly
distinguishing additional functionality above the transport layer at two additional levels, the
session layer and the presentation layer. OSI specifies strict modular separation of
functionality at these layers and provides protocol implementations for each layer.
Figure 4-1 TCP/IP Applications

4.2 Dynamic Host Configuration Protocol DHCP


The Dynamic Host Configuration Protocol (DHCP) is a network protocol that is used to
configure network devices so that they can communicate on an IP network. A DHCP client
uses the DHCP protocol to acquire configuration information, such as an IP address, a default

Issue 06 (2006-03-01)

Huawei Proprietary and Confidential


Copyright Huawei Technologies Co., Ltd

4-35

TCP/IP in mobile world

route and one or more DNS server addresses from a DHCP server. The DHCP client then uses
this information to configure its host. Once the configuration process is complete, the host is
able to communicate on the internet.
The DHCP server maintains a database of available IP addresses and configuration
information. When it receives a request from a client, the DHCP server determines the
network to which the DHCP client is connected, and then allocates an IP address or prefix that
is appropriate for the client, and sends configuration information appropriate for that client.
Because the DHCP protocol must work correctly even before DHCP clients have been
configured, the DHCP server and DHCP client must be connected to the same network link.
In larger networks, this is not practical. On such networks, each network link contains one or
more DHCP relay agents. These DHCP relay agents receive messages from DHCP clients and
forward them to DHCP servers. DHCP servers send responses back to the relay agent, and the
relay agent then sends these responses to the DHCP client on the local network link.
DHCP servers typically grant IP addresses to clients only for a limited interval. DHCP clients
are responsible for renewing their IP address before that interval has expired, and must stop
using the address once the interval has expired, if they have not been able to renew it.
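The sketch below illustrates the lease timing just described, assuming the conventional DHCPv4 defaults of RFC 2131 (renewal at 50% and rebinding at 87.5% of the lease time); the 24-hour lease is only an example value and is not taken from this document.

# Illustrative sketch of DHCP lease timing, assuming the RFC 2131 defaults:
# renewal (T1) at 50% of the lease, rebinding (T2) at 87.5%.

lease_seconds = 24 * 3600          # example lease time granted by the server
t1 = lease_seconds * 0.5           # client unicasts a renewal REQUEST to its server
t2 = lease_seconds * 0.875         # client broadcasts a rebinding REQUEST to any server

print(f"renew after {t1/3600:.1f} h, rebind after {t2/3600:.1f} h, "
      f"stop using the address after {lease_seconds/3600:.1f} h if not renewed")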
DHCP is used for IPv4 and IPv6. While both versions serve much the same purpose, the
details of the protocol for IPv4 and IPv6 are sufficiently different that they may be considered
separate protocols.
Hosts that do not use DHCP for address configuration may still use it to obtain other
configuration information. Alternatively, IPv6 hosts may use stateless address
autoconfiguration. IPv4 hosts may use link-local addressing to achieve limited local
connectivity.
Figure 4-2 shows a traffic case of DHCP.


Figure 4-2 DHCP Traffic Case

4.3 Domain Name System- DNS


The Domain Name System (DNS) is a hierarchical distributed naming system for computers,
services, or any resource connected to the Internet or a private network. It associates various
information with domain names assigned to each of the participating entities. A Domain
Name Service resolves queries for these names into IP addresses for the purpose of locating
computer services and devices worldwide. By providing a worldwide, distributed
keyword-based redirection service, the Domain Name System is an essential component of
the functionality of the Internet.
An often-used analogy to explain the Domain Name System is that it serves as the phone
book for the Internet by translating human-friendly computer hostnames into IP addresses.
For example, the domain name www.example.com translates to the addresses
192.0.43.10 (IPv4) and 2620:0:2d0:200::10 (IPv6). Unlike a phone book,
however, DNS can be quickly updated and these updates distributed, allowing a service's
location on the network to change without affecting the end users, who continue to use the
same hostname. Users take advantage of this when they recite meaningful Uniform Resource
Locators (URLs) and e-mail addresses without having to know how the computer actually
locates the services.
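As a small illustration of such a lookup, the sketch below uses Python's standard resolver interface, which in turn queries the configured DNS servers; the addresses actually returned for www.example.com depend on the live DNS data rather than on the values quoted above.

import socket

# Resolve a hostname to its IPv4 and IPv6 addresses using the system resolver.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 80,
                                                    proto=socket.IPPROTO_TCP):
    print("IPv6" if family == socket.AF_INET6 else "IPv4", sockaddr[0])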
The Domain Name System distributes the responsibility of assigning domain names and
mapping those names to IP addresses by designating authoritative name servers for each
domain. Authoritative name servers are assigned to be responsible for their particular
domains, and in turn can assign other authoritative name servers for their sub-domains. This
mechanism has made the DNS distributed and fault tolerant and has helped avoid the need for
a single central register to be continually consulted and updated. Additionally, the
responsibility for maintaining and updating the master record for the domains is spread among
many domain name registrars, who compete for the end-user's (the domain-owner's) business.
Domains can be moved from registrar to registrar at any time.


The Domain Name System also specifies the technical functionality of this database service.
It defines the DNS protocol, a detailed specification of the data structures and communication
exchanges used in DNS, as part of the Internet Protocol Suite.
The Internet maintains two principal namespaces, the domain name hierarchy[1] and the
Internet Protocol (IP) address spaces.[2] The Domain Name System maintains the domain
name hierarchy and provides translation services between it and the address spaces. Internet
name servers and a communication protocol implement the Domain Name System.[3] A DNS
name server is a server that stores the DNS records for a domain name, such as address (A)
records, name server (NS) records, and mail exchanger (MX) records (see also list of DNS
record types); a DNS name server responds with answers to queries against its database.

Figure 4-3 shows a traffic case of DNS as it is defined in RFC 1034 and RFC 1035. The client
tries to resolve the address of www.apis.se.
Figure 4-3 Domain Name System, RFC 1034, RFC 1035

Root DNS:
A root name server is a name server for the Domain Name System's root zone. It directly
answers requests for records in the root zone and answers other requests returning a list of the
designated authoritative name servers for the appropriate top-level domain (TLD). The root
name servers are a critical part of the Internet because they are the first step in translating
(resolving) human readable host names into IP addresses that are used in communication
between Internet hosts.
A combination of limits in the DNS and certain protocols, namely the practical size of
unfragmented User Datagram Protocol (UDP) packets, resulted in a limited number of root
server addresses that can be accommodated in DNS name query responses. This limit has
determined the number of name server installations at (currently) 13 clusters, serving the
needs of the entire public Internet worldwide.
Figure 4-4 shows Top Level Domains TLDs and different kind of TLDs.


Figure 4-4 Top Level Domains TLD

Below is a list of all Country Code Top Level Domains (ccTLD), Figure 4-5.


Figure 4-5 ccTLD (Part 1)

Figure 4-6 ccTLD (Part 2)

Figure 4-7 ccTLD (Part 3)


4.4 Internet Service Provider and its Services


Figure 4-8 shows a general view of some TCP/IP applications and how they can be used
towards the servers. The ISP's network is marked in the figure with the corresponding
applications. For a mobile operator wanting to act as an ISP for wireless data access, some or
all of the applications in Figure 4-8 need to be implemented inside the network in one way or
another. In today's networks there are DHCP, RADIUS and DNS servers, as well as WAP
servers etc., inside the operator's network.


Figure 4-8 TCP/IP Applications and ISP

4.5 Address translation, NAT


In computer networking, network address translation (NAT) is the process of modifying IP
address information in IP packet headers while in transit across a traffic routing device.
The simplest type of NAT provides a one-to-one translation of IP addresses. RFC 2663 refers
to this type of NAT as basic NAT. It is often also referred to as one-to-one NAT. In this type
of NAT only the IP addresses, IP header checksum and any higher level checksums that
include the IP address need to be changed. The rest of the packet can be left untouched (at
least for basic TCP/UDP functionality, some higher level protocols may need further
translation). Basic NATs can be used when there is a requirement to interconnect two IP
networks with incompatible addressing.
However, it is common to hide an entire IP address space, usually consisting of private IP
addresses, behind a single IP address (or in some cases a small group of IP addresses) in
another (usually public) address space. To avoid ambiguity in the handling of returned
packets, a one-to-many NAT must alter higher level information such as TCP/UDP ports in
outgoing communications and must maintain a translation table so that return packets can be
correctly translated back. RFC 2663 uses the term NAPT (network address and port
translation) for this type of NAT. Other names include PAT (port address translation), IP
masquerading, NAT Overload and many-to-one NAT. Since this is the most common type
of NAT it is often referred to simply as NAT.
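The following toy sketch illustrates such a NAPT translation table: outgoing flows are rewritten to the NAT's single public address with a fresh source port, and the table lets returning packets be mapped back to the private host. All addresses and ports are invented for the example.

import itertools

PUBLIC_IP = "203.0.113.1"                 # the NAT's single public address
next_port = itertools.count(30000)        # pool of public source ports
nat_table = {}                            # public_port -> (private_ip, private_port)
reverse = {}                              # (private_ip, private_port) -> public_port

def translate_outbound(private_ip, private_port, dst):
    """Rewrite the source of an outgoing flow and remember the mapping."""
    key = (private_ip, private_port)
    if key not in reverse:
        public_port = next(next_port)
        reverse[key] = public_port
        nat_table[public_port] = key
    return (PUBLIC_IP, reverse[key], dst)

def translate_inbound(public_port):
    """Map a returning packet back to the private host, if a mapping exists."""
    return nat_table.get(public_port)

# Two private hosts behind the NAT reach the same web server:
print(translate_outbound("192.168.1.10", 51000, ("198.51.100.7", 80)))
print(translate_outbound("192.168.1.11", 51000, ("198.51.100.7", 80)))
print(translate_inbound(30000))   # -> ('192.168.1.10', 51000)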
As described, the method enables communication through the router only when the
conversation originates in the masqueraded network, since this establishes the translation
tables. For example, a web browser in the masqueraded network can browse a website
outside, but a web browser outside could not browse a web site in the masqueraded network.
However, most NAT devices today allow the network administrator to configure translation
table entries for permanent use. This feature is often referred to as "static NAT" or port
forwarding and allows traffic originating in the "outside" network to reach designated hosts in
the masqueraded network.


In the mid-1990s NAT became a popular tool for alleviating the consequences of IPv4 address
exhaustion. It has become a common, indispensable feature in routers for home and
small-office Internet connections. Most systems using NAT do so in order to enable multiple
hosts on a private network to access the Internet using a single public IP address.
Network address translation has serious drawbacks in terms of the quality of Internet
connectivity and requires careful attention to the details of its implementation. In particular,
all types of NAT break the originally envisioned model of IP end-to-end connectivity across
the Internet and NAPT makes it difficult for systems behind a NAT to accept incoming
communications. As a result, NAT traversal methods have been devised to alleviate the issues
encountered.
Figure 4-9 shows two private networks behind a NAT. The NAT has a public IP address
towards the public network.
Figure 4-9 Network Address Translation NAT

Figure 4-10 shows the address translation and port generation done by NAT and how the
packet looks after the translation. IP addresses and port numbers are shown.


Figure 4-10 NAT Address Translation and Port Generation

4.6 IP-based applications providing user services: email, web browsing, ftp, etc
Figure 4-11 IP based applications


Multimedia over IP (MoIP)

5.1 Network architectures for SIP-domain (RFC 3261)


The Session Initiation Protocol (SIP) is an IETF-defined signaling protocol widely used for
controlling communication sessions such as voice and video calls over Internet Protocol (IP).
The protocol can be used for creating, modifying and terminating two-party (unicast) or
multiparty (multicast) sessions. Sessions may consist of one or several media streams.
Other SIP applications include video conferencing, streaming multimedia distribution, instant
messaging, presence information, file transfer and online games.
The SIP protocol is an Application Layer protocol designed to be independent of the
underlying Transport Layer; it can run on Transmission Control Protocol (TCP), User
Datagram Protocol (UDP), or Stream Control Transmission Protocol (SCTP). It is a text-based
protocol, incorporating many elements of the Hypertext Transfer Protocol (HTTP) and the
Simple Mail Transfer Protocol (SMTP).

5.1.1 Protocol Design


SIP employs design elements similar to the HTTP request/response transaction model. Each
transaction consists of a client request that invokes a particular method or function on the
server and at least one response. SIP reuses most of the header fields, encoding rules and
status codes of HTTP, providing a readable text-based format.
Each resource of a SIP network, such as a user agent or a voicemail box, is identified by a
uniform resource identifier (URI), based on the general standard syntax also used in Web
services and e-mail. A typical SIP URI is of the form:
sip:username:password@host:port. The URI scheme used for SIP is sip:.
If secure transmission is required, the scheme sips: is used and mandates that each hop
over which the request is forwarded up to the target domain must be secured with Transport
Layer Security (TLS). The last hop from the proxy of the target domain to the user agent has
to be secured according to local policies. TLS protects against attackers which try to listen on
the signaling link. It does not provide real end-to-end security, since encryption is only
hop-by-hop and every single intermediate proxy has to be trusted.
SIP works in concert with several other protocols and is only involved in the signaling portion
of a communication session. SIP clients typically use TCP or UDP on port numbers 5060
and/or 5061 to connect to SIP servers and other SIP endpoints. Port 5060 is commonly used
for non-encrypted signaling traffic whereas port 5061 is typically used for traffic encrypted
with Transport Layer Security (TLS). SIP is primarily used in setting up and tearing down
voice or video calls. It also allows modification of existing calls. The modification can
involve changing addresses or ports, inviting more participants, and adding or deleting media
streams. SIP has also found applications in messaging applications, such as instant messaging,
and event subscription and notification. A suite of SIP-related Internet Engineering Task Force
(IETF) rules define behavior for such applications. The voice and video stream
communications in SIP applications are carried over another application protocol, the
Real-time Transport Protocol (RTP). Parameters (port numbers, protocols, codecs) for these
media streams are defined and negotiated using the Session Description Protocol (SDP) which
is transported in the SIP packet body.
A motivating goal for SIP was to provide a signaling and call setup protocol for IP-based
communications that can support a superset of the call processing functions and features
present in the public switched telephone network (PSTN). SIP by itself does not define these
features; rather, its focus is call-setup and signaling. The features that permit familiar
telephone-like operations: dialing a number, causing a phone to ring, hearing ringback tones
or a busy signal - are performed by proxy servers and user agents. Implementation and
terminology are different in the SIP world but to the end-user, the behavior is similar.
SIP-enabled telephony networks can also implement many of the more advanced call
processing features present in Signaling System 7 (SS7), though the two protocols themselves
are very different. SS7 is a centralized protocol, characterized by a complex central network
architecture and dumb endpoints (traditional telephone handsets). SIP is a peer-to-peer
protocol, thus it requires only a simple (and thus scalable) core network with intelligence
distributed to the network edge, embedded in endpoints (terminating devices built in either
hardware or software). SIP features are implemented in the communicating endpoints (i.e. at
the edge of the network) contrary to traditional SS7 features, which are implemented in the
network.
Although several other VoIP signaling protocols exist (such as BICC, H.323, MGCP,
MEGACO), SIP is distinguished by its proponents for having roots in the IP community
rather than the telecommunications industry. SIP has been standardized and governed
primarily by the IETF, while other protocols, such as H.323, have traditionally been
associated with the International Telecommunication Union (ITU).
The first proposed standard version (SIP 1.0) was defined by RFC 2543. This version of the
protocol was further refined to version 2.0 and clarified in RFC 3261, although some
implementations are still relying on the older definitions.

5.1.2 Network Elements


SIP also defines server network elements. Although two SIP endpoints can communicate
without any intervening SIP infrastructure, which is why the protocol is described as
peer-to-peer, this approach is often impractical for a public service. RFC 3261 defines these
server elements.

User Agent
A SIP user agent (UA) is a logical network end-point used to create or receive SIP messages
and thereby manage a SIP session. A SIP UA can perform the role of a User Agent Client
(UAC), which sends SIP requests, and the User Agent Server (UAS), which receives the
requests and returns a SIP response. These roles of UAC and UAS only last for the duration of
a SIP transaction.
A SIP phone is a SIP user agent that provides the traditional call functions of a telephone,
such as dial, answer, reject, hold/unhold, and call transfer. SIP phones may be implemented as
a hardware device or as a softphone. As vendors increasingly implement SIP as a standard
telephony platform, often driven by 4G efforts, the distinction between hardware-based and
software-based SIP phones is being blurred and SIP elements are implemented in the basic
firmware functions of many IP-capable devices. Examples are devices from Nokia and
Research in Motion.
In SIP, as in HTTP, the user agent may identify itself using a message header field
'User-Agent', containing a text description of the software/hardware/product involved. The
User-Agent field is sent in request messages, which means that the receiving SIP server can
see this information. SIP network elements sometimes store this information, and it can be
useful in diagnosing SIP compatibility problems.

Proxy server
An intermediary entity that acts as both a server (UAS) and a client (UAC) for the purpose of
making requests on behalf of other clients. A proxy server primarily plays the role of routing,
which means its job is to ensure that a request is sent to another entity "closer" to the targeted
user. Proxies are also useful for enforcing policy (for example, making sure a user is allowed
to make a call). A proxy interprets, and, if necessary, rewrites specific parts of a request
message before forwarding it.

Registrar
A server that accepts REGISTER requests and places the information it receives in those
requests into the location service for the domain it handles. The registration binds one or more IP
addresses to a certain SIP URI, indicated by the sip: scheme, although other protocol
schemes are possible (such as tel:). More than one user agent can register at the same URI,
with the result that all registered user agents will receive a call to the SIP URI.
SIP registrars are logical elements, and are commonly co-located with SIP proxies. But it is
also possible and often good for network scalability to place this location service with a
redirect server.

Redirect server
A user agent server that generates 3xx (Redirection) responses to requests it receives,
directing the client to contact an alternate set of URIs. The redirect server allows proxy
servers to direct SIP session invitations to external domains.
Session border controller
Session border controllers serve as middle boxes between the UA and the SIP server for various types
of functions, including network topology hiding and assistance in NAT traversal.

Gateway
Gateways can be used to interface a SIP network to other networks, such as the public
switched telephone network, which use different protocols or technologies.
Figure 5-1 shows SIP domain architecture and its network elements


Figure 5-1 SIP Domain Architecture

In the traffic case below, a SIP registration and invitation are shown, looking only at
the SIP methods; later in this chapter the headers are examined, including SDP.
Figure 5-2 SIP Traffic Case


5.2 Session Initiation Protocol-SIP


SIP Methods
SIP is a text-based protocol with syntax similar to that of HTTP. There are two different types
of SIP messages: requests and responses. The first line of a request has a method, defining the
nature of the request, and a Request-URI, indicating where the request should be sent. The
first line of a response has a response code.
For SIP requests, RFC 3261 defines the following methods:

REGISTER: Used by a UA to indicate its current IP address and the URLs for which it
would like to receive calls.

INVITE: Used to establish a media session between user agents.

ACK: Confirms reliable message exchanges.

CANCEL: Terminates a pending request.

BYE: Terminates a session between two users in a conference.

OPTIONS: Requests information about the capabilities of a caller, without setting up a
call.

A new method has been introduced in SIP in RFC 3262:


PRACK (Provisional Response Acknowledgement): PRACK improves network
reliability by adding an acknowledgement system to the provisional responses (1xx).
PRACK is sent in response to a provisional response (1xx).


Figure 5-3 Syntax of SIP Method, INVITE

Below is a list of SIP methods as they are defined in RFC 3261, and also a list of SIP methods
defined by 3GPP to be used in IMS, Figure 5-4.


Figure 5-4 SIP Methods and RFCs

SIP Headers
There are 6 mandatory headers defined in RFC 3261. SIP can be extended by new headers.
The format of a header is the header name followed by a colon and header-specific
information, e.g. Contact: <sip:192.154.78.23>
Below is a list of mandatory SIP headers:
Figure 5-5 SIP headers
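As an illustration, the sketch below assembles a minimal SIP INVITE containing the mandatory RFC 3261 header fields (Via, Max-Forwards, To, From, Call-ID, CSeq); the URIs, tag and branch values are invented examples, and Content-Length is added simply to mark the empty body.

# Assemble a minimal SIP INVITE with the mandatory RFC 3261 headers.
# All addresses, tags and the branch parameter below are invented examples.
def build_invite(caller, callee, call_id, cseq):
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP client.example.com:5060;branch=z9hG4bK776asdhds",
        "Max-Forwards: 70",
        f"To: <sip:{callee}>",
        f"From: <sip:{caller}>;tag=1928301774",
        f"Call-ID: {call_id}",
        f"CSeq: {cseq} INVITE",
        "Content-Length: 0",
        "",                     # blank line separates the headers from the (empty) body
        "",
    ]
    return "\r\n".join(lines)

print(build_invite("alice@example.com", "bob@example.net", "a84b4c76e66710", 314159))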

SIP Responses
The SIP response types defined in RFC 3261 fall in one of the following categories:

Provisional (1xx): Request received and being processed.

Success (2xx): The action was successfully received, understood, and accepted.

Redirection (3xx): Further action needs to be taken (typically by sender) to complete the
request.

Client Error (4xx): The request contains bad syntax or cannot be fulfilled at the server.

Server Error (5xx): The server failed to fulfill an apparently valid request.

Global Failure (6xx): The request cannot be fulfilled at any server.


Figure 5-6 SIP Responses

5.3 Session Description Protocol SDP


The Session Description Protocol (SDP) is a format for describing streaming media
initialization parameters. The IETF published the original specification as an IETF Proposed
Standard in April 1998,[1] and subsequently published a revised specification as an IETF
Proposed Standard as RFC 4566 in July 2006.
SDP is intended for describing multimedia communication sessions for the purposes of
session announcement, session invitation, and parameter negotiation. SDP does not deliver
media itself but is used for negotiation between end points of media type, format, and all
associated properties. The set of properties and parameters are often called a session profile.
SDP is designed to be extensible to support new media types and formats.
SDP started off as a component of the Session Announcement Protocol (SAP), but found
other uses in conjunction with Real-time Transport Protocol (RTP), Real-time Streaming
Protocol (RTSP), Session Initiation Protocol (SIP) and even as a standalone format for
describing multicast sessions.
Figure 5-7 shows the format of the Session Description Protocol SDP. It has a general
session-level part followed by the media descriptions.


Figure 5-7 Session Description Protocol SDP

Figure 5-8 SDP General Protocol Description

Figure 5-9 SDP Media Description
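As an illustration of such a description, the sketch below builds a small SDP body of the kind carried in a SIP message body; the addresses, session identifiers, media port and the PCMU payload mapping are example values in the style of RFC 4566.

# An illustrative SDP body of the kind negotiated in SIP INVITE/200 OK exchanges.
# Addresses, session id and media port are example values only.
sdp_body = "\r\n".join([
    "v=0",                                                  # protocol version
    "o=alice 2890844526 2890844526 IN IP4 198.51.100.1",    # origin / session id
    "s=Audio call",                                         # session name
    "c=IN IP4 198.51.100.1",                                # connection address for the media
    "t=0 0",                                                # timing (0 0 = unbounded session)
    "m=audio 49170 RTP/AVP 0",                              # media: audio on port 49170, payload 0
    "a=rtpmap:0 PCMU/8000",                                 # payload 0 mapped to G.711 u-law, 8 kHz
]) + "\r\n"
print(sdp_body)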


5.4 Real-time Transport Protocol RTP and RTCP


Real-time Transport Protocol RTP
The Real-time Transport Protocol (RTP) defines a standardized packet format for
delivering audio and video over IP networks. RTP is used extensively in communication and
entertainment systems that involve streaming media, such as telephony, video teleconference
applications, television services and web-based push-to-talk features.
RTP is used in conjunction with the RTP Control Protocol (RTCP). While RTP carries the
media streams (e.g., audio and video), RTCP is used to monitor transmission statistics and
quality of service (QoS) and aids synchronization of multiple streams. RTP is originated and
received on even port numbers and the associated RTCP communication uses the next higher
odd port number.
RTP is one of the technical foundations of Voice over IP and in this context is often used in
conjunction with a signaling protocol which assists in setting up connections across the
network.
RTP was developed by the Audio-Video Transport Working Group of the Internet Engineering
Task Force (IETF) and first published in 1996 as RFC 1889, superseded by RFC 3550 in
2003.
RTP is designed for end-to-end, real-time transfer of stream data. The protocol provides
facilities for jitter compensation and detection of out-of-sequence arrival of data, which are
common during transmissions on an IP network. RTP supports data transfer to multiple
destinations through IP multicast. RTP is regarded as the primary standard for audio/video
transport in IP networks and is used with an associated profile and payload format.
Real-time multimedia streaming applications require timely delivery of information and can
tolerate some packet loss to achieve this goal. For example, loss of a packet in audio
application may result in loss of a fraction of a second of audio data, which can be made
unnoticeable with suitable error concealment algorithms. The Transmission Control Protocol
(TCP), although standardized for RTP use, is not normally used in RTP applications because
TCP favors reliability over timeliness. Instead the majority of the RTP implementations are
built on the User Datagram Protocol (UDP). Other transport protocols specifically designed
for multimedia sessions are SCTP[5] and DCCP, although, as of 2010, they are not in
widespread use.
RTP was developed by the Audio/Video Transport working group of the IETF standards
organization. RTP is used in conjunction with other protocols such as H.323 and RTSP. The
RTP standard defines a pair of protocols, RTP and RTCP. RTP is used for transfer of
multimedia data, and the RTCP is used to periodically send control information and QoS
parameters. Figure 5-10 shows the RTP header with corresponding header fields.
Figure 5-10 RTP Header
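The sketch below unpacks the fixed 12-byte RTP header of RFC 3550 (version, padding, extension, CSRC count, marker, payload type, sequence number, timestamp and SSRC); the sample packet is fabricated.

import struct

def parse_rtp_header(packet: bytes):
    """Decode the fixed 12-byte RTP header (RFC 3550)."""
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,              # always 2 for RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,       # e.g. 0 = PCMU
        "sequence": seq,
        "timestamp": timestamp,
        "ssrc": ssrc,
    }

# Fabricated packet: version 2, payload type 0 (PCMU), sequence 1, SSRC 0xdeadbeef.
pkt = struct.pack("!BBHII", 0x80, 0x00, 1, 160, 0xDEADBEEF) + b"\x00" * 160
print(parse_rtp_header(pkt))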

Real-time Transport Control Protocol RTCP


The RTP Control Protocol (RTCP) is a sister protocol of the Real-time Transport Protocol
(RTP). Its basic functionality and packet structure is defined in the RTP specification RFC
3550, superseding its original standardization in 1996 (RFC 1889).
RTCP provides out-of-band statistics and control information for an RTP flow. It partners RTP
in the delivery and packaging of multimedia data, but does not transport any media streams
itself. Typically RTP will be sent on an even-numbered UDP port, with RTCP messages being
sent over the next higher odd-numbered port. The primary function of RTCP is to provide
feedback on the quality of service (QoS) in media distribution by periodically sending
statistics information to participants in a streaming multimedia session.
RTCP gathers statistics for a media connection and information such as transmitted octet and
packet counts, lost packet counts, jitter, and round-trip delay time. An application may use this
information to control quality of service parameters, perhaps by limiting flow, or using a
different codec.
RTCP itself does not provide any flow encryption or authentication methods. Such
mechanisms may be implemented, for example, with the Secure Real-time Transport Protocol
(SRTP) defined in RFC 3711.
Below are the RTCP sender report and receiver report, Figure 5-11.

RTCP - Sender report (SR)


The sender report is sent periodically by the active senders in a conference to report
transmission and reception statistics for all RTP packets sent during the interval. The sender
report includes an absolute timestamp, which is the number of seconds elapsed since midnight
on January 1, 1900. The absolute timestamp allows the receiver to synchronize RTP
messages. It is particularly important when both audio and video are transmitted
simultaneously, because audio and video streams use independent relative timestamps.
Figure 5-11 RTCP Sender Report


RTCP - Receiver report (RR)


The receiver report Figure 5-12 is for passive participants, those that do not send RTP packets.
The report informs the sender and other receivers about the quality of service.


Figure 5-12 RTCP Receiver Report

5.5 Interworking between SIP-based networks and the PSTN/ISDN networks

Figure 5-13 PSTN Breakout


IP in GPRS/UMTS/EPS

6.1 Remote access and login procedures for mobile users:


For a device that is part of the mobile network, in order to be able to start data services such as
web browsing or mail or even streaming, the mobile device needs to have data connectivity. This data
connectivity enables the device to send and receive data/IP packets over the cellular network.
Another result of the data connectivity is that the device is given an IP address (IPv4/IPv6)
and can be reached from the network side, such as for incoming packet paging. The data
connectivity in GPRS/UMTS terminology is called a PDP context, while in EPS (Evolved
Packet System) terminology it is called a default bearer. Below is an explanation of the PDP context
and the Default Bearer and how they are set up.

6.1.1 GPRS/UMTS
In GPRS/UMTS the bearer is called a PDP context. A primary PDP context is activated at
the beginning of login, providing the user with an IP address. The user may also activate a
secondary PDP context keeping the same IP address as the primary PDP context but having a different
QoS for the bearer. Figure 6-1 shows the general concept.


Figure 6-1 IP Connectivity in GPRS/UMTS

6.1.2 Evolved Packet System EPS


In EPS terminology the first bearer that the user sets up is called a Default Bearer, getting the
QoS profile that the user subscribes to (for that APN - Access Point Name). After Default
Bearer activation, the user may activate a number of bearers having the same IP address but different
QoS, depending on the services that the user may want to start or on network
settings/configurations regarding the QoS of the bearer. Figure 6-2 shows the general concept.
Figure 6-2 IP connectivity in EPS


6.1.3 Multiple PDN-Connections


Both in GPRS/UMTS and EPS the user may activate multiple PDN connections, each
addressed by an APN and having a different QoS profile as subscribed in the HLR/HSS. Figure
6-3 below shows the general concept of multiple PDN connections in EPS.
Figure 6-3 Multiple PDN Connections

6.2 PDP Context Activation


The packet data protocol (PDP; e.g., IP, X.25, FrameRelay) context is a data structure present
on both the serving GPRS support node (SGSN) and the gateway GPRS support node
(GGSN) which contains the subscriber's session information when the subscriber has an
active session. When a mobile wants to use GPRS, it must first attach and then activate a
PDP context. This allocates a PDP context data structure in the SGSN that the subscriber is
currently visiting and the GGSN serving the subscriber's access point. The data recorded
includes

Subscriber's IP address

Subscriber's IMSI

Subscriber's Tunnel Endpoint ID (TEID) at the GGSN

Subscriber's Tunnel Endpoint ID (TEID) at the SGSN

The Tunnel Endpoint ID (TEID) is a number allocated by the GSN which identifies the
tunnelled data related to a particular PDP context.
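As an illustration, the data recorded for a PDP context can be pictured as a simple structure like the Python sketch below; the values are invented, and a real SGSN/GGSN of course stores considerably more state (QoS profile, charging information, etc.).

from dataclasses import dataclass

@dataclass
class PdpContext:
    """Subset of the data recorded for one PDP context (values are examples)."""
    ip_address: str        # subscriber's PDP address
    imsi: str              # subscriber's IMSI
    teid_ggsn: int         # tunnel endpoint id allocated by the GGSN
    teid_sgsn: int         # tunnel endpoint id allocated by the SGSN
    nsapi: int             # distinguishes this context from others of the same UE
    apn: str               # access point name the context is anchored on

primary = PdpContext("10.64.12.7", "240011234567890", 0x1A2B3C4D, 0x0000BEEF, 5, "internet")
# A secondary context reuses the same PDP address but has its own NSAPI, TEIDs and QoS.
secondary = PdpContext(primary.ip_address, primary.imsi, 0x1A2B4000, 0x0000C001, 6, "internet")
print(primary)
print(secondary)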
Several PDP contexts may use the same IP address. The Secondary PDP Context Activation
procedure may be used to activate a PDP context while reusing the PDP address and other
PDP context information from an already active PDP context, but with a different QoS
profile.[1] Note that it is the activation procedure that is called secondary; the resulting PDP
contexts have no such relationship with the context whose PDP address they reuse.
A total of 11 PDP contexts (with any combination of primary and secondary) can co-exist.
NSAPI are used to differentiate the different PDP context. Figure 6-4 shows a traffic case
used to activate a PDP-Context.
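
To make the list above concrete, the following Python sketch models the information a GSN could keep per PDP context. The field names, types and example values are purely illustrative assumptions and are not taken from any 3GPP specification or product implementation.

from dataclasses import dataclass

@dataclass
class PdpContext:
    """Minimal sketch of the per-session data kept in the SGSN and GGSN."""
    imsi: str            # subscriber identity
    pdp_address: str     # IP address allocated to the subscriber
    nsapi: int           # distinguishes the (at most 11) parallel PDP contexts
    teid_ggsn: int       # tunnel endpoint ID allocated by the GGSN
    teid_sgsn: int       # tunnel endpoint ID allocated by the SGSN
    qos_profile: dict    # negotiated QoS attributes

primary = PdpContext(imsi="240011234567890", pdp_address="10.20.30.40",
                     nsapi=5, teid_ggsn=0x1A2B, teid_sgsn=0x3C4D,
                     qos_profile={"traffic_class": "interactive"})

# A secondary PDP context reuses the PDP address but gets its own NSAPI,
# its own TEIDs and a different QoS profile.
secondary = PdpContext(imsi=primary.imsi, pdp_address=primary.pdp_address,
                       nsapi=6, teid_ggsn=0x1A2C, teid_sgsn=0x3C4E,
                       qos_profile={"traffic_class": "conversational"})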


Figure 6-4 PDP Context Activation in GPRS/UMTS

6.3 Default Bearer set-up


The UE and the network execute the attach procedure and the default EPS bearer context
activation procedure in parallel. During the EPS attach procedure the network activates a
default EPS bearer context. The EPS session management messages for the default EPS
bearer context activation are transmitted in an information element in the EPS mobility
management messages. The UE and network complete the combined default EPS bearer
context activation procedure and the attach procedure before the dedicated EPS bearer context
activation procedure is completed. The success of the attach procedure is dependent on the
success of the default EPS bearer context activation procedure. If the attach procedure fails,
then the ESM session management procedures also fail.

EPS Bearer:
Each EPS bearer context represents an EPS bearer between the UE and a PDN. EPS bearer
contexts can remain activated even if the radio and S1 bearers constituting the corresponding
EPS bearers between UE and MME are temporarily released. An EPS bearer context can be
either a default bearer context or a dedicated bearer context. A default EPS bearer context is
activated when the UE requests a connection to a PDN. The first default EPS bearer context
is activated during the EPS attach procedure. Additionally, the network can activate one or
several dedicated EPS bearer contexts in parallel. Figure 6-5 shows a traffic case for setting
up the default bearer in EPS.


Figure 6-5 Default Bearer Activation in EPS

6.4 User profile


User profiles for mobile users are stored in the HLR (UMTS/GPRS) or in the HSS (UMTS R5 and later). A part
of the user profile is the QoS for each subscribed PDN connection, which is identified by an APN.
Figure 6-6 shows the user profile stored in the HSS for an EPS user.
Figure 6-6 User profile for EPS user

6.5 Access Point Name APN


A mobile device making a data connection must be configured with an APN to present to the
carrier. The carrier will then examine this identifier to determine what type of network
connection should be created, for example: what IP addresses should be assigned to the
wireless device, what security methods should be used, and how or if, it should be connected
to some private customer network.
More specifically, the APN identifies the packet data network (PDN), that a mobile data user
wants to communicate with. In addition to identifying a PDN, an APN may also be used to
define the type of service, (e.g. connection to wireless application protocol (WAP) server,
multimedia messaging service (MMS)), that is provided by the PDN. APN is used in 3GPP
data access networks, e.g. general packet radio service (GPRS), evolved packet core (EPC).

Structure of an APN
A structured APN consists of two parts as shown in Figure 6-7.

Network Identifier: Defines the external network to which the Gateway GPRS Support
Node (GGSN) is connected. Optionally, it may also include the service requested by the
user. This part of the APN is mandatory.

Operator Identifier: Defines the specific operator's packet domain network in which the
GGSN is located. This part of the APN is optional. The MCC is the Mobile Country
Code and the MNC is the Mobile Network Code, which together uniquely identify a
mobile network operator.

Figure 6-7 Access Point Name APN
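
As an illustration of this structure, the small sketch below composes and splits a full APN. The network identifier "internet" and the MCC/MNC values are invented for the example; the mnc<MNC>.mcc<MCC>.gprs labelling of the operator identifier follows the usual 3GPP convention.

def build_apn(network_id, mcc, mnc):
    """Combine a network identifier and an operator identifier into a full APN."""
    return "%s.mnc%03d.mcc%03d.gprs" % (network_id, mnc, mcc)

def split_apn(apn):
    """Separate a full APN back into its network and operator identifier parts."""
    labels = apn.split(".")
    return ".".join(labels[:-3]), ".".join(labels[-3:])

full = build_apn("internet", mcc=234, mnc=15)   # hypothetical operator
print(full)                                     # internet.mnc015.mcc234.gprs
print(split_apn(full))                          # ('internet', 'mnc015.mcc234.gprs')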

6.6 QoS for the IP connectivity of a UE


As discussed previously, the QoS for the IP bearers is well defined in mobile networks. The
QoS for the default bearer or the primary PDP context is stored in the HLR/HSS and is
downloaded to the SGSN/SGW during the attach/registration.
However, the QoS for the secondary PDP context, or the dedicated bearer in EPS, depends on the
service that the user starts. In the case of IMS and SIP, the QoS required for the service,
described by SDP (codecs and bit rates), is mapped to the corresponding
secondary PDP context or dedicated bearer. This required and negotiated QoS is used
to set up the bearers from the network side. For more on QoS, see the chapter on IP QoS.

6.7 Roaming
When the user is roaming, the gateway (GGSN or PGW) can be in the HPLMN or the
VPLMN, depending on network settings and policies. A flag in the HLR/HSS
indicates whether home routed traffic should apply; the flag is called VPLMN-Allowed and can
be set to yes or no. In either case the IP address of the user is always allocated by the gateway,
which remains the same for the duration of the bearer.

Home Routed Traffic


Figure 6-8 shows the placement of the network elements in the case of home routed traffic. As seen,
the gateway GGSN/PGW is in the home network, so the user gets the same settings (IP address, DNS, etc.)
as when attached in the home network.
Figure 6-8 Roaming and Home Routed Traffic

Local Breakout
In the case where operators (both home operator and the visited operator) allow local breakout,
the gateway can be in the visited network. The flag in the HLR/HSS should in this case be set
to yes. Figure 6-9 shows the general concept of local breakout. The visited gateway
GGSN/PGW is in this case responsible for the bearer settings, such as IP address, QoS, etc


Figure 6-9 Roaming and Local Breakout

6.8 The GPRS Tunneling Protocol GTP


GPRS Tunnelling Protocol is the defining IP-based protocol of the GPRS core network.
Primarily it is the protocol which allows end users of a GSM or WCDMA network to move
from place to place while continuing to connect to the Internet as if from one location at the
Gateway GPRS support node (GGSN). It does this by carrying the subscriber's data from the
subscriber's current serving GPRS support node (SGSN) to the GGSN which is handling the
subscriber's session. Mainly two forms of GTP are used by the GPRS core network.

GTP-u: for transfer of user data in separate tunnels for each Packet Data Protocol (PDP) context; GTPv1 is
used for GTP-u

GTP-c: for control purposes, including:

setup and deletion of PDP contexts

verification of GSN reachability

updates; e.g., as subscribers move from one SGSN/SGW to another.

Figure 6-10 shows GTP and its usage in EPS for tunnelling of user data between the SGW and the
PGW. As shown previously, GTP is used in the core network for UMTS/PS and EPS, both for the
control and the user plane (see the protocol stacks for UMTS and EPS).
EPS uses GTPv2 for GTP-c, while UMTS/GPRS uses GTPv1.
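
As a simplified illustration of GTP-u encapsulation, the sketch below builds a minimal GTPv1-U G-PDU (the mandatory 8-byte header only, no optional fields) around a user packet. The TEID value and the dummy payload are arbitrary example values.

import struct

def gtpu_encapsulate(teid, inner_ip_packet):
    """Build a minimal GTPv1-U G-PDU: 8-byte mandatory header + payload.
    Flags 0x30 = version 1, protocol type GTP, no optional fields;
    message type 0xFF = G-PDU; the length field counts only the payload."""
    header = struct.pack("!BBHI", 0x30, 0xFF, len(inner_ip_packet), teid)
    return header + inner_ip_packet

user_packet = b"\x45" + b"\x00" * 19          # stand-in for an inner IPv4 header
tunnelled = gtpu_encapsulate(teid=0x1A2B3C4D, inner_ip_packet=user_packet)
print(len(tunnelled), tunnelled[:8].hex())    # 28 bytes total, header first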


Figure 6-10 GPRS Tunneling Protocol GTP


7 Quality of Service

7.1 QoS definition in HSPA: Conversational, Streaming, Interactive and Background

When defining the UMTS QoS classes, also referred to as traffic classes, the restrictions and
limitations of the air interface have to be taken into account. It is not reasonable to define
mechanisms as complex as those used in fixed networks, due to the different error characteristics of
the air interface. The QoS mechanisms provided in the cellular network have to be robust and
capable of providing reasonable QoS resolution. Figure 7-1 illustrates the QoS classes for
UMTS.
There are four different QoS classes:

Conversational class

Streaming class

Interactive class

Background class

Figure 7-1 QoS classes in UMTS

The main distinguishing factor between these QoS classes is how delay sensitive the traffic is:
Conversational class is meant for traffic which is very delay sensitive while Background class
is the most delay insensitive traffic class.
Conversational and Streaming classes are mainly intended to be used to carry real-time traffic
flows. The main divider between them is how delay sensitive the traffic is. Conversational
real-time services, like video telephony, are the most delay sensitive applications and those
data streams should be carried in Conversational class.
Interactive and Background classes are mainly meant to be used by traditional Internet
applications like WWW, Email, Telnet, FTP and News. Due to looser delay requirements,
compared to the Conversational and Streaming classes, both provide a better error rate by means of
channel coding and retransmission. The main difference between Interactive and Background
class is that Interactive class is mainly used by interactive applications, e.g. interactive Email
or interactive Web browsing, while Background class is meant for background traffic, e.g.
background download of Emails or background file downloading. Responsiveness of the
interactive applications is ensured by separating interactive and background applications.
Traffic in the Interactive class has higher priority in scheduling than Background class traffic,
so background applications use transmission resources only when interactive applications do
not need them. This is very important in wireless environment where the bandwidth is low
compared to fixed networks.
However, these are only typical examples of usage of the traffic classes. There is in particular
no strict one-to-one mapping between classes of service (as defined in TS 22.105 [5]) and the
traffic classes defined in TS 23.107. For instance, a service interactive by nature can very well
use the Conversational traffic class if the application or the user has tight requirements on
delay.

7.1.2 Conversational class


The most well known use of this scheme is telephony speech (e.g. GSM). But with Internet
and multimedia a number of new applications will require this scheme, for example voice
over IP and video conferencing tools. Real time conversation is always performed between
peers (or groups) of live (human) end-users. This is the only scheme where the required
characteristics are strictly given by human perception.
The real time conversation scheme is characterised by the fact that the transfer time shall be low,
because of the conversational nature of the scheme, and at the same time that the time relation
(variation) between information entities of the stream shall be preserved in the same way as
for real time streams. The maximum transfer delay is given by the human perception of video
and audio conversation. Therefore the limit for acceptable transfer delay is very strict, as
failure to provide low enough transfer delay will result in unacceptable lack of quality. The
transfer delay requirement is therefore both significantly lower and more stringent than the
round trip delay of the interactive traffic case.
Real time conversation - fundamental characteristics for QoS:

Preserve time relation (variation) between information entities of the stream

Conversational pattern (stringent and low delay)

7.1.3 Streaming class


When the user is looking at (listening to) real time video (audio) the scheme of real time
streams applies. The real time data flow is always aiming at a live (human) destination. It is a
one way transport.


This scheme is one of the newcomers in data communication, raising a number of new
requirements in both telecommunication and data communication systems. It is characterised
by the fact that the time relations (variation) between information entities (i.e. samples, packets)
within a flow shall be preserved, although it does not have any requirements on low transfer
delay.
The delay variation of the end-to-end flow shall be limited, to preserve the time relation
(variation) between information entities of the stream. But as the stream normally is time
aligned at the receiving end (in the user equipment), the highest acceptable delay variation
over the transmission media is given by the capability of the time alignment function of the
application. Acceptable delay variation is thus much greater than the delay variation given by
the limits of human perception.
Real time streams - fundamental characteristics for QoS:

Preserve time relation (variation) between information entities of the stream

7.1.4 Interactive class


When the end-user, which is either a machine or a human, is online requesting data from remote
equipment (e.g. a server), this scheme applies. Examples of human interaction with the remote
equipment are: web browsing, database retrieval, server access. Examples of machine
interaction with remote equipment are: polling for measurement records and automatic
database enquiries (tele-machines).
Interactive traffic is the other classical data communication scheme that on an overall level is
characterised by the request response pattern of the end-user. At the message destination there
is an entity expecting the message (response) within a certain time. Round trip delay time is
therefore one of the key attributes. Another characteristic is that the content of the packets
shall be transparently transferred (with low bit error rate).
Interactive traffic - fundamental characteristics for QoS:

Request response pattern

Preserve payload content

7.1.5 Background class


When the end-user, which typically is a computer, sends and receives data files in the
background, this scheme applies. Examples are background delivery of E-mails, SMS,
download of databases and reception of measurement records.
Background traffic is one of the classical data communication schemes that on an overall
level is characterised by the fact that the destination is not expecting the data within a certain time.
The scheme is thus more or less delivery time insensitive. Another characteristic is that the
content of the packets shall be transparently transferred (with low bit error rate).
Background traffic - fundamental characteristics for QoS:

The destination is not expecting the data within a certain time;

Preserve payload content.


7.2 QoS definition in GPRS/EPS: MBR, GBR, ARP, THI and QCI

The EPS bearer QoS profile includes the parameters QCI, ARP, GBR and MBR, described
below. This clause also describes QoS parameters which are applied to an aggregated set of
EPS Bearers: APN-AMBR and UE-AMBR.
Each EPS bearer (GBR and Non-GBR) is associated with the following bearer level QoS
parameters:

QoS Class Identifier (QCI);

Allocation and Retention Priority (ARP).

A QCI (see Figure 7-2) is a scalar that is used as a reference to access node-specific parameters that
control bearer level packet forwarding treatment (e.g. scheduling weights, admission
thresholds, queue management thresholds, link layer protocol configuration, etc.), and that
have been pre-configured by the operator owning the access node (e.g. eNodeB). A
one-to-one mapping of standardized QCI values to standardized characteristics is captured in
TS 23.203.
Figure 7-2 QCI and some examples

On the radio interface and on S1, each PDU (e.g. RLC PDU or GTP-U PDU) is indirectly
associated with one QCI via the bearer identifier carried in the PDU header. The same applies
to the S5 and S8 interfaces if they are based on GTP.
The ARP shall contain information about the priority level (scalar), the pre-emption capability
(flag) and the pre-emption vulnerability (flag). The primary purpose of ARP is to decide
whether a bearer establishment / modification request can be accepted or needs to be rejected
due to resource limitations (typically available radio capacity for GBR bearers). The priority
level information of the ARP is used for this decision to ensure that the request of the bearer
with the higher priority level is preferred. In addition, the ARP can be used (e.g. by the
eNodeB) to decide which bearer(s) to drop during exceptional resource limitations (e.g. at
handover). The pre-emption capability information of the ARP defines whether a bearer with
a lower ARP priority level should be dropped to free up the required resources. The
pre-emption vulnerability information of the ARP defines whether a bearer is applicable for
such dropping by a pre-emption capable bearer with a higher ARP priority value. Once
successfully established, a bearer's ARP shall not have any impact on the bearer level packet
forwarding treatment (e.g. scheduling and rate control). Such packet forwarding treatment
should be solely determined by the other EPS bearer QoS parameters: QCI, GBR and MBR,
and by the AMBR parameters. The ARP is not included within the EPS QoS Profile sent to
the UE.
The ARP should be understood as "Priority of Allocation and Retention"; not as "Allocation,
Retention, and Priority".
Video telephony is one use case where it may be beneficial to use EPS bearers with different
ARP values for the same UE. In this use case an operator could map voice to one bearer with
a higher ARP, and video to another bearer with a lower ARP. In a congestion situation (e.g.
cell edge) the eNodeB can then drop the "video bearer" without affecting the "voice bearer".
This would improve service continuity.
The ARP may also be used to free up capacity in exceptional situations, e.g. a disaster
situation. In such a case the eNodeB may drop bearers with a lower ARP priority level to free
up capacity if the pre-emption vulnerability information allows this.
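
To make the ARP semantics more tangible, the following sketch shows one possible way an admission decision could be expressed from the three ARP components. The class layout and the convention that a lower priority-level value means higher priority are illustrative assumptions for this example, not a description of any particular node's algorithm.

from dataclasses import dataclass

@dataclass
class Arp:
    priority_level: int            # assumed 1..15, with 1 as the highest priority
    preemption_capability: bool    # may this bearer pre-empt others?
    preemption_vulnerability: bool # may this bearer itself be pre-empted?

def admit(new_bearer_arp, existing_bearer_arps, capacity_left):
    """Sketch of ARP-based admission: admit if capacity allows, otherwise
    try to pre-empt a vulnerable bearer with lower priority (higher number)."""
    if capacity_left:
        return "admit"
    if new_bearer_arp.preemption_capability:
        victims = [a for a in existing_bearer_arps
                   if a.preemption_vulnerability
                   and a.priority_level > new_bearer_arp.priority_level]
        if victims:
            return "admit after dropping a lower-priority bearer"
    return "reject"

voice = Arp(priority_level=2, preemption_capability=True, preemption_vulnerability=False)
video = Arp(priority_level=9, preemption_capability=False, preemption_vulnerability=True)
print(admit(voice, [video], capacity_left=False))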
Figure 7-3 EPS Bearers and QoS

Each GBR bearer is additionally associated with the following bearer level QoS parameters:

Guaranteed Bit Rate (GBR);

Maximum Bit Rate (MBR).

The GBR denotes the bit rate that can be expected to be provided by a GBR bearer. The MBR
limits the bit rate that can be expected to be provided by a GBR bearer (e.g. excess traffic may
get discarded by a rate shaping function).
Each APN access, by a UE, is associated with the following QoS parameter:

Per APN Aggregate Maximum Bit Rate (APN-AMBR).

The APN-AMBR is a subscription parameter stored per APN in the HSS. It limits the
aggregate bit rate that can be expected to be provided across all Non-GBR bearers and across
all PDN connections of the same APN (e.g. excess traffic may get discarded by a rate shaping
function). Each of those Non-GBR bearers could potentially utilize the entire APN-AMBR,
e.g. when the other Non-GBR bearers do not carry any traffic. GBR bearers are outside the
scope of APN-AMBR. The P-GW enforces the APN-AMBR in downlink. Enforcement of
APN-AMBR in uplink is done in the UE and additionally in the P-GW.
All simultaneous active PDN connections of a UE that are associated with the same APN shall
be provided by the same PDN GW.
APN-AMBR applies to all PDN connections of an APN. In the case of multiple PDN
connections to the same APN, if the APN-AMBR changes due to local policy, or if the PGW is
provided with an updated APN-AMBR for each PDN connection from the MME or PCRF, the
PGW initiates explicit signalling for each PDN connection to update the APN-AMBR value.
Each UE in state EMM-REGISTERED is associated with the following bearer aggregate level
QoS parameter:

Per UE Aggregate Maximum Bit Rate (UE-AMBR).

The UE-AMBR is limited by a subscription parameter stored in the HSS. The MME shall set
the UE-AMBR to the sum of the APN-AMBR of all active APNs up to the value of the
subscribed UE-AMBR. The UE-AMBR limits the aggregate bit rate that can be expected to
be provided across all Non-GBR bearers of a UE (e.g. excess traffic may get discarded by a
rate shaping function). Each of those Non-GBR bearers could potentially utilize the entire
UE-AMBR, e.g. when the other Non-GBR bearers do not carry any traffic. GBR bearers are
outside the scope of UE AMBR. The E-UTRAN enforces the UE-AMBR in uplink and
downlink.
The GBR and MBR denote bit rates of traffic per bearer while UE-AMBR/APN-AMBR
denote bit rates of traffic per group of bearers. Each of those QoS parameters has an uplink
and a downlink component. On S1_MME the values of the GBR, MBR, and AMBR refer to
the bit stream excluding the GTP-U/IP header overhead of the tunnel on S1-U.
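
A minimal numeric illustration of the UE-AMBR rule stated above, using made-up subscription values:

def ue_ambr(apn_ambrs_of_active_apns, subscribed_ue_ambr):
    """UE-AMBR used by the MME: the sum of the APN-AMBRs of all active APNs,
    capped at the subscribed UE-AMBR (all values in kbps)."""
    return min(sum(apn_ambrs_of_active_apns), subscribed_ue_ambr)

# Hypothetical example: two active APNs with APN-AMBR 50 Mbps and 20 Mbps,
# subscribed UE-AMBR 60 Mbps -> the MME sets the UE-AMBR to 60 Mbps.
print(ue_ambr([50000, 20000], 60000))   # 60000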
Figure 7-4 shows a general situation for a UE having multiple PDN connections served by multiple
PGWs.


Figure 7-4 EPS QoS Concept and AMBR

7.3 TFTs and its usage


The EPS bearer traffic flow template (TFT) is the set of all packet filters associated with that
EPS bearer.
An UpLink Traffic Flow Template (UL TFT) is the set of uplink packet filters in a TFT. A
DownLink Traffic Flow Template (DL TFT) is the set of downlink packet filters in a TFT.
Every dedicated EPS bearer is associated with a TFT. The UE uses the UL TFT for mapping
traffic to an EPS bearer in the uplink direction. The PCEF (for GTP-based S5/S8) or the
BBERF (for PMIP-based S5/S8) uses the DL TFT for mapping traffic to an EPS bearer in the
downlink direction. The UE may use the UL TFT and DL TFT to associate EPS Bearer
Activation or Modification procedures to an application and to traffic flow aggregates of the
application. Therefore the PDN GW shall, in the Create Dedicated Bearer Request and the
Update Bearer Request messages, provide all available traffic flow description information
(e.g. source and destination IP address and port numbers and the protocol information).
For the UE, the evaluation precedence order of the packet filters making up the UL TFTs is
signalled from the P-GW to the UE as part of any appropriate TFT operations.
The evaluation precedence index of the packet filters associated with the default bearer, in
relation to those associated with the dedicated bearers, is up to operator configuration. It is
possible to "force" certain traffic onto the default bearer by setting the evaluation precedence
index of the corresponding filters to a value that is lower than the values used for filters
associated with the dedicated bearers. It is also possible to use the default bearer for traffic that
does not match any of the filters associated with the dedicated bearers. In this case, the
evaluation precedence index of the corresponding filter(s) (e.g., a "match all" filter) needs to be
set to a value that is higher than the values used for filters associated with dedicated bearers.
A TFT of a unidirectional EPS bearer is associated either with UL packet filter(s) or with DL
packet filter(s) that match the unidirectional traffic flow(s), together with a DL packet filter or a UL
packet filter that effectively disallows any useful packet flows (see clause 15.3.3.4 in
TS 23.060 for an example of such a packet filter).
The UE routes uplink packets to the different EPS bearers based on uplink packet filters in the
TFTs assigned to these EPS bearers. The UE evaluates for a match, first the uplink packet
filter amongst all TFTs that has the lowest evaluation precedence index and, if no match is
found, proceeds with the evaluation of uplink packet filters in increasing order of their
evaluation precedence index. This procedure shall be executed until a match is found or all
uplink packet filters have been evaluated. If a match is found, the uplink data packet is
transmitted on the EPS bearer that is associated with the TFT of the matching uplink packet
filter. If no match is found, the uplink data packet shall be sent via the EPS bearer that has not
been assigned any uplink packet filter. If all EPS bearers (including the default EPS bearer for
that PDN) have been assigned one or more uplink packet filters, the UE shall discard the
uplink data packet.
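
The uplink mapping rule described above can be sketched in a few lines; the bearer and filter representations below are invented purely for illustration.

from collections import namedtuple

Bearer = namedtuple("Bearer", ["name", "ul_filters"])  # ul_filters: [(precedence, match_fn)]

def select_uplink_bearer(packet, bearers):
    """Return the bearer whose matching UL filter has the lowest evaluation
    precedence index; otherwise the bearer without any UL filter; otherwise
    None, meaning the uplink packet is discarded."""
    candidates = [(prec, bearer)
                  for bearer in bearers
                  for prec, match in bearer.ul_filters
                  if match(packet)]
    if candidates:
        return min(candidates, key=lambda c: c[0])[1]
    for bearer in bearers:
        if not bearer.ul_filters:
            return bearer
    return None

# Hypothetical filters: voice (UDP to port 5004) on a dedicated bearer,
# everything else falls back to the default bearer, which has no UL filter.
voice = Bearer("dedicated-voice", [(10, lambda p: p["proto"] == "udp" and p["dst_port"] == 5004)])
default = Bearer("default", [])
print(select_uplink_bearer({"proto": "udp", "dst_port": 5004}, [voice, default]).name)  # dedicated-voice
print(select_uplink_bearer({"proto": "tcp", "dst_port": 80}, [voice, default]).name)    # default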
The above algorithm implies that there is at most one EPS bearer without any uplink packet
filter (i.e. either an EPS bearer without any TFT or an EPS bearer with only DL packet filter(s)).
Therefore, some UEs may expect that during the lifetime of a PDN connection (where only the
network has provided TFT packet filters) at most one EPS bearer exists without a TFT or with
a TFT without any uplink packet filter.
To ensure that at most one EPS bearer exists without a TFT or with a TFT without any uplink
packet filter, the PCEF (for GTP-based S5/S8) and the BBERF (for PMIP-based S5/S8)
apply the Session Management restrictions as described in clause 9.2.0 of TS 23.060.
An EPS bearer is realized by the following elements:

In the UE, the UL TFT maps a traffic flow aggregate to an EPS bearer in the uplink
direction;

In the PDN GW, the DL TFT maps a traffic flow aggregate to an EPS bearer in the
downlink direction;

A radio bearer (defined in TS 36.300) transports the packets of an EPS bearer
between a UE and an eNodeB. If a radio bearer exists, there is a one-to-one mapping
between an EPS bearer and this radio bearer;

An S1 bearer transports the packets of an EPS bearer between an eNodeB and a
Serving GW;

An E-RAB (E-UTRAN Radio Access Bearer) refers to the concatenation of an S1
bearer and the corresponding radio bearer, as defined in TS 36.300;

An S5/S8 bearer transports the packets of an EPS bearer between a Serving GW and
a PDN GW;

A UE stores a mapping between an uplink packet filter and a radio bearer to create
the mapping between a traffic flow aggregate and a radio bearer in the uplink;

A PDN GW stores a mapping between a downlink packet filter and an S5/S8 bearer
to create the mapping between a traffic flow aggregate and an S5/S8 bearer in the
downlink;

An eNodeB stores a one-to-one mapping between a radio bearer and an S1 Bearer to
create the mapping between a radio bearer and an S1 bearer in both the uplink and
downlink;

A Serving GW stores a one-to-one mapping between an S1 Bearer and an S5/S8
bearer to create the mapping between an S1 bearer and an S5/S8 bearer in both the
uplink and downlink.

The PDN GW routes downlink packets to the different EPS bearers based on the downlink
packet filters in the TFTs assigned to the EPS bearers in the PDN connection. Upon reception
of a downlink data packet, the PDN GW evaluates for a match, first the downlink packet filter
that has the lowest evaluation precedence index and, if no match is found, proceeds with the
evaluation of downlink packet filters in increasing order of their evaluation precedence index.
This procedure shall be executed until a match is found, in which case the downlink data
packet is tunnelled to the Serving GW on the EPS bearer that is associated with the TFT of the
matching downlink packet filter. If no match is found, the downlink data packet shall be sent
via the EPS bearer that does not have any downlink packet filter assigned. If all EPS bearers
(including the default EPS bearer for that PDN) have been assigned a downlink packet filter,
the PDN GW shall discard the downlink data packet.
Figure 7-5 shows an example of TFT definitions and its usage.
Figure 7-5 QoS Concept and an example of TFT

The IP header and the UDP/TCP headers are used for defining a TFT. The source/destination IP
addresses, the source/destination port numbers and the protocol field in the IP header are used to create
the TFTs in the example above. Figure 7-6 shows the IP and UDP headers.


Figure 7-6 EPS QoS Concept, IP and UDP headers

7.4 IP QoS General Network Edge


In a well-engineered network, you must be careful to separate functions that occur on the
edges of a network from functions that occur in the core or backbone of a network. It is
important to separate edge and backbone functions to achieve the best QoS possible.
This chapter discusses the following tools associated with the edge of a network:

Additional bandwidth

Compressed Real-Time Transport Protocol (cRTP)

Congestion Management and Queuing:

Weighted Fair Queuing (WFQ)

Custom Queuing (CQ)

Priority Queuing (PQ)

Class-Based Weighted Fair Queuing (CB-WFQ)

Priority Queuing within Class-Based Weighted Fair Queuing (LLQ)

Congestion Avoidance

Traffic Policing and Traffic Shaping

Traffic evaluation and Token Bucket

Traffic Engineering

7.4.1 Additional bandwidth


Voice is a mission-critical application and requires significant planning to ensure that the
appropriate service level agreement (SLA) can be met. One of the elements of this planning is
to understand the amount of delay that is in the budget. Some of these elements of delay can
be controlled and tuned, while others are simply due to physics. Refer to Table 7-1 for more
details on items that are within controllable delay budget.


Table 7-1 End to end Delay Budget

The International Telecommunication Union Telecommunication Standardization Sector
(ITU-T) G.114 recommendation suggests no more than 150 milliseconds (ms) of end-to-end
delay to maintain good voice quality. Any customer's definition of "good" might mean
more or less delay, so keep in mind that 150 ms is merely a recommendation.
The first issue of major concern when designing a VoIP network is bandwidth constraints.
Depending upon which codec you use and how many voice samples you want per packet, the
amount of bandwidth per call can increase dramatically. For an explanation of packet sizes
and bandwidth consumed, seeTable 7-2.
Table 7-2 Codec Type and Sample Size Effects on Bandwidth

After reviewing this table, you might be asking yourself why 24 kbps of bandwidth is
consumed when you're using an 8-kbps codec. This occurs due to a phenomenon called the
"IP Tax". The G.729 codec using two 10-ms samples consumes 20 bytes per frame, which
works out to 8 kbps. The packet headers that include IP, RTP, and User Datagram Protocol
(UDP) add 40 bytes to each frame. This "IP Tax" header is twice the amount of the payload.
Using G.729 with two 10-ms samples as an example, without RTP header compression, 24
kbps are consumed in each direction per call. Although this might not be a large amount for
T1 (1.544 mbps), E1 (2.048 mbps), or higher circuits, it is a large amount (42 percent) for a
56-kbps circuit.
Also, keep in mind that the bandwidth in Table 7-2 does not include Layer 2 headers (PPP,
Frame Relay, and so on). It includes headers from Layer 3 (network layer) and above only.
Therefore, the same G.729 call can consume different amounts of bandwidth based upon
which data link layer is used (Ethernet, Frame Relay, PPP, and so on).
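
The arithmetic behind these per-call figures can be reproduced with a short calculation. The helper below is only a sketch of that reasoning and, like the table, covers Layer 3 and above only (no link-layer framing).

def voip_bandwidth_kbps(payload_bytes, packet_interval_ms, header_bytes=40):
    """Per-call Layer-3+ bandwidth: payload plus IP/UDP/RTP headers,
    sent once per packetization interval, converted to kbps."""
    bits_per_packet = (payload_bytes + header_bytes) * 8
    packets_per_second = 1000 / packet_interval_ms
    return bits_per_packet * packets_per_second / 1000

# G.729, two 10-ms samples per packet: 20 bytes of payload every 20 ms.
print(voip_bandwidth_kbps(payload_bytes=20, packet_interval_ms=20))  # 24.0 kbps
# Payload alone: 20 bytes every 20 ms = 8 kbps, the nominal codec rate.
print(voip_bandwidth_kbps(payload_bytes=20, packet_interval_ms=20, header_bytes=0))  # 8.0 kbps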

7.4.2 Compressed Real-Time Transport Protocol (cRTP)


To reduce the large percentage of bandwidth consumed on point-to-point WAN links by a
G.729 voice call, you can use cRTP. cRTP enables you to compress the 40-byte IP/RTP/UDP
header to 2 to 4 bytes most of the time (see Figure 7-7).
Figure 7-7 RTP Header Compression

With cRTP, the amount of traffic per VoIP call is reduced from 24 kbps to 11.2 kbps. This is a
major improvement for low-bandwidth links. A 56-kbps link, for example, can now carry four
G.729 VoIP calls at 11.2 kbps each. Without cRTP, only two G.729 VoIP calls at 24 kbps can
be used.
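
Reusing the same arithmetic with a compressed header gives a feel for the saving. Note that the Layer-3-only figure comes out somewhat below 11.2 kbps; the quoted 11.2 kbps per-call value appears to also include roughly 6 bytes of link-layer framing per packet, which is an assumption made here for the sake of the example.

# With cRTP the 40-byte IP/UDP/RTP header shrinks to about 2 bytes.
# G.729: 20 bytes of payload every 20 ms, i.e. 50 packets per second.
print((20 + 2) * 8 * 50 / 1000)       # 8.8 kbps, Layer 3 and above only
# Adding an assumed 6 bytes of Layer 2 framing per packet reproduces the
# 11.2 kbps per-call figure quoted for a serial link:
print((20 + 2 + 6) * 8 * 50 / 1000)   # 11.2 kbps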
To avoid the unnecessary consumption of available bandwidth, cRTP is used on a link-by-link
basis. This compression scheme reduces the IP/RTP/UDP header to 2 bytes when UDP
checksums are not used, or 4 bytes when UDP checksums are used.
cRTP uses some of the same techniques as Transmission Control Protocol (TCP) header
compression. In TCP header compression, the first factor-of-two reduction in data rate occurs
because half of the bytes in the IP and TCP headers remain constant over the life of the
connection.


The big gain, however, comes from the fact that the difference from packet to packet is often
constant, even though several fields change in every packet. Therefore, the algorithm can
simply add 1 to every value received. By maintaining both the uncompressed header and the
first-order differences in the session state shared between the compressor and the
decompressor, cRTP must communicate only an indication that the second-order difference is
zero. In that case, the decompressor can reconstruct the original header without any loss of
information, simply by adding the first-order differences to the saved, uncompressed header
as each compressed packet is received.
Just as TCP/IP header compression maintains shared state for multiple, simultaneous TCP
connections, this IP/RTP/UDP compression must maintain state for multiple session contexts.
A session context is defined by the combination of the IP source and destination addresses, the
UDP source and destination ports, and the RTP synchronization source (SSRC) field. A
compressor implementation might use a hash function on these fields to index a table of
stored session contexts.
The compressed packet carries a small integer, called the session context identifier, or CID, to
indicate in which session context that packet should be interpreted. The decompressor can use
the CID to index its table of stored session contexts.
cRTP can compress the 40 bytes of header down to 2 to 4 bytes most of the time. As
such, about 98 percent of the time the compressed packet will be sent. Periodically, however,
an entire uncompressed header must be sent to verify that both sides have the correct state.
Sometimes, changes occur in a field that is usually constant, such as the payload type field,
for instance. In such cases, the IP/RTP/UDP header cannot be compressed, so an
uncompressed header must be sent.
You should use cRTP on any WAN interface where bandwidth is a concern and a high portion
of RTP traffic exists.
You should not use cRTP on high-speed interfaces, as the disadvantages of doing so outweigh
the advantages. "High-speed network" is a relative term: usually anything higher than T1 or
E1 speed does not need cRTP. The need for compression is simply a comparison between
the costs of the transmission link versus the cost and overhead of the compression. If you are
willing to pay the extra cost of time for compression/decompression and the additional
hardware costs that might be involved, then compression can work on almost any
transmission link.
As with any compression, the CPU incurs extra processing duties to compress the packet.
This increases the amount of CPU utilization on the edge device. Therefore, you must weigh
the advantages (lower bandwidth requirements) against the disadvantages (higher CPU
utilization). An edge device with higher CPU utilization can experience problems running
other tasks. As such, it is usually a good rule of thumb to keep CPU utilization at less than 60
to 70 percent to keep your network running smoothly.

7.4.3 Congestion Management and Queuing


Queuing in and of itself is a fairly simple concept. The easiest way to think about queuing is
to compare it to the highway system. Let's say you are on the New Jersey Turnpike driving at
a decent speed. When you approach a tollbooth, you must slow down, stop, and pay the toll.
During the time it takes to pay the toll, a backup of cars ensues, creating congestion.
As in the tollbooth line, in queuing the concept of first in, first out (FIFO) exists, which means
that if you are the first to get in the line, you are the first to get out of the line. FIFO queuing
was the first type of queuing to be used in routers, and it is still useful depending upon the
network's topology. Figure 7-8 shows a general operation of a FIFO queue.

Figure 7-8 FIFO, First In First Out Queuing

Today's networks, with their variety of applications, protocols, and users, require a way to
classify different traffic. Going back to the tollbooth example, a special lane is necessary to
enable some cars to get bumped forward in line. The New Jersey Turnpike, as well as many
other toll roads, has a carpool lane, or a lane that allows you to pay for the toll electronically,
for instance.
Likewise, Huawei has several queuing tools that enable a network administrator to specify
what type of traffic is special or important and to queue the traffic based on that information
instead of when a packet arrives. The most popular of these queuing techniques is known as
WFQ. If you have a Huawei router, it is highly likely that it is using the WFQ algorithm
because it is the default for any router interface less than 2 mbps.

7.4.4 Weighted Fair Queuing


FIFO queuing places all packets it receives in one queue and transmits them as bandwidth
becomes available. WFQ, on the other hand, uses multiple queues to separate flows and gives
equal amounts of bandwidth to each flow. This prevents one application, such as File Transfer
Protocol (FTP), from consuming all available bandwidth.
WFQ ensures that queues do not starve for bandwidth and that traffic gets predictable service.
Low-volume data streams receive preferential service, transmitting their entire offered loads
in a timely fashion. High-volume traffic streams share the remaining capacity, obtaining equal
or proportional bandwidth.
WFQ is similar to time-division multiplexing (TDM), as it divides bandwidth equally among
different flows so that no one application is starved. WFQ is superior to TDM, however,
simply because when a stream is no longer present, WFQ dynamically adjusts to use the free
bandwidth for the flows that are still transmitting.
Fair queuing dynamically identifies data streams or flows based on several factors. These data
streams are prioritized based upon the amount of bandwidth that the flow consumes. This
algorithm enables bandwidth to be shared fairly, without the use of access lists or other time-consuming administrative tasks. WFQ determines a flow by using the source and destination
address, protocol type, socket or port number, and QoS/ToS values.
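
As a rough illustration of such flow identification, the snippet below derives a flow key from the fields just listed and maps it onto a fixed number of queues; the hashing scheme and the number of queues are assumptions for the example, not a description of any router implementation.

def wfq_queue_index(src_ip, dst_ip, proto, src_port, dst_port, tos, n_queues=256):
    """Map a packet's flow-identifying fields onto one of n_queues WFQ queues."""
    flow_key = (src_ip, dst_ip, proto, src_port, dst_port, tos)
    return hash(flow_key) % n_queues

# Two packets of the same flow land in the same queue; a different flow
# generally lands in another one.
print(wfq_queue_index("10.0.0.1", "10.0.0.2", "tcp", 40000, 80, 0))
print(wfq_queue_index("10.0.0.1", "10.0.0.2", "tcp", 40000, 80, 0))
print(wfq_queue_index("10.0.0.3", "10.0.0.2", "udp", 5004, 5004, 5))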
Fair queuing enables low-bandwidth applications, which make up most of the traffic, to have
as much bandwidth as needed, relegating higher-bandwidth traffic to share the remaining
traffic in a fair manner. Fair queuing offers reduced jitter and enables efficient sharing of
available bandwidth between all applications.
The weighting in WFQ is currently affected by three mechanisms: IP Precedence, Frame
Relay forward explicit congestion notification (FECN), backward explicit congestion
notification (BECN), and Discard Eligible (DE) bits.
The IP Precedence field has values between 0 (the default) and 7. As the precedence value
increases, the algorithm allocates more bandwidth to that conversation or flow. This enables
the flow to transmit more frequently. See the "Packet Classification" section later in this
chapter for more information on weighting WFQ.
In a Frame Relay network, FECN and BECN bits usually flag the presence of congestion.
When congestion is flagged, the weights the algorithm uses change such that the conversation
encountering the congestion transmits less frequently.
WFQ also is not intended to run on interfaces that are clocked higher than 2.048 mbps.
Figure 7-9 shows a generic figure of WFQ mechanism.
Figure 7-9 Weighted Fair Queuing, WFQ

7.4.5 Custom Queuing


Custom queuing (CQ) enables users to specify a percentage of available bandwidth to a
particular protocol. You can define up to 16 output queues. Each queue is served sequentially
in a round-robin fashion, transmitting a percentage of traffic on each queue before moving on
to the next queue. The router determines how many bytes from each queue should be
transmitted, based on the speed of the interface as well as the configured traffic percentage. In
other words, another traffic type can use unused bandwidth from queue A until queue A
requires its full percentage.
CQ requires knowledge of port types and traffic types. This equates to a large amount of
administrative overhead. But after the administrative overhead is complete, CQ offers a
highly granular approach to queuing, which is what some customers prefer. Figure 7-10 shows
a generic flow of data sent through a CQ configuration.
Figure 7-10 Custom Queuing

7.4.6 Priority Queuing


PQ enables the network administrator to configure four traffic priorities: high, normal,
medium, and low. Inbound traffic is assigned to one of the four output queues. Traffic in the
high-priority queue is serviced until the queue is empty; then, packets in the next priority
queue are transmitted.
This queuing arrangement ensures that mission-critical traffic is always given as much
bandwidth as it needs; however, it starves other applications to do so.
Therefore, it is important to understand traffic flows when using this queuing mechanism so
that applications are not starved of needed bandwidth. PQ is best used when the
highest-priority traffic consumes the least amount of line bandwidth.
PQ enables a network administrator to starve applications. An improperly configured PQ
can service one queue and completely disregard all other queues. This can, in effect, force
some applications to stop working. As long as the system administrator realizes this caveat,
PQ can be the proper alternative for some customers. Figure 7-11 shows a generic view of a
Priority Queuing operation.


Figure 7-11 Priority Queuing
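
A bare-bones sketch of the strict-priority service order described above; the relative order assumed for the two middle queues, and everything else in the snippet, is purely illustrative.

from collections import deque

# One queue per priority, served strictly from highest to lowest.
queues = {"high": deque(), "medium": deque(), "normal": deque(), "low": deque()}

def enqueue(packet, priority):
    queues[priority].append(packet)

def dequeue():
    """Always serve the highest-priority non-empty queue; lower queues are
    served only when every queue above them is empty (and can thus starve)."""
    for priority in ("high", "medium", "normal", "low"):
        if queues[priority]:
            return queues[priority].popleft()
    return None

enqueue("voice-1", "high")
enqueue("ftp-1", "low")
enqueue("voice-2", "high")
print(dequeue(), dequeue(), dequeue())   # voice-1 voice-2 ftp-1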

7.4.7 CB-WFQ (CB=Class Based)


CB-WFQ has all the benefits of WFQ, with the additional functionality of providing granular
support for network administrator-defined classes of traffic. CB-WFQ also can run on
high-speed interfaces (up to T3).
CB-WFQ enables you to define what constitutes a class based on criteria that exceed the
confines of flow. Using CB-WFQ, you can create a specific class for voice traffic. The
network administrator defines these classes of traffic through access lists. These classes of
traffic determine how packets are grouped in different queues.
The most interesting feature of CB-WFQ is that it enables the network administrator to
specify the exact amount of bandwidth to be allocated per class of traffic. CB-WFQ can
handle 64 different classes and control bandwidth requirements for each class.
With standard WFQ, weights determine the amount of bandwidth allocated per conversation.
It is dependent on how many flows of traffic occur at a given moment.
With CB-WFQ, each class is associated with a separate queue. You can allocate a specific
minimum amount of guaranteed bandwidth to the class as a percentage of the link, or in kbps.
Other classes can share unused bandwidth in proportion to their assigned weights. When
configuring CB-WFQ, you should consider that bandwidth allocation does not necessarily
mean the traffic belonging to a class experiences low delay; however, you can skew weights
to simulate PQ.

7.4.8 RTP (Real-time Transport Protocol) priority queuing


RTP priority queuing technology is used to solve the QoS problems of real-time services
(including audio and video services). Its principle is to put RTP packets carrying audio or
video into a high-priority queue and send them first, thus minimizing delay and jitter and ensuring
the quality of audio or video services, which are sensitive to delay. As shown in Figure 7-12,
an RTP packet is sent into a high-priority queue. An RTP packet is identified as a UDP packet whose port
number is even; the range of the port numbers is configurable. The RTP priority queue can be used
along with any other queuing mechanism (e.g. FIFO, PQ, CQ, WFQ and CBQ), and it always has the highest priority.


Since the LLQ feature of CBQ can also serve real-time traffic, it is recommended not to use
RTP priority queuing together with CBQ.
Figure 7-12 RTP queuing diagram

7.4.9 PQ within CB-WFQ (Low Latency Queuing)


PQ within CB-WFQ (LLQ) is a mouthful of an acronym. This queuing mechanism was
developed to give absolute priority to voice traffic over all other traffic on an interface.
The LLQ feature brings to CB-WFQ the strict-priority queuing functionality of IP RTP
Priority required for delay-sensitive, real-time traffic, such as voice. LLQ enables use of a
strict PQ.
Although it is possible to queue various types of traffic to a strict PQ, it is strongly
recommended that you direct only voice traffic to this queue. This recommendation is based
upon the fact that voice traffic is well behaved and sends packets at regular intervals; other
applications transmit at irregular intervals and can ruin an entire network if configured
improperly.
With LLQ, you can specify traffic in a broad range of ways to guarantee strict priority
delivery. To indicate the voice flow to be queued to the strict PQ, you can use an access list.
This is different from IP RTP Priority, which allows for only a specific UDP port range.
Although this mechanism is relatively new to IOS, it has proven to be powerful and it gives
voice packets the necessary priority, latency, and jitter required for good-quality voice.

7.4.10 Queuing Summary


Although a one-size-fits-all answer to queuing problems does not exist, many customers today
use WFQ to deal with queuing issues. WFQ is simple to deploy, and it requires little
additional effort from the network administrator. Setting the weights with WFQ can further
enhance its benefits.
Customers who require more granular and strict queuing techniques can use CQ or PQ. Be
sure to utilize great caution when enabling these techniques, however, as you might do more
harm than good to your network. With PQ or CQ, it is imperative that you know your traffic
and your applications.
Many customers who deploy VoIP networks in low-bandwidth environments (less than 768
kbps) use IP RTP Priority or LLQ to prioritize their voice traffic above all other traffic flows.

7.4.11 Congestion Avoidance


Excessive congestion can greatly endanger network resources, so avoidance measures
must be taken. Congestion avoidance refers to a traffic control mechanism that
monitors the occupancy of network resources (such as queues or buffers). As
congestion becomes worse, the system actively drops packets and tries to avoid network
overload by adjusting the network traffic.
Compared with end-to-end traffic control, this traffic control has broader significance,
as it affects the load of all application streams passing through the device. While dropping
packets, the device can also cooperate with the traffic control actions at the source end, such as TCP
congestion control, to bring the network traffic down to a reasonable load level. A good combination of
packet-dropping policy and source-end traffic control can maximize the
throughput and utilization of the network while minimizing packet dropping and delay.
1. Traditional packet-dropping policy:

The traditional packet-dropping policy adopts the Tail-Drop method. When the number of
packets in a queue reaches a certain maximum value, all newly arriving packets are
dropped.
This kind of dropping policy leads to the phenomenon of TCP global synchronization: when
the queue drops packets of several TCP connections at the same time, these TCP
connections enter congestion avoidance and slow start simultaneously to reduce their
traffic, and later ramp up and reach a traffic peak simultaneously. In this way, the network
traffic keeps oscillating between frequent rises and falls.

2. RED and WRED:

To avoid the phenomenon of TCP global synchronization, Random Early Detection (RED) or
Weighted Random Early Detection (WRED) can be used.
The RED algorithm sets a minimum and a maximum threshold for each queue. Packets are
processed as follows:

When the queue length is less than the minimum threshold, no packet is dropped.

When the queue length exceeds the maximum threshold, all incoming packets are dropped.

When the queue length is between the minimum and the maximum threshold, incoming
packets are dropped randomly. The longer the queue, the higher the dropping probability,
up to a configured maximum dropping probability.

Unlike RED, WRED generates the random number based on priority. It uses IP
precedence to determine the dropping policy, so the dropping probability of packets with
high priority is relatively lower.
RED and WRED employ this random packet-dropping policy to avoid TCP synchronization:
when a packet of one TCP connection is dropped and that connection decreases its sending
rate, the other TCP connections still keep a higher sending rate. So there are always some TCP
connections with a higher sending rate, which results in more efficient use of the network bandwidth.
3. Average queue length:

Dropping packets by comparing the instantaneous queue length with the maximum/minimum
thresholds would treat bursty data streams unfairly and affect their transmission. WRED
therefore compares the average queue length with the maximum/minimum thresholds to
determine the dropping probability. The average queue length is the result of low-pass
filtering of the queue length; it reflects the trend of the queue while being insensitive to
bursty changes of the queue length, thus preventing unfair treatment of bursty data streams.
When WFQ is adopted, you can set the index used when calculating the average queue
length, the maximum threshold, the minimum threshold, and the packet-dropping probability
for queues with different priorities, so that packets with different priorities have different
dropping characteristics. When FIFO, PQ or CQ is adopted, the same parameters can be set
for each queue, with the same effect.
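
The RED decision and the averaged queue length described above can be condensed into a short sketch; the weighting factor, thresholds and drop probabilities are arbitrary illustrative values.

import random

def update_avg(avg, current_len, weight=0.002):
    """Low-pass (exponentially weighted) average of the queue length."""
    return (1 - weight) * avg + weight * current_len

def red_drop(avg_len, min_th, max_th, max_prob):
    """RED decision: no drop below min_th, always drop above max_th,
    otherwise drop with a probability growing linearly up to max_prob."""
    if avg_len < min_th:
        return False
    if avg_len >= max_th:
        return True
    prob = max_prob * (avg_len - min_th) / (max_th - min_th)
    return random.random() < prob

# WRED simply keeps a separate (min_th, max_th, max_prob) profile per
# priority, e.g. gentler dropping for high-precedence traffic:
wred_profiles = {0: (20, 40, 0.2),   # low priority: dropped earlier and more
                 5: (30, 40, 0.1)}   # high priority: dropped later and less
avg = 25.0
for precedence in (0, 5):
    min_th, max_th, max_prob = wred_profiles[precedence]
    print(precedence, red_drop(avg, min_th, max_th, max_prob))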
4. Relationship between WRED and the queuing mechanism:

The relationship between WRED and the queuing mechanism is shown in Figure 7-13.

Figure 7-13 Relation between WRED and queue mechanism

By associating WRED with WFQ, flow-based WRED can be realized. Because each flow has its
own queue after packet classification, a flow with little traffic always has a short queue length, so
its packet-dropping probability is small. A flow with heavy traffic has a longer queue length and
more of its packets are dropped, so the flows with small traffic are protected.

7.4.12 Packet Classification


Traffic classification is the prerequisite and foundation for differentiated services; it uses
certain rules to identify packets with certain features.
To discriminate flows, you can set traffic classification rules using the priority bits of the ToS
(type of service) field in the IP packet header. Alternatively, the network administrator may
define a traffic classification policy, for instance, combining information such as source IP
address, destination IP address, MAC address, IP protocol, or port number of the applications
to classify the traffic. In general, it can be a narrow range defined by a quintuple (source IP
address, source port number, destination IP address, destination port number and the Transport
Protocol), or can be all packets to a network segment.
In general, when packets are classified at the network border, the precedence bits in the
ToS byte of the IP header are set so that IP precedence can be used as a direct packet
classification criterion inside the network. The queuing technologies can use IP precedence to
handle the packets. A downstream network can selectively accept the classification results of the
upstream network, or re-classify the packets according to its own standard.
Traffic classification is used to provide differentiated services, so it must be associated with
certain traffic policing or resource-assignment mechanisms. Which traffic policing action to
adopt depends on the current state and load of the network: for example, policing the packets
according to the committed rate when they enter the network, shaping the traffic before it flows
out of a node, performing queue management in the event of congestion, and employing
congestion avoidance when congestion becomes worse.
These priority fields are described in Figure 7-14:
Figure 7-14 DS Field and ToS Byte

The ToS byte of the IP header contains 8 bits: the first three bits (0 to 2) indicate the IP precedence,
valued in the range 0 to 7; the following 4 bits (3 to 6) indicate the ToS priority, valued in the
range 0 to 15. In RFC2474, the ToS field of IP packet header is redefined as DS field, where
the DiffServ code point (DSCP) priority is indicated by the first 6 bits (0 to 5), valued in the
range 0 to 63. The remaining 2 bits (6 and 7) are reserved.
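
The bit layout just described can be verified with a couple of shift-and-mask operations, as in this small sketch (the example value is the well-known Expedited Forwarding marking):

def ip_precedence(tos_byte):
    """Bits 0..2 (most significant) of the ToS byte: value 0..7."""
    return (tos_byte >> 5) & 0x07

def dscp(tos_byte):
    """Bits 0..5 of the DS field (RFC 2474): value 0..63."""
    return (tos_byte >> 2) & 0x3F

tos = 0xB8                  # Expedited Forwarding marking
print(ip_precedence(tos))   # 5
print(dscp(tos))            # 46 (EF)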

7.4.13 Traffic policing and Traffic shaping with Token Bucket


If no restrictions are imposed on the traffic from the users, bursty data sent continuously by many
users will make the network more and more congested. Thus, for more efficient network
operation and better service for more users, the traffic from the users must be restricted; for
example, a flow may only acquire the resources specifically assigned to it in a certain time
interval, so as to prevent network congestion caused by excessive bursts.
Traffic policing and traffic shaping are traffic monitoring policies that restrict the traffic and the
resources it uses by comparing it with a traffic specification. Knowing whether the traffic
exceeds the specification is a prerequisite for traffic policing or shaping; based
upon the evaluation result, a regulation policy can then be applied. Usually, a Token Bucket is
used to evaluate the traffic against the specification.

Traffic Evaluation and Token Bucket


Token bucket features:


Token bucket can be regarded as a container capable of holding a certain number of
tokens. The system will put Tokens into the Bucket at a defined rate. In case the Bucket
is full, the extra Tokens will overflow and no more Tokens will be added.

Figure 7-15 Measuring the traffic with Token Bucket

Measuring the traffic with Token Bucket:

Whether or not the number of tokens in the Token Bucket is sufficient for forwarding the packets is
the basis on which the Token Bucket measures the traffic against the specification. If enough tokens are available
for forwarding the packets, the traffic is regarded as conforming to the specification (generally, one token
corresponds to the right to forward one bit); otherwise it is regarded as non-conforming, or excess.
When measuring the traffic with Token Bucket, these parameters are included:

Mean rate: the rate at which tokens are put into the bucket, i.e. the average rate of the permitted
traffic. It is generally set to the CIR (Committed Information Rate).

Burst size: the capacity of the Token Bucket, i.e. the maximum traffic size of a burst.
It is generally set to the CBS (Committed Burst Size), and the burst size must be
greater than the maximum packet size.

A new evaluation is made each time a packet arrives. If the bucket holds enough tokens for the
evaluation, the traffic is within bounds, and the number of tokens corresponding to the packet's
forwarding right is removed from the bucket. Otherwise, too many tokens have already been used
and the traffic exceeds the specification.
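This evaluation rule can be sketched as follows; the class and parameter names are illustrative, and one token is counted per byte here for simplicity (the text above counts one token per bit).

import time

class TokenBucket:
    """Single token bucket: tokens arrive at the mean rate (CIR) up to the
    bucket capacity (CBS). A packet conforms if enough tokens are available."""

    def __init__(self, cir_bps: float, cbs_bytes: int):
        self.rate = cir_bps / 8.0          # token refill rate, in bytes per second
        self.capacity = cbs_bytes          # bucket depth = committed burst size
        self.tokens = float(cbs_bytes)     # start with a full bucket
        self.last = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def conforms(self, packet_len: int) -> bool:
        """Evaluate one packet; consume tokens only if it conforms."""
        self._refill()
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True
        return False                       # excess / non-conforming

# Police a 2 Mbit/s flow with a 16 KB burst allowance (illustrative figures).
tb = TokenBucket(cir_bps=2_000_000, cbs_bytes=16_000)
action = "forward" if tb.conforms(1500) else "drop or re-mark"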


Complicated evaluation:

Two Token Buckets can be configured to evaluate conditions that are more complex and to
implement a more flexible regulation policy. For example, Traffic Policing (TP) has three
parameters, as follows:
- CIR (Committed Information Rate)
- CBS (Committed Burst Size)
- EBS (Excess Burst Size)

It uses two Token Buckets, both filled with tokens at the same rate (CIR) but with different
capacities, CBS and EBS (CBS < EBS), called the C bucket and the E bucket; the two capacities
represent different permitted burst classes. In each evaluation you may apply a different traffic
control policy to each situation: "C bucket has enough tokens", "tokens of C bucket are deficient,
but those of E bucket are enough", and "tokens of C bucket and E bucket are all deficient".
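A sketch of this two-bucket evaluation, following the description above (both buckets refilled at the CIR) rather than any particular RFC marker; the class name, result labels and figures are illustrative.

class DualBucketMeter:
    """Two buckets with the same refill rate (CIR) but different depths:
    the C bucket holds up to CBS tokens, the E bucket up to EBS tokens."""

    def __init__(self, cbs: int, ebs: int):
        self.c_tokens, self.cbs = cbs, cbs
        self.e_tokens, self.ebs = ebs, ebs

    def refill(self, new_tokens: int) -> None:
        # Both buckets are topped up at CIR, as described in the text above.
        self.c_tokens = min(self.cbs, self.c_tokens + new_tokens)
        self.e_tokens = min(self.ebs, self.e_tokens + new_tokens)

    def evaluate(self, size: int) -> str:
        if self.c_tokens >= size:          # "C bucket has enough tokens"
            self.c_tokens -= size
            return "conform"
        if self.e_tokens >= size:          # "C deficient, but E has enough"
            self.e_tokens -= size
            return "exceed"
        return "violate"                   # "C and E both deficient"

meter = DualBucketMeter(cbs=4000, ebs=12000)
meter.refill(new_tokens=1000)              # tokens generated over one interval at CIR
print(meter.evaluate(1500))                # 'conform' while the C bucket still has tokens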

Traffic Policing:

Typically, traffic policing is used to monitor certain traffic entering the network and keep it
within a reasonable bound, penalizing the excess traffic so as to protect network resources and
the carrier's interests. For example, it can restrict HTTP packets to no more than 50% of the
network bandwidth. Once the traffic of a connection is found to exceed its limit, the excess
packets may be dropped or have their precedence reset.
Traffic policing allows you to define match rules based on IP precedence or the DiffServ Code
Point (DSCP). It is widely used by ISPs to police network traffic. TP also classifies the policed
traffic and, depending on the evaluation result, applies one of the pre-configured policing actions:
- forward the packets whose evaluation result is conforming;
- drop the packets whose evaluation result is non-conforming;
- modify the precedence of the packets whose evaluation result is conforming and forward them;
- modify the precedence of the packets whose evaluation result is conforming and pass them to the
  next-rank TP;
- pass the packets to the next-rank policing (TP can be stacked rank by rank, with each rank
  policing more specific objects).

Traffic Shaping:

Traffic shaping is an active way to adjust the traffic output rate. A typical application is to
control the output traffic according to the TP parameters of the downstream network nodes.
The main difference between traffic shaping and traffic policing is that the packets that traffic
policing would drop are stored during traffic shaping (TS), generally in buffers or queues (see
Figure 7-16). Once there are enough tokens in the token bucket, the stored packets are sent out
evenly. Another difference is that traffic shaping may increase delay, whereas traffic policing
seldom does so.
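The buffering behaviour that distinguishes shaping from policing can be sketched as below; the parameter names are illustrative, and service() would in practice be driven by a timer so that buffered packets leave at an even pace.

import time
from collections import deque

class Shaper:
    """Unlike policing, shaping buffers non-conforming packets and releases
    them later, once enough tokens have accumulated, smoothing the output."""

    def __init__(self, cir_bps: float, cbs_bytes: int, max_queue: int = 128):
        self.rate = cir_bps / 8.0            # bytes of credit per second
        self.capacity = cbs_bytes
        self.tokens = float(cbs_bytes)
        self.last = time.monotonic()
        self.queue: deque = deque()
        self.max_queue = max_queue

    def enqueue(self, packet: bytes) -> bool:
        if len(self.queue) >= self.max_queue:
            return False                     # buffer exhausted: only now is a packet dropped
        self.queue.append(packet)
        return True

    def service(self, emit) -> None:
        """Call periodically; sends queued packets for which tokens are available."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        while self.queue and self.tokens >= len(self.queue[0]):
            pkt = self.queue.popleft()
            self.tokens -= len(pkt)
            emit(pkt)                        # packets leave evenly, paced by the token rate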


Figure 7-16 Traffic Shaping Diagram

For example, in the setup shown in Figure 7-17, Router A sends packets to Router B, and Router B
applies TP to those packets, directly dropping the excess traffic.
Figure 7-17 Traffic Shaping Implementation

To reduce packet drops, GTS (Generic Traffic Shaping) can be applied to the packets on the egress
interface of Router A. Packets beyond the GTS traffic profile are buffered in Router A. When the
next set of packets can be sent, GTS takes the buffered packets from the queue and sends them.
Thus, all the packets sent to Router B conform to the traffic rules of Router B.

Line Rate on Physical Port:

On a physical interface, you can enforce a line rate (LR) below the physical line rate to limit
the overall transmit rate of the interface (including the rate of critical packets).
LR also uses a token bucket for traffic control. If LR is configured on an interface of a router,
all packets to be sent via that interface are first handled by the LR token bucket. If there are
enough tokens in the bucket, the packets are forwarded; otherwise they are put into QoS queues for
congestion management. In this way the packet traffic through the physical port is controlled.


Figure 7-18 Line Rate (LR) processing Diagram

If a token bucket is used to control the traffic: when tokens are available in the bucket, packets
can be sent in bursts; when no tokens are available, packets are not sent until new tokens are
generated. The packet traffic is thus kept below the token generation rate, restricting the
traffic while still allowing bursts to pass.
Compared with TP, LR can limit all packets on a physical port, whereas TP is implemented at the IP
layer and has no effect on packets that do not pass through the IP layer. If all packets on an
interface must be rate-limited, LR is the simpler choice.

7.4.14 Traffic Engineering and Constraint Based Routing


QoS schemes such as Integrated Services/RSVP and Differentiated Services essentially
provide differentiated degradation of performance for different traffic when the traffic load is
heavy. When the load is light, Integrated Services/RSVP, Differentiated Services and Best
Effort Service make little difference. Then why not avoid congestion in the first place? This is
the motivation for Traffic Engineering.

7.4.15 Traffic Engineering


Network congestion can be caused by lack of network resources or by uneven distribution of
traffic. In the first case, all routers and links are overloaded and the only solution is to provide
more resources by upgrading the infrastructure. In the second case, some parts of the network
are overloaded while other parts are lightly loaded. Uneven traffic distribution can be caused
by the current Dynamic Routing protocols such as RIP, OSPF and IS-IS, because they always
select the shortest paths to forward packets. As a result, routers and links along the shortest
path between two nodes may become congested while routers and links along a longer path
are idle. The Equal-Cost Multi-Path (ECMP) option of OSPF, and recently of IS-IS, is useful
in distributing load to several shortest paths. But, if there is only one shortest path, ECMP
does not help. For simple networks, it may be possible for network administrators to manually
configure the cost of the links, so that traffic can be evenly distributed. For complex ISP
networks, this is almost impossible.


Traffic Engineering is the process of arranging how traffic flows through the network so that
congestion caused by uneven network utilization can be avoided. Constraint Based Routing is
an important tool for making the Traffic Engineering process automatic.
Avoiding congestion and providing graceful degradation of performance in the case of
congestion are complementary. Traffic Engineering therefore complements Differentiated
Services.

7.4.16 Constraint Based Routing


In a sentence, Constraint Based Routing is used to compute routes that are subject to multiple
constraints. Constraint Based Routing evolved from QoS Routing. Given the QoS request of a
flow or an aggregation of flows, QoS Routing returns the route that is most likely to be able to
meet the QoS requirements. Constraint Based Routing extends QoS Routing by considering
other constraints of the network, such as policy.
The goals of Constraint Based Routing are:
- to select routes that can meet certain QoS requirements;
- to increase the utilization of the network.

While determining a route, Constraint Based Routing considers not only topology of the
network, but also the requirement of the flow, the resource availability of the links, and
possibly other policies specified by the network administrators. Therefore, Constraint Based
Routing may find a longer and lightly-loaded path better than the heavily-loaded shortest
path. Network traffic is thus distributed more evenly.
In order to do Constraint Based Routing, routers need to distribute new link state information
and to compute routes based on such information.
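One common way to realize this, sketched below with hypothetical link-state data, is to prune the links that cannot satisfy a bandwidth constraint and then run an ordinary shortest-path computation on the remaining topology.

import heapq

def constrained_spf(links, src, dst, min_bw):
    """links: {(u, v): (cost, available_bandwidth)} for a bidirectional graph.
    Returns the cheapest path from src to dst using only links whose available
    bandwidth satisfies the constraint, or None if no such path exists."""
    graph = {}
    for (u, v), (cost, bw) in links.items():
        if bw >= min_bw:                       # constraint: prune unusable links
            graph.setdefault(u, []).append((v, cost))
            graph.setdefault(v, []).append((u, cost))

    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))

    if dst not in dist:
        return None                            # no path meets the constraint
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# With the constraint, a longer but lightly loaded path wins over the congested
# shortest path (hypothetical topology: cost, available bandwidth per link).
links = {("A", "B"): (1, 10), ("B", "D"): (1, 10), ("A", "C"): (2, 100), ("C", "D"): (2, 100)}
print(constrained_spf(links, "A", "D", min_bw=50))   # ['A', 'C', 'D']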

7.5 QoS provisioning:


7.5.1 Differentiated Services Diffserv
Differentiated services or DiffServ is a computer networking architecture that specifies a
simple, scalable and coarse-grained mechanism for classifying and managing network traffic
and providing quality of service (QoS) on modern IP networks. DiffServ can, for example, be
used to provide low latency to critical network traffic such as voice or streaming media while
providing simple best-effort service to non-critical services such as web traffic or file
transfers.
DiffServ uses the 6-bit Differentiated Services Field (DS field) in the IP header for packet
classification purposes. The DS field and ECN field replace the outdated IPv4 TOS field.


Figure 7-19 Diffserv Architecture

Assured Forwarding (AF) PHB group


The IETF defines the Assured Forwarding behavior in RFC 2597 and RFC 3260. Assured
forwarding allows the operator to provide assurance of delivery as long as the traffic does not
exceed some subscribed rate. Traffic that exceeds the subscription rate faces a higher
probability of being dropped if congestion occurs.
The AF behavior group defines four separate AF classes with Class 4 having the highest
priority. Within each class, packets are given a drop precedence (high, medium or low). The
combination of classes and drop precedence yields twelve separate DSCP encodings from
AF11 through AF43 (see Table 7-3)
Table 7-3 Assured Forwarding (AF) Behavior Group

Some measure of priority and proportional fairness is defined between traffic in different
classes. Should congestion occur between classes, the traffic in the higher class is given
priority. Rather than using strict priority queuing, more balanced queue servicing algorithms
such as fair queuing or weighted fair queuing (WFQ) are likely to be used. If congestion
occurs within a class, the packets with the higher drop precedence are discarded first. To
prevent issues associated with tail drop, more sophisticated drop selection algorithms such as
random early detection (RED) are often used.

Class Selector (CS) PHB


Prior to DiffServ, IPv4 networks could use the Precedence field in the TOS byte of the IPv4
header to mark priority traffic. The TOS octet and IP precedence were not widely used. The


IETF agreed to reuse the TOS octet as the DS field for DiffServ networks. In order to
maintain backward compatibility with network devices that still use the Precedence field,
DiffServ defines the Class Selector PHB.
The Class Selector code points are of the form 'xxx000'. The first three bits are the IP
precedence bits. Each IP precedence value can be mapped into a DiffServ class. If a packet is
received from a non-DiffServ aware router that used IP precedence markings, the DiffServ
router can still understand the encoding as a Class Selector code point.
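The standard code point values can be composed directly from the AF class and drop precedence, or from the old IP precedence value (CS); a small sketch, with values taken from RFC 2597 and RFC 2474:

def af_dscp(af_class: int, drop_precedence: int) -> int:
    """AFxy code point: 3 class bits, 2 drop-precedence bits, then a 0 bit.
    E.g. AF11 = 001010b = 10, AF43 = 100110b = 38 (RFC 2597)."""
    assert 1 <= af_class <= 4 and 1 <= drop_precedence <= 3
    return (af_class << 3) | (drop_precedence << 1)

def cs_dscp(ip_precedence: int) -> int:
    """Class Selector code point 'xxx000': the old precedence bits shifted up."""
    assert 0 <= ip_precedence <= 7
    return ip_precedence << 3

print(af_dscp(1, 1), af_dscp(4, 3), cs_dscp(5))   # 10 38 40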

7.5.2 Multi Protocol Label Switching - MPLS


Multiprotocol Label Switching (MPLS) is a mechanism in high-performance
telecommunications networks that directs data from one network node to the next based on
short path labels rather than long network addresses, avoiding complex lookups in a routing
table. The labels identify virtual links (paths) between distant nodes rather than endpoints.
MPLS can encapsulate packets of various network protocols. MPLS supports a range of
access technologies, including T1/E1, ATM, Frame Relay, and DSL.
MPLS works by prefixing packets with an MPLS header containing one or more labels; this is called
a label stack. Each label stack entry contains four fields:
- a 20-bit label value;
- a 3-bit Traffic Class field for QoS (quality of service) priority (formerly the EXP/experimental
  field) and ECN (Explicit Congestion Notification);
- a 1-bit bottom-of-stack flag: if set, it signifies that the current label is the last in the stack;
- an 8-bit TTL (time to live) field.
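A small sketch of packing and unpacking one 32-bit label stack entry with this layout:

import struct

def pack_lse(label: int, tc: int, bottom: bool, ttl: int) -> bytes:
    """Build one 4-byte MPLS label stack entry: 20-bit label, 3-bit TC,
    1-bit bottom-of-stack flag, 8-bit TTL."""
    word = (label & 0xFFFFF) << 12 | (tc & 0x7) << 9 | (1 if bottom else 0) << 8 | (ttl & 0xFF)
    return struct.pack("!I", word)

def unpack_lse(data: bytes) -> dict:
    (word,) = struct.unpack("!I", data[:4])
    return {
        "label": word >> 12,
        "tc": (word >> 9) & 0x7,
        "bottom_of_stack": bool((word >> 8) & 0x1),
        "ttl": word & 0xFF,
    }

entry = pack_lse(label=100, tc=5, bottom=True, ttl=64)
print(unpack_lse(entry))   # {'label': 100, 'tc': 5, 'bottom_of_stack': True, 'ttl': 64}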

These MPLS-labeled packets are switched after a label lookup/switch instead of a lookup in the IP
table. When MPLS was conceived, label lookup and label switching were faster than a routing table
or RIB (Routing Information Base) lookup because they could take place directly within the
switching fabric rather than in the CPU.
Routers that perform routing based only on the label are called label switch routers (LSRs).
The entry and exit points of an MPLS network are called label edge routers (LERs), which
respectively push an MPLS label onto an incoming packet and pop it off the outgoing packet.
Alternatively, under penultimate hop popping this function may instead be performed by the LSR
directly connected to the LER.
Labels are distributed between LERs and LSRs using the Label Distribution Protocol
(LDP). LSRs in an MPLS network regularly exchange label and reachability information
with each other using standardized procedures in order to build a complete picture of the
network they can then use to forward packets. Label-switched paths (LSPs) are established by
the network operator for a variety of purposes, such as to create network-based IP virtual
private networks or to route traffic along specified paths through the network. In many
respects, LSPs are not different from permanent virtual circuits (PVCs) in ATM or Frame
Relay networks, except that they are not dependent on a particular layer-2 technology.
In the specific context of an MPLS-based virtual private network (VPN), LERs that function
as ingress and/or egress routers to the VPN are often called PE (Provider Edge) routers.
Devices that function only as transit routers are similarly called P (Provider) routers. See RFC
4364. The job of a P router is significantly easier than that of a PE router, so they can be less
complex and may be more dependable because of this.


When an unlabeled packet enters the ingress router and needs to be passed on to an MPLS
tunnel, the router first determines the forwarding equivalence class (FEC) the packet should
be in, and then inserts one or more labels in the packet's newly created MPLS header. The
packet is then passed on to the next hop router for this tunnel.
When a labeled packet is received by an MPLS router, the topmost label is examined. Based
on the contents of the label a swap, push (impose) or pop (dispose) operation can be
performed on the packet's label stack. Routers can have prebuilt lookup tables that tell them
which kind of operation to do based on the topmost label of the incoming packet so they can
process the packet very quickly.
In a swap operation the label is swapped with a new label, and the packet is forwarded along
the path associated with the new label.
In a push operation a new label is pushed on top of the existing label, effectively
"encapsulating" the packet in another layer of MPLS. This allows hierarchical routing of
MPLS packets. Notably, this is used by MPLS VPNs.
In a pop operation the label is removed from the packet, which may reveal an inner label
below. This process is called "decapsulation". If the popped label was the last on the label
stack, the packet "leaves" the MPLS tunnel. This is usually done by the egress router, but see
Penultimate Hop Popping (PHP) below.
During these operations, the contents of the packet below the MPLS label stack are not
examined. Indeed, transit routers typically need to examine only the topmost label on the
stack. Forwarding is done based on the contents of the labels, which allows
"protocol-independent packet forwarding" that does not need to consult a protocol-dependent
routing table and avoids the expensive IP longest-prefix match at each hop.
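A sketch of that per-label decision at a transit LSR; the incoming label map and interface names below are hypothetical.

def forward(label_stack, ilm):
    """label_stack: list of labels, top of stack last.
    ilm: incoming label map, e.g. {in_label: ("swap", out_label, out_if), ...}.
    Only the topmost label is examined; the payload is never inspected."""
    top = label_stack[-1]
    action = ilm[top]
    if action[0] == "swap":
        _, out_label, out_if = action
        label_stack[-1] = out_label
    elif action[0] == "push":
        _, out_label, out_if = action
        label_stack.append(out_label)          # one more level of encapsulation
    elif action[0] == "pop":
        _, out_if = action
        label_stack.pop()                      # may expose an inner label, or the payload
    return label_stack, out_if

ilm = {100: ("swap", 200, "ge-0/0/1"), 200: ("pop", "ge-0/0/2")}
print(forward([100], ilm))                     # ([200], 'ge-0/0/1')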
At the egress router, when the last label has been popped, only the payload remains. This can
be an IP packet, or any of a number of other kinds of payload packet. The egress router must
therefore have routing information for the packet's payload, since it must forward it without
the help of label lookup tables. An MPLS transit router has no such requirement.
In some special cases, the last label can also be popped off at the penultimate hop (the hop
before the egress router). This is called penultimate hop popping (PHP). This may be
interesting in cases where the egress router has lots of packets leaving MPLS tunnels, and
thus spends inordinate amounts of CPU time on this. By using PHP, transit routers connected
directly to this egress router effectively offload it, by popping the last label themselves.
MPLS can make use of existing ATM network or Frame Relay infrastructure, as its labeled
flows can be mapped to ATM or Frame Relay virtual-circuit identifiers, and vice versa.
Figure 7-20 shows the general architecture of MPLS implemented in a network.


Figure 7-20 MPLS General architecture


8 Network Security

8.1 Security requirements faced by modern telecom networks, IPsec
Source: RFC 4301
IPsec is designed to provide interoperable, high quality, cryptographically-based security for
IPv4 and IPv6. The set of security services offered includes:
- Access control
- Connectionless integrity
- Data origin authentication
- Detection and rejection of replays (a form of partial sequence integrity)
- Confidentiality (via encryption)

These services are provided at the IP layer, offering protection in a standard fashion for all
protocols that may be carried over IP (including IP itself).
The IPsec architecture defines the elements listed below:
1. Security Protocols:
   - Authentication Header (AH)
   - Encapsulating Security Payload (ESP)
2. Security Associations (SA)
3. Key Management, manual and automatic (IKE)
4. Cryptographic algorithms for authentication and encryption

The elements listed above are described in this chapter.

Most of the security services are provided through use of two traffic security protocols, the
Authentication Header (AH) and the Encapsulating Security Payload (ESP), and through the
use of cryptographic key management procedures and protocols. The set of IPsec protocols
employed in a context, and the ways in which they are employed, will be determined by the
users/administrators in that context. It is the goal of the IPsec architecture to ensure that
compliant implementations include the services and management interfaces needed to meet
the security requirements of a broad user population.
When IPsec is correctly implemented and deployed, it ought not adversely affect users, hosts,
and other Internet components that do not employ IPsec for traffic protection. IPsec security
protocols (AH and ESP, and to a lesser extent, IKE) are designed to be cryptographic
algorithm independent. This modularity permits selection of different sets of cryptographic
algorithms as appropriate, without affecting the other parts of the implementation.


For example, different user communities may select different sets of cryptographic algorithms
(creating cryptographically-enforced cliques) if required.
To facilitate interoperability in the global Internet, a set of default cryptographic algorithms
for use with AH and ESP is specified in [Eas05] and a set of mandatory-to-implement
algorithms for IKEv2 is specified in [Sch05]. [Eas05] and [Sch05] will be periodically
updated to keep pace with computational and cryptologic advances. By specifying these
algorithms in documents that are separate from the AH, ESP, and IKEv2 specifications, these
algorithms can be updated or replaced without affecting the standardization progress of the
rest of the IPsec document suite. The use of these cryptographic algorithms, in conjunction
with IPsec traffic protection and key management protocols, is intended to permit system and
application developers to deploy high quality, Internet-layer, cryptographic security
technology.
The threats faced in an IP environment include:
- IP sniffing
- IP spoofing
- IP hijacking
- DoS (Denial of Service) attacks

The IPsec architecture is defined to counter these threats, or at least to make the network or
the connection resistant to such attacks.

8.2 Overview of security protocols


The protocols of IPsec and their functions can be summarized as follows:
1. The IP Authentication Header (AH) [Ken05b] offers integrity and data origin authentication,
   with optional (at the discretion of the receiver) anti-replay features.
2. The Encapsulating Security Payload (ESP) protocol [Ken05a] offers the same set of services,
   and also offers confidentiality. Use of ESP to provide confidentiality without integrity is
   NOT RECOMMENDED. When ESP is used with confidentiality enabled, there are provisions for
   limited traffic flow confidentiality, i.e. provisions for concealing packet length and for
   facilitating efficient generation and discard of dummy packets. This capability is likely to
   be effective primarily in virtual private network (VPN) and overlay network contexts.
3. Both AH and ESP offer access control, enforced through the distribution of cryptographic keys
   and the management of traffic flows as dictated by the Security Policy Database (SPD).

8.2.2 Authentication header (AH)


Source: RFC 4302
The IP Authentication Header (AH) is used to provide connectionless integrity and data origin
authentication for IP datagrams (hereafter referred to as just "integrity") and to provide
protection against replays. This latter, optional service may be selected, by the receiver, when
a Security Association (SA) is established. (The protocol default requires the sender to
increment the sequence number used for anti-replay, but the service is effective only if the
receiver checks the sequence number.) However, to make use of the Extended Sequence
Number feature in an interoperable fashion, AH does impose a requirement on SA
management protocols to be able to negotiate this new feature.


AH provides authentication for as much of the IP header as possible, as well as for next level
protocol data. However, some IP header fields may change in transit and the value of these
fields, when the packet arrives at the receiver, may not be predictable by the sender. The
values of such fields cannot be protected by AH. Thus, the protection provided to the IP
header by AH is piecemeal.
AH may be applied alone, in combination with the IP Encapsulating Security Payload (ESP)
[Ken-ESP], or in a nested fashion.
Security services can be provided between a pair of communicating hosts, between a pair of
communicating security gateways, or between a security gateway and a host. ESP may be
used to provide the same anti-replay and similar integrity services, and it also provides a
confidentiality (encryption) service. The primary difference between the integrity provided by
ESP and AH is the extent of the coverage. Specifically, ESP does not protect any IP header
fields unless those fields are encapsulated by ESP (e.g., via use of tunnel mode).
By using AH the following can be achieved:
- Message authentication
- Integrity
- Replay avoidance
- NO privacy (AH does not provide confidentiality)

Using and applying AH can be summarized in the steps below:
1. Add the AH with Authentication Data = 0.
2. Pad if necessary to obtain an even length.
3. Compute the hash over the total length.
4. Include the Authentication Data in the header.
5. Add the IP header with protocol = 51.
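A hedged sketch of the integrity computation along the lines of these steps: the ICV field is zeroed before hashing, mutable IP header fields are assumed to have been zeroed by the caller, and HMAC-SHA-256 merely stands in for whatever integrity algorithm the SA actually negotiated.

import hmac, hashlib

def ah_icv(key: bytes, immutable_ip_header: bytes, ah_header_wo_icv: bytes,
           payload: bytes, icv_len: int = 16) -> bytes:
    """Compute an AH-style Integrity Check Value over the packet with the ICV
    field set to zero (step 1 above). Mutable IP header fields (TTL, checksum,
    ...) must already be zeroed by the caller, since they cannot be protected."""
    zero_icv = b"\x00" * icv_len
    data = immutable_ip_header + ah_header_wo_icv + zero_icv + payload
    return hmac.new(key, data, hashlib.sha256).digest()[:icv_len]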
Figure 8-1 below shows the AH and its fields:

Figure 8-1 Authentication Header AH


8.2.3 Encapsulating Security Payload (ESP)

Source: RFC 4303
The Encapsulating Security Payload (ESP) header is designed to provide a mix of security
services in IPv4 and IPv6 [DH98]. ESP may be applied alone, in combination with AH
[Ken-AH], or in a nested fashion. Security services can be provided between a pair of
communicating hosts, between a pair of communicating security gateways, or between a
security gateway and a host. For more details on how to use ESP and AH in various network
environments, see the Security Architecture document [Ken-Arch].
The ESP header is inserted after the IP header and before the next layer protocol header
(transport mode) or before an encapsulated IP header (tunnel mode). These modes are
described in more detail below. ESP can be used to provide confidentiality, data origin
authentication, connectionless integrity, an anti-replay service (a form of partial sequence
integrity), and (limited) traffic flow confidentiality. The set of services provided depends on
options selected at the time of Security Association (SA) establishment and on the location of
the implementation in a network topology. Using encryption-only for confidentiality is
allowed by ESP. However, it should be noted that in general, this will provide defense only
against passive attackers. Using encryption without a strong integrity mechanism on top of it
(either in ESP or separately via AH) may render the confidentiality service insecure against
some forms of active attacks [Bel96, Kra01]. Moreover, an underlying integrity service, such
as AH, applied before encryption does not necessarily protect the encryption-only
confidentiality against active attackers [Kra01]. ESP allows encryption-only SAs because this
may offer considerably better performance and still provide adequate security, e.g., when
higher-layer authentication/integrity protection is offered independently. However, this
standard does not require ESP implementations to offer an encryption-only service.
Data origin authentication and connectionless integrity are joint services, hereafter referred to
jointly as "integrity". (This term is employed because, on a per-packet basis, the computation
being performed provides connectionless integrity directly; data origin authentication is
provided indirectly as a result of binding the key used to verify the integrity to the identity of
the IPsec peer. Typically, this binding is effected through the use of a shared, symmetric key.)
Integrity-only ESP MUST be offered as a service selection option, e.g., it must be negotiable
in SA management protocols and MUST be configurable via management interfaces.
Integrity-only ESP is an attractive alternative to AH in many contexts, e.g., because it is faster
to process and more amenable to pipelining in many implementations. Although
confidentiality and integrity can be offered independently, ESP typically will employ both
services, i.e., packets will be protected with regard to confidentiality and integrity. Thus, there
are three possible ESP security service combinations involving these services:
- Confidentiality-only (MAY be supported)
- Integrity-only (MUST be supported)
- Confidentiality and integrity (MUST be supported)

The anti-replay service may be selected for an SA only if the integrity service is selected for
that SA. The selection of this service is solely at the discretion of the receiver and thus need
not be negotiated. However, to make use of the Extended Sequence Number feature in an
interoperable fashion, ESP does impose a requirement on SA management protocols to be
able to negotiate this feature.


The traffic flow confidentiality (TFC) service generally is effective only if ESP is employed
in a fashion that conceals the ultimate source and destination addresses of correspondents,
e.g., in tunnel mode between security gateways, and only if sufficient traffic flows between
IPsec peers (either naturally or as a result of generation of masking traffic) to conceal the
characteristics of specific, individual subscriber traffic flows. (ESP may be employed as part
of a higher-layer TFC system, e.g., Onion Routing [Syverson], but such systems are outside
the scope of this standard.) New TFC features present in ESP facilitate efficient generation
and discarding of dummy traffic and better padding of real traffic, in a backward- compatible
fashion.
The following security aspects are achieved by ESP:
- Message authentication
- Integrity
- Replay avoidance
- Privacy

The steps to apply the ESP header are:
1. Add the ESP trailer.
2. Encrypt the payload + trailer.
3. Add the ESP header.
4. Compute the authentication data over ESP header + payload + ESP trailer.
5. Append the authentication data.
6. Add the IP header with protocol = 50.
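A sketch of the same ordering; the keystream XOR below is only a placeholder so the example stays self-contained (not a real ESP cipher), and the 96-bit truncated HMAC is one common but not mandatory choice for the ICV.

import hmac, hashlib

def esp_encapsulate(spi: int, seq: int, payload: bytes, next_header: int,
                    enc_key: bytes, auth_key: bytes) -> bytes:
    """Follow the step ordering above; a real implementation would use the
    cipher and integrity algorithm negotiated for the SA."""
    # 1. Build the ESP trailer: default padding (1, 2, 3, ...), pad length, next header.
    pad_len = (-(len(payload) + 2)) % 4
    trailer = bytes(range(1, pad_len + 1)) + bytes([pad_len, next_header])
    # 2. "Encrypt" payload + trailer (toy keystream, NOT a real cipher).
    seed = enc_key + spi.to_bytes(4, "big") + seq.to_bytes(4, "big")
    keystream = hashlib.sha256(seed).digest()
    keystream = keystream * ((len(payload) + len(trailer)) // len(keystream) + 1)
    ciphertext = bytes(a ^ b for a, b in zip(payload + trailer, keystream))
    # 3. Add the ESP header: SPI and sequence number.
    esp_header = spi.to_bytes(4, "big") + seq.to_bytes(4, "big")
    # 4.-5. Integrity check value over ESP header + ciphertext, appended at the end.
    icv = hmac.new(auth_key, esp_header + ciphertext, hashlib.sha256).digest()[:12]
    # 6. The result would then be carried in an IP packet with protocol = 50.
    return esp_header + ciphertext + icv

pkt = esp_encapsulate(0x1001, 1, b"hello, world", next_header=6,
                      enc_key=b"k" * 16, auth_key=b"a" * 16)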

Figure 8-2 shows the ESP header:


Figure 8-2 Encapsulation Security Payload - ESP

8.2.4 Applying AH and ESP


IPsec can be run in tunnel mode or transport mode. In tunnel mode, at least one end of the tunnel
must be a Security Gateway (SG). Figure 8-3 shows ESP applied in tunnel mode and in transport mode.


Figure 8-3 Applying ESP in Tunnel mode and Transport Mode Example: IMS

8.3 Security Association (SA)


Source: RFC 4301
Both AH and ESP make use of SAs, and a major function of IKE is the establishment and
maintenance of SAs. All implementations of AH or ESP MUST support the concept of an SA
as described below.
An SA is a simplex "connection" that affords security services to the traffic carried by it.
Security services are afforded to an SA by the use of AH, or ESP, but not both. If both AH
and ESP protection are applied to a traffic stream, then two SAs must be created and
coordinated to effect protection through iterated application of the security protocols. To
secure typical, bi-directional communication between two IPsec-enabled systems, a pair of
SAs (one in each direction) is required. IKE explicitly creates SA pairs in recognition of this
common usage requirement. For an SA used to carry unicast traffic, the Security Parameters
Index (SPI) by itself suffices to specify an SA.
However, as a local matter, an implementation may choose to use the SPI in conjunction with
the IPsec protocol type (AH or ESP) for SA identification. If an IPsec implementation
supports multicast, then it MUST support multicast SAs using the algorithm below for
mapping inbound IPsec datagrams to SAs. Implementations that support only unicast traffic
need not implement this de-multiplexing algorithm.
In many secure multicast architectures, e.g., [RFC3740], a central Group Controller/Key
Server unilaterally assigns the Group Security Association's (GSA's) SPI. This SPI assignment
is not negotiated or coordinated with the key management (e.g., IKE) subsystems that reside
in the individual end systems that constitute the group. Consequently, it is possible that a GSA
and a unicast SA can simultaneously use the same SPI. A multicast-capable IPsec
implementation MUST correctly de-multiplex inbound traffic even in the context of SPI
collisions.


Each entry in the SA Database (SAD) (Section 4.4.2) must indicate whether the SA lookup
makes use of the destination IP address, or the destination and source IP addresses, in addition
to the SPI. For multicast SAs, the protocol field is not employed for SA lookups. For each
inbound, IPsec-protected packet, an implementation must conduct its search of the SAD such
that it finds the entry that matches the "longest" SA identifier. In this context, if two or more
SAD entries match based on the SPI value, then the entry that also matches based on
destination address, or destination and source address (as indicated in the SAD entry) is the
"longest" match. This implies a logical ordering of the SAD search as follows:
1. Search the SAD for a match on the combination of SPI, destination address, and source
   address. If an SAD entry matches, then process the inbound packet with that matching SAD
   entry. Otherwise, proceed to step 2.
2. Search the SAD for a match on both SPI and destination address. If the SAD entry matches,
   then process the inbound packet with that matching SAD entry. Otherwise, proceed to step 3.
3. Search the SAD for a match on only SPI if the receiver has chosen to maintain a single SPI
   space for AH and ESP, and on both SPI and protocol otherwise. If an SAD entry matches, then
   process the inbound packet with that matching SAD entry. Otherwise, discard the packet and
   log an auditable event.

In practice, an implementation may choose any method (or none at all) to accelerate this
search, although its externally visible behavior MUST be functionally equivalent to having
searched the SAD in the above order. For example, a software-based implementation could
index into a hash table by the SPI. The SAD entries in each hash table bucket's linked list
could be kept sorted to have those SAD entries with the longest SA identifiers first in that
linked list. Those SAD entries having the shortest SA identifiers could be sorted so that they
are the last entries in the linked list. A hardware-based implementation may be able to effect
the longest match search intrinsically, using commonly available Ternary
Content-Addressable Memory (TCAM) features.
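The search order can be sketched as follows, with the SAD modelled as a plain list and the field names chosen only for illustration.

def sad_lookup(sad, spi, dst, src, proto):
    """Longest-identifier-first search, as in the three steps above.
    Each SAD entry records which fields form its identifier ('id_fields')."""
    searches = [
        ("spi", "dst", "src"),     # step 1: longest identifier
        ("spi", "dst"),            # step 2
        ("spi", "proto"),          # step 3 (or just ("spi",) with a single SPI space)
    ]
    packet = {"spi": spi, "dst": dst, "src": src, "proto": proto}
    for fields in searches:
        for entry in sad:
            if tuple(entry["id_fields"]) == fields and \
               all(entry[f] == packet[f] for f in fields):
                return entry
    return None                    # no match: discard the packet and log an auditable event

sad = [{"id_fields": ("spi", "dst"), "spi": 0x1001, "dst": "192.0.2.10", "sa": "ESP tunnel"}]
print(sad_lookup(sad, 0x1001, "192.0.2.10", "198.51.100.7", "esp"))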
The indication of whether source and destination address matching is required to map inbound
IPsec traffic to SAs MUST be set either as a side effect of manual SA configuration or via
negotiation using an SA management protocol, e.g., IKE or Group Domain of Interpretation
(GDOI) [RFC3547]. Typically, Source-Specific Multicast (SSM) [HC03] groups use a 3-tuple
SA identifier composed of an SPI, a destination multicast address, and source address. An
Any-Source Multicast group SA requires only an SPI and a destination multicast address as an
identifier.
If different classes of traffic (distinguished by Differentiated Services Code Point (DSCP) bits
[NiBlBaBL98], [Gro02]) are sent on the same SA, and if the receiver is employing the
optional anti-replay feature available in both AH and ESP, this could result in inappropriate
discarding of lower priority packets due to the windowing mechanism used by this feature.
Therefore, a sender SHOULD put traffic of different classes, but with the same selector
values, on different SAs to support Quality of Service (QoS) appropriately. To permit this, the
IPsec implementation MUST permit establishment and maintenance of multiple SAs between
a given sender and receiver, with the same selectors. Distribution of traffic among these
parallel SAs to support QoS is locally determined by the sender and is not negotiated by IKE.
The receiver MUST process the packets from the different SAs without prejudice. These
requirements apply to both transport and tunnel mode SAs. In the case of tunnel mode SAs,
the DSCP values in question appear in the inner IP header. In transport mode, the DSCP value
might change en route, but this should not cause problems with respect to IPsec processing
since the value is not employed for SA selection and MUST NOT be checked as part of
SA/packet validation. However, if significant re-ordering of packets occurs in an SA, e.g., as a
result of changes to DSCP values en route, this may trigger packet discarding by a receiver
due to application of the anti-replay mechanism.


As noted above, two types of SAs are defined:
- Transport mode
- Tunnel mode

IKE creates pairs of SAs, so for simplicity, we choose to require that both SAs in a pair be of
the same mode, transport or tunnel.
A transport mode SA is an SA typically employed between a pair of hosts to provide
end-to-end security services. When security is desired between two intermediate systems
along a path (vs. end-to-end use of IPsec), transport mode MAY be used between security
gateways or between a security gateway and a host. In the case where transport mode is used
between security gateways or between a security gateway and a host, transport mode may be
used to support in-IP tunneling (e.g., IP-in-IP [Per96] or Generic Routing Encapsulation
(GRE) tunneling [FaLiHaMeTr00] or dynamic routing [ToEgWa04]) over transport mode
SAs. To clarify, the use of transport mode by an intermediate system (e.g., a security gateway)
is permitted only when applied to packets whose source address (for outbound packets) or
destination address (for inbound packets) is an address belonging to the intermediate system
itself. The access control functions that are an important part of IPsec are significantly limited
in this context, as they cannot be applied to the end-to-end headers of the packets that traverse
a transport mode SA used in this fashion. Thus, this way of using transport mode should be
evaluated carefully before being employed in a specific context.
In IPv4, a transport mode security protocol header appears immediately after the IP header
and any options, and before any next layer protocols (e.g., TCP or UDP). In IPv6, the security
protocol header appears after the base IP header and selected extension headers, but may
appear before or after destination options; it MUST appear before next layer protocols (e.g.,
TCP, UDP, Stream Control Transmission Protocol (SCTP)). In the case of ESP, a transport
mode SA provides security services only for these next layer protocols, not for the IP header
or any extension headers preceding the ESP header. In the case of AH, the protection is also
extended to selected portions of the IP header preceding it, selected portions of extension
headers, and selected options (contained in the IPv4 header, IPv6 Hop-by-Hop extension
header, or IPv6 Destination extension headers). For more details on the coverage afforded by
AH, see the AH specification [Ken05b].
A tunnel mode SA is essentially an SA applied to an IP tunnel, with the access controls
applied to the headers of the traffic inside the tunnel. Two hosts MAY establish a tunnel mode
SA between themselves. Aside from the two exceptions below, whenever either end of a
security association is a security gateway, the SA MUST be tunnel mode. Thus, an SA
between two security gateways is typically a tunnel mode SA, as is an SA between a host and
a security gateway.
The two exceptions are as follows:
- Where traffic is destined for a security gateway, e.g., Simple Network Management Protocol
  (SNMP) commands, the security gateway is acting as a host and transport mode is allowed. In
  this case, the SA terminates at a host (management) function within a security gateway and
  thus merits different treatment.
- As noted above, security gateways MAY support a transport mode SA to provide security for IP
  traffic between two intermediate systems along a path, e.g., between a host and a security
  gateway or between two security gateways.

Several concerns motivate the use of tunnel mode for an SA involving a security gateway.
For example, if there are multiple paths (e.g., via different security gateways) to the same
destination behind a security gateway, it is important that an IPsec packet be sent to the
security gateway with which the SA was negotiated. Similarly, a packet that might be
fragmented en route must have all the fragments delivered to the same IPsec instance for


reassembly prior to cryptographic processing. Also, when a fragment is processed by IPsec
and transmitted, then fragmented en route, it is critical that there be inner and outer headers to
retain the fragmentation state data for the pre- and post-IPsec packet formats. Hence there
are several reasons for employing tunnel mode when either end of an SA is a security
gateway. (Use of an IP-in-IP tunnel in conjunction with transport mode can also address these
fragmentation issues. However, this configuration limits the ability of IPsec to enforce access
control policies on traffic.)
Note: AH and ESP cannot be applied using transport mode to IPv4 packets that are fragments.
Only tunnel mode can be employed in such cases. For IPv6, it would be feasible to carry a
plaintext fragment on a transport mode SA; however, for simplicity, this restriction also
applies to IPv6 packets.
For a tunnel mode SA, there is an "outer" IP header that specifies the IPsec processing source
and destination, plus an "inner" IP header that specifies the (apparently) ultimate source and
destination for the packet. The security protocol header appears after the outer IP header, and
before the inner IP header. If AH is employed in tunnel mode, portions of the outer IP header
are afforded protection (as above), as well as all of the tunneled IP packet (i.e., all of the
inner IP header is protected, as well as next layer protocols). If ESP is employed, the
protection is afforded only to the tunneled packet, not to the outer header.

In summary:
- A host implementation of IPsec MUST support both transport and tunnel mode. This is true for
  native, BITS, and BITW implementations for hosts.
- A security gateway MUST support tunnel mode and MAY support transport mode. If it supports
  transport mode, that should be used only when the security gateway is acting as a host, e.g.,
  for network management, or to provide security between two intermediate systems along a path.

Figure 8-4 gives a general view of where in the protocol stack security can be applied, as well
as a general view of the IPsec architecture, including Security Gateways.


Figure 8-4 Security Association and IPsec Architecture

8.4 Security Association and Key Management


Source: RFC 4301
All IPsec implementations MUST support both manual and automated SA and cryptographic
key management. The IPsec protocols, AH and ESP, are largely independent of the associated
SA management techniques, although the techniques involved do affect some of the security
services offered by the protocols. For example, the optional anti-replay service available for
AH and ESP requires automated SA management. Moreover, the granularity of key
distribution employed with IPsec determines the granularity of authentication provided. In
general, data origin authentication in AH and ESP is limited by the extent to which secrets
used with the integrity algorithm (or with a key management protocol that creates such
secrets) are shared among multiple possible sources. The following text describes the
minimum requirements for both types of SA management.

8.4.1 Manual Techniques


The simplest form of management is manual management, in which a person manually
configures each system with keying material and SA management data relevant to secure
communication with other systems.
Manual techniques are practical in small, static environments but they do not scale well. For
example, a company could create a virtual private network (VPN) using IPsec in security
gateways at several sites. If the number of sites is small, and since all the sites come under the
purview of a single administrative domain, this might be a feasible context for manual
management techniques. In this case, the security gateway might selectively protect traffic to
and from other sites within the organization using a manually configured key, while not
protecting traffic for other destinations. It also might be appropriate when only selected
communications need to be secured. A similar argument might apply to use of IPsec entirely
within an organization for a small number of hosts and/or gateways. Manual management


techniques often employ statically configured, symmetric keys, though other options also
exist.

8.4.2 Automated SA and Key Management


Widespread deployment and use of IPsec requires an Internet-standard, scalable, automated,
SA management protocol. Such support is required to facilitate use of the anti-replay features
of AH and ESP, and to accommodate on-demand creation of SAs, e.g., for user- and
session-oriented keying. (Note that the notion of "rekeying" an SA actually implies creation of
a new SA with a new SPI, a process that generally implies use of an automated SA/key
management protocol.)
The default automated key management protocol selected for use with IPsec is IKEv2. This
document assumes the availability of certain functions from the key management protocol
that are not supported by IKEv1. Other automated SA management protocols MAY be
employed. When an automated SA/key management protocol is employed, the output from
this protocol may be used to generate multiple keys for a single SA; distinct keys are also used
for each of the two SAs created by IKE. If both integrity and confidentiality are employed, then
a minimum of four keys is required. Additionally, some cryptographic algorithms may require
multiple keys, e.g., 3DES.
The Key Management System may provide a separate string of bits for each key or it may
generate one string of bits from which all keys are extracted. If a single string of bits is
provided, care needs to be taken to ensure that the parts of the system that map the string of
bits to the required keys do so in the same fashion at both ends of the SA. To ensure that the
IPsec implementations at each end of the SA use the same bits for the same keys, and
irrespective of which part of the system divides the string of bits into individual keys, the
encryption keys MUST be taken from the first (left-most, high-order) bits and the integrity
keys MUST be taken from the remaining bits. The number of bits for each key is defined in
the relevant cryptographic algorithm specification RFC. In the case of multiple encryption
keys or multiple integrity keys, the specification for the cryptographic algorithm must specify
the order in which they are to be selected from a single string of bits provided to the
cryptographic algorithm.
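A sketch of that splitting rule: the encryption key is taken from the left-most (high-order) bits of the keying material and the integrity key from the remaining bits; the key lengths below are illustrative and would in practice come from the algorithm specification RFCs mentioned above.

def split_keying_material(bits: bytes, enc_key_len: int, integ_key_len: int):
    """Take the encryption key from the first (left-most, high-order) bits of
    the single keying string, then the integrity key from the remaining bits."""
    if len(bits) < enc_key_len + integ_key_len:
        raise ValueError("not enough keying material")
    enc_key = bits[:enc_key_len]                           # first (high-order) bits
    integ_key = bits[enc_key_len:enc_key_len + integ_key_len]
    return enc_key, integ_key

# e.g. a 256-bit encryption key followed by a 256-bit integrity key from 64 bytes of material
material = bytes(range(64))
enc, integ = split_keying_material(material, 32, 32)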

8.5 Cryptographic algorithms for authentication and encryption

Algorithms for encryption/ciphering and authentication can be grouped into three categories:
1. One-way algorithms
2. Two-way symmetric algorithms
