V200R005(C00&C01&C02&C03)
Issue 07
Date 2015-12-18
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://e.huawei.com
Intended Audience
This document describes MPLS features on the device and provides configuration procedures
and configuration examples.
This document is intended for:
l Data configuration engineers
l Commissioning engineers
l Network monitoring engineers
l System maintenance engineers
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
Command Conventions
The command conventions that may be found in this document are defined as follows.
Convention Description
Security Conventions
l Password setting
Declaration
This manual is only a reference for you to configure your devices. The contents in the manual,
such as web pages, command line syntax, and command outputs, are based on the device
conditions in the lab. The manual provides instructions for general scenarios, but does not cover
all usage scenarios of all product models. The contents in the manual may be different from
your actual device situations due to the differences in software versions, models, and
configuration files. The manual will not list every possible difference. You should configure
your devices according to actual situations.
The specifications provided in this manual are tested in a lab environment (for example, the
tested device has been installed with a certain type of boards or only one protocol is run on
the device). Results may differ from the listed specifications when you attempt to obtain the
maximum values with multiple functions enabled on the device.
V200R005C00SPC300 V200R003C10
V200R005C00SPC500
V200R005C01
V200R005C02 V200R005C00SPC500
V200R005C03 V300R005C00
Change History
Changes between document issues are cumulative. Therefore, the latest document issue
contains all updates made in previous issues.
Contents
5 MPLS TE Configuration...........................................................................................................233
5.1 Overview.................................................................................................................................................................... 235
5.2 Principles.................................................................................................................................................................... 236
5.2.1 Concepts.................................................................................................................................................................. 236
5.2.2 Implementation........................................................................................................................................................ 242
5.2.3 Information Advertisement......................................................................................................................................244
5.2.4 Path Calculation.......................................................................................................................................................252
5.2.5 CS-LSP Setup.......................................................................................................................................................... 254
1 MPLS Overview
Definition
Multiprotocol Label Switching (MPLS) is used on Internet Protocol (IP)
backbone networks. MPLS uses connection-oriented label switching on connectionless IP
networks. By combining Layer 3 routing technologies and Layer 2 switching technologies,
MPLS leverages the flexibility of IP routing and the simplicity of Layer 2 switching.
MPLS is based on Internet Protocol version 4 (IPv4). The core MPLS technology can be
extended to multiple network protocols, such as Internet Protocol version 6 (IPv6), Internet
Packet Exchange (IPX), and Connectionless Network Protocol (CLNP). "Multiprotocol" in
MPLS means that multiple network protocols are supported.
MPLS is a tunneling technology, not a service or an application. MPLS supports multiple
protocols and services. Moreover, it ensures security of data transmission.
Purpose
IP-based routing served the Internet well in the mid-1990s, but IP technology can be
inefficient at forwarding packets because software must search for routes using the longest-
match algorithm. As a result, the forwarding capability of IP technology can become a
bottleneck.
In contrast, Asynchronous Transfer Mode (ATM) technology uses labels of fixed length and
maintains a label table that is much smaller than a routing table, so ATM forwards packets
more efficiently than IP. ATM is a complex protocol, however, with high deployment costs
that hindered its widespread use.
Because traditional IP technology is simple and costs little to deploy, a combination of IP and
ATM capabilities would be ideal. This has sparked the emergence of MPLS technology.
MPLS was created to increase forwarding rates. Unlike IP routing and forwarding, MPLS
analyzes a packet header only on the edge of the network and not at each hop. MPLS
therefore reduces packet processing time.
The use of hardware-based functions based on application-specific integrated circuits (ASICs)
has made IP routing far more efficient, so MPLS is no longer needed for its high-speed
forwarding advantages. However, MPLS does support multi-layer labels, and its forwarding
plane is connection-oriented. For these reasons, MPLS is widely used for virtual private
network (VPN), traffic engineering (TE), and quality of service (QoS).
1.2 Principles
This section describes the implementation of MPLS.
Devices that forward MPLS packets are label switching routers (LSRs), which form an MPLS domain. LSRs that reside at
the edge of the MPLS domain and connect to other networks are called label edge routers
(LERs), and LSRs within the MPLS domain are core LSRs.
Figure 1-1 shows an MPLS network: LERs at both edges connect the MPLS domain to IP networks, and an LSP carries the data flow across the domain.
When IP packets reach an MPLS network, the ingress LER analyzes the packets and then
adds appropriate labels to them. All LSRs on the MPLS network forward packets based on
labels. When IP packets leave the MPLS network, the egress LER pops the labels.
A path along which IP packets are transmitted on an MPLS network is called a label switched
path (LSP). An LSP is unidirectional and follows the direction in which data packets are transmitted.
As shown in Figure 1-1, the LER at the starting point of an LSP is the ingress node, and the
LER at the end of the LSP is the egress node. The LSRs between the ingress node and egress
node along the LSP are transit nodes. An LSP may have zero, one, or several transit nodes
and only one ingress node and one egress node.
On an LSP, MPLS packets are sent from the ingress to the egress. In this transmission
direction, the ingress node is the upstream node of the transit nodes, and the transit nodes are
the downstream nodes of the ingress node. Similarly, transit nodes are the upstream nodes of
the egress node, and the egress node is the downstream node of the transit nodes.
MPLS Architecture
Figure 1-2 shows the MPLS architecture, which consists of a control plane and a forwarding
plane.
Label
A label is a short, fixed-length (4-byte) identifier that is significant only locally. A label
identifies the forwarding equivalence class (FEC) to which a packet belongs. In some cases,
such as load balancing, an FEC can be mapped to multiple incoming labels. Each label,
however, represents only one FEC on a device.
Compared with an IP packet, an MPLS packet has the additional 4-byte MPLS label. The
MPLS label is between the link layer header and the network layer header, and allows use of
any link layer protocol. Figure 1-3 shows the position of an MPLS label and the fields in the MPLS
label.
An MPLS label contains the following fields:
l Label: 20 bits (bits 0 to 19), the label value.
l Exp: 3 bits (bits 20 to 22), reserved for extension and usually used for class of service.
l S: 1 bit (bit 23), the bottom-of-stack flag; the value 1 identifies the bottom label of the stack.
l TTL: 8 bits (bits 24 to 31), time to live, with the same meaning as the TTL field in an IP header.
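The 4-byte layout above can be sketched in Python. The helpers below are illustrative only (they are not part of any product software) and pack or unpack the four fields of one label entry:

```python
import struct

def encode_label(label: int, exp: int, s: int, ttl: int) -> bytes:
    """Pack the four MPLS fields into one 4-byte label entry."""
    assert 0 <= label < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    word = (label << 12) | (exp << 9) | (s << 8) | ttl
    return struct.pack("!I", word)   # network byte order, 32 bits

def decode_label(entry: bytes):
    """Unpack a 4-byte MPLS label entry into (label, exp, s, ttl)."""
    (word,) = struct.unpack("!I", entry)
    return (word >> 12, (word >> 9) & 0x7, (word >> 8) & 0x1, word & 0xFF)

# Label 1024 (first dynamic label), bottom of stack, TTL 255.
entry = encode_label(label=1024, exp=0, s=1, ttl=255)
assert decode_label(entry) == (1024, 0, 1, 255)
```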
A label stack is an arrangement of labels. In Figure 1-4, the label next to the Layer 2 header is
the top of the label stack (outer MPLS label), and the label next to the Layer 3 header is the
bottom of the label stack (inner MPLS label). An MPLS label stack can contain an unlimited
number of labels. Currently, MPLS label stacks can be applied to MPLS VPN and Traffic
Engineering Fast ReRoute (TE FRR).
Figure 1-4 shows a packet with a two-level label stack, in order: link layer header, outer MPLS label, inner MPLS label, Layer 3 header, and Layer 3 payload.
The label stack organizes labels according to the rule of Last-In, First-Out. The labels are
processed from the top of the stack.
Label Space
The label space is the value range of the label, and the space is organized in the following
ranges:
l 0 to 15: special labels. For details about special labels, see Table 1-1.
l 16 to 1023: label space shared by static LSPs and static constraint-based routed LSPs
(CR-LSPs).
l 1024 or above: label space for dynamic signaling protocols, such as Label Distribution
Protocol (LDP), Resource Reservation Protocol-Traffic Engineering (RSVP-TE), and
MultiProtocol Border Gateway Protocol (MP-BGP).
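As a quick illustration, the three ranges above can be expressed as a small classification function (the function name is hypothetical):

```python
def label_space(label: int) -> str:
    """Classify an MPLS label value into the ranges described above."""
    if not 0 <= label < 2**20:
        raise ValueError("MPLS labels are 20-bit values")
    if label <= 15:
        return "special"   # reserved special labels (Table 1-1)
    if label <= 1023:
        return "static"    # static LSPs and static CR-LSPs
    return "dynamic"       # LDP, RSVP-TE, MP-BGP

assert [label_space(n) for n in (3, 16, 1023, 1024)] == \
       ["special", "static", "static", "dynamic"]
```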
l 0: IPv4 Explicit NULL Label. The label must be popped out (removed), and the packet must be forwarded based on IPv4. If the egress node allocates a label with the value of 0 to the penultimate hop LSR, the penultimate hop LSR pushes label 0 to the top of the label stack and forwards the packet to the egress node. When the egress node detects that the label of the packet is 0, it pops the label out.
l 1: Router Alert Label. This label is valid only when it is not at the bottom of the label stack. It is similar to the Router Alert Option field in IP packets. After receiving such a label, the node sends the packet to a local software module for further processing. Packet forwarding is determined by the next-layer label. If the packet needs to be forwarded further, the node pushes the Router Alert Label back onto the top of the label stack.
l 2: IPv6 Explicit NULL Label. The label must be popped out, and the packet must be forwarded based on IPv6. If the egress node allocates a label with the value of 2 to the penultimate hop LSR, the LSR pushes label 2 to the top of the label stack and forwards the packet to the egress node. When the egress node recognizes that the label carried in the packet is 2, it immediately pops the label out.
l 3: Implicit NULL Label. The egress node allocates this label to the penultimate hop LSR to request penultimate hop popping (PHP). When its outgoing label is 3, the penultimate hop LSR pops the top label instead of swapping it and forwards the packet to the egress node, which can then forward the packet without an MPLS table lookup.
l 4 to 13: Reserved.
l 15: Reserved.
A static LSP is set up without any label distribution protocols or exchange of control packets.
Static LSPs have low costs and are recommended for small-scale networks with simple and
stable topologies. Static LSPs cannot adapt to network topology changes and must be
configured by an administrator.
Dynamic LSPs are established using label distribution protocols. As the control protocol or
signaling protocol for MPLS, a label distribution protocol defines FECs, distributes labels,
and establishes and maintains LSPs.
l LDP
The Label Distribution Protocol (LDP) is designed for distributing labels. It sets up an
LSP hop by hop according to Interior Gateway Protocol (IGP) and Border Gateway
Protocol (BGP) routing information.
For details about LDP principles, see Principle Description in the 3 MPLS LDP
Configuration.
l RSVP-TE
Resource Reservation Protocol Traffic Engineering (RSVP-TE) is an extension of RSVP
and is used to set up a constraint-based routed LSP (CR-LSP). In contrast to LDP LSPs,
RSVP-TE tunnels are characterized by bandwidth reservation requests, bandwidth
constraints, link "colors" (designating administrative groups), and explicit paths.
For details about RSVP-TE principles, see Principle Description in the 5 MPLS TE
Configuration.
l MP-BGP
MP-BGP is an extension to BGP and allocates labels to MPLS VPN routes and inter-AS
VPN routes.
For details about MP-BGP principles, see BGP Configuration in S2750&S5700&S6700
Series Ethernet Switches Configuration Guide - IP Routing.
MPLS labels are distributed from downstream LSRs to upstream LSRs. As shown in Figure
1-5, a downstream LSR identifies FECs based on the IP routing table, allocates a label to each
FEC, and records the mapping between labels and FECs. The downstream LSR then
encapsulates the mapping into a message and sends the message to the upstream LSR. As this
process proceeds on all the LSRs, the LSRs create a label forwarding table and establish an
LSP.
Figure 1-6 shows an LSP with PHP at the penultimate hop for the data flow destined for 4.4.4.2.
As shown in Figure 1-6, the LSRs have distributed MPLS labels and set up an LSP with the
destination address of 4.4.4.2/32. MPLS packets are forwarded as follows:
1. The ingress node receives an IP packet destined for 4.4.4.2. Then, the ingress node adds
Label Z to the packet and forwards it.
2. When the downstream transit node receives the labeled packet, the node replaces Label Z
with Label Y.
3. When the transit node at the penultimate hop receives the packet with Label Y, the node
pops out Label Y because the label value is 3. The transit node then forwards the packet
to the egress node as an IP packet.
4. The egress node receives the IP packet and forwards it to 4.4.4.2/32.
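The push-swap-pop walk above can be sketched as follows. The numeric labels 100 and 200 stand in for Label Z and Label Y (the figure's labels are symbolic), and an outgoing label of 3 (implicit null) triggers the pop at the penultimate hop:

```python
IMPLICIT_NULL = 3  # implicit-null label: tells the penultimate hop to pop (PHP)

# Hypothetical per-node label tables, keyed by incoming label.
ingress_push = 100                        # FEC 4.4.4.2/32 -> push "Label Z"
tables = {
    "transit":     {100: 200},            # swap "Label Z" for "Label Y"
    "penultimate": {200: IMPLICIT_NULL},  # out-label 3 means pop, not swap
}

def forward(packet_label):
    """Walk a labeled packet through the transit nodes of the LSP."""
    for node in ("transit", "penultimate"):
        out = tables[node][packet_label]
        if out == IMPLICIT_NULL:
            return None        # label popped; packet continues as plain IP
        packet_label = out     # label swapped
    return packet_label

assert forward(ingress_push) is None  # the egress receives an unlabeled IP packet
```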
l Tunnel ID
Each tunnel is assigned a unique ID to ensure that upper layer applications (such as VPN
and route management) on a tunnel use the same interface. The tunnel ID is 32 bits long
and is valid only on the local end.
l NHLFE
A next hop label forwarding entry (NHLFE) is used to guide MPLS packet forwarding.
An NHLFE specifies the tunnel ID, outbound interface, next hop, outgoing label, and
label operation.
FEC-to-NHLFE (FTN) maps each FEC to a group of NHLFEs. An FTN can be obtained
by searching for tunnel IDs that are not 0x0 in a FIB. The FTN is available on the ingress
only.
l ILM
The incoming label map (ILM) maps each incoming label to a group of NHLFEs.
The ILM specifies the tunnel ID, incoming label, inbound interface, and label operation.
The ILM on a transit node identifies bindings between labels and NHLFEs. Similar to a
FIB that provides forwarding information based on destination IP addresses, the ILM
provides forwarding information based on labels.
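For illustration only, the FIB-to-FTN lookup on the ingress and the ILM lookup on a transit node can be modeled with dictionaries. All tunnel IDs, labels, interfaces, and addresses below are sample values, not output from a real device:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NHLFE:
    """Next hop label forwarding entry (fields per the description above)."""
    tunnel_id: int
    out_interface: str
    next_hop: str
    out_label: Optional[int]
    operation: str            # "push", "swap", or "pop"

# Ingress: the FIB maps a FEC (destination prefix) to a tunnel ID; a
# non-zero tunnel ID leads to the FTN entry and its NHLFE.
fib = {"4.4.4.2/32": 0x11}
ftn = {0x11: NHLFE(0x11, "IF1", "172.1.1.2", 100, "push")}

# Transit: the ILM maps an incoming label directly to an NHLFE.
ilm = {100: NHLFE(0x15, "IF2", "172.2.1.2", 200, "swap")}

def ingress_lookup(dest: str) -> Optional[NHLFE]:
    tunnel_id = fib.get(dest, 0x0)
    return ftn[tunnel_id] if tunnel_id != 0x0 else None  # 0x0: plain IP forwarding

assert ingress_lookup("4.4.4.2/32").operation == "push"
assert ilm[100].out_label == 200
```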
Detailed Forwarding Process
Figure 1-7 shows the detailed forwarding process for packets destined for 4.4.4.2/32:
l Ingress: the FIB entry for 4.4.4.2/32 carries tunnel ID 0x11, which points to an NHLFE; the node pushes Label Z (Push).
l Transit: the ILM maps incoming label Z on inbound interface IF1 to tunnel ID 0x15; the node swaps Label Z for Label Y (Swap).
l Penultimate hop: the ILM maps incoming label Y on inbound interface IF1 to tunnel ID 0x22; the node pops the label (Pop, for PHP) and forwards the packet to the egress node as an IP packet destined for 4.4.4.2.
For details on how the ingress node processes the EXP field and TTL field, see Principle
Description in the 4 MPLS QoS Configuration and Processing MPLS TTL.
l Pipe mode
As shown in Figure 1-9, the ingress node decreases the IP TTL by one, and the MPLS
TTL remains constant. The TTL field in MPLS packets is processed in standard mode.
The egress node decreases the IP TTL by one. In Pipe mode, therefore, the IP TTL
decreases only by one on the ingress node and by one on the egress node when packets
travel across an MPLS network.
Figure 1-9 shows Pipe-mode TTL processing on an IP/MPLS backbone network (CE-PE-P-PE-CE).
In MPLS VPN applications, the MPLS backbone network needs to be shielded to ensure
network security. The Pipe mode is recommended for private network packets.
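A minimal sketch of the Pipe-mode behavior described above, assuming the simplified model that only the ingress and egress nodes touch the IP TTL:

```python
def pipe_mode_ip_ttl(ip_ttl: int, mpls_hops: int) -> int:
    """IP TTL after a packet crosses an MPLS network in Pipe mode.

    The ingress and egress each decrement the IP TTL by one; transit
    hops decrement only the separate MPLS TTL, so the MPLS network
    appears as a single hop to the IP packet.
    """
    ip_ttl -= 1                 # ingress node
    for _ in range(mpls_hops):
        pass                    # transit nodes touch only the MPLS TTL
    ip_ttl -= 1                 # egress node
    return ip_ttl

# Whether the LSP has 2 or 20 transit hops, the IP TTL drops by exactly 2.
assert pipe_mode_ip_ttl(64, 2) == pipe_mode_ip_ttl(64, 20) == 62
```

This is why Pipe mode shields the internal structure of the MPLS backbone from the private network.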
ICMP Response Packet
On an MPLS network, when an LSR receives an MPLS packet with the TTL value of 1, the
LSR generates an ICMP response packet.
The LSR returns the ICMP response packet to the sender in the following ways:
l If the LSR has a reachable route to the sender, the LSR directly sends the ICMP response
packet to the sender through the IP route.
l If the LSR has no reachable route to the sender, the LSR forwards the ICMP response
packet along the LSP. The egress node forwards the ICMP response packet to the sender.
In most cases, the received MPLS packet contains only one label and the LSR responds to the
sender with the ICMP response packet using the first method. If the MPLS packet contains
multiple labels, the LSR uses the second method.
MPLS VPN packets may contain only one label when they arrive at an autonomous
system boundary router (ASBR) on the MPLS VPN. These devices have no IP routes to the
sender, so they use the second method to reply with ICMP response packets.
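The choice between the two reply paths can be summarized as a one-line decision (illustrative only):

```python
def icmp_reply_path(has_route_to_sender: bool) -> str:
    """Which way an LSR returns an ICMP response when an MPLS TTL expires."""
    if has_route_to_sender:
        return "ip"    # send the response directly over the IP route
    return "lsp"       # forward along the LSP; the egress relays it back

# An LSR inside an MPLS VPN backbone typically has no route to the sender.
assert icmp_reply_path(False) == "lsp"
```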
MPLS Ping
As shown in Figure 1-10, LSR_1 establishes an LSP to LSR_4. LSR_1 performs MPLS ping
on the LSP by performing the following steps:
1. LSR_1 checks whether the LSP exists. (On a TE tunnel, the router checks whether the
tunnel interface exists and the CR-LSP has been established.) If the LSP does not exist,
an error message is displayed and the MPLS ping stops. If the LSP exists, LSR_1
performs the following operations.
2. LSR_1 creates an MPLS echo request packet and adds 4.4.4.4 to the destination FEC in
the packet. In the IP header of the MPLS echo request packet, the destination address is
127.0.0.1/8 and the TTL value is 1. LSR_1 searches for the corresponding LSP, adds the
LSP label to the MPLS echo request packet, and sends the packet to LSR_2.
3. Transit nodes LSR_2 and LSR_3 forward the MPLS echo request packet based on
MPLS. If MPLS forwarding on a transit node fails, the transit node returns an MPLS
echo reply packet carrying the error code to LSR_1.
4. If no fault exists along the MPLS forwarding path, the MPLS echo request packet
reaches the LSP egress node LSR_4. LSR_4 returns a correct MPLS echo reply packet
after verifying that the destination IP address 4.4.4.4 is the loopback interface address.
MPLS ping is complete.
MPLS Tracert
As shown in Figure 1-10, LSR_1 performs MPLS tracert on LSR_4 (4.4.4.4/32) by
performing the following steps:
1. LSR_1 checks whether an LSP exists to LSR_4. (On a TE tunnel, the router checks
whether the tunnel interface exists and the CR-LSP has been established.) If the LSP
does not exist, an error message is displayed and the tracert stops. If the LSP exists,
LSR_1 performs the following operations.
2. LSR_1 creates an MPLS echo request packet and adds 4.4.4.4 to the destination FEC in
the packet. In the IP header of the MPLS echo request packet, the destination address is
127.0.0.1/8. Then LSR_1 adds the LSP label to the packet, sets the MPLS TTL value to
1, and sends the packet to LSR_2. The MPLS echo request packet contains a
downstream mapping TLV that carries downstream information about the LSP at the
current node, such as next-hop address and outgoing label.
3. Upon receiving the MPLS echo request packet, LSR_2 decreases the MPLS TTL by one
and finds that TTL times out. LSR_2 then checks whether the LSP exists and the next-
hop address and whether the outgoing label of the downstream mapping TLV in the
packet is correct. If so, LSR_2 returns a correct MPLS echo reply packet that carries the
downstream mapping TLV of LSR_2. If not, LSR_2 returns an incorrect MPLS echo
reply packet.
4. After receiving the correct MPLS echo reply packet, LSR_1 resends the MPLS echo
request packet that is encapsulated in the same way as step 2 and sets the MPLS TTL
value to 2. The downstream mapping TLV of this MPLS echo request packet is
replicated from the MPLS echo reply packet. LSR_2 performs common MPLS
forwarding on this MPLS echo request packet. If TTL times out when LSR_3 receives
the MPLS echo request packet, LSR_3 processes the MPLS echo request packet and
returns an MPLS echo reply packet in the same way as step 3.
5. After receiving a correct MPLS echo reply packet, LSR_1 repeats step 4, sets the MPLS
TTL value to 3, replicates the downstream mapping TLV in the MPLS echo reply packet,
and sends the MPLS echo request packet. LSR_2 and LSR_3 perform common MPLS
forwarding on this MPLS echo request packet. Upon receiving the MPLS echo request
packet, LSR_4 repeats step 3 and verifies that the destination IP address 4.4.4.4 is the
loopback interface address. LSR_4 returns an MPLS echo reply packet that does not
carry the downstream mapping TLV. MPLS tracert is complete.
When routers return the MPLS echo reply packet that carries the downstream mapping TLV,
LSR_1 obtains information about each node along the LSP.
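Steps 2 to 5 above can be sketched as a TTL-driven loop. The model below is a simplification: it omits the downstream mapping TLV checks and simply records the node at which each successive MPLS TTL expires:

```python
# Hypothetical LSP from Figure 1-10: transit nodes, then the egress
# that owns destination 4.4.4.4.
path = ["LSR_2", "LSR_3", "LSR_4"]

def mpls_tracert(path):
    """Discover the nodes on an LSP by sending MPLS echo requests
    with increasing MPLS TTL values (1, 2, 3, ...)."""
    discovered = []
    for ttl in range(1, len(path) + 1):
        node = path[ttl - 1]      # the TTL expires at the ttl-th hop
        discovered.append(node)   # that node returns an MPLS echo reply
        if node == path[-1]:      # the egress verified the FEC: done
            break
    return discovered

assert mpls_tracert(path) == ["LSR_2", "LSR_3", "LSR_4"]
```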
1.3 Applications
This section describes the application scenarios of MPLS.
1.3.1 MPLS VPN
Figure 1-11 shows a BGP/MPLS VPN: CEs at VPN 1 and VPN 2 sites connect to PE devices at the edge of the IP/MPLS backbone network, and P devices form the core of the backbone.
1.3.2 MPLS TE
On traditional IP networks, routers select the shortest path as the route regardless of other
factors such as bandwidth. Traffic on a path is not switched to other paths even if the path is
congested. As a result, the shortest path first rule can cause severe problems on networks.
Traffic engineering (TE) monitors network traffic and the load of network components and
then adjusts parameters such as traffic management, routing, and resource constraints
in real time. These adjustments help prevent network congestion caused by unbalanced traffic
distribution.
TE can be implemented on a large-scale backbone network using a simple, scalable solution.
MPLS, an overlay model, allows a virtual topology to be established over a physical topology
and maps traffic to the virtual topology. MPLS can be integrated with TE to implement MPLS
TE.
As shown in Figure 1-12, two paths are set up between LSR_1 and LSR_7: LSR_1 -> LSR_2
-> LSR_3 -> LSR_6 -> LSR_7 and LSR_1 -> LSR_2 -> LSR_4 -> LSR_5 -> LSR_6 ->
LSR_7. Bandwidth of the first path is 30 Mbit/s, and bandwidth of the second path is 80
Mbit/s. TE allocates traffic based on bandwidth, preventing link congestion. For example, 30
Mbit/s and 50 Mbit/s services are running between LSR_1 and LSR_7. TE distributes the 30
Mbit/s traffic to the 30 Mbit/s path and the 50 Mbit/s traffic to the 80 Mbit/s path.
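The bandwidth-based distribution in this example can be sketched with a greedy placement function. This is only an illustration of the idea, not the constraint-based path computation an actual device runs:

```python
def distribute(flows, paths):
    """Place each flow (Mbit/s) on the path with the most spare bandwidth."""
    free = dict(paths)                    # path name -> spare Mbit/s
    placement = {}
    for flow, demand in flows.items():
        best = max(free, key=free.get)    # path with the most headroom
        if free[best] < demand:
            raise RuntimeError("no path can carry this flow")
        free[best] -= demand              # reserve the bandwidth
        placement[flow] = best
    return placement

paths = {"LSR_1-2-3-6-7": 30, "LSR_1-2-4-5-6-7": 80}
flows = {"service_50M": 50, "service_30M": 30}
result = distribute(flows, paths)
assert result["service_50M"] == "LSR_1-2-4-5-6-7"  # 50M on the 80M path
assert result["service_30M"] == "LSR_1-2-3-6-7"    # 30M on the 30M path
```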
Figure 1-12 shows the two paths between LSR_1 and LSR_7, with 30 Mbit/s and 80 Mbit/s of bandwidth respectively.
MPLS TE can reserve resources by setting up LSPs along a specified path to prevent network
congestion and balance network traffic. MPLS TE has the following advantages:
l MPLS TE can reserve resources to ensure the quality of services during the
establishment of LSPs.
l The behaviors of an LSP can be easily controlled based on the attributes of the LSP such
as priority and bandwidth.
l LSP establishment consumes few resources and does not affect other network services.
l Backup path and fast reroute (FRR) protect network communication upon a failure of a
link or a node.
These advantages make MPLS TE the optimal TE solution. MPLS TE allows service
providers (SPs) to fully leverage existing network resources to provide diverse services,
optimize network resources, and efficiently manage the network.
1.3.3 MPLS 6PE
Figure 1-13 shows an MPLS 6PE network: IPv6 sites connect through CEs to 6PE devices at the edge of the IPv4/MPLS backbone network, and the 6PEs exchange labeled IPv6 routes over MP-BGP across the P core.
Figure 1-13 shows the IPv6 packet forwarding process on an MPLS 6PE network. IPv6
packets must carry outer and inner labels when being forwarded on the IPv4 backbone
network. The inner label (L2) maps the IPv6 prefix, while the outer label (L1) maps the LSP
between 6PEs.
The MPLS 6PE technology allows ISPs to connect existing IPv4/MPLS networks to IPv6
networks by simply upgrading PEs. To ISPs, the MPLS 6PE technology is an efficient
solution for transition to IPv6.
1.4 References
This section lists the references for MPLS.
You can set up a static label switched path (LSP) by manually allocating labels to label
switching routers (LSRs). Static LSPs apply to networks with simple and stable network
topologies.
2.1 Overview of Static LSPs
Static LSPs are manually set up by an administrator and apply to networks with simple and
stable network topologies. They cannot be set up using a label distribution protocol.
2.2 Specification
This section provides the static LSP specifications supported by the device.
2.3 Configuration Notes
This section describes notes about configuring MPLS.
2.4 Default Configuration
This section provides the default static LSP configuration.
2.5 Configuring Static LSPs
This section describes how to configure static LSPs.
2.6 Maintaining Static LSPs
Maintaining static LSPs includes detecting connectivity of an LSP.
2.7 Configuration Examples
This section provides several configuration examples of static LSP together with the
configuration networking diagrams. The configuration examples explain networking
requirements and configuration roadmap.
As shown in Figure 2-1, the path through which IP packets are transmitted on an MPLS
network is called a label switched path (LSP). An LSP can be manually configured or
established using label distribution protocols.
Figure 2-1 shows an LSP across the IP/MPLS backbone network between PE devices; CEs at VPN 1 and VPN 2 sites connect to the PEs, and P devices form the core of the backbone.
Generally, MPLS uses the Label Distribution Protocol (LDP) to set up LSPs. LDP uses
routing information to set up LSPs. If LDP does not work properly, MPLS traffic may be lost.
Static LSPs are configured to determine the transmission path of some key data or important
services.
A static LSP is set up without using any label distribution protocol to exchange control
packets, so the static LSP consumes few resources. However, a static LSP cannot vary with
the network topology dynamically, and must be adjusted by an administrator according to the
network topology. The static LSP applies to networks with simple and stable network
topologies.
When configuring a static LSP, the administrator needs to manually allocate labels for each
Label Switching Router (LSR) in compliance with the following rule: the value of the
outgoing label of the previous node is equal to the value of the incoming label of the next
node.
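The label-continuity rule can be checked mechanically. The sketch below uses the labels planned in Figure 2-2 (100 and 200); it is an illustration, not a device feature:

```python
def check_label_continuity(hops):
    """Verify the static-LSP rule: each node's outgoing label must equal
    the next node's incoming label. Hops are (in_label, out_label) pairs;
    None marks the absent incoming label on the ingress and the absent
    outgoing label on the egress."""
    for prev, nxt in zip(hops, hops[1:]):
        if prev[1] != nxt[0]:
            return False
    return True

# Ingress pushes 100, transit swaps 100 -> 200, egress pops 200.
lsp1 = [(None, 100), (100, 200), (200, None)]
assert check_label_continuity(lsp1)
assert not check_label_continuity([(None, 100), (101, 200), (200, None)])
```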
In Figure 2-1, a static LSP is set up on the backbone network so that L2VPN or L3VPN
services can be easily deployed.
2.2 Specification
This section provides the static LSP specifications supported by the device.
Pre-configuration Tasks
Before creating static LSPs, complete the following task:
l Configuring a static unicast route or an IGP to connect LSRs at the network layer
Configuration Process
Create static LSPs according to the following sequence.
Context
An LSR ID identifies an LSR on a network. An LSR has no default LSR ID, so
you must configure one. To enhance network reliability, you are advised to use
the IP address of a loopback interface on the LSR as the LSR ID.
Perform the following steps on each node in an MPLS domain.
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
mpls lsr-id lsr-id
The LSR ID is configured for the LSR.
----End
Follow-up Procedure
Before changing a configured LSR ID, run the undo mpls command in the system view.
NOTICE
Running the undo mpls command deletes all MPLS configurations and interrupts MPLS
services, so plan the LSR ID of each LSR carefully to avoid LSR ID changes.
Context
Perform the following steps on each LSR in an MPLS domain:
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
mpls
MPLS is enabled globally, and the MPLS view is displayed.
Step 3 Run:
quit
Step 4 Run:
interface interface-type interface-number
The interface view is displayed.
Step 6 Run:
mpls
MPLS is enabled on the interface.
----End
Context
Static LSPs and static Constraint-based Routed LSPs (CR-LSPs) share the same label space
(16 to 1023). Note that the value of the outgoing label of the previous node must equal the
value of the incoming label of the next node.
Perform the following operations on the ingress, transit, and egress nodes of the static LSP.
Figure 2-2 shows planned labels.
Procedure
Step 1 Configure the ingress node.
1. Run:
system-view
The system view is displayed.
2. Run:
static-lsp ingress lsp-name destination ip-address { mask-length | mask } { nexthop next-hop-address | outgoing-interface interface-type interface-number } * out-label out-label
The ingress node of the static LSP is configured.
You are advised to set up a static LSP by specifying a next hop. Ensure that the local
routing table contains the route entries, including the destination IP address and the next
hop IP address of the LSP to be set up.
As shown in Figure 2-2, the LSP name is LSP1, the destination address is 3.3.3.9/32, the
next hop address is 172.1.1.2, the outbound interface is Vlanif100, and the outgoing label
is 100.
Step 2 Configure the transit node.
1. Run:
system-view
The system view is displayed.
2. Run:
static-lsp transit lsp-name incoming-interface interface-type interface-number in-label in-label { nexthop next-hop-address | outgoing-interface interface-type interface-number } * out-label out-label
The transit node of the static LSP is configured.
You are advised to set up a static LSP by specifying a next hop address. In addition,
ensure that the local routing table contains the route entries, including the destination IP
address and the next hop IP address of the LSP to be set up.
As shown in Figure 2-2, the LSP name is LSP1, the inbound interface is Vlanif100, the
incoming label is 100, the next hop address is 172.2.1.2, the outbound interface is
Vlanif200, and the outgoing label is 200.
Step 3 Configure the egress node.
1. Run:
system-view
The system view is displayed.
2. Run:
static-lsp egress lsp-name incoming-interface interface-type interface-number in-label in-label
The egress node of the static LSP is configured.
As shown in Figure 2-2, the LSP name is LSP1, the inbound interface is Vlanif200, and
the incoming label is 200.
----End
Prerequisites
The configurations of the static LSP function are complete.
Procedure
l Run the display default-parameter mpls management command to check default
configurations of the MPLS management module.
l Run the display mpls static-lsp [ lsp-name ] [ { include | exclude } ip-address mask-length ] [ verbose ] command to check the static LSP.
l Run the display mpls label static available [ [ label-from label-index ] label-number
label-number ] command to check information about labels available for transmitting
static services.
----End
Context
When configuring static BFD for static LSPs, pay attention to the following points:
l A static BFD session can be created for non-host routes. When the static LSP becomes
Down, the associated BFD session also becomes Down. When the static LSP goes Up, a
BFD session is reestablished.
l The forwarding modes on the forwarding path and reverse path can be different (for
example, an IP packet is sent from the source to the destination through an LSP, and is
sent from the destination to the source in IP forwarding mode), but the forwarding path
and reverse path must be established over the same link. If they use different links, BFD
cannot identify the faulty path when a fault is detected.
Pre-configuration Tasks
Before configuring static BFD for static LSP, complete the following task:
Configuration Process
Configure static BFD for static LSPs according to the following sequence.
Context
BFD parameters on the ingress node include the local and remote discriminators, minimum
intervals for sending and receiving BFD packets, and local BFD detection multiplier. The
BFD parameters affect BFD session setup.
You can adjust the local detection time according to the network situation. On an unstable
link, if a small detection time is used, a BFD session may flap. You can increase the detection
time of the BFD session.
NOTE
Actual interval for the local device to send BFD packets = MAX {locally configured interval for sending
BFD packets, remotely configured interval for receiving BFD packets}
Actual interval for the local device to receive BFD packets = MAX {remotely configured interval for
sending BFD packets, locally configured interval for receiving BFD packets}
Local detection time = Actual interval for receiving BFD packets x Remotely configured BFD detection
multiplier
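The formulas in the note can be applied as follows (all values in milliseconds; the function name is illustrative):

```python
def bfd_timing(local_tx, local_rx, remote_tx, remote_rx, remote_multiplier):
    """Apply the BFD interval-negotiation formulas from the note above."""
    actual_tx = max(local_tx, remote_rx)    # actual local sending interval
    actual_rx = max(remote_tx, local_rx)    # actual local receiving interval
    detection = actual_rx * remote_multiplier  # local detection time
    return actual_tx, actual_rx, detection

# The local end sends every 100 ms but the remote end can only receive
# every 300 ms, so the negotiated sending interval is 300 ms; the remote
# end sends every 1000 ms, so the local detection time is 1000 x 3 ms.
assert bfd_timing(100, 100, 1000, 300, 3) == (300, 1000, 3000)
```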
Perform the following steps on the ingress node of the static LSP.
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
bfd
Global BFD is enabled on the node, and the global BFD view is displayed.
By default, global BFD is disabled.
Step 3 Run:
quit
The local and remote discriminators of the two ends on a BFD session must be correctly associated. That
is, the local discriminator of the local device and the remote discriminator of the remote device are the
same, and the remote discriminator of the local device and the local discriminator of the remote device
are the same. Otherwise, the BFD session cannot be correctly set up. In addition, the local and remote
discriminators cannot be modified after being successfully configured.
The interval for sending BFD packets is set on the local device.
The interval for receiving BFD packets is set on the local device.
By default, the interval for receiving BFD packets is 1000 ms.
Step 8 (Optional) Run:
detect-multiplier multiplier
The changes of the BFD session status can be advertised to the upper-layer application.
By default, a static BFD session cannot report faults of the monitored service module to the
system.
Step 10 Run:
commit
----End
Context
BFD parameters on the egress node include the local and remote discriminators, minimum
intervals for sending and receiving BFD packets, and local BFD detection multiplier. The
BFD parameters affect BFD session setup.
You can adjust the local detection time according to the network conditions. On an unstable
link, a small detection time may cause the BFD session to flap; in this case, increase the
detection time of the BFD session.
NOTE
Actual interval for the local device to send BFD packets = MAX {locally configured interval for sending
BFD packets, remotely configured interval for receiving BFD packets}
Actual interval for the local device to receive BFD packets = MAX {remotely configured interval for
sending BFD packets, locally configured interval for receiving BFD packets}
Local detection time = Actual interval for receiving BFD packets x Remotely configured BFD detection
multiplier
Procedure
Step 1 Run:
system-view
The system view is displayed.
Step 2 Run:
bfd
BFD is enabled globally, and the global BFD view is displayed.
By default, global BFD is disabled.
Step 3 Run:
quit
The local and remote discriminators of the two ends on a BFD session must be correctly associated. That
is, the local discriminator of the local device and the remote discriminator of the remote device are the
same, and the remote discriminator of the local device and the local discriminator of the remote device
are the same. Otherwise, the BFD session cannot be correctly set up. In addition, the local and remote
discriminators cannot be modified after being successfully configured.
The interval for sending BFD packets is set on the local device.
By default, the interval for sending BFD packets is 1000 ms.
Step 7 (Optional) Run:
min-rx-interval interval
The interval for receiving BFD packets is set on the local device.
The changes of the BFD session status can be advertised to the upper-layer application.
By default, a static BFD session cannot report faults of the monitored service module to the
system.
If an LSP is used as a reverse tunnel to notify the ingress of a fault, you can run this command
so that traffic is switched through the reverse tunnel when the BFD session goes Down. If a
single-hop IP link is used as the reverse tunnel, this command can also be configured, because
the process-pst command can be configured only for BFD single-link detection.
Step 10 Run:
commit
----End
Prerequisites
The configurations of the static BFD for static LSP function are complete.
Procedure
l Run the display bfd configuration { all | static } command to check the BFD
configuration.
l Run the display bfd session { all | static } command to check information about the
BFD session.
l Run the display bfd statistics session { all | static } command to check statistics about
BFD sessions.
l Run the display mpls static-lsp [ lsp-name ] [ { include | exclude } ip-address mask-length ] [ verbose ] command to check the status of the static LSP.
----End
Context
In MPLS, the control plane used for setting up an LSP cannot detect data forwarding failures
on the LSP. This makes network maintenance difficult.
MPLS ping checks LSP connectivity, and MPLS traceroute locates network faults in addition
to checking LSP connectivity.
MPLS ping and MPLS traceroute can be performed in any view. MPLS ping and MPLS
traceroute do not support packet fragmentation.
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 Run the lspv mpls-lsp-ping echo enable command to enable the response to MPLS Echo
Request packets.
By default, the device is enabled to respond to MPLS Echo Request packets.
Step 3 (Optional) Run the lspv packet-filter acl-number command to enable MPLS Echo Request
packet filtering based on source IP addresses. The filtering rule is specified in the ACL.
By default, the device does not filter MPLS Echo Request packets based on their source IP
addresses.
Step 4 Run the following command to check the LSP connectivity.
l Run the ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m interval | -r reply-mode | -s packet-size | -t time-out | -v ] * ip destination-address mask-length [ ip-address ] [ nexthop nexthop-address | draft6 ] command to perform an MPLS ping test.
If draft6 is specified, the command is implemented according to draft-ietf-mpls-lsp-ping-06. By default, the command is implemented according to RFC 4379.
l Run the tracert lsp [ -a source-ip | -exp exp-value | -h ttl-value | -r reply-mode | -t time-out | -v ] * ip destination-address mask-length [ ip-address ] [ nexthop nexthop-address | draft6 ] command to perform an MPLS traceroute test.
If draft6 is specified, the command is implemented according to draft-ietf-mpls-lsp-ping-06. By default, the command is implemented according to RFC 4379.
----End
Postrequisite
l Run the display lspv statistics command to check the LSPV test statistics. A large
amount of statistical information is saved in the system after MPLS ping or traceroute
tests are performed multiple times, which is unhelpful for problem analysis. To obtain
more accurate statistics, run the reset lspv statistics command to clear LSPV test
statistics before running the display lspv statistics command.
l Run the undo lspv mpls-lsp-ping echo enable command to disable response to MPLS
Echo Request packets. It is recommended that you run this command after completing an
MPLS ping or traceroute test to save system resources.
l Run the display lspv configuration command to check the current LSPV configuration.
Networking Requirements
As shown in Figure 2-3, the network topology is simple and stable, and LSR_1, LSR_2, and
LSR_3 are MPLS backbone network devices. A stable public tunnel needs to be created on
the backbone network to transmit L2VPN or L3VPN services.
Configuration Roadmap
You can configure static LSPs to meet the requirement. Configure two static LSPs: LSP1
from LSR_1 to LSR_3 with LSR_1, LSR_2, and LSR_3 as the ingress, transit, and egress
nodes respectively, and LSP2 from LSR_3 to LSR_1 with LSR_3, LSR_2, and LSR_1 as the
ingress, transit, and egress nodes respectively. The configuration roadmap is as follows:
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure LSR_1.
<HUAWEI> system-view
[HUAWEI] sysname LSR_1
[LSR_1] interface loopback 1
[LSR_1-LoopBack1] ip address 1.1.1.9 32
[LSR_1-LoopBack1] quit
[LSR_1] vlan batch 100
[LSR_1] interface vlanif 100
[LSR_1-Vlanif100] ip address 172.1.1.1 24
[LSR_1-Vlanif100] quit
[LSR_1] interface gigabitethernet 0/0/1
[LSR_1-GigabitEthernet0/0/1] port link-type trunk
[LSR_1-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[LSR_1-GigabitEthernet0/0/1] quit
The configurations of LSR_2 and LSR_3 are similar to the configuration of LSR_1, and are
not mentioned here.
Step 2 Configure OSPF to advertise the network segments that the interfaces are connected to and
the host route of the LSR ID.
# Configure LSR_1.
[LSR_1] ospf 1
[LSR_1-ospf-1] area 0
[LSR_1-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[LSR_1-ospf-1-area-0.0.0.0] network 172.1.1.0 0.0.0.255
[LSR_1-ospf-1-area-0.0.0.0] quit
[LSR_1-ospf-1] quit
# Configure LSR_2.
[LSR_2] ospf 1
[LSR_2-ospf-1] area 0
[LSR_2-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
[LSR_2-ospf-1-area-0.0.0.0] network 172.1.1.0 0.0.0.255
[LSR_2-ospf-1-area-0.0.0.0] network 172.2.1.0 0.0.0.255
[LSR_2-ospf-1-area-0.0.0.0] quit
[LSR_2-ospf-1] quit
# Configure LSR_3.
[LSR_3] ospf 1
[LSR_3-ospf-1] area 0
[LSR_3-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
[LSR_3-ospf-1-area-0.0.0.0] network 172.2.1.0 0.0.0.255
[LSR_3-ospf-1-area-0.0.0.0] quit
[LSR_3-ospf-1] quit
After the configuration is complete, run the display ip routing-table command on each node.
The command output shows that the nodes have learned routes from each other.
Step 3 Enable basic MPLS functions on each node.
# Configure LSR_1.
[LSR_1] mpls lsr-id 1.1.1.9
[LSR_1] mpls
[LSR_1-mpls] quit
# Configure LSR_2.
[LSR_2] mpls lsr-id 2.2.2.9
[LSR_2] mpls
[LSR_2-mpls] quit
# Configure LSR_3.
[LSR_3] mpls lsr-id 3.3.3.9
[LSR_3] mpls
[LSR_3-mpls] quit
Step 4 Enable MPLS on each interface.
# Configure LSR_1.
[LSR_1] interface vlanif 100
[LSR_1-Vlanif100] mpls
[LSR_1-Vlanif100] quit
# Configure LSR_2.
[LSR_2] interface vlanif 100
[LSR_2-Vlanif100] mpls
[LSR_2-Vlanif100] quit
[LSR_2] interface vlanif 200
[LSR_2-Vlanif200] mpls
[LSR_2-Vlanif200] quit
# Configure LSR_3.
[LSR_3] interface vlanif 200
[LSR_3-Vlanif200] mpls
[LSR_3-Vlanif200] quit
After the configuration is complete, run the display mpls static-lsp command on each node
to check the status of the static LSP. Use the command output on LSR_1 as an example.
[LSR_1] display mpls static-lsp
TOTAL : 1 STATIC LSP(S)
UP : 1 STATIC LSP(S)
DOWN : 0 STATIC LSP(S)
Name FEC I/O Label I/O If Status
LSP1 3.3.3.9/32 NULL/20 -/Vlanif100 Up
An LSP is unidirectional, so you also need to configure a static LSP from LSR_3 to LSR_1.
Step 6 Configure a static LSP from LSR_3 to LSR_1.
# Configure ingress node LSR_3.
[LSR_3] static-lsp ingress LSP2 destination 1.1.1.9 32 nexthop 172.2.1.1 out-label 30
No : 2
LSP-Name : LSP2
LSR-Type : Ingress
FEC : 1.1.1.9/32
In-Label : NULL
Out-Label : 30
In-Interface : -
Out-Interface : Vlanif200
NextHop : 172.2.1.1
Static-Lsp Type: Normal
Lsp Status : Up
Run the ping lsp ip 1.1.1.9 32 command on LSR_3. The command output shows that the
static LSP can be pinged.
Run the ping lsp ip 3.3.3.9 32 command on LSR_1. The command output shows that the
static LSP can be pinged.
----End
Configuration Files
l Configuration file of LSR_1
#
sysname LSR_1
#
vlan batch 100
#
mpls lsr-id 1.1.1.9
mpls
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
#
static-lsp ingress LSP1 destination 3.3.3.9 32 nexthop 172.1.1.2 out-label 20
static-lsp egress LSP2 incoming-interface Vlanif100 in-label 60
#
return
l Configuration file of LSR_2
#
sysname LSR_2
#
vlan batch 100 200
#
mpls lsr-id 2.2.2.9
mpls
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
mpls
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
#
static-lsp transit LSP1 incoming-interface Vlanif100 in-label 20 nexthop 172.2.1.2 out-label 40
static-lsp transit LSP2 incoming-interface Vlanif200 in-label 30 nexthop 172.1.1.1 out-label 60
#
return
l Configuration file of LSR_3
#
sysname LSR_3
#
vlan batch 200
#
mpls lsr-id 3.3.3.9
mpls
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
mpls
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
#
static-lsp egress LSP1 incoming-interface Vlanif200 in-label 40
static-lsp ingress LSP2 destination 1.1.1.9 32 nexthop 172.2.1.1 out-label 30
#
return
Networking Requirements
As shown in Figure 2-4, PEs and Ps are backbone network devices, and static LSPs have
been set up on the backbone network to transmit network services.
Network services such as VoIP, online gaming, and online video have high real-time
requirements, and packet loss caused by faulty links seriously affects these services.
Services must be rapidly switched to the backup LSP when the primary LSP becomes
faulty, minimizing packet loss. Static BFD for static LSPs is configured to rapidly detect
faults on the static LSPs.
Figure 2-4 Networking topology for static BFD for static LSPs
(Loopback1 addresses: PE_1 1.1.1.9/32, P_1 2.2.2.9/32, P_2 3.3.3.9/32, PE_2 4.4.4.9/32.
Links: PE_1-P_1 over VLANIF 100 on 172.1.1.0/24; P_1-PE_2 over VLANIF 200 on 172.2.1.0/24;
PE_1-P_2 over VLANIF 300 on 172.3.1.0/24; P_2-PE_2 over VLANIF 400 on 172.4.1.0/24.
The primary LSP traverses P_1, and the backup LSP traverses P_2.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure OSPF between the PEs and Ps to implement IP connectivity on the backbone
network.
2. Configure static LSPs on the PEs and Ps to transmit network services.
3. Configure static BFD on the PEs to rapidly detect faults on the static LSPs. Static BFD is
used because faults on static LSPs can be detected only by static BFD.
NOTE
Ensure that STP is disabled in this scenario; otherwise, the active link may be unavailable.
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure PE_1.
<HUAWEI> system-view
[HUAWEI] sysname PE_1
[PE_1] interface loopback 1
[PE_1-LoopBack1] ip address 1.1.1.9 32
[PE_1-LoopBack1] quit
[PE_1] vlan batch 100 300
[PE_1] interface vlanif 100
[PE_1-Vlanif100] ip address 172.1.1.1 24
[PE_1-Vlanif100] quit
[PE_1] interface vlanif 300
[PE_1-Vlanif300] ip address 172.3.1.1 24
[PE_1-Vlanif300] quit
[PE_1] interface gigabitethernet0/0/1
[PE_1-GigabitEthernet0/0/1] port link-type trunk
[PE_1-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[PE_1-GigabitEthernet0/0/1] quit
[PE_1] interface gigabitethernet0/0/2
[PE_1-GigabitEthernet0/0/2] port link-type trunk
[PE_1-GigabitEthernet0/0/2] port trunk allow-pass vlan 300
[PE_1-GigabitEthernet0/0/2] quit
The configurations of P_1, P_2, and PE_2 are similar to the configuration of PE_1 and are
not mentioned here.
Step 2 Configure OSPF to advertise the network segments that the interfaces are connected to and
the host route of the LSR ID.
# Configure PE_1.
[PE_1] ospf 1
[PE_1-ospf-1] area 0
[PE_1-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[PE_1-ospf-1-area-0.0.0.0] network 172.1.1.0 0.0.0.255
[PE_1-ospf-1-area-0.0.0.0] network 172.3.1.0 0.0.0.255
[PE_1-ospf-1-area-0.0.0.0] quit
[PE_1-ospf-1] quit
The configurations of P_1, P_2, and PE_2 are similar to the configuration of PE_1 and are
not mentioned here.
After the configuration is complete, run the display ip routing-table command on each node.
You can see that the nodes learn routes from each other. The outbound interface of the route
from PE_1 to PE_2 is VLANIF 100.
Step 4 Enable basic MPLS functions on each node.
# Configure PE_1.
[PE_1] mpls lsr-id 1.1.1.9
[PE_1] mpls
[PE_1-mpls] quit
# Configure P_1.
[P_1] mpls lsr-id 2.2.2.9
[P_1] mpls
[P_1-mpls] quit
# Configure P_2.
[P_2] mpls lsr-id 3.3.3.9
[P_2] mpls
[P_2-mpls] quit
# Configure PE_2.
[PE_2] mpls lsr-id 4.4.4.9
[PE_2] mpls
[PE_2-mpls] quit
# Configure P_1.
[P_1] interface vlanif 100
[P_1-Vlanif100] mpls
[P_1-Vlanif100] quit
[P_1] interface vlanif 200
[P_1-Vlanif200] mpls
[P_1-Vlanif200] quit
# Configure P_2.
[P_2] interface vlanif 300
[P_2-Vlanif300] mpls
[P_2-Vlanif300] quit
[P_2] interface vlanif 400
[P_2-Vlanif400] mpls
[P_2-Vlanif400] quit
# Configure PE_2.
[PE_2] interface vlanif 200
[PE_2-Vlanif200] mpls
[PE_2-Vlanif200] quit
[PE_2] interface vlanif 400
[PE_2-Vlanif400] mpls
[PE_2-Vlanif400] quit
Step 6 Create a static LSP named LSP1 with PE_1 being the ingress node, P_1 being the transit
node, and PE_2 being the egress node.
Step 7 Create a static LSP named LSP2 with PE_1 being the ingress node, P_2 being the transit
node, and PE_2 being the egress node.
After the configuration is complete, run the ping lsp ip 4.4.4.9 32 command on PE_1. The
command output shows that the LSP can be pinged.
Run the display mpls static-lsp verbose command on each node to check the detailed
information about the static LSP. Use the command output on PE_1 as an example.
[PE_1] display mpls static-lsp verbose
No : 1
LSP-Name : LSP1
LSR-Type : Ingress
FEC : 4.4.4.9/32
In-Label : NULL
Out-Label : 20
In-Interface : -
Out-Interface : Vlanif100
NextHop : 172.1.1.2
Static-Lsp Type: Normal
Lsp Status : Up
No : 2
LSP-Name : LSP2
LSR-Type : Ingress
FEC : 4.4.4.9/32
In-Label : NULL
Out-Label : 30
In-Interface : -
Out-Interface : Vlanif300
NextHop : 172.3.1.2
Static-Lsp Type: Normal
Lsp Status : Down
# On egress node PE_2, configure a BFD session to notify PE_1 of faults on the static LSP.
[PE_2] bfd
[PE_2-bfd] quit
[PE_2] bfd pe2tope1 bind peer-ip 1.1.1.9
[PE_2-bfd-session-pe2tope1] discriminator local 2
[PE_2-bfd-session-pe2tope1] discriminator remote 1
[PE_2-bfd-session-pe2tope1] min-tx-interval 100
[PE_2-bfd-session-pe2tope1] min-rx-interval 100
[PE_2-bfd-session-pe2tope1] commit
[PE_2-bfd-session-pe2tope1] quit
# Run the display bfd session all command on PE_1 to check the configuration. The
command output shows that the BFD session on PE_1 is Up.
[PE_1] display bfd session all
--------------------------------------------------------------------------------
Local Remote PeerIpAddr State Type InterfaceName
--------------------------------------------------------------------------------
1 2 4.4.4.9 Up S_STA_LSP Vlanif100
--------------------------------------------------------------------------------
Total UP/DOWN Session Number : 1/0
# Run the display bfd session all command on PE_2 to check the configuration.
[PE_2] display bfd session all
--------------------------------------------------------------------------------
Local Remote PeerIpAddr State Type InterfaceName
--------------------------------------------------------------------------------
2 1 1.1.1.9 Up S_IP_PEER -
--------------------------------------------------------------------------------
Total UP/DOWN Session Number : 1/0
# Run the display bfd session all command on PE_1 to check the status of the BFD session.
[PE_1] display bfd session all
--------------------------------------------------------------------------------
Local Remote PeerIpAddr State Type InterfaceName
--------------------------------------------------------------------------------
1 2 4.4.4.9 Down S_STA_LSP -
--------------------------------------------------------------------------------
Total UP/DOWN Session Number : 0/1
----End
Configuration Files
l Configuration file of PE_1
#
sysname PE_1
#
vlan batch 100 300
#
bfd
#
mpls lsr-id 1.1.1.9
mpls
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
#
interface Vlanif300
ip address 172.3.1.1 255.255.255.0
ospf cost 1000
mpls
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 300
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
bfd pe1tope2 bind static-lsp LSP1
discriminator local 1
discriminator remote 2
min-tx-interval 100
min-rx-interval 100
process-pst
commit
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.3.1.0 0.0.0.255
#
static-lsp ingress LSP1 destination 4.4.4.9 32 nexthop 172.1.1.2 out-label 20
static-lsp ingress LSP2 destination 4.4.4.9 32 nexthop 172.3.1.2 out-label 30
#
return
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
mpls
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
#
static-lsp transit LSP1 incoming-interface Vlanif100 in-label 20 nexthop 172.2.1.2 out-label 40
#
return
The Multiprotocol Label Switching Label Distribution Protocol (MPLS LDP) defines the
messages and procedures used to distribute labels. Label Switching Routers (LSRs) use
MPLS LDP to negotiate session parameters, distribute labels, and then establish Label
Switched Paths (LSPs).
Definition
The Label Distribution Protocol (LDP) is a control protocol of Multiprotocol Label Switching
(MPLS) that functions like a signaling protocol on a traditional network. LDP classifies
FECs, distributes labels, and establishes and maintains LSPs. LDP defines messages used in
the label distribution process as well as procedures for processing these messages.
Purpose
MPLS is highly scalable because it allows multiple labels in a packet and has a connection-
oriented forwarding plane. This scalability enables an MPLS/IP network to provide a variety
of services. Label switching routers (LSRs) on an MPLS network use LDP to map Layer 3
routing information to Layer 2 switched paths, and establish LSPs at the network layer.
LDP is widely used to provide VPN services because of its simple deployment and
configuration, abilities to set up LSPs dynamically based on routing information, and support
for a large number of LSPs.
3.2 Principles
This section describes the implementation of MPLS LDP.
LDP Peers
Two LSRs that use LDP to set up an LDP session and exchange label messages are LDP
peers.
LDP peers learn labels from each other over the LDP session between them.
LDP Adjacency
When an LSR receives a Hello message from a peer, an LDP adjacency is set up between the
two LSRs. Two types of LDP adjacencies are used:
l Local adjacency: adjacency discovered by multicasting a Hello message (link Hello
message)
l Remote adjacency: adjacency discovered by unicasting a Hello message (targeted Hello
message)
LDP maintains peer information based on adjacencies. The type of a peer depends on the type
of its adjacency. A peer can be maintained by multiple adjacencies. If a peer is maintained by
both local and remote adjacencies, the peer is a local-and-remote peer.
LDP Session
LSRs exchange messages, such as label mapping and label release messages, over an LDP
session. LDP sessions can be set up only between LDP peers. The following types of LDP
sessions are available:
l Local LDP session: set up between two LSRs that are directly connected
l Remote LDP session: set up between two LSRs that are directly or indirectly connected
3.2.2.1 Overview
LDP defines the label distribution process and messages transmitted during label distribution.
LSRs use LDP to map Layer 3 routing information to Layer 2 switched paths, and set up an
LSP.
LDP Messages
LDP defines the following messages:
To ensure reliable message transmission, LDP uses Transmission Control Protocol (TCP)
transport for Session, Advertisement, and Notification messages, and uses User Datagram
Protocol (UDP) transport only for Discovery messages.
After both LSR_1 and LSR_2 have accepted Keepalive messages from each other, an LDP
session is established between them.
NOTE
When the DU mode is used, LDP supports label distribution for all peers by default. Each node can send
Label Mapping messages to all peers without distinguishing upstream and downstream nodes. If an LSR
distributes labels only to upstream peers, it must identify its upstream and downstream nodes based on
routing information before sending Label Mapping messages. An upstream node cannot send Label
Mapping messages to its downstream node. If the upstream/downstream roles change because the
corresponding route changes, the new downstream node sends Label Mapping messages to its upstream
node. In this process, network convergence is slow.
MD5 Authentication
MD5 is a standard digest algorithm defined in RFC 1321. A typical application
of MD5 is to calculate a message digest to prevent message spoofing. The MD5 message
digest is a unique result calculated by an irreversible character string conversion. If a message
is modified during transmission, a different digest is generated. After the message arrives at
the receiver, the receiver can determine whether the packet has been modified by comparing
the received digest with the pre-calculated digest.
MD5 generates a unique digest for an information segment, so LDP MD5 authentication can
prevent LDP packets from being modified. This authentication is stricter than common
checksum verification of TCP. The MD5 authentication process is as follows:
1. Before an LDP session message is sent over a TCP connection, the sender pads the TCP
header with a unique digest. The digest is calculated using the MD5 algorithm based on
the TCP header, LDP message, and configured password.
2. Upon receiving the TCP packet, the receiver obtains the TCP header, digest, and LDP
session message, and then uses MD5 to calculate a digest based on the received TCP
header, LDP session message, and locally stored password. The receiver compares the
calculated digest with the received one to check whether the packet has been modified.
A password can be set in either cipher text or simple text. The simple-text password is directly
saved in the configuration file. The cipher-text password is saved in the configuration file
after being encrypted using a special algorithm. However, the character string entered by the
user is used to calculate the digest, regardless of whether the password is in simple text or
cipher text. That is, the cipher-text password does not participate in MD5 calculation. As
devices from different vendors use proprietary password encryption algorithms, this digest
calculation method shields differences of password encryption algorithms used on different
devices.
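The send-and-verify exchange described above can be sketched as follows. This is a conceptual illustration only; it does not reproduce the exact field layout of the TCP MD5 signature option, and the function names are assumptions:

```python
import hashlib

def ldp_md5_digest(tcp_header: bytes, ldp_message: bytes, password: bytes) -> bytes:
    # Sender side: compute one digest over the TCP header, the LDP session
    # message, and the configured password; the digest travels with the packet.
    return hashlib.md5(tcp_header + ldp_message + password).digest()

def verify(tcp_header: bytes, ldp_message: bytes, password: bytes,
           received_digest: bytes) -> bool:
    # Receiver side: recompute the digest with the locally stored password
    # and compare. Any modification of the message in transit causes a mismatch.
    return ldp_md5_digest(tcp_header, ldp_message, password) == received_digest

hdr, msg, pwd = b"tcp-header", b"ldp-session-message", b"shared-password"
digest = ldp_md5_digest(hdr, msg, pwd)
print(verify(hdr, msg, pwd, digest))           # True: message unchanged
print(verify(hdr, b"tampered", pwd, digest))   # False: message was modified
```

Note that, as the text states, both ends must hold the same character string: the digest is computed from the entered password, so the verification succeeds regardless of how each vendor stores the password in its configuration file.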
Keychain Authentication
Compared with MD5 authentication, keychain authentication is an enhanced mechanism that
also calculates a message digest for each LDP message to prevent the message from being
modified.
During Keychain authentication, a group of passwords are defined to form a password string.
Each password is specified with encryption and decryption algorithms such as MD5 algorithm
and SHA-1, and is configured with the validity period. When sending or receiving a packet,
the system selects a valid password based on the user's configuration. Within the valid period
of the password, the system uses the encryption algorithm matching the password to encrypt
the packet before sending it out, or uses the decryption algorithm matching the password to
decrypt the packet before accepting it. In addition, the system automatically uses a new
password after the previous password expires, preventing the password from being decrypted.
The Keychain authentication password, the encryption and decryption algorithms, and the
password validity period that constitute a Keychain configuration node are configured using
different commands. A Keychain configuration node requires at least one password and
encryption and decryption algorithms.
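The time-based key selection described above can be sketched as follows; the key contents, algorithms, and validity windows are illustrative assumptions:

```python
from datetime import datetime

# Illustrative keychain: each entry carries its own password, algorithm,
# and validity period.
keychain = [
    {"key": b"winter-pass", "algo": "md5",
     "start": datetime(2015, 1, 1), "end": datetime(2015, 7, 1)},
    {"key": b"summer-pass", "algo": "sha-1",
     "start": datetime(2015, 7, 1), "end": datetime(2016, 1, 1)},
]

def active_entry(chain, now):
    """Pick the key whose validity period covers the current time."""
    for entry in chain:
        if entry["start"] <= now < entry["end"]:
            return entry
    return None  # no valid key: the packet cannot be authenticated

print(active_entry(keychain, datetime(2015, 3, 1))["algo"])    # md5
print(active_entry(keychain, datetime(2015, 12, 18))["algo"])  # sha-1
```

When the first key expires, the system automatically moves to the next valid key, which is what limits the window an attacker has to break any single password.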
LDP GTSM
Generalized TTL Security Mechanism (GTSM) protects services by checking whether the
TTL value in the IP header is within a pre-defined range.
LDP GTSM applies this TTL verification to LDP packets exchanged between neighboring
devices or between devices a fixed number of hops apart. A valid TTL range is preset on
each device for packets from other devices. With GTSM enabled, if the TTL of a received
LDP packet is outside this range, the packet is considered invalid and discarded. This
protects the upper-layer protocols against attacks.
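The TTL range check can be sketched as follows. The one-hop default and the function name are assumptions for illustration; actual devices let you configure the valid hop count:

```python
def gtsm_accept(ttl: int, valid_hops: int = 1) -> bool:
    """Accept an LDP packet only if its TTL is within the pre-defined range.

    A compliant sender sets the TTL to 255; a packet that crossed more than
    `valid_hops` routers arrives with a TTL below the allowed floor and is
    discarded before it can reach the LDP protocol stack.
    """
    return 255 - valid_hops + 1 <= ttl <= 255

print(gtsm_accept(255))                 # True: directly connected neighbor
print(gtsm_accept(254))                 # False with the default one-hop range
print(gtsm_accept(253, valid_hops=3))   # True: within three hops
```

The point of the check is that an attacker more hops away cannot forge the TTL upward: routers only decrement it, so a remote forged packet always arrives below the floor.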
Background
On a large-scale network, multiple IGP areas are often configured for flexible network
deployment and fast route convergence. To reduce the number of routes and conserve
resources, area border routers (ABRs) summarize the routes in their areas and advertise the
summarized routes to neighboring IGP areas. However, LDP follows the exact match
principle when establishing LSPs. LDP searches for the route exactly matching a forwarding
equivalence class (FEC) in the received Label Mapping message. If only summarized routes
are available, LDP supports only liberal LSPs and cannot set up inter-area LSPs. LDP
extensions are available to help set up inter-area LDP LSPs.
NOTE
A liberal LSP is an LSP that has been assigned labels but fails to be established.
Implementation
The network shown in Figure 3-5 has two IGP areas, Area 10 and Area 20. LSR_2, at the
border of Area 10, has two host routes to LSR_3 and LSR_4. To reduce the resources
consumed by routes, LSR_2 can run IS-IS to summarize the two routes into one route,
1.3.0.0/24, and advertise this route to Area 20.
Figure 3-5 Networking topology for LDP extensions for inter-area LSPs (the 32-bit
loopback routes in Area 10 are summarized as 1.3.0.0/24)
When establishing an LSP, LDP searches the routing table for the route that exactly matches
the FEC in the received Label Mapping message. In Figure 3-5, LSR_1 has only a
summarized route (1.3.0.0/24) but not 32-bit host routes in its routing table. Table 3-4 lists the
route of LSR_1 and routes carried in the FEC.
Route of LSR_1: 1.3.0.0/24
Routes carried in the FEC: 1.3.0.1/32 and 1.3.0.2/32
If only summarized routes are available, LDP supports only liberal LSPs and cannot set up
inter-area LDP LSPs. In this situation, tunnels cannot be set up on the backbone network.
To set up an LSP, LSR_1 must follow the longest match principle to find the route. There is a
summarized route 1.3.0.0/24 in the routing table of LSR_1. When LSR_1 receives a Label
Mapping message (for example, a message carrying FEC 1.3.0.1/32) from Area 10, LSR_1
finds the summarized route 1.3.0.0/24 according to the longest match principle. Then LSR_1
applies the outbound interface and next hop of the summarized route to the route 1.3.0.1/32.
An inter-area LDP LSP is established.
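Using Python's ipaddress module, the difference between exact match and longest match can be sketched with the routes from this example (the helper name is illustrative):

```python
import ipaddress

# Routing table of LSR_1: only the summarized route is present,
# not the 32-bit host routes from Area 10.
routes = [ipaddress.ip_network("1.3.0.0/24"), ipaddress.ip_network("172.2.1.0/24")]

def longest_match(fec):
    """Return the most specific route covering the FEC, or None."""
    host = ipaddress.ip_network(fec).network_address
    candidates = [r for r in routes if host in r]
    return max(candidates, key=lambda r: r.prefixlen, default=None)

# Exact match fails for FEC 1.3.0.1/32, but longest match finds 1.3.0.0/24,
# whose outbound interface and next hop can then be applied to the FEC.
print(ipaddress.ip_network("1.3.0.1/32") in routes)  # False: no exact match
print(longest_match("1.3.0.1/32"))                   # 1.3.0.0/24
```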
3.2.5.1 Overview
LDP LSP reliability technologies are necessary for the following reasons:
l If a node or link on a working LDP LSP fails, reliability technologies are required to set
up a backup LDP LSP and switch traffic to the backup LDP LSP, while minimizing
packet loss in this process.
l When a node on a working LDP LSP encounters a control plane failure but the
forwarding plane is still working, reliability technologies are required to ensure traffic
forwarding during fault recovery on the control plane.
MPLS provides multiple reliability technologies to ensure high reliability of key services
transmitted over LDP LSPs. The following table describes these reliability technologies.
Fault detection: rapidly detects faults on LDP LSPs of an MPLS network and triggers
protection switching. See 3.2.5.2 BFD for LDP LSPs.
Background
If a node or link along a working LDP LSP fails, traffic is switched to the backup LSP.
Because the fault detection mechanism of LDP is slow, traffic switching takes a relatively
long time, causing traffic loss.
As shown in Figure 3-6, an LSR periodically sends Hello messages to its neighboring LSRs
to advertise its existence on the network and maintain adjacencies. An LSR creates a Hello
timer for each neighbor to maintain an adjacency. Each time the LSR receives a Hello
message, the LSR resets the Hello timer. If the Hello timer expires before the LSR receives a
new Hello message, the LSR considers that the adjacency is terminated. This mechanism
cannot detect link faults quickly, especially when a Layer 2 device is deployed between LSRs.
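A minimal model of this Hello-timer behavior (the 15-second hold time is an illustrative value, not the product default):

```python
class Adjacency:
    """Minimal model of an LDP adjacency kept alive by Hello messages."""

    def __init__(self, hold_time):
        self.hold_time = hold_time
        self.expires_at = None

    def on_hello(self, now):
        # Each received Hello message resets the Hello timer.
        self.expires_at = now + self.hold_time

    def is_alive(self, now):
        # If the timer expires before a new Hello arrives, the LSR
        # considers the adjacency terminated.
        return self.expires_at is not None and now < self.expires_at

adj = Adjacency(hold_time=15)   # seconds
adj.on_hello(now=0)
print(adj.is_alive(now=10))     # True: timer has not yet expired
print(adj.is_alive(now=20))     # False: Hello timer expired, adjacency lost
```

Because detection is bounded by a hold time measured in seconds, a link fault can go unnoticed for a long interval, which is exactly the gap that millisecond-scale BFD detection closes.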
Figure 3-6 (legend: primary LSP, backup LSP, Hello message)
BFD can quickly detect faults on an LDP LSP and trigger a traffic switchover upon an LDP
LSP failure, minimizing packet loss and improving network reliability.
Implementation
BFD for LDP LSPs can rapidly detect a fault on an LDP LSP and notify the forwarding plane
of the fault to ensure fast traffic switchover.
A BFD session is bound to an LSP. That is, a BFD session is set up between the ingress and
egress nodes. A BFD packet is sent from the ingress node to the egress node along an LSP.
Then, the egress node responds to the BFD packet. In this manner, the ingress node can detect
the LSP status quickly. After BFD detects an LSP failure, BFD notifies the forwarding plane.
Then, the forwarding plane switches traffic to the backup LSP.
Background
Because LDP convergence is slower than IGP route convergence, the following problems
occur on an MPLS network where primary and backup links exist:
l When the primary link fails, the IGP route of the backup link becomes primary and
traffic is switched to the backup LSP over the backup link. After the primary link
recovers, the IGP route of the primary link becomes primary before an LDP session is
established over the primary link. As a result, traffic is dropped during attempts to use
the unreachable LSP.
l When the IGP route of the primary link is reachable and an LDP session between nodes
on the primary link fails, traffic is directed using the IGP route of the primary link, while
the LSP over the primary link is torn down. Because a preferred IGP route of the backup
link is unavailable, an LSP over the backup link cannot be established, causing traffic
loss.
l When the primary/backup switchover occurs on a node, the LDP session is established
after IGP GR completion. IGP advertises the maximum cost of the link, causing route
flapping.
Synchronization between LDP and IGP helps prevent traffic loss caused by these problems.
Related Concepts
Synchronization between LDP and IGP involves three timers:
l Hold-down timer: controls the amount of time an interface waits before establishing an IGP neighbor relationship.
l Hold-max-cost timer: controls the interval for advertising the maximum link cost on an interface.
l Delay timer: controls the amount of time the device waits for an LSP to be established after an LDP session is reestablished.
Implementation
l As shown in Figure 3-8, when traffic is switched between primary and backup links,
synchronization between LDP and IGP is implemented as follows.
(Figure legends: primary LSP, backup LSP, link fault, LSP fault, GR Restarter, GR Helper.)
3.2.5.4 LDP GR
LDP Graceful Restart (GR) ensures uninterrupted traffic transmission during a protocol restart
or active/standby switchover because the forwarding plane is separated from the control
plane.
Background
On an MPLS network, when the GR Restarter restarts a protocol or performs an active/
standby switchover, label forwarding entries on the forwarding plane are deleted, interrupting
data forwarding.
LDP GR can address this issue and therefore improve network reliability. During a protocol
restart or active/standby switchover, LDP GR retains label forwarding entries because the
forwarding plane is separated from the control plane. The device still forwards packets based
on the label forwarding entries, ensuring data transmission. After the protocol restart or
active/standby switchover is complete, the GR Restarter can restore to the original state with
the help of the GR Helper.
Concepts
LDP GR is a high-reliability technology based on non-stop forwarding (NSF). A GR process
involves GR Restarter and GR Helper devices:
l GR Restarter: has GR capability and restarts a protocol.
l GR Helper: assists in the GR process as a GR-capable neighbor of the GR Restarter.
NOTE
A stack system can function as a GR Restarter, and a standalone device can only function as a GR
Helper.
Implementation
Figure 3-10 shows LDP GR implementation.
Figure 3-10 shows the LDP GR process: the LSRs negotiate the GR capability when establishing the LDP session; after an active/standby switchover or protocol restart, the GR Restarter sends an LDP Initialization message to reestablish the LDP session before the Reconnect timer expires; the LSRs then exchange Label Mapping messages before the Forwarding State Holding timer and Recovery timer expire.
1. On the network CE_1-PE_1-PE_2-CE_2, PE_1 and PE_2 maintain both local and remote adjacencies. In this case, a local-and-remote LDP session exists between PE_1 and PE_2. L2VPN messages are transmitted over this LDP session.
2. When the physical link between PE_1 and PE_2 goes Down, the local LDP adjacency
goes Down. The route between PE_1 and PE_2 is reachable through P, so the remote
LDP adjacency is still Up. The session type changes to a remote session. Since the
session is still Up, L2VPN is uninformed of the session type change and does not delete
the session. This avoids the neighbor disconnection and recovery process and therefore
reduces the service interruption time.
3. When the physical link between PE_1 and PE_2 recovers, the local LDP adjacency goes
Up. The session is restored to a local-and-remote session and remains Up. Again, L2VPN is not
informed of the session type change and does not delete the session. This reduces the
service interruption time.
3.3 Specification
This section provides MPLS specifications supported by the device.
Table 3-6 lists the MPLS specifications.
Maximum number of LDP peers: 64 local LDP peers and 128 remote LDP peers
Configure basic functions of MPLS LDP: you can build an MPLS network and establish LDP LSPs only after basic functions of MPLS LDP are configured. See 3.7.1 Configuring Basic Functions of MPLS LDP.
Configure LDP security mechanisms: LDP security mechanisms ensure the security of LDP messages. See 3.7.7 Configuring LDP Security Mechanisms.
Longest-match: Disabled
LDP GR: Disabled
Pre-configuration Tasks
Before configuring basic functions of MPLS LDP, complete the following task:
l Configuring static routes or an IGP to ensure that IP routes between LSRs are reachable
NOTE
When Routing Information Protocol version 1 (RIP-1) is used, you need to enable LDP to search
for routes to establish LSPs according to the longest match principle. For details, see 3.7.2
Configuring LDP Extensions for Inter-Area LSPs.
Configuration Process
Configure basic functions of MPLS LDP according to the following sequence.
Context
An LSR ID identifies an LSR on a network. An LSR does not have a default LSR ID, so you must configure one. To enhance network reliability, you are advised to use the IP address of a loopback interface on the LSR as the LSR ID.
Perform the following steps on each node in an MPLS domain.
Procedure
Step 1 Run:
system-view
----End
Follow-up Procedure
Before changing the configured LSR ID, run the undo mpls command in the system view.
NOTICE
Running the undo mpls command deletes all MPLS configurations and interrupts MPLS services, so plan the LSR ID of each LSR uniformly to avoid LSR ID changes.
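The steps above can be sketched as follows; the loopback interface number and the IP address 1.1.1.1 are hypothetical values used only for illustration:

```
<HUAWEI> system-view
[HUAWEI] interface loopback 0
[HUAWEI-LoopBack0] ip address 1.1.1.1 32
[HUAWEI-LoopBack0] quit
[HUAWEI] mpls lsr-id 1.1.1.1
```

Using a loopback address as the LSR ID keeps the ID stable even when a physical interface goes Down.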
Context
You can perform other MPLS configurations only after enabling global MPLS.
Perform the following steps on each node in an MPLS domain.
Procedure
Step 1 Run:
system-view
----End
Context
You can perform other MPLS LDP configurations only after enabling global MPLS LDP.
Perform the following steps on each node in an MPLS domain.
Procedure
Step 1 Run:
system-view
MPLS LDP is enabled globally and the MPLS LDP view is displayed.
By default, LDP is not enabled globally.
Step 3 (Optional) Run:
lsr-id lsr-id
----End
Context
MPLS LDP sessions are classified into local LDP sessions and remote LDP sessions. You can choose one of the following configurations as required:
A local LDP session and a remote LDP session can coexist. That is, two LSRs can establish a
local LDP session and a remote LDP session simultaneously. In this case, configurations of
the local and remote LDP sessions at both ends must be the same.
Procedure
l Configuring a local LDP session
a. Run:
system-view
The view of the interface on which the LDP session is to be set up is displayed.
c. (Optional) On an Ethernet interface, run:
undo portswitch
Perform the following steps on the LSRs on both ends of a remote LDP session.
a. Run:
system-view
The remote peer is created and the remote peer view is displayed.
c. Run:
remote-ip ip-address
This IP address must be the LSR ID of the remote MPLS LDP peer. If the LSR IDs
of the LDP instance and the local node are different, use the LSR ID of the LDP
instance.
NOTICE
l Modifying or deleting the IP address of a remote peer leads to deletion of the
remote LDP session and MPLS service interruption.
l After the IP address of the remote peer is configured using the remote-ip ip-
address command, the value of ip-address cannot be used as the IP address of
the local interface. Otherwise, the remote session will be interrupted, causing
MPLS service interruption.
----End
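As a minimal sketch of the two session types above (the VLANIF interface, the remote peer name lsrd, and the LSR ID 4.4.4.4 are hypothetical values):

```
# Local LDP session: enable MPLS and MPLS LDP globally, then on the interface.
[HUAWEI] mpls
[HUAWEI-mpls] quit
[HUAWEI] mpls ldp
[HUAWEI-mpls-ldp] quit
[HUAWEI] interface vlanif 10
[HUAWEI-Vlanif10] mpls
[HUAWEI-Vlanif10] mpls ldp
[HUAWEI-Vlanif10] quit
# Remote LDP session: create a remote peer and specify its LSR ID.
[HUAWEI] mpls ldp remote-peer lsrd
[HUAWEI-mpls-ldp-remote-lsrd] remote-ip 4.4.4.4
```

As the NOTICE above warns, the remote-ip value must not also be used as the IP address of a local interface.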
Context
LDP sessions are established based on TCP connections. Before two LSRs establish an LDP
session, they need to check the LDP transport address of each other, and then establish a TCP
connection.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
The view of the interface on which the LDP session is to be set up is displayed.
Step 4 Run:
mpls ldp transport-address { interface-type interface-number | interface }
The default transport address for a node on a public network is the LSR ID of the node, and
the default transport address for a node on a private network is the primary IP address of an
interface on the node.
If LDP sessions are to be established over multiple links connecting two LSRs, the LDP-enabled interfaces on each LSR must use the default transport address or the same transport address. If multiple transport addresses are configured on an LSR, only one of them is used, and only one LDP session is established.
NOTICE
Changing an LDP transport address interrupts an LDP session. Exercise caution when running
this command.
----End
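For example, to make a session use a loopback address rather than the interface's primary IP address as the transport address (the VLANIF and loopback interfaces are hypothetical):

```
[HUAWEI] interface vlanif 10
[HUAWEI-Vlanif10] mpls ldp transport-address loopback 0
```

Per the NOTICE above, change the transport address before bringing the session into service, because the change interrupts an established LDP session.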
Context
Table 3-9 describes the timers for an LDP session.
Hello send timer (the Link-Hello send timer applies only to local LDP sessions; the Target-Hello send timer applies only to remote LDP sessions): used to periodically send Hello messages to notify the peer LSR of the local LSR's presence and to establish a Hello adjacency. On an unstable network, decrease the value of the Hello send timer to speed up network fault detection.
Keepalive hold timer: used to maintain the LDP session over which LDP PDUs are sent. If no LDP PDU is received before the Keepalive hold timer expires, the TCP connection is closed and the LDP session is terminated. On a network with unstable links, increase the value of the Keepalive hold timer to prevent the LDP session from flapping.
NOTE
When local and remote LDP sessions coexist, the timeout interval of the Keepalive hold timer of the
local and remote LDP sessions must be the same.
Procedure
l Configuring timers for a local LDP session
a. Run:
system-view
The default value of a link Hello send timer is one third of the value of a link Hello
hold timer.
Effective value of a link Hello send timer = Min {Configured value of the link
Hello send timer, one third of the value of the link Hello hold timer}
e. Run:
mpls ldp timer hello-hold interval
The smaller value between two configured link Hello hold timers on both ends of
the LDP session takes effect.
f. Run:
mpls ldp timer keepalive-send interval
The default value of a Keepalive send timer is one third of the value of the
Keepalive hold timer.
If more than one LDP-enabled link connects two LSRs, the values of the Keepalive send timers on all links must be the same. Otherwise, LDP sessions become unstable.
g. Run:
mpls ldp timer keepalive-hold interval
The smaller value between two configured Keepalive hold timers on both ends of
the LDP session takes effect.
If more than one LDP-enabled link connects two LSRs, the values of the Keepalive hold timers on all links must be the same. Otherwise, LDP sessions may fail to be set up.
NOTICE
Changing the Keepalive hold timer value in an instance will interrupt the MPLS
service in the instance because the LDP session must be reestablished.
The smaller value between two configured target Hello hold timers on both ends of
the LDP session takes effect.
e. Run:
mpls ldp timer keepalive-send interval
The default value of a Keepalive send timer is one third of the value of the
Keepalive hold timer.
If more than one LDP-enabled link connects two LSRs, the values of the Keepalive send timers on all links must be the same. Otherwise, LDP sessions become unstable.
f. Run:
mpls ldp timer keepalive-hold interval
The smaller value between two configured Keepalive hold timers on both ends of
the LDP session takes effect.
If more than one LDP-enabled link connects two LSRs, the values of the Keepalive hold timers on all links must be the same. Otherwise, LDP sessions may fail to be set up.
NOTICE
Changing the Keepalive hold timer value in an instance may interrupt the MPLS
service in the instance because the LDP session must be reestablished.
----End
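A sketch of setting the four local-session timers on an interface (the interface and the values are hypothetical; the effective send intervals are capped at one third of the corresponding hold intervals, as described above):

```
[HUAWEI] interface vlanif 10
[HUAWEI-Vlanif10] mpls ldp timer hello-send 5
[HUAWEI-Vlanif10] mpls ldp timer hello-hold 15
[HUAWEI-Vlanif10] mpls ldp timer keepalive-send 15
[HUAWEI-Vlanif10] mpls ldp timer keepalive-hold 45
```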
Context
No label needs to be swapped on the egress node of an LSP. PHP can be configured on the
egress node to allow the LSR at the penultimate hop to pop out the label from an MPLS
packet and send the packet to the egress node. After receiving the packet, the egress node
directly forwards the packet through an IP link or according to the next layer label. PHP helps
reduce the burden on the egress node.
Procedure
Step 1 Run:
system-view
After the label advertise command is run to change the label distribution mode on the egress node, the
modification takes effect on new LSPs but not on existing LSPs. To enable the modification to take
effect on the existing LSPs, run the reset mpls ldp or lsp-trigger command.
----End
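A hedged sketch of changing the label distribution mode on the egress node; explicit-null is assumed to be one of the keywords the label advertise command accepts:

```
[HUAWEI] mpls
[HUAWEI-mpls] label advertise explicit-null
[HUAWEI-mpls] quit
[HUAWEI] quit
<HUAWEI> reset mpls ldp
```

As noted above, the reset is needed only to apply the new mode to existing LSPs.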
Context
By default, a downstream node sends Label Mapping messages to its upstream node. When
faults occur on the network, services can be fast switched to the standby path, improving
network reliability. Edge devices on the MPLS network are low-end devices. To ensure
network reliability, resources must be fully used. You can configure the Downstream on
Demand (DoD) mode to save system resources. In DoD mode, the downstream LSR sends a
Label Mapping message to the upstream LSR only when the upstream LSR sends a Label
Request message to the downstream LSR.
NOTE
l Modifying a configured label advertisement mode leads to the reestablishment of an LDP session,
resulting in MPLS service interruption.
l When the local and remote LDP sessions coexist, they must be configured with the same label
advertisement mode.
Procedure
l Configuring an LDP label advertisement mode of local LDP session.
a. Run:
system-view
A remote MPLS LDP peer is created and the remote MPLS LDP peer view is
displayed.
c. Run:
mpls ldp advertisement { dod | du }
----End
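For example, DoD mode might be enabled for a remote LDP peer as follows (the peer name lsrd and the view prompt are illustrative):

```
[HUAWEI] mpls ldp remote-peer lsrd
[HUAWEI-mpls-ldp-remote-lsrd] mpls ldp advertisement dod
```

Per the NOTE above, both ends of the session must be configured with the same label advertisement mode.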
Context
On a large-scale network, to reduce the burden on edge devices, use the DoD mode. Because
edge devices cannot learn the accurate route to the remote peer, an LDP LSP cannot be set up
even if LDP extensions for inter-area LSPs are configured. You can configure the DoD mode
in which the local LSR requests a Label Mapping message from a specified downstream LSR
or all LSRs to set up an LDP LSP.
NOTE
Before configuring LDP to automatically trigger the request in DoD mode, perform the following
operations:
l Configuring a remote LDP session according to 3.7.1 Configuring Basic Functions of MPLS
LDP
l 3.7.2 Configuring LDP Extensions for Inter-Area LSPs
l 3.7.1.8 (Optional) Configuring an LDP Label Advertisement Mode
Procedure
l Configure automatic triggering of a request to a downstream node for a Label Mapping
message associated with all remote LDP peers in DoD mode.
a. Run:
system-view
By default, when the DoD mode is not used, the local LSR requests Label Mapping messages from all downstream LSRs.
l Configure automatic triggering of a request to a downstream node for a Label Mapping message associated with a remote LDP peer with a specified LSR ID in DoD mode.
a. Run:
system-view
A remote MPLS LDP peer is created and the remote MPLS-LDP peer view is
displayed.
c. Run:
remote-ip auto-dod-request [ block ]
By default, when the DoD mode is not used, the local LSR requests a Label Mapping message from a specified downstream LSR.
----End
Context
The device does not support LDP loop detection. If the neighbor of a node supports loop
detection and requires the same loop detection function on both ends of an LDP session,
configure LDP loop detection on the local node to ensure the establishment of an LDP
session.
Procedure
Step 1 Run:
system-view
Step 2 Run:
mpls ldp
Step 3 Run:
loop-detect
The device is enabled to advertise the loop detection capability during initialization of LDP sessions.
By default, a device does not advertise loop detection capability during initialization of LDP
sessions.
NOTE
After the loop-detect command is run, the switch obtains the capability of negotiating LDP loop
detection but still does not support LDP loop detection.
By default, a maximum of 32 hops of the path vector are used for LDP loop detection.
A path vector is carried in a Mapping message to record the addresses of nodes that an LDP
LSP has passed. By setting the maximum hops that a path vector can record, you can adjust
the sensitivity of LDP loop detection. If the maximum number of hops of a path vector is n, an egress LSP triggered by local routes detects a loop after n + 1 hops, and an egress LSP triggered by non-local routes detects a loop after n hops.
----End
Context
The maximum transmission unit (MTU) determines the maximum number of bytes that a sender can transmit at a time. If the MTU exceeds the maximum number of bytes supported by the receiver or a transit device, packets are fragmented or even discarded, increasing the network transmission load. Therefore, devices must calculate the MTU before communication to ensure that sent packets reach the receiver successfully.
LDP MTU = Min {All MTUs advertised by all downstream devices, MTU of the local
outbound interface}
A downstream LSR uses the preceding formula to calculate an MTU, adds it to the MTU TLV
in a Label Mapping message, and sends the Label Mapping message to the upstream device.
If an MTU value changes (such as when the local outbound interface or its configuration is
changed), an LSR recalculates an MTU and sends a Label Mapping message carrying the new
MTU to its upstream LSR. The relationships between the MPLS MTU and the interface MTU
are as follows:
l If an interface MTU but not an MPLS MTU is configured on an interface, the interface
MTU is used.
l If both an MPLS MTU and an interface MTU are configured on an interface, the smaller
value between the MPLS MTU and the interface MTU is used.
MPLS determines the size of MPLS packets on the ingress node according to the LDP MTU
to prevent the transit node from forwarding large-sized MPLS packets.
Procedure
Step 1 Run:
system-view
The LSR is disabled from sending Label Mapping messages carrying MTU TLVs.
By default, the switch sends Label Mapping messages carrying the Huawei private MTU
TLV.
If a non-Huawei device does not support the MTU TLV, to implement interworking,
configure the device not to encapsulate the MTU TLV in Label Mapping messages. If the
LSR is disabled from sending the MTU TLV, the configured MPLS MTU does not take
effect.
l Run:
mtu-signalling apply-tlv
The LSR is configured to send Label Mapping messages carrying MTU TLVs that
comply with RFC 3988.
By default, the switch sends Label Mapping messages carrying Huawei proprietary MTU
TLV.
If a non-Huawei device supports the MTU TLV, to implement interworking, configure
the device to send Label Mapping messages carrying MTU TLVs that comply with RFC
3988. Otherwise, the configured MPLS MTU may not take effect.
NOTICE
Enabling or disabling the function to send an MTU TLV leads to the reestablishment of existing LDP sessions, resulting in MPLS service interruption.
Step 4 Run:
quit
Step 5 Run:
interface interface-type interface-number
Step 7 Run:
mpls mtu mtu
----End
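Applying the formula above: if downstream LSRs advertise MTUs of 1500 and 1400 bytes and the local outbound interface MTU is 1600 bytes, the LDP MTU is min{1500, 1400, 1600} = 1400 bytes. A configuration sketch for RFC 3988 interworking (the interface and MTU value are hypothetical):

```
[HUAWEI] mpls ldp
[HUAWEI-mpls-ldp] mtu-signalling apply-tlv
[HUAWEI-mpls-ldp] quit
[HUAWEI] interface vlanif 10
[HUAWEI-Vlanif10] mpls mtu 1400
```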
Context
MPLS processes the TTL in the following modes:
Procedure
l Configuring the MPLS TTL processing mode
Perform the following steps on the ingress node.
a. Run:
system-view
NOTE
The ttl propagate command takes effect only on LSPs that are to be set up. To apply the function to LSPs that have been set up, run the reset mpls ldp command to reestablish the LSPs.
l Configuring the path where ICMP response packets are transmitted
Perform the following steps on the ingress and egress nodes.
a. Run:
system-view
Or, run:
ttl expiration pop
By default, upon receiving an MPLS packet with one label, an LSR returns an
ICMP response packet using a local IP route.
----End
Context
By default, an LSR distributes labels to both upstream and downstream LDP peers, which increases the LDP LSP convergence speed. However, receiving and sending these Label Mapping messages results in the establishment of a large number of LSPs, which wastes resources. To reduce the number of LSPs and save memory, use the following policies:
Procedure
l Configure an inbound LDP policy.
a. Run:
system-view
An inbound policy for allowing the local LSR to receive Label Mapping messages
from a specified LDP peer for a specified IGP route is configured.
NOTE
If multiple inbound policies are configured for a specified LDP peer, the first configured one
takes effect. For example, the following two inbound policies are configured:
inbound peer 2.2.2.2 fec host
inbound peer peer-group group1 fec none
As group1 also contains an LDP peer with peer-id of 2.2.2.2, the following inbound policy
takes effect:
inbound peer 2.2.2.2 fec host
If two inbound policies are configured in sequence and the peer parameters in the two
commands are the same, the second command overwrites the first one. For example, the
following two inbound policies are configured:
inbound peer 2.2.2.2 fec host
inbound peer 2.2.2.2 fec none
The second configuration overwrites the first one. This means that the following inbound
policy takes effect on the LDP peer with peer-id of 2.2.2.2:
inbound peer 2.2.2.2 fec none
A split horizon policy is configured to distribute labels to only upstream LDP peers.
By default, split horizon is not enabled and an LSR distributes labels to both
upstream and downstream LDP peers.
NOTE
The all parameter takes precedence over the peer-id parameter. For example, if the outbound peer all split-horizon command and then the outbound peer 2.2.2.2 split-horizon command are run, only the outbound peer all split-horizon command is saved in the configuration file and takes effect.
----End
Follow-up Procedure
l To delete all inbound policies simultaneously, run the undo inbound peer all command.
l To delete all outbound policies simultaneously, run the undo outbound peer all
command.
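The two policies above might be combined as follows, reusing the example peer 2.2.2.2 from the NOTE:

```
[HUAWEI] mpls ldp
[HUAWEI-mpls-ldp] inbound peer 2.2.2.2 fec host
[HUAWEI-mpls-ldp] outbound peer all split-horizon
```

Here the inbound policy accepts only host-route Label Mapping messages from peer 2.2.2.2, and the split horizon policy distributes labels only to upstream peers.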
Context
In MPLS L2VPN scenarios using LDP (including Martini VLL, PWE3, and Martini VPLS), the PEs at both ends need to establish a remote LDP session. The remote LDP session is used only to transmit L2VPN Label Mapping messages, so public network LDP labels are not required. By default, however, LDP allocates common LDP labels to remote peers, generating many useless idle labels and wasting LDP label resources.
To solve the preceding problem, disable a device from distributing labels to remote peers to
save system resources. You can use either of the following modes:
l In the LDP view, disable the PE from distributing labels to all remote peers.
l In the view of a specified remote peer, disable the PE from distributing labels to the
specified remote peer.
Procedure
l Disable a device from distributing labels to a specified remote peer.
a. Run:
system-view
LDP is prevented from allocating public network labels to a specified remote peer
device.
LDP is prevented from allocating public network labels to all remote peer devices.
----End
Context
After MPLS LDP is enabled, LSPs are automatically established. If no policy is configured,
an increasing number of LSPs are established, wasting resources.
l Configure the lsp-trigger command on the ingress and egress nodes to trigger LSP setup
based on routes. This setting controls the number of LSPs and saves network resources.
l Configure the propagate mapping command on the transit node to allow LDP to use
routes matching specified conditions to establish transit LSPs. For the routes that do not
match specified conditions, the local device does not send Label Mapping messages to
the upstream device, which reduces the number of LSPs and saves network resources.
By default, the lsp-trigger command is used. If policies cannot be configured on the ingress
and egress nodes, configure the propagate mapping command.
Procedure
l Perform the following steps on the ingress and egress nodes:
a. Run:
system-view
A policy for triggering LSP establishment based on static and IGP routes is
configured.
By default, the policy is host, which allows LDP to use 32-bit host routes (excluding 32-bit host addresses of interfaces) to establish LSPs.
NOTE
LSPs can be established using exactly matching routes on LSRs. For example, an exactly
matching host route to an IP address with a 32-bit mask of a loopback interface can be used
to trigger LSP establishment.
l Perform the following steps on the transit nodes:
a. Run:
system-view
By default, LDP uses all routes without filtering them to establish transit LSPs.
----End
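A sketch of both policies; the ip-prefix name host-routes is a hypothetical value, and the propagate mapping syntax shown is an assumption:

```
# On the ingress and egress nodes: trigger LSP setup only for host routes.
[HUAWEI] mpls
[HUAWEI-mpls] lsp-trigger host
[HUAWEI-mpls] quit
# On the transit nodes: send Label Mapping messages upstream only for
# routes matching the IP prefix list named host-routes.
[HUAWEI] mpls ldp
[HUAWEI-mpls-ldp] propagate mapping for ip-prefix host-routes
```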
Context
An LSP on a local node flaps because an LDP session between the node and its downstream
peer flaps, a route flaps, or an LDP policy is modified. The local node repeatedly sends Label
Withdraw and Label Mapping messages in sequence to upstream nodes. This causes the
upstream nodes to repeatedly tear down and reestablish LSPs. As a result, the entire LDP LSP
flaps. The label withdraw delay function prevents the entire LDP LSP from flapping.
Procedure
Step 1 Run:
system-view
Step 2 Run:
mpls ldp
Step 3 Run:
label-withdraw-delay
Step 4 Run:
label-withdraw-delay timer time
----End
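For example (the timer value 10 is hypothetical; check the command reference for the valid range):

```
[HUAWEI] mpls ldp
[HUAWEI-mpls-ldp] label-withdraw-delay
[HUAWEI-mpls-ldp] label-withdraw-delay timer 10
```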
Prerequisites
The configurations of the MPLS LDP function are complete.
Procedure
l Run the display default-parameter mpls management command to check default
configurations of the MPLS management module.
l Run the display default-parameter mpls ldp command to check the default
configurations of MPLS LDP.
l Run the display mpls interface [ interface-type interface-number ] [ verbose ]
command to check information about MPLS-enabled interfaces.
l Run the display mpls ldp [ all ] [ verbose ] command to check LDP information.
l Run the display mpls ldp interface [ interface-type interface-number | [ all ]
[ verbose ] ] command to check information about LDP-enabled interfaces.
l Run the display mpls ldp adjacency [ interface interface-type interface-number |
remote ] [ peer peer-id ] [ verbose ] command to check information about LDP
adjacencies.
l Run the display mpls ldp adjacency statistics command to check statistics about LDP
adjacencies.
l Run the display mpls ldp session [ [ all ] [ verbose ] | peer-id ] command to check the
LDP session status.
l Run the display mpls ldp session statistics command to check statistics about sessions
between LDP peers.
l Run the display mpls ldp peer [ [ all ] [ verbose ] | peer-id ] command to check
information about LDP peers.
l Run the display mpls ldp peer statistics command to check statistics about LDP peers.
l Run the display mpls ldp remote-peer [ remote-peer-name | peer-id lsr-id ] command
to check information about the LDP remote peer.
l Run the display mpls ldp lsp [ all ] command to check LDP LSP information.
l Run the display mpls ldp lsp statistics command to check statistics about LDP LSPs.
l Run the display mpls route-state [ { exclude | include } { idle | ready | settingup } * |
destination-address mask-length ] [ verbose ] command to check the dynamic LSP route.
l Run the display mpls lsp [ verbose ] command to check LSP information.
l Run the display mpls lsp statistics command to check statistics about the LSPs that are
in the Up state and the number of the LSPs that are activated on the ingress, transit, and
egress nodes.
l Run the display mpls label all summary command to check allocation information
about all MPLS labels.
----End
Pre-configuration Tasks
Before configuring LDP extensions for inter-area LSPs, complete the following tasks:
Context
To configure LDP extensions for inter-area LSPs, configure the ingress or transit node.
Procedure
Step 1 Run:
system-view
Step 2 Run:
mpls ldp
Step 3 Run:
longest-match
LDP is configured to search for routes based on the longest match rule to establish LSPs.
By default, LDP searches for routes to establish LSPs based on the exact matching rule.
----End
Context
When static BFD monitors an LDP LSP, pay attention to the following points:
l BFD is bound to only the ingress node of an LDP LSP.
l One LSP is bound to only one BFD session.
l Only the LDP LSPs that are triggered to be established by host routes can be monitored.
l The forwarding modes on the forwarding path and reverse path can be different (for
example, an IP packet is sent from the source to the destination through an LSP, and is
sent from the destination to the source in IP forwarding mode), but the forwarding path
and reverse path must be established over the same link. If they use different links, BFD
cannot identify the faulty path when a fault is detected.
Pre-configuration Tasks
Before configuring static BFD to detect an LDP LSP, complete the following task:
Configuration Process
Configure static BFD for LDP LSPs according to the following sequence.
Context
BFD parameters on the ingress node include the local and remote discriminators, intervals for
sending and receiving BFD packets, and local BFD detection multiplier. The BFD parameters
affect BFD session setup.
You can adjust the local detection time according to the network situation. On an unstable
link, if a small detection time is used, a BFD session may flap. You can increase the detection
time of the BFD session.
NOTE
Actual interval for the local device to send BFD packets = MAX { locally configured interval for
sending BFD packets, remotely configured interval for receiving BFD packets }
Actual interval for the local device to receive BFD packets = MAX { remotely configured interval for
sending BFD packets, locally configured interval for receiving BFD packets }
Local detection time = Actual interval for receiving BFD packets x Remotely configured BFD detection
multiplier
Procedure
Step 1 Run:
system-view
This node is enabled with the global BFD function. The global BFD view is displayed.
By default, global BFD is disabled.
Step 3 Run:
quit
The local and remote identifiers on both ends of a BFD session must be consistent with each other;
otherwise, the session cannot be established correctly. In addition, the local and remote identifiers cannot
be modified after configuration.
The interval for sending BFD packets is set on the local device.
The interval for receiving BFD packets is set on the local device.
Step 9 Run:
process-pst
The changes of BFD session status can be advertised to the application on the upper layer.
By default, a static BFD session cannot report faults of the monitored service module to the
system.
Step 10 Run:
commit
----End
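A worked example of the detection-time formula in the NOTE: if the remote end sends BFD packets every 100 ms, the local end is configured to receive at 100 ms, and the remote detection multiplier is 3, the local detection time is max{100, 100} x 3 = 300 ms. A hedged sketch for the ingress node follows; the session name, peer IP 3.3.3.3, next hop, discriminators, and the bind ldp-lsp syntax are assumptions for illustration:

```
[HUAWEI] bfd
[HUAWEI-bfd] quit
[HUAWEI] bfd lsp2egress bind ldp-lsp peer-ip 3.3.3.3 nexthop 10.1.1.2
[HUAWEI-bfd-lsp-session-lsp2egress] discriminator local 10
[HUAWEI-bfd-lsp-session-lsp2egress] discriminator remote 20
[HUAWEI-bfd-lsp-session-lsp2egress] min-tx-interval 100
[HUAWEI-bfd-lsp-session-lsp2egress] min-rx-interval 100
[HUAWEI-bfd-lsp-session-lsp2egress] process-pst
[HUAWEI-bfd-lsp-session-lsp2egress] commit
```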
Follow-up Procedure
When the BFD session is established and its status is Up, the BFD starts to detect failure in an
LDP LSP.
When the LDP LSP is deleted, the BFD session goes Down.
Context
BFD parameters on the egress node include the local and remote discriminators, the intervals for sending and receiving BFD packets, and the local BFD detection multiplier. These parameters affect BFD session setup.
You can adjust the local detection time according to the network situation. On an unstable
link, if a small detection time is used, a BFD session may flap. You can increase the detection
time of the BFD session.
NOTE
Actual interval for the local device to send BFD packets = MAX {locally configured interval for sending
BFD packets, remotely configured interval for receiving BFD packets}
Actual interval for the local device to receive BFD packets = MAX {remotely configured interval for
sending BFD packets, locally configured interval for receiving BFD packets}
Local detection time = Actual interval for receiving BFD packets x Remotely configured BFD detection
multiplier
Procedure
Step 1 Run:
system-view
Step 2 Run:
bfd
This node is enabled with global BFD. The global BFD view is displayed.
Step 3 Run:
quit
Step 4 Configure a reverse tunnel to inform the ingress node of a fault if the fault occurs. The reverse
tunnel can be the IP link, LSP, or TE tunnel. To ensure that BFD packets are received and sent
along the same path, an LSP or TE tunnel is preferentially used to inform the egress node of
an LSP fault. If the configured reverse tunnel requires BFD detection, configure a pair of BFD
sessions for it. Run the following commands as required.
l For the IP link, run:
bfd cfg-name bind peer-ip peer-ip [ vpn-instance vpn-instance-name ]
[ interface interface-type interface-number ] [ source-ip source-ip ]
l Run:
discriminator remote discr-value
The local and remote identifiers on both ends of a BFD session must match each other; otherwise, the session cannot be established correctly. In addition, the local and remote identifiers cannot be modified after being configured.
Step 6 (Optional) Run:
min-tx-interval interval
The interval for sending BFD packets is set on the local device.
By default, the interval for sending BFD packets is 1000 ms.
Step 7 (Optional) Run:
min-rx-interval interval
The interval for receiving BFD packets is set on the local device.
By default, the interval for receiving BFD packets is 1000 ms.
Step 8 (Optional) Run:
detect-multiplier multiplier
The local BFD detection multiplier is set.
By default, the detection multiplier is 3.
Step 9 (Optional) Run:
process-pst
The changes of the BFD session status can be advertised to the upper-layer application.
By default, a static BFD session cannot report faults of the monitored service module to the
system.
If an LSP is used as a reverse tunnel to notify the ingress node of a fault, you can run this
command to allow traffic to be switched when the BFD session monitoring the reverse tunnel
goes Down. If a single-hop IP link is used as a reverse tunnel, this command can also be
configured, because the process-pst command applies only to BFD single-link detection.
Step 10 Run:
commit
----End
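For example, a minimal sketch of the egress-side configuration when a single-hop IP link to the ingress node (1.1.1.1) serves as the reverse tunnel; the session name, addresses, discriminators, and interval are illustrative, and the discriminator local command is assumed as the counterpart of discriminator remote:

```text
<HUAWEI> system-view
[HUAWEI] bfd
[HUAWEI-bfd] quit
[HUAWEI] bfd lsptoingress bind peer-ip 1.1.1.1
[HUAWEI-bfd-session-lsptoingress] discriminator local 2
[HUAWEI-bfd-session-lsptoingress] discriminator remote 1
[HUAWEI-bfd-session-lsptoingress] min-rx-interval 500
[HUAWEI-bfd-session-lsptoingress] commit
```

The discriminators must mirror those configured on the ingress node.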
Prerequisites
The configurations of the static BFD for LDP LSP are complete.
Procedure
l Run the display bfd configuration { all | static } command to check the BFD
configuration.
l Run the display bfd session { all | static } command to check information about the
BFD session.
l Run the display bfd statistics session { all | static } command to check statistics about
BFD.
----End
Context
When configuring dynamic BFD for LDP LSPs, pay attention to the following points:
l Dynamic BFD only monitors the LDP LSP that is established using a host route.
l The forwarding modes on the forwarding path and reverse path can be different (for
example, an IP packet is sent from the source to the destination through an LSP, and is
sent from the destination to the source in IP forwarding mode), but the forwarding path
and reverse path must be established over the same link. If they use different links, BFD
cannot identify the faulty path when a fault is detected.
Pre-configuration Tasks
Before configuring the dynamic BFD for LDP LSP, complete the following tasks:
Configuration Process
Configure dynamic BFD for LDP LSPs according to the following sequence.
Context
Perform the following steps on the ingress and egress nodes:
Procedure
Step 1 Run:
system-view
You can set BFD parameters only after enabling global BFD.
----End
Context
You can enable MPLS to dynamically establish BFD sessions after enabling BFD on the
ingress and egress nodes.
Procedure
l Perform the following steps on the ingress node:
a. Run:
system-view
The capability of dynamically creating BFD sessions for LDP LSPs is enabled.
By default, an ingress cannot dynamically create BFD sessions for monitoring LDP
LSPs.
By default, the egress node of an LSP cannot passively create a BFD session.
Running this command does not itself create a BFD session. The BFD session is created
only after the egress node receives an LSP ping request packet carrying a BFD TLV
from the ingress node.
----End
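As a sketch, the ingress- and egress-side enabling steps might look as follows; the mpls bfd enable and mpls-passive commands are assumptions based on common device versions and may differ on your device:

```text
# On the ingress node:
<HUAWEI> system-view
[HUAWEI] bfd
[HUAWEI-bfd] quit
[HUAWEI] mpls
[HUAWEI-mpls] mpls bfd enable
[HUAWEI-mpls] quit
# On the egress node:
<HUAWEI> system-view
[HUAWEI] bfd
[HUAWEI-bfd] mpls-passive
[HUAWEI-bfd] quit
```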
3.7.4.3 Configuring the Triggering Policy of Dynamic BFD for LDP LSP
Context
A session of dynamic BFD for LDP LSP can be triggered using either of the following policies:
l Host mode: used when BFD sessions need to be triggered for all host addresses. You can
specify the nexthop and outgoing-interface parameters to define the LSPs for which BFD
sessions can be created.
l FEC list mode: used when BFD sessions need to be triggered for only some host addresses.
You can use the fec-list command to specify the host addresses.
You can configure the triggering policy on the source end of the detected LSP.
Procedure
Step 1 Run:
system-view
Step 2 (Optional) If you need the FEC list triggering policy, perform the following operations in this
step:
1. Run:
fec-list list-name
Step 3 Run:
mpls
Step 4 Run:
mpls bfd-trigger [ host [ nexthop next-hop-address | outgoing-interface interface-
type interface-number ] * | fec-list list-name ]
The triggering policy to establish the session of dynamic BFD for LDP LSP is configured.
----End
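For example, a sketch of the FEC list mode that triggers BFD sessions only for the LSP destined for 3.3.3.3; the fec-node command and the list name are assumptions for illustration:

```text
<HUAWEI> system-view
[HUAWEI] fec-list tolsrc
[HUAWEI-fec-list-tolsrc] fec-node 3.3.3.3
[HUAWEI-fec-list-tolsrc] quit
[HUAWEI] mpls
[HUAWEI-mpls] mpls bfd-trigger fec-list tolsrc
```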
Context
BFD parameters include the minimum intervals for sending and receiving BFD packets, and
local BFD detection multiplier. The parameters affect BFD session setup.
You can adjust the local detection time according to the network situation. On an unstable
link, if a small detection time is used, a BFD session may flap. You can increase the detection
time of the BFD session.
NOTE
Actual interval for the local device to send BFD packets = MAX {locally configured interval for sending
BFD packets, remotely configured interval for receiving BFD packets}
Actual interval for the local device to receive BFD packets = MAX {remotely configured interval for
sending BFD packets, locally configured interval for receiving BFD packets}
Local detection time = Actual interval for receiving BFD packets x Remotely configured BFD detection
multiplier
Procedure
Step 1 Run:
system-view
Step 2 Run:
bfd
Step 3 Run:
mpls ping interval interval
By default, the interval at which LSP ping packets are sent in a dynamic BFD session is 60
seconds.
Step 4 Run:
quit
Step 5 Run:
mpls
Step 6 Run:
mpls bfd { min-tx-interval interval | min-rx-interval interval | detect-
multiplier multiplier }*
By default, the interval for sending BFD packets and the interval for receiving BFD packets
are both 1000 ms, and the detection multiplier is 3.
----End
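Combining the steps above into one sketch with illustrative values (a 30-second LSP ping interval and a 500 ms/500 ms/4 BFD parameter set):

```text
<HUAWEI> system-view
[HUAWEI] bfd
[HUAWEI-bfd] mpls ping interval 30
[HUAWEI-bfd] quit
[HUAWEI] mpls
[HUAWEI-mpls] mpls bfd min-tx-interval 500 min-rx-interval 500 detect-multiplier 4
```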
Prerequisites
The configurations of the dynamic BFD for LDP LSP function are complete.
Procedure
l Run the display bfd configuration all [ verbose ] command to check the BFD
configuration (ingress).
l Run the display bfd configuration passive-dynamic [ peer-ip peer-ip remote-
discriminator discriminator ] [ verbose ] command to check the BFD configuration
(egress).
l Run the display bfd session all [ verbose ] command to check information about the
BFD session (ingress).
l Run the display bfd session passive-dynamic [ peer-ip peer-ip remote-discriminator
discriminator ] [ verbose ] command to check information about the BFD established
passively (egress).
l Run the display mpls bfd session [ statistics | protocol ldp | outgoing-interface
interface-type interface-number | nexthop ip-address | fec fec-address | verbose |
monitor ] command to check information about MPLS BFD session (ingress).
----End
Pre-configuration Tasks
Before configuring synchronization between LDP and IGP, complete the following task:
l Configuring a local LDP session according to 3.7.1 Configuring Basic Functions of
MPLS LDP
Configuration Process
Enabling synchronization between LDP and IGP is mandatory and other tasks are optional.
Context
When configuring synchronization between LDP and IGP, note the following:
l Synchronization between LDP and IGP can be enabled in IS-IS processes, but not in OSPF
processes.
l If the synchronization status between LDP and IS-IS differs on an interface and in an IS-IS
process, the status configured on the interface takes effect.
Procedure
l If IS-IS is used as an IGP, perform the following steps:
a. Run:
system-view
d. Run:
isis enable process-id
IS-IS is enabled.
e. Run:
isis ldp-sync
Synchronization between LDP and IS-IS is enabled on all interfaces in the specified
IS-IS process.
By default, synchronization between LDP and IS-IS is disabled on all interfaces in
an IS-IS process.
To enable synchronization between LDP and IS-IS only on MPLS LDP-enabled
interfaces, specify the mpls-binding-only parameter.
----End
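As a sketch, the interface-side steps above might look as follows, assuming IS-IS process 1 runs on VLANIF 10 (interface and process numbers are illustrative):

```text
<HUAWEI> system-view
[HUAWEI] interface vlanif 10
[HUAWEI-Vlanif10] isis enable 1
[HUAWEI-Vlanif10] isis ldp-sync
[HUAWEI-Vlanif10] quit
```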
Context
The ldp-sync enable command run in an IS-IS process enables synchronization between LDP
and IS-IS on all local IS-IS interfaces. On an IS-IS interface that transmits important services,
LDP and IS-IS synchronization may affect service transmission: if the link is working
properly but an LDP session over the link fails, IS-IS sends link state PDUs (LSPs) to
advertise the maximum cost of the link. As a result, IS-IS does not select the route over the
link, which affects important service transmission.
To prevent the preceding problem, block LDP and IS-IS synchronization on an IS-IS interface
that transmits important services.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 4 Run:
isis ldp-sync block
----End
Context
On a device that has LDP-IGP synchronization enabled, when the active physical link recovers,
the IGP enters the Hold-down state and a Hold-down timer starts. Before the Hold-down
timer expires, the IGP delays establishing an IGP neighbor relationship until an LDP session
is established over the active link, so that the LDP session over the active link and the IGP
route for the active link become available simultaneously.
NOTE
If IS-IS is used, you can set the value of the Hold-down timer on a specified interface or set the value of
the Hold-down timer for all IS-IS interfaces in the IS-IS view.
If different Hold-down values on an interface and in an IS-IS process are set, the setting on the interface
takes effect.
Procedure
l If OSPF is used as an IGP, perform the following steps:
a. Run:
system-view
d. Run:
ospf timer ldp-sync hold-down value
The interval during which OSPF waits for an LDP session to be established is set.
By default, the Hold-down timer value is 10 seconds.
l If IS-IS is used as an IGP, perform the following steps:
Set the Hold-down timer on a specified IS-IS interface.
a. Run:
system-view
The interval during which IS-IS waits for an LDP session to be established is set.
By default, the Hold-down timer value is 10 seconds.
Set the Hold-down timer on all IS-IS interfaces in a specified IS-IS process.
a. Run:
system-view
The Hold-down timer is set, which enables all IS-IS interfaces within an IS-IS
process to delay establishing IS-IS neighbor relationships until LDP sessions are
established.
By default, the Hold-down timer value is 10 seconds.
----End
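For example, to make OSPF on VLANIF 10 wait up to 15 seconds for the LDP session to be established (interface and timer value are illustrative):

```text
<HUAWEI> system-view
[HUAWEI] interface vlanif 10
[HUAWEI-Vlanif10] ospf timer ldp-sync hold-down 15
[HUAWEI-Vlanif10] quit
```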
Context
If an LDP session over the active link fails but an IGP route for the active link is reachable, a
node that has LDP-IGP synchronization enabled uses a Hold-max-cost timer to enable an IGP
to advertise LSAs or LSPs carrying the maximum route cost, which delays IGP route
convergence until an LDP session is established. Therefore, an IGP route for a standby link
and an LDP session over the standby link can become available simultaneously.
You can set the Hold-max-cost timer value in either of the following methods:
l Setting the Hold-max-cost timer value in the interface view
You can set the Hold-max-cost timer value on a specified interface. This mode applies to
the scenario where a few interfaces need to use the Hold-max-cost timer.
l Setting the Hold-max-cost timer value in the IGP process
After you set the Hold-max-cost timer value in the IGP process, the Hold-max-cost
timers on all interfaces in the IGP process are set to this value. This mode applies to the
scenario where many interfaces on a node need to use the Hold-max-cost timer.
NOTE
A Hold-max-cost timer can be set on either an OSPF or IS-IS interface and can only be set in an
IS-IS process, not an OSPF process.
If different Hold-max-cost values on an interface and in an IS-IS process are set, the setting on the
interface takes effect.
Procedure
l If OSPF is used as an IGP, perform the following steps:
a. Run:
system-view
The interval during which OSPF keeps advertising the maximum cost in LSAs of the
local LSR is set.
a. Run:
system-view
The Hold-max-cost timer is set, which enables IS-IS to keep advertising LSPs
carrying the maximum route cost on all interfaces within an IS-IS process.
By default, the value of the Hold-max-cost timer is 10 seconds.
----End
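As a sketch, setting a 15-second Hold-max-cost timer on an OSPF interface; the hold-max-cost keyword is assumed by analogy with the Hold-down timer command and may differ on your device:

```text
<HUAWEI> system-view
[HUAWEI] interface vlanif 10
[HUAWEI-Vlanif10] ospf timer ldp-sync hold-max-cost 15
[HUAWEI-Vlanif10] quit
```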
Context
When an LDP session is reestablished on a faulty link, LDP starts the Delay timer to wait for
the establishment of an LSP. After the Delay timer times out, LDP notifies the IGP that
synchronization between LDP and IGP is complete.
Procedure
Step 1 Run:
system-view
The period during which LDP waits for LSP setup after the LDP session is established is set.
By default, the Delay timer value is 10s.
----End
Prerequisites
The configurations of the synchronization between LDP and IGP function are complete.
Procedure
l Run the display ospf ldp-sync interface { all | interface-type interface-number }
command to check information about synchronization between LDP and OSPF on an
interface.
l Run the display isis [ process-id ] ldp-sync interface command to check information
about synchronization between LDP and IS-IS on the interface.
l Run the display rm interface [ interface-type interface-number | vpn-instance vpn-
instance-name ] command to check information about the route management.
----End
Pre-configuration Tasks
Before configuring LDP GR, complete the following tasks:
l Configuring a local LDP session according to 3.7.1 Configuring Basic Functions of
MPLS LDP
l Configuring IGP GR (see S2750&S5700&S6700 Series Ethernet Switches Configuration
Guide - IP Routing)
Context
Table 3-10 describes timers used during LDP GR.
Reconnect timer: After the GR Restarter performs an active/standby switchover, the GR
Helper detects that the LDP session with the GR Restarter fails, and then starts the
Reconnect timer and waits for reestablishment of the LDP session. The value of the
Reconnect timer that takes effect on the GR Helper is the smaller of the Neighbor-liveness
timer value set on the GR Helper and the Reconnect timer value set on the GR Restarter.
When a network with a large number of routes is faulty, you can increase the value of the
Reconnect timer to prevent LDP sessions from failing to recover within the default timeout
period of 300s.
Recovery timer: After the LDP session is reestablished, the GR Helper starts the Recovery
timer and waits for the recovery of LSPs. The value of the Recovery timer that takes effect
on the GR Helper is the smaller of the Recovery timer values set on the GR Helper and on
the GR Restarter. When a network with a large number of routes is faulty, you can increase
the value of the Recovery timer to prevent LSPs from failing to recover within the default
timeout period of 300s.
NOTE
l If the device supports stacking, the stack device can also function as the GR Restarter. If the device
does not support stacking, the device can only function as the GR Helper.
l Enabling or disabling LDP GR, or changing an LDP GR timer value, causes LDP session
reestablishment.
Procedure
Step 1 Run:
system-view
----End
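A sketch of enabling LDP GR on the GR Helper and increasing the timers described in Table 3-10; the graceful-restart command forms are assumptions, and note that enabling GR triggers LDP session reestablishment:

```text
<HUAWEI> system-view
[HUAWEI] mpls ldp
[HUAWEI-mpls-ldp] graceful-restart
[HUAWEI-mpls-ldp] graceful-restart timer reconnect 360
[HUAWEI-mpls-ldp] graceful-restart timer recovery 360
[HUAWEI-mpls-ldp] quit
```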
Pre-configuration Tasks
Before configuring LDP security features, complete the following task:
Configuration Process
You can perform the following configuration tasks in any sequence as required. Note that
LDP MD5 authentication and keychain authentication cannot both be configured for the
same neighbor.
Context
MD5 authentication can be configured for the TCP connection over which an LDP session is
established, improving security. Note that the peers of an LDP session can be configured with
different encryption modes, but must be configured with the same password.
The MD5 algorithm is easy to configure and generates a single password which can be
changed only manually. MD5 authentication applies to the network requiring short-period
encryption.
Keychain authentication and MD5 authentication cannot be both configured on a single LDP
peer. Note that MD5 encryption algorithm cannot ensure security. Keychain authentication is
recommended.
NOTE
Configuring LDP MD5 authentication may cause LDP session reestablishment, deletion of the LSPs
associated with the deleted LDP session, and MPLS service interruption.
Procedure
Step 1 Run:
system-view
NOTICE
If plain is selected, the password is saved in the configuration file in plain text. In this case,
users at a lower level can easily obtain the password by viewing the configuration file. This
brings security risks. Therefore, it is recommended that you select cipher to save the
password in cipher text.
----End
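A sketch, assuming the md5-password command with a peer LSR ID of 2.2.2.2; the password is illustrative, and cipher is used as recommended in the NOTICE above:

```text
<HUAWEI> system-view
[HUAWEI] mpls ldp
[HUAWEI-mpls-ldp] md5-password cipher 2.2.2.2 YsHsjx_202206
[HUAWEI-mpls-ldp] quit
```

The same password must also be configured for this LSR on the peer end.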
Context
To help improve LDP session security, Keychain authentication can be configured for a TCP
connection over which an LDP session has been established.
Keychain authentication involves a set of passwords and uses a new password when the
previous one expires. Keychain authentication is complex to configure and applies to a
network requiring high security.
You cannot configure Keychain authentication and MD5 authentication for a neighbor at the
same time.
Before configuring LDP Keychain authentication, configure keychain globally. For details
about the keychain configuration, see the Keychain Configuration in S2750&S5700&S6700
Series Ethernet Switches Configuration Guide - Security.
NOTE
Configuring LDP keychain authentication may cause LDP session reestablishment, deletion of the LSPs
associated with the deleted LDP session, and MPLS service interruption.
Procedure
Step 1 Run:
system-view
Step 2 Run:
mpls ldp
Step 3 Run:
authentication key-chain peer peer-id name keychain-name
----End
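For example, assuming a keychain named kc1 has already been configured globally and the peer LSR ID is 2.2.2.2:

```text
<HUAWEI> system-view
[HUAWEI] mpls ldp
[HUAWEI-mpls-ldp] authentication key-chain peer 2.2.2.2 name kc1
[HUAWEI-mpls-ldp] quit
```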
Context
To protect devices against attacks, the Generalized TTL Security Mechanism (GTSM) checks the
TTL value of each packet to determine whether the packet is valid. To check the TTL values of
LDP packets exchanged between LDP peers, enable GTSM on the LDP peers and set a TTL
range. If the TTL of an LDP packet is out of the TTL range, the packet is considered an
attack packet and discarded. This prevents the CPU from processing a large number of forged
LDP packets and thereby protects the upper-layer protocols.
Procedure
Step 1 Run:
system-view
----End
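A sketch of enabling LDP GTSM for a directly connected peer; the gtsm peer command form, peer address, and hop limit are assumptions for illustration:

```text
<HUAWEI> system-view
[HUAWEI] mpls ldp
[HUAWEI-mpls-ldp] gtsm peer 2.2.2.2 valid-ttl-hops 5
[HUAWEI-mpls-ldp] quit
```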
Prerequisites
The configurations of LDP security features are complete.
Procedure
l Run the display mpls ldp session verbose command to check the configurations of LDP
MD5 authentication and LDP keychain authentication.
l Run the display gtsm statistics all command to check GTSM statistics.
----End
Pre-configuration Tasks
Before configuring non-labeled public network routes to be iterated to LSPs, complete the
following tasks:
l Configuring a local LDP session according to 3.7.1 Configuring Basic Functions of
MPLS LDP
l Configuring an IP prefix list if non-labeled public network routes to be iterated to LSPs
need to be limited
Procedure
Step 1 Run:
system-view
Non-labeled public network routes are allowed to be iterated to LSPs so that traffic is
forwarded through MPLS.
By default, non-labeled public network routes can be iterated only to outbound interfaces
and next hops, not to LSP tunnels.
If ip-prefix ip-prefix-name is not specified, all static routes and non-labeled public BGP routes
are preferentially iterated to LSP tunnels.
----End
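A sketch, assuming the route recursive-lookup tunnel command and an IP prefix list named prefix1 that matches the routes to be iterated to LSPs:

```text
<HUAWEI> system-view
[HUAWEI] route recursive-lookup tunnel ip-prefix prefix1
```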
NOTICE
l Resetting LDP may temporarily affect LSP reestablishment. Exercise caution when
resetting LDP.
l Resetting LDP is prohibited during the LDP GR.
Procedure
l Run the reset mpls ldp command to reset configurations of the global LDP instance in
the user view.
l Run the reset mpls ldp all command to reset configurations on all LDP instances in the
user view.
l Run the reset mpls ldp peer peer-id command to reset a specified peer in the user view.
----End
Context
NOTICE
The cleared LDP statistics cannot be restored. Exercise caution when you use the following
commands.
Procedure
l Run the reset mpls ldp error packet { tcp | udp | l2vpn | all } command in the user
view to clear statistics on LDP error messages.
l Run the reset mpls ldp event adjacency-down command in the user view to clear
statistics on LDP adjacencies in Down state.
l Run the reset mpls ldp event session-down command in the user view to clear statistics
on LDP sessions in Down state.
----End
Context
In routine maintenance, you can run the following commands in any view to view the LDP
running status.
Procedure
l Run the display mpls ldp error packet { tcp | udp | l2vpn } [ number ] command to
check statistics on LDP error messages.
l Run the display mpls ldp error packet state command to check the record status of
LDP-related error messages.
l Run the display mpls ldp event adjacency-down [ interface interface-type interface-
number | remote ] [ peer peer-id ] [ verbose ] command to check information about
LDP adjacencies in Down state.
l Run the display mpls ldp event session-down command to check information about
LDP sessions in Down state.
l Run the display mpls last-info lsp-down [ protocol ldp ] [ verbose ] command to check
information about LDP LSPs in Down state.
----End
Context
In MPLS, the control plane used for setting up an LSP cannot detect data forwarding failures
on the LSP. This makes network maintenance difficult.
MPLS ping checks LSP connectivity, and MPLS traceroute locates network faults in addition
to checking LSP connectivity.
MPLS ping and MPLS traceroute can be performed in any view. MPLS ping and MPLS
traceroute do not support packet fragmentation.
Procedure
Step 1 Run the system-view command to enter the system view.
Step 2 Run the lspv mpls-lsp-ping echo enable command to enable the response to MPLS Echo
Request packets.
Step 3 (Optional) Run the lspv packet-filter acl-number command to enable MPLS Echo Request
packet filtering based on source IP addresses. The filtering rule is specified in the ACL.
By default, the device does not filter MPLS Echo Request packets based on their source IP
addresses.
----End
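After the response function is enabled, LSP connectivity can be tested from the ingress node; for example, assuming an LDP LSP to the FEC 3.3.3.3/32:

```text
<HUAWEI> ping lsp ip 3.3.3.3 32
<HUAWEI> tracert lsp ip 3.3.3.3 32
```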
Postrequisite
l Run the display lspv statistics command to check the LSPV test statistics. A large
amount of statistical information is saved in the system after MPLS ping or traceroute
tests are performed multiple times, which is unhelpful for problem analysis. To obtain
more accurate statistics, run the reset lspv statistics command to clear LSPV test
statistics before running the display lspv statistics command.
l Run the undo lspv mpls-lsp-ping echo enable command to disable response to MPLS
Echo Request packets. It is recommended that you run this command after completing an
MPLS ping or traceroute test to save system resources.
l Run the display lspv configuration command to check the current LSPV configuration.
Procedure
l Configuring the trap function for LDP
a. Run:
system-view
The upper and lower alarm thresholds for BGP LSP usage are configured.
By default, the upper alarm threshold for BGP LSP usage is 80%, and the lower
clear-alarm threshold is 75%.
h. Run:
mpls bgpv6-lsp-number threshold-alarm upper-limit upper-limit-value
lower-limit lower-limit-value
The upper and lower alarm thresholds for BGP IPv6 LSP usage are configured.
By default, the upper alarm threshold for BGP IPv6 LSP usage is 80%, and the
lower clear-alarm threshold is 75%.
i. Run:
mpls total-lsp-number threshold-alarm upper-limit upper-limit-value
lower-limit lower-limit-value
The upper and lower alarm thresholds for total LSP usage are configured.
By default, the upper alarm threshold for total LSP usage is 80%, and the lower
clear-alarm threshold is 75%.
----End
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure OSPF between the LSRs to implement IP connectivity on the backbone
network.
2. Configure local LDP sessions on LSRs so that public tunnels can be set up to transmit
VPN services.
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure LSRA. The configurations of LSRB and LSRC are similar to the configuration of
LSRA, and are not mentioned here.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] interface loopback 0
[LSRA-LoopBack0] ip address 1.1.1.1 32
[LSRA-LoopBack0] quit
[LSRA] vlan batch 10
[LSRA] interface vlanif 10
[LSRA-Vlanif10] ip address 10.1.1.1 24
[LSRA-Vlanif10] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 10
[LSRA-GigabitEthernet0/0/1] quit
Step 2 Configure OSPF to advertise the network segments connecting to interfaces on each node and
to advertise the routes of hosts with LSR IDs.
# Configure LSRA.
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
# Configure LSRB.
[LSRB] ospf 1
[LSRB-ospf-1] area 0
[LSRB-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[LSRB-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[LSRB-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[LSRB-ospf-1-area-0.0.0.0] quit
[LSRB-ospf-1] quit
# Configure LSRC.
[LSRC] ospf 1
[LSRC-ospf-1] area 0
[LSRC-ospf-1-area-0.0.0.0] network 3.3.3.3 0.0.0.0
[LSRC-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[LSRC-ospf-1-area-0.0.0.0] quit
[LSRC-ospf-1] quit
After the configuration is complete, run the display ip routing-table command on each node,
and you can view that the nodes learn routes from each other.
Step 3 Enable global MPLS and MPLS LDP on each LSR.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.1
[LSRA] mpls
[LSRA-mpls] quit
[LSRA] mpls ldp
[LSRA-mpls-ldp] quit
# Configure LSRB.
[LSRB] mpls lsr-id 2.2.2.2
[LSRB] mpls
[LSRB-mpls] quit
[LSRB] mpls ldp
[LSRB-mpls-ldp] quit
# Configure LSRC.
[LSRC] mpls lsr-id 3.3.3.3
[LSRC] mpls
[LSRC-mpls] quit
[LSRC] mpls ldp
[LSRC-mpls-ldp] quit
# Configure LSRB.
[LSRB] interface vlanif 10
[LSRB-Vlanif10] mpls
[LSRB-Vlanif10] mpls ldp
[LSRB-Vlanif10] quit
[LSRB] interface vlanif 20
[LSRB-Vlanif20] mpls
[LSRB-Vlanif20] mpls ldp
[LSRB-Vlanif20] quit
# Configure LSRC.
[LSRC] interface vlanif 20
[LSRC-Vlanif20] mpls
[LSRC-Vlanif20] mpls ldp
[LSRC-Vlanif20] quit
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 10
#
mpls lsr-id 1.1.1.1
mpls
#
mpls ldp
#
interface Vlanif10
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 10 20
#
mpls lsr-id 2.2.2.2
mpls
#
mpls ldp
#
interface Vlanif10
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Vlanif20
ip address 10.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 20
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 20
#
mpls lsr-id 3.3.3.3
mpls
#
mpls ldp
#
interface Vlanif20
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 20
#
interface LoopBack0
ip address 3.3.3.3 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.2.1.0 0.0.0.255
#
return
Networking Requirements
As shown in Figure 3-13, LSRA and LSRC are PEs of the IP/MPLS backbone network.
MPLS L2VPN services need to be deployed on LSRA and LSRC to connect VPN sites at
Layer 2, so remote LDP sessions need to be deployed between LSRA and LSRC to
implement VC label exchange.
Configuration Roadmap
If LSRA is directly connected to LSRC, local LDP sessions established on LSRs can be used
to set up LDP LSPs to transmit services and exchange VC labels. In this example, LSRA is
indirectly connected to LSRC, so remote LDP sessions must be configured. The configuration
roadmap is as follows:
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure LSRA. The configurations of LSRB and LSRC are similar to the configuration of
LSRA, and are not mentioned here.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] interface loopback 0
[LSRA-LoopBack0] ip address 1.1.1.1 32
[LSRA-LoopBack0] quit
[LSRA] vlan batch 10
[LSRA] interface vlanif 10
[LSRA-Vlanif10] ip address 10.1.1.1 24
[LSRA-Vlanif10] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 10
[LSRA-GigabitEthernet0/0/1] quit
Step 2 Configure OSPF to advertise the network segments connecting to interfaces on each node and
to advertise the routes of hosts with LSR IDs.
# Configure LSRA.
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
# Configure LSRB.
[LSRB] ospf 1
[LSRB-ospf-1] area 0
[LSRB-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[LSRB-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[LSRB-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[LSRB-ospf-1-area-0.0.0.0] quit
[LSRB-ospf-1] quit
# Configure LSRC.
[LSRC] ospf 1
[LSRC-ospf-1] area 0
[LSRC-ospf-1-area-0.0.0.0] network 3.3.3.3 0.0.0.0
[LSRC-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[LSRC-ospf-1-area-0.0.0.0] quit
[LSRC-ospf-1] quit
After the configuration is complete, run the display ip routing-table command on each node,
and you can view that the nodes learn routes from each other.
Step 3 Enable global MPLS and MPLS LDP on each LSR.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.1
[LSRA] mpls
[LSRA-mpls] quit
[LSRA] mpls ldp
[LSRA-mpls-ldp] quit
# Configure LSRB.
[LSRB] mpls lsr-id 2.2.2.2
[LSRB] mpls
[LSRB-mpls] quit
[LSRB] mpls ldp
[LSRB-mpls-ldp] quit
# Configure LSRC.
[LSRC] mpls lsr-id 3.3.3.3
[LSRC] mpls
[LSRC-mpls] quit
[LSRC] mpls ldp
[LSRC-mpls-ldp] quit
Step 4 Specify the name and IP address of the remote peer on the two LSRs of a remote LDP
session.
# Configure LSRA.
[LSRA] mpls ldp remote-peer LSRC
[LSRA-mpls-ldp-remote-lsrc] remote-ip 3.3.3.3
[LSRA-mpls-ldp-remote-lsrc] quit
# Configure LSRC.
[LSRC] mpls ldp remote-peer LSRA
[LSRC-mpls-ldp-remote-lsra] remote-ip 1.1.1.1
[LSRC-mpls-ldp-remote-lsra] quit
# Run the display mpls ldp remote-peer command on the two LSRs of the remote LDP
session to view information about the remote peer.
LSRA is used as an example.
[LSRA] display mpls ldp remote-peer
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 10
#
mpls lsr-id 1.1.1.1
mpls
#
mpls ldp
#
mpls ldp remote-peer lsrc
remote-ip 3.3.3.3
#
interface Vlanif10
ip address 10.1.1.1 255.255.255.0
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 20
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
return
Figure 3-14 Example for configuring automatic triggering of a request for a Label Mapping
message in DoD mode
Configuration Roadmap
To connect VPN sites at Layer 2, configure local LDP sessions to establish LDP LSPs to
transmit L2VPN services. Remote LDP sessions need to be configured to exchange VC labels
so that PWs are set up.
To reduce the burden on edge devices, configure a default static route with the neighbor as the
next hop on each edge device to reduce unnecessary IP routing entries. In addition, set the
label advertisement mode to DoD to reduce unnecessary MPLS entries. With these settings
alone, however, LDP LSPs cannot be set up. Configure automatic triggering of a request for a
Label Mapping message in DoD mode so that LDP LSPs can be established.
Procedure
Step 1 Configure IP addresses for interfaces on each node and configure the loopback addresses that
are used as LSR IDs.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] interface loopback 0
[LSRA-LoopBack0] ip address 1.1.1.1 32
[LSRA-LoopBack0] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 10
[LSRA-GigabitEthernet0/0/1] quit
[LSRA] vlan 10
[LSRA-vlan10] quit
[LSRA] interface vlanif 10
[LSRA-Vlanif10] ip address 10.1.1.1 24
[LSRA-Vlanif10] quit
The configurations of LSRB, LSRC, and LSRD are similar to the configuration of LSRA, and
are not mentioned here.
Step 2 Configure basic IS-IS functions on the backbone devices, and configure static routes on the edge devices and their neighbors.
# Configure basic IS-IS functions for LSRB and import a static route.
[LSRB] isis 1
[LSRB-isis-1] network-entity 10.0000.0000.0001.00
[LSRB-isis-1] import-route static
[LSRB-isis-1] quit
[LSRB] interface vlanif 20
[LSRB-Vlanif20] isis enable 1
[LSRB-Vlanif20] quit
[LSRB] interface loopback 0
[LSRB-LoopBack0] isis enable 1
[LSRB-LoopBack0] quit
# Configure basic IS-IS functions for LSRC and import a static route.
[LSRC] isis 1
[LSRC-isis-1] network-entity 10.0000.0000.0002.00
[LSRC-isis-1] import-route static
[LSRC-isis-1] quit
[LSRC] interface vlanif 20
[LSRC-Vlanif20] isis enable 1
[LSRC-Vlanif20] quit
[LSRC] interface loopback 0
[LSRC-LoopBack0] isis enable 1
[LSRC-LoopBack0] quit
# Run the display ip routing-table command on LSRA to view the configured default route.
[LSRA] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
Destinations : 6 Routes : 6
# Run the display ip routing-table command on LSRB to view the route to LSRA.
[LSRB] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
Destinations : 10 Routes : 10
Step 3 Enable MPLS and MPLS LDP globally and on interfaces on each node.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.1
[LSRA] mpls
[LSRA-mpls] quit
[LSRA] mpls ldp
[LSRA-mpls-ldp] quit
[LSRA] interface vlanif 10
[LSRA-Vlanif10] mpls
[LSRA-Vlanif10] mpls ldp
[LSRA-Vlanif10] quit
The configurations of LSRB, LSRC, and LSRD are similar to the configuration of LSRA, and
are not mentioned here.
Step 4 Set the label advertisement mode to DoD on the interfaces of each node.
# Configure LSRA.
[LSRA] interface Vlanif 10
[LSRA-Vlanif10] mpls ldp advertisement dod
[LSRA-Vlanif10] quit
# Configure LSRB.
[LSRB] interface vlanif 10
[LSRB-Vlanif10] mpls ldp advertisement dod
[LSRB-Vlanif10] quit
# Configure LSRC.
[LSRC] interface vlanif 30
[LSRC-Vlanif30] mpls ldp advertisement dod
[LSRC-Vlanif30] quit
# Configure LSRD.
[LSRD] interface vlanif 30
[LSRD-Vlanif30] mpls ldp advertisement dod
[LSRD-Vlanif30] quit
Step 5 Configure LDP to search for routes according to the longest match rule.
# Run the longest-match command on LSRA to configure LDP to search for a route according to the longest match rule so that an inter-area LDP LSP can be established.
[LSRA] mpls ldp
[LSRA-mpls-ldp] longest-match
[LSRA-mpls-ldp] quit
# Run the longest-match command on LSRD to configure LDP to search for a route
according to the longest match rule to establish an inter-area LDP LSP.
[LSRD] mpls ldp
[LSRD-mpls-ldp] longest-match
[LSRD-mpls-ldp] quit
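The longest-match rule can be modeled with Python's ipaddress module: an exact-match lookup for the 32-bit FEC fails on an edge device that holds only a default route, but the longest match selects the most specific covering route, so the Label Request can still be sent. This is a simplified sketch of the lookup, not the device's actual code:

```python
import ipaddress

def longest_match(routes, dest):
    """Return the most specific route that covers dest, or None."""
    dest = ipaddress.ip_address(dest)
    best = None
    for prefix in routes:
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best.prefixlen):
            best = net
    return best

# LSRA's routing table holds only a default route and a connected segment.
routes = ["0.0.0.0/0", "10.1.1.0/24"]
# There is no exact route to FEC 4.4.4.4/32, but the longest-match rule
# falls back to the default route:
print(longest_match(routes, "4.4.4.4"))   # 0.0.0.0/0
print(longest_match(routes, "10.1.1.5"))  # 10.1.1.0/24
```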
Step 6 Configure a remote LDP session and enable LDP to automatically trigger a request for a
Label Mapping message in DoD mode.
# Configure LSRA.
[LSRA] mpls ldp remote-peer lsrd
[LSRA-mpls-ldp-remote-lsrd] remote-ip 4.4.4.4
[LSRA-mpls-ldp-remote-lsrd] remote-ip auto-dod-request
[LSRA-mpls-ldp-remote-lsrd] quit
# Configure LSRD.
[LSRD] mpls ldp remote-peer lsra
[LSRD-mpls-ldp-remote-lsra] remote-ip 1.1.1.1
[LSRD-mpls-ldp-remote-lsra] remote-ip auto-dod-request
[LSRD-mpls-ldp-remote-lsra] quit
The command output shows that only a default route exists in the routing table; there is no route to 4.4.4.4/32.
# Run the display mpls ldp lsp command on LSRA to view information about the established
LSP.
[LSRA] display mpls ldp lsp
The command output shows that the LSP with the destination address 4.4.4.4/32 has been established. LSRA obtained a Label Mapping message for 4.4.4.4/32 from LSRB and used it to establish the LSP.
[LSRA] display tunnel-info all
* -> Allocated VC Token
The command output shows that an LSP between LSRA and LSRD is established.
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 10
#
mpls lsr-id 1.1.1.1
mpls
#
mpls ldp
longest-match
#
mpls ldp remote-peer lsrd
remote-ip 4.4.4.4
remote-ip auto-dod-request
#
interface Vlanif10
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
mpls ldp advertisement dod
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
ip route-static 0.0.0.0 0.0.0.0 10.1.1.2
#
return
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 20
#
interface LoopBack0
ip address 2.2.2.2 255.255.255.255
isis enable 1
#
ip route-static 1.1.1.1 255.255.255.255 10.1.1.1
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 20 30
#
mpls lsr-id 3.3.3.3
mpls
#
mpls ldp
#
isis 1
network-entity 10.0000.0000.0002.00
import-route static
#
interface Vlanif20
ip address 10.1.2.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface Vlanif30
ip address 10.1.3.1 255.255.255.0
mpls
mpls ldp
mpls ldp advertisement dod
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 20
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 30
#
interface LoopBack0
ip address 3.3.3.3 255.255.255.255
isis enable 1
#
ip route-static 4.4.4.4 255.255.255.255 10.1.3.2
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 30
#
mpls lsr-id 4.4.4.4
mpls
#
mpls ldp
longest-match
#
mpls ldp remote-peer lsra
remote-ip 1.1.1.1
remote-ip auto-dod-request
#
interface Vlanif30
ip address 10.1.3.2 255.255.255.0
mpls
mpls ldp
mpls ldp advertisement dod
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 30
#
interface LoopBack0
ip address 4.4.4.4 255.255.255.255
#
ip route-static 0.0.0.0 0.0.0.0 10.1.3.1
#
return
Figure 3-15 Networking diagram for configuring a policy for triggering LDP LSP
establishment
[Topology:
LSRA (Loopback0 1.1.1.9/32) GE0/0/1, VLANIF10 10.1.1.1/24 <-> 10.1.1.2/24 VLANIF10, GE0/0/1 LSRB (Loopback0 2.2.2.9/32)
LSRB GE0/0/2, VLANIF20 10.2.1.1/24 <-> 10.2.1.2/24 VLANIF20, GE0/0/1 LSRC (Loopback0 3.3.3.9/32)
LSRC GE0/0/2, VLANIF30 10.3.1.1/24 <-> 10.3.1.2/24 VLANIF30, GE0/0/1 LSRD (Loopback0 4.4.4.9/32)]
Configuration Roadmap
You can configure a policy for triggering LDP LSP setup to meet the requirement. The
configuration roadmap is as follows:
1. Configure OSPF between the LSRs to implement IP connectivity on the backbone
network.
2. Configure local LDP sessions on LSRs so that LDP LSPs can be set up.
3. Configure a policy for triggering LDP LSP setup on LSRA and LSRD to reduce the number of LSPs established on the edge devices and thus lighten their load.
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure LSRA. The configurations of LSRB, LSRC, and LSRD are similar to the
configuration of LSRA, and are not mentioned here.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] interface loopback 0
[LSRA-LoopBack0] ip address 1.1.1.9 32
[LSRA-LoopBack0] quit
[LSRA] vlan batch 10
[LSRA] interface vlanif 10
[LSRA-Vlanif10] ip address 10.1.1.1 24
[LSRA-Vlanif10] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 10
[LSRA-GigabitEthernet0/0/1] quit
Step 2 Configure OSPF to advertise the network segments of the connected interfaces and the host routes of the LSR IDs on each node.
# Configure LSRA. The configurations of LSRB, LSRC, and LSRD are similar to the
configuration of LSRA, and are not mentioned here.
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
Step 3 Configure basic MPLS and MPLS LDP functions on the nodes and interfaces.
# Configure LSRA. The configurations of LSRB, LSRC, and LSRD are similar to the
configuration of LSRA, and are not mentioned here.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] quit
[LSRA] mpls ldp
[LSRA-mpls-ldp] quit
[LSRA] interface vlanif 10
[LSRA-Vlanif10] mpls
[LSRA-Vlanif10] mpls ldp
[LSRA-Vlanif10] quit
# Run the display mpls lsp command on each node to view the establishment of the LDP
LSPs. LSRA is used as an example.
Step 4 Configure a policy for triggering LDP LSP establishment.
# Configure an IP prefix list on LSRD that permits only 1.1.1.9/32 and 4.4.4.9/32 for LSP setup.
[LSRD] ip ip-prefix FilterOnEgress permit 1.1.1.9 32
[LSRD] ip ip-prefix FilterOnEgress permit 4.4.4.9 32
[LSRD] mpls
[LSRD-mpls] lsp-trigger ip-prefix FilterOnEgress
[LSRD-mpls] quit
After the policy is configured, LSRA is the ingress node only of the LDP LSPs destined for 1.1.1.9/32 and 4.4.4.9/32; LDP LSPs on which LSRA is not the ingress node are unaffected.
[LSRD] display mpls lsp
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.9/32 NULL/4110 -/Vlanif30
1.1.1.9/32 4100/4110 -/Vlanif30
2.2.2.9/32 1023/1028 -/Vlanif30
3.3.3.9/32 1027/3 -/Vlanif30
4.4.4.9/32 3/NULL -/-
After the policy is configured, LSRD is the ingress node only of the LDP LSPs destined for 1.1.1.9/32 and 4.4.4.9/32; LDP LSPs on which LSRD is not the ingress node are unaffected.
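The effect of lsp-trigger ip-prefix can be thought of as a permit-only filter over candidate FECs: only host routes matched by the prefix list trigger ingress LSP establishment. The following sketch mirrors the FilterOnEgress list above (an illustration of the filtering logic, not the device's implementation):

```python
import ipaddress

def triggered_fecs(candidate_fecs, prefix_list):
    """Keep only the FECs permitted by the (permit-only) IP prefix list."""
    permitted = [ipaddress.ip_network(p) for p in prefix_list]
    return [fec for fec in candidate_fecs
            if any(ipaddress.ip_network(fec).subnet_of(net) for net in permitted)]

filter_on_egress = ["1.1.1.9/32", "4.4.4.9/32"]
all_host_routes = ["1.1.1.9/32", "2.2.2.9/32", "3.3.3.9/32", "4.4.4.9/32"]
# Only the permitted FECs trigger ingress LSP establishment:
print(triggered_fecs(all_host_routes, filter_on_egress))
# ['1.1.1.9/32', '4.4.4.9/32']
```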
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 10
#
mpls lsr-id 1.1.1.9
mpls
lsp-trigger ip-prefix FilterOnIngress
#
mpls ldp
#
interface Vlanif10
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface LoopBack0
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 10.1.1.0 0.0.0.255
#
ip ip-prefix FilterOnIngress index 10 permit 1.1.1.9 32
ip ip-prefix FilterOnIngress index 20 permit 4.4.4.9 32
#
return
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 20
#
interface LoopBack0
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 20 30
#
mpls lsr-id 3.3.3.9
mpls
#
mpls ldp
#
interface Vlanif20
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface Vlanif30
ip address 10.3.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 20
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 30
#
interface LoopBack0
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 10.2.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 30
#
mpls lsr-id 4.4.4.9
mpls
lsp-trigger ip-prefix FilterOnEgress
#
mpls ldp
#
interface Vlanif30
ip address 10.3.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 30
#
interface LoopBack0
ip address 4.4.4.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 10.3.1.0 0.0.0.255
#
ip ip-prefix FilterOnEgress index 10 permit 1.1.1.9 32
ip ip-prefix FilterOnEgress index 20 permit 4.4.4.9 32
#
return
Networking Requirements
As shown in Figure 3-16, LSRA and LSRD are edge devices of the MPLS backbone network and have low performance. After MPLS LDP is enabled on each LSR interface, LDP LSPs are set up automatically. Because the network scale is large (this example shows only two intermediate nodes), many unnecessary LSPs are set up, wasting resources. The number of LSPs established on the edge devices needs to be reduced to lighten their load.
NOTE
Figure 3-16 Networking diagram for configuring a policy for triggering LDP LSP
establishment
[Topology:
LSRA (Loopback0 1.1.1.9/32) GE0/0/1, VLANIF10 10.1.1.1/24 <-> 10.1.1.2/24 VLANIF10, GE0/0/1 LSRB (Loopback0 2.2.2.9/32)
LSRB GE0/0/2, VLANIF20 10.2.1.1/24 <-> 10.2.1.2/24 VLANIF20, GE0/0/1 LSRC (Loopback0 3.3.3.9/32)
LSRC GE0/0/2, VLANIF30 10.3.1.1/24 <-> 10.3.1.2/24 VLANIF30, GE0/0/1 LSRD (Loopback0 4.4.4.9/32)]
Configuration Roadmap
You can configure a policy for triggering LDP LSP setup to meet the requirement. The
configuration roadmap is as follows:
1. Configure OSPF between the LSRs to implement IP connectivity on the backbone
network.
2. Configure local LDP sessions on LSRs so that LDP LSPs can be set up.
3. Configure a policy for triggering LDP LSP setup on LSRB and LSRC to reduce the number of LSPs established on the edge devices and thus lighten their load.
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure LSRA. The configurations of LSRB, LSRC, and LSRD are similar to the
configuration of LSRA, and are not mentioned here.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] interface loopback 0
[LSRA-LoopBack0] ip address 1.1.1.9 32
[LSRA-LoopBack0] quit
[LSRA] vlan batch 10
[LSRA] interface vlanif 10
[LSRA-Vlanif10] ip address 10.1.1.1 24
[LSRA-Vlanif10] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 10
[LSRA-GigabitEthernet0/0/1] quit
Step 2 Configure OSPF to advertise the network segments of the connected interfaces and the host routes of the LSR IDs on each node.
# Configure LSRA. The configurations of LSRB, LSRC, and LSRD are similar to the
configuration of LSRA, and are not mentioned here.
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
Step 3 Configure basic MPLS and MPLS LDP functions on the nodes and interfaces.
# Configure LSRA. The configurations of LSRB, LSRC, and LSRD are similar to the
configuration of LSRA, and are not mentioned here.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] quit
[LSRA] mpls ldp
[LSRA-mpls-ldp] quit
[LSRA] interface vlanif 10
[LSRA-Vlanif10] mpls
[LSRA-Vlanif10] mpls ldp
[LSRA-Vlanif10] quit
# Run the display mpls ldp lsp command on each node to view the establishment of the LDP
LSPs. LSRA is used as an example.
Step 4 Configure a policy for triggering LDP LSP establishment on the transit nodes.
# Configure an IP prefix list on transit node LSRB that permits only 1.1.1.9/32 and 4.4.4.9/32 for LSP setup.
[LSRB] ip ip-prefix FilterOnTransit permit 1.1.1.9 32
[LSRB] ip ip-prefix FilterOnTransit permit 4.4.4.9 32
[LSRB] mpls ldp
[LSRB-mpls-ldp] propagate mapping for ip-prefix FilterOnTransit
[LSRB-mpls-ldp] quit
# Configure the IP prefix list on transit node LSRC to allow only 1.1.1.9/32 and 4.4.4.9/32 for
LSP setup.
[LSRC] ip ip-prefix FilterOnTransit permit 1.1.1.9 32
[LSRC] ip ip-prefix FilterOnTransit permit 4.4.4.9 32
[LSRC] mpls ldp
[LSRC-mpls-ldp] propagate mapping for ip-prefix FilterOnTransit
[LSRC-mpls-ldp] quit
# After the configuration is complete, run the display mpls ldp lsp command on LSRA and
LSRD to view LDP LSP establishment.
[LSRA] display mpls ldp lsp
Because the policy for triggering LDP LSP setup is configured on LSRB, the LDP LSP
destined for 3.3.3.9/32 is filtered on LSRA.
[LSRD] display mpls ldp lsp
Because the policy for triggering LDP LSP setup is configured on LSRC, the LDP LSP
destined for 2.2.2.9/32 is filtered on LSRD.
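On a transit node, propagate mapping for ip-prefix filters which received label mappings are propagated further upstream; mappings for FECs outside the prefix list are withheld, so upstream LSRs never build those LSPs. A rough Python model of this behavior (illustrative only; the label values are made up):

```python
import ipaddress

def propagate_mappings(received, prefix_list):
    """Forward upstream only the label mappings whose FEC matches the list."""
    permitted = [ipaddress.ip_network(p) for p in prefix_list]
    return {fec: label for fec, label in received.items()
            if any(ipaddress.ip_network(fec).subnet_of(net) for net in permitted)}

filter_on_transit = ["1.1.1.9/32", "4.4.4.9/32"]
# Mappings the transit node LSRB learned from downstream:
received = {"3.3.3.9/32": 1027, "4.4.4.9/32": 1028}
print(propagate_mappings(received, filter_on_transit))
# {'4.4.4.9/32': 1028} - the 3.3.3.9/32 mapping is withheld, so LSRA
# does not establish an LSP destined for 3.3.3.9/32
```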
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 10
#
mpls lsr-id 1.1.1.9
mpls
#
mpls ldp
#
interface Vlanif10
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface LoopBack0
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
#
vlan batch 10 20
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
propagate mapping for ip-prefix FilterOnTransit
#
interface Vlanif10
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Vlanif20
ip address 10.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 20
#
interface LoopBack0
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
ip ip-prefix FilterOnTransit index 10 permit 1.1.1.9 32
ip ip-prefix FilterOnTransit index 20 permit 4.4.4.9 32
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 20 30
#
mpls lsr-id 3.3.3.9
mpls
#
mpls ldp
propagate mapping for ip-prefix FilterOnTransit
#
interface Vlanif20
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface Vlanif30
ip address 10.3.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 20
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 30
#
interface LoopBack0
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 10.2.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
#
ip ip-prefix FilterOnTransit index 10 permit 1.1.1.9 32
ip ip-prefix FilterOnTransit index 20 permit 4.4.4.9 32
#
return
Figure 3-17 Networking diagram for disabling devices from distributing LDP labels to
remote peers
[Topology (addresses reconstructed from the configuration files):
PE1 (Loopback 0 1.1.1.1/32) GE0/0/1, VLANIF10 40.1.1.1/24 <-> 40.1.1.2/24 VLANIF10, GE0/0/1 P (Loopback 0 2.2.2.2/32)
P -- VLANIF20, 20.1.1.0/24 -- PE2 (Loopback 0 5.5.5.5/32, VLANIF20 20.1.1.2/24)
P -- VLANIF30, 30.1.1.0/24 -- PE3 (Loopback 0 4.4.4.4/32)]
Configuration Roadmap
To meet the preceding requirements, disable devices from distributing LDP labels to remote
peers. The configuration roadmap is as follows:
1. Configure IS-IS on the PEs and the P to implement IP connectivity on the backbone network.
2. Configure local LDP sessions on PEs and P so that public LSPs can be set up to transmit
L2VPN services.
3. Configure remote LDP sessions on PEs to exchange private labels so that dynamic PWs
are set up.
4. Disable PEs from allocating labels to remote peers so that PE1 cannot allocate LDP
labels to PE2 and PE3. This setting saves system resources.
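Conceptually, the pwe3 option on remote-ip restricts a remote LDP session to PW (VC-label) signaling and suppresses IP-FEC label distribution over that session, which is what frees the idle labels. A toy model of the decision (illustrative only, not the device's code; the function and parameter names are made up):

```python
def advertise_over_session(fec_type, session_kind, pwe3_only=False):
    """Decide whether a label mapping is sent over an LDP session.

    fec_type:     'ip' (IP prefix FEC) or 'pw' (PWE3 VC FEC)
    session_kind: 'local' or 'remote'
    pwe3_only:    True when 'remote-ip <ip> pwe3' is configured
    """
    if session_kind == "local":
        return fec_type == "ip"   # local sessions carry IP FEC labels
    # Remote sessions always carry PW labels; IP FEC labels are sent
    # only when the pwe3 restriction is NOT configured.
    return fec_type == "pw" or not pwe3_only

print(advertise_over_session("ip", "remote"))                  # True: idle liberal labels
print(advertise_over_session("ip", "remote", pwe3_only=True))  # False: suppressed
print(advertise_over_session("pw", "remote", pwe3_only=True))  # True: PW signaling kept
```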
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure PE1. The configurations of P, PE2, and PE3 are similar to the configuration of
PE1, and are not mentioned here.
<HUAWEI> system-view
[HUAWEI] sysname PE1
[PE1] interface loopback 0
[PE1-LoopBack0] ip address 1.1.1.1 32
[PE1-LoopBack0] quit
Step 2 Configure IS-IS to advertise the network segments of the connected interfaces and the host routes of the LSR IDs on each node.
# Configure PE1.
[PE1] isis 1
[PE1-isis-1] is-level level-2
[PE1-isis-1] network-entity 86.4501.0010.0100.0001.00
[PE1-isis-1] quit
[PE1] interface vlanif 10
[PE1-Vlanif10] isis enable 1
[PE1-Vlanif10] quit
[PE1] interface loopback 0
[PE1-LoopBack0] isis enable 1
[PE1-LoopBack0] quit
# Configure P.
[P] isis 1
[P-isis-1] is-level level-2
[P-isis-1] network-entity 86.4501.0030.0300.0003.00
[P-isis-1] quit
[P] interface vlanif 10
[P-Vlanif10] isis enable 1
[P-Vlanif10] quit
[P] interface vlanif 20
[P-Vlanif20] isis enable 1
[P-Vlanif20] quit
[P] interface vlanif 30
[P-Vlanif30] isis enable 1
[P-Vlanif30] quit
[P] interface loopback 0
[P-LoopBack0] isis enable 1
[P-LoopBack0] quit
# Configure PE2.
[PE2] isis 1
[PE2-isis-1] is-level level-2
[PE2-isis-1] network-entity 86.4501.0050.0500.0005.00
[PE2-isis-1] quit
[PE2] interface vlanif 20
[PE2-Vlanif20] isis enable 1
[PE2-Vlanif20] quit
[PE2] interface loopback 0
[PE2-LoopBack0] isis enable 1
[PE2-LoopBack0] quit
# Configure PE3.
[PE3] isis 1
[PE3-isis-1] is-level level-2
[PE3-isis-1] network-entity 86.4501.0040.0400.0004.00
[PE3-isis-1] quit
[PE3] interface vlanif 30
[PE3-Vlanif30] isis enable 1
[PE3-Vlanif30] quit
[PE3] interface loopback 0
[PE3-LoopBack0] isis enable 1
[PE3-LoopBack0] quit
Step 3 Configure basic MPLS and MPLS LDP functions on each node.
# Configure P.
[P] mpls lsr-id 2.2.2.2
[P] mpls
[P-mpls] quit
[P] mpls ldp
[P-mpls-ldp] quit
[P] interface vlanif 10
[P-Vlanif10] mpls
[P-Vlanif10] mpls ldp
[P-Vlanif10] quit
[P] interface vlanif 20
[P-Vlanif20] mpls
[P-Vlanif20] mpls ldp
[P-Vlanif20] quit
[P] interface vlanif 30
[P-Vlanif30] mpls
[P-Vlanif30] mpls ldp
[P-Vlanif30] quit
# Configure PE2.
[PE2] mpls lsr-id 5.5.5.5
[PE2] mpls
[PE2-mpls] quit
[PE2] mpls ldp
[PE2-mpls-ldp] quit
[PE2] interface vlanif 20
[PE2-Vlanif20] mpls
[PE2-Vlanif20] mpls ldp
[PE2-Vlanif20] quit
# Configure PE3.
[PE3] mpls lsr-id 4.4.4.4
[PE3] mpls
[PE3-mpls] quit
[PE3] mpls ldp
[PE3-mpls-ldp] quit
[PE3] interface vlanif 30
[PE3-Vlanif30] mpls
[PE3-Vlanif30] mpls ldp
[PE3-Vlanif30] quit
After the configuration is complete, LDP sessions and public network LSPs are established
between neighboring nodes. Run the display mpls ldp session command on each node. The
command output shows that the LDP session status is Operational. PE1 is used as an example.
[PE1] display mpls ldp session
Run the display mpls ldp lsp command to check the LSP setup result and label distribution.
[PE1] display mpls ldp lsp
Step 4 Set up the remote MPLS LDP peer relationship between PEs at both ends of the PW.
# Configure PE1.
[PE1] mpls ldp remote-peer pe2
[PE1-mpls-ldp-remote-pe2] remote-ip 5.5.5.5
[PE1-mpls-ldp-remote-pe2] quit
[PE1] mpls ldp remote-peer pe3
[PE1-mpls-ldp-remote-pe3] remote-ip 4.4.4.4
[PE1-mpls-ldp-remote-pe3] quit
# Configure PE2.
[PE2] mpls ldp remote-peer pe1
[PE2-mpls-ldp-remote-pe1] remote-ip 1.1.1.1
[PE2-mpls-ldp-remote-pe1] quit
# Configure PE3.
[PE3] mpls ldp remote-peer pe1
[PE3-mpls-ldp-remote-pe1] remote-ip 1.1.1.1
[PE3-mpls-ldp-remote-pe1] quit
After the configuration is complete, remote LDP sessions are established between the PEs. Run the display mpls ldp session command on each node. The command output shows that the LDP session status is Operational. PE1 is used as an example.
[PE1] display mpls ldp session
Run the display mpls ldp lsp command to view the label distribution. The command output shows that the PEs have distributed liberal labels to their remote neighbors. In MPLS L2VPN applications that use PWE3 technology, however, these labels are idle and waste system resources.
[PE1] display mpls ldp lsp
Step 5 Disable label distribution to remote LDP peers on the PEs at both ends of each PW.
# Configure PE1.
[PE1] mpls ldp remote-peer pe2
[PE1-mpls-ldp-remote-pe2] remote-ip 5.5.5.5 pwe3
[PE1-mpls-ldp-remote-pe2] quit
[PE1] mpls ldp remote-peer pe3
[PE1-mpls-ldp-remote-pe3] remote-ip 4.4.4.4 pwe3
[PE1-mpls-ldp-remote-pe3] quit
# Configure PE2.
[PE2] mpls ldp remote-peer pe1
[PE2-mpls-ldp-remote-pe1] remote-ip 1.1.1.1 pwe3
[PE2-mpls-ldp-remote-pe1] quit
# Configure PE3.
[PE3] mpls ldp remote-peer pe1
[PE3-mpls-ldp-remote-pe1] remote-ip 1.1.1.1 pwe3
[PE3-mpls-ldp-remote-pe1] quit
After the configuration is complete, the PEs no longer distribute labels to remote LDP peers. Run the display mpls ldp lsp command on each node to view the established LSPs after label distribution to remote peers is disabled. PE1 is used as an example.
[PE1] display mpls ldp lsp
A large number of idle remote labels and their LSPs are removed. The remaining LSPs are established over local LDP sessions.
----End
Configuration Files
l Configuration file of PE1
#
sysname PE1
#
vlan batch 10
#
mpls lsr-id 1.1.1.1
mpls
#
mpls ldp
#
mpls ldp remote-peer pe2
remote-ip 5.5.5.5 pwe3
#
mpls ldp remote-peer pe3
remote-ip 4.4.4.4 pwe3
#
isis 1
is-level level-2
network-entity 86.4501.0010.0100.0001.00
#
interface Vlanif10
ip address 40.1.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface LoopBack0
isis 1
is-level level-2
network-entity 86.4501.0050.0500.0005.00
#
interface Vlanif20
ip address 20.1.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 20
#
interface LoopBack0
ip address 5.5.5.5 255.255.255.255
isis enable 1
#
return
Networking Requirements
As shown in Figure 3-18, the network topology is simple and stable, PEs and P are MPLS
backbone network devices, and LDP LSPs are set up on the backbone network to transmit
network services.
Network services such as VoIP, online gaming, and online video have strict real-time requirements. Packet loss caused by link faults seriously affects these services. Services must be switched quickly to the backup LSP when the primary LSP fails, minimizing packet loss. Static BFD for LDP LSPs is configured to detect LDP LSP faults quickly.
Figure 3-18 Networking diagram of configuring static BFD for LDP LSPs
[Topology:
PE1 (Loopback1 1.1.1.1/32) -- VLANIF10 10.1.1.1/24 <-> 10.1.1.2/24 VLANIF10 -- P1 (Loopback1 2.2.2.2/32) -- VLANIF20 10.2.1.1/24 <-> 10.2.1.2/24 VLANIF20 -- PE2 (Loopback1 4.4.4.4/32): primary LSP
PE1 -- VLANIF30 10.3.1.1/24 <-> 10.3.1.2/24 VLANIF30 -- P2 (Loopback1 3.3.3.3/32) -- VLANIF40 10.4.1.1/24 <-> 10.4.1.2/24 VLANIF40 -- PE2: backup LSP]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure OSPF between the PEs and P to implement IP connectivity on the backbone
network.
2. Configure local LDP sessions on PEs and P so that LDP LSPs can be set up to transmit
network services.
3. Configure static BFD on PEs to fast detect LDP LSPs.
NOTE
Ensure that STP is disabled in this scenario; otherwise, the primary LSP may be unavailable.
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure PE1.
<HUAWEI> system-view
[HUAWEI] sysname PE1
[PE1] interface loopback 1
[PE1-LoopBack1] ip address 1.1.1.1 32
[PE1-LoopBack1] quit
[PE1] vlan batch 10 30
[PE1] interface vlanif 10
The configurations of P1, P2, and PE2 are similar to the configuration of PE1, and are not
mentioned here.
Step 2 Configure OSPF to advertise the network segments of the connected interfaces and the host routes of the LSR IDs on each node.
# Configure PE1.
[PE1] ospf 1
[PE1-ospf-1] area 0
[PE1-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[PE1-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[PE1-ospf-1-area-0.0.0.0] network 10.3.1.0 0.0.0.255
[PE1-ospf-1-area-0.0.0.0] quit
[PE1-ospf-1] quit
The configurations of P1, P2, and PE2 are similar to the configuration of PE1, and are not
mentioned here.
After the configuration is complete, run the display ip routing-table command on each node.
You can see that the nodes learn routes from each other. The outbound interface of the route
from PE1 to PE2 is VLANIF 10.
Step 3 Configure basic MPLS and MPLS LDP functions on each node.
# Configure PE1.
[PE1] mpls lsr-id 1.1.1.1
[PE1] mpls
[PE1-mpls] quit
[PE1] mpls ldp
[PE1-mpls-ldp] quit
[PE1] interface vlanif 10
[PE1-Vlanif10] mpls
[PE1-Vlanif10] mpls ldp
[PE1-Vlanif10] quit
[PE1] interface vlanif 30
[PE1-Vlanif30] mpls
[PE1-Vlanif30] mpls ldp
[PE1-Vlanif30] quit
The configurations of P1, P2, and PE2 are similar to the configuration of PE1, and are not
mentioned here.
# Run the display mpls ldp lsp command. The command output shows that an LDP LSP
destined for 4.4.4.4/32 is set up on PE1.
Step 5 Enable global BFD on the two nodes of the detected link.
# Configure PE1.
[PE1] bfd
[PE1-bfd] quit
# Configure PE2.
[PE2] bfd
[PE2-bfd] quit
Step 6 On PE1, configure a BFD session bound to the LDP LSP. Set the minimum intervals for sending and receiving packets to 100 ms, and enable the session to modify the port status table.
# Configure PE1.
[PE1] bfd pe1tope2 bind ldp-lsp peer-ip 4.4.4.4 nexthop 10.1.1.2 interface vlanif
10
[PE1-bfd-lsp-session-pe1tope2] discriminator local 1
[PE1-bfd-lsp-session-pe1tope2] discriminator remote 2
[PE1-bfd-lsp-session-pe1tope2] min-tx-interval 100
[PE1-bfd-lsp-session-pe1tope2] min-rx-interval 100
[PE1-bfd-lsp-session-pe1tope2] process-pst
[PE1-bfd-lsp-session-pe1tope2] commit
[PE1-bfd-lsp-session-pe1tope2] quit
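With both minimum intervals set to 100 ms, the resulting detection time follows the standard BFD rule (RFC 5880): detection time = remote detect multiplier x max(local min-rx interval, remote min-tx interval). The calculation below assumes a detect multiplier of 3, which is a common default but is not stated in this example, so verify it on the device:

```python
def bfd_detection_time_ms(local_min_rx, remote_min_tx, remote_detect_mult=3):
    """Detection time at the local end, per RFC 5880 section 6.8.4.

    The multiplier default of 3 is an assumption for illustration.
    """
    return remote_detect_mult * max(local_min_rx, remote_min_tx)

# PE1 and PE2 both use 100 ms send/receive intervals:
print(bfd_detection_time_ms(100, 100))  # 300 ms to declare the LSP down
```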
Step 7 On PE2, configure a BFD session that is bound to the IP link to notify PE1 of the detected
faults on the LDP LSP.
# Configure PE2.
[PE2] bfd pe2tope1 bind peer-ip 1.1.1.1
[PE2-bfd-session-pe2tope1] discriminator local 2
[PE2-bfd-session-pe2tope1] discriminator remote 1
[PE2-bfd-session-pe2tope1] min-tx-interval 100
[PE2-bfd-session-pe2tope1] min-rx-interval 100
[PE2-bfd-session-pe2tope1] commit
[PE2-bfd-session-pe2tope1] quit
# Run the display bfd session all command on PE2. The command output shows that the State field is displayed as Up.
[PE2] display bfd session all
--------------------------------------------------------------------------------
Local Remote PeerIpAddr State Type InterfaceName
--------------------------------------------------------------------------------
2 1 1.1.1.1 Up S_IP_PEER -
--------------------------------------------------------------------------------
Total UP/DOWN Session Number : 1/0
----End
Configuration Files
l Configuration file of PE1
#
sysname PE1
#
vlan batch 10 30
#
bfd
#
mpls lsr-id 1.1.1.1
mpls
#
mpls ldp
#
interface Vlanif10
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface Vlanif30
ip address 10.3.1.1 255.255.255.0
ospf cost 1000
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 30
#
interface LoopBack1
ip address 1.1.1.1 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
#
bfd pe1tope2 bind ldp-lsp peer-ip 4.4.4.4 nexthop 10.1.1.2 interface Vlanif 10
discriminator local 1
discriminator remote 2
min-tx-interval 100
min-rx-interval 100
process-pst
commit
#
return
l Configuration file of P1
#
sysname P1
#
vlan batch 10 20
#
mpls lsr-id 2.2.2.2
mpls
#
mpls ldp
#
interface Vlanif10
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Vlanif20
ip address 10.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 20
#
interface LoopBack1
ip address 2.2.2.2 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.2 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.2.1.0 0.0.0.255
#
return
l Configuration file of P2
#
sysname P2
#
vlan batch 30 40
#
mpls lsr-id 3.3.3.3
mpls
#
mpls ldp
#
interface Vlanif30
ip address 10.3.1.2 255.255.255.0
mpls
mpls ldp
#
interface Vlanif40
ip address 10.4.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 30
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 40
#
interface LoopBack1
ip address 3.3.3.3 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.3 0.0.0.0
network 10.3.1.0 0.0.0.255
network 10.4.1.0 0.0.0.255
#
return
l Configuration file of PE2
#
sysname PE2
#
vlan batch 20 40
#
bfd
#
mpls lsr-id 4.4.4.4
mpls
#
mpls ldp
#
interface Vlanif20
ip address 10.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface Vlanif40
ip address 10.4.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 20
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 40
#
interface LoopBack1
ip address 4.4.4.4 255.255.255.255
#
ospf 1
area 0.0.0.0
network 4.4.4.4 0.0.0.0
network 10.2.1.0 0.0.0.255
network 10.4.1.0 0.0.0.255
#
bfd pe2tope1 bind peer-ip 1.1.1.1
discriminator local 2
discriminator remote 1
min-tx-interval 100
min-rx-interval 100
commit
#
return
Networking Requirements
As shown in Figure 3-19, the network topology is complex and unstable. The PEs and Ps are MPLS backbone network devices, and LDP LSPs are set up on the backbone network to transmit network services.
Network services such as VoIP, online gaming, and online video have strict real-time requirements. Packet loss caused by link faults seriously affects these services. Services must be switched quickly to the backup LSP when the primary LSP fails, minimizing packet loss. Dynamic BFD for LDP LSPs is configured to detect LDP LSP faults quickly.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure OSPF between the PEs and P to implement IP connectivity on the backbone
network.
2. Configure local LDP sessions on PEs and P so that LDP LSPs can be set up to transmit
network services.
3. Configure dynamic BFD on PEs to fast detect LDP LSPs.
NOTE
Ensure that STP is disabled in this scenario; otherwise, the primary LSP may be unavailable.
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure PE1.
<HUAWEI> system-view
[HUAWEI] sysname PE1
[PE1] interface loopback 1
[PE1-LoopBack1] ip address 1.1.1.1 32
[PE1-LoopBack1] quit
[PE1] interface gigabitethernet 0/0/1
[PE1-GigabitEthernet0/0/1] port link-type trunk
[PE1-GigabitEthernet0/0/1] port trunk allow-pass vlan 10
[PE1-GigabitEthernet0/0/1] quit
[PE1] interface gigabitethernet 0/0/2
[PE1-GigabitEthernet0/0/2] port link-type trunk
[PE1-GigabitEthernet0/0/2] port trunk allow-pass vlan 30
[PE1-GigabitEthernet0/0/2] quit
[PE1] vlan batch 10 30
[PE1] interface vlanif 10
[PE1-Vlanif10] ip address 10.1.1.1 24
[PE1-Vlanif10] quit
[PE1] interface vlanif 30
[PE1-Vlanif30] ip address 10.3.1.1 24
[PE1-Vlanif30] quit
The configurations of P1, P2, and PE2 are similar to the configuration of PE1, and are not
mentioned here.
Step 2 Configure OSPF on each node to advertise the network segments of the interconnected interfaces and the host routes of the LSR IDs (loopback interface addresses).
# Configure PE1.
[PE1] ospf 1
[PE1-ospf-1] area 0
[PE1-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[PE1-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[PE1-ospf-1-area-0.0.0.0] network 10.3.1.0 0.0.0.255
[PE1-ospf-1-area-0.0.0.0] quit
[PE1-ospf-1] quit
The configurations of P1, P2, and PE2 are similar to the configuration of PE1, and are not
mentioned here.
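In each network statement above, the second value is a wildcard mask, the bitwise inverse of the subnet mask: 0.0.0.255 covers a /24 segment and 0.0.0.0 matches a single host route. A small illustrative sketch of the translation (IPv4 only, assumed helper name):

```python
import ipaddress

def ospf_network_to_cidr(address, wildcard):
    """Convert an OSPF `network <address> <wildcard>` pair to CIDR
    notation. The wildcard mask is the bitwise inverse of the netmask."""
    netmask = ipaddress.ip_address(
        int(ipaddress.ip_address(wildcard)) ^ 0xFFFFFFFF)
    return str(ipaddress.ip_network(f"{address}/{netmask}"))

print(ospf_network_to_cidr("10.1.1.0", "0.0.0.255"))  # 10.1.1.0/24
print(ospf_network_to_cidr("1.1.1.1", "0.0.0.0"))     # 1.1.1.1/32
```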
Step 3 Set the cost of VLANIF 30 on PE1 to 1000.
[PE1] interface vlanif 30
[PE1-Vlanif30] ospf cost 1000
[PE1-Vlanif30] quit
After the configuration is complete, run the display ip routing-table command on each node.
The command output shows that the nodes have learned routes from each other. The outbound interface of the route from PE1 to PE2 is VLANIF 10.
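Raising the cost of VLANIF 30 to 1000 steers OSPF's shortest-path computation away from the PE1-P2 link. A toy Dijkstra over the example's link costs illustrates the effect (costs other than the configured 1000 are assumed to be the OSPF default of 1):

```python
import heapq

# Hypothetical link costs mirroring the example topology: the PE1-P2
# link carries the configured cost 1000; all other links use cost 1.
GRAPH = {
    "PE1": {"P1": 1, "P2": 1000},
    "P1": {"PE1": 1, "PE2": 1},
    "P2": {"PE1": 1000, "PE2": 1},
    "PE2": {"P1": 1, "P2": 1},
}

def shortest_path(graph, src, dst):
    """Plain Dijkstra: returns (total_cost, node_list) of the best path."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, link_cost in graph[node].items():
            if nbr not in seen:
                heapq.heappush(heap, (cost + link_cost, nbr, path + [nbr]))
    return float("inf"), []

print(shortest_path(GRAPH, "PE1", "PE2"))  # (2, ['PE1', 'P1', 'PE2'])
```

Because the path through P1 costs 2 while the direct PE1-P2 link costs 1000, traffic prefers the P1 path, which is why VLANIF 10 becomes the outbound interface.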
Step 4 Configure local LDP sessions.
# Configure PE1.
[PE1] mpls lsr-id 1.1.1.1
[PE1] mpls
[PE1-mpls] quit
[PE1] mpls ldp
[PE1-mpls-ldp] quit
[PE1] interface vlanif 10
[PE1-Vlanif10] mpls
[PE1-Vlanif10] mpls ldp
[PE1-Vlanif10] quit
[PE1] interface vlanif 30
[PE1-Vlanif30] mpls
[PE1-Vlanif30] mpls ldp
[PE1-Vlanif30] quit
The configurations of P1, P2, and PE2 are similar to the configuration of PE1, and are not
mentioned here.
# Run the display mpls ldp lsp command. The command output shows that an LDP LSP
destined for 4.4.4.4/32 is set up on PE1.
[PE1] display mpls ldp lsp
Step 5 Configure dynamic BFD to detect the connectivity of the LDP LSP between PE1 and PE2.
# Configure an FEC list on PE1 to ensure that BFD detects only the connectivity of the LDP
LSP between PE1 and PE2.
[PE1] fec-list tortc
[PE1-fec-list-tortc] fec-node 4.4.4.4
[PE1-fec-list-tortc] quit
# Enable BFD on PE1, specify the FEC list that triggers BFD session establishment
dynamically, and adjust BFD parameters.
[PE1] bfd
[PE1-bfd] quit
[PE1] mpls
[PE1-mpls] mpls bfd-trigger fec-list tortc
[PE1-mpls] mpls bfd enable
# Check the status of the BFD session created dynamically on PE2. The command output
shows that the State field is displayed as Up.
[PE2] display bfd session passive-dynamic
--------------------------------------------------------------------------------
Local Remote PeerIpAddr State Type InterfaceName
--------------------------------------------------------------------------------
8192 8192 1.1.1.1 Up E_Dynamic -
--------------------------------------------------------------------------------
Total UP/DOWN Session Number : 1/0
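With both ends sending at min-tx/min-rx 100 ms, the time to declare the LSP down is the detect multiplier times the negotiated transmit interval (per RFC 5880). A sketch of that arithmetic, assuming a default multiplier of 3 (the multiplier is not shown in this example):

```python
def bfd_detection_time_ms(local_min_tx, remote_min_rx, remote_detect_mult):
    """The remote end declares the session down after detect-multiplier
    consecutive packets are missed; the negotiated transmit interval is
    the larger of the local min-tx and the remote min-rx."""
    return remote_detect_mult * max(local_min_tx, remote_min_rx)

# 100 ms intervals with an assumed multiplier of 3: roughly 300 ms to
# detect an LSP failure, far faster than waiting for IGP convergence.
print(bfd_detection_time_ms(100, 100, 3))  # 300
```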
----End
Configuration Files
l Configuration file of PE1
#
sysname PE1
#
vlan batch 10 30
#
bfd
#
mpls lsr-id 1.1.1.1
mpls
mpls bfd enable
mpls bfd-trigger fec-list tortc
mpls bfd min-tx-interval 100 min-rx-interval 100
#
fec-list tortc
fec-node 4.4.4.4
#
mpls ldp
#
interface Vlanif10
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface Vlanif30
ip address 10.3.1.1 255.255.255.0
ospf cost 1000
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
Networking Requirements
As shown in Figure 3-20, P1, P2, P3, and PE2 reside on the MPLS backbone network, and OSPF runs between the devices. Two LSPs are set up between PE1 and PE2 to transmit services: a primary LSP (PE1 -> P1 -> P2 -> PE2) and a backup LSP (PE1 -> P1 -> P3 -> PE2). After the primary link recovers, the IGP route over the primary link becomes active before an LDP session is reestablished over that link. As a result, traffic is dropped while devices attempt to use the LSP that is not yet ready. Even a short interruption of delay-sensitive services such as VoIP, online gaming, and online video is unacceptable, so this MPLS traffic loss must be prevented.
Figure 3-20 Networking diagram for configuring synchronization between LDP and IGP
(Figure: P1 (Loopback1 1.1.1.9/32) connects to P2 (Loopback1 2.2.2.9/32) over the primary link (VLANIF 10, 10.1.1.0/24) and to P3 (Loopback1 3.3.3.9/32) over the backup link (VLANIF 30, 10.3.1.0/24). P2 connects to PE2 (Loopback1 4.4.4.9/32) over VLANIF 20 (10.2.1.0/24), and P3 connects to PE2 over VLANIF 40 (10.4.1.0/24).)
Configuration Roadmap
To meet the preceding requirements, configure synchronization between LDP and IGP. The
configuration roadmap is as follows:
NOTE
Ensure that STP is disabled in this scenario; otherwise, the primary LSP may be unavailable.
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure P1.
<HUAWEI> system-view
[HUAWEI] sysname P1
[P1] interface loopback 1
[P1-LoopBack1] ip address 1.1.1.9 32
[P1-LoopBack1] quit
[P1] vlan batch 10 30
[P1] interface vlanif 10
[P1-Vlanif10] ip address 10.1.1.1 24
[P1-Vlanif10] quit
[P1] interface vlanif 30
[P1-Vlanif30] ip address 10.3.1.1 24
[P1-Vlanif30] quit
[P1] interface gigabitethernet 0/0/1
[P1-GigabitEthernet0/0/1] port link-type trunk
[P1-GigabitEthernet0/0/1] port trunk allow-pass vlan 10
[P1-GigabitEthernet0/0/1] quit
[P1] interface gigabitethernet 0/0/2
[P1-GigabitEthernet0/0/2] port link-type trunk
[P1-GigabitEthernet0/0/2] port trunk allow-pass vlan 30
[P1-GigabitEthernet0/0/2] quit
The configurations of P2, P3, and PE2 are similar to the configuration of P1, and are not
mentioned here.
Step 2 Configure OSPF on each node to advertise the network segments of the interconnected interfaces and the host routes of the LSR IDs (loopback interface addresses).
# Configure P1.
[P1] ospf 1
[P1-ospf-1] area 0
[P1-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[P1-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[P1-ospf-1-area-0.0.0.0] network 10.3.1.0 0.0.0.255
[P1-ospf-1-area-0.0.0.0] quit
[P1-ospf-1] quit
The configurations of P2, P3, and PE2 are similar to the configuration of P1, and are not
mentioned here.
Step 3 Set the cost of VLANIF 30 on P1 to 1000.
[P1] interface vlanif 30
[P1-Vlanif30] ospf cost 1000
[P1-Vlanif30] quit
After the configuration is complete, run the display ip routing-table command on each node.
The command output shows that the nodes have learned routes from each other. The outbound interface of the P1-to-PE2 route is VLANIF 10.
Step 4 Enable MPLS and MPLS LDP on each node and each interface.
# Configure P1.
[P1] mpls lsr-id 1.1.1.9
[P1] mpls
[P1-mpls] quit
[P1] mpls ldp
[P1-mpls-ldp] quit
[P1] interface vlanif 10
[P1-Vlanif10] mpls
[P1-Vlanif10] mpls ldp
[P1-Vlanif10] quit
[P1] interface vlanif 30
[P1-Vlanif30] mpls
[P1-Vlanif30] mpls ldp
[P1-Vlanif30] quit
# Configure P2.
[P2] mpls lsr-id 2.2.2.9
[P2] mpls
[P2-mpls] quit
[P2] mpls ldp
[P2-mpls-ldp] quit
[P2] interface vlanif 10
[P2-Vlanif10] mpls
[P2-Vlanif10] mpls ldp
[P2-Vlanif10] quit
[P2] interface vlanif 20
[P2-Vlanif20] mpls
[P2-Vlanif20] mpls ldp
[P2-Vlanif20] quit
# Configure P3.
[P3] mpls lsr-id 3.3.3.9
[P3] mpls
[P3-mpls] quit
[P3] mpls ldp
[P3-mpls-ldp] quit
[P3] interface vlanif 30
[P3-Vlanif30] mpls
[P3-Vlanif30] mpls ldp
[P3-Vlanif30] quit
[P3] interface vlanif 40
[P3-Vlanif40] mpls
[P3-Vlanif40] mpls ldp
[P3-Vlanif40] quit
# Configure PE2.
[PE2] mpls lsr-id 4.4.4.9
[PE2] mpls
[PE2-mpls] quit
[PE2] mpls ldp
[PE2-mpls-ldp] quit
[PE2] interface vlanif 20
[PE2-Vlanif20] mpls
[PE2-Vlanif20] mpls ldp
[PE2-Vlanif20] quit
[PE2] interface vlanif 40
[PE2-Vlanif40] mpls
[PE2-Vlanif40] mpls ldp
[PE2-Vlanif40] quit
After the configuration is complete, LDP sessions are established between neighboring nodes.
Run the display mpls ldp session command on each node. The command output shows that
the LDP session status is Operational. Use the display on P1 as an example.
[P1] display mpls ldp session
Step 5 Enable synchronization between LDP and IGP on the interfaces at both ends of the link
between P1 and P2.
# Configure P1.
[P1] interface vlanif 10
[P1-Vlanif10] ospf ldp-sync
[P1-Vlanif10] quit
# Configure P2.
[P2] interface vlanif 10
[P2-Vlanif10] ospf ldp-sync
[P2-Vlanif10] quit
Step 6 Set the Hold-down timer value on the interfaces at both ends of the link between P1 and P2.
# Configure P1.
[P1] interface vlanif 10
[P1-Vlanif10] ospf timer ldp-sync hold-down 8
[P1-Vlanif10] quit
# Configure P2.
[P2] interface vlanif 10
[P2-Vlanif10] ospf timer ldp-sync hold-down 8
[P2-Vlanif10] quit
Step 7 Set the Hold-max-cost timer value on the interfaces at both ends of the link between P1 and P2.
# Configure P1.
[P1] interface vlanif 10
[P1-Vlanif10] ospf timer ldp-sync hold-max-cost 9
[P1-Vlanif10] quit
# Configure P2.
[P2] interface vlanif 10
[P2-Vlanif10] ospf timer ldp-sync hold-max-cost 9
[P2-Vlanif10] quit
Step 8 Set the Delay timer value on the interfaces at both ends of the link between P1 and P2.
# Configure P1.
[P1] interface vlanif 10
[P1-Vlanif10] mpls ldp timer igp-sync-delay 6
[P1-Vlanif10] quit
# Configure P2.
[P2] interface vlanif 10
[P2-Vlanif10] mpls ldp timer igp-sync-delay 6
[P2-Vlanif10] quit
Run the display ospf ldp-sync command on P1. The command output shows that the
interface status is Sync-Achieved.
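The three timers above interact roughly as follows: after the link comes up, OSPF holds the adjacency back for the Hold-down time (8 s here), then advertises the link with the maximum cost until LDP synchronization completes or the Hold-max-cost time (9 s) expires; the Delay timer (6 s) gives LDP time to distribute labels before notifying the IGP. A deliberately simplified model of the advertised cost over time (illustrative only, not the device's exact state machine; 65535 is an assumed maximum metric):

```python
def advertised_cost(t, ldp_up_at, normal_cost=1, max_cost=65535,
                    hold_down=8, hold_max_cost=9):
    """Simplified LDP-IGP sync model for one interface with link-up at
    t=0 seconds. During hold-down the adjacency is not advertised at
    all; afterwards the link carries the maximum cost until the LDP
    session is ready or the hold-max-cost window expires."""
    if t < hold_down:
        return None                      # adjacency held back
    if t >= ldp_up_at:                   # LDP converged: normal cost
        return normal_cost
    if t < hold_down + hold_max_cost:    # still waiting for LDP
        return max_cost
    return normal_cost                   # timers expired, fall back

print(advertised_cost(9, ldp_up_at=20))  # 65535: LDP not yet ready
```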
----End
Configuration Files
l Configuration file of P1
#
sysname P1
#
vlan batch 10 30
#
mpls lsr-id 1.1.1.9
mpls
#
mpls ldp
#
interface Vlanif10
ip address 10.1.1.1 255.255.255.0
ospf ldp-sync
ospf timer ldp-sync hold-down 8
ospf timer ldp-sync hold-max-cost 9
mpls
mpls ldp
mpls ldp timer igp-sync-delay 6
#
interface Vlanif30
ip address 10.3.1.1 255.255.255.0
ospf cost 1000
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 30
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.3.1.0 0.0.0.255
#
return
l Configuration file of P2
#
sysname P2
#
vlan batch 10 20
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
#
interface Vlanif10
ip address 10.1.1.2 255.255.255.0
ospf ldp-sync
l Configuration file of P3
#
sysname P3
#
vlan batch 30 40
#
mpls lsr-id 3.3.3.9
mpls
#
mpls ldp
#
interface Vlanif30
ip address 10.3.1.2 255.255.255.0
mpls
mpls ldp
#
interface Vlanif40
ip address 10.4.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 30
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 40
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 10.3.1.0 0.0.0.255
network 10.4.1.0 0.0.0.255
#
return
Configuration Roadmap
To meet the preceding requirements, configure LDP GR. The configuration roadmap is as
follows:
1. Configure OSPF on LSRs to implement IP connectivity on the backbone network.
2. Configure local LDP sessions on LSRs so that LDP LSPs can be set up to transmit
network services.
3. Configure LDP GR on LSRs to prevent short-time traffic interruption.
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] interface loopback 0
[LSRA-LoopBack0] ip address 1.1.1.1 32
[LSRA-LoopBack0] quit
[LSRA] vlan batch 10
[LSRA] interface vlanif 10
[LSRA-Vlanif10] ip address 10.1.1.1 24
[LSRA-Vlanif10] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 10
[LSRA-GigabitEthernet0/0/1] quit
The configurations of LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here.
Step 2 Configure OSPF on each node to advertise the network segments of the interconnected interfaces and the host routes of the LSR IDs (loopback interface addresses).
# Configure LSRA.
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
# Configure LSRB.
[LSRB] ospf 1
[LSRB-ospf-1] area 0
[LSRB-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[LSRB-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[LSRB-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[LSRB-ospf-1-area-0.0.0.0] quit
[LSRB-ospf-1] quit
# Configure LSRC.
[LSRC] ospf 1
[LSRC-ospf-1] area 0
[LSRC-ospf-1-area-0.0.0.0] network 3.3.3.3 0.0.0.0
[LSRC-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[LSRC-ospf-1-area-0.0.0.0] quit
[LSRC-ospf-1] quit
After the configuration is complete, run the display ip routing-table command on each node. The command output shows that the nodes have learned routes from each other.
Step 3 Configure OSPF GR.
# Configure LSRA.
[LSRA] ospf 1
[LSRA-ospf-1] opaque-capability enable
[LSRA-ospf-1] graceful-restart
[LSRA-ospf-1] quit
# Configure LSRB.
[LSRB] ospf 1
[LSRB-ospf-1] opaque-capability enable
[LSRB-ospf-1] graceful-restart
[LSRB-ospf-1] quit
# Configure LSRC.
[LSRC] ospf 1
[LSRC-ospf-1] opaque-capability enable
[LSRC-ospf-1] graceful-restart
[LSRC-ospf-1] quit
The configurations of LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here.
After the configuration is complete, local LDP sessions are established between LSRA and
LSRB, and between LSRB and LSRC.
Run the display mpls ldp session command on each node to check the LDP session status. LSRA is used as an example.
[LSRA] display mpls ldp session
# Configure LSRA.
[LSRA] mpls ldp
[LSRA-mpls-ldp] graceful-restart
Warning: All the related sessions will be deleted if the operation is performed
!Continue? (y/n)y
[LSRA-mpls-ldp] quit
# Configure LSRB.
[LSRB] mpls ldp
[LSRB-mpls-ldp] graceful-restart
Warning: All the related sessions will be deleted if the operation is performed
!Continue? (y/n)y
[LSRB-mpls-ldp] quit
# Configure LSRC.
[LSRC] mpls ldp
[LSRC-mpls-ldp] graceful-restart
Warning: All the related sessions will be deleted if the operation is performed
!Continue? (y/n)y
[LSRC-mpls-ldp] quit
# Run the display mpls ldp peer verbose command on the LSRs. The command output shows that the Peer FT Flag field is displayed as On. LSRA is used as an example.
[LSRA] display mpls ldp peer verbose
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 10
#
mpls lsr-id 1.1.1.1
mpls
#
mpls ldp
graceful-restart
#
interface Vlanif10
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
ospf 1
opaque-capability enable
graceful-restart
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
Figure 3-22 Networking diagram for configuring the LDP inbound policy
GE0/0/1
10.1.3.1/24
MPLS Network
VLANIF30
LSRD
Configuration Roadmap
To meet the preceding requirements, configure an LDP inbound policy. The configuration
roadmap is as follows:
1. Configure OSPF on LSRs to implement IP connectivity on the backbone network.
2. Configure local LDP sessions on the LSRs so that LDP LSPs can be set up.
3. Configure an LDP inbound policy so that LSRD accepts, from LSRB, only Label
Mapping messages for the route to LSRC. This setting reduces the number of
LSPs on LSRD and saves memory resources.
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, and configure IP addresses for the
VLANIF interfaces.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] interface loopback 1
[LSRA-LoopBack1] ip address 1.1.1.9 32
[LSRA-LoopBack1] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 10
[LSRA-GigabitEthernet0/0/1] quit
[LSRA] vlan 10
[LSRA-vlan10] quit
[LSRA] interface vlanif 10
[LSRA-Vlanif10] ip address 10.1.1.1 24
[LSRA-Vlanif10] quit
The configurations of LSRB, LSRC, and LSRD are similar to the configuration of LSRA, and
are not mentioned here.
Step 2 Configure OSPF on each node to advertise the network segments of the interconnected interfaces and the host routes of the LSR IDs (loopback interface addresses).
# Configure LSRA.
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
The configurations of LSRB, LSRC, and LSRD are similar to the configuration of LSRA, and
are not mentioned here.
Step 3 Configure local LDP sessions.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] quit
[LSRA] mpls ldp
[LSRA-mpls-ldp] quit
[LSRA] interface vlanif 10
[LSRA-Vlanif10] mpls
[LSRA-Vlanif10] mpls ldp
[LSRA-Vlanif10] quit
# Configure LSRB.
[LSRB] mpls lsr-id 2.2.2.9
[LSRB] mpls
[LSRB-mpls] quit
[LSRB] mpls ldp
[LSRB-mpls-ldp] quit
[LSRB] interface vlanif 10
[LSRB-Vlanif10] mpls
[LSRB-Vlanif10] mpls ldp
[LSRB-Vlanif10] quit
[LSRB] interface vlanif 20
[LSRB-Vlanif20] mpls
[LSRB-Vlanif20] mpls ldp
[LSRB-Vlanif20] quit
[LSRB] interface vlanif 30
[LSRB-Vlanif30] mpls
[LSRB-Vlanif30] mpls ldp
[LSRB-Vlanif30] quit
# Configure LSRC.
[LSRC] mpls lsr-id 3.3.3.9
[LSRC] mpls
[LSRC-mpls] quit
[LSRC] mpls ldp
[LSRC-mpls-ldp] quit
[LSRC] interface vlanif 20
[LSRC-Vlanif20] mpls
[LSRC-Vlanif20] mpls ldp
[LSRC-Vlanif20] quit
# Configure LSRD.
[LSRD] mpls lsr-id 4.4.4.9
[LSRD] mpls
[LSRD-mpls] quit
[LSRD] mpls ldp
[LSRD-mpls-ldp] quit
[LSRD] interface vlanif 30
[LSRD-Vlanif30] mpls
[LSRD-Vlanif30] mpls ldp
[LSRD-Vlanif30] quit
# After the configuration is complete, run the display mpls lsp command on LSRD to view
the established LSP.
[LSRD] display mpls lsp
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.1.1.9/32 NULL/1024 -/Vlanif30
1.1.1.9/32 1024/1024 -/Vlanif30
2.2.2.9/32 NULL/3 -/Vlanif30
2.2.2.9/32 1025/3 -/Vlanif30
3.3.3.9/32 NULL/1025 -/Vlanif30
3.3.3.9/32 1026/1025 -/Vlanif30
4.4.4.9/32 3/NULL -/-
The command output shows that the LSPs from LSRD to LSRA, LSRB, and LSRC are
established.
Step 4 Configure an LDP inbound policy.
# Configure an IP prefix list on LSRD to permit only the route to LSRC (3.3.3.9/32).
[LSRD] ip ip-prefix prefix1 permit 3.3.3.9 32
# Configure the LDP inbound policy on LSRD so that LSRD accepts, from LSRB (2.2.2.9), only Label Mapping messages that match the prefix list.
[LSRD] mpls ldp
[LSRD-mpls-ldp] inbound peer 2.2.2.9 fec ip-prefix prefix1
[LSRD-mpls-ldp] quit
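The effect of the inbound policy can be sketched as a filter on incoming Label Mapping messages: mappings from peer 2.2.2.9 (LSRB) are kept only when the FEC exactly matches the /32 permitted by prefix1. An illustrative Python sketch (not device code):

```python
import ipaddress

# ip-prefix "prefix1" permits only 3.3.3.9/32 (LSRC's LSR ID).
PREFIX1 = {ipaddress.ip_network("3.3.3.9/32")}

def accept_mapping(peer, fec, policy_peer="2.2.2.9", permitted=PREFIX1):
    """Apply the inbound policy: only the configured peer is filtered,
    and its mappings pass only on an exact prefix-list match."""
    if peer != policy_peer:
        return True
    return ipaddress.ip_network(fec) in permitted

print(accept_mapping("2.2.2.9", "3.3.3.9/32"))  # True
print(accept_mapping("2.2.2.9", "1.1.1.9/32"))  # False: filtered out
```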
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 10
#
mpls lsr-id 1.1.1.9
mpls
#
mpls ldp
#
interface Vlanif10
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
#
sysname LSRB
#
vlan batch 10 20 30
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
#
interface Vlanif10
ip address 10.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Vlanif20
ip address 10.1.2.1 255.255.255.0
mpls
mpls ldp
#
interface Vlanif30
ip address 10.1.3.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 20
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 30
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 10.1.1.0 0.0.0.255
network 10.1.2.0 0.0.0.255
network 10.1.3.0 0.0.0.255
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 20
#
mpls lsr-id 3.3.3.9
mpls
#
mpls ldp
#
interface Vlanif20
ip address 10.1.2.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 20
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 10.1.2.0 0.0.0.255
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 30
#
mpls lsr-id 4.4.4.9
mpls
#
mpls ldp
inbound peer 2.2.2.9 fec ip-prefix prefix1
#
interface Vlanif30
ip address 10.1.3.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 30
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 10.1.3.0 0.0.0.255
#
ip ip-prefix prefix1 index 10 permit 3.3.3.9 32
#
return
(Figure: On the IP/MPLS backbone network, CE_1 connects to PE_1 (Loopback1 1.1.1.9/32; GE0/0/1, VLANIF100, 172.1.1.1/24). PE_1 connects to the P (Loopback1 2.2.2.9/32; GE0/0/1, VLANIF100, 172.1.1.2/24; GE0/0/2, VLANIF200, 172.2.1.1/24). The P connects to PE_2 (Loopback1 3.3.3.9/32; GE0/0/1, VLANIF200, 172.2.1.2/24), which connects to CE_2.)
Configuration Roadmap
To meet the security requirements of LDP sessions, configure LDP Keychain authentication
between PE_1 and the P and between PE_2 and the P. The configuration roadmap is as
follows:
1. Configure OSPF between the PEs and P to implement IP connectivity on the backbone
network.
2. Configure local LDP sessions on PEs and P so that LDP LSPs can be set up to transmit
network services.
3. Configure LDP Keychain authentication between PE_1 and the P and between PE_2 and
the P to meet high security requirements.
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure PE_1.
<HUAWEI> system-view
[HUAWEI] sysname PE_1
[PE_1] interface loopback 1
[PE_1-LoopBack1] ip address 1.1.1.9 32
[PE_1-LoopBack1] quit
[PE_1] vlan batch 100
[PE_1] interface vlanif 100
[PE_1-Vlanif100] ip address 172.1.1.1 24
[PE_1-Vlanif100] quit
[PE_1] interface gigabitethernet 0/0/1
[PE_1-GigabitEthernet0/0/1] port link-type trunk
The configurations of the P and PE_2 are similar to the configuration of PE_1, and are not mentioned here.
Step 2 Configure OSPF on each node to advertise the network segments of the interconnected interfaces and the host routes of the LSR IDs (loopback interface addresses).
# Configure PE_1.
[PE_1] ospf 1
[PE_1-ospf-1] area 0
[PE_1-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[PE_1-ospf-1-area-0.0.0.0] network 172.1.1.0 0.0.0.255
[PE_1-ospf-1-area-0.0.0.0] quit
[PE_1-ospf-1] quit
The configurations of the P and PE_2 are similar to the configuration of PE_1, and are not mentioned here.
After the configuration is complete, run the display ip routing-table command on each node. The command output shows that the nodes have learned routes from each other.
Step 3 Configure local LDP sessions.
# Configure PE_1.
[PE_1] mpls lsr-id 1.1.1.9
[PE_1] mpls
[PE_1-mpls] quit
[PE_1] mpls ldp
[PE_1-mpls-ldp] quit
[PE_1] interface vlanif 100
[PE_1-Vlanif100] mpls
[PE_1-Vlanif100] mpls ldp
[PE_1-Vlanif100] quit
The configurations of the P and PE_2 are similar to the configuration of PE_1, and are not mentioned here.
Step 4 Configure Keychain.
# Configure PE_1.
[PE_1] keychain kforldp1 mode periodic weekly
[PE_1-keychain-kforldp1] tcp-kind 180
[PE_1-keychain-kforldp1] tcp-algorithm-id sha-256 8
[PE_1-keychain-kforldp1] receive-tolerance 15
[PE_1-keychain-kforldp1] key-id 1
[PE_1-keychain-kforldp1-keyid-1] algorithm sha-256
[PE_1-keychain-kforldp1-keyid-1] key-string cipher huaweiwork
[PE_1-keychain-kforldp1-keyid-1] send-time day mon to thu
[PE_1-keychain-kforldp1-keyid-1] receive-time day mon to thu
[PE_1-keychain-kforldp1-keyid-1] quit
[PE_1-keychain-kforldp1] key-id 2
[PE_1-keychain-kforldp1-keyid-2] algorithm sha-256
[PE_1-keychain-kforldp1-keyid-2] key-string cipher testpass
[PE_1-keychain-kforldp1-keyid-2] send-time day fri to sun
[PE_1-keychain-kforldp1-keyid-2] receive-time day fri to sun
[PE_1-keychain-kforldp1-keyid-2] quit
[PE_1-keychain-kforldp1] quit
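In periodic weekly mode, key-id 1 is valid Monday through Thursday and key-id 2 Friday through Sunday, so the TCP authentication key rotates automatically by weekday. A sketch of how a sender would pick the active key (illustrative names and plaintext secrets from the example commands; real devices store them encrypted):

```python
# Mirrors the kforldp1 keychain: send-time/receive-time split by weekday.
KEYS = {
    1: {"days": {"mon", "tue", "wed", "thu"}, "secret": "huaweiwork"},
    2: {"days": {"fri", "sat", "sun"}, "secret": "testpass"},
}

def active_key(weekday):
    """Return (key-id, secret) whose send-time covers the given weekday."""
    for key_id, key in KEYS.items():
        if weekday in key["days"]:
            return key_id, key["secret"]
    raise LookupError("no key valid on " + weekday)

print(active_key("wed"))  # (1, 'huaweiwork')
```

Because both peers configure the same split and a receive tolerance (receive-tolerance 15), sessions survive small clock differences across the weekly rollover.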
# Configure the P.
[P] keychain kforldp1 mode periodic weekly
[P-keychain-kforldp1] tcp-kind 180
# Configure PE_1.
[PE_1] mpls ldp
[PE_1-mpls-ldp] authentication key-chain peer 2.2.2.9 name kforldp1
[PE_1-mpls-ldp] quit
# Configure the P.
[P] mpls ldp
[P-mpls-ldp] authentication key-chain peer 1.1.1.9 name kforldp1
[P-mpls-ldp] quit
# Configure PE_2.
[PE_2] keychain kforldp2 mode periodic weekly
[PE_2-keychain-kforldp2] tcp-kind 180
[PE_2-keychain-kforldp2] tcp-algorithm-id sha-256 8
[PE_2-keychain-kforldp2] receive-tolerance 15
[PE_2-keychain-kforldp2] key-id 1
[PE_2-keychain-kforldp2-keyid-1] algorithm sha-256
[PE_2-keychain-kforldp2-keyid-1] key-string cipher huaweiwork
[PE_2-keychain-kforldp2-keyid-1] send-time day mon to thu
[PE_2-keychain-kforldp2-keyid-1] receive-time day mon to thu
[PE_2-keychain-kforldp2-keyid-1] quit
[PE_2-keychain-kforldp2] key-id 2
[PE_2-keychain-kforldp2-keyid-2] algorithm sha-256
[PE_2-keychain-kforldp2-keyid-2] key-string cipher testpass
[PE_2-keychain-kforldp2-keyid-2] send-time day fri to sun
[PE_2-keychain-kforldp2-keyid-2] receive-time day fri to sun
[PE_2-keychain-kforldp2-keyid-2] quit
[PE_2-keychain-kforldp2] quit
# Configure the P.
[P] keychain kforldp2 mode periodic weekly
[P-keychain-kforldp2] tcp-kind 180
[P-keychain-kforldp2] tcp-algorithm-id sha-256 8
[P-keychain-kforldp2] receive-tolerance 15
[P-keychain-kforldp2] key-id 1
[P-keychain-kforldp2-keyid-1] algorithm sha-256
[P-keychain-kforldp2-keyid-1] key-string cipher huaweiwork
[P-keychain-kforldp2-keyid-1] send-time day mon to thu
[P-keychain-kforldp2-keyid-1] receive-time day mon to thu
[P-keychain-kforldp2-keyid-1] quit
[P-keychain-kforldp2] key-id 2
[P-keychain-kforldp2-keyid-2] algorithm sha-256
[P-keychain-kforldp2-keyid-2] key-string cipher testpass
[P-keychain-kforldp2-keyid-2] send-time day fri to sun
[P-keychain-kforldp2-keyid-2] receive-time day fri to sun
[P-keychain-kforldp2-keyid-2] quit
[P-keychain-kforldp2] quit
# Configure the P.
[P] mpls ldp
[P-mpls-ldp] authentication key-chain peer 3.3.3.9 name kforldp2
[P-mpls-ldp] quit
----End
Configuration Files
l Configuration file of PE_1
#
sysname PE_1
#
vlan batch 100
#
mpls lsr-id 1.1.1.9
mpls
#
mpls ldp
authentication key-chain peer 2.2.2.9 name kforldp1
#
keychain kforldp1 mode periodic weekly
receive-tolerance 15
tcp-kind 180
#
key-id 1
algorithm sha-256
key-string cipher @%@%&Fk0W2VMD'IN*GAqQS20R*6}@%@%
send-time day mon to thu
receive-time day mon to thu
#
key-id 2
algorithm sha-256
key-string cipher @%@%p7gySOm*BNl=LS.o.!;.&gW5@%@%
send-time day fri to sun
receive-time day fri to sun
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
#
return
l Configuration file of P
#
sysname P
#
vlan batch 100 200
#
mpls lsr-id 2.2.2.9
mpls
#
mpls ldp
authentication key-chain peer 1.1.1.9 name kforldp1
authentication key-chain peer 3.3.3.9 name kforldp2
#
keychain kforldp1 mode periodic weekly
receive-tolerance 15
tcp-kind 180
#
key-id 1
algorithm sha-256
key-string cipher @%@%l'7C8D71T$.[UxEvuRSOuSGc@%@%
send-time day mon to thu
receive-time day mon to thu
#
key-id 2
algorithm sha-256
key-string cipher @%@%RBtnHmbv%'obk\Sx/VnAuS7Y@%@%
send-time day fri to sun
receive-time day fri to sun
#
keychain kforldp2 mode periodic weekly
receive-tolerance 15
tcp-kind 180
#
key-id 1
algorithm sha-256
key-string cipher @%@%^;2E~Wr.,*\(x$Jmxg&!,4KJ@%@%
send-time day mon to thu
receive-time day mon to thu
#
key-id 2
algorithm sha-256
key-string cipher @%@%tT.TBtoDmLXzJNPL9@fR,4U.@%@%
send-time day fri to sun
receive-time day fri to sun
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
mpls
mpls ldp
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
#
return
l Configuration file of PE_2
#
sysname PE_2
#
vlan batch 200
#
mpls lsr-id 3.3.3.9
mpls
#
mpls ldp
authentication key-chain peer 2.2.2.9 name kforldp2
#
keychain kforldp2 mode periodic weekly
receive-tolerance 15
tcp-kind 180
#
key-id 1
algorithm sha-256
key-string cipher @%@%KY%A-5HYPU3\Ju/3bdS<,1P1@%@%
send-time day mon to thu
receive-time day mon to thu
#
key-id 2
algorithm sha-256
key-string cipher @%@%%/aU'CP~m>6Dqb:u.(k1,1Z|@%@%
send-time day fri to sun
receive-time day fri to sun
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
#
return
Networking Requirements
On the MPLS network shown in Figure 3-24, MPLS and MPLS LDP run between every two nodes. Attackers may forge LDP unicast packets and send them to LSRB. LSRB then becomes busy processing these packets, causing high CPU usage. This problem needs to be addressed to protect the nodes and enhance system security.
Configuration Roadmap
To meet the preceding requirements, configure LDP GTSM. The configuration roadmap is as
follows:
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] interface loopback 0
[LSRA-LoopBack0] ip address 1.1.1.1 32
[LSRA-LoopBack0] quit
[LSRA] vlan batch 10
[LSRA] interface vlanif 10
[LSRA-Vlanif10] ip address 10.1.1.1 24
[LSRA-Vlanif10] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 10
[LSRA-GigabitEthernet0/0/1] quit
The configurations of LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here.
Step 2 Configure OSPF on each node to advertise the network segments of the interconnected interfaces and the host routes of the LSR IDs (loopback interface addresses).
# Configure LSRA.
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.1 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
# Configure LSRB.
[LSRB] ospf 1
[LSRB-ospf-1] area 0
[LSRB-ospf-1-area-0.0.0.0] network 2.2.2.2 0.0.0.0
[LSRB-ospf-1-area-0.0.0.0] network 10.1.1.0 0.0.0.255
[LSRB-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[LSRB-ospf-1-area-0.0.0.0] quit
[LSRB-ospf-1] quit
# Configure LSRC.
[LSRC] ospf 1
[LSRC-ospf-1] area 0
[LSRC-ospf-1-area-0.0.0.0] network 3.3.3.3 0.0.0.0
[LSRC-ospf-1-area-0.0.0.0] network 10.2.1.0 0.0.0.255
[LSRC-ospf-1-area-0.0.0.0] quit
[LSRC-ospf-1] quit
After the configuration is complete, run the display ip routing-table command on each node
to verify that the nodes have learned routes from each other.
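In the network commands above, the second operand is a wildcard mask: 0 bits must match the network address and 1 bits are ignored, the inverse of a subnet mask. A brief Python sketch of the matching rule (hypothetical helper, for illustration only):

```python
import ipaddress

def matches_network(intf_ip: str, network: str, wildcard: str) -> bool:
    """Return True if intf_ip is covered by an OSPF 'network' statement.

    In a wildcard mask, 0 bits must match the network address and
    1 bits are ignored (the inverse of a subnet mask).
    """
    ip = int(ipaddress.IPv4Address(intf_ip))
    net = int(ipaddress.IPv4Address(network))
    wc = int(ipaddress.IPv4Address(wildcard))
    care = 0xFFFFFFFF ^ wc          # bits that must match
    return (ip & care) == (net & care)

# LSRA's loopback matches its host-route statement ...
print(matches_network("1.1.1.1", "1.1.1.1", "0.0.0.0"))     # True
# ... and Vlanif10 matches the 10.1.1.0 0.0.0.255 statement.
print(matches_network("10.1.1.1", "10.1.1.0", "0.0.0.255")) # True
```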
Step 3 Enable MPLS and MPLS LDP on each node and each interface of nodes.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.1
[LSRA] mpls
[LSRA-mpls] quit
[LSRA] mpls ldp
[LSRA-mpls-ldp] quit
[LSRA] interface vlanif 10
[LSRA-Vlanif10] mpls
[LSRA-Vlanif10] mpls ldp
[LSRA-Vlanif10] quit
The configurations of LSRB and LSRC are similar to the configuration of LSRA and are not
provided here.
After the configuration is complete, run the display mpls ldp session command on each node
to view the established LDP session. LSRA is used as an example.
[LSRA] display mpls ldp session
Step 4 Configure LDP GTSM.
# On LSRA, set the valid TTL range for LDP packets received from LSRB to 253 through 255.
[LSRA] mpls ldp
[LSRA-mpls-ldp] gtsm peer 2.2.2.2 valid-ttl-hops 3
[LSRA-mpls-ldp] quit
# On LSRB, set the valid TTL range for LDP packets received from LSRA to 252 through 255,
and the valid TTL range for LDP packets received from LSRC to 251 through 255.
[LSRB] mpls ldp
[LSRB-mpls-ldp] gtsm peer 1.1.1.1 valid-ttl-hops 4
[LSRB-mpls-ldp] gtsm peer 3.3.3.3 valid-ttl-hops 5
[LSRB-mpls-ldp] quit
# On LSRC, set the valid TTL range for LDP packets received from LSRB to 250 through 255.
[LSRC] mpls ldp
[LSRC-mpls-ldp] gtsm peer 2.2.2.2 valid-ttl-hops 6
[LSRC-mpls-ldp] quit
If a host simulates LDP packets from LSRA to attack LSRB, LSRB directly discards the
packets because their TTL values fall outside the valid range of 252 to 255. In the GTSM
statistics on LSRB, the number of discarded packets increases.
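The valid-ttl-hops value maps directly to an accepted TTL range: packets from a peer configured with valid-ttl-hops N must arrive with a TTL in [256 - N, 255]. A minimal Python sketch of this check (illustrative helper names, not device code):

```python
def gtsm_ttl_range(valid_ttl_hops: int) -> range:
    """TTL values accepted for a GTSM peer: [256 - hops, 255]."""
    return range(256 - valid_ttl_hops, 256)

def gtsm_accept(ttl: int, valid_ttl_hops: int) -> bool:
    # Packets whose TTL falls outside the range are dropped and counted
    # in the GTSM statistics.
    return ttl in gtsm_ttl_range(valid_ttl_hops)

# LSRB checks packets from LSRA with valid-ttl-hops 4 -> TTL 252..255.
print(gtsm_accept(255, 4))  # True: a directly connected legitimate peer
print(gtsm_accept(250, 4))  # False: a spoofed packet from too far away
```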
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 10
#
mpls lsr-id 1.1.1.1
mpls
#
mpls ldp
gtsm peer 2.2.2.2 valid-ttl-hops 3
#
interface Vlanif10
ip address 10.1.1.1 255.255.255.0
mpls
mpls ldp
#
interface LoopBack0
ip address 1.1.1.1 255.255.255.255
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
ospf 1
area 0.0.0.0
network 1.1.1.1 0.0.0.0
network 10.1.1.0 0.0.0.255
#
return
Figure 3-25 Networking diagram for configuring LDP extension for inter-area LSP
(LSRA (Loopback0 1.1.0.1/32), in IS-IS Area20, connects over GE0/0/1 and VLANIF10
(10.1.1.1/24 and 10.1.1.2/24) to LSRD (Loopback0 1.2.0.1/32). LSRD, LSRB (Loopback0
1.3.0.1/32), and LSRC (Loopback0 1.3.0.2/32) are in IS-IS Area10. LSRD connects to LSRB
over GE0/0/3 and VLANIF30 (20.1.1.1/24 and 20.1.1.2/24), and to LSRC over GE0/0/2 and
VLANIF20 (20.1.2.1/24 and 20.1.2.2/24).)
Configuration Roadmap
To meet the preceding requirements, configure LDP extension for inter-area LSP. The
configuration roadmap is as follows:
1. Configure IS-IS on LSRs to implement IP connectivity on the backbone network.
2. Enable MPLS and MPLS LDP globally and on interfaces of the LSRs.
3. Configure LDP extension for inter-area LSP on LSRA to enable LDP to search for a
route according to the longest match rule to establish an LDP LSP.
Procedure
Step 1 Create VLANs and VLANIF interfaces on the switch, configure IP addresses for the VLANIF
interfaces, and add physical interfaces to the VLANs.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] interface loopback 0
[LSRA-LoopBack0] ip address 1.1.0.1 32
[LSRA-LoopBack0] quit
[LSRA] vlan batch 10
[LSRA] interface vlanif 10
[LSRA-Vlanif10] ip address 10.1.1.1 24
[LSRA-Vlanif10] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 10
[LSRA-GigabitEthernet0/0/1] quit
The configurations of LSRB, LSRC, and LSRD are similar to the configuration of LSRA and
are not provided here.
Step 2 Configure basic IS-IS functions.
# Configure LSRA.
[LSRA] isis 1
[LSRA-isis-1] is-level level-2
[LSRA-isis-1] network-entity 20.0010.0100.0001.00
[LSRA-isis-1] quit
[LSRA] interface vlanif 10
[LSRA-Vlanif10] isis enable 1
[LSRA-Vlanif10] quit
[LSRA] interface loopback 0
[LSRA-LoopBack0] isis enable 1
[LSRA-LoopBack0] quit
# Configure LSRD.
[LSRD] isis 1
[LSRD-isis-1] network-entity 10.0010.0200.0001.00
[LSRD-isis-1] quit
[LSRD] interface vlanif 10
[LSRD-Vlanif10] isis enable 1
[LSRD-Vlanif10] isis circuit-level level-2
[LSRD-Vlanif10] quit
[LSRD] interface vlanif 20
[LSRD-Vlanif20] isis enable 1
[LSRD-Vlanif20] isis circuit-level level-1
[LSRD-Vlanif20] quit
[LSRD] interface vlanif 30
[LSRD-Vlanif30] isis enable 1
[LSRD-Vlanif30] isis circuit-level level-1
[LSRD-Vlanif30] quit
[LSRD] interface loopback 0
[LSRD-LoopBack0] isis enable 1
[LSRD-LoopBack0] quit
# Configure LSRB.
[LSRB] isis 1
[LSRB-isis-1] is-level level-1
[LSRB-isis-1] network-entity 10.0010.0300.0001.00
[LSRB-isis-1] quit
[LSRB] interface vlanif 30
[LSRB-Vlanif30] isis enable 1
[LSRB-Vlanif30] quit
[LSRB] interface loopback 0
[LSRB-LoopBack0] isis enable 1
[LSRB-LoopBack0] quit
# Configure LSRC.
[LSRC] isis 1
[LSRC-isis-1] is-level level-1
[LSRC-isis-1] network-entity 10.0010.0300.0002.00
[LSRC-isis-1] quit
[LSRC] interface vlanif 20
[LSRC-Vlanif20] isis enable 1
[LSRC-Vlanif20] quit
[LSRC] interface loopback 0
[LSRC-LoopBack0] isis enable 1
[LSRC-LoopBack0] quit
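A network entity title (NET) such as 20.0010.0100.0001.00 always ends with a 1-byte NSEL (00 on a router) preceded by a 6-byte system ID; the remaining leading bytes form the area ID. A Python sketch of this split (hypothetical helper, for illustration only):

```python
def parse_net(net: str):
    """Split an IS-IS NET into (area, system_id, nsel).

    The last byte is the NSEL (must be 00 for a router), the preceding
    6 bytes are the system ID, and whatever remains is the area ID.
    """
    digits = net.replace(".", "")
    nsel = digits[-2:]
    system_id = digits[-14:-2]
    area = digits[:-14]
    # Re-dot the system ID in the usual xxxx.xxxx.xxxx form
    sys_dotted = ".".join(system_id[i:i + 4] for i in range(0, 12, 4))
    return area, sys_dotted, nsel

# LSRA sits in area 20; LSRB, LSRC, and LSRD sit in area 10.
print(parse_net("20.0010.0100.0001.00"))
# ('20', '0010.0100.0001', '00')
```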
------------------------------------------------------------------------------
Routing Tables: Public
Destinations : 10 Routes : 10
The command output shows that the host routes destined for LSRB and LSRC have been
aggregated.
Step 4 Configure global and interface-based MPLS and MPLS LDP on each node so that the
network can forward MPLS traffic. Then check the LSP setup result.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.0.1
[LSRA] mpls
[LSRA-mpls] quit
[LSRA] mpls ldp
[LSRA-mpls-ldp] quit
[LSRA] interface vlanif 10
[LSRA-Vlanif10] mpls
[LSRA-Vlanif10] mpls ldp
[LSRA-Vlanif10] quit
# Configure LSRD.
[LSRD] mpls lsr-id 1.2.0.1
[LSRD] mpls
[LSRD-mpls] quit
[LSRD] mpls ldp
[LSRD-mpls-ldp] quit
[LSRD] interface vlanif 10
[LSRD-Vlanif10] mpls
[LSRD-Vlanif10] mpls ldp
[LSRD-Vlanif10] quit
[LSRD] interface vlanif 20
[LSRD-Vlanif20] mpls
[LSRD-Vlanif20] mpls ldp
[LSRD-Vlanif20] quit
[LSRD] interface vlanif 30
[LSRD-Vlanif30] mpls
[LSRD-Vlanif30] mpls ldp
[LSRD-Vlanif30] quit
# Configure LSRB.
[LSRB] mpls lsr-id 1.3.0.1
[LSRB] mpls
[LSRB-mpls] quit
[LSRB] mpls ldp
[LSRB-mpls-ldp] quit
[LSRB] interface vlanif 30
[LSRB-Vlanif30] mpls
[LSRB-Vlanif30] mpls ldp
[LSRB-Vlanif30] quit
# Configure LSRC.
[LSRC] mpls lsr-id 1.3.0.2
[LSRC] mpls
[LSRC-mpls] quit
[LSRC] mpls ldp
[LSRC-mpls-ldp] quit
[LSRC] interface vlanif 20
[LSRC-Vlanif20] mpls
[LSRC-Vlanif20] mpls ldp
[LSRC-Vlanif20] quit
# After the configuration is complete, run the display mpls lsp command on LSRA to view
the established LSP.
[LSRA] display mpls lsp
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.2.0.1/32 NULL/3 -/Vlanif10
1.2.0.1/32 1024/3 -/Vlanif10
1.1.0.1/32 3/NULL -/-
The command output shows that by default, LDP does not establish the inter-area LSPs from
LSRA to LSRB and from LSRA to LSRC.
Step 5 Configure LDP extensions for inter-area LSPs.
# Run the longest-match command on LSRA to configure LDP to search for a route
according to the longest match rule to establish an inter-area LDP LSP.
[LSRA] mpls ldp
[LSRA-mpls-ldp] longest-match
[LSRA-mpls-ldp] quit
# Run the display mpls lsp command on LSRA again to view the established LSPs.
[LSRA] display mpls lsp
-------------------------------------------------------------------------------
LSP Information: LDP LSP
-------------------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
1.2.0.1/32 NULL/3 -/Vlanif10
1.2.0.1/32 1024/3 -/Vlanif10
1.3.0.1/32 NULL/1025 -/Vlanif10
1.3.0.1/32 1025/1025 -/Vlanif10
1.3.0.2/32 NULL/1026 -/Vlanif10
1.3.0.2/32 1026/1026 -/Vlanif10
1.1.0.1/32 3/NULL -/-
The command output shows that LDP establishes the inter-area LSPs from LSRA to LSRB
and from LSRA to LSRC.
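The longest-match function replaces LDP's default exact-match route lookup for a FEC with a longest-prefix-match lookup, so an aggregated route can carry an inter-area host FEC. A small Python sketch of longest-prefix matching, assuming an illustrative 1.3.0.0/24 summary route for the Level-1 area (not taken from the example's actual routing table):

```python
import ipaddress

def longest_match(dest: str, routes: list) -> str:
    """Pick the most specific route covering dest (longest-prefix match)."""
    addr = ipaddress.IPv4Address(dest)
    best = None
    for r in routes:
        net = ipaddress.IPv4Network(r)
        if addr in net and (best is None or net.prefixlen > best.prefixlen):
            best = net
    return str(best) if best else None

# Without longest-match, LDP needs an exact /32 route for each FEC.
# With longest-match, an aggregated route covering the FEC suffices:
routes = ["1.2.0.1/32", "1.3.0.0/24", "10.1.1.0/24"]
print(longest_match("1.3.0.1", routes))  # 1.3.0.0/24
print(longest_match("1.3.0.2", routes))  # 1.3.0.0/24
```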
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 10
#
mpls lsr-id 1.1.0.1
mpls
#
mpls ldp
longest-match
#
isis 1
is-level level-2
network-entity 20.0010.0100.0001.00
#
interface Vlanif10
ip address 10.1.1.1 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface LoopBack0
ip address 1.1.0.1 255.255.255.255
isis enable 1
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 10 20 30
#
mpls lsr-id 1.2.0.1
mpls
#
mpls ldp
#
isis 1
network-entity 10.0010.0200.0001.00
#
interface Vlanif10
ip address 10.1.1.2 255.255.255.0
isis enable 1
isis circuit-level level-2
mpls
mpls ldp
#
interface Vlanif20
ip address 20.1.2.1 255.255.255.0
isis enable 1
isis circuit-level level-1
mpls
mpls ldp
#
interface Vlanif30
ip address 20.1.1.1 255.255.255.0
isis enable 1
isis circuit-level level-1
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 30
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 20
#
interface LoopBack0
ip address 1.2.0.1 255.255.255.255
isis enable 1
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 30
#
mpls lsr-id 1.3.0.1
mpls
#
mpls ldp
#
isis 1
is-level level-1
network-entity 10.0010.0300.0001.00
#
interface Vlanif30
ip address 20.1.1.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 30
#
interface LoopBack0
ip address 1.3.0.1 255.255.255.255
isis enable 1
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 20
#
mpls lsr-id 1.3.0.2
mpls
#
mpls ldp
#
isis 1
is-level level-1
network-entity 10.0010.0300.0002.00
#
interface Vlanif20
ip address 20.1.2.2 255.255.255.0
isis enable 1
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 20
#
interface LoopBack0
ip address 1.3.0.2 255.255.255.255
isis enable 1
#
return
Procedure
Step 1 Run the display this command in the LDP view to check whether LDP GR or LDP MTU is
configured.
l If the following information is displayed:
mpls ldp
graceful-restart
LDP GR is configured.
l If the following information is displayed:
mpls ldp
mtu-signalling apply-tlv
LDP MTU is configured.
l If information similar to the following is displayed:
mpls ldp
md5-password cipher 2.2.2.2 @%@%p7gySOm*BNl=LS.o.!;.&gW5@%@%
or
mpls ldp
authentication key-chain peer 2.2.2.2 name kc1
LDP authentication is configured.
Step 2 Run the display this command in the interface view to check whether the LDP Keepalive
timer or LDP transport address is configured.
l If information similar to the following is displayed:
mpls ldp
mpls ldp timer keepalive-hold 30
The LDP Keepalive timer is configured.
l If information similar to the following is displayed:
mpls ldp
mpls ldp transport-address interface
The LDP transport address is configured.
Step 3 After the preceding configurations are complete, wait about 10 seconds for the LDP session to
become stable.
----End
Procedure
Step 1 Check whether the interface where the LDP session is established is shut down.
Run the display this command in the interface view. If the following information is
displayed:
shutdown
The interface has been shut down. Run the undo shutdown command in the interface view to
start the interface.
----End
Fault Description
An LDP LSP alternates between Up and Down states after being established.
Procedure
l Check whether the LDP session flaps.
Run the display mpls ldp session command and check the Status field. You are advised
to run this command once per second. If the LDP session status alternates between
Operational and a non-operational state, the LDP session is flapping.
If the LDP session is flapping, rectify the fault by referring to LDP Session Alternates
Between Up and Down States.
----End
Fault Description
An LDP LSP is Down after being established.
Procedure
Step 1 Check whether the LDP session is correctly established.
Run the display mpls ldp session command to check the Status field. If the LDP session
status is Operational, the LDP session has been established and is Up. If the status is not
Operational, the LDP session has not been established.
l If the LDP session is not established, rectify the fault by referring to LDP Session Is
Down.
----End
Fault Description
An inter-area LSP fails to be established after LDP extension for inter-area LSP is configured.
Procedure
Step 1 Check whether LDP extension for inter-area LSP is configured.
Run the display mpls ldp command to check the displayed Longest-match field. If the field
is displayed as On, LDP extension for inter-area LSP is enabled. If the field is displayed as
Off, LDP extension for inter-area LSP is disabled.
l If LDP extension for inter-area LSP is disabled, run the longest-match command to
enable this function.
Run the display ip routing-table command to check the fields NextHop and Interface.
Run the display mpls ldp session verbose command to check the Addresses received from
peer field.
Run the display mpls ldp peer command to check the DiscoverySource field.
If the field NextHop is contained in the field Addresses received from peer and the values
of fields Interface and DiscoverySource are the same, the LDP session matches the route.
l If the LDP session does not match the route, locate the fault by referring to LDP LSP Is
Down.
----End
3.11 FAQ
This section describes the FAQ of MPLS LDP.
After an MPLS LDP session fails to be established, R&D personnel need to collect the
following information for analysis:
l display mpls ldp session verbose: Displays detailed information about the session status.
l display mpls ldp peer verbose: Displays the LDP peer status (local or remote).
l display mpls ldp interface [ verbose ]: Displays sent and received LDP packets on the
interface. If MPLS LDP is disabled on the interface, no command output is displayed.
l display mpls ldp remote-peer peer-name: Displays sent and received LDP protocol
packets after the remote session is established.
l display ip routing-table x.x.x.x verbose and display fib x.x.x.x verbose: Display
whether the route to the peer exists.
l display mpls ldp event session-down: Displays the reason why the LDP session went
Down.
3.11.2 The Two Ends of an LSP Are Up and Can Send Hello
Messages, but the Peer End Cannot Receive Them. Why?
If the two ends of an LSP are Up and can send Hello messages, but the peer end cannot
receive the messages, the possible causes are as follows:
l The devices cannot send large packets. For example, a device may be able to send
packets of at most 180 bytes. To check whether the peer end can send large packets, ping
the IP address of the peer end using large packets.
l Run the display cpu-defend statistics slot slot-id command to check whether Hello
messages are dropped due to attack defense policies or Hello messages do not reach the
cpu-defend module.
l Check whether statistics on MPLS-related ACL packets exist and whether the ACLs are
correctly delivered.
3.12 References
This section lists references of MPLS LDP.
On an MPLS network, MPLS QoS controls enterprise network traffic, and implements
congestion avoidance and congestion management to reduce packet loss. In addition, MPLS
QoS provides dedicated bandwidth for enterprise users or differentiated services (such as
voice, video, and data services).
Definition
MPLS quality of service (MPLS QoS) is implemented using the Differentiated Services
(DiffServ) model on an MPLS network. MPLS QoS provides differentiated services to meet
diversified requirements.
Purpose
MPLS uses label-based forwarding to replace route-based forwarding and provides powerful
and flexible functions to meet requirements of new applications. In addition, MPLS supports
multiple network protocols including IPv4 and IPv6. MPLS has been widely used for building
large-scale networks. On an MPLS network, IP QoS cannot be used to guarantee quality of
services, so MPLS QoS is used.
Similar to the way IP QoS differentiates services based on priorities of IP packets, MPLS QoS
differentiates data flows based on the EXP field and provides differentiated services for data
flows. The use of MPLS QoS helps minimize delays and ensure low packet loss ratios for
voice and video data streams.
4.2 Principles
This section describes the implementation of MPLS QoS.
Figure 4-1 MPLS header format: Label (bits 0-19), EXP (bits 20-22), S (bit 23), TTL (bits
24-31)
MPLS DiffServ maps the EXP field (as shown in Figure 4-1) to a per-hop behavior (PHB).
LSRs forward MPLS packets based on EXP fields in the MPLS packets. MPLS DiffServ
provides the following solutions for LSP setup:
l E-LSP: an LSP whose PHB is determined by the EXP field. E-LSP applies to a network
with less than eight PHBs. A differentiated services code point (DSCP) or 802.1p
priority is mapped to a specified EXP value that identifies a PHB. Table 4-1 describes
the mapping between PHBs and EXP values. Packets are forwarded based on labels, and
the EXP field determines the packet scheduling algorithm and drop priority at each hop.
An LSP transmits a maximum of eight PHB flows that are identified by the EXP field in
the MPLS packet header. The EXP value can be configured by the ISP or mapped from
the DSCP or 802.1p priority in a packet. In E-LSP, PHB information does not need to be
transmitted by signaling protocols. Additionally, the label efficiency is high, and the
label status is easy to maintain.
Table 4-1 Mapping between PHBs and EXP values
PHB EXP Value
BE 0
AF1 1
AF2 2
AF3 3
AF4 4
EF 5
CS6 6
CS7 7
l L-LSP: an LSP whose PHB is determined by both the label and EXP value. L-LSP
applies to a network with any number of PHBs. During packet forwarding, the label of a
packet determines the forwarding path and scheduling algorithm, whereas the EXP field
determines the drop priority of the packet. Labels differentiate service flows, so service
flows of a specified type are transmitted over the same LSP. This solution requires more
labels and occupies a large number of system resources.
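The field layout in Figure 4-1 can be decoded from a 32-bit label stack entry as follows (an illustrative sketch, not device code):

```python
def decode_label_entry(entry: int):
    """Decode a 32-bit MPLS label stack entry per Figure 4-1:
    Label (bits 0-19), EXP (bits 20-22), S (bit 23), TTL (bits 24-31)."""
    label = (entry >> 12) & 0xFFFFF   # top 20 bits
    exp = (entry >> 9) & 0x7          # 3-bit EXP field used for QoS
    s = (entry >> 8) & 0x1            # bottom-of-stack flag
    ttl = entry & 0xFF                # 8-bit TTL
    return label, exp, s, ttl

# Label 1024, EXP 5 (EF), bottom of stack, TTL 255
entry = (1024 << 12) | (5 << 9) | (1 << 8) | 255
print(decode_label_entry(entry))  # (1024, 5, 1, 255)
```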
DiffServ Domain
As shown in Figure 4-2, DiffServ domains include MPLS DiffServ and IP DiffServ domains.
In the E-LSP solution, MPLS DiffServ manages and schedules the two DiffServ domains and
implements bidirectional mapping between DSCP or 802.1p priorities and EXP priorities at
the MPLS network edge.
Figure 4-2 DiffServ domains (CEs in IP DiffServ domains connect through PEs to the MPLS
DiffServ domain)
As shown in Figure 4-3, the MPLS DiffServ domain forwards MPLS packets based on EXP
values and provides differentiated services.
When MPLS packets enter the P device, the P device classifies packets and maps EXP values
in packets to CoS values and drop priorities. After traffic classification, QoS implementations
including traffic shaping, traffic policing, and congestion avoidance are the same as those on
an IP network. When MPLS packets leave the P device, the P device maps CoS values and
drop priorities to EXP values so that the downstream device of the P device can provide
differentiated services based on EXP values.
Figure 4-3 EXP-based scheduling on an E-LSP (packets are placed into queues such as the
BE queue and EF queue based on EXP values)
l Uniform: The ingress node maps the DSCP or 802.1p priority of a packet to an EXP
value, and any change in the EXP value on the MPLS network determines the PHB used
when the packet leaves the MPLS network. The egress node maps the EXP value back to
the DSCP or 802.1p priority. Figure
4-4 shows priority mapping in a uniform tunnel using an L3VPN network as an example.
P_1 changes the outer MPLS EXP value to 6. P_2 pops out the outer MPLS label and
changes the inner MPLS EXP value to the outer MPLS EXP value. PE_2 changes the
DSCP priority to 48.
Figure 4-4 Priority mapping in a uniform tunnel (CE_1, PE_1, P_1, P_2, PE_2, and CE_2
across the IP/MPLS backbone network)
l Pipe: The EXP value can be manually configured, and the ingress node adds this EXP
value to MPLS packets. Any change in the EXP value is valid only on the MPLS
network. The egress node selects the PHB for MPLS packets according to the EXP
value. When the packets leave the MPLS network, their DSCP or 802.1p priority is still
valid. Figure 4-5 shows priority mapping in a pipe tunnel using an L3VPN network as
an example. PE_1 changes the outer and inner MPLS EXP values to 1 and 2. PE_2
retains the DSCP priority of packets and selects a PHB based on the inner MPLS EXP
value.
Figure 4-5 Priority mapping in a pipe tunnel (IP/MPLS backbone network)
l Short pipe: The EXP value can be manually configured, and the ingress node adds this
EXP value to MPLS packets. Any change in the EXP value is valid only on the MPLS
network. The egress node selects the PHB for MPLS packets according to the DSCP or
802.1p priority. When the packets leave the MPLS network, their DSCP or 802.1p
priority is still valid. Figure 4-6 shows priority mapping in a short-pipe tunnel using an
L3VPN network as an example. PE_1 changes the outer and inner MPLS EXP values to
1 and 2. PE_2 retains the DSCP priority of packets and selects a PHB based on the DSCP
priority.
Figure 4-6 Priority mapping in a short-pipe tunnel (IP/MPLS backbone network)
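The three modes differ only in which priority the egress node trusts and whether EXP changes propagate back to the DSCP. A toy Python model of the behavior described above (the EXP-to-DSCP mapping shown is illustrative, chosen to match the EXP 6 to DSCP 48 example of the uniform tunnel):

```python
def egress_phb_and_dscp(mode: str, orig_dscp: int, mpls_exp: int):
    """Return (priority used to select the PHB, DSCP after the MPLS network)."""
    exp_to_dscp = lambda exp: exp * 8   # illustrative: EXP 6 -> DSCP 48
    if mode == "uniform":
        # EXP changes made inside the MPLS network propagate to the DSCP.
        return mpls_exp, exp_to_dscp(mpls_exp)
    if mode == "pipe":
        # PHB selected from the EXP value; the original DSCP is preserved.
        return mpls_exp, orig_dscp
    if mode == "short-pipe":
        # PHB selected from the original DSCP; the DSCP is preserved too.
        return orig_dscp, orig_dscp
    raise ValueError(mode)

print(egress_phb_and_dscp("uniform", 40, 6))     # (6, 48)
print(egress_phb_and_dscp("pipe", 40, 2))        # (2, 40)
print(egress_phb_and_dscp("short-pipe", 40, 2))  # (40, 40)
```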
4.3 Applications
This section describes application scenarios of MPLS QoS.
Figure 4-7 MPLS QoS networking (CE_1 and CE_2 connect through PE_1, P, and PE_2
across the IP/MPLS backbone network)
When service flows from different VPNs enter the MPLS network, devices on the MPLS
network must differentiate priorities of the services to ensure that service flows from
enterprise A have higher priorities than those from enterprise B. The devices then provide
differentiated services to the service flows based on their priorities.
Packets carry different precedence fields depending on the network type. For example,
packets carry the 802.1p field on a Layer 2 network, the DSCP field on a Layer 3 network,
and the EXP field on an MPLS network. On the L3VPN network shown in Figure 4-8, PE_1,
P, and PE_2 process packets as follows:
l The ingress node PE_1 maps priorities of packets from enterprises A and B to EXP
priorities in descending order, so that devices on the MPLS network can provide
differentiated services based on the EXP priorities.
l The transit node P maps EXP priorities carried in received packets to internal priorities
and colors and provides different QoS services according to the internal priorities and
colors. When packets leave P, it re-marks the internal priorities and colors to EXP
priorities.
l The egress node PE_2 maps EXP or DSCP priorities carried in received packets to
internal priorities and colors and provides different QoS services according to the
internal priorities and colors. When packets leave PE_2, it re-marks the internal priorities
and colors to DSCP priorities so that downstream devices can provide differentiated
services based on packet priorities.
Figure 4-8 Differentiated services for VPN users (Enterprise A's CE_1 and CE_3 in VPN_1
sites, and Enterprise B's CE_2 and CE_4 in VPN_2 sites, connect through PE_1, P, and PE_2
across the IP/MPLS backbone network)
When you configure MPLS QoS on the switch, note the following:
l Table 4-2 lists the mappings from PHBs and colors to EXP priorities in MPLS packets.
l Table 4-3 lists the mappings from EXP priorities in MPLS packets to PHBs and colors.
Table 4-2 Mappings from PHBs and colors to EXP priorities of outgoing packets in the
DiffServ domain
PHB Color EXP Priority
BE green 0
BE yellow 0
BE red 0
AF1 green 1
AF1 yellow 1
AF1 red 1
AF2 green 2
AF2 yellow 2
AF2 red 2
AF3 green 3
AF3 yellow 3
AF3 red 3
AF4 green 4
AF4 yellow 4
AF4 red 4
EF green 5
EF yellow 5
EF red 5
CS6 green 6
CS6 yellow 6
CS6 red 6
CS7 green 7
CS7 yellow 7
CS7 red 7
Table 4-3 Mappings from EXP priorities to PHBs and colors of incoming packets in the
DiffServ domain
EXP Priority PHB Color
0 BE green
1 AF1 green
2 AF2 green
3 AF3 green
4 AF4 green
5 EF green
6 CS6 green
7 CS7 green
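Expressed as data, the two default tables are inverses of each other on the PHB axis (a sketch; values taken from Tables 4-2 and 4-3):

```python
# Default mappings from Tables 4-2 and 4-3 expressed as Python dicts.
PHB_TO_EXP = {"BE": 0, "AF1": 1, "AF2": 2, "AF3": 3,
              "AF4": 4, "EF": 5, "CS6": 6, "CS7": 7}
# Per Table 4-2, the color does not influence the outgoing EXP value,
# and per Table 4-3 every incoming EXP maps to the green color.
EXP_TO_PHB_COLOR = {exp: (phb, "green") for phb, exp in PHB_TO_EXP.items()}

# The two default tables are mutually consistent: a PHB marked on egress
# is recovered unchanged on the next hop's ingress.
for phb, exp in PHB_TO_EXP.items():
    assert EXP_TO_PHB_COLOR[exp][0] == phb
print(EXP_TO_PHB_COLOR[5])  # ('EF', 'green')
```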
Pre-configuration Tasks
Before configuring the mapping of the precedence in the tunnel label, complete the following
tasks:
Configuration Process
Configure the mapping of the precedence in the tunnel label according to the following
sequence.
Context
A DiffServ domain comprises connected DiffServ nodes that use the same service policy and
implement the same PHBs.
When traffic enters a device, the device maps packet priorities to PHBs and colors, and
performs congestion management based on PHBs and congestion avoidance based on colors.
When traffic flows out of the device, the device maps PHBs and colors of packets to
priorities. The downstream device provides QoS services based on packet priorities.
Procedure
Step 1 Run:
system-view
Step 2 Run:
diffserv domain { default | ds-domain-name }
The default domain defines the default mappings from packet priorities to PHBs and colors.
You can modify the mappings defined in the default domain but cannot delete the default
domain.
The inbound interface is configured to map EXP priorities of MPLS packets to the PHBs
and colors.
l Run:
mpls-exp-outbound service-class color map exp-value
The outbound interface is configured to map PHBs and colors to EXP priorities of
MPLS packets.
To check the default mappings between PHBs and colors of MPLS packets and EXP
priorities, see mpls-exp-inbound and mpls-exp-outbound commands.
----End
Context
To map priorities of incoming packets to PHBs and colors based on the mappings defined in a
DiffServ domain, bind the DiffServ domain to the inbound interface of the packets. The
system then maps priorities of packets to PHBs and colors based on the mappings in the
DiffServ domain.
To map PHBs and colors of outgoing packets to priorities based on the mappings defined in a
DiffServ domain, bind the DiffServ domain to the outbound interface of the packets. The
system then maps PHBs and colors of outgoing packets to priorities based on the mappings in
the DiffServ domain.
NOTE
This command must be run before the public tunnel is set up. If the command is run after the public
tunnel is set up, you must restart MPLS LDP; otherwise, the command cannot take effect.
Procedure
l Perform the following steps on the ingress node.
a. Run:
system-view
The PHB/color of packet is mapped to the EXP priority of the public tunnel on the
ingress node.
By default, mapping from the PHB/color to the EXP priority of the public tunnel is
performed according to the settings in the default domain.
If you want to perform priority mapping based on the EXP priority of the private
tunnel, specify the vpn-label-exp parameter in the command.
l Perform the following steps on the transit node.
a. Run:
system-view
Priority mapping is performed based on the EXP priority of the public tunnel on the
transit node.
By default, mapping of the EXP priority of the public tunnel is performed according
to the settings in the default domain.
l Perform the following steps on the egress node.
a. Run:
system-view
The EXP priority of the public tunnel is mapped to the PHB/color on the egress
node.
By default, mapping from the EXP priority of the public tunnel to the PHB/color is
performed according to the settings in the default domain.
----End
Pre-configuration Tasks
Before configuring the DiffServ mode for the MPLS private network, complete the following
task:
l 4.6.1 Configuring the Mapping of the Precedence in the Public MPLS Tunnel Label
Configuration Process
You can perform the following configuration tasks in any sequence as required.
Context
To provide QoS guarantee for VPN traffic on an MPLS VPN network, set the DiffServ mode
based on actual needs.
l If you want to differentiate priorities of different services in a VPN, set the DiffServ
mode to uniform. You can also set the DiffServ mode to pipe or short pipe, but you need
to specify the DiffServ domain in which the mode applies.
l If you want to differentiate priorities of services in different VPNs but not priorities of
services in a VPN, set the DiffServ mode to pipe or short pipe and specify EXP values in
private labels.
If you do not want to change priorities carried in original packets, you are advised to set the
DiffServ mode to pipe or short pipe. In uniform and pipe modes, the egress node determines
the per-hop behavior (PHB) based on EXP priorities of packets. In short pipe mode, the egress
node determines the PHB based on DSCP priorities of packets.
Procedure
Step 1 Run:
system-view
Step 2 Run:
ip vpn-instance vpn-instance-name
Step 3 Run:
diffserv-mode { pipe { mpls-exp mpls-exp | domain ds-name } | short-pipe [ mpls-exp
mpls-exp ] domain ds-name | uniform [ domain ds-name ] }
l If the mpls-qos ingress trust upstream none or mpls-qos egress trust upstream none
command is configured, the device on the private network does not perform EXP priority
mapping even if you run the diffserv-mode command.
l When the DiffServ mode is set to uniform on the ingress node, the ingress node performs
priority mapping in the DiffServ domain specified by the domain parameter in this
command. If the domain parameter is not specified, the ingress node performs priority
mapping in the DiffServ domain specified by the mpls-qos ingress trust upstream
{ ds-name | default } command.
l In a non-PHP scenario, the egress node performs priority mapping in the DiffServ
domain specified by the mpls-qos egress trust upstream { ds-name | default }
command. In a PHP scenario, the egress node performs priority mapping in the DiffServ
domain specified by the domain parameter in this command. If the domain parameter is
not specified, the egress node performs priority mapping in the DiffServ domain
specified by the mpls-qos egress trust upstream { ds-name | default } command.
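The selection rules above can be summarized as two small decision functions (hypothetical helpers that model the described behavior, not device internals):

```python
def uniform_ingress_domain(domain, trust_upstream):
    """Ingress node in uniform mode: the domain parameter of diffserv-mode
    wins; otherwise the mpls-qos ingress trust upstream domain applies."""
    return domain if domain is not None else trust_upstream

def egress_domain(php, domain, trust_upstream):
    """Egress node: in a non-PHP scenario the mpls-qos egress trust upstream
    domain is used; in a PHP scenario the domain parameter wins, falling
    back to the trust upstream domain when it is not specified."""
    if not php:
        return trust_upstream
    return domain if domain is not None else trust_upstream

print(uniform_ingress_domain("ds1", "default"))  # ds1
print(egress_domain(False, "ds1", "default"))    # default
print(egress_domain(True, None, "default"))      # default
```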
NOTE
This command must be configured before the instance takes effect; otherwise, you must reset BGP
connections to make the configuration take effect.
----End
Context
To provide QoS guarantee for VPN traffic on an MPLS VPN network, set the DiffServ mode
based on actual needs.
l If you want to differentiate priorities of different services in a VPN, set the DiffServ
mode to uniform. You can also set the DiffServ mode to pipe or short pipe, but you need
to specify the DiffServ domain in which the mode applies.
l If you want to differentiate priorities of services in different VPNs but not priorities of
services in a VPN, set the DiffServ mode to pipe or short pipe and specify EXP values in
private labels.
If you do not want to change priorities carried in original packets, you are advised to set the
DiffServ mode to pipe or short pipe. In uniform and pipe modes, the egress node determines
the per-hop behavior (PHB) based on EXP priorities of packets. In short pipe mode, the egress
node determines the PHB based on 802.1p priorities of packets.
Procedure
l In VLL networking
a. Run:
system-view
n When the DiffServ mode is set to uniform on the ingress node, the ingress
node performs priority mapping in the DiffServ domain specified by the
domain parameter in this command. If the domain parameter is not specified,
the ingress node performs priority mapping in the DiffServ domain specified
by the mpls-qos ingress trust upstream { ds-name | default } command.
n In a non-PHP scenario, the egress node performs priority mapping in the
DiffServ domain specified by the mpls-qos egress trust upstream { ds-name |
default } command. In a PHP scenario, the egress node performs priority
mapping in the DiffServ domain specified by the domain parameter in this
command. If the domain parameter is not specified, the egress node performs
priority mapping in the DiffServ domain specified by the mpls-qos egress
trust upstream { ds-name | default } command.
NOTE
This command must be run before the VC is set up; otherwise, you must unbind the
AC interface and then rebind it to make the command take effect.
l In VPLS networking
a. Run:
system-view
This command must be configured before the VSI takes effect; otherwise, you must disable
and then re-enable the VSI to make the configuration take effect.
----End
Prerequisites
The DiffServ mode supported by the MPLS private network has been configured.
Procedure
l Run the display mpls l2vc [ vc-id | interface interface-type interface-number | remote-
info [ vc-id | verbose ] | state { down | up } ] command to check information about the
MPLS DiffServ mode used by a VLL.
l Run the display vsi [ name vsi-name ] [ verbose ] command to check information about
the MPLS DiffServ mode used by a VPLS.
----End
(Networking diagram excerpt: CE2 (vpnb, AS 65420) connects through GE0/0/1 and VLANIF
20 (10.2.1.1/24); CE4 (vpnb, AS 65440) connects through GE0/0/1 and VLANIF 50
(10.4.1.1/24).)
Configuration Roadmap
Configure MPLS QoS on PE1 and PE2. Enable the pipe mode on vpna and vpnb. Set the
MPLS EXP values of vpna and vpnb to 4 and 3 respectively to provide better QoS guarantee
for services of Enterprise A.
Procedure
Step 1 Configure OSPF on the MPLS backbone network so that PE and P can communicate with
each other.
# Configure PE1.
<HUAWEI> system-view
[HUAWEI] sysname PE1
[PE1] interface loopback 1
[PE1-LoopBack1] ip address 1.1.1.9 32
[PE1-LoopBack1] quit
[PE1] vlan batch 10 20 30
[PE1] interface gigabitethernet 0/0/1
[PE1-GigabitEthernet0/0/1] port link-type trunk
[PE1-GigabitEthernet0/0/1] port trunk allow-pass vlan 10
[PE1-GigabitEthernet0/0/1] quit
# Configure P.
<HUAWEI> system-view
[HUAWEI] sysname P
[P] interface loopback 1
[P-LoopBack1] ip address 2.2.2.9 32
[P-LoopBack1] quit
[P] vlan batch 30 60
[P] interface gigabitethernet 0/0/1
[P-GigabitEthernet0/0/1] port link-type trunk
[P-GigabitEthernet0/0/1] port trunk allow-pass vlan 30
[P-GigabitEthernet0/0/1] quit
[P] interface gigabitethernet 0/0/2
[P-GigabitEthernet0/0/2] port link-type trunk
[P-GigabitEthernet0/0/2] port trunk allow-pass vlan 60
[P-GigabitEthernet0/0/2] quit
[P] interface vlanif 30
[P-Vlanif30] ip address 172.1.1.2 24
[P-Vlanif30] quit
[P] interface vlanif 60
[P-Vlanif60] ip address 172.2.1.1 24
[P-Vlanif60] quit
[P] ospf
[P-ospf-1] area 0
[P-ospf-1-area-0.0.0.0] network 172.1.1.0 0.0.0.255
[P-ospf-1-area-0.0.0.0] network 172.2.1.0 0.0.0.255
[P-ospf-1-area-0.0.0.0] network 2.2.2.9 0.0.0.0
[P-ospf-1-area-0.0.0.0] quit
[P-ospf-1] quit
# Configure PE2.
<HUAWEI> system-view
[HUAWEI] sysname PE2
[PE2] interface loopback 1
[PE2-LoopBack1] ip address 3.3.3.9 32
[PE2-LoopBack1] quit
[PE2] vlan batch 40 50 60
[PE2] interface gigabitethernet 0/0/1
[PE2-GigabitEthernet0/0/1] port link-type trunk
[PE2-GigabitEthernet0/0/1] port trunk allow-pass vlan 40
[PE2-GigabitEthernet0/0/1] quit
[PE2] interface gigabitethernet 0/0/2
[PE2-GigabitEthernet0/0/2] port link-type trunk
[PE2-GigabitEthernet0/0/2] port trunk allow-pass vlan 50
[PE2-GigabitEthernet0/0/2] quit
[PE2] interface gigabitethernet 0/0/3
[PE2-GigabitEthernet0/0/3] port link-type trunk
[PE2-GigabitEthernet0/0/3] port trunk allow-pass vlan 60
[PE2-GigabitEthernet0/0/3] quit
[PE2] interface vlanif 60
[PE2-Vlanif60] ip address 172.2.1.2 24
[PE2-Vlanif60] quit
[PE2] ospf
[PE2-ospf-1] area 0
[PE2-ospf-1-area-0.0.0.0] network 172.2.1.0 0.0.0.255
[PE2-ospf-1-area-0.0.0.0] network 3.3.3.9 0.0.0.0
[PE2-ospf-1-area-0.0.0.0] quit
[PE2-ospf-1] quit
After the configuration is complete, OSPF neighbor relationships are set up between PE1, P,
and PE2. Run the display ip routing-table command. The command output shows that PEs
have learned the routes to Loopback1 of each other.
Step 2 Configure basic MPLS functions, enable MPLS LDP, and establish LDP LSPs on the MPLS
backbone network.
# Configure PE1.
[PE1] mpls lsr-id 1.1.1.9
[PE1] mpls
[PE1-mpls] quit
[PE1] mpls ldp
[PE1-mpls-ldp] quit
[PE1] interface vlanif 30
[PE1-Vlanif30] mpls
[PE1-Vlanif30] mpls ldp
[PE1-Vlanif30] quit
# Configure P.
[P] mpls lsr-id 2.2.2.9
[P] mpls
[P-mpls] quit
[P] mpls ldp
[P-mpls-ldp] quit
[P] interface vlanif 30
[P-Vlanif30] mpls
[P-Vlanif30] mpls ldp
[P-Vlanif30] quit
[P] interface vlanif 60
[P-Vlanif60] mpls
[P-Vlanif60] mpls ldp
[P-Vlanif60] quit
# Configure PE2.
[PE2] mpls lsr-id 3.3.3.9
[PE2] mpls
[PE2-mpls] quit
[PE2] mpls ldp
[PE2-mpls-ldp] quit
[PE2] interface vlanif 60
[PE2-Vlanif60] mpls
[PE2-Vlanif60] mpls ldp
[PE2-Vlanif60] quit
After the configuration is complete, LDP sessions are set up between PE1 and P and between
P and PE2. Run the display mpls ldp session command. The command output shows that the
LDP session status is Operational.
PE1 is used as an example:
[PE1] display mpls ldp session
------------------------------------------------------------------------------
2.2.2.9:0 Operational DU Active 0000:00:01 6/6
------------------------------------------------------------------------------
TOTAL: 1 session(s) Found.
Step 3 Configure a VPN instance on each PE and connect the CEs to the PEs.
# Configure PE1.
[PE1] ip vpn-instance vpna
[PE1-vpn-instance-vpna] ipv4-family
[PE1-vpn-instance-vpna-af-ipv4] route-distinguisher 100:1
[PE1-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[PE1-vpn-instance-vpna-af-ipv4] quit
[PE1-vpn-instance-vpna] quit
[PE1] ip vpn-instance vpnb
[PE1-vpn-instance-vpnb] ipv4-family
[PE1-vpn-instance-vpnb-af-ipv4] route-distinguisher 100:2
[PE1-vpn-instance-vpnb-af-ipv4] vpn-target 222:2 both
[PE1-vpn-instance-vpnb-af-ipv4] quit
[PE1-vpn-instance-vpnb] quit
[PE1] interface vlanif 10
[PE1-Vlanif10] ip binding vpn-instance vpna
[PE1-Vlanif10] ip address 10.1.1.2 24
[PE1-Vlanif10] quit
[PE1] interface vlanif 20
[PE1-Vlanif20] ip binding vpn-instance vpnb
[PE1-Vlanif20] ip address 10.2.1.2 24
[PE1-Vlanif20] quit
# Configure PE2.
[PE2] ip vpn-instance vpna
[PE2-vpn-instance-vpna] ipv4-family
[PE2-vpn-instance-vpna-af-ipv4] route-distinguisher 200:1
[PE2-vpn-instance-vpna-af-ipv4] vpn-target 111:1 both
[PE2-vpn-instance-vpna-af-ipv4] quit
[PE2-vpn-instance-vpna] quit
[PE2] ip vpn-instance vpnb
[PE2-vpn-instance-vpnb] ipv4-family
[PE2-vpn-instance-vpnb-af-ipv4] route-distinguisher 200:2
[PE2-vpn-instance-vpnb-af-ipv4] vpn-target 222:2 both
[PE2-vpn-instance-vpnb-af-ipv4] quit
[PE2-vpn-instance-vpnb] quit
[PE2] interface vlanif 40
[PE2-Vlanif40] ip binding vpn-instance vpna
[PE2-Vlanif40] ip address 10.3.1.2 24
[PE2-Vlanif40] quit
[PE2] interface vlanif 50
[PE2-Vlanif50] ip binding vpn-instance vpnb
[PE2-Vlanif50] ip address 10.4.1.2 24
[PE2-Vlanif50] quit
# Assign IP addresses to the interfaces on the CEs according to Figure 4-9. The configuration
procedure is not mentioned here.
After the configurations are complete, each PE can ping its connected CE.
NOTE
If a PE has multiple interfaces bound to the same VPN instance, specify a source IP address by specifying -a source-ip-address in the ping -vpn-instance vpn-instance-name -a source-ip-address dest-ip-address command to ping the CE connected to the remote PE. If you do not specify a source IP address, the ping fails.
Step 4 Establish an MP-IBGP peer relationship between the PEs.
# Configure PE2.
[PE2] bgp 100
[PE2-bgp] peer 1.1.1.9 as-number 100
[PE2-bgp] peer 1.1.1.9 connect-interface loopback 1
[PE2-bgp] ipv4-family vpnv4
[PE2-bgp-af-vpnv4] peer 1.1.1.9 enable
[PE2-bgp-af-vpnv4] quit
[PE2-bgp] quit
After the configuration is complete, run the display bgp peer command on PEs. The
command output shows that the BGP peer relationships have been established between the
PEs.
[PE1] display bgp peer
Step 5 Set up the EBGP peer relationships between the PEs and CEs and import VPN routes.
# Configure CE1.
[CE1] bgp 65410
[CE1-bgp] peer 10.1.1.2 as-number 100
[CE1-bgp] import-route direct
The configurations of CE2, CE3, and CE4 are similar to the configuration of CE1, and are not
mentioned here.
# Configure PE1.
[PE1] bgp 100
[PE1-bgp] ipv4-family vpn-instance vpna
[PE1-bgp-vpna] peer 10.1.1.1 as-number 65410
[PE1-bgp-vpna] import-route direct
[PE1-bgp-vpna] quit
[PE1-bgp] ipv4-family vpn-instance vpnb
The configuration of PE2 is similar to that of PE1, and is not mentioned here.
After the configurations are complete, run the display bgp vpnv4 vpn-instance peer
command on the PEs. The command output shows that BGP peer relationships between PEs
and CEs have been established.
Use the peer relationship between PE1 and CE1 as an example.
[PE1] display bgp vpnv4 vpn-instance vpna peer
# Configure PE2.
[PE2] mpls-qos ingress use vpn-label-exp
[PE2] ip vpn-instance vpna
[PE2-vpn-instance-vpna] diffserv-mode pipe mpls-exp 4
[PE2-vpn-instance-vpna] quit
[PE2] ip vpn-instance vpnb
[PE2-vpn-instance-vpnb] diffserv-mode pipe mpls-exp 3
[PE2-vpn-instance-vpnb] quit
NOTE
After the configurations are complete, you must reset MPLS LDP and BGP connections to make the
configuration take effect.
----End
Configuration Files
l Configuration file of PE1
#
sysname PE1
#
vlan batch 10 20 30
#
mpls-qos ingress use vpn-label-exp
#
ip vpn-instance vpna
ipv4-family
route-distinguisher 100:1
vpn-target 111:1 export-extcommunity
interface Vlanif40
ip binding vpn-instance vpna
ip address 10.3.1.2 255.255.255.0
#
interface Vlanif50
ip binding vpn-instance vpnb
ip address 10.4.1.2 255.255.255.0
#
interface Vlanif60
ip address 172.2.1.2 255.255.255.0
mpls
mpls ldp
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 40
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 50
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 60
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
bgp 100
peer 1.1.1.9 as-number 100
peer 1.1.1.9 connect-interface LoopBack1
#
ipv4-family unicast
undo synchronization
peer 1.1.1.9 enable
#
ipv4-family vpnv4
policy vpn-target
peer 1.1.1.9 enable
#
ipv4-family vpn-instance vpna
import-route direct
peer 10.3.1.1 as-number 65430
#
ipv4-family vpn-instance vpnb
import-route direct
peer 10.4.1.1 as-number 65440
#
ospf 1
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
#
return
l Configuration file of CE1
#
sysname CE1
#
vlan batch 10
#
interface Vlanif10
ip address 10.1.1.1 255.255.255.0
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 10
#
bgp 65410
peer 10.1.1.2 as-number 100
#
ipv4-family unicast
undo synchronization
import-route direct
peer 10.1.1.2 enable
#
return
l Configuration file of CE2
#
sysname CE2
#
vlan batch 20
#
interface Vlanif20
ip address 10.2.1.1 255.255.255.0
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 20
#
bgp 65420
peer 10.2.1.2 as-number 100
#
ipv4-family unicast
undo synchronization
import-route direct
peer 10.2.1.2 enable
#
return
l Configuration file of CE3
#
sysname CE3
#
vlan batch 40
#
interface Vlanif40
ip address 10.3.1.1 255.255.255.0
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 40
#
bgp 65430
peer 10.3.1.2 as-number 100
#
ipv4-family unicast
undo synchronization
import-route direct
peer 10.3.1.2 enable
#
return
l Configuration file of CE4
#
sysname CE4
#
vlan batch 50
#
interface Vlanif50
ip address 10.4.1.1 255.255.255.0
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 50
#
bgp 65440
peer 10.4.1.2 as-number 100
#
ipv4-family unicast
undo synchronization
import-route direct
peer 10.4.1.2 enable
#
return
4.8 References
This section lists references of MPLS QoS.
The following table lists the references.
5 MPLS TE Configuration
MPLS TE tunnels transmit MPLS L2VPN (VLL and VPLS) and MPLS L3VPN services, provide high security, and guarantee reliable QoS for VPN services.
5.1 Overview
This section describes the definition and functions of MPLS TE.
5.2 Principles
This section describes the implementation of MPLS TE.
5.3 Applications
This section describes the applicable scenario of MPLS TE.
5.4 Specification
This section provides MPLS TE specifications supported by the device.
5.5 Configuration Task Summary
MPLS TE is implemented after an MPLS TE tunnel is created and traffic is imported to the
TE tunnel. To adjust MPLS TE parameters and deploy some security solutions, perform one
or more of the following operations: adjusting RSVP-TE signaling parameters, adjusting the
path of the CR-LSP, adjusting the establishment of MPLS TE tunnels and CR-LSP backup,
configuring MPLS TE FRR, configuring an MPLS TE tunnel protection group, configuring
BFD for MPLS TE, and configuring RSVP GR.
5.6 Configuration Notes
This section describes notes about configuring MPLS TE.
5.7 Default Configuration
This section describes default MPLS TE settings.
5.8 Configuring MPLS TE
This section describes how to configure MPLS TE.
5.9 Maintaining MPLS TE
Maintaining MPLS TE includes checking connectivity of a TE tunnel, checking information
about tunnel faults, clearing operation information, and resetting the RSVP process.
5.10 Configuration Examples
This section provides several examples for configuring MPLS TE. Each configuration
example consists of the networking requirements, configuration roadmap, configuration
procedures, and configuration files.
5.11 References
This section lists references of MPLS TE.
5.1 Overview
This section describes the definition and functions of MPLS TE.
Definition
Multiprotocol Label Switching Traffic Engineering (MPLS TE) establishes constraint-based
routed label switched paths (CR-LSPs) and directs traffic to them. In this way, network traffic
is transmitted over specified paths.
Purpose
On a traditional IP network, nodes select the shortest path as the route to a destination
regardless of other factors such as bandwidth. This routing mechanism may cause congestion
on the shortest path and waste resources on other available paths, as shown in Figure 5-1.
Figure 5-1 Congestion caused by shortest-path routing
On the network shown in Figure 5-1, each link has a bandwidth of 100 Mbit/s and the same
metric. Switch_1 sends traffic to Switch_4 at 40 Mbit/s, and Switch_7 sends traffic to
Switch_4 at 80 Mbit/s. If the network runs an interior gateway protocol (IGP) that uses the
shortest path mechanism, both the two shortest paths (Path 1 and Path 2) pass through the link
Switch_2->Switch_3->Switch_4. As a result, the link Switch_2->Switch_3->Switch_4 is
overloaded, whereas the link Switch_2->Switch_5->Switch_6->Switch_4 is idle.
Traffic engineering can prevent congestion caused by uneven resource allocation by allocating
some traffic to idle links.
The following TE mechanisms were available before MPLS TE came into use:
l IP TE: This mechanism adjusts path metrics to control traffic transmission paths. It
prevents congestion on some links but may cause congestion on other links. In addition,
path metrics are difficult to adjust on a complex network because any change on a link
affects multiple routes.
l Asynchronous Transfer Mode (ATM) TE: All IGPs select routes only based on
connections and cannot distribute traffic based on bandwidth and the traffic attributes of
links. The IP over ATM overlay model can overcome this defect by setting up virtual
links to transmit some traffic, which helps ensure proper traffic distribution and good
QoS control. However, ATM TE causes high extra costs and low scalability on the
network.
What is needed is a scalable and simple solution to deploy TE on a large backbone network.
MPLS TE is an ideal solution. As an overlay model, MPLS can set up a virtual topology over
a physical topology and map traffic to the virtual topology.
On the network shown in Figure 5-1, MPLS TE can establish an 80 Mbit/s LSP over Path 1
and a 40 Mbit/s LSP over Path 2. Traffic is then distributed to the two LSPs, preventing
congestion on a single path.
Figure 5-2 Congestion prevention using MPLS TE
Benefits
MPLS TE fully uses network resources and provides bandwidth and QoS guarantee without
the need to upgrade hardware. This significantly reduces network deployment costs. MPLS
TE is easy to deploy and maintain because it is implemented based on MPLS. In addition,
MPLS TE provides various reliability mechanisms to ensure network and device reliability.
5.2 Principles
This section describes the implementation of MPLS TE.
5.2.1 Concepts
This section involves the following concepts:
l LSP
l MPLS TE Tunnel
l Link Attributes
l Tunnel Attributes
LSP
On a label switched path (LSP), traffic forwarding is determined by the labels added to
packets by the ingress node of the LSP. An LSP can be considered as a tunnel because traffic
is transparently transmitted on intermediate nodes along the LSP.
MPLS TE Tunnel
MPLS TE usually associates multiple LSPs with a virtual tunnel interface to form an MPLS
TE tunnel. An MPLS TE tunnel involves the following terms:
l Tunnel interface: a point-to-point virtual interface used to encapsulate packets. Similar to
a loopback interface, a tunnel interface is a logical interface.
l Tunnel ID: a decimal number that uniquely identifies an MPLS TE tunnel to facilitate
tunnel planning and management.
l LSP ID: a decimal number that uniquely identifies an LSP to facilitate LSP planning and
management.
Figure 5-3 illustrates the preceding terms. Two LSPs are available on the network. The path
LSRA->LSRB->LSRC->LSRD->LSRE is the primary LSP with an LSP ID of 2. The path
LSRA->LSRF->LSRG->LSRH->LSRE is the backup LSP with an LSP ID of 1024. The two
LSPs form an MPLS TE tunnel with a tunnel ID of 100, and the tunnel interface is Tunnel1.
Figure 5-3 MPLS TE tunnel
Link Attributes
MPLS TE link attributes identify the bandwidth usage, route cost, and link reliability on a
physical link. The link attributes include:
l Total link bandwidth
Bandwidth of a physical link.
l Maximum reservable bandwidth
Maximum bandwidth that a link can reserve for an MPLS TE tunnel. The maximum
reservable bandwidth must be lower than or equal to the total link bandwidth.
l TE metric
Cost of a TE link. TE metrics are used to control MPLS TE path calculation, making
path calculation more independent of IGP routing. By default, IGP metrics are used as
TE metrics.
l SRLG
Shared risk link group (SRLG), a group of links that share a physical resource, such as
an optical fiber. Links in an SRLG have the same risk. If one link fails, other links in the
SRLG also fail.
The SRLG attribute is used in CR-LSP hot standby and TE fast reroute (FRR) to
enhance TE tunnel reliability. For details about SRLG, see SRLG.
l Link administrative group
A 32-bit vector that identifies link attributes, also called a link color. Each bit can be set
to 0 or 1 by the network administrator. A link administrative group identifies an attribute,
such as the link bandwidth or performance. A link administrative group can also be used
for link management. For example, it can identify that an MPLS TE tunnel passes
through a link or that a link is transmitting multicast services. The administrative group
attribute must be used with the affinity attribute to control path selection.
Tunnel Attributes
An MPLS TE tunnel is composed of several constraint-based routed label switched paths
(CR-LSPs). The constraints for LSP setup are tunnel attributes.
Different from a common LSP (LDP LSP for example), a CR-LSP is set up based on
constraints in addition to routing information, including bandwidth constraints and path
constraints.
l Bandwidth constraints
Bandwidth constraint is mainly the tunnel bandwidth.
l Path constraints
Path constraints include explicit path, priority and preemption, route pinning, affinity
attribute, and hop limit.
Constraint-based routing (CR) is a mechanism to create and manage these constraints, which
are described in the following:
l Tunnel bandwidth
The bandwidth of a tunnel must be planned according to requirements of the services to
be transmitted over the tunnel. The planned bandwidth is reserved on the links along the
tunnel to provide bandwidth guarantee.
l Explicit path
An explicit path is a CR-LSP manually set up by specifying the nodes to pass or avoid.
Explicit paths are classified into the following types:
Strict explicit path
On a strict explicit path, all the nodes are manually specified and two consecutive
hops must be directly connected. A strict explicit path precisely controls the path of
an LSP.
Figure 5-4 Strict explicit path (LSRB Strict, LSRC Strict, LSRD Strict, LSRE Strict)
As shown in Figure 5-4, LSRA is the ingress node, and LSRF is the egress node.
An LSP from LSRA to LSRF is set up over a strict explicit path. LSRB Strict
indicates that this LSP must pass through LSRB, which is directly connected to
LSRA. LSRC Strict indicates that this LSP must pass through LSRC, which is
directly connected to LSRB. The rest may be deduced by analogy. In this way, the
path that the LSP passes through is precisely controlled.
Loose explicit path
A loose explicit path passes through the specified nodes but allows intermediate
nodes between the specified nodes.
Figure 5-5 Loose explicit path (LSRD Loose)
As shown in Figure 5-5, an LSP is set up over a loose explicit path from LSRA to
LSRF. LSRD Loose indicates that this LSP must pass through LSRD, but LSRD
may not be directly connected to LSRA.
l Priority and preemption
Priority and preemption determine resources allocated to MPLS TE tunnels based on the
importance of services to be transmitted on the tunnels.
Setup priorities and holding priorities of tunnels determine whether a new tunnel can
preempt the resources of existing tunnels. If the setup priority of a new CR-LSP is higher
than the holding priority of an existing CR-LSP, the new CR-LSP can occupy resources
of the existing CR-LSP. The priority value ranges from 0 to 7, among which the value 0
indicates the highest priority, and the value 7 indicates the lowest priority. The setup
priority of a tunnel must be lower than or equal to the holding priority of the tunnel.
If no path can provide the required bandwidth for a new CR-LSP, an existing CR-LSP is
torn down and its bandwidth is assigned to the new CR-LSP. This is the preemption
process. The following preemption modes are supported:
Hard preemption: A high-priority CR-LSP can directly preempt resources assigned
to a low-priority CR-LSP. As a result, some traffic is dropped on the low-priority
CR-LSP.
Soft preemption: The make-before-break mechanism applies to resource
preemption. A high-priority CR-LSP preempts bandwidth assigned to a lower-
priority CR-LSP only after traffic over the low-priority CR-LSP switches to a new
CR-LSP.
The priority and preemption attributes determine resource preemption among tunnels. If
multiple CR-LSPs need to be set up, CR-LSPs with higher setup priorities can be set up
by preempting resources. If resources (such as bandwidth) are insufficient, a CR-LSP
with a higher setup priority can preempt resources of an established CR-LSP with a
lower holding priority.
As shown in Figure 5-6, links on the network have different bandwidth values but the
same metric value. There are two TE tunnels on the network:
Tunnel 1: established over Path 1. Its bandwidth is 100 Mbit/s, and its setup and
holding priority values are 0.
Tunnel 2: established over Path 2. Its bandwidth is 100 Mbit/s, and its setup and
holding priority values are 7.
Figure 5-6 Two TE tunnels before preemption (Tunnel 1 over Path 1, Tunnel 2 over Path 2; links of 1 Gbit/s and 100 Mbit/s)
When the link between LSRB and LSRE fails, LSRA calculates a new path, Path 3
(LSRA->LSRB->LSRF->LSRE), for Tunnel 1. The bandwidth of the link between
LSRB and LSRF is insufficient for tunnels Tunnel 1 and Tunnel 2. As a result,
preemption is triggered, as shown in Figure 5-7.
Figure 5-7 Preemption triggered by a link failure (Tunnel 1 moves to Path 3 through LSRB and LSRF; Tunnel 2 moves to Path 4)
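The preemption rule described above can be sketched in Python (an illustrative model, not device code; the function name is hypothetical):

```python
def can_preempt(new_setup_priority: int, existing_holding_priority: int) -> bool:
    """Return True if a new CR-LSP may take resources from an existing one.

    Priority values range from 0 (highest) to 7 (lowest), so preemption
    requires the new CR-LSP's setup priority to be numerically lower than
    the existing CR-LSP's holding priority.
    """
    assert 0 <= new_setup_priority <= 7
    assert 0 <= existing_holding_priority <= 7
    return new_setup_priority < existing_holding_priority
```

In the scenario in Figure 5-6 and Figure 5-7, Tunnel 1 (setup and holding priorities of 0) preempts Tunnel 2 (setup and holding priorities of 7) because can_preempt(0, 7) holds, while the reverse comparison fails.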
l Affinity attribute
The affinity attribute is a 32-bit vector that specifies the links required for a TE tunnel.
This attribute is configured on the ingress node of a tunnel and must be used with the
link administrative group attribute.
After the affinity attribute is configured for a tunnel, a label switching router (LSR)
compares the affinity attribute with the administrative group attribute of a link to
determine whether to select or avoid the link during MPLS TE path calculation. A 32-bit
mask identifies the bits to be compared in the affinity and administrative group
attributes. An LSR performs an AND operation on the affinity and administrative group
attributes with the mask and compares the results of the AND operations. If the two
results are the same, the LSR selects the link. If the two results are different, the LSR
avoids the link. The rules for comparing the affinity and administrative group attributes
are as follows:
Among the bits mapping the 1 bits in the mask, at least one administrative group bit
and the corresponding affinity bit must be 1. The administrative group bits
corresponding to the 0 bits in the affinity attribute must also be 0.
For example, if the affinity attribute is 0x0000FFFF of a tunnel and the mask is
0xFFFFFFFF, the administrative group attribute of an available link must have all
0s in its leftmost 16 bits and at least one 1 bit in its rightmost 16 bits. Therefore,
links with the administrative group values in the range of 0x00000001 to
0x0000FFFF can be selected for the tunnel.
An LSR does not check the administrative group bits mapping 0 bits in the mask.
For example, if the affinity attribute of a tunnel is 0xFFFFFFFF and the mask is
0xFFFF0000, the administrative group attribute of an available link must have at
least one 1 bit in its leftmost 16 bits. The rightmost 16 bits of the administrative
group attribute can be 0 or 1. Therefore, links with the administrative group values
in the range of 0x00010000 to 0xFFFFFFFF can be selected for the tunnel.
NOTE
Devices from different vendors may follow different rules to compare the administrative group and
affinity attributes. When using devices from different vendors on your network, understand their
implementations and ensure that they can interoperate with one another.
A network administrator can use the administrative group and affinity attributes to
control path selection for tunnels.
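The two comparison rules above can be sketched in Python. This is an illustrative model of the rules as stated here only; as the preceding note says, vendor implementations differ, and the function name is made up:

```python
def link_matches_affinity(admin_group: int, affinity: int, mask: int) -> bool:
    """Check a link's administrative group against a tunnel's affinity.

    Only the bits selected by the mask are considered. Among those bits:
    - at least one bit set in both the affinity and the administrative
      group must exist, and
    - every administrative group bit whose affinity bit is 0 must be 0.
    """
    considered = admin_group & mask    # administrative group bits to check
    if considered & affinity == 0:     # rule 1: some shared 1 bit required
        return False
    if considered & ~affinity:         # rule 2: no 1 bit where affinity is 0
        return False
    return True

# Affinity 0x0000FFFF with mask 0xFFFFFFFF accepts 0x00000001-0x0000FFFF:
assert link_matches_affinity(0x0000ABCD, 0x0000FFFF, 0xFFFFFFFF)
# Affinity 0xFFFFFFFF with mask 0xFFFF0000 needs a 1 in the leftmost 16 bits:
assert link_matches_affinity(0x00010000, 0xFFFFFFFF, 0xFFFF0000)
```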
l Hop limit
Hop limit is a condition for path selection during CR-LSP setup. Similar to the
administrative group and affinity attributes, hop limit controls the number of hops
allowed on a CR-LSP.
5.2.2 Implementation
Figure 5-8 illustrates the MPLS TE framework.
Figure 5-8 MPLS TE framework (upstream, local, and downstream nodes; information advertisement through IS-IS/OSPF routing; traffic forwarding of incoming and outgoing packets)
No. 2, Path calculation: Uses the Constrained Shortest Path First (CSPF) algorithm and data in
the TEDB to calculate a path that satisfies specific constraints. CSPF evolves from the
Shortest Path First (SPF) algorithm. It excludes nodes and links that do not satisfy specific
constraints and uses the SPF algorithm to calculate a path.
No. 4, Traffic forwarding: Directs traffic to an MPLS TE tunnel and forwards traffic over the
MPLS TE tunnel. The first three functions set up an MPLS TE tunnel, and the traffic
forwarding function directs traffic arriving at a node to the MPLS TE tunnel.
NOTE
l A static CR-LSP is manually established and does not require information advertisement or path
calculation.
l A dynamic CR-LSP is set up using a signaling protocol and involves all four functions listed in
the table.
To deploy MPLS TE on a network, you must configure link and tunnel attributes. Then MPLS
TE sets up tunnels automatically. After a tunnel is set up, traffic is directed to the tunnel for
forwarding.
The Opaque Type field is the leftmost byte that identifies the application type of an
Opaque LSA. The Opaque ID field is the rightmost three bytes that differentiate LSAs of
the same type. Therefore, each type of Opaque LSA has 255 applications, and each
application has 16777216 different LSAs within a flooding scope.
For example, OSPF Graceful Restart LSAs are Type-9 LSAs with the Opaque Type of 3,
and TE LSAs are Type-10 LSAs with the Opaque Type of 1.
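The Opaque Type and Opaque ID layout described above can be illustrated with a short sketch (the helper name is hypothetical):

```python
def opaque_link_state_id(opaque_type: int, opaque_id: int) -> int:
    """Pack the 32-bit Link State ID of an Opaque LSA.

    The leftmost byte carries the Opaque Type (the application), and the
    rightmost three bytes carry the Opaque ID, which is why each
    application can have 2**24 = 16777216 different LSAs.
    """
    assert 0 <= opaque_type <= 0xFF
    assert 0 <= opaque_id <= 0xFFFFFF
    return (opaque_type << 24) | opaque_id

# A TE LSA uses Opaque Type 1; a Graceful Restart LSA uses Opaque Type 3:
te_lsa_id = opaque_link_state_id(1, 0)   # 0x01000000
gr_lsa_id = opaque_link_state_id(3, 0)   # 0x03000000
```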
The Opaque Information field contains the content to be advertised by an LSA. The
information format is defined by the specific application. The commonly used format is
the extensible Type/Length/Value (TLV) structure.
The TE LSA format is as follows: the LSA header carries the LS age, Options, LS type (10),
Opaque Type (1), Opaque ID, Advertising Router, LS sequence number, LS checksum, and
length fields. The LSA body carries a Router Address TLV (Type 1, 4-byte value) and a Link
TLV (Type 2) containing sub-TLVs such as Link Type, Link ID, Local IP Address, and
Remote IP Address.
TE LSAs carry information in TLVs. Two types of TLVs are defined for TE LSAs:
TLV Type 1
It is a Router Address TLV that uniquely identifies an MPLS node. A Router
Address TLV plays the same role as a router ID in the Constrained Shortest Path
First (CSPF) algorithm.
TLV Type 2
It is a Link TLV that carries attributes of an MPLS TE capable link. Table 5-2 lists
the sub-TLVs that can be carried in a Link TLV.
Type 2: Link ID (with a 4-byte Value field): Carries a link identifier in IP address format.
l For a point-to-point link, this sub-TLV indicates the OSPF router ID of a neighbor.
l For a multi-access link, this sub-TLV indicates the interface IP address of the
designated router (DR).
Type 8: Unreserved Bandwidth (with a 32-byte Value field): Carries reservable bandwidth
values for the eight priorities of a link. The bandwidth for each priority is a 4-byte
floating point number.
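The Unreserved Bandwidth sub-TLV value described above can be sketched as eight 4-byte floating-point numbers packed together (network byte order is assumed here, as is conventional for OSPF fields; this is not vendor code):

```python
import struct

def pack_unreserved_bandwidth(bw_per_priority):
    """Pack per-priority reservable bandwidth into a 32-byte value field.

    One 4-byte IEEE floating-point number for each of the eight priorities
    (0 through 7), assumed to be in network byte order.
    """
    assert len(bw_per_priority) == 8
    return struct.pack("!8f", *bw_per_priority)

value = pack_unreserved_bandwidth([100e6] * 8)  # 100 Mbit/s at every priority
```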
An OSPF TE-capable node generates a TE LSA for each MPLS TE link and advertises the TE
LSA to the local area. If other nodes in the local area support TE extensions, these nodes
establish a topology of TE links. Each node that advertises TE LSAs must have a unique
router address.
Type-10 Opaque LSAs are advertised within an OSPF area, so CSPF calculation is performed
on an area basis. To calculate an LSP spanning multiple areas, CSPF calculation must be
performed in each area.
IS-IS TE
IS-IS is a link state routing protocol and supports TE extensions to advertise TE information.
IS-IS TE defines two new TLV types:
Type 6: IPv4 interface address (with a 4N-byte Value field): Carries one or more local
interface IP addresses. Each IP address occupies 4 bytes.
Type 8: IPv4 neighbor address (with a 4N-byte Value field): Carries one or more remote
interface IP addresses. Each IP address occupies 4 bytes.
l For a point-to-point link, this sub-TLV is filled with a remote IP address.
l For a multi-access link, this sub-TLV is filled with 0.0.0.0.
Type 9: Maximum link bandwidth (with a 4-byte Value field): Carries the maximum
bandwidth of a link.
Type 10: Reservable link bandwidth (with a 4-byte Value field): Carries the maximum
reservable bandwidth of a link.
Type 11: Unreserved bandwidth (with a 32-byte Value field): Carries reservable bandwidth
for the eight priorities of a link.
Type 18: TE Default metric (with a 3-byte Value field): Carries the TE metric configured on a
TE link.
Nodes on an MPLS TE network need to advertise resource information. Each device collects
link information in the local area, such as constraints and bandwidth usage, and generates a
database of link attributes and topology attributes. This database is the TEDB.
A device calculates the optimal path to another node in the local area according to information
in the TEDB. MPLS TE then uses this path to set up a CR-LSP.
The TEDB is independent of the link state database (LSDB) of an IGP. Both databases are
generated through IGP-based flooding, but they record different information and provide
different functions. The TEDB stores TE information in addition to all information available
in the LSDB. The LSDB is used to calculate the shortest path, whereas the TEDB is used to
calculate the best LSP for an MPLS TE tunnel.
A TEDB can be generated only when OSPF TE or IS-IS TE is configured. On an IGP TE-incapable
network, CR-LSPs are established over IGP routes rather than paths calculated using CSPF.
NOTE
If both OSPF TE and IS-IS TE are deployed, CSPF uses the OSPF TEDB to calculate a CR-LSP. If a
CR-LSP is calculated using the OSPF TEDB, CSPF does not use the IS-IS TEDB. If no CR-LSP is
calculated using the OSPF TEDB, CSPF uses the IS-IS TEDB to calculate a CR-LSP.
Whether OSPF TEDB or IS-IS TEDB is used first in the CSPF calculation is determined by the
administrator.
If there are multiple shortest paths with the same metric, CSPF uses a tie-breaking policy to
select one of them. The following tie-breaking policies are available:
l Most-fill: selects the link with the highest proportion of used bandwidth to the maximum
reservable bandwidth. This policy fully uses the bandwidth of individual links.
l Least-fill: selects the link with the lowest proportion of used bandwidth to the maximum
reservable bandwidth. This policy balances bandwidth usage across links.
l Random: selects a random path among equal-metric paths. This policy distributes LSPs
evenly over links, regardless of bandwidth distribution.
When several links have the same proportion of used bandwidth to the maximum reservable
bandwidth, CSPF selects the link discovered first, irrespective of most-fill or least-fill.
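The most-fill and least-fill tie-breaks above can be sketched as follows (an illustrative model; the data layout and names are assumptions):

```python
def tie_break(links, policy="least-fill"):
    """Pick one link among equal-metric candidates.

    links: (used_bw, max_reservable_bw) tuples in discovery order. The
    decision compares the proportion of used bandwidth to the maximum
    reservable bandwidth; on equal proportions the link discovered first
    wins, regardless of policy.
    """
    def fill_ratio(link):
        used, reservable = link
        return used / reservable

    if policy == "most-fill":
        best = max(fill_ratio(link) for link in links)
    else:  # "least-fill"
        best = min(fill_ratio(link) for link in links)
    # next() returns the first (earliest-discovered) link with the best ratio.
    return next(link for link in links if fill_ratio(link) == best)

links = [(50, 100), (20, 100), (20, 100)]
assert tie_break(links, "most-fill") == (50, 100)   # highest fill wins
assert tie_break(links, "least-fill") == (20, 100)  # first of the tied pair
```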
Figure 5-14 shows an example of CSPF calculation, including the color and bandwidth of
some links. The other links are black and have a bandwidth of 100 Mbit/s. A path to LSRE
needs to be set up on the network and must pass through LSRH, with a bandwidth of 80
Mbit/s and an affinity attribute of black. According to these constraints, CSPF excludes the
blue links, the 50 Mbit/s links, and the links not connected to LSRH.
[Figure 5-14: CSPF calculation example. MPLS TE Tunnel 1: destination = LSRE, bandwidth =
80 Mbit/s, affinity property = black, loose node = LSRH. In the calculated topology, the blue
links, the 50 Mbit/s links, and the links not connected to LSRH are excluded.]
After excluding unqualified links, CSPF uses the SPF algorithm to calculate the path. Figure
5-15 shows the calculation result.
l CSPF uses path constraints such as link bandwidth, link attributes, and affinity attributes
as metrics, while SPF simply uses link costs as metrics.
l CSPF does not support load balancing and uses tie-breaking policies to determine a path
if multiple paths have the same metric.
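Conceptually, CSPF prunes every link that violates the constraints and then runs plain SPF (Dijkstra) on the remaining topology. The sketch below illustrates this on a toy graph loosely modeled on Figure 5-14; the `cspf` function, graph encoding, costs, and colors are assumptions for illustration, and loose-node and tie-breaking handling are omitted.

```python
import heapq

def cspf(graph, src, dst, need_bw, affinity):
    """Prune links violating the constraints, then run plain SPF."""
    # Step 1: keep only links with enough bandwidth and a matching color
    pruned = {}
    for node, edges in graph.items():
        pruned[node] = [(nbr, cost) for nbr, cost, bw, color in edges
                        if bw >= need_bw and color == affinity]
    # Step 2: ordinary Dijkstra on the pruned topology
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in pruned.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None  # no path satisfies the constraints
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1]

# Hypothetical topology: (neighbor, cost, bandwidth in Mbit/s, color)
graph = {
    "LSRA": [("LSRB", 10, 100, "black"), ("LSRF", 10, 50, "black")],
    "LSRB": [("LSRH", 10, 100, "black")],
    "LSRF": [("LSRH", 10, 100, "blue")],
    "LSRH": [("LSRE", 10, 100, "black")],
}
print(cspf(graph, "LSRA", "LSRE", need_bw=80, affinity="black"))
# ['LSRA', 'LSRB', 'LSRH', 'LSRE']: the 50 Mbit/s and blue links are pruned
```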
Introduction to RSVP-TE
The Resource Reservation Protocol (RSVP) is designed for the integrated services model, and
reserves resources for nodes along a path. This bandwidth reservation capability makes
RSVP-TE a suitable signaling protocol for establishing MPLS TE paths.
RSVP-TE provides the following extensions based on RSVP to support MPLS TE
implementation:
l RSVP-TE adds Label Request objects to Path messages to request labels and adds Label
objects to Resv messages to allocate labels.
l An extended RSVP message can carry path constraints in addition to label binding
information.
l The extended objects carry MPLS TE bandwidth constraints to implement resource
reservation.
RSVP-TE Implementation
Table 5-4 describes RSVP-TE implementation.
Function Description
1. PE1 uses CSPF to calculate a path from PE1 to PE2, on which the IP address of every
hop is specified. PE1 generates an explicit route object (ERO) containing the IP address
of each hop and adds the ERO to a Path message. PE1 then creates a path state block
(PSB) and sends the Path message to P1 according to information in the ERO. Table 5-5
describes objects carried in the Path message.
Table 5-5 (excerpt): RSVP_HOP = PE1-if1; LABEL_REQUEST object carried.
2. After P1 receives the Path message, it parses the message and creates a PSB according to
information in the message. Then P1 updates the message and sends it to P2 according to
the ERO. Table 5-6 describes objects in the Path message.
The RSVP_HOP object specifies the IP address of the outbound interface through
which a Path message is sent. Therefore, PE1 sets the RSVP_HOP object to the IP
address of the outbound interface toward P1, and P1 sets the RSVP_HOP field to
the IP address of the outbound interface toward P2.
P1 deletes the local LSR ID and IP addresses of the inbound and outbound
interfaces from the ERO field in the Path message.
Table 5-6 (excerpt): RSVP_HOP = P1-if1; LABEL_REQUEST object carried.
3. After receiving the Path message, P2 creates a PSB according to information in the
message, updates the message, and then sends it to PE2 according to the ERO field.
Table 5-7 describes objects in the Path message.
Table 5-7 (excerpt): RSVP_HOP = P2-if1; EXPLICIT_ROUTE = PE2-if0; LABEL_REQUEST object
carried.
4. After PE2 receives the Path message, PE2 determines from the Destination field in the
Session object that it is the egress of the CR-LSP to be set up. PE2 then allocates a
label, reserves bandwidth, and generates a Resv message based on the local PSB. The
Resv message carries the label allocated by PE2 and is sent to P2.
PE2 uses the address carried in the RSVP_HOP field of the received Path message as the
destination IP address of the Resv message. The Resv message does not carry the ERO
field because it is forwarded along the reverse path. Table 5-8 describes objects in the
Resv message.
NOTE
If a Resv message carries the RESV_CONFIRM object, the receiver needs to send a ResvConf
message to the sender to confirm the resource reservation request.
Table 5-8 (excerpt): RSVP_HOP = PE2-if0; LABEL = 3; RECORD_ROUTE = PE2-if0.
5. When P2 receives the Resv message, P2 creates an RSB according to information in the
message, allocates a new label, updates the message, and then sends it to P1. Table 5-9
describes objects in the Resv message.
Table 5-9 (excerpt): RSVP_HOP = P2-if0; LABEL = 17.
6. After receiving the Resv message, P1 creates an RSB according to information in the
message, updates the message, and then sends it to PE1. Table 5-10 describes objects in
the Resv message.
PE1 obtains the label allocated by P1 from the received Resv message. Resources are
successfully reserved and a CR-LSP is set up.
Table 5-10 (excerpt): RSVP_HOP = P1-if0; LABEL = 18.
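The six steps above can be modeled compactly: the Path message creates a PSB on each node on the way down, and the Resv message assigns labels hop by hop on the way back up. This Python sketch is illustrative only; the label values mirror Tables 5-8 to 5-10, and the `signal_cr_lsp` helper is hypothetical.

```python
def signal_cr_lsp(path_nodes):
    """Model the Path/Resv exchange along an explicit route.

    Downstream pass: each node records a path state block (PSB) with the
    next hop taken from the ERO. Upstream pass: the egress allocates
    label 3 (as in Table 5-8), and each transit node allocates the label
    it advertises to its upstream neighbor in the Resv message.
    """
    psb = {}
    for i, node in enumerate(path_nodes):
        psb[node] = path_nodes[i + 1] if i + 1 < len(path_nodes) else None
    labels = {}
    next_label = 17  # arbitrary starting label for transit nodes
    for i, node in enumerate(reversed(path_nodes)):
        if i == 0:
            labels[node] = 3               # egress label, as in Table 5-8
        elif i < len(path_nodes) - 1:      # transit nodes; the ingress allocates none
            labels[node] = next_label
            next_label += 1
    return psb, labels

psb, labels = signal_cr_lsp(["PE1", "P1", "P2", "PE2"])
print(psb["PE1"])  # P1: next hop recorded from the ERO
print(labels)      # {'PE2': 3, 'P2': 17, 'P1': 18}
```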
NOTE
Refresh messages carry information that has already been advertised. The Time Value field in Refresh
messages specifies the refresh interval.
If a node does not receive any Refresh message about a certain state block after the specified
refresh interval elapses, it deletes the state.
A node can send Path and Resv messages to its neighbors in any sequence.
RSVP Srefresh
In addition to state synchronization, RSVP Refresh messages can also be used to detect
reachability between RSVP neighbors and maintain RSVP neighbor relationships. Because
Path and Resv messages are large, refreshing the states of a large number of CR-LSPs
consumes excessive network resources. RSVP Summary Refresh (Srefresh) can address this
problem.
RSVP Srefresh is implemented based on extended objects and the following mechanisms:
l Message_ID extension and retransmission
The Message_ID extension defined in RFC 2961 extends objects carried in RSVP
messages. Among the objects, the Message_ID and Message_ID_ACK objects
acknowledge received RSVP messages to ensure reliable RSVP message delivery.
The Message_ID object can also provide the RSVP retransmission mechanism. A node
resets the retransmission timer (Rf seconds) after sending an RSVP message carrying the
Message_ID object. If the node receives no ACK message within Rf seconds, the node
retransmits the RSVP message after (1 + Delta) x Rf seconds. The Delta value determines the
rate at which the sender increases the retransmission interval. The node keeps
retransmitting the message until it receives an ACK message or the retransmission count
reaches the threshold (retransmission multiplier).
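Under the retransmission rule above, the interval grows by a factor of (1 + Delta) after each unacknowledged attempt. The following is a minimal sketch of the resulting schedule, with hypothetical Rf and Delta values.

```python
def retransmit_schedule(rf, delta, limit):
    """Return the times (seconds after the first send) at which an
    unacknowledged RSVP message is retransmitted: the first retry fires
    after Rf seconds, and each subsequent interval is multiplied by
    (1 + delta), up to the retransmission multiplier 'limit'."""
    times, interval, elapsed = [], rf, 0.0
    for _ in range(limit):
        elapsed += interval
        times.append(round(elapsed, 3))
        interval *= (1 + delta)
    return times

# Hypothetical values: Rf = 0.5 s, Delta = 1 (interval doubles), 3 retries
print(retransmit_schedule(0.5, 1, 3))  # [0.5, 1.5, 3.5]
```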
l Srefresh message transmission
Srefresh messages can be sent instead of standard Path or Resv messages to update
RSVP states. These messages reduce the amount of information that must be transmitted
and processed for maintaining RSVP states. When Srefresh messages are sent to update
the RSVP states, standard Refresh messages are suppressed.
Each Srefresh message carries a Message_ID object, which contains multiple message
IDs to identify the Path and Resv states to update. Srefresh implementation depends on
the Message_ID extension. Srefresh messages can only update the states that have been
advertised in Path and Resv messages containing Message_ID objects.
When a node receives an Srefresh message, the node compares the Message_ID in the
message with that saved in the local PSB or RSB. If the two Message_IDs match, the
node updates the local state block, just as if it had received a standard Path or Resv
message. If they do not match, the node sends an Srefresh NACK message to the sender. Later, the
node updates the Message_ID and the state block based on the received Path or Resv
message.
A Message_ID object contains a message identifier. When a CR-LSP changes, the
message identifier increases. A node compares the message identifier in the received
Path message with the message identifier saved in the local state block. If they are the
same, the node does not update the state block. If the received message identifier is
larger than the local message identifier, the node updates the state block.
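The two comparisons above (Message_ID lookup, then message identifier comparison) can be sketched as follows; the state-block layout and the `handle_refresh` helper are hypothetical simplifications.

```python
def handle_refresh(local, msg_id, identifier):
    """Decide what to do with a refreshed state.

    Unknown Message_IDs are NACKed so the sender falls back to a full
    Path/Resv message; for known Message_IDs, the state block is updated
    only when the received message identifier is larger than the saved
    one (the identifier increases whenever the CR-LSP changes)."""
    if msg_id not in local:
        return "nack"
    if identifier > local[msg_id]:
        local[msg_id] = identifier
        return "update"
    return "ignore"  # same identifier: state unchanged

local = {101: 5, 202: 7}  # Message_ID -> last seen message identifier
print(handle_refresh(local, 101, 6))  # update (identifier grew: CR-LSP changed)
print(handle_refresh(local, 202, 7))  # ignore (same identifier)
print(handle_refresh(local, 303, 1))  # nack  (unknown Message_ID)
```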
Error Signaling
RSVP-TE uses the following messages to advertise CR-LSP errors:
l PathErr message: is sent by an RSVP node to its upstream node if an error occurs while
this node is processing a Path message. The message is forwarded upstream by
intermediate nodes and finally reaches the ingress node.
l ResvErr message: is sent by an RSVP node to its downstream node if an error occurs
while this node is processing a Resv message. The message is forwarded downstream by
intermediate nodes and finally reaches the egress node.
Path Teardown
After the ingress node receives a ResvErr message or an instruction to delete a CR-LSP, it
immediately sends a PathTear message downstream. After receiving this message, the
downstream nodes tear down the CR-LSP and reply with a ResvTear message.
Objects (variable length)
Format of an object:
0                16              24              31
|     Length     | Class_Number  |    C-Type     |
Version (4 bits): Indicates the RSVP version number. Currently, the value is 1.
Flags (4 bits): Indicates the message flag. Generally, the value is 0. RFC 2961 extends this
field to identify whether Summary Refresh (Srefresh) is supported. If Srefresh is supported,
the value of the Flags field is 0x01.
Message Type (8 bits): Indicates the RSVP message type. For example, the value 1 indicates a
Path message, and the value 2 indicates a Resv message.
RSVP Checksum (16 bits): Indicates the RSVP checksum. The value 0 indicates that the
checksum of messages is not checked during transmission.
Length (16 bits): Indicates the total length of an object, in bytes. The value must be a
multiple of 4, and the smallest value is 4.
Class_Number (8 bits): Identifies an object class. Each object class has a name, such as
SESSION, SENDER_TEMPLATE, or TIME_VALUE.
NOTE
For details of each type of RSVP message, see RFC 3209 and RFC 2205.
Path Message
RSVP-TE uses Path messages to create RSVP sessions and to maintain path states. A Path
message is sent from the ingress node to the egress node. A path state block (PSB) is created
on each node the Path message passes.
NOTE
The source IP address of a Path message is the LSR ID of the ingress node and the destination IP
address is the LSR ID of the egress node.
Resv Message
After receiving a Path message, the egress node returns a Resv message. The Resv message
carries resource reservation information and is sent hop-by-hop to the ingress node. Each
intermediate node creates and maintains a reservation state block (RSB) and allocates a label.
When the Resv message reaches the ingress node, an LSP is set up successfully.
Table 5-13 describes objects in a Resv message.
Reservation Styles
A reservation style is the method that an RSVP node uses to reserve resources after receiving
a resource reservation request from the upstream node. The following reservation styles are
supported:
l Fixed Filter (FF) style: creates an exclusive reservation for each sender. A sender does
not share its resource reservation with other senders, and each CR-LSP on a link has a
separate resource reservation.
l Shared Explicit (SE) style: creates a single reservation for a series of selected upstream
senders. CR-LSPs on a link share the same resource reservation.
Static Route
The simplest method to direct traffic to an MPLS TE tunnel is to configure a static route and
specify a TE tunnel interface as the outbound interface.
Tunnel Policy
By default, VPN traffic is forwarded over LSP tunnels rather than MPLS TE tunnels. Either of
the following tunnel policies can be used to direct VPN traffic to MPLS TE tunnels:
l Select-seq policy: selects a TE tunnel to transmit VPN traffic on the public network by
configuring an appropriate tunnel selection sequence.
l Tunnel binding policy: binds a TE tunnel to a destination address to provide QoS
guarantee.
Auto Route
The auto route feature allows a TE tunnel to participate in IGP route calculations as a logical
link. The tunnel interface is used as the outbound interface of the route. The tunnel is
considered a point-to-point (P2P) link with a specified metric. Two auto route types are
available:
l IGP shortcut: An LSP tunnel is not advertised to neighbor nodes, so it will not be used
by other nodes.
l Forwarding adjacency: An LSP tunnel is advertised to neighboring nodes, so it can be
used by these nodes.
Forwarding adjacency advertises LSP tunnels by carrying the neighbor IP address in the
Remote IP Address sub-TLV of OSPF Type-10 Opaque LSAs or the Remote IP Address
sub-TLV of the IS-IS IS Reachability TLV.
To use the forwarding adjacency feature, nodes on both ends of a tunnel must be located
in the same area.
The following example shows the differences between IGP shortcut and forwarding
adjacency.
[Figure 5-18: IGP shortcut and forwarding adjacency example. Switch_7 sets up MPLS TE
Tunnel 1 to Switch_2 over Switch_6; link TE metrics are marked in the figure.]
In Figure 5-18, Switch_7 sets up an MPLS TE tunnel to Switch_2 over the path Switch_7 ->
Switch_6 -> Switch_2. The TE metrics of the links are shown in the figure. On Switch_5 and
Switch_7, routes to Switch_2 and Switch_1 differ depending on the auto route configuration:
l If auto route is not configured, Switch_5 uses Switch_4 as the next hop, and Switch_7
uses Switch_6 as the next hop.
l If auto route is used:
When Tunnel1 is advertised using IGP shortcut, Switch_5 uses Switch_4 as the
next hop, and Switch_7 uses Tunnel1 as the next hop. Because Tunnel1 is not
advertised to Switch_5, only Switch_7 selects Tunnel1 using the IGP.
When Tunnel1 is advertised using forwarding adjacency, Switch_5 uses Switch_7
as the next hop, and Switch_7 uses Tunnel1 as the next hop. Because Tunnel1 is
advertised to Switch_5 and Switch_7, both nodes select Tunnel1 using the IGP.
Background
MPLS TE tunnels are used to optimize traffic distribution on a network. An MPLS TE tunnel
is configured using the initial bandwidth required for services and initial network topology.
The network topology often changes, so the ingress node may not use the optimal path to
forward MPLS packets, causing a waste of network resources. MPLS TE tunnels need to be
optimized after being established.
Implementation
A specific event that occurs on the ingress node can trigger optimization of a CR-LSP. The
optimization enables the CR-LSP to be reestablished over the optimal path with the smallest
metric.
NOTE
Background
RSVP uses raw IP to transmit packets. Raw IP has no security mechanism and is prone to
attacks. RSVP authentication verifies packets using keys to prevent attacks. When the local
RSVP router receives a packet with a sequence number smaller than the local maximum
sequence number, the neighbor relationship is terminated.
Concepts
l Raw IP: Similar to UDP, raw IP is unreliable because it has no control mechanism to
determine whether raw IP datagrams reach their destinations.
l Spoofing attack: An unauthorized router establishes a neighbor relationship with a local
router or sends pseudo RSVP messages to attack the local router. (For example,
requesting the local router to reserve a lot of bandwidth.)
l Replay attack: A remote RSVP router continuously sends packets with sequence
numbers smaller than the maximum sequence number on a local RSVP router. Then the
local router terminates the RSVP neighbor relationship with the remote RSVP router and
the established CR-LSP is torn down.
Implementation
l Key authentication
RSVP authentication protects RSVP nodes from spoofing attacks by verifying keys in
packets exchanged between neighboring nodes. The same key must be configured on
two neighboring nodes before they perform RSVP authentication. A local node uses the
configured key and the Keyed-Hashing for Message Authentication Message Digest 5
(HMAC-MD5) algorithm to calculate a digest for a message, adds this digest as an
integrity object into the message, and then sends the message to the remote node. After
the remote node receives the message, it uses the same key and algorithm to calculate a
digest and checks whether the local digest is the same as the received one. If they match,
the remote node accepts the message. If they do not match, the remote node discards the
message.
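The HMAC-MD5 exchange described above can be sketched with Python's standard hmac module. The key and message bytes below are placeholders; in real RSVP the digest is carried in an INTEGRITY object rather than compared directly.

```python
import hmac
import hashlib

KEY = b"shared-rsvp-key"  # the same key must be configured on both neighbors

def sign(message: bytes) -> bytes:
    """Sender: compute the HMAC-MD5 digest added to the message."""
    return hmac.new(KEY, message, hashlib.md5).digest()

def verify(message: bytes, digest: bytes) -> bool:
    """Receiver: recompute the digest with the same key and algorithm,
    then compare; a mismatch means the message is discarded."""
    return hmac.compare_digest(hmac.new(KEY, message, hashlib.md5).digest(), digest)

msg = b"PATH message bytes"
tag = sign(msg)
print(verify(msg, tag))                  # True: digests match, message accepted
print(verify(b"tampered message", tag))  # False: message discarded
```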
l Authentication lifetime
The authentication lifetime specifies the period during which the RSVP neighbor
relationship is retained and provides the following functions:
Controls the lifetime of an RSVP neighbor relationship when no CR-LSP exists
between the RSVP neighbors. The RSVP neighbor relationship is retained until the
RSVP authentication lifetime expires. The RSVP-TE authentication lifetime does
not affect the status of existing CR-LSPs.
Prevents continuous RSVP authentication. For example, after RSVP authentication
is enabled between RTA and RTB, RTA continuously sends tampered RSVP
messages with an incorrect key to RTB. As a result, RTB continuously discards the
messages. The authentication relationship between neighbors, however, cannot be
terminated. The authentication lifetime can prevent this situation. When neighbors
receive valid RSVP messages within the lifetime, the RSVP authentication lifetime
is reset. Otherwise, the authentication relationship is deleted after the authentication
lifetime expires.
l Handshake mechanism
The handshake mechanism maintains the RSVP authentication status. After neighboring
nodes authenticate each other, they exchange handshake packets. If they accept the
packets, they record a successful handshake. If a local node receives a packet with a
sequence number smaller than the local maximum sequence number, the local node
processes the packet as follows:
Discards the packet if it shows that the handshake mechanism is not enabled on the
remote node.
Discards the packet if it shows that the handshake mechanism is enabled on the
remote node and the local node has a record about a successful handshake. If the
local node does not have a record of a successful handshake with the remote node,
this packet becomes the first to arrive at the local node and the local node starts a
handshake process.
l Message window
A message window saves the received RSVP messages. If the window size is 1, the
system saves only the largest sequence number. If the window size is set to a value
greater than 1, the system saves the specified number of largest sequence numbers. For
example, the window size is set to 10, and the largest sequence number of received
RSVP messages is 80. The sequence numbers from 71 to 80 can be saved if there is no
packet mis-sequencing. If packet mis-sequencing occurs, the local node sequences the
messages and records the 10 largest sequence numbers.
When the window size is not 1, the local RSVP node considers the RSVP message
received from the neighboring node as a valid message in either of the following
situations:
The sequence number in the received RSVP message is larger than the maximum
sequence number in the window.
The RSVP message carries an original sequence number that is larger than the
minimum sequence number in the window but is not saved in the window.
The local RSVP node then adds the sequence number of the received RSVP message to
the window and processes the RSVP message. If the sequence number is larger than the
maximum sequence number in the window, the local RSVP node deletes the minimum
sequence number in the window. If the sequence number is smaller than the minimum
sequence number in the window or already exists in the window, the local RSVP node
discards the RSVP message.
NOTE
By default, the window size is 1. The handshake mechanism works when the window size is 1. If the
window size is not 1, the handshake mechanism is affected. When the local RSVP node receives an
RSVP message with a sequence number smaller than the local maximum sequence number, either of the
following situations occurs:
l If the sequence number of the received RSVP message is larger than the minimum sequence
number in the window and is not saved in the window, the local RSVP node correctly processes
the RSVP message.
l If the sequence number already exists in the window, the local RSVP node discards the RSVP
message.
l If the sequence number is smaller than the minimum sequence number in the window, RSVP
determines whether both ends are enabled with the handshake mechanism. If either one is not
enabled with the handshake mechanism, the system discards the RSVP message. If both ends are
enabled with the handshake mechanism, the local and remote ends start the handshake process
again and discard the RSVP message.
For example, the window size is 10, and the window stores sequence numbers 71, 75, and 80. If the
local RSVP node receives an RSVP message with sequence number 72, it adds the sequence number to
the window and correctly processes the RSVP message. If the local RSVP node receives an RSVP
message with sequence number 75, it directly discards the RSVP message. If the local RSVP node
receives an RSVP message with sequence number 70, RSVP determines whether both ends are enabled
with the handshake mechanism. The local and remote ends start the handshake process again only when
they are both enabled with the handshake mechanism.
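The window rules above, including the NOTE's example with sequence numbers {71, 75, 80}, can be sketched as follows. The `check_sequence` helper is a hypothetical simplification, and the handshake process itself is not modeled.

```python
def check_sequence(window, seq, window_size):
    """Apply the message-window rules from the text.

    Accept a sequence number larger than the window maximum (sliding the
    window) or one inside the window that has not been seen; discard
    duplicates; a number below the window minimum triggers the handshake
    check (returned here as 'handshake')."""
    if seq in window:
        return "discard", window
    if seq > max(window):
        new = sorted(window | {seq})[-window_size:]  # drop the smallest entries
        return "accept", set(new)
    if seq > min(window):
        return "accept", window | {seq}
    return "handshake", window

window = {71, 75, 80}
print(check_sequence(window, 72, 10)[0])  # accept (in range, not yet seen)
print(check_sequence(window, 75, 10)[0])  # discard (already in the window)
print(check_sequence(window, 70, 10)[0])  # handshake (below the window minimum)
```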
An MD5 key is entered in either cipher text or plain text. The MD5 algorithm has the
following characteristics:
Each protocol is configured with a separate key and cannot share a key with another
protocol.
An interface or a node is assigned only one key. To change the key, you must delete
the original key and configure a new one.
l Keychain key
Keychain is an enhanced encryption algorithm. It allows you to define a group of
passwords to form a password string, and to specify encryption and decryption
algorithms and a validity period for each password. When the system sends or receives a
packet, the system selects a valid password. Within the validity period of the password,
the system uses the encryption algorithm configured for the password to encrypt the
packet before sending it out, or the system uses the decryption algorithm configured for
the password to decrypt the packet after receiving it. In addition, the system uses a new
password after the previous one expires, minimizing the risks of password cracking.
Keychain has the following characteristics:
A keychain authentication password and the encryption and decryption algorithms
must be configured. The password validity period can also be configured.
Keychain settings can be shared by protocols and managed uniformly.
Keychain can be used on an RSVP interface or node and supports only HMAC-MD5.
NOTE
An MD5 key cannot ensure key security. You are advised to use a keychain key instead.
new path. Reliability technologies are required to prevent or minimize packet loss in the
process.
l If a node or link on a working MPLS TE tunnel fails, reliability technologies are required
to set up a backup CR-LSP and switch traffic to the backup CR-LSP, while minimizing
packet loss in this process.
l When a node on a working MPLS TE tunnel encounters a control plane failure but its
forwarding plane is still working properly, reliability technologies are required to ensure
nonstop traffic forwarding during fault recovery on the control plane.
MPLS TE provides multiple reliability technologies to ensure high reliability of key services
transmitted over MPLS TE tunnels. Table 5-14 describes these reliability technologies.
5.2.9.2 Make-Before-Break
The make-before-break mechanism prevents traffic loss during a traffic switchover between
two CR-LSPs. This mechanism improves MPLS TE tunnel reliability.
Background
Any change in link or tunnel attributes causes a CR-LSP to be reestablished using new
attributes. Traffic is then switched from the previous CR-LSP to the new CR-LSP. If a traffic
switchover is triggered before the new CR-LSP is set up, some traffic is lost. The make-
before-break mechanism prevents traffic loss.
Implementation
The make-before-break mechanism sets up a new CR-LSP and switches traffic to it before the
original CR-LSP is torn down. This mechanism helps minimize data loss and reduces
[Figure 5-19: make-before-break example. Path 1: Switch_1 -> Switch_2 -> Switch_3 ->
Switch_4; Path 2: Switch_1 -> Switch_5 -> Switch_3 -> Switch_4.]
In Figure 5-19, the maximum reservable bandwidth on each link is 60 Mbit/s. A CR-LSP has
been set up along Path 1 (Switch_1 -> Switch_2 -> Switch_3 -> Switch_4) with the
bandwidth of 40 Mbit/s.
A new CR-LSP needs to be set up along Path 2 (Switch_1 -> Switch_5 -> Switch_3 ->
Switch_4) to forward data through the lightly loaded Switch_5. The available bandwidth of
the link Switch_3 -> Switch_4 is only 20 Mbit/s, not enough for the new path. The make-
before-break mechanism can be used in this situation to allow the new CR-LSP to use the
bandwidth of the link between Switch_3 and Switch_4 reserved for the original CR-LSP.
After the new CR-LSP is established, traffic switches to the new CR-LSP, and the original
CR-LSP is torn down.
The make-before-break mechanism can also be used to increase tunnel bandwidth. If the
reservable bandwidth of a shared link increases to the required value, a new CR-LSP can be
established.
On the network shown in Figure 5-19, the maximum reservable bandwidth on each link is 60
Mbit/s. A CR-LSP has been set up along Path 1 with the bandwidth of 30 Mbit/s.
A new CR-LSP needs to be set up along Path 2 to forward data through the lightly loaded
Switch_5, and the path bandwidth needs to increase to 40 Mbit/s. The available bandwidth of
the link Switch_3 -> Switch_4 is only 30 Mbit/s. The make-before-break mechanism can be
used in this situation. This mechanism allows the new CR-LSP to use the bandwidth of the
link between Switch_3 and Switch_4 reserved for the original CR-LSP, and reserves an
additional bandwidth of 10 Mbit/s for the new path. After the new CR-LSP is set up, traffic is
switched to the new CR-LSP, and the original CR-LSP is torn down.
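The bandwidth arithmetic in both scenarios reduces to a simple shared-reservation check: on a link shared by the old and new CR-LSPs, the new CR-LSP reuses the old reservation, and only the increment (if any) must still fit. This is a minimal sketch under that assumption; the `mbb_feasible` helper is hypothetical.

```python
def mbb_feasible(link_max, other_reservations, old_bw, new_bw):
    """Make-before-break admission check on a shared link.

    The new CR-LSP reuses the bandwidth reserved for the original
    CR-LSP, so only the extra bandwidth beyond old_bw must be available
    after subtracting reservations of unrelated CR-LSPs."""
    available = link_max - sum(other_reservations) - old_bw
    extra_needed = max(0, new_bw - old_bw)
    return extra_needed <= available

# Figure 5-19 scenarios: maximum reservable bandwidth 60 Mbit/s on Switch_3 -> Switch_4
print(mbb_feasible(60, [], old_bw=40, new_bw=40))   # True: the 40 Mbit/s is reused
print(mbb_feasible(60, [], old_bw=30, new_bw=40))   # True: the extra 10 Mbit/s fits
print(mbb_feasible(60, [25], old_bw=30, new_bw=40)) # False: only 5 Mbit/s remains
```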
to a new CR-LSP after the switching delay time, and then deletes the original CR-LSP after
the deletion delay time.
Background
RSVP Refresh messages can synchronize PSB and RSB between nodes, monitor reachability
between RSVP neighbors, and maintain RSVP neighbor relationships.
This soft state mechanism detects neighbor relationships using Path and Resv messages. The
detection speed is low and a link failure cannot promptly trigger a service traffic switchover.
RSVP Hello is introduced to solve this problem.
Implementation
RSVP Hello is implemented as follows:
1. Hello handshake
NOTE
Usage Scenario
RSVP Hello applies to scenarios with TE FRR or RSVP GR enabled.
Concepts
CR-LSP backup functions include hot standby, ordinary backup, and the best-effort path:
l Hot standby: A hot-standby CR-LSP is set up immediately after the primary CR-LSP is
set up. When the primary CR-LSP fails, traffic switches to the hot-standby CR-LSP.
l Ordinary backup: An ordinary backup CR-LSP can be set up only after a primary CR-
LSP fails. The ordinary backup CR-LSP takes over traffic when the primary CR-LSP
fails.
l Best-effort path: If both the primary and backup CR-LSPs fail, a best-effort path is set up
and takes over traffic.
In Figure 5-21, the primary CR-LSP is set up over the path PE1 -> P1 -> P2 -> PE2, and
the backup CR-LSP is set up over the path PE1 -> P3 -> PE2. When both CR-LSPs fail,
PE1 sets up a best-effort path PE1 -> P4 -> PE2 to take over traffic.
[Figure 5-21: primary CR-LSP PE1 -> P1 -> P2 -> PE2; backup CR-LSP PE1 -> P3 -> PE2;
best-effort path PE1 -> P4 -> PE2.]
NOTE
A best-effort path has no bandwidth reserved for traffic, but has an affinity and a hop limit
configured to control the nodes it passes.
Implementation
The process of CR-LSP backup is as follows:
1. CR-LSP backup deployment
Determine the paths, bandwidth values, and deployment modes. Table 5-15 lists CR-
LSP backup deployment items.
If attributes of a backup CR-LSP are modified, the ingress node uses the make-before-
break mechanism to reestablish the backup CR-LSP with the updated attributes. After
that backup CR-LSP has been successfully reestablished, traffic on the original backup
CR-LSP (if it is transmitting traffic) switches to this new backup CR-LSP, and then the
original backup CR-LSP is torn down.
4. Fault detection
CR-LSP backup supports the following fault detection functions:
Default error signaling mechanism of RSVP-TE: The fault detection speed is
relatively slow.
Bidirectional forwarding detection (BFD) for CR-LSP: This function is
recommended because it implements fast fault detection.
5. Traffic switchover
After the primary CR-LSP fails, the ingress node attempts to switch traffic from the
primary CR-LSP to a hot-standby CR-LSP. If the hot-standby CR-LSP is unavailable,
the ingress node attempts to switch traffic to an ordinary backup CR-LSP. If the ordinary
backup CR-LSP is unavailable, the ingress attempts to switch traffic to a best-effort path.
6. Traffic switchback
Traffic switches back to a path based on priorities of the available CR-LSPs. Traffic will
first switch to the primary CR-LSP. If the primary CR-LSP is unavailable, traffic will
switch to the hot-standby CR-LSP. The ordinary CR-LSP has the lowest priority.
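The switchover and switchback rules in steps 5 and 6 both follow the same priority order, which can be sketched as a simple preference walk; the `select_lsp` helper and the status encoding are illustrative.

```python
def select_lsp(status):
    """Pick the CR-LSP to carry traffic, in priority order: primary,
    then hot-standby, then ordinary backup, then best-effort path.
    'status' maps each CR-LSP name to 'up' or 'down'."""
    for name in ("primary", "hot-standby", "ordinary", "best-effort"):
        if status.get(name) == "up":
            return name
    return None  # no path available

print(select_lsp({"primary": "down", "hot-standby": "up"}))   # hot-standby
print(select_lsp({"primary": "down", "hot-standby": "down",
                  "ordinary": "down", "best-effort": "up"}))  # best-effort
```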
5.2.9.5 TE FRR
Traffic engineering fast reroute (TE FRR) provides link protection and node protection for
MPLS TE tunnels. If a link or node fails, TE FRR rapidly switches traffic to a backup path,
minimizing traffic loss.
Background
A link or node failure triggers a primary/backup CR-LSP switchover. The switchover is not
completed until the IGP routes of the backup path converge, CSPF calculates a new path, and
a new CR-LSP is established. Traffic is lost during this process.
TE FRR technology can prevent traffic loss during a primary/backup CR-LSP switchover.
After a link or node fails, TE FRR establishes a CR-LSP that bypasses the faulty link or node.
The bypass CR-LSP can then rapidly take over traffic to minimize loss. At the same time, the
ingress node reestablishes a primary CR-LSP.
Concepts
PLR: Point of local repair, the ingress node of a bypass CR-LSP. The PLR can be the ingress
node but not the egress node of the primary CR-LSP.
Protected object:
l Link protection: In Figure 5-23 below, the primary CR-LSP passes through the direct
link between the PLR (LSRB) and MP (LSRC). Bypass LSP 1 can protect this link, which
is called link protection.
l Node protection: In Figure 5-23 below, the primary CR-LSP passes through LSRC
between the PLR (LSRB) and MP (LSRD). Bypass LSP 2 can protect LSRC, which is
called node protection.
Bandwidth:
l Bandwidth protection: A bypass CR-LSP is assigned bandwidth higher than or equal to
the primary CR-LSP bandwidth, so that the bypass CR-LSP protects both the path and the
bandwidth of the primary CR-LSP.
l Non-bandwidth protection: A bypass CR-LSP has no bandwidth and protects only the
path of the primary CR-LSP.
Implementation:
l Manual protection: A bypass CR-LSP is manually configured and bound to a primary
CR-LSP.
l Auto protection: An auto FRR-enabled node automatically establishes a bypass CR-LSP.
The node binds the bypass CR-LSP to a primary CR-LSP if the node receives an FRR
protection request and the FRR topology requirements are met.
[Figure 5-23: primary CR-LSP LSRA -> LSRB -> LSRC -> LSRD -> LSRE. Bypass LSP 1
(PLR = LSRB, MP = LSRC) protects the LSRB-LSRC link; bypass LSP 2 (PLR = LSRB,
MP = LSRD) protects node LSRC.]
NOTE
A bypass CR-LSP supports the combination of protection modes. For example, manual protection, node
protection, and bandwidth protection can be implemented together on a bypass CR-LSP.
Implementation
TE FRR is implemented as follows:
[Figure 5-24: a primary CR-LSP with two bypass CR-LSPs; bypass LSP 1 provides link
protection and bypass LSP 2 provides node protection.]
If multiple bypass CR-LSPs are established, the PLR checks, in sequence, whether the bypass
CR-LSPs provide bandwidth protection, their implementation modes, and their protected
objects. Bypass CR-LSPs providing bandwidth protection are preferred over those that do not
provide bandwidth protection. Manual bypass CR-LSPs are preferred over auto bypass CR-LSPs.
Bypass CR-LSPs providing node protection are preferred over those providing link
protection. Figure 5-24 shows two bypass CR-LSPs. If both the bypass CR-LSPs
provide bandwidth protection and are manually configured, bypass LSP 2 is bound to the
primary CR-LSP. (Bypass LSP 2 provides node protection, and bypass LSP 1 provides
link protection.) If bypass LSP 1 provides bandwidth protection but bypass LSP 2 does
not, bypass LSP 1 is bound to the primary CR-LSP.
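The preference order above (bandwidth protection first, then manual over auto, then node over link protection) can be expressed as a sort key. This is a sketch with hypothetical attribute names, reproducing the two Figure 5-24 scenarios in the text.

```python
def rank_bypass(lsp):
    """Preference key for bypass CR-LSP selection: lower tuple wins.
    Order: bandwidth protection, then manual over auto, then node
    protection over link protection."""
    return (
        0 if lsp["bandwidth_protection"] else 1,
        0 if lsp["manual"] else 1,
        0 if lsp["protects"] == "node" else 1,
    )

bypass1 = {"name": "bypass1", "bandwidth_protection": True,
           "manual": True, "protects": "link"}
bypass2 = {"name": "bypass2", "bandwidth_protection": True,
           "manual": True, "protects": "node"}

# Both manual with bandwidth protection: node protection wins
print(min([bypass1, bypass2], key=rank_bypass)["name"])  # bypass2

# If bypass 2 loses bandwidth protection, bypass 1 is bound instead
bypass2["bandwidth_protection"] = False
print(min([bypass1, bypass2], key=rank_bypass)["name"])  # bypass1
```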
After the binding is complete, the primary CR-LSP's NHLFE records the bypass CR-
LSP's NHLFE index and an inner label that the MP allocates to the upstream node on the
primary CR-LSP. This label is used to forward traffic during a primary/backup CR-LSP
switchover.
3. Fault detection
Link protection uses a link layer protocol to detect and report faults. The speed of
fault detection at the link layer depends on the link type.
Node protection uses a link layer protocol to detect link faults. If no fault occurs on
a link, RSVP Hello or BFD for RSVP is used to detect faults on the protected
node.
As soon as a link or node fault is detected, an FRR switchover is triggered.
NOTE
l In node protection, only the link between the protected node and the PLR is protected. The
PLR cannot detect faults on the link between the protected node and the MP.
l Link fault detection, BFD, and RSVP Hello mechanisms detect a failure at descending speeds: link-layer detection is the fastest and RSVP Hello is the slowest.
4. Switchover
When the primary CR-LSP fails, service traffic and RSVP messages are switched to the
bypass CR-LSP, and the switchover event is advertised to the upstream nodes. Upon
receiving a data packet, the PLR pushes an inner label and an outer label into the packet.
The inner label is allocated by the MP to the upstream node on the primary CR-LSP, and
the outer label is allocated by the next hop on the bypass CR-LSP to the PLR. The
penultimate hop of the bypass CR-LSP pops the outer label and forwards the packet with
only the inner label to the MP. The MP forwards the packet to the next hop along the
primary CR-LSP according to the inner label.
Figure 5-25 shows nodes on the primary and bypass CR-LSPs, labels allocated to the
nodes, and behaviors that the nodes perform. The bypass CR-LSP provides node
protection. If LSRC or the link between LSRB and LSRC fails, the PLR (LSRB) swaps
the inner label 1024 to 1022, pushes the outer label 34 into a packet, and forwards the
packet to the next hop along the bypass CR-LSP. The lower part of Figure 5-25 shows
the packet forwarding process after a TE FRR switchover.
(Figure 5-25, not reproduced: along the primary CR-LSP LSRA-LSRB-LSRC-LSRD-LSRE, the PLR LSRB swaps the inner label 1024 to 1022, pushes the outer label 34 assigned for the bypass CR-LSP, and forwards the packet toward the MP LSRD; the link fault and node fault are marked between LSRB and LSRD.)
5. Switchback
After a TE FRR switchover is complete, the ingress node of the primary CR-LSP
reestablishes the primary CR-LSP using the make-before-break mechanism. Service
traffic and RSVP messages are switched back to the primary CR-LSP after the primary
CR-LSP is successfully reestablished. The reestablished primary CR-LSP is called a
modified CR-LSP. The make-before-break mechanism allows the original primary CR-
LSP to be torn down only after the modified CR-LSP is set up successfully.
NOTE
FRR does not take effect if multiple nodes fail simultaneously. After data is switched from the primary
CR-LSP to the bypass CR-LSP, the bypass CR-LSP must remain Up to ensure data forwarding. If the
bypass CR-LSP fails, the protected data cannot be forwarded using MPLS, and the FRR function fails.
Even if the bypass CR-LSP is reestablished, it cannot forward data. Data forwarding will be restored
only after the primary CR-LSP recovers or is reestablished.
Other Functions
l N:1 protection
TE FRR supports N:1 protection mode, in which a bypass CR-LSP protects multiple
primary CR-LSPs.
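The protection modes described above map to configuration roughly as follows. This is an illustrative sketch only: the tunnel numbers, addresses, and protected interface are invented, and the exact forms of the mpls te bypass-tunnel, mpls te protected-interface, mpls te auto-frr, and mpls te fast-reroute commands should be verified against the command reference for your software version.

```text
# Manual protection: on the PLR, configure a bypass tunnel and bind it
# to the protected outgoing interface (names and numbers are examples).
interface Tunnel2
 tunnel-protocol mpls te
 destination 3.3.3.3                          # LSR ID of the MP
 mpls te tunnel-id 200
 mpls te bypass-tunnel                        # mark this tunnel as a bypass tunnel
 mpls te protected-interface GigabitEthernet 0/0/1
 mpls te commit
quit
# Auto protection: enable TE Auto FRR globally, then enable FRR on the
# primary tunnel so that a bypass CR-LSP is set up automatically.
mpls
 mpls te auto-frr
quit
interface Tunnel1
 mpls te fast-reroute bandwidth               # request bandwidth protection
 mpls te commit
```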
5.2.9.6 SRLG
Shared risk link group (SRLG) is a constraint used when calculating a backup or bypass CR-LSP on
a network with CR-LSP hot standby or TE FRR configured. SRLG prevents bypass and
primary CR-LSPs from being set up over links with the same risk level, which enhances TE
tunnel reliability.
Background
A network administrator often uses CR-LSP hot standby or TE FRR technology to ensure
MPLS TE tunnel reliability. However, CR-LSP hot standby or TE FRR may fail in real-world
application.
(Figure 5-26, not reproduced: in the logical topology, core nodes P1, P2, and P3 connect PE1 and PE2, with Path 2 traversing P3; in the physical topology, P1, P2, and P3 are interconnected through the transport network device NE1 and share links.)
In Figure 5-26, Path 1 is the primary CR-LSP and Path 2 is the bypass CR-LSP. The link
between P1 and P2 requires TE FRR protection.
Core nodes P1, P2, and P3 on the backbone network are connected by a transport network
device. In Figure 5-26, the top diagram is an abstract version of the actual topology below.
NE1 is a transport network device. During network construction and deployment, two core
nodes may share links on the transport network. For example, the yellow links in Figure 5-26
are shared by P1, P2, and P3. A shared link failure affects primary and bypass CR-LSPs and
makes FRR protection invalid. To enable TE FRR to protect the CR-LSP, bypass and primary
CR-LSPs must be set up over links of different risk levels. SRLG technology can be deployed
to meet this requirement.
An SRLG is a set of links that share the same risks. If one of the links fails, the other links
in the group may fail as well. Therefore, protection fails if another link in the same group
carries the hot-standby or bypass CR-LSP for the failed link.
Implementation
SRLG is a link attribute, expressed by a numeric value. Links with the same SRLG value
belong to a single SRLG.
The SRLG value is advertised to the entire MPLS TE domain using IGP TE. Nodes in a
domain can then obtain SRLG values of all the links in the domain. The SRLG value is used
in CSPF calculations together with other constraints such as bandwidth.
l Strict mode: The SRLG value is a mandatory constraint when CSPF calculates paths for
hot standby and bypass CR-LSPs.
l Preferred mode: The SRLG value is an optional constraint when CSPF calculates paths
for hot standby and bypass CR-LSPs. If CSPF fails to calculate a path based on the
SRLG value, CSPF excludes the SRLG value when recalculating the path.
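As a hedged sketch, SRLG membership and the path-calculation mode might be configured as follows; the SRLG number and interface are examples, and the command forms should be checked against the command reference:

```text
# Assign the same SRLG value to links that share risk.
interface GigabitEthernet 0/0/1
 mpls te srlg 100                       # this link belongs to SRLG 100
quit
# Choose how CSPF treats the SRLG constraint.
mpls
 mpls te srlg path-calculation strict   # or: preferred
```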
Usage Scenario
SRLG applies to networks with CR-LSP hot standby or TE FRR configured.
Benefits
SRLG constrains the path calculation for hot-standby and bypass CR-LSPs, which prevents
the primary and backup paths from traversing links with the same risk level.
Concepts
Tunnel protection group concepts are as follows:
(Figure 5-27, not reproduced: LSRA and LSRB are connected by working tunnel-1 and protection tunnel-3.)
As shown in Figure 5-27, on LSRA, tunnel-3 is specified as the protection tunnel for working
tunnel-1. When a failure of tunnel-1 is detected, the ingress node switches traffic to protection
tunnel-3. After tunnel-1 is restored, the system determines whether to switch traffic back to
the working tunnel according to the configured switchback policy.
Implementation
A tunnel protection group uses a configured protection tunnel to protect a working tunnel,
improving tunnel reliability. Configuring working and protection tunnels over separate links is
recommended.
Table 5-19 describes the process of implementing a tunnel protection group.
Tunnel setup: The process of setting up working and protection tunnels is the same as that of
setting up a common tunnel. The working and protection tunnels must have the same ingress
and egress nodes. Protection tunnel attributes, however, can differ from working tunnel
attributes. To better protect the working tunnel, configure working and protection tunnels
over separate links when deploying a tunnel protection group.
NOTE
l The protection tunnel cannot be protected by any other protection tunnel or enabled with TE FRR.
l You can configure independent attributes for the protection tunnel, which facilitates network planning.
Binding: After a tunnel protection group is configured for a working tunnel, the protection
tunnel with a specified tunnel ID is bound to the working tunnel.
Fault detection: To implement fast protection switchover, the tunnel protection group detects
faults using the BFD for CR-LSP mechanism in addition to MPLS TE's detection mechanism.
Protection switchover: The tunnel protection group supports the following switchover modes:
l Manual switchover: A network administrator runs a command to switch traffic.
l Auto switchover: The ingress node automatically switches traffic when detecting a fault
on the working tunnel. In auto switchover mode, you can set the switchover period.
Switchback: After the working tunnel is restored, the ingress node determines whether to
switch traffic back to the working tunnel according to the configured switchback policy.
(Figure, not reproduced: protection tunnel-3 protects both working tunnel-1 and working tunnel-2 between LSRA and LSRB, illustrating N:1 protection.)
Table 5-20 Differences between CR-LSP backup and tunnel protection group
Protected object:
l CR-LSP backup: Primary and backup CR-LSPs are set up in the same tunnel. The backup
CR-LSP protects the primary CR-LSP.
l Tunnel protection group: The protection tunnel protects the working tunnel.
LSP attributes:
l CR-LSP backup: The primary and backup CR-LSPs have the same attributes (such as
bandwidth, setup priority, and holding priority), except the TE FRR attributes.
l Tunnel protection group: Attributes of tunnels in a tunnel protection group are independent
of each other. For example, a protection tunnel without bandwidth can protect a working
tunnel requiring bandwidth protection.
Protection mode:
l CR-LSP backup: Supports the 1:1 protection mode. Each primary CR-LSP has a backup
CR-LSP.
l Tunnel protection group: Supports 1:1 and N:1 protection modes. A protection tunnel can
protect multiple working tunnels. If a working tunnel fails, data is switched to the shared
protection tunnel.
Background
In most cases, MPLS TE uses TE FRR, CR-LSP backup, and TE tunnel protection group to
enhance network reliability. These technologies detect faults using the RSVP Hello or RSVP
Srefresh mechanism, but the detection speed is slow. When a Layer 2 device such as a switch
or hub exists between two nodes, the traffic switchover speed is even slower, leading to traffic
loss. BFD uses the fast packet transmission mode to quickly detect faults on MPLS TE
tunnels, so that a service traffic switchover can be triggered quickly to better protect the
MPLS TE service.
Concepts
Based on BFD session setup modes, BFD is classified into the following types:
l Static BFD: Local and remote discriminators of BFD sessions are manually configured.
l Dynamic BFD: Local and remote discriminators of BFD sessions are automatically
allocated by the system.
NOTE
For details about BFD, see "BFD Configuration" in S2750&S5700&S6700 Series Ethernet Switches
Configuration Guide - Reliability.
Implementation
In MPLS TE, BFD is implemented in the following methods for different detection scenarios:
l BFD for RSVP
BFD for Resource Reservation Protocol (RSVP) detects faults on links between RSVP
nodes in milliseconds. BFD for RSVP applies to TE FRR networking where a Layer 2
device exists between the PLR and its RSVP neighbor along the primary CR-LSP.
l BFD for CR-LSP
BFD for CR-LSP can rapidly detect faults on CR-LSPs and notify the forwarding plane
of the faults to ensure a fast traffic switchover. BFD for CR-LSP is usually used together
with a hot-standby CR-LSP or a tunnel protection group.
l BFD for TE Tunnel
When an MPLS TE tunnel functions as a virtual private network (VPN) tunnel on the
public network, BFD for TE tunnel detects faults in the entire TE tunnel. This triggers
traffic switchovers for VPN applications including VPN FRR and virtual leased line
(VLL) FRR.
BFD for RSVP can share a BFD session with BFD for OSPF, BFD for IS-IS, or BFD for
Border Gateway Protocol (BGP). The local node therefore selects the minimum values among
the parameters of the shared BFD session as the local BFD parameters. The parameters include
the transmit interval, the receive interval, and the local detection multiplier.
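A minimal sketch of enabling BFD for the CR-LSPs of a tunnel follows; the tunnel number and the intervals are examples, and the exact command forms (bfd, mpls te bfd enable, mpls te bfd min-tx-interval) should be verified against the BFD command reference for your version:

```text
bfd                                    # enable BFD globally
quit
interface Tunnel1
 mpls te bfd enable                    # dynamic BFD for this tunnel's CR-LSPs
 mpls te bfd min-tx-interval 100 min-rx-interval 100
 mpls te commit
```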
Figure 5-30 BFD for CR-LSP before and after a link fault occurs
(Figure, not reproduced: a BFD session monitors the primary CR-LSP along LSRA-LSRB-LSRC while the backup CR-LSP traverses LSRD; after the link fault, traffic switches to the backup CR-LSP.)
Differences
Table 5-21 lists differences among BFD for RSVP, BFD for CR-LSP, and BFD for TE tunnel.
5.2.9.9 RSVP GR
RSVP Graceful Restart (GR) ensures uninterrupted traffic transmission on the forwarding
plane when traffic is switched to the control plane upon a node failure.
Background
GR is typically applied to provider edge (PE) routers, especially when users connect to the
backbone network through a single PE router. If an MPLS TE tunnel is deployed on such a PE
router for traffic engineering or as a VPN tunnel on the public network, traffic on the tunnel is
interrupted when the PE router fails or undergoes an active/standby switchover for
maintenance (for example, a software upgrade). As shown in Figure 5-31, RSVP GR can be
deployed on PE3 to ensure uninterrupted service forwarding when PE3 fails.
(Figure 5-31, not reproduced: CE1 through CE4 in VPNA and VPNB connect to the backbone through PE1 through PE4.)
Concepts
RSVP GR is a fast state recovery mechanism for RSVP-TE. As one of the high-reliability
technologies, RSVP GR is designed based on non-stop forwarding (NSF).
The GR process involves GR restarter and GR helper routers. The GR restarter restarts the
protocol and the GR helper assists in the process.
Implementation
RSVP GR detects the GR status of a neighbor using RSVP Hello extensions.
In Figure 5-32, after the GR restarter triggers a GR, it stops sending Hello messages to its
neighbors. If a GR helper does not receive Hello messages for three consecutive intervals, it
considers that the neighbor is performing a GR and retains all forwarding information.
Meanwhile, the interface cards on the GR restarter continue to forward services while waiting
for the GR restarter to complete the process.
After the GR restarter restarts, it receives Hello messages from neighbors and sends Hello
messages in response. Upstream and downstream nodes process Hello messages in different
ways:
l When the upstream GR helper receives a Hello message, it sends a GR Path message
downstream to the GR restarter.
l When the downstream GR helper receives a Hello message, it sends a Recovery Path
message upstream to the GR restarter.
(Figure 5-32, not reproduced: the GR restarter exchanges Hello messages with both helpers; the upstream GR helper sends a GR Path message downstream, and the downstream GR helper sends a Recovery Path message upstream.)
When receiving the GR Path message and the Recovery Path message, the GR restarter
reestablishes the path state block (PSB) and reservation state block (RSB) of the CR-LSP
based on the two messages. Information about the CR-LSP on the local control plane is
restored.
If the downstream GR helper cannot send Recovery Path messages, the GR restarter
reestablishes the local PSB and RSB using only GR Path messages.
Usage Scenario
RSVP GR can be deployed to improve device-level reliability for nodes when an MPLS TE
tunnel is set up using RSVP TE.
Benefits
When an active/standby switchover occurs on the control plane, RSVP GR ensures
uninterrupted data transmission, improving device-level reliability.
5.3 Applications
This section describes applicable scenarios of MPLS TE.
Service Overview
Carriers are converging their service bearer networks. IP/MPLS technology is essential on
these converged networks because the technology allows voice, video, leased line, and data
services to be transmitted on an IP/MPLS backbone network. Depending on subscriber
requirements, services on a metropolitan area network (MAN) are classified as follows:
l For individual subscribers: high-speed Internet (HSI), video on demand (VoD), and voice
over IP (VoIP)
l For business and enterprise subscribers: L3VPN services (business VPN) and L2VPN
services (data, video, and voice services)
l VoIP: bandwidth guarantee required; QoS guarantee high.
l Business VPN: bandwidth guarantee required; QoS guarantee medium.
Networking Description
Currently, an IP MAN consists of a MAN backbone and a MAN access network, which
deliver services to users. Figure 5-33 and Figure 5-34 show end-to-end service models for
individual and enterprise subscribers.
(Figures 5-33 and 5-34, not reproduced: individual services (HSI, VoIP, VoD) traverse DSLAM, UPE, and PE-AGG devices across the IP/MPLS MAN backbone toward the BRAS, SR, and SoftX; enterprise L2VPN or L3VPN services are carried over MPLS TE with VLL/VPLS, protected by hot standby and BFD for CR-LSP.)
Feature Deployment
Enterprise or individual services are core services that have bandwidth, QoS, and reliability
requirements. MPLS TE tunnels are recommended as VPN tunnels on the public network to
meet service requirements. For detailed deployment, see Table 5-23.
Explicit paths are configured to establish primary and bypass CR-LSPs. The two paths do not
overlap in important areas.
5.4 Specification
This section provides MPLS TE specifications supported by the device.
Context
Table 5-24 lists the MPLS TE specifications.
in a heavy workload. TE Auto FRR can be used to automatically set up a bypass tunnel that
meets specified constraints, reducing the workload.
l Tunnel protection group
The tunnel protection group provides end-to-end protection for MPLS TE tunnels. If a
working tunnel in a protection group fails, traffic is switched to a protection tunnel.
l BFD for RSVP
BFD monitors RSVP. BFD can detect faults in links between RSVP neighboring nodes in
milliseconds. BFD for RSVP applies to a TE FRR network on which Layer 2 devices
exist between the PLR and its RSVP neighboring nodes over the primary CR-LSP.
l BFD for CR-LSP
BFD monitors CR-LSPs. After BFD detects a fault in a CR-LSP, the BFD module
immediately instructs the forwarding plane to trigger a rapid traffic switchover. BFD for
CR-LSP is used together with a hot-standby CR-LSP or a tunnel protection group.
l BFD for TE tunnel
BFD can monitor MPLS TE tunnels that are used as public network tunnels to transmit
VPN traffic. BFD monitors a whole TE tunnel. If BFD detects a fault in a tunnel that
transmits private network traffic, the BFD module instructs the VPN or virtual leased
line (VLL) FRR module to perform a traffic switchover.
l RSVP GR
l Tunnel-specific attributes in a tunnel protection group are independent from each other.
For example, a protection tunnel with the bandwidth 50 Mbit/s can protect a working
tunnel with the bandwidth 100 Mbit/s.
l TE FRR can be enabled to protect the working tunnel.
NOTE
A tunnel protection group and TE FRR cannot be configured simultaneously on the ingress node
of a primary tunnel.
l A protection tunnel cannot be protected by other tunnels or be enabled with TE FRR.
When you configure BFD for MPLS TE on the device, note the following:
l BFD can detect faults in static and dynamic CR-LSPs.
l BFD for LSP can function properly even though the forward path is an LSP and the
backward path is an IP link. However, the forward and backward paths must be
established over the same link; otherwise, BFD cannot identify the faulty path when a
fault occurs. Before deploying BFD, ensure that the forward and backward paths are
over the same link.
l MPLS TE: disabled by default.
l RSVP-TE: disabled by default.
l Affinity property of tunnels: the affinity property and mask values are both 0x0 by default.
Pre-configuration Tasks
Before configuring a static MPLS TE tunnel, complete the following tasks:
l Configuring an LSR ID on each LSR
l Enabling basic MPLS functions on each LSR globally and on each interface
NOTE
After a static CR-LSP is bound to a tunnel interface, the static CR-LSP takes effect without an IP route
configured.
Configuration Process
Configuring link bandwidth is optional; all other configurations are mandatory.
Context
Perform the following configurations on each node of the MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
----End
Context
Before setting up an MPLS TE Tunnel, you must create a tunnel interface and configure other
tunnel attributes on the tunnel interface. An MPLS TE tunnel interface is responsible for
establishing an MPLS TE tunnel and managing packet forwarding on the tunnel.
NOTE
Because MPLS TE tunnels forward MPLS packets rather than IP packets, IP forwarding-related
commands configured on the tunnel interface, such as the ip verify source-address and urpf
commands, do not take effect.
Procedure
Step 1 Run:
system-view
An MPLS TE tunnel is unidirectional and does not need a peer address. Therefore, there is no need to configure a
separate IP address for the tunnel interface. Generally, a loopback interface is created on the
ingress node and a 32-bit address that is the same as the LSR ID is assigned to the loopback
interface. Then the tunnel interface borrows the IP address of the loopback interface.
Step 4 Run:
tunnel-protocol mpls te
Step 5 Run:
destination dest-ip-address
The destination address of the tunnel is configured, which is usually the LSR ID of the egress
node.
Different types of tunnels need different destination addresses. When the tunnel protocol is
changed to MPLS TE from other protocols, the configured destination address is deleted
automatically and you need to configure an address again.
Step 6 Run:
mpls te tunnel-id tunnel-id
Step 7 Run:
mpls te signal-protocol cr-static
By default, the tunnel interface name such as Tunnel1 is used as the name of the TE tunnel.
Step 9 Run:
mpls te commit
NOTE
If MPLS TE parameters on a tunnel interface are modified, run the mpls te commit command to
activate them.
----End
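Assembled from the steps above, a minimal ingress-node tunnel interface configuration might look as follows. The interface numbers, LSR ID, destination address, and tunnel ID are examples only, and command forms should be checked against the command reference:

```text
system-view
interface LoopBack 0
 ip address 1.1.1.1 32                        # same 32-bit address as the LSR ID
quit
interface Tunnel 1
 ip address unnumbered interface LoopBack 0   # borrow the loopback address
 tunnel-protocol mpls te
 destination 3.3.3.3                          # LSR ID of the egress node
 mpls te tunnel-id 100
 mpls te signal-protocol cr-static
 mpls te commit
```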
Context
When a non-Huawei device as the ingress node of an MPLS TE tunnel initiates a request for
setting up a CR-LSP with bandwidth constraints, configure link bandwidth on the connected
Huawei device for negotiation so that the CR-LSP can be set up and network resources are
used efficiently.
NOTE
The configured bandwidth takes effect only during tunnel establishment and protocol negotiation, and
does not limit the bandwidth for traffic forwarding.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 4 Run:
mpls te bandwidth max-reservable-bandwidth bw-value
By default, the maximum reservable bandwidth of a link is 0 bit/s, but the bandwidth allocated
to a static CR-LSP established over a link is always higher than 0 bit/s. Therefore, if the
maximum reservable bandwidth of the link is not configured, the static CR-LSP cannot be set
up due to insufficient bandwidth.
Step 5 Run:
mpls te bandwidth { bc0 bc0-bw-value | bc1 bc1-bw-value } *
NOTE
l The maximum reservable bandwidth of a link cannot be greater than the actual bandwidth of the
link. A maximum of 80% of the actual bandwidth of the link is recommended for the maximum
reservable bandwidth of the link.
l Neither the BC0 bandwidth nor the BC1 bandwidth can be greater than the maximum reservable
bandwidth of the link.
----End
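A brief sketch of the two bandwidth commands above follows. The interface and values are examples, and the assumption that the values are expressed in kbit/s should be verified for your version:

```text
system-view
interface GigabitEthernet 0/0/1
 mpls te bandwidth max-reservable-bandwidth 100000   # example: 100000 kbit/s
 mpls te bandwidth bc0 100000                        # BC0 must not exceed the maximum
```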
Context
When configuring a static MPLS TE tunnel, configure static CR-LSPs on the ingress, transit,
and egress nodes. If the tunnel has no intermediate node, no static CR-LSP needs to be
configured on a transit node.
NOTE
After static CR-LSPs are configured, you can execute commands again to modify CR-LSP parameters.
Procedure
l Configure the ingress node.
Perform the following operations on the ingress node of a static MPLS TE tunnel.
a. Run:
system-view
tunnel interface-number specifies the MPLS TE tunnel interface that uses this static
CR-LSP. By default, the Bandwidth Constraints value is ct0, and the value of
bandwidth is 0. The bandwidth used by the tunnel cannot be higher than the
maximum reservable bandwidth of the link.
tunnel-name must be the same as the tunnel name created by using the interface
tunnel interface-number command. tunnel-name is a case-sensitive character string
in which spaces are not supported.
The next hop or outbound interface is determined by the route from the ingress to
the egress. For the difference between the next hop and outbound interface, refer to
"Static Route Configuration" in the S2750&S5700&S6700 Series Ethernet
Switches Configuration Guide - IP Routing.
NOTE
The configured bandwidth takes effect only during tunnel establishment and protocol
negotiation, and does not limit the bandwidth for traffic forwarding.
l Configure a transit node.
Perform the following operations on the transit node of a static MPLS TE tunnel.
a. Run:
system-view
lsp-name cannot be specified as the same as the name of an existing tunnel on the
node. The name of the MPLS TE tunnel interface associated with the static CR-LSP
can be used, such as Tunnel1.
NOTE
The configured bandwidth takes effect only during tunnel establishment and protocol
negotiation, and does not limit the bandwidth for traffic forwarding.
l Configure the egress node.
Perform the following operations on the egress node of a static MPLS TE tunnel.
a. Run:
system-view
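The static CR-LSP commands themselves are truncated in the steps above. Assuming the standard static-cr-lsp syntax described by the surrounding parameter text, a hypothetical end-to-end example might look as follows; all labels, addresses, and interface names are invented, and the exact syntax must be verified against the command reference:

```text
# Ingress: bind the static CR-LSP to tunnel interface Tunnel 1.
static-cr-lsp ingress tunnel-interface Tunnel 1 destination 3.3.3.3 nexthop 10.1.1.2 out-label 20
# Transit: swap incoming label 20 for outgoing label 40.
static-cr-lsp transit Tunnel1 incoming-interface GigabitEthernet 0/0/1 in-label 20 nexthop 10.2.1.2 out-label 40
# Egress: terminate the CR-LSP on incoming label 40.
static-cr-lsp egress Tunnel1 incoming-interface GigabitEthernet 0/0/2 in-label 40
```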
Prerequisites
The configurations of the static MPLS TE tunnel are complete.
Procedure
l Run the display mpls static-cr-lsp [ lsp-name ] [ { include | exclude } ip-address mask-
length ] [ verbose ] command to check information about the static CR-LSP.
l Run the display mpls te tunnel [ destination ip-address ] [ lsp-id ingress-lsr-id session-
id local-lsp-id ] [ lsr-role { all | egress | ingress | remote | transit } ] [ name tunnel-
name ] [ { incoming-interface | interface | outgoing-interface } interface-type
interface-number ] [ verbose ] command to check tunnel information.
l Run the display mpls te tunnel statistics or display mpls lsp statistics command to
check the tunnel statistics.
l Run the display mpls te tunnel-interface [ tunnel interface-number ] command to
check information about the tunnel interface on the ingress node.
----End
Pre-configuration Tasks
Before configuring a dynamic MPLS TE tunnel, complete the following tasks:
Configuration Process
Configuring link bandwidth, referencing a CR-LSP attribute template to set up a CR-LSP, and
configuring tunnel constraints are optional; all other configurations are mandatory.
Context
To create a dynamic MPLS TE tunnel, first enable MPLS TE, enable RSVP-TE globally,
enable RSVP-TE on an interface, and perform other configurations, such as setting the link
bandwidth attributes and enabling CSPF.
Perform the following configurations on each node of the MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
Step 8 Run:
mpls
Step 9 Run:
mpls te
Step 10 Run:
mpls rsvp-te
----End
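Combining the steps above, the basic enablement on one node might be sketched as follows. The LSR ID and interface are examples, and mpls te cspf is shown on the assumption that CSPF path calculation is wanted on the ingress node:

```text
system-view
mpls lsr-id 1.1.1.1
mpls
 mpls te
 mpls rsvp-te
 mpls te cspf            # assumption: enable CSPF on the ingress
quit
interface GigabitEthernet 0/0/1
 mpls
 mpls te
 mpls rsvp-te
```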
Context
A tunnel interface must be created on the ingress so that a tunnel can be established and
forward data packets.
NOTE
Because MPLS TE tunnels forward MPLS packets, not IP packets, IP forwarding-related commands run
on the tunnel interface are invalid.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface tunnel interface-number
NOTE
If the shutdown command is run on the tunnel interface, all tunnels established on the tunnel interface
will be deleted.
Step 3 Run either of the following commands to assign an IP address to the tunnel interface:
An MPLS TE tunnel can be established even if the tunnel interface is assigned no IP address.
The tunnel interface must obtain an IP address before forwarding traffic. An MPLS TE tunnel
is unidirectional and does not need a peer address. Therefore, there is no need to configure a
separate IP address for the tunnel interface. Generally, a loopback interface is created on the
ingress node and a 32-bit address that is the same as the LSR ID is assigned to the loopback
interface. Then the tunnel interface borrows the IP address of the loopback interface.
Step 4 Run:
tunnel-protocol mpls te
Step 5 Run:
destination dest-ip-address
A tunnel destination address is configured, which is usually the LSR ID of the egress.
Various types of tunnels require specific destination addresses. If a tunnel protocol is changed
from another protocol to MPLS TE, a configured destination address is deleted automatically
and a new destination address needs to be configured.
Step 6 Run:
mpls te tunnel-id tunnel-id
A tunnel ID is set.
Step 7 Run:
mpls te signal-protocol rsvp-te
By default, the tunnel interface name such as Tunnel1 is used as the name of the TE tunnel.
Do not perform the constraint shortest path first (CSPF) calculation when an MPLS TE tunnel
is being set up.
Step 10 Run:
mpls te commit
NOTE
The mpls te commit command must be run to make configurations take effect each time MPLS TE
parameters are changed on a tunnel interface.
----End
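Assembled from the steps above, a minimal RSVP-TE tunnel interface configuration might look as follows; the interface numbers, destination address, and tunnel ID are examples only:

```text
system-view
interface Tunnel 1
 ip address unnumbered interface LoopBack 0   # borrow the loopback address
 tunnel-protocol mpls te
 destination 3.3.3.3                          # LSR ID of the egress
 mpls te tunnel-id 100
 mpls te signal-protocol rsvp-te
 mpls te commit
```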
Context
When a non-Huawei device as the ingress node of an MPLS TE tunnel initiates a request for
setting up a CR-LSP with bandwidth constraints, configure link bandwidth on the connected
Huawei device for negotiation so that the CR-LSP can be set up and network resources are
used efficiently.
NOTE
The configured bandwidth takes effect only during tunnel establishment and protocol negotiation, and
does not limit the bandwidth for traffic forwarding.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface interface-type interface-number
Step 4 Run:
mpls te bandwidth max-reservable-bandwidth bw-value
By default, the maximum reservable bandwidth of a link is 0 bit/s, but the bandwidth allocated
to a CR-LSP established over a link is always higher than 0 bit/s. Therefore, if the maximum
reservable bandwidth of the link is not configured, the CR-LSP cannot be set up due to
insufficient bandwidth.
Step 5 Run:
mpls te bandwidth { bc0 bc0-bw-value | bc1 bc1-bw-value } *
NOTE
l The maximum reservable bandwidth of a link cannot be greater than the actual bandwidth of the
link. A maximum of 80% of the actual bandwidth of the link is recommended for the maximum
reservable bandwidth of the link.
l Neither the BC0 bandwidth nor the BC1 bandwidth can be greater than the maximum reservable
bandwidth of the link.
----End
Context
Nodes on an MPLS network exchange TE link attributes such as bandwidth and link colors to
generate TEDBs. TEDB information is used by CSPF to calculate paths for MPLS TE tunnels.
Currently, the device can use two methods, OSPF TE and IS-IS TE, to advertise TE
information and generate TEDBs.
l OSPF TE
OSPF TE is an OSPF extension used on an MPLS TE network. LSRs in the MPLS area
exchange Opaque Type 10 LSAs that carry TE link information to generate TEDBs for
CSPF calculation.
OSPF areas do not support TE by default. The OSPF Opaque capability must be enabled
to support OSPF TE, and a node can generate Opaque Type 10 LSAs only if at least one
OSPF neighbor is in the Full state.
NOTE
Procedure
l Configure OSPF TE.
a. Run:
system-view
IS-IS TE is enabled.
By default, TE is not enabled for IS-IS processes.
If no IS-IS level is specified, the node is a Level-1-2 device that can generate two
TEDBs for communicating with Level-1 and Level-2 devices.
----End
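As a sketch, the two advertisement methods might be enabled as follows. The process IDs, area, and IS-IS level are examples, and the cost-style wide requirement for IS-IS TE is an assumption to verify against the command reference:

```text
system-view
# Method 1: OSPF TE (requires the Opaque capability).
ospf 1
 opaque-capability enable      # allow Opaque Type 10 LSAs
 area 0
  mpls-te enable
 quit
quit
# Method 2: IS-IS TE.
isis 1
 cost-style wide               # assumption: IS-IS TE needs wide metrics
 traffic-eng level-2
```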
Context
You can create a CR-LSP by using the following methods:
l Creating a CR-LSP without using a CR-LSP attribute template
l Creating a CR-LSP by using a CR-LSP attribute template
It is recommended to use a CR-LSP attribute template to set up a CR-LSP because this
method has the following advantages:
A CR-LSP attribute template can greatly simplify the configurations of CR-LSPs.
A maximum of three CR-LSP attribute templates can be created for a hot-standby
CR-LSP or an ordinary backup CR-LSP. You can set up a hot-standby CR-LSP or
an ordinary backup CR-LSP with different path options. (Among the three attribute
templates, the template with the smallest sequence number is used first. If the
setup fails, the template with the next greater sequence number is used.)
If configurations of a CR-LSP attribute template are modified, configurations of the
CR-LSPs established by using the CR-LSP attribute template are automatically
updated, which makes the configurations of CR-LSPs more flexible.
NOTE
The preceding two methods can be used together. If the TE attribute configured in the tunnel interface
view and the TE attribute configured through a CR-LSP attribute template coexist, the former takes
precedence over the latter.
Procedure
l Configure a CR-LSP attribute template.
a. Run:
system-view
A CR-LSP attribute template is created and the LSP attribute view is displayed.
NOTE
A CR-LSP attribute template can be deleted only when it is not used by any tunnel interface.
c. (Optional) Run:
bandwidth { ct0 ct0-bandwidth | ct1 ct1-bandwidth }
The setup priority and hold priority are set for the CR-LSP attribute template.
By default, both the setup priority and the hold priority are 7.
g. (Optional) Run:
hop-limit hop-limit
NOTE
Before enabling or disabling FRR for the CR-LSP attribute template, note the following:
l After FRR is enabled, the route recording function is automatically enabled for the CR-
LSP.
l After FRR is disabled, attributes of the bypass tunnel are automatically deleted.
l The undo mpls te record-route command can take effect only when FRR is disabled.
i. (Optional) Run:
record-route [ label ]
The route recording function is enabled for the CR-LSP attribute template.
By default, the route recording function is disabled.
j. (Optional) Run:
bypass-attributes { bandwidth bandwidth | priority setup_priority_value
[ hold_priority_value ] }*
The bypass tunnel attributes are configured for the CR-LSP attribute template.
By default, the bypass tunnel attributes are not configured.
k. Run:
commit
The primary CR-LSP is set up through the specified CR-LSP attribute template.
If dynamic is used, it indicates that when a CR-LSP attribute template is used to set
up a primary CR-LSP, all attributes in the template use the default values.
d. (Optional) Run:
mpls te hotstandby-lsp-constraint number { dynamic | lsp-attribute lsp-
attribute-name }
The Wait to Restore (WTR) time is set for the traffic to switch back from the hot-
standby CR-LSP to the primary CR-LSP.
By default, the WTR time for the traffic to switch back from the hot-standby CR-
LSP to the primary CR-LSP is 10 seconds.
NOTE
The ordinary backup CR-LSP is set up by using the specified CR-LSP attribute
template.
A maximum of three CR-LSP attribute templates can be used to set up an ordinary
backup CR-LSP. The ordinary backup CR-LSP must be consistent with the primary
CR-LSP in the setup priority, hold priority, and bandwidth type. To set up an
ordinary backup CR-LSP, the CR-LSP attribute templates are attempted one by one
in ascending order of their sequence numbers until the ordinary backup CR-LSP is
set up.
If dynamic is used, it indicates that the ordinary backup CR-LSP is assigned the
same bandwidth and priority as the primary CR-LSP.
g. (Optional) Run:
mpls te backup ordinary-lsp-constraint lock
By default, the attribute template of the ordinary backup CR-LSP is not locked.
NOTE
Before running this command, you must run the mpls te ordinary-lsp-constraint command
to reference the CR-LSP attribute template to set up an ordinary backup CR-LSP.
h. Run:
mpls te commit
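As a sketch only (the template name, tunnel number, and parameter values below are hypothetical examples; verify the exact syntax against the command reference for your software version), a CR-LSP attribute template might be created and then referenced on a tunnel interface as follows:

```
system-view
 lsp-attribute lsp-attr-1                  # create the template and enter the LSP attribute view
  bandwidth ct0 10000                      # CT0 bandwidth, in kbit/s
  hop-limit 10
  record-route label                       # enable route and label recording
  commit
  quit
 interface tunnel 0/0/1
  mpls te hotstandby-lsp-constraint 1 lsp-attribute lsp-attr-1
  mpls te commit
```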
Context
Constraints such as bandwidth and explicit path attributes can be configured on the ingress to
accurately and flexibly establish an RSVP-TE tunnel.
Perform the following configurations on the ingress node of an MPLS TE tunnel.
1. Configuring an MPLS TE Explicit Path
You need to configure an explicit path before you can configure constraints on the
explicit path.
An explicit path refers to a vector path on which a series of nodes are arranged in
configuration sequence. The IP address of an interface on the egress is usually used as
the destination address of the explicit path. Links or nodes can be specified for an
explicit path so that a CR-LSP can be established over the specified path, facilitating
resource allocation and efficiently controlling CR-LSP establishment.
Two adjacent nodes are connected in either of the following modes on an explicit path:
Strict: Two consecutive hops must be directly connected. This mode strictly
controls the path through which the LSP passes.
Loose: Other nodes may exist between a hop and its next hop.
The strict and loose modes are used either separately or together.
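The strict and loose modes described above might be combined in one explicit path as in the following sketch (the path name and addresses are hypothetical examples):

```
system-view
 explicit-path path-pri                    # create the explicit path and enter its view
  next hop 10.1.1.2 include strict         # must be the direct next hop
  next hop 10.3.3.3 include loose          # other nodes may exist before this hop
  quit
```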
2. Configuring Tunnel Constraints
After constraints are configured for tunnel establishment, a CR-LSP is established over a
path calculated by CSPF.
Procedure
l Configure an MPLS TE explicit path.
a. Run:
system-view
By default, the include strict parameters are configured, meaning that a hop and its
next hop must be directly connected. An explicit path can be configured to pass
through a specified node or not to pass through a specified node.
d. You can run the following commands to add, modify, or delete nodes on the explicit
path.
n Run:
list hop [ ip-address ]
NOTE
The configured bandwidth takes effect only during tunnel establishment and protocol
negotiation; it does not limit the bandwidth for traffic forwarding.
d. Run:
mpls te path explicit-path path-name
Context
To calculate a tunnel path meeting specified constraints, CSPF should be configured on the
ingress.
CSPF extends the shortest path first (SPF) algorithm and is able to calculate the shortest path
meeting MPLS TE requirements. CSPF calculates paths using the following information:
l Link state information sent by IGP-TE and saved in TEDBs
l Network resource attributes, such as the maximum available bandwidth, maximum
reservable bandwidth, and affinity property, sent by IGP-TE and saved in TEDBs
l Configured constraints such as explicit paths
NOTE
Procedure
Step 1 Run:
system-view
If a single IGP is already configured on the backbone network to advertise OSPF or
IS-IS TE information, skip this step.
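As a minimal sketch, CSPF is enabled on the ingress in the MPLS view (assuming MPLS and MPLS TE are already enabled globally):

```
system-view
 mpls
  mpls te cspf        # enable CSPF path calculation on the ingress
  quit
```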
----End
Prerequisites
The configurations of a dynamic MPLS TE tunnel are complete.
Procedure
l Run the display mpls te link-administration bandwidth-allocation [ interface
interface-type interface-number ] command to check information about the allocated link
bandwidth.
l Run the display ospf [ process-id ] mpls-te [ area area-id ] [ self-originated ] command
to check information about OSPF TE.
l Run one of the following commands to check IS-IS TE information:
display isis traffic-eng advertisements
display isis traffic-eng link
display isis traffic-eng network
display isis traffic-eng statistics
display isis traffic-eng sub-tlvs
l Run the display explicit-path [ [ name ] path-name ] [ tunnel-interface | lsp-attribute |
verbose ] command to check configured explicit paths.
l Run the display mpls te cspf destination ip-address [ affinity properties [ mask mask-
value ] | bandwidth { ct0 ct0-bandwidth | ct1 ct1-bandwidth } * | explicit-path path-
name | hop-limit hop-limit-number | metric-type { igp | te } | priority setup-priority |
srlg-strict exclude-path-name | tie-breaking { random | most-fill | least-fill } ] * [ hot-
standby [ explicit-path path-name | overlap-path | affinity properties [ mask mask-
value ] | hop-limit hop-limit-number | srlg { preferred | strict } ] * ] command to check
information about a path that is calculated using CSPF based on specified conditions.
l Run the display mpls te cspf tedb { all | area { area-id | area-id-ip } | interface ip-
address | network-lsa | node [ router-id ] | srlg srlg-number | overload-node }
command to check information about TEDBs that can meet specified conditions and be
used by CSPF to calculate paths.
l Run the display mpls rsvp-te command to check RSVP information.
l Run the display mpls rsvp-te established [ interface interface-type interface-number
peer-ip-address ] command to check information about the established RSVP-TE CR-
LSPs.
l Run the display mpls rsvp-te peer [ interface interface-type interface-number ]
command to check RSVP neighbor parameters.
l Run the display mpls rsvp-te reservation [ interface interface-type interface-number
peer-ip-address ] command to check information about RSVP resource reservation.
l Run the display mpls rsvp-te request [ interface interface-type interface-number peer-
ip-address ] command to check information about the RSVP-TE request messages on
interfaces.
l Run the display mpls rsvp-te sender [ interface interface-type interface-number peer-
ip-address ] command to check information about RSVP senders.
l Run the display mpls rsvp-te statistics { global | interface [ interface-type interface-
number ] } command to check RSVP-TE statistics.
l Run the display mpls te link-administration admission-control [ interface interface-
type interface-number | stale-interface interface-index ] command to check the tunnels
set up on the local node.
l Run the display mpls te tunnel [ destination ip-address ] [ lsp-id ingress-lsr-id session-
id local-lsp-id ] [ lsr-role { all | egress | ingress | remote | transit } ] [ name tunnel-
name ] [ { incoming-interface | interface | outgoing-interface } interface-type
interface-number ] [ verbose ] command to check tunnel information.
l Run the display mpls te tunnel statistics or display mpls lsp statistics command to
check tunnel statistics.
l Run the display lsp-attribute [ name lsp-attribute-name ] [ tunnel-interface | verbose ]
command to check the configurations of the CR-LSP attribute template and the tunnels
using it.
l Run the display mpls te tunnel-interface lsp-constraint [ tunnel interface-number ]
command to view information about the CR-LSP attribute template on the TE tunnel
interface.
l Run the display mpls te tunnel-interface [ tunnel interface-number | auto-bypass-
tunnel [ tunnel-name ] ] command to check information about the MPLS TE tunnel.
l Run the display mpls te tunnel c-hop [ tunnel-name ] [ lsp-id ingress-lsr-id session-id
lsp-id ] command to check path computation results of tunnels.
l Run the display mpls te session-entry [ ingress-lsr-id tunnel-id egress-lsr-id ] command
to check detailed information about the LSP session entry.
----End
Pre-configuration Tasks
Before importing traffic to the MPLS TE tunnel, complete one of the following tasks:
l 5.8.1 Configuring a Static MPLS TE Tunnel
l 5.8.2 Configuring a Dynamic MPLS TE Tunnel
Configuration Procedure
To direct traffic to the MPLS TE tunnel, perform one of the following operations according to
the network planning. You are advised to use the auto route mechanism.
Context
Using static routes is the simplest method for importing traffic to an MPLS TE tunnel.
Procedure
Static routes in an MPLS TE tunnel are similar to common static routes. You only need to
configure a static route with a TE tunnel interface as the outbound interface. For detailed
instructions, see Configuring IPv4 Static Routes in the S2750&S5700&S6700 Series Ethernet
Switches Configuration Guide - IP Unicast Routing.
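For example, the following sketch (the prefix and tunnel number are hypothetical) directs traffic destined for 10.2.2.0/24 into the TE tunnel:

```
system-view
 ip route-static 10.2.2.0 24 tunnel 0/0/1   # static route with the TE tunnel interface as the outbound interface
```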
Context
In general, VPN traffic is forwarded through an ordinary LSP tunnel rather than an MPLS
TE tunnel. To import VPN traffic into an MPLS TE tunnel, you need to configure a tunnel
policy.
Procedure
You can configure either of the following types of tunnel policies according to service
requirements:
l Tunnel type prioritizing policy: Such a policy specifies the sequence in which different
types of tunnels are selected by the VPN. For example, you can specify the VPN to
select the TE tunnel first.
l Tunnel binding policy: This policy binds a TE tunnel to a specified VPN by binding a
specified destination address to the TE tunnel to provide QoS guarantee.
For detailed instructions, see Configuring and Applying a Tunnel Policy in "BGP MPLS IP
VPN Configuration" of the S2750&S5700&S6700 Series Ethernet Switches Configuration
Guide - VPN.
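A tunnel type prioritizing policy might look like the following sketch (the policy and VPN instance names are hypothetical; check the exact select-seq keywords against the command reference for your software version):

```
system-view
 tunnel-policy te-policy
  tunnel select-seq te load-balance-number 1   # prefer TE tunnels for VPN traffic
  quit
 ip vpn-instance vpna
  tnl-policy te-policy                         # apply the tunnel policy to the VPN instance
```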
Context
After you configure auto routes, TE tunnels act as logical links to participate in IGP route
calculation and tunnel interfaces are used as the outbound interfaces of packets. Devices on
network nodes determine whether to advertise LSP information to neighboring nodes to
instruct packet forwarding. Two modes are available for auto routes:
l Configuring IGP shortcut: A device uses a TE tunnel for local route calculation and
does not advertise the TE tunnel to its peers as a route. Therefore, the peers of this device
cannot use the TE tunnel for route calculation.
l Configuring forwarding adjacency: A device uses a TE tunnel for local route
calculation and advertises the TE tunnel to its peers as a route. Therefore, the peers of
this device can use the TE tunnel for route calculation.
NOTE
Procedure
l Configuring IGP Shortcut
a. Run:
system-view
a. Run:
system-view
NOTE
The IGP metric value must be set properly to ensure that LSP information is advertised and
used correctly. For example, the metric of a TE tunnel must be less than that of IGP routes to
ensure that the TE tunnel is used as a route link.
e. Run:
mpls te commit
----End
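An IGP shortcut configuration might be sketched as follows (the tunnel number and metric value are hypothetical; for forwarding adjacency, the tunnel would instead be advertised to peers as a route):

```
system-view
 interface tunnel 0/0/1
  mpls te igp shortcut isis           # use the tunnel in local route calculation only
  mpls te igp metric relative -10     # keep the tunnel metric below that of IGP routes
  mpls te commit
```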
Prerequisites
The configuration for importing traffic to an MPLS TE tunnel is complete.
Procedure
l Run the display current-configuration command to view the configuration for
importing traffic to an MPLS TE tunnel.
l Run the display ip routing-table command to view the routes with an MPLS TE tunnel
interface as the outbound interface.
l Run the display ospf [ process-id ] traffic-adjustment command to check tunnel
information about OSPF processes related to traffic adjustment (IGP shortcut and
forwarding adjacency).
----End
Pre-configuration Tasks
Before adjusting RSVP-TE signaling parameters, complete the following task:
l 5.8.2 Configuring a Dynamic MPLS TE Tunnel
Configuration Process
The following configurations are optional and can be performed in any sequence.
Context
If multiple CR-LSPs pass through the same node, the ingress nodes can be configured with an
RSVP resource reservation style to allow the CR-LSPs to share reserved resources or use
separate reserved resources on the overlapping node.
A reservation style is used by an RSVP node to reserve resources after receiving resource
reservation requests from upstream nodes. The device supports the following reservation
styles:
l Fixed filter (FF): creates an exclusive reservation for each sender. A sender does not
share its resource reservation with other senders, and each CR-LSP on a link has a
separate resource reservation.
l Shared explicit (SE): creates a single reservation for a series of selected upstream senders. CR-LSPs on a
link share the same resource reservation.
The SE style is used for tunnels established using the Make-Before-Break mechanism,
whereas the FF style is seldom used.
Perform the following configurations on the ingress node of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface tunnel tunnel-number
----End
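On the ingress, the reservation style might be set as in this sketch (the tunnel number is hypothetical; as noted above, SE is the style used with the Make-Before-Break mechanism):

```
system-view
 interface tunnel 0/0/1
  mpls te resv-style se    # shared explicit reservation; use ff for a separate reservation per sender
  mpls te commit
```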
Context
Receiving a ResvConf message does not mean that the resource reservation has succeeded.
It means only that resources have been reserved on the farthest upstream node reached by
the Resv message. These resources, however, may be preempted by other applications
later. You can enable the reservation confirmation mechanism to prevent this problem.
Perform the following configurations on the egress node of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
----End
Context
If an RSVP node does not receive any Refresh message within a specified period, it deletes
the path or reservation state. You can change the timeout interval by using RSVP timers to
set the interval for sending Path/Resv messages and the retry count. The default interval
and retry count are recommended. The timeout interval is calculated using the following
formula: Timeout interval = (keep-multiplier-number + 0.5) x 1.5 x refresh-interval.
In the formula, keep-multiplier-number specifies the retry count allowed for RSVP Refresh
messages; refresh-interval specifies the interval for sending RSVP Refresh messages.
Perform the following configurations on each node of the MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
----End
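Applying the formula above, the following sketch (the values are examples) sets a 30-second refresh interval and a retry count of 3, giving a timeout interval of (3 + 0.5) x 1.5 x 30 = 157.5 seconds:

```
system-view
 mpls
  mpls rsvp-te timer refresh 30    # refresh-interval, in seconds
  mpls rsvp-te keep-multiplier 3   # retry count for RSVP Refresh messages
```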
Context
Enabling Srefresh in the MPLS view on two neighboring nodes reduces overhead and
improves network performance. In the MPLS view, Srefresh is enabled for the entire
device. After Srefresh is enabled, retransmission of Srefresh messages is
automatically enabled on the interface or the device.
NOTE
The Srefresh mechanism in MPLS view is applied to the TE FRR networking. Srefresh is enabled globally on
the Point of Local Repair (PLR) and Merge Point (MP) over an FRR bypass tunnel. This allows efficient use
of network resources and improves Srefresh reliability.
Assume that a node initializes the retransmission interval to Rf seconds. If it receives no
ACK message within Rf seconds, the node retransmits the RSVP message after (1 + Delta) x
Rf seconds. The value of Delta depends on the link rate. The node keeps retransmitting the
message until it receives an ACK message or the number of retransmissions reaches the
threshold (that is, the retransmission increment value).
Procedure
l Perform the following steps in the MPLS view.
a. Run:
system-view
Srefresh is enabled.
----End
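Enabling Srefresh globally might be sketched as follows (a minimal example; as described above, Srefresh can also be enabled on individual interfaces):

```
system-view
 mpls
  mpls rsvp-te srefresh    # enable summary refresh for the entire device
```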
Context
The RSVP Hello extension mechanism is used to fast detect reachability of RSVP neighbors.
When the mechanism detects that a neighboring RSVP node is unreachable, the MPLS TE
tunnel is torn down.
NOTE
For details about the RSVP Hello extension mechanism, see RFC 3209.
Procedure
Step 1 Run:
system-view
----End
Context
You can adjust object information in RSVP messages by configuring the RSVP message
format. In scenarios where an RSVP-TE tunnel traverses devices from other vendors
that use a different RSVP message format, you can modify the format of the RSVP
messages sent by the Huawei device to implement interworking.
You can configure the transit and egress nodes to add the down-reason object in an RSVP
message to be sent, facilitating fault locating.
Procedure
l Configure the formats of objects in an RSVP message.
Perform the following steps on each node of the MPLS TE tunnel:
a. Run:
system-view
n If you want an ingress to learn RSVP-TE tunnel Down causes of the transit
and egress nodes, run the mpls rsvp-te send-message down-reason
command.
l Configure the format of the Record Route Object (RRO) in an Resv message.
If the format of a Resv message sent by a connected non-Huawei device differs from
the format used on the Huawei device, run the following command to adjust the Resv
message format on the Huawei device to match that of the non-Huawei device and
implement interworking.
Perform the following configurations on the transit and egress nodes of an MPLS TE
tunnel.
a. Run:
system-view
Context
RSVP key authentication prevents an unauthorized node from setting up RSVP neighbor
relationships with the local node or generating forged packets to attack the local node. By
default, RSVP authentication is not configured. Configuring RSVP authentication is
recommended to ensure system security.
RSVP key authentication prevents the following unauthorized means of setting up RSVP
neighbor relationships, protecting the local node from attacks (such as malicious reservation
of high bandwidth):
l An unauthorized node attempts to set up a neighbor relationship with the local node.
l A remote node generates and sends forged RSVP messages to set up a neighbor
relationship with the local node.
RSVP key authentication alone cannot prevent replay attacks or RSVP message mis-
sequence during network congestion. RSVP message mis-sequence causes authentication
termination between RSVP neighbors. The handshake and message window functions,
together with RSVP key authentication, can prevent the preceding problems.
The RSVP authentication lifetime is configured, preventing unceasing RSVP authentication.
In the situation where no CR-LSP exists between RSVP neighbors, the neighbor relationship
is kept Up until the RSVP authentication lifetime expires.
The RSVP key authentication is configured either in the interface view or the MPLS RSVP-
TE neighbor view:
l Configure RSVP key authentication in the interface view: the RSVP key authentication
is performed between directly connected nodes.
l Configure RSVP key authentication in the MPLS RSVP-TE neighbor view: the RSVP
key authentication is performed between neighboring nodes, which is recommended.
Perform the following configurations on each node of the MPLS TE tunnel.
NOTE
The configuration must be complete on two neighboring nodes within three refreshing intervals. If the
configuration is not complete on either of the two neighboring nodes after three intervals elapse, the
session goes Down.
Procedure
Step 1 Run:
system-view
RSVP key authentication configured in the interface view takes effect only on the
current interface and has the lowest preference.
NOTE
On an Ethernet interface, run the undo portswitch command to switch the working mode of the
interface to Layer 3 mode.
l To enter the MPLS RSVP-TE neighbor view, run:
mpls rsvp-te peer ip-address
When ip-address is specified as an interface address but not the LSR ID of the
RSVP neighbor, key authentication is based on this neighbor's interface address.
This means that RSVP key authentication takes effect only on the specified
interface of the neighbor, providing high security. In this case, RSVP key
authentication has the highest preference.
When ip-address is set to the LSR ID of the RSVP neighbor, key authentication is
based on the neighbor's LSR ID. This means that RSVP key authentication takes
effect on all interfaces of the neighbor. In this case, this RSVP key
authentication has a higher preference than that configured in the interface
view, but a lower preference than that configured based on the neighbor's
interface address.
NOTE
If a neighbor node is identified by its LSR-ID, CSPF must be enabled on two neighboring devices
where RSVP authentication is required.
Step 3 Run:
mpls rsvp-te authentication { { cipher | plain } auth-key | keychain keychain-
name }
Note that the HMAC-MD5 algorithm cannot ensure security. Keychain authentication is
recommended.
NOTE
If you run the mpls rsvp-te authentication lifetime lifetime command after configuring the handshake
function, note that the RSVP authentication lifetime must be greater than the interval for sending RSVP
refresh messages configured by mpls rsvp-te timer refresh command.
If the RSVP authentication lifetime is smaller than the interval for sending RSVP refresh messages, the
RSVP authentication relationship may be deleted because no RSVP refresh message is received within
the RSVP authentication lifetime. In such a case, after the next RSVP refresh message is received, the
handshake operation is triggered. Repeated handshake operations may prevent RSVP tunnels from
being set up or cause established RSVP tunnels to be deleted.
The default window size is 1, which means that a device saves only the largest sequence
number of the RSVP message from neighbors.
When window-size is larger than 1, the device accepts multiple valid sequence
numbers.
NOTE
If RSVP is enabled on an Eth-Trunk interface, only one neighbor relationship is established on the trunk
link between RSVP neighbors. Therefore, any member interface of the trunk interface receives RSVP
messages in a random order, resulting in RSVP message mis-sequence. Configuring RSVP message
window size prevents RSVP message mis-sequence.
A window size larger than 32 is recommended. If the window size is set too small, RSVP packets
are discarded because their sequence numbers are beyond the range of the window, causing the
RSVP neighbor relationship to be terminated.
Step 7 Run:
quit
Step 8 (Optional) Set an interval at which a Challenge message is retransmitted and the maximum
number of times that a Challenge message can be retransmitted.
If Authentication messages exchanged between two RSVP nodes are out of order, a node
sends a Challenge message to the other node to request connection restoration. If no reply
to the Challenge message is received, the node retransmits the Challenge message at a
specified interval. If no reply is received after the maximum number of retransmission times
is reached, the neighbor relationship is not restored. If a reply is received before the maximum
number of retransmission times is reached, the neighbor relationship is restored, and the
number of retransmission times is cleared for the Challenge message.
1. Run:
mpls
----End
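The recommended neighbor-view keychain configuration might be sketched as follows (the neighbor address and keychain name are hypothetical; the keychain itself must be configured separately, and the lifetime format should be checked against the command reference):

```
system-view
 mpls rsvp-te peer 2.2.2.2                           # LSR ID of the neighbor; CSPF must be enabled on both devices
  mpls rsvp-te authentication keychain kc-rsvp       # keychain authentication (recommended over HMAC-MD5)
  mpls rsvp-te authentication window-size 64         # a window size larger than 32 is recommended
```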
Prerequisites
The configurations of adjusting RSVP signaling parameters are complete.
Procedure
l Run the display mpls rsvp-te command to check related information about RSVP-TE.
l Run the display default-parameter mpls rsvp-te command to check default parameters
of RSVP-TE.
l Run the display mpls rsvp-te session ingress-lsr-id tunnel-id egress-lsr-id command to
check information about the specified RSVP session.
l Run the display mpls rsvp-te psb-content [ ingress-lsr-id tunnel-id lsp-id ] command to
check information about RSVP-TE PSB.
l Run the display mpls rsvp-te rsb-content [ ingress-lsr-id tunnel-id lsp-id ] command to
check information about RSVP-TE RSB.
l Run the display mpls rsvp-te statistics { global | interface [ interface-type interface-
number ] } command to check RSVP-TE statistics.
l Run the display mpls rsvp-te peer [ interface interface-type interface-number ]
command to view information about the RSVP neighbor on an RSVP-TE-enabled
interface.
----End
Pre-configuration Tasks
Before adjusting the path of a CR-LSP, complete the following task:
l 5.8.2 Configuring a Dynamic MPLS TE Tunnel
Configuration Process
The following configurations are optional and can be performed in any sequence.
Context
You can configure the CSPF tie-breaking function to select a path from multiple paths with
the same weight value.
Procedure
Step 1 Run:
system-view
NOTE
The maximum reservable bandwidth is the bandwidth configured using the mpls te bandwidth
max-reservable-bandwidth command.
Step 4 Run:
quit
NOTE
The tunnel preferentially uses the tie-breaking policy configured in its tunnel interface view. If no
tie-breaking policy is configured in the tunnel interface view, the configuration in the MPLS view
is used.
----End
Context
You can configure the metric type that is used for setting up a tunnel.
Procedure
l Specifying the metric type used by the tunnel
Perform the following configurations on the ingress node of an MPLS TE tunnel.
a. Run:
system-view
The path metric type used by the tunnel during route selection is specified.
If the mpls te path metric-type command is not run in the tunnel interface view,
the metric type in the MPLS view is used; otherwise, the metric type in the tunnel
interface view is used.
By default, the path metric type used by the tunnel during route selection is TE.
l (Optional) Configuring the TE metric value of the path
If the metric type of a specified tunnel is TE, you can modify the TE metric value of the
path on the outbound interface of the ingress and the transit node by performing the
following configurations.
a. Run:
system-view
b. Run:
interface interface-type interface-number
NOTE
If the IGP is OSPF and the current device is a stub router, the mpls te metric command does
not take effect.
----End
Context
Similar to the administrative group and the affinity property, the hop limit is a condition for
CR-LSP path selection and is used to specify the number of hops along a CR-LSP to be set
up.
Perform the following configurations on the ingress node of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
The number of hops along the CR-LSP is set. The hop-limit-value is an integer ranging from
1 to 32.
Step 4 Run:
mpls te commit
----End
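On the tunnel interface, the hop limit might be set as in this sketch (the tunnel number and hop count are examples):

```
system-view
 interface tunnel 0/0/1
  mpls te hop-limit 10    # the CR-LSP may traverse at most 10 hops (range 1 to 32)
  mpls te commit
```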
Context
By configuring the route pinning function, you can use the path that is originally selected,
rather than another eligible path, to set up a CR-LSP.
Perform the following configurations on the ingress node of an MPLS TE tunnel.
NOTE
If route pinning is enabled, the MPLS TE re-optimization cannot be used at the same time.
Procedure
Step 1 Run:
system-view
----End
Context
The configuration of the administrative group affects only LSPs to be set up; the
configuration of the affinity property affects established LSPs by recalculating the paths.
Perform the following configurations on the ingress node of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
----End
Context
In the networking scenario where the hot standby CR-LSP is set up or TE FRR is enabled,
configure the SRLG attribute on the outbound interface of the ingress node of the MPLS TE
tunnel or the PLR and the other member links of the SRLG to which the outbound interface
belongs.
Configuring SRLG includes:
l Configuring SRLG for the link
l Configuring SRLG path calculation mode for the tunnel
l Deleting the member interfaces of all SRLGs
Perform the following configurations according to actual networking.
Procedure
l Configuring SRLG for the link
Perform the following configurations on the links which are in the same SRLG.
a. Run:
system-view
On a network with CR-LSP hot standby or TE FRR configured, the SRLG attribute
can be configured for the outbound interface of the ingress node of the MPLS TE
tunnel or the PLR and other members of the SRLG to which the outbound interface
belongs. A link joins an SRLG after the SRLG attribute is configured on an
outbound interface of the link.
l Configuring SRLG path calculation mode for the tunnel
Perform the following configurations on the ingress node of the hot-standby tunnel or the
TE FRR tunnel.
a. Run:
system-view
If you specify the strict keyword, CSPF avoids the following links when
calculating the bypass CR-LSP or backup CR-LSP:
n Link with the same SRLG attributes as SRLG attributes of the primary CR-
LSP
n All links along the primary CR-LSP regardless of whether the links are
configured with SRLG attributes
CSPF does not exclude the nodes that the primary CR-LSP passes through.
NOTE
l If you specify the strict keyword, CSPF always considers the SRLG as a constraint
when calculating the path for the bypass CR-LSP or the backup CR-LSP.
l If you specify the preferred keyword, CSPF tries to calculate the path which avoids the
links in the same SRLG as protected interfaces; if the calculation fails, CSPF does not
consider the SRLG as a constraint.
l Delete the member interfaces of all SRLGs.
Perform the following configurations to delete member interfaces of all SRLGs from a
node of the MPLS TE tunnel.
a. Run:
system-view
The member interfaces of all SRLGs are deleted from the MPLS TE node.
NOTE
The undo mpls te srlg all-config command does not delete an SRLG-based path calculation
mode configured using the mpls te srlg path-calculation command in the MPLS view.
----End
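The two SRLG configuration points above might be sketched as follows (the interface and SRLG numbers are hypothetical):

```
# On each link belonging to the same risk group:
system-view
 interface gigabitethernet 0/0/1
  mpls te srlg 1                          # the link joins SRLG 1
  quit
# On the ingress of the hot-standby or TE FRR tunnel:
 mpls
  mpls te srlg path-calculation strict    # or preferred: fall back if the strict calculation fails
```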
Context
A node becomes overloaded in the following situations:
l When the node is transmitting a large number of services and its system resources are
exhausted, the node marks itself overloaded.
l When the node is transmitting a large number of services and its CPU is overburdened,
an administrator can run the set-overload command to mark the node overloaded.
If there are overloaded nodes on an MPLS TE network, associate CR-LSP establishment with
the IS-IS overload setting to ensure that CR-LSPs are established over paths excluding
overloaded nodes. This configuration prevents overloaded nodes from being further burdened
and improves CR-LSP reliability.
Procedure
Step 1 Run:
system-view
Step 2 Run:
mpls
Step 3 Run:
mpls te path-selection overload
CR-LSP establishment is associated with the IS-IS overload setting. This association allows
CSPF to calculate paths excluding overloaded IS-IS nodes.
Before the association is configured, the mpls te record-route command must be run to
enable the route and label record.
Traffic travels through an existing CR-LSP before a new CR-LSP is established. After the
new CR-LSP is established, traffic switches to the new CR-LSP and the original CR-LSP is
deleted. This traffic switchover is performed based on the Make-Before-Break mechanism.
Traffic is not dropped during the switchover.
The mpls te path-selection overload command affects CR-LSP establishment as follows:
l CSPF recalculates paths excluding overloaded nodes for established CR-LSPs.
l CSPF calculates paths excluding overloaded nodes for new CR-LSPs.
NOTE
----End
Context
CSPF uses a locally-maintained traffic-engineering database (TEDB) to calculate the shortest
path to the destination address. Then, the signaling protocol applies for and reserves resources
for the path. If a link on the network fails and the routing protocol fails to notify
CSPF to update the TEDB in time, the path calculated by CSPF may contain the
faulty link.
As a result, the control packets, such as RSVP Path messages, of a signaling protocol are
discarded on the faulty link. Then, the signaling protocol returns an error message to the
upstream node. Receiving the link error message on the upstream node triggers CSPF to
recalculate a path. The path recalculated by CSPF and returned to the signaling protocol still
contains the faulty link because the TEDB is not updated. The control packets of the signaling
protocol are still discarded and the signaling protocol returns an error message to trigger
CSPF to recalculate a path. The procedure repeats until the TEDB is updated.
To avoid the preceding situation, when the signaling protocol returns an error message to
notify CSPF of a link failure, CSPF sets the status of the faulty link to INACTIVE and
starts a failed-link timer. CSPF then excludes the faulty link from path calculation until it
receives a TEDB update event or the failed-link timer expires.
If a TEDB update event is received before the failed-link timer expires, CSPF deletes the
timer.
Perform the following configurations on the ingress node of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
----End
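The procedure above is truncated in this section. As a hedged sketch only: the failed-link timer is typically set in the MPLS view. The command name below (mpls te cspf timer failed-link) is an assumption based on common VRP syntax and the terminology used in this section; verify it against your device's command reference. The prompts and the 30-second value are examples.

```text
<HUAWEI> system-view
[HUAWEI] mpls
# Assumed command: keep the faulty link out of CSPF path calculation
# for 30 seconds (example value) unless a TEDB update arrives first.
[HUAWEI-mpls] mpls te cspf timer failed-link 30
```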
Context
The bandwidth flooding threshold is the ratio of the link bandwidth occupied or released by a
TE tunnel to the link bandwidth remaining in the TEDB.
If the link bandwidth changes only slightly, flooding every change wastes network resources.
For example, if the link bandwidth is 100 Mbit/s and 100 TE tunnels (each with a bandwidth
of 1 Mbit/s) are created along this link, bandwidth flooding needs to be performed 100 times.
If the flooding threshold is set to 10%, bandwidth flooding is not performed when tunnel 1
through tunnel 9 are created. When tunnel 10 is created, the bandwidth of tunnel 1 through
tunnel 10 (10 Mbit/s in total) is flooded. Similarly, bandwidth flooding is not performed when
tunnel 11 through tunnel 18 are created; when tunnel 19 is created, the bandwidth of tunnel 11
through tunnel 19 is flooded. Configuring a bandwidth flooding threshold therefore reduces
the number of flooding operations and conserves network resources.
By default, the IGP floods information about a link, and CSPF updates the TEDB
accordingly, if either of the following conditions is met:
l The ratio of the bandwidth reserved for an MPLS TE tunnel to the bandwidth remaining
in the TEDB is 10% or higher.
l The ratio of the bandwidth released by an MPLS TE tunnel to the bandwidth remaining
in the TEDB is 10% or higher.
Perform the following configurations on the ingress or transit node of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
----End
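The procedure above is truncated in this section. As a hedged sketch only: the flooding threshold described above is usually set in the interface view of the TE link. The command name (mpls te bandwidth change thresholds) and the interface are assumptions; verify them against your device's command reference. The value 10 matches the default behavior described above.

```text
<HUAWEI> system-view
[HUAWEI] interface gigabitethernet 0/0/1
# Assumed command: flood when the bandwidth reserved (up) or released
# (down) reaches 10% of the bandwidth remaining in the TEDB.
[HUAWEI-GigabitEthernet0/0/1] mpls te bandwidth change thresholds up 10
[HUAWEI-GigabitEthernet0/0/1] mpls te bandwidth change thresholds down 10
```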
Prerequisites
The configurations of adjusting the path of a CR-LSP are complete.
Procedure
l Run the display mpls te tunnel verbose command to check information about the
MPLS TE tunnel.
l Run the display mpls te srlg { srlg-number | all } command to check the SRLG
configuration and interfaces in the SRLG.
l Run the display mpls te link-administration srlg-information [ interface interface-
type interface-number ] command to check the SRLG that interfaces belong to.
l Run the display mpls te tunnel c-hop [ tunnel-name ] [ lsp-id ingress-lsr-id session-id
lsp-id ] command to check path computation results of tunnels.
l Run the display default-parameter mpls te cspf command to check default CSPF
settings.
----End
Pre-configuration Tasks
Before adjusting establishment of an MPLS TE tunnel, complete the following task:
Configuration Process
The following configurations are optional and can be performed in any sequence.
Context
In the loop detection mechanism, a maximum of 32 hops is allowed on an LSP. If
information about the local LSR is already recorded in the path information table, or the
number of hops on the path exceeds 32, a loop has occurred and the LSP fails to be set up.
Configuring the loop detection function prevents loops.
Perform the following configurations on the ingress node of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
----End
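The procedure above is truncated in this section. A minimal sketch on the ingress tunnel interface follows; the mpls te loop-detection command name is an assumption (the section does not show the command), and the tunnel interface number is an example.

```text
<HUAWEI> system-view
[HUAWEI] interface tunnel 0/0/1
# Assumed command: enable loop detection during CR-LSP establishment.
[HUAWEI-Tunnel0/0/1] mpls te loop-detection
[HUAWEI-Tunnel0/0/1] mpls te commit
```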
Context
By configuring route record and label record, you can determine whether to record routes and
labels during the establishment of an RSVP-TE tunnel.
Perform the following configurations on the ingress node of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface tunnel interface-number
Step 3 Run:
mpls te record-route [ label ]
Routes and labels are recorded during tunnel establishment.
Step 4 Run:
mpls te commit
----End
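Putting the steps above together, a minimal example on the ingress node follows (the tunnel interface number is an example):

```text
<HUAWEI> system-view
[HUAWEI] interface tunnel 0/0/1
# Record both routes and labels during tunnel establishment;
# omit the label keyword to record routes only.
[HUAWEI-Tunnel0/0/1] mpls te record-route label
[HUAWEI-Tunnel0/0/1] mpls te commit
```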
Context
By configuring the tunnel re-optimization function, you can periodically recompute routes for
a CR-LSP. If the recomputed routes are better than the routes in use, a new CR-LSP is then
established according to the recomputed routes. In addition, services are switched to the new
CR-LSP, and the previous CR-LSP is deleted.
If an upstream node on an MPLS network is busy but its downstream node is idle or an
upstream node is idle but its downstream node is busy, a CR-LSP may be torn down before
the new CR-LSP is established, causing a temporary traffic interruption. In this case, you can
configure the switching and deletion delays.
NOTE
l If re-optimization is enabled, route pinning cannot be used at the same time.
l CR-LSP re-optimization cannot be configured when the resource reservation style is FF.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface tunnel interface-number
Step 3 Run:
mpls te reoptimization [ frequency interval ]
----End
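The steps above can be combined as follows. The 300-second interval and tunnel number are examples, and the trailing mpls te commit (used by the other procedures in this section but not shown in this one) is an assumption:

```text
<HUAWEI> system-view
[HUAWEI] interface tunnel 0/0/1
# Recompute the CR-LSP path every 300 seconds (example interval).
[HUAWEI-Tunnel0/0/1] mpls te reoptimization frequency 300
[HUAWEI-Tunnel0/0/1] mpls te commit
```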
Context
By configuring the tunnel reestablishment function, you can have the system automatically
attempt to reestablish a CR-LSP at a specified interval after the CR-LSP fails to be
established.
Perform the following configurations on the ingress node of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface tunnel interface-number
Step 3 Run:
mpls te timer retry interval
Step 4 Run:
mpls te commit
If tunnel establishment fails, the system attempts to reestablish the tunnel at the set interval;
the maximum number of attempts equals the configured number of reestablishment times.
----End
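A minimal example of the steps above. The 30-second interval and tunnel number are examples; the command that sets the maximum number of reestablishment attempts is not shown in this section and is therefore omitted:

```text
<HUAWEI> system-view
[HUAWEI] interface tunnel 0/0/1
# Retry tunnel establishment every 30 seconds after a setup failure.
[HUAWEI-Tunnel0/0/1] mpls te timer retry 30
[HUAWEI-Tunnel0/0/1] mpls te commit
```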
Context
When a fault occurs on an MPLS network, a large number of RSVP CR-LSPs may need to be
reestablished, which consumes a large amount of system resources. By configuring a delay
for triggering RSVP signaling, you can reduce the system resources consumed during RSVP
CR-LSP reestablishment.
Perform the following configurations on each node on which multiple CR-LSPs need to be
reestablished.
Procedure
Step 1 Run:
system-view
Step 2 Run:
mpls
Step 3 Run:
mpls te signaling-delay-trigger enable
----End
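The three steps above combine into the following example:

```text
<HUAWEI> system-view
[HUAWEI] mpls
# Delay triggering RSVP signaling so that a burst of CR-LSP
# reestablishments does not exhaust system resources.
[HUAWEI-mpls] mpls te signaling-delay-trigger enable
```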
Context
In the process of establishing a CR-LSP, if no path with the required bandwidth exists, you
can perform bandwidth preemption according to setup priorities and holding priorities.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface tunnel interface-number
Step 3 Run:
mpls te priority setup-priority [ hold-priority ]
Both the setup priority and the holding priority range from 0 to 7. The smaller the value is, the
higher the priority is.
By default, both the setup priority and the holding priority are 7. If only the setup priority
value is set, the holding priority value is the same as the setup priority value.
NOTE
The setup priority cannot be higher than the holding priority. That is, the setup priority value
must not be smaller than the holding priority value.
Step 4 Run:
mpls te commit
----End
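A minimal example of the steps above; the priority values and tunnel number are examples (remember that the setup priority value must not be smaller than the holding priority value):

```text
<HUAWEI> system-view
[HUAWEI] interface tunnel 0/0/1
# Setup priority 4, holding priority 4 (0 is the highest priority).
[HUAWEI-Tunnel0/0/1] mpls te priority 4 4
[HUAWEI-Tunnel0/0/1] mpls te commit
```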
Prerequisites
The configurations of adjusting establishment of an MPLS TE tunnel are complete.
Procedure
l Run the display mpls te tunnel-interface [ tunnel interface-number ] command to
check information about the tunnel interface.
----End
Pre-configuration Tasks
Before configuring CR-LSP backup, complete the following tasks:
l 5.8.2 Configuring a Dynamic MPLS TE Tunnel
l Enabling MPLS, MPLS TE, and RSVP-TE globally and on interfaces of each node along
a backup CR-LSP
NOTE
If CR-LSP hot standby is configured, perform the operation of 5.8.13 Configuring Static BFD for CR-LSPs
or 5.8.14 Configuring Dynamic BFD for CR-LSPs to implement fast switching at the millisecond level.
Configuration Process
Configuring forcible switchover, locking a backup CR-LSP attribute template, configuring
dynamic bandwidth for hot-standby CR-LSPs, and configuring a best-effort path are optional.
Context
CR-LSP backup can be configured to allow traffic to switch from a primary CR-LSP to a
backup CR-LSP, providing end-to-end protection.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface tunnel tunnel-number
Step 3 Run:
mpls te backup hot-standby
or run:
mpls te backup ordinary
NOTE
A tunnel interface cannot be used for both a bypass tunnel and a backup tunnel. A protection failure will
occur if the mpls te backup and mpls te bypass-tunnel commands are run on the tunnel interface, or if
the mpls te backup and mpls te protected-interface commands are run on the tunnel interface. For
details on how to create a bypass CR-LSP, see Configuring Manual TE FRR or Configuring Auto TE
FRR.
A tunnel interface cannot be used for both a bypass tunnel and a protection tunnel in a tunnel protection
group. A protection failure will occur if the mpls te backup and mpls te protection tunnel commands
are run on the tunnel interface. For details on how to create a protection tunnel, see Configuring a
Tunnel Protection Group.
After hot standby or ordinary backup is configured, the system selects a path for a backup
CR-LSP. To specify a path for a backup CR-LSP, repeatedly perform one or more of steps 4 to
6. When hot standby is configured, repeatedly perform one or more of steps 7 to 9.
Use a separate explicit path for the backup CR-LSP to prevent the backup CR-LSP from
completely overlapping its primary CR-LSP. Protection will fail if the backup CR-LSP
completely overlaps its primary CR-LSP.
The mpls te path explicit-path command can be run successfully only after an explicit path
is set up by running the explicit-path path-name command in the system view, and the nodes
on the path are specified.
By default, the affinity property used by the backup CR-LSP is 0x0 and the mask is 0x0.
The path overlapping function is configured. This function allows a hot-standby CR-LSP to
use links of a primary CR-LSP.
By default, the path overlapping function is disabled. If the path overlapping function is
disabled, a hot-standby CR-LSP may fail to be set up.
After the path overlapping function is configured, the path of the hot-standby CR-LSP
partially overlaps the path of the primary CR-LSP when the hot-standby CR-LSP cannot
exclude paths of the primary CR-LSP.
By default, the WTR time for switching traffic from a hot-standby CR-LSP to a primary CR-
LSP is 10 seconds.
Step 9 (Optional) Run:
mpls te backup hot-standby mode { revertive [ wtr interval ] | non-revertive }
----End
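A minimal hot-standby sketch based on the commands shown above. The tunnel number and the 15-second WTR value are examples, and the trailing mpls te commit is assumed from the other procedures in this section:

```text
<HUAWEI> system-view
[HUAWEI] interface tunnel 0/0/2
# Establish a hot-standby CR-LSP for this tunnel.
[HUAWEI-Tunnel0/0/2] mpls te backup hot-standby
# Revert to the primary CR-LSP 15 seconds after it recovers.
[HUAWEI-Tunnel0/0/2] mpls te backup hot-standby mode revertive wtr 15
[HUAWEI-Tunnel0/0/2] mpls te commit
```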
Context
If a backup CR-LSP has been established and a primary CR-LSP needs to be adjusted,
configure the forcible switchover function to switch traffic from the primary CR-LSP to the
backup CR-LSP. After adjusting the primary CR-LSP, switch traffic back to the primary CR-
LSP. This process prevents traffic loss during the primary CR-LSP adjustment.
Perform the following configurations on the ingress node of an MPLS TE tunnel.
Procedure
l Before adjusting a primary CR-LSP, perform the following configurations.
a. Run:
system-view
NOTE
To prevent traffic loss, check that a backup CR-LSP has been established before running the
hotstandby-switch force command.
l After adjusting the primary CR-LSP, perform the following configurations.
a. Run:
system-view
c. Run:
hotstandby-switch clear
----End
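As a hedged sketch, the two switchover commands shown above are assumed to be run in the tunnel interface view (the intermediate steps are not shown in this section), and the tunnel number is an example:

```text
<HUAWEI> system-view
[HUAWEI] interface tunnel 0/0/2
# Force traffic onto the backup CR-LSP before adjusting the primary one.
[HUAWEI-Tunnel0/0/2] hotstandby-switch force
# ... adjust the primary CR-LSP here ...
# Clear the forcible switchover so traffic returns to the primary CR-LSP.
[HUAWEI-Tunnel0/0/2] hotstandby-switch clear
```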
Context
A maximum of three hot-standby or ordinary backup attribute templates can be used for
establishing a hot-standby or an ordinary CR-LSP. TE attribute templates are prioritized. The
system attempts to use each template in ascending order by priority to establish a backup CR-
LSP.
If an existing backup CR-LSP was set up using a lower-priority attribute template, the system
automatically attempts to set up a new backup CR-LSP using a higher-priority template,
which is sometimes unnecessary. If a CR-LSP has been established using a locked CR-LSP
attribute template, the CR-LSP is not needlessly reestablished using another, higher-priority
template. Locking a CR-LSP attribute template therefore allows the existing CR-LSP to keep
transmitting traffic without triggering unneeded traffic switchovers, which uses system
resources efficiently.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface tunnel tunnel-number
Step 3 Run:
mpls te primary-lsp-constraint { dynamic | lsp-attribute lsp-attribute-name }
Step 4 Run either of the following commands as needed to establish a backup CR-LSP:
l To establish an ordinary backup CR-LSP, run:
mpls te ordinary-lsp-constraint number { dynamic | lsp-attribute lsp-
attribute-name }
Step 5 Run either of the following commands as needed to lock a backup CR-LSP attribute template:
l To lock an attribute template for an ordinary backup CR-LSP, run:
mpls te backup ordinary-lsp-constraint lock
NOTE
A used attribute template can be unlocked after the undo mpls te backup ordinary-lsp-constraint lock or
undo mpls te backup hotstandby-lsp-constraint lock command is run. After unlocking templates, the
system uses each available template in ascending order by priority. If a template has a higher priority than that
of the currently used template, the system establishes a CR-LSP using the higher-priority template.
Step 6 Run:
mpls te commit
----End
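A sketch combining the steps above. The attribute template names (attr-main, attr-bk1) are hypothetical, and they are assumed to have been created beforehand with the lsp-attribute command in the system view:

```text
<HUAWEI> system-view
[HUAWEI] interface tunnel 0/0/2
# Use template attr-main (hypothetical name) for the primary CR-LSP.
[HUAWEI-Tunnel0/0/2] mpls te primary-lsp-constraint lsp-attribute attr-main
# Use template attr-bk1, with priority 1, for an ordinary backup CR-LSP.
[HUAWEI-Tunnel0/0/2] mpls te ordinary-lsp-constraint 1 lsp-attribute attr-bk1
# Lock the template so the backup CR-LSP is not needlessly reestablished.
[HUAWEI-Tunnel0/0/2] mpls te backup ordinary-lsp-constraint lock
[HUAWEI-Tunnel0/0/2] mpls te commit
```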
Context
Hot-standby CR-LSPs are established using reserved bandwidth resources by default. The
dynamic bandwidth function can be configured to allow the system to create a primary CR-
LSP and a hot-standby CR-LSP with the bandwidth of 0 bit/s simultaneously.
Procedure
l Perform the following configurations to enable the dynamic bandwidth function for hot-
standby CR-LSPs that are established not using attribute templates.
a. Run:
system-view
NOTE
l If a hot-standby CR-LSP has been established before the dynamic bandwidth function is
enabled, the system uses the Make-Before-Break mechanism to establish a new hot-standby
CR-LSP with the bandwidth of 0 bit/s to replace the existing hot-standby CR-LSP.
l The undo mpls te backup hot-standby dynamic-bandwidth command can be used to
disable the dynamic bandwidth function. This allows the hot-standby CR-LSP with the
bandwidth of 0 bit/s to obtain bandwidth.
d. Run:
mpls te commit
NOTE
l If a hot-standby CR-LSP has been established before the dynamic bandwidth function is
enabled, the system uses the Make-Before-Break mechanism to establish a new hot-standby
CR-LSP with the bandwidth of 0 bit/s to replace the existing hot-standby CR-LSP.
l The undo mpls te backup hotstandby-lsp-constraint dynamic-bandwidth command can
be used to disable the dynamic bandwidth function of the hot-standby CR-LSP which is set up
by using an attribute template. This allows the hot-standby CR-LSP with no bandwidth to
obtain bandwidth.
d. Run:
mpls te commit
----End
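A sketch for hot-standby CR-LSPs established without attribute templates. The positive command form is inferred from the undo form quoted above, and the tunnel number is an example:

```text
<HUAWEI> system-view
[HUAWEI] interface tunnel 0/0/2
[HUAWEI-Tunnel0/0/2] mpls te backup hot-standby
# Inferred command: establish the hot-standby CR-LSP with a bandwidth
# of 0 bit/s instead of using reserved bandwidth resources.
[HUAWEI-Tunnel0/0/2] mpls te backup hot-standby dynamic-bandwidth
[HUAWEI-Tunnel0/0/2] mpls te commit
```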
Context
A best-effort path is configured on the ingress node of a primary CR-LSP to take over traffic
if both the primary and backup CR-LSPs fail.
Procedure
Step 1 Run:
system-view
NOTE
A tunnel interface cannot be used for both a best-effort path and a manually configured ordinary backup
tunnel. A protection failure will occur if the mpls te backup ordinary best-effort and mpls te backup
ordinary commands are run on the tunnel interface.
To establish a best-effort path over a specified path, run either or both of step 4 and step 5.
Step 4 (Optional) Run:
mpls te affinity property properties [ mask mask-value ] best-effort
----End
Prerequisites
The configurations of CR-LSP backup are complete.
Procedure
l Run the display mpls te tunnel-interface [ tunnel tunnel-number ] command to check
information about the tunnel interface.
l Run the display mpls te hot-standby state { all [ verbose ] | interface tunnel interface-
number } command to check information about the hot-standby status.
l Run the display mpls te tunnel [ destination ip-address ] [ lsp-id ingress-lsr-id session-
id local-lsp-id ] [ lsr-role { all | egress | ingress | remote | transit } ] [ name tunnel-
----End
Pre-configuration Tasks
Before configuring manual MPLS TE FRR, complete the following tasks:
l 5.8.2 Configuring a Dynamic MPLS TE Tunnel
l Enabling MPLS, MPLS TE and RSVP-TE in the system view and interface view of each
node along a bypass tunnel
l Enabling CSPF on a PLR
NOTE
Perform the operation of 5.8.12 Configuring Dynamic BFD for RSVP to implement fast switching at the
millisecond level.
Configuration Process
Except that configuring a TE FRR scanning timer and changing the PSB and RSB timeout
multiplier are optional, other configurations are mandatory.
Context
TE FRR must be enabled for a primary tunnel before a bypass tunnel is established.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface tunnel tunnel-number
Step 3 Run:
mpls te fast-reroute [ bandwidth ]
TE FRR is enabled.
NOTE
Only the primary tunnel in a tunnel protection group can be configured together with TE FRR on the
ingress node. Neither the protection tunnel nor the tunnel protection group itself can be used together
with TE FRR. If the tunnel protection group and TE FRR are used, neither of them takes effect.
For example, Tunnel1 and Tunnel2 are tunnel interfaces on MPLS TE tunnels and the tunnel named
Tunnel2 has a tunnel ID of 200. The mpls te protection tunnel 200 and mpls te fast-reroute
commands cannot be configured simultaneously on Tunnel1. That is, the tunnel protection group and TE
FRR cannot be used together on Tunnel1. A configuration failure will occur if the mpls te protection
tunnel 200 command is run on Tunnel1 and the mpls te fast-reroute command is run on Tunnel2.
Step 4 Run:
mpls te commit
----End
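A minimal example of the steps above on the ingress node. The tunnel number is an example; omit the bandwidth keyword if bandwidth protection is not required:

```text
<HUAWEI> system-view
[HUAWEI] interface tunnel 0/0/1
# Enable TE FRR with bandwidth protection for the primary tunnel.
[HUAWEI-Tunnel0/0/1] mpls te fast-reroute bandwidth
[HUAWEI-Tunnel0/0/1] mpls te commit
```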
Context
A bypass tunnel provides protection for a link or node on a primary tunnel. An explicit path
and attributes must be specified for a bypass tunnel when TE manual FRR is being
configured.
Bypass tunnels must be established over links or nodes that are not on the protected primary
tunnel. If a bypass tunnel uses a link or node of the protected primary tunnel and that link or
node fails, the bypass tunnel fails together with the primary tunnel and cannot protect it.
NOTE
l FRR does not take effect if multiple nodes or links fail simultaneously. After FRR switches
data from the primary tunnel to a bypass tunnel, the bypass tunnel must remain Up while
forwarding data. If the bypass tunnel goes Down, the protected traffic is interrupted and
FRR fails. Even if the bypass tunnel goes Up again, traffic does not flow through it; traffic
travels through the primary tunnel only after the primary tunnel recovers or is reestablished.
l By default, the system searches for an optimal manual FRR tunnel for each primary tunnel every 1
second and binds the bypass tunnel to the primary tunnel if there is an optimal bypass tunnel.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface tunnel tunnel-number
Step 3 Run either of the following commands to configure the IP address for the tunnel interface:
l To configure an IP address for the interface, run:
ip address ip-address { mask | mask-length } [ sub ]
Step 4 Run:
tunnel-protocol mpls te
Step 5 Run:
destination ip-address
Step 6 Run:
mpls te tunnel-id tunnel-id
Before using this command, ensure that the explicit path has been created using the explicit-
path command. Note that physical links of a bypass tunnel cannot overlap protected physical
links of the primary tunnel.
Step 8 Run:
mpls te bypass-tunnel
After a bypass tunnel is configured, the system automatically records routes related to the
bypass tunnel.
NOTE
l A tunnel interface cannot be used for both a bypass tunnel and a backup tunnel. A protection failure
will occur if the mpls te bypass-tunnel and mpls te backup commands are both configured on the
tunnel interface.
l A tunnel interface cannot be used for both a bypass tunnel and a primary tunnel. A protection failure
will occur if the mpls te bypass-tunnel and mpls te fast-reroute commands are both configured on
the tunnel interface.
l A tunnel interface cannot be used for both a bypass tunnel and a protection tunnel in a tunnel
protection group. A protection failure will occur if the mpls te bypass-tunnel and mpls te
protection tunnel commands are both configured on the tunnel interface.
Step 9 Run:
mpls te protected-interface interface-type interface-number
NOTE
Step 10 Run:
mpls te commit
----End
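Putting the steps above together on the PLR: the addresses, interface numbers, tunnel ID, and explicit-path hop are all examples, and the next hop syntax inside the explicit-path view is an assumption; verify it against your device's command reference.

```text
<HUAWEI> system-view
# Define an explicit path that avoids the protected link (hop is an example).
[HUAWEI] explicit-path bypass-path
[HUAWEI-explicit-path-bypass-path] next hop 10.1.2.2
[HUAWEI-explicit-path-bypass-path] quit
[HUAWEI] interface tunnel 0/0/3
[HUAWEI-Tunnel0/0/3] ip address 10.9.9.1 24
[HUAWEI-Tunnel0/0/3] tunnel-protocol mpls te
[HUAWEI-Tunnel0/0/3] destination 3.3.3.3
[HUAWEI-Tunnel0/0/3] mpls te tunnel-id 300
[HUAWEI-Tunnel0/0/3] mpls te path explicit-path bypass-path
# Mark this tunnel as a bypass tunnel and bind the protected interface.
[HUAWEI-Tunnel0/0/3] mpls te bypass-tunnel
[HUAWEI-Tunnel0/0/3] mpls te protected-interface gigabitethernet 0/0/1
[HUAWEI-Tunnel0/0/3] mpls te commit
```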
Context
A TE FRR-enabled device periodically refreshes the binding between a bypass CR-LSP and a
primary LSP at a specified interval. The PLR searches for the optimal TE bypass CR-LSP and
binds it to a primary CR-LSP. A TE FRR scanning timer is set to determine the interval at
which the binding between a bypass CR-LSP and a primary CR-LSP is refreshed.
Perform the following configurations on the PLR.
Procedure
Step 1 Run:
system-view
Set the interval at which the binding between a bypass CR-LSP and a primary CR-LSP is
refreshed.
By default, the time weight used to calculate the interval is 300. The actual interval at
which the binding between a bypass CR-LSP and a primary CR-LSP is refreshed depends on
device performance and the maximum number of LSPs that can be established on the device.
----End
Context
To allow TE FRR to operate during the RSVP GR process, set the timeout multiplier of the
Path State Block (PSB) and Reservation State Block (RSB). This setting prevents information
in PSBs and RSBs from being dropped because of a timeout before the GR processes are
complete for a large number of CR-LSPs.
Procedure
Step 1 Run:
system-view
NOTE
Setting the timeout multiplier to 5 or greater is recommended for a network where a large number of
CR-LSPs are established and RSVP GR is configured.
----End
Prerequisites
The configurations of manual TE FRR are complete.
Procedure
l Run the display mpls lsp lsp-id ingress-lsr-id session-id lsp-id [ verbose ] command to
check information about a specified primary tunnel.
l Run the display mpls lsp attribute bypass-inuse { inuse | not-exists | exists-not-used }
command to check information about the attribute of a specified bypass LSP.
l Run the display mpls lsp attribute bypass-tunnel tunnel-name command to check
information about the attribute of a bypass tunnel.
l Run the display mpls te tunnel-interface [ tunnel interface-number | auto-bypass-
tunnel [ tunnel-name ] ] command to check detailed information about the tunnel
interface of a specified primary or bypass tunnel.
l Run the display mpls te tunnel path [ [ [ tunnel-name ] tunnel-name ] [ lsp-id ingress-
lsr-id session-id lsp-id ] | fast-reroute { local-protection-available | local-protection-
inuse } | lsr-role { ingress | transit | egress } ] command to check information about
paths of a specified primary or bypass tunnel.
l Run the display mpls rsvp-te statistics fast-reroute command to check TE FRR
statistics.
l Run the display mpls stale-interface [ interface-index ] [ verbose ] command to check
the information about MPLS interfaces in the Stale state.
----End
Pre-configuration Tasks
Before configuring auto TE FRR, complete the following task:
l 5.8.2 Configuring a Dynamic MPLS TE Tunnel
l Enabling MPLS, MPLS TE and RSVP-TE in the system view and interface view of each
node along a bypass tunnel
l Enabling CSPF on a PLR
NOTE
Perform the operation of 5.8.12 Configuring Dynamic BFD for RSVP to implement fast switching at the
millisecond level.
Configuration Process
Except that configuring a TE FRR scanning timer, changing the PSB and RSB timeout
multiplier, configuring auto bypass tunnel re-optimization, and configuring interworking with
other vendors are optional, other configurations are mandatory.
Context
Before configuring auto TE FRR, enable auto TE FRR globally on the PLR. To implement
link protection, enable link protection on an interface.
Perform the following configurations on the PLR.
Procedure
Step 1 Run:
system-view
quit
The interface view of the outbound interface of the primary tunnel is displayed.
3. (Optional) On an Ethernet interface, run:
undo portswitch
Auto TE FRR is enabled on the outbound interface on the ingress node of the primary
tunnel.
To implement link protection, specify link. If link is not specified, the system provides
only node protection.
By default, after auto TE FRR is enabled globally, all the MPLS TE interfaces are
automatically configured with the mpls te auto-frr default command. To disable auto
TE FRR on some interfaces, run the mpls te auto-frr block command on these
interfaces. Then, these interfaces no longer have auto TE FRR capability even if auto TE
FRR is enabled or is to be re-enabled globally.
NOTE
After mpls te auto-frr is used in the MPLS view, the mpls te auto-frr default or mpls te auto-
frr node command used on an interface protects only nodes. When the topology does not meet the
requirement to set up an automatic bypass tunnel for node protection, the penultimate hop (but not
other hops) on the primary tunnel attempts to set up an automatic bypass tunnel for link protection.
----End
5.8.9.2 Enabling the TE FRR and Configuring the Auto Bypass Tunnel Attributes
Context
After TE Auto FRR is enabled, the system automatically sets up a bypass tunnel.
Perform the following configurations on the ingress node of the primary MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
NOTE
l The bypass tunnel attributes can be configured only after the mpls te fast-reroute bandwidth
command is run on the primary tunnel.
l The bandwidth of the bypass tunnel cannot be greater than the bandwidth of the primary tunnel.
l When the attributes of the automatic bypass tunnel are not configured, by default, the bandwidth of
the automatic bypass tunnel is the same as the bandwidth of the primary tunnel.
l The setup priority of the bypass tunnel cannot be higher than the holding priority. Both priorities
cannot be higher than the priority of the primary tunnel.
l When the bandwidth of the primary tunnel is changed or the FRR is disabled, the attributes of the
bypass tunnel are cleared automatically.
Step 5 Run:
mpls te commit
----End
Context
A TE FRR-enabled device periodically refreshes the binding between a bypass CR-LSP and a
primary LSP at a specified interval. The PLR searches for the optimal TE bypass CR-LSP and
binds it to a primary CR-LSP. A TE FRR scanning timer is set to determine the interval at
which the binding between a bypass CR-LSP and a primary CR-LSP is refreshed.
Perform the following configurations on the PLR.
Procedure
Step 1 Run:
system-view
Set the interval at which the binding between a bypass CR-LSP and a primary CR-LSP is
refreshed.
By default, the time weight used to calculate the interval is 300. The actual interval at
which the binding between a bypass CR-LSP and a primary CR-LSP is refreshed depends on
device performance and the maximum number of LSPs that can be established on the device.
----End
Context
To allow TE FRR to operate during the RSVP GR process, set the timeout multiplier of the
Path State Block (PSB) and Reservation State Block (RSB). This setting prevents information
in PSBs and RSBs from being dropped because of a timeout before the GR processes are
complete for a large number of CR-LSPs.
Procedure
Step 1 Run:
system-view
Step 2 Run:
mpls
Step 3 Run:
mpls rsvp-te keep-multiplier keep-multiplier-number
NOTE
Setting the timeout multiplier to 5 or greater is recommended for a network where a large number of
CR-LSPs are established and RSVP GR is configured.
----End
Context
Network changes often change the optimal path. Auto bypass tunnel re-optimization allows
paths to be recalculated at certain intervals for an auto bypass tunnel. If a more optimal path
to the same destination is found, for example because a link cost changes, a new auto bypass
tunnel is set up over that path. In this manner, network resources are used optimally.
Procedure
Step 1 Run:
system-view
----End
Context
If a non-Huawei device connected to the Huawei device uses the integer mode to save the
bandwidth of FRR objects, configure the Huawei device to save the bandwidth of FRR
objects in integer mode.
Perform the following operations on the PLR connected to the non-Huawei device.
Procedure
Step 1 Run:
system-view
mpls
The device is configured to save the bandwidth of FRR objects in integer mode.
By default, the bandwidth of FRR objects is saved in floating-point mode.
----End
Prerequisites
The configurations of the auto TE FRR function are complete.
Procedure
l Run the display mpls te tunnel verbose command to check binding information about
the primary tunnel and the auto bypass tunnel.
l Run the display mpls lsp attribute bypass-inuse { inuse | not-exists | exists-not-used }
command to check information about the attribute of a specified bypass LSP.
l Run the display mpls lsp attribute bypass-tunnel tunnel-name command to check
information about the attribute of a bypass tunnel.
l Run the display mpls te tunnel-interface [ tunnel interface-number | auto-bypass-
tunnel [ tunnel-name ] ] command to check detailed information about the tunnel
interface of a specified primary or bypass tunnel.
l Run the display mpls te tunnel path [ [ [ tunnel-name ] tunnel-name ] [ lsp-id ingress-
lsr-id session-id lsp-id ] | fast-reroute { local-protection-available | local-protection-
inuse } | lsr-role { ingress | transit | egress } ] command to check information about
paths of a specified primary or bypass tunnel.
l Run the display mpls rsvp-te statistics fast-reroute command to check TE FRR
statistics.
l Run the display mpls stale-interface [ interface-index ] [ verbose ] command to check
the information about MPLS interfaces in the Stale state.
----End
Pre-configuration Tasks
Before configuring association between TE FRR and CR-LSP backup, complete the following
tasks:
l 5.8.7 Configuring CR-LSP Backup (except for the best-effort path) in either hot
standby mode or ordinary backup mode
Context
Association between TE FRR and CR-LSP backup protects the entire CR-LSP.
Perform the following configurations on the ingress node of the primary MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface tunnel interface-number
Step 3 Run:
mpls te backup frr-in-use
When the primary CR-LSP is faulty (that is, the primary CR-LSP is in FRR-in-use state), the
system starts the bypass CR-LSP and tries to restore the primary CR-LSP. At the same time,
the system attempts to set up a backup CR-LSP.
Step 4 Run:
mpls te commit
----End
Pre-configuration Tasks
Before configuring a tunnel protection group, complete the following tasks:
l Creating a working tunnel according to 5.8.1 Configuring a Static MPLS TE Tunnel or
5.8.2 Configuring a Dynamic MPLS TE Tunnel
l Creating a protection tunnel according to 5.8.1 Configuring a Static MPLS TE Tunnel
or 5.8.2 Configuring a Dynamic MPLS TE Tunnel
NOTE
l A TE tunnel protection group enhances reliability of the primary tunnel through planning. Before
configuring a TE tunnel protection group, plan the network. To ensure better performance of the
protection tunnel, the protection tunnel must detour the links and nodes through which the primary
tunnel passes.
l Perform the operation of 5.8.13 Configuring Static BFD for CR-LSPs or 5.8.14 Configuring
Dynamic BFD for CR-LSPs to implement fast switching at the millisecond level.
Configuration Process
Except that configuring the protection switching trigger mechanism is optional, other
configurations are mandatory.
Context
A configured protection tunnel can be bound to a working tunnel to form a tunnel protection
group. If the working tunnel fails, traffic switches to the protection tunnel, improving tunnel
reliability.
When creating a tunnel protection group, you can set the switchback delay and the switchback
mode. The switchback modes are the revertive and non-revertive modes. You can set the
switchback delay only when the revertive mode is used.
NOTE
You can also perform the preceding steps to modify a protection tunnel group.
Perform the following configurations on the ingress node of the primary MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
l Non-revertive mode means that traffic does not switch back to a working tunnel even
though a working tunnel recovers.
l Revertive mode means that traffic can switch back to a working tunnel after the working
tunnel recovers.
By default, the tunnel protection group works in revertive mode.
l Wait-to-restore (WTR) time is the time that elapses before traffic is switched back. The
WTR time ranges from 0 to 30 minutes, and the default WTR time is 12 minutes. The wtr-
time parameter specifies a multiplier of 30 seconds:
WTR time = 30 seconds x wtr-time
NOTE
If a tunnel protection group contains N working tunnels, perform Step 2 and Step 3 on each of
the N tunnel interfaces, specifying the corresponding interface-number.
Step 4 Run:
mpls te commit
----End
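The binding steps above can be consolidated into a minimal sketch. The tunnel numbers and the exact syntax of the mpls te protection tunnel command are assumptions for illustration; verify them against the command reference for your device.

```text
# On the ingress node: bind protection tunnel 20 to working tunnel 10
# (tunnel numbers and command syntax are assumptions)
system-view
 interface tunnel 10
  # Revertive mode with wtr-time 10, that is, WTR = 30 s x 10 = 300 s
  mpls te protection tunnel 20 mode revertive wtr 10
  mpls te commit
```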
Context
After configuring a tunnel protection group, you can configure a protection switching trigger
mechanism to force traffic to switch to the primary LSP or the backup LSP.
Alternatively, you can perform a switchover manually.
Pay attention to the protection switching mechanism before configuring the protection
switching trigger mechanism.
The device performs protection switching based on the rules listed in Table 5-27. In
this table, a rule in an upper row takes priority over a rule in a lower row.
Perform the following configurations on the ingress node of the primary MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
Step 2 Run:
interface tunnel interface-number
Step 3 Select one of the following protection switching trigger methods as required:
l To forcibly switch traffic from the working tunnel to the protection tunnel, run:
mpls te protect-switch force
Step 4 Run:
mpls te commit
----End
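As a consolidated sketch of the steps above (the tunnel number is an assumption), a forced switchover can be configured as follows:

```text
system-view
 interface tunnel 10
  # Forcibly switch traffic from the working tunnel to the protection tunnel
  mpls te protect-switch force
  mpls te commit
```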
Prerequisites
All configurations of a tunnel protection group are complete.
Procedure
Step 1 Run the display mpls te protection tunnel { all | tunnel-id | interface tunnel interface-
number } [ verbose ] command to check information about a tunnel protection group.
Step 2 Run the display mpls te protection binding protect-tunnel { tunnel-id | interface tunnel
interface-number } command to check the binding between the working and protection
tunnels.
----End
Pre-configuration Tasks
Before configuring dynamic BFD for RSVP, complete one of the following tasks:
l 5.8.2 Configuring a Dynamic MPLS TE Tunnel
l 5.8.8 Configuring Manual TE FRR
l 5.8.9 Configuring Auto TE FRR
Configuration Process
Except that adjusting BFD parameters is optional, other configurations are mandatory.
Context
To configure dynamic BFD for RSVP, you must enable BFD on both ends of RSVP
neighbors.
Perform the following configurations on the two RSVP neighboring nodes between which a
Layer 2 device exists.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bfd
----End
Context
BFD for RSVP can be enabled in either of the following ways:
l Enabling BFD for RSVP Globally
Enable BFD for RSVP globally when a large number of RSVP-enabled interfaces of the
local node need to be enabled with BFD for RSVP.
l Enabling BFD for RSVP on the RSVP Interface
Enable BFD for RSVP on the RSVP interface when a small number of RSVP-enabled
interfaces of the local node need to be enabled with BFD for RSVP.
Perform the following configurations on the two RSVP neighboring nodes between which a
Layer 2 device exists.
Procedure
l Enable BFD for RSVP globally.
a. Run:
system-view
l Enable BFD for RSVP on the RSVP interface.
a. Run:
system-view
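The two enabling methods can be sketched as follows. The command names mpls rsvp-te bfd all-interfaces enable and mpls rsvp-te bfd enable, and the interface name, are assumptions; verify them against the command reference for your device.

```text
# Method 1: enable BFD for RSVP globally, in the MPLS view
system-view
 mpls
  mpls rsvp-te bfd all-interfaces enable

# Method 2: enable BFD for RSVP on a single RSVP interface
system-view
 interface gigabitethernet 0/0/1
  mpls rsvp-te bfd enable
```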
Context
BFD parameters are adjusted on the ingress node of a TE tunnel using either of the following
modes:
l Adjusting Global BFD Parameters
Adjust global BFD parameters when a large number of RSVP-enabled interfaces of the
local node use the same BFD parameters.
l Adjusting BFD Parameters on an RSVP Interface
Adjust BFD parameters on an RSVP interface when certain RSVP-enabled
interfaces of the local node need to use BFD parameters different from the global BFD
parameters.
Perform the following configurations on the two RSVP neighboring nodes between which a
Layer 2 device exists.
Procedure
l Adjust global BFD parameters.
a. Run:
system-view
Prerequisites
The configurations of dynamic BFD for RSVP are complete.
Procedure
l Run the display mpls rsvp-te bfd session { all | interface interface-type interface-
number | peer ip-address } [ verbose ] command to check information about the BFD
for RSVP session.
l Run the display mpls rsvp-te command to check the RSVP-TE configuration.
l Run the display mpls rsvp-te interface [ interface-type interface-number ] command to
check the RSVP-TE configuration on the interface.
l Run the display mpls rsvp-te peer [ interface interface-type interface-number ]
command to check information about the RSVP neighbor.
l Run the display mpls rsvp-te statistics { global | interface [ interface-type interface-
number ] } command to check RSVP-TE statistics.
----End
Pre-configuration Tasks
Before configuring static BFD for CR-LSPs, complete one of the following tasks:
l 5.8.1 Configuring a Static MPLS TE Tunnel
l 5.8.2 Configuring a Dynamic MPLS TE Tunnel
l 5.8.7 Configuring CR-LSP Backup
l 5.8.11 Configuring a Tunnel Protection Group
Configuration Process
The following configurations are mandatory.
Context
To configure static BFD for CR-LSP, you must enable BFD globally on the ingress node and
the egress node of a tunnel.
Perform the following configurations on the ingress and egress nodes of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bfd
----End
Context
The BFD parameters configured on the ingress node include the local and remote
discriminators, local intervals at which BFD packets are sent and received, and BFD detection
multiplier, which determine the establishment of a BFD session.
Procedure
Step 1 Run:
system-view
BFD is configured to detect the primary or backup CR-LSP bound to a specified tunnel.
The parameter backup means that backup CR-LSPs are to be checked.
Step 3 Run:
discriminator local discr-value
l Actual local sending interval = MAX {200 ms, 600 ms} = 600 ms; Actual local
receiving interval = MAX {100 ms, 300 ms} = 300 ms; Actual local detection interval is
300 ms x 5 = 1500 ms.
l Actual remote sending interval = MAX {100 ms, 300 ms} = 300 ms; Actual remote
receiving interval = MAX {200 ms, 600 ms} = 600 ms; Actual remote detection interval
is 600 ms x 4 = 2400 ms.
Step 8 Run:
process-pst
The system is enabled to modify the port status table (PST) when the BFD session status
changes.
When the BFD status changes, BFD notifies the application of the change, triggering a fast
switchover between the primary and backup CR-LSPs.
Step 9 Run:
notify neighbor-down
A BFD session is configured to notify the upper layer protocol when the BFD session detects
a neighbor Down event.
In most cases, when you use a BFD session to detect link faults, the BFD session notifies the
upper layer protocol of a link fault in the following scenarios:
l When the BFD detection time expires, the BFD session notifies the upper layer protocol.
BFD sessions must be configured on both ends. If the BFD session on the local end does
not receive any BFD packets from the remote end within the detection time, the BFD
session on the local end concludes that the link fails and notifies the upper layer protocol
of the link fault.
l When a BFD session detects a neighbor Down event, the BFD session notifies the upper
layer protocol. If the BFD session on the local end detects a neighbor Down event within
the detection time, the BFD session on the local end directly notifies the upper layer
protocol of the neighbor Down event.
When you use a BFD session to detect faults on an LSP, you need only be concerned about
whether a fault occurs on the link from the local end to remote end. In this situation, run the
notify neighbor-down command to configure the BFD session to notify the upper layer
protocol only when the BFD session detects a neighbor Down event. This configuration
prevents the BFD session from notifying the upper layer protocol when the BFD detection
time expires and ensures that services are not interrupted.
Step 10 Run:
commit
----End
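A minimal end-to-end sketch of the ingress configuration may look as follows. The session name, tunnel number, discriminators, and interval values are assumptions for illustration; the interval-tuning commands correspond to the optional steps not shown in this excerpt.

```text
# On the ingress node: bind a static BFD session to the primary CR-LSP
system-view
 bfd lsp2egress bind mpls-te interface tunnel 0/0/1 te-lsp
  discriminator local 100
  discriminator remote 200
  min-tx-interval 200      # optional interval tuning (values assumed)
  min-rx-interval 300
  detect-multiplier 4
  process-pst              # allow BFD status changes to update the PST
  notify neighbor-down     # report only neighbor Down events
  commit
```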
Context
The BFD parameters configured on the egress node include the local and remote
discriminators, local intervals at which BFD packets are sent and received, and BFD detection
multiplier, which determine the establishment of a BFD session.
Procedure
Step 1 Run:
system-view
NOTE
When an IP link is used as the reverse tunnel, you do not need to perform steps 8 and 9.
Step 3 Run:
discriminator local discr-value
Actual local sending interval = MAX { Configured local sending interval, Configured remote
receiving interval }
Actual local receiving interval = MAX { Configured remote sending interval, Configured
local receiving interval }
Actual local detection interval = Actual local receiving interval x Configured remote detection
multiplier
For example:
l The local sending and receiving intervals are set to 200 ms and 300 ms respectively and
the detection multiplier is set to 4.
l The remote sending and receiving intervals are set to 100 ms and 600 ms respectively
and the detection multiplier is set to 5.
Then,
l Actual local sending interval = MAX {200 ms, 600 ms} = 600 ms; Actual local
receiving interval = MAX {100 ms, 300 ms} = 300 ms; Actual local detection interval is
300 ms x 5 = 1500 ms.
l Actual remote sending interval = MAX {100 ms, 300 ms} = 300 ms; Actual remote
receiving interval = MAX {200 ms, 600 ms} = 600 ms; Actual remote detection interval
is 600 ms x 4 = 2400 ms.
Step 8 (Optional) Run:
process-pst
The system is enabled to modify the port status table (PST) when the BFD session status
changes.
If an LSP or a TE tunnel is used as a reverse tunnel to notify the ingress node of a fault, you
can run this command to allow the reverse tunnel to switch traffic if the BFD session goes
Down. If a single-hop IP link is used as the reverse tunnel, this command can also be
configured, because the process-pst command can be configured only for BFD single-link detection.
Step 9 Run:
notify neighbor-down
A BFD session is configured to notify the upper layer protocol when the BFD session detects
a neighbor Down event.
In most cases, when you use a BFD session to detect link faults, the BFD session notifies the
upper layer protocol of a link fault in the following scenarios:
l When the BFD detection time expires, the BFD session notifies the upper layer protocol.
BFD sessions must be configured on both ends. If the BFD session on the local end does
not receive any BFD packets from the remote end within the detection time, the BFD
session on the local end concludes that the link fails and notifies the upper layer protocol
of the link fault.
l When a BFD session detects a neighbor Down event, the BFD session notifies the upper
layer protocol. If the BFD session on the local end detects a neighbor Down event within
the detection time, the BFD session on the local end directly notifies the upper layer
protocol of the neighbor Down event.
When you use a BFD session to detect faults on an LSP, you need only be concerned about
whether a fault occurs on the link from the local end to remote end. In this situation, run the
notify neighbor-down command to configure the BFD session to notify the upper layer
protocol only when the BFD session detects a neighbor Down event. This configuration
prevents the BFD session from notifying the upper layer protocol when the BFD detection
time expires and ensures that services are not interrupted.
Step 10 Run:
commit
----End
Prerequisites
The configurations of static BFD for CR-LSPs are complete.
Procedure
l Run the display bfd configuration mpls-te interface tunnel interface-number te-lsp
[ verbose ] command to check BFD configurations on the ingress.
l Run the following commands to check BFD configurations on the egress:
Run the display bfd configuration all [ for-ip | for-lsp | for-te ] [ verbose ]
command to check all BFD configurations.
Run the display bfd configuration static [ for-ip | for-lsp | for-te | name cfg-
name ] [ verbose ] command to check the static BFD configurations.
Run the display bfd configuration peer-ip peer-ip [ vpn-instance vpn-instance-
name ] [ verbose ] command to check the configurations of BFD with the reverse
path being an IP link.
Run the display bfd configuration static-lsp lsp-name [ verbose ] command to
check the configurations of BFD with the reverse path being a static LSP.
Run the display bfd configuration ldp-lsp peer-ip peer-ip nexthop nexthop-
address [ interface interface-type interface-number ] [ verbose ] command to
check the configurations of BFD with the backward channel being an LDP LSP.
Run the display bfd configuration mpls-te interface tunnel interface-number te-
lsp [ verbose ] command to check the configurations of BFD with the backward
channel being a CR-LSP.
Run the display bfd configuration mpls-te interface tunnel interface-number
[ verbose ] command to check the configurations of BFD with the backward
channel being a TE tunnel.
l Run the display bfd session mpls-te interface tunnel interface-number te-lsp
[ verbose ] command to check BFD session configurations on the ingress.
l Run the following commands to check BFD session configurations on the egress:
Run the display bfd session all [ for-ip | for-lsp | for-te ] [ verbose ] command to
check all the BFD configurations.
Run the display bfd session static [ for-ip | for-lsp | for-te ] [ verbose ] command
to check the static BFD configurations.
Run the display bfd session peer-ip peer-ip [ vpn-instance vpn-instance-name ]
[ verbose ] command to check the configurations of BFD with the backward
channel being an IP link.
Run the display bfd session static-lsp lsp-name [ verbose ] command to check the
configurations of BFD with the backward channel being a static LSP.
Run the display bfd session ldp-lsp peer-ip peer-ip nexthop nexthop-address
[ interface interface-type interface-number ] [ verbose ] command to check the
configurations of BFD with the backward channel being an LDP LSP.
Run the display bfd session mpls-te interface tunnel interface-number te-lsp
[ verbose ] command to check the configurations of BFD with the backward
channel being a CR-LSP.
Run the display bfd session mpls-te interface tunnel interface-number [ verbose ]
command to check the configurations of BFD with the backward channel being a
TE tunnel.
l Run the following command to check BFD statistics:
Run the display bfd statistics session all [ for-ip | for-lsp | for-te ] command to
check all BFD session statistics.
Run the display bfd statistics session peer-ip peer-ip [ vpn-instance vpn-instance-
name ] command to check statistics about the BFD session that detects faults in the
IP link.
Run the display bfd statistics session static-lsp lsp-name command to check
statistics about the BFD session that detects faults in the static LSP.
Run the display bfd statistics session ldp-lsp peer-ip peer-ip nexthop nexthop-
address [ interface interface-type interface-number ] command to check statistics
of the BFD session that detects faults in the LDP LSP.
Run the display bfd statistics session mpls-te interface tunnel interface-number
te-lsp command to check statistics about the BFD session that detects faults in the
CR-LSP.
Run the display bfd statistics session mpls-te interface tunnel interface-number
command to check statistics on BFD sessions for TE tunnels.
----End
Pre-configuration Tasks
Before configuring dynamic BFD for CR-LSPs, complete one of the following tasks:
l 5.8.1 Configuring a Static MPLS TE Tunnel
l 5.8.2 Configuring a Dynamic MPLS TE Tunnel
l 5.8.7 Configuring CR-LSP Backup
l 5.8.11 Configuring a Tunnel Protection Group
Configuration Process
Except that adjusting BFD parameters is optional, other configurations are mandatory.
Context
To configure dynamic BFD for CR-LSP, enable BFD globally on the ingress node and the
egress node of a tunnel.
Perform the following configurations on the ingress and egress nodes of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bfd
----End
Context
Enabling the capability of dynamically creating BFD sessions on a TE tunnel can be
implemented in either of the following methods:
Procedure
l Enable MPLS TE BFD globally.
a. Run:
system-view
After this command is run in the MPLS view, dynamic BFD for TE is enabled on
all tunnel interfaces, excluding the interfaces on which dynamic BFD for TE is
blocked.
d. (Optional) Block the capability of dynamically creating BFD sessions for TE on the
tunnel interfaces of the TE tunnels that do not need dynamic BFD for TE.
i. Run:
quit
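The global enabling and per-tunnel blocking described above can be sketched as follows. The command names mpls te bfd enable and mpls te bfd block, and the tunnel number, are assumptions; verify them against the command reference for your device.

```text
# Enable dynamic BFD for TE on all tunnel interfaces, then block it on one tunnel
system-view
 mpls
  mpls te bfd enable       # enables dynamic BFD for TE globally
  quit
 interface tunnel 0/0/2    # a tunnel that does not need dynamic BFD for TE
  mpls te bfd block
  mpls te commit
```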
Context
On a unidirectional LSP, creating a BFD session on the active role (the ingress node) triggers the
sending of LSP ping request messages to the passive role (the egress node). Only after the passive
role receives the ping packets can a BFD session be automatically set up.
Perform the following configurations on the egress node of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bfd
Step 3 Run:
mpls-passive
After this command is run, a BFD session can be created only after the egress receives an LSP
Ping request containing a BFD TLV from the ingress.
----End
Context
BFD parameters are adjusted on the ingress node of a TE tunnel using either of the following
modes:
l Adjusting Global BFD Parameters when a large number of TE tunnels on the ingress
node use the same BFD parameters
l Adjusting BFD Parameters on an Interface when certain TE tunnels on the ingress
node need to use BFD parameters different from global BFD parameters
Actual local sending interval = MAX { Configured local sending interval, Configured remote
receiving interval }
Actual local receiving interval = MAX { Configured remote sending interval, Configured
local receiving interval }
Actual local detection interval = Actual local receiving interval x Configured remote detection
multiplier
On the egress node of the TE tunnel enabled with the capability of passively creating BFD
sessions, the default values of the receiving interval, sending interval and detection multiplier
cannot be adjusted. The default values of these three parameters are the minimum
configurable values on the egress node. Therefore, the BFD detection interval on the ingress
and that on the egress node of a CR-LSP are as follows:
l Actual detection interval on the ingress = Configured receiving interval on the ingress
node x 3
l Actual detection interval on the egress = Configured sending interval on the ingress x
Configured detection multiplier on the ingress node
Procedure
l Adjust global BFD parameters.
a. Run:
system-view
----End
Prerequisites
The configurations of dynamic BFD for CR-LSPs are complete.
Procedure
l Run the display bfd configuration dynamic [ verbose ] command to check the
configuration of dynamic BFD on the ingress.
l Run the display bfd configuration passive-dynamic [ peer-ip peer-ip remote-
discriminator discriminator ] [ verbose ] command to check the configuration of
dynamic BFD on the egress.
l Run the display bfd session dynamic [ verbose ] command to check information about
the BFD session on the ingress.
Pre-configuration Tasks
Before configuring static BFD for TE tunnels, complete one of the following tasks:
l 5.8.1 Configuring a Static MPLS TE Tunnel
l 5.8.2 Configuring a Dynamic MPLS TE Tunnel
l 5.8.11 Configuring a Tunnel Protection Group
Configuration Process
The following configurations are mandatory.
Context
To configure static BFD for TE, enable BFD globally on the ingress and egress nodes of a
tunnel.
Perform the following configurations on the ingress and egress nodes of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
----End
Context
The BFD parameters configured on the ingress node include the local and remote
discriminators, local intervals at which BFD packets are sent and received, and BFD detection
multiplier, which determine the establishment of a BFD session.
Procedure
Step 1 Run:
system-view
Step 2 Run:
bfd cfg-name bind mpls-te interface tunnel interface-number
NOTE
If the status of the tunnel to be checked is Down, the BFD session cannot be set up.
Step 3 Run:
discriminator local discr-value
Step 4 Run:
discriminator remote discr-value
Actual local sending interval = MAX { Configured local sending interval, Configured remote
receiving interval }
Actual local receiving interval = MAX { Configured remote sending interval, Configured
local receiving interval }
Actual local detection interval = Actual local receiving interval x Configured remote detection
multiplier
For example:
l The local sending and receiving intervals are set to 200 ms and 300 ms respectively and
the detection multiplier is set to 4.
l The remote sending and receiving intervals are set to 100 ms and 600 ms respectively
and the detection multiplier is set to 5.
Then,
l Actual local sending interval = MAX {200 ms, 600 ms} = 600 ms; Actual local
receiving interval = MAX {100 ms, 300 ms} = 300 ms; Actual local detection interval is
300 ms x 5 = 1500 ms.
l Actual remote sending interval = MAX {100 ms, 300 ms} = 300 ms; Actual remote
receiving interval = MAX {200 ms, 600 ms} = 600 ms; Actual remote detection interval
is 600 ms x 4 = 2400 ms.
Step 8 Run:
process-pst
The system is enabled to modify the port status table (PST) when the BFD session status
changes.
When the BFD status changes, BFD notifies the application of the change, triggering a fast
switchover between TE tunnels.
Step 9 Run:
notify neighbor-down
A BFD session is configured to notify the upper layer protocol when the BFD session detects
a neighbor Down event.
In most cases, when you use a BFD session to detect link faults, the BFD session notifies the
upper layer protocol of a link fault in the following scenarios:
l When the BFD detection time expires, the BFD session notifies the upper layer protocol.
BFD sessions must be configured on both ends. If the BFD session on the local end does
not receive any BFD packets from the remote end within the detection time, the BFD
session on the local end concludes that the link fails and notifies the upper layer protocol
of the link fault.
l When a BFD session detects a neighbor Down event, the BFD session notifies the upper
layer protocol. If the BFD session on the local end detects a neighbor Down event within
the detection time, the BFD session on the local end directly notifies the upper layer
protocol of the neighbor Down event.
When you use a BFD session to detect faults on an LSP, you need only be concerned about
whether a fault occurs on the link from the local end to remote end. In this situation, run the
notify neighbor-down command to configure the BFD session to notify the upper layer
protocol only when the BFD session detects a neighbor Down event. This configuration
prevents the BFD session from notifying the upper layer protocol when the BFD detection
time expires and ensures that services are not interrupted.
Step 10 Run:
commit
----End
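Compared with static BFD for CR-LSPs, the session here is bound to the TE tunnel as a whole, that is, without the te-lsp keyword. A minimal ingress sketch (the session name, tunnel number, and discriminators are assumptions):

```text
system-view
 bfd te2egress bind mpls-te interface tunnel 0/0/1   # no te-lsp keyword
  discriminator local 300
  discriminator remote 400
  process-pst
  notify neighbor-down
  commit
```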
Context
The BFD parameters configured on the egress node include the local and remote
discriminators, local intervals at which BFD packets are sent and received, and BFD detection
multiplier, which determine the establishment of a BFD session.
Perform the following configurations on the egress node of an MPLS TE tunnel.
Procedure
Step 1 Run:
system-view
NOTE
When an IP link is used as the reverse tunnel, you do not need to perform steps 8 and 9.
Step 3 Run:
discriminator local discr-value
Actual local sending interval = MAX { Configured local sending interval, Configured remote
receiving interval }
Actual local receiving interval = MAX { Configured remote sending interval, Configured
local receiving interval }
Actual local detection interval = Actual local receiving interval x Configured remote detection
multiplier
For example:
l The local sending and receiving intervals are set to 200 ms and 300 ms respectively and
the detection multiplier is set to 4.
l The remote sending and receiving intervals are set to 100 ms and 600 ms respectively
and the detection multiplier is set to 5.
Then,
l Actual local sending interval = MAX {200 ms, 600 ms} = 600 ms; Actual local
receiving interval = MAX {100 ms, 300 ms} = 300 ms; Actual local detection interval is
300 ms x 5 = 1500 ms.
l Actual remote sending interval = MAX {100 ms, 300 ms} = 300 ms; Actual remote
receiving interval = MAX {200 ms, 600 ms} = 600 ms; Actual remote detection interval
is 600 ms x 4 = 2400 ms.
The system is enabled to modify the port status table (PST) when the BFD session status
changes.
If an LSP or a TE tunnel is used as a reverse tunnel to notify the ingress node of a fault, you
can run this command to allow the reverse tunnel to switch traffic if the BFD session goes
Down. If a single-hop IP link is used as the reverse tunnel, this command can also be
configured, because the process-pst command can be configured only for BFD single-link detection.
Step 9 Run:
notify neighbor-down
A BFD session is configured to notify the upper layer protocol when the BFD session detects
a neighbor Down event.
In most cases, when you use a BFD session to detect link faults, the BFD session notifies the
upper layer protocol of a link fault in the following scenarios:
l When the BFD detection time expires, the BFD session notifies the upper layer protocol.
BFD sessions must be configured on both ends. If the BFD session on the local end does
not receive any BFD packets from the remote end within the detection time, the BFD
session on the local end concludes that the link fails and notifies the upper layer protocol
of the link fault.
l When a BFD session detects a neighbor Down event, the BFD session notifies the upper
layer protocol. If the BFD session on the local end detects a neighbor Down event within
the detection time, the BFD session on the local end directly notifies the upper layer
protocol of the neighbor Down event.
When you use a BFD session to detect faults on an LSP, you need only be concerned about
whether a fault occurs on the link from the local end to remote end. In this situation, run the
notify neighbor-down command to configure the BFD session to notify the upper layer
protocol only when the BFD session detects a neighbor Down event. This configuration
prevents the BFD session from notifying the upper layer protocol when the BFD detection
time expires and ensures that services are not interrupted.
Step 10 Run:
commit
----End
Prerequisites
The configurations of static BFD for TE tunnels are complete.
Procedure
l Run the display bfd configuration mpls-te interface tunnel interface-number
[ verbose ] command to check BFD configurations on the ingress.
l Run the following commands to check BFD configurations on the egress:
Run the display bfd configuration all [ for-ip | for-lsp | for-te ] [ verbose ]
command to check all BFD configurations.
Run the display bfd configuration static [ for-ip | for-lsp | for-te | name cfg-
name ] [ verbose ] command to check the static BFD configurations.
Run the display bfd configuration peer-ip peer-ip [ vpn-instance vpn-instance-
name ] [ verbose ] command to check the configurations of BFD with the reverse
path being an IP link.
Run the display bfd configuration static-lsp lsp-name [ verbose ] command to
check the configurations of BFD with the reverse path being a static LSP.
Run the display bfd configuration ldp-lsp peer-ip peer-ip nexthop nexthop-
address [ interface interface-type interface-number ] [ verbose ] command to
check the configurations of BFD with the backward channel being an LDP LSP.
Run the display bfd configuration mpls-te interface tunnel interface-number te-
lsp [ verbose ] command to check the configurations of BFD with the backward
channel being a CR-LSP.
Run the display bfd session all [ for-ip | for-lsp | for-te ] [ verbose ] command to
check all the BFD configurations.
Run the display bfd session static [ for-ip | for-lsp | for-te ] [ verbose ] command
to check the static BFD configurations.
Run the display bfd session peer-ip peer-ip [ vpn-instance vpn-instance-name ]
[ verbose ] command to check the configurations of BFD with the backward
channel being an IP link.
Run the display bfd session static-lsp lsp-name [ verbose ] command to check the
configurations of BFD with the backward channel being a static LSP.
Run the display bfd session ldp-lsp peer-ip peer-ip nexthop nexthop-address
[ interface interface-type interface-number ] [ verbose ] command to check the
configurations of BFD with the backward channel being an LDP LSP.
Run the display bfd session mpls-te interface tunnel interface-number te-lsp
[ verbose ] command to check the configurations of BFD with the backward
channel being a CR-LSP.
Run the display bfd session mpls-te interface tunnel interface-number [ verbose ]
command to check the configurations of BFD with the backward channel being a
TE tunnel.
l Run the following command to check BFD statistics:
Run the display bfd statistics session all [ for-ip | for-lsp | for-te ] command to
check all BFD session statistics.
Run the display bfd statistics session peer-ip peer-ip [ vpn-instance vpn-instance-
name ] command to check statistics about the BFD session that detects faults in the
IP link.
Run the display bfd statistics session static-lsp lsp-name command to check
statistics about the BFD session that detects faults in the static LSP.
Run the display bfd statistics session ldp-lsp peer-ip peer-ip nexthop nexthop-
address [ interface interface-type interface-number ] command to check statistics
of the BFD session that detects faults in the LDP LSP.
Run the display bfd statistics session mpls-te interface tunnel interface-number
te-lsp command to check statistics about the BFD session that detects faults in the
CR-LSP.
Run the display bfd statistics session mpls-te interface tunnel interface-number
command to check statistics on BFD sessions for TE tunnels.
----End
Pre-configuration Tasks
Before configuring RSVP GR, complete the following tasks:
l 5.8.2 Configuring a Dynamic MPLS TE Tunnel
l Configuring IS-IS GR or OSPF GR on each LSR
NOTE
If the device supports stacking, the stack device can also function as the GR Restarter. If the device
does not support stacking, the device can only function as the GR Helper.
Configuration Process
Enabling the RSVP GR support function, modifying the basic restart time, and configuring Hello
sessions between RSVP GR nodes are optional.
Context
By configuring the RSVP Hello extension, you can enable a device to quickly check
reachability between RSVP nodes.
Perform the following configurations on a GR node and its neighboring nodes.
Procedure
Step 1 Run:
system-view
Step 7 Run:
mpls rsvp-te hello
By default, the RSVP Hello extension is disabled on RSVP-enabled interfaces even after it
has been enabled globally.
----End
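A minimal sketch of the Hello-extension configuration on one node (the interface name is an assumption):

```text
system-view
 mpls
  mpls rsvp-te hello       # enable the RSVP Hello extension globally
  quit
 interface gigabitethernet 0/0/1
  mpls rsvp-te hello       # also enable it on the RSVP-enabled interface
```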
Context
RSVP GR prevents service interruptions during an active/standby switchover and allows a
dynamic CR-LSP to be restored.
Procedure
Step 1 Run:
system-view
Step 2 Run:
mpls
Step 3 Run:
mpls rsvp-te hello full-gr
The RSVP GR function and the RSVP GR helper function are enabled.
By default, the RSVP GR function and the RSVP GR helper function are disabled.
NOTE
If the device supports stacking, the stack device can also function as the GR Restarter. If the device does not
support stacking, the device can only function as the GR Helper.
----End
Context
A device enabled with the RSVP GR Helper function supports the GR capability of its neighbor.
RSVP GR takes effect automatically on a neighbor that is itself enabled with RSVP GR.
Therefore, if the GR node's neighbor is also a GR node, do not perform the
following configurations; if the GR node's neighbor is not a GR node, perform the following
configurations.
Procedure
Step 1 Run:
system-view
----End
Context
If TE FRR is deployed, a Hello session is required between a PLR and an MP.
Perform the following configurations on the PLR and MP of the bypass CR-LSP.
Procedure
Step 1 Run:
system-view
----End
Context
After an active/standby switchover starts, an RSVP GR node has an RSVP smoothing period,
during which the data plane continues forwarding data if the control plane is not restored.
After RSVP smoothing is completed, a restart timer is started.
Restart timer value = Basic time + Number of ingress LSPs x 60 ms + Number of non-
ingress LSPs x 15 ms
In this formula, the basic time is 90 seconds by default and can be changed using a command,
and the numbers of ingress and non-ingress LSPs are counted on the local node.
Procedure
Step 1 Run:
system-view
Step 2 Run:
mpls
Step 3 Run:
mpls rsvp-te hello basic-restart-time basic-restart-time
----End
Prerequisites
The configurations of RSVP GR are complete.
Procedure
l Run the display mpls rsvp-te graceful-restart command to check the status of the local
RSVP GR.
l Run the display mpls rsvp-te graceful-restart peer [ { interface interface-type
interface-number | node-id } [ ip-address ] ] command to check the status of RSVP GR
on a neighbor.
----End
Procedure
l Run the ping lsp [ -a source-ip | -c count | -exp exp-value | -h ttl-value | -m interval | -r
reply-mode | -s packet-size | -t time-out | -v ] * te tunnel interface-number [ hot-
standby ] [ draft6 ] command to check the connectivity of the TE tunnel between the
ingress and egress.
----End
Procedure
After configuring MPLS TE, you can use NQA to check the connectivity and jitter of the TE
tunnel. For detailed configurations, see NQA Configuration in the S2750&S5700&S6700
Series Ethernet Switches Configuration Guide - Network Management.
Context
To facilitate operation and maintenance and to monitor the running status of the MPLS
network, configure the MPLS TE trap function so that the device can notify the NMS of
RSVP and MPLS TE status changes and of dynamic label usage.
Procedure
l Configure the RSVP trap function.
a. Run:
system-view
NOTE
This command specifies only the alarm thresholds for dynamic label usage. The system
generates the hwMplsDynamicLabelThresholdExceed and
hwMplsDynamicLabelThresholdExceedClear alarms only when the snmp-agent trap enable
feature-name mpls_lspm trap-name { hwMplsDynamicLabelThresholdExceed |
hwMplsDynamicLabelThresholdExceedClear } command is used and the usage of dynamic
labels reaches the threshold.
f. Run:
mpls rsvp-lsp-number threshold-alarm upper-limit upper-limit-value lower-
limit lower-limit-value
The upper and lower thresholds of alarms for RSVP LSP usage are configured.
The default upper limit of an alarm for RSVP LSP usage is 80%. The default lower
limit of a clear alarm for RSVP LSP usage is 75%. Using the default upper limit
and lower limit is recommended.
NOTE
l This command configures the alarm threshold for RSVP LSP usage. The alarm that the
number of RSVP LSPs exceeded the upper threshold is generated only when the
command snmp-agent trap enable feature-name mpls_lspm trap-name
hwmplslspthresholdexceed is configured, and the actual RSVP LSP usage reaches the
upper limit of the alarm threshold. The alarm that the number of RSVP LSPs fell below
the lower threshold is generated only when the command snmp-agent trap enable
feature-name mpls_lspm trap-name hwmplslspthresholdexceedclear is configured,
and the actual RSVP LSP usage falls below the lower limit of the clear alarm threshold.
l After the snmp-agent trap enable feature-name mpls_lspm trap-name
{ hwmplslsptotalcountexceed | hwmplslsptotalcountexceedclear } command is run to
enable the LSP limit-crossing alarm and the LSP limit-crossing clear alarm, an alarm is
generated in the following situations:
l If the total number of RSVP LSPs exceeds the upper limit, a limit-crossing alarm
is generated.
l If the total number of RSVP LSPs falls below the upper limit, a limit-crossing
clear alarm is generated.
g. Run:
mpls total-crlsp-number threshold-alarm upper-limit upper-limit-value
lower-limit lower-limit-value
The upper and lower thresholds of alarms for total CR-LSP usage are configured.
The default upper limit of an alarm for total CR-LSP usage is 80%. The default
lower limit of a clear alarm for total CR-LSP usage is 75%. Using the default upper
limit and lower limit is recommended.
NOTE
l This command configures the alarm threshold for total CR-LSP usage. The alarm that
the number of total CR-LSPs exceeded the upper threshold is generated only when the
command snmp-agent trap enable feature-name mpls_lspm trap-name
hwmplslspthresholdexceed is configured, and the actual total CR-LSP usage reaches
the upper limit of the alarm threshold. The alarm that the number of total CR-LSPs fell
below the lower threshold is generated only when the command snmp-agent trap
enable feature-name mpls_lspm trap-name hwmplslspthresholdexceedclear is
configured, and the actual total CR-LSP usage falls below the lower limit of the clear
alarm threshold.
l After the snmp-agent trap enable feature-name mpls_lspm trap-name
{ hwmplslsptotalcountexceed | hwmplslsptotalcountexceedclear } command is run to
enable the LSP limit-crossing alarm and the LSP limit-crossing clear alarm, an alarm is
generated in the following situations:
l If the total number of CR-LSPs exceeds the upper limit, a limit-crossing alarm is
generated.
l If the total number of CR-LSPs falls below the upper limit, a limit-crossing clear
alarm is generated.
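The upper/lower threshold pair documented above implements hysteresis: an alarm fires once usage reaches the upper limit and clears only after usage falls below the lower limit, so small fluctuations around a single threshold do not flap alarms. A minimal sketch of that logic (class and method names are hypothetical, not a device API; defaults mirror the documented 80%/75% values):

```python
# Hysteresis alarm model: raise at/above the upper threshold,
# clear only below the lower threshold.

class ThresholdAlarm:
    def __init__(self, upper=80, lower=75):
        assert lower < upper, "clear threshold must sit below raise threshold"
        self.upper, self.lower = upper, lower
        self.raised = False

    def update(self, usage_percent):
        """Feed one usage sample; return 'raise', 'clear', or None."""
        if not self.raised and usage_percent >= self.upper:
            self.raised = True
            return "raise"
        if self.raised and usage_percent < self.lower:
            self.raised = False
            return "clear"
        return None

alarm = ThresholdAlarm()
events = [alarm.update(u) for u in (70, 81, 78, 74)]
print(events)  # [None, 'raise', None, 'clear'] - 78% keeps the alarm raised
```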
----End
Context
NOTE
Cleared statistics cannot be restored. Exercise caution when you use the command.
Procedure
l Run the reset mpls rsvp-te statistics { global | interface [ interface-type interface-
number ] } command in the user view to clear statistics about RSVP-TE.
l Run the reset mpls stale-interface [ interface-index ] command in the user view to
delete the information about MPLS interfaces in the Stale state.
----End
Context
To check TE information during routine maintenance, run the following display commands in
any view.
Procedure
l Run the display default-parameter mpls te management command to check default
parameters of MPLS TE management.
l Run the display mpls te tunnel statistics or display mpls lsp statistics command to
check tunnel statistics.
l Run the display mpls te tunnel-interface last-error [ tunnel-name ] command to check
information about tunnel faults.
l Run the display mpls te tunnel-interface failed command to check MPLS TE tunnels
that fail to be established or are being established.
l Run the display mpls te tunnel-interface traffic-state [ tunnel-name ] command to
check traffic on the tunnel interface of the local node.
l Run the display mpls rsvp-te statistics { global | interface [ interface-type interface-
number ] } command to check RSVP-TE statistics.
l Run the display mpls rsvp-te statistics fast-reroute command to check TE FRR
statistics.
----End
Context
To make the tunnel-related configuration take effect, run the mpls te commit command
in the tunnel interface view and then run the reset mpls te tunnel-interface command in the
user view.
NOTE
If the configuration is modified in the interface view of the TE tunnel but the mpls te commit command
is not configured, the system cannot execute the reset mpls te tunnel-interface tunnel command to re-
establish the tunnel.
Procedure
Step 1 Run the reset mpls te tunnel-interface tunnel interface-number command to reset the tunnel
interface.
----End
Context
NOTICE
Resetting the RSVP process results in the release and reestablishment of all RSVP CR-LSPs.
To reestablish all RSVP CR-LSPs or verify the operation process of RSVP, run the following
reset command in the user view.
Procedure
l Run the reset mpls rsvp-te command to reset the RSVP process.
----End
Context
In a scenario where auto TE FRR is used, you can run the following reset command to release
or re-establish bypass tunnels.
Procedure
l Run the reset mpls te auto-frr { lsp-id ingress-lsr-id tunnel-id | name bypass-tunnel-
name } command to delete or reset the auto FRR bypass tunnel.
----End
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface on each LSR and configure OSPF to ensure that
there are reachable routes between LSRs.
2. Configure an ID for each LSR and globally enable MPLS and MPLS TE on each LSR
and interface.
3. Create a tunnel interface on the ingress node and set the tunnel type to static CR-LSP.
4. Configure the static LSP bound to the tunnel; specify the next hop address and outgoing
label on the ingress node; specify the inbound interface, incoming label, next hop
address, and outgoing label on the transit node; specify the incoming label and inbound
interface on the egress node.
NOTE
l The value of the outgoing label of each node is the value of the incoming label of its next node.
l When running the static-cr-lsp ingress { tunnel-interface tunnel interface-number | tunnel-name }
destination destination-address { nexthop next-hop-address | outgoing-interface interface-type
interface-number } * out-label out-label command to configure the ingress node of a CR-LSP,
ensure that tunnel-name is the same as the tunnel name created by using the interface tunnel
interface-number command. tunnel-name is a case-sensitive character string without spaces. For
example, the name of the tunnel created by using the interface tunnel 1 command is Tunnel1. In
this case, the parameter of the ingress node of the static CR-LSP is Tunnel1; otherwise, the tunnel
cannot be created. There is no such limitation on the transit node and egress node.
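The first rule in the note above (each node's outgoing label must equal the next node's incoming label) can be verified mechanically before committing a static CR-LSP. A small illustrative check, with hop data modeled on this example's labels (the function and tuples are hypothetical, not a device API):

```python
# Verify a static CR-LSP label chain: each hop's out-label must equal
# the next hop's in-label. Hops are (in_label, out_label) pairs, with
# None where a label does not apply (no in-label on the ingress,
# no out-label on the egress).

def label_chain_ok(hops):
    """Return True if every adjacent hop pair has matching labels."""
    return all(a_out == b_in
               for (_, a_out), (b_in, _) in zip(hops, hops[1:]))

lsp = [(None, 20), (20, 30), (30, None)]   # ingress, transit, egress
bad = [(None, 20), (21, 30), (30, None)]   # transit in-label mismatch
print(label_chain_ok(lsp), label_chain_ok(bad))  # True False
```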
Procedure
Step 1 Configure an IP address and routing protocol for each interface.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100
[LSRA] interface vlanif 100
[LSRA-Vlanif100] ip address 172.1.1.1 255.255.255.0
[LSRA-Vlanif100] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[LSRA-GigabitEthernet0/0/1] quit
[LSRA] interface loopback 1
[LSRA-LoopBack1] ip address 1.1.1.9 255.255.255.255
[LSRA-LoopBack1] quit
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 172.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
# Configure IP addresses for interfaces of LSRB and LSRC and OSPF according to Figure
5-35. The configurations of LSRB and LSRC are similar to the configuration of LSRA, and
are not mentioned here.
After the configurations are complete, OSPF neighbor relationships can be set up between
LSRA, LSRB, and LSRC. Run the display ospf peer command. You can see that the
neighbor status is Full. Run the display ip routing-table command. You can see that the
LSRs have learned the routes to each other's Loopback1 interfaces.
Step 2 Configure basic MPLS functions and enable MPLS TE.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls
[LSRA-Vlanif100] mpls te
[LSRA-Vlanif100] quit
The configurations of LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here.
Step 3 Configure MPLS TE tunnels.
# On LSRA, create an MPLS TE tunnel from LSRA to LSRC.
[LSRA] interface tunnel 1
[LSRA-Tunnel1] ip address unnumbered interface loopback 1
[LSRA-Tunnel1] tunnel-protocol mpls te
[LSRA-Tunnel1] destination 3.3.3.9
[LSRA-Tunnel1] mpls te tunnel-id 100
[LSRA-Tunnel1] mpls te signal-protocol cr-static
[LSRA-Tunnel1] mpls te commit
[LSRA-Tunnel1] quit
Run the display mpls te tunnel command on each LSR to view the MPLS TE tunnel status.
The display on LSRA is used as an example.
[LSRA] display mpls te tunnel
------------------------------------------------------------------------------
Ingress LsrId Destination LSPID In/Out Label R Tunnel-name
------------------------------------------------------------------------------
1.1.1.9 3.3.3.9 1 --/20 I Tunnel1
- - - 130/-- E LSRC2LSRA
Run the display mpls lsp or display mpls static-cr-lsp command on each LSR to view the
static CR-LSP status.
The display on LSRA is used as an example.
[LSRA] display mpls lsp
----------------------------------------------------------------------
LSP Information: STATIC CRLSP
----------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
3.3.3.9/32 NULL/20 -/Vlanif100
-/- 130/NULL Vlanif100/-
When a static CR-LSP is used to establish an MPLS TE tunnel, the transit node and the egress
node forward packets according to the specified incoming and outgoing labels rather than a
FEC. Therefore, no FEC information is displayed on LSRB or LSRC.
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100
#
mpls lsr-id 1.1.1.9
mpls
mpls te
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te signal-protocol cr-static
mpls te tunnel-id 100
mpls te commit
#
ospf 1
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
#
static-cr-lsp ingress tunnel-interface Tunnel1 destination 3.3.3.9 nexthop
172.1.1.2 out-label 20
static-cr-lsp egress LSRC2LSRA incoming-interface Vlanif100 in-label 130
#
return
l Configuration file of LSRB
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
mpls te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
#
static-cr-lsp transit LSRA2LSRC incoming-interface Vlanif100 in-label 20
nexthop 172.2.1.2 out-label 30
static-cr-lsp transit LSRC2LSRA incoming-interface Vlanif200 in-label 120
nexthop 172.1.1.1 out-label 130
#
return
Configuration Roadmap
The configuration roadmap is as follows:
1. On the MPLS backbone network, MPLS LDP and MPLS TE tunnels can carry L2VPN
or L3VPN services. Configure an MPLS TE tunnel to ensure stable data transmission
upon frequent topology changes on the enterprise network.
2. Configure IS-IS to ensure that there are reachable routes between devices on the MPLS
backbone network.
3. Enable MPLS TE and RSVP-TE on each node so that an MPLS TE tunnel can be set up.
4. Enable IS-IS TE and change the cost type so that TE information can be advertised to
other nodes through IS-IS.
5. Create a tunnel interface on the ingress node, configure tunnel attributes, and enable
MPLS TE CSPF to create a dynamic MPLS TE tunnel.
Procedure
Step 1 Assign IP addresses to interfaces.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100
[LSRA] interface vlanif 100
[LSRA-Vlanif100] ip address 172.1.1.1 255.255.255.0
[LSRA-Vlanif100] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[LSRA-GigabitEthernet0/0/1] quit
# Configure IP addresses for interfaces of LSRB and LSRC according to Figure 5-36. The
configurations of LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here.
Step 2 Configure IS-IS to advertise routes.
# Configure LSRA.
[LSRA] isis 1
[LSRA-isis-1] network-entity 00.0005.0000.0000.0001.00
[LSRA-isis-1] is-level level-2
[LSRA-isis-1] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] isis enable 1
[LSRA-Vlanif100] quit
[LSRA] interface loopback 1
[LSRA-LoopBack1] isis enable 1
[LSRA-LoopBack1] quit
# Configure LSRB.
[LSRB] isis 1
[LSRB-isis-1] network-entity 00.0005.0000.0000.0002.00
[LSRB-isis-1] is-level level-2
[LSRB-isis-1] quit
[LSRB] interface vlanif 100
[LSRB-Vlanif100] isis enable 1
[LSRB-Vlanif100] quit
[LSRB] interface vlanif 200
[LSRB-Vlanif200] isis enable 1
[LSRB-Vlanif200] quit
[LSRB] interface loopback 1
[LSRB-LoopBack1] isis enable 1
[LSRB-LoopBack1] quit
# Configure LSRC.
[LSRC] isis 1
[LSRC-isis-1] network-entity 00.0005.0000.0000.0003.00
[LSRC-isis-1] is-level level-2
[LSRC-isis-1] quit
[LSRC] interface vlanif 200
[LSRC-Vlanif200] isis enable 1
[LSRC-Vlanif200] quit
[LSRC] interface loopback 1
[LSRC-LoopBack1] isis enable 1
[LSRC-LoopBack1] quit
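The network-entity values above follow the IS-IS NET format: an area ID, then a 6-byte system ID written as three dotted groups, then a final selector byte that must be 00. A small illustrative helper (not part of any device software) splits a NET into those parts:

```python
# Split an IS-IS NET such as 00.0005.0000.0000.0001.00 into
# (area ID, system ID, selector). The last dotted group is the
# selector, the preceding three groups are the 6-byte system ID,
# and everything before that is the area ID.

def parse_net(net):
    parts = net.split(".")
    selector = parts[-1]                    # must be "00" for a NET
    system_id = ".".join(parts[-4:-1])      # three 2-byte groups
    area = ".".join(parts[:-4])
    return area, system_id, selector

print(parse_net("00.0005.0000.0000.0001.00"))
# ('00.0005', '0000.0000.0001', '00')
```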
After the configurations are complete, run the display ip routing-table command on each
LSR. You can see that the LSRs have learned the routes from each other. The display on
LSRA is used as an example.
[LSRA] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
Destinations : 8 Routes : 8
Step 3 Configure basic MPLS functions and enable MPLS TE and RSVP-TE.
Enable MPLS, MPLS TE, and RSVP-TE globally on each node and interfaces along the
tunnel.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] mpls rsvp-te
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls
[LSRA-Vlanif100] mpls te
[LSRA-Vlanif100] mpls rsvp-te
[LSRA-Vlanif100] quit
The configurations of LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here.
Step 4 Configure IS-IS TE.
# Configure LSRA.
[LSRA] isis 1
[LSRA-isis-1] cost-style wide
[LSRA-isis-1] traffic-eng level-2
[LSRA-isis-1] quit
The configurations of LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here.
Step 5 Configure an MPLS TE tunnel interface and enable MPLS TE CSPF.
# On the ingress node of the tunnel, create a tunnel interface, and set the IP address, tunnel
protocol, destination IP address, tunnel ID, and dynamic signaling protocol for the tunnel
interface. Then run the mpls te commit command to commit the configuration.
# Configure LSRA.
[LSRA] interface tunnel 1
[LSRA-Tunnel1] ip address unnumbered interface loopback 1
[LSRA-Tunnel1] tunnel-protocol mpls te
[LSRA-Tunnel1] destination 3.3.3.9
[LSRA-Tunnel1] mpls te tunnel-id 100
[LSRA-Tunnel1] mpls te signal-protocol rsvp-te
[LSRA-Tunnel1] mpls te commit
[LSRA-Tunnel1] quit
[LSRA] mpls
[LSRA-mpls] mpls te cspf
[LSRA-mpls] quit
Run the display mpls te tunnel-interface command on LSRA. You can view tunnel interface
information.
[LSRA] display mpls te tunnel-interface
----------------------------------------------------------------
Tunnel1
----------------------------------------------------------------
Tunnel State Desc : UP
Active LSP : Primary LSP
Session ID : 100
Ingress LSR ID : 1.1.1.9 Egress LSR ID: 3.3.3.9
Admin State : UP Oper State : UP
Primary LSP State : UP
Main LSP State : READY LSP ID : 3
Run the display mpls te tunnel verbose command on LSRA. You can view detailed
information about the tunnel.
[LSRA] display mpls te tunnel verbose
No : 1
Tunnel-Name : Tunnel1
Tunnel Interface Name : Tunnel1
TunnelIndex : 1 LSP Index : 2048
Session ID : 100 LSP ID : 3
LSR Role : Ingress LSP Type : Primary
Ingress LSR ID : 1.1.1.9
Egress LSR ID : 3.3.3.9
In-Interface : -
Out-Interface : Vlanif100
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
LspConstraint : -
ER-Hop Table Index : - AR-Hop Table Index: -
C-Hop Table Index : -
PrevTunnelIndexInSession: - NextTunnelIndexInSession: -
PSB Handle : 16388
Created Time : 2013-09-16 11:51:21+00:00
RSVP LSP Type : -
--------------------------------
DS-TE Information
--------------------------------
Bandwidth Reserved Flag : Unreserved
CT0 Bandwidth(Kbit/sec) : 0 CT1 Bandwidth(Kbit/sec): 0
CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0
CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0
CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0
Setup-Priority : 7 Hold-Priority : 7
--------------------------------
FRR Information
--------------------------------
Primary LSP Info
TE Attribute Flag : 0x3 Protected Flag : 0x0
Bypass In Use : Not Exists
Bypass Tunnel Id : -
BypassTunnel : -
Bypass LSP ID : - FrrNextHop : -
ReferAutoBypassHandle : -
FrrPrevTunnelTableIndex : - FrrNextTunnelTableIndex: -
Bypass Attribute(Not configured)
Setup Priority : - Hold Priority : -
HopLimit : - Bandwidth : -
IncludeAnyGroup : - ExcludeAnyGroup : -
IncludeAllGroup : -
Bypass Unbound Bandwidth Info(Kbit/sec)
CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: -
CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: -
CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: -
CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: -
--------------------------------
BFD Information
--------------------------------
NextSessionTunnelIndex : - PrevSessionTunnelIndex: -
NextLspId : - PrevLspId : -
Run the display mpls te cspf tedb all command on LSRA. You can view link information in
the TEDB.
[LSRA] display mpls te cspf tedb all
Maximum Nodes Supported: 512 Current Total Node Number: 3
Maximum Links Supported: 2048 Current Total Link Number: 4
Maximum SRLGs supported: 5120 Current Total SRLG Number: 0
ID Router-ID IGP Process-ID Area Link-Count
1 1.1.1.9 ISIS 1 Level-2 1
2 2.2.2.9 ISIS 1 Level-2 2
3 3.3.3.9 ISIS 1 Level-2 1
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0001.00
traffic-eng level-2
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 100
mpls te commit
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0002.00
traffic-eng level-2
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0003.00
traffic-eng level-2
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
return
Networking Requirements
As shown in Figure 5-37, an MPLS TE tunnel is set up between LSRA and LSRC. The
primary path of the tunnel is LSRA -> LSRB -> LSRC. When the primary CR-LSP fails,
traffic must be switched to a backup CR-LSP.
LSRA needs to set up multiple MPLS TE tunnels to meet service requirements. The network
administrator wants to simplify the MPLS TE tunnel configuration.
NOTE
STP must be disabled on the network. Otherwise, an interface may be blocked by STP.
Figure 5-37 shows the topology: LSRA (Loopback1 1.1.1.9/32), LSRB (Loopback1 2.2.2.9/32),
LSRC (Loopback1 3.3.3.9/32), LSRE (Loopback1 5.5.5.9/32), and LSRF are connected as
follows: LSRA-LSRB through VLANIF100 (172.1.1.1/24, 172.1.1.2/24), LSRB-LSRC through
VLANIF200 (172.2.1.1/24, 172.2.1.2/24), LSRA-LSRE through VLANIF400 (172.4.1.1/24,
172.4.1.2/24), LSRE-LSRC through VLANIF500 (172.5.1.1/24, 172.5.1.2/24), LSRA-LSRF
through VLANIF600 (172.6.1.1/24, 172.6.1.2/24), and LSRF-LSRC through VLANIF700
(172.7.1.1/24, 172.7.1.2/24). The primary CR-LSP is LSRA -> LSRB -> LSRC.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign IP addresses to interfaces and configure OSPF to ensure that public network
routes between the nodes are reachable.
2. Configure LSR IDs for the nodes, enable MPLS, MPLS TE, RSVP-TE, and CSPF on the
LSRs globally and on their interfaces, and enable OSPF TE on the LSRs.
3. Use CR-LSP attribute templates to simplify the configuration. Configure different
attribute templates for the primary CR-LSP, hot-standby CR-LSP, and ordinary backup
CR-LSP.
4. On the ingress node of the primary tunnel, create a tunnel interface, configure the tunnel
IP address, tunneling protocol, destination IP address, tunnel ID, and RSVP-TE signaling
protocol for the tunnel interface, and then apply the corresponding CR-LSP attribute
template to set up the primary CR-LSP.
5. Configure hot-standby and ordinary backup CR-LSPs on the ingress node of the primary
tunnel so that traffic can be switched to a backup CR-LSP when the primary CR-LSP
fails. Apply the corresponding CR-LSP attribute template to create each backup CR-
LSP.
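The three CR-LSP types in the roadmap form a simple preference order: the hot-standby CR-LSP is pre-established and is used first when the primary fails, while the ordinary backup CR-LSP is signaled on demand only if neither of the others is available. A minimal sketch of that selection logic (hypothetical names, not device code):

```python
# Simplified model of active CR-LSP selection among the primary,
# the pre-established hot-standby, and the on-demand ordinary backup.

def select_active_lsp(primary_up, hotstandby_up):
    if primary_up:
        return "primary"
    if hotstandby_up:
        return "hot-standby"      # pre-established, fast switchover
    return "ordinary-backup"      # signaled on demand, slower to come up

print(select_active_lsp(True, True))    # primary
print(select_active_lsp(False, True))   # hot-standby
print(select_active_lsp(False, False))  # ordinary-backup
```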
Procedure
Step 1 Assign IP addresses to interfaces and configure OSPF on the LSRs.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100 400 600
[LSRA] interface vlanif 100
[LSRA-Vlanif100] ip address 172.1.1.1 255.255.255.0
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 400
[LSRA-Vlanif400] ip address 172.4.1.1 255.255.255.0
[LSRA-Vlanif400] quit
[LSRA] interface vlanif 600
[LSRA-Vlanif600] ip address 172.6.1.1 255.255.255.0
[LSRA-Vlanif600] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[LSRA-GigabitEthernet0/0/1] quit
[LSRA] interface gigabitethernet 0/0/2
[LSRA-GigabitEthernet0/0/2] port link-type trunk
[LSRA-GigabitEthernet0/0/2] port trunk allow-pass vlan 600
[LSRA-GigabitEthernet0/0/2] quit
[LSRA] interface gigabitethernet 0/0/3
[LSRA-GigabitEthernet0/0/3] port link-type trunk
[LSRA-GigabitEthernet0/0/3] port trunk allow-pass vlan 400
[LSRA-GigabitEthernet0/0/3] quit
[LSRA] interface loopback 1
[LSRA-LoopBack1] ip address 1.1.1.9 255.255.255.255
[LSRA-LoopBack1] quit
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 172.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] network 172.4.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] network 172.6.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
Assign IP addresses to interfaces of LSRB, LSRC, LSRE, and LSRF according to Figure
5-37. The configurations on these LSRs are similar to the configuration on LSRA, and are not
mentioned here.
After the configurations are complete, run the display ip routing-table command on the
LSRs. You can see that the LSRs have learned the routes to each other's Loopback1
interfaces.
Step 2 Configure basic MPLS capabilities and enable MPLS TE, RSVP-TE, and CSPF.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] mpls rsvp-te
[LSRA-mpls] mpls te cspf
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls
[LSRA-Vlanif100] mpls te
[LSRA-Vlanif100] mpls rsvp-te
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 400
[LSRA-Vlanif400] mpls
[LSRA-Vlanif400] mpls te
[LSRA-Vlanif400] mpls rsvp-te
[LSRA-Vlanif400] quit
[LSRA] interface vlanif 600
[LSRA-Vlanif600] mpls
[LSRA-Vlanif600] mpls te
[LSRA-Vlanif600] mpls rsvp-te
[LSRA-Vlanif600] quit
The configurations on LSRB, LSRC, LSRE, and LSRF are similar to the configuration on
LSRA, and are not mentioned here. CSPF needs to be enabled only on the ingress node of the
primary tunnel.
Step 3 Configure OSPF TE.
# Configure LSRA.
[LSRA] ospf
[LSRA-ospf-1] opaque-capability enable
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
The configurations on LSRB, LSRC, LSRE, and LSRF are similar to the configuration on
LSRA, and are not mentioned here.
Step 4 Configure CR-LSP attribute templates and specify explicit paths for the CR-LSPs.
# Specify an explicit path for the primary CR-LSP.
[LSRA] explicit-path pri-path
[LSRA-explicit-path-pri-path] next hop 172.1.1.2
[LSRA-explicit-path-pri-path] next hop 172.2.1.2
[LSRA-explicit-path-pri-path] next hop 3.3.3.9
[LSRA-explicit-path-pri-path] quit
# Configure the CR-LSP attribute template used for setting up the primary CR-LSP.
[LSRA] lsp-attribute lsp_attribute_pri
[LSRA-lsp-attribute-lsp_attribute_pri] explicit-path pri-path
[LSRA-lsp-attribute-lsp_attribute_pri] commit
[LSRA-lsp-attribute-lsp_attribute_pri] quit
# Configure the CR-LSP attribute template used for setting up the hot-standby CR-LSP.
[LSRA] lsp-attribute lsp_attribute_hotstandby
[LSRA-lsp-attribute-lsp_attribute_hotstandby] explicit-path hotstandby-path
[LSRA-lsp-attribute-lsp_attribute_hotstandby] hop-limit 12
[LSRA-lsp-attribute-lsp_attribute_hotstandby] commit
[LSRA-lsp-attribute-lsp_attribute_hotstandby] quit
# Configure the CR-LSP attribute template used for setting up the ordinary backup CR-LSP.
[LSRA] lsp-attribute lsp_attribute_ordinary
[LSRA-lsp-attribute-lsp_attribute_ordinary] explicit-path ordinary-path
[LSRA-lsp-attribute-lsp_attribute_ordinary] hop-limit 15
[LSRA-lsp-attribute-lsp_attribute_ordinary] commit
[LSRA-lsp-attribute-lsp_attribute_ordinary] quit
Step 5 On the ingress node LSRA, create the MPLS TE tunnel on the primary CR-LSP.
# Specify an MPLS TE tunnel interface for the primary CR-LSP and apply the primary CR-
LSP attribute template to set up this CR-LSP.
[LSRA] interface tunnel 1
[LSRA-Tunnel1] ip address unnumbered interface loopBack 1
[LSRA-Tunnel1] tunnel-protocol mpls te
[LSRA-Tunnel1] destination 3.3.3.9
[LSRA-Tunnel1] mpls te tunnel-id 100
[LSRA-Tunnel1] mpls te primary-lsp-constraint lsp-attribute lsp_attribute_pri
[LSRA-Tunnel1] mpls te commit
[LSRA-Tunnel1] quit
Run the display interface tunnel 1 command on LSRA to check the tunnel status. The tunnel
is in Up state.
Step 6 Configure hot-standby and ordinary backup CR-LSPs on the ingress node.
# On LSRA, apply CR-LSP attribute templates to create the hot-standby and ordinary backup
CR-LSPs.
[LSRA] interface tunnel 1
[LSRA-Tunnel1] mpls te hotstandby-lsp-constraint 1 lsp-attribute
lsp_attribute_hotstandby
[LSRA-Tunnel1] mpls te ordinary-lsp-constraint 1 lsp-attribute
lsp_attribute_ordinary
[LSRA-Tunnel1] mpls te commit
[LSRA-Tunnel1] quit
Run the display mpls te tunnel-interface command on LSRA to check tunnel information.
You can see that the hot-standby CR-LSP has been set up successfully.
[LSRA] display mpls te tunnel-interface
----------------------------------------------------------------
Tunnel1
----------------------------------------------------------------
Tunnel State Desc : UP
Active LSP : Primary LSP
Session ID : 100
Ingress LSR ID : 1.1.1.9 Egress LSR ID: 3.3.3.9
Admin State : UP Oper State : UP
Primary LSP State : UP
Main LSP State : READY LSP ID : 5
Hot-Standby LSP State : UP
Main LSP State : READY LSP ID : 32772
# Run the display mpls te tunnel verbose command on LSRA to view detailed tunnel
information. You can see that the primary and hot-standby CR-LSPs have been set up using
the attribute templates.
[LSRA] display mpls te tunnel verbose
No : 1
Tunnel-Name : Tunnel1
Tunnel Interface Name : Tunnel1
TunnelIndex : 1 LSP Index : 2048
Session ID : 100 LSP ID : 5
LSR Role : Ingress LSP Type : Primary
Ingress LSR ID : 1.1.1.9
Egress LSR ID : 3.3.3.9
In-Interface : -
Out-Interface : Vlanif100
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
LspConstraint : 1
ER-Hop Table Index : 0 AR-Hop Table Index: 0
C-Hop Table Index : 1
PrevTunnelIndexInSession: 2 NextTunnelIndexInSession: -
PSB Handle : 8194
Created Time : 2013-09-16 14:53:15+00:00
RSVP LSP Type : -
--------------------------------
DS-TE Information
--------------------------------
Bandwidth Reserved Flag : Unreserved
CT0 Bandwidth(Kbit/sec) : 0 CT1 Bandwidth(Kbit/sec): 0
CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0
CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0
CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0
Setup-Priority : 7 Hold-Priority : 7
--------------------------------
FRR Information
--------------------------------
Primary LSP Info
TE Attribute Flag : 0x3 Protected Flag : 0x0
Bypass In Use : Not Exists
Bypass Tunnel Id : -
BypassTunnel : -
Bypass LSP ID : - FrrNextHop : -
ReferAutoBypassHandle : -
FrrPrevTunnelTableIndex : - FrrNextTunnelTableIndex: -
Bypass Attribute(Not configured)
Setup Priority : - Hold Priority : -
HopLimit : - Bandwidth : -
IncludeAnyGroup : - ExcludeAnyGroup : -
IncludeAllGroup : -
Bypass Unbound Bandwidth Info(Kbit/sec)
CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: -
CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: -
CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: -
CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: -
--------------------------------
BFD Information
--------------------------------
NextSessionTunnelIndex : - PrevSessionTunnelIndex: -
NextLspId : - PrevLspId : -
No : 2
Tunnel-Name : Tunnel1
Tunnel Interface Name : Tunnel1
TunnelIndex : 2 LSP Index : 2050
Session ID : 100 LSP ID : 32772
LSR Role : Ingress LSP Type : Hot-Standby
Ingress LSR ID : 1.1.1.9
Egress LSR ID : 3.3.3.9
In-Interface : -
Out-Interface : Vlanif400
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
LspConstraint : 1
ER-Hop Table Index : 1 AR-Hop Table Index: 1
C-Hop Table Index : 2
PrevTunnelIndexInSession: - NextTunnelIndexInSession: 1
PSB Handle : 8195
Created Time : 2013-09-16 14:53:15+00:00
RSVP LSP Type : -
--------------------------------
DS-TE Information
--------------------------------
Bandwidth Reserved Flag : Unreserved
# Run the display mpls te tunnel verbose command on LSRA. You can see that an ordinary
CR-LSP has been set up using the attribute template.
[LSRA] display mpls te tunnel verbose
No : 1
Tunnel-Name : Tunnel1
Tunnel Interface Name : Tunnel1
TunnelIndex : 2 LSP Index : 2048
Session ID : 100 LSP ID : 32774
LSR Role : Ingress LSP Type : Ordinary
Ingress LSR ID : 1.1.1.9
Egress LSR ID : 3.3.3.9
In-Interface : -
Out-Interface : Vlanif600
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
LspConstraint : 1
ER-Hop Table Index : 2 AR-Hop Table Index: 1
C-Hop Table Index : 2
PrevTunnelIndexInSession: - NextTunnelIndexInSession: -
PSB Handle : 8196
Created Time : 2013-09-16 15:00:08+00:00
RSVP LSP Type : -
--------------------------------
DS-TE Information
--------------------------------
Bandwidth Reserved Flag : Unreserved
----End
Configuration File
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100 400 600
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path hotstandby-path
next hop 172.4.1.2
next hop 172.5.1.2
next hop 3.3.3.9
#
explicit-path ordinary-path
next hop 172.6.1.2
next hop 172.7.1.2
next hop 3.3.3.9
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 3.3.3.9
#
lsp-attribute lsp_attribute_hotstandby
explicit-path hotstandby-path
hop-limit 12
commit
#
lsp-attribute lsp_attribute_ordinary
explicit-path ordinary-path
hop-limit 15
commit
#
lsp-attribute lsp_attribute_pri
explicit-path pri-path
commit
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif600
ip address 172.6.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 600
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 100
mpls te primary-lsp-constraint lsp-attribute lsp_attribute_pri
mpls te hotstandby-lsp-constraint 1 lsp-attribute lsp_attribute_hotstandby
mpls te ordinary-lsp-constraint 1 lsp-attribute lsp_attribute_ordinary
mpls te record-route
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
network 172.6.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 500 700
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif700
ip address 172.7.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 700
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
network 172.7.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRE
#
sysname LSRE
#
vlan batch 400 500
#
mpls lsr-id 5.5.5.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
use a TE tunnel as a logical link for IGP route calculation. You can set a proper metric for an
MPLS TE tunnel to ensure that the route passing through the MPLS TE tunnel is preferred,
allowing traffic to be directed to the MPLS TE tunnel.
As shown in Figure 5-38, devices use OSPF to communicate with each other. An MPLS TE
tunnel is established from LSRA to LSRC. The MPLS TE tunnel passes through LSRB. The
number marked on each link indicates the link cost. If LSRA has traffic destined for LSRE
and LSRC, LSRA sends the traffic through GE0/0/2 based on the OSPF route selection result.
If the link between LSRA and LSRD provides 100 Mbit/s of bandwidth while LSRA requires
50 Mbit/s of bandwidth to send traffic to LSRC and 60 Mbit/s of bandwidth to send traffic to
LSRE, the link between LSRA and LSRD is congested. Congestion on the link causes traffic
transmission delays or packet loss.
To resolve this problem, configure IGP shortcut on the tunnel interface of LSRA to direct
traffic destined for LSRC to the MPLS TE tunnel. By doing this, traffic is forwarded by
GE0/0/1 and network congestion is prevented.
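The effect of IGP shortcut on LSRA's route selection can be sketched as a simple cost comparison. This is an illustrative model, not device behavior; the link costs (15 on Vlanif100, 10 elsewhere) and the tunnel metric of 10 are taken from this example's configuration, and the helper name is hypothetical.

```python
# Sketch of how IGP shortcut changes route selection on LSRA.
# Costs come from this example: ospf cost 15 on Vlanif100, cost 10 on the
# other links; the tunnel metric matches "mpls te igp metric absolute 10".

# Cost of each candidate path from LSRA to LSRC (3.3.3.9).
physical_paths = {
    "GE0/0/2 (LSRA->LSRD->LSRC)": 10 + 10,   # cost 20
    "GE0/0/1 (LSRA->LSRB->LSRC)": 15 + 10,   # cost 25
}

def best_next_hop(tunnel_metric=None):
    """Return the preferred outbound interface toward LSRC.

    With IGP shortcut enabled, the TE tunnel joins the local SPF
    calculation as one more candidate link with its configured metric.
    The tunnel is not advertised to peers; only LSRA sees it.
    """
    candidates = dict(physical_paths)
    if tunnel_metric is not None:
        candidates["Tunnel1 (TE tunnel over GE0/0/1)"] = tunnel_metric
    return min(candidates, key=candidates.get)

print(best_next_hop())                  # without shortcut: GE0/0/2 is preferred
print(best_next_hop(tunnel_metric=10))  # with shortcut: Tunnel1 is preferred
```

Because the tunnel metric (10) is lower than the cost of the ordinary OSPF path (20), the route to LSRC moves onto the tunnel, which physically exits GE0/0/1.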
NOTE
After IGP shortcut is configured on the tunnel interface of LSRA, LSRA does not advertise the MPLS
TE tunnel to its peers as a route. The MPLS TE tunnel is used only for local route calculation.
STP must be disabled on the network. Otherwise, an interface may be blocked by STP.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface, configure OSPF to ensure that there are reachable
routes between LSRs, and configure the OSPF cost.
2. On LSRA, create an MPLS TE tunnel over the path LSRA -> LSRB -> LSRC. This
example uses RSVP-TE to establish a dynamic MPLS TE tunnel. Configure an ID for
each LSR, enable MPLS TE, RSVP-TE, and CSPF on each node and interface, and
enable OSPF TE. On the ingress node of the primary tunnel, create a tunnel interface,
and specify the IP address, tunneling protocol, destination IP address, tunnel ID, and
dynamic signaling protocol RSVP-TE for the tunnel interface.
3. Enable IGP shortcut on the TE tunnel interface of LSRA and configure an IGP metric for
the TE tunnel.
Procedure
Step 1 Assign an IP address to each interface, configure OSPF, and set the OSPF cost.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100 400
[LSRA] interface vlanif 100
[LSRA-Vlanif100] ip address 172.1.1.1 255.255.255.0
[LSRA-Vlanif100] ospf cost 15
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 400
[LSRA-Vlanif400] ip address 172.4.1.1 255.255.255.0
[LSRA-Vlanif400] ospf cost 10
[LSRA-Vlanif400] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[LSRA-GigabitEthernet0/0/1] quit
[LSRA] interface gigabitethernet 0/0/2
[LSRA-GigabitEthernet0/0/2] port link-type trunk
[LSRA-GigabitEthernet0/0/2] port trunk allow-pass vlan 400
[LSRA-GigabitEthernet0/0/2] quit
[LSRA] interface loopback 1
[LSRA-LoopBack1] ip address 1.1.1.9 255.255.255.255
[LSRA-LoopBack1] quit
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 172.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] network 172.4.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
# Configure IP addresses for interfaces of LSRB, LSRC, LSRD, and LSRE according to
Figure 5-38. The configurations on LSRB, LSRC, LSRD, and LSRE are similar to the
configuration of LSRA, and are not mentioned here.
After the configurations are complete, run the display ip routing-table command on LSRA,
LSRB, and LSRC. You can see that the LSRs have learned the routes to each other's
Loopback1 interfaces.
Step 2 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
To set up a TE tunnel from LSRA to LSRC, perform the following configurations on LSRA,
LSRB, and LSRC.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] mpls rsvp-te
[LSRA-mpls] mpls te cspf
[LSRA-mpls] quit
The configurations on LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here. CSPF only needs to be configured on the ingress node of the primary tunnel.
Step 3 Configure OSPF TE.
To set up a TE tunnel from LSRA to LSRC, perform the following configurations on LSRA,
LSRB, and LSRC.
# Configure LSRA.
[LSRA] ospf
[LSRA-ospf-1] opaque-capability enable
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
The configurations on LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here.
Step 4 Create an MPLS TE tunnel.
# Specify an explicit path for a TE tunnel.
[LSRA] explicit-path pri-path
[LSRA-explicit-path-pri-path] next hop 172.1.1.2
[LSRA-explicit-path-pri-path] next hop 172.2.1.2
[LSRA-explicit-path-pri-path] next hop 3.3.3.9
[LSRA-explicit-path-pri-path] quit
1.1.1.9 and the outbound interface of this route is Tunnel1. The traffic destined for LSRC has
been directed to the MPLS TE tunnel.
[LSRA] display ip routing-table 3.3.3.9
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Table : Public
Summary Count : 1
Destination/Mask Proto Pre Cost Flags NextHop Interface
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100 400
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 3.3.3.9
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
ospf cost 15
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
ospf cost 10
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 100
mpls te path explicit-path pri-path
mpls te igp shortcut ospf
mpls te igp metric absolute 10
mpls te commit
#
ospf 1
opaque-capability enable
enable traffic-adjustment
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
ospf cost 15
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
ospf cost 10
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 300
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
ospf cost 10
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.1 255.255.255.0
ospf cost 10
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 300
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
network 172.3.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 300 400 500
#
interface Vlanif300
ip address 172.3.1.2 255.255.255.0
ospf cost 10
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
ospf cost 10
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
ospf cost 10
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 500
#
ospf 1
area 0.0.0.0
network 172.3.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
#
return
Networking Requirements
An MPLS TE tunnel does not automatically direct traffic. To direct traffic to an MPLS TE
tunnel, configure forwarding adjacency. Forwarding adjacency enables a device to use a TE
tunnel as a logical link for IGP route calculation. Unlike IGP shortcut, forwarding adjacency
advertises a TE tunnel to its peers as an IGP route. You can set a proper metric for an MPLS
TE tunnel to ensure that the route passing through the MPLS TE tunnel is preferred, allowing
traffic to be directed to the MPLS TE tunnel.
As shown in Figure 5-39, devices use OSPF to communicate with each other. An MPLS TE
tunnel is established from LSRA to LSRC. The MPLS TE tunnel passes through LSRB. The
number marked on each link indicates the link cost. If LSRA and LSRE have traffic destined
for LSRC, traffic from the two LSRs is forwarded by GE0/0/1 on LSRD based on the OSPF
route selection result. If LSRA requires 10 Mbit/s bandwidth to send traffic to LSRC, and
LSRE requires 100 Mbit/s bandwidth to send traffic to LSRC, but the link between LSRC and
LSRD has only 100 Mbit/s of bandwidth, the link is congested. Congestion on the link causes
traffic transmission delay or packet loss.
To resolve this problem, configure forwarding adjacency on the MPLS TE tunnel interface of
LSRA. Then all traffic from LSRA to LSRC is forwarded over the MPLS TE tunnel, whereas
only some of the traffic from LSRE to LSRC is forwarded over the MPLS TE tunnel; the rest
is forwarded by LSRD. Therefore, congestion is prevented on the link between LSRC and
LSRD.
NOTE
After you configure forwarding adjacency, LSRA advertises the MPLS TE tunnel to its peer as an OSPF
route. Because OSPF requires bidirectional link detection, the MPLS TE tunnel from LSRC to LSRA
must be established and forwarding adjacency must be configured on the tunnel interface.
STP must be disabled on the network. Otherwise, some interfaces may be blocked by STP.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface, configure OSPF to ensure that there are reachable
routes between LSRs, and configure the OSPF cost.
2. On LSRA, create an MPLS TE tunnel over the path LSRA -> LSRB -> LSRC. On
LSRC, create an MPLS TE tunnel over the path LSRC -> LSRB -> LSRA. This example
uses RSVP-TE to establish a dynamic MPLS TE tunnel. Configure an ID for each LSR,
enable MPLS TE, RSVP-TE, and CSPF on each node and interface, and enable OSPF
TE. On the ingress node of the primary tunnel, create a tunnel interface, and specify the
IP address, tunneling protocol, destination IP address, tunnel ID, and dynamic signaling
protocol RSVP-TE for the tunnel interface.
3. Enable forwarding adjacency on the TE tunnel interfaces of LSRA and LSRC, and
configure the IGP metric for the TE tunnels.
Procedure
Step 1 Assign an IP address to each interface, configure OSPF, and set the OSPF cost.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100 400 600
[LSRA] interface vlanif 100
[LSRA-Vlanif100] ip address 172.1.1.1 255.255.255.0
[LSRA-Vlanif100] ospf cost 15
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 400
[LSRA-Vlanif400] ip address 172.4.1.1 255.255.255.0
[LSRA-Vlanif400] ospf cost 10
[LSRA-Vlanif400] quit
[LSRA] interface vlanif 600
[LSRA-Vlanif600] ip address 172.6.1.1 255.255.255.0
[LSRA-Vlanif600] ospf cost 10
[LSRA-Vlanif600] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[LSRA-GigabitEthernet0/0/1] quit
[LSRA] interface gigabitethernet 0/0/2
[LSRA-GigabitEthernet0/0/2] port link-type trunk
[LSRA-GigabitEthernet0/0/2] port trunk allow-pass vlan 400
[LSRA-GigabitEthernet0/0/2] quit
[LSRA] interface gigabitethernet 0/0/3
[LSRA-GigabitEthernet0/0/3] port link-type trunk
[LSRA-GigabitEthernet0/0/3] port trunk allow-pass vlan 600
[LSRA-GigabitEthernet0/0/3] quit
[LSRA] interface loopback 1
[LSRA-LoopBack1] ip address 1.1.1.9 255.255.255.255
[LSRA-LoopBack1] quit
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 172.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] network 172.4.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] network 172.6.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
# Configure IP addresses for interfaces of LSRB, LSRC, LSRD, and LSRE according to
Figure 5-39. The configurations on LSRB, LSRC, LSRD, and LSRE are similar to the
configuration of LSRA, and are not mentioned here.
After the configurations are complete, run the display ip routing-table command on LSRA,
LSRB, and LSRC. You can see that the LSRs have learned the routes to each other's
Loopback1 interfaces.
Step 2 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
To create TE tunnels on LSRA and LSRC, perform the following configurations on LSRA,
LSRB, and LSRC.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] mpls rsvp-te
[LSRA-mpls] mpls te cspf
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls
[LSRA-Vlanif100] mpls te
[LSRA-Vlanif100] mpls rsvp-te
[LSRA-Vlanif100] quit
The configurations on LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here. CSPF only needs to be configured on the ingress node of the primary tunnel.
Step 3 Configure OSPF TE.
To create TE tunnels on LSRA and LSRC, perform the following configurations on LSRA,
LSRB, and LSRC.
# Configure LSRA.
[LSRA] ospf
[LSRA-ospf-1] opaque-capability enable
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
The configurations on LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here.
Step 4 Create an MPLS TE tunnel.
Create MPLS TE tunnel interfaces on LSRA and LSRC, and configure explicit paths.
# Configure LSRA.
[LSRA] explicit-path pri-path
[LSRA-explicit-path-pri-path] next hop 172.1.1.2
[LSRA-explicit-path-pri-path] next hop 172.2.1.2
[LSRA-explicit-path-pri-path] next hop 3.3.3.9
[LSRA-explicit-path-pri-path] quit
[LSRA] interface tunnel 1
[LSRA-Tunnel1] ip address unnumbered interface loopback 1
[LSRA-Tunnel1] tunnel-protocol mpls te
[LSRA-Tunnel1] destination 3.3.3.9
[LSRA-Tunnel1] mpls te tunnel-id 100
[LSRA-Tunnel1] mpls te path explicit-path pri-path
[LSRA-Tunnel1] mpls te commit
[LSRA-Tunnel1] quit
# Configure LSRC.
[LSRC] explicit-path pri-path
[LSRC-explicit-path-pri-path] next hop 172.2.1.1
[LSRC-explicit-path-pri-path] next hop 172.1.1.1
[LSRC-explicit-path-pri-path] next hop 1.1.1.9
[LSRC-explicit-path-pri-path] quit
[LSRC] interface tunnel 1
[LSRC-Tunnel1] ip address unnumbered interface loopback 1
[LSRC-Tunnel1] tunnel-protocol mpls te
[LSRC-Tunnel1] destination 1.1.1.9
[LSRC-Tunnel1] mpls te tunnel-id 101
[LSRC-Tunnel1] mpls te path explicit-path pri-path
[LSRC-Tunnel1] mpls te commit
[LSRC-Tunnel1] quit
# Configure LSRC.
[LSRC] interface tunnel 1
[LSRC-Tunnel1] mpls te igp advertise
[LSRC-Tunnel1] mpls te igp metric absolute 10
[LSRC-Tunnel1] mpls te commit
[LSRC-Tunnel1] quit
[LSRC] ospf 1
[LSRC-ospf-1] enable traffic-adjustment advertise
[LSRC-ospf-1] quit
Run the display ip routing-table 3.3.3.9 command on LSRE. You can see that there are two
equal-cost routes to LSRC (3.3.3.9). Some traffic destined for LSRC is forwarded by LSRD,
and some is sent to LSRA and forwarded over the MPLS TE tunnel.
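The two equal-cost routes can be checked with simple cost arithmetic. This is a sketch under stated assumptions: the per-link OSPF cost of 10 on the LSRE-LSRD, LSRD-LSRC, and LSRE-LSRA links is read from the figure, and the tunnel metric comes from the mpls te igp metric absolute 10 command above.

```python
# Sketch of why LSRE sees two equal-cost routes to LSRC (3.3.3.9) once
# forwarding adjacency advertises the TE tunnel to peers as an OSPF link.
# Link costs of 10 and the tunnel metric of 10 are assumptions taken from
# this example's configuration.

paths_from_lsre = {
    "via LSRD (physical links)": 10 + 10,        # LSRE->LSRD + LSRD->LSRC
    "via LSRA (advertised TE tunnel)": 10 + 10,  # LSRE->LSRA + tunnel metric
}

best = min(paths_from_lsre.values())
equal_cost_routes = [name for name, cost in paths_from_lsre.items()
                     if cost == best]

# Both candidates cost 20, so OSPF installs two equal-cost routes,
# matching "Summary Count : 2" in the routing table output.
print(len(equal_cost_routes))  # 2
```

Unlike IGP shortcut, this works only because the tunnel is advertised: LSRE, not the tunnel's ingress, is the node whose SPF calculation uses the tunnel as a link.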
[LSRE] display ip routing-table 3.3.3.9
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Table : Public
Summary Count : 2
Destination/Mask Proto Pre Cost Flags NextHop Interface
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100 400 600
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 3.3.3.9
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
ospf cost 15
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
ospf cost 10
#
interface Vlanif600
ip address 172.6.1.1 255.255.255.0
ospf cost 10
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 600
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 100
mpls te path explicit-path pri-path
mpls te igp advertise
mpls te igp metric absolute 10
mpls te commit
#
ospf 1
opaque-capability enable
enable traffic-adjustment advertise
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
network 172.6.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
ospf cost 15
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
ospf cost 10
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 300
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path pri-path
next hop 172.2.1.1
next hop 172.1.1.1
next hop 1.1.1.9
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
ospf cost 10
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.1 255.255.255.0
ospf cost 10
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 300
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 1.1.1.9
mpls te tunnel-id 101
mpls te path explicit-path pri-path
mpls te igp advertise
mpls te igp metric absolute 10
mpls te commit
#
ospf 1
opaque-capability enable
enable traffic-adjustment advertise
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
network 172.3.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 300 400 500
#
interface Vlanif300
ip address 172.3.1.2 255.255.255.0
ospf cost 10
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
ospf cost 10
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
ospf cost 10
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 500
#
ospf 1
area 0.0.0.0
network 172.3.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
#
return
Networking Requirements
As shown in Figure 5-40, LSRA has two dynamic MPLS TE tunnels to LSRD: Tunnel1 and
Tunnel2. The affinity attribute and mask need to be set based on the administrative group
attributes of the links so that Tunnel1 on LSRA uses the physical path LSRA -> LSRB ->
LSRC -> LSRD and Tunnel2 uses the physical path LSRA -> LSRB -> LSRE -> LSRC -> LSRD.
NOTE
STP must be disabled on the network. Otherwise, an interface may be blocked by STP.
(The figure shows the networking diagram: the path of Tunnel 1 and the path of Tunnel 2,
with the VLANIF interface addresses of each link.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure OSPF to ensure that there are
reachable routes between LSRs.
2. Configure an ID for each LSR, enable MPLS TE, RSVP-TE, and CSPF on each node and
interface, and enable OSPF TE.
3. Configure the administrative group attribute of the outbound interface of the tunnel on
each LSR.
4. On the ingress node of the primary tunnel, create a tunnel interface, and specify the IP
address, tunneling protocol, destination IP address, tunnel ID, and dynamic signaling
protocol RSVP-TE for the tunnel interface.
5. Determine and configure the affinity attribute and mask for each tunnel according to the
administrative group attribute and networking requirements.
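Step 5 of the roadmap relies on how an affinity attribute and mask are compared with a link's administrative group: among the bits selected by the mask, the link must have no bit set where the affinity bit is 0, and must share at least one bit with the affinity bits that are 1. The sketch below is an illustrative model of that rule (the function name is hypothetical); the values are the hexadecimal attributes used in this example.

```python
def link_matches(admin_group: int, affinity: int, mask: int) -> bool:
    """Check a link's administrative group against a tunnel's affinity/mask.

    Only bits where the mask is 1 are examined:
    - bits where the affinity is 0 must also be 0 in the administrative group;
    - if any masked affinity bit is 1, the administrative group must share
      at least one of those bits.
    """
    colour = admin_group & mask
    must_have_some = affinity & mask   # "include" bits
    must_be_clear = mask & ~affinity   # "exclude" bits
    if colour & must_be_clear:
        return False
    if must_have_some and not (colour & must_have_some):
        return False
    return True

# Tunnel1 (affinity 10101, mask 11011) accepts links coloured 0x10101
# and rejects links coloured 0x10011.
assert link_matches(0x10101, affinity=0x10101, mask=0x11011)
assert not link_matches(0x10011, affinity=0x10101, mask=0x11011)

# Tunnel2 (affinity 10011, mask 11101) does the opposite, which is why
# it is steered through LSRE.
assert link_matches(0x10011, affinity=0x10011, mask=0x11101)
assert not link_matches(0x10101, affinity=0x10011, mask=0x11101)
```

Both tunnels still accept the first link (Color 0x10001), since its masked bits violate neither rule; the two paths diverge at LSRB, where the links are coloured 0x10101 and 0x10011.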
Procedure
Step 1 Assign an IP address to each interface and configure OSPF.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100
# Configure IP addresses for interfaces of LSRB, LSRC, LSRD, and LSRE according to
Figure 5-40. The configurations of LSRB, LSRC, LSRD, and LSRE are similar to the
configuration of LSRA, and are not mentioned here.
After the configurations are complete, run the display ip routing-table command on each
LSR. You can see that the LSRs have learned the routes to Loopback1 interfaces of each
other. The display on LSRA is used as an example.
[LSRA] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
Destinations : 13 Routes : 13
Step 2 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] mpls rsvp-te
[LSRA-mpls] mpls te cspf
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls
[LSRA-Vlanif100] mpls te
[LSRA-Vlanif100] mpls rsvp-te
[LSRA-Vlanif100] quit
The configurations of LSRB, LSRC, LSRD, and LSRE are similar to the configuration of
LSRA, and are not mentioned here. CSPF only needs to be configured on the ingress node of
the primary tunnel.
Step 4 Set MPLS TE attributes of the outbound interface of each node.
# Configure the administrative group attribute on LSRA.
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls te link administrative group 10001
[LSRA-Vlanif100] quit
After the configurations are complete, check the TEDB including the Color field of each link.
The Color field indicates the administrative group attribute. The display on LSRA is used as
an example.
[LSRA] display mpls te cspf tedb node
Router ID: 1.1.1.9
IGP Type: OSPF Process ID: 1
MPLS-TE Link Count: 1
Link[1]:
OSPF Router ID: 1.1.1.9 Opaque LSA ID: 1.0.0.1
Interface IP Address: 172.1.1.1
DR Address: 172.1.1.2
IGP Area: 0
Link Type: Multi-access Link Status: Active
IGP Metric: 1 TE Metric: 1 Color: 0x10001
...
----------------------------------------------------------------
Tunnel2
----------------------------------------------------------------
Tunnel State Desc : UP
Active LSP : Primary LSP
Session ID : 101
Ingress LSR ID : 1.1.1.9 Egress LSR ID: 4.4.4.9
Admin State : UP Oper State : UP
Primary LSP State : UP
Main LSP State : READY LSP ID : 4
Run the display mpls te tunnel path command to view the path of the tunnel. You can see
that the affinity attribute and mask of the tunnel match the administrative group attribute of
each link.
[LSRA] display mpls te tunnel path
Tunnel Interface Name : Tunnel1
Lsp ID : 1.1.1.9 :100 :47
Hop Information
Hop 0 172.1.1.1
Hop 1 172.1.1.2 Label 1065
Hop 2 2.2.2.9 Label 1065
Hop 3 172.2.1.1
Hop 4 172.2.1.2 Label 1075
Hop 5 3.3.3.9 Label 1075
Hop 6 172.3.1.1
Hop 7 172.3.1.2 Label 3
Hop 8 4.4.4.9 Label 3
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
mpls te
mpls te link administrative group 10001
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.9
mpls te tunnel-id 100
mpls te record-route label
mpls te affinity property 10101 mask 11011
mpls te commit
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.9
mpls te tunnel-id 101
mpls te record-route label
mpls te affinity property 10011 mask 11101
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200 400
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
mpls te
mpls te link administrative group 10101
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
mpls
mpls te
mpls te link administrative group 10011
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
network 172.5.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 300
#
mpls lsr-id 4.4.4.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 172.3.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRE
#
sysname LSRE
#
vlan batch 400 500
#
mpls lsr-id 5.5.5.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
mpls
mpls te
mpls te link administrative group 10011
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 5.5.5.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 5.5.5.9 0.0.0.0
network 172.4.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
mpls-te enable
#
return
NOTE
STP must be disabled on the network. Otherwise, an interface may be blocked by STP.
[Figure: TE FRR networking. The primary CR-LSP runs LSRA -> LSRB -> LSRC -> LSRD (LSRD is reached through GE0/0/1, VLANIF300, 172.3.1.2/24); the bypass CR-LSP runs LSRB -> LSRE -> LSRC (on LSRE: VLANIF400 172.4.1.2/24 and VLANIF500 172.5.1.1/24).]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure manual TE FRR.
2. Configure Srefresh on the PLR and MP along a tunnel to enhance transmission reliability
of RSVP messages and improve resource use efficiency.
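The efficiency gain behind step 2 can be sketched in a few lines. The following Python fragment is an illustration of the summary refresh (Srefresh) idea from RFC 2961, not device code: instead of resending one full Path/Resv message per LSP every refresh interval, the sender advertises a single message carrying only the Message_IDs of previously sent messages, and the neighbor refreshes the matching state.

```python
# Illustrative sketch (not device code) of RSVP summary refresh (RFC 2961).
# Hypothetical state records: each holds a Message_ID and the full message body.

def full_refresh(lsp_states):
    """Classic refresh: one full Path/Resv message is resent per LSP state."""
    return [("PATH_REFRESH", s["msg_id"], s["full_state"]) for s in lsp_states]

def srefresh(lsp_states):
    """Summary refresh: a single Srefresh message listing all Message_IDs."""
    return [("SREFRESH", [s["msg_id"] for s in lsp_states])]

# 100 LSPs traversing the node: 100 refresh messages become 1.
states = [{"msg_id": i, "full_state": "PATH<%d>" % i} for i in range(100)]
print(len(full_refresh(states)))  # 100 messages without Srefresh
print(len(srefresh(states)))      # 1 message with Srefresh
```

The per-interval message count drops from one per LSP to one per neighbor, which is the "resource use efficiency" improvement the roadmap refers to.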
Procedure
Step 1 Configure manual TE FRR.
Configure the primary and bypass MPLS TE tunnels according to 5.10.13 Example for
Configuring Manual TE FRR, and then bind the two tunnels.
Step 2 Configure the Srefresh function on LSRB and LSRC.
# Configure the Srefresh function on LSRB.
[LSRB] mpls
[LSRB-mpls] mpls rsvp-te srefresh
[LSRB-mpls] quit
Run the display interface tunnel 1 command on LSRA to check the status of the
primary CR-LSP. The command output shows that the tunnel interface is still Up.
[LSRA] display interface tunnel 1
Tunnel1 current state : UP
Line protocol current state : UP
Last line protocol up time : 2013-01-21 10:58:49
Description:
...
Run the tracert lsp te tunnel 1 command on LSRA. You can view the path that the tunnel
passes.
[LSRA] tracert lsp te tunnel 1
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to break.
TTL Replier Time Type Downstream
0 Ingress 172.1.1.2/[1034 ]
1 172.1.1.2 1 ms Transit 172.4.1.2/[1042 1025 ]
2 172.4.1.2 1 ms Transit 172.5.1.2/[3 ]
3 172.5.1.2 2 ms Transit 172.3.1.2/[3 ]
4 4.4.4.9 2 ms Egress
The preceding information shows that services on the link have been switched to the bypass
CR-LSP.
Run the display mpls te tunnel name Tunnel1 verbose command on LSRB. You can see
that the bypass CR-LSP is in use.
[LSRB] display mpls te tunnel name Tunnel1 verbose
No : 1
Tunnel-Name : Tunnel1
Tunnel Interface Name : -
TunnelIndex : 1 LSP Index : 2048
Session ID : 100 LSP ID : 5
LSR Role : Transit
Ingress LSR ID : 1.1.1.9
Egress LSR ID : 4.4.4.9
In-Interface : Vlanif100
Out-Interface : Vlanif200
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
ER-Hop Table Index : - AR-Hop Table Index: 0
C-Hop Table Index : -
PrevTunnelIndexInSession: - NextTunnelIndexInSession: -
PSB Handle : 8421
Created Time : 2013-09-16 18:27:55+00:00
RSVP LSP Type : -
--------------------------------
DS-TE Information
--------------------------------
Bandwidth Reserved Flag : Unreserved
CT0 Bandwidth(Kbit/sec) : 0 CT1 Bandwidth(Kbit/sec): 0
CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0
CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0
CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0
Setup-Priority : 7 Hold-Priority : 7
--------------------------------
FRR Information
--------------------------------
Primary LSP Info
TE Attribute Flag : 0x63 Protected Flag : 0x1
Bypass In Use : In Use
Bypass Tunnel Id : 1225021547
BypassTunnel : Tunnel Index[Tunnel2], InnerLabel[1042]
Bypass LSP ID : 2 FrrNextHop : 172.5.1.2
ReferAutoBypassHandle : -
FrrPrevTunnelTableIndex : - FrrNextTunnelTableIndex: -
Bypass Attribute(Not configured)
Setup Priority : - Hold Priority : -
HopLimit : - Bandwidth : -
IncludeAnyGroup : - ExcludeAnyGroup : -
IncludeAllGroup : -
Bypass Unbound Bandwidth Info(Kbit/sec)
CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: -
CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: -
CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: -
CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: -
--------------------------------
BFD Information
--------------------------------
NextSessionTunnelIndex : - PrevSessionTunnelIndex: -
NextLspId : - PrevLspId : -
Run the display mpls rsvp-te statistics global command on LSRB to view Srefresh statistics.
[LSRB] display mpls rsvp-te statistics global
LSR ID: 2.2.2.9 LSP Count: 2
PSB Count: 2 RSB Count: 2
RFSB Count: 1
SendAckCounter: 0 RecAckCounter: 0
SendPathErrCounter: 287 RecPathErrCounter: 0
SendResvErrCounter: 0 RecResvErrCounter: 0
SendPathTearCounter: 11 RecPathTearCounter: 8
SendResvTearCounter: 2 RecResvTearCounter: 0
SendSrefreshCounter: 13 RecSrefreshCounter: 14
SendAckMsgCounter: 14 RecAckMsgCounter: 13
SendChallengeMsgCounter: 0 RecChallengeMsgCounter: 0
SendResponseMsgCounter: 0 RecResponseMsgCounter: 0
SendErrMsgCounter: 0 RecErrMsgCounter: 0
SendRecoveryPathMsgCounter: 0 RecRecoveryPathMsgCounter: 0
SendGRPathMsgCounter: 0 RecGRPathMsgCounter: 0
ResourceReqFaultCounter: 0 RecGRPathMsgFromLSPMCounter: 0
Bfd neighbor count: 2 Bfd session count: 0
Because the Srefresh function is configured globally on LSRB and LSRC, it takes effect
on both nodes when the primary tunnel fails.
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 172.3.1.2
next hop 4.4.4.9
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0001.00
traffic-eng level-2
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.9
mpls te tunnel-id 100
mpls te record-route label
mpls te path explicit-path pri-path
mpls te fast-reroute
mpls te commit
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200 400
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls te timer fast-reroute 120
mpls rsvp-te
mpls rsvp-te srefresh
mpls te cspf
#
explicit-path by-path
next hop 172.4.1.2
next hop 172.5.1.2
next hop 3.3.3.9
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0002.00
traffic-eng level-2
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 300
mpls te record-route
mpls te path explicit-path by-path
mpls te bypass-tunnel
mpls te protected-interface Vlanif200
mpls te commit
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 300 500
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
mpls rsvp-te srefresh
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0003.00
traffic-eng level-2
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 300
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 300
#
mpls lsr-id 4.4.4.9
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0004.00
traffic-eng level-2
#
interface Vlanif300
ip address 172.3.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
#
return
l Configuration file of LSRE
#
sysname LSRE
#
vlan batch 400 500
#
mpls lsr-id 5.5.5.9
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0005.00
traffic-eng level-2
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 5.5.5.9 255.255.255.255
isis enable 1
#
return
Networking Requirements
As shown in Figure 5-42, VLANIF100 between LSRA and LSRB contains member
interfaces GE0/0/1 and GE0/0/2. An MPLS TE tunnel from LSRA to LSRC is set up by using
RSVP.
The handshake function needs to be configured so that LSRA and LSRB perform RSVP
authentication to prevent forged Resv messages from consuming network resources. In
addition, the message window function is configured to solve the problem of RSVP packet
mis-sequencing.
NOTE
STP must be disabled on the network. Otherwise, an interface may be blocked by STP.
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface on each LSR and configure OSPF to ensure that
there are reachable routes between LSRs.
2. Configure an ID for each LSR and globally enable MPLS, MPLS TE, and RSVP-TE on
each node and interface.
3. On the ingress node, create a tunnel interface, and specify the IP address, tunneling
protocol, destination IP address, tunnel ID, and dynamic signaling protocol RSVP-TE,
and enable CSPF.
4. Configure RSVP authentication on LSRA and LSRB of the tunnel.
5. Configure the Handshake function on LSRA and LSRB to prevent forged Resv messages
from consuming network resources.
6. Configure the sliding window function on LSRA and LSRB to solve the problem of
RSVP packet mis-sequencing.
NOTE
It is recommended that the window size be 32 or larger. If the window size is too small, some received
RSVP messages may be discarded, causing RSVP neighbor relationships to be terminated.
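The NOTE above can be made concrete with a small simulation. The following Python sketch is an illustration of the sliding-window idea, not the device's actual algorithm: the receiver tracks the highest sequence number seen and accepts an out-of-order message only if it falls within the window below that maximum and has not been received before. All values are hypothetical.

```python
# Illustrative sketch of an anti-replay sliding window for message sequencing.
# Not the device implementation; values and logic are simplified for clarity.

def accepted(sequence, window_size):
    """Return a per-message accept/reject decision for a received sequence."""
    highest = None
    seen = set()
    results = []
    for seq in sequence:
        if highest is None or seq > highest:
            ok = True          # in-order or newer: always accepted
            highest = seq
        else:
            # Out-of-order: accepted only inside the window, and only once.
            ok = (highest - seq) < window_size and seq not in seen
        seen.add(seq)
        results.append(ok)
    return results

# A message delayed by three positions survives a window of 32 but not of 1.
reordered = [1, 2, 3, 7, 4, 5, 6]
print(accepted(reordered, 32))  # all True
print(accepted(reordered, 1))   # the three delayed messages are rejected
```

This is why a too-small window discards legitimately reordered RSVP messages: their state is never refreshed, and the neighbor relationship can time out.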
Procedure
Step 1 Assign an IP address to each interface and configure OSPF.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] interface vlanif 100
[LSRA-Vlanif100] ip address 172.1.1.1 255.255.255.0
[LSRA-Vlanif100] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[LSRA-GigabitEthernet0/0/1] quit
[LSRA] interface gigabitethernet 0/0/2
[LSRA-GigabitEthernet0/0/2] port link-type trunk
[LSRA-GigabitEthernet0/0/2] port trunk allow-pass vlan 100
[LSRA-GigabitEthernet0/0/2] quit
[LSRA] interface loopback 1
[LSRA-LoopBack1] ip address 1.1.1.9 255.255.255.255
[LSRA-LoopBack1] quit
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 172.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
# Configure IP addresses for interfaces of LSRB and LSRC according to Figure 5-42. The
configurations of LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here.
After the configurations are complete, run the display ip routing-table command on each
LSR. You can see that the LSRs have learned the routes to Loopback1 interfaces of each
other.
Step 2 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] mpls rsvp-te
[LSRA-mpls] mpls te cspf
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls
[LSRA-Vlanif100] mpls te
[LSRA-Vlanif100] mpls rsvp-te
[LSRA-Vlanif100] quit
The configurations of LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here. CSPF only needs to be configured on the ingress node of the primary tunnel.
# Configure LSRA.
[LSRA] ospf
[LSRA-ospf-1] opaque-capability enable
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
The configurations of LSRB and LSRC are similar to the configuration of LSRA, and are not
mentioned here.
After the configurations are complete, run the display interface tunnel command on LSRA.
You can see that the tunnel interface status is Up.
[LSRA] display interface tunnel 1
Tunnel1 current state : UP
Line protocol current state : UP
Last line protocol up time : 2013-02-22 14:28:37
Description:...
Step 5 On LSRA and LSRB, configure RSVP authentication on the interfaces on the MPLS TE link.
# Configure LSRA.
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls rsvp-te authentication cipher Huawei@1234
[LSRA-Vlanif100] mpls rsvp-te authentication handshake 12345678
[LSRA-Vlanif100] mpls rsvp-te authentication window-size 32
[LSRA-Vlanif100] quit
# Configure LSRB.
[LSRB] interface vlanif 100
[LSRB-Vlanif100] mpls rsvp-te authentication cipher Huawei@1234
[LSRB-Vlanif100] mpls rsvp-te authentication handshake 12345678
[LSRB-Vlanif100] mpls rsvp-te authentication window-size 32
[LSRB-Vlanif100] quit
Run the reset mpls rsvp-te command, and then run the display interface tunnel command
on LSRA. You can see that the tunnel interface is Up.
Run the display mpls rsvp-te interface command on LSRA or LSRB to view information
about RSVP authentication.
[LSRA] display mpls rsvp-te interface vlanif 100
Interface: Vlanif100
Interface Address: 172.1.1.1
Interface state: UP Interface Index: 0x36
Total-BW: 0 Used-BW: 0
Hello configured: NO Num of Neighbors: 1
SRefresh feature: DISABLE SRefresh Interval: 30 sec
Mpls Mtu: 1500 Retransmit Interval: 5000 msec
Increment Value: 1 Authentication: ENABLE
Challenge: ENABLE WindowSize: 32
Next Seq # to be sent:2767789282 0 Key ID: 0xa4ff1cdc0000
Bfd Enabled: DISABLE Bfd Min-Tx: 1000
Bfd Min-Rx: 1000 Bfd Detect-Multi: 3
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
mpls rsvp-te authentication cipher @%@%dF/wP{e=L~kuASApKdnN8!Np@%@%
mpls rsvp-te authentication handshake 12345678
mpls rsvp-te authentication window-size 32
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 100
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 101
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
mpls-te enable
#
return
mpls te
mpls rsvp-te
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
mpls rsvp-te authentication cipher @%@%dF/wP{e=L~kuASApKdnN8!Np@%@%
mpls rsvp-te authentication handshake 12345678
mpls rsvp-te authentication window-size 32
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
NOTE
STP must be disabled on the network. Otherwise, an interface may be blocked by STP.
[Figure: TE FRR networking. The primary CR-LSP runs LSRA -> LSRB -> LSRC -> LSRD (LSRD is reached through GE0/0/1, VLANIF300, 172.3.1.2/24); the bypass CR-LSP runs LSRB -> LSRE -> LSRC (on LSRE: VLANIF400 172.4.1.2/24 and VLANIF500 172.5.1.1/24).]
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure manual TE FRR.
2. Configure RSVP authentication on LSRB and LSRC to prevent forged Resv messages
from consuming network resources.
Procedure
Step 1 Configure MPLS TE FRR.
Configure the primary and bypass MPLS TE tunnels according to 5.10.13 Example for
Configuring Manual TE FRR, and then bind the two tunnels.
Step 2 Configure RSVP authentication on LSRB and LSRC.
Configure the handshake function and a local password, and then check whether RSVP
authentication is configured successfully.
NOTE
The neighbor node is identified by its LSR ID. Therefore, you must enable CSPF on both neighboring devices
on which RSVP authentication is required.
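The protection this step provides rests on keyed message authentication. The following Python sketch illustrates the idea behind RSVP cryptographic authentication (RFC 2747): both peers share a key, the sender appends a keyed digest, and a forged message built without the key fails verification. It is an illustration only; the key and message contents are hypothetical, and real RSVP authentication uses HMAC-MD5 over the RSVP message rather than the SHA-256 used here.

```python
# Illustrative sketch of keyed message authentication (the principle behind
# RFC 2747 RSVP authentication). Hypothetical key and message; SHA-256 is
# used for illustration, not the protocol's actual HMAC-MD5.
import hmac
import hashlib

KEY = b"Huawei@1234"  # shared secret configured on both peers

def sign(message: bytes, key: bytes) -> bytes:
    """Compute the keyed digest the sender appends to the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, digest: bytes, key: bytes) -> bool:
    """Receiver-side check: recompute the digest and compare in constant time."""
    return hmac.compare_digest(sign(message, key), digest)

resv = b"RESV session=100 style=SE"
digest = sign(resv, KEY)
print(verify(resv, digest, KEY))           # True: message from the authentic peer
print(verify(resv, digest, b"wrong-key"))  # False: forged or unauthenticated
```

A forged Resv message is thus discarded before any resources are reserved for it.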
[LSRC] mpls
[LSRC-mpls] mpls te cspf
[LSRC-mpls] quit
[LSRC] mpls rsvp-te peer 2.2.2.9
[LSRC-mpls-rsvp-te-peer-2.2.2.9] mpls rsvp-te authentication cipher Huawei@1234
[LSRC-mpls-rsvp-te-peer-2.2.2.9] mpls rsvp-te authentication handshake 12345678
[LSRC-mpls-rsvp-te-peer-2.2.2.9] quit
Run the display interface tunnel 1 command on LSRA to check the status of the
primary CR-LSP. The command output shows that the tunnel interface is still Up.
[LSRA] display interface tunnel 1
Tunnel1 current state : UP
Line protocol current state : UP
Last line protocol up time : 2013-01-21 10:58:49
Description:
...
Run the tracert lsp te tunnel 1 command on LSRA. You can view the path that the tunnel
passes.
[LSRA] tracert lsp te tunnel 1
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to break.
TTL Replier Time Type Downstream
0 Ingress 172.1.1.2/[1037 ]
The preceding information shows that services on the link have been switched to the bypass
CR-LSP.
Run the display mpls te tunnel name Tunnel1 verbose command on LSRB. You can see
that the bypass CR-LSP is in use.
[LSRB] display mpls te tunnel name Tunnel1 verbose
No : 1
Tunnel-Name : Tunnel1
Tunnel Interface Name : -
TunnelIndex : 1 LSP Index : 2049
Session ID : 100 LSP ID : 8
LSR Role : Transit
Ingress LSR ID : 1.1.1.9
Egress LSR ID : 4.4.4.9
In-Interface : Vlanif100
Out-Interface : Vlanif200
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
ER-Hop Table Index : - AR-Hop Table Index: 2
C-Hop Table Index : -
PrevTunnelIndexInSession: - NextTunnelIndexInSession: -
PSB Handle : 8562
Created Time : 2013-09-16 19:14:37+00:00
RSVP LSP Type : -
--------------------------------
DS-TE Information
--------------------------------
Bandwidth Reserved Flag : Unreserved
CT0 Bandwidth(Kbit/sec) : 0 CT1 Bandwidth(Kbit/sec): 0
CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0
CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0
CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0
Setup-Priority : 7 Hold-Priority : 7
--------------------------------
FRR Information
--------------------------------
Primary LSP Info
TE Attribute Flag : 0x63 Protected Flag : 0x1
Bypass In Use : In Use
Bypass Tunnel Id : 1280021547
BypassTunnel : Tunnel Index[Tunnel2], InnerLabel[1045]
Bypass LSP ID : 4 FrrNextHop : 172.5.1.2
ReferAutoBypassHandle : -
FrrPrevTunnelTableIndex : - FrrNextTunnelTableIndex: -
Bypass Attribute(Not configured)
Setup Priority : - Hold Priority : -
HopLimit : - Bandwidth : -
IncludeAnyGroup : - ExcludeAnyGroup : -
IncludeAllGroup : -
Bypass Unbound Bandwidth Info(Kbit/sec)
CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: -
CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: -
CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: -
CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: -
--------------------------------
BFD Information
--------------------------------
NextSessionTunnelIndex : - PrevSessionTunnelIndex: -
NextLspId : - PrevLspId : -
# Run the display mpls rsvp-te peer command to check whether the bypass CR-LSP is
successfully set up.
[LSRB] display mpls rsvp-te peer
Remote Node id Neighbor
Neighbor Addr: -----
SrcInstance: 0x60128590 NbrSrcInstance: 0x0
PSB Count: 1 RSB Count: 0
Hello Type Sent: NONE
SRefresh Enable: NO
Last valid seq # rcvd: NULL
Interface: Vlanif100
Neighbor Addr: 172.1.1.1
SrcInstance: 0x60128590 NbrSrcInstance: 0x0
PSB Count: 1 RSB Count: 0
Hello Type Sent: NONE
SRefresh Enable: NO
Last valid seq # rcvd: NULL
Interface: Vlanif400
Neighbor Addr: 172.4.1.2
SrcInstance: 0x60128590 NbrSrcInstance: 0x0
PSB Count: 0 RSB Count: 1
Hello Type Sent: NONE
SRefresh Enable: NO
Last valid seq # rcvd: NULL
The command output shows that the RSB count for LSRB's neighbor is not zero. This
indicates that RSVP authentication between LSRB and its neighbor LSRC has succeeded
and that resources have been reserved successfully.
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 172.3.1.2
next hop 4.4.4.9
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0001.00
traffic-eng level-2
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.9
mpls te tunnel-id 100
mpls te record-route label
mpls te path explicit-path pri-path
mpls te fast-reroute
mpls te commit
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200 400
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls te timer fast-reroute 120
mpls rsvp-te
mpls te cspf
#
explicit-path by-path
next hop 172.4.1.2
next hop 172.5.1.2
next hop 3.3.3.9
#
mpls rsvp-te peer 3.3.3.9
mpls rsvp-te authentication cipher @%@%dF/wP{e=L~kuASApKdnN8!Np@%@%
mpls rsvp-te authentication handshake 12345678
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0002.00
traffic-eng level-2
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
isis enable 1
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 300
mpls te record-route
mpls te path explicit-path by-path
mpls te bypass-tunnel
mpls te protected-interface Vlanif200
mpls te commit
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 300 500
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
mpls rsvp-te peer 2.2.2.9
mpls rsvp-te authentication cipher @%@%dF/wP{e=L~kuASApKdnN8!Np@%@%
mpls rsvp-te authentication handshake 12345678
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0003.00
traffic-eng level-2
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 300
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 300
#
mpls lsr-id 4.4.4.9
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0004.00
traffic-eng level-2
#
interface Vlanif300
ip address 172.3.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
#
return
l Configuration file of LSRE
#
sysname LSRE
#
vlan batch 400 500
#
mpls lsr-id 5.5.5.9
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0005.00
traffic-eng level-2
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 5.5.5.9 255.255.255.255
isis enable 1
#
return
NOTE
STP must be disabled on the network. Otherwise, an interface may be blocked by STP.
[Figure: SRLG networking. The primary CR-LSP runs LSRA -> LSRB -> LSRC (VLANIF100: 172.1.1.0/24; VLANIF200: 172.2.1.0/24). LSRA also reaches LSRC through LSRE (VLANIF400: 172.4.1.0/24; VLANIF500: 172.5.1.0/24) and through LSRF (VLANIF600: 172.6.1.0/24; VLANIF700: 172.7.1.0/24). Loopback1 addresses: LSRA 1.1.1.9/32, LSRB 2.2.2.9/32, LSRC 3.3.3.9/32, LSRE 5.5.5.9/32.]
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure OSPF to ensure that there are
reachable routes between LSRs.
2. Configure an ID for each LSR and globally enable MPLS, MPLS TE, RSVP-TE, CSPF
on each node and interface, and enable OSPF TE.
3. On the ingress node of the primary tunnel, create a tunnel interface, and specify the IP
address, tunneling protocol, destination IP address, tunnel ID, and dynamic signaling
protocol RSVP-TE for the tunnel interface. The explicit path is LSRA -> LSRB ->
LSRC.
4. Configure SRLG numbers for SRLG member interfaces.
5. Configure the SRLG path calculation mode on the ingress node of the primary tunnel.
6. Configure auto TE FRR on the ingress node of the primary tunnel to protect LSRB.
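The effect of steps 4 and 5 can be sketched as a link-pruning rule: in the strict SRLG path calculation mode, a bypass CR-LSP must not use any link that shares an SRLG number with the link it protects, because such links are assumed to share fate (for example, a common fiber conduit). The Python fragment below is an illustration of that pruning rule, not device code; the topology values mirror this example, where Vlanif100 and Vlanif400 both belong to SRLG 1.

```python
# Illustrative sketch of SRLG-strict pruning before bypass path computation.
# Links sharing any SRLG number with the protected link are excluded.

def srlg_safe_links(links, protected):
    """Return the links that share no SRLG with the protected link."""
    banned = links[protected]
    return [name for name, srlgs in links.items()
            if name != protected and not (srlgs & banned)]

# Hypothetical link-to-SRLG map mirroring this example's topology.
links = {
    "Vlanif100": {1},    # protected outbound interface, SRLG 1
    "Vlanif400": {1},    # same SRLG: shares fate, so ineligible for the bypass
    "Vlanif600": set(),  # no shared SRLG: eligible for the bypass
}
print(srlg_safe_links(links, "Vlanif100"))  # ['Vlanif600']
```

This is why the bypass CR-LSP in this example is computed through LSRF (Vlanif600) rather than through LSRE (Vlanif400).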
Procedure
Step 1 Assign an IP address to each interface and configure OSPF.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100 400 600
[LSRA] interface vlanif 100
[LSRA-Vlanif100] ip address 172.1.1.1 255.255.255.0
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 400
[LSRA-Vlanif400] ip address 172.4.1.1 255.255.255.0
[LSRA-Vlanif400] quit
[LSRA] interface vlanif 600
[LSRA-Vlanif600] ip address 172.6.1.1 255.255.255.0
[LSRA-Vlanif600] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[LSRA-GigabitEthernet0/0/1] quit
[LSRA] interface gigabitethernet 0/0/2
[LSRA-GigabitEthernet0/0/2] port link-type trunk
[LSRA-GigabitEthernet0/0/2] port trunk allow-pass vlan 600
[LSRA-GigabitEthernet0/0/2] quit
[LSRA] interface gigabitethernet 0/0/3
[LSRA-GigabitEthernet0/0/3] port link-type trunk
[LSRA-GigabitEthernet0/0/3] port trunk allow-pass vlan 400
[LSRA-GigabitEthernet0/0/3] quit
[LSRA] interface loopback 1
[LSRA-LoopBack1] ip address 1.1.1.9 255.255.255.255
[LSRA-LoopBack1] quit
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 172.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] network 172.4.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] network 172.6.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
Configure IP addresses for interfaces of LSRB, LSRC, LSRE, and LSRF according to Figure
5-44. The configurations of LSRB, LSRC, LSRE, and LSRF are similar to the configuration
of LSRA, and are not mentioned here.
After the configurations are complete, run the display ip routing-table command on each
LSR. You can see that the LSRs learn the routes to Loopback1 of each other. The display on
LSRA is used as an example.
[LSRA] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
Destinations : 16 Routes : 18
Step 2 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Configure LSRA.
The configurations of LSRB, LSRC, LSRE, and LSRF are similar to the configuration of
LSRA, and are not mentioned here. CSPF only needs to be configured on the ingress node of
the primary tunnel.
Step 3 Configure OSPF TE.
# Configure LSRA.
[LSRA] ospf
[LSRA-ospf-1] opaque-capability enable
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
The configurations of LSRB, LSRC, LSRE, and LSRF are similar to the configuration of
LSRA, and are not mentioned here.
Step 4 On LSRA, create an MPLS TE tunnel for the primary CR-LSP.
# Configure the explicit path of the primary CR-LSP.
[LSRA] explicit-path pri-path
[LSRA-explicit-path-pri-path] next hop 172.1.1.2
[LSRA-explicit-path-pri-path] next hop 172.2.1.2
[LSRA-explicit-path-pri-path] next hop 3.3.3.9
[LSRA-explicit-path-pri-path] quit
Run the display interface tunnel 1 command on LSRA. You can see that the tunnel status is
Up.
[LSRA] display interface tunnel 1
Tunnel1 current state : UP
Line protocol current state : UP
Last line protocol up time : 2013-01-22 16:57:00
Description:
...
Run the display mpls te srlg all command to view SRLG information and the interfaces that
belong to the SRLG. The display on LSRA is used as an example.
[LSRA] display mpls te srlg all
Total SRLG supported : 1024
Total SRLG configured : 2
SRLG 1: Vlanif100
Vlanif400
Run the display mpls te link-administration srlg-information command to view the
SRLGs to which the interfaces belong. The display on LSRA is used as an example.
[LSRA] display mpls te link-administration srlg-information
SRLGs on Vlanif100 :
1
SRLGs on Vlanif400 :
1
Run the display mpls te cspf tedb srlg command to view TEDB information of the specified
SRLG.
[LSRA] display mpls te cspf tedb srlg 1
Interface-Address IGP-Type Area
172.1.1.1 OSPF 0
172.4.1.1 OSPF 0
Run the display mpls te tunnel command on LSRA. You can see that the bypass CR-LSP has
been established.
[LSRA] display mpls te tunnel
------------------------------------------------------------------------------
Run the display mpls te tunnel path Tunnel1 command on LSRA. You can see that local
protection is enabled on the outbound interface (172.1.1.1) of the primary tunnel on LSRA.
[LSRA] display mpls te tunnel path Tunnel1
Tunnel Interface Name : Tunnel1
Lsp ID : 1.1.1.9 :100 :1
Hop Information
Hop 0 172.1.1.1 Local-Protection available | node
Hop 1 172.1.1.2 Label 1024
Hop 2 2.2.2.9 Label 1024
Hop 3 172.2.1.1
Hop 4 172.2.1.2 Label 3
Hop 5 3.3.3.9 Label 3
IncludeAnyGroup : - ExcludeAnyGroup : -
IncludeAllGroup : -
Bypass Unbound Bandwidth Info(Kbit/sec)
CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: -
CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: -
CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: -
CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: -
--------------------------------
BFD Information
--------------------------------
NextSessionTunnelIndex : - PrevSessionTunnelIndex: -
NextLspId : - PrevLspId : -
# Run the display mpls te tunnel path Tunnel2048 command on LSRA to check the path of
the bypass CR-LSP. You can see that the path of the bypass CR-LSP is LSRA -> LSRF ->
LSRC.
[LSRA] display mpls te tunnel path Tunnel2048
Tunnel Interface Name : Tunnel2048
Lsp ID : 1.1.1.9 :1025 :4
Hop Information
Hop 0 172.6.1.1
Hop 1 172.6.1.2 Label 1025
Hop 2 6.6.6.9 Label 1025
Hop 3 172.7.1.1
Hop 4 172.7.1.2 Label 3
Hop 5 3.3.3.9 Label 3
# Run the display mpls te tunnel name Tunnel1 verbose command on LSRA. You can see
that the primary tunnel is bound to Tunnel2049 and the FRR next hop is 172.5.1.2.
[LSRA] display mpls te tunnel name Tunnel1 verbose
No : 1
Tunnel-Name : Tunnel1
Tunnel Interface Name : Tunnel1
TunnelIndex : 0 LSP Index : 2048
Session ID : 100 LSP ID : 1
LSR Role : Ingress LSP Type : Primary
Ingress LSR ID : 1.1.1.9
Egress LSR ID : 3.3.3.9
In-Interface : -
Out-Interface : Vlanif100
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
LspConstraint : -
ER-Hop Table Index : 0 AR-Hop Table Index: 1
C-Hop Table Index : 1
PrevTunnelIndexInSession: - NextTunnelIndexInSession: -
PSB Handle : 8198
Created Time : 2013-09-16 15:20:42+00:00
RSVP LSP Type : -
--------------------------------
DS-TE Information
--------------------------------
Bandwidth Reserved Flag : Unreserved
CT0 Bandwidth(Kbit/sec) : 0 CT1 Bandwidth(Kbit/sec): 0
CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0
CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0
CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0
Setup-Priority : 7 Hold-Priority : 7
--------------------------------
FRR Information
--------------------------------
Primary LSP Info
TE Attribute Flag : 0x63 Protected Flag : 0x2
Bypass In Use : Not Used
Bypass Tunnel Id : 11
BypassTunnel : Tunnel Index[Tunnel2049], InnerLabel[1024]
Bypass LSP ID : 4 FrrNextHop : 172.5.1.2
ReferAutoBypassHandle : -
FrrPrevTunnelTableIndex : - FrrNextTunnelTableIndex: -
Bypass Attribute(Not configured)
Setup Priority : - Hold Priority : -
HopLimit : - Bandwidth : -
IncludeAnyGroup : - ExcludeAnyGroup : -
IncludeAllGroup : -
Bypass Unbound Bandwidth Info(Kbit/sec)
CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: -
CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: -
CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: -
CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: -
--------------------------------
BFD Information
--------------------------------
NextSessionTunnelIndex : - PrevSessionTunnelIndex: -
NextLspId : - PrevLspId : -
# Run the display mpls te tunnel path Tunnel2049 command to check the path of the bypass
CR-LSP.
[LSRA] display mpls te tunnel path Tunnel2049
Tunnel Interface Name : Tunnel2049
Lsp ID : 1.1.1.9 :1026 :4
Hop Information
Hop 0 172.4.1.1
Hop 1 172.4.1.2 Label 1026
Hop 2 5.5.5.9 Label 1026
Hop 3 172.5.1.1
Hop 4 172.5.1.2 Label 3
Hop 5 3.3.3.9 Label 3
You can see that the path of the bypass CR-LSP is LSRA -> LSRE -> LSRC. This is because
the SRLG path calculation mode is set to preferred: CSPF first tries to calculate a path for
the bypass tunnel that avoids links in the same SRLG as the protected interface. If that
calculation fails, CSPF recalculates the path without taking the SRLG as a constraint.
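The two-pass behavior described above can be sketched in Python. The graph model and function names below are illustrative assumptions for this sketch, not Huawei device internals: each link carries a cost and a set of SRLG IDs, and the "preferred" mode retries without the SRLG constraint when the first pass finds no path.

```python
# Illustrative sketch of "preferred" SRLG-aware path calculation.
# Each link is (from, to, cost, srlg_set); names are assumptions.
import heapq

def shortest_path(links, src, dst, excluded):
    """Plain Dijkstra that ignores links listed in `excluded`."""
    adj = {}
    for a, b, cost, _ in links:
        if (a, b) not in excluded:
            adj.setdefault(a, []).append((b, cost))
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(heap, (d + c, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def srlg_preferred_path(links, src, dst, protected_srlgs):
    """Pass 1: avoid every link sharing an SRLG with the protected
    interface. Pass 2 ("preferred" mode): drop the constraint."""
    conflicting = {(a, b) for a, b, _, s in links if s & protected_srlgs}
    return (shortest_path(links, src, dst, conflicting)
            or shortest_path(links, src, dst, set()))
```

With a topology like this example (LSRA's links toward LSRB and LSRE both in SRLG 1), the first pass steers a bypass through LSRF; only when no SRLG-disjoint path exists does the second pass admit an SRLG-sharing link.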
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100 400 600
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls te auto-frr
mpls te srlg path-calculation preferred
mpls rsvp-te
mpls te cspf
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 3.3.3.9
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
mpls te
mpls te srlg 1
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
mpls
mpls te
mpls te srlg 1
mpls rsvp-te
#
interface Vlanif600
ip address 172.6.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 600
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 100
mpls te record-route label
mpls te path explicit-path pri-path
mpls te fast-reroute
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
network 172.6.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 500 700
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif700
ip address 172.7.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 700
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
network 172.7.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRE
#
sysname LSRE
#
vlan batch 400 500
#
mpls lsr-id 5.5.5.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 5.5.5.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 5.5.5.9 0.0.0.0
network 172.4.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
mpls-te enable
#
return
Networking Requirements
As shown in Figure 5-45, an MPLS TE tunnel is set up between LSRA and LSRC, with the
path LSRA -> LSRB -> LSRC.
The links LSRA -> LSRB and LSRA -> LSRE are in the same SRLG (SRLG 1 in this
example); the links LSRC -> LSRB and LSRC -> LSRE are in the same SRLG (SRLG 2 in
this example).
To improve reliability, a hot-standby CR-LSP needs to be established and the links of the
bypass CR-LSP and primary tunnel must be in different SRLGs.
NOTE
STP must be disabled on the network. Otherwise, an interface may be blocked by STP.
Figure 5-45 Networking for configuring SRLG based on CR-LSP hot standby
(The figure shows LSRA connected to LSRB over VLANIF 100, to LSRE over VLANIF 400,
and to LSRF over VLANIF 600; LSRB and LSRE connect to LSRC over VLANIF 200 and
VLANIF 500, and LSRF connects to LSRC over VLANIF 700. Interface and loopback
addresses are listed in the configuration files. The primary CR-LSP runs LSRA -> LSRB ->
LSRC.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure OSPF to ensure that there are
reachable routes between LSRs.
2. Configure an LSR ID on each node, enable MPLS, MPLS TE, and RSVP-TE both
globally and on each interface, enable CSPF on the ingress node, and enable OSPF TE.
3. On the ingress node of the primary tunnel, create a tunnel interface, and specify the IP
address, tunneling protocol, destination IP address, tunnel ID, and dynamic signaling
protocol RSVP-TE for the tunnel interface. The explicit path is LSRA -> LSRB ->
LSRC.
4. Configure SRLG numbers for SRLG member interfaces.
5. Configure the SRLG path calculation mode on the ingress node of the primary tunnel.
6. Configure a hot-standby CR-LSP on the ingress node of the primary tunnel.
Procedure
Step 1 Assign an IP address to each interface and configure OSPF.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100 400 600
[LSRA] interface vlanif 100
[LSRA-Vlanif100] ip address 172.1.1.1 255.255.255.0
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 400
[LSRA-Vlanif400] ip address 172.4.1.1 255.255.255.0
[LSRA-Vlanif400] quit
[LSRA] interface vlanif 600
[LSRA-Vlanif600] ip address 172.6.1.1 255.255.255.0
[LSRA-Vlanif600] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[LSRA-GigabitEthernet0/0/1] quit
[LSRA] interface gigabitethernet 0/0/2
[LSRA-GigabitEthernet0/0/2] port link-type trunk
[LSRA-GigabitEthernet0/0/2] port trunk allow-pass vlan 600
[LSRA-GigabitEthernet0/0/2] quit
[LSRA] interface gigabitethernet 0/0/3
[LSRA-GigabitEthernet0/0/3] port link-type trunk
[LSRA-GigabitEthernet0/0/3] port trunk allow-pass vlan 400
[LSRA-GigabitEthernet0/0/3] quit
[LSRA] interface loopback 1
[LSRA-LoopBack1] ip address 1.1.1.9 255.255.255.255
[LSRA-LoopBack1] quit
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 172.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] network 172.4.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] network 172.6.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
Configure IP addresses for interfaces of LSRB, LSRC, LSRE, and LSRF according to Figure
5-45. The configurations of LSRB, LSRC, LSRE, and LSRF are similar to the configuration
of LSRA, and are not mentioned here.
After the configurations are complete, run the display ip routing-table command on each
LSR. You can see that the LSRs learn the routes to Loopback1 of each other. The display on
LSRA is used as an example.
[LSRA] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
Destinations : 16 Routes : 18
Step 2 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] mpls rsvp-te
[LSRA-mpls] mpls te cspf
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls
[LSRA-Vlanif100] mpls te
[LSRA-Vlanif100] mpls rsvp-te
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 400
[LSRA-Vlanif400] mpls
[LSRA-Vlanif400] mpls te
[LSRA-Vlanif400] mpls rsvp-te
[LSRA-Vlanif400] quit
[LSRA] interface vlanif 600
[LSRA-Vlanif600] mpls
[LSRA-Vlanif600] mpls te
[LSRA-Vlanif600] mpls rsvp-te
[LSRA-Vlanif600] quit
The configurations of LSRB, LSRC, LSRE, and LSRF are similar to the configuration of
LSRA, and are not mentioned here. CSPF only needs to be configured on the ingress node of
the primary tunnel.
Step 3 Configure OSPF TE.
# Configure LSRA.
[LSRA] ospf
[LSRA-ospf-1] opaque-capability enable
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
The configurations of LSRB, LSRC, LSRE, and LSRF are similar to the configuration of
LSRA, and are not mentioned here.
Step 4 On LSRA, create an MPLS TE tunnel for the primary CR-LSP.
# Configure the explicit path of the primary CR-LSP.
[LSRA] explicit-path pri-path
[LSRA-explicit-path-pri-path] next hop 172.1.1.2
[LSRA-explicit-path-pri-path] next hop 172.2.1.2
[LSRA-explicit-path-pri-path] next hop 3.3.3.9
[LSRA-explicit-path-pri-path] quit
Run the display interface tunnel 1 command on LSRA. You can see that the tunnel status is
Up.
[LSRA] display interface tunnel 1
Tunnel1 current state : UP
Line protocol current state : UP
Last line protocol up time : 2013-01-22 16:57:00
Description:
...
# Configure LSRB.
[LSRB] interface vlanif 200
[LSRB-Vlanif200] mpls te srlg 2
[LSRB-Vlanif200] quit
# Configure LSRE.
[LSRE] interface vlanif 500
[LSRE-Vlanif500] mpls te srlg 2
[LSRE-Vlanif500] quit
Run the display mpls te srlg all command to view SRLG information and the interfaces that
belong to the SRLG. The display on LSRA is used as an example.
[LSRA] display mpls te srlg all
Total SRLG supported : 1024
Total SRLG configured : 2
SRLG 1: Vlanif100
Vlanif400
Run the display mpls te link-administration srlg-information command to view the
SRLGs to which the interfaces belong. The display on LSRA is used as an example.
[LSRA] display mpls te link-administration srlg-information
SRLGs on Vlanif100 :
1
SRLGs on Vlanif400 :
1
Run the display mpls te cspf tedb srlg command to view TEDB information of the specified
SRLG.
[LSRA] display mpls te cspf tedb srlg 1
Interface-Address IGP-Type Area
172.1.1.1 OSPF 0
172.4.1.1 OSPF 0
[LSRA] display mpls te cspf tedb srlg 2
Interface-Address IGP-Type Area
172.2.1.1 OSPF 0
172.5.1.1 OSPF 0
Run the display mpls te tunnel-interface command on LSRA. You can see that the hot-
standby CR-LSP has been established.
[LSRA] display mpls te tunnel-interface
----------------------------------------------------------------
Tunnel1
----------------------------------------------------------------
Tunnel State Desc : UP
Active LSP : Primary LSP
Session ID : 100
Ingress LSR ID : 1.1.1.9 Egress LSR ID: 3.3.3.9
Admin State : UP Oper State : UP
Primary LSP State : UP
Main LSP State : READY LSP ID : 54
Hot-Standby LSP State : UP
Main LSP State : READY LSP ID : 32780
Run the display mpls te hot-standby state interface tunnel 1 command on LSRA to view
the hot-standby CR-LSP.
[LSRA] display mpls te hot-standby state interface tunnel 1
---------------------------------------------------------------------
Verbose information about the Tunnel1 hot-standby state
---------------------------------------------------------------------
session id : 100
main LSP token : 0x51
hot-standby LSP token : 0x4f
HSB switch result : Primary LSP
HSB switch reason : -
WTR config time : 10s
WTR remain time : -
using overlapped path : no
Run the display mpls te hot-standby state interface tunnel 1 command on LSRA. You can
see that the hot-standby LSP token is 0x0. This means that the hot-standby LSP is not set up
even though paths that could carry it exist, because in strict mode CSPF rejects any path in
the same SRLG as the primary CR-LSP.
[LSRA] display mpls te hot-standby state interface tunnel 1
---------------------------------------------------------------------
Verbose information about the Tunnel1 hot-standby state
---------------------------------------------------------------------
session id : 100
main LSP token : 0x51
hot-standby LSP token : 0x0
HSB switch result : Primary LSP
HSB switch reason : -
WTR config time : 10s
WTR remain time : -
using overlapped path : -
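The strict behavior above (token 0x0 despite available paths) contrasts with preferred mode, which falls back to an SRLG-sharing path. A minimal sketch of that decision follows; the function name and data shapes are illustrative assumptions, not device internals.

```python
# Sketch of strict vs. preferred SRLG handling for a hot-standby LSP.
# A candidate path is a list of (link_name, srlg_set) pairs.
def pick_hot_standby(candidates, primary_srlgs, mode="strict"):
    # Keep only paths sharing no SRLG with the primary CR-LSP.
    disjoint = [p for p in candidates
                if not any(srlgs & primary_srlgs for _, srlgs in p)]
    if disjoint:
        return disjoint[0]
    if mode == "strict":
        return None  # refuse to set up the LSP (token 0x0 in the display)
    return candidates[0] if candidates else None  # preferred: fall back
```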
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100 400 600
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls te srlg path-calculation strict
mpls rsvp-te
mpls te cspf
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 3.3.3.9
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
mpls te
mpls te srlg 1
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
mpls
mpls te
mpls te srlg 1
mpls rsvp-te
#
interface Vlanif600
ip address 172.6.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 600
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 100
mpls te record-route label
mpls te path explicit-path pri-path
mpls te backup hot-standby
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
network 172.6.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
mpls te
mpls te srlg 2
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 500 700
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif700
ip address 172.7.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 700
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
network 172.7.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRE
#
sysname LSRE
#
vlan batch 400 500
#
mpls lsr-id 5.5.5.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
mpls
mpls te
mpls te srlg 2
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 5.5.5.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 5.5.5.9 0.0.0.0
network 172.4.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRF
#
sysname LSRF
#
vlan batch 600 700
#
mpls lsr-id 6.6.6.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif600
ip address 172.6.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif700
ip address 172.7.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 600
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 700
#
interface LoopBack1
ip address 6.6.6.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 6.6.6.9 0.0.0.0
network 172.6.1.0 0.0.0.255
network 172.7.1.0 0.0.0.255
mpls-te enable
#
return
Networking Requirements
Figure 5-46 shows an MPLS VPN. A TE tunnel with LSRA as the ingress node and LSRC as
the egress node needs to be established on LSRA. A hot-standby CR-LSP and best-effort path
also need to be configured.
l The path of the primary CR-LSP is LSRA -> LSRB -> LSRC.
l The path of the backup CR-LSP is LSRA -> LSRD -> LSRC.
l The best-effort path is LSRA -> LSRD -> LSRB -> LSRC.
When the primary CR-LSP fails, traffic switches to the backup CR-LSP. After the primary
CR-LSP recovers, traffic switches back to the primary CR-LSP in 15 seconds. If both the
primary CR-LSP and backup CR-LSP fail, traffic switches to the best-effort path.
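The switchover and revertive behavior described above can be sketched as a small state machine in Python. The class, method, and attribute names are illustrative assumptions for this sketch, not device internals.

```python
# Sketch of revertive hot-standby switching with a WTR (wait-to-restore)
# timer and a best-effort fallback.
class TunnelState:
    PREFERENCE = ("primary", "hot_standby", "best_effort")

    def __init__(self, wtr_seconds=15):
        self.wtr = wtr_seconds
        self.up = {lsp: True for lsp in self.PREFERENCE}
        self.active = "primary"
        self.revert_at = None

    def lsp_event(self, lsp, is_up, now):
        self.up[lsp] = is_up
        if not is_up and lsp == self.active:
            # Switch immediately to the most-preferred LSP still up.
            self.active = next((c for c in self.PREFERENCE if self.up[c]), None)
        elif is_up and lsp == "primary" and self.active != "primary":
            # Revertive mode: arm the WTR timer instead of switching back now.
            self.revert_at = now + self.wtr

    def tick(self, now):
        # Switch back to the primary CR-LSP once the WTR timer expires.
        if self.revert_at is not None and now >= self.revert_at and self.up["primary"]:
            self.active, self.revert_at = "primary", None
```

In this example the timer corresponds to the mpls te backup hot-standby wtr 15 setting, matching the 15-second switchback described above.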
NOTE
STP must be disabled on the network. Otherwise, some interfaces may be blocked by STP.
(Figure 5-46 shows LSRA connected to LSRB over VLANIF 100 and to LSRD over
VLANIF 500, LSRC connected to LSRB over VLANIF 200 and to LSRD over VLANIF 300,
and LSRB connected to LSRD over VLANIF 400. Interface and loopback addresses are
listed in the configuration files.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure OSPF to ensure that there are
reachable routes between LSRs.
2. Configure an LSR ID on each node, enable MPLS, MPLS TE, and RSVP-TE both
globally and on each interface, enable CSPF on the ingress node, and enable OSPF TE.
3. Specify explicit paths for the primary and backup CR-LSPs on LSRA.
4. Create a tunnel interface with LSRC as the egress node on LSRA, specify an explicit
path, configure the hot-standby CR-LSP and best-effort path, and set the WTR time to 15
seconds.
Procedure
Step 1 Assign an IP address to each interface and configure OSPF.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100 500
[LSRA] interface vlanif 100
[LSRA-Vlanif100] ip address 172.1.1.1 255.255.255.0
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 500
[LSRA-Vlanif500] ip address 172.5.1.1 255.255.255.0
[LSRA-Vlanif500] quit
# Configure IP addresses for interfaces of LSRB, LSRC, and LSRD according to Figure 5-46.
The configurations on LSRB, LSRC, and LSRD are similar to the configuration of LSRA,
and are not mentioned here.
After the configurations are complete, run the display ip routing-table command on the
LSRs. You can see that the LSRs learn the routes to Loopback1 of each other.
Step 2 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
On each node, enable MPLS TE and RSVP-TE in the MPLS view and in the interface view.
Enable CSPF on the ingress node.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] mpls rsvp-te
[LSRA-mpls] mpls te cspf
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls
[LSRA-Vlanif100] mpls te
[LSRA-Vlanif100] mpls rsvp-te
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 500
[LSRA-Vlanif500] mpls
[LSRA-Vlanif500] mpls te
[LSRA-Vlanif500] mpls rsvp-te
[LSRA-Vlanif500] quit
The configurations on LSRB, LSRC, and LSRD are similar to the configuration of LSRA,
and are not mentioned here. CSPF only needs to be configured on the ingress node of the
primary and backup CR-LSPs; that is, CSPF needs to be enabled only on LSRA.
Step 3 Configure OSPF TE.
# Configure LSRA.
[LSRA] ospf
[LSRA-ospf-1] opaque-capability enable
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
The configurations on LSRB, LSRC, and LSRD are similar to the configuration of LSRA,
and are not mentioned here.
Step 4 Configure the explicit paths for the primary and backup CR-LSPs.
# Configure the explicit path of the primary CR-LSP on LSRA.
[LSRA] explicit-path pri-path
[LSRA-explicit-path-pri-path] next hop 172.1.1.2
[LSRA-explicit-path-pri-path] next hop 172.2.1.2
[LSRA-explicit-path-pri-path] next hop 3.3.3.9
[LSRA-explicit-path-pri-path] quit
After the configurations are complete, run the display explicit-path command to view the configured explicit paths.
[LSRA] display explicit-path pri-path
Path Name : pri-path Path Status : Enabled
1 172.1.1.2 Strict Include
2 172.2.1.2 Strict Include
3 3.3.3.9 Strict Include
# Configure CR-LSP hot standby on the tunnel interface, set the WTR time to 15 seconds,
specify an explicit path, and configure the best-effort path.
[LSRA-Tunnel1] mpls te backup hot-standby wtr 15
[LSRA-Tunnel1] mpls te path explicit-path backup-path secondary
[LSRA-Tunnel1] mpls te backup ordinary best-effort
[LSRA-Tunnel1] mpls te commit
[LSRA-Tunnel1] quit
After the configurations are complete, run the display mpls te tunnel-interface tunnel 1
command on LSRA. You can see that the primary and backup CR-LSPs are successfully
established.
[LSRA] display mpls te tunnel-interface tunnel 1
----------------------------------------------------------------
Tunnel1
----------------------------------------------------------------
Tunnel State Desc : UP
Active LSP : Primary LSP
Session ID : 100
Ingress LSR ID : 1.1.1.9 Egress LSR ID: 3.3.3.9
Admin State : UP Oper State : UP
Primary LSP State : UP
Main LSP State : READY LSP ID : 10
Hot-Standby LSP State : UP
Main LSP State : READY LSP ID : 32773
Run the display mpls te hot-standby state interface tunnel 1 command on LSRA to view
CR-LSP hot standby information.
[LSRA] display mpls te hot-standby state interface Tunnel 1
---------------------------------------------------------------------
Verbose information about the Tunnel1 hot-standby state
---------------------------------------------------------------------
session id : 100
main LSP token : 0xc
hot-standby LSP token : 0xb
HSB switch result : Primary LSP
HSB switch reason : -
WTR config time : 15s
WTR remain time : -
using overlapped path : no
Run the ping lsp te command on LSRA to detect connectivity of the hot-standby CR-LSP.
[LSRA] ping lsp te tunnel 1 hot-standby
LSP PING FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 : 100 data bytes, press CTRL_C to break
Reply from 3.3.3.9: bytes=100 Sequence=1 time=11 ms
Reply from 3.3.3.9: bytes=100 Sequence=2 time=2 ms
Reply from 3.3.3.9: bytes=100 Sequence=3 time=2 ms
Reply from 3.3.3.9: bytes=100 Sequence=4 time=2 ms
Reply from 3.3.3.9: bytes=100 Sequence=5 time=2 ms
--- FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 ping statistics ---
5 packet(s) transmitted
5 packet(s) received
0.00% packet loss
round-trip min/avg/max = 2/3/11 ms
Run the tracert lsp te command on LSRA to check the path of the hot-standby CR-LSP.
[LSRA] tracert lsp te tunnel 1 hot-standby
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to break.
TTL Replier Time Type Downstream
0 Ingress 172.5.1.2/[1027 ]
1 172.5.1.2 9 ms Transit 172.3.1.1/[3 ]
2 3.3.3.9 10 ms Egress
Run the display mpls te tunnel-interface tunnel 1 command on LSRA. You can see that
traffic switches to the backup CR-LSP.
[LSRA] display mpls te tunnel-interface tunnel 1
----------------------------------------------------------------
Tunnel1
----------------------------------------------------------------
Tunnel State Desc : UP
Active LSP : Hot-Standby LSP
Session ID : 100
Ingress LSR ID : 1.1.1.9 Egress LSR ID: 3.3.3.9
Admin State : UP Oper State : UP
Primary LSP State : DOWN
Main LSP State : SETTING UP
After the cable is reconnected to GE0/0/1 (or the undo shutdown command is run on
VLANIF100 of LSRA), traffic switches back to the primary CR-LSP after 15 seconds.
If you remove the cable from GE0/0/1 on LSRA or LSRB and the cable from GE0/0/1 on
LSRC or LSRD, the tunnel interface goes Down and then comes Up again. This means that
the best-effort path has been set up successfully and traffic has switched to it.
# Run the shutdown command on VLANIF100 of LSRA, and then run the shutdown
command on VLANIF300 of LSRC.
[LSRA] interface vlanif 100
[LSRA-Vlanif100] shutdown
[LSRA-Vlanif100] quit
[LSRC] interface vlanif 300
[LSRC-Vlanif300] shutdown
[LSRC-Vlanif300] quit
Run the display mpls te tunnel-interface tunnel 1 command on LSRA. You can see that the
tunnel interface becomes Down and the best-effort path is being established.
[LSRA] display mpls te tunnel-interface tunnel 1
----------------------------------------------------------------
Tunnel1
----------------------------------------------------------------
Tunnel State Desc : DOWN
Active LSP : -
Session ID : 100
Ingress LSR ID : 1.1.1.9 Egress LSR ID: 3.3.3.9
Admin State : UP Oper State : DOWN
Primary LSP State : DOWN
Main LSP State : SETTING UP
Hot-Standby LSP State : DOWN
Main LSP State : SETTING UP
Best-Effort LSP State : DOWN
Main LSP State : SETTING UP
After several seconds, run the display mpls te tunnel-interface tunnel 1 command on
LSRA. You can see that the tunnel interface is Up and the best-effort path is successfully
established.
[LSRA] display mpls te tunnel-interface tunnel 1
----------------------------------------------------------------
Tunnel1
----------------------------------------------------------------
Tunnel State Desc : UP
Active LSP : Best-Effort LSP
Session ID : 100
Ingress LSR ID : 1.1.1.9 Egress LSR ID: 3.3.3.9
Admin State : UP Oper State : UP
Primary LSP State : DOWN
Main LSP State : SETTING UP
Hot-Standby LSP State : DOWN
Main LSP State : SETTING UP
Best-Effort LSP State : UP
Main LSP State : READY LSP ID : 32776
Hop 2 4.4.4.9
Hop 3 172.4.1.2
Hop 4 172.4.1.1
Hop 5 2.2.2.9
Hop 6 172.2.1.1
Hop 7 172.2.1.2
Hop 8 3.3.3.9
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100 500
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path backup-path
next hop 172.5.1.2
next hop 172.3.1.1
next hop 3.3.3.9
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 3.3.3.9
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 100
mpls te record-route
mpls te path explicit-path pri-path
mpls te path explicit-path backup-path secondary
mpls te backup hot-standby mode revertive wtr 15
mpls te backup ordinary best-effort
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200 400
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 300
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
network 172.3.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 300 400 500
#
mpls lsr-id 4.4.4.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 172.3.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
mpls-te enable
#
return
Networking Requirements
As shown in Figure 5-47, the primary CR-LSP is along the path LSRA -> LSRB -> LSRC ->
LSRD, and the link between LSRB and LSRC needs to be protected by FRR.
A bypass CR-LSP is set up along the path LSRB -> LSRE -> LSRC. LSRB functions as the
PLR and LSRC functions as the MP.
The primary and bypass MPLS TE tunnels need to be set up by using explicit paths. RSVP-TE is used as the signaling protocol.
NOTE
STP must be disabled on the network. Otherwise, an interface may be blocked by STP.
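On these switches, STP can be disabled on the trunk interfaces that carry MPLS traffic. A minimal sketch on LSRB (the interface name follows the configuration files in this example):

```
[LSRB] interface gigabitethernet 0/0/1
[LSRB-GigabitEthernet0/0/1] stp disable
[LSRB-GigabitEthernet0/0/1] quit
```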
(Figure 5-47 shows the networking: the primary CR-LSP runs LSRA -> LSRB -> LSRC -> LSRD, and the bypass CR-LSP runs LSRB -> LSRE -> LSRC. Interface addresses are listed in the configuration files.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface, enable IS-IS globally, configure the NET, and
enable IS-IS on each interface including the loopback interface.
2. Configure an ID for each LSR and globally enable MPLS, MPLS TE, RSVP-TE, and
CSPF on each node and interface. Enable IS-IS TE and change the cost type.
3. On the ingress node of the primary tunnel, create a tunnel interface, and specify the IP
address, tunneling protocol, destination IP address, tunnel ID, and dynamic signaling
protocol RSVP-TE for the tunnel interface.
4. Enable TE FRR on the tunnel interface of the primary tunnel on the ingress node.
5. Create a tunnel interface on the ingress node LSRB of the bypass tunnel of the protected
link, set the IP address, tunnel protocol, destination IP address, tunnel ID, and RSVP-TE
for the tunnel interface, and specify the interface of the protected link.
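Step 5 of the roadmap can be sketched as follows. This is a minimal sketch: the tunnel interface number (Tunnel2) and tunnel ID are assumptions, while the destination (3.3.3.9, the LSR ID of the MP LSRC), the explicit path name by-path, and the protected interface VLANIF200 follow the configuration files in this example:

```
[LSRB] interface tunnel 2
[LSRB-Tunnel2] ip address unnumbered interface loopback 1
[LSRB-Tunnel2] tunnel-protocol mpls te
[LSRB-Tunnel2] destination 3.3.3.9
[LSRB-Tunnel2] mpls te tunnel-id 200
[LSRB-Tunnel2] mpls te path explicit-path by-path
[LSRB-Tunnel2] mpls te bypass-tunnel
[LSRB-Tunnel2] mpls te protected-interface vlanif 200
[LSRB-Tunnel2] mpls te commit
```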
Procedure
Step 1 Assign IP addresses to interfaces.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100
# Configure IP addresses for interfaces of LSRB, LSRC, LSRD, and LSRE according to
Figure 5-47. The configurations of LSRB, LSRC, LSRD, and LSRE are similar to the
configuration of LSRA, and are not mentioned here.
Step 2 Configure IS-IS to advertise routes.
# Configure LSRA.
[LSRA] isis 1
[LSRA-isis-1] network-entity 00.0005.0000.0000.0001.00
[LSRA-isis-1] is-level level-2
[LSRA-isis-1] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] isis enable 1
[LSRA-Vlanif100] quit
[LSRA] interface loopback 1
[LSRA-LoopBack1] isis enable 1
[LSRA-LoopBack1] quit
The configurations of LSRB, LSRC, LSRD, and LSRE are similar to the configuration of
LSRA, and are not mentioned here.
After the configurations are complete, run the display ip routing-table command on each
LSR. You can see that the LSRs learn the routes from each other. The display on LSRA is
used as an example.
[LSRA] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
Destinations : 13 Routes : 13
Step 3 Configure basic MPLS functions and enable MPLS TE, CSPF, RSVP-TE, and IS-IS TE.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] mpls rsvp-te
[LSRA-mpls] mpls te cspf
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls
[LSRA-Vlanif100] mpls te
[LSRA-Vlanif100] mpls rsvp-te
[LSRA-Vlanif100] quit
[LSRA] isis
[LSRA-isis-1] cost-style wide
[LSRA-isis-1] traffic-eng level-2
[LSRA-isis-1] quit
The configurations of LSRB, LSRC, LSRD, and LSRE are similar to the configuration of
LSRA, and are not mentioned here. CSPF needs to be enabled only on the ingress node of the primary tunnel (LSRA) and the ingress node of the bypass tunnel (LSRB); CSPF does not need to be enabled on LSRC, LSRD, or LSRE.
Step 4 On LSRA, create an MPLS TE tunnel for the primary CR-LSP.
# Configure the explicit path of the primary CR-LSP.
[LSRA] explicit-path pri-path
[LSRA-explicit-path-pri-path] next hop 172.1.1.2
[LSRA-explicit-path-pri-path] next hop 172.2.1.2
[LSRA-explicit-path-pri-path] next hop 172.3.1.2
[LSRA-explicit-path-pri-path] next hop 4.4.4.9
[LSRA-explicit-path-pri-path] quit
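The TE FRR commands that follow are entered on the tunnel interface of the primary tunnel. Per the LSRA configuration file at the end of this example, the tunnel interface is created as follows:

```
[LSRA] interface tunnel 1
[LSRA-Tunnel1] ip address unnumbered interface loopback 1
[LSRA-Tunnel1] tunnel-protocol mpls te
[LSRA-Tunnel1] destination 4.4.4.9
[LSRA-Tunnel1] mpls te tunnel-id 100
[LSRA-Tunnel1] mpls te record-route label
[LSRA-Tunnel1] mpls te path explicit-path pri-path
```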
# Enable TE FRR.
[LSRA-Tunnel1] mpls te fast-reroute
[LSRA-Tunnel1] mpls te commit
[LSRA-Tunnel1] quit
After the configurations are complete, run the display interface tunnel command on LSRA.
You can see that the status of Tunnel1 is Up.
[LSRA] display interface tunnel 1
Tunnel1 current state : UP
Line protocol current state : UP
Last line protocol up time : 2013-01-21 10:58:49
Description:
...
Run the display mpls te tunnel verbose command on LSRA. You can view detailed
information about the tunnel interface.
[LSRA] display mpls te tunnel verbose
No : 1
Tunnel-Name : Tunnel1
Tunnel Interface Name : Tunnel1
TunnelIndex : 0 LSP Index : 2048
Session ID : 100 LSP ID : 3
LSR Role : Ingress LSP Type : Primary
Ingress LSR ID : 1.1.1.9
Egress LSR ID : 4.4.4.9
In-Interface : -
Out-Interface : Vlanif100
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
LspConstraint : -
ER-Hop Table Index : 1 AR-Hop Table Index: 0
C-Hop Table Index : 1
PrevTunnelIndexInSession: - NextTunnelIndexInSession: -
PSB Handle : 8253
Created Time : 2013-09-16 17:57:06+00:00
RSVP LSP Type : -
--------------------------------
DS-TE Information
--------------------------------
Bandwidth Reserved Flag : Unreserved
CT0 Bandwidth(Kbit/sec) : 0 CT1 Bandwidth(Kbit/sec): 0
CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0
CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0
CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0
Setup-Priority : 7 Hold-Priority : 7
--------------------------------
FRR Information
--------------------------------
Primary LSP Info
TE Attribute Flag : 0x63 Protected Flag : 0x0
Bypass In Use : Not Exists
Bypass Tunnel Id : -
BypassTunnel : -
Bypass LSP ID : - FrrNextHop : -
ReferAutoBypassHandle : -
FrrPrevTunnelTableIndex : - FrrNextTunnelTableIndex: -
Bypass Attribute(Not configured)
Setup Priority : - Hold Priority : -
HopLimit : - Bandwidth : -
IncludeAnyGroup : - ExcludeAnyGroup : -
IncludeAllGroup : -
Bypass Unbound Bandwidth Info(Kbit/sec)
CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: -
CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: -
CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: -
CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: -
--------------------------------
BFD Information
--------------------------------
NextSessionTunnelIndex : - PrevSessionTunnelIndex: -
NextLspId : - PrevLspId : -
After the configurations are complete, run the display interface tunnel command on LSRB.
You can see that the status of Tunnel2 is Up.
Run the display mpls lsp command on all the LSRs. You can view the LSP entries and see that two LSPs pass through LSRB and LSRC.
[LSRA] display mpls lsp
----------------------------------------------------------------------
LSP Information: RSVP LSP
----------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
4.4.4.9/32 NULL/1032 -/Vlanif100
[LSRB] display mpls lsp
----------------------------------------------------------------------
LSP Information: RSVP LSP
----------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
4.4.4.9/32 1032/1040 Vlanif100/Vlanif200
3.3.3.9/32 NULL/1025 -/Vlanif400
[LSRC] display mpls lsp
----------------------------------------------------------------------
LSP Information: RSVP LSP
----------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
4.4.4.9/32 1040/3 Vlanif200/Vlanif300
3.3.3.9/32 3/NULL Vlanif500/-
[LSRD] display mpls lsp
----------------------------------------------------------------------
LSP Information: RSVP LSP
----------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
4.4.4.9/32 3/NULL Vlanif300/-
[LSRE] display mpls lsp
----------------------------------------------------------------------
LSP Information: RSVP LSP
----------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
3.3.3.9/32 1025/3 Vlanif400/Vlanif500
Run the display mpls te tunnel command on all the LSRs. You can see that the tunnels have been established and that two tunnels pass through LSRB and LSRC.
[LSRA] display mpls te tunnel
------------------------------------------------------------------------------
Ingress LsrId Destination LSPID In/Out Label R Tunnel-name
------------------------------------------------------------------------------
1.1.1.9 4.4.4.9 3 --/1032 I Tunnel1
[LSRB] display mpls te tunnel
------------------------------------------------------------------------------
Ingress LsrId Destination LSPID In/Out Label R Tunnel-name
------------------------------------------------------------------------------
1.1.1.9 4.4.4.9 3 1032/1040 T Tunnel1
2.2.2.9 3.3.3.9 2 --/1025 I Tunnel2
[LSRC] display mpls te tunnel
------------------------------------------------------------------------------
Ingress LsrId Destination LSPID In/Out Label R Tunnel-name
------------------------------------------------------------------------------
2.2.2.9 3.3.3.9 2 3/-- E Tunnel2
1.1.1.9 4.4.4.9 3 1040/3 T Tunnel1
[LSRD] display mpls te tunnel
------------------------------------------------------------------------------
Ingress LsrId Destination LSPID In/Out Label R Tunnel-name
------------------------------------------------------------------------------
1.1.1.9 4.4.4.9 3 3/-- E Tunnel1
[LSRE] display mpls te tunnel
------------------------------------------------------------------------------
Ingress LsrId Destination LSPID In/Out Label R Tunnel-name
------------------------------------------------------------------------------
2.2.2.9 3.3.3.9 2 1025/3 T Tunnel2
Run the display mpls te tunnel name Tunnel1 verbose command on LSRB. You can see that the bypass tunnel is bound to the outbound interface VLANIF200 but is not in use.
[LSRB] display mpls te tunnel name Tunnel1 verbose
No : 1
Tunnel-Name : Tunnel1
Tunnel Interface Name : -
TunnelIndex : 1 LSP Index : 4098
Session ID : 100 LSP ID : 3
LSR Role : Transit
Ingress LSR ID : 1.1.1.9
Egress LSR ID : 4.4.4.9
In-Interface : Vlanif100
Out-Interface : Vlanif200
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
ER-Hop Table Index : 1 AR-Hop Table Index: 0
C-Hop Table Index : -
PrevTunnelIndexInSession: - NextTunnelIndexInSession: -
PSB Handle : 8247
Created Time : 2013-09-16 17:59:06+00:00
RSVP LSP Type : -
--------------------------------
DS-TE Information
--------------------------------
Bandwidth Reserved Flag : Unreserved
CT0 Bandwidth(Kbit/sec) : 0 CT1 Bandwidth(Kbit/sec): 0
CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0
CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0
CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0
Setup-Priority : 7 Hold-Priority : 7
--------------------------------
FRR Information
--------------------------------
Primary LSP Info
TE Attribute Flag : 0x63 Protected Flag : 0x1
Bypass In Use : Not Used
Bypass Tunnel Id : 18221014254
BypassTunnel : Tunnel Index[Tunnel2], InnerLabel[1040]
Bypass LSP ID : 2 FrrNextHop : 172.5.1.2
ReferAutoBypassHandle : -
FrrPrevTunnelTableIndex : - FrrNextTunnelTableIndex: -
Bypass Attribute(Not configured)
Setup Priority : - Hold Priority : -
HopLimit : - Bandwidth : -
IncludeAnyGroup : - ExcludeAnyGroup : -
IncludeAllGroup : -
Bypass Unbound Bandwidth Info(Kbit/sec)
CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: -
CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: -
CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: -
CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: -
--------------------------------
BFD Information
--------------------------------
NextSessionTunnelIndex : - PrevSessionTunnelIndex: -
NextLspId : - PrevLspId : -
Run the display interface tunnel 1 command on LSRA. You can view the status of the
primary CR-LSP and that the status of the tunnel interface is still Up.
Run the tracert lsp te tunnel 1 command on LSRA. You can view the path that the tunnel
passes.
[LSRA] tracert lsp te tunnel 1
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C t
o break.
TTL Replier Time Type Downstream
0 Ingress 172.1.1.2/[1032 ]
1 172.1.1.2 2 ms Transit 172.4.1.2/[1040 1025 ]
2 172.4.1.2 2 ms Transit 172.5.1.2/[3 ]
3 172.5.1.2 1 ms Transit 172.3.1.2/[3 ]
4 4.4.4.9 11 ms Egress
The preceding output shows that traffic on the protected link has been switched to the bypass CR-LSP: the path now passes through LSRE (172.4.1.2 and 172.5.1.2).
NOTE
Run the display mpls te tunnel-interface command to view detailed information about tunnel
interfaces. You can view two CR-LSPs in Up state. This is because FRR establishes a new LSP by using
the make-before-break mechanism. The original LSP is deleted only after the new LSP is established
successfully.
Run the display mpls te tunnel name Tunnel1 verbose command on LSRB. You can see
that the bypass CR-LSP is in use.
[LSRB] display mpls te tunnel name Tunnel1 verbose
No : 1
Tunnel-Name : Tunnel1
Tunnel Interface Name : -
TunnelIndex : 1 LSP Index : 4098
Session ID : 100 LSP ID : 3
LSR Role : Transit
Ingress LSR ID : 1.1.1.9
Egress LSR ID : 4.4.4.9
In-Interface : Vlanif100
Out-Interface : Vlanif200
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
ER-Hop Table Index : - AR-Hop Table Index: 2
C-Hop Table Index : -
PrevTunnelIndexInSession: - NextTunnelIndexInSession: -
PSB Handle : 8247
Created Time : 2013-09-16 18:17:06+00:00
RSVP LSP Type : -
--------------------------------
DS-TE Information
--------------------------------
Bandwidth Reserved Flag : Unreserved
CT0 Bandwidth(Kbit/sec) : 0 CT1 Bandwidth(Kbit/sec): 0
CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0
CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0
CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0
Setup-Priority : 7 Hold-Priority : 7
--------------------------------
FRR Information
--------------------------------
Primary LSP Info
TE Attribute Flag : 0x63 Protected Flag : 0x1
Bypass In Use : In Use
Bypass Tunnel Id : 18221014254
Run the display interface tunnel 1 command on LSRA. You can view the primary CR-LSP
status and that the tunnel interface status is Up.
After a period of time, run the display mpls te tunnel name Tunnel1 verbose command on LSRB. You can see that the outbound interface of Tunnel1 is again VLANIF200 and that the bypass tunnel is no longer in use.
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 172.3.1.2
next hop 4.4.4.9
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0001.00
traffic-eng level-2
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.9
mpls te tunnel-id 100
mpls te record-route label
mpls te path explicit-path pri-path
mpls te fast-reroute
mpls te commit
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200 400
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls te timer fast-reroute 120
mpls rsvp-te
mpls te cspf
#
explicit-path by-path
next hop 172.4.1.2
next hop 172.5.1.2
next hop 3.3.3.9
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0002.00
traffic-eng level-2
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 300
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
isis enable 1
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 300
#
mpls lsr-id 4.4.4.9
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0004.00
traffic-eng level-2
#
interface Vlanif300
ip address 172.3.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
isis enable 1
#
return
l Configuration file of LSRE
#
sysname LSRE
#
vlan batch 400 500
#
mpls lsr-id 5.5.5.9
mpls
mpls te
mpls rsvp-te
#
isis 1
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0005.00
traffic-eng level-2
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 5.5.5.9 255.255.255.255
isis enable 1
#
return
NOTE
STP must be disabled on the network. Otherwise, an interface may be blocked by STP.
(Figure 5-48 shows the networking: the primary CR-LSP runs LSRA -> LSRB -> LSRC -> LSRD; LSRE connects LSRB and LSRC over VLANIF 400 and VLANIF 500, and LSRF connects LSRA and LSRC over VLANIF 600 and VLANIF 700. Interface addresses are listed in the configuration files.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure OSPF to ensure that there are
reachable routes between LSRs.
2. Configure an ID for each LSR and globally enable MPLS, MPLS TE, RSVP-TE, CSPF
on each node and interface, and enable OSPF TE.
3. Enable auto TE FRR in the MPLS view of the ingress node of the primary tunnel and
configure node protection. Enable auto TE FRR in the MPLS view of the ingress node of
the bypass tunnel and configure link protection.
4. On the ingress node of the primary tunnel, create a tunnel interface, and specify the IP
address, tunneling protocol, destination IP address, tunnel ID, and RSVP-TE for the
tunnel interface.
5. Enable TE FRR on the tunnel interface of the ingress node of the primary tunnel.
Procedure
Step 1 Assign an IP address to each interface and configure OSPF.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100 600
[LSRA] interface vlanif 100
[LSRA-Vlanif100] ip address 172.1.1.1 255.255.255.0
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 600
[LSRA-Vlanif600] ip address 172.6.1.1 255.255.255.0
[LSRA-Vlanif600] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[LSRA-GigabitEthernet0/0/1] quit
[LSRA] interface gigabitethernet 0/0/2
[LSRA-GigabitEthernet0/0/2] port link-type trunk
[LSRA-GigabitEthernet0/0/2] port trunk allow-pass vlan 600
[LSRA-GigabitEthernet0/0/2] quit
[LSRA] interface loopback 1
[LSRA-LoopBack1] ip address 1.1.1.9 255.255.255.255
[LSRA-LoopBack1] quit
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
[LSRA-ospf-1-area-0.0.0.0] network 172.1.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] network 172.6.1.0 0.0.0.255
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
# Configure IP addresses for interfaces on LSRB, LSRC, LSRD, LSRE, and LSRF according
to Figure 5-48. The configurations of LSRB, LSRC, LSRD, LSRE, and LSRF are similar to
the configuration of LSRA, and are not mentioned here.
After the configurations are complete, run the display ip routing-table command on each
LSR. You can see that the LSRs learn the routes to Loopback1 of each other. The display on
LSRA is used as an example.
[LSRA] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Step 2 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] mpls rsvp-te
[LSRA-mpls] mpls te cspf
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls
[LSRA-Vlanif100] mpls te
[LSRA-Vlanif100] mpls rsvp-te
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 600
[LSRA-Vlanif600] mpls
[LSRA-Vlanif600] mpls te
[LSRA-Vlanif600] mpls rsvp-te
[LSRA-Vlanif600] quit
The configurations of LSRB, LSRC, LSRD, LSRE, and LSRF are similar to the configuration
of LSRA, and are not mentioned here. CSPF only needs to be configured on the ingress nodes
of the primary tunnel and the bypass tunnel; that is, CSPF needs to be enabled only on LSRA and LSRB.
Step 3 Configure OSPF TE.
# Configure LSRA.
[LSRA] ospf
[LSRA-ospf-1] opaque-capability enable
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
The configurations of LSRB, LSRC, LSRD, LSRE, and LSRF are similar to the configuration
of LSRA, and are not mentioned here.
Step 4 Enable auto TE FRR.
# Configure LSRA.
[LSRA] mpls
[LSRA-mpls] mpls te auto-frr
[LSRA-mpls] quit
# Configure LSRB.
[LSRB] mpls
[LSRB-mpls] mpls te auto-frr
[LSRB-mpls] quit
[LSRB] interface vlanif 200
[LSRB-Vlanif200] mpls te auto-frr link
[LSRB-Vlanif200] quit
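The primary tunnel is then created on LSRA. The commands below follow the LSRA configuration file at the end of this example (note the non-default setup/hold priorities 4 and 3, which also appear in the display output later):

```
[LSRA] interface tunnel 1
[LSRA-Tunnel1] ip address unnumbered interface loopback 1
[LSRA-Tunnel1] tunnel-protocol mpls te
[LSRA-Tunnel1] destination 4.4.4.9
[LSRA-Tunnel1] mpls te tunnel-id 100
[LSRA-Tunnel1] mpls te record-route label
[LSRA-Tunnel1] mpls te priority 4 3
[LSRA-Tunnel1] mpls te path explicit-path pri-path
[LSRA-Tunnel1] mpls te fast-reroute
[LSRA-Tunnel1] mpls te commit
[LSRA-Tunnel1] quit
```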
Run the display mpls te tunnel name Tunnel1 verbose command on LSRA and LSRB to
view LSP information. You can view information about the primary tunnel and bound bypass
tunnels.
[LSRA] display mpls te tunnel name Tunnel1 verbose
No : 1
Tunnel-Name : Tunnel1
Tunnel Interface Name : Tunnel1
TunnelIndex : 3 LSP Index : 2050
Session ID : 100 LSP ID : 34
LSR Role : Ingress LSP Type : Primary
Ingress LSR ID : 1.1.1.9
Egress LSR ID : 4.4.4.9
In-Interface : -
Out-Interface : Vlanif100
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
LspConstraint : -
ER-Hop Table Index : 0 AR-Hop Table Index: 0
C-Hop Table Index : 0
PrevTunnelIndexInSession: - NextTunnelIndexInSession: -
PSB Handle : 8205
Created Time : 2013-09-16 16:11:50+00:00
RSVP LSP Type : -
--------------------------------
DS-TE Information
--------------------------------
Bandwidth Reserved Flag : Unreserved
CT0 Bandwidth(Kbit/sec) : 0 CT1 Bandwidth(Kbit/sec): 0
CT2 Bandwidth(Kbit/sec) : 0 CT3 Bandwidth(Kbit/sec): 0
CT4 Bandwidth(Kbit/sec) : 0 CT5 Bandwidth(Kbit/sec): 0
CT6 Bandwidth(Kbit/sec) : 0 CT7 Bandwidth(Kbit/sec): 0
Setup-Priority : 4 Hold-Priority : 3
--------------------------------
FRR Information
--------------------------------
Primary LSP Info
TE Attribute Flag : 0x63 Protected Flag : 0x2
Bypass In Use : Not Used
Bypass Tunnel Id : 1200144821
BypassTunnel : Tunnel Index[Tunnel2048], InnerLabel[1063]
Bypass LSP ID : 3 FrrNextHop : 172.7.1.2
ReferAutoBypassHandle : -
FrrPrevTunnelTableIndex : - FrrNextTunnelTableIndex: -
Bypass Attribute(Not configured)
Setup Priority : - Hold Priority : -
HopLimit : - Bandwidth : -
IncludeAnyGroup : - ExcludeAnyGroup : -
IncludeAllGroup : -
Bypass Unbound Bandwidth Info(Kbit/sec)
CT0 Unbound Bandwidth : - CT1 Unbound Bandwidth: -
CT2 Unbound Bandwidth : - CT3 Unbound Bandwidth: -
CT4 Unbound Bandwidth : - CT5 Unbound Bandwidth: -
CT6 Unbound Bandwidth : - CT7 Unbound Bandwidth: -
--------------------------------
BFD Information
--------------------------------
NextSessionTunnelIndex : - PrevSessionTunnelIndex: -
NextLspId : - PrevLspId : -
You can see that the primary tunnel is bound to the auto bypass tunnel Tunnel2048 on both LSRA and LSRB.
Run the display mpls te tunnel name Tunnel2048 verbose command on LSRA and LSRB. You can view details of the auto bypass tunnels.
On LSRA, the auto bypass tunnel protects the outbound interface VLANIF100, providing node protection for LSRB. On LSRB, the auto bypass tunnel protects the outbound interface VLANIF200, providing link protection for the link between LSRB and LSRC.
Run the display mpls te tunnel path command on LSRA and LSRB. You can view path
information of the primary tunnel and the auto bypass tunnel, and view that node protection
and link protection are provided for the outbound interface on the primary tunnel.
[LSRA] display mpls te tunnel path
Tunnel Interface Name : Tunnel1
Lsp ID : 1.1.1.9 :100 :34
Hop Information
Hop 0 172.1.1.1 Local-Protection available | node
Hop 1 172.1.1.2 Label 1055
Hop 2 2.2.2.9 Label 1055
Hop 3 172.2.1.1 Local-Protection available
Hop 4 172.2.1.2 Label 1063
Hop 5 3.3.3.9 Label 1063
Hop 6 172.3.1.1
Hop 7 172.3.1.2 Label 3
Hop 8 4.4.4.9 Label 3
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100 600
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls te auto-frr
mpls rsvp-te
mpls te cspf
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 172.3.1.2
next hop 4.4.4.9
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif600
ip address 172.6.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 600
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.9
mpls te tunnel-id 100
mpls te record-route label
mpls te priority 4 3
mpls te path explicit-path pri-path
mpls te fast-reroute
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.6.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200 400
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls te auto-frr
mpls rsvp-te
mpls te cspf
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
mpls te
mpls te auto-frr link
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 300 500 700
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif700
ip address 172.7.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 300
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 500
#
interface GigabitEthernet0/0/4
port link-type trunk
port trunk allow-pass vlan 700
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
network 172.3.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
network 172.7.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 300
#
mpls lsr-id 4.4.4.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 172.3.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRE
#
sysname LSRE
#
vlan batch 400 500
#
mpls lsr-id 5.5.5.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 5.5.5.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 5.5.5.9 0.0.0.0
network 172.4.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRF
#
sysname LSRF
#
vlan batch 600 700
#
mpls lsr-id 6.6.6.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif600
ip address 172.6.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif700
ip address 172.7.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 600
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 700
#
interface LoopBack1
ip address 6.6.6.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 6.6.6.9 0.0.0.0
network 172.6.1.0 0.0.0.255
network 172.7.1.0 0.0.0.255
mpls-te enable
#
return
NOTE
STP must be disabled on the network. Otherwise, an interface may be blocked by STP.
(Figure 5-49 shows the networking: the primary CR-LSP runs LSRA -> LSRB -> LSRC -> LSRD; LSRE connects LSRB and LSRC over VLANIF 400 and VLANIF 500, and LSRF connects LSRA and LSRC over VLANIF 600 and VLANIF 700. Interface addresses are listed in the configuration files.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign an IP address to each interface and configure OSPF to ensure that there are
reachable routes between LSRs.
2. Configure an ID for each LSR and globally enable MPLS, MPLS TE, RSVP-TE, CSPF
on each node and interface, and enable OSPF TE.
3. On the ingress node of the primary tunnel, create a tunnel interface, and specify the IP
address, tunneling protocol, destination IP address, tunnel ID, and dynamic signaling
protocol RSVP-TE for the tunnel interface.
4. Enable TE FRR on the tunnel interface of the primary tunnel on the ingress node.
5. On the ingress node LSRB, configure a bypass tunnel along the path LSRB -> LSRE ->
LSRC to protect the link between LSRB and LSRC.
6. On the ingress node, set up an ordinary backup CR-LSP along the path LSRA -> LSRF -> LSRC -> LSRD.
7. On the ingress node, configure association between the bypass tunnel and the backup
CR-LSP in the view of the interface of the primary tunnel.
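Steps 6 and 7 of the roadmap map to commands in the tunnel interface view of the primary tunnel. A minimal sketch, assuming an explicit path named backup-path for the ordinary backup CR-LSP (the path name is an assumption; mpls te backup frr-in-use associates the backup CR-LSP with the FRR bypass tunnel):

```
[LSRA-Tunnel1] mpls te path explicit-path backup-path secondary
[LSRA-Tunnel1] mpls te backup ordinary
[LSRA-Tunnel1] mpls te backup frr-in-use
[LSRA-Tunnel1] mpls te commit
```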
Procedure
Step 1 Assign an IP address to each interface and configure OSPF.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100 600
[LSRA] interface vlanif 100
# Configure IP addresses for interfaces on LSRB, LSRC, LSRD, LSRE, and LSRF according
to Figure 5-49. The configurations of LSRB, LSRC, LSRD, LSRE, and LSRF are similar to
the configuration of LSRA, and are not mentioned here.
After the configurations are complete, run the display ip routing-table command on each
LSR. You can see that the LSRs learn the routes to Loopback1 of each other. The display on
LSRA is used as an example.
[LSRA] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
Destinations : 17 Routes : 21
Step 2 Configure basic MPLS functions and enable MPLS TE, RSVP-TE, and CSPF.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] mpls rsvp-te
[LSRA-mpls] mpls te cspf
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls
[LSRA-Vlanif100] mpls te
[LSRA-Vlanif100] mpls rsvp-te
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 600
[LSRA-Vlanif600] mpls
[LSRA-Vlanif600] mpls te
[LSRA-Vlanif600] mpls rsvp-te
[LSRA-Vlanif600] quit
The configurations of LSRB, LSRC, LSRD, LSRE, and LSRF are similar to the configuration
of LSRA, and are not mentioned here. CSPF needs to be configured only on the ingress nodes
of the primary tunnel and bypass tunnel; that is, CSPF needs to be enabled only on LSRA
and LSRB.
# Configure LSRA.
[LSRA] ospf
[LSRA-ospf-1] opaque-capability enable
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
The configurations of LSRB, LSRC, LSRD, LSRE, and LSRF are similar to the configuration
of LSRA, and are not mentioned here.
# Enable TE FRR.
[LSRA-Tunnel1] mpls te fast-reroute
[LSRA-Tunnel1] mpls te commit
[LSRA-Tunnel1] quit
After the configurations are complete, run the display interface tunnel command on LSRA.
You can see that the status of Tunnel1 is Up.
[LSRA] display interface tunnel 1
Tunnel1 current state : UP
Line protocol current state : UP
Last line protocol up time : 2013-01-21 10:58:49
Description:
...
Run the display mpls lsp command on all the LSRs. You can see the LSP entries and that two
LSPs pass through LSRB and LSRC. The command output on LSRB is used as an example.
[LSRB] display mpls lsp
----------------------------------------------------------------------
LSP Information: RSVP LSP
----------------------------------------------------------------------
FEC In/Out Label In/Out IF Vrf Name
4.4.4.9/32 1059/1068 Vlanif100/Vlanif200
3.3.3.9/32 NULL/1036 -/Vlanif400
Run the display mpls te tunnel command on all the LSRs. You can see that the tunnels have
been established and that two tunnels pass through LSRB and LSRC. The command output on
LSRB is used as an example.
[LSRB] display mpls te tunnel
------------------------------------------------------------------------------
Ingress LsrId Destination LSPID In/Out Label R Tunnel-name
------------------------------------------------------------------------------
1.1.1.9 4.4.4.9 40 1059/1068 T Tunnel1
2.2.2.9 3.3.3.9 4 --/1036 I Tunnel2
Run the display mpls te tunnel name Tunnel1 verbose command on LSRB. You can see
that the bypass tunnel is bound to the outbound interface VLANIF200 but is not in use.
[LSRB] display mpls te tunnel name Tunnel1 verbose
No : 1
Tunnel-Name : Tunnel1
Tunnel Interface Name : -
TunnelIndex : 3 LSP Index : 2048
Session ID : 100 LSP ID : 40
LSR Role : Transit
Ingress LSR ID : 1.1.1.9
Egress LSR ID : 4.4.4.9
In-Interface : Vlanif100
Out-Interface : Vlanif200
Sign-Protocol : RSVP TE Resv Style : SE
IncludeAnyAff : 0x0 ExcludeAnyAff : 0x0
IncludeAllAff : 0x0
ER-Hop Table Index : 3 AR-Hop Table Index: 1
Step 6 On LSRA, create an MPLS TE tunnel for the ordinary backup CR-LSP.
Step 7 Configure association between TE FRR and CR-LSP on the ingress node of the primary CR-
LSP.
# Configure LSRA.
[LSRA] interface tunnel 1
[LSRA-Tunnel1] mpls te backup frr-in-use
[LSRA-Tunnel1] mpls te commit
[LSRA-Tunnel1] quit
Run the display mpls te tunnel-interface Tunnel 1 command on the ingress node LSRA to
view information about the primary CR-LSP.
[LSRA] display mpls te tunnel-interface Tunnel 1
----------------------------------------------------------------
Tunnel1
----------------------------------------------------------------
Tunnel State Desc : UP
Active LSP : Primary LSP
Session ID : 100
Ingress LSR ID : 1.1.1.9 Egress LSR ID: 4.4.4.9
Admin State : UP Oper State : UP
Primary LSP State : UP
Main LSP State : READY LSP ID : 40
Run the display mpls te tunnel-interface command on the ingress node LSRA. You can see
that the tunnel status is Up, the primary CR-LSP is in the FRR in-use state, and the ordinary
backup CR-LSP is being set up.
[LSRA] display mpls te tunnel-interface
----------------------------------------------------------------
Tunnel1
----------------------------------------------------------------
Tunnel State Desc : UP
Active LSP : Ordinary LSP
Session ID : 100
Ingress LSR ID : 1.1.1.9 Egress LSR ID: 4.4.4.9
Admin State : UP Oper State : UP
Primary LSP State : UP
Main LSP State : READY LSP ID : 40
Modify LSP State : SETTING UP
Ordinary LSP State : UP
Main LSP State : READY LSP ID : 32774
When the primary CR-LSP is faulty (that is, the primary CR-LSP is in FRR in-use state), the
system uses the TE FRR bypass tunnel and attempts to restore the primary CR-LSP. At the
same time, the system attempts to set up a backup CR-LSP.
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100 600
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path backup-path
next hop 172.6.1.2
next hop 172.7.1.2
next hop 172.3.1.2
next hop 4.4.4.9
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 172.3.1.2
next hop 4.4.4.9
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif600
ip address 172.6.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 600
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 4.4.4.9
mpls te tunnel-id 100
mpls te record-route label
mpls te path explicit-path pri-path
mpls te path explicit-path backup-path secondary
mpls te fast-reroute
mpls te backup ordinary
mpls te backup frr-in-use
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.6.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200 400
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path by-path
next hop 172.4.1.2
next hop 172.5.1.2
next hop 3.3.3.9
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
mpls te
mpls te auto-frr link
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 300
mpls te record-route
mpls te path explicit-path by-path
mpls te bypass-tunnel
mpls te protected-interface Vlanif200
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 300 500 700
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif700
ip address 172.7.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 300
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 500
#
interface GigabitEthernet0/0/4
port link-type trunk
port trunk allow-pass vlan 700
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
network 172.3.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
network 172.7.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 300
#
mpls lsr-id 4.4.4.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 172.3.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRE
#
sysname LSRE
#
vlan batch 400 500
#
mpls lsr-id 5.5.5.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 5.5.5.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 5.5.5.9 0.0.0.0
network 172.4.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRF
#
sysname LSRF
#
vlan batch 600 700
#
mpls lsr-id 6.6.6.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif600
ip address 172.6.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif700
ip address 172.7.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 600
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 700
#
interface LoopBack1
ip address 6.6.6.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 6.6.6.9 0.0.0.0
network 172.6.1.0 0.0.0.255
network 172.7.1.0 0.0.0.255
mpls-te enable
#
return
NOTE
STP must be disabled on the network. Otherwise, an interface may be blocked by STP.
(Networking diagram: working tunnel LSRA -> LSRB -> LSRC over VLANIF100/VLANIF200; protection tunnel LSRA -> LSRE -> LSRC over VLANIF400/VLANIF500. Interface addresses are listed in the configuration files below.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Assign IP addresses to interfaces and configure OSPF to ensure that public network
routes between the nodes are reachable.
2. Configure two MPLS TE tunnels for tunnel protection.
3. Configure an MPLS TE tunnel protection group. Specify the working tunnel and
protection tunnel.
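The protection group configured in step 3 is revertive with a wait-to-restore (WTR) time (this example uses mpls te protection tunnel 101 mode revertive wtr 4 on the ingress): after the working tunnel recovers, traffic returns to it only once the WTR time has elapsed. A conceptual sketch of that decision (illustrative only; the function name and arguments are invented):

```python
def switch_result(work_ok: bool, protect_ok: bool,
                  seconds_since_recovery: float, wtr: float = 4.0) -> str:
    """Decide which tunnel forwards traffic in a revertive 1:1 protection group.
    seconds_since_recovery is the time since the working tunnel last came back up
    (pass a large value for a working tunnel that never failed)."""
    if work_ok and seconds_since_recovery >= wtr:
        return "work-tunnel"     # WTR expired: revert to the working tunnel
    if protect_ok:
        return "protect-tunnel"  # working tunnel faulty, or still inside WTR
    return "work-tunnel" if work_ok else "none"

# Working tunnel in defect: traffic is on the protection tunnel.
assert switch_result(work_ok=False, protect_ok=True, seconds_since_recovery=0) == "protect-tunnel"
# Recovered 2 s ago, WTR is 4 s: traffic stays on the protection tunnel.
assert switch_result(work_ok=True, protect_ok=True, seconds_since_recovery=2) == "protect-tunnel"
# WTR expired: traffic reverts to the working tunnel.
assert switch_result(work_ok=True, protect_ok=True, seconds_since_recovery=5) == "work-tunnel"
```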
Procedure
Step 1 Assign IP addresses to interfaces and configure OSPF on the LSRs.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100 400
[LSRA] interface vlanif 100
[LSRA-Vlanif100] ip address 172.1.1.1 255.255.255.0
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 400
[LSRA-Vlanif400] ip address 172.4.1.1 255.255.255.0
[LSRA-Vlanif400] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[LSRA-GigabitEthernet0/0/1] quit
[LSRA] interface gigabitethernet 0/0/3
[LSRA-GigabitEthernet0/0/3] port link-type trunk
[LSRA-GigabitEthernet0/0/3] port trunk allow-pass vlan 400
[LSRA-GigabitEthernet0/0/3] quit
[LSRA] interface loopback 1
[LSRA-LoopBack1] ip address 1.1.1.9 255.255.255.255
[LSRA-LoopBack1] quit
[LSRA] ospf 1
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] network 1.1.1.9 0.0.0.0
Assign IP addresses to interfaces of LSRB, LSRC, and LSRE according to Figure 5-50. The
configurations on these LSRs are similar to the configuration on LSRA, and are not
mentioned here.
After the configurations are complete, run the display ip routing-table command on the
LSRs. You can see that each LSR has learned the routes to the other LSRs' Loopback1 addresses.
Step 2 Configure basic MPLS capabilities and enable MPLS TE, RSVP-TE, and CSPF.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] mpls rsvp-te
[LSRA-mpls] mpls te cspf
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls
[LSRA-Vlanif100] mpls te
[LSRA-Vlanif100] mpls rsvp-te
[LSRA-Vlanif100] quit
[LSRA] interface vlanif 400
[LSRA-Vlanif400] mpls
[LSRA-Vlanif400] mpls te
[LSRA-Vlanif400] mpls rsvp-te
[LSRA-Vlanif400] quit
The configurations on LSRB, LSRC, and LSRE are similar to the configuration on LSRA,
and are not mentioned here. CSPF needs to be enabled only on the ingress of the working
tunnel.
# Configure LSRA.
[LSRA] ospf
[LSRA-ospf-1] opaque-capability enable
[LSRA-ospf-1] area 0
[LSRA-ospf-1-area-0.0.0.0] mpls-te enable
[LSRA-ospf-1-area-0.0.0.0] quit
[LSRA-ospf-1] quit
The configurations on LSRB, LSRC, and LSRE are similar to the configuration on LSRA,
and are not mentioned here.
Run the display interface tunnel command on LSRA to check the tunnel status. The tunnel is
in Up state.
[LSRA] display interface tunnel 1
Tunnel1 current state : UP
Line protocol current state : UP
Last line protocol up time : 2013-09-17 21:00:21
Description:
...
# Run the tracert lsp te tunnel 1 command on LSRA to check the path of the working tunnel.
[LSRA] tracert lsp te tunnel 1
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to br
eak.
TTL Replier Time Type Downstream
0 Ingress 172.1.1.2/[4101 ]
1 172.1.1.2 7 ms Transit 172.2.1.2/[3 ]
2 3.3.3.9 3 ms Egress
Run the display mpls te protection tunnel all command on LSRA to check information
about the tunnel protection group.
[LSRA] display mpls te protection tunnel all
------------------------------------------------------------------------
No. Work-tunnel status /id Protect-tunnel status /id Switch-Result
------------------------------------------------------------------------
1 non-defect /100 non-defect /101 work-tunnel
# Run the shutdown command on GE0/0/1 of LSRA to simulate a failure of the working
tunnel.
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] shutdown
[LSRA-GigabitEthernet0/0/1] quit
Run the tracert lsp te tunnel 1 command on LSRA again. You can see that traffic has been
switched to the protection tunnel.
[LSRA] tracert lsp te tunnel 1
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to br
eak.
TTL Replier Time Type Downstream
0 Ingress 172.4.1.2/[1028 ]
1 172.4.1.2 4 ms Transit 172.5.1.2/[3 ]
2 3.3.3.9 3 ms Egress
Run the display mpls te protection tunnel all command on LSRA to check information
about the tunnel protection group.
[LSRA] display mpls te protection tunnel all
------------------------------------------------------------------------
No. Work-tunnel status /id Protect-tunnel status /id Switch-Result
------------------------------------------------------------------------
1 in defect /100 non-defect /101 protect-tunnel
----End
Configuration File
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100 400
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path backup-path
next hop 172.4.1.2
next hop 172.5.1.2
next hop 3.3.3.9
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 3.3.3.9
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 100
mpls te protection tunnel 101 mode revertive wtr 4
mpls te path explicit-path pri-path
mpls te commit
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 101
mpls te path explicit-path backup-path
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 500
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRE
#
sysname LSRE
#
vlan batch 400 500
#
mpls lsr-id 5.5.5.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 5.5.5.9 255.255.255.255
Networking Requirements
As shown in Figure 5-51, two MPLS TE tunnels are set up between LSRA and LSRC. An
MPLS TE tunnel protection group needs to be configured for the working tunnel. Dynamic
BFD is used to detect CR-LSP failures.
NOTE
STP must be disabled on the network. Otherwise, an interface may be blocked by STP.
Figure 5-51 Networking of dynamic BFD for an MPLS TE tunnel protection group
(Diagram: working tunnel LSRA -> LSRB -> LSRC over VLANIF100/VLANIF200; protection tunnel LSRA -> LSRE -> LSRC over VLANIF400/VLANIF500; Loopback1 addresses 1.1.1.9/32, 2.2.2.9/32, 3.3.3.9/32, and 5.5.5.9/32.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure an MPLS TE tunnel protection group.
2. On the ingress node of the working tunnel, enable active BFD session setup on the
tunnel interface. On the egress node of the working tunnel, enable passive BFD session
setup in the MPLS view.
Procedure
Step 1 Configure an MPLS TE tunnel protection group.
Configure an MPLS TE tunnel protection group by referring to 5.10.16 Example for
Configuring an MPLS TE Tunnel Protection Group.
Step 2 Configure dynamic BFD for the MPLS TE tunnel protection group.
# Configure LSRA.
[LSRA] bfd
[LSRA-bfd] quit
[LSRA] interface tunnel 1
[LSRA-Tunnel1] mpls te bfd enable
[LSRA-Tunnel1] mpls te bfd min-tx-interval 100 min-rx-interval 100 detect-
multiplier 3
[LSRA-Tunnel1] mpls te commit
[LSRA-Tunnel1] quit
# Configure LSRC.
[LSRC] bfd
[LSRC-bfd] mpls-passive
[LSRC-bfd] quit
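As a rough rule of thumb, a BFD session declares a failure after the negotiated receive interval (the larger of the local minimum receive interval and the peer's minimum transmit interval) elapses detect-multiplier times with no packet received. Assuming symmetric settings on both ends, the parameters above give about 300 ms. This is a back-of-the-envelope sketch, not the device's exact internal formula:

```python
def bfd_detection_time_ms(local_min_rx: int, remote_min_tx: int,
                          remote_multiplier: int) -> int:
    """Approximate BFD detection time in milliseconds: the negotiated interval
    is the larger of the local min-rx and the remote min-tx; a failure is
    declared after remote_multiplier consecutive intervals with no packet."""
    return max(local_min_rx, remote_min_tx) * remote_multiplier

# Symmetric 100 ms intervals with a multiplier of 3: ~300 ms detection.
assert bfd_detection_time_ms(100, 100, 3) == 300
# A slower peer stretches the negotiated interval.
assert bfd_detection_time_ms(100, 300, 3) == 900
```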
# Run the tracert lsp te tunnel 1 command on LSRA to check the path of the working tunnel.
[LSRA] tracert lsp te tunnel 1
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to br
eak.
TTL Replier Time Type Downstream
0 Ingress 172.1.1.2/[4101 ]
1 172.1.1.2 7 ms Transit 172.2.1.2/[3 ]
2 3.3.3.9 3 ms Egress
Run the display mpls te protection tunnel all command on LSRA to check information
about the tunnel protection group.
[LSRA] display mpls te protection tunnel all
------------------------------------------------------------------------
No. Work-tunnel status /id Protect-tunnel status /id Switch-Result
------------------------------------------------------------------------
1 non-defect /100 non-defect /101 work-tunnel
Run the display bfd session all command on LSRA. You can see that the BFD session is in
Up state.
[LSRA] display bfd session all
--------------------------------------------------------------------------------
Local Remote PeerIpAddr State Type InterfaceName
--------------------------------------------------------------------------------
8252 8245 3.3.3.9 Up D_TE_LSP Tunnel1
--------------------------------------------------------------------------------
Total UP/DOWN Session Number : 1/0
# Run the shutdown command on GE0/0/1 of LSRA to simulate a failure of the working
tunnel.
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] shutdown
[LSRA-GigabitEthernet0/0/1] quit
Run the tracert lsp te tunnel 1 command on LSRA again. You can see that traffic has been
switched to the protection tunnel.
[LSRA] tracert lsp te tunnel 1
LSP Trace Route FEC: TE TUNNEL IPV4 SESSION QUERY Tunnel1 , press CTRL_C to br
eak.
TTL Replier Time Type Downstream
0 Ingress 172.4.1.2/[1028 ]
1 172.4.1.2 4 ms Transit 172.5.1.2/[3 ]
2 3.3.3.9 3 ms Egress
Run the display mpls te protection tunnel all command on LSRA to check information
about the tunnel protection group.
[LSRA] display mpls te protection tunnel all
------------------------------------------------------------------------
No. Work-tunnel status /id Protect-tunnel status /id Switch-Result
------------------------------------------------------------------------
1 in defect /100 non-defect /101 protect-tunnel
----End
Configuration File
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100 400
#
bfd
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path backup-path
next hop 172.4.1.2
next hop 172.5.1.2
next hop 3.3.3.9
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 3.3.3.9
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 100
mpls te protection tunnel 101 mode revertive wtr 4
mpls te bfd enable
mpls te bfd min-tx-interval 100 min-rx-interval 100
mpls te path explicit-path pri-path
mpls te commit
#
interface Tunnel2
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 101
mpls te path explicit-path backup-path
mpls te commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRB
#
sysname LSRB
#
vlan batch 100 200
#
mpls lsr-id 2.2.2.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif100
ip address 172.1.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 500
#
bfd
mpls-passive
#
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRE
#
sysname LSRE
#
vlan batch 400 500
#
mpls lsr-id 5.5.5.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
mpls
mpls te
mpls te srlg 2
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 400
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 5.5.5.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 5.5.5.9 0.0.0.0
network 172.4.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
mpls-te enable
#
return
STP must be disabled on the network. Otherwise, some interfaces may be blocked by STP.
(Networking diagram: primary CR-LSP LSRA -> LSRB -> LSRC; backup CR-LSP LSRA -> LSRD -> LSRC; Loopback1 addresses 1.1.1.9/32 on LSRA and 3.3.3.9/32 on LSRC. Interface addresses are listed in the configuration files below.)
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a hot-standby CR-LSP and best-effort path.
2. Create two BFD sessions on the ingress node and bind them to CR-LSPs to detect
primary and backup CR-LSPs. Configure two BFD sessions on the egress node and bind
them to IP addresses (ensure that the route from LSRC to LSRA is reachable).
Procedure
Step 1 Configure a hot-standby CR-LSP and best-effort path.
Configure the primary CR-LSP, backup CR-LSP, and best-effort path according to 5.10.12
Example for Configuring CR-LSP Hot Standby.
Step 2 Configure static BFD for CR-LSPs.
Establish BFD sessions between LSRA and LSRC to detect faults on primary and backup CR-
LSPs. Bind the BFD session on LSRA to the CR-LSP and BFD session on LSRC to the IP
address. Set the intervals for sending and receiving BFD packets to 500 ms and the local
detection multiplier of BFD to 3.
# Configure LSRA.
[LSRA] bfd
[LSRA-bfd] quit
[LSRA] bfd prilsp2lsrc bind mpls-te interface tunnel 1 te-lsp
# Configure LSRC.
[LSRC] bfd
[LSRC-bfd] quit
[LSRC] bfd reversepri2lsra bind peer-ip 1.1.1.9
[LSRC-bfd-session-reversepri2lsra] discriminator local 239
[LSRC-bfd-session-reversepri2lsra] discriminator remote 139
[LSRC-bfd-session-reversepri2lsra] min-tx-interval 500
[LSRC-bfd-session-reversepri2lsra] min-rx-interval 500
[LSRC-bfd-session-reversepri2lsra] detect-multiplier 3
[LSRC-bfd-session-reversepri2lsra] commit
[LSRC-bfd-session-reversepri2lsra] quit
[LSRC] bfd reversebac2lsra bind peer-ip 1.1.1.9
[LSRC-bfd-session-reversebac2lsra] discriminator local 439
[LSRC-bfd-session-reversebac2lsra] discriminator remote 339
[LSRC-bfd-session-reversebac2lsra] min-tx-interval 500
[LSRC-bfd-session-reversebac2lsra] min-rx-interval 500
[LSRC-bfd-session-reversebac2lsra] detect-multiplier 3
[LSRC-bfd-session-reversebac2lsra] commit
[LSRC-bfd-session-reversebac2lsra] quit
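The static sessions come up only when the discriminators mirror each other: each end's local discriminator must equal the peer's remote discriminator (139/239 for the primary CR-LSP and 339/439 for the backup in this example). A quick consistency check of that pairing (illustrative only; the function name is invented):

```python
def discriminators_match(a: tuple, b: tuple) -> bool:
    """Each tuple is (local, remote). A static BFD session pair is consistent
    when each side's local discriminator equals the peer's remote discriminator."""
    return a[0] == b[1] and a[1] == b[0]

# Primary CR-LSP: LSRA uses (139, 239); LSRC's reversepri2lsra uses (239, 139).
assert discriminators_match((139, 239), (239, 139))
# Backup CR-LSP: LSRA uses (339, 439); LSRC's reversebac2lsra uses (439, 339).
assert discriminators_match((339, 439), (439, 339))
```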
After the configurations are complete, run the display bfd session discriminator command
on LSRA and LSRC. You can see that the BFD session status is Up.
The display on LSRA is used as an example.
[LSRA] display bfd session discriminator 139
--------------------------------------------------------------------------------
Local Remote PeerIpAddr State Type InterfaceName
--------------------------------------------------------------------------------
139 239 3.3.3.9 Up S_TE_LSP Tunnel1
--------------------------------------------------------------------------------
cable from GE0/0/2 on LSRA or LSRD within 15s. You can see that traffic switches back to
the primary CR-LSP at the millisecond level.
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100 500
#
bfd
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls te cspf
#
explicit-path backup-path
next hop 172.5.1.2
next hop 172.3.1.1
next hop 3.3.3.9
#
explicit-path pri-path
next hop 172.1.1.2
next hop 172.2.1.2
next hop 3.3.3.9
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 100
mpls te record-route
mpls te path explicit-path pri-path
mpls te path explicit-path backup-path secondary
mpls te commit
#
l Configuration file of LSRB
interface Vlanif200
ip address 172.2.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 2.2.2.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 2.2.2.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.2.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 300
#
bfd
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
bfd reversepri2lsra bind peer-ip 1.1.1.9
discriminator local 239
discriminator remote 139
min-tx-interval 500
min-rx-interval 500
commit
#
bfd reversebac2lsra bind peer-ip 1.1.1.9
discriminator local 439
discriminator remote 339
min-tx-interval 500
min-rx-interval 500
commit
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
network 172.3.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 300 400 500
#
mpls lsr-id 4.4.4.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 172.3.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
mpls-te enable
return
Networking Requirements
Figure 5-53 shows an MPLS network. A TE tunnel with LSRA as the ingress node and LSRC
as the egress node needs to be established on LSRA. A hot-standby CR-LSP and best-effort
path also need to be configured.
l The path of the primary CR-LSP is LSRA -> LSRB -> LSRC.
l The path of the backup CR-LSP is LSRA -> LSRD -> LSRC.
When the primary CR-LSP fails, traffic switches to the backup CR-LSP. After the primary
CR-LSP recovers, traffic switches back to the primary CR-LSP in 15 seconds. If both the
primary CR-LSP and backup CR-LSP fail, traffic switches to the best-effort path. Explicit
paths can be configured for the primary and backup CR-LSPs. A best-effort path can be
generated automatically. In this example, the best-effort path is LSRA -> LSRD -> LSRB ->
LSRC. The calculated best-effort path varies according to the faulty node.
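The switchover behavior described above can be sketched as a small selection function. This is an illustrative model only, not device code; the 15-second revertive delay is modeled as a wait-to-restore (WTR) timer, and all names are hypothetical.

```python
# Simplified model of the hot-standby path selection described above.
# The 15 s wait-to-restore (WTR) delay and all names are illustrative;
# the real switching is performed by the device.

PRIMARY, BACKUP, BEST_EFFORT = "primary", "backup", "best-effort"

def select_path(primary_up, backup_up, primary_up_duration):
    """Return which CR-LSP carries traffic.

    primary_up_duration: seconds since the primary CR-LSP last recovered.
    Traffic reverts to the primary only after it has stayed up for the
    WTR time, unless the backup also fails in the meantime.
    """
    WTR = 15
    if primary_up and primary_up_duration >= WTR:
        return PRIMARY            # primary stable: revert
    if backup_up:
        return BACKUP             # primary down, or still inside WTR
    if primary_up:
        return PRIMARY            # backup failed during WTR: switch back early
    return BEST_EFFORT            # both CR-LSPs down: use the best-effort path
```

The third branch captures the rule that a backup failure within 15 seconds of primary recovery triggers an immediate switchback.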
Dynamic BFD for CR-LSPs needs to be configured to detect primary and backup CR-LSPs:
l When the primary CR-LSP fails, traffic fast switches to the backup CR-LSP.
l When the backup CR-LSP fails within 15 seconds after the primary CR-LSP recovers,
traffic switches back to the primary CR-LSP.
NOTE
STP must be disabled on the network. Otherwise, some interfaces may be blocked by STP.
(Figure 5-53 topology: LSRA uses Loopback1 1.1.1.9/32, VLANIF100 172.1.1.1/24 on GE0/0/1, and VLANIF500 172.5.1.1/24 on GE0/0/2; LSRC uses Loopback1 3.3.3.9/32, VLANIF300 172.3.1.1/24 on GE0/0/1, and VLANIF200 172.2.1.2/24 on GE0/0/2.)
NOTE
Compared with static BFD, dynamic BFD is much easier to configure. In addition, dynamic BFD reduces the number of BFD sessions and uses fewer network resources, because only one BFD session needs to be created on a tunnel interface.
Configuration Roadmap
The configuration roadmap is as follows:
1. Configure a hot-standby CR-LSP and best-effort path.
2. Enable BFD on the ingress node, configure dynamic BFD for CR-LSPs, and set the
intervals for sending and receiving BFD packets and local BFD detection multiplier.
3. Enable the capability to passively create BFD sessions on the egress node.
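The intervals and detection multiplier mentioned in the roadmap combine into the actual failure-detection time through the standard BFD negotiation (RFC 5880). The following sketch shows that arithmetic; the multiplier value 3 is an assumed example, and the device performs this negotiation automatically.

```python
# How the effective BFD detection time follows from the configured values
# (standard BFD negotiation per RFC 5880; shown for illustration only).

def bfd_detection_time_ms(local_min_rx, remote_min_tx, remote_detect_mult):
    """Detection time on the local end, in milliseconds.

    The receive interval is negotiated to the larger of the local minimum
    receive interval and the remote minimum transmit interval, then
    multiplied by the remote detection multiplier.
    """
    return remote_detect_mult * max(local_min_rx, remote_min_tx)

# With both intervals set to 500 ms (as in this example) and an assumed
# multiplier of 3, a session is declared Down after 1500 ms of silence.
print(bfd_detection_time_ms(500, 500, 3))
```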
Procedure
Step 1 Configure a hot-standby CR-LSP and best-effort path.
Configure the primary CR-LSP, backup CR-LSP, and best-effort path according to 5.10.12
Example for Configuring CR-LSP Hot Standby.
Step 3 Enable the capability to passively create BFD sessions on the egress node.
# Configure LSRC.
[LSRC] bfd
[LSRC-bfd] mpls-passive
[LSRC-bfd] quit
After the configurations are complete, run the display bfd session mpls-te interface Tunnel
1 te-lsp command on LSRA. You can see that the BFD session status is Up.
[LSRA] display bfd session mpls-te interface Tunnel 1 te-lsp
--------------------------------------------------------------------------------
Local Remote PeerIpAddr State Type InterfaceName
--------------------------------------------------------------------------------
8192 8192 3.3.3.9 Up D_TE_LSP Tunnel1
--------------------------------------------------------------------------------
Total UP/DOWN Session Number : 1/0
Run the display bfd session passive-dynamic command on LSRC. You can see that a BFD
session is created passively.
[LSRC] display bfd session passive-dynamic
--------------------------------------------------------------------------------
Local Remote PeerIpAddr State Type InterfaceName
--------------------------------------------------------------------------------
8192 8192 1.1.1.9 Up E_Dynamic -
--------------------------------------------------------------------------------
Total UP/DOWN Session Number : 1/0
----End
Configuration Files
l Configuration file of LSRA
#
sysname LSRA
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 1.1.1.9 0.0.0.0
network 172.1.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRC
#
sysname LSRC
#
vlan batch 200 300
#
bfd
mpls-passive
#
mpls lsr-id 3.3.3.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif200
ip address 172.2.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.1 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 200
#
interface LoopBack1
ip address 3.3.3.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 3.3.3.9 0.0.0.0
network 172.2.1.0 0.0.0.255
network 172.3.1.0 0.0.0.255
mpls-te enable
#
return
l Configuration file of LSRD
#
sysname LSRD
#
vlan batch 300 400 500
#
mpls lsr-id 4.4.4.9
mpls
mpls te
mpls rsvp-te
#
interface Vlanif300
ip address 172.3.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif400
ip address 172.4.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface Vlanif500
ip address 172.5.1.2 255.255.255.0
mpls
mpls te
mpls rsvp-te
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 300
#
interface GigabitEthernet0/0/2
port link-type trunk
port trunk allow-pass vlan 500
#
interface GigabitEthernet0/0/3
port link-type trunk
port trunk allow-pass vlan 400
#
interface LoopBack1
ip address 4.4.4.9 255.255.255.255
#
ospf 1
opaque-capability enable
area 0.0.0.0
network 4.4.4.9 0.0.0.0
network 172.3.1.0 0.0.0.255
network 172.4.1.0 0.0.0.255
network 172.5.1.0 0.0.0.255
mpls-te enable
#
return
Configuration Roadmap
RSVP GR can be configured on the network to ensure uninterrupted data forwarding during
an active/standby switchover.
The configuration roadmap is as follows:
1. Configure IS-IS to ensure that routes between backbone nodes are reachable.
2. Enable MPLS TE and RSVP TE on the backbone nodes so that they can set up MPLS
TE tunnels.
3. Enable IS-IS TE and change the cost type to enable the nodes to advertise TE
information using IS-IS.
4. On the ingress node, create a tunnel interface and configure tunnel attributes on the
tunnel interface. Enable MPLS TE CSPF to dynamically set up MPLS TE tunnels.
5. Configure IS-IS GR and RSVP GR on each node to ensure uninterrupted data
forwarding during an active/standby switchover.
Procedure
Step 1 Assign IP addresses to interfaces.
# Configure LSRA.
<HUAWEI> system-view
[HUAWEI] sysname LSRA
[LSRA] vlan batch 100
[LSRA] interface vlanif 100
[LSRA-Vlanif100] ip address 172.1.1.1 255.255.255.0
[LSRA-Vlanif100] quit
[LSRA] interface gigabitethernet 0/0/1
[LSRA-GigabitEthernet0/0/1] port link-type trunk
[LSRA-GigabitEthernet0/0/1] port trunk allow-pass vlan 100
[LSRA-GigabitEthernet0/0/1] quit
[LSRA] interface loopback 1
[LSRA-LoopBack1] ip address 1.1.1.9 255.255.255.255
[LSRA-LoopBack1] quit
Assign IP addresses to interfaces of LSRB and LSRC according to Figure 5-54. The
configurations on LSRB and LSRC are similar to the configuration on LSRA, and are not
mentioned here.
Step 2 Configure IS-IS to advertise routes.
# Configure LSRA.
[LSRA] isis 1
[LSRA-isis-1] network-entity 00.0005.0000.0000.0001.00
[LSRA-isis-1] is-level level-2
[LSRA-isis-1] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] isis enable 1
[LSRA-Vlanif100] quit
[LSRA] interface loopback 1
[LSRA-LoopBack1] isis enable 1
[LSRA-LoopBack1] quit
# Configure LSRB.
[LSRB] isis 1
[LSRB-isis-1] network-entity 00.0005.0000.0000.0002.00
[LSRB-isis-1] is-level level-2
[LSRB-isis-1] quit
[LSRB] interface vlanif 100
[LSRB-Vlanif100] isis enable 1
[LSRB-Vlanif100] quit
[LSRB] interface vlanif 200
[LSRB-Vlanif200] isis enable 1
[LSRB-Vlanif200] quit
[LSRB] interface loopback 1
[LSRB-LoopBack1] isis enable 1
[LSRB-LoopBack1] quit
# Configure LSRC.
[LSRC] isis 1
[LSRC-isis-1] network-entity 00.0005.0000.0000.0003.00
[LSRC-isis-1] is-level level-2
[LSRC-isis-1] quit
[LSRC] interface vlanif 200
[LSRC-Vlanif200] isis enable 1
[LSRC-Vlanif200] quit
[LSRC] interface loopback 1
[LSRC-LoopBack1] isis enable 1
[LSRC-LoopBack1] quit
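A NET such as 00.0005.0000.0000.0001.00 encodes three fields: the area address, the 6-byte system ID, and the NSEL (always 00 on a router). The helper below splits a NET into these fields; it is an illustration of the format, not a device command.

```python
# Split an IS-IS Network Entity Title (NET) into its three fields.
# Illustrative helper only; names are hypothetical.

def parse_net(net):
    """Return (area, system_id, sel) for a dotted NET string."""
    parts = net.split(".")
    sel = parts[-1]                        # NSEL, last octet, 00 for a router
    system_id = ".".join(parts[-4:-1])     # 6-byte system ID (three groups)
    area = ".".join(parts[:-4])            # everything before the system ID
    return area, system_id, sel

# For LSRA's NET, this yields area 00.0005, system ID 0000.0000.0001, NSEL 00.
print(parse_net("00.0005.0000.0000.0001.00"))
```

Note that the three LSRs share the area address 00.0005 and differ only in the system ID, which is why they form adjacencies in the same area.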
Run the display ip routing-table command on the LSRs, and you can see that they learn the
routes from each other. The command output on LSRA is provided as an example:
[LSRA] display ip routing-table
Route Flags: R - relay, D - download to fib
------------------------------------------------------------------------------
Routing Tables: Public
Destinations : 8 Routes : 8
Step 3 Configure basic MPLS capabilities and enable MPLS TE and RSVP-TE.
Enable MPLS, MPLS TE, and RSVP-TE globally on each LSR and on the interfaces that the
tunnel passes through.
# Configure LSRA.
[LSRA] mpls lsr-id 1.1.1.9
[LSRA] mpls
[LSRA-mpls] mpls te
[LSRA-mpls] mpls rsvp-te
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls
[LSRA-Vlanif100] mpls te
[LSRA-Vlanif100] mpls rsvp-te
[LSRA-Vlanif100] quit
The configurations on LSRB and LSRC are similar to the configuration on LSRA, and are not
mentioned here.
Step 4 Enable IS-IS TE, change the cost type, and configure IS-IS GR.
# Configure LSRA.
[LSRA] isis 1
[LSRA-isis-1] cost-style wide
[LSRA-isis-1] traffic-eng level-2
[LSRA-isis-1] graceful-restart
[LSRA-isis-1] quit
The configurations on LSRB and LSRC are similar to the configuration on LSRA, and are not
mentioned here.
Step 5 Create a tunnel interface and enable CSPF.
On the ingress of the tunnel, create a tunnel interface and set the IP address, tunneling
protocol, destination IP address, tunnel ID, and dynamic signaling protocol for the tunnel
interface. Then, run the mpls te commit command to commit the configuration.
# Configure LSRA.
[LSRA] mpls
[LSRA-mpls] mpls te cspf
[LSRA-mpls] quit
[LSRA] interface tunnel 1
[LSRA-Tunnel1] ip address unnumbered interface loopback 1
[LSRA-Tunnel1] tunnel-protocol mpls te
[LSRA-Tunnel1] destination 3.3.3.9
[LSRA-Tunnel1] mpls te tunnel-id 100
[LSRA-Tunnel1] mpls te commit
[LSRA-Tunnel1] quit
Run the display interface tunnel command on LSRA. You can see that the tunnel interface is
Up.
[LSRA] display interface tunnel
Tunnel1 current state : UP
Line protocol current state : UP
Last line protocol up time : 2013-01-14 09:18:46
Description:
...
Step 6 Configure RSVP GR.
# Configure LSRA.
[LSRA] mpls
[LSRA-mpls] mpls rsvp-te hello
[LSRA-mpls] mpls rsvp-te hello full-gr
[LSRA-mpls] quit
[LSRA] interface vlanif 100
[LSRA-Vlanif100] mpls rsvp-te hello
[LSRA-Vlanif100] quit
The configurations on LSRB and LSRC are similar to the configuration on LSRA, and are not
mentioned here.
Run the display mpls rsvp-te graceful-restart peer command on LSRA to view the GR
status of the neighboring node.
[LSRA] display mpls rsvp-te graceful-restart peer
Neighbor on Interface Vlanif100
Neighbor Addr: 172.1.1.2 Last Attribute: Added Usually
SrcInstance: 0x7C832B3D NbrSrcInstance: 0x6A48E0F5
Neighbor Capability:
Can Do Self GR
Can Support GR
GR Status: Normal
Restart Time: 90015 Millisecond
Recovery Time: 0 Millisecond
Stored GR message number: 0
PSB Count: 0 RSB Count: 1
Total to be Recover PSB Count: 0 Recovered PSB Count: 0
Total to be Recover RSB Count: 0 Recovered RSB Count: 0
P2MP PSB Count: 0 P2MP RSB Count: 0
Total to be Recover P2MP PSB Count: 0 Recovered P2MP PSB Count: 0
Total to be Recover P2MP RSB Count: 0 Recovered P2MP RSB Count: 0
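The Restart Time and Recovery Time fields in the output above correspond to the two GR phases. The sketch below models how a GR helper treats a neighbor whose hellos stop; it is a simplified illustration of the concept (in the spirit of RFC 3473), not device behavior, and the 90015 ms default is taken from the sample output.

```python
# Simplified model of the RSVP GR phases visible in the display output above.
# Timer value comes from the sample output; names are illustrative.

def gr_neighbor_state(ms_since_hello_loss, hello_resumed, restart_time_ms=90015):
    """Return the helper's view of a restarting GR neighbor.

    While hellos are lost, stored RSVP state (PSBs/RSBs) is kept for up to
    restart_time_ms. If hellos resume in time, the neighbor enters the
    recovery phase to resynchronize state; otherwise the state is released.
    """
    if hello_resumed:
        return "Recovering"        # neighbor back: resynchronize PSB/RSB state
    if ms_since_hello_loss <= restart_time_ms:
        return "Restarting"        # keep forwarding state, wait for the neighbor
    return "Released"              # restart timer expired: tear down stored state
```

This is why forwarding continues during an active/standby switchover: the helper holds the label bindings until the restarting neighbor either recovers or times out.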
----End
Configuration File
l Configuration file of LSRA
#
sysname LSRA
#
vlan batch 100
#
mpls lsr-id 1.1.1.9
mpls
mpls te
mpls rsvp-te
mpls rsvp-te hello
mpls rsvp-te hello full-gr
mpls te cspf
#
isis 1
graceful-restart
is-level level-2
cost-style wide
network-entity 00.0005.0000.0000.0001.00
traffic-eng level-2
#
interface Vlanif100
ip address 172.1.1.1 255.255.255.0
isis enable 1
mpls
mpls te
mpls rsvp-te
mpls rsvp-te hello
#
interface GigabitEthernet0/0/1
port link-type trunk
port trunk allow-pass vlan 100
#
interface LoopBack1
ip address 1.1.1.9 255.255.255.255
isis enable 1
#
interface Tunnel1
ip address unnumbered interface LoopBack1
tunnel-protocol mpls te
destination 3.3.3.9
mpls te tunnel-id 100
mpls te commit
#
return
5.11 References
This section lists the references for MPLS TE.
The following table lists the references for MPLS TE.