
Unified MPLS Functionality, Features, and Configuration Example
Document ID: 118846
Contributed by Atahar Khan and Sudhir Kumar, Cisco TAC Engineers.
Mar 20, 2015

Contents
Introduction
Prerequisites
Requirements
Components Used
Configure
Network Evolution
Cisco Unified MPLS
Features and Components
Carry Label Information in BGP-4 (RFC 3107)
BGP Prefix-Independent Convergence (BGP PIC)
BGP Add-Path
Loop-Free Alternates and rLFA for IGP Fast Convergence
Cisco Unified MPLS Architecture Example
Unified MPLS Configuration Example
Core Area Border Router - Cisco IOS XR
Core Area Border Router Configuration
Pre-Aggregation Configuration
Cell Site Gateway (CSG) Configuration
MTG Configuration
Verify
CSG Node Output
Pre-Agg Node Outputs
Core ABR Node Outputs
Troubleshoot
Related Information

Introduction
This document describes Unified Multiprotocol Label Switching (MPLS), which is all about scaling. It
provides a framework of technology solutions that brings simple end-to-end traffic and/or services across a
traditionally segmented infrastructure. It makes use of both the benefits of a hierarchical infrastructure, which
improves scalability, and the simplicity of network design.

Prerequisites
Requirements
There are no specific requirements for this document.
Components Used
This document is not restricted to specific software and hardware versions.

The information in this document was created from the devices in a specific lab environment. All of the
devices used in this document started with a cleared (default) configuration. If your network is live, make sure
that you understand the potential impact of any command.

Configure
Network Evolution
When you look at the history of packet-based network services, a change in network business value
can be observed. This goes from discrete connectivity enhancements in order to make applications as fluent as
possible, to collaboration technologies in order to support mobile collaboration. Finally, on-demand cloud
services are introduced with the application services in order to optimize the tools used within an organization
and improve stability and cost of ownership.

Figure 1

This continuous value and functionality enhancement of the network results in a much more pervasive need
for network simplicity, manageability, integration, and stability, where networks have been segmented as a
result of disjointed operational islands and no real end-to-end path control. Now there is a need to bring it all
together with a single architecture which is easy to manage, provides scalability to 100,000's of nodes, and
uses the current High Availability and Fast Convergence technologies. This is what Unified MPLS brings to
the table: it brings the segmented network into a single control plane with end-to-end path visibility.

Modern Network Requirements

Increased bandwidth demand (video)
Increased application complexity (cloud and virtualization)
Increased need for convergence (mobility)

How can you simplify MPLS operations in increasingly larger networks with more complex application
requirements?
Traditional MPLS Challenges with Different Access Technologies

Complexity in order to achieve 50-millisecond convergence with Traffic Engineering Fast Reroute
(TE FRR)
Need for sophisticated routing protocols and interaction with Layer 2 protocols
Split of large networks into domains while services are delivered end-to-end
Common end-to-end convergence and resiliency mechanisms
Troubleshooting and provisioning end-to-end across multiple domains

The Unified MPLS attraction is summarized in this list:

Reduced number of operational points.
  In general transport platforms, a service has to be configured on every network element via
  operational points. The management system has to know the topology.
  In Unified MPLS, with the integration of all MPLS islands, the minimum number of
  operational points is achieved.
Possibility to easily provision services: Layer 3 (L3) VPN, Virtual Private Wire Service (VPWS),
and Virtual Private LAN Service (VPLS), without pseudowire stitching (PW stitching) or Inter-AS
mechanisms. With the introduction of MPLS within the aggregation, the static configuration that
creates MPLS islands is avoided.
Provide end-to-end MPLS transport.
Keep Interior Gateway Protocol (IGP) areas separated and routing tables small.
Fast convergence.
Easy to configure and troubleshoot.
Ability to integrate with any access technology.
IPv6 readiness.

Cisco Unified MPLS


Unified MPLS is defined by the addition of extra features to classical/traditional MPLS, which gives more
scalability, security, simplicity, and manageability. In order to deliver the MPLS services end-to-end, an
end-to-end Label Switched Path (LSP) is needed. The goal is to keep the MPLS services (MPLS VPN,
MPLS L2VPN) as they are, but introduce greater scalability. In order to do this, move some of the IGP
prefixes into Border Gateway Protocol (BGP) (the loopback prefixes of the Provider Edge (PE) routers),
which then distributes the prefixes end-to-end.

Figure 2
Before the Cisco Unified MPLS architecture is discussed, it is important to understand the key features used
in order to make this a reality.

Features and Components


Carry Label Information in BGP-4 (RFC 3107)

It is a prerequisite to have a scalable method in order to exchange prefixes between network segments. You
could simply merge the IGPs (Open Shortest Path First (OSPF), Intermediate System-to-Intermediate System
(IS-IS), or Enhanced Interior Gateway Routing Protocol (EIGRP)) into a single domain. However, an IGP is
not designed to carry 100,000's of prefixes. The protocol of choice for that purpose is BGP. It is a well-proven
protocol which supports the Internet with 100,000's of routes and MPLS VPN environments with millions of
entries. Cisco Unified MPLS uses BGP-4 with label information exchange (RFC 3107). When BGP
distributes a route, it can also distribute an MPLS label that is mapped to that route. The MPLS label mapping
information for the route is carried in the BGP update message that contains the information about the route.
If the next hop is not changed, the label is preserved; the label changes if the next hop changes. In Unified
MPLS, the next hop changes at Area Border Routers (ABRs).

When you enable RFC 3107 on both BGP routers, the routers advertise to each other that they can then send
MPLS labels with the routes. If the routers successfully negotiate their ability to send MPLS labels, the
routers add MPLS labels to all outgoing BGP updates.

The label exchange is needed in order to keep the end-to-end path information between segments. As a
result, each segment becomes small enough to be managed by operators, and at the same time there is circuit
information distributed for path awareness between two different IP speakers.
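As a minimal sketch of how this label exchange is enabled (the AS number and neighbor addresses here are illustrative, not taken from the example network), RFC 3107 is turned on per neighbor. On Cisco IOS, the send-label keyword activates the IPv4 + label capability; on Cisco IOS XR, the labeled-unicast address family is used together with allocate-label:

! Cisco IOS - enable IPv4 + label on an iBGP session
router bgp 100
 neighbor 10.0.0.2 remote-as 100
 neighbor 10.0.0.2 update-source Loopback0
 address-family ipv4
  neighbor 10.0.0.2 activate
  neighbor 10.0.0.2 send-label

! Cisco IOS XR - equivalent configuration
router bgp 100
 address-family ipv4 unicast
  allocate-label all           ! Assign labels to the advertised routes
 !
 neighbor 10.0.0.1
  remote-as 100
  update-source Loopback0
  address-family ipv4 labeled-unicast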

How does it work?

Figure 3

In Figure 3 you can see that there are three segments with Label Distribution Protocol Label Switched Paths
(LDP LSPs), and the access network does not have LDP enabled. The objective is to join them together so that
there is a single MPLS path (Internal BGP (iBGP) hierarchical LSP) between the Pre-Aggregation (Pre-Agg)
Nodes. As the network is a single BGP Autonomous System (AS), all sessions are iBGP sessions. Each
segment runs its own IGP (OSPF, IS-IS, or EIGRP) and LDP LSP paths within the IGP domain. Within Cisco
Unified MPLS, the routers (ABRs) that join the segments must be BGP inline route-reflectors with
Next-Hop-Self and RFC 3107 (IPv4 + label) configured on the sessions. Within the Cisco Unified MPLS
architecture, these BGP speakers are referred to as ABRs.

Why are the ABRs inline route-reflectors?

One of the goals of Unified MPLS is to have a highly scalable end-to-end infrastructure. Thus, each segment
should be kept simple in order to operate. All peerings are iBGP peerings, therefore there is a need for a
full-mesh of peerings between all iBGP speakers within the complete network. That results in a very
impractical network environment if there are thousands of BGP speakers. If the ABRs are made
route-reflectors, the number of iBGP peerings is reduced to the number of BGP speakers per segment instead
of between all BGP speakers of the complete AS.
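A minimal sketch of the inline route-reflector role on an ABR, in IOS XR syntax (the neighbor address is illustrative); the route-reflector-client statement under the labeled-unicast address family is what removes the need for a full mesh within the segment:

router bgp 100
 neighbor 10.0.0.3
  remote-as 100
  update-source Loopback0
  address-family ipv4 labeled-unicast
   route-reflector-client      ! Reflect routes between the segment's iBGP speakers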

Why Next-Hop-Self?

BGP operates on the basis of recursive routing lookups. This is done in order to accommodate scalability
within the underlying IGP that is utilized. For the recursive lookup, BGP uses the Next-Hop attached to each
BGP route entry. Thus, for example, if a source node desires to send a packet to a destination node and the
packet hits the BGP router, then the BGP router does a routing lookup in its BGP routing table. It finds a
route toward the destination node and finds the Next-Hop as a next step. This Next-Hop must be known by the
underlying IGP. As the final step, the BGP router forwards the packet onwards based upon the IP and MPLS
label information attached to that Next-Hop.

In order to make sure that within each segment only the local Next-Hops need to be known by the IGP, the
Next-Hop attached to the BGP entry must be within the network segment and not within a neighboring or
more distant segment. A rewrite of the BGP Next-Hop with the Next-Hop-Self feature ensures that the
Next-Hop is within the local segment.
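A hedged sketch of this Next-Hop rewrite on an ABR, again in IOS XR syntax with an illustrative neighbor address; next-hop-self makes the ABR itself the BGP Next-Hop, so the local IGP only has to carry the ABR loopbacks:

router bgp 100
 neighbor 10.0.0.3
  remote-as 100
  address-family ipv4 labeled-unicast
   next-hop-self               ! Rewrite the Next-Hop to this ABR's loopback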

Put It All Together

Figure 4

Figure 4 provides an example of how the L3 VPN prefix 'A' and label exchange operates and how the MPLS
label stack is created to have the end-to-end path information for the traffic flow between both PEs.

The network is partitioned as three independent IGP/LDP domains. The reduced size of the routing and
forwarding tables on the routers enables better stability and faster convergence. LDP is used to build
intradomain LSPs within the domains. RFC 3107 BGP IPv4 + label updates are used as the interdomain label
distribution protocol in order to build hierarchical BGP LSPs across domains. RFC 3107 BGP inserts one
extra label in the forwarding label stack in the Unified MPLS architecture.

Intradomain LDP LSP


Interdomain BGP Hierarchical LSP

Figure 5

VPN Prefix 'A' is advertised by PE31 to PE11 with L3VPN service label 30 and next hop as PE31's loopback
via the end-to-end interdomain hierarchical BGP LSP. Now, look at the forwarding path for VPN prefix 'A'
from PE11 to PE31.

On PE11, Prefix A is known via the BGP session with PE31 as next hop PE31, and PE31 is recursively
reachable via P1 with BGP label 100. PE11 received the IPv4 + label information from P1 as BGP
updates because P1 is enabled with the RFC 3107 feature in order to send the IPv4 + label
information.
P1 is reachable from PE11 via the intradomain LDP LSP, and PE11 adds another LDP label on top of the
BGP label. Finally, the packet goes out of the PE11 node with three labels. That is, the 30
L3VPN service label, the 100 BGP label, and the 200 LDP IGP label.
The LDP top label continues to be swapped in the intradomain LDP LSP, and the packet reaches P1 with two
labels after Penultimate Hop Popping (PHP).
P1 is configured as an inline Route Reflector (RR) with next-hop-self, and it joins the two IGP domains and
LDP LSPs.
On P1, the next hop for PE31 is changed to P2, and the update is received via BGP with IPv4 + label
(RFC 3107). The BGP label is swapped with a new label because the next hop is changed, and the IGP label
is pushed on top.
The packet goes out of the P1 node with three labels, and service label 30 is untouched. That is, the 30
L3VPN service label, the 101 BGP label, and the 201 LDP label.
The LDP top label is swapped in the intradomain LDP LSP, and the packet reaches P2 with two labels after
PHP.
On P2, the next hop for PE31 is changed again, and PE31 is reachable via the IGP. The BGP label is removed
because an implicit-null BGP label was received from PE31 for PHP.
The packet leaves with two labels. That is, the 30 L3VPN service label and the 110 LDP label.
On PE31, the packet arrives with one label after PHP of the LDP label, and based on the service label
30, the unlabeled packet is forwarded to the CE31 destination under Virtual Routing and Forwarding
(VRF).

When you look at the MPLS label stack, the switching of the packet between a source and destination device,
based upon the previous prefix and label exchange, can be observed within the MPLS switching environment.
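Based on the labels used in this example, the label stack at each hop can be summarized as follows (a compact restatement of the steps above, top label first):

PE11 egress          : {LDP 200 | BGP 100 | VPN 30}
at P1 (after PHP)    : {BGP 100 | VPN 30}
P1 egress            : {LDP 201 | BGP 101 | VPN 30}
at P2 (after PHP)    : {BGP 101 | VPN 30}
P2 egress            : {LDP 110 | VPN 30}   (BGP label popped; implicit-null from PE31)
at PE31 (after PHP)  : {VPN 30} -> VRF lookup, unlabeled packet to CE31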
Figure 6

BGP Prefix-Independent Convergence (BGP PIC)

This is a Cisco technology which is used in BGP failure scenarios. The network converges without the loss of
the traditional seconds of BGP reconvergence. When BGP PIC is used, most failure scenarios can be
reduced to a reconvergence time below 100 msec.

How is this done?

Traditionally, when BGP detects a failure, it recalculates the best path for each BGP entry. When there is a
routing table with thousands of route entries, this can take a considerable amount of time. In addition, this
BGP router needs to distribute all those new best paths to each of its neighbors in order to inform them of the
changed network topology and the changed best paths. As the final step, each of the recipient BGP speakers
needs to make a best path calculation in order to find the new best paths.

From the time the first BGP speaker detects a failure and starts the best path calculation until all of its
neighbor BGP speakers have completed their recalculations, the traffic flow might be dropped.

Figure 7
The BGP PIC for IP and MPLS VPN feature improves BGP convergence after a network failure. This
convergence is applicable to both core and edge failures and can be used in both IP and MPLS networks. The
BGP PIC for IP and MPLS VPN feature creates and stores a backup/alternate path in the routing
information base (RIB), forwarding information base (FIB), and Cisco Express Forwarding (CEF) so that
when a failure is detected, the backup/alternate path can immediately take over, which enables fast failover.

With a single rewrite of the next-hop information, the traffic flow is restored. Additionally, the network BGP
convergence happens in the background, and the traffic flows are not impacted anymore. This rewrite happens
within 50 msec. If you use this technology, network convergence is reduced from seconds to 50 msec plus
the IGP convergence.
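A hedged configuration sketch of BGP PIC edge on an IOS XR ABR (the policy name is illustrative); the path-selection policy installs a precomputed backup path into the FIB so that the next-hop rewrite can happen immediately on failure:

route-policy PIC_Backup
  set path-selection backup 1 install   ! Precompute and install a backup path
end-policy
!
router bgp 100
 address-family ipv4 unicast
  additional-paths selection route-policy PIC_Backup

! On Cisco IOS, the equivalent is enabled under the address family:
router bgp 100
 address-family ipv4
  bgp additional-paths install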

BGP Add-Path

BGP Add-Path is an improvement on how BGP entries are communicated between BGP speakers. If on a
certain BGP speaker there is more than a single entry towards a certain destination, then that BGP speaker
only sends the entry which is its best path for that destination to its neighbors. The result is that no provisions
are made in order to allow the advertisement of multiple paths for the same destination.

BGP Add-Path is a BGP feature that allows more than only the best path to be advertised, and allows multiple
paths for the same destination without the new paths implicitly replacing any previous ones. This extension to
BGP is particularly important in order to aid BGP PIC when BGP route-reflectors are used, so that the
different BGP speakers within an AS have access to more BGP paths than just the best BGP path as seen by
the route-reflector.
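A minimal IOS XR sketch of Add-Path on a route-reflector (the policy name is illustrative); receive/send enable the capability on the sessions, and the advertise keyword in the selection policy lets the additional path be sent to the clients:

router bgp 100
 address-family ipv4 unicast
  additional-paths receive
  additional-paths send
  additional-paths selection route-policy Add_Path_Sel
!
route-policy Add_Path_Sel
  set path-selection backup 1 install advertise   ! Install and advertise a second path
end-policy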

Loop-Free Alternates and rLFA for IGP Fast Convergence

Operations to achieve 50-millisecond restoration after a link or node failure can be simplified dramatically
with the introduction of a technology called Loop-Free Alternates (LFAs). LFAs enhance the link-state
routing protocols (IS-IS and OSPF) in order to find alternative routing paths in a loop-free manner. LFA
allows each router to define and use a predetermined backup path if an adjacency (network node or link) fails.

In order to deliver a 50 msec restoration time in case of link or node failures, MPLS TE FRR can be deployed.
However, this requires the addition of another protocol (Resource Reservation Protocol, or RSVP) for the setup
and management of TE tunnels. While this might be necessary for bandwidth management, the protection and
restoration operation does not require bandwidth management. Hence, the overhead associated with the
addition of RSVP TE is considered high for simple protection of links and nodes.

LFA can provide a simple and easy technique without the deployment of RSVP TE in such scenarios. As a
result of these techniques, today's interconnected routers in large-scale networks can deliver 50 msec
restoration for link and node failures without a configuration requirement for the operator.

Figure 8

LFA FRR is a mechanism that provides local protection for unicast traffic in IP, MPLS, Ethernet over
MPLS (EoMPLS), Inverse Multiplexing over ATM (IMA) over MPLS, Circuit Emulation Service over
Packet Switched Network (CESoPSN) over MPLS, and Structure-Agnostic Time Division Multiplexing over
Packet (SAToP) over MPLS networks. However, some topologies (such as the ring topology) require
protection that is not afforded by LFA FRR alone. The Remote LFA FRR feature is useful in such situations.

Remote LFA FRR extends the basic behavior of LFA FRR to any topology. It forwards the traffic
around a failed node to a remote LFA that is more than one hop away. In Figure 9, if the link between C1 and
C2 fails, then in order to reach A1, C2 sends the packet over a directed LDP session to C5, which has
reachability to A1.

Figure 9

In Remote LFA FRR, a node dynamically computes its LFA node. After the alternate node is determined
(which is not directly connected), the node automatically establishes a directed Label Distribution Protocol
(LDP) session to the alternate node. The directed LDP session exchanges labels for the particular forwarding
equivalence class (FEC).

When the link fails, the node uses label stacking in order to tunnel the traffic to the remote LFA node, which
in turn forwards the traffic to the destination. All of the label exchanges and tunneling to the remote LFA
node are dynamic in nature; no preprovisioning or manual configuration is required.

For intradomain LSPs, remote LFA FRR is utilized for unicast MPLS traffic in ring topologies. Remote LFA
FRR precalculates a backup path for every prefix in the IGP routing table, which allows the node to rapidly
switch to the backup path when a failure is encountered. This provides recovery times on the order of 50
msec.
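A minimal sketch of per-prefix LFA and Remote LFA on an IOS XR IS-IS interface (the process and interface names follow the later example configuration, but this block is illustrative and not part of it):

router isis core-agg
 interface TenGigE0/0/0/2
  address-family ipv4 unicast
   fast-reroute per-prefix                              ! Precompute per-prefix LFA backup paths
   fast-reroute per-prefix remote-lfa tunnel mpls-ldp   ! Fall back to rLFA over a directed LDP session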

Cisco Unified MPLS Architecture Example


When all of the previous tools and features are put together within a network, the result is the Cisco
Unified MPLS network environment. This is the architecture example for large service providers.
Figure 10

The Core and Aggregation are organized as distinct IGP/LDP domains.


Interdomain hierarchical LSPs based on RFC 3107 BGP IPv4 + labels, which are extended out to the
Pre-Agg.
Intradomain LSPs based on LDP.
The interdomain Core/Aggregation LSPs are extended into the Access Networks by the redistribution of
the Radio Access Network Interior Gateway Protocol (RAN IGP) into the interdomain iBGP, and by the
redistribution of the necessary labeled iBGP prefixes (the Mobile Packet Core (MPC) gateway) into the
RAN IGP (selected via BGP communities).

Unified MPLS Configuration Example


Here is a simplified example of Unified MPLS.

Core Area Border Router - Cisco IOS XR

Pre-Aggregation and Cell Site Gateway Routers - Cisco IOS

Figure 11

200:200 - MPC Community
300:300 - Aggregation Community

Core IGP Domain - IS-IS Level 2
Aggregation IGP Domain - IS-IS Level 1
Access IGP Domain - OSPF Area 0
Core Area Border Router Configuration

Figure 12

! IGP Configuration
router isis core-agg
 net 49.0100.1010.0001.0001.00
 address-family ipv4 unicast
  metric-style wide
  propagate level 1 into level 2 route-policy drop-all ! Disable L1 to L2 redistribution
 !
 interface Loopback0
  ipv4 address 10.10.10.1 255.255.255.255
  passive
 !
 interface TenGigE0/0/0/0
 !
 interface TenGigE0/0/0/1
  circuit-type level-2-only ! Core facing IS-IS L2 link
 !
 interface TenGigE0/0/0/2
  circuit-type level-1 ! Aggregation facing IS-IS L1 link
!
route-policy drop-all
  drop
end-policy

! BGP Configuration

router bgp 100
 bgp router-id 10.10.10.1
 address-family ipv4 unicast
  allocate-label all ! Send labels with BGP routes
 !
 session-group infra
  remote-as 100
  cluster-id 1001
  update-source Loopback0
 !
 neighbor-group agg
  use session-group infra
  address-family ipv4 labeled-unicast
   route-reflector-client
   route-policy BGP_Egress_Filter out ! BGP Community based egress filtering
   next-hop-self
 !
 neighbor-group mpc
  use session-group infra
  address-family ipv4 labeled-unicast
   route-reflector-client
   next-hop-self
 !
 neighbor-group core
  use session-group infra
  address-family ipv4 labeled-unicast
   next-hop-self
!
community-set AllowedComm
  200:200,
  300:300
end-set
!
route-policy BGP_Egress_Filter
  if community matches-any AllowedComm then
    pass
  endif
end-policy

Pre-Aggregation Configuration

Figure 13

interface Loopback0
 ip address 10.10.9.9 255.255.255.255
!
interface Loopback1
 ip address 10.10.99.9 255.255.255.255

! Pre-Agg IGP Configuration

router isis core-agg
 net 49.0100.1010.0001.9007.00
 is-type level-1 ! IS-IS L1 router
 metric-style wide
 passive-interface Loopback0 ! Core-agg IGP loopback0

! RAN Access IGP Configuration

router ospf 1
 router-id 10.10.99.9
 redistribute bgp 100 subnets route-map BGP_to_RAN ! iBGP to RAN IGP redistribution
 network 10.9.9.2 0.0.0.1 area 0
 network 10.9.9.4 0.0.0.1 area 0
 network 10.10.99.9 0.0.0.0 area 0
 distribute-list route-map Redist_from_BGP in ! Inbound filtering to prefer labeled BGP learned prefixes
!
ip community-list standard MPC_Comm permit 200:200
!
route-map BGP_to_RAN permit 10 ! Only redistribute prefixes marked with the MPC community
 match community MPC_Comm
 set tag 1000
!
route-map Redist_from_BGP deny 10
 match tag 1000
!
route-map Redist_from_BGP permit 20

! BGP Configuration
router bgp 100
 bgp router-id 10.10.9.10
 bgp cluster-id 909
 neighbor csr peer-group
 neighbor csr remote-as 100
 neighbor csr update-source Loopback100 ! Cell Site Routers RAN IGP loopback100 as source
 neighbor abr peer-group
 neighbor abr remote-as 100
 neighbor abr update-source Loopback0 ! Core POP ABRs core-agg IGP loopback0 as source
 neighbor 10.10.10.1 peer-group abr
 neighbor 10.10.10.2 peer-group abr
 neighbor 10.10.13.1 peer-group csr
 !
 address-family ipv4
  bgp redistribute-internal
  network 10.10.9.10 mask 255.255.255.255 route-map AGG_Comm ! Advertise with Aggregation Community (300:300)
  redistribute ospf 1 ! Redistribute RAN IGP prefixes
  neighbor abr send-community
  neighbor abr next-hop-self
  neighbor abr send-label ! Send labels with BGP routes
  neighbor 10.10.10.1 activate
  neighbor 10.10.10.2 activate
 exit-address-family
!
route-map AGG_Comm permit 10
 set community 300:300

Cell Site Gateway (CSG) Configuration

Figure 14

interface Loopback0
 ip address 10.10.13.2 255.255.255.255

! IGP Configuration
router ospf 1
 router-id 10.10.13.2
 network 10.9.10.0 0.0.0.1 area 0
 network 10.13.0.0 0.0.255.255 area 0
 network 10.10.13.3 0.0.0.0 area 0

MTG Configuration

Figure 15

interface Loopback0
 ipv4 address 10.10.11.1 255.255.255.255

! IGP Configuration
router isis core-agg
 is-type level-2-only ! IS-IS L2 router
 net 49.0100.1010.0001.1001.00
 address-family ipv4 unicast
  metric-style wide

! BGP Configuration
router bgp 100
 bgp router-id 10.10.11.1
 address-family ipv4 unicast
  network 10.10.11.1/32 route-policy MPC_Comm ! Advertise Loopback0 with the MPC Community
  allocate-label all ! Send labels with BGP routes
 !
 session-group infra
  remote-as 100
  update-source Loopback0
 !
 neighbor-group abr
  use session-group infra
  address-family ipv4 labeled-unicast
   next-hop-self
 !
 neighbor 10.10.6.1
  use neighbor-group abr
 !
 neighbor 10.10.12.1
  use neighbor-group abr
!
community-set MPC_Comm
  200:200
end-set
!
route-policy MPC_Comm
  set community MPC_Comm
end-policy

Verify
The loopback prefix of the Mobile Transport Gateway (MTG) is 10.10.11.1/32, so that prefix is of interest.
Now, look at how packets are forwarded from the CSG to the MTG.

The MPC prefix 10.10.11.1 is known to the CSG router from the Pre-Agg with route tag 1000, and it can be
forwarded as a labeled packet with outgoing LDP label 31 (intradomain LDP LSP). The MPC community
200:200 was mapped to route tag 1000 on the Pre-Agg node during redistribution into OSPF.

CSG Node Output


CSG#sh mpls forwarding-table 10.10.11.1 detail
Local Outgoing Prefix Bytes Label Outgoing Next Hop
Label Label or Tunnel Id Switched interface
34 31 10.10.11.1/32 0 Vl40 10.13.1.0
MAC/Encaps=14/18, MRU=1500, Label Stack{31}

Pre-Agg Node Outputs


In the Pre-Agg node, the MPC prefix is redistributed from BGP into the RAN access OSPF process with
community-based filtering, and the OSPF process is redistributed into BGP. This controlled redistribution is
necessary in order to provide end-to-end IP reachability while, at the same time, each segment carries only the
minimum required routes.

The 10.10.11.1/32 prefix is known via hierarchical BGP 100 with the MPC 200:200 community attached. The
16020 RFC 3107 BGP label is received from the core Area Border Router (ABR), and the LDP label 22 is
added on top for intradomain forwarding after the next hop recursive lookup.

PreAGG1#sh ip route 10.10.11.1


Routing entry for 10.10.11.1/32
Known via "bgp 100", distance 200, metric 0, type internal
Redistributing via ospf 1
Advertised by ospf 1 subnets tag 1000 route-map BGP_TO_RAN
Routing Descriptor Blocks:
* 10.10.10.2, from 10.10.10.2, 1d17h ago
Route metric is 0, traffic share count is 1
AS Hops 0
MPLS label: 16020

PreAGG1#sh bgp ipv4 unicast 10.10.11.1


BGP routing table entry for 10.10.11.1/32, version 116586
Paths: (2 available, best #2, table default)
Not advertised to any peer
Local
<SNIP>
Local
10.10.10.2 (metric 30) from 10.10.10.2 (10.10.10.2)
Origin IGP, metric 0, localpref 100, valid, internal, best
Community: 200:200
Originator: 10.10.11.1, Cluster list: 0.0.3.233, 0.0.2.89
mpls labels in/out nolabel/16020

PreAGG1#sh bgp ipv4 unicast labels


Network Next Hop In label/Out label
10.10.11.1/32 10.10.10.1 nolabel/16021
10.10.10.2 nolabel/16020

PreAGG1#sh mpls forwarding-table 10.10.10.2 detail


Local Outgoing Prefix Bytes Label Outgoing Next Hop
Label Label or Tunnel Id Switched interface
79 22 10.10.10.2/32 76109369 Vl10 10.9.9.1
MAC/Encaps=14/18, MRU=1500, Label Stack{22}

PreAGG#sh mpls forwarding-table 10.10.11.1 detail


Local Outgoing Prefix Bytes Label Outgoing Next Hop
Label Label or Tunnel Id Switched interface
530 16020 10.10.11.1/32 20924900800 Vl10 10.9.9.1
MAC/Encaps=14/22, MRU=1496, Label Stack{22 16020}

Core ABR Node Outputs


The prefix 10.10.11.1 is known via the intradomain IGP (IS-IS L2) and, as per the MPLS forwarding table, it
is reachable through the LDP LSP.

ABRCore2#sh ip route 10.10.11.1


Routing entry for 10.10.11.1/32
Known via "isis coreagg", distance 115, metric 20, type level2
Installed Sep 12 21:13:03.673 for 2w3d
Routing Descriptor Blocks
10.10.1.0, from 10.10.11.1, via TenGigE0/0/0/0, Backup
Route metric is 0
10.10.2.3, from 10.10.11.1, via TenGigE0/0/0/3, Protected
Route metric is 20
No advertising protos.

For the distribution of the prefixes between the segmented areas, BGP with labels (RFC 3107) is utilized.
What still needs to reside within the segmented IGP areas are the loopbacks of the PEs and the addresses
related to the central infrastructure.

The BGP routers that connect the different areas together are the ABRs, which act as BGP Route-Reflectors.
These devices use the Next-Hop-Self feature in order to avoid the need to carry all Next-Hops of the complete
Autonomous System within the IGP; only the IP addresses of the PEs and the central infrastructure are needed.
Loop detection is done based upon the BGP Cluster-IDs.
For network resilience, BGP PIC with the BGP Add-Path feature should be used with BGP, and LFA with
the IGP. These features are not used in the previous example.

Troubleshoot
There is currently no specific troubleshooting information available for this configuration.

Related Information
Seamless MPLS Architecture
Cisco Unified MPLS White Paper
Cisco Carrier Packet Transport (CPT) System
Technical Support & Documentation - Cisco Systems

