
Cisco Connected Rail Solution

Implementation Guide
November 2016

Cisco Systems, Inc. www.cisco.com


Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL:
www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship
between Cisco and any other company. (1721R)

Contents
Connected Rail Solution Implementation Guide. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Solution Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Network Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Solution Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Connected Trackside Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Wireless Offboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Long Term Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Fluidmesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
MPLS Transport Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
MPLS Transport Gateway Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Pre-Aggregation Node Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Dual Homed Hub-and-Spoke Ethernet Access . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Per VLAN Active/Active MC-LAG (pseudo MC-LAG). . . . . . . . . . . . . . . . . . . . . . 15
L3VPN Service Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Connected Train Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
REP Ring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Gateway Mobility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Lilee Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Klas Telecom TRX-R6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Wireless Offboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
LTE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Fluidmesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Overlay Services Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Video Surveillance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Installation and Initial Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Camera Template - Basic 24x7 Recording . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Camera Template - Scheduled Recording . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Event-Based Recording Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Connected Edge Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Long Term Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Integration with Davra RuBAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Wi-Fi Access Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

Web Passthrough . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Performance, Scale, and QoS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
QoS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Klas Throughput Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Field Trial Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

Connected Rail Solution Implementation Guide
This document is the Cisco Connected Rail Solution Implementation Guide, which provides details about the test
topology, relevant feature configuration, and deployment of this solution. It is meant to be representative of a deployed
solution and not all-inclusive for every feature presented. It will assist in deploying solutions faster by showing an
end-to-end configuration along with relevant explanations.

Previous releases of the Connected Transportation System focused on Positive Train Control, Connected Roadways, and
Connected Mass Transit.

Audience
The audiences for this document are Cisco account teams, Cisco Advanced Services teams, and systems integrators
working with rail authorities. It is also intended for use by the rail authorities to understand the features and capabilities
enabled by the Cisco Connected Rail Solution design.

Organization
This guide includes the following sections:

 Solution Overview, page 2: Provides an overview of the Connected Rail Solution services.
 Network Topology, page 2: Describes the network topology for the two ways to implement the solution.
 Solution Components, page 4: Lists the major solution components.
 Connected Trackside Implementation, page 5: Describes the configuration of the trackside network infrastructure.
 Connected Train Implementation, page 28: Describes the configuration of the REP ring, Gateway Mobility, and Wireless Offboard.
 Overlay Services Implementation, page 57: Describes the configuration of video surveillance, which is used to provide live and recorded video to security personnel.
 Wi-Fi Access Implementation, page 68: Describes the Wi-Fi configuration that enables connectivity for the passengers on the train, law enforcement personnel, and the rail employees.
 Performance, Scale, and QoS, page 74: Describes QoS, Klas throughput performance, and scale for this solution.
 Field Trial Results, page 77: Describes real-world wireless field trial results.
 Glossary, page 85: Lists the acronyms and initialisms used in this document.


Solution Overview
This section provides an overview of the Connected Rail Solution services, including the Connected Trackside
implementation, Connected Train, overlay services such as video surveillance and infotainment, and onboard Wi-Fi
service.

 The Connected Trackside implementation includes the network topology supporting the data center services,
Multiprotocol Label Switching (MPLS) backhaul, Long Term Evolution (LTE), and trackside wireless radios. When the
train is in motion, it must maintain a constant seamless connection to the data center services by means of a mobility
anchor. This mobility anchor maintains tunnels over each connection to the train and can load share traffic or fail over between links if one of the links fails.

 The Connected Train implementation includes the network topology supporting the intra-train communications
among all the passengers, employees, law enforcement personnel, and onboard systems. It also helps enable the
video surveillance, voice communications, and data traffic offloading to the trackside over the wireless network.

 The overlay services depend on the Connected Train implementation and include video surveillance, infotainment,
and network management. Video surveillance is provided by the Cisco Video Surveillance Management system,
which includes the Video Surveillance Operations Manager (VSOM), and Long Term Storage (LTS) server in the data
center, a Video Surveillance Media Server (VSMS) locally onboard the train, and a number of rail-certified IP cameras
on the train. The passengers can access local information or entertainment from the onboard video servers and the
employees or law enforcement officers can see the video surveillance feeds in real-time. The Davra RuBAN network
management system is used for incident monitoring triggered by a number of soft or hard triggers.

 The onboard Wi-Fi service provides connectivity to all train passengers with separate Service Set Identifiers (SSIDs)
for passengers, employees, and law enforcement personnel. This traffic is tunneled back to the data center and
relies on the seamless roaming provided by the Connected Train implementation to provide a consistent user
experience.

QoS, performance and scale, and results from a live field trial to test wireless roaming at high speed are also covered in
later chapters.

The Cisco Connected Rail Design Guide is a companion document to this Implementation Guide. The Design Guide includes design options for all the services; this guide details the validation of those services, but not necessarily all the available options. The Design Guide can be found at the following URL:

 https://docs.cisco.com/share/proxy/alfresco/url?docnum=EDCS-11479438

Network Topology
Two distinct ways exist to implement this solution. In both, the passengers and other riders on
the train need access to network resources from the Internet, the provider's data center, or within the train. Both
implementations use a gateway on the train that forms tunnels with a mobility anchor in the provider's network.

 The Lilee solution uses Layer 2 tunneling to bridge the train Local Area Network (LAN) to a LAN behind the mobility
anchor. In this respect, the Lilee solution is similar to a Layer 2 Virtual Private Network (L2VPN).

 The Klas Telecom solution relies on Cisco IOS and specifically PMIPv6 to provide the virtual connection from the train
gateway to the mobility anchor in the data center. The networks on the train are advertised to the mobility anchor as
Layer 3 mobile routes. These mobile routes are only present on the mobility anchor and not the intermediate
transport nodes, so the Klas Telecom solution is similar to a Layer 3 Virtual Private Network (L3VPN).

The onboard network behind the mobility gateway is common to both solutions. Each car has a number of switches that
are connected to the cars in front and behind to form a ring. Cisco Resilient Ethernet Protocol (REP) is configured on the
switches to prevent loops and reduce the convergence time in the event of a link or node failure. The proposed switches
are the IP67-rated Cisco IE 2000 or the Klas Telecom S10/S26, which is based on the Cisco Embedded Services Switch
(ESS) 2020. In each carriage, one or more wireless access points (the hardened Cisco IW3702) exist to
provide wireless access to the passengers. These access points communicate with a Wireless LAN Controller (WLC)


installed in the data center. For video surveillance, each carriage also has a number of IP cameras, which communicate
with an onboard hardened server running the Cisco VSMS. An onboard infotainment system is also supported on the
train to provide other services to the passengers.

The Klas gateway solution is based on a virtualized Cisco Embedded Services Router (ESR) with Proxy Mobile IPv6
(PMIPv6) performing the mobility management running on the Klas TRX-R6 or TRX-R2. The Klas gateway on the train
performs the role of Mobile Access Gateway (MAG) while a Cisco Aggregation Services Router (ASR) 100X in the data
center performs the role of Local Mobility Anchor (LMA).

An example of an end-to-end system based on the IOS/Klas Telecom gateway is shown in Figure 1.

Figure 1 Topology Diagram for Solution Based on IOS/Klas Gateway

In the system based on the Lilee gateway, the MIG2450-ME-100 (sometimes referred to as ME-100) mobile gateway on
the train builds a Layer 2 tunnel over the infrastructure to the virtual Lilee Mobility Controller (vLMC) in the data center.
After the tunnel is formed, the vLMC will bridge all the train traffic to an access VLAN or VLAN trunk.

An example of an end-to-end system based on the Lilee gateway is shown in Figure 2.


Figure 2 Topology Diagram for System Based on Lilee Gateway

Solution Components
The Connected Rail Solution includes onboard, trackside, backhaul, and data center equipment.

The train equipment includes:

 Klas Telecom TRX-R2/R6 (for the Cisco IOS-based solution)

 Lilee Systems ME-100 (for the Lilee Systems-based solution)

 Cisco IE2000-IP67 switch

 Klas Telecom TRX-S10/S26 switch

 Cisco IW3702 access point

 Cisco IPC-3050/IPC-7070 IP camera

 Fluidmesh FM4200 radio

 Cisco VSMS on a rail-certified server

The trackside equipment includes:

 Cisco IE 4000 switch

 Cisco ASR 920/903 router


 Fluidmesh FM3200 radio

To support the train and trackside deployment, the data center includes:

 Cisco ASR 100X router

 Cisco WLC

 Cisco Unified Computing System (UCS) to support applications including:

— DHCP

— RuBAN Network Management

— Cisco VSOM

Hardware model numbers and software releases that were validated are listed in Table 1.
Table 1 Hardware Models and Software Releases

Hardware | Software Release | Role
Cisco IW3702-4E-x-K9 | Release 8.2 | Onboard wireless access point
Cisco CIVS-IPC-3050 / CIVS-IPC-7070 | Release 2.8 | IP camera
Cisco AIR-CT5508 | Release 8.2 | Wireless LAN controller
Klas Telecom TRX-R2/R6 | ESR5921 IOS 15.6(2)T | Onboard mobile gateway
Klas Telecom TRX-S10/S26 | ESS2020 IOS 15.2(4)EA1 | Onboard access switch
Lilee Systems LMS-2450-ME-100 | LileeOS Release 3.1 | Onboard mobile gateway
Lilee Virtual LMC | LileeOS Release 3.1 | Mobility anchor for Lilee
Fluidmesh FM4200 | Release 8.1 | Offboarding radio for train-to-track communication
Fluidmesh FM3200 | Release 8.1 | Trackside wireless radio
Cisco IE-2000 IP67 | IOS 15.2(4)EA1 | Onboard access switch
Cisco ASR 1000 | IOS-XE 3.16.1aS | Mobility anchor for train gateways
Cisco UCS | - | Server platform for hosting Lilee vLMC, Davra RuBAN
Cisco IE 4000 | IOS 15.2(4)EA1 | Trackside access switch
Cisco ASR 920 | XE 3.18.0S | Trackside pre-aggregation node
Cisco ASR 903 | XE 3.18.0S | Trackside pre-aggregation / aggregation node

Connected Trackside Implementation


This section includes the following major topics:

 Wireless Offboard, page 6

 MPLS Transport Network, page 8

 Per VLAN Active/Active MC-LAG (pseudo MC-LAG), page 15

 L3VPN Service Implementation, page 20


Wireless Offboard
The trackside wireless infrastructure includes everything needed to support a public or private LTE network and the
Fluidmesh radio network. A public mobile operator using a public or private Access Point Name (APN) provides the LTE
network in this solution. The Fluidmesh radios operate in the 4.9 - 5.9 GHz space using a proprietary implementation to
facilitate nearly seamless roaming between trackside base stations.

Long Term Evolution


The LTE implementation in this solution relies on a public mobile operator. A detailed description of this setup is out of
scope for this document. Because the train operator may use multiple LTE connections, the mobility anchor address must
be reachable from each public LTE network. In both the Klas Telecom and Lilee Systems gateway implementations, the
mobility anchor must have a single publicly reachable IP address to terminate the tunnels over the LTE connections.

Fluidmesh
The trackside radios are responsible for bridging the wireless traffic from the train to the trackside wired connections.
Within a group of trackside radios, one is elected or configured as a mesh end and the rest are mesh points. The mesh
point radios will forward the data from the connected train radios to the mesh end radio. The mesh end radio is similar
to a root radio and acts as the local anchor point for all the traffic from the trackside radios. It is configured with a default
gateway and performs all the routing for the trackside radio data.

The trackside radios are connected to the trackside switch network on a VLAN shared with the other trackside radios,
which is connected to the MPLS backhaul via a service instance (Bridge Domain Interface or BDI) on the provider edge
router. The trackside switched network is configured in a REP ring connected to a pair of provider edge routers for
redundancy. The provider edge routers run Virtual Router Redundancy Protocol (VRRP) between the BDIs to provide a
single virtual gateway address for the trackside radios. The BDIs are then placed in a L3VPN Virtual Routing and
Forwarding (VRF) for transit across the MPLS backhaul to the data center network.
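
A minimal sketch of the corresponding provider edge attachment is shown below. This is not the validated configuration; it assumes an ASR 920/903-style Ethernet service instance with BDI 200 for the radio VLAN, the DC VRF, and the 192.168.0.0/24 VRRP gateway addressing used elsewhere in this guide, and the exact VRRP syntax (legacy versus VRRPv3) depends on the platform and release.

!***Sketch only: radio-facing EVC on the PE (interface name is illustrative)***
interface GigabitEthernet0/0/5
service instance 200 ethernet
encapsulation dot1q 200
rewrite ingress tag pop 1 symmetric
bridge-domain 200
!
!***Sketch only: routed BDI placed in the L3VPN VRF with the VRRP virtual gateway***
interface BDI200
vrf forwarding DC
ip address 192.168.0.2 255.255.255.0
vrrp 1 ip 192.168.0.1
vrrp 1 priority 110
!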

The radios are configured through a web interface with the default IP set to 192.168.0.10/24. Figure 3 shows an example
of a mesh end radio configuration.

Figure 3 Trackside Radio General Mode

The radio is configured as a trackside radio on the FLUIDITY page. The unit role in this case is Infrastructure.


Figure 4 Trackside Radio FLUIDITY Configuration

After performing this configuration, new links will be available: FLUIDITY Quadro and FMQuadro.

Figure 5 Trackside Radio FLUIDITY Quadro

In this view, the trackside radios are displayed, with the associated train radio shown as a halo around the trackside radio it is connected to. The real-time signal strength of the train radio is also shown.

During a roam, the train radio halo will move to the strongest trackside radio in range. In Figure 6, the signal strength is
shown after a roaming event.


Figure 6 Trackside Radio FLUIDITY Quadro - Roam

MPLS Transport Network


The core and aggregation networks are integrated with a flat Interior Gateway Protocol (IGP) and Label Distribution
Protocol (LDP) control plane from the core to the Pre-Aggregation Nodes (PANs) in the aggregation domain. An example
MPLS transport network is shown in Figure 7.

Figure 7 Flat IGP/LDP Network with Ethernet Access

All nodes in the combined core-aggregation domain (MPLS Transport Gateway (MTG), Core, Aggregation Node (AGN), and PAN) make up the IS-IS Level-2 domain or Open Shortest Path First (OSPF) backbone area.

In this model, the access network could be one of the following options:

 Routers configured as Customer Edge (CE) devices in point-to-point or ring topologies over fiber Ethernet running
native IP transport, supporting L3VPN services. In this case, the CEs pair with PANs configured as L3VPN Provider
Edges (PEs), enabling Layer 3 backhaul. Another option is Time Division Multiplexing (TDM) circuits connected directly to the PANs, which are backhauled to the MTG using pseudowire-based circuit emulation.


 Ethernet Access Nodes in point-to-point and REP-enabled ring topologies over fiber access running native Ethernet.
In this case, the PANs provide service edge functionality for the services from the access nodes and connect the
services to the proper L2VPN or L3VPN service backhaul mechanism. The MPLS services are always enabled by the
PANs in the aggregation network.

MPLS Transport Gateway Configuration


This section shows the IGP/LDP configuration required on the MTG to build the Label Switched Paths (LSPs) to the PANs.

Figure 8 MPLS Transport Gateway

Interface Configuration
interface Loopback0
description Global Loopback
ipv4 address 100.111.15.1 255.255.255.255
!
!***Core-facing Interface***
interface TenGigE0/0/0/0
description To CN-K0201 (CORE) Ten0/0/0/0
cdp
service-policy output PMAP-NNI-E
ipv4 address 10.2.1.9 255.255.255.254
carrier-delay up 2000 down 0
load-interval 30
transceiver permit pid all
!
!***Core-facing Interface***
interface TenGigE0/0/0/1
description To CN-K0401 (CORE) Ten0/0/0/1
cdp
service-policy output PMAP-NNI-E
ipv4 address 10.4.1.5 255.255.255.254
carrier-delay up 2000 down 0
load-interval 30
transceiver permit pid all
!

IGP Configuration
router isis core-agg
set-overload-bit on-startup 250
net 49.0100.1001.1101.5001.00
nsf cisco
log adjacency changes
lsp-gen-interval maximum-wait 5000 initial-wait 50 secondary-wait 200
lsp-refresh-interval 65000
max-lsp-lifetime 65535
address-family ipv4 unicast
metric-style wide


ispf
spf-interval maximum-wait 5000 initial-wait 50 secondary-wait 200
!
interface Loopback0
passive
point-to-point
address-family ipv4 unicast
!
!
interface TenGigE0/0/0/0
circuit-type level-2-only
bfd minimum-interval 15
bfd multiplier 3
bfd fast-detect ipv4
point-to-point
address-family ipv4 unicast
fast-reroute per-prefix level 2
metric 10
mpls ldp sync
!
!
interface TenGigE0/0/0/1
circuit-type level-2-only
bfd minimum-interval 15
bfd multiplier 3
bfd fast-detect ipv4
point-to-point
address-family ipv4 unicast
fast-reroute per-prefix level 2
metric 10
mpls ldp sync
!
!
!
mpls ldp
router-id 100.111.15.1
discovery targeted-hello accept
nsr
graceful-restart
session protection
igp sync delay 10
log
neighbor
graceful-restart
session-protection
nsr
!
interface TenGigE0/0/0/0
!
interface TenGigE0/0/0/1
!
!

Pre-Aggregation Node Configuration


This section shows the IGP/LDP configuration required to build the intra-domain LSPs. Minimal BGP configuration is
shown as the basis for building the transport MPLS VPN.


Figure 9 Pre-Aggregation Node (PAN)

Interface Configuration
interface Loopback0
ip address 100.111.14.3 255.255.255.255
!
!***Redundant PAN interface***
interface TenGigabitEthernet0/0/0
description To PAN-K1404 Ten0/0/0
ip address 10.14.3.0 255.255.255.254
ip router isis core
load-interval 30
carrier-delay msec 0
mpls ip
mpls ldp igp sync delay 10
bfd interval 50 min_rx 50 multiplier 3
no bfd echo
cdp enable
isis network point-to-point
isis metric 10
isis csnp-interval 10
service-policy output PMAP-NNI-E
hold-queue 1500 in
hold-queue 2000 out
!
!***Uplink interface***
interface TenGigabitEthernet0/1/0
description To AGN-K1102 Ten0/0/0/1
ip address 10.11.2.1 255.255.255.254
ip router isis core
load-interval 30
carrier-delay msec 0
mpls ip
mpls ldp igp sync delay 10
bfd interval 50 min_rx 50 multiplier 3
no bfd echo
cdp enable
isis circuit-type level-2-only
isis network point-to-point
isis metric 10
service-policy output PMAP-NNI-E
hold-queue 1500 in
hold-queue 2000 out
!
!***Interface toward native IP CE ring in MPLS VPN VRFS***
!***Shown here for reference. Not part of Unified MPLS config.***
interface GigabitEthernet0/4/2
description To CSG-901-K1314
vrf forwarding RFS
ip address 10.13.14.1 255.255.255.254


ip ospf network point-to-point


load-interval 30
negotiation auto
bfd interval 50 min_rx 50 multiplier 3
no bfd echo
hold-queue 350 in
hold-queue 2000 out
!

IGP/LDP Configuration
router isis core-agg
net 49.0100.1001.1101.4003.00
!***PAN is an IS-IS Level-1-2 node***
ispf level-1-2
metric-style wide
fast-flood
set-overload-bit on-startup 180
max-lsp-lifetime 65535
lsp-refresh-interval 65000
spf-interval 5 50 200
prc-interval 5 50 200
lsp-gen-interval 5 5 200
no hello padding
log-adjacency-changes
nsf cisco
passive-interface Loopback0
bfd all-interfaces
mpls ldp sync
!
mpls label protocol ldp
mpls ldp graceful-restart
mpls ldp discovery targeted-hello accept
mpls ldp router-id Loopback0 force

Dual Homed Hub-and-Spoke Ethernet Access


Dual homed topologies for hub-and-spoke access have been implemented in the following modes:

 Per Node Active/Standby Multi-Chassis Link Aggregation Group (MC-LAG)

 Per VLAN Active/Active MC-LAG (pseudo Multichassis Link Aggregation Control Protocol or mLACP)

Figure 10 Per Node Active/Standby MC-LAG

Per Node Active/Standby MC-LAG


The Ethernet access node is dual-homed to the AGN nodes using a bundle interface. The AGN nodes establish an inter-chassis bundle and correlate the states of the bundle member ports using the Inter-Chassis Communication Protocol (ICCP).


At steady state, links connected to AGN1 are selected as active, while links to AGN2 are kept in standby state ready to
take over in case of a failure.

The following configuration shows the implementation of the AGN nodes, AGN-K1101 and AGN-K1102, and the Ethernet
Access Node.

Aggregation Node Configuration

AGN1: Active Point-of-Attachment (PoA) AGN-K1101: ASR9000


NNI Interfaces

For reference throughout this document, the following is a list of settings used for MC-LAG configuration.

The access-facing virtual bundle interface is configured as follows:

 Suppress-flaps timer set to 300 ms. This prevents the bundle interface from flapping during a LACP failover.

 Associated with ICCP redundancy group 300.

 Lowest possible port-priority (to ensure node serves as active PoA initially).

 Media Access Control (MAC) address for bundle interface. This needs to match the MAC address configured on the
other PoA's bundle interface.

 Wait-while timer set to 100 ms to minimize LACP failover time.

 Maximum links allowed in the bundle limited to 1. This configuration ensures that the access node will never enable
both links to the PoAs simultaneously if ICCP signaling between the PoAs fails.

!*** Interface configuration towards the OLT ***


interface TenGigE0/2/0/1
bundle id 102 mode active
!
interface Bundle-Ether102
mlacp iccp-group 102
mlacp switchover type revertive
mlacp switchover recovery-delay 300
mlacp port-priority 10
mac-address 0.1101.1102
!
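
The capture above does not show the suppress-flaps, wait-while, and maximum-links settings called out in the list above. A hedged sketch of how they could be added under the same bundle interface follows; exact command availability varies by IOS XR release.

!***Sketch only: additional bundle settings described in the list above***
interface Bundle-Ether102
lacp switchover suppress-flaps 300
bundle wait-while 100
bundle maximum-active links 1
!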

ICCP and Multichassis LACP

For reference throughout this document, the following is a list of settings used for ICCP configuration. The ICCP
redundancy group is configured as follows:

 Group ID.

 mLACP node ID (unique per node).

 mLACP system MAC address and priority (same for all nodes). These two values are concatenated to form the
system ID for the virtual LACP bundle.

 ICCP peer address. Since ICCP works by establishing an LDP session between the PoAs, the peer's LDP router ID
should be configured.

 Backbone interfaces. If all interfaces listed go down, core isolation is assumed and a switchover to the standby PoA
is triggered.

!*** ICCP configuration ***


redundancy


iccp
group 102
mlacp node 1
mlacp system mac 0000.1101.1111
mlacp system priority 20
member
neighbor 100.111.11.2
!
backbone
interface TenGigE0/0/0/0
interface TenGigE0/0/0/2
!
!
!
!

AGN2: Standby Point-of-Attachment (PoA) AGN-A9K-K1102: ASR9000


NNI Interfaces

interface Bundle-Ether300
!*** Interface configuration towards the OLT ***
interface TenGigE0/1/1/1
bundle id 102 mode active
!
interface Bundle-Ether102
mlacp iccp-group 102
mlacp switchover type revertive
mlacp switchover recovery-delay 300
mlacp port-priority 20
mac-address 0.1101.1102
!

ICCP and Multichassis LACP

The ICCP redundancy group is configured as follows:

 Group ID.

 mLACP node ID (unique per node).

 mLACP system MAC address and priority (same for all nodes). These two values are concatenated to form the
system ID for the virtual LACP bundle.

 ICCP peer address. Since ICCP works by establishing an LDP session between the PoAs, the peer's LDP router ID
should be configured.

 Backbone interfaces. If all interfaces listed go down, core isolation is assumed and a switchover to the standby PoA
is triggered.

!*** ICCP Configuration ***


redundancy
iccp
group 102
mlacp node 2
mlacp system mac 0000.1101.1111
mlacp system priority 20
member
neighbor 100.111.11.1
!
backbone
interface TenGigE0/0/0/0
interface TenGigE0/0/0/2
!
!


!
!

Ethernet Access Node Configuration


The following configuration is taken from a Cisco router running IOS. Configurations for Ethernet switches and other
access nodes can be easily derived from the following configuration.

NNI Interfaces
!*** Interface configuration towards the AGN nodes ***
interface GigabitEthernet0/8
description por to 1101 gi 0/0/1/16
no ip address
load-interval 30
negotiation auto
channel-protocol lacp
channel-group 6 mode active
!
interface GigabitEthernet0/6
description por to 1102 gi 0/0/1/17
no ip address
load-interval 30
negotiation auto
channel-protocol lacp
channel-group 6 mode active
!
!*** Port-Channel configuration towards the AGN nodes ***
interface Port-channel6
no ip address
load-interval 30
no negotiation auto
ethernet dot1ad nni
!
!
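
When the access node is an Ethernet switch rather than a router, the same dual-homed attachment is an ordinary LACP port channel carried over trunk ports. The following is a minimal sketch, assuming a Cisco IOS switch and the same channel-group number; interface names and descriptions are illustrative.

!*** Sketch only: switch-based access node uplinks towards the AGN nodes ***
interface GigabitEthernet1/1
description to AGN-K1101
switchport mode trunk
channel-group 6 mode active
!
interface GigabitEthernet1/2
description to AGN-K1102
switchport mode trunk
channel-group 6 mode active
!
interface Port-channel6
switchport mode trunk
!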

Per VLAN Active/Active MC-LAG (pseudo MC-LAG)


The Ethernet access node connects to each AGN via standalone Ethernet links or bundle interfaces that are part of one or more common bridge domains. All the links terminate in a common multi-chassis bundle interface at the AGN and are placed in active or hot-standby state per node and VLAN via ICCP-SM negotiation.

In steady state conditions, each AGN node forwards traffic only for the VLANs it is responsible for, but takes over forwarding responsibility for all VLANs in case of a peer node or link failure.

The following configuration example shows the implementation of active/active per VLAN MC-LAG for VLANs 100 and
101, on the AGN nodes, AGN-K1101 and AGN-K1102, and the Access Node, ME-K0904.


Figure 11 Per VLAN Active/Active MC-LAG

Aggregation Nodes Configuration

AGN1: Active Point-of-Attachment (PoA) AGN-A9K-K1101: ASR9000

NNI Interfaces
interface Bundle-Ether1
!
interface Bundle-Ether1.100 l2transport
encapsulation dot1q 100
!
interface Bundle-Ether1.101 l2transport
encapsulation dot1q 101
!
interface GigabitEthernet0/0/1/1
bundle id 1 mode on

ICCP, ICCP-SM, and mLACP


For reference throughout this document, here is a list of settings used for ICCP-SM configuration. The ICCP-SM
redundancy group is configured as follows:

 Group ID.

 Multi-homing node ID (1 or 2 unique per node).

 ICCP peer address. Since ICCP works by establishing an LDP session between the PoAs, the peer's LDP router ID
should be configured.

 Backbone interfaces. If all interfaces listed go down, core isolation is assumed and a switchover to the standby PoA
is triggered.

redundancy
iccp
group 1
member
neighbor 100.111.11.2
!
backbone
interface TenGigE0/0/0/0
interface TenGigE0/0/0/2
!
!
!
!

l2vpn
redundancy
iccp group 1


multi-homing node-id 1
interface Bundle-Ether1
primary vlan 100
secondary vlan 101
recovery delay 60
!
!
!

Standby Point-of-Attachment (PoA) AGN-A9K-K1102: ASR9000

NNI Interfaces
interface GigabitEthernet0/3/1/12
bundle id 1 mode on
!
interface Bundle-Ether1
!
interface Bundle-Ether1.100 l2transport
encapsulation dot1q 100
!
interface Bundle-Ether1.101 l2transport
encapsulation dot1q 101
!

ICCP and mLACP


The ICCP redundancy group is configured as follows:

redundancy
iccp
group 1
member
neighbor 100.111.11.1
!
backbone
interface TenGigE0/0/0/0
interface TenGigE0/0/0/2
!
!
!
!*** ICCP-SM configuration ***
l2vpn
redundancy
iccp group 1
multi-homing node-id 2
interface Bundle-Ether1
primary vlan 101
secondary vlan 100
!
!
!

Ethernet Access Node


In this example, the Ethernet access node is a Cisco Ethernet switch running IOS. Configurations for other access node
devices can be easily derived from this configuration example, given that it shows a simple Ethernet trunk configuration
for each interface.

NNI Interfaces
interface GigabitEthernet0/13
port-type nni
switchport trunk allowed vlan 100-101


switchport mode trunk


load-interval 30
!
interface GigabitEthernet0/14
port-type nni
switchport trunk allowed vlan 100-101
switchport mode trunk
load-interval 30
!

Ethernet Access Rings


In addition to hub-and-spoke access deployments, the Connected Rail Solution design supports native Ethernet access rings off the MPLS Transport domain. These Ethernet access rings are composed of Cisco Industrial Ethernet switches,
providing ruggedized and resilient connectivity to many trackside devices.

The Ethernet access switch provides transport of traffic from the trackside Fluidmesh radios and other trackside
components. To provide segmentation between services over the Ethernet access network, the access switch
implements 802.1q VLAN tags to transport each service. Ring topology management and resiliency for the Ethernet
access network is enabled by implementing Cisco REP segments in the network.

The Ethernet access ring is connected to a pair of PANs at the edge of the MPLS Transport network. The PAN maps the
service transport VLAN from the Ethernet access network to a transport MPLS L3VPN VRF instance, which provides
service backhaul across the Unified MPLS transport network. The REP segment from the access network is terminated
on the pair of access nodes, providing closure to the Ethernet access ring.

If the endpoint equipment being connected at the trackside only supports a single default gateway IP address, VRRP is
implemented on the pair of PANs to provide a single virtual router IP address while maintaining resiliency functionality.

Pre-Aggregation Node Configuration


The following configurations are the same for both access nodes.

VRF Configuration
Route Target (RT) constrained filtering is used to minimize the number of prefixes learned by the PANs. In this example,
RT 10:10 is the common transport RT which has all prefixes. While all nodes in the transport network export any
connected prefixes to this RT, only the MTG nodes providing connectivity to the data center infrastructure and backend
systems will import this RT. These nodes will also export the prefixes of the data center infrastructure with RT 1001:1001.
The PAN nodes import this RT, as only connectivity with the data center infrastructure is required.

ip vrf DC
rd 10:10
!***Common RT for all nodes
route-target export 10:10
!***RT for DC-connected nodes only***
route-target import 1001:1001

Ethernet Access Ring NNI Configuration


interface GigabitEthernet0/0
description to Ethernet access ring
no ip address
negotiation auto
!***REP segment configuration***
rep segment 1 edge
cdp enable
!***Transport VLAN***
service instance 200 ethernet
encapsulation dot1q 200
rewrite ingress tag pop 1 symmetric
bridge-domain 200
! end


IP/MPLS Access Ring NNI Configuration


This interface has two service instances configured. The untagged service instance provides the Layer 3 connectivity for
the MPLS transport. The tagged service instance closes the Ethernet access ring and REP segment with the other access
node.

interface GigabitEthernet0/11
description to IP/MPLS Access Ring
no ip address
load-interval 30
carrier-delay msec 0
negotiation auto
rep segment 1 edge
synchronous mode
cdp enable
ethernet oam
!***VLAN for IP/MPLS transport***
service instance 10 ethernet
encapsulation untagged
bridge-domain 10
!
!***VLAN to close Ethernet access ring REP segment***
service instance 200 ethernet
encapsulation dot1q 200
rewrite ingress tag pop 1 symmetric
bridge-domain 200
! end

VRRP Configuration
The following configuration example shows how VRRP is implemented on each access node to enable a single gateway
IP address for an endpoint device.

PAN-1

interface Vlan200
ip vrf forwarding DC
ip address 192.168.0.2 255.255.255.0
vrrp 1 ip 192.168.0.1
vrrp 1 timers advertise 2
vrrp 1 preempt delay minimum 10
vrrp 1 priority 110
vrrp 1 track 1 decrement 20

PAN-2

interface Vlan200
ip vrf forwarding DC
ip address 192.168.0.3 255.255.255.0
vrrp 1 ip 192.168.0.1
vrrp 1 timers advertise 2
vrrp 1 preempt delay minimum 10
vrrp 1 priority 90
vrrp 1 track 1 decrement 20
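
The VRRP configurations above reference track object 1, which is not shown in the captures. The following is a minimal sketch of one possible definition, assuming the object tracks the line protocol of the MPLS-facing uplink; the tracked interface is illustrative.

track 1 interface TenGigabitEthernet0/1/0 line-protocol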

Ethernet Access Node Configuration


The identical configuration is used for each Ethernet access switch in the ring. Only one switch configuration is shown
here.

Ethernet Ring NNI Configuration


interface GigabitEthernet1/1
switchport mode trunk


rep segment 1
!
interface GigabitEthernet1/2
switchport mode trunk
rep segment 1
!

UNI to Trackside Radio Configuration


interface FastEthernet1/2
switchport access vlan 200
switchport mode access
!

L3VPN Service Implementation

Layer 3 MPLS VPN Service Model


This section describes the implementation details and configurations for the core transport network required for the Layer 3 MPLS VPN service model.

This section is organized into the following sections:

 MPLS VPN Core Transport, which gives the implementation details of the core transport network that serves all the
different access models.

 L3VPN Hub-and-Spoke Access Topologies, which describes direct endpoint connectivity at the PAN.

 L3VPN Ring Access Topologies, which provides the implementation details for REP-enabled Ethernet access rings.

Note: ASR 903 RSP1 and ASR 903 RSP2 support L3VPN Services with non-MPLS access.

Figure 12 MPLS VPN Service Implementation

MPLS VPN Core Transport


This section describes the L3VPN PE configuration on the PANs connecting to the access network, the L3VPN PE
configuration on the MTGs in the core network, and the route reflector required for implementing the L3VPN transport
services.

This section also describes the Border Gateway Protocol (BGP) control plane aspects of the L3VPN service backhaul.


Figure 13 BGP Control Plane for MPLS VPN Service

MPLS Transport Gateway MPLS VPN Configuration


This is a one-time MPLS VPN configuration done on the MTGs. No modifications are made when additional access nodes
or other MTGs are added to the network.

Data Center UNI


interface TenGigE0/0/0/2.1100
description Connected to Data Center.
vrf DC102
ipv4 address 115.1.102.3 255.255.255.0
ipv6 nd dad attempts 0
ipv6 address 2001:115:1:102::3/64
encapsulation dot1q 1100
!

VRF Definition
vrf DC102
address-family ipv4 unicast
!***Common Access RT imported by MTG***
import route-target
10:10
!
!***Export MTG RT.***
!***Imported by every PAN in entire network.***
export route-target
1001:1001
!
!
address-family ipv6 unicast
import route-target
10:10
!
export route-target
1001:1001
!
!
!


MTG-1 VPNv4/v6 BGP Configuration


router bgp 1000
bgp router-id 100.111.15.1
bgp update-delay 360
!
vrf DC102
rd 1001:1001
address-family ipv4 unicast
redistribute connected
!
address-family ipv6 unicast
redistribute connected
!
!

MTG-2 VPNv4/v6 BGP Configuration


router bgp 1000
bgp router-id 100.111.15.2

!
vrf DC102
rd 1001:1002
address-family ipv4 unicast
redistribute connected
!
address-family ipv6 unicast
redistribute connected
!
!

Note: Each MTG has a unique RD for the MPLS VPN VRF to properly enable BGP FRR Edge functionality.

PAN VPNv4 PE Configuration


router bgp 1000
bgp router-id 100.111.14.1

!***CN-RR***
neighbor 100.111.15.50 peer-group cn-rr
!
address-family vpnv4
bgp nexthop trigger delay 3
!***CN-RR***
neighbor cn-rr send-community both
neighbor 100.111.15.50 activate
exit-address-family
!
address-family vpnv6
bgp nexthop trigger delay 3
!***CN-RR***
neighbor cn-rr send-community both
neighbor 100.111.15.50 activate
exit-address-family
!
!***RT Constrained Route Distribution towards CN-RR***
address-family rtfilter unicast
neighbor cn-rr send-community extended
neighbor 100.111.15.50 activate
exit-address-family
!


Centralized CN-RR Configuration


When a new PAN is added to the core/aggregation network, the only BGP change required on the route reflector is activating the new neighbor.

Centralized vCN-RR Configuration


router bgp 1000
bgp router-id 100.111.15.50
!
address-family vpnv4 unicast
nexthop trigger-delay critical 2000
!
address-family vpnv6 unicast
nexthop trigger-delay critical 2000
!
!***Peer group for all nodes***
session-group intra-as
remote-as 1000
!
!***Neighbor Group for MTGs***
neighbor-group mtg
use session-group intra-as
!
!***MTGs are Route-Reflector Clients***
address-family vpnv4 unicast
route-reflector-client
!
address-family vpnv6 unicast
route-reflector-client
!
!
!***Neighbor Group for PANs
neighbor-group pan
use session-group intra-as
!
!***PANs are Route-Reflector Clients***
address-family vpnv4 unicast
route-reflector-client
!
address-family vpnv6 unicast
route-reflector-client
!
!
exit-address-family
!
!***MTGs***
neighbor 100.111.15.1
use neighbor-group mtg
!
neighbor 100.111.15.2
use neighbor-group mtg
!
!***PANs***
neighbor 100.111.14.1
use neighbor-group pan
!
neighbor 100.111.14.2
use neighbor-group pan
!

end-policy


MTG VPNv4/v6 PE Configuration


router bgp 1000
nsr
bgp router-id 100.111.15.1
!
session-group intra-as
!
neighbor-group cn-rr
use session-group intra-as
!
address-family vpnv4 unicast
!
address-family vpnv6 unicast
!
!
!***CN-RR***
neighbor 100.111.15.50
use neighbor-group cn-rr
!

L3VPN over Hub-and-Spoke Access Topologies


This section describes the implementation details of direct endpoint connectivity at the PAN over hub-and-spoke access
topologies.

Direct Endpoint Connectivity to PAN Node


This section shows the configuration of PAN K1401 to which the endpoint is directly connected.

MPLS VPN PE Configuration on PAN K1401


Directly-attached Endpoint UNI

interface GigabitEthernet0/3/6
vrf forwarding VPN224
ip address 114.1.224.1 255.255.255.0
load-interval 30
negotiation auto
ipv6 address 2001:114:1:224::1/64

VRF Definition
vrf definition VPN224
rd 10:104
!
address-family ipv4
export map ADDITIVE
route-target export 10:104
route-target import 10:104
route-target import 1001:1001
route-target import 236:236
route-target import 235:235
exit-address-family
!
address-family ipv6
export map ADDITIVE
route-target export 10:104
route-target import 10:104
route-target import 1001:1001
route-target import 235:235
exit-address-family
!

!***Route map to export Global RT 10:10 in addition to Local RT 10:104***


route-map ADDITIVE permit 10
set extcommunity rt 10:10 additive


!***VPN BGP Configuration***


router bgp 1000
neighbor pan peer-group
neighbor pan remote-as 1000
neighbor pan password lab
neighbor pan update-source Loopback0
!
address-family vpnv4
bgp nexthop trigger delay 2
neighbor pan send-community extended
!
address-family vpnv6
bgp nexthop trigger delay 2
neighbor pan send-community extended
!
address-family ipv4 vrf VPN224
!***For Directly Connected endpoint***
redistribute connected
exit-address-family
!
address-family ipv6 vrf VPN224
!***For Directly Connected endpoint***
redistribute connected
exit-address-family

L3VPN over Ring Access Topologies


L3VPN transport over ring access topologies is implemented for REP-enabled Ethernet access rings. This section shows the configuration for the IOS-XR-based PANs terminating the service from the Ethernet access ring, as well as a sample access node router.

PAN dual homing is achieved by a combination of VRRP, Routed pseudowire (PW), and REP providing resiliency and load
balancing in the access network. In this example, the PANs, AGN-1 and AGN-2, implement the service edge (SE) for the
Layer 3 MPLS VPN transporting traffic to the data center behind the MTG. A routed BVI acts as the service endpoint. The
Ethernet access network is implemented as a REP access ring and carries a dedicated VLAN for the Layer 3 MPLS VPN-based service. A PW running between the SE nodes closes the service VLAN, providing full redundancy on the ring.

VRRP is configured on the Routed BVI interface to ensure the endpoints have a common default gateway regardless of
the node forwarding the traffic.

AGN-2 Configuration
interface TenGigE0/2/1/3.302 l2transport
encapsulation dot1q 302
rewrite ingress tag pop 1 symmetric
!
l2vpn
bridge group L2VPN
bridge-domain L3VPN-302
interface TenGigE0/2/1/3.302
!
!*** Routed PW configured to other SE Node 100.111.3.1***
neighbor 100.111.3.1 pw-id 302
!
routed interface BVI302
!
!
!***VRF Definition***
vrf VPN224
address-family ipv4 unicast
import route-target
!***Local RT***


10:104
235:235
236:236
1001:1001
!
export route-policy ADDITIVE
export route-target
10:104
!
!
address-family ipv6 unicast
import route-target
10:104
235:235
236:236
1001:1001
!
export route-policy ADDITIVE
export route-target
10:104
!
!
!
!***BVI Interface Configuration***
interface BVI302
vrf VPN224
ipv4 address 30.2.1.2 255.255.255.0
ipv6 nd dad attempts 0
ipv6 address 2001:13:2:102::2/64
!
!***VRRP Configuration***
router vrrp
interface BVI302
address-family ipv4
vrrp 2
!***Highest Priority value to be active***
priority 253
preempt delay 600
address 30.2.1.1
bfd fast-detect peer ipv4 30.2.1.3
!
!

AGN-1 Configuration
interface TenGigE0/2/1/3.302 l2transport
encapsulation dot1q 302
rewrite ingress tag pop 1 symmetric
!
l2vpn
bridge group L2VPN
bridge-domain L3VPN-302
interface TenGigE0/2/1/3.302
!
!*** Routed PW configured to other SE Node 100.111.3.2***
neighbor 100.111.3.2 pw-id 302
!
routed interface BVI302
!
!
!
!***VRF Definition***
vrf VPN224
address-family ipv4 unicast
import route-target


!***Local RT ***
10:104
235:235
236:236
1001:1001
!
export route-policy ADDITIVE
export route-target
10:104
!
!
address-family ipv6 unicast
import route-target
10:104
235:235
236:236
1001:1001
!
export route-policy ADDITIVE
export route-target
10:104
!
!
!
!***BVI Interface Configuration***
interface BVI302
vrf VPN224
ipv4 address 30.2.1.3 255.255.255.0
ipv6 nd dad attempts 0
ipv6 address 2001:13:2:102::3/64
!
!***VRRP Configuration***
router vrrp
interface BVI302
address-family ipv4
vrrp 2
!***Highest Priority value to be active***
priority 252
address 30.2.1.1
bfd fast-detect peer ipv4 30.2.1.2
!
!

Sample Access Node Configuration


interface GigabitEthernet0/5
!***connection to endpoint***
service instance 302 ethernet
encapsulation dot1q 302
rewrite ingress tag pop 1 symmetric
bridge-domain 302
!
interface TenGigabitEthernet0/1
!*** NNI port***
service instance 302 ethernet
encapsulation dot1q 302
rewrite ingress tag pop 1 symmetric
bridge-domain 302
interface TenGigabitEthernet0/0
!*** NNI port****
service instance 302 ethernet
encapsulation dot1q 302
rewrite ingress tag pop 1 symmetric


bridge-domain 302

Connected Train Implementation


This section includes the following major topics:

 REP Ring, page 28

 Gateway Mobility, page 30

 Wireless Offboard, page 49

REP Ring
To maintain a resilient switched network onboard the train, the switches are connected in a ring topology configured with
Cisco REP. The onboard gateway can be connected in line with the ring or attached to the ring as a "router-on-a-stick."
If the onboard gateway is cabled in line with the ring, the ring must be configured to close through the gateway. If the ring is not closed, it will not have the proper failover protection. Figure 14 shows an example with the gateway in line with the ring and
Figure 15 shows an example of the gateway attached to a single switch.


Figure 14 Train Gateway in line with REP Ring


Figure 15 Train Gateway Singly Attached to Switch



Neither the Lilee ME-100 nor the Klas Telecom TRX routers support REP; therefore, if put in line with the ring, the
connected switches must be configured with REP Edge No-Neighbor (RENN). This will allow the ring to close and
maintain failure protection and a loop-free architecture. The main reason to put the gateway in line with the REP ring is if the switches have only two Gigabit Ethernet connections; in this case, putting the gateway in line on the Gigabit ports maintains a high-bandwidth ring. If the switch ports are all the same speed, attaching the router on a single port can be operationally less complex. The following is an example of a switch port connected to an in-line gateway.

In line
Switch1

interface GigabitEthernet1/1
description to TRX-R6 eth 0/1
switchport mode trunk
switchport nonegotiate
rep segment 100 edge no-neighbor primary

Switch4

interface GigabitEthernet1/1
description to TRX-R6 eth 0/2
switchport mode trunk
rep segment 100 edge no-neighbor preferred
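
The remaining switches in the ring (Switch2, Switch3, Switch5, and Switch6 in Figure 14) are not REP edge nodes. The following is a minimal sketch of their ring-facing ports, assuming the same segment ID; interface names are illustrative.

interface GigabitEthernet1/1
switchport mode trunk
rep segment 100
!
interface GigabitEthernet1/2
switchport mode trunk
rep segment 100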


The following is an example of a switch configured as an edge when the gateway is not in line.

Router on a Stick
Switch1

interface GigabitEthernet1/1
switchport mode trunk
rep segment 100 edge
!
interface GigabitEthernet1/2
switchport mode trunk
rep segment 100 edge

The interface facing the gateway in this case is configured as a trunk.

interface FastEthernet1/1
switchport mode trunk

Gateway Mobility

Lilee Systems
The Lilee-based solution requires an onboard gateway, the ME-100, and an offboard mobility anchor, the virtual Lilee
Mobility Controller (vLMC). The ME-100 supports a number of cellular, Wi-Fi, and Ethernet connections for the offboard
WAN connectivity. In this system, the cellular and Ethernet ports were used for validating connectivity to the trackside
infrastructure.

ME-100

WAN Connections
Please refer to Wireless Offboard, page 6 for the specific configurations for LTE and Fluidmesh.

LAN Connections
Each mobile network must be attached to a VLAN interface configured on the ME-100. When the Layer 2 mobility
function is enabled on the ME-100 and vLMC, these mobile networks will be connected at Layer 2 to the LAN side of the
vLMC. It is therefore important to ensure the addresses in the mobile network subnet are not duplicated by the addresses
on the LAN side of the vLMC.

The LAN connections can be configured as access ports or 802.1q trunk ports. In this system, the ME-100 was inserted
into the REP ring with the LAN ports configured as trunks. The configuration is given below.

config add interface vlan 10


config add interface vlan 20
config add interface vlan 21
config switch add vlan 10
config switch add vlan 20
config switch add vlan 21
config switch vlan 10 add port 1/1
config switch vlan 10 add port 1/2
config switch vlan 20 add port 1/1
config switch vlan 20 add port 1/2
config switch vlan 21 add port 1/1
config switch vlan 21 add port 1/2
config switch port 1/1 egress tagged
config switch port 1/2 egress tagged
config interface vlan 10 enable
config interface vlan 10 ip address 10.1.10.3 netmask 255.255.255.0
config interface vlan 20 enable
config interface vlan 20 ip address 10.1.20.3 netmask 255.255.255.0


config interface vlan 21 enable


config interface vlan 21 ip address 10.1.21.3 netmask 255.255.255.0

Layer 2 Mobility
Enabling Layer 2 mobility on the ME-100 and vLMC causes tunnels to be created between the devices and bridges the two LANs at Layer 2. The vLMC can then manage seamless roaming between the WAN interfaces while maintaining Layer 2 connectivity between the LANs.

! Helps enable L2 mobility service


config mobility type layer-2
! Configure the mobility controller on the Fluidmesh connection
config host mobility-controller ip address 10.4.4.5
! If WAN facing interface on the LMC is not in the same subnet
! as the Fluidmesh facing interface, a static route is needed.
! The gateway address is the VRRP virtual address configured on the
! aggregation nodes connecting to the trackside access switches.
config route ip network 10.4.4.0 netmask 255.255.255.0 gateway 192.168.0.1
! Configures the WAN interfaces to be used for connectivity to
! the LMC
! The IP used for the dialer interfaces must be reachable through
! the cellular network
config mobility uplink interface dialer 0 controller 91.91.91.5
config mobility uplink interface dialer 1 controller 91.91.91.5
config mobility uplink interface vlan 200 controller 10.4.4.5

vLMC
The Mobility Controller is used as the topological anchor point for the ME-100s. It is a Layer 3 device with the ability to
bridge Layer 3 interfaces to Layer 2 VLANs. The Lilee Mobility Controller (LMC) can be installed as a physical network
appliance or as a virtual machine. In this system, the LMC is virtualized and has dual WAN connections to keep the cellular
network separate from the wireless backhaul network. The LAN Ethernet connection is bridged to a VLAN interface which
is used for Layer 2 mobility.

WAN Connections
! Interface used for cellular connectivity
config interface eth 1 description "To-WAN-ASR1K-ER-g0/0/1"
config interface eth 1 enable
config interface eth 1 ip address 91.91.91.5 netmask 255.255.255.252
! Interface used for Fluidmesh connectivity
config interface eth 2 description "To-WAN-ASR1K-ER-g0/0/3"
config interface eth 2 enable
config interface eth 2 ip address 10.4.4.5 netmask 255.255.255.0
! Configures a default route to the WAN edge router
config route ip default gateway 91.91.91.6
! Configures a more specific route to the Fluidmesh network
config route ip network 192.168.0.0 netmask 255.255.255.0 gateway 10.4.4.4

LAN Connections
! Configures VLAN interface for mobile networks
config add interface vlan 10
config add interface vlan 20
config add interface vlan 21
! Configures Ethernet port that will be used for L2 connectivity
! to LAN side
config interface eth 3 description "To-DCswitch-g1/0/1"
config interface eth 3 enable
! Helps enable L3 support on VLAN interface
config interface vlan 10 enable
config interface vlan 10 ip address 10.1.10.2 netmask 255.255.255.0
config interface vlan 20 enable


config interface vlan 20 ip address 10.1.20.2 netmask 255.255.255.0


config interface vlan 21 enable
config interface vlan 21 ip address 10.1.21.2 netmask 255.255.255.0

Layer 2 Mobility
Enabling the Layer 2 mobility service on the vLMC only requires configuring the interface that will be bridged to the VLAN
interfaces and which VLANs will be bridged.

! Helps enable L2 mobility service


config mobility type layer-2
! Bridges the LAN connections from the ME-100 to the specified port
config mobility bridge interface eth 3

With the above configuration, the Ethernet port is logically equivalent to a trunk port; all frames will be VLAN tagged. To configure the bridge interface with a single VLAN, the line can be appended with a VLAN identifier.

config mobility bridge interface eth 3 vlan-access 10

In this scenario, the switch port should be configured as an access port in VLAN 10. In the former example, the switch
port should be configured as an 802.1q trunk.
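
The matching data center switch port facing the vLMC LAN connection (g1/0/1 in the eth 3 description above) is then an ordinary trunk or access port. The following is a minimal sketch of both options, assuming a Cisco IOS switch and the VLANs used in this example.

! Trunk option: vLMC bridge interface carrying all mobile VLANs
interface GigabitEthernet1/0/1
switchport mode trunk
switchport trunk allowed vlan 10,20,21
!
! Access option: vLMC bridge interface configured with vlan-access 10
interface GigabitEthernet1/0/1
switchport access vlan 10
switchport mode access
!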

In the case of a vLMC with the Ethernet port acting as a trunk, the port associated with this virtual Ethernet interface should have the VLAN ID set to ALL (4095). Additionally, it must have promiscuous mode set to Accept. This is due to the behavior of the virtual machine environment: the vSwitch does not perform MAC learning, so it filters out any traffic that does not match the MAC address of the Virtual Machine Network Interface Controller (vmNIC). The vLMC, however, uses a different MAC address for its VLAN interfaces, which does not match the vmNIC MAC. Without promiscuous mode, traffic to these VLANs would be dropped.

Load Balancing
The Lilee solution allows for equal and unequal load balancing between the different links used for roaming. The load balancing profile can also be changed depending on system conditions. For instance, in the steady state, the Fluidmesh radios could receive 100% of the traffic, and a condition can be configured so that, if the Fluidmesh connection becomes unavailable, the traffic is split evenly over the remaining cellular interfaces. This scenario is explained below.

! Creates the name of the condition being monitored


create event-condition "wifi-down"
! Configures the event condition to monitor whether the L2 mobility
! tunnel is active on VLAN 200
config event-condition "wifi-down" interface vlan 200 mobility tunnel down
! Creates a policy called "default" where dialer 0 and dialer 1 are
! disabled while VLAN 200 receives the rest of the traffic
config mobility policy-profile "default" uplink interface dialer 0 load-balance weight 0
config mobility policy-profile "default" uplink interface vlan 200 load-balance weight 1
config mobility policy-profile "default" uplink interface dialer 1 load-balance weight 0
! Creates a policy called "lte-only" where dialer 0 and dialer 1 are
! configured to share the traffic equally and VLAN 200 receives no
! traffic
config mobility policy-profile "lte-only" uplink interface dialer 0 load-balance weight 1
config mobility policy-profile "lte-only" uplink interface dialer 1 load-balance weight 1
config mobility policy-profile "lte-only" uplink interface vlan 200 load-balance weight 0
! Activates the "default" policy
config mobility activate policy-profile "default"
! Activates the "lte-only" policy if the Fluidmesh connection is
! unavailable
config mobility activate policy-profile "lte-only" by event-condition "wifi-down"

With the "default" policy activated, the tunnel output looks like the following.

ME-100-1.localdomain > show mobility tunnel all


Uplink | Uplink IP:Port | LMC IP:Port | Flags | Priority | Weight
--------------------------------------------------------------------------------------------------

dialer 0 45.47.0.24:57522 91.91.91.5:8086 U 1 0


dialer 1 10.1.201.114:40501 91.91.91.5:8086 U 1 0
vlan 200 192.168.0.100:50943 10.4.4.5:8086 UA 1 1
ME-100-1.localdomain >

The "U" flag indicates that the tunnel is up. The "A" flag indicates that the tunnel is active. When the Fluidmesh connection is lost and the Layer 2 mobility tunnel is no longer active on that link, the "lte-only" policy takes effect.

ME-100-1.localdomain > show mobility tunnel all


Uplink | Uplink IP:Port | LMC IP:Port | Flags | Priority | Weight
--------------------------------------------------------------------------------------------------
dialer 0 45.47.0.24:57522 91.91.91.5:8086 UA 1 1
dialer 1 10.1.201.114:40501 91.91.91.5:8086 UA 1 1
vlan 200 192.168.0.100:52875 10.4.4.5:8086 1 0

As seen above, both dialer interfaces are up and active while the VLAN 200 tunnel is not up or active.

The interfaces also support unequal load balancing as well as numerous event conditions to influence the load balancing
and failover behavior. An example of unequal load balancing is given below.

config mobility policy-profile "unequal" uplink interface dialer 0 load-balance weight 1


config mobility policy-profile "unequal" uplink interface dialer 1 load-balance weight 50
config mobility policy-profile "unequal" uplink interface vlan 200 load-balance weight 100
config mobility activate policy-profile "unequal"

With this configuration, dialer 1 will receive 50 times more traffic than dialer 0, and VLAN 200 will receive twice as much traffic as dialer 1 and 100 times more traffic than dialer 0. The configured weights can be seen in the show output of the tunnels below.
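
Assuming the weights translate into proportional shares of the tunneled traffic (a rough calculation for illustration, not a vendor-documented formula), the expected split is:

Total weight = 1 + 50 + 100 = 151
dialer 0 : 1/151   = roughly 0.7% of the traffic
dialer 1 : 50/151  = roughly 33% of the traffic
vlan 200 : 100/151 = roughly 66% of the traffic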

ME-100-1.localdomain > show mobility tunnel all


Uplink | Uplink IP:Port | LMC IP:Port | Flags | Priority | Weight
--------------------------------------------------------------------------------------------------
dialer 0 45.47.0.21:53888 91.91.91.5:8086 UA 1 1
dialer 1 10.1.201.112:51233 91.91.91.5:8086 UA 1 50
vlan 200 192.168.0.100:58368 10.4.4.5:8086 UA 1 100

This can also be verified in the traffic counters on the tunnel interfaces.

ME-100-1.localdomain > show mobility tunnel counters


Uplink | RX bytes | TX bytes | RX packets | TX packets
----------------------------------------------------------------------------------------
dialer 0 0 276000 0 276
dialer 1 0 14449192 0 14452
vlan 200 3827 29093854 24 29203

The TX bytes through Dialer 1 are approximately 50 times those through Dialer 0, and VLAN 200 has approximately twice as many TX bytes as Dialer 1.

Other event conditions and mobility profile options can be found in the LileeOS Software Configuration Guide and
Command Reference Guide.

Klas Telecom TRX-R6


The TRX product runs the ESR5921 router as a virtual machine; therefore, the TRX hardware must be configured independently of the virtual router. The KlasOS configuration method is designed to be similar to Cisco IOS, with much of the same look and feel. The steps to configure the TRX for a Cisco ESR5921 image are described below.

Virtual Machine Deployment


If a storage pool does not yet exist on the TRX, it must be created.

trx-r6(config)# vm pool create disk1 vm_pool

The ESR5921 qcow file then needs to be copied from a Secure Copy (SCP) or TFTP server to the TRX-R6 disk storage.

trx-r6(config)# copy tftp: disk1:

Once the image is copied to the filesystem, the virtual machine needs to be created.

trx-r6(config)# vm add c5921 vm_pool disk1 <qcow image>

The Klas router uses vSwitches to provide a connection from the physical interfaces to the virtual interfaces in the virtual
machine. They must be created prior to configuring them in the virtual machine.

trx-r6(config)# interface vSwitch 1

The vSwitches can now be configured on the virtual machine. It is important to choose the correct Network Interface Controller (NIC) type when adding them to a virtual machine; the options are virtio and e1000. If the NIC type is omitted, the default type is used.

trx-r6(config)# vm configure c5921 nic add vSwitch 1

After configuring all the virtual machine options, it must be started.

trx-r6(config)# vm start c5921

Then the console can be connected.

trx-r6# vm console c5921

Connecting to the console first brings the user to the KlasOS wrapper, which sits between the TRX hardware and the virtual machine. From this wrapper, the virtual machine must also be started.

voyagervm>
voyagervm> enable
voyagervm# configure terminal
voyagervm(config)# c5921 start
voyagervm(config)# end
voyagervm# write

Once the virtual machine is started, the ESR CLI can be accessed.

voyagervm# c5921 cli

Single Gateway

MAG Configuration
Proxy Mobile IPv6 (PMIPv6) is used in this system to enable seamless roaming across the WAN interfaces on the train gateway, supporting both failover and load sharing. The onboard IP gateway acts as the Mobile Access Gateway (MAG) and anchors itself to the Local Mobility Anchor (LMA) hosted on an ASR-100X in the data center. Once the MAG is configured with all the available roaming interfaces, it can use any or all of the links for train-to-trackside traffic.

Note: Changes to the PMIPv6 configuration cannot be made while there are bindings present.

Since PMIPv6 is built around IPv6, IPv6 must be enabled on the MAG.

ipv6 unicast-routing

Note: The MAG must also be time synced to the LMA. This can be accomplished by using a common Network Time
Protocol (NTP) source.
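
A minimal example of such a configuration, applied on both the MAG and the LMA (the server address below is a placeholder, not taken from this system):

ntp server 10.4.1.1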

Each MAG is configured with a unique IP address on a loopback interface that serves as the unique identifier to the LMA.
It is known as the home address.

interface Loopback100

ip address 100.100.100.100 255.255.255.255

Each WAN interface must have the proper Layer 2/Layer 3 configuration to connect to the cellular or wireless network.

Please refer to Wireless Offboard, page 6, for the specific configurations for LTE and Fluidmesh.

All the data traffic from the onboard train network, including passenger and employee traffic, enters the MAG on one or more interfaces connected to the switching network. In this implementation, each traffic type was configured on a different subinterface with an 802.1q tag.

interface Ethernet0/3.10
description WiredClients
encapsulation dot1Q 10
ip address 10.1.10.1 255.255.255.0
!
interface Ethernet0/3.20
description WirelessClients
encapsulation dot1Q 20
ip address 10.1.20.1 255.255.255.0
ip access-group Data center in
ip helper-address 10.4.1.3
!
interface Ethernet0/3.21
description APMgmt
encapsulation dot1Q 21 native
ip address 10.1.21.1 255.255.255.0
ip helper-address 10.4.1.3
!
interface Ethernet0/3.30
description Wireless-TransitEmployee
encapsulation dot1Q 30
ip address 10.1.30.1 255.255.255.0
ip helper-address 10.4.1.3
!
interface Ethernet0/3.40
description LawEnforcement clients
encapsulation dot1Q 40
ip address 10.1.40.1 255.255.255.0
ip helper-address 10.4.1.3
!
interface Ethernet0/3.50
description ContentServer-Local
encapsulation dot1Q 50
ip address 10.1.50.1 255.255.255.0
!
interface Ethernet0/3.60
description Video Surveillance
encapsulation dot1Q 60
ip address 10.1.60.1 255.255.255.0
!

Just as the LMA identifies each MAG by its home address, the MAG must know the LMA by a single IP address. Each
MAG WAN interface must have a route to the LMA's WAN-facing interface.

ip route 0.0.0.0 0.0.0.0 Ethernet0/1 192.168.201.1


ip route 0.0.0.0 0.0.0.0 Ethernet0/0 192.168.101.1
ip route 0.0.0.0 0.0.0.0 Ethernet0/2 192.168.0.1

The PMIPv6 configuration is divided into two sections, pmipv6-domain and pmipv6-mag.

PMIPv6-Domain

The domain configuration contains the encapsulation type, the LMA definition, the Network Access Identifier (NAI), and the mobile map definition if desired. To enable the multipath component of the MAG, the encapsulation must be set to udptunnel instead of gre. The LMA address must be reachable from all WAN interfaces, whether over the public cellular network or the operator's wireless network. Since the MAG is acting as the Mobile Node (MN), it requires an NAI for itself, which takes the form [user]@realm. In this case, the user represents the MAG, and the realm provides a way to bundle the mobile networks from all MAGs into a single network definition on the LMA. This NAI then points to the previously configured LMA.

ipv6 mobile pmipv6-domain CTS_DOM


encap udptunnel
lma CTS_LMA
ipv4-address 91.91.91.10
nai LMN_T1@cts.com
lma CTS_LMA

PMIPv6-MAG

Contained in the pmipv6-mag section are all the details specific to the particular MAG. The MAG name, MAG_T1 in this
case, must be unique among the other MAGs connected in this domain to the LMA. It represents an entire train and all
the traffic behind it. An important feature to note is the heartbeat. When configured on the MAG and LMA, each device
will send a heartbeat message with the source address of each configured roaming interface. The MAG and LMA will
then maintain a table of the status of each roaming interface. If a connection fails, the heartbeat would time out and the
tunnel over that connection would be brought down. This way the MAG and LMA can accurately know which paths are
active.

The other key feature in use is the logical mobile network feature. With a traditional mobile node, the client device is seen
as mobile and the LMA handles the addressing and mobility. With a logical mobile network, the MAG is considered mobile
as well as all the networks configured behind it. In this case, the LMA handles mobility for the MAG and its subnets, but
does not provide mobility to a specific client device. In other words, the LMA builds tunnels to the MAG, not the end
device. The logical-mn configuration indicates which interfaces are connected to the end devices.

! Defines the MAG name and the domain it belongs to


ipv6 mobile pmipv6-mag MAG_T1 domain CTS_DOM
! helps enable clients on different mobile networks to communicate without
! traversing the PMIPv6 tunnel
local-routing-mag
! Helps enable the heartbeat mechanism, which helps enable the LMA to know which
! roaming interfaces are active.
heartbeat interval 3 retries 3 timeout 3
! Helps enable QoS for the PMIPv6 control packets
dscp control-plane 48
! Declares which interfaces PMIPv6 should use for the WAN connectivity
address dynamic
roaming interface Ethernet0/0 priority 1 egress-att LTE label LTE0-GREEN
roaming interface Ethernet0/1 priority 1 egress-att LTE label LTE1-GREEN
roaming interface Ethernet0/2 priority 1 egress-att ETHERNET label FM-GREEN
! Helps enable the MAG to use multiple WAN interfaces and build tunnels
! over all of them simultaneously. The roaming interface priority is
! forced to 1 in this configuration.
multipath
! Declares which interface to use as the MAG interface
interface Loopback100
! Declares the LMA name, domain, and IP address
lma CTS_LMA CTS_DOM
ipv4-address 91.91.91.10
! Declares which interfaces are used as the mobile network and which
! interface is to be used as the home
logical-mn LMN_T1@cts.com
mobile network Ethernet0/3.10 label WiredClients
mobile network Ethernet0/3.20 label WifiClients
mobile network Ethernet0/3.21 label ApMgmt
mobile network Ethernet0/3.30 label TransitEmployee
mobile network Ethernet0/3.60 label VideoSurv

home interface Loopback100

LMA Configuration
The LMA represents the topological anchor point for all the MAGs. The PMIPv6 tunnels are terminated here and are
created dynamically when bindings are formed.

The LMA must have a single IP address that is reachable via the public cellular network and the private wireless network.

The LMA must have an ip local pool configured for the MAG home addresses as a placeholder. The LMA does not hand
out DHCP addresses from this pool.

ip local pool TRAIN_POOL 100.100.100.2 100.100.100.254

The PMIPv6 specific configuration is similar to the MAG in that it has a separate domain and LMA configuration section.

PMIPv6-Domain

The minimum requirements for the domain definition are the udptunnel encapsulation and the NAI definition. If the MAGs
can share a common summary address for the mobile networks, they can be known by the LMA with a common realm
definition. This allows for one network definition to be used instead of unique network statements for each MAG.

ipv6 mobile pmipv6-domain CTS_DOM


encap udptunnel
nai @cts.com
network LMN_T1NET

PMIPv6-LMA

Contained in the pmipv6-lma section are all the details specific to the LMA. The configuration here is similar to the MAG. Instead of configuring the MAGs statically, as is done for traditional mobile nodes where the MAG is stationary, the MAGs are learned dynamically, which allows them to connect with multiple roaming interfaces. The pool statement ensures that the connecting MAGs have a home interface in the defined range. The mobile network pool configures the subnets being used for all the traffic behind the trains. In this example, 10.0.0.0/8 indicates the entire pool of addresses in use across all the trains, but the LMA will only expect mobile subnets with a 24-bit mask. This allows the operator to use the second octet to represent a specific train, the third octet to represent a specific traffic type, and the fourth octet to represent a specific host.
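
As an illustration of this addressing scheme (the specific values are hypothetical but consistent with the subinterface addressing shown earlier, where 10.1.60.0/24 carries video surveillance on train 1), an address such as 10.2.60.15 would break down as:

10 . 2 . 60 . 15
 |   |    |    +--- host (for example, a specific camera)
 |   |    +-------- traffic type (for example, video surveillance)
 |   +------------- train identifier (train 2)
 +----------------- operator-wide mobile pool (10.0.0.0/8)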

! Defines the LMA name and the domain it belongs to


ipv6 mobile pmipv6-lma CTS_LMA domain CTS_DOM
! Defines the LMA IP address
address ipv4 91.91.91.10
! Helps enable the heartbeat mechanism for MAG reachability
heartbeat interval 3 retries 3
! Helps enable the LMA to accept PMIPv6 signaling from MAGs that are not
! statically configured
dynamic mag learning
! Helps enable multipath
multipath
! Helps enable QoS for the PMIPv6 control packets
dscp control-plane 48
! Defines the mobile address pools for a network identifier
network LMN_T1NET
! Address pool for the MAG home address
pool ipv4 TRAIN_POOL pfxlen 24
! Defines the pool that contains the mobile networks
mobile-network pool 10.0.0.0 pool-prefix 8 network-prefix 24

Verification
Once the configuration is complete, the MAG will initiate a connection with the LMA and the bindings will be created. It
should be verified that the PMIPv6 tunnels are active and the LMA has learned the mobile networks from the MAG.

MAG
MAG1# show ipv6 mobile pmipv6 mag binding
Total number of bindings: 1
----------------------------------------
[Binding][MN]: Domain: CTS_DOM, Nai: LMN_T1@cts.com
[Binding][MN]: State: ACTIVE
[Binding][MN]: Interface: Loopback100
[Binding][MN]: Hoa: 100.100.100.100, Att: 4, llid: LMN_T1@cts.com
[Binding][MN]: HNP: 0
[Binding][MN][LMA]: Id: CTS_LMA
[Binding][MN][LMA]: Lifetime: 3600
[Binding][MN]: Yes
[Binding][MN][Mobile Network]: Ethernet0/3.10
[Binding][MN][PATH]: interface: Ethernet0/0, Label: LTE0
State: PATH_ACTIVE
Tunnel: Tunnel1
Refresh time: 3240(sec), Refresh time Remaining: 45(sec)
[Binding][MN][PATH]: interface: Ethernet0/1, Label: LTE1
State: PATH_ACTIVE
Tunnel: Tunnel2
Refresh time: 3240(sec), Refresh time Remaining: 48(sec)
[Binding][MN][PATH]: interface: Ethernet0/2, Label: FM
State: PATH_ACTIVE
Tunnel: Tunnel0
Refresh time: 3240(sec), Refresh time Remaining: 37(sec)
----------------------------------------

MAG1# show ipv6 mobile pmipv6 mag tunnel


----------------------------------------------------
[MAG_T1] Tunnel Information
Peer [CTS_LMA] : Tunnel Bindings 1
Tunnel0:
src 192.168.0.51, dest 91.91.91.10
encap UDP/IP, mode reverse-allowed
Outbound Interface Ethernet0/2
11040371 packets input, 925585435 bytes, 0 drops
13325877 packets output, 19791427939 bytes
Peer [CTS_LMA] : Tunnel Bindings 1
Tunnel1:
src 192.168.101.2, dest 91.91.91.10
encap UDP/IP, mode reverse-allowed
Outbound Interface Ethernet0/0
1 packets input, 84 bytes, 0 drops
10 packets output, 1024 bytes
Peer [CTS_LMA] : Tunnel Bindings 1
Tunnel2:
src 192.168.201.2, dest 91.91.91.10
encap UDP/IP, mode reverse-allowed
Outbound Interface Ethernet0/1
0 packets input, 0 bytes, 0 drops
10 packets output, 1024 bytes
MAG1#

MAG1# show ipv6 mobile pmipv6 mag heartbeat


----------------------------------------------------
[MAG_T1] HeartBeat : enabled
Timer interval : 3, retries : 3, timeout : 3

[MAG_T1] Heartbeat Path Information


Path : src: 192.168.201.2, dst: 91.91.91.10, src port: 5436,
dst port: 5436, state: ACTIVE, label LTE1
interval: 3, retries: 3, timeout: 3
Path : src: 192.168.101.2, dst: 91.91.91.10, src port: 5436,
dst port: 5436, state: ACTIVE, label LTE0

interval: 3, retries: 3, timeout: 3


Path : src: 192.168.0.51, dst: 91.91.91.10, src port: 5436,
dst port: 5436, state: ACTIVE, label FM
interval: 3, retries: 3, timeout: 3
MAG1#

LMA
LMA# show ipv6 mobile pmipv6 lma binding
Total number of bindings: 1
----------------------------------------
[Binding][MN]: State: BCE_ACTIVE
[Binding][MN]: Domain: CTS_DOM, NAI: LMN_T1@cts.com
[Binding][MN]: HOA: 100.100.100.100, Prefix: 24
[Binding][MN]: HNP: 0
[Binding][MN][PEER]: Default Router: 100.100.100.2
[Binding][MN]: ATT: WLAN (4), Label: FM, Color: red
[Binding][MN][PEER2]:Transport VRF:
[Binding][MN][PEER2]:LLID: LMN_T1@cts.com
[Binding][MN][PEER2]: Id: MAG_T1
[Binding][MN][PEER2]: Lifetime: 3600(sec)
[Binding][MN][PEER2]: Lifetime Remaining: 2939(sec)
[Binding][MN][PEER2]: Tunnel: Tunnel0
[Binding][MN][GREKEY]: Upstream: 4742, Downstream: 0
[Binding][MN]: ATT: WLAN (4), Label: LTE0, Color: green
[Binding][MN][PEER3]:Transport VRF:
[Binding][MN][PEER3]:LLID: LMN_T1@cts.com
[Binding][MN][PEER3]: Id: MAG_T1
[Binding][MN][PEER3]: Lifetime: 3600(sec)
[Binding][MN][PEER3]: Lifetime Remaining: 2947(sec)
[Binding][MN][PEER3]: Tunnel: Tunnel1
[Binding][MN][GREKEY]: Upstream: 4742, Downstream: 0
[Binding][MN]: ATT: WLAN (4), Label: LTE1, Color: yellow
[Binding][MN][PEER4]:Transport VRF:
[Binding][MN][PEER4]:LLID: LMN_T1@cts.com
[Binding][MN][PEER4]: Id: MAG_T1
[Binding][MN][PEER4]: Lifetime: 3600(sec)
[Binding][MN][PEER4]: Lifetime Remaining: 2949(sec)
[Binding][MN][PEER4]: Tunnel: Tunnel2

LMA# show ipv6 mobile pmipv6 lma tunnel


----------------------------------------------------
[CTS_LMA] Tunnel Information
Peer [] : Tunnel Bindings 1
Tunnel0:
src 91.91.91.10
encap MUDP/IP, mode reverse-allowed
Outbound Interface NULL
228726 packets input, 336669806 bytes, 0 drops
7 packets output, 1132 bytes
Peer [] : Tunnel Bindings 1
Tunnel1:
src 91.91.91.10
encap MUDP/IP, mode reverse-allowed
Outbound Interface NULL
13326288 packets input, 19418427361 bytes, 0 drops
11040574 packets output, 925620028 bytes
Peer [] : Tunnel Bindings 1
Tunnel2:
src 91.91.91.10
encap MUDP/IP, mode reverse-allowed
Outbound Interface NULL
2 packets input, 152 bytes, 0 drops

1 packets output, 84 bytes


LMA#

LMA# show ipv6 mobile pmipv6 lma heartbeat


----------------------------------------------------
[CTS_LMA] HeartBeat : enabled
Timer interval : 3, retries : 3, timeout : 3

[CTS_LMA] Heartbeat Path Information


Path : src: 91.91.91.10, dst: 10.1.201.11, src port: 5436,
dst port: 5436, state: ACTIVE, label LTE1
interval: 3, retries: 3, timeout: 3
Path : src: 91.91.91.10, dst: 10.1.201.12, src port: 5436,
dst port: 5436, state: ACTIVE, label LTE0
interval: 3, retries: 3, timeout: 3
Path : src: 91.91.91.10, dst: 192.168.0.51, src port: 5436,
dst port: 5436, state: ACTIVE, label FM
interval: 3, retries: 3, timeout: 3

LMA#show ip route mobile

10.0.0.0/8 is variably subnetted, 20 subnets, 3 masks


M 10.1.10.0/24 is directly connected, Tunnel2
is directly connected, Tunnel1
is directly connected, Tunnel0
M 10.1.20.0/24 is directly connected, Tunnel2
is directly connected, Tunnel1
is directly connected, Tunnel0
100.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
M 100.100.100.0/24 is directly connected, Null0
M 100.100.100.100/32 is directly connected, Tunnel2
is directly connected, Tunnel1
is directly connected, Tunnel0

Mobile Maps
Mobile maps can be used to enable application-based routing for a greater degree of load sharing between the roaming interfaces. Without the mobile map feature, there is no way to guarantee that traffic will take a particular path when multipath is enabled. The mobile map feature works with access lists to assign a specific order of roaming interfaces to a given traffic type.

MAG Configuration
Access lists must be created for all traffic that will be subject to the mobile map configuration under PMIPv6.

ip access-list extended WiredBottom


permit ip 10.1.15.0 0.0.0.255 any
deny ip any any
ip access-list extended WiredTop
permit ip 10.1.10.0 0.0.0.255 any
deny ip any any

The PMIPv6 domain configuration section contains the access list to link-type mapping. The link-type names must match
the labels given to the roaming interfaces for the mobile maps to work properly.

ipv6 mobile pmipv6-domain CTS_DOM


mobile-map MPATH 1
match access-list WiredTop
set link-type FM LTE0 LTE1
mobile-map MPATH 2
match access-list WiredBottom
set link-type LTE0 LTE1 FM

Many mobile maps can be configured, but only one can be active on the MAG and LMA. Multiple access-lists can be added by using different sequence numbers in the mobile-map configuration. The link-type list defines, in order, which links the matching traffic will use; if the first link-type is unavailable, the second link will be used, and so on. The keyword NULL can be added to the end of the list to drop the traffic if none of the configured link-types is available (see the sketch after the binding output below). If the link-type names do not match the labels in the binding, the traffic will not be filtered properly.

Total number of bindings: 1


----------------------------------------
[Binding][MN]: Domain: CTS_DOM, Nai: LMN_T1@cts.com
[Binding][MN]: State: ACTIVE
[Binding][MN]: Interface: Loopback100
[Binding][MN]: Hoa: 100.100.100.100, Att: 4, llid: LMN_T1@cts.com
[Binding][MN]: HNP: 0
[Binding][MN][LMA]: Id: CTS_LMA
[Binding][MN][LMA]: Lifetime: 3600
[Binding][MN]: Yes
[Binding][MN][Mobile Network]: Ethernet0/3.10
[Binding][MN][Mobile Network]: Ethernet0/3.20
[Binding][MN][Mobile Network]: Ethernet0/3.21
[Binding][MN][Mobile Network]: Ethernet0/3.30
[Binding][MN][Mobile Network]: Ethernet0/3.40
[Binding][MN][Mobile Network]: Ethernet0/3.60
[Binding][MN][Mobile Network]: Ethernet0/3.70
[Binding][MN][Mobile Network]: Ethernet0/3.15
[Binding][MN][Mobile Network]: Ethernet0/3.25
[Binding][MN][Mobile Network]: Ethernet0/3.35
[Binding][MN][Mobile Network]: Ethernet0/3.45
[Binding][MN][Mobile Network]: Ethernet0/3.75
[Binding][MN][PATH]: interface: Ethernet0/0, Label: LTE0
State: PATH_ACTIVE
Tunnel: Tunnel1
Refresh time: 3240(sec), Refresh time Remaining: 1390(sec)
[Binding][MN][PATH]: interface: Ethernet0/1, Label: LTE1
State: PATH_ACTIVE
Tunnel: Tunnel2
Refresh time: 3240(sec), Refresh time Remaining: 1390(sec)
[Binding][MN][PATH]: interface: Ethernet0/2, Label: FM
State: PATH_ACTIVE
Tunnel: Tunnel0
Refresh time: 3240(sec), Refresh time Remaining: 1390(sec)
----------------------------------------
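
As a sketch of the NULL behavior described above (only the trailing NULL keyword differs from the earlier example), the following mobile-map entry drops WiredTop traffic when neither the Fluidmesh nor the cellular links are available:

ipv6 mobile pmipv6-domain CTS_DOM
mobile-map MPATH 1
match access-list WiredTop
set link-type FM LTE0 LTE1 NULL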

Once the mobile map is configured, it must be enabled in the pmipv6-mag configuration section.

ipv6 mobile pmipv6-mag MAG_T1 domain CTS_DOM


mobile-map MPATH

Once the bindings are established, there will be dynamic route-map entries created referencing those access-lists.

route-map MIP-08/12/16-19:02:11.714-1-PMIPV6, permit, sequence 1, identifier 570425522


Match clauses:
ip access-list ( AND ): PMIP_V4RT_2198 PMIPv6-WiredTop
Set clauses:
interface Tunnel0
Policy routing matches: 0 packets, 0 bytes
route-map MIP-08/12/16-19:02:11.714-1-PMIPV6, permit, sequence 2, identifier 1962934428
Match clauses:
ip access-list ( AND ): PMIP_V4RT_2198 PMIPv6-WiredBottom
Set clauses:
interface Tunnel1
Policy routing matches: 0 packets, 0 bytes

LMA Configuration
The mobile map configuration on the LMA is similar to the MAG. Access lists must be created for all traffic that will be
subject to the mobile map configuration under PMIPv6.

In the following example, the traffic sources are networks behind the LMA in the data center.

ip access-list extended CORP1toMAG


permit ip 10.4.0.0 0.0.255.255 any
deny ip any any
ip access-list extended CORP2toMAG
permit ip 10.3.0.0 0.0.255.255 any
deny ip any any

The PMIPv6 domain configuration section contains the access list to link-type mapping. The link-type names must be
consistent across the connected MAGs for the mobile maps to work properly.

ipv6 mobile pmipv6-domain CTS_DOM


mobile-map MPATH 1
match access-list CORP1toMAG
set link-type FM LTE0 LTE1
mobile-map MPATH 2
match access-list CORP2toMAG
set link-type LTE0 LTE1 FM

As with the MAG, if the link-type does not match the entry in the binding, the traffic will not be filtered properly.

Total number of bindings: 1


----------------------------------------
[Binding][MN]: State: BCE_ACTIVE
[Binding][MN]: Domain: CTS_DOM, NAI: LMN_T1@cts.com
[Binding][MN]: HOA: 100.100.100.100, Prefix: 24
[Binding][MN]: HNP: 0
[Binding][MN][PEER]: Default Router: 100.100.100.2
[Binding][MN]: ATT: WLAN (4), Label: FM, Color: blue
[Binding][MN][PEER1]:Transport VRF:
[Binding][MN][PEER1]:LLID: LMN_T1@cts.com
[Binding][MN][PEER1]: Id: MAG_T1
[Binding][MN][PEER1]: Lifetime: 3600(sec)
[Binding][MN][PEER1]: Lifetime Remaining: 556(sec)
[Binding][MN][PEER1]: Tunnel: Tunnel0

Once the mobile map is configured, it must be enabled in the pmipv6-lma configuration section.

ipv6 mobile pmipv6-lma CTS_LMA domain CTS_DOM


mobile-map MPATH
interface GigabitEthernet0/0/2

The interface statement helps enable the mobile map on that interface. Once the bindings are established, dynamic
route-map entries will be created.

X24-ASR1006X-LMA#show route-map dynamic


route-map MIP-08/12/16-14:11:43.657-5-PMIPV6, permit, sequence 36, identifier 838860805
Match clauses:
ip address (access-lists): PMIPv6-CORP2toMAG
Set clauses:
color red blue
Policy routing matches: 28451141 packets, 5006166050 bytes
route-map MIP-08/12/16-14:11:43.657-5-PMIPV6, permit, sequence 69, identifier 1476395014
Match clauses:
ip address (access-lists): PMIPv6-CORP1toMAG
Set clauses:
color blue red
Policy routing matches: 206358 packets, 306647988 bytes
Current active dynamic routemaps = 1

Multi Gateway
A dual-MAG or multi-MAG setup gives the operator the flexibility to have more roaming interfaces for a set of mobile
networks that can be used for redundancy, failover, or load sharing. However, one restriction in PMIPv6 is that two MAGs
cannot advertise the same mobile network. Therefore, the redundant MAGs must be configured to look like a single MAG
to the LMA. This is accomplished by using a First Hop Redundancy Protocol (FHRP) to let the MAGs share the mobile
networks. One MAG will be the active router for a set of mobile networks and the other MAG will be the backup. The LMA
will see one MAG entry with the roaming interfaces from both MAGs.

If MAG failover is the primary goal, configuring one MAG as active for one or more mobile networks and as backup for the others will accomplish this. The limitation of this approach is that a MAG could receive a disproportionate amount of traffic, depending on how the mobile networks are distributed between the MAGs. Another approach is to split the user traffic in half and configure each MAG as the active router for one half. This approach provides load balancing and failover for all traffic types.

The traffic must be split in a way that makes logical and logistical sense given the physical environment. If half of the wireless access points on a train are in one mobile network and the other half are in a different mobile network, a roaming wireless client will at some point receive a new IP address, which causes some disruption to its connection. If access points for both mobile networks are present in every car, a user roaming from car to car could bounce back and forth between the mobile networks. The advantage of this approach is that every car has the same configuration.

The alternative approach is to use one mobile network for the rear half of the train and another for the front half. As long as a mobile user stays in their half of the train, they are unlikely to roam to a different network. The disadvantage of this approach is that cars cannot be added or removed without ensuring each car is configured for the correct position in the train consist.

Using multiple MAGs also has implications for the switching network on the train. A single gateway can be connected in line with the REP ring or directly off one of the switches. With multiple MAGs, however, the gateways cannot be connected in line, because a REP ring supports only one edge and an in-line gateway must be configured as that edge. Therefore, multiple MAGs must be connected on trunk ports rather than REP edge ports.

In this system, the first approach will be used with identically configured cars.
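
As noted above, the MAG-facing ports on the onboard switches are plain 802.1Q trunks rather than REP edge ports. A sketch of such a port is shown below; the interface number is an assumption, and the VLAN list simply reflects the client VLANs used in this example.

interface GigabitEthernet1/3
description To-MAG2-trunk
switchport mode trunk
switchport trunk allowed vlan 10,15,20,21,25
! Note: no REP edge configuration on this port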

MAG Configuration
Each traffic type that is being load balanced across the MAGs needs its own unique subnet on each MAG. It may be preferable to keep some traffic, such as video surveillance, on a single subnet for ease of deployment; in that case, the traffic will have failover protection but will not be load balanced. Each subnet is then configured on a separate interface on both MAGs. In this system, VRRP is used as the FHRP to provide a virtual gateway for the mobile networks. A configuration example for one traffic type is given below; half of the clients connect to VLAN 10 and the other half to VLAN 15.

MAG1
interface Ethernet0/3.10
description WiredClients-Top
encapsulation dot1Q 10
ip address 10.1.10.2 255.255.255.0
vrrp 10 ip 10.1.10.1
vrrp 10 preempt delay minimum 60
vrrp 10 priority 105
end
interface Ethernet0/3.15
description WiredClients-bottom
encapsulation dot1Q 15
ip address 10.1.15.3 255.255.255.0
vrrp 15 ip 10.1.15.1
end

MAG1#show vrrp

Ethernet0/3.10 - Group 10
State is Master
Virtual IP address is 10.1.10.1
Virtual MAC address is 0000.5e00.010a
Advertisement interval is 1.000 sec
Preemption enabled, delay min 60 secs
Priority is 105
Master Router is 10.1.10.2 (local), priority is 105
Master Advertisement interval is 1.000 sec
Master Down interval is 3.589 sec
Ethernet0/3.15 - Group 15
State is Backup
Virtual IP address is 10.1.15.1
Virtual MAC address is 0000.5e00.010f
Advertisement interval is 1.000 sec
Preemption enabled
Priority is 100
Master Router is 10.1.15.2, priority is 105
Master Advertisement interval is 1.000 sec
Master Down interval is 3.609 sec (expires in 3.545 sec)

The preempt delay is configured to ensure that all forwarding paths are up before the router becomes the master again. The priority is set higher than the default (100) to ensure this router is the primary path.

MAG2
interface Ethernet0/3.10
description WiredClients-Top
encapsulation dot1Q 10
ip address 10.1.10.3 255.255.255.0
vrrp 10 ip 10.1.10.1
end
interface Ethernet0/3.15
description WiredClients-bottom
encapsulation dot1Q 15
ip address 10.1.15.2 255.255.255.0
vrrp 15 ip 10.1.15.1
vrrp 15 preempt delay minimum 60
vrrp 15 priority 105
end

MAG2#show vrrp
Ethernet0/3.10 - Group 10
State is Backup
Virtual IP address is 10.1.10.1
Virtual MAC address is 0000.5e00.010a
Advertisement interval is 1.000 sec
Preemption enabled
Priority is 100
Master Router is 10.1.10.2, priority is 105
Master Advertisement interval is 1.000 sec
Master Down interval is 3.609 sec (expires in 3.089 sec)

Ethernet0/3.15 - Group 15
State is Master
Virtual IP address is 10.1.15.1
Virtual MAC address is 0000.5e00.010f
Advertisement interval is 1.000 sec
Preemption enabled, delay min 60 secs
Priority is 105
Master Router is 10.1.15.2 (local), priority is 105
Master Advertisement interval is 1.000 sec
Master Down interval is 3.589 sec

As seen in the VRRP status, MAG1 is the master for VLAN 10 and the backup for VLAN 15 while MAG2 is the backup for
VLAN 10 and the master for VLAN 15. In the event of a MAG failure, the other MAG would become master for both VLANs.

Because the MAGs are configured for load balancing, mobile maps are necessary for proper operation. As described in
the single gateway configuration, access-lists must be configured for all the mobile networks. The access-lists should
be the same on both MAGs to ensure consistent traffic matching.

ip access-list extended Clients


permit ip 10.1.10.0 0.0.0.255 any
permit ip 10.1.15.0 0.0.0.255 any
permit ip 10.1.20.0 0.0.0.255 any
permit ip 10.1.21.0 0.0.0.255 any
permit ip 10.1.25.0 0.0.0.255 any
deny ip any any

In some cases, further control of the VRRP decision-making process is desired. For instance, if all the WAN links lose their connection but the gateway itself is still active, the VRRP state will not change. To mitigate this issue, VRRP object tracking with IP SLA can be used.

! Creates a tracking object for IP SLA


track 20 ip sla 20 reachability
! IP SLA entry
ip sla 20
! Sends an ICMP echo to the LMA address from a specific
! client subinterface
icmp-echo 91.91.91.10 source-interface Ethernet0/3.20
threshold 3000
frequency 5
ip sla schedule 20 life forever start-time now

The client subinterface is then configured with VRRP tracking on the previously configured object.

interface Ethernet0/3.20
vrrp 20 track 20
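
The priority drop applied when the tracked object fails can also be set explicitly with the decrement keyword; with the priorities used in this example (105 on the master, 100 on the backup), any decrement greater than 5 allows the backup to take over. A sketch, with an illustrative value:

interface Ethernet0/3.20
 vrrp 20 track 20 decrement 10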

MAG1#show ip sla summary


IPSLAs Latest Operation Summary
Codes: * active, ^ inactive, ~ pending

ID Type Destination Stats Return Last


(ms) Code Run
-----------------------------------------------------------------------
*20 icmp-echo 91.91.91.10 RTT=1 OK 0 seconds ago

PMIPv6
Each MAG is configured nearly the same so the LMA will consider them as one MAG. This includes the loopback interface
used as the home interface, the MAG name, the domain name, and the NAI. Since each MAG will be active for one set
of mobile networks and backup for another, all the mobile networks must be configured in the PMIPv6 section.

ipv6 mobile pmipv6-mag MAG_T1 domain CTS_DOM


logical-mn LMN_T1@cts.com
mobile network Ethernet0/3.10 label WiredClients-Top
mobile network Ethernet0/3.15 label WiredClients-Bottom
mobile network Ethernet0/3.20 label WifiClients-Top
mobile network Ethernet0/3.21 label ApMgmt
mobile network Ethernet0/3.25 label WifiClients-Bottom

The difference between the PMIPv6 configurations will be found in the roaming interface definition. Each MAG can use
different interfaces for roaming and the labels must also be different between the MAGs.

MAG1
ipv6 mobile pmipv6-mag MAG_T1 domain CTS_DOM
address dynamic
roaming interface Ethernet0/0 priority 1 egress-att LTE label LTE0-RED
roaming interface Ethernet0/1 priority 1 egress-att LTE label LTE1-RED
roaming interface Ethernet0/2 priority 1 egress-att ETHERNET label FM-RED

MAG2
ipv6 mobile pmipv6-mag MAG_T1 domain CTS_DOM
address dynamic
roaming interface Ethernet0/0 priority 1 egress-att LTE label LTE0-GREEN
roaming interface Ethernet0/1 priority 1 egress-att LTE label LTE1-GREEN
roaming interface Ethernet0/2 priority 1 egress-att ETHERNET label FM-GREEN

The mobile-map definition is similar to the single gateway configuration for each respective MAG.

MAG1
ipv6 mobile pmipv6-domain CTS_DOM
mobile-map MPATH 1
match access-list Clients
set link-type FM-RED LTE0-RED LTE1-RED
ipv6 mobile pmipv6-mag MAG_T1 domain CTS_DOM
mobile-map MPATH

MAG2
ipv6 mobile pmipv6-domain CTS_DOM
mobile-map MPATH 1
match access-list Clients
set link-type FM-GREEN LTE0-GREEN LTE1-GREEN
ipv6 mobile pmipv6-mag MAG_T1 domain CTS_DOM
mobile-map MPATH

In this example, the traffic is configured to always prioritize the Fluidmesh connection over the cellular interfaces.

LMA Configuration
The LMA will see the two MAGs as a single MAG, which means the mobile maps and access-lists need to be configured
properly to ensure an optimal traffic flow. Each access-list should contain the mobile networks that are active for a
particular MAG. Mobile maps that point to these access-lists will ensure that traffic is routed properly.

ip access-list extended CORPtoGREEN


permit ip any 10.1.15.0 0.0.0.255
permit ip any 10.1.25.0 0.0.0.255

ip access-list extended CORPtoRED


permit ip any 10.1.10.0 0.0.0.255
permit ip any 10.1.20.0 0.0.0.255
permit ip any 10.1.21.0 0.0.0.255

ipv6 mobile pmipv6-domain CTS_DOM


mobile-map MPATH 1
match access-list CORPtoRED
set link-type FM-RED FM-GREEN
mobile-map MPATH 2
match access-list CORPtoGREEN
set link-type FM-GREEN FM-RED

The mobile map configuration will ensure that the MAGs receive the traffic that is active on them. In the event of a MAG
failure, the second link-type configured will ensure that the traffic takes the higher bandwidth link on the remaining MAG.
If the configured link-types are unavailable, the traffic will fall back to the standard PMIPv6 routing decisions.

Verification
Since the MAGs appear as one to the LMA, the show output will look similar to the single gateway model.

MAG1
MAG1# show ipv6 mobile pmipv6 mag binding
Total number of bindings: 1
----------------------------------------
[Binding][MN]: Domain: CTS_DOM, Nai: LMN_T1@cts.com
[Binding][MN]: State: ACTIVE
[Binding][MN]: Interface: Loopback100
[Binding][MN]: Hoa: 100.100.100.100, Att: 4, llid: LMN_T1@cts.com
[Binding][MN]: HNP: 0
[Binding][MN][LMA]: Id: CTS_LMA
[Binding][MN][LMA]: Lifetime: 3600
[Binding][MN]: Yes
[Binding][MN][Mobile Network]: Ethernet0/3.10
[Binding][MN][Mobile Network]: Ethernet0/3.20
[Binding][MN][Mobile Network]: Ethernet0/3.21
[Binding][MN][Mobile Network]: Ethernet0/3.15
[Binding][MN][Mobile Network]: Ethernet0/3.25
[Binding][MN][PATH]: interface: Ethernet0/0, Label: LTE0-RED
State: PATH_ACTIVE
Tunnel: Tunnel1
Refresh time: 3240(sec), Refresh time Remaining: 2564(sec)
[Binding][MN][PATH]: interface: Ethernet0/1, Label: LTE1-RED
State: PATH_ACTIVE
Tunnel: Tunnel0
Refresh time: 3240(sec), Refresh time Remaining: 2564(sec)
[Binding][MN][PATH]: interface: Ethernet0/2, Label: FM-RED
State: PATH_ACTIVE
Tunnel: Tunnel2
Refresh time: 3240(sec), Refresh time Remaining: 2565(sec)
----------------------------------------
MAG1#

MAG1#show route-map dynamic


route-map MIP-01/09/09-06:03:07.711-9-PMIPV6, permit, sequence 1, identifier 3690987686
Match clauses:
ip access-list ( AND ): PMIP_V4RT_3934 PMIPv6-Clients
Set clauses:
interface Tunnel2
Policy routing matches: 0 packets, 0 bytes
route-map MIP-01/09/09-06:03:07.711-9-PMIPV6, permit, sequence 2, identifier 184549486
Match clauses:
ip access-list ( AND ): PMIP_V4RT_3933 PMIPv6-Clients
Set clauses:
interface Tunnel1
Policy routing matches: 0 packets, 0 bytes
route-map MIP-01/09/09-06:03:07.711-9-PMIPV6, permit, sequence 3, identifier 285212780
Match clauses:
ip access-list ( AND ): PMIP_V4RT_3932 PMIPv6-Clients
Set clauses:
interface Tunnel0
Policy routing matches: 0 packets, 0 bytes

MAG2
MAG2# show ipv6 mobile pmipv6 mag binding
Total number of bindings: 1
----------------------------------------
[Binding][MN]: Domain: CTS_DOM, Nai: LMN_T1@cts.com
[Binding][MN]: State: ACTIVE

[Binding][MN]: Interface: Loopback100


[Binding][MN]: Hoa: 100.100.100.100, Att: 4, llid: LMN_T1@cts.com
[Binding][MN]: HNP: 0
[Binding][MN][LMA]: Id: CTS_LMA
[Binding][MN][LMA]: Lifetime: 3600
[Binding][MN]: Yes
[Binding][MN][Mobile Network]: Ethernet0/3.10
[Binding][MN][Mobile Network]: Ethernet0/3.20
[Binding][MN][Mobile Network]: Ethernet0/3.21
[Binding][MN][Mobile Network]: Ethernet0/3.15
[Binding][MN][Mobile Network]: Ethernet0/3.25
[Binding][MN][PATH]: interface: Ethernet0/0, Label: LTE0-GREEN
State: PATH_NULL
Refresh time: 3240(sec), Refresh time Remaining: 0(sec)
[Binding][MN][PATH]: interface: Ethernet0/1, Label: LTE1-GREEN
State: PATH_NULL
Refresh time: 3240(sec), Refresh time Remaining: 0(sec)
[Binding][MN][PATH]: interface: Ethernet0/2, Label: FM-GREEN
State: PATH_ACTIVE
Tunnel: Tunnel0
Refresh time: 3240(sec), Refresh time Remaining: 2523(sec)
----------------------------------------
MAG2#

LMA
LMA# show ipv6 mobile pmipv6 lma binding
Total number of bindings: 1
----------------------------------------
[Binding][MN]: State: BCE_ACTIVE
[Binding][MN]: Domain: CTS_DOM, NAI: LMN_T1@cts.com
[Binding][MN]: HOA: 100.100.100.100, Prefix: 24
[Binding][MN]: HNP: 0
[Binding][MN][PEER]: Default Router: 100.100.100.2
[Binding][MN]: ATT: WLAN (4), Label: FM-GREEN, Color: blue
[Binding][MN][PEER1]:Transport VRF:
[Binding][MN][PEER1]:LLID: LMN_T1@cts.com
[Binding][MN][PEER1]: Id: MAG_T1
[Binding][MN][PEER1]: Lifetime: 3600(sec)
[Binding][MN][PEER1]: Lifetime Remaining: 2604(sec)
[Binding][MN][PEER1]: Tunnel: Tunnel1
[Binding][MN][GREKEY]: Upstream: 4742, Downstream: 0
[Binding][MN]: ATT: WLAN (4), Label: FM-RED, Color: red
[Binding][MN][PEER2]:Transport VRF:
[Binding][MN][PEER2]:LLID: LMN_T1@cts.com
[Binding][MN][PEER2]: Id: MAG_T1
[Binding][MN][PEER2]: Lifetime: 3600(sec)
[Binding][MN][PEER2]: Lifetime Remaining: 3588(sec)
[Binding][MN][PEER2]: Tunnel: Tunnel0
[Binding][MN][GREKEY]: Upstream: 4742, Downstream: 0
[Binding][MN]: ATT: WLAN (4), Label: LTE1-RED, Color: yellow
[Binding][MN][PEER3]:Transport VRF:
[Binding][MN][PEER3]:LLID: LMN_T1@cts.com
[Binding][MN][PEER3]: Id: MAG_T1
[Binding][MN][PEER3]: Lifetime: 3600(sec)
[Binding][MN][PEER3]: Lifetime Remaining: 3588(sec)
[Binding][MN][PEER3]: Tunnel: Tunnel2
[Binding][MN][GREKEY]: Upstream: 4742, Downstream: 0
[Binding][MN]: ATT: WLAN (4), Label: LTE0-RED, Color: green
[Binding][MN][PEER4]:Transport VRF:
[Binding][MN][PEER4]:LLID: LMN_T1@cts.com
[Binding][MN][PEER4]: Id: MAG_T1
[Binding][MN][PEER4]: Lifetime: 3600(sec)
[Binding][MN][PEER4]: Lifetime Remaining: 3588(sec)
[Binding][MN][PEER4]: Tunnel: Tunnel3

[Binding][MN][GREKEY]: Upstream: 4742, Downstream: 0


----------------------------------------
LMA#

LMA#show ip route mobile

10.0.0.0/8 is variably subnetted, 20 subnets, 3 masks


M 10.1.10.0/24 is directly connected, Tunnel3
is directly connected, Tunnel2
is directly connected, Tunnel0
is directly connected, Tunnel1
M 10.1.15.0/24 is directly connected, Tunnel3
is directly connected, Tunnel2
is directly connected, Tunnel0
is directly connected, Tunnel1

Wireless Offboard

LTE

Lilee
Each cellular modem requires a cellular-profile to connect to the service provider. This can include the access point name (APN), username, and password. Additionally, the cellular modem may have slots for one or two Subscriber Identity Module (SIM) cards.

! Creates a cellular profile


create cellular-profile "CP1"
! Configures the APN required to connect
config cellular-profile "CP1" access-point-name "vzwims"
! Configures the modem to use the SIM card in slot 0
config cellular-profile "CP1" sim-card-slot 0
create cellular-profile "CP2"
config cellular-profile "CP2" access-point-name "internet"
config cellular-profile "CP2" sim-card-slot 1

Dialer interfaces must then be configured to bind the cellular profile to the physical modem.

! Adds a dialer interface


config add interface dialer 0
! Associates the cellular profile with the dialer
config interface dialer 0 profile "CP1"
! Associates the physical port (slot 3, bay 1) with the dialer
config interface dialer 0 line cellular 3/1
config add interface dialer 1
config interface dialer 1 profile "CP2"
config interface dialer 1 line cellular 3/2
! helps enable the dialer interfaces
config interface dialer 0 enable
config interface dialer 1 enable

Validation
ME-100-1.localdomain > show line cellular 3/1 hardware
Hardware Information: line cellular 3/1
Modem Power: Online
Module Vendor Information: SierraWireless MC7354
Model: MC7354
Revision: SWI9X15C_05.05.16.03 r22385 carmd-fwbuild1 2014/06/04 15:01:26
MEID: 35922505190586
ESN: 80914D85

IMEI: 359225051905867

ME-100-1.localdomain > show line cellular 3/2 hardware


Hardware Information: line cellular 3/2
Modem Power: Online
Module Vendor Information: SierraWireless MC7354
Model: MC7354
Revision: SWI9X15C_05.05.16.03 r22385 carmd-fwbuild1 2014/06/04 15:01:26
MEID: 35922505191030
ESN: 80EA9F4B
IMEI: 359225051910305

ME-100-1.localdomain > show line cellular 3/1 subscriber


Subscriber Information: line cellular 3/1
IMSI: 123156555000035

ME-100-1.localdomain > show line cellular 3/2 subscriber


Subscriber Information: line cellular 3/2
IMSI: 123156555000059

ME-100-1.localdomain > show line cellular 3/1 detail


Description : Firmware : AT&T
RSSI (dBm) : -64 Data Bearer Tech : LTE
Base Station ID : 257
Line Data Rate : 50000000/100000000 RF Information : LTE Band 0
RSRP : -91 RSRQ : -7

ME-100-1.localdomain > show line cellular 3/2 detail


Description : Firmware : Verizon
RSSI (dBm) : -46 Data Bearer Tech : LTE
Base Station ID : 6772736
Line Data Rate : 50000000/100000000 RF Information : LTE Band 13
RSRP : -69 RSRQ : -6

ME-100-1.localdomain > debug line cellular 3/2 atcmd "at!gstatus?"


timeout set to 5 seconds
send (at\!gstatus?^M)
expect (OK)
at!gstatus?

!GSTATUS:
Current Time: 262 Temperature: 46
Bootup Time: 5 Mode: ONLINE
System mode: LTE PS state: Attached
LTE band: B13 LTE bw: 10 MHz
LTE Rx chan: 5230 LTE Tx chan: 23230
EMM state: Registered Normal Service
RRC state: RRC Connected
IMS reg state: In Prog IMS mode: Normal

RSSI (dBm): -46 Tx Power: 0


RSRP (dBm): -69 TAC: 0012 (18)
RSRQ (dB): -6 Cell ID: 00675800 (6772736)
SINR (dB): 30.0

ME-100-1.localdomain > debug line cellular 3/1 atcmd "at!gstatus?"


timeout set to 5 seconds
send (at\!gstatus?^M)
expect (OK)
at!gstatus?

!GSTATUS:
Current Time: 674528 Temperature: 40
Bootup Time: 5 Mode: ONLINE
System mode: LTE PS state: Attached

LTE band: B17 LTE bw: 10 MHz


LTE Rx chan: 5790 LTE Tx chan: 23790
EMM state: Registered Normal Service
RRC state: RRC Connected
IMS reg state: No Srv

RSSI (dBm): -64 Tx Power: 0


RSRP (dBm): -91 TAC: 000A (10)
RSRQ (dB): -7 Cell ID: 00000101 (257)
SINR (dB): 20.0

ME-100-1.localdomain > show interface dialer 0 detail


Interface : dialer 0 Description :
Administrative : enable Operational : up
IP address : 45.47.0.42 IP netmask : 255.255.255.252
HW address : e6:bd:7d:5f:0b:08 MTU : 1500
Proxy ARP : disable RX bytes : 1239321
RX packets : 10626 RX errors : 0
RX dropped : 0 RX overruns : 0
RX frame errors : 0 TX bytes : 6444194
TX packets : 36588 TX errors : 0
TX dropped : 0 TX overruns : 0
TX carrier errors : 0 TX KBps(1m) : 0.017188
RX KBps(1m) : 0.015613 TX KBps(5m) : 0.020776
RX KBps(5m) : 0.015835 TX KBps(1h) : 0.070392
RX KBps(1h) : 0.008121

ME-100-1.localdomain > show interface dialer 1 detail


Interface : dialer 1 Description :
Administrative : enable Operational : up
IP address : 10.1.201.71 IP netmask : 255.255.255.240
HW address : e6:bd:7d:5f:07:08 MTU : 1500
Proxy ARP : disable RX bytes : 165865311
RX packets : 464317 RX errors : 0
RX dropped : 0 RX overruns : 0
RX frame errors : 0 TX bytes : 501389987
TX packets : 1637092 TX errors : 0
TX dropped : 0 TX overruns : 0
TX carrier errors : 0 TX KBps(1m) : 0.001467
RX KBps(1m) : 0.014584 TX KBps(5m) : 0.016732
RX KBps(5m) : 0.015188 TX KBps(1h) : 0.036185
RX KBps(1h) : 0.025649

Klas
The cellular configuration involves configuring the physical interfaces under KlasOS and the virtual interfaces within the
ESR virtual machine.

KlasOS
The Klas router is designed to work with many different types of cellular modems and therefore must have a consistent
way to interface with them. When the LTE modem receives an IP address from the mobile provider, it will perform Network
Address Translation (NAT) and act as a DHCP server to the Klas gateway.

trx-r6# show modem 0 context

Modem 0 u-blox TOBY-L200


-----------------------
CONTEXT 4 : APN: broadband.mnc156.mcc123.gprs IP: 10.1.201.12 QOS: 9

Therefore, the modem interface must be configured as a DHCP client.

interface Modem 0

description "2G/3G/4G Cellular Modem in slot 0, (u-blox MODEM-LTE)"


ip address dhcp
modem lte initialcontext broadband none ipv4

The resulting interface status is shown below.

trx-r6# show ip interface brief


Interface IP-Address Status Protocol
modem0 192.168.0.55/24 up up

The modem interface must then be associated with the virtual machine. First, the virtual machine must have the
appropriate number of vSwitches configured.

trx-r6# show vm c5921


Name: c5921
UUID: 16b6a764-6cc3-4557-ad63-cd90c6a4a41f
Storage Pool: vm_pool
Virtual Disk: c5921-15.6.qcow2
HDD: 3.0 GiB
Status: running
VNC: 5900
Allocated RAM: 512 MiB
Allocated CPUs: 1
NIC: vSwitch1, MAC: 52:54:00:d4:1c:f6, Type: virtio
NIC: vSwitch2, MAC: 52:54:00:2a:3d:ba, Type: virtio
NIC: vSwitch3, MAC: 52:54:00:13:73:13, Type: virtio
NIC: vSwitch4, MAC: 52:54:00:0c:e9:b6, Type: virtio
Serial Port: None

Next, a DHCP pool must be created where the ESR virtual interface connected to the modem interface is the DHCP client.

ip dhcp pool modem0


network 192.168.101.0 255.255.255.252

A vSwitch is then created and tied to the specific modem interface.

interface vSwitch 1
ip address 192.168.101.1 255.255.255.252
!
map interface Modem 0 to interface vSwitch 1

Finally, a Port Address Translation (PAT) statement is configured to match on all outgoing traffic leaving the modem
interface.

access-list 1 permit any


!
ip nat inside source list 1 interface modem0 overload

trx-r6# show ip nat translations


Pro Inside global Inside local Outside local Outside global
udp 192.168.0.55:5436 192.168.101.2:5436 91.91.91.10:5436 91.91.91.10:5436
udp 192.168.1.91:5436 192.168.201.2:5436 91.91.91.10:5436 91.91.91.10:5436

IOS
After configuring the KlasOS modem interface, the virtual interface on the ESR is configured to receive a DHCP address
from the pool created above.

interface Ethernet0/0
description ESR5921-vSW1-Modem0
ip address dhcp
load-interval 30
duplex full
speed 1000

R6-ESR#sh ip int br eth0/0


Interface IP-Address OK? Method Status Protocol
Ethernet0/0 192.168.101.2 YES DHCP up up

Fluidmesh
In the Connected Rail Solution, Fluidmesh wireless radios are used to provide the connectivity from the train to the
trackside network. For optimum Radio Frequency (RF) coverage, there should be at least two train radios, one at each
end of the train consist. When the train radios are configured in the same mobility group, they will both evaluate the RF
path to the trackside and the radio with the best connection will become the active path. The spacing between trackside
radios should be determined by a site survey. Figure 16 depicts a train at two positions along the trackside. The green
line indicates the active RF path while the red line depicts a backup path. As the train moves down the track, the path
will change depending on which RF path is best.

Figure 16 Intra Train Fluidmesh Roaming

(Train Position 1 and Train Position 2)

Train Radio Configuration


The radio configuration is done through a web interface. From a PC connected to the 192.168.0.X network, navigate to
the radio's IP of 192.168.0.10. The first configuration page is the General mode under the General Settings section.

Figure 17 Train Radio General Mode

This is where the mode and IP addresses are set. All train radios should be configured as a mesh point. The default
gateway should be the routed interface on the trackside network facing the Fluidmesh radios.

The next configuration step is on the Wireless Radio page.

Figure 18 Train Radio Wireless Radio

The passphrase configured here is used on all the train radios and trackside radios in the same network. The RF
configuration can be done automatically or set manually. This configuration will depend on the site survey.

To configure the radio as a train radio rather than a trackside radio, it must be configured on the FLUIDITY page under Advanced Settings.

Figure 19 Train Radio FLUIDITY Configuration

FLUIDITY is enabled, the unit role is Vehicle, and the vehicle ID is set. All train radios on the same train consist must have
the same ID.

When the radio is properly configured, the trackside radios in range will show up under the Antenna Alignment and Stats
page with the relative wireless strength.

Figure 20 Train Radio Antenna Alignment and Stats

Lilee
The Fluidmesh radios can be directly connected to the ME-100 gigabit interface or a switch connected to the ME-100
gigabit ports. In this system, they are connected directly to the ME-100.

! Adds a VLAN interface


config add interface vlan 200
! Adds the VLAN to the switching hardware
config switch add vlan 200
! Associates two physical ports with the VLAN
config switch vlan 200 add port 1/3
config switch vlan 200 add port 1/4
! Configures the ports as access ports
config switch vlan 200 port 1/3 egress untagged
config switch vlan 200 port 1/4 egress untagged
config switch port 1/3 default vlan 200
config switch port 1/3 egress untagged
config switch port 1/4 default vlan 200
config switch port 1/4 egress untagged
! Helps enable the VLAN as a layer 3 interface
config interface vlan 200 enable
! Configures an IP address in the same subnet as the Fluidmesh
! radios
config interface vlan 200 ip address 192.168.0.100 netmask 255.255.255.0

Klas
Similar to the LTE configuration, both the Klas router physical ports and ESR virtual ports must be configured to connect
to the Fluidmesh radios.


KlasOS
Like the cellular interfaces, the physical Ethernet interfaces must be tied to a vSwitch. If multiple Fluidmesh radios are
connected to the Ethernet interfaces, they can be configured in the same vSwitch. A one-to-one relationship exists
between a vSwitch and an Ethernet port in IOS; therefore, multiple ports in a vSwitch will show up as a single port in IOS.

interface Ethernet 0/0


description "Fluid Mesh"
vSwitch-group 3

IOS
Once the interface is configured in KlasOS, the interface must be configured within IOS. The port IP is configured in the
same subnet as the Fluidmesh radio, which allows IP connectivity between the ESR and the Fluidmesh radios.

interface Ethernet0/2
description ESR5921-KLASvSW3-PhyETH0/0&1-FM
ip address 192.168.0.50 255.255.255.0
load-interval 30
duplex full
speed 1000
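
Once the vSwitch mapping and addressing are in place, basic reachability from the ESR to the radios can be checked with a ping from IOS. The target address reuses the 192.168.0.10 train radio example from earlier and the output is only illustrative; substitute the addresses used in the actual deployment.

R6-ESR#ping 192.168.0.10
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.0.10, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 ms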

Overlay Services Implementation


This section includes the following major topic:

 Video Surveillance, page 57

Video Surveillance
The Connected Rail Solution helps provide physical security for the operator's equipment, as well as the employees and
passengers using the trains. Cisco VSM is used to provide live and recorded video to security personnel. This section
describes the basic configuration and some of the options that are most relevant for a train operator.

Installation and Initial Setup


The Video Surveillance Media Server (VSMS), Video Surveillance Operations Manager (VSOM), Safety and Security
Desktop (SASD), and IP Cameras should be installed and set up according to the official Cisco documentation below:

 Cisco Video Surveillance Operations Manager User Guide, Release 7.8:

— http://www.cisco.com/c/dam/en/us/td/docs/security/physical_security/video_surveillance/network/vsm/7_8/admin_guide/vsm_7_8_vsom.pdf

 Cisco Video Surveillance Manager Safety and Security Desktop User Guide, Release 7.8:

— http://www.cisco.com/c/dam/en/us/td/docs/security/physical_security/video_surveillance/network/vsm/7_8/sasd/vsm_7_8_sasd.pdf

 Cisco Video Surveillance Install and Upgrade Guide, Release 7.7 and Higher:

— http://www.cisco.com/c/dam/en/us/td/docs/security/physical_security/video_surveillance/network/vsm/install_upgrade/vsm_7_install_upgrade.pdf

 Cisco Video Surveillance Virtual Machine Deployment and Recovery Guide for UCS Platforms, Release 7.x:

— http://www.cisco.com/c/dam/en/us/td/docs/security/physical_security/video_surveillance/network/vsm/vm/deploy/VSM-7x-vm-deploy.pdf


 Cisco Video Surveillance 7070 IP Camera Installation Guide:

— http://www.cisco.com/c/en/us/td/docs/security/physical_security/video_surveillance/ip_camera/7000_series/7070/install_guide/7070.html

 Cisco Video Surveillance 3050 IP Camera Installation Guide:

— http://www.cisco.com/c/en/us/td/docs/security/physical_security/video_surveillance/ip_camera/3000_series/3050/install_guide/3050.html

The initial installation and setup includes installing instances of the VSMS virtual machine (via the Open Virtualization
Format or OVF template available on Cisco.com) at the data center and onboard the train. The servers hosting the VSM
virtual machines all run VMware ESXi for the hypervisor, and the hardware itself is expected to be a high performance
Cisco UCS server in the data center. Onboard the train, a ruggedized server is used to survive the more severe conditions
such as vibration, which are typical on a train. The data center instances should be configured to include a VSOM for management as well as an LTS (Long Term Storage) server. The VSMS instance onboard the train should be configured to act as a media server for the local cameras.

It is important that Layer 3 connectivity is available between the IP cameras and the VSMS server, as well as between
the VSMS, VSOM, and LTS servers, before beginning the installation and configuration of these applications.

Camera Template - Basic 24x7 Recording


VSOM uses the concept of Camera Templates to manage the properties of all similar (by model, role, or other criteria)
cameras. To create a new camera template, log in to VSOM and browse to Cameras > Templates. On the Templates
tab, create a new Template that will apply to a group of cameras. In this example, a template has been created for all
3050 model cameras.

Figure 21 shows that the Streaming, Recording and Events tab has been selected. From this tab ensure that the Basic
Recording: 24x7 schedule has been selected from the drop-down menu, and that for Video Stream A, the far right
button has been selected, which indicates that video will be recorded continuously and motion events will be marked in
the recording.

Figure 21 VSOM Camera Template - Continuous Recording

Notice that the Video Quality slider has been set to Custom. After doing this, click the Custom hyperlink, which brings
up a pop-up window where the custom parameters can be defined. Using this option lets you fine tune the video quality
(and perhaps more importantly, the resulting bandwidth that is consumed). In Figure 22, a modest custom quality has
been defined that is appropriate for the camera's usage.


Figure 22 VSOM Custom Quality Setting

This is enough information to define the basic camera template for basic recording. Save the template and begin to add
cameras.

Still on the Cameras tab, click the Camera tab (see Figure 23), and then click Add. On this page, fill in the required
information for the camera including the IP address, access credentials, location, and most importantly the Template that
was defined earlier.

Figure 23 VSOM Add Camera and Apply Template

Camera Template - Scheduled Recording


Continuous recording around the clock may be the most common and simplest recording scheme; however, based on
security requirements and available storage, it may make more sense to only record during parts of the day. For example,
if the trains are only used during specific hours of the day (morning and evening rush hour) and otherwise the trains sit
parked, it may only be necessary to record during the active hours.


To implement this use case, a schedule can be defined in VSOM and subsequently applied to one or more camera
templates. In the System Settings tab in VSOM, select Schedules under Shared Resources.

Figure 24 VSOM System Settings

Click Add and then enter general information as requested before setting the Recurring Weekly Patterns. In the example
in Figure 25, a new Time Slot called Rush Hour is assigned the purple color. By using simple mouse clicks, the schedule
is created so that Rush Hour is defined to be between 7:00 AM and 10:00 AM as well as 4:00 PM and 7:00 PM in the
evening.

Figure 25 VSOM Custom Schedule

After the schedule is created, browse to the Camera Template as described earlier. This time, instead of selecting the
default schedule called Continuous Recording: 24/7, we select the newly created schedule called Morning and Evening
Rush Hour. Notice that after the schedule is selected, additional lines appear allowing a different recording option (off,
motion, continuous, or continuous with motion) for each time slot on the schedule. Each time slot corresponds to a
different color on the graphical schedule.


Figure 26 VSOM Camera Template - Scheduled Recording

Event-Based Recording Options


In addition to recording continuously or based on a pre-determined schedule, VSM allows for video to be intelligently
recorded when some type of event or incident occurs. Events can be triggered by motion detection, contact
opening/closure, or even a digital soft trigger from a panic button, for example. When an event occurs, multiple actions
can be taken—from raising an alert in SASD for security personnel, to automatically pointing the camera to a new position,
to starting to record video for a set amount of time. Additional details about the possible triggers and actions for events
are covered in the official Cisco documentation referenced at the beginning of this section. Figure 27 shows an example
of several "Motion Started" events.


Figure 27 VSOM Camera Events

Connected Edge Storage


Many Cisco IP cameras (such as the 3050 and 7070 models) include a built-in MicroSD card slot that can optionally be
used to add video storage directly on the camera. This functionality, which is called Connected Edge Storage, helps
enable the camera to record to the MicroSD card instead of the VSMS server.

Enabling the camera to record locally to the MicroSD card allows it to have a backup copy of the video. If the VSMS itself
fails, or connectivity between the camera and VSMS fails, the camera will continue to record locally. Using the
Auto-Merge feature will allow the camera to automatically copy the locally recorded video over to the VSMS once
connectivity is restored, allowing any gaps in the VSMS recording to be filled in from the local copy.

Configuring these features is done in the Advanced Storage section of the Camera Template, as shown in Figure 28. VSM
7.8 added the ability to do scheduled copies of recordings from the camera's MicroSD card to the VSMS.


Figure 28 VSOM Camera Template - Connected Edge Storage

Long Term Storage


The video storage space available both onboard the camera and the VSMS server will be limited. Based on the configured
video quality settings, recording time will typically be on the order of hours to days. Depending on the security and data
retention policies, it may be preferable to retain video for longer periods of time.

The Cisco VSM solution includes the LTS functionality, which consists of a centralized high-capacity server or servers
dedicated to retaining video recordings for long periods of time. Using the camera templates, video archiving can be
configured to retain all video, or only video containing motion events. Also, the camera template can be set to archive
selected video at a certain time each day, such as 11:00 PM. Figure 29 shows that the 7070 camera template is selected,
and within the Advanced Storage pop-up, an LTS policy is set to archive all video for a period of 10 days, and for the LTS
upload to occur daily at 15:00.


Figure 29 VSOM Camera Template - Long Term Storage

Integration with Davra RuBAN


The latest release of Davra RuBAN greatly simplifies integration with Cisco VSM to provide single pane management of
devices and video monitoring.

Before implementing video with RuBAN, make certain that the network device (for example, mobile gateway or switch)
has been provisioned successfully. This device will be used to associate with the camera so that in a map view video can
be viewed for all cameras in a specific location (per train, for example).

To begin integrating VSM into RuBAN, log in to RuBAN and go to the Administration page. From there, click VSOM
Integration, as shown in Figure 30.

Figure 30 RuBAN Administration Page

On the Camera Management page, click Setup VSOM Server.


Figure 31 RuBAN Camera Management

In the dialog window that pops up, enter the IP address and login credentials for VSOM.

Figure 32 RuBAN Setup VSOM Server

After successfully adding the VSOM server, the Camera Management page will list all of the cameras that it discovered
from VSOM. The next step is to associate the camera with a network device. In the IoT Gateway column, select the
appropriate device—in this example, an IE2000 switch is selected. Also, add a Tag to the camera that will be used later
when adding the video feed to the switch's dashboard. In Figure 33, the tag camera3 is chosen, but it could be any text.

Figure 33 RuBAN Camera List, Add Tag

A new Dashboard is created which can be used to display an enlarged view of the camera's video feed. Begin by clicking
New Dashboard at the top of the RuBAN web interface. From there, use the highly customizable dashboard creation
wizard to make the desired layout with one or more video streams. In Figure 34, a layout with a single pane is shown,
and a Camera component (under the Security category) is added to the pane.


Figure 34 RuBAN Add New Dashboard for Video

Clicking the gear icon at the top right of the CAMERA pane opens a pop-up window where the camera tag we defined
earlier is selected. This determines which camera's video is shown in this dashboard pane.

Figure 35 RuBAN Configure Camera for Video Dashboard

After dashboard configuration has been completed, save it by clicking on the three parallel bars and then click Save, as
shown in Figure 36. This is the most basic video dashboard possible; extensive customization is available to get the exact
content displayed with the desired look-and-feel.


Figure 36 RuBAN Save New Dashboard

In addition to the dashboard view shown above, RuBAN's map view can also display streaming video feeds from all
cameras associated with a network device. To add video from a camera to a gateway or switch in the map view, edit the
IoT Profile for the device. In the example in Figure 37, a profile is created that includes a single camera, identified by the
camera tag camera3 that was defined earlier.

Figure 37 RuBAN IoT Profile

After the profile is saved and applied to the IoT device, the RuBAN Internet of Everything (IoE) Portal will display the
pop-up dashboard including the video from the specified camera in a graphical map view, as shown in Figure 38.
Clicking Cisco 3050 will cause the video pane to display.


Figure 38 RuBAN Map View with Video Pop Out

Wi-Fi Access Implementation


This section includes the following major topic:

 Web Passthrough, page 72

In this implementation, Wi-Fi is used to enable connectivity for the passengers on the train in addition to law enforcement
personnel and the rail employees. This guide is not meant to be an exhaustive resource for a wireless implementation,
but rather one possible implementation of a train-based wireless solution. The complete configuration guide for the
Wireless Controller Software used in this release—Cisco Wireless Controller Configuration Guide, Release 8.2—can be
found at the following URL:

 http://www.cisco.com/c/en/us/td/docs/wireless/controller/8-2/config-guide/b_cg82.html

1. The Wireless LAN Controller (WLC) must have the management interface configured to communicate with the
access points on the train. This is the address the access points use to build a Control and Provisioning of Wireless
Access Points (CAPWAP) tunnel for communication and management. See Figure 39.


Figure 39 WLC Interfaces

2. SSIDs must be configured under the WLAN section for each type of wireless client that will need access. The
passengers will use Web-Passthrough while the employees and law enforcement personnel should use something
secure like WPA2. See Figure 40.

Figure 40 WLC SSID

3. In this implementation, the clients used FlexConnect Local Switching, which helps enable accessing local resources
on the train network without being tunneled back to the WLC. This must be enabled per WLAN. See Figure 41.

Figure 41 WLC Enable FlexConnect

4. The switchport connected to the access point must also be configured to support FlexConnect, which means
enabling a trunk with VLANs for management and the wireless clients. In this example, VLAN 21 is used for the
access point management traffic, VLAN 20 is for a regular passenger, VLAN 30 is for employees, and VLAN 40 is
used for law enforcement. The management traffic must be configured to use the native VLAN.

interface FastEthernet1/3
description Connected to AP3702 for clients
switchport trunk allowed vlan 20,21,30,40
switchport trunk native vlan 21
switchport mode trunk
ip device tracking maximum 0
end
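
As a quick sanity check, the trunk state can be confirmed on the train switch with standard IOS show commands; the port facing the access point should be listed as trunking with native VLAN 21 and VLANs 20,21,30,40 allowed and forwarding. These are generic verification commands, not output captured from this deployment.

show interfaces trunk
show interfaces FastEthernet1/3 switchport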

5. Klas—The router will act as the default gateway for all the devices and should therefore have a subinterface for all
the wireless client types. This configuration is explained in the MAG Configuration section. It is important to note the
presence of the ip helper-address command. This command forwards DHCP requests from the access point and
clients to the DHCP server in the data center. The following configuration example is from the Klas router for the
access point management traffic.

interface Ethernet0/3.21
description APMgmt
encapsulation dot1Q 21 native
ip address 10.1.21.2 255.255.255.0
ip helper-address 10.4.1.3
end

Once the switches and router are configured properly, the access point will try to get a DHCP address and the
address for the WLC. Configuring the DHCP server with option 43 will enable the access point to contact the WLC.
Once the access point successfully contacts the WLC, it will download the correct image and reboot. Once it finishes
rebooting and reconnects to the WLC, it can be configured for FlexConnect mode.
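
The details of the option 43 value depend on the DHCP server in use. As an illustration only, on an IOS-based DHCP server the option is entered as a hex TLV for Cisco lightweight access points: type 0xf1, a length of four bytes per controller, followed by each WLC management address in hex. The pool name and the WLC address 10.4.1.10 (hex 0a04010a) below are placeholders rather than values from this deployment; the subnet and default gateway follow the 10.1.21.0/24 AP management addressing shown above.

ip dhcp pool AP-MGMT
network 10.1.21.0 255.255.255.0
default-router 10.1.21.2
! Suboption 0xf1, length 0x04 (one controller), WLC 10.4.1.10 in hex
option 43 hex f104.0a04.010a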

Note: The MAG must have local-routing-mag configured under the pmipv6-mag section to enable the FlexConnect
clients to access local resources without also traversing the PMIPv6 tunnel.
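
A minimal sketch of where that command sits, reusing the MAG and domain names shown in the QoS section later in this guide:

ipv6 mobile pmipv6-mag MAG_T1 domain CTS_DOM
! Keeps FlexConnect client traffic routed locally on the train network
local-routing-mag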

Lilee—The ME-100 should be the default gateway for all the clients to keep local traffic within the train network. The
interface configuration can be found in the ME-100 subsection of Gateway Mobility. Because the access points and
clients will use DHCP for address resolution, it is necessary to configure DHCP relay on the ME-100.

config dhcp-relay interface vlan 20


config dhcp-relay interface vlan 21
config dhcp-relay interface vlan 30
config dhcp-relay interface vlan 40
config dhcp-relay server-ip 10.4.1.3
config service dhcp-relay enable

6. In the WLC under the Wireless tab, click the access point that needs to be configured for FlexConnect.

Figure 42 WLC AP Configuration

7. Under the General tab, the AP Mode must be changed from local to FlexConnect.


Figure 43 WLC AP Details

8. Afterward, the FlexConnect tab must be used to configure the native VLAN and all the VLAN mappings. First
configure the Native VLAN ID. In this case, it is 21.

Figure 44 FlexConnect Native VLAN

9. Next, the VLAN mappings must be created for each SSID. This will allow traffic for each SSID to be isolated from
each other.


Figure 45 FlexConnect VLAN Mappings

Web Passthrough
The web passthrough feature is one way to allow passengers to connect to the network without their needing to supply
a username and password. When the passenger connects to the network and tries to navigate to a URL, he will be
redirected to a splash page and required to accept the terms and conditions. After accepting the conditions, the user
will have normal access to the network resources. The steps to configure this are described below.


The passenger WLAN must be configured with the correct security features. No Layer 2 security is applied; only Layer 3 (web policy) security is used.

Figure 46 Passenger WLAN Layer 2 Security

Figure 47 Passenger WLAN Layer 3 Security

Verification
When a client connects and tries to navigate to a web page, he will be redirected to the splash page.


Figure 48 Web Client Redirect

Figure 49 Web Client Accept

If the passenger were to put his computer to sleep or roam to another access point, he would not have to accept the
terms again because he would still be authenticated to the WLC. If the timeout timer expires, the passenger would have
to accept the terms and conditions again before access to the network would be granted.

Performance, Scale, and QoS


This section includes the following major topics:

 QoS, page 74

 Klas Throughput Performance, page 77

 Scale, page 77

QoS
In the Connected Rail Solution, QoS is directly influenced in the upstream direction from the train network. In the Lilee
solution, QoS is not supported so all traffic will be treated the same. Therefore, only the Klas solution will be discussed.

In this solution, all traffic enters the Klas router through a trunk port on the switching network. Traffic separation is achieved with subinterfaces for each set of clients. With this configuration, a different policy-map can be applied to each subinterface to remark all traffic to the desired value. The following is an example of some sample policy-maps and how they are applied to the client traffic. All configurations are done within the ESR virtual machine running IOS.

policy-map PMAP-UP-Voice-I
class class-default
set dscp ef
policy-map PMAP-UP-VideoSurv-I
class class-default
set dscp cs4
policy-map PMAP-UP-WirelessClients-I
class class-default
set dscp cs0


interface Ethernet0/3.10
description Voice
encapsulation dot1Q 10
service-policy input PMAP-UP-Voice-I
!
interface Ethernet0/3.20
description WirelessClients
encapsulation dot1Q 20
service-policy input PMAP-UP-WirelessClients-I
!
interface Ethernet0/3.30
description VideoSurveillance
encapsulation dot1Q 30
service-policy input PMAP-UP-VideoSurv-I

Validation
In the example below, traffic is being sent from the switching ring to a destination behind the LMA in the data center. It
is being remarked before forwarding to the PMIPv6 tunnel.

R6-ESR#sh policy-map int eth0/3.20


Ethernet0/3.20

Service-policy input: PMAP-UP-WirelessClients-I

Class-map: class-default (match-any)


275227 packets, 408841906 bytes
30 second offered rate 10047000 bps, drop rate 0000 bps
Match: any
QoS Set
dscp cs0
Packets marked 275227

PMIPv6
PMIPv6 relies on control packets to keep the tunnels up and active. By default, they are marked with the default
Differentiated Services Code Point (DSCP) value of cs0. When the heartbeat feature is used between the MAG and LMA,
it must be configured with a timeout and retry interval. To help minimize convergence time, those values may be
configured near the minimum permissible values. Since these heartbeats have a default DSCP value of cs0, they are at
risk of being starved out during traffic congestion. If the timeout interval elapses, the tunnel interface will be torn down
and traffic will see disruption. It is, therefore, desirable to configure the PMIPv6 control packets with a higher DSCP value.
The following are the required commands on the MAG and LMA to prioritize this traffic.

MAG
ipv6 mobile pmipv6-mag MAG_T1 domain CTS_DOM
! Configures the control plane packets with DSCP cs6
dscp control-plane 48
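
If the heartbeat feature described above is enabled, it is configured in the same PMIPv6 context. The interval, retry, and timeout values below are illustrative only and the exact syntax should be verified against the IOS release in use; this is a sketch rather than the configuration validated in this solution.

ipv6 mobile pmipv6-mag MAG_T1 domain CTS_DOM
! Illustrative timers; lower values reduce convergence time at the cost of more control traffic
heartbeat interval 10 retries 3 timeout 5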

The control packets also need a class-map that matches on the DSCP value and then is applied in the egress direction
on the WAN interfaces.

class-map match-all GOLD


match dscp cs6
!
policy-map EGRESS-QOS
class GOLD
! Configured in a priority queue
priority percent 10
!
interface Ethernet0/0
description ESR5921-vSW1-Modem0
service-policy output EGRESS-QOS


interface Ethernet0/1
description ESR5921-vSW2-Modem1
service-policy output EGRESS-QOS
!
interface Ethernet0/2
description Connected to Fluidmesh
service-policy output EGRESS-QOS

LMA
ipv6 mobile pmipv6-lma CTS_LMA domain CTS_DOM
! Configures the control plane packets with DSCP cs6
dscp control-plane 48

The LMA also needs to prioritize the PMIPv6 control plane packets in the egress direction toward the MAGs.

class-map match-all GOLD


match dscp cs6
!
policy-map EGRESS-QOS
class GOLD
priority percent 10
!
interface GigabitEthernet0/0/4
description to Edge-Router
service-policy output EGRESS-QOS

Fluidmesh
The Fluidmesh radios also support QoS with four hardware queues mapped from Class of Service (CoS). The default
CoS mapping is CS0/3, CS1/2, CS4/5, and CS6/7. When a packet enters the radio, the three most significant bits
in the DSCP field are used to assign the priority class. At the time of this writing, QoS is controlled through the CLI of the
radio and not through the GUI. The following is the procedure to enable and view the status of QoS on the radio.

1. Use SSH to access the radio with credentials admin/admin.

X23-ASR920-5#ssh -vrf Trackside -l admin 192.168.0.13


Password:
_____ _ _ _ _
| ___| |_ _(_) __| |_ __ ___ ___ ___| |__
| |_ | | | | | |/ _` | '_ ` _ \ / _ \/ __| '_ \
| _| | | |_| | | (_| | | | | | | __/\__ \ | | |
|_| |_|\__,_|_|\__,_|_| |_| |_|\___||___/_| |_|
____________________________________________
| |
| 2005-2015 (c) Fluidmesh Networks, Inc. |
| www.fluidmesh.com - info@fluidmesh.com |
|____________________________________________|

Welcome to Fluidmesh CLI - Press '?' for help


#

2. Verify the current QoS status.

# qos
QoS disabled

3. Enable QoS, write, and reboot.

# qos status enabled


# write
# reboot

4. After the radio reboots, re-login and verify QoS status.


# qos
QoS enabled

Klas Throughput Performance


The Klas TRX router hardware supports 1 Gigabit Ethernet interfaces. The ESR5921, however, is subject to throughput
licensing. At the time of this document's release, the highest throughput level is 200Mbps. This is calculated as the
aggregate throughput on all interfaces on egress. Per the ESR documentation, if this threshold is exceeded on egress,
the traffic will be randomly dropped.

The offboard throughput will depend on the wireless site survey, which should maximize the signal-to-noise ratio and signal strength for the respective wireless technologies.

Scale
In the Connected Rail Solution, the number of passengers supported is inversely proportional to the throughput each passenger receives. The limiting factor for Internet traffic will be the offboard wireless connection, which means even passengers using 802.11ac clients could see 3G-like speeds depending on the RF conditions. Example scale numbers are 150 passengers for a single-level carriage and 300 passengers for a dual-level carriage. During a field trial test with Fluidmesh as the RF transport, Transmission Control Protocol (TCP) traffic was transmitted at an average of 85 Mbps. This would yield around 560 Kbps per passenger for 150 passengers and 280 Kbps for 300 passengers. These numbers depend highly on passenger density and a proper site survey and RF deployment.
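
Those per-passenger figures follow directly from dividing the measured aggregate by the passenger count: 85 Mbps / 150 passengers is roughly 0.57 Mbps per passenger, and 85 Mbps / 300 passengers is roughly 0.28 Mbps per passenger, which matches the rounded values above.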

Field Trial Results


To better understand radio performance in an end-to-end solution under real world circumstances, tests were
conducted at a test track purpose built for controlled testing with speeds and hand-offs at 100MPH. The routers used
were the Klas TRX-R6 and the Lilee Systems LMS-2450. A number of radio vendors were present to install and configure
their systems to maximize performance over a section of track. The following performance results are specific to the
Fluidmesh radios.

The trackside radios were installed on a 2 mile section of track at a height of approximately 20ft above the track. The
antennas were installed and aligned after conducting a site survey at each catenary pole. A pair of radios was installed
on top of a locomotive engine with the antennas aligned to give maximum signal strength. The radios on top of the train
were configured in a master/slave relationship where the radio with the best signal would actively transmit data.


Figure 50 Test Track Pole Locations

Figure 51 Trackside Equipment

Figure 52 Equipment Mounted on Train


Figure 53 Antennas Mounted on Train

A simplified version of the Connected Rail Solution was used along with a traffic generator to provide simulated customer
traffic. The Klas and Lilee solutions were tested serially to ensure uniform results.


Figure 54 Field Trial Network

(Diagram: the Klas TRX-R and Lilee LMS-2450 sit behind the TRAIN-SW onboard switch, each with Fluidmesh (FM) and cellular offboard paths; the trackside consists of FM radios and Klas eNBs attached to POLE-1-SW through POLE-6-SW over a VLAN 10/20 trunk, terminating at the LMC-5500 and ASR-1K.)

After the Fluidmesh network was installed and optimized for RF, the routers were configured to handle the mobility
between cellular and Fluidmesh. If the Fluidmesh network was properly deployed, the RF connection would not have any
holes in coverage and the routers would see no disconnection in service while in the test area. The connection would
only be lost when the train moved beyond the first and last poles. At that point, the router would roam to the cellular
connection.

The test procedure was as follows:

1. Ensure underlying Layer 2 and Layer 3 connectivity is up between Fluidmesh radios and trackside switches.

2. Verify connectivity from the router on train through the Fluidmesh radio to the trackside mobility anchor. This means
verify that the tunnels are up on the gateway under test.

3. Verify that the laptop behind the router can send traffic to the laptop behind the mobility anchor on the track.

4. Perform a 30MPH test run to ensure that the train and track are operating correctly.

5. Start GPS tracker and traffic generator between a pair of laptops. GPS was used to correlate train position with speed
and traffic throughput.

6. Start 100MPH test run and collect results.

In post processing, the GPS data had to be correlated with the throughput data to have an accurate picture of where the
throughput and signal strength was the highest. Having the time synchronized among all the units enabled that data to
be correlated properly. Using data retrieved from the Fluidmesh radios, the Received Signal Strength Indicator (RSSI) and
handover data could also be correlated to the throughput and physical location.

Figure 55 shows a few results from the testing.


Figure 55 Fluidmesh Only - TCP Upstream


(Plot of TCP upstream throughput in Mbps and RSSI versus time in seconds over the Fluidmesh link, with pole positions 2 through 6, handover events, and the average throughput overlaid.)
Note: The Lilee LMS-2450 used in testing had a maximum throughput of around 50Mbps over the tunnel interface. In
the results shown in Figure 56, gaps occurred in the test results between poles 2 and 4 and between poles 4 and 5. The
graphing application was configured to connect every data point, which explains the smooth straight lines over a long
time interval.


Figure 56 Lilee with Bidirectional TCP Traffic over Fluidmesh


(Plot of bidirectional TCP throughput in Mbps and RSSI versus time in seconds for the Lilee system over Fluidmesh, with pole positions 1 through 6, handover events, and the average throughput overlaid.)

Figure 57 Klas TRX-R6 Bidirectional TCP over Fluidmesh


(Plot of bidirectional TCP throughput in Mbps versus time in seconds for the Klas TRX-R6 over Fluidmesh, with the instantaneous and average throughput shown.)

Figure 58 Lilee Roam - Fluidmesh -> Cellular

Figure 59 Lilee Roam - Cellular -> Fluidmesh


Figure 60 Klas TRX Roam - Fluidmesh -> Cellular

Figure 61 Klas TRX Roam - Cellular -> Fluidmesh


Figure 62 shows the inverse cumulative distribution function for the measured throughput.


Figure 62 Inverse Cumulative Distribution Function (CDF)

(Inverse CDF of measured throughput from 0 to 130 Mbps for three traffic profiles: Fluidmesh with bidirectional TCP, Fluidmesh with RTP plus TCP, and Klas with bidirectional TCP.)

Glossary
Table 2 is a list of acronyms and initialisms used in this document.

Table 2 Acronyms and Initialisms

Term Definition
AGN Aggregation Node
AP Access Point
APN Access Point Name
ASR Aggregation Services Router
BDI Bridge Domain Interface
BGP Border Gateway Protocol
BVI Bridge Virtual Interface
CAPWAP Control and Provisioning of Wireless Access Points
CDF Cumulative Distribution Function
CE Customer Edge
CN-RR Core Node - Route Reflector
COS Class of Service
DHCP Dynamic Host Configuration Protocol
DSCP Differentiated Services Code Point
ESR Cisco Embedded Services Router
ESS Cisco Express Security Specialization
FHRP First Hop Redundancy Protocol
FRR Fast Reroute
GPS Global Positioning System
ICCP Interchassis Communication Protocol

IGP Interior Gateway Protocol
IoE Internet of Everything
IoT Internet of Things
L2VPN Layer 2 Virtual Private Network
L3VPN Layer 3 Virtual Private Network
LAN Local Area Network
LDP Label Distribution Protocol
LMA Local Mobility Anchor
LMC Lilee Mobility Controller
LSP Label Switched Path
LTE Long-Term Evolution
LTS Long Term Storage
MAC Media Access Control
MAG Mobile Access Gateway
MC-LAG Multi-Chassis Link Aggregation Group
mLACP Multichassis Link Aggregation Control Protocol
MN Mobile Node
MPLS Multiprotocol Label Switching
MTG MPLS Transport Gateway
NAI Network Access Identifier
NAT Network Address Translation
NIC Network Interface Controller
NNI Network to Network Interface
NTP Network Time Protocol
OSPF Open Shortest Path First
PAN Pre-Aggregation Node
PAT Port Address Translation
PE Provider Edge
PMIPv6 Proxy Mobile IPv6
PoA Point of Attachment
PW Pseudowire
QoS Quality of Service
RENN REP Edge No-Neighbor
REP Cisco Resilient Ethernet Protocol
RF Radio Frequency
RSSI Received Signal Strength Indicator
RT Route Target
SASD Safety and Security Desktop
SCP Secure Copy

SE Service Edge
SIM Subscriber Identity Module
SLA Service Level Agreement
SSID Service Set Identifier
TCP Transmission Control Protocol
TDM Time Division Multiplexing
TFTP Trivial File Transfer Protocol
UCS Unified Computing System
UNI User Network Interface
URL Uniform Resource Locator
VLAN Virtual Local Area Network
vLMC Virtual Lilee Mobility Controller
VM Virtual Machine
vmNIC Virtual Machine Network Interface Controller
VPN Virtual Private Network
VRF Virtual Routing and Forwarding
VRRP Virtual Router Redundancy Protocol
VSM Video Surveillance Manager
VSMS Video Surveillance Media Server
VSOM Video Surveillance Operations Manager
WAN Wide Area Network
WLAN Wireless Local Area Network
WLC Wireless LAN Controller
WPA Wi-Fi Protected Access
