
Mobile First Campus

Validated Reference Architecture


Contents
REVISION HISTORY
INTRODUCTION
DESIGN GOALS
TARGET AUDIENCE
SCOPE
Reference Material
Related Documents
Graphical Icons
Acronym List
SOLUTION BUILDING BLOCKS
OVERVIEW
DESIGN PRINCIPLES
MODULAR DESIGNS
CAMPUS ACCESS LAYER
CAMPUS AGGREGATION LAYER
WIRELESS MODULE AGGREGATION LAYER
WIRELESS MODULE REDUNDANCY
CAMPUS CORE LAYER
Additional Design Elements & Considerations
Quality of Service
Routing Protocols
SECURITY & ACCESS CONTROL
MANAGEMENT
REFERENCE ARCHITECTURE BUILDING BLOCKS
SMALL OFFICE
LARGE OFFICE
CAMPUS
VRD CASE STUDY OVERVIEW
Headquarters Building
Gold River
Squaw Valley
Kirkwood
Mt. Rose
DESIGN REQUIREMENTS
Availability Requirements
Core Layer Requirements
Aggregation Layer Requirements
Access Layer Requirements
Layer 2 Requirements
Routing Requirements
Multicast Requirements
Administrative Device Access
Network Instrumentation & Management
End User Experience
Security
DESIGN OVERVIEW
SWITCHING ARCHITECTURE
ROUTING ARCHITECTURE
DATACENTER CONNECTIVITY
MULTICAST ROUTING
Quality of Service
IP ADDRESSING
VLANs
Mobility Services Block
Network Services
Device & Network Management
Network Instrumentation
Network Automation
Other Services
ADAPTING THIS CASE STUDY
Port/Slot LAG Interface Diversity
Incorporating VRFs
Reducing Required Number of Physical Interfaces
Dynamic Segmentation
ClearPass Configuration
Adaptation Summary
BUILDING THE NETWORK
DOCUMENT CONTRIBUTORS
APPENDIX A – LAB DEVICE CONFIGURATIONS
SWHQ-CORE1 Configuration
SWHQ-CORE2 Configuration
SWHQ-AGG1A Configuration
SWHQ-AGG1B Configuration
SWHQ-MAGG1A Configuration
SWHQ-MAGG1B Configuration
SWHQ-WAN1 Configuration
SWHQ-WAN2 Configuration
SWHQ-ACC-1A
SWHQ-ACC-A1-2
Mt. Rose Site Configurations
SWMTR-WAN1
SWMTR-WAN2 Configuration
SWMTR-CORE
SWMTR-ACC-1A-1
Mobility Controller Configuration
DUMARS_INC Enterprise Level Config
HQ Site Configuration
HQMC1A Controller
HQMC1B Controller
Gold River Site Config
GDRMC1A Configuration
GDRMC1B Configuration
DMZ Site Config
DMZMC1A Config
Appendix B - PLATFORM SCALING
CAMPUS SWITCHING
WIRELESS


REVISION HISTORY

The following table lists the revisions of this document:

Revision | Date | Change Description
1.0 | September 2018 | Initial Release
1.1 | September 24, 2018 | Minor typographical changes

Table 1 - Revision History

INTRODUCTION
The Aruba Mobile First Reference Architecture Validated Reference Design Guide has been prepared to enable the reader to
understand the building blocks of a Mobile First network, including validated device configurations that align with Aruba leading
practices.
The Aruba Mobile First Architecture accelerates innovation in the mobile, IoT and cloud era. It provides a secure, open, and
autonomous network that is policy-driven, API-centric, IoT ready, and automated. These key features deliver a first-class
user experience and deliver on the promise of any user, any location, same experience.
There are six characteristics of a Mobile First Enterprise Network:

• Policy is unified and multi-vendor
• Manageability is end-to-end and multi-vendor
• Wireless is best-in-breed
• Wired is optimized for wireless and IoT aggregation
• Network analytics for IT, user analytics for Line-of-Business
• End-to-end compelling TCO

This Validated Reference Design (VRD) document will demonstrate how Aruba’s products are used together to build a Mobile
First Campus network. In this VRD, we will provide guidance on network design for small, medium, and large facilities and then
present a case study for a multi-site Mobile First network using the following Aruba solution building blocks:
• Mobility Controllers
• Access Points
• Switches
• ClearPass
• AirWave

The Aruba Mobile First Architecture was designed with automation integration in mind. The Network Analytics Engine uses
the open REST API across the Aruba portfolio to provide real-time monitoring and alert triggering that helps network
operators manage and inspect for network anomalies and changes. When designing a Mobile First network, care should be taken
to plan for automation, both for the initial deployment and for the changes that will occur over the life of the network. The
Aruba approach to ensuring that our designs allow for automation is called D4AO – Designed for Automation and Operation.

DESIGN GOALS
The design prepared in this case study has a target of sub-second failover when network devices or links experience a planned
or unplanned outage. When possible, the default configurations for protocol timers and settings are used, and tuning of these
values is implemented only when required. The design elements presented in this document can be used to build new
networks as well as to optimize and redesign existing networks. This design document is not exhaustive of all design and
configuration options; rather, it is representative of recommended design elements, network devices, and hardware.

TARGET AUDIENCE

This VRD is written for IT Professionals who need to design an Aruba wired and wireless network for a large organization with
multiple sites supporting between 15,000 and 20,000 users/devices. These IT professionals can fill a variety of roles:

• Systems engineers who need a standard set of procedures for implementing solutions.
• Project managers and those who estimate levels of effort to craft statements of work or project plans.
• Aruba partners who sell technology or create implementation documentation.

SCOPE
The Validated Reference Design series documents focus on particular aspects of Aruba technologies and deployment models.
Together these guides provide a structured framework to understand and deploy Aruba Mobile First Networks. The VRD series
has four document categories:
• Foundation guides explain the core technologies of an Aruba network. These guides also describe different aspects
of planning, operation, and troubleshooting deployments.
• Base Design guides describe the most common deployment models, recommendations, and configurations.
• Application guides build on the base designs. These guides deliver specific information that is relevant to deploying
particular applications such as voice, video, or outdoor campus extension.
• Specialty Deployment guides involve deployments in conditions that differ significantly from the common base design
deployment models.

Figure 1 - Aruba Reference Architectures

The Campus Validated Reference Design is considered a Base Design guide within the VRD core technology series.

Reference Material
Readers should have a solid working understanding of basic wired and wireless LAN concepts as well as the Aruba technology
explained in the foundation-level guides before reading this VRD. The following resources will assist readers who require
additional background to digest this document in the intended manner:
• For information on Aruba Mobility Controllers and deployment models, please refer to the Aruba Mobility Controllers
and Deployment Models Validated Reference Design
• The complete suite of Aruba technical documentation is available for download from the Aruba Support Site. These
documents present complete, detailed feature and functionality explanations beyond the scope of the VRD series. The
Aruba support site is located at:
• For more training on Aruba products or to learn about Aruba certifications, please visit the Aruba Training and
Certification page. This page contains links to class descriptions, calendars, and test descriptions.
• Aruba hosts a user forum site and user meetings called Airheads Community. The forum contains discussions of
deployment best practices, products, and troubleshooting tips. Airheads is an invaluable resource that allows network
administrators to interact with each other and Aruba experts.

Related Documents
The following documents may be helpful as supplemental reference material to this guide:
• ArubaOS 8 User Guide
• ArubaOS 8 CLI Reference Guide
• Aruba Solution Exchange
• ArubaOS 8 Fundamentals Guide
• Aruba Dynamic Segmentation for Wired Networks
• Aruba ClearPass Wired Policy Enforcement

Graphical Icons

Figure 2 - Icon Set

Acronym List
Acronym Definition
A-MPDU Aggregated Media Access Control Protocol Data Unit
A-MSDU Aggregated Media Access Control Service Data Unit
AAA Authentication, Authorization, and Accounting
AAC AP Anchor Controller
ACR Advanced Cryptography
AD Active Directory
AP Access Point
API Application Programming Interface
BLMS Backup Local Management Switch
BGP Border Gateway Protocol
BYOD Bring Your Own Device
CoA Change of Authorization
CLI Command Line Interface
CPSec Control Plane Security
CPPM ClearPass Policy Manager
CPU Central Processing Unit
DC Data Center
DNS Domain Name Service
DHCP Dynamic Host Configuration Protocol
DMZ Demilitarized Zone
EAP-PEAP Extensible Authentication Protocol-Protected EAP
EAP-TLS Extensible Authentication Protocol-Transport Layer Security
FQDN Fully-qualified Domain Name
GRE Generic Routing Encapsulation
GUI Graphical User Interface
HA High Availability
HMM Hardware MM
HTTP Hypertext Transfer Protocol
HTTPS HTTP Secure
IP Internet Protocol
IPsec Internet Protocol Security
LMS Local Management Switch
MAC Media Access Control
MC Mobility Controller
MCM Master Controller Mode
MD Managed Device
MD Mobility Device
MM Mobility Master
MM-HW Mobility Master - Hardware
MM-VA Mobility Master – Virtual Appliance
MN Managed Node
NAS Network Access Server
NAT Network Address Translation
NBAPI Northbound Application Programming Interface
OSPF Open Shortest Path First
PAPI Proprietary Access Protocol Interface
PEF Policy Enforcement Firewall
PSK Pre-shared Key
RADIUS Remote Authentication Dial In User Service
RAM Random Access Memory
REST API Representational State Transfer Application Programing Interface
RFP RF Protect
S-AAC Standby AP Anchor Controller
S-UAC Standby User Anchor Controller
SfB Skype for Business
STP Spanning Tree Protocol
SSID Service Set Identifier
UAC User Anchor Controller
UCC Unified Communications and Collaboration
VIP Virtual Internet Protocol address
VLAN Virtual Local Area Network
VM Virtual Machine
VMC Virtual MC
VMM Virtual MM
VPN Virtual Private Network
VPNC Virtual Private Network Concentrator
VRRP Virtual Router Redundancy Protocol
VSF Virtual Switching Framework
WLAN Wireless Local Area Network
WPA2-PSK Wi-Fi Protected Access 2-Pre-Shared Key
XML Extensible Markup Language
ZTP Zero-touch Provisioning
SOLUTION BUILDING BLOCKS
OVERVIEW
This chapter addresses the design decisions and best practices that can be followed to implement an end-to-end Aruba mobile
first architecture for a typical enterprise network. It focuses on architecture design recommendations and explains
the various configurations and considerations needed for each architecture. Reference architectures are provided for
small, medium and large buildings as well as large campuses. For each architecture the following topics are discussed:
• Recommended modular local area network (LAN) designs
• Mobility controller cluster placement
• Design considerations and best practices
• Suggested switch and wireless platforms
The information provided in this chapter is useful for network architects responsible for greenfield designs, network admins
responsible for optimizing existing networks, and network planners requiring a template that can be followed as their
network grows. The scope of this chapter applies to small offices with fewer than 32 Access Points through to large campuses
supporting up to 10,000 Access Points.

Figure 3 - Mobile First Campus Scope (Small Office, Medium Office, Large Office, Campus)

DESIGN PRINCIPLES
The foundation of each reference architecture provided in this chapter is the underlying modular local area network (LAN)
design model that separates the network into smaller, more manageable modular components. A typical LAN consists of a set of
common interconnected layers such as core, aggregation and access that form the main network, along with additional
modules that provide specific functions such as Internet, WAN, wireless and server aggregation.
This modular approach simplifies the overall design and management of the LAN while providing the following benefits:
1. Modules can be easily replicated, providing scale as the network grows.
2. Modules can be added and removed with minimal impact to other layers as network requirements evolve.
3. Modules allow the impact of operational changes to be constrained to a smaller subset of the network.
4. Modules provide specific fault domains, improving resiliency.
The modular design philosophies outlined in this chapter are consistent with industry leading practices and can be applied to
any size network.

MODULAR DESIGNS
The modular design that is selected for a specific LAN deployment is dependent on many factors. Most networks are built using
either a 2-tier or 3-tier modular design (figure 5 and 6). These two designs are commonly described as follows:

• 2-tier Modular Network – Collapses the core and aggregation layers into a single layer. The switches in the core /
aggregation layer perform a dual role, providing aggregation for the access layer and other modules as well as performing IP
routing functions.
• 3-tier Modular Network – Utilizes a dedicated aggregation layer between the core and access layers. The aggregation
layer switches provide aggregation for the access layer and connect directly to the core. For larger networks,
aggregation layer switches are commonly deployed to connect modules such as wireless, WAN, Internet edge and
server farms.

Figure 4 - 2-tier Modular Network

A 2-tier modular design is well suited for small buildings with few wiring closets and access switches. The access layer VLANs
are extended between the access layer switches and the core / aggregation layer switches using 802.1Q trunking. The core /
aggregation switches provide layer 3 interfaces (VLAN interfaces or SVIs) for each VLAN and provide reachability to the rest of
the IP network.

Figure 5 - Layer 2 & Routed Access 3-tier LAN Designs

In a 3-tier modular design, the IP routing functions are distributed between the core and aggregation layers and may be
extended to the access layers. All 3-tier LAN designs will implement IP routing between the aggregation and core switches
using a dynamic routing protocol such as OSPF for reachability and address summarization:
• Layer 2 access layer – All VLANs from the access layer are extended to the aggregation layer switches using 802.1Q
trunking. The aggregation switches provide layer 3 interfaces (VLAN interfaces or SVIs) for each VLAN and provide
reachability to the rest of the IP network.
• Routed access layer – IP routing is performed between the aggregation and access layer switches (as well as
between the aggregation and core layers). In this deployment model each access layer switch or stack provides
reachability to the rest of the IP network.
The Aruba Mobile First Architecture supports both designs allowing our customers to leverage the benefits of Aruba solutions
with either network design.

CAMPUS ACCESS LAYER


When designing and planning a mobile first network, the access layer is likely the most "feature rich" service block in the
network. Features range from network-centric configuration elements such as VLAN configuration and port security to Power
over Ethernet, power supply redundancy, and device stacking. These features fall into three major categories:

• Switch Configuration Elements
• Redundancy & Virtualization
• Power over Ethernet

Switch Configuration Elements


The art of network design strives to provide the most secure network possible while having no perceptible impact on the end
user. In a mobile-first network, to provide the "any user, any location, same experience" promise for both wired and wireless
users, a significant number of security policy elements are configured within ClearPass. These elements are then
dynamically applied to devices when a user connects to the network. If the user roams to another location, the policy
follows the user. This design element is referred to as a "downloadable user role" design. Aruba also provides the capability to
tunnel packets from switches back to a mobility controller or cluster of mobility controllers. This design approach enables
tunneling at either the port level (port-based tunneling) or at the user level (role-based tunneling).

Dynamic Segmentation
Aruba’s ability to assign policy (roles) on the fly to a user or a switch port based on such things as access method of a client,
time-of-day, or type-of-machine is the foundation of Dynamic Segmentation. Dynamic Segmentation allows the same user
experience to wired users/devices as provided to wireless users/devices. No longer do switches need to have statically
configured ports or complex RADIUS VSAs. Aruba implements ‘Colorless Ports’ in which a port has a basic configuration and
until the network authenticates or profiles the user or port, network access is restricted.
Once authenticated or profiled, a "role" is assigned to the user or port. The role dictates which VLAN is assigned and whether
traffic is locally switched or tunneled back to a Mobility Controller or cluster. Roles are generally aligned to business function
to provide specialized network access for users and devices within a Mobile First network. Security controls to allow or deny
traffic flows can be applied at the local switch or at the Mobility Controller (for tunneled traffic). Roles can also assign policy
elements including QoS, reauthentication times, and captive portal information.
In the Mobile First Architecture, it is common to use a tunneling configuration for many device roles. Tunneling allows stateful
firewall rules to be applied at the Mobility Controller on a per-user or per-port basis. Not all traffic must be tunneled back to a
Mobility Controller; rather, key roles such as IoT devices, point of sale systems, and guests/specific users can be tunneled.
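As an illustration, the following ArubaOS-Switch snippet shows the general shape of a user-based (role-based) tunneling configuration. This is a minimal sketch rather than a validated configuration; the RADIUS server, controller cluster addresses, shared secret, and port range are hypothetical placeholders.

    ; Minimal user-based tunneling sketch (ArubaOS-Switch 16.x syntax)
    ; All IP addresses and the port range below are hypothetical
    radius-server host 10.1.10.20 key "SHARED-SECRET"
    tunneled-node-server
       controller-ip 10.1.10.100
       backup-controller-ip 10.1.10.101
       mode role-based
    exit
    ; 802.1X authentication so ClearPass can return a user role per client
    aaa authentication port-access eap-radius
    aaa port-access authenticator 1/1-1/24
    aaa port-access authenticator active

With this shape of configuration, a client that authenticates and receives a role marked for tunneling has its traffic carried to the controller cluster, while other roles remain locally switched.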

In planning for Dynamic Segmentation, it is important to understand appropriate scale, as capabilities differ between switch
models. Platform selection is influenced by the number of role elements (ACE and QoS policies). ArubaOS-Switch does not
support enabling both role-based and port-based tunneling on the same switch/stack. Mobility Controller clusters must also
have sufficient capacity and licensing to support tunneling from switches. Mobility Controller scaling is covered in later
sections of this document.

Switch Model | Role-Based Tunnel (Max Number of Users) | Port-Based Tunnel (Max Number of Ports)
Aruba 2930F | 1024 (stack) | 208 (stacked)
Aruba 2930M | 1024 (stack) | 520 (stacked)
Aruba 3810 Series | 1024 (stack) | 520 (stacked)
Aruba 5400R Series | 1024 (stack) | 520 (stacked)

Figure 6 – Dynamic Segmentation Design/Scale Considerations

NOTE: ArubaOS-Switch supports a maximum of 32 user tunnels per port, up to a maximum total of 1,024 tunnels per switch/switch stack.

Redundancy & Virtualization


In keeping with the design goal of reducing complexity without compromising function and operational resiliency, Aruba
recommends that access layer devices implement redundant power supplies as well as device stacking when possible. For
very port-dense access layer designs, a chassis-based solution may also be used. Aruba recommends dual power supplies for
most designs, primarily to provide continued device uptime, as access layer switches provide power and network connectivity
to access points and wireless users.

Switch Model | Max Stack Members | Max Access Switch Ports per Stack
Aruba 2930F | 8 (VSF) | 384
Aruba 2930M | 10 | 480
Aruba 3810 Series | 10 | 480
Aruba 5400R Series | 2 (VSF) | 576 (leaving no uplink ports)

Figure 7 - Access Layer Stacking Overview

"Just taking the defaults" is not always an optimal approach to a well-designed network. With respect to access layer
stacking, Aruba recommends explicitly defining the stack commander and standby roles to optimally protect the infrastructure
should a device or link failure occur. In a switch stack of three or more members, we recommend assigning these roles to
devices which do NOT have uplinks to the aggregation layer devices. Additionally, when connecting stacking cables between
devices, we recommend a ring topology.
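A hedged sketch of pinning these roles on a four-member backplane stack (2930M/3810M class) follows; the member numbers and priority values are illustrative, with the highest priority winning the commander election.

    ; Give the highest priorities to members WITHOUT aggregation uplinks
    ; (here members 3 and 4 are assumed to carry no uplinks)
    stacking member 3 priority 255
    stacking member 4 priority 200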
Connectivity to access layer switches should be implemented to provide redundancy. By definition, there will be some level of
oversubscription of bandwidth between the access layer and the aggregation layer. Most access layer switches (or stacks) can
be supported with a pair of 10G links, with the goal of being at or below a 20:1 oversubscription ratio. Care should be taken
when exceeding this ratio to ensure that performance remains acceptable. Aruba recommends aggregating both links (or more
as needed) into one logical link to provide an active/active path between network device/service blocks. ArubaOS-Switch uses
the term "Trunk" to refer to two or more physical links combined into one logical link.
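For example, the following ArubaOS-Switch sketch bundles one 10G uplink from each of two stack members into a single LACP trunk; the port names and VLAN ID are hypothetical placeholders.

    ; Two-member uplink LAG toward the aggregation layer (ports hypothetical)
    trunk 1/49,2/49 trk1 lacp
    vlan 110
       name "USERS"
       tagged trk1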
Uplink Forwarding Capacity | Switch/Stack | Total Switch Ports | Oversubscription Ratio
20G (2x10G links) | 2930M with 4 members | 192 | 9.6:1
20G (2x10G links) | 2930M with 8 members | 384 | 19.2:1
40G (4x10G links) | 2930M with 10 members | 480 | 12:1
40G (4x10G links) | Fully loaded 5400R | 512 | 12.8:1

Figure 8 - Access Layer Uplink Oversubscription
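The ratios above follow directly from port math. Taking the 8-member 2930M row, and assuming every access port could offer 1 Gbps simultaneously:

    offered load      = 384 ports x 1 Gbps = 384 Gbps
    uplink capacity   = 2 x 10 Gbps        = 20 Gbps
    oversubscription  = 384 / 20           = 19.2:1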

The diagram below depicts the recommended physical connectivity for a stack containing 4 ArubaOS-Switches.

Figure 9 - ArubaOS-Switch Stacking Connectivity Recommendations

Power over Ethernet


Power over Ethernet is not a "nice to have"; it is all but mandatory in today's networks. As the Internet of Things continues to
grow and evolve, more and more "things" will be connected to the network, and they will require power. PoE is not just for
access points and IP phones; it is being used to power digital signage, conference room control panels, building control
sensors, badge readers, cameras, industrial control systems, overhead paging and lighting systems.
PoE (802.3af) provides a maximum of 15.4 watts per port and PoE+ (802.3at) provides a maximum of 30 watts per port. The
"4PPoE" standard (802.3bt) will provide up to 60 watts per port. PoE has little impact on the network design itself; however,
care must be taken to ensure that the location where the switches are installed has adequate power, cooling, and UPS/battery
backup capacity. There are also implications for the building cable plant which need to be considered when using 802.3bt. For
networks supporting 802.3af and/or 802.3at, copper cable plants have traditionally bundled 96 cable runs together. With the
increased heat generated by 802.3bt, the recommended cable bundles should contain no more than 24 cables. An 802.3bt
cable plant can run up to 15 degrees Celsius (27 degrees Fahrenheit) warmer than an 802.3af/at cable plant.
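When sizing power supplies, UPS, and cooling, a worst-case budget calculation is a useful starting point; actual draw is typically well below the class maximums:

    worst-case PoE draw (48-port 802.3at switch) = 48 x 30 W = 1,440 W
    total budget = 1,440 W + switch system power (model specific)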

Switch | Max. 802.3af Ports (15 W/port) | Max. 802.3at Ports (30 W/port) | Max. 802.3bt Ports (60 W/port) | Required # of Power Supplies | Recommended # of Power Supplies
Aruba 2930F | 48 | 48 | TBD | 1 | 2
Aruba 2930M | 48 | 48 | TBD | 1 | 2
Aruba 3810 Series | 48 | 48 | TBD | 1 | 2
Aruba 5400R Series | 48 | 48 | TBD | 1 | 2

Figure 10 - Access Layer PoE Overview

NOTE: Switch product selection must also consider the use of per-port tunneling. Please review the Dynamic Segmentation information above.

CAMPUS AGGREGATION LAYER


When designing and planning a mobile first network, when should you consider deploying an aggregation layer? The answer to
this question depends on several key factors:
1. The number of access layer switches that need to be connected – At some point the number of SFP / SFP+ / QSFP ports
required to connect the access layer will exceed the physical port capacity of the core switches. An aggregation layer
between the core and access layers reduces the number of physical ports required in the core.
2. The structured wiring design of the building – Intermediate distribution frames (IDFs) in larger buildings typically connect
using fiber to main distribution frames (MDFs) at strategic locations within the building.
o Aggregation switches are often required in MDFs due to limited fiber capacity between the MDFs and the main
server room or data center.
o When multi-mode fiber is deployed, aggregation switches allow the IDFs to be connected when the combined
fiber lengths (IDF + MDF + server room) exceed the optics distance specifications.
MDFs provide ideal locations for aggregation layer switches as they typically aggregate the fiber connections from the
access layer and provide connectivity to the core deployed in the main server room or data center.
3. The manageability, stability and scaling of the network dictate that specific fault domains be introduced into the network.
This is typically achieved by implementing IP routing between the core and respective aggregation layers, ensuring that the
core is isolated from layer 2 faults or operational changes originating from other layers or modules.
4. Reducing layer 2 / layer 3 processing load on the core – As a network grows, MAC address table sizes and IP protocol
processing overhead increase. The inclusion of an aggregation layer offloads the layer 2 learning and IP protocol
processing overhead from the core to the respective aggregation layer switches. The aggregation layer becomes the layer
2 and layer 3 demarcation point for clients, allowing the core to be dedicated to IP routing functions.
5. By design, there will be some level of oversubscription between access layer devices and the aggregation layer as well as
between the aggregation layer and the core layer. A reasonable oversubscription ratio between the access and
aggregation layers is 15-20:1. Minimizing the oversubscription between the aggregation and core layers is recommended.
Application performance and traffic patterns may dictate changing these ratios in your environment.

In planning the bandwidth required between the aggregation layer and the core layer, consideration must be given to the
oversubscription rate in order to properly plan for the number of aggregation layer switches (or pairs of switches) as well as
the uplink port density of core devices. Most aggregation layer switches (or pairs of switches) can be supported with a pair of
40G links, with the goal of being at or below a 12:1 oversubscription ratio. Aggregation layers which have two layer 3 paths to
core devices can perform equal-cost multipath (ECMP) routing, which can nearly double throughput from the aggregation
switch provided that the destination prefixes have multiple routes of equal cost (or metric). The hashing algorithm used will
approach a 50/50 ratio in distributing flows between layer 3 links. When designing a network for a great deal of fault
tolerance, we recommend reducing the oversubscription ratio to 6:1 when multiple aggregation layer device pairs exist, to
protect against significant throughput loss should a core device fail.
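A minimal ArubaOS-CX sketch of the routed uplinks that make ECMP possible is shown below; the OSPF process ID, interface names, and /31 addressing are assumptions for illustration only. With equal-cost OSPF routes learned via both core switches, the hardware hashes flows across the two uplinks.

    router ospf 1
        area 0.0.0.0
    interface 1/1/49
        no shutdown
        description Uplink to CORE1 (hypothetical)
        ip address 10.255.0.1/31
        ip ospf 1 area 0.0.0.0
    interface 1/1/50
        no shutdown
        description Uplink to CORE2 (hypothetical)
        ip address 10.255.0.3/31
        ip ospf 1 area 0.0.0.0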
The table below illustrates potential configurations for aggregation layer switches. The throughput and port counts are based
upon a single switch, and interfaces have been allocated to implement an optimal configuration using either VSX with
ArubaOS-CX devices or VSF with ArubaOS-Switch devices.

Uplink Forwarding Capacity | 10G Interfaces for Access Layer Connectivity | Max Access Layer Bandwidth | Oversubscription Ratio | Notes

Aruba 8320
20G (2x10G links) | 46 | 460Gb | 24:1 | 2x40G for ISL; 2x40G to each core; 2x10G for keepalive; 2x10G for L3 between VSX peers
40G (pair of 2x10G links) | 44 | 440Gb | 12:1 | 2x40G for ISL; 2x40G to each core; 2x10G for keepalive; 2x10G for L3 between VSX peers
80G (2x40G links) | 44 | 440Gb | 6:1 | 2x40G for ISL; 2x40G to each core; 2x10G for keepalive; 2x10G for L3 between VSX peers
160G (pair of 2x40G links) | 44 | 440Gb | 3:1 | 2x40G for ISL; 2x40G to each core; 2x10G for keepalive; 2x10G for L3 between VSX peers

Aruba 8400
20G (2x10G links) | 254 | 2540Gb | 127:1 | 8x 10G modules
40G (pair of 2x10G links) | 252 | 2520Gb | 63:1 | 8x 10G modules
80G (2x40G links) | 192 | 1920Gb | 24:1 | 6x 10G modules; 2x 40G modules
160G (pair of 2x40G links) | 192 | 1920Gb | 12:1 | 6x 10G modules; 2x 40G modules

Aruba 5406R
20G (2x10G links) | 46 | 460Gb | 23:1 | 6x 8-port modules
40G (pair of 2x10G links) | 44 | 440Gb | 11:1 | 6x 8-port modules
80G (2x40G links)* | 32 | 320Gb | 4:1 | 4x 8-port modules; 2x 6-port modules for uplinks and VSF
160G (pair of 2x40G links) | 24 | 240Gb | 1.5:1 | 3x 8-port modules; 4x 6-port modules for uplinks and VSF

Aruba 5412R
20G (2x10G links) | 93 | 930Gb | 46.5:1 | 12x 8-port modules; presuming VSF, 2 VSF links to peer; 1 link for uplink
40G (pair of 2x10G links) | 91 | 920Gb | 23.25:1 | 12x 8-port modules; presuming VSF, 2 VSF links to peer; 2 links for uplink
80G (2x40G links)* | 72 | 720Gb | 9:1 | 9x 8-port modules; 2x 2-port 40G modules; presuming VSF, 2x 40G VSF links to peer; 1 link for uplink
160G (pair of 2x40G links)* | 72 | 720Gb | 4.5:1 | 9x 8-port modules; 2x 2-port 40G modules; presuming VSF, 2x 40G VSF links to peer; 2 links for uplink

In a three-tier model with layer 2 access switches, the Aruba recommended solution to provide connectivity and high
availability is to use ArubaOS-CX switches and leverage Aruba Virtual Switching Extension (VSX). VSX provides for the
aggregation of multiple links from each VSX device to downstream switches as well as for the synchronization of several
configuration elements, including access lists and VLANs.
When building an aggregation layer using VSX, the design must account for uplinks to the core, layer 3 links to the VSX peer
device, the inter-switch link (ISL), and the VSX keepalive link. The ISL should be of equal or greater bandwidth than the
uplinks. Traffic will only traverse the ISL when all layer 3 uplinks from a VSX peer device have failed, a situation that is very
unlikely to occur in a well-designed network.
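The ArubaOS-CX sketch below captures the VSX building blocks just described (ISL LAG, keepalive, role) on one peer; the LAG number, member ports, and keepalive addresses are hypothetical, and exact syntax varies slightly between ArubaOS-CX releases.

    ; ISL LAG carrying all VLANs between the VSX peers (ports hypothetical)
    interface lag 256
        no shutdown
        no routing
        vlan trunk allowed all
        lacp mode active
    interface 1/1/47
        no shutdown
        lag 256
    interface 1/1/48
        no shutdown
        lag 256
    ; VSX context: ISL, out-of-band keepalive, and role election
    vsx
        inter-switch-link lag 256
        keepalive peer 192.168.255.2 source 192.168.255.1
        role primary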
When designing an aggregation layer with the Aruba 8320 switch, which has six 40G interfaces, Aruba recommends the
following connectivity and interface allocation when using 40G uplinks.

Figure 11 - Aggregation Layer with 40g Interfaces

If the design requirements can be met using 10G interfaces, Aruba recommends the following connectivity and interface
allocation. Note that the ISL link is designed to use a pair of 40G interfaces even when using 10G uplinks.

Figure 12 - Aggregation Layer with 10g Interfaces

ArubaOS-Switch devices deployed in an aggregation role can be physically or virtually stacked together. Aruba 5400R devices
can be configured with VSF as they do not support backplane stacking. The design must account for uplinks to the core and
links to the VSF peer device. The VSF peer link should be a LAG with at least two interfaces and a total bandwidth equal to the
sum of the uplink interface bandwidth. Multi-Active Detection (MAD) is a "failsafe" mechanism used in VSF-enabled networks.
The diagram below depicts a VSF configuration with 20G of uplink bandwidth.

NOTE: If you are implementing LLDP-MAD, it is recommended to use an existing network path that is not a direct connection between VSF devices. The LLDP-MAD traffic should not follow an east-west path between VSF devices.

Figure 13 - VSF Interface Allocation
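A hedged 5400R VSF sketch matching the topology above follows; the domain ID, link ports, priorities, and MAD probe target are placeholders, and the LLDP-MAD command syntax may vary by software release.

    ; Member 1 of a two-chassis VSF pair (ports and priorities hypothetical)
    vsf enable domain 1
    vsf member 1 link 1 1/A1
    vsf member 1 link 1 1/A2
    vsf member 1 priority 255
    ; Member 2 is configured similarly with its own link ports and a lower priority
    ; Optional LLDP-MAD probe via a device that is NOT on the direct VSF link path
    vsf lldp-mad ipv4 10.1.1.10 v2c public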

Distributed Trunking (DT), a precursor technology to VSF, is an alternative way to provide high availability. DT does not
provide the same features as VSF, including layer 3 forwarding, and Aruba recommends migrating from DT to VSF.
Fundamentally, DT presents a unified forwarding plane to neighboring devices. DT uses a proprietary protocol that allows two
or more aggregated links to be distributed across two switches to create a link aggregation group called a DT-LAG. The
DT-LAGs appear to the downstream device as if they come from a single device. This allows third-party devices such as
switches, servers, or any other networking device that supports trunking to interoperate with the distributed trunking switches
seamlessly. Distributed trunking provides device-level redundancy in addition to link failure protection.
Each distributed trunk (DT) switch in a DT pair must be configured with a separate ISC link and peer-keepalive link. The peer-
keepalive link is used to transmit keepalive messages when the ISC link is down to determine if the failure is a link-level failure
or the complete failure of the remote peer.

NOTE: DT supports a maximum of two switches and is supported on the 5400R and 3810M platforms.

With a combination of layer 2 and layer 3 services deployed in the aggregation layer, architects should ensure that they do not
exceed layer 2 and layer 3 table sizes. This is of more concern for layer 2 tables, as most campus networks are unlikely to
exceed layer 3 table sizes. Consideration must also be given to dual-stack (IPv4+IPv6) environments. The tables below list
validated scale for Aruba switches used to provide aggregation layer services. Aruba recommends not exceeding 80% of the
capacity.

Switch Series | MAC Table Size | IPv4 ARP Entries | IPv6 ND Entries | Dual Stack Clients (1 IPv4 ARP + 2 IPv6 ND)
Aruba 3810 Series (16.06) | 64,000 | 25,000 | 25,000 | 8,333
Aruba 5400R Series (16.06) | 64,000 | 25,000 | 25,000 | 8,333
Aruba 8320 Series (CX 10.1.020, configured in Mobile-First Mode) | 47,000 | 47,000 | 44,000 | 22,000
Aruba 8400 Series (CX 10.1.020) | 64,000 | 64,000 | 48,000 | 32,000

Table 14 - Aggregation Switch Layer 2 Table Sizes

Switch Series | IPv4 Unicast Routes | IPv4 Multicast Routes | IPv6 Unicast Routes | IPv6 Multicast Routes
Aruba 3810 Series (16.06) | 10,000 | 2,048 | 5,000 | 12,500
Aruba 5400R Series (16.06) | 10,000 | 2,048 | 5,000 | 12,500
Aruba 8320 Series (CX 10.1.020, configured in Routing-Mode) | 72,000 | 3,200 | 20,000 | *
Aruba 8400 Series (CX 10.1.020) | 100,000 | 4,000 | 20,000 | *

Table 15 - Aggregation Switch Layer 3 Table Sizes

NOTE: Beginning with ArubaOS-CX 10.1, the Aruba 8320 provides two modes of operation to allocate resources to either a layer 2 focused configuration or a layer 3 focused configuration. The data presented in the tables above are the maximum values for either a layer 2 or layer 3 configuration.

Consideration should also be given to the operating system and its default and long-term behavior when performing capacity
planning. Unfortunately, the current state of IPv6 support across operating systems and versions is fragmented. This means a
single IPv6 addressing method will not necessarily support all the IPv6 clients that can connect to a Mobile First network. For
example, if your Mobile First network supports Android devices, you must implement Stateless Address Autoconfiguration
(SLAAC) along with RFC 8106 to provide DNS information to clients. However, RFC 8106 is not supported by older MacOS or
Windows operating systems. Therefore, a combination of IPv6 addressing methods must be enabled to ensure all the devices
on the network can obtain the IPv6 addressing and DNS information required to use the network.
Operating System | SLAAC with RFC-8106 | Stateless DHCPv6 | Stateful DHCPv6
Mac OS X | Yes (10.11 and above) | Yes (10.7 and above) | Yes (10.7 and above)
Windows 7/8/8.1/10 | SLAAC only | Yes | Yes
Windows 10 Creators Update | SLAAC + RDNSS | Yes | Yes
iOS | Yes (11.0 and above) | Yes (4.0 and above) | Yes (4.3.1 and above)
Android | Yes (5.0 and above) | No | No

Table 16 - Client IPv6 Addressing Support

NOTE: All information in the above table has been gathered using various sources on the internet. Initial release support for RDNSS for Apple devices is not well documented, but it has been verified to work on current iOS and MacOS releases.

DHCPv6- and SLAAC-enabled networks behave differently. It is not uncommon for clients in a SLAAC-enabled environment to
have IPv6 privacy extensions enabled and thus hold several IPv6 addresses.
The data in the table below was captured after each host had been online for 4 hours. Several factors could lead to additional
addresses being assigned to a device. If possible, data from the current environment should be used to construct a baseline
and ensure that you have sufficient platform scaling capacity.

Operating System | IPv4 Addresses | IPv6 Addresses when using DHCPv6 | IPv6 Addresses when using SLAAC
Mac OS X 10.12 | 1 | 2 (link local, global) | 3 (link local, 2 global)
Windows 10 | 1 | 2 (link local, global) | 3 (link local, 2 global)
Ubuntu 9 | 1 | 2 (link local, global) | 3 (link local, 2 global)
iOS 12 Beta | 1 | 2 (link local, global) | 3 (link local, 2 global)
Android | 1 | Google does not support DHCPv6 for address assignment with Android devices | 3 (link local, 2 global)

Table 17 - IPv4 and IPv6 Addressing of Client Operating Systems

Note that the number of IPv6 addresses per client when using SLAAC is not the maximum number of addresses a client can
use; rather, it is the observed number of IPv6 addresses after a device power-on/boot followed by four hours of operation.
If both DHCPv6 and SLAAC are configured, some clients will obtain IPv6 addresses from both SLAAC and DHCPv6. Larger
networks designed to support more than 10,000 devices should take care in planning for IPv6.
WIRELESS MODULE AGGREGATION LAYER
For the wireless module, a dedicated aggregation layer will typically be introduced once the number of wireless and dynamically
segmented client host addresses exceeds a specific count. As wireless and dynamically segmented client traffic is tunneled
from the Access Points and access layer switches to the mobility controller cluster, the MAC learning and IP processing
overhead is incurred by the first-hop router for those VLANs. In a 2-tier modular network design this overhead is incurred by the
aggregation / core layer while in a 3-tier modular network design this overhead is incurred by the core. The addition of a
dedicated wireless module aggregation layer migrates the MAC learning and IP processing overhead from the core to a
dedicated wireless aggregation layer providing stability, fault isolation and scaling.
As a general best practice, Aruba recommends implementing a dedicated wireless aggregation layer when the total number of
IPv4+IPv6 addresses from both wireless and dynamically segmented clients exceeds 4,096. This recommendation future-proofs
the network and ensures the core layer is not overwhelmed as new classes of devices such as IoT are added to the
network or IPv6 is introduced, which can double or triple the number of host IP addresses.
How large can a wireless module scale? The answer depends on the scaling capabilities of the aggregation layer switches
deployed for the wireless module. Switches are designed to support a specific number of hosts, which includes the necessary
table sizes and processing power to perform layer 2 (MAC) and layer 3 (ARP+ND) learning and table maintenance.
The latest generation of campus switches from Aruba can comfortably scale to support 64,000 addresses (IPv4 & IPv6). As a
general rule, you should design your network so that the total number of host addresses per wireless module does not exceed
the capacity of your wireless module aggregation switches. Larger wireless networks scaling beyond 64,000 IPv4 or IPv6 host
addresses require additional wireless modules, each consisting of an aggregation layer and mobility controller cluster.
Scaling for native IPv4 deployments is easy to calculate as each host is assigned a single IPv4 address. A wireless module
using Aruba 8400 series aggregation switches can comfortably scale to support 64,000 IPv4-only hosts. The number of hosts
that can be supported for dual-stack (IPv4+IPv6) or native IPv6 deployments is more challenging to calculate, as each IPv6
host can be assigned multiple IPv6 addresses (a link-local address plus one or more global addresses). Therefore, the total
number of IPv6 addresses that are assigned per host determines the maximum overall number of hosts that can be supported
within each wireless module:
• Native IPv6 – Assuming each host is assigned one global address, each wireless module can support a
maximum of 48,000 wireless + dynamically segmented hosts.
• Dual-Stack – Assuming each host is assigned one IPv4 address and two global IPv6 addresses, each wireless module
can support a maximum of 32,000 wireless + dynamically segmented hosts, as each host will consume one entry per
global address.

NOTE: Strategies and architectures to scale networks with ARP and ND requirements beyond the scale data noted above are discussed in the campus reference architecture section. An Aruba mobile first architecture can be scaled to support up to 100,000 clients per Mobility Master by implementing multiple mobility controller clusters, each with its own aggregation layer.

WIRELESS MODULE REDUNDANCY


One important aspect of an Aruba mobile first redundant design is the connectivity of the wireless module that contains the
Mobility Controllers. The cluster of Mobility Controllers terminates the Access Points' management and control tunnels as well
as the wireless and dynamically segmented client tunnels. To provide redundancy to Access Points and clients, each cluster
consists of a minimum of two Mobility Controllers, scaling to four or twelve cluster members (depending on model). As a best
practice, each cluster must contain members of the same model.
Each of the Mobility Controllers in the cluster connects to a pair of Aruba switches using dynamic port-channels forming a LAG.
LACP is enabled to verify peer availability and provide layer 2 loop prevention. The Mobility Controllers are connected to core or
wireless aggregation layer switches depending on the 2-tier or 3-tier hierarchical network design selected for the deployment
and the number of wireless and dynamically segmented hosts.
Redundancy within the wireless module is provided at multiple layers:
• ArubaOS 8 Clustering – Each Access Point and client establishes a tunnel to a primary and secondary Mobility
Controller within the cluster (see chapter X). This ensures a network path is available to Access Points and clients in
the event of an in-service upgrade or a planned / unplanned Mobility Controller outage.
• Device / Link Redundancy – Each Mobility Controller is connected to two Aruba switches supporting network
virtualization functions (NVF) in the core or wireless aggregation layer. ArubaOS-CX switches provide NVF with VSX
bundled interfaces for active/active forwarding. ArubaOS-Switch based devices implement NVF via physical stacking,
logical stacking (VSF or DT) and Trunks to bundle interfaces for active/active forwarding. This ensures a network path
is available to the Mobility Controllers, Access Points and clients in the event of a planned or unplanned core or
wireless aggregation switch outage or link failure.
• Path Redundancy – Provided using the Link Aggregation Control Protocol (LACP), which is part of the IEEE 802.3ad
standard. LACP is an active protocol that allows LAG switch peers to detect whether their peer port and device are
operational.
• First-Hop Router Redundancy – The network must provide for the continued forwarding of packets during a failure of
the default gateway. This capability is natively provided by Aruba switches supporting NVF without the need to
implement first-hop routing redundancy protocols such as VRRP.
For all mobile first reference architectures, the Mobility Controller ports in the LAG are distributed between pairs of Aruba
switches implementing NVF. The Aruba switches that the Mobility Controllers connect to will depend on the 2-tier or 3-tier
hierarchical network design selected for the deployment and the number of wireless clients that are supported. The Aruba
switches supporting the wireless module can be a stack of Aruba 3810Ms, a pair of Aruba 5400Rs configured for VSF, or a
pair of Aruba 8320s or 8400s configured with VSX.
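For example, on a 5400R VSF pair, the switch side of one controller's LAG might look like the hedged sketch below, with one member port on each chassis; the ports, trunk name, and VLAN IDs are hypothetical placeholders.

    ; One LACP trunk per Mobility Controller, split across both VSF members
    trunk 1/A1,2/A1 trk10 lacp
    vlan 100
       name "MC-CLUSTER-MGMT"
       tagged trk10
    vlan 110
       name "WLAN-USERS"
       tagged trk10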

ARUBA 3810M SWITCHES


Figure 17 demonstrates how a cluster of Mobility Controllers is connected to a stack of Aruba 3810M switches deployed in the
core, core / aggregation or wireless aggregation layer. The Aruba stacking architecture virtualizes both the control and data
planes, allowing the 3810M stack of switches to forward traffic and be configured and managed as a single virtual switch.
In this example two or more 1 Gigabit or 10 Gigabit Ethernet ports from each Mobility Controller are configured as a LAG and
are distributed between the Aruba 3810M switches in the stack. The switch ports are configured as a dynamic port-channel on
the Aruba Mobility Controllers and LACP trunks on the Aruba 3810M switches.
First-hop router redundancy for the cluster management and client VLANs is natively provided by the stack of Aruba 3810M
switches, which provides the default gateway for each VLAN. One Aruba 3810M switch in the stack operates in the
"commander" role while a second switch operates in the "standby" role. Following switch configuration leading practices, the
switch roles should be manually assigned. The "commander" switch provides IP forwarding functions during normal operation
and the "standby" assumes these functions in the event that the "commander" switch fails.
Figure 17 - Core / Aggregation using Stacking

ARUBA 5400R SWITCHES


Figure 18 demonstrates how a cluster of Mobility Controllers is connected to a pair of Aruba 5400R switches deployed in the core or wireless aggregation layer and configured for VSF. The Aruba VSF architecture virtualizes both the control and data planes, allowing the pair of 5400R switches to forward traffic, and to be configured and managed, as a single virtual switch.
In this example two or more 1 Gigabit, 10 Gigabit or 40 Gigabit Ethernet ports from each Mobility Controller are configured as a LAG and are distributed between the pair of Aruba 5400R switches. The switch ports are configured as a dynamic port-channel on the Aruba Mobility Controllers and as LACP trunks on the Aruba 5400R switches.
First-hop router redundancy for the cluster management and client VLANs is natively provided by the VSF pair of Aruba 5400R switches, which provides the default gateway for each VLAN. One Aruba 5400R switch operates in the “commander” role while the second switch operates in the “standby” role. The switch roles should be defined manually. The “commander” switch provides IP forwarding during normal operation and the “standby” switch provides backup in the event that the “commander” switch fails.
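A minimal VSF sketch for the 5400R pair follows, assuming one VSF link per member on port A1; the domain ID, ports and priorities are illustrative:

    ; Aruba 5400R – Virtual Switching Framework (VSF) pair
    vsf member 1 link 1 A1
    vsf member 1 priority 255
    vsf member 2 link 1 A1
    vsf member 2 priority 128
    vsf enable domain 1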

Figure 18 - Core / Aggregation using Virtual Switching Framework (VSF)

ARUBA 8320 & 8400 SWITCHES


Figure 19 demonstrates how a cluster of Mobility Controllers is connected to a pair of Aruba 8320 or 8400 switches deployed in the core or wireless aggregation layer and configured with VSX. The Aruba VSX architecture virtualizes the data plane, allowing the pair of 8320 / 8400 switches to forward traffic as a single virtual switch. While both devices maintain independent control planes, VSX Configuration Sync is recommended to ensure identical configuration elements for interface pairs used in VSX configurations. VSX Config Sync can synchronize key items including access-lists and VLANs.
In this example two or more 1 Gigabit, 10 Gigabit or 40 Gigabit Ethernet ports from each Mobility Controller are configured as an MC-LAG and are distributed between the pair of Aruba 8320 / 8400 switches. The switch ports are configured as a dynamic port-channel on the Aruba Mobility Controllers and as an MC-LAG on the Aruba 8320 / 8400 switches.
First-hop router redundancy for the cluster management and client VLANs is natively provided by the VSX pair of Aruba 8320 / 8400 switches, which provides the default gateway for each VLAN. The active-gateway feature is enabled for each VLAN, providing IP forwarding and failover on both switches.
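The following is a minimal AOS-CX sketch of this design, assuming LAG 256 as the inter-switch link, LAG 1 toward one Mobility Controller, and VLAN 200 as a client VLAN; all IDs, addresses and the MAC value are illustrative, and active-gateway syntax varies slightly by AOS-CX release:

    ! Aruba 8320 / 8400 (AOS-CX) – VSX pair with MC-LAG and active-gateway
    vsx
        inter-switch-link lag 256
        role primary
        keepalive peer 10.0.0.2 source 10.0.0.1
    interface lag 1 multi-chassis
        no shutdown
        no routing
        vlan trunk allowed 100,200
        lacp mode active
    interface vlan 200
        ip address 10.1.200.2/24
        active-gateway ip 10.1.200.1 mac 02:01:00:00:01:00

The VSX peer would mirror this configuration with role secondary, its own SVI address, and the same active-gateway IP and MAC, so that either switch can forward first-hop traffic.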

Figure 19 - Core / Aggregation using VSX with Multi-Chassis LAG

CAMPUS CORE LAYER


The primary function of the campus core layer is to forward packets as quickly as possible. To that end, the design for the core
layer should be layer 3 centric with a minimum number of features enabled. One key element which must be included in a good
design is high-availability. The core must be reliable and available. There are several other considerations which shape the
design of the Campus Core Layer:
1. The number of aggregation layer and other service block switches that need to be connected – At some point the number of SFP / SFP+ / QSFP ports required to connect other service blocks will exceed the physical port capacity of the core switches. This consideration is more relevant for fixed-port switches such as the 8320, but it also applies to modular chassis such as the 5400R and the 8400.
2. The structured wiring design of the building should be considered to avoid oversubscribing the links between the core and the aggregation switches. The generally accepted oversubscription ratio for aggregation blocks connecting to the core is 4:1.
3. Connections from the Core switches to Data Center switches should be provisioned at no greater than a 2:1 ratio.
4. Reducing layer 2 / layer 3 processing load on the core – As a network grows, MAC address table sizes and IP protocol
processing overhead increases. The inclusion of an aggregation layer offloads the layer 2 learning and IP protocol
processing overhead from the core to the respective aggregation layer switches. The aggregation layer becomes the layer
2 & layer 3 demarcation points for the clients allowing the core to be dedicated to IP routing functions.
5. Leading practices call for avoiding protocol redistribution on core devices when possible. This may not be possible in all designs. It is not uncommon to redistribute connected interfaces into OSPF or BGP, but it is recommended to use appropriate network statements when possible to ensure that the appropriate prefixes are injected into the routing information base.
6. Core devices are likely candidates for multicast rendezvous points (RPs).

As noted above, the core must be reliable and available. A chassis-based device with redundant management modules and line cards is often the best option, but it is not required. Redundant LAGs to aggregation devices should be used whenever possible. Triangle topologies between layer 3 devices should be used, as opposed to square topologies, to speed convergence in the event of a network path or device failure.

Two additional guidelines apply to the core:

1. Disable unneeded features and functions
2. Keep the configuration simple to reduce overall complexity whenever possible

Additional Design Elements & Considerations


There are several topics which need to be addressed for the entire network, as they impact and are relevant to the entire system rather than a single service block or network layer. These topics commonly include:

• Quality of Service
• Inter-site and Intra-site Routing

Quality of Service
Quality of service (QoS) configurations can be very complex to design, implement and manage. Ultimately, the purpose of QoS is to manage “unfairness” in the network and help prioritize packets so that business-critical applications are most likely to perform properly during times of network congestion. In designing a QoS configuration Aruba recommends using the following guidelines:

1. Classification is very environment specific and must be adjusted for all but a very simple design.
2. Build the simplest model possible to minimize the challenges in supporting a complex design.
3. Classify or re-mark packets as close to the ingress as possible.
4. Voice traffic should be placed in the highest priority queue and the configuration should ensure that these packets are transmitted before all other queues (strict priority queue), as in the sketch below.
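As an illustration of guidelines 3 and 4, the following AOS-CX sketch trusts DSCP markings at the ingress port and services the highest queue strictly before the weighted queues; the profile name, weights and port are illustrative:

    ! AOS-CX – trust DSCP at ingress, service queue 7 (voice) strictly
    qos schedule-profile VOICE-STRICT
        strict queue 7
        dwrr queue 6 weight 25
        dwrr queue 5 weight 20
        dwrr queue 4 weight 15
        dwrr queue 3 weight 15
        dwrr queue 2 weight 10
        dwrr queue 1 weight 10
        dwrr queue 0 weight 5
    apply qos schedule-profile VOICE-STRICT
    interface 1/1/1
        qos trust dscp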

Routing Protocols
A question which is frequently asked is ‘which routing protocol should I use?’ There are two viable options for an MFRA
network. OSPF and BGP are viable protocols which can be used. OSPF is more commonly used in Campus environments but
it is becoming more common to see BGP used within the Campus. There are advantages and disadvantages to both protocols.
Single area OSPF designs with several OSPF speakers are generally less complicated to design and support than an BGP
solution. BGP provides much more flexibility and control with respect to prefix advertisement and filtering as compared to
OSPF.

Returning to the ‘which routing protocol should I use’ question, the best answer is to use OSPF and BGP in the right places in your network. For example, if you have a campus network with 500 OSPF speakers, it is advantageous to build a design with

multiple OSPF “islands” interconnected via BGP peers. This approach would provide the following advantages over a pure OSPF network:

1. Reduced OSPF area complexity (potentially eliminating all ABRs)
2. Reduced impact of link / device flaps and thus fewer SPF calculations

It would also require the use of redistribution (and potentially mutual redistribution) which, depending upon the IP addressing plan, can be very tedious. Starting with the ‘simple is best’ design approach and using OSPF as our routing protocol, the table below provides guidance as to when designs would likely benefit from using OSPF and BGP.

Connectivity Scenario                                 Recommended Protocol(s)

Campus Access to Campus Aggregation                   OSPF (1)
Campus Core to Campus Aggregation                     OSPF
Inter-site connectivity over layer 2 SP circuits      BGP for inter-site routing and OSPF for intra-site routing
Campus Core to Data Center(s)                         OSPF between the Campus Core and the DC edge; the DC may be using BGP or OSPF
Campus Core to “Computer Room”                        OSPF
Large single area OSPF Campus networks                OSPF for intra-building routing and BGP for inter-building routing

(1) OSPF in the access layer would only be seen if the network design implements a routed access layer model.

Figure 20 - Routing Protocol Selection
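To illustrate the OSPF “island” approach, the following AOS-CX sketch shows a border switch running OSPF inside its island while peering with a neighboring island over eBGP; all addresses, AS numbers and the advertised prefix are illustrative:

    ! AOS-CX – OSPF inside the island, eBGP between islands
    router ospf 1
        area 0.0.0.0
    interface 1/1/1
        ip address 10.10.0.1/31
        ip ospf 1 area 0.0.0.0
    router bgp 65001
        neighbor 10.20.0.2 remote-as 65002
        address-family ipv4 unicast
            neighbor 10.20.0.2 activate
            network 10.10.0.0/16

Using a network statement, rather than blanket redistribution, keeps the advertisement limited to the prefixes the island is intended to announce.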

SECURITY & ACCESS CONTROL


CLEARPASS

Aruba’s ClearPass Policy Manager, part of the Aruba 360 Secure Fabric, provides role- and device-based secure network access control for IoT, BYOD, corporate devices, as well as employees, contractors and guests. With a built-in context-based policy engine, RADIUS, TACACS+, non-RADIUS enforcement using OnConnect, device profiling, posture assessment, onboarding, and guest access options, ClearPass is unrivaled as a foundation for network security for organizations of any size. Mobile-First networks leverage ClearPass for end-user authentication, device authentication / profiling, as well as administrative authentication to network infrastructure devices / systems. User roles are configured in ClearPass and then are automatically pushed to Mobility Controllers and Access Switches as required. User roles can include access-lists, QoS configuration elements, VLAN membership and similar elements.
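For illustration, the following ArubaOS sketch defines a role of the kind a Mobility Controller could apply when ClearPass returns it; the role name, ACL and VLAN are illustrative:

    ! ArubaOS (Mobility Controller) – a user role referenced by ClearPass
    ip access-list session EMPLOYEE-ACL
        user any any permit
    user-role EMPLOYEE
        access-list session EMPLOYEE-ACL
        vlan 200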

ClearPass provides high availability through a publisher-subscriber model. A simple HA design would include a publisher and two subscribers. Network devices would be configured to interact with the subscribers while administrators would perform all configuration on the publisher. The publisher would then replicate configurations to the subscribers. Design requirements may call for having additional subscribers in each facility.

NOTE: ClearPass design and leading practices are beyond the scope of this document. Please review the VRDs and other documents available on Arubapedia for additional information regarding ClearPass.

MANAGEMENT
AIRWAVE
An end-to-end Mobile-First network leverages AirWave to provide network monitoring and management. AirWave provides
controllability and visibility for wired and wireless devices in any network with a single graphical interface. Key AirWave features
include:

• Real-Time Monitoring & Visibility
• Network Provisioning
• AppRF
• Connectivity Analysis
• RAPIDS
• VisualRF
• Configuration Management

Human error is one of the top reasons, if not the top reason, for network outages. The best-designed network, if not implemented and managed properly, will experience more unplanned outages than a network which was designed and built to be operations centric. Zero Touch Provisioning (ZTP) is a powerful way to ensure that configurations for devices are deployed automatically and consistently using customized templates. To deliver on the Mobile First promise of any user, any location, same experience, we must adopt an approach to design systems for automation and operation. Aruba calls this methodology D4AO. D4AO standardizes configurations and provides a solution to manage the ‘network’ as a collection of systems providing a service, rather than as a set of access points, switches, and mobility controllers.

Beyond provisioning, AirWave provides monitoring of devices via SNMP, SSH, ICMP, and other protocols to provide
administrators the capability to view the performance of their network and clients in real-time or historically.

NOTE: The case study presented in this VRD includes configurations built using the D4AO methodology.

NETWORK ANALYTICS ENGINE
The Network Analytics Engine is a monitoring and analytics tool that is built into the AOS-CX operating system. Powered by Python and utilizing the REST API, the Network Analytics Engine allows for constant monitoring for anomalous behavior in your network, with the capability to automatically alert and take actions.
These actions include using REST, using SSH, interacting with syslog events, and even calling custom Python function definitions. The REST API is supported across all of the Aruba devices and applications, allowing the Network Analytics Engine to interact with the Aruba portfolio as well as 3rd-party tools that support REST.
The capability of real-time monitoring with an alert system enables network operators to be immediately alerted when traffic anomalies occur, and to take action without operator intervention. Utilizing the built-in time series database, the Network Analytics Engine keeps a history of what it is monitoring. This enables operators to identify trends and traffic regularities to better identify future anomalies.
As networks become increasingly complex, automation is necessary to help manage and keep tight control over a network and the devices attached to it. The Network Analytics Engine utilizes automation in an easily viewable fashion, while complementing other automation tools through our open REST API.
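For illustration, the sketch below shows the general shape of an NAE agent script: a Monitor watching a REST URI, a Rule with a time-based condition, and built-in actions fired from a Python callback. The URI, threshold and CLI command are illustrative, and the exact SDK class names and condition syntax vary by AOS-CX release, so treat this as a shape rather than a drop-in script:

    # NAE agent sketch (Python) – alert when CPU stays above a threshold
    Manifest = {
        'Name': 'cpu_threshold_monitor',
        'Description': 'Alert when CPU utilization stays high',
        'Version': '1.0',
        'Author': 'Example'
    }

    class Agent(NAE):
        def __init__(self):
            # Monitor a REST attribute exposed by AOS-CX (URI illustrative)
            uri = '/rest/v1/system?attributes=resource_utilization.cpu'
            self.m1 = Monitor(uri, 'CPU utilization (%)')
            self.r1 = Rule('CPU above threshold')
            self.r1.condition('{} > 90 for 30 seconds', [self.m1])
            self.r1.action(self.on_high_cpu)

        def on_high_cpu(self, event):
            # Built-in actions: raise a syslog message and capture CLI output
            ActionSyslog('CPU has exceeded 90% for 30 seconds')
            ActionCLI('show system resource-utilization')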

NETEDIT
NetEdit is an Aruba tool that provides powerful network-wide configuration and conformance services for ArubaOS-CX devices. Network design in the D4AO model provides key advantages to leverage the power of NetEdit. For example, having a uniform VLAN definition for all sites, a common hostname construct, and an address block for device management allows for creating powerful conformance queries to ensure that the ‘as-designed’, ‘as-implemented’, and current-state configurations are as expected.
NetEdit provides the ability to compare configuration elements between dynamically defined groups of devices. Over time
network device configurations change due to new requirements, new technologies, and unplanned changes. Providing the
network administrator a powerful and easy to use tool that quickly provides actionable information is a key design goal of
NetEdit. Reducing “configuration drift” leads to better performing and “well behaved” networks – something that is equally
important to network administrators supporting small, medium and large networks.
The D4AO elements that are being incorporated into the case study in this VRD include:

Configuration Element      Convention
Hostname                   Hostnames should note the device location, role and a unique identifier.
Management IP Address      Each device should have a unique IP address by which it is managed.
QoS Class & Policy Names   Devices should share common names for QoS configuration elements.
ACLs and ACEs              Devices should share common ACL and ACE entries.
Route maps                 Devices should share common route-maps for identical functions. Route-map names should suggest how the route-map is used. For example, a route-map for redistribution of OSPF routes into BGP would be named ‘OSPF->BGP’.
IP Prefix Lists            Common prefix lists should be defined for devices performing the same role (such as WAN edge). Uppercase names are suggested as they stand out when reading a configuration.
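For illustration, the following AOS-CX fragment applies several of these conventions at once; the hostname, prefix list contents and sequence numbers are illustrative:

    ! AOS-CX – conformance-friendly naming per the conventions above
    hostname BLD1-FL2-AGG-01
    ip prefix-list WAN-EDGE-OUT seq 10 permit 10.0.0.0/8 le 24
    route-map OSPF->BGP permit seq 10
        match ip address prefix-list WAN-EDGE-OUT

Because every device uses the same names for the same functions, a NetEdit conformance query can flag any device whose ‘OSPF->BGP’ route-map or ‘WAN-EDGE-OUT’ prefix list has drifted from the design.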

Please review the references at the end of this document for additional information about NetEdit.

REFERENCE ARCHITECTURE BUILDING BLOCKS
This section includes mobile first reference architectures for small, medium and large buildings, as well as campuses consisting of multiple buildings of different sizes. For convenience, a scenario is provided for each architecture to provide a baseline upon which the modular network and wireless module design is derived. Each architecture also builds upon the previous design, adding additional layers as the access layer and client counts increase.

SMALL OFFICE
SCENARIO
The following reference design is for a small office consisting of a single floor. The building includes one main distribution frame
(MDF) / server room and one intermediate distribution frame (IDF) that connects to the MDF using multi-mode fiber. The
building supports up to 150 employees and requires 15 x 802.11ac Wave 2 Access Points to provide full 2.4GHz and 5GHz
coverage.

Building Characteristics:
• 1 Floor / 20,000 sq. ft. Total Size
• 150 x Employees / 300 x Concurrent IPv4 Clients
• 15 x 802.11ac Wave 2 Access Points
• 1 x Combined Server Room / Wiring Closet (MDF)
• 1 x Wiring Closet (IDF)

This building implements two wiring closets and therefore does not require an aggregation layer between the core and access layers. This building will implement a 2-tier modular network design where the access layer switches and modules connect directly to a collapsed core / aggregation layer (figure 3-0). This 2-tier modular network design can also accommodate small buildings with a larger square footage and additional floors if required.
The following is a summary of the modular network architecture and design:
LAN Core / Aggregation:
• Cluster or stack of switches with mixed ports:
o SFP/SFP+ (Access Layer Interconnects)
o 10/100/1000BASE-T Ports (Module Connectivity)
• IP routing
• Layer 2 Link Aggregation to Access Layer devices and Module Connectivity
LAN Access:
• A stack of two or more switches per wiring closet:
o SFP/SFP+ (Core / Aggregation Layer Interconnects)
o 10/100/1000BASE-T with HPE SmartRate (Edge Ports)
• Layer 2 Link Aggregation to Core / Aggregation Layer Devices
• 802.11ac Wave 2 Access Points

Figure 3-0. Small Office – 2-Tier Modular Network Design

NOTE: The number of Access Points required for this hypothetical scenario was estimated based on the building’s square footage and the wireless density / capacity requirements. For this scenario it was determined that 15 x Access Points would be required, based on each Access Point providing 1,200 sq. ft. of coverage. Each Access Point in this scenario supports 30 clients.
The actual number of Access Points and their placement for a real deployment should be determined using a site survey factoring each individual coverage area’s density requirements.

CONSIDERATIONS & BEST PRACTICES
This section provides a list of key design and implementation considerations for this reference design.
LOCAL AREA NETWORK
The small building in this scenario has a two-tier network with a collapsed core / aggregation design. The core / aggregation switch could be a single device or a stack of two or more switches (virtual stack or backplane stack). The core / aggregation switches will connect to the access switches with layer 2 links and will provide any and all routing between VLANs. It is likely that there will be no more than 4 VLANs (one for device management, one for users, one for building management, and one for security cameras). The core switch can provide connectivity to an optional switch stack for any local compute resources (computer room stack). The “size” of the small building may not warrant having any in-building local compute or a dedicated computer room switch.
The recommended core / aggregation design calls for switch redundancy, which can be achieved using either ArubaOS-CX or ArubaOS-Switch devices. It is likely that a small building will use ArubaOS-Switch devices, with redundancy implemented via backplane stacking or the Virtual Switching Framework (VSF).
The recommended access switch design is to use one or more switches per IDF in a stacking configuration. The switch stacks need to provide enough power for access points and other PoE devices, as well as enough Ethernet interfaces for wired systems. Stacking is recommended to build fault-tolerant designs so that if one switch is off-line, there is still connectivity to access points and the building core / aggregation switches. Connectivity to the core / aggregation layer would be provided by two 10G ports (using ports from different switches in the stack) configured in a LAG / MC-LAG.
The optional computer room switch has similar redundancy considerations as the core / aggregation switch stack. While we don’t need to plan for PoE, we do need to consider the impact of a computer room switch outage. In most cases, the cost of an outage is sufficient that having redundant computer room switches is highly desirable. This is especially true if the devices connected to the switches have the ability to be dual-attached, in which case the impact of a switch failure is minimized.

The table below provides a summary of the applicable LAN considerations and best practices for a 2-tier modular network design:

Best Practice                       Core / Aggregation Layer     Access Layer

Layer 3 Features / Functions
IP Routing                          Yes                          Optional
PIM BSR                             Yes                          No
PIM cRP                             Yes                          No
PIM DR                              Yes                          No
Layer 2 Features / Functions
IGMP                                Yes                          Yes (IGMP snooping)
Layer 2 Loop Prevention             Yes                          Yes
Interface Features / Functions
LAG / VSX / MC-LAG                  Yes                          Yes
UDLD (2)                            Yes                          Yes
QoS                                 Yes                          Yes
Other Features / Functions
Device Hardening                    Yes                          Yes
Instrumentation                     Yes                          Yes
Management                          Yes                          Yes
Bidirectional Forwarding
Detection (BFD)                     Potentially                  Potentially
Power over Ethernet                 No                           Yes

(2) UDLD should only be used on 1G links, as 10G natively includes these services / functions.
Figure 21 - LAN Considerations & Best Practices

The table below summarizes the device roles and provides general guidance on the number of devices recommended as part of the Mobile-First Reference Architecture.

Component / Role             Description                                Notes

Core / Aggregation Switch    Building Core / Aggregation Switch         1 x Required / 2 x Recommended
Access Layer Switches        Access Layer Switch                        Minimum of 2 per IDF Recommended
Computer Room Switch         Provides connectivity for Compute          Optional
                             Resources
Figure 22 - Small Building – LAN Components

WIRELESS LAN COMPONENTS


For small deployments, Aruba offers both controllerless and controller-based deployment options. A controllerless architecture is provided using Aruba Instant Access Points (APs), while a controller-based architecture is provided using Mobility Controllers and Campus APs. Both deployment options are valid for this reference design; however, this guide focuses specifically on a controller-based architecture.
The small building in this scenario includes various wireless components which are deployed in either the wireless module or the server room. To accommodate the Access Point (AP) and client counts for this scenario, a mobility master and a single cluster of mobility controllers is required. The number of cluster members is determined by the hardware or virtual mobility controller model that is selected (see platform suggestions). For redundancy, the mobility controller cluster consists of a minimum of two mobility controllers – each member providing adequate capacity and performance to operate the wireless network in the event of a single mobility controller failure.
The table below provides a summary of these components:

Component                     Description                       Notes

Aruba Mobility Master (MM)    Virtual Appliance                 1 x Required / 2 x Recommended
Aruba Mobility Controllers    Hardware or Virtual Appliances    2 x Minimum (Clustered)
Aruba Access Points           802.11ac Wave 2 Access Points     15 x Required
Aruba ClearPass               Virtual Appliance                 Recommended
Figure 23 - Small Building – Wireless LAN Components

While the number of required 802.11ac Wave 2 Access Points for this design is small, Aruba recommends implementing a Mobility Master (MM) to take advantage of specific features that are required to provide mission-critical wireless services when wireless is the primary access medium. The addition of a mobility master to the design provides centralized configuration and monitoring; supports features including clustering, AirMatch and Live Upgrades; and provides centralized application support (UCC and AppRF).

NOTE: While a controller-based solution can be deployed without a mobility master (MM), it is not a recommended best practice.

REDUNDANCY
Redundancy for a small building reference architecture is provided across all layers. The redundancy built into the 2-tier modular network design that establishes the foundation network determines the level of redundancy that is provided to the modules. Often the cost of an outage is the key driver in developing an approach / plan to provide network redundancy. As a first line of defense, most small networks use dual power supplies and often use a stack of switches to provide redundancy.
For this scenario the mobility master and mobility cluster members are deployed within a server room and connect directly to the core / aggregation switches. To provide full redundancy, two virtual mobility masters and one cluster of hardware or virtual mobility controllers are required:
• Aruba Mobility Master (MM):
o Two virtual MMs
o L2 master redundancy (Active / Standby)
• Hardware Mobility Controllers (MCs):
o Single cluster of hardware MCs
o Minimum of two cluster members
• Virtual Mobility Controllers (MCs):
o Single cluster of virtual MCs
o Minimum of two cluster members
o Separate virtual server hosts
• Access Points:
o AP Master pointing to the cluster’s VRRP VIP (see the sketch below)
o Fast failover using cluster built-in redundancy
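For illustration, the following Mobility Master sketch shows a two-member cluster profile carrying a VRRP VIP that APs can use as their master address; all IPs, the VLAN and the profile name are illustrative, and the exact parameter set varies by ArubaOS 8 release:

    ! Mobility Master – cluster group profile with a VRRP VIP for AP discovery
    lc-cluster group-profile BLDG1-CLUSTER
        controller 10.1.100.11 priority 128 vrrp-ip 10.1.100.10 vrrp-vlan 100
        controller 10.1.100.12 priority 100 vrrp-ip 10.1.100.10 vrrp-vlan 100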

Figures 24 and 25 provide detailed examples of how the virtual and hardware cluster members are connected to the core / aggregation layer. Hardware mobility controllers are directly connected to the core / aggregation layer switches via two or more 1 Gigabit Ethernet ports configured in a LAG, with the LAG port members distributed between core / aggregation layer stack members.

Figure 24 - Hardware Mobility Controller Cluster – Core / Aggregation Layer

Virtual Mobility Controllers are logically connected to a virtual switch within the virtual server host. The virtual server host is directly connected to the core / aggregation switches via two or more 1 Gigabit or 10 Gigabit Ethernet ports implementing 802.3ad link aggregation or a proprietary load-balancing / failover mechanism, with the ports distributed between core / aggregation layer stack members.

Figure 25 - Virtual Mobility Controller Cluster – Core / Aggregation Layer

The mobility master(s) are deployed in a similar manner to the cluster of virtual mobility controllers, with each virtual server host supporting one virtual mobility master operating in an active / standby mode. While a small building can elect to implement a single mobility master, no additional licenses are required to implement a standby. The only overhead is the additional CPU, memory and storage utilization on the virtual server host.

NOTE: Redundancy for virtual servers is hypervisor dependent. To protect against link, path and node failures, the hypervisor may implement 802.3ad link aggregation or a proprietary load-balancing / failover mechanism.

VIRTUAL MOBILITY CONTROLLERS


For small building deployments you can optionally elect to deploy virtual mobility controllers. If virtual mobility controllers are deployed, the virtual server infrastructure must be scaled accordingly to provide the necessary CPU and memory resources to each virtual mobility controller in the cluster:
1. Each virtual mobility controller in the cluster should be deployed on a different virtual server host. For this design two virtual server hosts are required.
2. Uplinks between the virtual server hosts and the core / aggregation layer must be scaled accordingly to support the wireless and dynamically segmented clients’ throughput requirements. The throughput of the cluster will be limited by the Ethernet PHYs installed on the virtual server hosts.
Redundancy between the virtual server host and its peer switches can use standard 802.3ad link aggregation or a proprietary hypervisor-specific load-balancing / failover mechanism. Each hypervisor supports specific load-balancing and failover mechanisms such as active / standby, round-robin load-balancing or link aggregation. You should select the appropriate redundancy mechanism to support your specific implementation and requirements.
SCALABILITY
For this scenario there are no specific LAN scalability considerations that need to be made. The core / aggregation and access layers can easily accommodate the Access Point (AP) and client counts without modification or deviation from the design. A wireless aggregation layer can be added in the future as additional APs and clients are added to the network.
Wireless module scaling is also not a concern, as the mobility masters can be expanded and additional cluster members added over time to accommodate additional APs, clients and switching capacity as the network grows.
For this small building design Aruba recommends implementing the MM-VA-50 mobility master and a cluster of two hardware or
virtual mobility controllers (see platform suggestions). The mobility master selected for this design can scale to support 50 x
APs, 500 x clients and 5 x mobility controllers.
VIRTUAL LANS
For this design the collapsed core / aggregation layer provides layer 2 transport (VLAN extension via 802.1q trunking) and
terminates all the VLANs from the access layer and wireless module with layer 3 interfaces. Aruba recommends using tagged
VLANs throughout the network.
The wireless module consists of one or more client VLANs depending on the security and policy model that is implemented. For
a single VLAN design, all wireless and dynamically segmented clients are assigned to a common VLAN id with roles and
policies determining the level of access each client is provided on the network. The single VLAN is extended from the core /
aggregation layer switches to each physical or virtual mobility controller cluster member. Additional VLANs can be added and
extended as required (figure 10). For example your mobile first design may require separate VLANs to be assigned to wireless
and dynamically segmented clients for policy compliance.
At a minimum two VLANs are required between the core / aggregation layer and each mobility controller cluster member. One
VLAN is dedicated for management and Mobility Manager (MM) communications while the second VLAN is used for client
traffic. All VLANs are common between cluster members to permit seamless mobility. The core/aggregation layer switches are
configured with layer 3 interfaces and addressing to operate as the default gateway for each VLAN. First-hop router redundancy
is natively provided by the Aruba stacking architecture.
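A minimal ArubaOS-Switch sketch of these two VLANs on the core / aggregation stack follows, assuming trk1 and trk2 are the LACP trunks toward the two cluster members; all VLAN IDs, names and addresses are illustrative:

    ; Core / aggregation stack – management and client VLANs toward the cluster
    vlan 100
       name "MC-MGMT"
       tagged Trk1,Trk2
       ip address 10.1.100.1 255.255.255.0
    vlan 200
       name "WLAN-CLIENTS"
       tagged Trk1,Trk2
       ip address 10.1.200.1 255.255.255.0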

Figure 26 - Hardware Mobility Controller Cluster – VLANs

Figure 27 - Virtual Mobility Controller Cluster – VLANs

As a best practice Aruba recommends implementing unique VLAN IDs within the wireless module. This allows an aggregation layer to be introduced in the future without disrupting the other layers within the network. It also keeps layer 2 domains smaller, which is key to preventing layer 2 instability caused by operational changes, loops, or misconfigurations in other layers or modules from impacting the wireless module.

SCALING & PLATFORM SUGGESTIONS
The table below provides platform suggestions for the small building scenario, which is to support 15 x Access Points and 300 x concurrent clients. Where appropriate a good, better and best suggestion is made based on features, performance and scaling. These are suggestions based on the described scenario and may be substituted at your own discretion.

                                                  Good          Better        Best

Switching   Core / Aggregation Layer              2930          3810          3810
            Access Layer                          2930          2930          2930

Wireless    Mobility Masters                      MM-VA-50
            Virtual Mobility Controller Cluster   MC-VA-50
            Mobility Controller Cluster           7024          7030
            802.11ac Wave 2 Access Points         300 Series    310 Series    330/340 Series
Figure 28 - Small Building Platform Suggestions

MEDIUM OFFICE
SCENARIO
The following reference design is for a medium office consisting of six floors. The building includes a data center which connects via single-mode fiber to a main distribution frame (MDF) on each floor. Each floor includes two intermediate distribution frames (IDFs) which connect to the MDF via multi-mode fiber. The building supports up to 1,500 employees and requires 120 x 802.11ac Wave 2 Access Points to provide full 2.4GHz and 5GHz coverage.

Building Characteristics:
• 6 Floors / 150,000 sq. ft. Total Size
• 1,500 x Employees / 3,000 x Concurrent IPv4 Clients
• 120 x 802.11ac Wave 2 Access Points
• 1 x Computer Room
• 1 x MDF per floor (6 total)
• 2 x IDFs per floor (12 total)

As this building implements a structured wiring design using MDFs and IDFs, an aggregation layer to connect the access layer is required. This building will implement a 3-tier modular network design where the access layer switches connect via aggregation layer switches in each MDF, which connect directly to the core (Figure 29). For scaling, aggregation and fault domain isolation, this modular network design also includes an additional aggregation layer for the computer room.
The following is a summary of the modular network architecture and design:
LAN Core:
• A cluster of switches with fiber ports:
o SFP/SFP+/QSFP+ (Aggregation Layer Interconnects)
o SFP/SFP+ (Module Connectivity)
• IP routing to Aggregation Layer Devices and Modules
LAN Aggregation:
• A stack of two switches with fiber ports per MDF:
o SFP/SFP+/QSFP+ (Core and Access Layer Interconnects)
• IP routing to Core Layer Devices
• Layer 2 Link Aggregation to Access Layer Devices
LAN Access:
• A stack of two or more switches per MDF and IDF:
o SFP/SFP+ (Aggregation Layer Interconnects)
o 10/100/1000BASE-T with PoE+ (Edge Ports)
• Layer 2 Link Aggregation to Aggregation Layer Devices
• 802.11ac Wave 2 Access Points

Figure 29 - Medium Office – 3-Tier Modular Network Design

NOTE: The number of Access Points required for this hypothetical scenario was calculated based on the building’s square footage and the wireless density / capacity requirements. For this scenario it was determined that 120 x Access Points would be required, based on each Access Point providing 1,200 sq. ft. of coverage. Each Access Point in this scenario supports 30 clients.
The actual number of Access Points and their placement for a real deployment should be determined using a site survey factoring each individual coverage area’s density requirements.

CONSIDERATIONS & BEST PRACTICES


This section provides a list of key design and implementation considerations for this reference design.

LOCAL AREA NETWORK
The medium building in this scenario has a three-tier network providing dedicated access, aggregation, and core layers. The aggregation layer consists of two pairs of switches, with each pair providing redundant connectivity for connections to both the core and access layers. The core layer will consist of a pair of devices to provide redundancy and eliminate single points of failure. Product selection will determine the options available to implement HA configurations. The aggregation switches will connect to the access switches with layer 2 links and will provide any and all routing between VLANs. It is likely that there will be no more than 4 VLANs (one for device management, one for users, one for building management, and one for security cameras). The aggregation layer switches will connect to the core switches via layer 3 links. The core switch can provide connectivity to an optional switch stack for any local compute resources (computer room stack). The “size” of the medium building may not warrant having any in-building local compute or a dedicated computer room switch.
The recommended switch redundancy design can be achieved using either ArubaOS-CX or ArubaOS-Switch devices. It is likely that a medium building will use ArubaOS-Switch devices, with redundancy implemented via backplane stacking or the Virtual Switching Framework (VSF).
The recommended access switch design is to use one or more switches per IDF in a stacking configuration. The switch stacks need to provide enough power for access points and other PoE devices, as well as enough Ethernet interfaces for wired systems. Stacking is recommended to build fault-tolerant designs so that if one switch is off-line, there is still connectivity to access points and the building core / aggregation switches. Connectivity to the aggregation layer would be provided by two 10G ports (using ports from different switches in the stack) configured in an aggregated group (“Trunk” or VSX / MC-LAG).
The optional computer room switch has similar redundancy considerations as the aggregation switch stack. While we don’t need to plan for PoE, we do need to consider the impact of a computer room switch outage. In most cases, the cost of an outage is sufficient that having redundant computer room switches is highly desirable. This is especially true if the devices connected to the switches have the ability to be dual-attached, in which case the impact of a switch failure is minimized.
The table below provides a summary of the applicable LAN considerations and best practices for a 3-tier modular network design:

Best Practice                       Core Layer      Aggregation Layer     Access Layer

Layer 3 Features / Functions
IP Routing                          Yes             Yes                   Optional
PIM BSR                             Yes             No                    No
PIM DR                              N/A             Yes                   No
Layer 2 Features / Functions
IGMP                                N/A             Yes                   No
Layer 2 Loop Prevention             N/A             Yes                   Yes
Interface Features / Functions
LAG / MC-LAG                        Yes             Yes                   Yes
UDLD (3)                            Yes             Yes                   Yes
QoS                                 Yes             Yes                   Yes
Other Features / Functions
Device Hardening                    Yes             Yes                   Yes
Instrumentation                     Yes             Yes                   Yes
Management                          Yes             Yes                   Yes
Power over Ethernet                 No              No                    Yes
Bidirectional Forwarding
Detection (BFD)                     Potentially     No                    No

(3) UDLD is applicable to 1Gb links.
Figure 30 - LAN Considerations & Best Practices

WIRELESS LAN COMPONENTS


The medium building in this scenario includes various wireless components which are deployed in either the wireless module or the server room. To accommodate the Access Point (AP) and client counts for this scenario, a mobility master and a single cluster of mobility controllers is required. The number of cluster members is determined by the hardware or virtual mobility controller model that is selected (see platform suggestions). For redundancy, the mobility controller cluster consists of a minimum of two mobility controllers – each member providing adequate capacity and performance to operate the wireless network in the event of a single mobility controller failure.
The table below provides a summary of these components:

Component                     Description                       Notes

Aruba Mobility Master (MM)    Virtual Appliance                 1 x Required / 2 x Recommended
Aruba Mobility Controllers    Hardware or Virtual Appliances    2 x Minimum (Clustered)
Aruba Access Points           802.11ac Wave 2 Access Points     120 x Required
Aruba ClearPass               Virtual Appliance                 Recommended
Figure 31 - Medium Building – Wireless LAN Components

REDUNDANCY
Redundancy for a medium building reference architecture is provided across all layers. The redundancy built into the 3-tier modular network design that establishes the foundation network determines the level of redundancy that is provided to the modules. Aruba recommends using NVF functions (stacking or MC-LAG / VSX) to provide network redundancy, as well as using redundant links and power supplies to maximize network availability and resiliency.
For this scenario the mobility master and mobility cluster members are deployed within a computer room and connect directly to the core or computer room aggregation switches. To provide full redundancy, two virtual mobility masters and one cluster of hardware or virtual mobility controllers are required:
• Aruba Mobility Master (MM):
o Two virtual MMs
o L2 master redundancy (Active / Standby)
• Hardware Mobility Controllers (MCs):
o Single cluster of hardware MCs
o Minimum of two cluster members
• Virtual Mobility Controllers (MCs):
o Single cluster of virtual MCs
o Minimum of two cluster members
o Separate virtual server hosts
• Access Points:
o AP Master pointing to the cluster’s VRRP VIP
o Fast failover using cluster built-in redundancy
The figures below provide detailed examples of how the virtual and hardware cluster members are connected to their respective layers. Hardware mobility controllers are directly connected to the core layer switches via two or more 1 Gigabit or 10 Gigabit Ethernet ports configured in a LAG, with the LAG port members distributed between redundant core / aggregation switches.

Figure 32 - Hardware Mobility Controller Cluster – Core Layer

Virtual Mobility Controllers are logically connected to a virtual switch within the virtual server host. The virtual server host is directly connected to the computer room aggregation switches via two or more 1 Gigabit or 10 Gigabit Ethernet ports implementing 802.3ad link aggregation or a proprietary load-balancing / failover mechanism, with the ports distributed between redundant computer room aggregation switches.

Figure 33 - Virtual Mobility Controller Cluster – Computer Room Aggregation Layer

The mobility master(s) are deployed in a similar manner to the cluster of virtual mobility controllers, with each virtual server host supporting one virtual mobility master operating in an active / standby mode.

NOTE: Redundancy for virtual servers is hypervisor dependent. To protect against link, path and node failures, the hypervisor may implement 802.3ad link aggregation or a proprietary load-balancing / failover mechanism.

VIRTUAL MOBILITY CONTROLLERS


For medium building deployments you can optionally elect to deploy virtual mobility controllers. If virtual mobility controllers are deployed, the virtual server infrastructure must be scaled accordingly to provide the necessary CPU and memory resources to each virtual mobility controller in the cluster:
1. Each virtual mobility controller in the cluster should be deployed on a different virtual server host. For this design two virtual server hosts are required.
2. Uplinks between the virtual server hosts and the computer room aggregation layer must be scaled accordingly to support the wireless and dynamically segmented clients’ throughput requirements. The throughput of the cluster will be limited by the Ethernet PHYs installed on the virtual server hosts.
Redundancy between the virtual server host and its peer switches can use standard 802.3ad link aggregation or a proprietary hypervisor-specific load-balancing / failover mechanism. Each hypervisor supports specific load-balancing and failover mechanisms such as active / standby, round-robin load-balancing or link aggregation. You should select the appropriate redundancy mechanism to support your specific implementation and requirements.

SCALABILITY
For this scenario there are no specific LAN scalability considerations that need to be made. The core, aggregation and access layers can easily accommodate the Access Point (AP) and client counts without modification or deviation from the design. A wireless aggregation layer can be added in the future as additional APs and clients are added to the network.
Wireless module scaling is also not a concern, as the mobility masters can be expanded and additional cluster members added over time to accommodate additional APs, clients and switching capacity as the network grows.
For this medium building design Aruba recommends implementing the MM-VA-500 mobility master and a cluster of two or more
hardware or virtual mobility controllers (see platform suggestions). The mobility master selected for this design can scale to
support 500 x APs, 5,000 x clients and 50 x mobility controllers.
VIRTUAL LANS
For this design the core or computer room aggregation layer terminates all the VLANs from the mobility controllers. The VLANs are extended from the mobility controllers to the core or computer room aggregation layer using 802.1Q trunking. Aruba recommends using tagged VLANs wherever possible to provide additional loop prevention.
The wireless module consists of one or more user VLANs depending on the security and policy model that is implemented. For a single-VLAN design, all wireless and dynamically segmented clients are assigned to a common VLAN ID, with roles and policies determining the level of access each user is provided on the network. The single VLAN is extended from the core or computer room aggregation layer switches to each physical or virtual mobility controller cluster member. Additional VLANs can be added and extended as required (Figures 34 and 35). For example, your mobile first design may require separate VLANs to be assigned to wireless and dynamically segmented clients for policy compliance.
At a minimum two VLANs are required between the core or computer room aggregation layer and each mobility controller cluster member. One VLAN is dedicated for management and Mobility Master (MM) communications while the second VLAN is mapped to clients. All VLANs are common between cluster members to permit seamless mobility. The core or computer room aggregation layer switches have VLAN-based IP interfaces defined and operate as the default gateway for each VLAN. First-hop router redundancy is natively provided by the Aruba stacking architecture.

Figure 34 - Hardware Mobility Controller Cluster – VLANs

Figure 35 - Virtual Mobility Controller Cluster – VLANs

As a best practice Aruba recommends implementing unique VLAN IDs within the wireless module. This allows an aggregation layer to be introduced in the future without disrupting the other layers within the network. It also keeps layer 2 domains smaller, which is key to preventing layer 2 instability caused by operational changes, loops, or misconfigurations in other layers or modules from impacting the wireless module.

SCALING & PLATFORM SUGGESTIONS


The table below provides platform suggestions for the medium building scenario, which is to support 120 x Access Points and 3,000 x concurrent clients. Where appropriate a good, better and best suggestion is made based on features, performance and scaling. These are suggestions based on the described scenario and may be substituted at your own discretion.

                                                  Good          Better        Best

Switching   Core Layer                            3810          5400R         8320
            Aggregation Layer                     3810          5400R         8320
            Access Layer                          2930          3810          5400R
            Wireless Module                       3810          5400R         8320

Wireless    Mobility Masters                      MM-VA-500
            Virtual Mobility Controller Cluster   MC-VA-250
            Mobility Controller Cluster           7205          7210
            802.11ac Wave 2 Access Points         300 Series    310 Series    330/340 Series

Figure 36 - Medium Building Platform Suggestions

LARGE OFFICE
SCENARIO
The following reference design is for a large office consisting of 12 floors. The building includes a data center which connects via single-mode fiber to a main distribution frame (MDF) on each floor. Each floor includes two intermediate distribution frames (IDFs) which connect to the MDF via multi-mode fiber. The building supports up to 3,000 employees and requires 300 x 802.11ac Wave 2 Access Points to provide full 2.4GHz and 5GHz coverage.

Building Characteristics:
• 12 Floors / 360,000 sq. ft. Total Size
• 3,000 x Employees / 6,000 x Concurrent IPv4 Clients
• 300 x 802.11ac Wave 2 Access Points
• 1 x Computer Room
• 1 x MDF per floor (12 total)
• 2 x IDFs per floor (24 total)

As this building implements a structured wiring design using MDFs and IDFs, an aggregation layer to connect the access layer is required. This building will implement a 3-tier modular network design where the access layer switches connect via aggregation layer switches in each MDF, which connect directly to the core (Figure 37). For scaling, aggregation and fault domain isolation, this modular network design also includes additional aggregation layers for the computer room and wireless modules.
The following is a summary of the modular network architecture and design:
LAN Core:
• A pair of redundant switches with a mix of 10G and 40G fiber ports:
o SFP/SFP+/QSFP+ (Aggregation Layer Interconnects)
• IP routing to Aggregation Layer Devices and Modules
• Optional NVF Functions (MC-LAG / VSX)
LAN Aggregation:
• A stack of two switches with fiber ports per MDF:
o SFP/SFP+/QSFP+ (Core and Access Layer Interconnects)
• NVF Functions (MC-LAG / VSX)
• IP routing to Core Layer Devices
LAN Access:
• A stack of two or more switches per MDF and IDF:
o SFP/SFP+ (Aggregation Layer Interconnects)
o 10/100/1000BASE-T with PoE+ (Edge Ports)
• Layer 2 Link Aggregation to Aggregation Layer Devices
• 802.11ac Wave 2 Access Points

Figure 37 - Large Office – 3-Tier Modular Network Design

NOTE: The number of Access Points required for this hypothetical scenario was calculated based on the building’s square footage and the wireless density / capacity requirements. For this scenario it was determined that 300 x Access Points would be required, based on each Access Point providing 1,200 sq. ft. of coverage. Each Access Point in this scenario supports 30 clients.
The actual number of Access Points and their placement for a real deployment should be determined using a site survey factoring each individual coverage area’s density requirements.

CONSIDERATIONS & BEST PRACTICES
This section provides a list of key design and implementation considerations for this reference design.
LOCAL AREA NETWORK
The large building in this scenario has a three-tier network providing dedicated access, aggregation, and core layers. The wireless network also has a dedicated service block providing connectivity for the mobility controller cluster. The aggregation layer consists of two pairs of switches, with each pair providing redundant connectivity for connections to both the core and access layers. The core layer will consist of a pair of devices to provide redundancy and eliminate single points of failure. In a large building, it is very likely that an ArubaOS-CX based switch will be used for both core and aggregation devices. The aggregation switches will connect to the access switches with layer 2 links and will provide any and all routing between VLANs. It is likely that there will be no more than 4 VLANs (one for device management, one for users, one for building management, and one for security cameras). The aggregation layer switches will connect to the core switches via layer 3 links. The core switch will also provide connectivity to other service blocks, which likely include data center, Internet edge, and WAN edge service blocks.
With a core layer that is entirely layer 3 connected, the network can leverage equal-cost multipath (ECMP) routing to provide connectivity between core devices as well as to aggregation and other service blocks. Eliminating layer 2 protocols from the core configuration ensures that the core is focused on high-speed layer 3 packet forwarding. A sketch of a routed core-to-aggregation link follows.
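A minimal AOS-CX sketch of one such routed link follows, using a /31 point-to-point subnet; the port, addressing and OSPF process are illustrative:

    ! AOS-CX – routed point-to-point link from core to an aggregation pair
    interface 1/1/49
        no shutdown
        ip address 10.0.1.0/31
        ip ospf 1 area 0.0.0.0
        ip ospf network point-to-point

With equal-cost routes learned over two such links, ECMP spreads traffic across both core switches without any layer 2 state in the core.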
The aggregation layer will provide high availability using VSX. VSX allows for the elimination of spanning tree at the aggregation layer and allows for active/active forwarding to and from access layer devices. VSX Config Sync will also be leveraged to ensure that device pairs have identical configuration elements such as access-lists and VLANs.
The recommended access switch design is to use one or more switches per IDF in a stacking configuration. The switch stacks need to provide enough power for access points and other PoE devices, as well as enough Ethernet interfaces for wired systems. Stacking is recommended to build fault-tolerant designs so that if one switch is off-line, there is still connectivity to access points and the building core / aggregation switches. Connectivity to the aggregation layer would be provided by two 10G ports (using ports from different switches in the stack) configured in an aggregated group (“Trunk” or VSX / MC-LAG).
Large facilities may have an on-site data center. Data center design is beyond the scope of this document; however, connectivity from the Campus network to the data center is included in this design. Fundamentally, connectivity to the data center has very similar requirements to connectivity to other service blocks, including the WAN edge or the Internet edge.
The table below provides a summary of the applicable LAN considerations and best practices for a 3-tier modular network design:

Best Practice                       Core Layer      Aggregation Layer     Access Layer

Layer 3 Features / Functions
IP Routing                          Yes             Yes                   Optional
PIM BSR                             Yes             No                    No
PIM DR                              N/A             Yes                   No
Layer 2 Features / Functions
IGMP                                N/A             Yes                   No
Layer 2 Loop Prevention             N/A             Yes                   Yes
Interface Features / Functions
LAG / MC-LAG                        Yes             Yes                   Yes
UDLD                                N/A             Unlikely              Unlikely
QoS                                 Yes             Yes                   Yes
Other Features / Functions
Device Hardening                    Yes             Yes                   Yes
Instrumentation                     Yes             Yes                   Yes
Management                          Yes             Yes                   Yes
Power over Ethernet                 No              No                    Yes
Bidirectional Forwarding
Detection (BFD)                     Potentially     No                    No
Figure 38 - LAN Considerations & Best Practices

WIRELESS LAN COMPONENTS


The large building in this scenario includes various wireless components which are deployed in either the wireless module or the server room. To accommodate the Access Point (AP) and client counts for this scenario, a mobility master and a single cluster of mobility controllers is required. The number of cluster members is determined by the hardware or virtual mobility controller model that is selected (see platform suggestions). For redundancy, the mobility controller cluster consists of a minimum of two mobility controllers – each member providing adequate capacity and performance to operate the wireless network in the event of a single mobility controller failure.
The table below provides a summary of these components:

Component                     Description                       Notes

Aruba Mobility Master (MM)    Hardware or Virtual Appliances    2 x Required
Aruba Mobility Controllers    Hardware or Virtual Appliances    2 x Minimum (Clustered)
Aruba Access Points           802.11ac Wave 2 Access Points     300 x Required
Aruba AirWave                 Hardware or Virtual Appliance     Recommended
Aruba ClearPass               Hardware or Virtual Appliance     Recommended
Figure 39 - Large Building – Wireless LAN Components

REDUNDANCY
Redundancy for a large building reference architecture is provided across all layers. The redundancy built into the 3-tier modular network design that establishes the foundation network determines the level of redundancy that is provided to the modules. Aruba recommends using NVF functions (stacking or MC-LAG / VSX) to provide network redundancy, as well as using redundant links and power supplies to maximize network availability and resiliency. The Aruba 8400 provides the maximum redundancy of any Aruba switch and is recommended for use in the core, aggregation, and wireless aggregation layers.
For this scenario the mobility master and mobility cluster members are deployed within a computer room and connect directly to the wireless aggregation or computer room aggregation switches. To provide full redundancy, two hardware or virtual mobility masters and one cluster of hardware or virtual mobility controllers are required:
• Aruba Mobility Master (MM):
o Two hardware or virtual MMs
o L2 master redundancy (Active / Standby)
• Hardware Mobility Controllers (MCs):
o Single cluster of hardware MCs
o Minimum of two cluster members
• Virtual Mobility Controllers (MCs):
o Single cluster of virtual MCs
o Minimum of two cluster members
o Separate virtual server hosts
• Access Points:
o AP Master pointing to the cluster’s VRRP VIP
o Fast failover using cluster built-in redundancy
Figures 40 and 41 provide detailed examples of how the virtual and hardware cluster members are connected to their respective layers. Hardware mobility controllers are directly connected to the wireless aggregation switches via two or more 10 Gigabit Ethernet ports configured in a LAG, with the LAG port members distributed between redundant wireless aggregation switches.

Figure 40 - Hardware Mobility Controller Cluster – Wireless Aggregation Layer

Virtual Mobility Controllers are logically connected to a virtual switch within the virtual server host. The virtual server host is directly connected to the computer room aggregation switches via two or more 10 Gigabit Ethernet ports implementing 802.3ad link aggregation or a proprietary load-balancing / failover mechanism, with the ports distributed between redundant computer room aggregation switches.

Figure 41 - Virtual Mobility Controller Cluster – Computer Room Aggregation Layer

The mobility master(s) are deployed in a similar manner to the cluster of virtual mobility controllers, with each virtual server host supporting one virtual mobility master operating in an active / standby mode.

NOTE: Redundancy for virtual servers is hypervisor dependent. To protect against link, path and node failures, the hypervisor may implement 802.3ad link aggregation or a proprietary load-balancing / failover mechanism.

VIRTUAL MOBILITY CONTROLLERS


For large building deployments you can optionally elect to deploy virtual mobility controllers. If virtual mobility controllers are
deployed, the virtual server infrastructure must be scaled accordingly to provide the necessary CPU and memory resources to
each virtual mobility controller in the cluster:
1. Each virtual mobility controller in the cluster should be deployed across different virtual server hosts. For this design
two virtual server hosts are required.
2. Uplinks between the virtual server host and the computer room aggregation layer must be scaled accordingly to
support the wireless and dynamically segmented client throughput requirements. The throughput of the cluster will be
limited by the Ethernet PHYs installed on the virtual server host.
Redundancy between the virtual server host and its peer switches can use standard 802.3ad link aggregation or a proprietary
hypervisor specific load-balancing / failover mechanism. Each hypervisor supports specific load-balancing and failover
mechanisms such as active / standby, round-robin load-balancing or link aggregation. You should select the appropriate
redundancy mechanism to support your specific implementation and requirements.
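
Where virtual mobility controllers are used, the uplink count on each virtual server host can be sanity-checked with a quick calculation. The Python sketch below is illustrative only; the per-client throughput figure is an assumption, not Aruba sizing guidance.

    # Estimate 10GbE uplinks required per virtual server host (planning sketch).
    # Client count matches this scenario; the per-client throughput figure is
    # an assumption for illustration, not Aruba sizing guidance.
    import math

    CLIENTS = 6000                # concurrent clients terminated on the cluster
    AVG_MBPS_PER_CLIENT = 2.0     # assumed average sustained throughput
    UPLINK_GBPS = 10              # Ethernet PHY speed on the virtual server host
    HOSTS = 2                     # virtual server hosts, one cluster member each

    total_gbps = CLIENTS * AVG_MBPS_PER_CLIENT / 1000
    per_host_gbps = total_gbps / HOSTS
    # One extra uplink so a single link failure does not reduce capacity.
    uplinks = math.ceil(per_host_gbps / UPLINK_GBPS) + 1
    print(f"{total_gbps:.0f} Gbps aggregate -> {uplinks} x 10GbE uplinks per host")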

SCALABILITY
To accommodate the requirement to support 6,000 x wireless IPv4 hosts on the network, a wireless aggregation layer is
included in the design. As a general best practice Aruba recommends a wireless aggregation layer once the IPv4+IPv6 host
count exceeds 4,094. The wireless aggregation layer is needed if hardware mobility controllers are deployed and is connected
directly to the core layer. If virtual mobility controllers are deployed – the computer room aggregation switches provide this
function.
Future scaling is not a concern as the mobility masters can be expanded and additional cluster members added over time to
accommodate additional APs, clients and switching capacity as the network grows. For this large building design, Aruba
recommends implementing the MM-HW-5K or MM-VA-5K mobility master and a cluster of two or more hardware or virtual
mobility controllers (see platform suggestions). The mobility master selected for this design can scale to support 5,000 x APs,
50,000 x clients and 500 x mobility controllers.
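
The scaling headroom above can be expressed as a quick validation. The Python sketch below compares this design's counts against the MM-VA-5K / MM-HW-5K limits quoted in this section; all figures are taken from this scenario.

    # Validate this large building design against the MM-VA-5K / MM-HW-5K
    # limits quoted above: 5,000 APs, 50,000 clients, 500 mobility controllers.
    MM_5K_LIMITS = {"aps": 5000, "clients": 50000, "controllers": 500}
    design = {"aps": 300, "clients": 6000, "controllers": 2}  # minimum cluster

    for resource, limit in MM_5K_LIMITS.items():
        used = design[resource]
        print(f"{resource}: {used}/{limit} ({used / limit:.1%} of MM capacity)")
    assert all(design[r] <= MM_5K_LIMITS[r] for r in MM_5K_LIMITS), "MM undersized"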
VIRTUAL LANS
For this design the wireless module aggregation layer terminates all the layer 2 VLANs from the mobility controllers. The VLANs
are extended from the mobility controllers to its respective aggregation layer switches using 802.1Q trunking. Aruba
recommends using tagged VLANs wherever possible to provide additional loop prevention.
The wireless module consists of one or more user VLANs depending on the security and policy model that is implemented. For
a single VLAN design, all wireless and dynamically segmented clients are assigned to a common VLAN id with roles and
policies determining the level of access each user is provided on the network. The single VLAN is extended from the respective
aggregation layer switches to each physical or virtual mobility controller cluster member. Additional VLANs can be added and
extended as required (figures 42 and 43). For example, your mobile first design may require separate VLANs to be assigned to
wireless and dynamically segmented clients for policy compliance.
At a minimum two VLANs are required between the respective aggregation layer and each mobility controller cluster member.
One VLAN is dedicated for management and Mobility Master (MM) communications while the second VLAN is mapped to
clients. All VLANs are common between cluster members to permit seamless mobility. The aggregation layer switches have
VLAN based IP interfaces defined and operate as the default gateway for each VLAN. First-hop router redundancy is natively
provided by the Aruba clustering or stacking architecture.
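
To make the minimum VLAN layout concrete, the Python sketch below prints a trunk plan for a two-member cluster. The VLAN IDs and member names are placeholders, not recommended values; only the two-VLAN minimum and the gateway placement come from this section.

    # Minimal trunk plan for the wireless module: every cluster member carries
    # the same tagged management and client VLANs, and the aggregation layer
    # owns the IP interfaces. VLAN IDs and names below are placeholders.
    vlans = {
        100: "management / Mobility Master control",
        200: "wireless and dynamically segmented clients",
    }
    cluster_members = ["MC-1", "MC-2"]  # hypothetical cluster member names

    for mc in cluster_members:
        tagged = ", ".join(str(vid) for vid in vlans)
        print(f"{mc}: 802.1Q trunk to aggregation, tagged VLANs {tagged}")
    for vid, purpose in vlans.items():
        print(f"VLAN {vid} ({purpose}): SVI / default gateway on aggregation")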

Figure 42 - Hardware Mobility Controller Cluster – VLANs

Figure 43 - Virtual Mobility Controller Cluster – VLANs

As a best practice Aruba recommends implementing unique VLAN ids within the wireless module. This allows for an
aggregation layer to be introduced in the future without disrupting the other layers within the network. It also keeps the
layer 2 domains small, which is key to preventing layer 2 instability caused by operational changes, loops, or mis-configurations
originating in other layers or modules of the network from impacting the wireless module.

SCALING & PLATFORM SUGGESTIONS
The table below provides platform suggestions for the large building scenario, which supports 300 x Access Points and 6,000 x
concurrent clients. Where appropriate a good, better and best suggestion is made based on feature, performance and scaling.
These are suggestions based on the described scenario and may be substituted at your own discretion.

                                           Good          Better        Best

Switching   Core Layer                     8320          8320          8400
            Aggregation Layer              8320          8320          8400
            Access Layer                   2930          3810          5400R
            Wireless Module                8320          8320          8400

Wireless    Mobility Masters               MM-VA-5K or MM-HW-5K
            Virtual Mobility Controller
            Cluster                        MC-VA-250
            Mobility Controller Cluster    7210          7220
            802.11ac Wave 2 Access Points  300 Series    310 Series    330/340 Series

Figure 44 - Large Building Platform Suggestions

CAMPUS
The following reference design is for a campus which consists of multiple buildings (of different sizes) and two datacenters. Each
building in the campus implements its own 2-tier or 3-tier modular network connecting to a campus backbone. The campus in
this scenario needs to support 64,000 x concurrent dual-stack clients and requires 6,000 x 802.11ac Wave 2 Access Points.
For a campus deployment, one key decision that needs to be made is where to place the mobility controller clusters. Due to the
high scaling requirements, a campus will generally require multiple clusters of mobility controllers which can either be
centralized in the datacenters or strategically distributed between the buildings. The clusters in both cases are managed by
hardware or virtual mobility masters deployed between the datacenters.
Both centralized and distributed mobility controller deployment models are valid for campus deployments, with each model
supporting different mobility needs. As seamless mobility can only be provided between Access Points (APs) managed by a
common cluster, the mobility requirements will influence the cluster deployment model that is selected.
An additional consideration for cluster placement is traffic flow. If the user applications are primarily hosted in the
datacenter, a centralized cluster is a good choice as the wireless and dynamically segmented client sessions are terminated
within the cluster. Placing the cluster closer to the applications optimizes the north/south traffic flows. If the primary applications
are distributed between buildings in the campus, a distributed mobility controller model may be a better choice to prevent the
unnecessary hairpinning of traffic across the core.
Centralized Clusters:
 Permits a larger mobility domain when ubiquitous indoor / outdoor coverage is required.

 Efficient when the primary applications are hosted in the cloud or datacenter.
Distributed Clusters:
 Permits smaller mobility domains such as within buildings or between co-located buildings.
 Efficient when the primary applications are distributed or workgroup based.
The next two sections provide reference architectures for both centralized and distributed cluster deployments.

SCENARIO 1 – CENTRALIZED CLUSTERS


The following reference design is for a campus such as a corporate headquarters with two datacenters implementing
centralized clusters. The campus LAN implements a high-speed layer 3 backbone that interconnects each building to both
datacenters. The campus needs to support 64,000 x concurrent dual-stack wireless clients across 6,000 x 802.11ac Wave 2
Access Points. Each host in this example is assigned a single global IPv6 address from a stateful DHCPv6 server. For this
deployment roaming is required between large groups of buildings. To permit roaming, the indoor / outdoor APs for the groups
of buildings with overlapping coverage will be assigned to the same mobility controller cluster (figure 45).

Campus Characteristics:
 6,000 x 802.11ac Wave 2 Access Points
 64,000 x Concurrent Dual-Stack Clients
 2 x Datacenters with Layer 2 Extension

Figure 45 - Campus Modular Network Design – Centralized Mobility Controller Clusters

WIRELESS LAN COMPONENTS


The campus in this scenario includes the mobility masters and clusters of mobility controllers which are distributed across two
datacenters. The number of mobility masters and mobility controller clusters you deploy to provide full redundancy will be
influenced by the datacenter design. The datacenters can either support Layer 2 VLAN extensions or be separated at layer 3:
 Layer 2 Extensions – VLANs and their associated broadcast domains are common between datacenters.
 Layer 3 Separation – VLANs and their associated broadcast domains are unique per datacenter.
When VLANs can be extended between the datacenters, the mobility masters and mobility controller cluster members can be
split between the datacenters, with each datacenter hosting 1 x mobility master and half of the mobility controllers. To accommodate
the Access Point (AP) and client counts for this scenario, two mobility masters and two clusters of mobility controllers are
required. For aggregation layer scaling and fault domain isolation, each cluster of mobility controllers is connected to separate
Aruba 8400 series aggregation layer switches, with each aggregation layer accommodating up to 32,000 IPv4 and 64,000 IPv6 host
addresses.

The table below provides a summary of these components:

Component Description Notes


Aruba Mobility Master (MM) Hardware or Virtual Appliances 2 x Required
Aruba Mobility Controllers Hardware or Virtual Appliances 2 x Clusters
Aruba Access Points 802.11ac Wave 2 Access Points 6,000 x Required
Aruba Airwave Hardware or Virtual Appliance Recommended
Aruba ClearPass Hardware or Virtual Appliance Recommended
Figure 46 - Wireless LAN Components – Layer 2 Extension

When datacenters are separated at layer 3, a different approach is required. To support the AP and client counts and maintain
full redundancy, an active / standby model is implemented where each datacenter hosts an equal quantity of mobility masters
and mobility controllers:
1. Mobility Masters – Two mobility masters are hosted per datacenter implementing layer 2 and layer 3 master
redundancy. Layer 2 master redundancy is provided between mobility masters within each datacenter while layer 3
master redundancy provides redundancy between datacenters.
2. Mobility Controller Clusters – Two clusters of mobility controllers are hosted per datacenter. The APs are configured
with a primary LMS and backup LMS to determine their primary and secondary cluster assignments. Fast failover is
provided within the primary cluster while a full bootstrap is required to failover between the primary and secondary
clusters.
For aggregation layer scaling and fault domain isolation, each cluster of mobility controllers is connected to separate Aruba
8400 series aggregation layer switches, each aggregation layer accommodating up to 64,000 IPv4 and 16,000 IPv6 host
addresses. As each datacenter is separated at layer 3, four wireless modules and wireless aggregation layers are required to
accommodate an individual datacenter failure.

The table below provides a summary of these components (Wireless LAN Components – Layer 3 Separation):

Component Description Notes


Aruba Mobility Master (MM) Hardware or Virtual Appliances 4 x Required (L3 Redundancy)
Aruba Mobility Controllers Hardware or Virtual Appliances 4 x Clusters (2 per Datacenter)
Aruba Access Points 802.11ac Wave 2 Access Points 6,000 x Required
Aruba Airwave Hardware or Virtual Appliance Recommended
Aruba ClearPass Hardware or Virtual Appliance Recommended

ROAMING DOMAINS
With an ArubaOS 8 architecture, seamless mobility is provided between Access Points (APs) managed by a common cluster.
Each wireless and dynamically segmented client is assigned an Active User Anchor Controller (A-UAC) and Standby User
Anchor Controller (S-UAC) cluster member to provide fast failover in the event of a cluster member failure or live upgrades.
To provide scaling for this design, two clusters of mobility controllers are required. As seamless roaming can only be provided
between APs managed by the same cluster, special consideration needs to be made to ensure that APs in groups of buildings
that require seamless roaming are managed by the same cluster. The following considerations apply:

1. APs in the same building must be managed by the same cluster. This ensures wireless client sessions are not
interrupted as the clients roam within the building.
2. Indoor and outdoor APs in co-located buildings with overlapping coverage must be managed by the same cluster. This
ensures client sessions are not interrupted as the clients roam within a building or between buildings.
APs in buildings that are geographically separated and do not have overlapping coverage can be distributed between clusters
as required, with attention paid to ensuring that AP and client capacity is distributed as evenly as possible (figure 3-16):
Figure 3-16. Roaming Domains
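
The even-distribution guidance above is essentially a balancing problem. The Python sketch below shows one hypothetical way to reason about it: each building (or co-located group with overlapping coverage) is indivisible and is assigned to the currently least-loaded cluster. Building names and AP counts are invented for illustration.

    # Greedy balancing of buildings across the two clusters. A building (or
    # co-located group with overlapping coverage) is indivisible; each is
    # assigned to the least-loaded cluster. Names and AP counts are invented.
    building_groups = {"HQ block": 900, "R&D block": 750, "East tower": 400,
                       "Annex": 300, "Warehouse": 150}
    clusters = {"cluster-1": 0, "cluster-2": 0}

    for group, aps in sorted(building_groups.items(), key=lambda kv: -kv[1]):
        target = min(clusters, key=clusters.get)  # least-loaded cluster so far
        clusters[target] += aps
        print(f"{group} ({aps} APs) -> {target}")
    print(clusters)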

NOTE: If the campus deployment supports both wireless and dynamically segmented clients, you may consider deploying
separate clusters for wireless and dynamically segmented clients.

REDUNDANCY
For this scenario the datacenters are located in separate buildings which are also connected to the campus backbone. The
datacenters are interconnected using high-speed links, ensuring there is adequate bandwidth capacity available to support the
applications and services that are hosted in each datacenter.
For a dual datacenter design, the mobility masters and mobility controller clusters are distributed between both datacenters.
The wireless components can be deployed using several strategies to achieve redundancy, depending on the datacenter
design:
 Layer 2 Extension – If VLANs are extended between datacenters, the mobility masters and the mobility cluster
members can be split between the datacenters, with each datacenter hosting 1 x mobility master and half of the cluster
members.
 Layer 3 Separation – The mobility masters and mobility cluster members are duplicated in each datacenter.
LAYER 2 EXTENSION
The layer 2 datacenter redundancy model is easy to understand as it operates in the same manner as a single datacenter
deployment model. Each datacenter hosts a mobility master and half of the mobility controllers of each cluster. The mobility
masters are configured for L2 redundancy, while Access Point (AP) and client load-balancing and fast failover are provided by each
cluster (figure 3-17):
 Aruba Mobility Master (MM):
o Two hardware or virtual MMs (one per datacenter)
o L2 master redundancy (Active / Standby)

 Hardware Mobility Controllers (MCs):
o Two clusters of hardware MCs
o Cluster members equally distributed between datacenters
 Access Points
o AP Master pointing to the cluster's VRRP VIP
o Fast failover using cluster built-in redundancy
o Per building AP cluster assignment based on roaming requirements

NOTE: By default the APs and clients will be load-balanced and distributed between cluster members residing in each
datacenter. With this design it is possible that APs and clients within a building will be assigned to cluster members
in different datacenters.

Figure 3-17. Redundancy – Layer 2 Extension

LAYER 3 SEPARATED
The layer 3 datacenter redundancy model differs from the layer 2 model by duplicating the mobility masters and clusters within each
datacenter: each datacenter hosts two mobility masters and two clusters of mobility controllers. The mobility masters
are configured for L2 redundancy within the datacenter and L3 redundancy between datacenters. The Access Points (APs) within
each building are assigned a primary and backup cluster using the primary and backup LMS. AP and client fast failover is
provided within each cluster, while a full bootstrap is required to provide failover between clusters (figure 3-18):

 Aruba Mobility Master (MM):
o Four hardware or virtual MMs (two per datacenter)
o L2 master redundancy (Active / Standby)
o L3 master redundancy (Primary / Secondary)
 Hardware Mobility Controllers (MCs):
o Four clusters of hardware MCs (Primary / Secondary)
o Cluster members duplicated between datacenters
o Primary clusters alternating between datacenters
 Access Points
o Primary and Backup LMS using the Primary and Secondary cluster VRRP VIP
o Fast failover using cluster built-in redundancy
o Bootstrap failover between Primary and Secondary clusters
o Per building AP cluster assignment based on roaming requirements
Figure 3-18. Redundancy – Layer 3 Separation

SCALABILITY
Scaling is the primary concern for this campus scenario, which is complicated by the inclusion of a secondary datacenter and
the datacenter deployment model. To accommodate the scaling and redundancy requirements for this campus scenario,
considerations were made for both the datacenter aggregation layer and the mobility controller cluster design.
DATACENTER AGGREGATION LAYER
Both datacenter deployment models require clusters of mobility controllers that are connected to their respective datacenter
aggregation layers. To accommodate 64,000 x concurrent dual-stack hosts, two clusters of mobility controllers are required,
each supporting up to 32,000 x dual-stack hosts. Each IPv6 host in this example is assigned a single global IPv6 address.
Clients using SLAAC are likely to obtain and use additional IPv6 addresses, which will reduce the number of supported devices.
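
The impact of SLAAC multi-addressing on capacity can be illustrated with a short calculation. In the Python sketch below, only the 64,000 address table size comes from this design; the SLAAC addresses-per-client figure is an assumption for illustration.

    # How SLAAC multi-addressing erodes per-aggregation-layer capacity. Only
    # the 64,000 address table size comes from this design; the SLAAC
    # addresses-per-client figure is an assumption for illustration.
    IPV6_HOST_TABLE = 64000
    modes = {"stateful DHCPv6": 1,   # single global address per client
             "SLAAC": 3}             # e.g. global + temporary/privacy addresses

    for mode, addrs_per_client in modes.items():
        clients = IPV6_HOST_TABLE // addrs_per_client
        print(f"{mode}: up to {clients:,} clients per aggregation layer")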
Due to the high number of clients that must be supported, each cluster is connected to a separate Aruba 8400 series wireless
aggregation layer. This recommendation applies to both layer 2 extended and layer 3 separated datacenter designs:
 Layer 2 Extension – Requires two datacenter aggregation layers which are split between datacenters. Each wireless
aggregation layer supporting one cluster of mobility controllers.
 Layer 3 Separated – Requires two datacenter aggregation layers per datacenter. Each wireless aggregation layer
connecting a primary or secondary cluster of mobility controllers.
This datacenter aggregation layer design ensures that a single aggregation layer never accommodates more than 64,000 x
IPv4 or IPv6 host addresses during normal operation as well as during a datacenter failure.

Figure 47 - Datacenter Wireless Aggregation Layer Scaling

MOBILITY CONTROLLER CLUSTERS


Scaling for each mobility controller cluster is provided by selecting the appropriate mobility controller model and determining the
number of members per cluster. The throughput capabilities of the mobility controller are also a factor in this decision as each
mobility controller model supports different switching capacities and PHYs. For this campus scenario the 7200 series mobility
controllers are recommended, with each cluster implementing four mobility controllers (see platform suggestions).
While virtual mobility controllers can be selected for a campus deployment, for throughput and performance it is
recommended that hardware mobility controllers be deployed. As the hardware is dedicated, a specific level of
performance can be guaranteed.
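
When sizing a cluster it is worth confirming that the design can absorb a member failure. The Python sketch below performs a simple N-1 check; the per-controller client capacity is an assumed placeholder, and real limits should be taken from the 7200 series datasheets.

    # Simple N-1 check: the cluster must still carry its load with one member
    # down. The per-controller client capacity below is an assumed placeholder;
    # take real limits from the 7200 series datasheets.
    MEMBERS = 4                      # controllers per cluster in this design
    CLIENTS_PER_CONTROLLER = 16000   # assumed per-member client capacity
    CLUSTER_LOAD = 32000             # dual-stack clients per cluster

    surviving = (MEMBERS - 1) * CLIENTS_PER_CONTROLLER
    print(f"N-1 capacity: {surviving:,} clients for a load of {CLUSTER_LOAD:,}")
    assert surviving >= CLUSTER_LOAD, "cluster cannot absorb a member failure"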

MOBILITY MASTER
For this campus design, Aruba recommends implementing the MM-HW-10K or MM-VA-10K mobility master (see platform
suggestions). As data switching throughput is not as big a concern as with the mobility controller clusters, hardware or virtual
MMs can be deployed.
The mobility master selected for this design can scale to support 10,000 x APs, 100,000 x clients and 1,000 x mobility
controllers. This will provide adequate capacity to support the AP, client and mobility controller counts while providing additional
headroom for future growth. Additional clients and APs can be added as the campus grows by adding additional aggregation
layers and mobility controller clusters.

NOTE: Scaling beyond 64,000 x dual-stack clients for a centralized deployment model can be achieved by deploying
additional mobility controller clusters within the datacenter. For an AOS 8.X deployment, a mobility master can be
scaled to support up to 100,000 x clients, 10,000 x APs and 1,000 x mobility controllers (see appendices).
Additional scaling is possible by deploying additional mobility masters and mobility controller clusters.

VIRTUAL LANS
For a centralized cluster design, the datacenter aggregation layer terminates all the VLANs from the mobility controller cluster
members. The datacenter architecture that is implemented determines the VLAN design. In both designs the VLANs are
extended from the mobility controllers to their respective datacenter aggregation layer switches using 802.1Q trunking. The
primary difference between the designs is the number of VLANs that are required.
LAYER 2 EXTENSION
When VLANs are extended between datacenters, each cluster implements its own unique VLAN ids and broadcast domains
that are extended between the datacenters. Each cluster consists of one or more user VLANs depending on the VLAN model
that has been implemented. For a single VLAN design, all wireless and dynamically segmented clients are assigned to a
common VLAN id, with roles and policies determining the level of access each user is provided on the network. Each cluster
implements unique VLAN ids.
The user VLANs are extended from the aggregation layer switches to each mobility controller cluster member (figure 3-20). At a
minimum two VLANs are required between the datacenter aggregation layers and each mobility controller cluster member. One
VLAN is dedicated for management, cluster and MM communications while the additional VLANs are mapped to clients. The
VLANs are common between cluster members split between the datacenters to permit seamless mobility. The datacenter
aggregation layer switches have VLAN based IP interfaces defined and operate as the default gateway for each VLAN. First-
hop router redundancy is natively provided by VRRP or the Aruba clustering architecture.

Figure 3-20. Wireless and Dynamically Segmented Client VLANs – Layer 2 Extension

LAYER 3 SEPARATION
When the datacenters are separated at layer 3, the VLANs are unique per datacenter. The primary and secondary clusters in
each datacenter each require their own unique VLAN ids and broadcast domains. Each cluster consists of one or more user
VLANs depending on the VLAN model that has been implemented. For a single VLAN design, all wireless and dynamically
segmented clients are assigned to a common VLAN id with roles and policies determining the level of access each user is
provided on the network. Each cluster implements unique VLAN ids.
The user VLANs are extended from the aggregation layer switches to each mobility controller cluster member (figure 3-21). At a
minimum two VLANs are required between the datacenter aggregation layers and each mobility controller cluster member. One
VLAN is dedicated for management, cluster and MM communications while the additional VLANs are mapped to clients. The
VLANs are common between cluster members in each datacenter to permit seamless mobility. The datacenter aggregation
layer switches have VLAN based IP interfaces defined and operate as the default gateway for each VLAN. First-hop router
redundancy is natively provided by VRRP or the Aruba clustering architecture.
Figure 3-21. Wireless and Dynamically Segmented Client VLANs – Layer 3 Separation

One difference between the two datacenter designs is the client VLAN assignment and broadcast domain membership during a
datacenter failure. While both models offer full redundancy, only the layer 2 VLAN extension model offers fast failover in the
event of a datacenter outage:

1. Layer 2 Extension – Impacted clients maintain their VLAN id and IP addressing after a datacenter failover. The APs,
Aruba switches and clients are assigned to a new cluster member in their existing cluster in the remaining datacenter.
2. Layer 3 Separated – Impacted clients are assigned a new VLAN id and IP addressing after a datacenter failover. The
APs, Aruba switches and clients are assigned to a secondary cluster member in the remaining datacenter.

SCALING & PLATFORM SUGGESTIONS


The table below provides platform suggestions for the centralized cluster campus deployment scenario that supports 6,000 x Access
Points and 64,000 x concurrent clients. Where appropriate a good, better and best suggestion is made based on feature,
performance and scaling. These are suggestions based on the described scenario and may be substituted at your own
discretion.

                                           Good          Better        Best

Switching   Core Layer
            Aggregation Layer              Building Specific (follow the small, medium
            Access Layer                   and large recommendations)

            Wireless Module                8400

Wireless    Mobility Masters               MM-VA-10K or MM-HW-10K
            Mobility Controller Clusters   7220          7240XM        7280
            802.11ac Wave 2 Access Points  300 Series    310 Series    330/340 Series

Figure 48 - Centralized Campus Building Platform Suggestions

NOTE: As each building in the campus can be different in size, each building will require its own respective 2-tier or 3-tier
hierarchical network design. As such, switching suggestions for the core, aggregation and access layers are not
provided in the table above as these selections will be unique per building. The individual building selections should be
made following the small, medium and large suggestions highlighted in the previous sections.

SCENARIO 2 – DISTRIBUTED CLUSTERS
The following reference design is for a campus such as a university with 285 buildings distributed over a 900 acre site. Each
building implements its own 2-tier or 3-tier modular network design that connects to a common campus backbone. The
university has 20,000 faculty, staff and students with IPv4 and/or IPv6 clients. To provide coverage, the university has deployed
3,500 x 802.11ac Wave 2 Access Points (figure 49).

Campus Characteristics:
 3,500 x 802.11ac Wave 2 Access Points
 40,000 x Concurrent Clients (Native IPv4 and/or Dual-Stack)
 1 x Datacenter

Figure 49 - Campus Modular Network Design – Distributed Mobility Controller Clusters

WIRELESS LAN COMPONENTS


The campus in this scenario includes the mobility masters deployed in a datacenter and clusters of mobility controllers that are
distributed between buildings. The campus in this scenario includes a single datacenter, however multiple datacenters may
exist for your design. If multiple datacenters exist, the previous campus reference architecture provides details for the mobility
master deployment options that can be selected for layer 2 and layer 3 datacenter deployment models.
Unlike the previous campus example, the mobility controller clusters are distributed between the buildings rather than deployed
in the datacenter. This means the wireless and dynamically segmented traffic terminates within the buildings rather than the
datacenter. As roaming can only be provided within a cluster of mobility controllers, Access Points (APs) in co-located buildings

70
requiring overlapping coverage are serviced by a cluster of mobility controller strategically deployed in one of the co-located
buildings. APs and clients in standalone or isolated buildings being serviced by their own cluster of mobility controllers.
The modular network design and mobility controller cluster placement recommendations for each building in the campus follow
the same recommendations provided for the small, medium and large office reference designs. The mobility controller clusters
connect to their respective layer depending on the building's size. As with the previous recommendations, a wireless
aggregation layer is recommended when the number of wireless and dynamically segmented clients exceeds 4,096.
As the building sizes, number of APs and hosts vary, the mobility controller clusters are customized per building or group of co-located
buildings to meet the AP, client and throughput requirements. For ease of deployment, troubleshooting and repair, it is
recommended that you standardize on common models of mobility controllers for small, medium and large buildings. Your
design may include specifying two or three models of mobility controllers depending on the range of building sizes you need to
support.
The table below provides a summary of these components:

Component Description Notes


Aruba Mobility Master (MM) Hardware or Virtual Appliances 2 x Required
Aruba Mobility Controllers Hardware or Virtual Appliances Varies
Aruba Access Points 802.11ac Wave 2 Access Points 3,500 x Required (Distributed)
Aruba Airwave Hardware or Virtual Appliance Recommended
Aruba ClearPass Hardware or Virtual Appliance Recommended
Figure 50 - Wireless LAN Components

ROAMING DOMAINS
With an ArubaOS 8 architecture, seamless mobility is provided between Access Points (APs) managed by a common cluster.
Each wireless and dynamically segmented client is assigned an Active User Anchor Controller (A-UAC) and a Standby User
Anchor Controller (S-UAC) cluster member to provide fast failover in the event of a cluster member failure or live upgrades.
This campus design includes both standalone and co-located buildings. Roaming is provided within each building as well as
strategically between co-located buildings where overlapping coverage is provided. Co-located buildings provide indoor /
outdoor coverage, permitting roaming as faculty and students move between the co-located buildings (figure 51):
 Standalone Buildings – Are each serviced by a cluster of mobility controllers deployed within each building. When
necessary, APs in small buildings are serviced by a mobility controller cluster in a neighboring building.
 Co-Located Buildings – Are serviced by a cluster of mobility controllers strategically deployed in one of the co-located
buildings. Each cluster services APs across two or more buildings.

Figure 51 - Roaming Domains

REDUNDANCY
For this scenario the mobility masters are deployed within the datacenter and connect directly to separate datacenter
aggregation switches. Redundancy within each building is provided by the modular network design and clusters of mobility
controllers. The mobility controllers are deployed following the same recommendations provided for the small, medium and
large office reference designs:
 Aruba Mobility Master (MM):
o Two hardware or virtual MMs
o L2 master redundancy (Active / Standby)
 Hardware Mobility Controllers (MCs):
o Multiple clusters of hardware MCs
o Minimum of two cluster members
 Virtual Mobility Controllers (MCs):
o Multiple clusters of virtual MCs
o Minimum of two cluster members
 Access Points
o AP Master pointing to the cluster's VRRP VIP
o Fast failover using cluster built-in redundancy
Additional redundancy between clusters can be achieved if desired by implementing the backup LMS option. This will allow
Access Points (APs) in a building to failover to an alternative designated cluster in the event of an in-building cluster or wireless
aggregation layer failure. Please note that the APs will perform a full bootstrap to failover to the alternate cluster which is user
impacting. The alternate cluster and aggregation layer must also be scaled accordingly to accommodate the AP and client
counts.

SCALABILITY
The primary scaling concern for this scenario is mobility master scaling. For this campus design, not only do you need to
accommodate the total number of Access Points (APs) and clients, but also the total number of mobility controllers which are
distributed between buildings. For this scenario, the 285 buildings will be serviced by 180 clusters, each with a minimum of two
mobility controller members. Clusters in some larger buildings implement three or four cluster members as required.
For this campus design, Aruba recommends implementing the MM-HW-5K or MM-VA-5K mobility master (see platform
suggestions). As the number of distributed mobility controllers is the primary concern, hardware or virtual MMs can be
deployed. The mobility master selected for this design can scale to support 5,000 x APs, 50,000 x clients and 500 x mobility
controllers. This will provide adequate capacity to support the AP, client and mobility controller counts while providing additional
headroom for future growth. If your specific campus design requires more mobility controllers, the MM-HW-10K or MM-VA-
10K mobility master can be selected, which can support up to 1,000 x mobility controllers.
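
The controller-count arithmetic above can be checked directly, as in the Python sketch below, which uses only the figures quoted in this section (180 clusters, two to four members each, and the 500 / 1,000 managed device limits).

    # Worked check of the distributed campus numbers quoted above: 180 clusters
    # of two to four members each, against the MM managed-device limits.
    CLUSTERS = 180
    MIN_MEMBERS, MAX_MEMBERS = 2, 4

    low, high = CLUSTERS * MIN_MEMBERS, CLUSTERS * MAX_MEMBERS
    print(f"Managed controllers: {low} (minimum) to {high} (worst case)")
    print(f"Minimum build fits MM-*-5K (500 MCs): {low <= 500}")
    print(f"Worst case fits MM-*-10K (1,000 MCs): {high <= 1000}")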
VIRTUAL LANS
For a distributed cluster design, the building core or wireless aggregation layer terminates all the VLANs from the building's
wireless module. The wireless and dynamically segmented client VLANs are extended from the mobility controllers to each
building's respective core or wireless aggregation layer switches using 802.1Q trunking.
The wireless module consists of one or more user VLANs depending on the model that is implemented. For a single VLAN
design, all wireless and dynamically segmented clients are assigned to a common VLAN id with roles and policies determining
the level of access each user is provided on the network. The single VLAN is extended from the respective aggregation layer
switches to each physical or virtual mobility controller cluster member. Additional VLANs can be added and extended as
required (figure 52). For example, your mobile first design may require separate VLANs to be assigned to wireless and
dynamically segmented clients for policy compliance.
At a minimum two VLANs are required between the buildings core or wireless aggregation layer switches and each mobility
controller cluster member. One VLAN is dedicated for management and mobility master communications while the additional

VLANs are mapped to clients. The VLANs are common between cluster members to permit seamless mobility within each
building.

Figure 52 - Hardware Mobility Controller Cluster – VLANs

Each building may implement common VLAN ids or unique VLAN ids as required. As each building is layer 3 separated from
the other buildings in the campus, the VLAN ids can be re-used, simplifying the WLAN and dynamically segmented client
deployment. However, each VLAN will require its own IPv4 and IPv6 subnet assignments.
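
The re-use rule can be illustrated with a short addressing sketch. In the Python example below, every building re-uses the same client VLAN id but receives unique IPv4 and IPv6 subnets carved from campus supernets; the building names, supernets and prefix sizes are placeholders, not recommendations.

    # Per-building addressing when VLAN ids are re-used: the same client VLAN
    # appears in every building, but each building gets unique IPv4 and IPv6
    # subnets. Building names, supernets and prefix sizes are placeholders.
    from ipaddress import ip_network

    buildings = ["Library", "Engineering", "Dorm-A"]
    v4_pool = ip_network("10.20.0.0/16").subnets(new_prefix=22)
    v6_pool = ip_network("2001:db8:20::/48").subnets(new_prefix=64)

    for bldg, v4, v6 in zip(buildings, v4_pool, v6_pool):
        print(f"{bldg}: client VLAN 200 -> {v4} / {v6}")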

SCALING & PLATFORM SUGGESTIONS


The distributed campus scenario requires switching and wireless components to be selected per building. The component
selection for each building should be based on the small, medium and large suggestions highlighted in the previous sections.
Each building in the campus will implement a 2-tier or 3-tier hierarchical network design with appropriate selections to meet
each buildings wired and wireless connectivity, performance and redundancy requirements.
As previously highlighted, it is recommended to standardize on common models of mobility controllers for the small, medium
and large buildings to simplify deployment, troubleshooting and repair. Your specific campus design may standardize on a
common mobility controller model for all buildings, or one model per building size. The number of cluster members per building
is then adjusted to meet each building's redundancy and performance needs.
To support this distributed campus scenario, Aruba suggests the MM-VA-5K or MM-HW-5K, which can scale to accommodate
5,000 x Access Points, 50,000 x Clients and 500 x Managed Devices. The suggested Mobility Master models can meet the
initial requirements to support 3,500 x Access Points and 40,000 x concurrent clients while providing additional headroom for
growth. Larger Mobility Masters such as the MM-VA-10K or MM-HW-10K are available to support larger distributed campuses
if required, with each scaling to support 10,000 x Access Points, 100,000 x Clients and 1,000 x Managed Devices.

VRD CASE STUDY OVERVIEW
In this section of the Campus VRD, we will examine the networking needs of a fictitious company named Dumars Industries.
Dumars Industries has 5 offices within a metropolitan area and is looking to expand to additional remote offices throughout the
country. Dumars Industries has been in business for 18 years and has approximately 25,000 employees. The majority of
employees work in one of the company offices, the primary exception being the field sales teams which consist of
approximately 1000 total users. Dumars Industries also has an active college intern program and generally hosts 50-100
additional interns each quarter.
The five primary sites are depicted below, along with the connectivity between each site and internet connectivity. Headquarters
and Gold River are the two main sites and support the largest user populations. Metro-E circuits have been provisioned as
depicted below.

Figure 53 - Facilities & Connectivity

Dumars Industries provides each employee a laptop or tablet device and allows employees to use their own devices as well.
The company has embraced SaaS and has moved approximately 85% of workloads to the cloud. Dumars Industries has
elected to keep building security systems on-premises.

Headquarters Building
The headquarters building is a 14-story facility and has an adjacent parking garage. The following business functions and
associated staff work in the headquarters building:
• Executive Offices

• Client Briefing Center
• Human Resources
• Legal Services
• Finance
• Safety & Security
The company space planner has projected to have approximately 400 users per floor for a total of 5,000 users. 95% of the
users will be connected to the network wirelessly. The majority of switchports will be used for connecting to access points,
building control IoT devices, and security cameras. Executive Offices will provide wired connectivity for IP desk phones while
other users will use soft-phone clients. Each floor also has several small and medium sized conference rooms and two large
conference rooms. All conference rooms will have wired connectivity for conference room phones and audio/visual equipment.
Each floor has three intermediate distribution frames (IDFs) with fiber connectivity back to the main distribution frame (MDF)
located on the 7th floor. The fiber path from each IDF is a ‘home run’ to the MDF. The cable path between floors runs through
the first IDF of each floor. There are 24 strands of OM3 fiber running from the MDF to the primary IDF on each floor. There are
also 12 strands of OM3 running from the main IDF on each floor to the second and third IDFs. The fiber plant provides the
ability to provision connections from each IDF to the MDF via intermediate patch panels.
Each IDF will contain a stack of switches to provide both power and connectivity for access points and other devices. To
provide the best user experience, the access switching stacks will support HPE SmartRate to deliver 5 Gbps of connectivity to
each access point. SmartRate will be used for network locations with higher bandwidth requirements. It is anticipated that 25
access points will be needed per floor, although more may be needed in the future. The switches will also provide power and
network connectivity to 5-10 security cameras, 15-20 badge readers, and 15-20 building control IoT devices.
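
A rough per-floor port budget can be derived from the counts above. The Python sketch below uses the worst-case device counts from this section; the conference room count and the 48-port switch size are assumptions, and user/desk drops and spare capacity are excluded (which is why real stacks of 2-4 switches per IDF are larger than this lower bound).

    # Rough per-floor port budget for Headquarters using the worst-case device
    # counts above. Conference room count and the 48-port switch size are
    # assumptions; user/desk drops and spare ports are excluded.
    import math

    APS, CAMERAS, BADGE_READERS, IOT = 25, 10, 20, 20   # worst case per floor
    CONF_ROOMS, DROPS_PER_ROOM = 8, 4   # rooms assumed; 4 cables/room per cabling table
    IDFS_PER_FLOOR, PORTS_PER_SWITCH = 3, 48

    floor_ports = APS + CAMERAS + BADGE_READERS + IOT + CONF_ROOMS * DROPS_PER_ROOM
    per_idf = math.ceil(floor_ports / IDFS_PER_FLOOR)
    switches = math.ceil(per_idf / PORTS_PER_SWITCH)
    print(f"{floor_ports} ports per floor -> ~{per_idf} per IDF -> "
          f"at least {switches} x 48-port switch(es) per IDF stack")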
The copper cabling plant was recently upgraded in anticipation of supporting 802.3bt Power over Ethernet. Currently,
plans are being developed to modernize the building infrastructure to take advantage of the additional power provided by
802.3bt. The table below provides a summary of the copper cable plant.

Component                       Cabling Requirement           Notes

Aruba Access Points             1 cable per AP                1 cable is used for network connectivity;
                                (optional second cable)       the second cable run can be used for
                                                              console or for Ethernet connectivity
Conference Room Connectivity    4 cables per room             1 – A/V controller
                                                              1 – Conference Phone
                                                              1 – Room Reservation Pad/System
                                                              1 – Future Use
Badge Reader                    1 cable per badge reader
Security Camera                 1 cable per security camera
Building Control / IoT Sensors  1 cable per device
Figure 54 - Headquarters Per Floor Copper Cabling

The diagram below provides a high-level overview of the fiber plant on each floor and connectivity back to the building MDF
located on the 7th floor.

Figure 55 - Headquarters IDF Connectivity

The Data Center at Headquarters is located on the 7th floor. Dumars Industries has moved the majority of their workload to the
cloud but does maintain a few in-house systems/services. The systems which have not moved to the cloud are:
• Email and Calendaring
• Directory Services
• DHCP and DNS Services
• Security and Building Safety Systems
• ClearPass
• Network Management Applications
• IP Telephony Systems
Of the systems still located in the data center, most are virtualized with the exception of the application(s) to support the IP
camera systems. The data center network connects to the campus core switches via multiple links to provide both sufficient

capacity and redundancy. The data center is designed with a spine and leaf architecture and the building network is a ‘large
leaf’. Other services which are accessible via the data center network include internet access, remote access/VPN services,
and public web pages/content. Internet connectivity is provided by BGP peerings to two ISPs. The internet edge service
block advertises a default route to the Campus network. Default route handling will be detailed in the routing architecture
portion of this document.
The Gold River Datacenter is designed to replicate all of the services provided by the Headquarters Datacenter. The
virtualization environment provides capabilities to move virtual machines/workloads between datacenters and update host
addressing as required. This VRD will document the connectivity from the Campus network to the Data Center network but will
not include technical details of the data center environment.

Gold River
The Gold River building is a pair of adjacent eight-story buildings supporting approximately 5000 users. The buildings are
known as 'Gold River' and 'Gold River North' (often abbreviated GDRN). The following business functions and associated staff
work in the facility:
• Research & Development
• Client Support Services
• Internal and External Training
The company space planner has projected to have approximately 50 users on the first and second floors of the Gold River main
building and 400 users per floor on floors three through eight. Training facilities are only in the Gold River Main building and are
not located in the GDRN building. 90% of the users will be connected to the network wirelessly. The training facilities are
essentially large conference rooms with partitions to support having training sessions for small groups of 10-12 and scaling to
large groups of up to 100. The training facility can support a maximum of 400 students at any given time.
The majority of switchports will be used for connecting to access points, building control IoT devices, and security cameras.
Floors three through six have several small and medium sized conference rooms and two large conference rooms. Conference
rooms will have wired connectivity for conference room phones and audio/visual equipment.
Each floor has three intermediate distribution frames (IDFs) with fiber connectivity back to the main distribution frame (MDF)
located on the 8th floor. There are 24 strands of OM3 fiber running from the MDF to the primary IDF on each floor. There are also
12 strands of OM3 running from the main IDF on each floor to the second and third IDFs. The fiber plant provides the ability to
provision connections from each IDF to the MDF via intermediate patch panels.
Each IDF will contain a stack of switches to provide both power and connectivity for access points and other devices. To
provide the best user experience, the access switching stacks will support HPE SmartRate, delivering more than 1 Gbps of
connectivity to specific access points deployed in locations requiring high bandwidth. It is anticipated that 30-35 access points
will be needed per floor, although more may be needed in the future. The switches will also provide
power and network connectivity to 5-10 security cameras, 15-20 badge readers, and 15-20 building control IoT devices.
The diagram below provides a high-level overview of the fiber plant on each floor and connectivity back to the building MDF
located on the 8th floor.

Figure 56 - Gold River & Gold River North IDF Connectivity

The diagram below shows the links between service blocks in the Gold River and Gold River North facilities. Note that access
layer devices are omitted for clarity. The only layer 3 devices in the Gold River North building are a pair of 8320s.

Figure 57 - Gold River and Gold River North Service Blocks

The Gold River Data center is located on the 8th floor of the GDR facility. This facility serves as a redundant site to the primary
data center at Headquarters. The systems which have been replicated at the GDR DC are:
• Email and Calendaring
• Directory Services
• DHCP and DNS Services
• Security and Building Safety Systems
• ClearPass
• Network Management Applications
• IP Telephony Systems
The connectivity model for the Gold River DC to the Gold River Campus is the same as the Headquarters design. The GDR
DC uses a spine and leaf architecture and the Gold River Campus network is a ‘large leaf’. Aligning to the design of the
Headquarters DC, the Gold River DC provides redundant connectivity. Internet connectivity is provided by a BGP peering to an
additional ISP. The internet edge service block advertises a default route to the Campus network. Default route handling will
be detailed in the routing architecture portion of this document.

Squaw Valley
The Squaw Valley building is a two-story building supporting approximately 1000 users. The primary business function at
this site is manufacturing. This facility runs 24x7. The majority of users will be connected via the wireless network.
Manufacturing systems are the primary devices connected to the wired network. Approximately 50% of the wired ports are
used to connect to manufacturing plant/equipment devices. The remaining switch ports will be used for connecting to access
points, building control IoT devices, and security cameras. There are two small and two medium sized conference rooms on
each floor.
Each floor has four intermediate distribution frames (IDFs) with fiber connectivity back to the main distribution frame (MDF)
located on the 1st floor. There are 8 strands of OM3 fiber running from the MDF to each IDF. Each IDF will contain a stack of
switches to provide both power and connectivity for access points and other devices. To provide the best user experience, the
access switching stacks will support HPE SmartRate, delivering up to 5 Gbps of connectivity to APs and other devices. It is anticipated that
40-50 access points will be needed per floor. The switches will also provide power and network connectivity to 5-10 security
cameras, 15-20 badge readers, and 15-20 building control IoT devices.

The diagram below provides a high-level overview of the IDF physical connectivity.

Figure 58 – Squaw Valley IDF Connectivity

Kirkwood
The Kirkwood building is a two story building supporting approximately 1000 users. The primary business functions at this site
are marketing and sales. The majority of users will be connected via the wireless network. The remaining switch ports will be
used for connecting to access points, building control IoT devices, and security cameras. There are twelve small, eight
medium, and two large conference rooms on each floor.
Each floor has four intermediate distribution frames (IDFs) with fiber connectivity back to the main distribution frame (MDF)
located on the 1st floor. There are 8 strands of OM3 fiber running from the MDF to each IDF. Each IDF will contain a stack of
switches to provide both power and connectivity for access points and other devices. It is anticipated that 40-50 access points
will be needed per floor. The switches will also provide power and network connectivity to 5-10 security cameras, 15-20 badge
readers, and 15-20 building control IoT devices.
The diagram below provides a high-level overview of the IDF physical connectivity.

Figure 59 - Kirkwood IDF Connectivity

Mt. Rose

The Mt. Rose facility serves as the primary shipping/receiving location for raw materials and finished goods. The facility also
has a warehouse and the newly-launched materials recovery program where returned/old/damaged products are disassembled
in an effort to recover and re-use as much of the raw materials as possible. The building is approximately 250,000 square feet
and provides 12 truck bays. There are approximately 250 users in this facility. The facility has a few offices and a two
conference rooms but the majority of the space is used for warehouse space.

There are 10 IDFs in the building. Each IDF will have a pair of access switches providing connectivity to approximately 30
access points. The wireless network will also use outdoor access points to provide coverage of outdoor areas where
employees will be working. Outdoor security cameras will be used in this location.

Figure 60 - Mt. Rose IDF Connectivity

DESIGN REQUIREMENTS

Availability Requirements
1. Single device faults in the access layer should not impact more than 30 users
2. Single device faults in the aggregation layer must not impact more than 0 users
3. Single device faults in the core layer must not impact more than 0 users
4. Device or circuit outages should result in sub-second failover.
5. Devices should be configured with as much interchassis/interstack redundancy as possible
6. A single device failure of a mobility controller cluster member should not result in any perceptible impact to users.

Core Layer Requirements


1. In facilities with a dedicated aggregation layer, Core devices should provide layer 3 connectivity to all other devices
and service blocks.
2. In facilities with a collapsed core/aggregation model the core/aggregation design must meet both ‘Core Layer’ and
‘Aggregation Layer’ requirements with the exception of allowing for layer 2 connectivity to Access Layer devices.
3. Core device hardware must be identical
4. Core devices should provide link redundancy for all connections to other devices/service blocks

Aggregation Layer Requirements


1. Aggregation layer devices must be deployed in pairs
2. Aggregation layer devices must appear to Access Layer Devices as a single logical switch when using a Layer 2
Access Layer design
3. Aggregation layer devices must use layer 3 links to connect to Core devices
4. Aggregation layer devices can use layer 2 or layer 3 links to connect to Access Layer Devices

Access Layer Requirements


1. Access layer devices should use stacking (backplane or virtual) when two or more switches are in the same IDF
2. Access layer devices must connect to Aggregation layer devices using interfaces from different switch stack members
3. Access layer devices can use layer 2 or layer 3 links to connect to Aggregation Layer devices.
4. Access layer devices should support PoE on all access switch ports (uplinks are exempted from this requirement)

Layer 2 Requirements
1. Spanning-Tree domains must be as small as possible to minimize network convergence events
2. Spanning-Tree should be eliminated from the network provided there are sufficient safeguards in place to mitigate any
looping events.
3. LACP must be used on all link-aggregation configurations.

Routing Requirements
1. BGP will be used to provide connectivity between sites.
2. BGP will be configured to advertise aggregate addressing for each site.
3. Each site will have a unique private ASN (see the sketch after this list).
4. OSPF will be used as the routing protocol within each building/campus.
5. OSPF will be implemented with a single area in each building/campus.
6. Each OSPF speaker should not have more than 12 adjacencies
7. Redistribution between routing protocols will only be configured on WAN edge devices
8. Headquarters and Gold River will provide internet connectivity for the enterprise network.
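
Requirement 3 above can be illustrated with a simple allocation check. In the Python sketch below, the ASN values are placeholders drawn from the 16-bit private range; only the one-ASN-per-site uniqueness rule comes from the requirements.

    # Requirement 3: one unique private ASN per site, drawn from the 16-bit
    # private range 64512-65534. The specific values are placeholders.
    PRIVATE_ASNS = range(64512, 65535)
    sites = ["Headquarters", "Gold River", "Squaw Valley", "Kirkwood", "Mt. Rose"]

    asn_plan = dict(zip(sites, PRIVATE_ASNS))
    assert len(set(asn_plan.values())) == len(sites), "ASNs must be unique"
    for site, asn in asn_plan.items():
        print(f"{site}: AS{asn}")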

Multicast Requirements
1. PIM Sparse Mode will be used in each building
2. PIM BSR will be configured on core device(s) to provide an RP for each building.
3. IGMP and IGMP snooping will be configured on aggregation and access layer devices to optimize multicast traffic
flows.
4. There is no business need to transport multicast data between facilities today.
5. All network devices MUST implement features to optimize multicast traffic flooding to conserve both wired bandwidth
and wireless bandwidth.

Administrative Device Access


1. All network infrastructure devices must provide for unique user accounts per administrator and be implemented via a
central authentication system using either TACACS+ or RADIUS
2. Local account fallback must be configured on all network infrastructure devices
3. The system should provide for auditing of administrative logins to network devices as well as capturing commands
issued by each user.

Network Instrumentation & Management


1. All network devices must be configured to send SYSLOG to the primary and secondary SYSLOG hosts
2. All network devices must be configured to allow for RestAPI interaction/services
3. All network devices must be configured to support SNMP read operations
4. Access layer network devices MAY be configured to support SNMP read/write operations.
5. All devices should send SNMP traps
6. Aggregation and Core devices should be configured to provide sFlow to the primary collector.
7. NTP MUST be configured on all devices

End User Experience
1. Wired and wireless devices/users should be profiled/authenticated by the network so that pre-defined security policies
can be applied
2. The network must provide for the application of differing polices for employees using company systems and employee
owned devices (BYOD).
3. The network must provide ‘Guest Internet’ for wireless users
4. The network must provide ‘Internet Only’ access for wired ports in training facilities
5. The network must provide for the ability to block systems/hosts/users from different groups (such as employees,
Building Management Systems, IoT Devices, etc) from communicating with other groups
6. End-users and Visitors MUST be able to self-provision BYOD devices using either the guest network or BYOD
services.
7. Wireless roaming in buildings must be implemented such that a user can maintain connectivity when roaming on or
between floors.
8. Optimize RF design for voice and roaming

Security
1. All devices should only allow administrative access from defined networks/hosts.
2. Access layer devices should be configured to prevent attached host systems from influencing/changing the spanning-
tree topology of the network. Any spanning-tree related frames received on ports connected to access-layer hosts
should cause the port to be disabled.
3. All devices should authenticate peer/neighbor adjacencies for routing protocols.
4. All devices should only allow secure communication access methods for administrative access
5. When supported, all control plane protocols should be authenticated and encrypted.

DESIGN OVERVIEW
In crafting a design to meet the identified requirements, as well as provide some 'planning for the future', consideration must be
given to the service blocks which are the most likely to have additional requirements. The access layer is the most likely service
block to require changes to support new business needs. Changes to the aggregation layer are often driven by building
expansion or growth in the number of access layer devices/IDFs. In most cases, the overall design of the aggregation layer doesn't
change substantially. The same holds true for the core layer. Using ClearPass for centralized policy definition and having the
network enforce the policies, there may not be a great deal of change to device configurations. Of course, there will be
exceptions to this, such as the need to adjust a QoS configuration to support a new application.

An item that is often overlooked in designing and building a Mobile-First network is ensuring that MTUs are properly configured
on all devices to support features such as Dynamic Segmentation, as well as OSPF (or other protocols/functions), which require
MTUs larger than 1500 bytes. Care should be taken to ensure that the IP path from access devices (switches or APs) can
provide an MTU of at least 1564 to the mobility controllers. In a Campus environment this likely doesn't present a problem, but
in a Metro-E network where backup controllers are placed at remote sites, the Metro-E circuits must support the jumbo frames.
An IP MTU of 2048 and an Ethernet MTU of 2048 are recommended.
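
The 1564-byte figure reflects tunnel encapsulation overhead on client traffic. The Python sketch below illustrates the arithmetic with nominal GRE-over-IPv4 header sizes; the individual header values are illustrative assumptions, while the 1564 and 2048 figures come from this document's guidance.

    # Tunnel overhead arithmetic behind the MTU guidance. Header sizes are
    # nominal GRE-over-IPv4 values used for illustration; the 1564 and 2048
    # figures come from this document's guidance.
    CLIENT_PAYLOAD = 1500   # standard client IP MTU
    INNER_ETHERNET = 14     # tunneled client Ethernet header
    GRE_HEADER = 4          # base GRE header
    OUTER_IPV4 = 20         # outer IP header added by the tunnel

    tunneled = CLIENT_PAYLOAD + INNER_ETHERNET + GRE_HEADER + OUTER_IPV4
    print(f"Tunneled frame: {tunneled} bytes; "
          f"within the recommended 1564-byte path MTU: {tunneled <= 1564}")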

NOTE: Aruba does NOT recommend deploying dynamic segmentation across a WAN where devices are separated over
low-speed, higher latency links.

The table below lists the design models and operating systems used within each of the service blocks. All of the sites will
feature a common access-layer design leveraging leading practices including device hardening, link aggregation, loop
protection, and dynamic segmentation. The single two-tier site will be designed with ArubaOS-Switch devices for all roles, and
the other sites will use ArubaOS-CX in all roles save for the access layer. The Kirkwood site will not require a dedicated
Wireless Aggregation service block; its mobility controllers will be connected to the aggregation switches.

Network                         Model        Core    Aggregation    Wireless Aggregation    Access

Headquarters                    Three-Tier   CX      CX             CX                      AOS-S
Gold River & Gold River North   Three-Tier   CX      CX             CX                      AOS-S
Squaw Valley                    Three-Tier   CX      CX             CX                      AOS-S
Kirkwood                        Three-Tier   CX      CX             N/A                     AOS-S
Mt. Rose                        Two-Tier     AOS-S (collapsed)      N/A                     AOS-S

The Metro-E devices used in this design are all 8320s with ArubaOS-CX. In this case study, we have elected to use the same
device for this role in all sites to ensure configuration consistency and reduce troubleshooting complexity.

The switches, access points, and mobility controllers planned for each site are listed below.

Headquarters Equipment List

                               Model & Quantity           Notes

Headquarters

Core Switches                  2x 8400s                   Redundant devices with dual management modules and
                                                          sufficient line cards to build cross-card LAGs/VSX links.

Aggregation Switches           4x 8320s                   Two pairs of aggregation switches are used to address
                                                          port density as well as failure-domain sizing.

Access Switches                126 x 2930Ms               Stacks of 2-4 switches will be deployed in each of the
                                                          42 IDFs.

Access Points                  340 Series APs (indoor)    Quantity to be determined by site survey.
                               370 Series APs (outdoor)

Wireless Aggregation Switches  2x 8320s                   One pair of aggregation switches is used to implement an
                                                          L3-attached wireless services block to off-load L2
                                                          processing from the core devices and to address
                                                          failure-domain sizing.

Mobility Controllers           3x 7220s                   Three mobility controllers are called for to support
                                                          local HQ users as well as to act as backup devices for
                                                          Gold River and other remote sites.

Metro-E Edge                   2x 8320s                   A pair of 8320s provides connectivity to the Metro-E
                                                          network. A dedicated pair of devices was deployed (as
                                                          opposed to using interfaces on the core) to perform any
                                                          required routing redistribution and to maintain discrete
                                                          service blocks.

Gold River & Gold River North Equipment List

                               Model & Quantity           Notes

Gold River & Gold River North

Core Switches                  2x 8400s                   Redundant devices with dual management modules and
                                                          sufficient line cards to build cross-card LAGs/VSX links.

Aggregation Switches           4x 8320s                   Two pairs of aggregation switches are used to address
                                                          port density as well as failure-domain sizing. One pair
                                                          of switches will support the Gold River building while a
                                                          second pair will support the Gold River North building.

Access Switches                126 x 2930Ms               Stacks of 2-4 switches will be deployed in each of the
                                                          IDFs.

Access Points                  340 Series APs (indoor)    Quantity to be determined by site survey.
                               370 Series APs (outdoor)

Wireless Aggregation Switches  2x 8320s                   One pair of aggregation switches is used to implement an
                                                          L3-attached wireless services block to off-load L2
                                                          processing from the core devices and to address
                                                          failure-domain sizing.

Mobility Controllers           3x 7220s                   Three mobility controllers are called for to support
                                                          local users as well as to act as backup devices for HQ
                                                          and other remote sites.

Metro-E Edge                   2x 8320s                   A pair of 8320s provides connectivity to the Metro-E
                                                          network. A dedicated pair of devices was deployed (as
                                                          opposed to using interfaces on the core) to perform any
                                                          required routing redistribution and to maintain discrete
                                                          service blocks.

Squaw Valley Equipment List

                               Model & Quantity           Notes

Squaw Valley

Core Switches                  2x 8320s                   One pair of core switches is used to address port
                                                          density as well as failure-domain sizing.

Aggregation Switches           2x 8320s                   One pair of aggregation switches is used to address
                                                          port density as well as failure-domain sizing.

Access Switches                64 x 2930Ms                Stacks of 2-4 switches will be deployed in each of the
                                                          IDFs.

Access Points                  340 Series APs (indoor)    Quantity to be determined by site survey.
                               370 Series APs (outdoor)

Wireless Aggregation Switches  2x 8320s                   One pair of aggregation switches is used to implement an
                                                          L3-attached wireless services block to off-load L2
                                                          processing from the core devices and to address
                                                          failure-domain sizing.

Mobility Controllers           3x 7220s                   Three mobility controllers are called for to support
                                                          local users.

Metro-E Edge                   2x 8320s                   A pair of 8320s provides connectivity to the Metro-E
                                                          network. A dedicated pair of devices was deployed (as
                                                          opposed to using interfaces on the core) to perform any
                                                          required routing redistribution and to maintain discrete
                                                          service blocks.

Kirkwood Equipment List

                               Model & Quantity           Notes

Kirkwood

Core Switches                  2x 8320s                   One pair of core switches is used to address port
                                                          density as well as failure-domain sizing.

Aggregation Switches           2x 8320s                   One pair of aggregation switches is used to address
                                                          port density as well as failure-domain sizing.

Access Switches                64 x 2930Ms                Stacks of 2-4 switches will be deployed in each of the
                                                          IDFs.

Access Points                  340 Series APs (indoor)    Quantity to be determined by site survey.
                               370 Series APs (outdoor)

Wireless Aggregation Switches  N/A                        The size of the user population at this site doesn't
                                                          warrant having a dedicated wireless aggregation
                                                          switch/service block. Future growth may dictate the
                                                          addition of a pair of switches to provide this function.

Mobility Controllers           3x 7210s                   Three mobility controllers are called for to support
                                                          local users.

Metro-E Edge                   2x 8320s                   A pair of 8320s provides connectivity to the Metro-E
                                                          network. A dedicated pair of devices was deployed (as
                                                          opposed to using interfaces on the core) to perform any
                                                          required routing redistribution and to maintain discrete
                                                          service blocks.

The Kirkwood site is unique in that there are design compromises which could be made without adversely impacting network
performance. For example, it would be possible to collapse the Metro-E edge functions into the Core layer. This would,
however, reduce operational agility in that maintenance to the core devices would also impact access to remote sites and
cloud-based applications.

Mt. Rose Equipment List

                               Model & Quantity           Notes

Mt. Rose

Collapsed Core & Aggregation   2x 3810s                   The user density at this site allows for using a two-tier
Switches                                                  network. ArubaOS-Switch devices were selected to support
                                                          future growth (and deployment of a dedicated core) which
                                                          would then provide a nearly identical functional design to
                                                          the other three-tier sites.

Access Switches                TBD x 2930Ms               Stacks of 2-4 switches will be deployed in each of the
                                                          IDFs.

Access Points                  340 Series APs (indoor)    Product count to be determined by site survey.
                               370 Series APs (outdoor)

Wireless Aggregation Switches  N/A                        The size of the user population at this site doesn't
                                                          warrant having a dedicated wireless aggregation
                                                          switch/service block. Future growth may dictate the
                                                          addition of a pair of switches to provide this function.

Mobility Controllers           3x 7210s                   Three mobility controllers are called for to support
                                                          local users and provide HA.

Metro-E Edge                   2x 8320s                   A pair of 8320s provides connectivity to the Metro-E
                                                          network. A dedicated pair of devices was deployed (as
                                                          opposed to using interfaces on the core) to perform any
                                                          required routing redistribution and to maintain discrete
                                                          service blocks.

NOTE  Access layer product selection can vary based upon site-specific and customer-specific goals and needs. For
      example, a customer can elect to use a chassis-based solution instead of a switch stack without significant impact
      to the overall network design.

SWITCHING ARCHITECTURE

The three-tier model used in this design has layer-two access devices with layer-three services provided by the
aggregation layer. The aggregation layer will use VSX to provide active/active forwarding for the access layer; VSX
requires that LACP be used on the VSX/MCLAG links. Spanning-tree will be implemented on the access-layer devices, with
each access device/stack forming a small spanning-tree domain in which it is the root bridge. Transmission of
spanning-tree frames will be disabled on the uplink ports connecting to the aggregation layer switches, and loop-protect
will be enabled on those uplink ports. Aruba 8400 and 8320 switches will be used in the aggregation role in this design.
Loop-protection can also be implemented on the VSX aggregation devices. It is critical to understand the behavior of
loop-protect when deciding when and where to include it in network designs. In a VSX configuration with loop-protect
enabled, if a loop is detected by the VSX switch, both links in the VSX/MCLAG bundle will stop forwarding traffic
(provided that the configured action is just to disable forwarding) until the device believes the loop condition is
resolved. VSX with loop-protect is recommended for networks in which there is a potential for loops to be created
between access-layer devices. Aruba recommends using loop-protect in this case as it will minimize the impact of a loop
by disabling one pair of VSX forwarding interfaces, protecting the other access-layer devices and the VSX pair.
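A minimal ArubaOS-Switch sketch of the access-layer protections described above is shown below; the port ranges and
trunk name are hypothetical and should be adjusted to the actual stack layout:

! Access stack sketch: ports 1-48 are host-facing, Trk1 is the uplink LAG
spanning-tree
spanning-tree priority 0             ! each stack is the root of its own small STP domain
spanning-tree 1-48 admin-edge-port   ! host-facing ports transition directly to forwarding
spanning-tree 1-48 bpdu-protection   ! disable any host port that receives STP frames
spanning-tree Trk1 bpdu-filter       ! no STP frames are sent or processed on the uplink
loop-protect Trk1                    ! loop-protect guards the uplink toward the VSX pair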

Figure 61 - Loop Protection Design Overview Diagram

The following table summarizes features used in the aggregation layer.

Feature/Config Element    Notes

VSX Roles                 Define the device role for each participant.

VSX ISL Link              Use a separate, dedicated link/LAG for the ISL. Do not allow the keepalive
                          traffic to traverse this link.

VSX Keepalive             Use a dedicated layer 3 link/LAG between peers. Optionally, this link can
                          be in a VRF other than the default VRF.

VSX Sync                  Enable VSX Sync on VLANs and SVIs so that access-lists and other elements
                          are synchronized from the primary VSX device to the secondary VSX device.

Loop-protect              Enable loop-protect on LAGs connecting to access-layer devices.

MTU                       Ensure that you configure the interface MTU prior to assigning the
                          interface to a LAG. The MTU should be at least 20 bytes larger than the
                          IP MTU.

IP MTU                    Configure an IP MTU to support required applications/services. 2048 is a
                          recommended size to support Dynamic Segmentation.
Figure 62 – Three-Tier Aggregation Layer Feature Summary
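A heavily abbreviated ArubaOS-CX sketch of these elements follows; the LAG numbers, addresses, and keepalive VRF name
are assumptions, and command syntax should be verified against the deployed software release:

! VSX sketch for one member of an aggregation pair (the peer uses 'role secondary')
vrf VSX_KEEPALIVE
interface lag 256
    description VSX ISL
    no shutdown
    no routing
    vlan trunk allowed all
    lacp mode active
vsx
    inter-switch-link lag 256
    role primary
    keepalive peer 192.168.255.2 source 192.168.255.1 vrf VSX_KEEPALIVE
! Downstream MCLAG toward an access stack
interface lag 20 multi-chassis
    description MCLAG to access stack 1
    no shutdown
    no routing
    vlan trunk allowed 10,20,30,40,999
    lacp mode active
    loop-protect                     ! per the table above; verify availability on your release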

In the three-tier model, the core devices will only provide layer 3 connectivity to other service blocks/devices. Aruba 8400 and
8320 switches will be used in the core role in this design.

The collapsed-core design for this case study will provide layer 2 access devices and a pair of core/aggregation switches
providing layer 2 and layer 3 services. The two-tier model will be implemented using ArubaOS-Switch devices. The collapsed
core/aggregation pair will be configured with the Virtual Switching Framework (VSF) to present a single logical device to
neighboring devices. Connectivity between devices/stacks will be provisioned with redundant links.

Figure 63 - Two-Tier Design Topology Overview

Feature/Config Element    Notes

VSF & VSF MAD             VSF and MAD would be configured when using devices which do not support
                          backplane stacking in the collapsed core/aggregation role. This is
                          commonly seen in deployments using the 5400R switches.

Backplane Stacking        When using 3810s or other devices which support backplane stacking in
                          the collapsed core/aggregation role, use backplane stacking to
                          interconnect devices.

Routing                   Simple layer three configuration is likely all that is needed; routing
                          is required to reach the WAN/Metro-E edge.

Spanning-Tree             Spanning-tree will be enabled on all devices. The root bridge will be
                          the collapsed core/aggregation switch.

MTU                       Ensure that you configure the interface MTU prior to assigning the
                          interface to a LAG. The MTU should be at least 20 bytes larger than the
                          IP MTU.

IP MTU                    Configure an IP MTU to support required applications/services. 2048 is a
                          recommended size to support Dynamic Segmentation.
Figure 64 – Two-Tier Collapsed Core/Aggregation Layer Feature Summary
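For a collapsed core built from devices without backplane stacking (e.g., 5400R-class switches), a minimal VSF sketch
might look like the following; member links and priorities are hypothetical, and MAD should be configured per product
documentation:

! VSF sketch: two chassis presented as one logical device
vsf enable domain 10
vsf member 1 priority 255            ! preferred commander
vsf member 1 link 1 1/A1-1/A2
vsf member 2 priority 128
vsf member 2 link 1 2/A1-2/A2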

ROUTING ARCHITECTURE

The following diagram shows the planned routing protocol configuration using OSPF within each building and BGP for
connectivity to other sites. Each site will use a private BGP ASN with two BGP speakers. Circuits to reach remote sites will be
distributed to each of the Metro-E edge switches. Redistribution will be performed on the Metro-E Edge devices. BGP will
redistribute a default route as well as summary routes for the remote sites into OSPF. BGP will be configured to announce
aggregate addresses for the OSPF prefixes. BGP communities will be applied to prefix advertisements such that a routing
policy can be configured if required. The baseline policy will be to have each site select a preferred and backup internet edge
to achieve some level of load balancing. Note that the internet edge, DMZ, and data center design is out of the scope of this
Campus VRD.

Figure 65 - Enterprise Routing Architecture Overview

The Metro-Ethernet edge switches will be iBGP peered to each other and will form eBGP peerings to other sites, mirroring the
circuit topology. There is no IGP used within the Metro-E network; as such, the eBGP peerings are established using the
interface addresses. The iBGP peerings are established using the loopback addresses of the Metro-E switches. The BGP
features and configuration elements used in this design are highlighted in the table below.

Feature/Config Element    Notes

bgp router-id             Define a router-id to match the loopback 0 address.

Fast External Fallover    Drops BGP sessions to eBGP peers when the outgoing interface used to
                          reach the peer goes down.

BFD                       BFD is enabled on interfaces connecting to the Metro-E network because we
                          do not have a direct connection to the remote BGP speaker. BFD will
                          rapidly detect communication failures and shut down the BGP session much
                          faster than relying upon BGP timers.

Peer-Groups               Peer groups are used for eBGP peers to ensure consistently applied
                          outbound route-policy via route maps and other commands which should be
                          applied identically to eBGP peers.

Neighbor Authentication   BGP neighbors will be configured with passwords.

BGP Communities           Communities will be sent to eBGP peers and will be used to adjust local
                          AS routing policy.
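A condensed ArubaOS-CX BGP sketch for one Metro-E edge switch follows; the ASNs, addresses, and policy names are
hypothetical, and command syntax should be validated against the deployed release:

router bgp 65001
    bgp router-id 10.1.1.10                    ! matches loopback 0
    bgp fast-external-fallover
    ! iBGP to the second Metro-E edge switch via loopbacks
    neighbor 10.1.1.11 remote-as 65001
    neighbor 10.1.1.11 update-source loopback 0
    neighbor 10.1.1.11 password ciphertext <<removed>>
    ! eBGP peer-group for remote sites, peered on interface addresses
    neighbor EBGP-PEERS peer-group
    neighbor EBGP-PEERS fall-over bfd          ! BFD detects failures faster than BGP timers
    neighbor EBGP-PEERS password ciphertext <<removed>>
    neighbor 192.0.2.2 remote-as 65002
    neighbor 192.0.2.2 peer-group EBGP-PEERS
    address-family ipv4 unicast
        neighbor 10.1.1.11 activate
        neighbor 192.0.2.2 activate
        neighbor EBGP-PEERS send-community
        neighbor EBGP-PEERS route-map METRO-OUT out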

In designing an optimal OSPF network, two key goals are to minimize the number of OSPF adjacencies per device and to
avoid having a large area (or areas) containing devices with significantly different performance capabilities. A
high-performing OSPF network design would have the fewest adjacencies possible, and all OSPF speakers would be of
similar performance capabilities. If your network will have OSPF speakers of varying performance capabilities, multiple
OSPF areas may be required. This may also be a driver to use BGP to build connectivity between OSPF domains. In Campus
networks, where the links between devices are often more reliable than WAN circuits, the network is less likely to
experience the same volume and frequency of OSPF events. Thus, optimizing OSPF for a campus design can have somewhat
less rigid requirements. In the spirit of 'simple is best', it is still a leading practice to conform your Campus OSPF
design to the same principles and practices as an OSPF WAN design. To that end, we recommend a Campus OSPF speaker have
fewer than 12 adjacencies. In our case study, the Headquarters site is the largest facility and the Core devices have
11 OSPF adjacencies. The table below lists all of the OSPF speakers for the HQ facility; other sites using a three-tier
model will have a similar table. Given the size of the HQ facility, two aggregation device pairs are used. Smaller
sites are likely to have fewer aggregation device pairs.

Headquarters OSPF Neighbor Table

Device Name       Device Role                    Number of OSPF Neighbors

SWHQ-CORE1        Campus Core Switch             11
SWHQ-CORE2        Campus Core Switch             11
SWHQ-DC1          Data Center Edge               3
SWHQ-DC2          Data Center Edge               3
SWHQ-AGG1A        Floors 1-7 Aggregation         3
SWHQ-AGG1B        Floors 1-7 Aggregation         3
SWHQ-AGG2A        Floors 8-14 Aggregation        3
SWHQ-AGG2B        Floors 8-14 Aggregation        3
SWHQ-WAGG1A       Wireless Aggregation Switch    3
SWHQ-WAGG1B       Wireless Aggregation Switch    3
SWHQ-WAN1         Metro-E Edge Switch            3
SWHQ-WAN2         Metro-E Edge Switch            3

Figure 66 - HQ OSPF Adjacencies

NOTE  Large networks may treat the Campus as a 'leaf' attached to the Data Center 'spine'. In these types of designs, it
      is likely that the WAN Edge block is also a 'leaf'. In this case study, the decision was made to show the WAN
      Edge connected to the Core.

Feature/Config Element              Notes

ospf router-id                      Define a router-id to match the loopback 0 address.

max-metric router-lsa on-startup    Advertises router LSAs with the maximum metric at startup so
                                    the device is not used as a transit path until it has fully
                                    converged.

passive-interface default           Stops OSPF from forming adjacencies on OSPF-enabled interfaces
                                    unless the 'no ip ospf passive' command is used. This is
                                    recommended to reduce the risk of forming unintended
                                    adjacencies.

trap-enable                         Causes OSPF to send traps when various OSPF events, such as
                                    establishment/loss of a neighbor, occur.

OSPF network types                  The point-to-point network type will be configured on all layer
                                    3 point-to-point links to optimize OSPF, as no DR/BDR election
                                    is required on point-to-point links.

Neighbor authentication             OSPF authentication will be configured to ensure that OSPF
                                    adjacencies are only formed with devices that can properly
                                    authenticate each other.
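These elements map directly to configuration like the following, excerpted (OSPF-related lines only) from the
SWHQ-CORE1 configuration in Appendix A:

router ospf 1
    router-id 10.1.1.1
    max-metric router-lsa on-startup
    passive-interface default
    trap-enable
    area 0.0.0.0

interface lag 20
    ip ospf 1 area 0.0.0.0
    no ip ospf passive                   ! this link must form an adjacency
    ip ospf network point-to-point       ! no DR/BDR election on point-to-point links
    ip ospf authentication message-digest
    ip ospf authentication-key ciphertext <<removed>>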

DATACENTER CONNECTIVITY
The diagram below builds upon the Enterprise Routing Architecture to show additional detail about connectivity to both ISPs
and provides a high-level overview of the paths used to reach compute resources in the HQ and Gold River Data Centers. It
also depicts the relationship between the various service blocks in this case study. Comprehensive Data Center
connectivity is beyond the scope of this document.

Figure 67 - High Level Data Center Connectivity Diagram

MULTICAST ROUTING
Dumars Industries has very few applications and services which use multicast. Each building/site is multicast enabled but there
is not a business need today to transport multicast across the Metro-E network. The security camera system uses both
multicast and unicast packets. The camera archiving system joins specific multicast groups for the cameras and records the
footage. Playback of the footage generates unicast streams to the viewing device. Focusing on the multicast design of each
building, PIM sparse mode will be used along with IGMP to enable multicast forwarding functions. Core switches will be
configured as candidate rendezvous points. The primary core switch will be configured as the ‘best’ candidate RP while the
secondary core switch will be the ‘backup’ candidate RP.

All layer 3 interfaces which face intra-building switches will have PIM enabled. All layer 3 interfaces supporting user/host
VLANS will have both PIM and IGMP enabled. IGMP snooping will be enabled to optimize multicast flows.

The mobility controllers will be configured to support multicast flows and will convert multicast flows to unicast flows for radio
optimization.
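The candidate RP/BSR configuration on the primary core (taken from Appendix A) and a sketch of a user-facing SVI are
shown below; the SVI VLAN ID follows the VLAN plan, and the IGMP command syntax should be verified for the deployed
release:

! Primary core: candidate RP and BSR (the secondary core uses priority 2; see Appendix A)
router pim
    enable
    rp-candidate source-ip-interface lag1
    rp-candidate group-prefix 224.0.0.0/4
    bsr-candidate source-ip-interface lag1
    bsr-candidate priority 1

! Aggregation switch: user/host SVI with PIM and IGMP enabled (sketch)
interface vlan 1281
    ip pim-sparse enable
    ip igmp enable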

Figure 68 - Multicast Overview Diagram

Quality of Service
To provide the best end-to-end user experience Quality of Service (QoS) should be configured. QoS configurations are often
unique to each network and are influence by the applications and services deployed. In this VRD, we will present an end-to-
end QoS model for network. A QoS design should apply markings (or remark) packets as close to ingress as possible. The
access layer configuration will classify and mark packets via DSCP. The aggregation layer will trust the DSCP markings of

100
packets received from access layer switches (via layer-two interfaces) and will trust the DSCP markings of packets received
from core devices (via layer-three interfaces). The mechanism that defines how we treat prioritize the transmission of packets
is the ‘schedule-profile’. The VRD is using the default 8 egress queue model which is common to both ArubaOS-Switch and
ArubaOS-CX devices. In this model, there will be 1 queue used to support “real-time” traffic (primarily VoIP) and 7 queues
used for other application traffic. Often there are differences in switch platform capabilities used and QoS configurations will
have minor variations to accommodate these differences. The diagram below highlight the QoS model used in the MFRA VRD.
Note we will trust the DSCP markings at ingress from APs but we will remark ingress traffic from other devices.
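Condensed from the appendix configurations, the core of this model is a trust statement plus a schedule-profile with one
strict-priority queue (queues 1-6 are omitted here for brevity; see Appendix A for the full profile):

qos trust dscp                           ! trust DSCP markings at ingress
qos schedule-profile QOS_OUT
    wfq queue 0 weight 1                 ! queues 0-6 share bandwidth via WFQ
    strict queue 7                       ! queue 7 (VOICE) is serviced strictly first
apply qos queue-profile QOS_PROFILE_OUT schedule-profile QOS_OUT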

Figure 69 - QoS Model

IP ADDRESSING
The IP addressing used in this design is crafted from the RFC 1918 address space. This design calls for providing wireless
roaming within each building. Given the distance between facilities, there is not a need to maintain connectivity when
roaming between facilities; however, Gold River and Gold River North do have RF overlap and provide seamless roaming
between buildings. The IPv4 address plan was crafted to be as close to 'future proof' as possible and provides
substantially more address space per facility and function than required. Dumars Industries is exploring moving to IPv6
in the future, when the need for end-to-end IPv6 connectivity to cloud-based providers warrants the business decision to
invest in moving to an IPv4/IPv6 dual-stack configuration.
The IP addressing is crafted to provide large address blocks so that users (internal and external) and infrastructure
systems and devices have clear boundaries, allowing for the creation of simple filtering via access-lists as well as
route summarization and ease of allocating address space to new facilities. The management address space is divided such
that there are separate address blocks for layer 3 connected devices (core and aggregation switches) and layer 2
connected devices (access layer switches and APs). The address blocks are also crafted so that if Dumars Industries
migrates to a layer 3 access model, the management IP addressing will require minimal, if any, changes. The table below
provides a summary of the addressing plan.

Headquarters

10.1.0.0/16 Management for layer 3 attached devices and for /30 links between devices

10.2.0.0/16 Management for layer 2 attached devices.


Note this address space is split into smaller subnets to account for having multiple aggregation
switch blocks.
HQ has two aggregation layers that need to support this address range: aggregation 1 and
aggregation 2. The mobility block doesn’t need to have addresses in this range.

10.32.0.0/16 Headquarters User Address Space


This address space must be divided into blocks sized to support the required device/addressing
requirements for two wired aggregation layers as well as the mobility services block.
HQ has three aggregation layers: mobility, aggregation 1 and aggregation 2

Address Range                 VLAN    Notes

Mobility Service Block Address Allocation

10.32.0.1-10.32.31.254        1281    /19 subnet
10.32.32.1-10.32.63.254       1282    /19 subnet
10.32.64.1-10.32.95.254       1283    /19 subnet
10.32.96.1-10.32.127.254      138     /19 subnet

Aggregation Block 1 (Floors 1 – 7) Address Allocation

10.32.128.1-10.32.131.254     1281    /22 subnet
10.32.132.1-10.32.135.254     1282    /22 subnet
10.32.136.1-10.32.139.254     1283    /22 subnet
10.32.140.1-10.32.143.254             Hold for future use

Aggregation Block 2 (Floors 8 – 14) Address Allocation

10.32.144.1-10.32.147.254     1281    /22 subnet
10.32.148.1-10.32.151.254     1282    /22 subnet
10.32.152.1-10.32.155.254     1283    /22 subnet
10.32.156.1-10.32.159.254             Hold for future use

172.31.0.0/16 Guest Wireless Address Space

172.16.0.0/16 IoT, Phones/AV, Building Controls, & Physical Security

Each device grouping has an address block from this range allocated to the appropriate VLAN.
Note this address space is further split into smaller subnets to account for having multiple
aggregation switch blocks.
HQ has three aggregation layers: mobility, aggregation 1, and aggregation 2.

Mobility Aggregation Block Address Allocation

172.16.0.1-172.16.15.254      20      /20 subnet
172.16.16.1-172.16.31.254     30      /20 subnet
172.16.32.1-172.16.47.254     40      /20 subnet
172.16.48.1-172.16.63.254             Hold for future use

Aggregation Block 1 (Floors 1 – 7) Address Allocation

172.16.64.1-172.16.71.254     20      /21 subnet
172.16.72.1-172.16.79.254     30      /21 subnet
172.16.80.1-172.16.87.254     40      /21 subnet
172.16.88.1-172.16.91.254             Hold for future use

Aggregation Block 2 (Floors 8 – 14) Address Allocation

172.16.96.1-172.16.103.254    20      /21 subnet
172.16.104.1-172.16.111.254   30      /21 subnet
172.16.112.1-172.16.119.254   40      /21 subnet
172.16.120.1-172.16.127.254           Hold for future use

Gold River

10.3.0.0/16 Management for layer 3 attached devices and for /30 links between devices

10.4.0.0/16 Management for layer 2 attached devices. Note this address space is split into subnets to
account for 1 subnet per aggregation layer.

10.33.0.0/16 Gold River Users Address Space

172.31.0.0/16 Guest Wireless Address Space

172.17.0.0/16 IoT, Building Controls, & Physical Security

Squaw Valley

10.5.0.0/16 Management for layer 3 attached devices and for /30 links between devices

10.6.0.0/16 Management for layer 2 attached devices

10.34.0.0/17 Squaw Valley User Address Space

10.34.128.0/17 BYOD Users Address Space

172.31.0.0/16 Guest Wireless Address Space

172.18.0.0/16 IoT, Building Controls, & Physical Security

Mt. Rose

10.16.0.0/21 Management for layer 3 attached devices and for /30 links between devices

10.16.8.0/21 Management for layer 2 attached devices

10.64.0.0/21 Mt. Rose Users Address Space

10.64.8.0/21 BYOD Users Address Space

172.31.0.0/16 Guest Wireless Address Space

172.19.0.0/16 IoT, Building Controls, & Physical Security.

Kirkwood

10.16.16.0/21 Management for layer 3 attached devices

10.16.32.0/21 Management for layer 2 attached devices

10.64.16.0/21 Kirkwood User Address Space

10.64.24.0/21 BYOD Users Address Space

172.31.0.0/16 Guest Wireless Address Space

172.20.0.0/16 IoT, Building Controls, & Physical Security

VLANs
In crafting a VLAN plan that is applicable to the majority, if not all, sites, it is important to plan for both current and
future needs. This design calls for defining 12 VLANs per building/facility, with the exception of the training facility,
where a wired-guest VLAN is added. The HQ facility (and any facility with multiple aggregation blocks) has multiple
address blocks assigned to support these VLANs. The VLANs align with the address blocks allocated to each building. Some
network engineers may elect to reduce the number of VLANs or add other VLANs as dictated by business needs. In this VRD,
the following VLAN IDs are used in all facilities:

VLAN ID Description

1 Not used

10 Network Infrastructure Management for L2 attached devices (access switches and APs)

20 IoT and Building Control Devices

30 Physical Security Devices

40 Phones & AV Equipment

1281 EXEC Corporate Users

1282 Engineering & Support Users

1283 All other Users

138 BYOD Wireless Devices

998 Guest Internet

999 Default VLAN with no connectivity to network resources. Used for initial authentication purposes.

VLAN 999 will be the ‘default vlan’ for each wired switchport and will require users/devices to be profiled and/or authenticated
before they are assigned to either the corporate wired VLAN or a BYOD wired VLAN. This process will be implemented using
dynamic segmentation with ArubaOS-Switch and ClearPass.
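A heavily abbreviated ArubaOS-Switch sketch of this port model, assuming user-based tunneling with ClearPass as the
RADIUS server, is shown below; the addresses and port ranges are hypothetical:

! Default VLAN for unauthenticated ports
vlan 999
   name INITIAL-AUTH
   untagged 1-48

! ClearPass as the RADIUS server for port-access authentication
radius-server host 10.254.1.40 key <<removed>>
aaa authentication port-access eap-radius
aaa port-access authenticator 1-48
aaa port-access authenticator active

! Tunnel authenticated users/devices to the mobility cluster (per-user tunneled node)
tunneled-node-server
   controller-ip 10.32.0.10
   mode role-based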

NOTE  Very granular policies can be crafted and implemented using Dynamic Segmentation. In those types of designs, it
      is very likely that additional VLANs would be used, as there is a 1:1 mapping of user roles to VLANs.

Mobility Services Block


With the overwhelming majority of users requiring wireless access, a reliable and redundant system is a business necessity.
The size of the user/device population calls for having a dedicated mobility services block. Leading practices call for
transitioning a mobility cluster to a dedicated aggregation switch block if the number of devices is greater than 4096. In this
VRD, the Mt. Rose facility is targeted to have fewer than 4096 devices and, as such, a dedicated mobility services block is not
required.

The overall mobility design calls for having Mobility Masters at HQ and Gold River, and each site will have a Mobility Cluster
sized to support the planned site device population. HQ and Gold River will also provide failover services should a local site
experience an outage. The HQ and Gold River clusters will each be sized to support a single remote-site failure scenario;
together, cluster sizing allows for the failure of two remote sites. Failure of a third site would lead to an extended network
outage. The network doesn't provide layer 2 connectivity between sites. During a failure of all mobility controllers at a
site, access points will reboot and connect to the mobility cluster at the failover site. This design limitation can be
mitigated by providing either a redundant set of controllers at each site or by adding additional Metro-E circuits. Dumars
Industries is aware of this limitation and is willing to accept this risk. For a remote site to experience such a failure,
both Metro-E switches and/or circuits would need to fail along with both of the aggregation switches and the entire local
mobility controller cluster. The diagram below depicts the steady-state and failed-state tunnels built between APs and
controllers as well as between wireless devices and controllers. Connectivity to the Failover Site Mobility Cluster will be
established once the local APs reboot.

Figure 70 - Mobility Controller and AP High Level Design

Guest internet access will be provided by configuring a guest SSID which will then be extended from the local mobility
controllers to a mobility controller cluster in the DMZ. All guest traffic is controlled by the DMZ mobility controllers;
however, the APs still terminate onto the local controllers. Between the local controllers and the DMZ controllers, L2 GRE
tunnels are built to bridge guest SSID traffic to the DMZ. From there, guest traffic can be handled by the DMZ mobility
controllers and be isolated from the internal network.
Tunnels are configured one direction at a time. The VRRP VIPs will be used as the tunnel endpoints for both sets of controllers.

Figure 71 – Guest Internet Access Diagram

NOTE  The Aruba Solutions Exchange (ase.arubanetworks.com) has a wizard to generate configurations for the design
      described above. The solution is called 'L2 GRE to DMZ Controller with Captive Portal SSID'.

The table below documents the mobility cluster active and standby configuration for each site.

Site Backup Site/Cluster

HQ Gold River

Gold River Headquarters

Squaw Valley Gold River

Kirkwood Headquarters

Mt. Rose Headquarters

Each of the Mobility Controllers will be attached to the supporting switch via a two-interface LAG, providing a
forwarding capacity of 20G to each controller. The mobility services block switches will be configured as a VSX pair and
will have multiple 40G links to the core switches. Configuration of VSX will be nearly identical to a wired aggregation
switch with the exception of the allowed VLANs on the dot1q trunks. For the Mt. Rose facility, the mobility controllers
will be attached to the collapsed core/aggregation switches as there is not a mobility services block at this location.

A dot1q trunk will be configured from the Mobility Aggregation switches (configured as a VSX pair, VSF stack, or a
backplane-stacked device pair) to each mobility controller. The dot1q trunk will transport all VLANs including the device
management VLAN for the environment.

Network Services
All network services will be provided by redundant systems at the HQ DC as well as by systems at the Gold River DC. The
services provided by the network are:
• Active Directory
• DNS
• DHCP
• NTP
• ClearPass
• Airwave
• Syslog
To implement a highly-available network, some of these services are hosted on clusters and/or multiple machines.

Service/System Description

AD1 (Provides DNS, DHCP and NTP) HQ Primary AD Host

AD2 (Provides DNS, DHCP and NTP) HQ Second AD Host

CCPM-HQ HQ ClearPass VM (subscriber)

CCPM-PUB HQ ClearPass Publisher

Airwave HQ AirWave

syslog-hq HQ Syslog

HQ-MM1 Headquarters Mobility Master 1

GDR-MM1 Gold River Mobility Master 1

AD3 (Provides DNS, DHCP and NTP) GDR Primary AD Host

AD4 (Provides DNS, DHCP and NTP) GDR Second AD Host

CCPM-GDR GDR ClearPass VM (subscriber)

Airwave GDR AirWave

syslog-gdr GDR Syslog

Figure 72 - Network Services Overview

Device & Network Management


All network device templates will have configuration elements to support management functions/services. The table below lists
the configuration elements.

Description

SNMP All devices will be configured to support SNMP operations including traps.

Logging All devices will be configured to send logs to two syslog hosts

NTP & Timezone All devices will be configured to obtain time from two NTP servers. The timezone will be
locally defined on each device.
Figure 73 - Device & Network Management

Network Instrumentation
All network device templates will have configuration elements to support network instrumentation. The table below lists
the configuration elements.

Description

sFlow Aggregation and Core devices will be configured to export sFlow for layer 3 interfaces
which connect to upstream devices/peers. Two sFlow collectors will be configured.
Figure 74 - Network Instrumentation

Network Automation
All network device templates will enable access via the REST API to allow automation tools and systems to interact with the
devices. This will also allow NAE Agents to interact with other network devices.

Description

Network Analytics Engine (NAE)      Core and Aggregation devices using ArubaOS-CX will leverage the
                                    Network Analytics Engine to provide agents that monitor, alert,
                                    and interact with the network in an automated fashion.

Automated Orchestration Frameworks  Automation frameworks utilizing Python and REST are fully
                                    capable of interacting with Aruba devices, applications, and
                                    tools, such as NAE.
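As an illustration, a short Python sketch against the ArubaOS-CX REST API is shown below; the v1 URI paths match early
10.x firmware and may need to be adjusted for later releases, and the address and credentials are placeholders:

# Sketch: cookie-based session against the AOS-CX REST API
import requests

SWITCH = "https://10.1.1.1"
session = requests.Session()

# Login creates a session cookie (credentials are placeholders)
session.post(f"{SWITCH}/rest/v1/login",
             params={"username": "admin", "password": "<<removed>>"},
             verify=False)

# Read basic system information as JSON
resp = session.get(f"{SWITCH}/rest/v1/system", verify=False)
print(resp.json())

# Always log out to free the session slot
session.post(f"{SWITCH}/rest/v1/logout", verify=False)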

Other Services
All network device templates will have configuration elements to the following services:

Description

TACACS+ All devices will be configured to support AAA via TACACS+ for administrative access.
Authentication, Authorization, and Accounting will be enabled whenever possible.

Device Banners Message of the day and Exec banners will be configured on all devices.
Figure 75 – Other Services

ADAPTING THIS CASE STUDY
The case study presented in this document was conceived to be adaptable in both size and focus. With respect to size, the Mt.
Rose facility is a comparatively small facility, and the design concepts and configurations for this facility could be used
for smaller or larger sites. Additional considerations when tailoring this example to specific environments include
centralizing the mobility controllers and/or distributing additional ClearPass subscribers to additional buildings.
The Metro-E network in this case study could be a layer three MPLS VPN service in which all sites have full-mesh connectivity.
This change would alter the number/location of mobility controllers and the dynamic segmentation design, as WAN transport for
dynamic segmentation is not recommended. The Metro-E network might not be part of the design at all; rather, all buildings
might be on the same campus. The primary difference this presents would be the location and placement of the mobility
controllers.

Port/Slot LAG Interface Diversity


In adopting the attached configurations for real-world deployments, LAGs/VSX configurations should be configured with
interfaces from multiple line cards to maximize system redundancy. This is not a requirement but it is recommended to build a
highly redundant solution as this further extends overall system HA when used in conjunction with VSX.

Incorporating VRFs
If you are planning on incorporating VRFs (VRF-lite) in your design, the sample configurations can be adapted by moving the
layer three functions from Routed Only Ports (ROPs) to SVIs and by associating the SVIs with the VRFs. One use-case for VRFs
would be to establish a management VRF, providing layer 3 segmentation between the users and devices attached to the network
and the network devices themselves. Syslog, AAA, SNMP, NTP, SSH, VSX keepalive, WebUI, and other services are VRF aware and
can be selectively enabled/disabled on specific VRFs.
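A minimal sketch of a management VRF on ArubaOS-CX follows; the VRF name, VLAN, and addresses are hypothetical:

vrf MGMT
! Move the management SVI into the VRF (attach before assigning the address)
interface vlan 10
    vrf attach MGMT
    ip address 10.2.0.1/24
! Bind VRF-aware services to the management VRF
ssh server vrf MGMT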

Reducing Required Number of Physical Interfaces


The design for large sites with VSX-enabled aggregation layer switches uses multiple interfaces bundled into LAGs to establish
a layer 3 path, an ISL link, and a VSX keepalive path. The design presented in this document was crafted to provide clear
delineation between interfaces performing functions at layer 2 and layer 3. To that end, the aggregation switches use 6
interfaces (each function is a LAG with two interfaces). If a need arises to reduce the number of physical interfaces
used, it is possible to build the layer 3 path (shown in the configs as LAG1) using SVIs and to transport an additional VLAN
across the ISL link. The keepalive function can also use any IP path between VSX peers and could be transported over the LAG2
and LAG3 interfaces as depicted in the sample configurations. The ISL and keepalive functions SHOULD NEVER use the same
physical interface(s), to ensure proper function in the event of an ISL failure.

Dynamic Segmentation
In this case study, all of the traffic save for IP Phones and AV equipment is being tunneled back to the mobility controllers.
To adapt this behavior to your environment, you may choose to perform local switching for some users/systems. This change will
require additional ClearPass configuration for tunneled and non-tunneled devices and will also require implementing
additional VLANs and the associated IP addressing and services (DHCP).

ClearPass Configuration
Several network features/functions including Dynamic Segmentation, 802.1X user and device authentication as configured in
this case study leverage ClearPass. Aruba recommends reviewing the ClearPass Design Documents to craft configurations
for your environment, The configurations crafted for this case study are simple authentication policies which should be modified
to include proper security controls for your production network. Two configurations were created for this VRD:

• MAC Authentication to support Access Points and other “infrastructure” devices including phones, and physical
security devices.
• Dot1X user authentication for both wired and wireless connected users.
Three roles were created to allow for the assignment of each user (executives, engineering, and general employees) to the
appropriate VLAN.

Adaptation Summary

In summary, this case study was designed to be adaptable and to address broad design requirements. Following the
guidelines presented in the building blocks section of this document will allow for scaling up or down as needed to best align
the design principles and practices to your specific environment. As always, please reach out on the Airheads community with
any questions.

BUILDING THE NETWORK


Appendix A of this VRD has configuration files for the HQ site and for the Mt. Rose site. All other sites have identical
configurations save for site specific addressing and the BGP inbound policy route-map. The configuration files contain
comments about various configuration stanzas. In adapting these configuration examples to real-world deployments many
elements will need to be updated including VLAN IDs, IP addressing, MTU, and route-maps/prefix lists.

DOCUMENT CONTRIBUTORS

The table below lists key contributors and reviewers of this document. The core TME team would like to acknowledge their
assistance in preparing, editing, and validating content and configurations in this document.

Name Role

Kevin Marshall Wireless TME

Makarios Moussa Wireless TME

Syed Ahmed Wireless TME

Justin Noonan Wired TME

Matt Fern Wired TME

Priyank Patel Wired TME

Vincent Giles Wired TME

Todd Allen Osterberg Wired TME

The team would also like to acknowledge the following Consulting Systems Engineers for their
assistance in reviewing the document:
Name Role

Kelly Small CSE

Chris Evans CSE

Deepak Kumar Singh CSE

APPENDIX A – LAB DEVICE CONFIGURATIONS

SWHQ-CORE1 Configuration
!Version ArubaOS-CX TL.10.01.0002
hostname SWHQ-CORE1
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************

!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-CORE1 // 8400 // loopback0 10.1.1.1/32 *
* *
* Headquarters Core Switch 1
* *
***********************************************************************

! NTP configuration including authentication and timezone


! configuration elements
ntp authentication
clock timezone us/pacific
ntp authentication-key 1 md5 ciphertext <<removed>>
ntp server 10.254.224.10 iburst
ntp server 10.254.124.10 iburst prefer

! Syslog configuration
logging 10.254.120.10 udp severity warning

logging 10.254.224.10 udp severity warning

! Sample sFlow configuration exporting to two collectors


sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.1.1.1
!
!
!

! fallback local account if TACACS is not reachable/functioning


user admin group administrators password <<removed>>

! Define both TACACS hosts


tacacs-server host HQ-TACACS key ciphertext <<removed>>
tacacs-server host GDR-TACACS key ciphertext <<removed>>

! Place both TACACS hosts in a group called TACACS


aaa group server tacacs TACACS
server 10.254.1.32
server 10.254.128.32

! enable authentication via TACACS


aaa authentication login default group TACACS

! enable command authorization via TACACS note we fail back to


! ‘none’ if the servers are not reachable/available
aaa authorization commands default group TACACS none

! enable command accounting via TACACS for the group TACACS


aaa accounting all default start-stop group TACACS

! enable SNMPv2c
snmp-server vrf default
snmp-server system-description SWHQ-CORE1

snmp-server system-location HQ MDF // Row 6 Rack 6
snmp-server system-contact netops@dumarsinc.com
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!

! enable SSH from the default VRF


ssh server vrf default

! enable OSPF and define a process ID


router ospf 1
! define the OSPF router ID to match
! the loopback address
router-id 10.1.1.1
! set the max-metric on start-up to exclude the device
! from routing via OSPF until <check time> seconds after
! system boot
max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! define the OSPF area ID
area 0.0.0.0

! configure all VLANS and provide names for each vlan


! note that vsx-sync is enabled for VLANS participating in the
! vsx configuration
vlan 1
! define the QoS queuing profile
! note the swapping of queue 5 and local priority 6 along with
! queue 7 to local-priority 5
! this is done to align with the RFC 4594 QoS model
qos queue-profile QOS_PROFILE_OUT

map queue 0 local-priority 0
map queue 1 local-priority 1
map queue 2 local-priority 2
map queue 3 local-priority 3
map queue 4 local-priority 4
map queue 5 local-priority 6
map queue 6 local-priority 7
map queue 7 local-priority 5
name queue 7 VOICE

! define a QoS schedule profile and adjust weights of each


! queue as well as define a strict priority queue to support
! voice traffic
qos schedule-profile QOS_OUT
wfq queue 0 weight 1
wfq queue 1 weight 1
wfq queue 2 weight 1
wfq queue 3 weight 1
wfq queue 4 weight 1
wfq queue 5 weight 1
wfq queue 6 weight 1
strict queue 7

! attach the queue profile and schedule profiles


apply qos queue-profile QOS_PROFILE_OUT schedule-profile QOS_OUT

! globally trust DSCP on received packets


qos trust dscp

! remap DSCP 40-45 and 47 to local priority 6


qos dscp-map 40 local-priority 6 color green name CS5
qos dscp-map 41 local-priority 6 color green
qos dscp-map 42 local-priority 6 color green
qos dscp-map 43 local-priority 6 color green
qos dscp-map 44 local-priority 6 color green
qos dscp-map 45 local-priority 6 color green
qos dscp-map 47 local-priority 6 color green

interface lag 1

description LAG to swhq-core-2
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.1/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 2
description to SWHQ-WAN1
no shutdown
ip mtu 2048
ip address 10.1.252.6/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip ospf network point-to-point

interface lag 3
description to SWHQ-WAN2
no shutdown
ip mtu 2048
ip address 10.1.252.22/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>

interface lag 20
description SWHQ-AGG1A

no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.25/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 21
description to SWHQ-AGG1B
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.37/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 22
description SWHQ-AGG2A
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.65/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 23
description to SWHQ-AGG2B
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.69/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 24
description to SWHQ-MAGG1
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.45/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 25
description to SWHQ-MAGG2
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.49/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point

ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 31
description SWHQ-DC1
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.86/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 32
description to SWHQ-DC2
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.89/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface 1/1/1
description to SWHQ-AGG1B
no shutdown
mtu 2068
lag 21
interface 1/1/2
description to SWHQ-AGG1B

no shutdown
mtu 2068
lag 21
interface 1/1/3
description to SWHQ-AGG1A
no shutdown
mtu 2068
lag 20
interface 1/1/4
description to SWHQ-AGG1A
no shutdown
mtu 2068
lag 20
interface 1/1/5
description to SWHQ-WAN2
no shutdown
mtu 2068
lag 3
interface 1/1/11
description to SWHQ-MAGG1A
no shutdown
mtu 2068
lag 24
interface 1/1/12
description to SWHQ-MAGG1B
no shutdown
mtu 2068
lag 25
interface 1/1/13
description to SWHQ-MAGG1A
no shutdown
mtu 2068
lag 24
interface 1/1/14
description to SWHQ-MAGG1B
no shutdown
mtu 2068
lag 25
interface 1/1/16

description to SWHQ-WAN1
no shutdown
mtu 2068
lag 2

interface 1/1/49
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 1
interface 1/1/50
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 1

interface loopback 0
ip address 10.1.1.1/32
ip ospf 1 area 0.0.0.0

router pim
enable
rp-candidate source-ip-interface lag1
rp-candidate group-prefix 224.0.0.0/4
bsr-candidate source-ip-interface lag1
bsr-candidate priority 1

https-server rest access-mode read-write


https-server vrf mgmt

SWHQ-CORE2 Configuration

!Version ArubaOS-CX TL.10.01.0002


hostname SWHQ-CORE2
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *

* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************

!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-CORE2 // 8400 // loopback0 10.1.1.2/32 *
* *
* Headquarters Core Switch 2
* *
***********************************************************************

! NTP configuration including authentication and timezone


! configuration elements
ntp authentication
clock timezone us/pacific
ntp authentication-key 1 md5 ciphertext <<removed>>
ntp server 10.254.224.10 iburst
ntp server 10.254.124.10 iburst prefer

! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning

! Sample sFlow configuration exporting to two collectors


sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.1.1.2
!
!
!

! fallback local account if TACACS is not reachable/functioning
user admin group administrators password <<removed>>

! Define both TACACS hosts


tacacs-server host HQ-TACACS key ciphertext <<removed>>
tacacs-server host GDR-TACACS key ciphertext <<removed>>

! Place both TACACS hosts in a group called TACACS


aaa group server tacacs TACACS
server 10.254.1.32
server 10.254.128.32

! enable authentication via TACACS


aaa authentication login default group TACACS

! enable command authorization via TACACS note we fail back to


! ‘none’ if the servers are not reachable/available
aaa authorization commands default group TACACS none

! enable command accounting via TACACS for the group TACACS


aaa accounting all default start-stop group TACACS

! enable SNMPv2c
snmp-server vrf default
snmp-server system-description SWHQ-CORE2
snmp-server system-location HQ MDF // Row 6 Rack 6
snmp-server system-contact netops@dumarsinc.com
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!

! enable SSH from the default VRF


ssh server vrf default

! enable OSPF and define a process ID


router ospf 1

! define the OSPF router ID to match
! the loopback address
router-id 10.1.1.2
! set the max-metric on start-up to exclude the device
! from routing via OSPF until <check time> seconds after
! system boot
max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! define the OSPF area ID
area 0.0.0.0

! configure all VLANS and provide names for each vlan


! note that vsx-sync is enabled for VLANS participating in the
! vsx configuration
vlan 1
! define the QoS queuing profile
! note the swapping of queue 5 and local priority 6 along with
! queue 7 to local-priority 5
! this is done to align with the RFC 4594 QoS model
qos queue-profile QOS_PROFILE_OUT
map queue 0 local-priority 0
map queue 1 local-priority 1
map queue 2 local-priority 2
map queue 3 local-priority 3
map queue 4 local-priority 4
map queue 5 local-priority 6
map queue 6 local-priority 7
map queue 7 local-priority 5
name queue 7 VOICE

! define a QoS schedule profile and adjust weights of each


! queue as well as define a strict priority queue to support
! voice traffic
qos schedule-profile QOS_OUT

wfq queue 0 weight 1
wfq queue 1 weight 1
wfq queue 2 weight 1
wfq queue 3 weight 1
wfq queue 4 weight 1
wfq queue 5 weight 1
wfq queue 6 weight 1
strict queue 7

! attach the queue profile and schedule profiles


apply qos queue-profile QOS_PROFILE_OUT schedule-profile QOS_OUT

! globally trust DSCP on received packets


qos trust dscp

! remap DSCP 40-45 and 47 to local priority 6


qos dscp-map 40 local-priority 6 color green name CS5
qos dscp-map 41 local-priority 6 color green
qos dscp-map 42 local-priority 6 color green
qos dscp-map 43 local-priority 6 color green
qos dscp-map 44 local-priority 6 color green
qos dscp-map 45 local-priority 6 color green
qos dscp-map 47 local-priority 6 color green

interface lag 1
description LAG to SWHQ-CORE1
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.2/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 2

description to SWHQ-WAN2
no shutdown
ip mtu 2048
ip address 10.1.252.18/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip ospf network point-to-point

interface lag 3
description to SWHQ-WAN1
no shutdown
ip mtu 2048
ip address 10.1.252.10/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>

interface lag 20
description SWHQ-AGG1A
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.29/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 21
description to SWHQ-AGG1B

no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.41/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 22
description SWHQ-AGG2A
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.73/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 23
description to SWHQ-AGG2B
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.77/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 24
description to SWHQ-MAGG1
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.53/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 25
description to SWHQ-MAGG2
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.57/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 31
description SWHQ-DC1
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.93/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point

ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface lag 32
description to SWHQ-DC2
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.97/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

interface 1/1/1
description to SWHQ-AGG1B
no shutdown
mtu 2068
lag 21
interface 1/1/2
description to SWHQ-AGG1B
no shutdown
mtu 2068
lag 21
interface 1/1/3
description to SWHQ-AGG1A
no shutdown
mtu 2068
lag 20
interface 1/1/4
description to SWHQ-AGG1A
no shutdown
mtu 2068
lag 20
interface 1/1/5

description to SWHQ-WAN2
no shutdown
mtu 2068
lag 2
interface 1/1/11
description to SWHQ-MAGG1A
no shutdown
mtu 2068
lag 24
interface 1/1/12
description to SWHQ-MAGG1B
no shutdown
mtu 2068
lag 25
interface 1/1/13
description to SWHQ-MAGG1A
no shutdown
mtu 2068
lag 24
interface 1/1/14
description to SWHQ-MAGG1B
no shutdown
mtu 2068
lag 25
interface 1/1/16
description to SWHQ-WAN1
no shutdown
mtu 2068
lag 3

interface 1/1/49
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 1
interface 1/1/50
description to SWHQ-CORE1
no shutdown
mtu 2068

lag 1

interface loopback 0
ip address 10.1.1.2/32
ip ospf 1 area 0.0.0.0

router pim
enable
rp-candidate source-ip-interface lag1
rp-candidate group-prefix 224.0.0.0/4
bsr-candidate source-ip-interface lag1
bsr-candidate priority 2
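! with this stanza the core offers itself as candidate RP for
! the full 224.0.0.0/4 multicast range and as candidate BSR,
! sourcing both from the lag 1 interface address; the second
! core is assumed to run a mirrored stanza with a different
! bsr-candidate priority so the BSR election is deterministic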

https-server rest access-mode read-write


https-server vrf mgmt

SWHQ-AGG1A Configuration
!Version ArubaOS-CX TL.10.01.0002
hostname SWHQ-AGG1A
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************

!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-AGG1A // 8320 // loopback0 10.1.2.1/32 *
* *
* Headquarters Bldg Aggregation Block Switch 1 - VSX Pair *
* Pair supports Floors 1-7. *
* *
***********************************************************************

! NTP configuration including authentication and timezone


! configuration elements
ntp authentication
clock timezone us/pacific
ntp authentication-key 1 md5 ciphertext <<removed>>
ntp server 10.254.224.10 iburst
ntp server 10.254.124.10 iburst prefer

! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning

! define a VRF for the VSX keepalive


vrf VSX_KEEPALIVE

! Sample sFlow configuration exporting to two collectors
sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.1.2.1
!
!
!

! fallback local account if TACACS is not reachable/functioning


user admin group administrators password <<removed>>

! Define both TACACS hosts


tacacs-server host HQ-TACACS key ciphertext <<removed>>
tacacs-server host GDR-TACACS key ciphertext <<removed>>

! Place both TACACS hosts in a group called TACACS


aaa group server tacacs TACACS
server 10.254.1.32
server 10.254.128.32

! enable authentication via TACACS


aaa authentication login default group TACACS

! enable command authorization via TACACS note we fail back to


! ‘none’ if the servers are not reachable/available
aaa authorization commands default group TACACS none

! enable command accounting via TACACS for the group TACACS


aaa accounting all default start-stop group TACACS

! enable SNMPv2c
snmp-server vrf default
snmp-server system-description HQSW-AGG1A
snmp-server system-location HQ MDF // Row 6 Rack 8
snmp-server system-contact netops@dumarsinc.com

snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!

! enable SSH from the default VRF


ssh server vrf default

! enable OSPF and define a process ID


router ospf 1
! define the OSPF router ID to match
! the loopback address
router-id 10.1.2.1
! set the max-metric on start-up to exclude the device
! from routing via OSPF until <check time> seconds after
! system boot
max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! define the OSPF area ID
area 0.0.0.0

! configure all VLANS and provide names for each vlan


! note that vsx-sync is enabled for VLANS participating in the
! vsx configuration
vlan 1
vlan 10
name MGMT for L2 attached
vsx-sync
vlan 20
name IoT - bldg control
vsx-sync
vlan 30

name PHYSEC Devices
vsx-sync
vlan 40
name PHONES-AV Devices
vsx-sync
vlan 999
name NO_ACCESS_VLAN
vsx-sync
vlan 1281
name EXEC_USERS
vsx-sync
vlan 1282
name ENGINEERING_SUPPORT_USERS
vsx-sync
vlan 1283
name DEFAULT_USERS
vsx-sync

! define the QOS queuing profile


! note that local-priority 6 is mapped to queue 5 and
! local-priority 5 to queue 7 (the voice queue)
! this is done to align with the RFC 4594 QoS model
qos queue-profile QOS_PROFILE_OUT
map queue 0 local-priority 0
map queue 1 local-priority 1
map queue 2 local-priority 2
map queue 3 local-priority 3
map queue 4 local-priority 4
map queue 5 local-priority 6
map queue 6 local-priority 7
map queue 7 local-priority 5
name queue 7 VOICE

! define a QoS schedule profile and adjust weights of each
! queue as well as define a strict priority queue to support
! voice traffic
qos schedule-profile QOS_OUT
dwrr queue 0 weight 1
dwrr queue 1 weight 1

dwrr queue 2 weight 1
dwrr queue 3 weight 1
dwrr queue 4 weight 1
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7

! attach the queue profile and schedule profiles


apply qos queue-profile QOS_PROFILE_OUT schedule-profile QOS_OUT

! globally trust DSCP on received packets


qos trust dscp

! remap DSCP 40-45 and 47 to local priority 6


qos dscp-map 40 local-priority 6 color green name CS5
qos dscp-map 41 local-priority 6 color green
qos dscp-map 42 local-priority 6 color green
qos dscp-map 43 local-priority 6 color green
qos dscp-map 44 local-priority 6 color green
qos dscp-map 45 local-priority 6 color green
qos dscp-map 47 local-priority 6 color green

! configure LAG to peer device (east-west link)


interface lag 1
description L3 to SWHQ-AGG1B
no shutdown
! enable L3 counters
l3-counters
! define the IP MTU
ip mtu 2048
ip address 10.1.252.33/30
! enable active LACP
lacp mode active
qos trust dscp

! participate in OSPF process 1 in area 0.0.0.0


ip ospf 1 area 0.0.0.0
! disable passive to form OSPF adjacencies

no ip ospf passive
! define the OSPF network type as p2p to optimize
! as no DR/BDR is needed
ip ospf network point-to-point
! enable OSPF authentication via MD5 and define an
! authentication key
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
! enable PIM sparse mode for multicast forwarding
ip pim-sparse enable
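! once both peers are configured, the adjacency over this p2p
! link can be verified (assuming the standard ArubaOS-CX show
! commands) with 'show ip ospf neighbors', which should list
! the peer in FULL state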

! see comments in LAG 1 configuration for further description of


! commands
interface lag 2
description to SWHQ-CORE1
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.26/30
lacp mode active

ip ospf 1 area 0.0.0.0


no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

! see comments in LAG 1 configuration for further description of


! commands
interface lag 3
description TO SWHQ-CORE2
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.30/30
lacp mode active

ip ospf 1 area 0.0.0.0

no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

! configure the ISL link. Note this is an L2 LAG


! allow the VLANS tied to downstream VSX/MCLAG links
! to flow across the ISL link
interface lag 10
vsx-sync vlans
description ISL LAG to swhq-agg1b
no shutdown
no routing
vlan trunk native 1 tag
vlan trunk allowed 1,10,20,30,40,138,997-999,1281-1283
lacp mode active
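! 'vlan trunk native 1 tag' causes even native-VLAN frames to
! be sent tagged on the ISL, so every VLAN crossing the link
! is explicitly identified by its 802.1Q tag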

! configure VSX keepalive in the VSX_KEEPALIVE vrf


interface lag 11
vrf attach VSX_KEEPALIVE
ip address 192.168.1.1/24
lacp mode active

! configuration sample for connecting to an L3 device


! in this case, this is an L3 access switch
interface lag 30
description to swacc-a0 // L3
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.101/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip pim-sparse enable

interface lag 41 multi-chassis


! sync the VLAN configuration with the VSX peer device

vsx-sync vlans
description TO swhq-acc-a1-1
! enable the interface to forward frames
no shutdown
! disable routing and make this an L2 lag
no routing
! leave the default VLAN as 1 which is NOT used for
! production traffic
vlan trunk native 1
! allow appropriate VLANS to traverse the dot1q link
vlan trunk allowed 10,20,30,40,138,997-999,1281-1283
! VSX requires LACP active mode
lacp mode active
! Loop protection is enabled
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283
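! loop-protect sends probe frames on the listed VLANs and, with
! default settings, disables the port if one of its own probes
! is received back, guarding against loops introduced below the
! access layer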

interface lag 42 multi-chassis


vsx-sync vlans
description TO swhq-acc-a1-2
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 10,20,30,40,138,997-999,1281-1283
lacp mode active
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283

interface lag 43 multi-chassis


vsx-sync vlans
description TO swhq-acc-a1-3
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 10,20,30,40,100-104,138,997-999,1281-1283
lacp mode active
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283

! note additional multi-chassis interfaces are omitted from this


! document. However, they are identical in this design to
! configs shown for lag 41.

! physical interface configuration
! note the MTU is 20 bytes larger than the IP MTU defined
! on L3 lag interfaces
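! (for example, the 2048-byte IP MTU plus 20 bytes of headroom
! for the L2 encapsulation yields the 2068-byte port MTU used
! below)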

interface 1/1/1
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2

interface 1/1/2
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2
interface 1/1/3
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3
interface 1/1/4
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3

! interfaces for VSX connections to access layer switches

interface 1/1/17
description to swhq-acc-a1-1
no shutdown
lag 41
interface 1/1/18
description to swhq-acc-a1-2
no shutdown
lag 42
interface 1/1/19
description to swhq-acc-a1-3

no shutdown
lag 43

! note that additional physical interfaces for VSX forwarding


! interfaces are omitted from the document but are present
! in the full configuration

interface 1/1/47
description VSX keepalive
no shutdown
lag 11
interface 1/1/48
description VSX keepalive
no shutdown
lag 11
interface 1/1/49
description to SWHQ-AGG1B
no shutdown
lag 1
interface 1/1/50
description to SWHQ-AGG1B
no shutdown
lag 1
interface 1/1/51
description to SWHQ-AGG1B for ISL link
no shutdown
lag 10
interface 1/1/52
description to SWHQ-AGG1B for ISL link
no shutdown
lag 10

! create a loopback interface and assign a /32


! for device management. Add the loopback to OSPF
interface loopback 0
ip address 10.1.2.1/32
ip ospf 1 area 0.0.0.0

! note that no L3 is configured for VLAN 1
interface vlan1

! define the VLANS for the design. Note that for the
! production design there are multiple aggregation
! switches supporting some buildings/floors
! vlan numbers are consistent but IP addresses
! change to support the address plan

! note we do NOT use the 'no ip ospf passive' command on
! these SVIs as there is no need to have an OSPF adjacency
! established on these VLANS.
interface vlan10
! VSX sync the active gateway config to secondary
! VSX peer device
vsx-sync active-gateways
description l2 ACCESS switch & AP & l2 device mgmt
ip address 10.2.127.253/17
! the same virtual MAC is reused in this design
! for all of the VSX interfaces
active-gateway ip 10.2.127.254 mac 00:00:00:10:11:12
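! with active-gateway, both VSX peers answer ARP for the
! virtual IP and forward traffic locally, providing first-hop
! redundancy without a VRRP instance or failover event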
! forward DHCP and other broadcast packets and send
! copies of DHCP requests to CPPM for profiling
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
! configure the interface to be included in
! the OSPF process 1 and area 0.0.0.0
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable

interface vlan20
vsx-sync active-gateways
description IOT Devices
ip address 172.16.31.253/19
active-gateway ip 172.16.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64

ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan30
vsx-sync active-gateways
description Physical Security Devices
ip address 172.16.63.253/19
active-gateway ip 172.16.63.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan40
vsx-sync active-gateways
description Phones-AV equipment
ip address 172.16.95.253/19
active-gateway ip 172.16.95.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan138
vsx-sync active-gateways
vrf attach MOBILITY
description Corp BYOD
ip address 10.32.223.253/19
active-gateway ip 10.32.223.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64

ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan999
interface vlan1281
vsx-sync active-gateways
description EXEC Corp Users
ip address 10.32.31.253/19
active-gateway ip 10.32.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1282
description Engineering & Support Users
ip address 10.32.95.253/19
active-gateway ip 10.32.95.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1283
description All other Users
ip address 10.32.159.253/19
active-gateway ip 10.32.159.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable

ip pim-sparse enable
! enable VSX
vsx
! define the LAG used for the ISL link
inter-switch-link lag 10
! define the VSX role for this device
role primary
! enable VSX keepalive
keepalive peer 192.168.1.2 source 192.168.1.1 vrf VSX_KEEPALIVE
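! once the peer is up, VSX health can be checked from either
! switch (assuming the standard ArubaOS-CX show commands), e.g.
! 'show vsx status' for ISL/keepalive state and
! 'show lacp interfaces multi-chassis' for the downstream LAGs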

! define the domain name and name servers


! used by this device
ip dns domain-name dumarsinc.com
ip dns server-address 10.254.10.10
ip dns server-address 10.254.130.10

! enable PIM
router pim
enable
! enable HTTPS for the WebUI and enable RESTAPI access
https-server rest access-mode read-write
https-server vrf default

SWHQ-AGG1B Configuration

!Version ArubaOS-CX TL.10.01.0002


hostname SWHQ-AGG1B
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************

!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-AGG1B // 8320 // loopback0 10.1.2.2/32 *
* *
* Headquarters Bldg Aggregation Block Switch 1 - VSX Pair *
* Pair supports Floors 1-7. *
* *
***********************************************************************

! NTP configuration including authentication and timezone


! configuration elements
ntp authentication
clock timezone us/pacific
ntp authentication-key 1 md5 ciphertext <<removed>>
ntp server 10.254.224.10 iburst
ntp server 10.254.124.10 iburst prefer

! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning

! define a VRF for the VSX keepalive

vrf VSX_KEEPALIVE

! Sample sFlow configuration exporting to two collectors


sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.1.2.2
!
!
!

! fallback local account if TACACS is not reachable/functioning


user admin group administrators password <<removed>>

! Define both TACACS hosts


tacacs-server host HQ-TACACS key ciphertext <<removed>>
tacacs-server host GDR-TACACS key ciphertext <<removed>>

! Place both TACACS hosts in a group called TACACS


aaa group server tacacs TACACS
server 10.254.1.32
server 10.254.128.32

! enable authentication via TACACS


aaa authentication login default group TACACS

! enable command authorization via TACACS note we fail back to


! ‘none’ if the servers are not reachable/available
aaa authorization commands default group TACACS none

! enable command accounting via TACACS for the group TACACS


aaa accounting all default start-stop group TACACS

! enable SNMPv2c
snmp-server vrf default
snmp-server system-description HQSW-AGG1B
snmp-server system-location HQ MDF // Row 6 Rack 8

snmp-server system-contact netops@dumarsinc.com
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!

! enable SSH from the default VRF


ssh server vrf default

! enable OSPF and define a process ID


router ospf 1
! define the OSPF router ID to match
! the loopback address
router-id 10.1.2.2
! set the max-metric on start-up to exclude the device
! from routing via OSPF until <check time> seconds after
! system boot
max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! define the OSPF area ID
area 0.0.0.0

! configure all VLANS and provide names for each vlan


! note that vsx-sync is enabled for VLANS participating in the
! vsx configuration
vlan 1
vlan 10
name MGMT for L2 attached
vsx-sync
vlan 20
name IoT - bldg control
vsx-sync

vlan 30
name PHYSEC Devices
vsx-sync
vlan 40
name PHONES-AV Devices
vsx-sync
vlan 999
name NO_ACCESS_VLAN
vsx-sync
vlan 1281
name EXEC_USERS
vsx-sync
vlan 1282
name ENGINEERING_SUPPORT_USERS
vsx-sync
vlan 1283
name DEFAULT_USERS
vsx-sync

! define the QOS queuing profile


! note that local-priority 6 is mapped to queue 5 and
! local-priority 5 to queue 7 (the voice queue)
! this is done to align with the RFC 4594 QoS model
qos queue-profile QOS_PROFILE_OUT
map queue 0 local-priority 0
map queue 1 local-priority 1
map queue 2 local-priority 2
map queue 3 local-priority 3
map queue 4 local-priority 4
map queue 5 local-priority 6
map queue 6 local-priority 7
map queue 7 local-priority 5
name queue 7 VOICE

! define a QoS schedule profile and adjust weights of each
! queue as well as define a strict priority queue to support
! voice traffic
qos schedule-profile QOS_OUT
dwrr queue 0 weight 1

dwrr queue 1 weight 1
dwrr queue 2 weight 1
dwrr queue 3 weight 1
dwrr queue 4 weight 1
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7

! attach the queue profile and schedule profiles


apply qos queue-profile QOS_PROFILE_OUT schedule-profile QOS_OUT

! globally trust DSCP on received packets


qos trust dscp

! remap DSCP 40-45 and 47 to local priority 6


qos dscp-map 40 local-priority 6 color green name CS5
qos dscp-map 41 local-priority 6 color green
qos dscp-map 42 local-priority 6 color green
qos dscp-map 43 local-priority 6 color green
qos dscp-map 44 local-priority 6 color green
qos dscp-map 45 local-priority 6 color green
qos dscp-map 47 local-priority 6 color green

! configure LAG to peer device (east-west link)


interface lag 1
description L3 to SWHQ-AGG1A
no shutdown
! enable L3 counters
l3-counters
! define the IP MTU
ip mtu 2048
ip address 10.1.252.34/30
! enable active LACP
lacp mode active
qos trust dscp

! participate in OSPF process 1 in area 0.0.0.0


ip ospf 1 area 0.0.0.0

! disable passive to form OSPF adjacencies
no ip ospf passive
! define the OSPF network type as p2p to optimize
! as no DR/BDR is needed
ip ospf network point-to-point
! enable OSPF authentication via MD5 and define an
! authentication key
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
! enable PIM sparse mode for multicast forwarding
ip pim-sparse enable

! see comments in LAG 1 configuration for further description of


! commands
interface lag 2
description to SWHQ-CORE1
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.38/30
lacp mode active

ip ospf 1 area 0.0.0.0


no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

! see comments in LAG 1 configuration for further description of


! commands
interface lag 3
description TO SWHQ-CORE2
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.42/30
lacp mode active

ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

! configure the ISL link. Note this is an L2 LAG


! allow the VLANS tied to downstream VSX/MCLAG links
! to flow across the ISL link
interface lag 10
vsx-sync vlans
description ISL LAG to swhq-agg1a
no shutdown
no routing
vlan trunk native 1 tag
vlan trunk allowed 1,10,20,30,40,138,997-999,1281-1283
lacp mode active

! configure VSX keepalive in the VSX_KEEPALIVE vrf


interface lag 11
vrf attach VSX_KEEPALIVE
ip address 192.168.1.2/24
lacp mode active

interface lag 41 multi-chassis


! sync the VLAN configuration with the VSX peer device
vsx-sync vlans
description TO swhq-acc-a1-1
! enable the interface to forward frames
no shutdown
! disable routing and make this an L2 lag
no routing
! leave the default VLAN as 1 which is NOT used for
! production traffic
vlan trunk native 1
! allow appropriate VLANS to traverse the dot1q link
vlan trunk allowed 10,20,30,40,138,997-999,1281-1283
! VSX requires LACP active mode

lacp mode active
! Loop protection is enabled
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283

interface lag 42 multi-chassis


vsx-sync vlans
description TO swhq-acc-a1-2
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 10,20,30,40,138,997-999,1281-1283
lacp mode active
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283

interface lag 43 multi-chassis


vsx-sync vlans
description TO swhq-acc-a1-3
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 10,20,30,40,100-104,138,997-999,1281-1283
lacp mode active
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283

! note additional multi-chassis interfaces are omitted from this


! document. However, they are identical in this design to
! configs shown for lag 41.

! physical interface configuration


! note the MTU is 20 bytes larger than the IP MTU defined
! on L3 lag interfaces

interface 1/1/1
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2

interface 1/1/2

description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2
interface 1/1/3
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3
interface 1/1/4
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3

! interfaces for VSX connections to access layer switches

interface 1/1/17
description to swhq-acc-a1-1
no shutdown
lag 41
interface 1/1/18
description to swhq-acc-a1-2
no shutdown
lag 42
interface 1/1/19
description to swhq-acc-a1-3
no shutdown
lag 43

! note that additional physical interfaces for VSX forwarding


! interfaces are omitted from the document but are present
! in the full configuration

interface 1/1/47
description VSX keepalive
no shutdown
lag 11

interface 1/1/48
description VSX keepalive
no shutdown
lag 11
interface 1/1/49
description to SWHQ-AGG1A
no shutdown
lag 1
interface 1/1/50
description to SWHQ-AGG1A
no shutdown
lag 1
interface 1/1/51
description to SWHQ-AGG1A for ISL link
no shutdown
lag 10
interface 1/1/52
description to SWHQ-AGG1A for ISL link
no shutdown
lag 10

! create a loopback interface and assign a /32


! for device management. Add the loopback to OSPF
interface loopback 0
ip address 10.1.2.2/32
ip ospf 1 area 0.0.0.0

! note that no L3 is configured for VLAN 1


interface vlan1

! define the VLANS for the design. Note that for the
! production design there are multiple aggregation
! switches supporting some buildings/floors
! vlan numbers are consistent but IP addresses
! change to support the address plan

! note we do NOT use the 'no ip ospf passive' command on
! these SVIs as there is no need to have an OSPF adjacency
! established on these VLANS.

interface vlan10
! VSX sync the active gateway config to secondary
! VSX peer device
vsx-sync active-gateways
description l2 ACCESS switch & AP & l2 device mgmt
ip address 10.2.127.252/17
! the same virtual MAC is reused in this design
! for all of the VSX interfaces
active-gateway ip 10.2.127.254 mac 00:00:00:10:11:12
! forward DHCP and other broadcast packets and send
! copies of DHCP requests to CPPM for profiling
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
! configure the interface to be included in
! the OSPF process 1 and area 0.0.0.0
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable

interface vlan20
vsx-sync active-gateways
description IOT Devices
ip address 172.16.31.252/19
active-gateway ip 172.16.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan30
vsx-sync active-gateways
description Physical Security Devices
ip address 172.16.63.252/19
active-gateway ip 172.16.63.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0

ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan40
vsx-sync active-gateways
description Phones-AV equipment
ip address 172.16.95.252/19
active-gateway ip 172.16.95.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1281
vsx-sync active-gateways
description EXEC Corp Users
ip address 10.32.31.252/19
active-gateway ip 10.32.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1282
description Engineering & Support Users
ip address 10.32.95.252/19
active-gateway ip 10.32.95.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1283

description All other Users
ip address 10.32.159.252/19
active-gateway ip 10.32.159.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
! enable VSX
vsx
! define the LAG used for the ISL link
inter-switch-link lag 10
! define the VSX role for this device
role secondary
! enable VSX keepalive
keepalive peer 192.168.1.1 source 192.168.1.2 vrf VSX_KEEPALIVE

! define the domain name and name servers


! used by this device
ip dns domain-name dumarsinc.com
ip dns server-address 10.254.10.10
ip dns server-address 10.254.130.10

! enable PIM
router pim
enable
! enable HTTPS for the WebUI and enable RESTAPI access
https-server rest access-mode read-write
https-server vrf default

SWHQ-MAGG1A Configuration
!Version ArubaOS-CX TL.10.01.0002
hostname SWHQ-MAGG1A
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************

!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-MAGG1A // 8320 // loopback0 10.1.3.1/32 *
* *
* Headquarters Bldg Mobility Agg Block Switch 1 - VSX Pair *
* *
***********************************************************************

! NTP configuration including authentication and timezone


! configuration elements
ntp authentication
clock timezone us/pacific
ntp authentication-key 1 md5 ciphertext <<removed>>
ntp server 10.254.224.10 iburst
ntp server 10.254.124.10 iburst prefer

! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning

! define a VRF for the VSX keepalive


vrf VSX_KEEPALIVE

! Sample sFlow configuration exporting to two collectors
sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.1.3.1
!
!
!

! fallback local account if TACACS is not reachable/functioning


user admin group administrators password <<removed>>

! Define both TACACS hosts


tacacs-server host HQ-TACACS key ciphertext <<removed>>
tacacs-server host GDR-TACACS key ciphertext <<removed>>

! Place both TACACS hosts in a group called TACACS


aaa group server tacacs TACACS
server 10.254.1.32
server 10.254.128.32

! enable authentication via TACACS


aaa authentication login default group TACACS

! enable command authorization via TACACS note we fail back to


! ‘none’ if the servers are not reachable/available
aaa authorization commands default group TACACS none

! enable command accounting via TACACS for the group TACACS


aaa accounting all default start-stop group TACACS

! enable SNMPv2c
snmp-server vrf default
snmp-server system-description HQSW-MAGG1A
snmp-server system-location HQ MDF // Row 6 Rack 9
snmp-server system-contact netops@dumarsinc.com

snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!

! enable SSH from the default VRF


ssh server vrf default

! enable OSPF and define a process ID


router ospf 1
! define the OSPF router ID to match
! the loopback address
router-id 10.1.3.1
! set the max-metric on start-up to exclude the device
! from routing via OSPF until <check time> seconds after
! system boot
max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! define the OSPF area ID
area 0.0.0.0

! configure all VLANS and provide names for each vlan


! note that vsx-sync is enabled for VLANS participating in the
! vsx configuration
vlan 1
vlan 10
name MGMT for L2 attached
vsx-sync
vlan 20
name IoT - bldg control
vsx-sync
vlan 30

name PHYSEC Devices
vsx-sync
vlan 40
name PHONES-AV Devices
vsx-sync
vlan 999
name NO_ACCESS_VLAN
vsx-sync
vlan 1281
name EXEC_USERS
vsx-sync
vlan 1282
name ENGINEERING_SUPPORT_USERS
vsx-sync
vlan 1283
name DEFAULT_USERS
vsx-sync

! define the QOS queuing profile


! note that local-priority 6 is mapped to queue 5 and
! local-priority 5 to queue 7 (the voice queue)
! this is done to align with the RFC 4594 QoS model
qos queue-profile QOS_PROFILE_OUT
map queue 0 local-priority 0
map queue 1 local-priority 1
map queue 2 local-priority 2
map queue 3 local-priority 3
map queue 4 local-priority 4
map queue 5 local-priority 6
map queue 6 local-priority 7
map queue 7 local-priority 5
name queue 7 VOICE

! define a QoS schedule profile and adjust weights of each


! queue as well as define a strict priority queue to support
! voice traffic
qos schedule-profile QOS_OUT
dwrr queue 0 weight 1
dwrr queue 1 weight 1

dwrr queue 2 weight 1
dwrr queue 3 weight 1
dwrr queue 4 weight 1
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7

! attach the queue profile and schedule profiles


apply qos queue-profile QOS_PROFILE_OUT schedule-profile QOS_OUT

! globally trust DSCP on received packets


qos trust dscp

! remap DSCP 40-45 and 47 to local priority 6


qos dscp-map 40 local-priority 6 color green name CS5
qos dscp-map 41 local-priority 6 color green
qos dscp-map 42 local-priority 6 color green
qos dscp-map 43 local-priority 6 color green
qos dscp-map 44 local-priority 6 color green
qos dscp-map 45 local-priority 6 color green
qos dscp-map 47 local-priority 6 color green

! configure LAG to peer device (east-west link)


interface lag 1
description L3 to SWHQ-MAGG1B
no shutdown
! enable L3 counters
l3-counters
! define the IP MTU
ip mtu 2048
ip address 10.1.252.61/30
! enable active LACP
lacp mode active
qos trust dscp

! participate in OSPF process 1 in area 0.0.0.0


ip ospf 1 area 0.0.0.0
! disable passive to form OSPF adjacencies

no ip ospf passive
! define the OSPF network type as p2p to optimize
! as no DR/BDR is needed
ip ospf network point-to-point
! enable OSPF authentication via MD5 and define an
! authentication key
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
! enable PIM sparse mode for multicast forwarding
ip pim-sparse enable

! see comments in LAG 1 configuration for further description of


! commands
interface lag 2
description to SWHQ-CORE1
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.46/30
lacp mode active

ip ospf 1 area 0.0.0.0


no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

! see comments in LAG 1 configuration for further description of


! commands
interface lag 3
description TO SWHQ-CORE2
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.58/30
lacp mode active

ip ospf 1 area 0.0.0.0

no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

! configure the ISL link. Note this is an L2 LAG


! allow the VLANS tied to downstream VSX/MCLAG links
! to flow across the ISL link
interface lag 10
vsx-sync vlans
description ISL LAG to SWHQ-MAGG1B
no shutdown
no routing
vlan trunk native 1 tag
vlan trunk allowed 1,10,20,30,40,138,997-999,1281-1283
lacp mode active

! configure VSX keepalive in the VSX_KEEPALIVE vrf


interface lag 11
vrf attach VSX_KEEPALIVE
ip address 192.168.1.1/24
lacp mode active

interface lag 51 multi-chassis


! sync the VLAN configuration with the VSX peer device
vsx-sync vlans
description TO HQ-MC-1
! enable the interface to forward frames
no shutdown
! disable routing and make this an L2 lag
no routing
! leave the default VLAN as 1 which is NOT used for
! production traffic
vlan trunk native 1
! allow appropriate VLANS to traverse the dot1q link
vlan trunk allowed 10,20,30,40,138,997-999,1281-1283
! VSX requires LACP active mode
lacp mode active

! Loop protection is enabled
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283

interface lag 52 multi-chassis


vsx-sync vlans
description TO HQ-MC-2
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 10,20,30,40,138,997-999,1281-1283
lacp mode active
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283

interface lag 53 multi-chassis


vsx-sync vlans
description TO HQ-MC-3
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 10,20,30,40,100-104,138,997-999,1281-1283
lacp mode active
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283

! note additional multi-chassis interfaces are omitted from this


! document. However, they are identical in this design to
! configs shown for lag 51.

! physical interface configuration


! note the MTU is 20 bytes larger than the IP MTU defined
! on L3 lag interfaces

interface 1/1/1
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2

interface 1/1/2
description to SWHQ-CORE1

no shutdown
mtu 2068
lag 2
interface 1/1/3
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3
interface 1/1/4
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3

! interfaces for VSX connections to mobility controllers

interface 1/1/17
description to HQ-MC-1
no shutdown
lag 51
interface 1/1/18
description to HQ-MC-2
no shutdown
lag 52
interface 1/1/19
description to HQ-MC-3
no shutdown
lag 53

! note that additional physical interfaces for VSX forwarding


! interfaces are omitted from the document but are present
! in the full configuration

interface 1/1/47
description VSX keepalive
no shutdown
lag 11
interface 1/1/48

description VSX keepalive
no shutdown
lag 11
interface 1/1/49
description to SWHQ-MAGG1B
no shutdown
lag 1
interface 1/1/50
description to SWHQ-MAGG1B
no shutdown
lag 1
interface 1/1/51
description to SWHQ-MAGG1B for ISL link
no shutdown
lag 10
interface 1/1/52
description to SWHQ-MAGG1B for ISL link
no shutdown
lag 10

! create a loopback interface and assign a /32


! for device management. Add the loopback to OSPF
interface loopback 0
ip address 10.1.3.1/32
ip ospf 1 area 0.0.0.0

! note that no L3 is configured for VLAN 1


interface vlan1

! define the VLANS for the design. Note that for the
! production design there are multiple aggregation
! switches supporting some buildings/floors
! vlan numbers are consistent but IP addresses
! change to support the address plan

! note we do NOT use the 'no ip ospf passive' command on
! these SVIs as there is no need to have an OSPF adjacency
! established on these VLANS.

interface vlan20
vsx-sync active-gateways
description IOT Devices
ip address 172.16.15.253/19
active-gateway ip 172.16.15.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable

interface vlan30
vsx-sync active-gateways
description Physical Security Devices
ip address 172.16.31.253/19
active-gateway ip 172.16.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan40
vsx-sync active-gateways
description Phones-AV equipment
ip address 172.16.47.253/19
active-gateway ip 172.16.47.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1281

vsx-sync active-gateways
description EXEC Corp Users
ip address 10.32.31.253/19
active-gateway ip 10.32.31.254 mac 00:00:00:10:11:12
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1282
description Engineering & Support Users
ip address 10.32.63.253/19
active-gateway ip 10.32.63.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1283
description All other Users
ip address 10.32.95.253/19
active-gateway ip 10.32.95.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
! enable VSX
vsx
! define the LAG used for the ISL link
inter-switch-link lag 10
! define the VSX role for this device
role primary
! enable VSX keepalive
keepalive peer 192.168.1.2 source 192.168.1.1 vrf VSX_KEEPALIVE

! define the domain name and name servers
! used by this device
ip dns domain-name dumarsinc.com
ip dns server-address 10.254.10.10
ip dns server-address 10.254.130.10

! enable PIM
router pim
enable
! enable HTTPS for the WebUI and enable RESTAPI access
https-server rest access-mode read-write
https-server vrf default

SWHQ-MAGG1B Configuration
!Version ArubaOS-CX TL.10.01.0002
hostname SWHQ-MAGG1B
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************

!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-MAGG1B // 8320 // loopback0 10.1.3.2/32 *
* *
* Headquarters Bldg Mobility Agg Block Switch 1 - VSX Pair *
* *
***********************************************************************

! NTP configuration including authentication and timezone


! configuration elements
ntp authentication
clock timezone us/pacific
ntp authentication-key 1 md5 ciphertext <<removed>>
ntp server 10.254.224.10 iburst
ntp server 10.254.124.10 iburst prefer

! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning

! define a VRF for the VSX keepalive


vrf VSX_KEEPALIVE

! Sample sFlow configuration exporting to two collectors
sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.1.3.2
!
!
!

! fallback local account if TACACS is not reachable/functioning


user admin group administrators password <<removed>>

! Define both TACACS hosts


tacacs-server host HQ-TACACS key ciphertext <<removed>>
tacacs-server host GDR-TACACS key ciphertext <<removed>>

! Place both TACACS hosts in a group called TACACS


aaa group server tacacs TACACS
server 10.254.1.32
server 10.254.128.32

! enable authentication via TACACS


aaa authentication login default group TACACS

! enable command authorization via TACACS note we fail back to


! ‘none’ if the servers are not reachable/available
aaa authorization commands default group TACACS none

! enable command accounting via TACACS for the group TACACS


aaa accounting all default start-stop group TACACS

! enable SNMPv2c
snmp-server vrf default
snmp-server system-description HQSW-MAGG1B
snmp-server system-location HQ MDF // Row 6 Rack 9
snmp-server system-contact netops@dumarsinc.com
snmp-server community s3cret!

snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!

! enable SSH from the default VRF


ssh server vrf default

! enable OSPF and define a process ID


router ospf 1
! define the OSPF router ID to match
! the loopback address
router-id 10.1.3.2
! set the max-metric on start-up to exclude the device
! from routing via OSPF until <check time> seconds after
! system boot
max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! define the OSPF area ID
area 0.0.0.0

! configure all VLANS and provide names for each vlan


! note that vsx-sync is enabled for VLANS participating in the
! vsx configuration
vlan 1
vlan 10
name MGMT for L2 attached
vsx-sync
vlan 20
name IoT - bldg control
vsx-sync
vlan 30
name PHYSEC Devices

vsx-sync
vlan 40
name PHONES-AV Devices
vsx-sync
vlan 999
name NO_ACCESS_VLAN
vsx-sync
vlan 1281
name EXEC_USERS
vsx-sync
vlan 1282
name ENGINEERING_SUPPORT_USERS
vsx-sync
vlan 1283
name DEFAULT_USERS
vsx-sync

! define the QOS queuing profile


! note that local-priority 6 is mapped to queue 5 and
! local-priority 5 to queue 7 (the voice queue)
! this is done to align with the RFC 4594 QoS model
qos queue-profile QOS_PROFILE_OUT
map queue 0 local-priority 0
map queue 1 local-priority 1
map queue 2 local-priority 2
map queue 3 local-priority 3
map queue 4 local-priority 4
map queue 5 local-priority 6
map queue 6 local-priority 7
map queue 7 local-priority 5
name queue 7 VOICE

! define a QoS schedule profile and adjust weights of each
! queue as well as define a strict priority queue to support
! voice traffic
qos schedule-profile QOS_OUT
dwrr queue 0 weight 1
dwrr queue 1 weight 1
dwrr queue 2 weight 1

dwrr queue 3 weight 1
dwrr queue 4 weight 1
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7

! attach the queue profile and schedule profiles


apply qos queue-profile QOS_PROFILE_OUT schedule-profile QOS_OUT

! globally trust DSCP on received packets


qos trust dscp

! remap DSCP 40-45 and 47 to local priority 6


qos dscp-map 40 local-priority 6 color green name CS5
qos dscp-map 41 local-priority 6 color green
qos dscp-map 42 local-priority 6 color green
qos dscp-map 43 local-priority 6 color green
qos dscp-map 44 local-priority 6 color green
qos dscp-map 45 local-priority 6 color green
qos dscp-map 47 local-priority 6 color green

! configure LAG to peer device (east-west link)


interface lag 1
description L3 to SWHQ-MAGG1A
no shutdown
! enable L3 counters
l3-counters
! define the IP MTU
ip mtu 2048
ip address 10.1.252.62/30
! enable active LACP
lacp mode active
qos trust dscp
! participate in OSPF process 1 in area 0.0.0.0
ip ospf 1 area 0.0.0.0
! disable passive to form OSPF adjacencies
no ip ospf passive
! define the OSPF network type as p2p to optimize
! as no DR/BDR is needed

ip ospf network point-to-point
! enable OSPF authentication via MD5 and define an
! authentication key
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
! enable PIM sparse mode for multicast forwarding
ip pim-sparse enable

! see comments in LAG 1 configuration for further description of


! commands
interface lag 2
description to SWHQ-CORE1
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.54/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>
ip pim-sparse enable

! see comments in LAG 1 configuration for further description of


! commands
interface lag 3
description TO SWHQ-CORE2
no shutdown
l3-counters
ip mtu 2048
ip address 10.1.252.61/30
lacp mode active

ip ospf 1 area 0.0.0.0


no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>

ip pim-sparse enable

! configure the ISL link. Note this is an L2 LAG


! allow the VLANS tied to downstream VSX/MCLAG links
! to flow across the ISL link
interface lag 10
vsx-sync vlans
description ISL LAG to SWHQ-MAGG1A
no shutdown
no routing
vlan trunk native 1 tag
vlan trunk allowed 1,10,20,30,40,138,997-999,1281-1283
lacp mode active

! configure VSX keepalive in the VSX_KEEPALIVE vrf


interface lag 11
vrf attach VSX_KEEPALIVE
ip address 192.168.1.2/24
lacp mode active

interface lag 51 multi-chassis


! sync the VLAN configuration with the VSX peer device
vsx-sync vlans
description TO HQ-MC-1
! enable the interface to forward frames
no shutdown
! disable routing and make this an L2 lag
no routing
! leave the default VLAN as 1 which is NOT used for
! production traffic
vlan trunk native 1
! allow appropriate VLANS to traverse the dot1q link
vlan trunk allowed 10,20,30,40,138,997-999,1281-1283
! VSX requires LACP active mode
lacp mode active
! Loop protection is enabled
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283

interface lag 52 multi-chassis

vsx-sync vlans
description TO HQ-MC-2
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 10,20,30,40,138,997-999,1281-1283
lacp mode active
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283

interface lag 53 multi-chassis


vsx-sync vlans
description TO HQ-MC-3
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 10,20,30,40,100-104,138,997-999,1281-1283
lacp mode active
loop-protect vlan 10,20,30,40,100-104,138,997-999,1281-1283

! note additional multi-chassis interfaces are omitted from this


! document. However, they are identical in this design to
! configs shown for lag 51.

! physical interface configuration


! note the MTU is 20 bytes larger than the IP MTU defined
! on L3 lag interfaces

interface 1/1/1
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2

interface 1/1/2
description to SWHQ-CORE1
no shutdown
mtu 2068
lag 2
interface 1/1/3

description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3
interface 1/1/4
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3

! interfaces for VSX connections to mobility controllers

interface 1/1/17
description to HQ-MC-1
no shutdown
lag 51
interface 1/1/18
description to HQ-MC-2
no shutdown
lag 52
interface 1/1/19
description to HQ-MC-3
no shutdown
lag 53

! note that additional physical interfaces for VSX forwarding


! interfaces are omitted from the document but are present
! in the full configuration

interface 1/1/47
description VSX keepalive
no shutdown
lag 11
interface 1/1/48
description VSX keepalive
no shutdown
lag 11
interface 1/1/49

description to SWHQ-MAGG1A
no shutdown
lag 1
interface 1/1/50
description to SWHQ-MAGG1A
no shutdown
lag 1
interface 1/1/51
description to SWHQ-MAGG1A for ISL link
no shutdown
lag 10
interface 1/1/52
description to SWHQ-MAGG1A for ISL link
no shutdown
lag 10

! create a loopback interface and assign a /32


! for device management. Add the loopback to OSPF
interface loopback 0
ip address 10.1.3.2/32
ip ospf 1 area 0.0.0.0

! note that no L3 is configured for VLAN 1


interface vlan1

! define the VLANS for the design. Note that for the
! production design there are multiple aggregation
! switches supporting some buildings/floors
! vlan numbers are consistent but IP addresses
! change to support the address plan

! note we do NOT use the 'no ip ospf passive' command on
! these SVIs as there is no need to have an OSPF adjacency
! established on these VLANS.

interface vlan20
vsx-sync active-gateways
description IOT Devices

ip address 172.16.15.252/19
active-gateway ip 172.16.15.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable

interface vlan30
vsx-sync active-gateways
description Physical Security Devices
ip address 172.16.31.252/19
active-gateway ip 172.16.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable

interface vlan40
vsx-sync active-gateways
description Phones-AV equipment
ip address 172.16.47.252/19
active-gateway ip 172.16.47.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan138
vsx-sync active-gateways
vrf attach MOBILITY
description Corp BYOD

ip address 10.32.223.252/19
active-gateway ip 10.32.223.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1281
vsx-sync active-gateways
description EXEC Corp Users
ip address 10.32.31.252/19
active-gateway ip 10.32.31.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1282
description Engineering & Support Users
ip address 10.32.63.252/19
active-gateway ip 10.32.63.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32
ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
interface vlan1283
description All other Users
ip address 10.32.95.252/19
active-gateway ip 10.32.95.254 mac 00:00:00:10:11:12
ip helper-address 10.254.34.64
ip helper-address 10.254.134.64
ip helper-address 10.254.1.32

ip ospf 1 area 0.0.0.0
ip ospf cost 1
ip igmp enable
ip pim-sparse enable
! enable VSX
vsx
! define the LAG used for the ISL link
inter-switch-link lag 10
! define the VSX role for this device
role secondary
! enable VSX keepalive
keepalive peer 192.168.1.1 source 192.168.1.2 vrf VSX_KEEPALIVE

! define the domain name and name servers


! used by this device
ip dns domain-name dumarsinc.com
ip dns server-address 10.254.10.10
ip dns server-address 10.254.130.10

! enable PIM
router pim
enable
! enable HTTPS for the WebUI and enable RESTAPI access
https-server rest access-mode read-write
https-server vrf default

SWHQ-WAN1 Configuration
!Version ArubaOS-CX TL.10.01.0002
hostname SWHQ-WAN1
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************

!
banner exec !
***********************************************************************
* *
* Welcome to SWHQ-WAN1 // 8320 // loopback0 10.224.224.1 *
* *
* Headquarters WAN Edge to Metro E Network *
* *
***********************************************************************

! NTP configuration including authentication and timezone


! configuration elements
ntp authentication
clock timezone us/pacific
ntp authentication-key 1 md5 ciphertext <<removed>>
ntp server 10.254.224.10 iburst
ntp server 10.254.124.10 iburst prefer

! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning

! Sample sFlow configuration exporting to two collectors


sflow

sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.224.224.1
!
!
!

! fallback local account if TACACS is not reachable/functioning


user admin group administrators password <<removed>>

! Define both TACACS hosts


tacacs-server host HQ-TACACS key ciphertext <<removed>>
tacacs-server host GDR-TACACS key ciphertext <<removed>>

! Place both TACACS hosts in a group called TACACS


aaa group server tacacs TACACS
server 10.254.1.32
server 10.254.128.32

! enable authentication via TACACS


aaa authentication login default group TACACS

! enable command authorization via TACACS note we fail back to


! ‘none’ if the servers are not reachable/available
aaa authorization commands default group TACACS none

! enable command accounting via TACACS for the group TACACS


aaa accounting all default start-stop group TACACS

! enable SNMPv2c
snmp-server vrf default
snmp-server system-description SWHQ-WAN1
snmp-server system-location HQ MDF // Row 6 Rack 8
snmp-server system-contact netops@dumarsinc.com
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!

! enable SSH from the default VRF
ssh server vrf default

! Please see the route maps below to understand the function


! of these prefix lists. The prefixes in these lists will
! need to be changed in most cases.

ip prefix-list ALLOW_REDISTRIBUTE seq 20 permit 10.254.0.0/16 ge 16 le 32


ip prefix-list DC_NETWORKS seq 10 permit 10.254.0.0/17 ge 17 le 17
ip prefix-list DEFAULT_ONLY seq 10 permit 0.0.0.0/0
ip prefix-list SITE_NETWORKS seq 10 permit 10.0.0.0/8 le 32
!

! This routemap controls which BGP prefixes are allowed to


! be sent into OSPF. We only need to send a default but we
! also allow for specific routes to be redistributed.
!
route-map BGP->OSPF permit seq 10
match ip address prefix-list DEFAULT_ONLY
route-map BGP->OSPF permit seq 20
match ip address prefix-list ALLOW_REDISTRIBUTE
route-map BGP->OSPF deny seq 99
description don't allow other prefixes into the OSPF domain
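! route-map sequences are evaluated in order and the first
! match wins: seq 10 admits the default route, seq 20 admits
! anything in ALLOW_REDISTRIBUTE, and seq 99 makes the implicit
! deny explicit for readability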

! HQ doesn't have an inbound BGP policy. Other sites without
! internet egress process the received communities and then
! set local preference to influence path selection
route-map BGP_INBOUND_POLICY permit seq 99
description DO NOTHING

! The outbound BGP policy is to set community on prefixes
! best reached via our site. We mark the DC prefixes best
! reached via HQ with community 1:1 and we also add the site
! community should that be needed in the future

route-map BGP_OUTBOUND_POLICY permit seq 10
description Mark HQ learned default route
match ip address prefix-list DEFAULT_ONLY
set community 1:1

! Announce DC prefixes reachable via HQ and set the
! appropriate community

route-map BGP_OUTBOUND_POLICY permit seq 20
description Mark DC prefixes learned via HQ
match ip address prefix-list DC_NETWORKS
set community 1:1
route-map BGP_OUTBOUND_POLICY permit seq 99
set community 64512:1 additive
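
For reference, a standard BGP community such as 64512:1 is a 32-bit value carrying the ASN in the high 16 bits and a locally significant tag in the low 16 bits (RFC 1997). A quick illustration of the encoding:

# Encode/decode standard BGP communities: "ASN:value" <-> 32-bit integer.
def community_to_int(text):
    asn, value = (int(part) for part in text.split(":"))
    return (asn << 16) | value

def int_to_community(raw):
    return f"{raw >> 16}:{raw & 0xFFFF}"

print(hex(community_to_int("64512:1")))  # 0xfc000001
print(int_to_community(0xFC000001))      # 64512:1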

! route-map to filter OSPF -> BGP redistribution
! by default we take all prefixes into the BGP table
route-map OSPF->BGP permit seq 99
description match any clause to make this work

! route-map to filter statics flowing into BGP
! for routes we originated via statics, we set
! the local preference to influence prefix selection

route-map STATIC->BGP permit seq 10
set local-preference 500

! route-map to filter statics flowing into OSPF
! by default, we allow redistribution
route-map STATIC->OSPF permit seq 99
description CATCH ALL PERMIT LINE

! enable OSPF and define a process ID


router ospf 1
! define the OSPF router ID to match
! the loopback address
router-id 10.224.224.1
! set the max-metric on start-up to exclude the device
! from routing via OSPF until <check time> seconds after
! system boot

max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! redistribute BGP and static routes into OSPF
redistribute bgp route-map BGP->OSPF
redistribute static route-map STATIC->OSPF
! define the OSPF area ID
area 0.0.0.0
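
The effect of max-metric router-lsa on-startup (the OSPF stub-router mechanism, RFC 3137) is that the router advertises its transit links at the maximum metric of 65535 until the startup period ends, so neighbors keep using any steady-state path around it. A toy comparison, purely illustrative:

# Sketch: why max-metric keeps a booting router out of the transit path.
MAX_METRIC = 0xFFFF  # 65535, advertised for transit links during startup

path_costs = {
    "via booting router": 10 + MAX_METRIC,  # neighbor link + stub-router metric
    "via steady-state path": 10 + 20,       # normal link costs
}
print(min(path_costs, key=path_costs.get))  # via steady-state path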

! configure all VLANs and provide names for each VLAN
! note that vsx-sync is enabled for VLANs participating in the
! vsx configuration
vlan 1
! define the QoS queuing profile
! note the swapping of queue 5 and local-priority 6 along with
! queue 7 and local-priority 5
! this is done to align with the RFC 4594 QoS model
qos queue-profile QOS_PROFILE_OUT
map queue 0 local-priority 0
map queue 1 local-priority 1
map queue 2 local-priority 2
map queue 3 local-priority 3
map queue 4 local-priority 4
map queue 5 local-priority 6
map queue 6 local-priority 7
map queue 7 local-priority 5
name queue 7 VOICE
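
To make the swap concrete: by default this platform derives local-priority from the DSCP class-selector bits (DSCP >> 3), so EF voice (DSCP 46) lands on local-priority 5 and, with this profile, in strict queue 7, while the dscp-map commands further below push the CS5 range to local-priority 6 and therefore into weighted queue 5. A small sketch of that lookup, under those assumptions:

# Sketch of the queue-profile above: local-priority -> egress queue.
QUEUE_OF_LOCAL_PRIORITY = {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 6: 5, 7: 6, 5: 7}

# DSCP codepoints remapped to local-priority 6 by the dscp-map commands below.
CS5_REMAP = {dscp: 6 for dscp in (40, 41, 42, 43, 44, 45, 47)}

def queue_for_dscp(dscp):
    local_priority = CS5_REMAP.get(dscp, dscp >> 3)  # default: class-selector bits
    return QUEUE_OF_LOCAL_PRIORITY[local_priority]

print(queue_for_dscp(46))  # 7 -> strict VOICE queue
print(queue_for_dscp(40))  # 5 -> weighted queue for the CS5 range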

! define a QoS schedule profile and adjust weights of each
! queue as well as define a strict priority queue to support
! voice traffic
qos schedule-profile QOS_OUT
dwrr queue 0 weight 1
dwrr queue 1 weight 1
dwrr queue 2 weight 1

dwrr queue 3 weight 1
dwrr queue 4 weight 1
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7

! attach the queue profile and schedule profiles


apply qos queue-profile QOS_PROFILE_OUT schedule-profile QOS_OUT

! globally trust DSCP on received packets
qos trust dscp

! remap DSCP 40-45 and 47 to local-priority 6


qos dscp-map 40 local-priority 6 color green name CS5
qos dscp-map 41 local-priority 6 color green
qos dscp-map 42 local-priority 6 color green
qos dscp-map 43 local-priority 6 color green
qos dscp-map 44 local-priority 6 color green
qos dscp-map 45 local-priority 6 color green
qos dscp-map 47 local-priority 6 color green

interface lag 1
description L3 to HQ-WAN2
no shutdown
ip mtu 2048
ip address 10.1.252.13/30

ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>

interface lag 2
description to CORE1
no shutdown
ip mtu 2048

ip address 10.1.252.5/30
lacp mode active

ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>

interface lag 3
description L3 to SWHQ-CORE2
no shutdown
ip mtu 2048
ip address 10.1.252.9/30
lacp mode active

ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>

interface 1/1/1
description Metro-E SV-SW01
no shutdown
mtu 2068

ip address 10.224.0.62/30
ip mtu 2048

interface 1/1/5
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 3
interface 1/1/48
description to SWHQ-CORE1
no shutdown

mtu 2068
lag 2
interface 1/1/54
description to SWHQ-WAN2
no shutdown
mtu 2068
lag 1
interface loopback 0
ip address 10.224.224.1/32
ip ospf 1 area 0.0.0.0

! enable BGP for inter-site routing across Metro E circuits


router bgp 64512
! only announce summary routes for the HQ site
! suppress prefixes more specific than /16
aggregate-address 10.1.0.0/16 summary-only
aggregate-address 10.2.0.0/16 summary-only
aggregate-address 10.32.0.0/16 summary-only
aggregate-address 172.16.0.0/16 summary-only
! define router ID to match the loopback 0 address
bgp router-id 10.224.224.1
! advertise the loopback interface into bgp
network 10.224.224.1/32
! enable bgp fast-external-fallover to tear down bgp
! sessions if the physical interface goes down
bgp fast-external-fallover
bgp log-neighbor-changes
! redistribute OSPF and BGP as per route maps
redistribute ospf route-map OSPF->BGP
redistribute static route-map STATIC->BGP
! create a peer group for common config elements for
! eBGP peers. Note that community is being sent as
! community is used by remote sites to influence
! bgp path selection.
neighbor EBGP_PEERS peer-group
neighbor EBGP_PEERS route-map BGP_INBOUND_POLICY in
neighbor EBGP_PEERS route-map BGP_OUTBOUND_POLICY out
neighbor EBGP_PEERS fall-over
neighbor EBGP_PEERS send-community standard

! enable BFD for eBGP peers
neighbor EBGP_PEERS fall-over bfd
neighbor 10.224.0.61 remote-as 64514
neighbor 10.224.0.61 peer-group EBGP_PEERS
neighbor 10.224.0.61 password ciphertext <<removed>>
neighbor 10.224.224.2 remote-as 64512
neighbor 10.224.224.2 description swhq-wan2
! for iBGP peerings we are using the loopback
! of our neighbor to provide for reachability over
! multiple paths if available.
neighbor 10.224.224.2 password ciphertext <<removed>>
neighbor 10.224.224.2 update-source loopback 0
!
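
The aggregate-address statements with summary-only advertise only the listed summaries and suppress the contributing more-specific routes. The containment test behind that suppression, sketched in Python for illustration:

# Illustrative: which routes does "aggregate-address ... summary-only" suppress?
from ipaddress import ip_network

AGGREGATES = [ip_network(prefix) for prefix in
              ("10.1.0.0/16", "10.2.0.0/16", "10.32.0.0/16", "172.16.0.0/16")]

def suppressed(route):
    route = ip_network(route)
    return any(route != agg and route.subnet_of(agg) for agg in AGGREGATES)

print(suppressed("10.1.252.4/30"))  # True  - only the covering /16 is announced
print(suppressed("10.254.1.0/24"))  # False - not covered by any aggregate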
https-server rest access-mode read-write
https-server vrf default

SWHQ-WAN2 Configuration

!Version ArubaOS-CX TL.10.01.0002


hostname SWHQ-WAN2
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************

!
banner exec !
***********************************************************************
* *
*
* Welcome to SWHQ-WAN2 // 8320 // Loopback 0 10.224.224.2
* *
* Headquarters WAN Edge to Metro E Network
* *
***********************************************************************

! NTP configuration including authentication and timezone
! configuration elements
ntp authentication
clock timezone us/pacific
ntp authentication-key 1 md5 ciphertext <<removed>>
ntp server 10.254.224.10 iburst
ntp server 10.254.124.10 iburst prefer

! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning

! Sample sFlow configuration exporting to two collectors

sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.224.224.2
!
!
!

! fallback local account if TACACS is not reachable/functioning


user admin group administrators password <<removed>>

! Define both TACACS hosts


tacacs-server host HQ-TACACS key ciphertext <<removed>>
tacacs-server host GDR-TACACS key ciphertext <<removed>>

! Place both TACACS hosts in a group called TACACS


aaa group server tacacs TACACS
server 10.254.1.32
server 10.254.128.32

! enable authentication via TACACS


aaa authentication login default group TACACS

! enable command authorization via TACACS; note we fall back to
! 'none' if the servers are not reachable/available
aaa authorization commands default group TACACS none

! enable command accounting via TACACS for the group TACACS


aaa accounting all default start-stop group TACACS

! enable SNMPv2c
snmp-server vrf default
snmp-server system-description SWHQ-WAN2
snmp-server system-location HQ MDF // Row 6 Rack 8
snmp-server system-contact netops@dumarsinc.com
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!

snmp-server host 10.254.224.65 trap version v2c community s3cret!

! enable SSH from the default VRF


ssh server vrf default

ip prefix-list ALLOW_REDISTRIBUTE seq 20 permit 10.254.0.0/16 ge 16 le 32


ip prefix-list DC_NETWORKS seq 10 permit 10.254.0.0/17 ge 17 le 17
ip prefix-list DEFAULT_ONLY seq 10 permit 0.0.0.0/0
ip prefix-list SITE_NETWORKS seq 10 permit 10.0.0.0/8 le 32
!
ip community-list standard PREFERRED_PREFIX seq 10 permit 1:1
ip community-list standard SECONDARY_DEFAULT seq 10 permit 1:2
ip community-list standard SECONDARY_PREFIX seq 10 permit 1:2
!
!
route-map BGP->OSPF permit seq 10
match ip address prefix-list DEFAULT_ONLY
route-map BGP->OSPF permit seq 20
match ip address prefix-list ALLOW_REDISTRIBUTE
route-map BGP->OSPF deny seq 99
description don't allow other prefixes into the OSPF domain
route-map BGP_INBOUND_POLICY permit seq 99
description DO NOTHING
route-map BGP_OUTBOUND_POLICY permit seq 10
description Mark HQ learned default route
match ip address prefix-list DEFAULT_ONLY
set community 1:1
route-map BGP_OUTBOUND_POLICY permit seq 20
description Mark DC prefixes learned via HQ
match ip address prefix-list DC_NETWORKS
set community 1:1
route-map BGP_OUTBOUND_POLICY permit seq 99
set community 64512:1 additive
route-map OSPF->BGP permit seq 99
description match any clause to make this work
route-map STATIC->BGP permit seq 10
set local-preference 500
route-map STATIC->OSPF permit seq 99
description CATCH ALL PERMIT LINE

! enable OSPF and define a process ID
router ospf 1
! define the OSPF router ID to match
! the loopback address
router-id 10.224.224.2
! set the max-metric on start-up to exclude the device
! from routing via OSPF until <check time> seconds after
! system boot
max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be build
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! redistribute BGP and static routes into OSPF
redistribute bgp route-map BGP->OSPF
redistribute static route-map STATIC->OSPF
! define the OSPF area ID
area 0.0.0.0

! configure all VLANs and provide names for each VLAN
! note that vsx-sync is enabled for VLANs participating in the
! vsx configuration
vlan 1
! define the QoS queuing profile
! note the swapping of queue 5 and local-priority 6 along with
! queue 7 and local-priority 5
! this is done to align with the RFC 4594 QoS model
qos queue-profile QOS_PROFILE_OUT
map queue 0 local-priority 0
map queue 1 local-priority 1
map queue 2 local-priority 2
map queue 3 local-priority 3
map queue 4 local-priority 4
map queue 5 local-priority 6
map queue 6 local-priority 7

map queue 7 local-priority 5
name queue 7 VOICE

! define a QoS schedule profile and adjust weights of each
! queue as well as define a strict priority queue to support
! voice traffic
qos schedule-profile QOS_OUT
dwrr queue 0 weight 1
dwrr queue 1 weight 1
dwrr queue 2 weight 1
dwrr queue 3 weight 1
dwrr queue 4 weight 1
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7

! attach the queue profile and schedule profiles


apply qos queue-profile QOS_PROFILE_OUT schedule-profile QOS_OUT

! globally trust DSCP on received packets


qos trust dscp

! remap DSCP 40-45 and 47 to local-priority 6


qos dscp-map 40 local-priority 6 color green name CS5
qos dscp-map 41 local-priority 6 color green
qos dscp-map 42 local-priority 6 color green
qos dscp-map 43 local-priority 6 color green
qos dscp-map 44 local-priority 6 color green
qos dscp-map 45 local-priority 6 color green
qos dscp-map 47 local-priority 6 color green

interface lag 1
description L3 to HQ-WAN1
no shutdown
ip mtu 2048
ip address 10.1.252.14/30

ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>

interface lag 2
description to SWHQ-CORE2
no shutdown
ip mtu 2048
ip address 10.1.252.17/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>

interface lag 3
description L3 to SWHQ-CORE1
no shutdown
ip mtu 2048
ip address 10.1.252.21/30
lacp mode active
ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>

interface 1/1/1
description to SWGDR-WAN1
no shutdown
mtu 2068
ip address 10.224.0.21/30
ip mtu 2048

interface 1/1/5

description to SWHQ-CORE1
no shutdown
mtu 2068
lag 3
interface 1/1/48
description to SWHQ-CORE2
no shutdown
mtu 2068
lag 2
interface 1/1/54
description to SWHQ-WAN1
no shutdown
mtu 2068
lag 1
interface loopback 0
ip address 10.224.224.2/32
ip ospf 1 area 0.0.0.0

router bgp 64512
aggregate-address 10.1.0.0/16 summary-only
aggregate-address 10.2.0.0/16 summary-only
aggregate-address 10.32.0.0/16 summary-only
aggregate-address 172.16.0.0/16 summary-only
bgp router-id 10.224.224.2
network 10.224.224.2/32
network 172.248.16.0/21
bgp fast-external-fallover
bgp log-neighbor-changes
redistribute connected
redistribute ospf route-map OSPF->BGP
redistribute static route-map STATIC->BGP
neighbor EBGP_PEERS peer-group
neighbor EBGP_PEERS route-map BGP_INBOUND_POLICY in
neighbor EBGP_PEERS route-map BGP_OUTBOUND_POLICY out
neighbor EBGP_PEERS fall-over
neighbor EBGP_PEERS send-community standard
neighbor EBGP_PEERS fall-over bfd
neighbor 10.224.0.22 remote-as 64513
neighbor 10.224.0.22 peer-group EBGP_PEERS

neighbor 10.224.0.22 password ciphertext <<removed>>
neighbor 10.224.224.1 remote-as 64512
neighbor 10.224.224.1 description swhq-wan1
neighbor 10.224.224.1 password ciphertext <<removed>>
neighbor 10.224.224.1 update-source loopback 0
https-server rest access-mode read-write
https-server vrf default

SWHQ-ACC-A1-1

NOTE: For the AOS-S device configurations below, comments should be removed before applying the configuration(s) to switches.

; hpStack_WC Configuration Editor; Created on release #WC.16.06.0006
; Ver #13:4f.f8.1c.9b.3f.bf.bb.ef.7c.59.fc.6b.fb.9f.fc.ff.ff.37.ef:05

! Stacking member configuration and stack hostname

stacking
member 1 type "JL321A" mac-address <removed>
member 1 flexible-module A type JL083A
exit
member 2 type "JL321A" mac-address <removed>
member 2 flexible-module A type JL083A
exit
hostname "swhq-acc-a1-1"

! QoS traffic classifications to identify packets for DSCP remarking

class ipv4 "CS5"


10 remark "CS5 Traffic for Q6 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 5
exit
class ipv4 "Network"
10 remark "CS7 Traffic for Q8 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 10
exit
class ipv4 "default"
10 match ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
exit
class ipv4 "VOICE-EF"
10 remark "S4B Audio"

10 match udp 0.0.0.0 255.255.255.255 range 50020 50039 0.0.0.0
255.255.255.255 range 50020 50039
exit
class ipv4 "BULK-AF11"
10 remark "OSSV servers - Snap Vault"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
10566
20 match tcp 0.0.0.0 255.255.255.255 eq 10566 0.0.0.0 255.255.255.255 gt
1023
exit
class ipv4 "BULK-AF12"
10 remark "S4B File Transfer"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255
range 42020 42039
20 remark "S4B App/Screen Sharing"
20 match tcp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
25 match udp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
exit
class ipv4 "BULK-AF13"
10 remark "WFoD"
10 match udp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
20 match tcp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
exit
class ipv4 "BUSN-AF21"
10 remark "SVT traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
8500
exit
class ipv4 "CTRL-AF31"
10 remark "TACACS+ traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
49
20 remark "RADIUS authentication traffic"
20 match udp 0.0.0.0 255.255.255.255 eq 1812 0.0.0.0 255.255.255.255 gt
1023
30 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
1812
40 remark "Wireless CAPWAP control traffic"
40 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
5246
50 match udp 0.0.0.0 255.255.255.255 eq 5246 0.0.0.0 255.255.255.255 gt
1023
60 remark "SIP Signalling"
60 match tcp 0.0.0.0 255.255.255.255 range 5060 5069 0.0.0.0 255.255.255.255
range 5060 5069

exit
class ipv4 "VIDEO-AF42"
10 remark "S4B Video"
10 match udp 0.0.0.0 255.255.255.255 range 58000 58019 0.0.0.0
255.255.255.255 range 58000 58019
20 remark "S4B Client Media Port"
20 match udp 0.0.0.0 255.255.255.255 range 5350 5389 0.0.0.0
255.255.255.255 range 5350 5389
exit
class ipv4 "Network Control"
10 remark "CS6 Traffic for Q7 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 6
exit

! QoS policy to remark packets by traffic classification

policy qos "QOS_IN"


10 remark "Input QoS Policy"
20 class ipv4 "BULK-AF11" action dscp af11
30 class ipv4 "VOICE-EF" action dscp ef
40 class ipv4 "VIDEO-AF42" action dscp af42
50 class ipv4 "CTRL-AF31" action dscp af31
60 class ipv4 "BUSN-AF21" action dscp af21
70 class ipv4 "BULK-AF12" action dscp af12
80 class ipv4 "BULK-AF13" action dscp af13
90 class ipv4 "Network" action priority 7
100 class ipv4 "Network Control" action priority 6
110 class ipv4 "CS5" action priority 5
120 class ipv4 "default" action dscp default
exit
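
For reference when reading the policy above, these are the standard codepoint values behind the DSCP names it assigns (RFC 2474, RFC 2597, RFC 3246); a convenience table, not switch output:

# Standard DSCP codepoint values used by the QOS_IN policy classes.
DSCP = {
    "af11": 10, "af12": 12, "af13": 14,  # bulk classes
    "af21": 18,                          # business traffic
    "af31": 26,                          # control traffic
    "af42": 36,                          # interactive video
    "ef": 46,                            # voice
    "default": 0,                        # best effort
}
print(DSCP["ef"])  # 46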

! Idle timeout for console management sessions - set to 10 minutes

console idle-timeout 600
console idle-timeout serial-usb 600

! Uplink LACP trunk

trunk 1/A1,2/A1 trk1 lacp

! Login banner

banner motd "***********************************************************\n* This is a private computer network/device. Unauthorized *\n* access is prohibited. All attempts to login/connect *\n* to this device/network are logged. Unauthorized users *\n* must disconnect now. *\n***********************************************************\n"

! Jumbo frame size/MTU set to support Dynamic Segmentation, OSPF, etc

jumbo ip-mtu 2048
jumbo max-frame-size 2068

! Send event logs of 'warning' severity and higher to syslog servers

logging 10.254.120.10
logging 10.254.224.10
logging severity warning

! QoS DSCP mode and DSCP-to-802.1p queue mappings

qos type-of-service diff-services
qos traffic-template "MFRA-VRD"
map-traffic-group 1 priority 1
map-traffic-group 1 name "background-tcg"
map-traffic-group 2 priority 2
map-traffic-group 2 name "spare-tcg"
map-traffic-group 3 priority 0
map-traffic-group 3 name "best-effort-tcg"
map-traffic-group 4 priority 3
map-traffic-group 4 name "ex-effort-tcg"
map-traffic-group 5 priority 4
map-traffic-group 5 name "controlled-load-tcg"
map-traffic-group 6 priority 5
map-traffic-group 6 name "video-tcg"
map-traffic-group 7 priority 6
map-traffic-group 7 name "voice-tcg"
map-traffic-group 8 priority 7
map-traffic-group 8 name "control-tcg"
exit

! RADIUS server configuration with dynamic authorization, no time limit
! for CoA requests to be considered current

radius-server host 10.254.33.24 key "secret"
radius-server host 10.254.33.24 dyn-authorization
radius-server host 10.254.33.24 time-window 0
radius-server host 10.254.133.24 key "secret"
radius-server host 10.254.133.24 dyn-authorization
radius-server host 10.254.133.24 time-window 0
radius-server cppm identity "m1ra"

! NTP time synchronization with authentication

timesync ntp
ntp unicast
ntp authentication key-id 1 authentication-mode md5 key-value secret
ntp server 10.254.124.10 iburst
ntp server 10.254.224.10 iburst
ntp enable

! TACACS+ server configuration

tacacs-server host 10.254.33.24 key "secret"
tacacs-server host 10.254.133.24 key "secret"

! Disable built-in Telnet server

no telnet-server

! Timezone and DST configuration

time daylight-time-rule continental-us-and-canada
time timezone -480

! Enable built-in HTTPS server (requires certificate)

web-management ssl

! Default gateway and DNS configuration

ip default-gateway 10.2.127.254
ip dns domain-name "dumarsinc.com"
ip dns server-address priority 1 10.254.10.10
ip dns server-address priority 2 10.254.130.10

! Dynamic Segmentation configuration

tunneled-node-server
controller-ip 10.1.254.10
mode role-based
exit

! Uplink LACP trunk port labels and DSCP trust

interface 1/A1
name "Link to AGG1A"
exit
interface 2/A1
name "Link to AGG1B"
exit
interface Trk1
qos trust dscp

exit

! SNMP trap hosts and contact/location info

snmp-server host 10.254.224.65 community "s3cret!" trap-level not-info
snmp-server host 10.254.124.65 community "s3cret!" trap-level not-info
snmp-server contact "TMESupport" location "RACK6-ROW6"

! Enable downloadable user roles

aaa authorization user-role enable download

! Configure privilege-mode to allow switch to assign permissions based
! on privilege level provided by authentication servers

aaa authentication login privilege-mode

! Enable TACACS+ authentication for SSH login and enable access, with local
! authentication as backup method

aaa authentication ssh login tacacs local
aaa authentication ssh enable tacacs local

! Use EAP-RADIUS for port access

aaa authentication port-access eap-radius

! Enable 802.1x authentication with limit of 5 clients on all ports

aaa port-access authenticator 1/1-1/48,2/1-2/48
aaa port-access authenticator 1/1-1/48,2/1-2/48 client-limit 5
aaa port-access authenticator active

! Enable MAC-based authentication on all ports with limit of 5
! addresses per port

aaa port-access mac-based 1/1-1/48,2/1-2/48
aaa port-access mac-based 1/1-1/48,2/1-2/48 addr-limit 5

! Remove all ports from VLAN 1

vlan 1
name "DEFAULT_VLAN"
no untagged 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4,Trk1
no ip address
exit

! VLAN 10 used for switch management, tagged across uplink trunk

vlan 10
name "management"
tagged Trk1
ip address 10.2.1.11 255.255.128.0
jumbo
service-policy "QOS_IN" in
exit

! VLANs 20, 30, 40 dynamically assigned by user role; IGMP, jumbo frames,
! QoS policy (for inbound packets) enabled

vlan 20
name "IoT - bldg control"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 30
name "physec devices"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 40
name "phones-av devices"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit

! All ports untagged on VLAN 999 by default, no network access

vlan 999
name "Unauth VLAN"
untagged 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4
no ip address
jumbo
exit

! VLANs 1281, 1282, 1283 dynamically assigned by user role; IGMP, jumbo frames,
! QoS policy (for inbound packets) enabled

vlan 1281
name "EXEC_USERS"
no ip address

ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 1282
name "ENGINEERING_SUPPORT_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 1283
name "DEFAULT_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit

! Enable MSTP, enable admin-edge-port and BPDU protection on all non-uplink
! ports with a timeout of 60 seconds

spanning-tree
spanning-tree 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4 admin-edge-port
spanning-tree 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4 bpdu-protection
spanning-tree Trk1 priority 4 bpdu-filter pvst-filter
spanning-tree bpdu-protection-timeout 60 priority 0

! Disable built-in TFTP server

no tftp server

! Enable loop-protection on uplink LACP trunk

loop-protect Trk1

! Disable USB port autorun

no autorun

! Disable configuration file and firmware downloads via DHCP option

no dhcp config-file-update
no dhcp image-file-update
no dhcp tr69-acs-url

! Set a local manager password


password manager

SWHQ-ACC-A1-2

; hpStack_WC Configuration Editor; Created on release #WC.16.06.0006
; Ver #13:4f.f8.1c.9b.3f.bf.bb.ef.7c.59.fc.6b.fb.9f.fc.ff.ff.37.ef:05

! Stacking member configuration and stack hostname

stacking
member 1 type "JL321A" mac-address <removed>
member 1 flexible-module A type JL083A
exit
member 2 type "JL321A" mac-address <removed>
member 2 flexible-module A type JL083A
exit
hostname "swhq-acc-a1-2"

! QoS traffic classifications to identify packets for DSCP and 802.1p
! remarking

class ipv4 "CS5"


10 remark "CS5 Traffic for Q6 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 5
exit
class ipv4 "Network"
10 remark "CS7 Traffic for Q8 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 10
exit
class ipv4 "default"
10 match ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
exit
class ipv4 "VOICE-EF"
10 remark "S4B Audio"
10 match udp 0.0.0.0 255.255.255.255 range 50020 50039 0.0.0.0
255.255.255.255 range 50020 50039
exit
class ipv4 "BULK-AF11"
10 remark "OSSV servers - Snap Vault"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
10566
20 match tcp 0.0.0.0 255.255.255.255 eq 10566 0.0.0.0 255.255.255.255 gt
1023
exit
class ipv4 "BULK-AF12"
10 remark "S4B File Transfer"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255
range 42020 42039
20 remark "S4B App/Screen Sharing"

20 match tcp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
25 match udp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
exit
class ipv4 "BULK-AF13"
10 remark "WFoD"
10 match udp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
20 match tcp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
exit
class ipv4 "BUSN-AF21"
10 remark "SVT traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
8500
exit
class ipv4 "CTRL-AF31"
10 remark "TACACS+ traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
49
20 remark "RADIUS authentication traffic"
20 match udp 0.0.0.0 255.255.255.255 eq 1812 0.0.0.0 255.255.255.255 gt
1023
30 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
1812
40 remark "Wireless CAPWAP control traffic"
40 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
5246
50 match udp 0.0.0.0 255.255.255.255 eq 5246 0.0.0.0 255.255.255.255 gt
1023
60 remark "SIP Signalling"
60 match tcp 0.0.0.0 255.255.255.255 range 5060 5069 0.0.0.0 255.255.255.255
range 5060 5069
exit
class ipv4 "VIDEO-AF42"
10 remark "S4B Video"
10 match udp 0.0.0.0 255.255.255.255 range 58000 58019 0.0.0.0
255.255.255.255 range 58000 58019
20 remark "S4B Client Media Port"
20 match udp 0.0.0.0 255.255.255.255 range 5350 5389 0.0.0.0
255.255.255.255 range 5350 5389
exit
class ipv4 "Network Control"
10 remark "CS6 Traffic for Q7 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 6
exit

! QoS policy to remark packets by traffic classification

policy qos "QOS_IN"
10 remark "Input QoS Policy"
20 class ipv4 "BULK-AF11" action dscp af11
30 class ipv4 "VOICE-EF" action dscp ef
40 class ipv4 "VIDEO-AF42" action dscp af42
50 class ipv4 "CTRL-AF31" action dscp af31
60 class ipv4 "BUSN-AF21" action dscp af21
70 class ipv4 "BULK-AF12" action dscp af12
80 class ipv4 "BULK-AF13" action dscp af13
90 class ipv4 "Network" action priority 7
100 class ipv4 "Network Control" action priority 6
110 class ipv4 "CS5" action priority 5
120 class ipv4 "default" action dscp default
exit

! Idle timeout for console management sessions - set to 10 minutes

console idle-timeout 600
console idle-timeout serial-usb 600

! Uplink LACP trunk

trunk 1/A1,2/A1 trk1 lacp

! Login banner

banner motd "***********************************************************\n* This is a private computer network/device. Unauthorized *\n* access is prohibited. All attempts to login/connect *\n* to this device/network are logged. Unauthorized users *\n* must disconnect now. *\n***********************************************************\n"

! Jumbo frame size/MTU set to support Dynamic Segmentation, OSPF, etc

jumbo ip-mtu 2048
jumbo max-frame-size 2068

! Send event logs of 'warning' severity and higher to syslog servers

logging 10.254.120.10
logging 10.254.224.10
logging severity warning

! QoS DSCP mode and DSCP-to-802.1p queue mappings

qos type-of-service diff-services
qos traffic-template "MFRA-VRD"
map-traffic-group 1 priority 1
map-traffic-group 1 name "background-tcg"
map-traffic-group 2 priority 2
map-traffic-group 2 name "spare-tcg"
map-traffic-group 3 priority 0
map-traffic-group 3 name "best-effort-tcg"
map-traffic-group 4 priority 3
map-traffic-group 4 name "ex-effort-tcg"
map-traffic-group 5 priority 4
map-traffic-group 5 name "controlled-load-tcg"
map-traffic-group 6 priority 5
map-traffic-group 6 name "video-tcg"
map-traffic-group 7 priority 6
map-traffic-group 7 name "voice-tcg"
map-traffic-group 8 priority 7
map-traffic-group 8 name "control-tcg"
exit

! RADIUS server configuration with dynamic authorization, no time limit
! for CoA requests to be considered current

radius-server host 10.254.33.24 key "secret"
radius-server host 10.254.33.24 dyn-authorization
radius-server host 10.254.33.24 time-window 0
radius-server host 10.254.133.24 key "secret"
radius-server host 10.254.133.24 dyn-authorization
radius-server host 10.254.133.24 time-window 0
radius-server cppm identity "m1ra"

! NTP time synchronization with authentication

timesync ntp
ntp unicast
ntp authentication key-id 1 authentication-mode md5 key-value secret
ntp server 10.254.124.10 iburst
ntp server 10.254.224.10 iburst
ntp enable

! TACACS+ server configuration

tacacs-server host 10.254.33.24 key "secret"
tacacs-server host 10.254.133.24 key "secret"

! Disable built-in Telnet server

no telnet-server

! Timezone and DST configuration

time daylight-time-rule continental-us-and-canada
time timezone -480

! Enable built-in HTTPS server (requires certificate)

web-management ssl

! Default gateway and DNS configuration

ip default-gateway 10.2.127.254
ip dns domain-name "dumarsinc.com"
ip dns server-address priority 1 10.254.10.10
ip dns server-address priority 2 10.254.130.10

! Dynamic Segmentation configuration

tunneled-node-server
controller-ip 10.1.254.10
mode role-based
exit

! Uplink LACP trunk port labels and DSCP trust

interface 1/A1
name "Link to AGG1A"
exit
interface 2/A1
name "Link to AGG1B"
exit
interface Trk1
qos trust dscp
exit

! SNMP trap hosts and contact/location info

snmp-server host 10.254.224.65 community "s3cret!" trap-level not-info
snmp-server host 10.254.124.65 community "s3cret!" trap-level not-info
snmp-server contact "TMESupport" location "RACK6-ROW6"

! Enable downloadable user roles

aaa authorization user-role enable download

! Configure privilege-mode to allow switch to assign permissions based
! on privilege level provided by authentication servers

aaa authentication login privilege-mode

! Enable TACACS+ authentication for SSH login and enable access, with local
! authentication as backup method

aaa authentication ssh login tacacs local
aaa authentication ssh enable tacacs local

! Use EAP-RADIUS for port access

aaa authentication port-access eap-radius

! Enable 802.1x authentication with limit of 5 clients on all ports

aaa port-access authenticator 1/1-1/48,2/1-2/48
aaa port-access authenticator 1/1-1/48,2/1-2/48 client-limit 5
aaa port-access authenticator active

! Enable MAC-based authentication on all ports with limit of 5
! addresses per port

aaa port-access mac-based 1/1-1/48,2/1-2/48
aaa port-access mac-based 1/1-1/48,2/1-2/48 addr-limit 5

! Remove all ports from VLAN 1

vlan 1
name "DEFAULT_VLAN"
no untagged 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4,Trk1
no ip address
exit

! VLAN 10 used for switch management, tagged across uplink trunk

vlan 10
name "management"
tagged Trk1
ip address 10.2.2.11 255.255.128.0
jumbo
service-policy "QOS_IN" in
exit

! VLANs 20, 30, 40 dynamically assigned by user role; IGMP, jumbo frames,
! QoS policy (for inbound packets) enabled

vlan 20
name "IoT - bldg control"
no ip address
ip igmp

jumbo
service-policy "QOS_IN" in
exit
vlan 30
name "physec devices"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 40
name "phones-av devices"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit

! All ports untagged on VLAN 999 by default, no network access

vlan 999
name "Unauth VLAN"
untagged 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4
no ip address
jumbo
exit

! VLANs 1281, 1282, 1283 dynamically assigned by user role; IGMP, jumbo frames,
! QoS policy (for inbound packets) enabled

vlan 1281
name "EXEC_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 1282
name "ENGINEERING_SUPPORT_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 1283
name "DEFAULT_USERS"
no ip address
ip igmp

jumbo
service-policy "QOS_IN" in
exit

! Enable MSTP, enable admin-edge-port and BPDU protection on all non-uplink
! ports with a timeout of 60 seconds

spanning-tree
spanning-tree 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4 admin-edge-port
spanning-tree 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4 bpdu-protection
spanning-tree Trk1 priority 4 bpdu-filter pvst-filter
spanning-tree bpdu-protection-timeout 60 priority 0

! Disable built-in TFTP server

no tftp server

! Enable loop-protection on uplink LACP trunk

loop-protect Trk1

! Disable USB port autorun

no autorun

! Disable configuration file and firmware downloads via DHCP option

no dhcp config-file-update
no dhcp image-file-update
no dhcp tr69-acs-url

! Set a local manager password


password manager

Mt. Rose Site Configurations

SWMTR-WAN1
!Version ArubaOS-CX TL.10.01.0002
hostname SWMTR-WAN1
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************

!
banner exec !
***********************************************************************
* *
*
* Welcome to SWMTR-WAN1 // 8320 // Loopback 0 10.224.224.36
* *
* Mt. Rose WAN Edge to Metro E Network
* *
***********************************************************************

! NTP configuration including authentication and timezone
! configuration elements
ntp authentication
clock timezone us/pacific
ntp authentication-key 1 md5 ciphertext <<removed>>
ntp server 10.254.224.10 iburst
ntp server 10.254.124.10 iburst prefer

! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning

! Sample sFlow configuration exporting to two collectors
sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.224.224.36
!
!
!

! fallback local account if TACACS is not reachable/functioning


user admin group administrators password <<removed>>

! Define both TACACS hosts


tacacs-server host HQ-TACACS key ciphertext <<removed>>
tacacs-server host GDR-TACACS key ciphertext <<removed>>

! Place both TACACS hosts in a group called TACACS


aaa group server tacacs TACACS
server 10.254.1.32
server 10.254.128.32

! enable authentication via TACACS


aaa authentication login default group TACACS

! enable command authorization via TACACS; note we fall back to
! 'none' if the servers are not reachable/available
aaa authorization commands default group TACACS none

! enable command accounting via TACACS for the group TACACS


aaa accounting all default start-stop group TACACS

! enable SNMPv2c
snmp-server vrf default
snmp-server system-description SWMTR-WAN1
snmp-server system-location Mt Rose MDF // Rack 1
snmp-server system-contact netops@dumarsinc.com

snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!
snmp-server host 10.254.224.65 trap version v2c community s3cret!

! enable SSH from the default VRF


ssh server vrf default

! Please see the route maps below to understand the function
! of these prefix lists. The prefixes in these lists will
! need to be changed in most cases.

ip prefix-list ALLOW_REDISTRIBUTE seq 20 permit 10.254.0.0/16 ge 16 le 32

ip prefix-list DEFAULT_ONLY seq 10 permit 0.0.0.0/0


ip prefix-list SITE_NETWORKS seq 10 permit 10.0.0.0/8 le 32
!

ip community-list standard PREFER_GDR_PREFIX seq 10 permit 1:2
ip community-list standard PREFER_HQ_PREFIX seq 10 permit 1:1

! This route-map controls which BGP prefixes are allowed to
! be sent into OSPF. We only need to send a default, but we
! also allow for specific routes to be redistributed.
!
route-map BGP->OSPF permit seq 10
match ip address prefix-list DEFAULT_ONLY
route-map BGP->OSPF permit seq 20
match ip address prefix-list ALLOW_REDISTRIBUTE
route-map BGP->OSPF deny seq 99
description don't allow other prefixes into the OSPF domain

! The site inbound policy is to prefer the path to GDR over HQ

route-map BGP_INBOUND_POLICY permit seq 10
match community-list PREFER_GDR_PREFIX
set local-preference 500

route-map BGP_INBOUND_POLICY permit seq 20
match community-list PREFER_HQ_PREFIX
set local-preference 250

route-map BGP_INBOUND_POLICY permit seq 99
description DO NOTHING
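
Because BGP compares local preference before AS-path length, the inbound policy above makes any route tagged with the GDR community (local preference 500) win over the same route tagged by HQ (250), regardless of AS-path length. A minimal illustration of that tie-break order:

# Sketch of the decision the inbound policy drives: highest local preference
# wins before AS-path length is even considered.
candidates = [
    {"via": "GDR", "local_pref": 500, "as_path_len": 3},
    {"via": "HQ",  "local_pref": 250, "as_path_len": 1},
]
best = max(candidates, key=lambda r: (r["local_pref"], -r["as_path_len"]))
print(best["via"])  # GDR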

! The outbound policy marks prefixes with a community matching
! the ASN and adding ":1". By design, we are not altering
! path selection for remote sites in this design

route-map BGP_OUTBOUND_POLICY permit seq 99
set community 64515:1 additive

! route-map to filter OSPF -> BGP redistribution
! by default we take all prefixes into the BGP table
route-map OSPF->BGP permit seq 99
description match any clause to make this work

! route-map to filter statics flowing into BGP
! for routes we originated via statics, we set
! the local preference to influence prefix selection

route-map STATIC->BGP permit seq 10
set local-preference 500

! route-map to filter statics flowing into OSPF
! by default, we allow redistribution
route-map STATIC->OSPF permit seq 99
description CATCH ALL PERMIT LINE

! enable OSPF and define a process ID


router ospf 1
! define the OSPF router ID to match
! the loopback address
router-id 10.224.224.36
! set the max-metric on start-up to exclude the device
! from routing via OSPF until <check time> seconds after
! system boot

max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! redistribute BGP and static routes into OSPF
redistribute bgp route-map BGP->OSPF
redistribute static route-map STATIC->OSPF
! define the OSPF area ID
area 0.0.0.0

! configure all VLANs and provide names for each VLAN
! note that vsx-sync is enabled for VLANs participating in the
! vsx configuration
vlan 1
! define the QoS queuing profile
! note the swapping of queue 5 and local-priority 6 along with
! queue 7 and local-priority 5
! this is done to align with the RFC 4594 QoS model
qos queue-profile QOS_PROFILE_OUT
map queue 0 local-priority 0
map queue 1 local-priority 1
map queue 2 local-priority 2
map queue 3 local-priority 3
map queue 4 local-priority 4
map queue 5 local-priority 6
map queue 6 local-priority 7
map queue 7 local-priority 5
name queue 7 VOICE

! define a QoS schedule profile and adjust weights of each
! queue as well as define a strict priority queue to support
! voice traffic
qos schedule-profile QOS_OUT
dwrr queue 0 weight 1
dwrr queue 1 weight 1
dwrr queue 2 weight 1

dwrr queue 3 weight 1
dwrr queue 4 weight 1
dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7

! attach the queue profile and schedule profiles


apply qos queue-profile QOS_PROFILE_OUT schedule-profile QOS_OUT

! globally trust DSCP on received packets


qos trust dscp

! remap DSCP 40-45 and 47 to local-priority 6


qos dscp-map 40 local-priority 6 color green name CS5
qos dscp-map 41 local-priority 6 color green
qos dscp-map 42 local-priority 6 color green
qos dscp-map 43 local-priority 6 color green
qos dscp-map 44 local-priority 6 color green
qos dscp-map 45 local-priority 6 color green
qos dscp-map 47 local-priority 6 color green

interface lag 1
description L3 to SWMTR-WAN2
no shutdown
ip mtu 2048
ip address 10.224.0.53/30

ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>

interface lag 2
description to SWMTR-CORE
no shutdown
ip mtu 2048

ip address 10.16.252.5/30
lacp mode active

ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>

interface 1/1/1
description Metro-E SWSV-WAN2
no shutdown
mtu 2068

ip address 10.224.0.38/30
ip mtu 2048

interface 1/1/5
description to SWMTR-CORE
no shutdown
mtu 2068
lag 2

interface 1/1/6
description to SWMTR-CORE
no shutdown
mtu 2068
lag 2

interface loopback 0
ip address 10.224.224.36/32
ip ospf 1 area 0.0.0.0

! enable BGP for inter-site routing across Metro E circuits


router bgp 64515
! only announce summary routes for the Mt. Rose site
! suppress prefixes more specific than the aggregates
aggregate-address 10.16.0.0/21 summary-only

aggregate-address 10.16.8.0/21 summary-only
aggregate-address 10.64.0.0/21 summary-only
! define router ID to match the loopback 0 address
bgp router-id 10.224.224.36
! advertise the loopback interface into bgp
network 10.224.224.36/32
! enable bgp fast-external-fallover to tear down bgp
! sessions if the physical interface goes down
bgp fast-external-fallover
bgp log-neighbor-changes
! redistribute OSPF and BGP as per route maps
redistribute ospf route-map OSPF->BGP
redistribute static route-map STATIC->BGP
! create a peer group for common config elements for
! eBGP peers. Note that community is being sent as
! community is used by remote sites to influence
! bgp path selection.
neighbor EBGP_PEERS peer-group
neighbor EBGP_PEERS route-map BGP_INBOUND_POLICY in
neighbor EBGP_PEERS route-map BGP_OUTBOUND_POLICY out
neighbor EBGP_PEERS fall-over
neighbor EBGP_PEERS send-community standard
! enable BFD for eBGP peers
neighbor EBGP_PEERS fall-over bfd
neighbor 10.224.0.37 remote-as 64514
neighbor 10.224.0.37 peer-group EBGP_PEERS
neighbor 10.224.0.37 password ciphertext <<removed>>
neighbor 10.224.224.37 remote-as 64515
neighbor 10.224.224.37 description SWMTR-WAN2
! for iBGP peerings we are using the loopback
! of our neighbor to provide for reachability over
! multiple paths if available.
neighbor 10.224.224.37 password ciphertext <<removed>>
neighbor 10.224.224.37 update-source loopback 0
!
https-server rest access-mode read-write
https-server vrf default

SWMTR-WAN2 Configuration

!Version ArubaOS-CX TL.10.01.0002


hostname SWMTR-WAN2
banner motd !
**************************************************************
* *
* This is a private computer network/device. Unauthorized *
* access is prohibited. All attempts to login/connect *
* to this device/network are logged. Unauthorized users *
* must disconnect now. *
* *
**************************************************************

!
banner exec !
***********************************************************************
* *
*
* Welcome to SWMTR-WAN2 // 8320 // Loopback 0 10.224.224.37
* *
* Mt. Rose WAN Edge Switch 2 to Metro E Network
* *
***********************************************************************

! NTP configuration including authentication and timezone
! configuration elements
ntp authentication
clock timezone us/pacific
ntp authentication-key 1 md5 ciphertext <<removed>>
ntp server 10.254.224.10 iburst
ntp server 10.254.124.10 iburst prefer

! Syslog configuration
logging 10.254.120.10 udp severity warning
logging 10.254.224.10 udp severity warning

! Sample sFlow configuration exporting to two collectors

sflow
sflow collector 10.254.124.32
sflow collector 10.254.224.32
! define the reporting agent IP to match loopback 0
! interface address
sflow agent-ip 10.224.224.37
!
!
!

! fallback local account if TACACS is not reachable/functioning


user admin group administrators password <<removed>>

! Define both TACACS hosts


tacacs-server host HQ-TACACS key ciphertext <<removed>>
tacacs-server host GDR-TACACS key ciphertext <<removed>>

! Place both TACACS hosts in a group called TACACS


aaa group server tacacs TACACS
server 10.254.1.32
server 10.254.128.32

! enable authentication via TACACS


aaa authentication login default group TACACS

! enable command authorization via TACACS; note we fall back to
! 'none' if the servers are not reachable/available
aaa authorization commands default group TACACS none

! enable command accounting via TACACS for the group TACACS


aaa accounting all default start-stop group TACACS

! enable SNMPv2c
snmp-server vrf default
snmp-server system-description SWMTR-WAN2
snmp-server system-location Mt Rose MDF // Rack 1
snmp-server system-contact netops@dumarsinc.com
snmp-server community s3cret!
snmp-server host 10.254.124.65 trap version v2c community s3cret!

snmp-server host 10.254.224.65 trap version v2c community s3cret!

! enable SSH from the default VRF


ssh server vrf default

! Please see the route maps below to understand the function
! of these prefix lists. The prefixes in these lists will
! need to be changed in most cases.

ip prefix-list ALLOW_REDISTRIBUTE seq 20 permit 10.254.0.0/16 ge 16 le 32

ip prefix-list DEFAULT_ONLY seq 10 permit 0.0.0.0/0


ip prefix-list SITE_NETWORKS seq 10 permit 10.0.0.0/8 le 32
!

ip community-list standard PREFER_GDR_PREFIX seq 10 permit 1:2
ip community-list standard PREFER_HQ_PREFIX seq 10 permit 1:1

! This route-map controls which BGP prefixes are allowed to
! be sent into OSPF. We only need to send a default, but we
! also allow for specific routes to be redistributed.
!
route-map BGP->OSPF permit seq 10
match ip address prefix-list DEFAULT_ONLY
route-map BGP->OSPF permit seq 20
match ip address prefix-list ALLOW_REDISTRIBUTE
route-map BGP->OSPF deny seq 99
description don't allow other prefixes into the OSPF domain

! The site inbound policy is to prefer the path to GDR over HQ

route-map BGP_INBOUND_POLICY permit seq 10
match community-list PREFER_GDR_PREFIX
set local-preference 500

route-map BGP_INBOUND_POLICY permit seq 20
match community-list PREFER_HQ_PREFIX
set local-preference 250

route-map BGP_INBOUND_POLICY permit seq 99
description DO NOTHING

! The outbound policy marks prefixes with a community matching
! the ASN and adding ":1". By design, we are not altering
! path selection for remote sites in this design

route-map BGP_OUTBOUND_POLICY permit seq 99
set community 64515:1 additive

! route-map to filter OSPF -> BGP redistribution
! by default we take all prefixes into the BGP table
route-map OSPF->BGP permit seq 99
description match any clause to make this work

! route-map to filter statics flowing into BGP
! for routes we originated via statics, we set
! the local preference to influence prefix selection

route-map STATIC->BGP permit seq 10
set local-preference 500

! route-map to filter statics flowing into OSPF
! by default, we allow redistribution
route-map STATIC->OSPF permit seq 99
description CATCH ALL PERMIT LINE

! enable OSPF and define a process ID


router ospf 1
! define the OSPF router ID to match
! the loopback address
router-id 10.224.224.37
! set the max-metric on start-up to exclude the device
! from routing via OSPF until <check time> seconds after
! system boot
max-metric router-lsa on-startup
! Use passive interfaces by default and only no-passive on
! interfaces which require OSPF adjacencies to be built
passive-interface default
! enable SNMP traps for OSPF events to be sent to trap
! receivers
trap-enable
! redistribute BGP and static routes into OSPF
redistribute bgp route-map BGP->OSPF
redistribute static route-map STATIC->OSPF
! define the OSPF area ID
area 0.0.0.0

! configure all VLANs and provide names for each VLAN
! note that vsx-sync is enabled for VLANs participating in the
! vsx configuration
vlan 1
! define the QoS queuing profile
! note the swapping of queue 5 and local-priority 6 along with
! queue 7 and local-priority 5
! this is done to align with the RFC 4594 QoS model
qos queue-profile QOS_PROFILE_OUT
map queue 0 local-priority 0
map queue 1 local-priority 1
map queue 2 local-priority 2
map queue 3 local-priority 3
map queue 4 local-priority 4
map queue 5 local-priority 6
map queue 6 local-priority 7
map queue 7 local-priority 5
name queue 7 VOICE

! define a QoS schedule profile and adjust weights of each
! queue as well as define a strict priority queue to support
! voice traffic
qos schedule-profile QOS_OUT
dwrr queue 0 weight 1
dwrr queue 1 weight 1
dwrr queue 2 weight 1
dwrr queue 3 weight 1
dwrr queue 4 weight 1

dwrr queue 5 weight 1
dwrr queue 6 weight 1
strict queue 7

! attach the queue profile and schedule profiles


apply qos queue-profile QOS_PROFILE_OUT schedule-profile QOS_OUT

! globally trust DSCP on received packets


qos trust dscp

! remap DSCP 40-45 and 47 to local-priority 6


qos dscp-map 40 local-priority 6 color green name CS5
qos dscp-map 41 local-priority 6 color green
qos dscp-map 42 local-priority 6 color green
qos dscp-map 43 local-priority 6 color green
qos dscp-map 44 local-priority 6 color green
qos dscp-map 45 local-priority 6 color green
qos dscp-map 47 local-priority 6 color green

interface lag 1
description L3 to SWMTR-WAN1
no shutdown
ip mtu 2048
ip address 10.224.0.54/30

ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>

interface lag 2
description to SWMTR-CORE
no shutdown
ip mtu 2048
ip address 10.16.252.9/30
lacp mode active

ip ospf 1 area 0.0.0.0
no ip ospf passive
ip ospf network point-to-point
ip ospf authentication message-digest
ip ospf authentication-key ciphertext <<removed>>

interface 1/1/1
description Metro-E to SWGDR-WAN1
no shutdown
mtu 2068

ip address 10.224.0.58/30
ip mtu 2048

interface 1/1/5
description to SWMTR-CORE
no shutdown
mtu 2068
lag 2

interface 1/1/6
description to SWMTR-CORE
no shutdown
mtu 2068
lag 2

interface loopback 0
ip address 10.224.224.37/32
ip ospf 1 area 0.0.0.0

! enable BGP for inter-site routing across Metro E circuits


router bgp 64515
! only announce summary routes for the Mt. Rose site
! suppress prefixes more specific than the aggregates
aggregate-address 10.16.0.0/21 summary-only
aggregate-address 10.16.8.0/21 summary-only
aggregate-address 10.64.0.0/21 summary-only

! define router ID to match the loopback 0 address
bgp router-id 10.224.224.37
! advertise the loopback interface into bgp
network 10.224.224.37/32
! enable bgp fast-external-fallover to tear down bgp
! sessions if the physical interface goes down
bgp fast-external-fallover
bgp log-neighbor-changes
! redistribute OSPF and BGP as per route maps
redistribute ospf route-map OSPF->BGP
redistribute static route-map STATIC->BGP
! create a peer group for common config elements for
! eBGP peers. Note that community is being sent as
! community is used by remote sites to influence
! bgp path selection.
neighbor EBGP_PEERS peer-group
neighbor EBGP_PEERS route-map BGP_INBOUND_POLICY in
neighbor EBGP_PEERS route-map BGP_OUTBOUND_POLICY out
neighbor EBGP_PEERS fall-over
neighbor EBGP_PEERS send-community standard
! enable BFD for eBGP peers
neighbor EBGP_PEERS fall-over bfd
neighbor 10.224.0.57 remote-as 64513
neighbor 10.224.0.57 peer-group EBGP_PEERS
neighbor 10.224.0.57 password ciphertext <<removed>>
neighbor 10.224.224.36 remote-as 64515
neighbor 10.224.224.36 description SWMTR-WAN1
! for iBGP peerings we are using the loopback
! of our neighbor to provide for reachability over
! multiple paths if available.
neighbor 10.224.224.36 password ciphertext <<removed>>
neighbor 10.224.224.36 update-source loopback 0
!
https-server rest access-mode read-write
https-server vrf default

SWMTR-CORE

; J9850A Configuration Editor; Created on release #KB.16.06.0006
; Ver #13:4f.f8.1c.fb.7f.bf.bb.ff.7c.59.fc.7b.ff.ff.fc.ff.ff.3f.ef:49

! VSF stack hostname

hostname "SWMTR-CORE"

! Configured modules (must match installed hardware)

module 1/A type j9993a
module 1/B type j9996a
module 2/A type j9993a
module 2/B type j9996a
module 2/C type j9991a

! VSF configuration with OOBM-MAD for split-stack condition handling

vsf
enable domain 1
member 1
type "J9850A" mac-address 941882-cf2900
priority 255
link 1 1/B1-1/B2
link 1 name "I-Link1_1"
exit
member 2
type "J9850A" mac-address 941882-cf2f00
priority 128
link 1 2/B1-2/B2
link 1 name "I-Link2_1"
exit
oobm-mad
port-speed 40g
exit

! QoS traffic classifications to identify packets for DSCP remarking

class ipv4 "CS5"


10 remark "CS5 Traffic for Q6 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 5
exit
class ipv4 "Network"
10 remark "CS7 Traffic for Q8 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 10
exit
class ipv4 "default"

10 match ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
exit
class ipv4 "VOICE-EF"
10 remark "S4B Audio"
10 match udp 0.0.0.0 255.255.255.255 range 50020 50039 0.0.0.0
255.255.255.255 range 50020 50039
exit
class ipv4 "BULK-AF11"
10 remark "OSSV servers - Snap Vault"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
10566
20 match tcp 0.0.0.0 255.255.255.255 eq 10566 0.0.0.0 255.255.255.255 gt
1023
exit
class ipv4 "BULK-AF12"
10 remark "S4B File Transfer"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255
range 42020 42039
20 remark "S4B App/Screen Sharing"
20 match tcp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
25 match udp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
exit
class ipv4 "BULK-AF13"
10 remark "WFoD"
10 match udp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
20 match tcp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
exit
class ipv4 "BUSN-AF21"
10 remark "SVT traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
8500
exit
class ipv4 "CTRL-AF31"
10 remark "TACACS+ traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
49
20 remark "RADIUS authentication traffic"
20 match udp 0.0.0.0 255.255.255.255 eq 1812 0.0.0.0 255.255.255.255 gt
1023
30 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
1812
40 remark "Wireless CAPWAP control traffic"
40 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
5246

50 match udp 0.0.0.0 255.255.255.255 eq 5246 0.0.0.0 255.255.255.255 gt
1023
60 remark "SIP Signalling"
60 match tcp 0.0.0.0 255.255.255.255 range 5060 5069 0.0.0.0 255.255.255.255
range 5060 5069
exit
class ipv4 "VIDEO-AF42"
10 remark "S4B Video"
10 match udp 0.0.0.0 255.255.255.255 range 58000 58019 0.0.0.0
255.255.255.255 range 58000 58019
20 remark "S4B Client Media Port"
20 match udp 0.0.0.0 255.255.255.255 range 5350 5389 0.0.0.0
255.255.255.255 range 5350 5389
exit
class ipv4 "Network Control"
10 remark "CS6 Traffic for Q7 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 6
exit

! QoS policy to remark packets by traffic classification

policy qos "QOS_IN"


10 remark "Input QoS Policy"
20 class ipv4 "BULK-AF11" action dscp af11
30 class ipv4 "VOICE-EF" action dscp ef
40 class ipv4 "VIDEO-AF42" action dscp af42
50 class ipv4 "CTRL-AF31" action dscp af31
60 class ipv4 "BUSN-AF21" action dscp af21
70 class ipv4 "BULK-AF12" action dscp af12
80 class ipv4 "BULK-AF13" action dscp af13
90 class ipv4 "Network" action priority 7
100 class ipv4 "Network Control" action priority 6
110 class ipv4 "CS5" action priority 5
120 class ipv4 "default" action dscp default
exit

! Idle timeout for console management sessions - set to 10 minutes

console idle-timeout 600


console idle-timeout serial-usb 600

! Downlink LACP trunk to MTROSE-ACC1

trunk 1/A8,2/A8 trk1 lacp

! Login banner

banner motd "***********************************************************\n*

237
*\n* This is a private computer
network/device. Unauthorized
*\n* access is prohibited. All attempts to login/connect *\n* to this
device/network are
logged. Unauthorized users *\n* must disconnect now.
*\n*

*\n***********************************************************\n"

! Jumbo frame size/MTU set to support Dynamic Segmentation, OSPF, etc.

jumbo ip-mtu 2048


jumbo max-frame-size 2068

! Send event logs of 'warning' severity and higher to syslog servers

logging 10.254.120.10
logging 10.254.224.10
logging severity warning

! QoS DSCP mode and DSCP-to-802.1p queue mappings

qos type-of-service diff-services


qos traffic-template "MFRA-VRD"
map-traffic-group 1 priority 1
map-traffic-group 1 name "background-tcg"
map-traffic-group 2 priority 2
map-traffic-group 2 name "spare-tcg"
map-traffic-group 3 priority 0
map-traffic-group 3 name "best-effort-tcg"
map-traffic-group 4 priority 3
map-traffic-group 4 name "ex-effort-tcg"
map-traffic-group 5 priority 4
map-traffic-group 5 name "controlled-load-tcg"
map-traffic-group 6 priority 5
map-traffic-group 6 name "video-tcg"
map-traffic-group 7 priority 6
map-traffic-group 7 name "voice-tcg"
map-traffic-group 8 priority 7
map-traffic-group 8 name "control-tcg"
exit

! NTP time synchronization with authentication

timesync ntp
ntp unicast
ntp authentication key-id 1 authentication-mode md5 key-value secret
ntp server 10.254.124.10 iburst
ntp server 10.254.224.10 iburst

ntp enable
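
! Operational note (not configuration): NTP synchronization state can be
! checked with 'show ntp status' and 'show ntp associations'.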

! TACACS+ server configuration

tacacs-server host 10.254.33.24 key "secret"


tacacs-server host 10.254.133.24 key "secret"

! Disable built-in Telnet server

no telnet-server

! Timezone and DST configuration

time daylight-time-rule continental-us-and-canada


time timezone -480

! Enable built-in HTTPS server (requires certificate)

web-management ssl

! Router ID for OSPF (set to Loopback address)

ip router-id 10.16.1.8

! Enable IP routing

ip routing

! Keychain entry for authentication keys

key-chain "secret"
key-chain "secret" key 1 key-string "secret!"

! Loopback address for OSPF routing

interface loopback 0
ip address 10.16.1.8
ip ospf 10.16.1.8 area backbone
exit

! SNMP community, trap hosts, and contact/location info

snmp-server community "s3cret!" unrestricted


snmp-server host 10.254.224.65 community "s3cret!" trap-level not-info
snmp-server host 10.254.124.65 community "s3cret!" trap-level not-info
snmp-server listen oobm
snmp-server contact "netops@dumarsinc.com" location "MTROSE // Row6 Rack8"

! Configure privilege-mode to allow switch to assign permissions based
! on privilege level provided by authentication servers

aaa authentication login privilege-mode

! Enable TACACS+ authentication for SSH login and enable access

aaa authentication ssh login tacacs


aaa authentication ssh enable tacacs

! OSPF configuration
router ospf
area backbone
enable
exit
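
! Operational note (not configuration): once the loopback, WAN, and VLAN
! interfaces in this config join area backbone, adjacencies and learned routes
! can be checked with 'show ip ospf neighbor' and 'show ip route'.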

! Remove all ports from VLAN 1

vlan 1
name "DEFAULT_VLAN"
no untagged 1/A1-1/A7,2/A1-2/A7,2/C1-2/C24,Trk1
no ip address
exit

! VLANs 2 and 3 used for uplinks to WAN1 and WAN2

vlan 2
name "TO WAN1"
untagged 2/C1
ip address 10.16.7.2 255.255.255.252
ip ospf 10.16.7.2 area backbone
ip ospf 10.16.7.2 network-type point-to-point
exit
vlan 3
name "TO WAN2"
untagged 2/C2
ip address 10.16.7.6 255.255.255.252
ip ospf 10.16.7.6 area backbone
ip ospf 10.16.7.6 network-type point-to-point
exit

! VLAN 10 used for switch management, tagged across downlink trunk

vlan 10
name "Management"
tagged Trk1
ip address 10.16.15.254 255.255.248.0
ip ospf 10.16.15.254 passive
ip ospf 10.16.15.254 area backbone

exit

! VLANs 20, 30, 40 are dynamically assigned to devices at access layer by
! user role

vlan 20
name "IoT_Building_Control"
ip address 172.19.3.254 255.255.252.0
ip ospf 172.19.3.254 passive
ip ospf 172.19.3.254 area backbone
exit
vlan 30
name "Phy_Sec"
ip address 172.19.7.254 255.255.252.0
ip ospf 172.19.7.254 passive
ip ospf 172.19.7.254 area backbone
exit
vlan 40
name "Phone_AV"
tagged Trk1
ip address 172.19.11.254 255.255.252.0
ip ospf 172.19.11.254 passive
ip ospf 172.19.11.254 area backbone
exit

! VLAN 999 is unauthorized user VLAN; untagged on all non-uplink/downlink
! ports

vlan 999
name "Unauth VLAN"
no untagged
untagged 1/A1-1/A7,2/A1-2/A7,2/C3-2/C24
no ip address
exit

! VLANs 1281, 1282, 1283 are dynamically assigned to devices at access layer
! by user role

vlan 1281
name "EXEC_Corp"
ip address 10.64.1.254 255.255.254.0
ip ospf 10.64.1.254 passive
ip ospf 10.64.1.254 area backbone
exit
vlan 1282
name "Engineering_Support"
ip address 10.64.3.254 255.255.254.0
ip ospf 10.64.3.254 passive

ip ospf 10.64.3.254 area backbone
exit
vlan 1283
name "Other_Users"
ip address 10.64.5.254 255.255.254.0
ip ospf 10.64.5.254 passive
ip ospf 10.64.5.254 area backbone
exit

! Enable MSTP as primary root with priority 0, with root-guard on all
! non-uplink ports

spanning-tree
spanning-tree 1/A1-1/A7,2/A1-2/A7,2/C3-2/C24 root-guard
spanning-tree Trk1 priority 0 root-guard
spanning-tree root primary priority 0
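
! Operational note (not configuration): 'show spanning-tree' should report
! this switch as the root bridge, since it is configured as primary root
! with priority 0.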

! Enable v3-only mode (for VSF)

no allow-v2-modules

! Set a local manager password

password manager

SWMTR-ACC-1A-1

Running configuration:

; hpStack_WC Configuration Editor; Created on release #WC.16.06.0006


; Ver #13:4f.f8.1c.9b.3f.bf.bb.ef.7c.59.fc.6b.fb.9f.fc.ff.ff.37.ef:05

! Stacking member configuration and stack hostname

stacking
member 1 type "JL321A" mac-address <removed>
member 1 flexible-module A type JL083A
exit
member 2 type "JL321A" mac-address <removed>
member 2 flexible-module A type JL083A
exit
hostname "SWMTR-ACC-1A-1"

! QoS traffic classifications to identify packets for DSCP and 802.1p
! remarking
class ipv4 "CS5"
10 remark "CS5 Traffic for Q6 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 5
exit
class ipv4 "Network"
10 remark "CS7 Traffic for Q8 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 10
exit
class ipv4 "default"
10 match ip 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255
exit
class ipv4 "VOICE-EF"
10 remark "S4B Audio"
10 match udp 0.0.0.0 255.255.255.255 range 50020 50039 0.0.0.0
255.255.255.255 range 50020 50039
exit
class ipv4 "BULK-AF11"
10 remark "OSSV servers - Snap Vault"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
10566
20 match tcp 0.0.0.0 255.255.255.255 eq 10566 0.0.0.0 255.255.255.255 gt
1023
exit
class ipv4 "BULK-AF12"
10 remark "S4B File Transfer"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255
range 42020 42039
20 remark "S4B App/Screen Sharing"
20 match tcp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
25 match udp 0.0.0.0 255.255.255.255 range 42000 42019 0.0.0.0
255.255.255.255 range 42000 42019
exit
class ipv4 "BULK-AF13"
10 remark "WFoD"
10 match udp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
20 match tcp 0.0.0.0 255.255.255.255 eq 5103 0.0.0.0 255.255.255.255 eq
5103
exit
class ipv4 "BUSN-AF21"
10 remark "SVT traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
8500
exit
class ipv4 "CTRL-AF31"
10 remark "TACACS+ traffic"
10 match tcp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
49

20 remark "RADIUS authentication traffic"
20 match udp 0.0.0.0 255.255.255.255 eq 1812 0.0.0.0 255.255.255.255 gt
1023
30 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
1812
40 remark "Wireless CAPWAP control traffic"
40 match udp 0.0.0.0 255.255.255.255 gt 1023 0.0.0.0 255.255.255.255 eq
5246
50 match udp 0.0.0.0 255.255.255.255 eq 5246 0.0.0.0 255.255.255.255 gt
1023
60 remark "SIP Signalling"
60 match tcp 0.0.0.0 255.255.255.255 range 5060 5069 0.0.0.0 255.255.255.255 range 5060 5069
exit
class ipv4 "VIDEO-AF42"
10 remark "S4B Video"
10 match udp 0.0.0.0 255.255.255.255 range 58000 58019 0.0.0.0
255.255.255.255 range 58000 58019
20 remark "S4B Client Media Port"
20 match udp 0.0.0.0 255.255.255.255 range 5350 5389 0.0.0.0
255.255.255.255 range 5350 5389
exit
class ipv4 "Network Control"
10 remark "CS6 Traffic for Q7 when 8-queues"
10 match udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 6
exit

! QoS policy to remark packets by traffic classification

policy qos "QOS_IN"


10 remark "Input QoS Policy"
20 class ipv4 "BULK-AF11" action dscp af11
30 class ipv4 "VOICE-EF" action dscp ef
40 class ipv4 "VIDEO-AF42" action dscp af42
50 class ipv4 "CTRL-AF31" action dscp af31
60 class ipv4 "BUSN-AF21" action dscp af21
70 class ipv4 "BULK-AF12" action dscp af12
80 class ipv4 "BULK-AF13" action dscp af13
90 class ipv4 "Network" action priority 7
100 class ipv4 "Network Control" action priority 6
110 class ipv4 "CS5" action priority 5
120 class ipv4 "default" action dscp default
exit

! Idle timeout for console management sessions - set to 10 minutes

console idle-timeout 600


console idle-timeout serial-usb 600

! Uplink LACP trunk to HQ Aggregation

trunk 1/A1,2/A1 trk1 lacp

! Uplink LACP trunk to MTR-CORE

trunk 1/A2,2/A2 trk2 lacp

! Login banner

banner motd "***********************************************************\n*


*\n* This is a private computer
network/device. Unauthorized
*\n* access is prohibited. All attempts to login/connect *\n* to this
device/network are
logged. Unauthorized users *\n* must disconnect now.
*\n*

*\n***********************************************************\n"

! Jumbo frame size/MTU set to support Dynamic Segmentation, OSPF, etc.

jumbo ip-mtu 2048


jumbo max-frame-size 2068

! Send event logs of 'warning' severity and higher to syslog servers

logging 10.254.120.10
logging 10.254.224.10
logging severity warning

! QoS DSCP mode and DSCP-to-802.1p queue mappings

qos type-of-service diff-services


qos traffic-template "MFRA-VRD"
map-traffic-group 1 priority 1
map-traffic-group 1 name "background-tcg"
map-traffic-group 2 priority 2
map-traffic-group 2 name "spare-tcg"
map-traffic-group 3 priority 0
map-traffic-group 3 name "best-effort-tcg"
map-traffic-group 4 priority 3
map-traffic-group 4 name "ex-effort-tcg"
map-traffic-group 5 priority 4
map-traffic-group 5 name "controlled-load-tcg"
map-traffic-group 6 priority 5
map-traffic-group 6 name "video-tcg"
map-traffic-group 7 priority 6
map-traffic-group 7 name "voice-tcg"

map-traffic-group 8 priority 7
map-traffic-group 8 name "control-tcg"
exit

! RADIUS server configuration with dynamic authorization, no time limit
! for CoA requests to be considered current

radius-server host 10.254.33.24 key "secret"


radius-server host 10.254.33.24 dyn-authorization
radius-server host 10.254.33.24 time-window 0
radius-server host 10.254.133.24 key "secret"
radius-server host 10.254.133.24 dyn-authorization
radius-server host 10.254.133.24 time-window 0
radius-server cppm identity "m1ra"

! NTP time synchronization with authentication

timesync ntp
ntp unicast
ntp authentication key-id 1 authentication-mode md5 key-value secret
ntp server 10.254.124.10 iburst
ntp server 10.254.224.10 iburst
ntp enable

! TACACS+ server configuration

tacacs-server host 10.254.33.24 key "secret"


tacacs-server host 10.254.133.24 key "secret"

! Disable built-in Telnet server

no telnet-server

! Timezone and DST configuration

time daylight-time-rule continental-us-and-canada


time timezone -480

! Enable built-in HTTPS server (requires certificate)

web-management ssl

! Default gateway and DNS configuration

ip default-gateway 10.16.15.254

! Dynamic Segmentation configuration

tunneled-node-server

controller-ip 10.1.254.10
mode role-based
exit
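
! In role-based mode, traffic is tunneled per user rather than per port: when
! the RADIUS server returns a role whose attributes redirect the client, that
! client's traffic is carried in a GRE tunnel to the controller cluster at
! 10.1.254.10. A locally defined role could express the redirect as follows
! (illustrative sketch only; the role name is hypothetical, and in this design
! the redirecting roles are downloaded from ClearPass instead):
!
!    aaa authorization user-role name "TUNNELED-DEVICE"
!       tunneled-node-server-redirect secondary-role "authenticated"
!    exit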

! Uplink LACP trunk port labels and DSCP trust

interface 1/A1
name "Link to AGG1A"
exit
interface 1/A2
name "Link to AGG1B"
exit
interface 1/A3
name "Link 1 to MTR-CORE"
exit
interface 1/A4
name "Link 2 to MTR-CORE"
exit
interface Trk1
qos trust dscp
exit
interface Trk2
qos trust dscp
exit

! SNMP trap hosts and contact/location info

snmp-server host 10.254.224.65 community "s3cret!" trap-level not-info


snmp-server host 10.254.124.65 community "s3cret!" trap-level not-info
snmp-server contact "TMESupport" location "RACK6-ROW6"

! Enable downloadable user roles

aaa authorization user-role enable download

! Configure privilege-mode to allow switch to assign permissions based
! on privilege level provided by authentication servers

aaa authentication login privilege-mode

! Enable TACACS+ authentication for SSH login and enable access, with local
! authentication as backup method

aaa authentication ssh login tacacs local


aaa authentication ssh enable tacacs local

! Use EAP-RADIUS for port access

aaa authentication port-access eap-radius

! Enable 802.1x authentication with limit of 5 clients on all ports

aaa port-access authenticator 1/1-1/48,2/1-2/48


aaa port-access authenticator 1/1-1/48,2/1-2/48 client-limit 5
aaa port-access authenticator active

! Enable MAC-based authentication on all ports with limit of 5
! addresses per port

aaa port-access mac-based 1/1-1/48,2/1-2/48


aaa port-access mac-based 1/1-1/48,2/1-2/48 addr-limit 5
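
! Operational note (not configuration): authenticated clients, the method that
! admitted them (802.1X or MAC-based), and the user role applied to each can
! typically be listed with 'show port-access clients'.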

! Remove all ports from VLAN 1

vlan 1
name "DEFAULT_VLAN"
no untagged 1/1-1/48,1/A3-1/A4,2/1-2/48,2/A3-2/A4,Trk1-Trk2
no ip address
exit

! VLAN 10 used for switch management, tagged across uplink trunks

vlan 10
name "Management"
tagged Trk1-Trk2
ip address 10.16.8.20 255.255.248.0
service-policy "QOS_IN" in
exit

! VLANs 20, 30, 40 dynamically assigned by user role

vlan 20
name "IoT_Building_Control"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 30
name "Phy_Sec"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 40
name "Phone_AV"
no ip address

ip igmp
jumbo
service-policy "QOS_IN" in
exit

! All ports untagged on VLAN 999 by default, no network access

vlan 999
name "Unauth VLAN"
untagged 1/1-1/48,1/A2-1/A4,2/1-2/48,2/A2-2/A4
no ip address
jumbo
exit

! VLANs 1281, 1282, 1283 dynamically assigned by user role

vlan 1281
name "EXEC_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 1282
name "ENGINEERING_SUPPORT_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit
vlan 1283
name "DEFAULT_USERS"
no ip address
ip igmp
jumbo
service-policy "QOS_IN" in
exit

! Enable MSTP, enable admin-edge-port and BPDU protection on all non-uplink
! ports with a timeout of 60 seconds

spanning-tree
spanning-tree 1/1-1/48,1/A3-1/A4,2/1-2/48,2/A3-2/A4 admin-edge-port
spanning-tree 1/1-1/48,1/A3-1/A4,2/1-2/48,2/A3-2/A4 bpdu-protection
spanning-tree Trk1-Trk2 priority 4 bpdu-filter pvst-filter
spanning-tree bpdu-protection-timeout 60 priority 0

! Disable built-in TFTP server

no tftp server

! Enable loop-protection on uplink LACP trunks

loop-protect Trk1-Trk2

! Disable USB port autorun

no autorun

! Disable configuration file and firmware downloads via DHCP option

no dhcp config-file-update
no dhcp image-file-update
no dhcp tr69-acs-url

! Set a local manager password


password manager

Mobility Controller Configuration

DUMARS_INC Enterprise Level Config


secondary masterip 10.254.132.10 ipsec ff4c8c0f19f3aae7347dc7048c10c708
ip access-list session mfra_employee
any any any permit dot1p-priority 0
!
ip access-list session apprf-mfra_engineering-sacl
!
ip access-list session mfra_engineering
any any any permit dot1p-priority 0
!
ip access-list session apprf-mfra_exec-sacl
!
ip access-list session apprf-mfra_employee-sacl
!
ip access-list session mfra_exec
any any any permit dot1p-priority 0
!
user-role MFRA_EMPLOYEE
access-list session mfra_employee
vlan 1283
!
user-role MFRA_ENGINEERING
access-list session mfra_engineering
vlan 1282
!
user-role MFRA_EXEC
access-list session mfra_exec
vlan 1281
!
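! The three roles above are applied when ClearPass returns the role name in a
! RADIUS Aruba-User-Role VSA during authentication; each role places the
! client in its VLAN and attaches its session ACL. As a hedged check (not
! configuration), 'show rights MFRA_EXEC' displays the resolved ACLs and VLAN
! for a role.
!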
vlan 139
!
vlan 1281
!
vlan 1283
!
vlan-name DMZ_GUEST
vlan-name MFRA_EMPLOYEE
vlan-name MFRA_EXEC
vlan DMZ_GUEST 139
vlan MFRA_EMPLOYEE 1283
vlan MFRA_EXEC 1281

HQ Site Configuration
ip access-list session apprf-dumars_guest-guest-logon-sacl
!
user-role dumars_guest-guest-logon
access-list session logon-control
access-list session captiveportal
access-list session v6-logon-control
access-list session captiveportal6
!
vlan 10
!
vlan 20
!
vlan 138
!
aaa rfc-3576-server "10.254.33.24"
key 29027974fb07628a8e71f9bdee36eb62
!
aaa authentication dot1x "dumars_byod"
!
aaa authentication dot1x "Dumars_Inc"
!
aaa authentication-server radius "M1RA-CPPM"
host "10.254.33.24"
key 09130061e7d5370d7e1ef27817239880
cppm username "m1ra" password
e4bc0458ab31346f76bfbf6c8497f9cd30517e224e291a77
!
aaa server-group "default"
auth-server Internal position 1
!

aaa server-group "dumars_byod"
auth-server M1RA-CPPM position 1
!
aaa server-group "dumars_employee"
auth-server M1RA-CPPM position 1
!
aaa server-group "dumars_engineering"
auth-server M1RA-CPPM position 1
!
aaa server-group "Dumars_Inc"
auth-server M1RA-CPPM position 1
!
aaa server-group "dumarsinc-radius"
auth-server M1RA-CPPM position 1
!
aaa profile "default"
initial-role "authenticated"
dot1x-server-group "default"
download-role
!
aaa profile "dumars_byod"
authentication-dot1x "dumars_byod"
dot1x-default-role "authenticated"
dot1x-server-group "dumars_byod"
!
aaa profile "dumars_guest"
initial-role "dumars_guest-guest-logon"
!
aaa profile "Dumars_Inc"
authentication-dot1x "Dumars_Inc"
dot1x-default-role "authenticated"
dot1x-server-group "Dumars_Inc"
download-role

rfc-3576-server "10.254.33.24"
!
aaa authentication captive-portal "dumars_guest"
no user-logon
redirect-url "https://www.dumarsinc.com"
!
lc-cluster group-profile "M1RA-CLUSTER"
controller 10.1.254.10 priority 128 mcast-vlan 0 vrrp-ip 0.0.0.0 vrrp-vlan 0 group 0
controller 10.1.254.11 priority 128 mcast-vlan 0 vrrp-ip 0.0.0.0 vrrp-vlan 0 group 0
!
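! The two HQ controllers form a single cluster; APs and clients are shared and
! load-balanced across the members. Hedged check (not configuration):
! 'show lc-cluster group-membership' on a member displays cluster state and
! peer connectivity.
!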
ap system-profile "dumars"
lms-ip 10.1.254.12
bkup-lms-ip 10.254.132.66
lms-preemption
ap-console-password ebfca5fad32c7d5fb11aad1d790bae2f4bccf23762ef8e96
!
ap system-profile "gdr"
lms-ip 10.254.132.66
bkup-lms-ip 10.1.254.12
lms-preemption
ap-console-password 1bd85a5b7f6d4f745e0e740a47eb2a03515fdec6428dfca8
!
ap system-profile "hq"
lms-ip 10.1.254.12
bkup-lms-ip 10.254.132.66
lms-preemption
ap-console-password ccd229f8aab8870ecc37f993a5894f902696c6e55592ee77
!
wlan ssid-profile "dumars_byod"
essid "DUMARS_BYOD"
opmode wpa2-aes
!

wlan ssid-profile "dumars_guest"
essid "DUMARS_GUEST"
!
wlan ssid-profile "Dumars_Inc"
essid "DUMARS_INC"
opmode wpa2-aes
!
wlan virtual-ap "dumars_byod"
aaa-profile "dumars_byod"
vlan 138
ssid-profile "dumars_byod"
!
wlan virtual-ap "dumars_guest"
aaa-profile "dumars_guest"
ssid-profile "dumars_guest"
!
wlan virtual-ap "Dumars_Inc"
aaa-profile "Dumars_Inc"
vlan 1281
ssid-profile "Dumars_Inc"
!
ap-group "default"
virtual-ap "dumars_guest"
virtual-ap "dumars_byod"
virtual-ap "Dumars_Inc"
!
ap-group "GDR-APs"
virtual-ap "dumars_guest"
virtual-ap "dumars_byod"
virtual-ap "Dumars_Inc"
ap-system-profile "gdr"
!
ap-group "HQ-APs"

virtual-ap "dumars_guest"
virtual-ap "dumars_byod"
virtual-ap "Dumars_Inc"
ap-system-profile "hq"
!

HQMC1A Controller

masterip 10.254.32.10 ipsec 02d078509ed7f53c3a2e41b1c1d01366c6805dcca43b5178 interface vlan 104
secondary masterip 10.254.132.10 ipsec 88c910b8ac67c09aab1d8f6cd94b50b0
controller-ip vlan 104
vlan 104
!
interface gigabitethernet 0/0/0
!
interface gigabitethernet 0/0/1
!
interface gigabitethernet 0/0/2
description GE0/0/2
switchport mode trunk
trusted
lacp group 0 mode active
trusted vlan 1-4094
!
interface gigabitethernet 0/0/3
!
interface gigabitethernet 0/0/4
!
interface gigabitethernet 0/0/5
!
interface port-channel 0
switchport mode trunk
switchport trunk allowed vlan 1-4094
switchport trunk native vlan 1
trusted
trusted vlan 1-4094
!
interface port-channel 1
trusted
trusted vlan 1-4094
!
interface port-channel 2

trusted
trusted vlan 1-4094
!
interface port-channel 3
trusted
trusted vlan 1-4094
!
interface port-channel 4
trusted
trusted vlan 1-4094
!
interface port-channel 5
trusted
trusted vlan 1-4094
!
interface port-channel 6
trusted
trusted vlan 1-4094
!
interface port-channel 7
trusted
trusted vlan 1-4094
!
interface vlan 104
ip address 10.1.254.10 255.255.255.0
!
interface tunnel 16657
tunnel source controller-ip
tunnel destination 10.254.7.18
tunnel mode gre 10
trusted
tunnel vlan 139
mtu 1400
tunnel keepalive
tunnel keepalive 10 3
!
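! This GRE tunnel (GRE protocol 10) extends guest VLAN 139 to the DMZ
! controller at 10.254.7.18, which terminates the matching tunnel back to this
! controller (see the DMZ site config later in this section). Keepalives are
! sent every 10 seconds with 3 retries before the tunnel is declared down.
!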
ip default-gateway 10.1.254.1
uplink wired vlan 104 uplink-id link1
!
mgmt-user admin root c8fd41fc01ba1dde325b58f211a3bd90a84c857a1595e3e8e1
firewall
cp-bandwidth-contract trusted-ucast 65535
cp-bandwidth-contract trusted-mcast 3906
cp-bandwidth-contract untrusted-ucast 9765
cp-bandwidth-contract untrusted-mcast 3906
cp-bandwidth-contract route 976
cp-bandwidth-contract sessmirr 976
cp-bandwidth-contract vrrp 512
cp-bandwidth-contract auth 976

cp-bandwidth-contract arp-traffic 3906
cp-bandwidth-contract l2-other 1953
!

hostname HQMC1A
clock timezone America/Los_Angeles
lc-cluster group-membership M1RA-CLUSTER
lc-cluster exclude-vlan 139,1
country US
vrrp 104
ip address 10.1.254.12
priority 120
vlan 104
no shutdown
!
HQMC1B Controller

masterip 10.254.32.10 ipsec aaf66091f48280e66d5d8ecb0c5cf52704243a49926e666d interface vlan 104
secondary masterip 10.254.132.10 ipsec e6ad801d70b4d51325ea22c641ad8253
controller-ip vlan 104
vlan 104
!
interface gigabitethernet 0/0/0
!
interface gigabitethernet 0/0/1
!
interface gigabitethernet 0/0/2
switchport mode trunk
trusted
lacp group 0 mode active
trusted vlan 1-4094
!
interface gigabitethernet 0/0/3
!
interface gigabitethernet 0/0/4
!
interface gigabitethernet 0/0/5
!
interface port-channel 0
switchport mode trunk
switchport trunk native vlan 1
trusted
trusted vlan 1-4094
!
interface port-channel 1
trusted

trusted vlan 1-4094
!
interface port-channel 2
trusted
trusted vlan 1-4094
!
interface port-channel 3
trusted
trusted vlan 1-4094
!
interface port-channel 4
trusted
trusted vlan 1-4094
!
interface port-channel 5
trusted
trusted vlan 1-4094
!
interface port-channel 6
trusted
trusted vlan 1-4094
!
interface port-channel 7
trusted
trusted vlan 1-4094
!
interface vlan 104
ip address 10.1.254.11 255.255.255.0
!
ip default-gateway 10.1.254.1
uplink wired vlan 104 uplink-id link1
!
mgmt-user admin root 39ad53a5012e232378835922a19925c288d632fefc9100a10c
firewall
cp-bandwidth-contract trusted-ucast 65535
cp-bandwidth-contract trusted-mcast 3906
cp-bandwidth-contract untrusted-ucast 9765
cp-bandwidth-contract untrusted-mcast 3906
cp-bandwidth-contract route 976
cp-bandwidth-contract sessmirr 976
cp-bandwidth-contract vrrp 512
cp-bandwidth-contract auth 976
cp-bandwidth-contract arp-traffic 3906
cp-bandwidth-contract l2-other 1953
!

hostname HQMC1B
clock timezone America/Los_Angeles
lc-cluster group-membership M1RA-CLUSTER

lc-cluster exclude-vlan 139,1
country US
vrrp 104
ip address 10.1.254.12
vlan 104
no shutdown
!

Gold River Site Config

ip access-list session apprf-dumars_guest-guest-logon-sacl


!
user-role dumars_guest-guest-logon
access-list session logon-control
access-list session captiveportal
access-list session v6-logon-control
access-list session captiveportal6
!
aaa rfc-3576-server "10.254.33.24"
key f9c16ddfbf7b3a61bfc8713af96108a8
!
aaa authentication dot1x "dumars_byod"
!
aaa authentication dot1x "Dumars_Inc"
!
aaa authentication-server radius "M1RA-CPPM"
host "10.254.33.24"
key acaa8943d0763ba1e280463a25c9ae75
cppm username "m1ra" password
a6c2ccfbb00d86e04751d2396e07087d7f3dd1341acf9c53
!
aaa server-group "default"
auth-server Internal position 1
!
aaa server-group "dumars_byod"

auth-server M1RA-CPPM position 1
!
aaa server-group "dumars_employee"
auth-server M1RA-CPPM position 1
!
aaa server-group "dumars_engineering"
auth-server M1RA-CPPM position 1
!
aaa server-group "Dumars_Inc"
auth-server M1RA-CPPM position 1
!
aaa server-group "dumarsinc-radius"
auth-server M1RA-CPPM position 1
!
aaa profile "default"
initial-role "authenticated"
dot1x-server-group "default"
download-role
!
aaa profile "dumars_byod"
authentication-dot1x "dumars_byod"
dot1x-default-role "authenticated"
dot1x-server-group "dumars_byod"
!
aaa profile "dumars_guest"
initial-role "dumars_guest-guest-logon"
!
aaa profile "Dumars_Inc"
authentication-dot1x "Dumars_Inc"
dot1x-default-role "authenticated"
dot1x-server-group "Dumars_Inc"
download-role
rfc-3576-server "10.254.33.24"

!
aaa authentication captive-portal "dumars_guest"
no user-logon
redirect-url "https://www.dumarsinc.com"
!
lc-cluster group-profile "M1RA-CLUSTER1"
controller 10.254.132.64 priority 128 mcast-vlan 0 vrrp-ip 0.0.0.0 vrrp-vlan 0 group 0
controller 10.254.132.65 priority 128 mcast-vlan 0 vrrp-ip 0.0.0.0 vrrp-vlan 0 group 0
!
ap system-profile "dumars"
lms-ip 10.1.254.12
bkup-lms-ip 10.254.132.66
lms-preemption
ap-console-password cfc843226ca022b0878e726e15d8ccd43054aa10594a07e7
!
ap system-profile "gdr"
lms-ip 10.254.132.66
bkup-lms-ip 10.1.254.12
lms-preemption
ap-console-password 69a98db4f44d3fe6ebf8345bccbd0359936c30269acd36d7
!
ap system-profile "hq"
lms-ip 10.1.254.12
bkup-lms-ip 10.254.132.66
lms-preemption
ap-console-password a3f7fff17c6b8c2640bcf8dcdd333c96780168bf7c7b1579
!
wlan ssid-profile "dumars_byod"
essid "DUMARS_BYOD"
opmode wpa2-aes
!
wlan ssid-profile "dumars_guest"

essid "DUMARS_GUEST"
!
wlan ssid-profile "Dumars_Inc"
essid "DUMARS_INC"
opmode wpa2-aes
!
wlan virtual-ap "dumars_byod"
aaa-profile "dumars_byod"
vlan 138
ssid-profile "dumars_byod"
!
wlan virtual-ap "dumars_guest"
aaa-profile "dumars_guest"
ssid-profile "dumars_guest"
!
wlan virtual-ap "Dumars_Inc"
aaa-profile "Dumars_Inc"
vlan 1281
ssid-profile "Dumars_Inc"
!
ap-group "default"
virtual-ap "dumars_guest"
virtual-ap "dumars_byod"
virtual-ap "Dumars_Inc"
!
ap-group "GDR-APs"
virtual-ap "dumars_guest"
virtual-ap "dumars_byod"
virtual-ap "Dumars_Inc"
ap-system-profile "gdr"
!
ap-group "HQ-APs"
virtual-ap "dumars_guest"

virtual-ap "dumars_byod"
virtual-ap "Dumars_Inc"
ap-system-profile "hq"
!

GDRMC1A Configuration
masterip 10.254.32.10 ipsec 0d57fed1a5cf41901a2e1f04ea4e12c4 interface vlan 1
controller-ip vlan 1
interface mgmt
shutdown
!
vlan 1
!
interface gigabitethernet 0/0/0
description GE0/0/0
switchport mode trunk
no spanning-tree
trusted
trusted vlan 1-4094
!
interface gigabitethernet 0/0/1
shutdown
no spanning-tree
!
interface gigabitethernet 0/0/2
shutdown
no spanning-tree
!
interface port-channel 0

trusted
trusted vlan 1-4094
!
interface port-channel 1
trusted
trusted vlan 1-4094
!
interface port-channel 2
trusted
trusted vlan 1-4094
!
interface port-channel 3
trusted
trusted vlan 1-4094
!
interface port-channel 4
trusted
trusted vlan 1-4094
!
interface port-channel 5
trusted
trusted vlan 1-4094
!
interface port-channel 6
trusted
trusted vlan 1-4094
!
interface port-channel 7

trusted
trusted vlan 1-4094
!
interface vlan 1
ip address 10.254.132.64 255.255.255.0
!
ip default-gateway 10.254.132.1
uplink wired vlan 1 uplink-id link1
!
mgmt-user admin root 7eeb4bee010f514cc0a5db45fe21e74219b8164d130401d3df
firewall
cp-bandwidth-contract trusted-ucast 65535
cp-bandwidth-contract trusted-mcast 1953
cp-bandwidth-contract untrusted-ucast 9765
cp-bandwidth-contract untrusted-mcast 1953
cp-bandwidth-contract route 976
cp-bandwidth-contract sessmirr 976
cp-bandwidth-contract vrrp 512
cp-bandwidth-contract auth 976
cp-bandwidth-contract arp-traffic 976
cp-bandwidth-contract l2-other 976
cp-bandwidth-contract ike 1953
!

hostname GDRMC1A
clock timezone America/Los_Angeles
lc-cluster group-membership M1RA-CLUSTER1
lc-cluster exclude-vlan 139

country US
vrrp 1
ip address 10.254.132.66
priority 120
vlan 1
no shutdown
!

GDRMC1B Configuration

masterip 10.254.32.10 ipsec afb8c47df0f633a931a5676b8d84dc51 interface vlan 1


controller-ip vlan 1
interface mgmt
shutdown
!
vlan 1
!
interface gigabitethernet 0/0/0
description GE0/0/0
switchport mode trunk
no spanning-tree
trusted
trusted vlan 1-4094
!
interface gigabitethernet 0/0/1
shutdown
no spanning-tree
!
interface gigabitethernet 0/0/2

shutdown
no spanning-tree
!
interface port-channel 0
trusted
trusted vlan 1-4094
!
interface port-channel 1
trusted
trusted vlan 1-4094
!
interface port-channel 2
trusted
trusted vlan 1-4094
!
interface port-channel 3
trusted
trusted vlan 1-4094
!
interface port-channel 4
trusted
trusted vlan 1-4094
!
interface port-channel 5
trusted
trusted vlan 1-4094
!
interface port-channel 6

trusted
trusted vlan 1-4094
!
interface port-channel 7
trusted
trusted vlan 1-4094
!
interface vlan 1
ip address 10.254.132.65 255.255.255.0
!
ip default-gateway 10.254.132.1
uplink wired vlan 1 uplink-id link1
!
mgmt-user admin root c04747a10173ab6521bd9ca0cd235f2feedfae0593e5138c22
firewall
cp-bandwidth-contract trusted-ucast 65535
cp-bandwidth-contract trusted-mcast 1953
cp-bandwidth-contract untrusted-ucast 9765
cp-bandwidth-contract untrusted-mcast 1953
cp-bandwidth-contract route 976
cp-bandwidth-contract sessmirr 976
cp-bandwidth-contract vrrp 512
cp-bandwidth-contract auth 976
cp-bandwidth-contract arp-traffic 976
cp-bandwidth-contract l2-other 976
cp-bandwidth-contract ike 1953
!

hostname GDRMC1B
clock timezone America/Los_Angeles
lc-cluster group-membership M1RA-CLUSTER1
lc-cluster exclude-vlan 139
country US
vrrp 1
ip address 10.254.132.66
vlan 1
no shutdown
!

DMZ Site Config


vlan 139
!
vlan-name GUEST
vlan GUEST 139

DMZMC1A Config
masterip 10.254.32.10 ipsec a57b8c03ce071ed1769674e9fd016898 interface vlan 777
user-role logon
access-list session global-sacl
access-list session apprf-logon-sacl
access-list session ra-guard
access-list session logon-control
access-list session captiveportal
access-list session vpnlogon
access-list session v6-logon-control
access-list session captiveportal6

captive-portal DUMARS_GUEST
!
controller-ip vlan 777
vlan 1
!
vlan 777
!
vlan-name OUTSIDE_FW
vlan OUTSIDE_FW 1
interface gigabitethernet 0/0/0
!
interface gigabitethernet 0/0/1
description GE0/0/1
!
interface gigabitethernet 0/0/2
description GE0/0/2
!
interface gigabitethernet 0/0/3
description GE0/0/3
!
interface gigabitethernet 0/0/4
description GE0/0/4
switchport mode trunk
trusted
trusted vlan 1-4094
!
interface gigabitethernet 0/0/5
description GE0/0/5

!
interface port-channel 0
trusted
trusted vlan 1-4094
!
interface port-channel 1
trusted
trusted vlan 1-4094
!
interface port-channel 2
trusted
trusted vlan 1-4094
!
interface port-channel 3
trusted
trusted vlan 1-4094
!
interface port-channel 4
trusted
trusted vlan 1-4094
!
interface port-channel 5
trusted
trusted vlan 1-4094
!
interface port-channel 6
trusted
trusted vlan 1-4094

!
interface port-channel 7
trusted
trusted vlan 1-4094
!
interface vlan 1
ip address 10.6.8.242 255.255.255.0
!
interface vlan 139
ip address 172.31.0.1 255.255.0.0
mtu 1400
no suppress-arp
!
interface vlan 777
ip address 10.254.7.18 255.255.255.0
!
interface tunnel 1
tunnel source controller-ip
tunnel destination 10.1.254.11
tunnel mode gre 11
tunnel vlan 139
no inter-tunnel-flooding
mtu 1400
tunnel keepalive
tunnel keepalive 10 3
!
interface tunnel 2
tunnel source controller-ip

tunnel destination 10.1.254.10
tunnel mode gre 10
tunnel vlan 139
no inter-tunnel-flooding
mtu 1400
tunnel keepalive
tunnel keepalive 10 3
!
ip route 10.224.0.0 255.255.0.0 10.254.7.1
ip route 10.254.0.0 255.255.0.0 10.254.7.1
ip route 10.1.0.0 255.255.0.0 10.254.7.1
ip default-gateway 10.6.8.1 1
ip default-gateway 10.254.7.1
uplink wired vlan 777 uplink-id link1
!
service dhcp
ip dhcp pool vlan_139
dns-server 8.8.8.8
default-router 172.31.0.1
network 172.31.0.0 255.255.252.0
!
mgmt-user admin root fb3fcc40018dc9111454cb68135d5ce15b0ba2dc282773f1ed
firewall
cp-bandwidth-contract trusted-ucast 65535
cp-bandwidth-contract trusted-mcast 3906
cp-bandwidth-contract untrusted-ucast 9765
cp-bandwidth-contract untrusted-mcast 3906
cp-bandwidth-contract route 976

cp-bandwidth-contract sessmirr 976
cp-bandwidth-contract vrrp 512
cp-bandwidth-contract auth 976
cp-bandwidth-contract arp-traffic 3906
cp-bandwidth-contract l2-other 1953
!
aaa profile "DUMARS_GUEST_OPEN"
!
aaa authentication captive-portal "DUMARS_GUEST"
no user-logon
guest-logon
!
aaa authentication captive-portal "logon_cppm_sg"
no user-logon
guest-logon
!
aaa authentication wired
profile "DUMARS_GUEST_OPEN"
!

hostname DMZ-7205
clock timezone America/Los_Angeles
country US
sync-files /flash/upload/custom/logon_cppm_sg command-index 2

Appendix B - PLATFORM SCALING
CAMPUS SWITCHING

Layer Two & Interface Scaling

Switch Series                Link Aggregation  Trunk Groups /      Max # Interfaces  Layer 2  VLAN IP
                             Groups (LAGs)     Multi-chassis LAGs  per LAG/MCLAG     VLANs    Interfaces (SVIs)
Aruba 2930 Series (16.06)    60                60                  8                 2048     2048
Aruba 3810 Series (16.06)    144               144                 8                 4094     4094
Aruba 5400R Series (16.06)   144               144                 8                 4094     4094
Aruba 8320 Series (CX 10.1)  32                48                  8                 512      256
Aruba 8400 Series (CX 10.1)  128               128                 8                 256      512
Figure 76 - Switch Interface Scaling

Switch Series                                       MAC Entries  IPv4 ARP Entries  IPv6 ND Entries  Dual Stack Clients
                                                                                                    (1 IPv4 ARP + 2 IPv6 ND)
Aruba 3810 Series (16.06)                           64,000       25,000            25,000           8,333
Aruba 5400R Series (16.06)                          64,000       25,000            25,000           8,333
Aruba 8320 Series (CX 10.1.020, Mobile-First mode)  47,000       47,000            44,000           22,000
Aruba 8400 Series (CX 10.1.020)                     64,000       64,000            48,000           32,000
Figure 77 – ARP and ND Scaling

Layer Three Scaling

Switch Series                                 IPv4 Unicast  IPv4 Multicast  IPv6 Unicast  IPv6 Multicast
                                              Routes        Routes          Routes        Routes
Aruba 3810 Series (16.06)                     10,000        2,048           5,000         12,500
Aruba 5400R Series (16.06)                    10,000        2,048           5,000         12,500
Aruba 8320 Series (CX 10.1.020, routed mode)  72,000        3,200           20,000        *
Aruba 8400 Series (CX 10.1.020)               100,000       4,000           20,000        *
Figure 78 - Layer Three Scaling

Switch Series                OSPFv2 Interfaces  OSPFv2 Neighbors  OSPFv2 Areas  OSPFv2 Routes
Aruba 3810 Series (16.06)    16                 128               16            10,000
Aruba 5400R Series (16.06)   16                 128               16            10,000
Aruba 8320 Series (CX 10.1)  32                 32                32
Aruba 8400 Series (CX 10.1)  32                 32                32
Figure 79 – OSPFv2 Scaling

Switch Series                BGP Neighbors  BGP Prefixes
Aruba 3810 Series (16.06)    32             10,000
Aruba 5400R Series (16.06)   10,000         10,000
Aruba 8320 Series (CX 10.1)  10
Aruba 8400 Series (CX 10.1)  10
Figure 80 - BGP Scaling

WIRELESS
MOBILITY MASTER
HARDWARE

Model      Number of Devices  Number of Clients  Number of Controllers
MM-HW-1K   1,000              10,000             100
MM-HW-5K   5,000              50,000             500
MM-HW-10K  10,000             100,000            1,000
Figure 81 - Hardware Mobility Master Scaling

VIRTUAL

Model      Number of Devices  Number of Clients  Number of Controllers
MM-VA-50   50                 500                5
MM-VA-500  500                5,000              50
MM-VA-1K   1,000              10,000             100
MM-VA-5K   5,000              50,000             500
MM-VA-10K  10,000             100,000            1,000
Figure 82 - Virtual Mobility Master Scaling

MOBILITY CONTROLLER SCALING


7000 SERIES

Cluster Members  Max. APs /  Max. Clients /  AAC /       AAC-S /     UAC /       UAC-S /
                 Cluster     Cluster         Controller  Controller  Controller  Controller
2 32 2,048 16 16 1,024 1,024
3 48 3,072 16 16 1,024 1,024
4 64 4,096 16 16 1,024 1,024
Figure 83 - 7024 Cluster Scaling

Cluster Members  Max. APs /  Max. Clients /  AAC /       AAC-S /     UAC /       UAC-S /
                 Cluster     Cluster         Controller  Controller  Controller  Controller
2 64 4,096 32 32 2,048 2,048
3 96 6,144 32 32 2,048 2,048
4 128 8,192 32 32 2,048 2,048
Figure 84 - 7030 Cluster Scaling

7200 SERIES
7205 Cluster Scaling

Cluster Members  Max. APs /  Max. Clients /  AAC /       AAC-S /     UAC /       UAC-S /
                 Cluster     Cluster         Controller  Controller  Controller  Controller
2 256 8,192 128 128 4,096 4,096
3 384 12,288 128 128 4,096 4,096
4 512 16,384 128 128 4,096 4,096
5 640 20,480 128 128 4,096 4,096
6 768 24,576 128 128 4,096 4,096
7 896 28,672 128 128 4,096 4,096
8 1,024 32,768 128 128 4,096 4,096
9 1,152 36,864 128 128 4,096 4,096
10 1,280 40,960 128 128 4,096 4,096
11 1,408 45,056 128 128 4,096 4,096
12 1,536 49,152 128 128 4,096 4,096
Figure 85 - 7205 Cluster Scaling

7210 Cluster Scaling

Cluster Members  Max. APs /  Max. Clients /  AAC /       AAC-S /     UAC /       UAC-S /
                 Cluster     Cluster         Controller  Controller  Controller  Controller
2 512 16,384 256 256 8,192 8,192
3 768 24,576 256 256 8,192 8,192
4 1,024 32,768 256 256 8,192 8,192
5 1,280 40,960 256 256 8,192 8,192
6 1,536 49,152 256 256 8,192 8,192
7 1,792 57,344 256 256 8,192 8,192
8 2,048 65,536 256 256 8,192 8,192
9 2,304 73,728 256 256 8,192 8,192
10 2,560 81,920 256 256 8,192 8,192
11 2,816 90,112 256 256 8,192 8,192
12 3,072 98,304 256 256 8,192 8,192
Figure 86 - 7210 Cluster Scaling

7220 Cluster Scaling

Cluster Members  Max. APs /  Max. Clients /  AAC /       AAC-S /     UAC /       UAC-S /
                 Cluster     Cluster         Controller  Controller  Controller  Controller
2 1,024 24,576 512 512 12,288 12,288
3 1,536 36,864 512 512 12,288 12,288
4 2,048 49,152 512 512 12,288 12,288
5 2,560 61,440 512 512 12,288 12,288
6 3,072 73,728 512 512 12,288 12,288
7 3,584 86,016 512 512 12,288 12,288
8 4,096 98,304 512 512 12,288 12,288
9 4,608 110,592 ¹ 512 512 12,288 12,288
10 5,120 122,880 ¹ 512 512 12,288 12,288
11 5,632 135,168 ¹ 512 512 12,288 12,288
12 6,144 147,456 ¹ 512 512 12,288 12,288

¹ The number of potential clients supported in the cluster exceeds the 100,000 limit of the Mobility Master.

Figure 87 - 7220 Cluster Scaling

7240/7240XM/7280 Cluster Scaling

Cluster Members  Max. APs /  Max. Clients /  AAC /       AAC-S /     UAC /       UAC-S /
                 Cluster     Cluster         Controller  Controller  Controller  Controller
2 2,048 32,768 1,024 1,024 16,384 16,384
3 3,072 49,152 1,024 1,024 16,384 16,384
4 4,096 65,536 1,024 1,024 16,384 16,384
5 5,120 81,920 1,024 1,024 16,384 16,384
6 6,144 98,304 1,024 1,024 16,384 16,384
7 7,168 114,688 ² 1,024 1,024 16,384 16,384
8 8,192 131,072 ² 1,024 1,024 16,384 16,384
9 9,216 147,456 ² 1,024 1,024 16,384 16,384
10 10,240 ¹ 163,840 ² 1,024 1,024 16,384 16,384
11 11,264 ¹ 180,224 ² 1,024 1,024 16,384 16,384
12 12,288 ¹ 196,608 ² 1,024 1,024 16,384 16,384

¹ The number of potential Access Points supported in the cluster exceeds the 10,000 limit of the Mobility Master.
² The number of potential clients supported in the cluster exceeds the 100,000 limit of the Mobility Master.
Figure 88 - 7240/7240XM/7280 Cluster Scaling

VIRTUAL
MC-VA-50 Cluster Scaling

Cluster Members  Max. APs /  Max. Clients /  AAC /       AAC-S /     UAC /       UAC-S /
                 Cluster     Cluster         Controller  Controller  Controller  Controller
2 50 800 25 25 400 400
3 75 1,200 25 25 400 400
4 100 1,600 25 25 400 400
5 125 2,000 25 25 400 400
6 150 2,400 25 25 400 400
7 175 2,800 25 25 400 400
8 200 3,200 25 25 400 400
9 225 3,600 25 25 400 400
10 250 4,000 25 25 400 400
11 275 4,400 25 25 400 400
12 300 4,800 25 25 400 400
Figure 89 - MC-VA-50 Cluster Scaling

MC-VA-250 Cluster

Cluster Members  Max. APs /  Max. Clients /  AAC /       AAC-S /     UAC /       UAC-S /
                 Cluster     Cluster         Controller  Controller  Controller  Controller
2 250 4,000 125 125 2,000 2,000
3 375 6,000 125 125 2,000 2,000
4 500 8,000 125 125 2,000 2,000
5 625 10,000 125 125 2,000 2,000
6 750 12,000 125 125 2,000 2,000
7 875 14,000 125 125 2,000 2,000
8 1,000 16,000 125 125 2,000 2,000
9 1,125 18,000 125 125 2,000 2,000
10 1,250 20,000 125 125 2,000 2,000
11 1,375 22,000 125 125 2,000 2,000
12 1,500 24,000 125 125 2,000 2,000
Figure 90 - MC-VA-250 Cluster Scaling

MC-VA-1K Cluster

Cluster Members  Max. APs /  Max. Clients /  AAC /       AAC-S /     UAC /       UAC-S /
                 Cluster     Cluster         Controller  Controller  Controller  Controller
2 1,000 16,000 500 500 8,000 8,000
3 1,500 24,000 500 500 8,000 8,000
4 2,000 32,000 500 500 8,000 8,000
5 2,500 40,000 500 500 8,000 8,000
6 3,000 48,000 500 500 8,000 8,000
7 3,500 56,000 500 500 8,000 8,000
8 4,000 64,000 500 500 8,000 8,000
9 4,500 72,000 500 500 8,000 8,000
10 5,000 80,000 500 500 8,000 8,000
11 5,500 88,000 500 500 8,000 8,000
12 6,000 96,000 500 500 8,000 8,000
Figure 91 - MC-VA-1K Cluster

For more information
http://www.arubanetworks.com/

3333 Scott Blvd | Santa Clara, CA 95054


1.866.55.ARUBA | T: 1.408.227.4500 | FAX: 1.408.227.4550 | info@arubanetworks.com

www.arubanetworks.com