
FlexPod Datacenter with Cisco Secure Enclaves

Last Updated: May 15, 2014

Building Architectures to Solve Business Problems

About the Authors

Chris O'Brien, Technical Marketing Manager, Server Access Virtualization Business Unit, Cisco
Systems
Chris O'Brien is currently focused on developing infrastructure best practices and solutions that are
designed, tested, and documented to facilitate and improve customer deployments. Previously, O'Brien
was an application developer and has worked in the IT industry for more than 15 years.
John George, Reference Architect, Infrastructure and Cloud Engineering, NetApp
John George is a Reference Architect in the NetApp Infrastructure and Cloud Engineering team and is
focused on developing, validating, and supporting cloud infrastructure solutions that include NetApp
products. Before his current role, he supported and administered Nortel's worldwide training network
and VPN infrastructure. John holds a Master's degree in computer engineering from Clemson University.
Lindsey Street, Solutions Architect, Infrastructure and Cloud Engineering, NetApp
Lindsey Street is a Solutions Architect in the NetApp Infrastructure and Cloud Engineering team. She
focuses on the architecture, implementation, compatibility, and security of innovative vendor
technologies to develop competitive and high-performance end-to-end cloud solutions for customers.
Lindsey started her career in 2006 at Nortel as an interoperability test engineer, testing customer
equipment interoperability for certification. Lindsey has her Bachelor of Science degree in Computer
Networking and her Master of Science in Information Security from East Carolina University.

About Cisco Validated Design (CVD) Program


The CVD program consists of systems and solutions designed, tested, and documented to facilitate
faster, more reliable, and more predictable customer deployments. For more information visit
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING
FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS
SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES,
INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF
THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED
OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR
THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR
OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT
THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY
DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco
WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We
Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS,
Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the
Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital,
the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone,
iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace
Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels,
ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to
Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of
Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners.
The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2014 Cisco Systems, Inc. All rights reserved.

FlexPod Datacenter with Cisco Secure Enclaves

Overview
The increased scrutiny on security is being driven by the evolving trends of mobility, cloud computing,
and advanced targeted attacks. Beyond the attacks themselves, a major consideration is the change in
what defines a network, which now extends past traditional walls to include data centers, endpoints, and
virtual and mobile platforms that make up the extended network.
Today most converged infrastructures are designed to meet performance and function requirements with
little or no attention to security. Furthermore, the movement toward optimal use of IT resources through
virtualization has resulted in an environment in which the true and implied security accorded by physical
separation has essentially vanished. System consolidation efforts have also accelerated the movement
toward co-hosting on converged platforms, and the likelihood of compromise is increased in a highly
shared environment. This situation presents a need for enhanced security and an opportunity to create a
framework and platform that instills trust.
The FlexPod Data Center with Cisco Secure Enclaves solution is a threat-centric approach to security,
allowing customers to address the full attack continuum (before, during, and after an attack) on a standard
platform with a consistent approach. The solution is based on the FlexPod Data Center integrated system
and augmented with services to address business, compliance, and application requirements. FlexPod
Data Center with Cisco Secure Enclaves is a standard approach to delivering a flexible, functional, and
secure application environment that can be readily automated.

Solution Components
FlexPod Datacenter with Cisco Secure Enclaves uses the FlexPod Data Center configuration as its
foundation. The FlexPod Data Center is an integrated infrastructure solution from Cisco and NetApp
with validated designs that expedite IT infrastructure and application deployment, while simultaneously
reducing cost, complexity, and project risk. FlexPod Data Center consists of Cisco Nexus networking,
the Cisco Unified Computing System (Cisco UCS), and NetApp FAS series storage systems. One especially
significant benefit of the FlexPod architecture is the ability to customize or "flex" the environment to
suit a customer's requirements; this includes the hardware previously mentioned as well as the operating
systems and hypervisors it supports.

The Cisco Secure Enclaves design extends the FlexPod infrastructure by using the abilities inherent to the
integrated system and augmenting this functionality with services to address the specific business and
application requirements of the enterprise. These functional requirements promote uniqueness and
innovation in the FlexPod, augmenting the original FlexPod design to support these prerequisites. The
result is a region, or enclave, and more likely multiple enclaves, in the FlexPod built to address the
unique workload activities and business objectives of an organization.
FlexPod Data Center with Cisco Secure Enclaves is developed using the following technologies:

• FlexPod Data Center from Cisco and NetApp
• VMware vSphere
• Cisco Adaptive Security Appliance (ASA)
• Cisco NetFlow Generation Appliance (NGA)
• Cisco Virtual Security Gateway (VSG)
• Cisco Identity Services Engine (ISE)
• Cisco Network Analysis Module
• Cisco UCS Director
• Lancope StealthWatch System

Note
The FlexPod solution is hypervisor agnostic. Please go to the Reference section of this document for
URLs providing more details about the individual components of the solution.

Audience
This document describes the architecture and deployment procedures of a secure FlexPod Data Center
infrastructure enabled with Cisco and NetApp technologies. The intended audience for this document
includes but is not limited to sales engineers, field consultants, professional services, IT managers,
partner engineering, and customers interested in making security an integral part of their FlexPod
infrastructure.


FlexPod Data Center with Cisco Secure Enclaves Overview
The FlexPod Data Center with Cisco Secure Enclaves is a standardized approach to the integration of
security services with a FlexPod Data Center based infrastructure. The design enables features inherent to
the FlexPod platform and calls for its extension through dedicated physical or virtual appliance
implementations. The main design objective is to help ensure that applications in this environment meet
their subscribed service-level agreements (SLAs), including confidentiality requirements, by using the
validated FlexPod infrastructure and the security additions it can readily support. The secure enclave
framework allows an organization to adapt the FlexPod shared infrastructure to meet the disparate needs
of users and applications based on their specific requirements.


Components of FlexPod Data Center with Cisco Secure Enclaves


FlexPod Data Center
FlexPod Data Center is a unified platform, composed of Cisco UCS servers, Cisco Nexus network
switches, and NetApp storage arrays. Figure 1 shows the FlexPod base configuration and design
elements. The FlexPod modules can be configured to match the application requirements by mixing and
matching the component versions to achieve the optimum capacity, price and performance targets. The
solution can be scaled by augmenting the elements of a single FlexPod instance and by adding multiple
FlexPod instances to build numerous solutions for a virtualized and non-virtualized data center.
Figure 1   FlexPod Datacenter Solution

Cisco Secure Enclaves


The Cisco Secure Enclaves design uses the common components of Cisco Integrated Systems along with
additional services integration to address business and application requirements. These functional
requirements promote uniqueness and innovation in the integrated computing stack that augment the
original design to support these prerequisites. These unique areas on the shared infrastructure are
referenced as enclaves. The Cisco Integrated System readily supports one or multiple enclaves.
The common foundation of the Cisco Secure Enclaves design is Cisco Integrated Systems components.
Cisco Integrated Systems consists of the Cisco Unified Computing System (Cisco UCS) and Cisco
Nexus platforms. Figure 2 illustrates the extension of Cisco Integrated Systems to include features and
functions beyond the foundational elements. Access controls, visibility, and threat defense are all
elements that can be uniformly introduced into the system as required. The main feature of the enclave
framework is the extensibility of the architecture to integrate current and future technologies within and
upon its underpinnings, expanding the value of the infrastructure stack to address current and future
application requirements.


Figure 2   Cisco Secure Enclaves Architecture Structure

For more information on the Cisco Secure Enclave Architecture, go to
http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-manager/whitepaperc07-731204.html
Software Revisions
Table 1 details the software revisions of the various components used in the solution validation. Cells
left blank could not be recovered from the source.

Table 1   Software Revisions

Component                                        Software              Risk               Count
Network
  Cisco Nexus 5548UP                             NX-OS 6.0(2)N1(2a)    Low (positioned)   2
  Cisco Nexus 7000                               NX-OS 6.1(2)          Low (positioned)   2
  Cisco Nexus 1110-X                             4.2(1)SP1(6.2)        Low (positioned)   2
  Cisco Nexus 1000v                              4.2(1)SV2(2.1a)       Low (positioned)   1
Compute
  Cisco UCS Fabric Interconnect 6248             2.1(3a)               Low (positioned)   2
  Cisco UCS Fabric Extender 2232                 2.1(3a)               Low (positioned)   2
  Cisco UCS C220-M3                              2.1(3a)               Low (positioned)   2
  Cisco UCS B200 M3                              2.1(3a)               Low (positioned)   4
  VMware ESXi                                    5.1u1                 Low                X
  Cisco eNIC Driver                              2.1.2.38              Low
  Cisco fNIC Driver                              1.5.0.45              Low
Services and Management
  VMware vCenter                                 5.1u1                 Low                1
  Cisco Virtual Security Gateway (VSG)           4.2(1)VSG1(1)         Low (positioned)   X
  Cisco UCS Manager (UCSM)                       2.1(3)                Low (positioned)
  Cisco Network Analysis Module (NAM) VSB        5.1(2)                Low (positioned)
  Cisco NetFlow Generation Appliance (NGA)       1.0(2)                Low (positioned)
  Cisco Identity Services Engine (ISE)           1.2                   Low (positioned)
  Lancope StealthWatch System                    6.3                   Low (positioned)
  Cisco Intrusion Prevention System Security
  Services Processor (IPS SSP)                   7.2(1)E4              Low (positioned)
  Cisco Adaptive Security Appliance (ASA) 5585   9.1(2)                Low (positioned)
  Lancope StealthWatch FlowCollector             6.3                   Low (positioned)
  Citrix Netscaler 1000v                         10.1                  Low (positioned)
  Cisco UCS Director                             4.1                   Low (positioned)
  Lancope StealthWatch Management Console        6.3                   Low (positioned)
  Cisco Security Manager (CSM)                   4.4                   Low (positioned)
  Cisco Prime Network Services Controller        3.0(2e)               Low (positioned)
  NetApp OnCommand System Manager                3.0                   Low (positioned)
Storage
  NetApp OnCommand Unified Manager               6.0                   Low (positioned)
  NetApp Virtual Storage Console (VSC)           4.2.1                 Low (positioned)
  NetApp NFS Plug-in for VMware vStorage APIs
  for Array Integration (VAAI)                   1.0.21                Low
  NetApp OnCommand Balance                       4.1.1.2R1             Low (positioned)
  NetApp FAS3250                                 Data ONTAP 8.2P5      Low

Architecture and Design
FlexPod Topology
Figure 3 depicts the two FlexPod models validated in this configuration. These are the foundation
platforms to be augmented with additional services to instantiate an enclave.
Figure 3   FlexPod Data Center with Cisco Nexus 7000 (Left) and FlexPod Data Center with Cisco Nexus 5000 (Right)


Note

For more information on the FlexPod Data Center configurations used in the design, go to:
FlexPod Data Center with VMware vSphere 5.1 and Nexus 7000 using FCoE Design Guide
FlexPod Data Center with VMware vSphere 5.1 Update 1 Design Guide
FlexPod Design Zone
The following common features between the FlexPod models are key for the instantiation of the secure
enclaves on the FlexPod:

• NetApp FAS controllers with clustered Data ONTAP providing Storage Virtual Machine (SVM)
  and Quality of Service (QoS) capabilities
• Cisco Nexus switching providing a unified fabric, Cisco TrustSec, private VLANs, NetFlow,
  Switch Port Analyzer (SPAN), VXLAN, and QoS capabilities
• Cisco Unified Computing System (UCS) with centralized management through Cisco UCS
  Manager, SPAN, QoS, private VLANs, and hardware virtualization

Adaptive Security Appliance (ASA) Extension


The Cisco ASA provides advanced stateful firewall and VPN concentrator functionality in one device,
and for some models, integrated services modules such as IPS. The ASA includes many advanced
features, such as multiple security contexts (similar to virtualized firewalls), clustering (combining
multiple firewalls into a single firewall), transparent (Layer 2) firewall or routed (Layer 3) firewall
operation, advanced inspection engines, VPN support, Cisco TrustSec and many more features. The
ASA has two physical deployment models each has been validated to support secure enclaves.
The enclave design uses the Security Group Firewall (SGFW) functionality of the ASA to enforce policy
to and between servers in the data center. The SGFW objects are centrally defined in the Cisco Identity
Services Engine (ISE) and used by the security operations team to create access policies. The Cisco ASA
simply has the option to use the source and destination security groups to make decisions.

ASA High Availability Pair


Figure 4 shows a traditional Cisco ASA high-availability pair deployment model in which the Cisco
Nexus switches of the FlexPod provide a connection point for the appliances. The ASA uses the Virtual
Port Channel (vPC) capabilities of the Cisco Nexus switch for link and device fault tolerance. The two
units in an HA pair communicate over a failover link to determine the operating status of each unit. The
following information is communicated over the failover link:

The unit state (active or standby)

Hello messages (keep-alives)

Network link status

MAC address exchange

Configuration replication and synchronization

The stateful link supports the sharing of session state information between the devices.
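
A minimal Active/Standby sketch of this failover configuration on the primary unit follows; the interface
assignments, addresses, and key are illustrative assumptions rather than values from this validation:

failover lan unit primary
failover lan interface FOLINK TenGigabitEthernet0/6
! stateful link carrying session state between the units
failover link STATELINK TenGigabitEthernet0/7
failover interface ip FOLINK 192.168.255.1 255.255.255.252 standby 192.168.255.2
failover interface ip STATELINK 192.168.255.5 255.255.255.252 standby 192.168.255.6
failover key FAILOVER-SECRET
failover

The secondary unit carries the same configuration with "failover lan unit secondary"; once both units are
enabled, the configuration replicates automatically from the active unit over the failover link.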


Figure 4   Physical Security Extension to the FlexPod - ASA HA Pair

ASA Clustering
ASA Clustering lets you group multiple ASAs together as a single logical device. A cluster provides all
the convenience of a single device (management, integration into a network) while achieving the
increased throughput and redundancy of multiple devices. Currently, the ASA cluster supports a
maximum of eight nodes. Figure 5 describes the physical connection of the ASA cluster to the Cisco
Nexus switches of the FlexPod.
Figure 5   Physical Extension to the FlexPod - ASA Clustering


The ASA cluster uses a single vPC to support data traffic and a dedicated vPC per cluster node for
control and data traffic redirection within the cluster. Control traffic includes:

Master election

Configuration replication

Health monitoring

Data traffic includes:

State replication

Connection ownership queries and data packet forwarding

The data vPC spans all the nodes of the cluster, known as a spanned EtherChannel, which is the recommended
mode of operation. The Cisco Nexus switches use a consistent port channel load-balancing algorithm to
balance traffic distribution in and out of the cluster, limiting and optimizing use of the cluster control
links.
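
The following sketch outlines the corresponding cluster bootstrap on the first member; the interface
numbers, cluster name, addressing, and key are assumptions for illustration only:

! spanned EtherChannel mode must be set (once, per unit) before clustering
cluster interface-mode spanned force
! dedicated cluster control link toward the Cisco Nexus vPC
interface TenGigabitEthernet0/6
  channel-group 2 mode on
  no shutdown
cluster group ENCLAVE-CLUSTER
  local-unit asa-node-1
  cluster-interface Port-channel2 ip 192.168.254.1 255.255.255.0
  priority 1
  key CLUSTER-SECRET
  enable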
Note

The ASA clustering implementation from this validation is captured in a separate CVD titled Cisco
Secure Data Center for Enterprise Design Guide.

NetFlow Generation Appliance (NGA) Extension


The Cisco NetFlow Generation Appliance (NGA) introduces a highly scalable, cost-effective
architecture for cross-device flow generation. The Cisco NGA generates, unifies, and exports flow data,
empowering network operations, engineering, and security teams to boost network operations
excellence, enhance services delivery, implement accurate billing, and harden network security. The NGA
is a promiscuous device and can accept mirrored traffic from any source to create NetFlow records for
export. The export target in this design is the cyber threat detection system, the Lancope StealthWatch
platform.
The use of threat defense systems allows an organization to address compliance and other mandates,
network and data security concerns, as well as monitoring and visibility of the data center. Cyber threat
defense addresses several use cases including:

Detecting advanced security threats that have breached the perimeter security boundaries

Uncovering Network & Security Reconnaissance

Malware and BotNet activity

Data Loss Prevention

Figure 6 shows the deployment of Cisco NGA on the stack to provide these services, accepting mirrored
traffic from various sources of the converged infrastructure. As illustrated, the NGAs are dual-homed to
the Cisco Nexus switches that use a static "always on" port channel configuration to mirror traffic from
the various monitoring sessions defined on each switch. In addition, the NGAs capture interesting traffic
from the Cisco UCS domain. It should be noted that the SPAN traffic originating from each fabric
interconnect is rate-limited to 1 Gbps.
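
On each Cisco Nexus switch, a monitoring session of the following general form feeds the NGA; the
port-channel and interface numbers are placeholders that would be adapted to the actual cabling:

interface Ethernet1/31
  switchport monitor
monitor session 10
  source interface port-channel 10 both
  source interface port-channel 11 both
  destination interface Ethernet1/31
  no shut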


Figure 6   Physical Extension of the FlexPod - NetFlow Generation Appliance Integration

The Enclave
The enclave is a distinct logical entity that encompasses essential constructs including security along
with application or customer-specific resources to deliver a trusted platform that meets SLAs. The
modular construction and potential to automate delivery help make the enclave a scalable and securely
separated layer of abstraction. The use of multiple enclaves delivers increased isolation, addressing
disparate requirements of the FlexPod integrated infrastructure stack.
Figure 7 provides a conceptual view of an enclave in relation to an n-tier application.
The enclave provides the following functions:

Access control point for the secure region (public)

Access control within and between application tiers (private)

Cisco Cyber Security and Threat Defense operations to expose and identify malicious traffic

Cisco TrustSec security using secure group access control to identify server roles and enforce
security policy

Out-of-band management for centralized administration of the enclave and its resources

Optional load-balancing capabilities


Figure 7   Cisco Secure Enclave Model

Storage Design
Clustered Data ONTAP is an ideal storage operating system to support the Secure Enclaves Architecture
(SEA). Clustered Data ONTAP is architected in such a way that all data access is done through secure
virtual storage partitions. It is possible to have a single partition that represents the resources of the
entire cluster or multiple partitions that are assigned specific subsets of cluster resources for enclaves.
These secure virtual storage partitions are known as Storage Virtual Machines, or SVMs. In the current
implementation of SEA, the SVM serves as the storage basis for each enclave.

Storage Virtual Machines (SVMs)


Introduction to SVMs

The secure logical storage partition through which data is accessed in clustered Data ONTAP is known
as a Storage Virtual Machine (SVM). A cluster serves data through at least one and possibly multiple
SVMs. An SVM is a logical abstraction that represents a set of physical resources of the cluster. Data
volumes and logical network interfaces (LIFs) are created and assigned to an SVM and may reside on
any node in the cluster to which the SVM has been given access. An SVM may own resources on
multiple nodes concurrently, and those resources can be moved nondisruptively from one node to
another. For example, a flexible volume may be nondisruptively moved to a new node and aggregate, or
a data LIF could be transparently reassigned to a different physical network port. In this manner, the
SVM abstracts the cluster hardware and is not tied to specific physical hardware.
An SVM is capable of supporting multiple data protocols concurrently. Volumes within the SVM can be
junctioned together to form a single NAS namespace, which makes all of an SVM's data available
through a single share or mount point to NFS and CIFS clients. SVMs also support block-based
protocols, and LUNs can be created and exported using iSCSI, Fibre Channel, or Fibre Channel over
Ethernet. Any or all of these data protocols may be configured for use within a given SVM.
Because it is a secure entity, an SVM is only aware of the resources that have been assigned to it and has
no knowledge of other SVMs and their respective resources. Each SVM operates as a separate and
distinct entity with its own security domain. Tenants may manage the resources allocated to them
through a delegated SVM administration account. Each SVM may connect to unique authentication
zones such as Active Directory, LDAP, or NIS.


An SVM is effectively isolated from other SVMs that share the same physical hardware.
Clustered Data ONTAP is highly scalable, and additional storage controllers and disks can be easily
added to existing clusters in order to scale capacity and performance to meet rising demands. As new
nodes or aggregates are added to the cluster, the SVM can be nondisruptively configured to use them. In
this way, new disk, cache, and network resources can be made available to the SVM to create new data
volumes or migrate existing workloads to these new resources in order to balance performance.
This scalability also enables the SVM to be highly resilient. SVMs are no longer tied to the lifecycle of
a given storage controller. As new hardware is introduced to replace hardware that is to be retired, SVM
resources can be nondisruptively moved from the old controllers to the new controllers. At this point the
old controllers can be retired from service while the SVM is still online and available to serve data.

Components of an SVM
Logical Interfaces

All SVM networking is done through logical interfaces (LIFs) that are created within the SVM. As
logical constructs, LIFs are abstracted from the physical networking ports on which they reside.
Flexible Volumes

A flexible volume is the basic unit of storage for an SVM. An SVM has a root volume and can have one
or more data volumes. Data volumes can be created in any aggregate that has been delegated by the
cluster administrator for use by the SVM. Depending on the data protocols used by the SVM, volumes
can contain either LUNs for use with block protocols, files for use with NAS protocols, or both
concurrently.
Namespace

Each SVM has a distinct namespace through which all of the NAS data shared from that SVM can be
accessed. This namespace can be thought of as a map to all of the junctioned volumes for the SVM, no
matter on which node or aggregate they might physically reside. Volumes may be junctioned at the root
of the namespace or beneath other volumes that are part of the namespace hierarchy.
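
As a concrete illustration of these constructs, the following clustered Data ONTAP sketch creates an SVM,
a junctioned data volume, and an NFS data LIF; the SVM, aggregate, port, and address values are
hypothetical:

vserver create -vserver Enclave1-SVM -rootvolume enclave1_root -aggregate aggr01_node01 -rootvolume-security-style unix
volume create -vserver Enclave1-SVM -volume enclave1_data -aggregate aggr01_node02 -size 500GB -junction-path /enclave1_data
network interface create -vserver Enclave1-SVM -lif enclave1_nfs_lif1 -role data -data-protocol nfs -home-node node01 -home-port a0a-3001 -address 192.168.30.11 -netmask 255.255.255.0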

Managing Storage Workload Performance Using Storage QoS


Storage QoS (Quality of Service) can help manage risks around meeting performance objectives. You
use Storage QoS to limit the throughput of workloads and to monitor workload performance. You can
reactively limit workloads to address performance problems, and you can proactively limit workloads to
prevent performance problems. You can also limit workloads to support SLAs with customers.
Workloads can be limited on either an IOPS or a bandwidth (MB/s) basis.
Storage QoS is supported on clusters of up to eight nodes.
A workload represents the input/output (I/O) operations to one of the following storage objects:

A Storage Virtual Machine (SVM) with FlexVol volumes

A FlexVol volume

A LUN

A file (typically represents a virtual machine)

In the SEA Architecture, since an SVM is usually associated with an Enclave, a QoS policy group would
normally be applied to the SVM, setting up an overall storage rate limit for the Enclave. Storage QoS is
administered by the cluster administrator.


You assign a storage object to a QoS policy group to control and monitor a workload. You can monitor
workloads without controlling them in order to size the workload and determine appropriate limits
within the storage cluster.
For more information on managing workload performance by using Storage QoS, please see "Managing
system performance" in the Clustered Data ONTAP 8.2 System Administration Guide for Cluster
Administrators.
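
As a brief example, a cluster administrator could cap an enclave's SVM along these lines; the
policy-group name and limit are arbitrary values for illustration:

qos policy-group create -policy-group enclave1-qos -vserver Enclave1-SVM -max-throughput 5000iops
vserver modify -vserver Enclave1-SVM -qos-policy-group enclave1-qos
qos statistics performance show

The last command monitors the resulting workload so that the limit can be tuned before it is relied upon
for an SLA.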

NetApp cDOT SVM with Cisco Secure Enclaves


The cDOT SVM is a significant element of the FlexPod Data Center with Cisco Secure Enclaves design.
As shown in Figure 8, the physical network resources of two NetApp FAS3200 series controllers have
been partitioned into three logical controllers, namely the Infrastructure SVM, Enclave1 SVM, and
Enclave2 SVM. Each SVM is allocated to an enclave supporting one or more applications, removing the
requirement for dedicated physical storage as the FAS device logically consolidates and separates the
storage partitions. The enclave SVMs have the following characteristics (see the configuration sketch
after Figure 8):

• Dedicated logical interfaces (LIFs) are created in each SVM from the physical NetApp Unified
  Target Adapters (UTAs).
• SAN LIF presence supports SAN A (e3) and SAN B (e4) topologies; zoning provides SAN traffic
  isolation within the fabric.
• The NetApp ifgroup aggregates the Ethernet interfaces (e3a, e4a) of the UTA for high availability
  and supports Layer 2 VLANs.
• IP LIFs use the ifgroup construct for NFS (enclave_ds1) and/or iSCSI-based LIFs.
• Management IP LIFs (svm_mgmt) are defined on each SVM for administration of that SVM and its
  logical resources. The management scope is contained to the SVM.
• Dedicated VLANs for each LIF assure traffic separation across the Ethernet fabric.

Figure 8   NetApp FAS Enclave Storage Design Using cDOT Storage Virtual Machines
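
The interface group, VLAN, and management LIF constructs shown in Figure 8 reduce to clustered Data
ONTAP commands of this general form; node names, ports, VLAN IDs, and addresses are assumptions:

network port ifgrp create -node node01 -ifgrp a0a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node01 -ifgrp a0a -port e3a
network port ifgrp add-port -node node01 -ifgrp a0a -port e4a
network port vlan create -node node01 -vlan-name a0a-3001
network interface create -vserver Enclave1-SVM -lif svm_mgmt -role data -data-protocol none -home-node node01 -home-port a0a-3001 -address 192.168.30.5 -netmask 255.255.255.0 -firewall-policy mgmt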

In addition, each SVM brings other features to support the granular separation and control of the FlexPod
storage domain. These include:

• QoS policies allowing the administrator to manage system performance and resource consumption
  per enclave through policies based on IOPS or MBps throughput
• Role-based access control with predefined roles at the cDOT cluster layer and per individual SVM


• Performance monitoring
• Management security through firewall policy limiting access to trusted protocols

Figure 9 describes another deployment model for the Cisco Secure Enclave on NetApp cDOT. The
Enclaves do not receive a dedicated SVM but share a single SVM with multiple LIFs defined to support
specific data stores. This model does not provide the same level of granularity, but it may provide a
simpler operational model for larger deployments.
Figure 9   NetApp FAS Enclave Storage Design Using cDOT Storage Virtual Machines (Service Provider Model)

Compute Design
The Cisco UCS Manager resides on a pair of Cisco UCS 6200 Series Fabric Interconnects using a
clustered, active-standby configuration for high availability. The software gives administrators a single
interface for performing server provisioning, device discovery, inventory, configuration, diagnostics,
monitoring, fault detection, auditing, and statistics collection. Cisco UCS Manager service profiles and
templates support versatile role- and policy-based management, and system configuration information
can be exported to configuration management databases (CMDBs) to facilitate processes based on IT
Infrastructure Library (ITIL) concepts.
Compute nodes are deployed in a Cisco UCS environment by leveraging Cisco UCS service profiles.
Service profiles let server, network, and storage administrators treat Cisco UCS servers as raw
computing capacity to be allocated and reallocated as needed. The profiles define server I/O properties,
personalities, and firmware revisions, and are stored in the Cisco UCS 6200 Series Fabric
Interconnects. Using service profiles, administrators can provision infrastructure resources in minutes
instead of days, creating a more dynamic environment and more efficient use of server capacity.
Each service profile consists of a server software definition and the server's LAN and SAN connectivity
requirements. When a service profile is deployed to a server, Cisco UCS Manager automatically
configures the server, adapters, fabric extenders, and fabric interconnects to match the configuration
specified in the profile. The automatic configuration of servers, network interface cards (NICs), host bus
adapters (HBAs), and LAN and SAN switches lowers the risk of human error, improves consistency, and
decreases server deployment times.
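
As a small illustration, a profile can be instantiated and bound to a blade from the Cisco UCS Manager
CLI; the organization, profile name, and chassis/slot below are hypothetical:

UCS-A# scope org /
UCS-A /org # create service-profile enclave1-esxi-01 instance
UCS-A /org/service-profile* # associate server 1/3
UCS-A /org/service-profile* # commit-buffer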


Service profiles benefit both virtualized and non-virtualized environments in the Cisco Secure Enclave
deployment. The profiles increase the mobility of non-virtualized servers, such as when moving
workloads from server to server or taking a server offline for service or upgrade. Profiles can also be
used in conjunction with virtualization clusters to bring new resources online easily, complementing
existing virtual machine mobility. Each profile is, in effect, a standard template that can be readily
deployed and secured.

Virtual Server Model


Standardizing the host topology through Cisco UCS service profiles improves IT efficiency. Figure 10
shows the uniform deployment of VMware ESXi within the enclave framework.
The main features include:

• The VMware ESXi host resides in a Cisco converged infrastructure.
• The VMware ESXi host is part of a larger VMware vSphere High Availability (HA) and Distributed
  Resource Scheduler (DRS) cluster.
• Cisco virtual interface cards (VICs) offer multiple virtual PCI Express (PCIe) adapters to the
  VMware ESXi host for further traffic isolation and specialization:
  - Six Ethernet-based virtual network interface cards (vNICs) with specific roles associated with
    the enclave system, enclave data, and core services traffic are created:
    - vmnic0 and vmnic1 for the Cisco Nexus 1000V system uplink support management, VMware
      vMotion, and virtual service control traffic.
    - vmnic2 and vmnic3 support data traffic originating from the enclaves.
    - vmnic4 and vmnic5 carry core services traffic. Private VLANs isolate traffic to the virtual
      machines within an enclave, providing core services such as Domain Name System (DNS),
      Microsoft Active Directory, Dynamic Host Configuration Protocol (DHCP), and Microsoft
      Windows updates.
  - Two virtual host bus adapters (vHBAs) for multihoming to available block-based storage.
• VMkernel ports are created to support three traffic types:
  - vmknic0 supports VMware ESXi host management traffic.
  - vmknic1 supports VMware vMotion traffic.
  - vmknic2 and vmknic3 provide the Virtual Extensible LAN (VXLAN) tunnel endpoint (VTEP)
    to support traffic with path load balancing through the Cisco UCS fabric.
  - Additional Network File System (NFS) and Small Computer System Interface over IP (iSCSI)
    VMknics may be assigned to individual enclaves as needed to support application and
    segmentation requirements. These VMknics use the PortChannel dedicated to enclave data.

Note
A maximum of 256 VMkernel NICs are available per VMware ESXi host.

• Cisco Nexus 1000V is deployed on the VMware ESXi host with the following elements (an uplink
  port-profile sketch follows the list):
  - PortChannels created for high availability and load balancing
  - Segmentation of traffic through dedicated vNICs, VLANs, and VXLANs
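
A Cisco Nexus 1000V Ethernet (uplink) port profile of roughly the following shape backs the system
PortChannel described above; the VLAN ranges and names are placeholders:

port-profile type ethernet enclave-system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 3175-3177
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 3175-3176
  state enabled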


Figure 10   Uniform ESXi Host Topology

Bare Metal Server Model


The enclave architecture is not restricted to virtualized server platforms. Bare-metal servers persist in
many organizations to address various performance and compliance requirements. To address bare-metal
operating systems within an enclave (Figure 11), the following features were enabled:

• Cisco UCS fabric failover to provide fabric-based high availability. This feature precludes the use
  of host-based link aggregation or bonding.
• Cisco VICs to provide multiple virtual PCIe adapters to the host for further traffic isolation and
  specialization:
  - Ethernet-based vNICs with specific roles associated with the enclave system, enclave data, and
    core services traffic are created:
    - vnic-a and vnic-b support data traffic originating from the host. Two vNICs were defined to
      allow host-based bonding; only one vNIC is required.
    - vcore supports core services traffic. Private VLANs isolate traffic to the servers within an
      enclave, providing core services such as DNS, Microsoft Active Directory, DHCP, and
      Microsoft Windows Updates.
  - Two virtual HBAs provide multihoming to available block-based storage.
• Dedicated VLANs per enclave for bare-metal server connections (a vNIC CLI sketch follows the list)
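
The fabric failover behavior called out above is a per-vNIC attribute; in the Cisco UCS Manager CLI the
sketch looks roughly like this, with hypothetical names (fabric a-b places the vNIC on fabric A with
failover to fabric B):

UCS-A# scope org /
UCS-A /org # scope service-profile enclave1-bm-01
UCS-A /org/service-profile # create vnic vcore fabric a-b
UCS-A /org/service-profile/vnic* # commit-buffer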


Figure 11   Bare Metal Server Model

Network Design
The network fabric knits the previously defined storage and compute domains, with the addition of
network services, into a cohesive system. The combination creates an efficient, consistent, and secure
application platform: an enclave. The enclave is built using the Cisco Nexus switching platforms
already included in the FlexPod Data Center. This section describes two enclave models, their
components, and their capabilities.
Figure 12 depicts an enclave using two VLANs, with one or more VXLANs used at the virtualization
layer. The VXLAN solution provides logical isolation within the hypervisor and removes the scale
limitations associated with VLANs. The enclave is constructed as follows:

• Two VLANs are consumed on the physical switch for the entire enclave.
• The Cisco Nexus Series Switch provides the policy enforcement point and default gateway
  (SVI 2001).
• Cisco ASA provides the security group firewall for traffic control enforcement.
• Cisco ASA provides virtual context bridging for two VLANs (VLANs 2001 to 3001 in the figure).
• VXLAN is supported across the infrastructure for virtual machine traffic.
• Consistent security policy is provided through universal security group tags (SGTs):
  - The import of the Cisco ISE protected access credential (PAC) file establishes a secure
    communication channel between Cisco ISE and the device.
  - Cisco ISE provides SGTs to Cisco ASA, and Cisco ASA defines security group access control
    lists (SGACLs).
  - Cisco ISE provides SGTs and downloadable SGACLs to the Cisco Nexus switch.
  - Cisco ISE provides authentication and authorization across the infrastructure.
• An SGT is assigned on the Cisco Nexus 1000V port profile.
• The Cisco Nexus 1000V propagates IP address-to-SGT mapping across the fabric through the SGT
  Exchange Protocol (SXP) for SGTs assigned to the enclave.
• The Cisco VSG for each enclave provides Layer 2 firewall functions.
• Load-balancing services are optional but readily integrated into the model.
• Dedicated VMknics are available to meet dedicated NFS and iSCSI access requirements. (A
  configuration sketch of the enclave plumbing follows.)
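
Using the numbering from Figure 12, the enclave plumbing on the Cisco Nexus switch reduces to a sketch
like the following; the VLAN IDs, interfaces, and addressing are assumptions consistent with the figure
rather than validated values:

feature interface-vlan
vlan 2001
  name Enclave1-Outside
vlan 3001
  name Enclave1-Inside
interface Vlan2001
  ip address 10.20.1.1/24
  no shutdown

In the Cisco ASA system execution space, a dedicated context is then carved out per enclave, bridging the
two VLAN-facing subinterfaces:

context enclave1
  allocate-interface Port-channel1.2001
  allocate-interface Port-channel1.3001
  config-url disk0:/enclave1.cfg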

Figure 12   Enclave Model: Transparent VLAN with VXLAN (Cisco ASA Transparent Mode)

Figure 13 illustrates the logical structure of another enclave on the same shared infrastructure employing
the Cisco ASA routed virtual context as the default gateway for the web server. The construction of this
structure is identical to the previously documented enclave except for the firewall mode of operation.
Figure 13   Enclave Model: Routed Firewall with VXLAN (Cisco ASA Routed Mode)


Security Services
Firewall

Firewalls are the primary control point for access between two distinct network segments, commonly
referred to as inside, outside, public, or private. The Cisco Secure Enclave Architecture uses two
categories of firewalls, zone and edge, for access control into, between, and within the enclave. The
enclave model promotes security "proximity", meaning that, where possible, traffic patterns within an
enclave should remain contiguous to the compute. The use of multiple policy enforcement points
promotes optimized paths.
Cisco Virtual Security Gateway

The Cisco Virtual Security Gateway (VSG) protects traffic within the enclave, enforcing security policy
at the VM level by applying policy based on VM or network attributes. Typically this traffic is
considered east-west in nature; in reality, any traffic into a VM is subject to the VSG security
policy. The enclave model calls for a single VSG instance per enclave, allowing the security operations
team to develop granular security rules based on the application and associated business requirements.
The Cisco Nexus 1000v Virtual Ethernet Module (VEM) redirects the initial packet destined to a VM
to the VSG, where policy evaluation occurs. The redirection of traffic occurs using vPath when the virtual
service is defined on the port profile of the VM. The VEM encapsulates the packet and forwards it to the
VSG assigned to the enclave. The Cisco VSG processes the packet and forwards the result to the vPath
on the VEM, where the policy decision is cached and enforced for subsequent packets. The vPath
maintains the cache until the flow is reset (RST), finished (FIN), or times out.
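
On the Cisco Nexus 1000v, binding an enclave's VSG to a port profile via vPath takes roughly the
following form; the node name, address, and profile names are illustrative:

vservice node VSG-ENCLAVE1 type vsg
  ip address 10.10.101.21
  adjacency l3
  fail-mode close
port-profile type vethernet enclave1-web
  org root/Enclave1
  vservice node VSG-ENCLAVE1 profile enclave1-web-policy
  state enabled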
Note

The Cisco Virtual Security Gateway may be deployed adjacent to the Cisco Nexus 1000v VEM or across
a number of Layer 3 hops.
Cisco Adaptive Security Appliances

The edge of the enclave is protected using the Cisco Adaptive Security Appliance. The Cisco ASA can
be partitioned into multiple security contexts (up to 250), allowing each enclave to have a dedicated virtual
ASA to apply access control, intrusion prevention, and antivirus policy. The primary role of each ASA
enclave context is to control access between the inside and outside network segments. This traffic is
typically referred to as north-south in nature.
The Cisco ASA supports Cisco TrustSec. Cisco TrustSec is an intelligent solution providing secure
network access based on the context of a user or a device; network access is granted based on contextual
data such as who, what, where, when, and how. Cisco TrustSec in the enclave architecture uses Security
Group Tag (SGT) assignment on the Cisco Nexus 1000v and the ASA as a Security Group Firewall
(SGFW) to enforce the role-based access control policy.
The Cisco Identity Services Engine (ISE) is a required component in the Cisco TrustSec implementation,
providing centralized definition of the SGT-to-IP mappings. A Protected Access Credential (PAC) file
secures the communication between the ISE and ASA platforms and allows the ASA to download
the security group table. This table contains the SGT-to-security-group-name translation. The security
operations team can then create access rules based on the object tags (SGTs), simplifying policy
configuration in the data center.
The SGT is assigned at the VM port profile on the Cisco Nexus 1000v. The SGT assignment is
propagated to the ASA through the SGT Exchange Protocol (SXP). SXP is a secure conversation
between two devices, a speaker and a listener. The ASA may perform both roles, but in this design it is
strictly a listener, learning mappings and acting as an SGFW. If the IP-to-SGT mapping is part of a
security group policy, the ASA enforces the rule.
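
A condensed ASA-side sketch of this integration follows; the server addresses, file names, keys, and
security-group names are placeholders:

! AAA server group pointing at ISE; the PAC file is then imported
aaa-server ISE protocol radius
aaa-server ISE (management) host 172.26.164.187
 key RADIUS-SECRET
cts server-group ISE
cts import-pac disk0:/enclave-asa.pac password PAC-SECRET
! listen for IP-to-SGT mappings advertised over SXP
cts sxp enable
cts sxp connection peer 10.10.100.5 password default mode local listener
! SGFW rule written against security-group objects learned from ISE
access-list enclave1-in extended permit tcp security-group name web any security-group name app any eq 8080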


Cyber Threat Defense

Cyber threats are attacks focused on seizing sensitive data, money, or ideas. The Cisco Cyber Threat
Defense (CTD) Solution provides greater visibility into these threats by identifying suspicious network
traffic patterns within the network, giving security analysts the contextual information necessary to
discern the level of threat these suspicious patterns represent. As shown in Figure 14, the solution is
easily integrated and readily enabled on the base FlexPod components, protecting the entire FlexPod
Data Center with Cisco Secure Enclaves solution.
The CTD solution employs three primary components to provide this crucial visibility:

Network Telemetry through NetFlow

Threat Context through Cisco Identity Services Engine (ISE)

Unified Visibility, Analysis and Context through Lancope StealthWatch

Figure 14   Cisco Secure Enclave Cyber Threat Defense Model

Network Telemetry through NetFlow

NetFlow was developed by Cisco to collect network traffic information and enable monitoring of the
network. The data collected by NetFlow provides insight into specific traffic flows in the form of
records. The enclave framework uses several methods to reliably collect NetFlow data and provide a full
picture of the FlexPod Data Center environment including:

NetFlow Generation Appliances (NGA)

Direct NetFlow Sources

Cisco ASA 5500 NetFlow Secure Event Logging (NSEL)


The effectiveness of any monitoring system is dependent on the completeness of the data it captures.
With that in mind, the enclave model does not recommend using sampled NetFlow; ideally, the
NetFlow records should reflect the FlexPod traffic in its entirety. To that end, the physical Cisco Nexus
switches are relieved of NetFlow responsibilities and implement line-rate SPAN. The NGAs are
connected to SPAN destination ports on the Cisco Nexus switches and Cisco UCS fabric interconnects.
The collection points are described in the NetFlow Generation Appliance (NGA) Extension section. The
NGA devices are promiscuous, supporting up to 40 Gbps of mirrored traffic to create NetFlow records for
export to the Lancope StealthWatch FlowCollectors.
Direct NetFlow sources generate and send flow records directly to the Lancope FlowCollectors. The
Cisco Nexus 1000v virtual distributed switch provides this functionality for the virtual access layer of
the enclave. It is recommended to enable NetFlow on the Cisco Nexus 1000v interfaces. In larger
environments where the limits of the Cisco Nexus 1000v NetFlow resources are reached, NetFlow
should be enabled on the VM interfaces that host data sources.
Another source of direct flow data is the Cisco ASA 5500, which generates NSEL records. These records
differ from traditional NetFlow but are fully supported by the Lancope StealthWatch system. In fact, the
records include the action (permit or deny) taken by the ASA on the flow as well as the NAT translation,
adding another layer of depth to the telemetry of the CTD system. A configuration sketch for both direct
sources follows.
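
The collector address and port in the sketch below are assumptions. On the Cisco Nexus 1000v:

feature netflow
flow exporter STEALTHWATCH-FC
  destination 10.10.100.30
  transport udp 2055
  version 9
flow monitor ENCLAVE-FLOWS
  record netflow-original
  exporter STEALTHWATCH-FC
  timeout active 60
port-profile type vethernet enclave1-web
  ip flow monitor ENCLAVE-FLOWS input

And NSEL export on the Cisco ASA:

flow-export destination management 10.10.100.30 2055
policy-map global_policy
  class class-default
    flow-export event-type all destination 10.10.100.30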
Threat Context through Cisco Identity Services Engine (ISE)

In order to provide context, the Lancope StealthWatch system employs the services of the Cisco Identity
Services Engine. The ISE can provide device and user information, offering more information for the
security operations team to use during threat analysis and potential response. In addition to the device
profile and user identity, the ISE can provide time, location, and network data to create a contextual
identity of who and what is on the network.
Unified Visibility, Analysis and Context through Lancope StealthWatch

The Lancope StealthWatch system collects, organizes, and analyzes all of the incoming data points to
provide a cohesive view into the inner workings of the enclave. The StealthWatch Management Console
(SMC) is the central point of control, supporting millions of flows. The primary SMC dashboards offer
insight into network reconnaissance, malware propagation, command and control traffic, data
exfiltration, and internal host reputation. The combination of Cisco and Lancope technologies offers
protection across the entire FlexPod Data Center with Cisco Secure Enclaves solution.
Management Design

The communication between the management domain, the hardware infrastructure, and the enclaves is
established through traditional paths as well as through the use of private VLANs on the Cisco Nexus
1000V and Cisco UCS fabric interconnects. The use of dedicated out-of-band management VLANs for
the hardware infrastructure, including Cisco Nexus switching and the Cisco UCS fabric, is a
recommended practice. The enclave model suggests the use of a single isolated private VLAN that is
maintained between the bare-metal and virtual environments. This private isolated VLAN allows all
virtual machines and bare-metal servers to converse with the services in the management domain, which
is a promiscuous region. The private VLAN feature enforces separation between servers within a single
enclave and between enclaves.
Figure 15 shows the logical construction of this private VLAN environment, which supports directory,
DNS, Microsoft Windows Server Update Services (WSUS), and other common required services for an
organization. A configuration sketch follows.
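
A minimal sketch of this construct on the Cisco Nexus 1000V, assuming VLAN 900 as the primary
(promiscuous) VLAN and VLAN 901 as the isolated secondary, looks like this:

feature private-vlan
vlan 900
  private-vlan primary
  private-vlan association 901
vlan 901
  private-vlan isolated
port-profile type vethernet enclave-core-services
  switchport mode private-vlan host
  switchport private-vlan host-association 900 901
  state enabled
port-profile type vethernet mgmt-core-services-promisc
  switchport mode private-vlan promiscuous
  switchport private-vlan mapping 900 901
  state enabled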


Figure 15   Private VLANs Providing Secure Access to Core Services

Figure 16 shows the virtual machine connection points to the management domain and the data
domain. As illustrated, the traffic patterns are completely segmented through the use of traditional
VLANs, VXLANs, and isolated private VLANs. The figure also shows the use of dedicated PCIe
devices and logical PortChannels created on the Cisco Nexus 1000V to provide load balancing, high
availability, and additional traffic separation.
Figure 16   Enclave Virtual Machine Connections

Management Services
The FlexPod Data Center with Cisco Secure Enclaves employs numerous domain-level managers to
provision, organize, and coordinate the operation of the enclaves on the shared infrastructure. The
domain-level managers employed during the validation are listed in Table 2 and Table 3. Table 2
describes the role of each management product, while Table 3 indicates the positioning of that product
within the architecture.


Table 2   FlexPod Data Center with Cisco Secure Enclaves Validated Management Platforms

Cisco Unified Computing System Manager (UCSM): Provides administrators a single interface for
performing server provisioning, device discovery, inventory, configuration, diagnostics, monitoring,
fault detection, auditing, and statistics collection.

Microsoft Active Directory, DNS, DHCP, WSUS, etc.: Microsoft directory services provide centralized
authentication and authorization for users and computers. DNS services are centralized for TCP/IP name
translation. DHCP provides automated IP address assignment that is coordinated with the DNS records.
Windows Update Services are defined and applied through AD Group Policy; this service keeps the
Windows operating systems current.

VMware vSphere vCenter: Provides centralized management of the vSphere ESXi hosts, virtual
machines, and enablement of VMware features such as vMotion and DRS cluster services.

Cisco Security Manager: Provides scalable, centralized management that allows administrators to
efficiently manage a wide range of Cisco security devices, gain visibility across the network deployment,
and share information with other essential network services, such as compliance systems and advanced
security analysis systems, with a high degree of security.

Lancope StealthWatch System: Ingests and processes NetFlow records providing unique insight into
network transactions, allowing for greater understanding of the network and fine-grained analysis of
security incidents under its watch.

Cisco Identity Services Engine: Provides user and device identity and context information to create
policies that govern authorized network access. ISE is the policy control point of the Cisco TrustSec
deployment allowing for centralized object-based security.

Cisco Prime Network Services Controller: Provides centralized device and security policy management
of the Cisco Virtual Security Gateway (VSG) and other virtual services.

NetApp OnCommand System Manager: Manages individual or clustered storage systems through a
browser-based interface.

NetApp OnCommand Unified Manager: Provides a single dashboard to view the health of your NetApp
storage availability, capacity, and data protection relationships. Unified Manager offers risk
identification and proactive notifications and recommendations.

NetApp Virtual Storage Console (VSC): Provides integrated, comprehensive, end-to-end virtual storage
management for the VMware vSphere infrastructure, including discovery, health monitoring, capacity
management, provisioning, cloning, backup, restore, and disaster recovery.

NetApp NFS Plug-in for VMware vStorage APIs for Array Integration (VAAI): VAAI is a set of APIs
and SCSI commands allowing VMware ESXi hosts to offload VM operations such as cloning and
initialization to the FAS controllers.

NetApp OnCommand Balance: Provides directions to optimize the performance and capacity of the
virtual and physical data center resources including NetApp storage, physical servers, and VMware
virtual machines.

Cisco Nexus 1000v Virtual Supervisor Module for VMware vSphere: Provides a comprehensive and
extensible architectural platform for virtual machine (VM) and cloud networking.

Cisco Virtual Security Gateway: Delivers security, compliance, and trusted access for virtual data center
and cloud computing environments.

Cisco Prime Network Analysis Module (NAM): Delivers application visibility and network analytics to
the physical and virtual network.

Table 3   FlexPod Data Center with Cisco Secure Enclaves Validated Management Platforms

Product                                        Positioned
Microsoft Active Directory, DNS, DHCP, WSUS    VMware vSphere Management Cluster
VMware vSphere vCenter                         VMware vSphere Management Cluster
Cisco Security Manager                         VMware vSphere Management Cluster
Lancope StealthWatch System                    VMware vSphere Management Cluster
Cisco Identity Services Engine                 VMware vSphere Management Cluster
Cisco Prime Network Services Controller       VMware vSphere Management Cluster
NetApp OnCommand System Manager                VMware vSphere Management Cluster
NetApp OnCommand Unified Manager               VMware vSphere Management Cluster
NetApp Virtual Storage Console (VSC)           VMware vSphere Management Cluster
NetApp NFS Plug-in for VMware VAAI             VMware ESXi Host
NetApp OnCommand Balance                       VMware vSphere Management Cluster
Cisco Nexus 1000v Virtual Supervisor Module    Nexus 1110-X Platform
Cisco Virtual Security Gateway                 Nexus 1110-X Platform
Cisco Prime Network Analysis Module (NAM)      Nexus 1110-X Platform

Unified Management with Cisco UCS Director


Cisco UCS Director provides a central user portal for managing the environment and enables the
automation of the manual tasks associated with the provisioning and subsequent operation of the
enclave. Cisco UCS Director can directly or indirectly manage the individual FlexPod Data Center
components and enclave extensions.


Figure 17   Cisco UCS Director for FlexPod Management

Figure 18 shows the interfaces that Cisco UCS Director employs. Ideally, the northbound APIs of the
various management domains are used, but Cisco UCS Director may also directly access devices to create
the enclave environment. It should be noted that the Cyber Threat Defense components are not directly
accessed, as these protections are overlays encompassing the entire infrastructure.
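
For orchestration beyond the GUI, Cisco UCS Director also exposes its own northbound REST API over
authenticated HTTP; the host name and access key in this sketch are hypothetical:

curl -k -H "X-Cloupia-Request-Key: <api-access-key>" \
  "https://ucsd.example.com/app/api/rest?formatType=json&opName=userAPIGetAllCatalogs&opData={}"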
Figure 18   Cisco UCS Director Secure Enclave Connections

The instantiation of multiple enclaves on the FlexPod Data Center platform through Cisco UCS Director
offers operational efficiency and consistency to the organization. Figure 19 illustrates the automation of
the infrastructure through a single pane of glass approach.
Figure 19   Cisco UCS Director Automating Enclave Deployment


Enclave Implementation
The implementation section of this document builds on the baseline FlexPod Data Center deployment
guides and assumes this baseline infrastructure is in place, containing the Cisco UCS, NetApp FAS, and
Cisco Nexus configuration. Please reference the following documents for FlexPod Data Center deployment
with the Cisco Nexus 7000 or Cisco Nexus 5000 series switches.
VMware vSphere 5.1 on FlexPod Deployment Guide for Clustered ONTAP at
http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/esxi51_ucsm2_Clusterdeploy.html
VMware vSphere 5.1 on FlexPod with the Cisco Nexus 7000 Deployment Guide at
http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/flexpod_esxi_N7k.html
The deployment details provide example configurations necessary to achieve enclave functionality. It is
assumed that the reader has installed the products and has some familiarity with them.

Cisco Nexus Switching


The FlexPod Data Center solution supports multiple Cisco Nexus family switches including the Cisco
Nexus 9000, Cisco Nexus 7000, Cisco Nexus 6000, and Cisco Nexus 5000 series switches. This section
of the document will address using either the Cisco Nexus 7000 or Cisco Nexus 5000 series switches as
the FlexPod Data Center networking platform.

Cisco Nexus 7000 as FlexPod Data Center Switch


The Cisco Nexus 7000 has three Virtual Device Contexts (VDCs): one admin VDC, one storage VDC, and
one LAN or Ethernet VDC. VDCs are abstractions of the physical switch and offer the operational benefits
of fault and traffic isolation. The VDCs were built using the deployment guidance of the
FlexPod Data Center with Cisco Nexus 7000 document. The majority of the configurations are identical
to the base FlexPod implementation. This section discusses the modifications.

ISE Integration
Two Identity Services Engines are provisioned in a primary/secondary configuration for high
availability. Each ISE node assumes the following personas:

Administration Node

Policy Service Node

Monitoring Node

The ISE provides RADIUS services to each of the Cisco Nexus 7000 VDCs, which are configured as
Network Devices in ISE.


Nexus 7000-A and Nexus 7000-B (Ethernet VDC)

radius-server key 7 "K1kmN0gy"
radius distribute
radius-server host 172.26.164.187 key 7 "K1kmN0gy" authentication accounting
radius-server host 172.26.164.239 key 7 "K1kmN0gy" authentication accounting
radius commit
aaa group server radius ISE-Radius-Grp
  server 172.26.164.187
  server 172.26.164.239
  use-vrf management
  source-interface mgmt0
ip radius source-interface mgmt0

The following AAA commands were used:

Nexus 7000-A and Nexus 7000-B (Ethernet VDC)

aaa authentication login default group ISE-Radius-Grp
aaa authentication dot1x default group ISE-Radius-Grp
aaa accounting dot1x default group ISE-Radius-Grp
aaa authorization cts default group ISE-Radius-Grp
aaa accounting default group ISE-Radius-Grp
no aaa user default-role

Cisco TrustSec
Cisco TrustSec provides an access-control solution that builds upon an existing identity-aware
infrastructure to ensure data confidentiality between network devices and integrate security access
services on one platform. In the Cisco TrustSec solution, enforcement devices utilize a combination of
user attributes and end-point attributes to make role-based and identity-based access control decisions.


In this release, the ASA integrates with Cisco TrustSec to provide security group based policy
enforcement. Access policies within the Cisco TrustSec domain are topology-independent, based on the
roles of source and destination devices rather than on network IP addresses.
The ASA can utilize the Cisco TrustSec solution for other types of security group based policies, such
as application inspection; for example, you can configure a class map containing an access policy based
on a security group.
The Cisco TrustSec environment is enabled on the Nexus 7000. The Cisco Nexus 7000 aggregates
SGT Exchange Protocol (SXP) information and sends it to any listener. In the enclave design, the
Cisco Nexus 1000v is a speaker and the Cisco ASA virtual contexts are listener devices.
Figure 20  Cisco TrustSec Implementation as Validated


Nexus 7000-A (Ethernet VDC)

! Enable Cisco TrustSec on the Nexus 7000
feature cts
! Name and password shared for ISE device registration
cts device-id k02-fp-sw-a password 7 K1kmN0gy
cts role-based counters enable
! Enable SXP
cts sxp enable
! Default SXP password used for all SXP communications
cts sxp default password 7 K1kmN0gy
! SXP connection to an ASA virtual context - N7k in speaker role
cts sxp connection peer 10.0.101.100 source 172.26.164.218 password default mode listener
! SXP connection to the Nexus 1000v - N7k in listener mode
cts sxp connection peer 172.26.164.18 source 172.26.164.218 password default mode speaker

Nexus 7000-B (Ethernet VDC)

! Enable Cisco TrustSec on the Nexus 7000
feature cts
! Name and password shared for ISE device registration
cts device-id k02-fp-sw-b password 7 K1kmN0gy
cts role-based counters enable
! Enable SXP
cts sxp enable
! Default SXP password used for all SXP communications
cts sxp default password 7 K1kmN0gy
! SXP connection to an ASA virtual context - N7k in speaker role
cts sxp connection peer 10.0.101.100 source 172.26.164.217 password default mode listener
! SXP connection to the Nexus 1000v - N7k in listener mode
cts sxp connection peer 172.26.164.18 source 172.26.164.217 password default mode speaker

Note

The SXP information is common across ASA virtual contexts. The SGT mappings are global and should
not overlap between contexts.

Private VLANs
The use of private VLANs allows for the complete isolation of control and management traffic within
an enclave. The Cisco Nexus 7000 supports private VLANs; the following structure was used during
validation. In this sample, VLAN 3171 is the primary VLAN and 3172 is an isolated VLAN carried
across the infrastructure.

Nexus 7000-A and Nexus 7000-B (Ethernet VDC)

vlan 3171
  name core-services-primary
  private-vlan primary
  private-vlan association 3172
vlan 3172
  name core-services-isolated
  private-vlan isolated


Port Profiles
A port profile is a mechanism for simplifying the configuration of interfaces. A single port profile can
be assigned to multiple interfaces to give them all the same configuration. Changes to a port profile are
propagated to the configuration of any interface that is assigned to it.
In the validated architecture, three port profiles were created supporting the Cisco UCS fabric
interconnects, the NetApp FAS controllers, and the Cisco Nexus 1110 Cloud Services Platforms. The
following details the port profile configurations, which are applied to the virtual and physical
interfaces on the Cisco Nexus 7000.


Nexus 7000-A and Nexus 7000-B (Ethernet VDC)

port-profile type port-channel UCS-FI
  switchport
  switchport mode trunk
  switchport trunk native vlan 2
  spanning-tree port type edge trunk
  mtu 9216
  switchport trunk allowed vlan 2,98-99,201-219,666,2001-2019,3001-3019
  switchport trunk allowed vlan add 3170-3173,3175-3179,3250-3251,3253-3255
  description <<** UCS Fabric Interconnect Port Profile **>>
  state enabled
port-profile type ethernet Cloud-Services-Platforms
  switchport
  switchport mode trunk
  spanning-tree port type edge trunk
  switchport trunk allowed vlan 98-99,3175-3176,3250
  description <<** CSP Port Profile **>>
  state enabled
port-profile type port-channel FAS-Node
  switchport
  switchport mode trunk
  switchport trunk native vlan 2
  spanning-tree port type edge trunk
  mtu 9216
  switchport trunk allowed vlan 201-219,3170
  description <<** NetApp FAS Node Port Profile **>>
  state enabled

interface port-channel11
  inherit port-profile FAS-Node
interface port-channel12
  inherit port-profile FAS-Node
interface port-channel13
  inherit port-profile UCS-FI
interface port-channel14
  inherit port-profile UCS-FI
interface Ethernet4/17
  inherit port-profile Cloud-Services-Platforms
interface Ethernet4/19
  inherit port-profile Cloud-Services-Platforms

Quality of Service (QoS)


The enclave design on the Nexus 7000 uses multiple VDCs, one of them dedicated to supporting
block-based storage through FCoE. As such, the system defaults may be adjusted and the environment
optimized to address the complete separation of FCoE from other Ethernet traffic through the Nexus
7000 VDCs. Cisco Modular QoS CLI (MQC) provides this functionality, allowing administrators to
(a generic example follows the list below):


Create traffic classes by classifying the incoming and outgoing packets that match criteria such as
IP address or QoS fields.

Create policies by specifying actions to take on the traffic classes, such as limiting, marking, or
dropping packets.

Apply policies to a port, port channel, VLAN, or a sub-interface.
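
The following is a minimal, generic MQC sketch of these three steps. The ACL, class map, and policy
map names are hypothetical and not part of the validated configuration; they simply illustrate the
classify, mark, and apply pattern used throughout this section.

! Hypothetical example: classify traffic matching an ACL, mark it with
! CoS 4, and apply the policy to a port channel interface
ip access-list acl-example
  10 permit ip 10.1.1.0/24 any
class-map type qos match-any cm-example
  match access-group name acl-example
policy-map type qos pm-example
  class cm-example
    set cos 4
interface port-channel13
  service-policy type qos input pm-example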

Queues (optional modifications)

Queues are one method to manage network congestion. Ingress and egress queue selection is based on
CoS values. The default network-qos queue structure, nq-7e-4Q1T-HQoS, is shown below for a system
with F2 line cards. The F2 line card supports four queues, each supporting specific traffic classes
assigned by CoS values.

Note

F2 series line cards were used for validation.


The enclave does not require modification of the QoS environment, but this is provided as an example
of optimizing FlexPod resources. The following command copies the default queuing policy of the
system, inherited from the admin VDC, to the local Ethernet VDC.

Nexus 7000-A and Nexus 7000-B (Ethernet VDC)

qos copy policy-map type queuing default-4q-7e-in-policy prefix FP-

The new local copy of the ingress queuing policy structure (as shown above) is redefined to address
Ethernet traffic. The "no-drop" or FCoE traffic is given the minimal amount of resources, as this
traffic will not traverse this Ethernet VDC; it traverses the VDC dedicated to storage traffic.
Essentially, class of service (CoS) 3 no-drop traffic is neither defined nor expected within this
domain. In the following example, the c-4q-7e-drop-in class is given 99% of the available resources.

Nexus 7000-A and Nexus 7000-B (Ethernet VDC)

policy-map type queuing FP-4q-7e-in
  class type queuing c-4q-7e-drop-in
    service-policy type queuing FP-4q-7e-drop-in
    queue-limit percent 99
  class type queuing c-4q-7e-ndrop-in
    service-policy type queuing FP-4q-7e-ndrop-in
    queue-limit percent 1

The queuing policy maps are then adjusted to reflect the new percentage totals. For example, the
4q4t-7e-in-q1 class receives 50% of the queue limit within the FP-4q-7e-drop-in class, but that is
really 50% of the 99% queue limit available in total, meaning the 4q4t-7e-in-q1 class receives 49.5%
of the total available queue.

Note

Effective queue limit % = assigned queue-limit % from parent class x local queue limit %

The 4q4t-7e-in-q4 class under the FP-4q-7e-ndrop-in class receives 100% of the 1% effectively
assigned to it. Again, the lab implementation did not expect any CoS 3 (no-drop) traffic in the
Ethernet VDC.


Nexus 7000-A and Nexus 7000-B (Ethernet VDC)

policy-map type queuing FP-4q-7e-drop-in
  class type queuing 4q4t-7e-in-q1
    queue-limit percent 50
    bandwidth percent 50
  class type queuing 4q4t-7e-in-q-default
    queue-limit percent 25
    bandwidth percent 24
  class type queuing 4q4t-7e-in-q3
    queue-limit percent 25
    bandwidth percent 25
policy-map type queuing FP-4q-7e-ndrop-in
  class type queuing 4q4t-7e-in-q4
    queue-limit percent 100
    bandwidth percent 1

The bandwidth percentages should total 100% across the class queues. The no-drop queue was given the
least amount of resources, 1%. Note that zero resources is not an option for any queue.
Table 4  Effective Queuing Configuration Example

Queuing Class                    Queue-limit %  Effective %  Bandwidth %  Effective %
4q4t-7e-in-q1 (CoS 5-7)          50             49.5         50           50
4q4t-7e-in-q-default (CoS 0-1)   25             24.75        24           24
4q4t-7e-in-q3 (CoS 2,4)          25             24.75        25           25
4q4t-7e-in-q4 (no drop) (CoS 3)  100            1            1            1

The queuing policy can be applied to one or more interfaces. To simplify the deployment, the service
policy is applied to the relevant port profiles, namely the FAS and Cisco UCS ports.

Nexus 7000-A and Nexus 7000-B (Ethernet VDC)

port-profile type port-channel UCS-FI
  service-policy type queuing input FP-4q-7e-in
port-profile type port-channel FAS-Node
  service-policy type queuing input FP-4q-7e-in

Note

The egress queue buffer allocations are non-configurable for the F2 line cards used for validation.


Classification

The NAS traffic originating from the NetApp FAS controllers will be classified and marked to receive
the appropriate levels of service across the Enclave architecture. The FP-qos-fas policy map was created
to mark all packets with a CoS of 5 (Gold). Marking the traffic from the FAS is a recommended practice.
CoS 5 aligns with the policies created in the Cisco UCS and Cisco Nexus 1000v platforms.

Nexus 7000-A and Nexus 7000-B (Ethernet VDC)

policy-map type qos FP-qos-fas
  class class-default
    set cos 5

The ability to assign this at the VLAN level simplifies the classification of packets and aligns well
with the VLAN-to-NetApp Storage Virtual Machine (SVM) relationship, which requires dedicated VLANs
for processing on the controller. After this configuration, CoS 5 is effectively marked on all
frames within the VLANs listed. The VLANs in this example support enclave NFS traffic.

Nexus 7000-A and Nexus 7000-B (Storage VDC)

vlan configuration 201-219
  service-policy type qos input FP-qos-fas

Monitoring
The ability to monitor network traffic within the Nexus platform is key to ensuring the efficient
operation of the solution. The design calls for the use of Switched Port Analyzer (SPAN) as well as
NetFlow services to provide visibility.
SPAN

Switched Port Analyzer (SPAN) sends a copy of the traffic to a destination port. The network analyzer
attached to the destination port analyzes the traffic that passes through the source port. The Cisco
Nexus 7000 supports all SPAN sessions in hardware; the supervisor CPU is not involved.
The source port, also called a monitored port, can be a single port, multiple ports, or a VLAN.
You can monitor packets on the source port in the receive (rx), transmit (tx), or bidirectional
(both) direction. A replication of the packets is sent to the destination port for analysis.
The destination port connects to a probe or security device that can receive and analyze the
copied packets from one or more source ports. In the design, the SPAN destination ports are the
Cisco NetFlow Generation Appliances (NGA). It is important to note that the capacity of the
destination SPAN interfaces should equal or exceed the capacity of the source interfaces to avoid
SPAN drops obscuring network visibility.
Figure 21 describes the connectivity between the Cisco Nexus 7000 switches and the Cisco NGA
devices. Notice that a static port channel is configured on the Cisco Nexus 7000 to the NGAs. The
NGAs are promiscuous devices and do not participate in port aggregation protocols such as PAgP or
LACP on their data interfaces. Each of the links is 10 Gigabit Ethernet. The port channel may
contain up to 16 active interfaces in the bundle, allowing for greater capacity. It is important to
note that the NGAs are independent devices, so adding more promiscuous endpoint devices to the port
channel is not an issue. SPAN traffic is redirected and load balanced across the static link
members of the port channel.


Figure 21  Cisco Nexus 7000 to Cisco NGA Connectivity

Nexus 7000-A and Nexus 7000-B (Ethernet VDC)

interface port-channel8
  description <<** NGA SPAN PORTS **>>
  switchport mode trunk
  switchport monitor
monitor session 1
  description SPAN ASA Data Traffic from Po20
  source interface port-channel20 rx
  destination interface port-channel8
  no shut
Note

SPAN may use the same replication engine as multicast on the module, and there is a physical limit to
the amount of replication that each replication engine can perform. Nexus 7000 modules have multiple
replication engines per module, and under normal circumstances multicast is unaffected by a SPAN
session. However, it is possible to impact multicast replication if a large number of high-rate
multicast streams are inbound to the module and the port you monitor uses the same replication engine.

NetFlow
NetFlow technology efficiently provides accounting for various applications such as network traffic
accounting, usage-based network billing, network planning, denial-of-service (DoS) monitoring,
network monitoring, outbound marketing, and data mining for both service provider and enterprise
organizations. The NetFlow architecture consists of flow records, flow exporters, and flow monitors.
NetFlow consumes hardware resources such as TCAM and CPU in the switching environment. It is also
not a recommended practice to use NetFlow sampling, as this provides an incomplete view of network
traffic.


To avoid NetFlow resource utilization in the Nexus switch and potential "blind spots," the NetFlow
service is offloaded to dedicated devices, namely the Cisco NetFlow Generation Appliances (NGA). The
NGAs consume SPAN traffic from the Nexus 7000 and are the promiscuous endpoints of port channel 8
described above. See the Cisco NetFlow Generation Appliance section for details on its
implementation in the design.

Cisco Nexus 5000 as FlexPod Data Center Switch


The switch used in this FlexPod data center architecture is the Nexus 5548UP model. The base switch
configuration is based on the FlexPod Data Center with VMware vSphere deployment model. The
following configurations describe significant implementations to realize the secure enclave architecture.

ISE Integration
Two Identity Services Engines are provisioned in a primary/secondary configuration for high
availability. Each ISE node assumes the following personas:

Administration Node

Policy Service Node

Monitoring Node

The ISE provides RADIUS services to each Cisco Nexus 5000 switch, which is configured as a Network
Device in ISE. The Cisco Nexus 5000 configuration is identical to the Cisco Nexus 7000 implementation
captured in the Cisco Nexus 7000 ISE Integration section.
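
Assuming the same ISE nodes and shared key shown in the Cisco Nexus 7000 example, the Nexus 5548UP
commands would resemble the following sketch, applied identically to Nexus 5000-A and Nexus 5000-B:

radius-server key 7 "K1kmN0gy"
radius-server host 172.26.164.187 key 7 "K1kmN0gy" authentication accounting
radius-server host 172.26.164.239 key 7 "K1kmN0gy" authentication accounting
aaa group server radius ISE-Radius-Grp
  server 172.26.164.187
  server 172.26.164.239
  use-vrf management
  source-interface mgmt0
ip radius source-interface mgmt0
aaa authentication login default group ISE-Radius-Grp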

Cisco TrustSec
Cisco TrustSec allows security operations teams to create role-based security policy. The Cisco Nexus
5500 platform supports TrustSec but cannot act as an SXP "listener". This means it cannot aggregate
and advertise through SXP the IP-to-SGT mappings learned from the Cisco Nexus 1000v. In light of
this, the Nexus 1000v implements an SXP connection to each ASA virtual context directly to advertise
the CTS tag-to-IP information.
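
As a sketch, reusing the Nexus 1000v SXP syntax shown later in this document and the ASA virtual
context address (10.0.101.100) from the Nexus 7000 example, the direct speaker connection would
resemble the following, with one connection defined per ASA virtual context:

! Nexus 1000v speaking SXP directly to an ASA virtual context
cts sxp enable
cts sxp default password 7 K1kmN0gy
cts sxp connection peer 10.0.101.100 password default mode listener vrf management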
Note

The Cisco Nexus 7000 and 5000 support enforcement of Security Group ACLs in the network fabric.
This capability was not explored in this design.

Private VLANs
The use of private VLANs allows for the complete isolation of control and management traffic within
an enclave. The Cisco Nexus 5548UP supports private VLANs; the following structure was used during
validation. In this sample, VLAN 3171 is the primary VLAN and 3172 is an isolated VLAN carried
across the infrastructure.


Nexus 5000-A and Nexus 5000-B

feature private-vlan
vlan 3171
  name core-services-primary
  private-vlan primary
  private-vlan association 3172
vlan 3172
  name core-services-isolated
  private-vlan isolated

Port Profiles
A port profile is a mechanism for simplifying the configuration of interfaces. A port profile can be
assigned to multiple interfaces, giving them all the same configuration and providing consistency.
Changes to the port profile are propagated automatically to the configuration of any interface
assigned to it. Use the port profile guidance provided in the Nexus 7000 Port Profiles section for
configuration details.

Quality of Service (QoS)


The Nexus 5500 platform inherently trusts the CoS values it receives. In the FlexPod Data Center
platform the same assumption is made: CoS values are trusted and expected to be properly set before
egressing the unified computing domain. The NetApp FAS controller traffic is marked on ingress
to the Nexus 5500 platform.
A system class is uniquely identified by a qos-group value. The Nexus 5500 platform supports six
classes or qos-groups. qos-group 0 is reserved for default drop traffic; the Nexus 5500 by default
assigns all traffic to this class with the exception of FCoE, which is reserved for qos-group 1.
This essentially leaves groups 2 through 5 for CoS mapping. Each qos-group defines policies and
attributes to assign to traffic in that class, such as MTU, CoS value, and bandwidth. The CoS 5
Gold class is assigned to qos-group 4.
The NAS traffic originating from the NetApp FAS controllers is classified and marked to receive
the appropriate levels of service across the enclave architecture. The pm-qos-fas policy map was
created to mark all packets with a CoS of 5 (Gold). CoS 5 aligns with the policies created in the
remaining QoS-enabled infrastructure.
The Nexus 5000 supports VLAN-based marking. The ability to assign this at the VLAN level simplifies
the classification of packets and aligns well with the VLAN-to-NetApp Storage Virtual Machine (SVM)
relationship, which requires dedicated VLANs for processing on the FAS controller. The QoS policy is
applied to the appropriate VLANs. After this configuration, CoS 5 is effectively marked on all
frames within the VLANs listed. VLANs 201-219 in this example support NFS traffic.
The TCAM tables must be adjusted to support VLAN QoS entries. The limit is user adjustable and
should be modified to support the number of CoS 5 (NFS, iSCSI) VLANs required in the environment.
The class map cm-qos-fas classifies all IP traffic through the permit "any any" acl-fas ACL as
subject to the policy map pm-qos-fas.


Nexus 5000-A and Nexus 5000-B

hardware profile tcam feature interface-qos limit 20
ip access-list acl-fas
  10 permit ip any any
class-map type qos match-any cm-qos-fas
  match access-group name acl-fas
policy-map type qos pm-qos-fas
  class cm-qos-fas
    set qos-group 4
vlan configuration 201-219
  service-policy type qos input pm-qos-fas
Note

Use the show hardware profile tcam feature qos command to display TCAM resource utilization.

The following configuration covers the classifications (qos) defined on the Nexus switch. A class map
defines the CoS value and is subsequently used to assign the CoS to a system class or qos-group
through the pm-qos-global policy map, which is applied at the system level.


Nexus 5000-A and Nexus 5000-B

class-map type qos match-all cm-qos-gold
  match cos 5
class-map type qos match-all cm-qos-bronze
  match cos 1
class-map type qos match-all cm-qos-silver
  match cos 2
class-map type qos match-all cm-qos-platinum
  match cos 6
policy-map type qos pm-qos-global
  class cm-qos-platinum
    set qos-group 5
  class cm-qos-gold
    set qos-group 4
  class cm-qos-silver
    set qos-group 3
  class cm-qos-bronze
    set qos-group 2
  class class-fcoe
    set qos-group 1
system qos
  service-policy type qos input pm-qos-global

Queuing and scheduling are defined for ingress and egress traffic on the Nexus platform. The
available queues (qos-groups 2-5) are given bandwidth percentages that align with those defined on
the Cisco UCS system. The ingress and egress policies are applied at the system level through the
service-policy command.


Nexus 5000-A and Nexus 5000-B

class-map type queuing cm-que-qos-group-2
  match qos-group 2
class-map type queuing cm-que-qos-group-3
  match qos-group 3
class-map type queuing cm-que-qos-group-4
  match qos-group 4
class-map type queuing cm-que-qos-group-5
  match qos-group 5
policy-map type queuing pm-que-in-global
  class type queuing class-fcoe
    bandwidth percent 20
  class type queuing cm-que-qos-group-2
    bandwidth percent 10
  class type queuing cm-que-qos-group-3
    bandwidth percent 20
  class type queuing cm-que-qos-group-4
    bandwidth percent 30
  class type queuing cm-que-qos-group-5
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 10
policy-map type queuing pm-que-out-global
  class type queuing class-fcoe
    bandwidth percent 20
  class type queuing cm-que-qos-group-2
    bandwidth percent 10
  class type queuing cm-que-qos-group-3
    bandwidth percent 20
  class type queuing cm-que-qos-group-4
    bandwidth percent 30
  class type queuing cm-que-qos-group-5
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 10
system qos
  service-policy type queuing input pm-que-in-global
  service-policy type queuing output pm-que-out-global


The network-qos policy defines the attributes of each qos-group on the Nexus platform. Groups 2
through 5 are each assigned an MTU and associated CoS value. The MTU was set to the maximum in this
environment, as the edge of the network defines acceptable frame transmission. The FCoE class,
qos-group 1, is assigned CoS 3 with a default MTU of 2158 along with Priority Flow Control (PFC
pause) and lossless Ethernet settings. The network-qos policy is applied at the system level.


Nexus 5000-A and Nexus 5000-B

class-map type network-qos cm-nq-qos-group-2
  match qos-group 2
class-map type network-qos cm-nq-qos-group-3
  match qos-group 3
class-map type network-qos cm-nq-qos-group-4
  match qos-group 4
class-map type network-qos cm-nq-qos-group-5
  match qos-group 5
policy-map type network-qos pm-nq-global
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos cm-nq-qos-group-5
    mtu 9216
    set cos 6
  class type network-qos cm-nq-qos-group-4
    mtu 9216
    set cos 5
  class type network-qos cm-nq-qos-group-3
    mtu 9216
    set cos 2
  class type network-qos cm-nq-qos-group-2
    mtu 9216
    set cos 1
system qos
  service-policy type network-qos pm-nq-global

Monitoring
The ability to monitor network traffic within the Nexus platform is key to ensuring the efficient
operation of the solution. The design calls for the use of Switched Port Analyzer (SPAN) as well as
NetFlow services to provide visibility.


SPAN
Switched Port Analyzer (SPAN) sources refer to the interfaces from which traffic can be monitored.
SPAN sources send a copy of the traffic to a destination port. The network analyzer attached to the
destination port analyzes the traffic that passes through the source port.

Nexus 5000-A and Nexus 5000-B

monitor session 1
  description SPAN ASA Data Traffic from Po20
  source interface port-channel20 rx
  destination interface Ethernet1/27
  no shut

The SPAN source positioning is at a critical juncture of the network, allowing for full visibility of
traffic ingressing and egressing the switch.

NetFlow
NetFlow technology efficiently provides accounting for various applications such as network traffic
accounting, usage-based network billing, network planning, denial-of-service (DoS) monitoring,
network monitoring, outbound marketing, and data mining for both service provider and enterprise
organizations.
In this design, NetFlow services are offloaded to dedicated devices, namely the Cisco NetFlow
Generation Appliances (NGA). The NGAs consume SPAN traffic from the Nexus 5548UP. The SPAN
sources are implemented at network "choke points" to optimize the capture and, ultimately, visibility
into the environment. See the Cisco NetFlow Generation Appliance section for details on its
implementation in the design.

Cisco Nexus 1110 Cloud Services Platform


The Cisco Nexus 1110 Cloud Services Platform (CSP) is an optional component of the base FlexPod
Data Center deployment. The Secure Enclave architecture implements several new virtual service
blades (VSBs) on the unused portions of the platform. It should be noted that there are two
different CSP models.
The Cisco Nexus 1110-S supports a maximum of six VSBs: for example, six Cisco Nexus 1000V VSMs,
each capable of managing 64 VMware ESX or ESXi hosts, for a total of 384 hosts, or six Cisco
Virtual Security Gateway (VSG) VSBs.
The Cisco Nexus 1110-X supports up to 10 VSBs total: for example, ten Cisco Nexus 1000V VSMs,
each capable of managing 64 VMware ESX or ESXi hosts, for a total of 640 hosts, or ten Cisco VSG
VSBs.
Figure 22 depicts the Cisco Nexus 1100-S CSP used during validation. The device hosted four
different virtual service blades and had capacity to support two more services, which are defined
as "Spare" in the illustration. The implementation of each of these virtual services consumes a
logical slot on the virtual platform (see the verification sketch below).
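
Slot usage and the high-availability role of each VSB can be confirmed from the CSP command line.
The commands below are standard Nexus 1100 show commands; the output varies by deployment and is
not reproduced here.

! Summarize all virtual service blades, their slots, and HA states
show virtual-service-blade summary
! Inspect a specific blade, for example the first VSM
show virtual-service-blade name VSM-1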

FlexPod Datacenter with Cisco Secure Enclaves

48

Enclave Implementation

Figure 22  Cisco Nexus 1100 CSP Validated Deployment Model

Figure 23 details the physical connections of the Cisco Nexus 1100 series platforms to the FlexPod.
This aligns with the traditional connectivity models. The CSP platforms are an active/standby pair
with trunked links supporting control and management traffic related to the virtual services. The
control0 and mgmt0 interfaces of the Nexus 1100 are seen originating from the active Nexus platform.
The configurations are automatically synced between the two Nexus 1100 cluster nodes.

Figure 23  Cisco Nexus 1100 CSP Physical Connections

Note

The Cisco Nexus 1100 can be provisioned in a Flexible Network Uplink configuration. This deployment
model is recommended for FlexPod Data Center moving forward. The flexible model allows for port
aggregation of CSP interfaces to provide enhanced link and device fault tolerance with minimal
convergence as well as maximum uplink utilization.

The virtual service blades are deployed in a redundant fashion across the Nexus 1100 devices. As
shown below, the NAM VSB does not support a high availability deployment model and is active only
on the primary platform.


Virtual Supervisor Module(s)


The secure enclave architecture uses the base FlexPod Data Center deployment. As such, the first
virtual service blade in slot ID 1 is already provisioned with the Cisco Nexus 1000v Virtual
Supervisor Module (VSM) according to the recommended FlexPod practices. This VSM supports
infrastructure management services such as VMware vCenter, Microsoft Active Directory, and Cisco
ISE, among others. The VSM is identified as virtual service blade VSM-1 in the configuration below.


Nexus 1100 (Active)


virtual-service-blade VSM-1
  virtual-service-blade-type name VSM-1.2
  interface control vlan 3176
  interface packet vlan 3176
  ramsize 3072
  disksize 3
  numcpu 1
  cookie 857755331
  no shutdown
virtual-service-blade sea-prod-vsm
  virtual-service-blade-type name VSM-1.3
  interface control vlan 3250
  interface packet vlan 3250
  ramsize 3072
  disksize 3
  numcpu 1
  cookie 1936577345
  no shutdown primary
  no shutdown secondary

A second VSM is provisioned to support the application enclaves deployed in the "production" VMware
vSphere cluster; it is identified as sea-prod-vsm. The second VSM is not required to isolate the
management network infrastructure from the "production" environment, but with the available VSB
capacity on the Cisco Nexus 1100 platforms it makes the implementation much cleaner. As such, VLAN
3250 provides a dedicated segment for production control traffic.

Virtual Security Gateway


The Virtual Security Gateway (VSG) VSB is dedicated to the protection of the management
infrastructure elements. The VSG security policies are built according to the requirements of this
infrastructure. Each enclave will have its own VSG with specific security policies for that application
environment. Enclave VSGs are provisioned on the "Production" VMware cluster.


The configuration of the VSG requires the definition of two VLAN interfaces: one for data services
(VLAN 99) and one for control traffic (VLAN 98). The VEM and VSG communicate over VLAN 99 (vPath)
for policy enforcement. The HA VLAN provides VSG node communication and takeover in the event of a
failure.

Nexus 1100 (Active)


virtual-service-blade vsg1
  virtual-service-blade-type name VSG-1.2
  description vsg1_for_managment_enclave
  interface data vlan 99
  interface ha vlan 98
  ramsize 2048
  disksize 3
  numcpu 1
  cookie 325527222
  no shutdown primary
  no shutdown secondary

Virtual Network Analysis Module


The virtual NAM VSB allows administrators to view network and application performance. The NAM
supports the use of ERSPAN and NetFlow to provide visibility. The NAM requires a management
interface for data capture and administrative web access to the tool; in this case, the NAM uses
management VLAN 3175. The NAM does not support an HA deployment model. In the secure enclave
validation effort, the NAM was used for intermittent packet captures of interesting traffic through
ERSPAN.

Nexus 1100 (Active)


virtual-service-blade NAM
  virtual-service-blade-type name NAM-1.1
  interface data vlan 3175
  ramsize 2048
  disksize 53
  numcpu 2
  cookie 1310139820
  no shutdown primary


Cisco Nexus 1000v


The following section describes the implementation of the Cisco Nexus 1000v VSM and VEMs in the
enclave architecture. As described in the Cisco Nexus 1110 Cloud Services Platform section, there
are two Nexus 1000v Virtual Supervisor Modules (VSMs) deployed in this design: one for the
infrastructure and one for the production environment. This document focuses on the production VSM
(sea-prod-vsm) and calls out modifications to the base FlexPod Nexus 1000v VSM (sea-vsm1) where
applicable.

SVS Domain
A Nexus 1000v DVS (sea-prod-vsm) is created with a unique SVS domain to support the new production
enclave environment. This new virtual distributed switch is associated with the baseline FlexPod
VMware vCenter Server.

Nexus 1000v (sea-prod-vsm)


interface mgmt0
  ip address 172.26.164.18/24
interface control0
  ip address 192.168.250.18/24
svs-domain
  domain id 201
  control vlan 3250
  packet vlan 3250
  svs mode L3 interface control0
svs connection vCenter
  protocol vmware-vim
  remote ip address 172.26.164.200 port 80
  vmware dvs uuid "c5662d50b4a07c11-6d3bcb9fb19154c0" datacenter-name SEA Data Center
  max-ports 8192
  connect

Figure 24 (Cisco Nexus 1000v Production VSM Topology) describes the use of the control0 interface on
a unique VLAN to provide ESXi host isolation from the remaining management network. All VEM-to-VSM
communication occurs over this dedicated VLAN. The svs mode L3 interface control0 command assigns
communication between the VSM and VEM to the control interface.
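
The relationships established by this configuration can be verified with standard Nexus 1000v show
commands; the output varies by deployment and is not reproduced here.

! Confirm the SVS domain parameters (domain ID, transport mode)
show svs domain
! Confirm the state of the vCenter connection named vCenter
show svs connections
! List the VEMs currently attached to this VSM
show module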


Figure 24  Cisco Nexus 1000v Production VSM Topology

The Nexus 1000v production enclave VSM is part of the same VMware vSphere vCenter deployment as
the FlexPod Data Center Nexus 1000v VSM (sea-vsm1) dedicated to management services. The vCenter
networking view for the data center indicates the presence of the two virtual distributed switches.

ISE Integration
The ISE provides RADIUS services to each of the Nexus 1000v VSMs, which are configured as network
devices in the ISE tool.

Nexus 1000v (sea-prod-vsm)


radius-server key 7 "K1kmN0gy"
radius distribute
radius-server host 172.26.164.187 key 7 "K1kmN0gy" authentication accounting
radius-server host 172.26.164.239 key 7 "K1kmN0gy" authentication accounting
aaa group server radius ISE-Radius-Grp
  server 172.26.164.187
  server 172.26.164.239
  use-vrf management
  source-interface mgmt0
ip radius source-interface mgmt0

FlexPod Datacenter with Cisco Secure Enclaves

54

Enclave Implementation

For more deployment details on the ISE implementation, see the Cisco Identity Services Engine
section.
The following AAA commands were used:


Nexus 1000v (sea-prod-vsm)


aaa authentication login default group ISE-Radius-Grp
aaa authentication dot1x default group ISE-Radius-Grp
aaa accounting dot1x default group ISE-Radius-Grp
aaa authorization cts default group ISE-Radius-Grp
aaa accounting default group ISE-Radius-Grp
no aaa user default-role

VXLAN
Virtual Extensible LAN (VXLAN) allows organizations to scale beyond the 4000-VLAN limit present
in traditional switching environments by encapsulating MAC frames in IP. This approach allows
a single overlay VLAN to support multiple VXLAN segments, simultaneously addressing VLAN scale
issues and network segmentation requirements.
In the enclave architecture, the use of VXLAN is enabled through the segmentation feature, and
unicast-only mode was validated. Unicast-only mode distributes a list of IP addresses associated
with a particular VXLAN to all Nexus 1000v VEMs. Each VEM requires at least one IP/MAC address pair
to terminate VXLAN packets; this pair is known as the VXLAN Tunnel End Point (VTEP) IP/MAC address.
The MAC distribution feature enables the VSM to distribute the list of MAC-to-VTEP associations.
The combination of these two features eliminates unicast flooding, as all MAC addresses are known
to all VEMs under the same VSM.

Nexus 1000v (sea-prod-vsm)


feature segmentation
segment mode unicast-only
segment distribution mac
The IP/MAC address that the VTEP uses is configured when you enter the capability vxlan command.
You can have a maximum of four VTEPs in a single VEM. The production Nexus 1000v uses VLAN 3253 to
support VXLAN traffic. The Ethernet uplink port profile supporting traffic originating from the
enclaves carries the VXLAN VLAN. Notice that the MTU of the uplink is large enough to accommodate
the additional 50-byte VXLAN encapsulation header.


Nexus 1000v (sea-prod-vsm)


vlan 3253
  name prod-vtep-vxlan

port-profile type ethernet enclave-data-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk native vlan 2
  system mtu 9000
  switchport trunk allowed vlan 201-219,666,2001-2019,3001-3019,3175,3253
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 201-219
  state enabled
The VXLAN vEthernet port profile uses the capability vxlan command to enable VXLAN functionality on
the VMkernel NIC (VMKNIC) on the Nexus 1000v VEM.

Nexus 1000v (sea-prod-vsm)


port-profile type vethernet vXLAN-VTEP
  vmware port-group
  switchport mode access
  switchport access vlan 3253
  capability vxlan
  service-policy type qos input Gold
  no shutdown
  state enabled


Figure 25  VTEP Configuration

To create VXLAN segment IDs or domains, it is necessary to construct bridge domains in the Nexus
1000v configuration. The bridge domains are referenced by virtual machine port profiles requiring
VXLAN services. In the example below, four bridge domains are created; as the naming standard
indicates, there are two VXLAN segments for each of the enclaves. The segment ID is assigned by the
administrator. The enclave validation allows for a maximum of ten VXLAN segments per enclave, but
this is adjustable based on each organization's requirements. The current version of the Nexus
1000v supports up to 2048 VXLAN bridge domains.

Nexus 1000v (sea-prod-vsm)


bridge-domain bd-enclave-1
  segment id 30011
bridge-domain bd-enclave-2
  segment id 30021
bridge-domain bd-enclave-1-2
  segment id 30012
bridge-domain bd-enclave-2-2
  segment id 30022
The following Nexus 1000v VXLAN-enabled port profiles reference the previously defined bridge
domains. Figure 26 is an example of the port group availability in VMware vCenter.


Nexus 1000v (sea-prod-vsm)


port-profile type vethernet enc1-vxlan1
  vmware port-group
  inherit port-profile enc-base
  switchport access bridge-domain bd-enclave-1
  state enabled
port-profile type vethernet enc2-vxlan1
  vmware port-group
  inherit port-profile enc-base
  switchport access bridge-domain bd-enclave-2
  state enabled
port-profile type vethernet enc1-vxlan2
  vmware port-group
  inherit port-profile enc-base
  switchport access bridge-domain bd-enclave-1-2
  state enabled
port-profile type vethernet enc2-vxlan2
  vmware port-group
  inherit port-profile enc-base
  switchport access bridge-domain bd-enclave-2-2
  state enabled


Figure 26  Cisco Nexus 1000v VXLAN Port Group in VMware vCenter Example

Visibility
The following Nexus 1000v features were enabled to provide virtual access visibility and awareness
and to support cyber threat defense technologies.

SPAN
The Nexus 1000v supports the mirroring of traffic within the virtual distributed switch as well as
externally to third party network analysis devices or probes. Each of these capabilities has been
implemented with the Secure Enclave architecture to advance understanding of traffic patterns and
performance of the environment.

Local SPAN
The Switched Port Analyzer (SPAN) feature allows mirroring of traffic within the VEM to a vEthernet
interface supporting a network analysis device. The SPAN sources can be ports (Ethernet, vEthernet,
or port channels), VLANs, or port profiles. Traffic is directional in nature, and the SPAN
configuration allows ingress (rx), egress (tx), or both to be captured in relation to the source
construct. The following example captures ingress traffic on the system-uplink port profile and
sends the data to a promiscuous VM.

Nexus 1000v (sea-prod-vsm)


monitor session 2
source port-profile system-uplink rx
destination interface Vethernet68
no shut

Encapsulated Remote SPAN (ERSPAN)


Encapsulated remote SPAN (ERSPAN) monitors traffic in multiple network devices across an IP network
and sends that traffic in an encapsulated envelope to destination analyzers. In contrast, Local SPAN
cannot forward traffic through the IP network. ERSPAN can be used to monitor traffic remotely.
ERSPAN sources can be ports (Ethernet, vEthernet, or port channels), VLANs, or port profiles. The
following example shows an ERSPAN session capturing traffic from port channel 1 in the Nexus 1000v
configuration. The NAM VSB on the Nexus 1100 platform is the destination.

Nexus 1000v (sea-prod-vsm)


monitor session 1 type erspan-source
  source interface port-channel1 rx
  destination ip 172.26.164.167
  erspan-id 1
  ip ttl 64
  ip prec 0
  ip dscp 0
  mtu 1500
  header-type 2
  no shut
The ERSPAN ID associated with the session is configurable, with a maximum of 64 sessions defined.
The ERSPAN ID affords filtering at the destination analyzer, in this case the NAM VSB. Given the
replication of traffic with SPAN, it is important to note that resources on the wire will be
consumed, and QoS should be properly implemented to avoid negative impacts.


NetFlow
The Nexus 1000v supports NetFlow. The data may be exported to the Lancope StealthWatch system for
analysis. As shown below, the NetFlow feature is enabled. The destination of the flow records is
defined by the exporter "nf-export-1", which points to the Lancope collector in the Cyber Threat
Defense (CTD) solution. The flow monitor "sea-enclaves" defines the record of interesting parameters
to be captured with each flow and references "nf-export-1" as the exporter.

Nexus 1000v (sea-prod-vsm)


feature netflow

flow exporter nf-export-1
  description <<** SEA Lancope Flow Collector **>>
  destination 172.26.164.240 use-vrf management
  transport udp 2055
  source mgmt0
  version 9
    option exporter-stats timeout 300
    option interface-table timeout 300
flow monitor sea-enclaves
  record netflow-original
  exporter nf-export-1
  timeout inactive 15
  timeout active 60
The validated version of the Nexus 1000v supports up to 32 NetFlow monitors and 256 instances, an
instance being the application of a monitor to a port profile. If resource availability is a
concern, it is suggested that the monitoring focus on data sources such as database profiles and
critical enclaves within the architecture.
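
Each application of a monitor to a port profile counts against the 256-instance limit. As a sketch,
against a hypothetical vEthernet profile name (the management VSM applies this same monitor to its
core_services profile later in this document):

port-profile type vethernet enc-db-profile
  ip flow monitor sea-enclaves input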
Note

For more information on the Cyber Threat Defense system implemented for the Secure Enclave
architecture please visit the Cisco Cyber Threat Defense for the Data Center Solution: Cisco Validated
Design at
www.cisco.com/c/dam/en/us/solutions/collateral/enterprise/design-zone-security/ctd-first-look-designguide.pdf


vTracker
The vTracker feature on the Cisco Nexus 1000V switch provides information about the virtual network
environment. vTracker provides various views that are based on data sourced from vCenter, the
Cisco Discovery Protocol (CDP), and other related systems connected with the virtual switch. vTracker
enhances troubleshooting, monitoring, and system maintenance. Using vTracker show commands, you
can access consolidated network information across the following views:

Module View: information about a server module

Upstream View: information from the upstream switch

VLAN View: VLAN usage by virtual machines

VM View: information about a virtual machine

VMotion View: information about VM migration

For example, the show vtracker module-view command provides visibility into the ESXi pNICs defined
as vNICs on the Cisco UCS system:


Nexus 1000v (sea-prod-vsm)


sea-prod-vsm# show vtracker module-view pnic
--------------------------------------------------------------------------------
Mod  EthIf   Adapter  Mac-Address     Driver  DriverVer  FwVer    Description
--------------------------------------------------------------------------------
3    Eth3/1  vmnic0   0050.5652.0a00  enic    2.1.2.38   2.1(3a)  Cisco Systems Inc Cisco VIC Ethernet NIC
     Eth3/2  vmnic1   0050.5652.0b00  enic    2.1.2.38   2.1(3a)  Cisco Systems Inc Cisco VIC Ethernet NIC
     Eth3/3  vmnic2   0050.5652.5a00  enic    2.1.2.38   2.1(3a)  Cisco Systems Inc Cisco VIC Ethernet NIC
     Eth3/4  vmnic3   0050.5652.5b00  enic    2.1.2.38   2.1(3a)  Cisco Systems Inc Cisco VIC Ethernet NIC
     Eth3/5  vmnic4   0050.5652.3a00  enic    2.1.2.38   2.1(3a)  Cisco Systems Inc Cisco VIC Ethernet NIC
     Eth3/6  vmnic5   0050.5652.3b00  enic    2.1.2.38   2.1(3a)  Cisco Systems Inc Cisco VIC Ethernet NIC

Cisco TrustSec
The Cisco Nexus 1000v supports the Cisco TrustSec architecture by implementing the SGT Exchange
Protocol (SXP). SXP is used to propagate the IP addresses of virtual machines and their
corresponding SGTs to the upstream Cisco TrustSec-capable switches or Cisco ASA firewalls. SXP
provides secure communication between the speaker (Nexus 1000v) and listener devices.
The following configuration describes the enablement of the CTS feature on the Nexus 1000v. The
feature is enabled with device tracking. CTS device tracking allows the switch to capture the IP
address and associate it with the SGT assigned at the port profile of the virtual machine.

Nexus 1000v (sea-prod-vsm)


feature cts
cts device tracking
The SXP configuration can be optimized by configuring a default password and source IP address used
for any SXP connection. The SXP connection definitions in this example point to the Nexus 7000
switches, which are configured as listeners. In a FlexPod configuration with the Nexus 7000, it is
recommended to use the Nexus 7000 switches as SXP listeners. The Nexus 7000 switches act as a CTS
IP-to-SGT aggregation point and can be configured to transmit (speak) the CTS mapping information to
other CTS infrastructure devices such as the Cisco ASA.

Nexus 1000v (sea-prod-vsm)


cts sxp enable
cts sxp default password 7 K1kmN0gy
cts sxp default source-ip 172.26.164.18
cts sxp connection peer 172.26.164.217 password default mode listener vrf management
cts sxp connection peer 172.26.164.218 password default mode listener vrf management
Figure 27  Cisco Nexus 1000v Cisco TrustSec SXP Example: Nexus 7000

Switches such as the Cisco Nexus 5000 do not support the SXP listener role. In this scenario, the Nexus
1000v will "speak" directly to each ASA virtual context providing SGT to IP mapping information for
use in the access control service policies.


Figure 28  Cisco Nexus 1000v Cisco TrustSec SXP Example: ASA SXP

Private VLANs
The private VLAN configuration on the Nexus 1000v supports the isolation of enclave management
traffic. This configuration requires the enablement of the feature and the definition of two VLANs.
In this example, VLAN 3171 is the primary VLAN supporting the isolated VLAN 3172.

Nexus 1000v (sea-prod-vsm)


feature private-vlan
vlan 3171
  name core-services-primary
  private-vlan primary
  private-vlan association 3172
vlan 3172
  name core-services-isolated
  private-vlan isolated
The private VLAN construct is then applied to a vethernet port profile. The sample below indicates the
use of the private VLAN for core services traffic such as Active Directory, DNS, and Windows Update
Services. It is important to remember that virtual machines connected to an isolated private VLAN
cannot communicate with other VMs on the same segment.


Nexus 1000v (sea-prod-vsm)


port-profile type vethernet pvlan_core_services
  vmware port-group
  switchport mode private-vlan host
  switchport private-vlan host-association 3171 3172
  service-policy type qos input Platinum
  no shutdown
  state enabled
The Cisco UCS vNICs dedicated to supporting the private VLAN traffic are assigned to the
core-uplinks port profile. This port channel trunk carries the primary and isolated VLANs. Notice
that the primary VLAN is defined as the native VLAN to support traffic coming back from the
promiscuous management domain described below.

Nexus 1000v (sea-prod-vsm)


port-profile type ethernet core-uplinks
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 3171-3172
  system mtu 9000
  switchport trunk native vlan 3171
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled
The isolated private VLAN is also defined on the dedicated infrastructure management Nexus 1000v VSM. The feature and private VLAN definitions are identical to the production VSM documented earlier in this section.


Nexus 1000v (sea-vsm1 Management VSM)


feature private-vlan
vlan 3171
name core-services-primary
private-vlan primary
private-vlan association 3172
vlan 3172
name core-services-isolated
private-vlan isolated
The Nexus 1000v management VSM defines a promiscuous port profile allowing isolated traffic on the
production VSM to communicate with virtual machines using the core_services profile.

Nexus 1000v (sea-vsm1 Management VSM)


port-profile type vethernet core_services
vmware port-group
switchport mode private-vlan promiscuous
switchport access vlan 3171
switchport private-vlan mapping 3171 3172
ip flow monitor sea-enclaves input
no shutdown
state enabled

Ethernet Port Profiles


The Nexus 1000v production VSM uses three unique Ethernet type port profiles for uplink transport. This is accomplished by defining six vNICs in the ESXi UCS service profile. The vNICs are deployed in parallel, offering connectivity to either UCS Fabric A or B. The Nexus 1000v VEM provides host-based port aggregation of these vNICs, creating port channels. The segmentation and availability of the enclave is enhanced by using dedicated vNICs with the HA features of Nexus 1000v port channeling.
The system-uplink port profile supports all of the VLANs required for control and management services. The MTU is set to 9000, requiring jumbo frame enforcement at the edge and enablement across the infrastructure. Table 5 details the VLANs carried on the system uplink ports.


Nexus 1000v (sea-prod-vsm)


port-profile type ethernet system-uplink
vmware port-group
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 3250-3251,3254-3255
system mtu 9000
channel-group auto mode on mac-pinning
no shutdown
system vlan 3250
state enabled
Table 5    Production VSM System Uplink VLANs

VLAN ID    Description
3250       Production Management VLAN
3251       vMotion VLAN
3254       vPath Data Service
3255       HA services
The enclave port profile uplinks support traffic directly associated with the enclaves. This includes NFS, iSCSI, and enclave data flows. Table 6 describes the VLANs created for the enclave validation effort. It is important to understand that these VLANs reflect the validated environment and do not capture the limits of the architecture.


Nexus 1000v (sea-prod-vsm)


port-profile type ethernet enclave-data-uplink
vmware port-group
switchport mode trunk
switchport trunk native vlan 2
system mtu 9000
switchport trunk allowed vlan 201-219,3001-3019,3253
channel-group auto mode on mac-pinning
no shutdown
system vlan 201-219
state enabled
Table 6    Production VSM Enclave VLANs

VLAN ID      Description
201-219      Enclave NFS VLANs; one per enclave
3001-3019    Enclave public VLANs; one per enclave*
3253         VXLAN VTEP VLAN

*This is not indicative of the maximum number of VLANs supported.
The core-uplinks port profile supports the private VLANs, primary and isolated, that offer complete isolation of management traffic for all enclaves in the architecture. The port channel created in the design is dedicated to only these two VLANs. Please see the Cisco Unified Computing System section for more details regarding the construction of this secure traffic path.


Nexus 1000v (sea-prod-vsm)


port-profile type ethernet core-uplinks
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 3171-3172
system mtu 9000
switchport trunk native vlan 3171
channel-group auto mode on mac-pinning
no shutdown
state enabled

VLAN ID    Description
3171       Enclave Primary Private VLAN
3172       Enclave Isolated Private VLAN
The show port-channel summary command for a single VEM module (ESXi host) captures the three port channel uplinks created. Figure 29 illustrates the resulting uplink configurations.

Nexus 1000v (sea-prod-vsm)


show port-channel summary | in Eth10
8     Po8(SU)     Eth    NONE    Eth10/5(P)    Eth10/6(P)
16    Po16(SU)    Eth    NONE    Eth10/1(P)    Eth10/2(P)
32    Po32(SU)    Eth    NONE    Eth10/3(P)    Eth10/4(P)

Figure 29    ESXi Host Uplink Example


Quality of Service
The Nexus 1000v uses Cisco Modular QoS CLI (MQC) that defines a policy configuration process to
identify and define traffic at the virtual access layer. The MQC policy implementation can be
summarized in three primary steps:
1. Define matching criteria through a class-map.
2. Associate an action with each defined class through a policy-map.
3. Apply the policy to the entire system or an interface through a service-policy.

The Nexus 1000v, as an edge device, can apply a CoS value at the edge based on the VM's value or role in the organization. The first step in the process is to create a class-map construct. In the enclave architecture there are four class maps defined; the fifth class, best effort, is not explicitly defined.

Nexus 1000v (sea-prod-vsm)


class-map type qos match-all Gold_Traffic
class-map type qos match-all Bronze_Traffic
class-map type qos match-all Silver_Traffic
class-map type qos match-all Platinum_Traffic
An important note about this design is that each class-map is defined as match-all with no explicit match criteria; as a result, all traffic on an interface where the associated policy is applied will match the class and be subject to the service policies for that class of traffic. This is a simple classification model and could certainly be revised to meet more complex requirements. This model is carried throughout the FlexPod Data Center deployment.
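As an illustration of a more selective model, a class could match an ACL instead of all traffic. The following is a hypothetical sketch; the ACL name, class name, and subnet are illustrative only and are not part of the validated configuration.

Nexus 1000v (sea-prod-vsm)

! Hypothetical ACL identifying a specific enclave subnet
ip access-list enc1-gold-acl
  permit ip 192.168.3.0/24 any
! Classify only the traffic permitted by the ACL
class-map type qos match-all Enclave1_Gold_Traffic
  match access-group name enc1-gold-acl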
The association of an "action" with a class of traffic requires the policy-map construct. In the enclave
architecture, each class-map is used by a single policy-map. Each policy-map marks the packet with a
CoS value. This value is then referenced by the remaining data center elements to provide a particular
quality of service for that traffic.


Nexus 1000v (sea-prod-vsm)


policy-map type qos Gold
class Gold_Traffic
set cos 5
policy-map type qos Bronze
class Bronze_Traffic
set cos 1
policy-map type qos Silver
class Silver_Traffic
set cos 2
policy-map type qos Platinum
class Platinum_Traffic
set cos 6
The final step in the process is the application of the policy-map to the system or an interface. In the enclave design, the QoS policy-map is applied to traffic ingressing the Nexus 1000v through the port-profile interface configuration. In this example, all interfaces inheriting the vMotion port-profile will mark the traffic with CoS 1 on ingress.

Nexus 1000v (sea-prod-vsm)


port-profile type vethernet vMotion
vmware port-group
switchport mode access
switchport access vlan 3251
service-policy type qos input Bronze
no shutdown
state enabled
Statistics are maintained for each policy, class action, and match criteria per interface. The qos statistics
command enables or disables this globally.
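A minimal sketch of enabling the counters and viewing them for a given interface follows; the vethernet number is illustrative.

Nexus 1000v (sea-prod-vsm)

qos statistics
show policy-map interface vethernet 5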
The Nexus 1000v marks all of its self-generated control and packet traffic with CoS 6. This aligns with
IEEE CoS use recommendations as shown in Table 7 below.


Virtual Service Integration (Virtual Security Gateway)


Integration of virtual services into the Cisco Nexus 1000v environment requires that the switch register
with the Cisco Prime Network Services Controller (PNSC). The registration process requires the
presence of a policy-agent file, the PNSC IP address and a shared secret for secure communication
between the VSM and controller. The following sample details the policy-agent configuration in the
enclave environment.

Nexus 1000v (sea-prod-vsm)


vnm-policy-agent
registration-ip 192.168.250.250
shared-secret **********
policy-agent-image bootflash:/vnmc-vsmpa.2.1.1b.bin
log-level
The Nexus 1000v allows for the global definition of vservice-specific attributes that can be inherited by the instantiated services. The global VSG qualities are defined below. The bypass asa-traffic command indicates that traffic will bypass the Cisco ASA 1000V virtual firewall; because the ASA 1000V is not part of this design, this command is unnecessary.

Nexus 1000v (sea-prod-vsm)


vservice global type vsg
tcp state-checks invalid-ack
tcp state-checks seq-past-window
no tcp state-checks window-variation
! This refers to the ASA Nexus 1000v platform which is not in this design
bypass asa-traffic
The instantiation of a vservice in the Nexus 1000v requires the network administrator to define the service node and bind the security profile to the port-profile. In this example, the VSG service node is named enc1-vsg. The vPath communication occurs at Layer 2 between the VEM and the vPath interface of the VSG. The IP address of the VSG is resolved through ARP, and data (vPath) traverses VLAN 3254. In this example, if the VSG fails, traffic will not be permitted to flow (fail-mode close).

Nexus 1000v (sea-prod-vsm)


vservice node enc1-vsg type vsg
ip address 111.111.111.111
adjacency l2 vlan 3254
fail-mode close


vPath is an encapsulation technique that adds 62 bytes when used in Layer 2 mode or 82 bytes when using Layer 3 mode. To avoid fragmentation in a Layer 2 implementation, ensure the outgoing uplinks support the required MTU; for example, a standard 1500-byte frame becomes 1562 bytes under Layer 2 vPath, so the 9000-byte uplink MTU used in this design provides ample headroom. In a Layer 3 enabled vPath implementation, oversized packets will be dropped and ICMP error messages sent to the traffic source.
The port profile enc1-web uses the previously described service node. The vservice command binds a specific Cisco VSG (enc1-vsg) and security profile (enc1_web) to the port profile. This enables vPath to redirect the traffic to the Cisco VSG. The org command defines the tenant within the PNSC where the firewall is enabled.

Nexus 1000v (sea-prod-vsm)


port-profile type vethernet enc1-web
vservice node enc1-vsg profile enc1_web
org root/Enclave1
no shutdown
description <<** Enclave 1 Data WEB **>>
state enabled

Cisco Unified Computing System


The Cisco Unified Computing System configuration is based upon the recommended practices of FlexPod Data Center. The enclave architecture builds on this baseline deployment to instantiate new service profiles and the supporting objects they require.

Cisco UCS QoS System Class


Queuing and bandwidth control are implemented within the Cisco UCS and at the access layer (Cisco
Nexus physical and virtual switching). Within the Cisco UCS, CoS values are assigned to a system class
and given a certain percentage of the effective bandwidth. This is configured under the LAN tab in the
Cisco UCS Manager.
Figure 30

Cisco UCS QoS System Class Settings

The configuration adapts the IEEE 802.1Q-2005 CoS-use recommendations shown in Table 7. It is important to note that voice CoS value 5 has been reassigned to support NFS, and video traffic CoS 4 is not in use.


Table 7    IEEE 802.1Q-2005 CoS-Use General Recommendations to Cisco UCS Priority

CoS Value  Acronym  Description               Priority       Enclave Traffic Assigned
6          IC       Internetwork Control      Platinum       Control and Management
5          VO       Voice                     Gold           NFS traffic
4          VI       Video Traffic             Not in use     Not in use
3          CA       Critical Applications     Fibre Channel  FCoE
2          EE       Excellent effort traffic  Silver         iSCSI
1          BK       Background traffic        Bronze         vMotion
0          BE       Not used                  Best Effort    Not in use

The MTU maximum (9216) has been set, allowing the edge devices to control frame sizing and reducing the potential for fragmentation, at least within the Cisco UCS domain. Service profiles determine the attributes of the server, including MTU settings and CoS assignment.

Cisco UCS Service Profile


Service profiles are the central concept of Cisco UCS. Each service profile serves a specific purpose:
ensuring that the associated server hardware has the configuration required to support the applications
it will host.
The service profile maintains configuration information about the server hardware, interfaces, fabric
connectivity, and server and network identity. This information is stored in a format that you can manage
through Cisco UCS Manager. All service profiles are centrally managed and stored in a database on the
fabric interconnect.
Every server must be associated with a service profile. The FlexPod Data Center baseline service profiles were used to build the enclave environment. Modifications were made to the QoS policy of the service profiles, as well as the number of vNICs instantiated on a given host.
Whether Cisco UCS controls the CoS for a vNIC strictly depends on the Host Control field of the QoS policy assigned to that particular vNIC. Referring to Figure 31, the QoS_N1k policy allows full host control. When Full is selected and the packet has a valid CoS assigned by the Nexus 1000v, Cisco UCS trusts the CoS settings assigned at the host level. Otherwise, Cisco UCS uses the CoS value associated with the priority selected in the Priority drop-down list, in this case Best Effort. The None selection indicates that Cisco UCS will assign the CoS value associated with the priority class given in the QoS policy, disregarding any of the settings implemented at the host level by the Nexus 1000v.
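The same policy can also be expressed through the Cisco UCS Manager CLI. The following is a hedged sketch of creating the QoS_N1k policy shown in Figure 31; exact scope paths and keywords may vary by UCS Manager release.

UCS-A# scope org /
UCS-A /org # create qos-policy QoS_N1k
UCS-A /org/qos-policy* # set egress prio best-effort
UCS-A /org/qos-policy* # set egress host-control full
UCS-A /org/qos-policy* # commit-buffer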


Figure 31    Cisco UCS QoS Policy: Allow Host Control

Note    The Cisco UCS Host Control "None" setting uses the CoS value associated with the priority selected in the Priority drop-down list regardless of the CoS value assigned by the host.
The vNIC template uses the QoS policy to defer classification of traffic to the host, or in the enclave architecture, to the Nexus 1000v. Figure 32 is a sample vNIC template where the QoS policy and MTU are defined for any service profile using this template.

Note    If a QoS policy is undefined or not set, the system will use a CoS of 0, which aligns to the best-effort priority.

Figure 32    Service Profile vNIC Template Example


Figure 33 captures all of the vNIC templates defined for the production servers in the enclave VMware
DRS cluster. Each template uses the QoS_N1k QoS policy and an MTU of 9000. The naming standard
also indicates there is fabric alignment of the vNIC to Fabric Interconnect A or B. Figure 34 is the
example adapter summary for the enclave service profile.
Figure 33

Cisco UCS vNIC Templates for Enclave Production Servers

Figure 34

Cisco UCS Production ESXi Host Service Profile

User Management
The Cisco UCS domain is configured to use the RADIUS services of the ISE for user management, centralizing authentication and authorization policy in the organization. The Cisco Identity Services Engine section will discuss the user auth_c (authentication) and auth_z (authorization) policy implementation. The following configurations were put in place to achieve this goal:
1. Create a RADIUS provider.
2. Create a RADIUS provider group.
3. Define an authentication domain.
4. Revise the native authentication policy.

The following figures step through the Cisco UCS integration of ISE RADIUS services. Notice the figures include the Cisco UCS Manager navigation path.
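For reference, the equivalent objects can also be created from the Cisco UCS Manager CLI. The following is a hedged sketch; the ISE node address matches the validation environment, the group and domain names are illustrative, and exact scope paths may vary by release.

UCS-A# scope security
UCS-A /security # scope radius
UCS-A /security/radius # create server 172.26.164.187
UCS-A /security/radius/server* # set key
Enter the key: <shared-secret>
UCS-A /security/radius/server* # commit-buffer
UCS-A /security/radius/server # exit
UCS-A /security/radius # create auth-server-group ISE_Radius
UCS-A /security/radius/auth-server-group* # create server-ref 172.26.164.187
UCS-A /security/radius/auth-server-group* # commit-buffer
UCS-A /security/radius/auth-server-group # exit
UCS-A /security/radius # exit
UCS-A /security # create auth-domain ISE
UCS-A /security/auth-domain* # set realm radius
UCS-A /security/auth-domain* # commit-buffer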


VMware vSphere
ESXi
The ESXi hosts are uniform in their deployment, employing the FCoE boot practices established in FlexPod. The Cisco UCS service profile is altered to provide six vmnics for use by the hypervisor, as described in the previous section. The following sample from one of the ESXi hosts reflects the UCS vNIC construct and the MTU settings applied by the Cisco Nexus 1000v.

ESXi Host Example


~ # esxcfg-nics -l
Name    PCI            Driver  Link  Speed      Duplex  MAC Address        MTU   Description
vmnic0  0000:06:00.00  enic    Up    40000Mbps  Full    00:25:b5:02:0a:04  9000  Cisco Systems Inc Cisco VIC Ethernet NIC
vmnic1  0000:07:00.00  enic    Up    40000Mbps  Full    00:25:b5:02:0b:04  9000  Cisco Systems Inc Cisco VIC Ethernet NIC
vmnic2  0000:08:00.00  enic    Up    40000Mbps  Full    00:25:b5:02:5a:04  9000  Cisco Systems Inc Cisco VIC Ethernet NIC
vmnic3  0000:09:00.00  enic    Up    40000Mbps  Full    00:25:b5:02:5b:04  9000  Cisco Systems Inc Cisco VIC Ethernet NIC
vmnic4  0000:0a:00.00  enic    Up    40000Mbps  Full    00:25:b5:02:3a:04  9000  Cisco Systems Inc Cisco VIC Ethernet NIC
vmnic5  0000:0b:00.00  enic    Up    40000Mbps  Full    00:25:b5:02:3b:04  9000  Cisco Systems Inc Cisco VIC Ethernet NIC

The vmknics vmk0, vmk1 and vmk2 are provisioned for infrastructure services management, vMotion and the VXLAN VTEP respectively. Notice the MTU on the VXLAN vmknic is set to 1700 to account for the encapsulation overhead of VXLAN.

ESXi Host Example vmknics for infrastructure services


# esxcfg-vmknic -l
Interface  Port Group/DVPort  IP Family  IP Address      Netmask        Broadcast        MAC Address        MTU   TSO MSS  Enabled  Type
vmk0       38                 IPv4       192.168.250.15  255.255.255.0  192.168.250.255  00:25:b5:02:3a:04  1500  65535    true     STATIC
vmk1       740                IPv4       192.168.251.15  255.255.255.0  192.168.251.255  00:50:56:61:64:70  9000  65535    true     STATIC
vmk2       776                IPv4       192.168.253.15  255.255.255.0  192.168.253.255  00:50:56:6d:88:95  1700  65535    true     STATIC
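Should a vmknic MTU require adjustment after creation, it can be set from the host shell. A minimal sketch using the standard esxcli namespace follows; the interface name matches the VXLAN vmknic above.

~ # esxcli network ip interface set --interface-name=vmk2 --mtu=1700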

Each enclave has dedicated NFS or potentially iSCSI services available to it in the NetApp Storage Virtual Machine (SVM); vmknics are required to support this transport. The following example shows a number of vmknics attached to distinct subnets, offering L2/L3 isolation of storage services to the enclave.

ESXi Host Example vmknics dedicated to enclaves


# esxcfg-vmknic -l
Interface  Port Group/DVPort  IP Family  IP Address    Netmask        Broadcast      MAC Address        MTU   TSO MSS  Enabled  Type
vmk7       516                IPv4       192.168.3.15  255.255.255.0  192.168.3.255  00:50:56:63:a4:0c  9000  65535    true     STATIC
vmk8       548                IPv4       192.168.4.15  255.255.255.0  192.168.4.255  00:50:56:6a:51:9e  9000  65535    true     STATIC
vmk9       580                IPv4       192.168.5.15  255.255.255.0  192.168.5.255  00:50:56:64:cd:2b  9000  65535    true     STATIC
vmk10      612                IPv4       192.168.6.15  255.255.255.0  192.168.6.255  00:50:56:62:77:7a  9000  65535    true     STATIC
vmk11      644                IPv4       192.168.7.15  255.255.255.0  192.168.7.255  00:50:56:68:64:41  9000  65535    true     STATIC
vmk12      676                IPv4       192.168.8.15  255.255.255.0  192.168.8.255  00:50:56:6c:b4:85  9000  65535    true     STATIC
vmk13      708                IPv4       192.168.9.15  255.255.255.0  192.168.9.255  00:50:56:62:05:7a  9000  65535    true     STATIC

DRS for Virtual Service Nodes


The VMware DRS cluster provides affinity controls and rules for VM and ESXi host alignment. In the enclave design, virtual services are retained within the production cluster to manage traffic patterns and offer the performance inherent to locality. To avoid a single point of failure at the ESXi host, DRS cluster settings were modified and placement policies created.
Two virtual machine DRS groups were created indicating the primary and secondary members of the HA pairs. In this example, Primary VSG and Secondary VSG groups were instantiated and VSGs were assigned to each group as appropriate. The DRS production cluster ESXi host resources were "split" into two categories based on a naming standard of odd and even ESXi hosts.


Two DRS virtual machine rules were created defining the acceptable positioning of VSG services on the DRS cluster. As shown, the previously created DRS cluster VM and host groups are used to define two distinct placement policies in the cluster, essentially removing the ESXi host as a single point of failure for the identified services (VMs).


NetApp FAS
This section of the document builds on the FlexPod Data Center foundation to create an enclave Storage Virtual Machine (SVM). The steps are summarized below; a hedged CLI sketch follows the list.
1. Build VLAN interfaces for NFS, iSCSI, and management on each node's interface group, and set appropriate MTUs.
2. Create failover groups for the NFS and management interfaces.
3. Create the Storage Virtual Machine.
4. Assign production aggregates to the SVM.
5. Turn on the SVM NFS vstorage parameter to enable NFS VAAI plugin support.
6. Set up root volume load-sharing mirrors for the SVM.
7. If necessary, configure FCP in the SVM.
8. Create a valid self-signed security certificate for the SVM or install a certificate from a Certificate Authority (CA).
9. Secure the SVM default export policy. Create an SVM export policy and assign it to the SVM root volume.
10. Create datastore volumes while assigning the junction path and export policy, and update the load-sharing mirrors.
11. Enable storage efficiency on the created volumes.
12. Create NFS LIFs while assigning them to failover groups.
13. Create any necessary FCP or iSCSI LIFs.
14. Create the SVM management LIF and assign the SVM administrative user.
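The following is a hedged clustered Data ONTAP CLI sketch covering several of the steps above. All object names (node, aggregate, SVM, volume, LIF, VLAN, and export policy) are illustrative and must be adapted to the deployment.

network port vlan create -node clus-01 -vlan-name a0a-3001
vserver create -vserver enc1-svm -rootvolume enc1_root -aggregate aggr01 -rootvolume-security-style unix
vserver nfs create -vserver enc1-svm
vserver nfs modify -vserver enc1-svm -vstorage enabled
volume create -vserver enc1-svm -volume enc1_ds01 -aggregate aggr01 -size 500g -junction-path /enc1_ds01 -policy enc1-export
network interface create -vserver enc1-svm -lif enc1-nfs-lif01 -role data -data-protocol nfs -home-node clus-01 -home-port a0a-3001 -address 192.168.3.50 -netmask 255.255.255.0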


Cisco Adaptive Security Appliance


The Cisco ASA 5585-X is a high-performance, 2-slot chassis, with the firewall Security Services
Processor (SSP) occupying the bottom slot, and the IPS Security Services Processor (IPS SSP) in the top
slot of the chassis. The ASA includes many advanced features, such as multiple security contexts,
clustering, transparent (Layer 2) firewall or routed (Layer 3) firewall operation, advanced inspection
engines, and many more features. The FlexPod Data Center readily supports the ASA platform to provide security services in the enclave design.
It should be noted that the Secure Enclave validation effort has resulted in a number of Cisco Validated Designs that speak directly to the security implementation of the Cisco ASA platforms with FlexPod Data Center. The Design Zone for Secure Data Center Portfolio page (http://www.cisco.com/c/en/us/solutions/enterprise/design-zone-secure-data-center-portfolio/index.html) references these documents:

Cisco Secure Data Center for Enterprise Solution Design Guide at


http://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise/design-zone-security/sdc-dg.pdf
This guide includes design and implementation guidance specifically focused on single site
clustering with Cisco TrustSec.

Cisco Secure Data Center for Enterprise (Implementation Guide) at


http://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise/design-zone-security/sdc-ig.pdf
This document is focused on providing implementation guidance for the Cisco Single Site
Clustering with IPS and TrustSec solution.

Cisco Cyber Threat Defense for the Data Center Solution: First Look Guide at
http://www.cisco.com/en/US/solutions/collateral/ns340/ns414/ns742/ns744/docs/ctd-first-look-des
ign-guide.pdf


This guide provides design details and guidance for detecting threats already operating in an internal
network or data center.

Cluster Mode
The ASA cluster model uses the Cisco Nexus 7000 as an aggregation point for the security service. Figure 35 details the connections, showing four physical ASA devices connected to two Nexus 7000 switches. The four Nexus switch images represent the same two Nexus 7000 VDCs, 7000-A and 7000-B. The ASA clustered data links were configured as a spanned EtherChannel using a single port channel, PC-2, that supports both inside and outside VLANs. These channels connect to the pair of Nexus 7000s using a virtual PortChannel (vPC), vPC-20. The EtherChannel aggregates the traffic across all the available active interfaces in the channel. A spanned EtherChannel accommodates both routed and transparent firewall modes, in single or multiple context deployments. The EtherChannel inherently provides load balancing as part of basic operation using the cluster Link Aggregation Control Protocol (cLACP).
Figure 35    Cisco Adaptive Security Appliance Connections: Cluster Mode Deployment

The Cluster control links are local EtherChannels configured on each ASA device. In this example, each
ASA port channel PC-1 is dual-homed to the Nexus 7000 switches using vPC. A distinct vPC is defined
on the Nexus 7000 pair to provide control traffic HA. The Cluster control links do not support any
enclave traffic VLANs. A single VLAN supports the cluster control traffic. In the following example it
is defined as VLAN 20.


Nexus 7000-A (Ethernet VDC)

feature vpc
vpc domain 100
  role priority 10
  peer-keepalive destination 172.26.164.183 source 172.26.164.182
  peer-gateway
  auto-recovery
interface port-channel20
  description <<** ASA-Cluster-Data **>>
  switchport mode trunk
  switchport trunk native vlan 2
  switchport trunk allowed vlan 2001-2135,3001-3135
  spanning-tree port type normal
  vpc 20
interface port-channel21
  description <<** k02-ASA-1-Control **>>
  switchport access vlan 20
  spanning-tree port type normal
  no logging event port link-status
  no logging event port trunk-status
  vpc 21
vlan 20
  name ASA-Cluster-Control
interface port-channel22
  description <<** k02-ASA-2-Control **>>
  switchport access vlan 20
  spanning-tree port type normal
  vpc 22
interface port-channel23
  description <<** k02-ASA-3-Control **>>
  switchport access vlan 20
  spanning-tree port type normal
  vpc 23
interface port-channel24
  description <<** k02-ASA-4-Control **>>
  switchport access vlan 20
  spanning-tree port type normal
  vpc 24

Nexus 7000-B (Ethernet VDC)

The Nexus 7000-B configuration is identical except for the vPC role priority and the reversed peer-keepalive addresses:

feature vpc
vpc domain 100
  role priority 20
  peer-keepalive destination 172.26.164.182 source 172.26.164.183
  peer-gateway
  auto-recovery

The ASA cluster defines the same interface configuration across the nodes to support the local and spanned EtherChannel configuration. The vss-id command is a locally significant ID for the ASA to use when connected to the vPC switches. It is important that each node's corresponding interfaces connect to the same switch; in this case, all TenGigabitEthernet0/8 interfaces attach to Cisco Nexus 7000-A and all TenGigabitEthernet0/9 interfaces to Cisco Nexus 7000-B.


ASA Cluster
interface TenGigabitEthernet0/6
channel-group 1 mode active
!
interface TenGigabitEthernet0/7
channel-group 1 mode active
!
interface TenGigabitEthernet0/8
channel-group 2 mode active vss-id 1
!
interface TenGigabitEthernet0/9
channel-group 2 mode active vss-id 2
!

The local and spanned EtherChannels are formed and enclave data VLANs can be assigned through the
sub-interface construct. The sample configuration shows three enclave data VLANs assigned to the
spanned EtherChannel port channel 2. The traffic is balanced across the bundled interfaces.


ASA Cluster
interface Port-channel1
description Clustering Interface
!
interface Port-channel2
description Cluster Spanned Data Link to PC-20
port-channel span-cluster vss-load-balance
!
interface Port-channel2.2001
description Enclave1-outside
vlan 2001
!
interface Port-channel2.2002
description Enclave2-outside
vlan 2002
!
interface Port-channel2.2003
description Enclave3-outside
vlan 2003
!

The Cisco ASA management traffic uses dedicated interfaces into the management domain. In a multiple context environment this physical interface can be shared across virtual contexts through 802.1Q sub-interfaces. The trunked management interface allows each security context to have its own management interface. The IPS sensor on each platform has its own dedicated interface with connections into the management infrastructure.


Figure 36

Cisco ASA and Cisco IPS Management Connections

ASA Cluster
interface Management0/0
!
interface Management0/0.101
description <<** Enclave 1 Management **>>
vlan 101
!
interface Management0/0.102
description <<** Enclave 2 Management **>>
vlan 102
!
interface Management0/0.103
description <<** Enclave 3 Management **>>
vlan 103
!
interface Management0/0.164
description <<** Cluster Management Interface **>>
vlan 164
!

The Enclave model uses the ASA in multiple context mode. The ASA is partitioned into multiple virtual devices, known as security contexts. Each context acts as an independent device, with its own security policy, interfaces, and administrators. Multiple contexts are similar to having multiple standalone devices, each dedicated to an Enclave. The contexts are defined at the system level.


The primary administrative context is the "admin" context. This context is assigned a single management
sub-interface for security operations.

ASA Cluster Admin Context


admin-context admin
context admin
allocate-interface Management0/0.164
config-url disk0:/admin.cfg

Within the admin context, a pool of cluster addresses must be created for distribution to slave nodes as they are added to the ASA cluster. This IP pool construct is repeated for each security context created in the ASA cluster. In this example, a pool of four IP addresses is reserved for the admin context, indicating a four-node maximum configuration.

ASA Cluster Cluster IP Pool


ip local pool K02-SEA 172.26.164.157-172.26.164.160 mask 255.255.255.0

The management interface uses the sub-interface assigned in the system context. The cluster IP
(172.26.164.191) is assigned and is "owned" only by the master node.

ASA Cluster Management Interface


interface Management0/0.164
management-only
nameif management
security-level 0
ip address 172.26.164.191 255.255.255.0 cluster-pool K02-SEA
!

The cluster can now be instantiated in the system context. In this example, the K02-SEA ASA cluster
is created on ASA-1. The cluster interface characteristics and associated attributes are defined. This is
repeated on each node of the cluster.


ASA Cluster Cluster Definition


cluster group K02-SEA
key *****
local-unit ASA-1
cluster-interface Port-channel1 ip 192.168.20.101 255.255.255.0
priority 1
console-replicate
health-check holdtime 3
clacp system-mac auto system-priority 1
enable
conn-rebalance frequency 3

This configuration is repeated on each node added to the cluster. Notice the IP address is different for
the second node.

ASA Cluster Additional Node Cluster


cluster group K02-SEA
key *****
local-unit ASA-2
cluster-interface Port-channel1 ip 192.168.20.102 255.255.255.0
priority 2
console-replicate
health-check holdtime 3
clacp system-mac auto system-priority 1
enable
conn-rebalance frequency 3

The Cisco ASDM Cluster Dashboard provides an overview of the cluster and role assignments.
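Cluster membership and role assignments can also be verified from the CLI of the master unit; a minimal sketch follows (output omitted).

ASA Cluster

show cluster info
show cluster history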


Security Contexts
The security contexts are defined in the system context and allocated network resources. These resources were previously defined as sub-interfaces of the spanned EtherChannel. Names can be attached to the interfaces for use within the security context; in this sample, Mgmt, outside, and inside are in use.

ASA Cluster Enclave Security Contexts


context Enclave1
description Secure Enclave 1
allocate-interface Management0/0.101 Mgmt101
allocate-interface Port-channel2.2001 outside
allocate-interface Port-channel2.3001 inside
config-url disk0:/enclave1.cfg
!
context Enclave2
description Secure Enclave 2
allocate-interface Management0/0.102 Mgmt102
allocate-interface Port-channel2.2002 outside
allocate-interface Port-channel2.3002 inside
config-url disk0:/enclave2.cfg
!
context Enclave3
description Secure Enclave 3
allocate-interface Management0/0.103 mgmt103
allocate-interface Port-channel2.2003 outside
allocate-interface Port-channel2.3003 inside
config-url disk0:/Enclave3.cfg

The Cisco ASDM reflects each security context as an independent firewall. Each of these contexts is configured and active on every node in the cluster.


Within the context, the operational mode is defined as routed or transparent. The security context requires its own management IP pool that is used by each Enclave2 instance across the ASA nodes in the cluster. The example below creates the IP pool enclave2-pool and assigns this pool to the Mgmt102 interface. The 10.0.102.100 address is the cluster IP interface; ASDM and CSM may access the system or enclave through this shared IP address. Records sourced from the ASA system or enclave will reflect the locally significant address assigned through the pool construct.


ASA Cluster
firewall transparent
hostname Enclave2
!
ip local pool enclave2-pool 10.0.102.101-10.0.102.104 mask 255.255.255.0
!
interface BVI1
description Enclave2 BVI
ip address 10.2.1.251 255.255.255.0
!
interface Mgmt102
management-only
nameif management
security-level 0
ip address 10.0.102.100 255.255.255.0 cluster-pool enclave2-pool
!
interface outside
nameif outside
bridge-group 1
security-level 0
!
interface inside
nameif inside
bridge-group 1
security-level 100
!

This command indicates that Enclave2 is defined as a transparent security context across the cluster.


ASA Cluster
K02-ASA-Cluster/Enclave2# cluster exec show context
ASA-1(LOCAL):*********************************************************
Context Name   Class     Interfaces               Mode          URL
Enclave2       default   inside,Mgmt102,outside   Transparent   disk0:/enclave2.cfg

ASA-3:****************************************************************
Context Name   Class     Interfaces               Mode          URL
Enclave2       default   inside,Mgmt102,outside   Transparent   disk0:/enclave2.cfg

ASA-4:****************************************************************
Context Name   Class     Interfaces               Mode          URL
Enclave2       default   inside,Mgmt102,outside   Transparent   disk0:/enclave2.cfg

ASA-2:****************************************************************
Context Name   Class     Interfaces               Mode          URL
Enclave2       default   inside,Mgmt102,outside   Transparent   disk0:/enclave2.cfg

ISE Integration
The Cisco ASA security contexts communicate with the ISE over RADIUS for AAA and Cisco TrustSec related services. The AAA server group is created and the ISE nodes are referenced with secure keys and passwords that are similarly defined on the ISE platform. The AAA authentication can then be assigned to connection types.


ASA Cluster
aaa-server ISE_Radius_Group protocol radius
aaa-server ISE_Radius_Group (management) host 172.26.164.187
key *****
radius-common-pw *****
aaa-server ISE_Radius_Group (management) host 172.26.164.239
key *****
radius-common-pw *****
!
aaa authentication enable console ISE_Radius_Group
aaa authentication http console ISE_Radius_Group LOCAL
aaa authentication ssh console ISE_Radius_Group LOCAL
Figure 37

Example AAA Server Group Configuration in Cisco ASDM

Cisco TrustSec
As shown in Figure 38, each ASA security context communicates with the Cisco ISE platform and maintains its own database to enforce role-based access control policies. In Cisco TrustSec terms, the ASA is a Policy Enforcement Point (PEP) and Cisco ISE is a Policy Decision Point (PDP). The ISE PDP shares the secure group name and tag mappings (the security group table) with the ASA PEP through a secure Protected Access Credential (PAC) RADIUS transaction. This information is commonly referred to as Cisco TrustSec environment data. The PDP provides Security Group Tag (SGT) information to build access policies on the ASA PEP.
The ASA PEP learns identity information through the Security Group Tag Exchange Protocol (SXP), which can draw from multiple sources. The ASA creates a database to house the IP-to-SGT mappings. Only the master cluster unit learns SGT information; the master unit then populates the SGT table to the slave units, and the slaves can make match decisions for SGTs based on the security policy.
The following example references the ISE server group and establishes a connection to the group
through the shared cluster IP address 10.0.102.100. The ASA establishes two SXP connections to the
Nexus switches and listens for IP-to-SGT updates.

ASA Cluster
cts server-group ISE_Radius_Group
cts sxp enable
cts sxp default password *****
cts sxp default source-ip 10.0.102.100
cts sxp connection peer 172.26.164.218 password default mode local listener
cts sxp connection peer 172.26.164.217 password default mode local listener
Note

The ASA can also be configured as an SXP speaker to share data with the other members of the CTS
infrastructure.
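A hedged sketch of such a speaker connection follows; the peer address is illustrative.

ASA Cluster

cts sxp connection peer 172.26.164.219 password default mode local speaker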
Figure 38

Example SXP Configuration in Cisco ASDM


The ASA as a PEP uses the security groups to create security policies. The following images capture
rule creation through the Cisco ASDM. Notice the Security Group object as a criteria option for both
source and destination in Figure 39.
Figure 39

Cisco ASDM Add Access Rule Screenshot

When selecting the Source Group as a criteria, the Security Group Browser window will be available.
This window will list all available Security Groups and their associated tags. The ability to filter on the
Security Name streamlines the creation of access control policy. In this case, the interesting tags are for
enclave2.


Figure 40

Cisco ASDM Browse Security Group Example

Figure 41 is an example of the Security Group access rules. The role-based rules simplify rule creation
and understanding. The associated CLI is provided for completeness.
Figure 41

Sample Access Rules based on Security Group Information


ASA Cluster Security Group Access List Example


access-list inside_access_in extended permit icmp security-group name enc2_web any any
access-list inside_access_in extended permit object-group TCPUDP security-group name enc2_web any any eq www
access-list outside_access_in extended permit icmp any security-group name enc2_web any
access-list outside_access_in extended permit object-group TCPUDP any security-group name enc2_web any eq www

NetFlow Secure Event Logging (NSEL)


The ASA implements NSEL to provide a stateful, IP flow tracking method that exports only those
records that indicate significant events in a flow. In stateful flow tracking, tracked flows go through a
series of state changes. NSEL events are used to export data about flow status and are triggered by the
event that caused the state change. The significant events that are tracked include flow-create,
flow-teardown, and flow-denied. In addition, the ASA generates periodic NSEL events, flow-update
events, to provide periodic byte counters over the duration of the flow. These events are usually
time-driven, which makes them more in line with traditional NetFlow; however, they may also be
triggered by state changes in the flow. In a clustered configuration, each ASA node establishes a connection to the flow collector.
Figure 42 is an example of the ASA cluster NSEL configuration through Cisco ASDM. The
172.26.164.240 address is the Lancope Flow Collector. Each node will export data to this collector
through its management interface. The command line view is provided for completeness. The
global_policy policy map enables NSEL to the flow collector.
Figure 42

Cisco ASDM NetFlow Configuration Example


ASA Cluster NSEL Example Configuration


flow-export destination management 172.26.164.240 2055

policy-map global_policy
class class-default
flow-export event-type all destination 172.26.164.240
!
flow-export template timeout-rate 2
logging flow-export syslogs disable

High Availability Pair


Intrusion Prevention System
The Cisco IPS modules are optional components that can be integrated into the Cisco ASA appliance. The IPS can be placed in promiscuous mode or inline with traffic. The IPS virtual sensor uses a port-channel interface established on the backplane of the appliance to divert or mirror interesting traffic to the device. Figure 43 describes the virtual sensor on one of the ASA nodes in the cluster. The ASDM capture shows the management interface and backplane port-channel 0/0. If the device is positioned inline, it can be configured to fail open or fail closed depending on the organization's security requirements.
Figure 43

Cisco IPS Integration

In a clustered ASA deployment, the local sensor monitors the traffic local to its ASA node. There is no traffic redirection or sharing across sensors in the cluster. This lack of IPS collaboration in the cluster configuration can prevent detection of certain types of scans, as the traffic may traverse a number of IPS devices due to load balancing across the ASA cluster.


The IPS implementation is fully documented in the Cisco Secure Data Center for Enterprise
Implementation Guide at
http://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise/design-zone-security/sdc-ig.pdf

Cisco Virtual Security Gateway


The Cisco Virtual Security Gateway is a virtual appliance that works with the Nexus 1000v to consistently enforce security policies in virtualized environments. The Cisco VSG operates at Layer 2, creating zone-based segmentation. The Enclave architecture uses the VSG to secure "east-west" traffic patterns.
Figure 44 describes the flow and how segregation of duties and ownership is maintained for provisioning the Virtual Security Gateway. The security, network, and server administrators each have a role in the process. This section of the document focuses on the security administrator role, as the network Nexus 1000v configuration is covered in the Virtual Service Integration (Virtual Security Gateway) section and assigning a port group to a virtual machine is a well-known operation.
Figure 44

Cisco VSG Deployment Process

Figure 45 depicts the implementation of the VSG in the Enclave architecture. This deployment is based
on a single FlexPod with Layer 2 adjacency. Layer 3 VSG implementations are also supported for more
distributed environments. The VSG VLAN provisioning is as follows:

VLAN               Description
Management VLAN    Supports VMware vCenter, the Cisco Virtual Network Management Center, the Cisco Nexus 1000V VSM, and the managed Cisco VSGs.
Service VLAN       Supports the Cisco Nexus 1000V VEM and Cisco VSGs. All the Cisco VSGs are part of the Service VLAN and the VEM uses this VLAN for its interaction with Cisco VSGs.
HA VLAN            Supports the HA heartbeat mechanism and identifies the master-slave relationship for the VSG provisioned in this mode.

Note    The VSG also supports standalone deployment models.
Figure 45

Enclave VSG Implementation

Figure 46 captures the Enclave1 VSG network interface details. The HA service, which monitors state through heartbeats, has no IP addressing. The management interface is in the appropriate subnet, while the Data (vPath) service has an IP of 111.111.111.111. This IP is only used to resolve the MAC address of the VSG; all other communication or redirection of enclave data traffic occurs at Layer 2. Layer 3 based vPath would require a vmknic with Layer 3 capabilities enabled.


Figure 46

Enclave1 VSG Network Interfaces

The VSG firewall is assigned at the "Tenant" level in Cisco VSG terminology. The Tenant is defined as an Enclave instance. Figure 47 depicts enc1-vsg assigned as the Enclave1 VSG "Tenant". It is recommended to provision the VSG in HA mode as shown below.
Note

It is not recommended to use VMware High Availability (HA) or fault-tolerance with the Cisco VSG.
It is recommended to use an HA pair of VSGs and VMware DRS groups as described in the DRS for Virtual Service Nodes section of this document. In situations where neither the primary nor the standby Cisco VSG is available to vPath, configure the failure mode as Fail Open or Fail Close as dictated by the security requirements of the Enclave.
Figure 47    Enclave VSG Assignment: Tenant Level


Three security profiles were created for the n-tier application Microsoft SharePoint 2013 hosted within
Enclave1. Each security profile is created within the PNSC and associated with a port-profile.

The primary recommendation for SharePoint 2013 is to secure inter-farm communication by blocking the default ports (TCP 1433 and UDP 1434) used for SQL Server communication and establishing custom ports for this communication instead. The VSG enc1-db security profile uses an ACL to drop this service traffic.
Note

Security policies may be applied at the Virtual Data Center or application role level. This level of granularity was not used in the Enclave framework but is certainly a viable option for an organization. The policy may be applied at a global or "root" level, at the Tenant level (Enclave), or at the VDC or application layer defined within PNSC. The definition of these layers and assignment of firewall resources can become very granular for tight application security controls.

Cisco Identity Services Engine


The Cisco ISE is an access control and identity services platform for the Enclave architecture. The enclave uses it for authentication (auth_c) and authorization (auth_z) functionality across the system, as well as role-based identities for enhanced security. Figure 48 summarizes the two-node ISE pair deployed for validation. These are virtual machines deployed through OVF on the VMware vSphere enabled management domain. Notice these two engines support all ISE roles; for larger deployments these personas can be distributed among multiple virtual machines.


Figure 48

Identity Services Engine Nodes

The remaining sections of this document capture the configurations to address administrative and policy functionality implemented in the enclave. The primary areas of focus include:
• Network Resources
• Identity Management
• Policy Elements
• Authentication Policy
• Authorization Policy

Note    The ISE is a powerful tool and the configuration and capabilities captured in this document are simply scratching the surface. It is recommended that readers use the reference documents to fully explore the ISE platform.

Administering Network Resources


A network device such as a switch or a router is an authentication, authorization, and accounting (AAA) client through which AAA service requests are sent to Cisco ISE. Cisco ISE only supports network devices defined to it; a network device that is not defined in Cisco ISE cannot receive AAA services. There are two primary steps to register a device: create the Network Device Group details and define the device.

Network Device Groups


Network Device Groups (NDGs) contain network devices. NDGs logically group network devices based on various criteria such as geographic location, device type, and relative place in the network. Figure 49 illustrates the two forms necessary to complete during NDG creation and a sample outcome from the lab environment. These conditions can be used later to refine device authentication rules.


Figure 49

Network Device Group Types and Locations

Network Devices
Figure 50 summarizes the Network Device definitions and required elements. Figure 51 is the expanded view of the default RADIUS authentication settings for the device. These fields should correspond to the RADIUS definitions provided in each network element's configuration. The name should be identical to the hostname of the device.
Figure 50

Network Device Definition


Figure 51    Authentication Settings: RADIUS (Default)

Figure 52 is the form for enabling Cisco TrustSec for a particular device. This section defines the Security Group Access attributes for the newly added network device. The PAC file is generated from this page to secure communications between the ISE and the network device.
Figure 52

Advanced TrustSec Settings


Administering Identity Management


External Identity Sources
The Cisco ISE can store or reference internal or external user information for authentication and authorization. The following example documents the use of Microsoft Active Directory as the single source of truth for valid users in the organization. Using a single source of truth minimizes risk, as the data and its currency are maintained in a single repository, promoting accuracy and operational efficiency.

Connection
The connection to the Active Directory external identity store is established by providing the domain and a locally significant name for the data source. Figure 53 shows the connection between the ISE active-standby pair and the CORP domain. After joining the domain, the Cisco ISE can access user, group, and device data.
Figure 53

Cisco ISE Active Directory Connection Example

Groups
The Active Directory connection allows the Cisco ISE to use the repository's group construct. These groups can be referenced in authentication rules. For example, Figure 54 shows four groups defined in AD being used by the ISE.
Figure 54

Cisco ISE Active Directory Group Reference Example

Figure 55 is a snippet of the form to add these groups to the Cisco ISE. Notice the groups previously
selected.


Figure 55

Cisco ISE Select Directory Groups Form Example

Identity Source Sequences


Identity source sequences define the order in which Cisco ISE looks for user credentials in the different databases available to it. Cisco ISE supports the following identity sources:
• Internal Users
• Guest Users
• Active Directory
• LDAP
• RSA
• RADIUS Token Servers
• Certificate Authentication Profiles

The ISE uses a first-match policy across the identity sources for authentication and authorization purposes.

AD Sequence
The Active Directory service sequence is added referencing the previously joined domain. This sequence
will be used during authentication policy creation. Figure 56 illustrates the addition of an
"AD_Sequence" using the previously joined AD domain as an identity source.


Figure 56

Cisco ISE Identity Source Sequence Example

Policy Elements: Results
The following policy elements were defined in the Secure Enclave architecture:
• Authorization Profiles
• Security Group Access

Authorization Profiles
Policy elements are the components that construct the policies associated with authentication,
authorization, and secure group access. Authorization profiles define policy components related to
permissions. Authorization profiles are used when creating authorization policies. The authorization
profile returns permission attributes when the RADIUS request is accepted.


Figure 57 captures some of the default and custom authorization profiles used during validation. Figure 58 details the UCS_Admins profile; upon authentication, the UCS admin role is assigned through the cisco-av-pair RADIUS attribute value. Note that the cisco-av-pair value varies based on the Cisco device type; refer to device-specific documentation for the proper syntax.
Figure 57    Policy Element Results: Authorization Profiles Example

Figure 58

Cisco ISE Authorization Profile Example

The integration of ISE into each network device's configuration is required. Refer to the individual component documentation for ISE or RADIUS configuration details.
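As one example, the following is a hedged NX-OS sketch of pointing a Nexus switch at the ISE RADIUS services; the shared keys are elided and the group name is illustrative.

radius-server host 172.26.164.187 key <shared-secret> authentication accounting
radius-server host 172.26.164.239 key <shared-secret> authentication accounting
aaa group server radius ISE_Radius_Group
  server 172.26.164.187
  server 172.26.164.239
  use-vrf management
  source-interface mgmt0
aaa authentication login default group ISE_Radius_Group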

Secure Group Access: Security Groups


Packets within the Secure Enclave architecture are tagged to support role-based security policy. The Cisco ISE contains the tag definitions, which can be auto-generated or manually assigned. Figure 59 is a sample of the tags used in the Enclave validation effort. Notice that each enclave role (app, db, or web) has a unique tag. Figure 60 captures the import process form, which allows for bulk creation of SGT information on the ISE platform.


Figure 59

Cisco ISE Security Groups Example

Figure 60

Security Groups Form: Import Process

Authentication Policy
The Cisco ISE authentication policy defines the acceptable communication protocol and identity source for network device authentication. This policy is built using conditions or device attributes previously defined, such as device type or location, as well as the acceptable network protocol. Figure 61 shows the authentication policy associated with the Cisco UCS. Essentially, the rule states that if the device type is UCS and the communication uses the password authentication protocol (Pap_ASCII), use the identity source defined in AD_Sequence.
Figure 61

Cisco ISE Authentication Policy Example


Figure 62 illustrates the definition of multiple ISE authentication policies each built to meet the specific
needs of the network device and the overall organization.
Figure 62

Cisco ISE Authentication Policies

Authorization Policy
The ISE authorization policy enables the organization to set specific privileges and access rights based on any number of conditions. If the conditions are met, a permission level or authorization profile is assigned to the user session and applied to the network device being accessed. For example, in Figure 63 the UCS Admins authorization policy has a number of conditions that must be met, including location, access protocol, and Active Directory group membership, before the UCS Admins authorization profile permissions are assigned to that user session. The Cisco ISE allows organizations to capture the context of a user session and make decisions more intelligently. Figure 64 shows that multiple authorization policies are supported.
Figure 63

Cisco ISE Authorization Policy Example


Figure 64

Cisco ISE Authorization Policies

Cisco NetFlow Generation Appliance


Each NetFlow Generation Appliance is configured to accept SPAN traffic on up to four ten Gigabit Ethernet data ports. These promiscuous ports can be easily set up using the NGA Quick Setup web form shown in Figure 65. The quick setup pane configures export to a single collector.
Figure 65    Cisco NetFlow Generation Appliance: Quick Setup Form

The following screenshots capture a single NGA configuration used in the enclave validation effort. The
NGA redirects all traffic to the Lancope FlowCollector at 172.26.164.240. The screenshot in Figure 66
shows the collector defined using the Quick Setup form.
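In outline, the quick-form collector definition amounts to the following; the UDP port is an assumption based on the common NetFlow default and should be confirmed against Figure 66 and the FlowCollector settings:

Collector name : Lancope
IP address     : 172.26.164.240
Transport      : UDP, port 2055 (assumed default)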


Figure 66

Example NGA Flow Collector Definition

Figure 67 details the NetFlow record being sent to the collector.


Figure 67

Example NGA NetFlow Record Definition

The export details, shown in Figure 68, are set to their defaults.


Figure 68

Example NGA NetFlow Exporter Definition

Figure 69 shows the result of the quick setup implemented in the enclave architecture: the Lancope
monitor is created with all four data ports of mirrored traffic being sent to the Lancope FlowCollector.
Figure 69

Example NGA Monitor Definition


Lancope StealthWatch System


The Cisco Data Center Cyber Threat Defense Solution leverages Cisco networking technology such as
NetFlow, as well as identity, device profiling, posture, and user policy services from the Cisco Identity
Services Engine (ISE).
Cisco has partnered with Lancope to jointly develop and offer the Cisco Cyber Threat Defense
Solution. Available from Cisco, the Lancope StealthWatch System serves as the NetFlow analyzer
and management system in the Cisco Data Center Cyber Threat Defense Solution.
StealthWatch FlowCollector provides NetFlow collection services and performs analysis to detect
suspicious activity. The StealthWatch Management Console provides centralized management for all
StealthWatch appliances and delivers real-time data correlation, visualization, and consolidated
reporting of combined NetFlow and identity analysis.
The minimum system requirement to gain flow and behavior visibility is to deploy one or more NetFlow
generators with a single StealthWatch FlowCollector managed by a StealthWatch Management Console.
The minimum requirement to gain identity services is to deploy the Cisco Identity Services Engine and
one or more authenticating access devices in a valid Cisco TrustSec Monitoring Mode deployment. The
volume of flows per second will ultimately determine the number of components required for the
Lancope system.
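For example, assuming for illustration a FlowCollector model rated at 30,000 flows per second, an environment sustaining 75,000 flows per second would require three FlowCollectors under a single StealthWatch Management Console; consult Lancope sizing guidance for actual per-model capacities.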
Figure 70

Lancope StealthWatch Management Console

The complete design considerations and implementation details of the CTD system validated in this
effort are captured in Cisco Cyber Threat Defense for the Data Center Solution at
http://www.cisco.com/en/US/solutions/collateral/ns340/ns414/ns742/ns744/docs/ctd-first-look-designguide.pdf

Conclusion
There are many challenges facing organizations today, including changing business models in which
workloads are moving to the cloud and users demand ubiquitous access from any device. This new
reality places pressure on organizations to address a larger, more dynamic threat landscape with
consistent security policy and enforcement, even though the perimeter of the network is no longer
clearly defined. The edge of the data center has become blurred.
The Secure Enclave architecture proposes a standard approach to application security. The Secure
Enclave extends the FlexPod Datacenter infrastructure by integrating and enabling security
technologies uniformly, allowing application-specific policies to be consistently enforced.
Standardization on the enclave model facilitates operational efficiencies through automation. The
Secure Enclave architecture allows the enterprise to consume the FlexPod infrastructure securely and
to address the complete attack continuum, from user to application.

References
Cisco Secure Enclaves Architecture Design Guide:
http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-manager/whitepaperc07-731204.html
Cisco Secure Data Center for Enterprise Solution Design Guide:
http://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise/design-zone-security/sdc-dg.pdf
Cisco Secure Data Center for Enterprise Implementation Guide:
http://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise/design-zone-security/sdc-ig.pdf
Cisco Cyber Threat Defense for the Data Center Solution: First Look Guide:
http://www.cisco.com/en/US/solutions/collateral/ns340/ns414/ns742/ns744/docs/ctd-first-look-designguide.pdf
