Solution Components
FlexPod Datacenter with Cisco Secure Enclaves uses the FlexPod Data Center configuration as its
foundation. FlexPod Data Center is an integrated infrastructure solution from Cisco and NetApp
with validated designs that expedite IT infrastructure and application deployment while simultaneously
reducing cost, complexity, and project risk. FlexPod Data Center consists of Cisco Nexus networking,
the Cisco Unified Computing System (Cisco UCS), and NetApp FAS series storage systems. One especially
significant benefit of the FlexPod architecture is the ability to customize or "flex" the environment to
suit a customer's requirements; this includes the hardware previously mentioned as well as the operating
systems or hypervisors it supports.
The Cisco Secure Enclaves design extends the FlexPod infrastructure by using the capabilities inherent to
the integrated system and augmenting this functionality with services to address the specific business and
application requirements of the enterprise. These functional requirements promote uniqueness and
innovation in the FlexPod, augmenting the original FlexPod design to support these prerequisites. The
result is a region, or enclave, and more likely multiple enclaves, in the FlexPod, built to address the
unique workload activities and business objectives of an organization.
FlexPod Data Center with Cisco Secure Enclaves is developed using the following technologies:
VMware vSphere
Audience
This document describes the architecture and deployment procedures of a secure FlexPod Data Center
infrastructure enabled with Cisco and NetApp technologies. The intended audience for this document
includes but is not limited to sales engineers, field consultants, professional services, IT managers,
partner engineering, and customers interested in making security an integral part of their FlexPod
infrastructure.
Table 1    Software Revisions

Component                                                                    Software
Network
  Nexus 5548UP                                                               NX-OS 6.0(2)N1(2a)
  Nexus 7000                                                                 NX-OS 6.1(2)
  Nexus 1110X                                                                4.2(1)SP1(6.2)
  Nexus 1000v                                                                4.2(1)SV2(2.1a)
Compute
  Cisco UCS (Cisco UCS Manager)                                              2.1(3a)
  VMware ESXi                                                                5.1u1
  Cisco eNIC driver                                                          2.1.2.38
  Cisco fnic driver                                                          1.5.0.45
Services/Management
  VMware vCenter                                                             5.1u1
  Cisco Virtual Security Gateway (VSG)                                       4.2(1)VSG1(1)
  Cisco UCS Manager (UCSM)                                                   2.1(3)
  Cisco Network Analysis Module (NAM) VSB                                    5.1(2)
  Cisco NetFlow Generation Appliance (NGA)                                   1.0(2)
  Cisco Identity Services Engine (ISE)                                       1.2
  Lancope StealthWatch                                                       6.3
  Cisco Intrusion Prevention System Security Services Processor (IPS SSP)    7.2(1)E4
  Cisco Adaptive Security Appliance (ASA) 5585                               9.1(2)
  Lancope StealthWatch FlowCollector                                         6.3
  Citrix NetScaler 1000v                                                     10.1
Storage
  NetApp OnCommand Unified Manager                                           6.0
  NetApp Virtual Storage Console (VSC)                                       4.2.1
  NetApp NFS Plug-in for VMware vStorage APIs for Array Integration (VAAI)   1.0.21
  NetApp OnCommand Balance                                                   4.1.1.2R1
  NetApp FAS 3250                                                            Clustered Data ONTAP 8.2
Figure 2    FlexPod Data Center with Cisco Nexus 7000 (Left); FlexPod Data Center with Cisco Nexus 5000 (Right)
Note
For more information on the FlexPod Data Center configurations used in the design, go to:
FlexPod Data Center with VMware vSphere 5.1 and Nexus 7000 using FCoE Design Guide
FlexPod Data Center with VMware vSphere 5.1 Update 1 Design Guide
FlexPod Design Zone
The following common features between the FlexPod models are key for the instantiation of the secure
enclaves on the FlexPod:
NetApp FAS Controllers with Clustered Data ONTAP providing Storage Virtual Machine (SVM)
and Quality of Service (QoS) capabilities
Cisco Nexus switching providing a unified fabric, Cisco TrustSec, private VLANs, NetFlow,
Switch Port Analyzer (SPAN), VXLAN, and QoS capabilities
Cisco Unified Computing System (UCS) with centralized management through Cisco UCS
Manager, SPAN, QoS, Private VLANs, and hardware virtualization
The stateful link supports the sharing of session state information between the devices.
Figure 4
ASA Clustering
ASA Clustering lets you group multiple ASAs together as a single logical device. A cluster provides all
the convenience of a single device (management, integration into a network) while achieving the
increased throughput and redundancy of multiple devices. Currently, the ASA cluster supports a
maximum of eight nodes. Figure 5 describes the physical connection of the ASA cluster to the Cisco
Nexus switches of the FlexPod.
Figure 5
The ASA cluster uses a single vPC to support data traffic and a dedicated vPC per cluster node for
control and data traffic redirection within the cluster. Control traffic includes:
Master election
Configuration replication
Health monitoring
State replication
The data vPC spans all the nodes of the cluster, known as a spanned EtherChannel, and is the recommended
mode of operation. The Cisco Nexus switches use a consistent port channel load-balancing algorithm to
balance traffic distribution in and out of the cluster, limiting and optimizing use of the cluster control
links.
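The Cisco Nexus configuration for the spanned EtherChannel follows the standard vPC model. The following is a minimal sketch of the Nexus-side configuration only; the port channel number, VLANs, and member interfaces are assumptions for illustration and are not taken from the validated configuration.

! On each Cisco Nexus vPC peer; values are examples only
interface port-channel20
  description ASA cluster spanned EtherChannel (data)
  switchport mode trunk
  switchport trunk allowed vlan 2001,3001
  vpc 20
interface Ethernet1/5
  description Link to an ASA cluster node data interface
  channel-group 20 mode active
! A consistent load-balancing algorithm on both switches limits
! traffic redirection across the cluster control links
port-channel load-balance src-dst ip-l4port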
Note
The ASA clustering implementation from this validation is captured in a separate CVD titled Cisco
Secure Data Center for Enterprise Design Guide.
The Cisco NetFlow Generation Appliance (NGA) supports services such as detecting advanced security threats that have breached the perimeter security boundaries.
Figure 6 shows the deployment of Cisco NGA on the stack to provide these services, accepting mirrored
traffic from various sources of the converged infrastructure. As illustrated, the NGAs are dual-homed to
the Cisco Nexus switches that use a static "always on" port channel configuration to mirror traffic from
the various monitoring sessions defined on each switch. In addition, the NGAs capture interesting traffic
from the Cisco UCS domain. It should be noted that the SPAN traffic originating from each fabric
interconnect is rate-limited to 1 Gbps.
Figure 6
The Enclave
The enclave is a distinct logical entity that encompasses essential constructs including security along
with application or customer-specific resources to deliver a trusted platform that meets SLAs. The
modular construction and potential to automate delivery help make the enclave a scalable and securely
separated layer of abstraction. The use of multiple enclaves delivers increased isolation, addressing
disparate requirements of the FlexPod integrated infrastructure stack.
Figure 7 provides a conceptual view of the enclave that defines an enclave in relation to an n-tier
application.
The enclave provides the following functions:
Cisco Cyber Security and Threat Defense operations to expose and identify malicious traffic
Cisco TrustSec security using secure group access control to identify server roles and enforce
security policy
Out-of-band management for centralized administration of the enclave and its resources
Figure 7
Storage Design
Clustered Data ONTAP is an ideal storage operating system to support the Secure Enclaves
Architecture (SEA). Clustered Data ONTAP is architected in such a way that all data access is done
through secure virtual storage partitions. It is possible to have a single partition that represents the
resources of the entire cluster or multiple partitions that are assigned specific subsets of cluster
resources for Enclaves. These secure virtual storage partitions are known as Storage Virtual Machines
(SVMs). In the current implementation of SEA, the SVM serves as the storage basis for each Enclave.
The secure logical storage partition through which data is accessed in clustered Data ONTAP is known
as a Storage Virtual Machine (SVM). A cluster serves data through at least one and possibly multiple
SVMs. An SVM is a logical abstraction that represents a set of physical resources of the cluster. Data
volumes and logical network interfaces (LIFs) are created and assigned to an SVM and may reside on
any node in the cluster to which the SVM has been given access. An SVM may own resources on
multiple nodes concurrently, and those resources can be moved nondisruptively from one node to
another. For example, a flexible volume may be nondisruptively moved to a new node and aggregate, or
a data LIF could be transparently reassigned to a different physical network port. In this manner, the
SVM abstracts the cluster hardware and is not tied to specific physical hardware.
An SVM is capable of supporting multiple data protocols concurrently. Volumes within the SVM can be
junctioned together to form a single NAS namespace, which makes all of an SVM's data available
through a single share or mount point to NFS and CIFS clients. SVMs also support block-based
protocols, and LUNs can be created and exported using iSCSI, Fibre Channel, or Fibre Channel over
Ethernet. Any or all of these data protocols may be configured for use within a given SVM.
Because it is a secure entity, an SVM is only aware of the resources that have been assigned to it and has
no knowledge of other SVMs and their respective resources. Each SVM operates as a separate and
distinct entity with its own security domain. Tenants may manage the resources allocated to them
through a delegated SVM administration account. Each SVM may connect to unique authentication
zones such as Active Directory, LDAP, or NIS.
An SVM is effectively isolated from other SVMs that share the same physical hardware.
Clustered Data ONTAP is highly scalable, and additional storage controllers and disks can be easily
added to existing clusters in order to scale capacity and performance to meet rising demands. As new
nodes or aggregates are added to the cluster, the SVM can be nondisruptively configured to use them. In
this way, new disk, cache, and network resources can be made available to the SVM to create new data
volumes or migrate existing workloads to these new resources in order to balance performance.
This scalability also enables the SVM to be highly resilient. SVMs are no longer tied to the lifecycle of
a given storage controller. As new hardware is introduced to replace hardware that is to be retired, SVM
resources can be nondisruptively moved from the old controllers to the new controllers. At this point the
old controllers can be retired from service while the SVM is still online and available to serve data.
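As a sketch of how an enclave SVM might be instantiated from the cluster shell, the following clustered Data ONTAP commands create an SVM, delegate aggregates to it, and add a data volume. The SVM, aggregate, and volume names and sizes are assumptions for illustration.

# Create the enclave SVM with its root volume (names are examples)
vserver create -vserver enclave1-svm -rootvolume enclave1_root -aggregate aggr01_node01 -ns-switch file -nm-switch file -rootvolume-security-style unix
# Restrict the SVM to a delegated set of aggregates
vserver modify -vserver enclave1-svm -aggr-list aggr01_node01,aggr01_node02
# Create a data volume junctioned into the SVM namespace
volume create -vserver enclave1-svm -volume enclave1_ds1 -aggregate aggr01_node01 -size 500g -junction-path /enclave1_ds1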
Components of an SVM
Logical Interfaces
All SVM networking is done through logical interfaces (LIFs) that are created within the SVM. As
logical constructs, LIFs are abstracted from the physical networking ports on which they reside.
Flexible Volumes
A flexible volume is the basic unit of storage for an SVM. An SVM has a root volume and can have one
or more data volumes. Data volumes can be created in any aggregate that has been delegated by the
cluster administrator for use by the SVM. Depending on the data protocols used by the SVM, volumes
can contain either LUNs for use with block protocols, files for use with NAS protocols, or both
concurrently.
Namespace
Each SVM has a distinct namespace through which all of the NAS data shared from that SVM can be
accessed. This namespace can be thought of as a map to all of the junctioned volumes for the SVM, no
matter on which node or aggregate they might physically reside. Volumes may be junctioned at the root
of the namespace or beneath other volumes that are part of the namespace hierarchy.
Storage QoS
Storage QoS policy groups can be applied to storage objects such as:
A FlexVol volume
A LUN
In the SEA Architecture, since an SVM is usually associated with an Enclave, a QoS policy group would
normally be applied to the SVM, setting up an overall storage rate limit for the Enclave. Storage QoS is
administered by the cluster administrator.
You assign a storage object to a QoS policy group to control and monitor a workload. You can monitor
workloads without controlling them in order to size the workload and determine appropriate limits
within the storage cluster.
For more information on managing workload performance by using Storage QoS, please see "Managing
system performance" in the Clustered Data ONTAP 8.2 System Administration Guide for Cluster
Administrators.
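A minimal sketch of a Storage QoS policy group applied at the SVM level follows; the policy group name, SVM name, and throughput limit are assumptions for illustration.

# Create a policy group with an IOPS ceiling for the enclave (values are examples)
qos policy-group create -policy-group pg-enclave1 -vserver enclave1-svm -max-throughput 5000iops
# Assign the SVM (and therefore the Enclave) to the policy group
vserver modify -vserver enclave1-svm -qos-policy-group pg-enclave1
# Monitor the workload against the limit
qos statistics performance show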
Dedicated logical interfaces (LIFs) are created in each SVM from the physical NetApp Unified
Target Adapters (UTAs)
SAN LIF presence supports SAN A (e3) and SAN B (e4) topologies
Zoning provides SAN traffic isolation within the fabric
The NetApp ifgroup aggregates the Ethernet interfaces (e3a, e4a) of the UTA for high availability
and supports Layer 2 VLANs
IP LIFs use the ifgroup construct for NFS (enclave_ds1) and/or iSCSI-based LIFs
Management IP LIFs (svm_mgmt) are defined on each SVM for administration of that SVM and its
logical resources. The management scope is contained to the SVM.
Dedicated VLANs for each LIF assure traffic separation across the Ethernet fabric (a configuration sketch follows)
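The following clustered Data ONTAP commands sketch the ifgroup, VLAN, and LIF constructs described above; node names, VLAN IDs, and IP addresses are assumptions for illustration.

# Build the multimode ifgroup from the UTA Ethernet ports (values are examples)
network port ifgrp create -node node01 -ifgrp a0a -distr-func port -mode multimode_lacp
network port ifgrp add-port -node node01 -ifgrp a0a -port e3a
network port ifgrp add-port -node node01 -ifgrp a0a -port e4a
# Create the dedicated VLAN for the enclave NFS LIF
network port vlan create -node node01 -vlan-name a0a-3001
# Create the NFS data LIF and the SVM management LIF
network interface create -vserver enclave1-svm -lif enclave_ds1 -role data -data-protocol nfs -home-node node01 -home-port a0a-3001 -address 192.168.30.10 -netmask 255.255.255.0
network interface create -vserver enclave1-svm -lif svm_mgmt -role data -data-protocol none -home-node node01 -home-port a0a-3001 -address 192.168.30.11 -netmask 255.255.255.0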
Figure 8
NetApp FAS Enclave Storage Design Using cDOT Storage Virtual Machines
In addition, each SVM brings other features to support the granular separation and control of the FlexPod
storage domain. These include:
QoS policies allowing the administrator to manage system performance and resource consumption
per Enclave through policies based on IOPS or MB/s throughput
Role-based access control with predefined roles at the cDOT cluster layer and per individual SVM
Performance monitoring
Figure 9 describes another deployment model for the Cisco Secure Enclave on NetApp cDOT. The
Enclaves do not receive a dedicated SVM; instead they share a single SVM with multiple LIFs defined
to support specific datastores. This model does not provide the same level of granularity, but it may
provide a simpler operational model for larger deployments.
Figure 9
NetApp FAS Enclave Storage Design Using cDOT Storage Virtual Machines (Service Provider
Model)
Compute Design
The Cisco UCS Manager resides on a pair of Cisco UCS 6200 Series Fabric Interconnects using a
clustered, active-standby configuration for high availability. The software gives administrators a single
interface for performing server provisioning, device discovery, inventory, configuration, diagnostics,
monitoring, fault detection, auditing, and statistics collection. Cisco UCS Manager service profiles and
templates support versatile role- and policy-based management, and system configuration information
can be exported to configuration management databases (CMDBs) to facilitate processes based on IT
Infrastructure Library (ITIL) concepts.
Compute nodes are deployed in a Cisco UCS environment by leveraging Cisco UCS service profiles.
Service profiles let server, network, and storage administrators treat Cisco UCS servers as raw
computing capacity to be allocated and reallocated as needed. The profiles define server I/O properties,
personalities, and firmware revisions, and are stored in the Cisco UCS 6200 Series Fabric
Interconnects. Using service profiles, administrators can provision infrastructure resources in minutes
instead of days, creating a more dynamic environment and more efficient use of server capacity.
Each service profile consists of a server software definition and the server's LAN and SAN connectivity
requirements. When a service profile is deployed to a server, Cisco UCS Manager automatically
configures the server, adapters, fabric extenders, and fabric interconnects to match the configuration
specified in the profile. The automatic configuration of servers, network interface cards (NICs), host bus
adapters (HBAs), and LAN and SAN switches lowers the risk of human error, improves consistency, and
decreases server deployment times.
Service profiles benefit both virtualized and non-virtualized environments in the Cisco Secure Enclave
deployment. The profiles increase the mobility of non-virtualized servers, such as when moving
workloads from server to server or taking a server offline for service or upgrade. Profiles can also be
used in conjunction with virtualization clusters to bring new resources online easily, complementing
existing virtual machine mobility. The profiles are standardized templates that can be readily deployed
and secured.
The VMware ESXi host is part of a larger VMware vSphere High Availability (HA) and Distributed
Resource Scheduler (DRS) cluster
Cisco virtual interface cards (VICs) offer multiple virtual PCI Express (PCIe) adapters for the
VMware ESXi host for further traffic isolation and specialization.
Six Ethernet-based virtual network interface cards (vNICs) with specific roles associated with
the enclave system, enclave data, and core services traffic are created:
vmnic0 and vmnic1 for the Cisco Nexus 1000V system uplink support management and core
services traffic such as Domain Name System (DNS), Microsoft Active Directory, Dynamic Host
Configuration Protocol (DHCP), and Microsoft Windows updates.
Two virtual host bus adapters (vHBAs) for multihoming to available block-based storage.
Three VMkernel ports are created to support the following traffic types:
vmknic0 supports VMware ESXi host management traffic.
vmknic1 supports VMware vMotion traffic.
vmknic2 and vmknic3 provide the Virtual Extensible LAN (VXLAN) tunnel endpoint (VTEP)
to support traffic with path load balancing through the Cisco UCS fabric.
Additional Network File System (NFS) and Internet Small Computer System Interface (iSCSI)
VMkernel NICs can be created to meet dedicated storage access requirements.
A maximum of 256 VMkernel NICs are available per VMware ESXi host.
Cisco Nexus 1000V is deployed on the VMware ESXi host with the following elements (a sample
uplink port profile sketch follows this list):
PortChannels created for high availability and load balancing
Segmentation of traffic through dedicated vNICs, VLANs, and VXLANs
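The following is a minimal sketch of such a Cisco Nexus 1000V Ethernet uplink port profile; the profile name and VLAN ranges are assumptions. With the Cisco UCS fabric interconnects upstream, MAC pinning is used rather than a port channel protocol such as LACP.

port-profile type ethernet enclave-system-uplink
  vmware port-group
  switchport mode trunk
  ! VLAN range is an example only
  switchport trunk allowed vlan 3175-3177
  ! MAC pinning avoids running a port-channel protocol to the UCS fabric
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 3175-3176
  state enabled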
Figure 10
Cisco VICs to provide multiple virtual PCIe adapters to the host for further traffic isolation and
specialization
Ethernet-based vNICs with specific roles associated with the enclave system, enclave data, and
core services traffic
Private VLANs isolate traffic to the virtual machines within an enclave, providing core services
such as DNS, Microsoft Active Directory, DHCP, and Microsoft Windows Updates.
Figure 11
Network Design
The network fabric knits the previously defined storage and compute domains with the addition of
network services into a cohesive system. The combination creates an efficient, consistent, and secure
application platform, an enclave. The enclave is built using the Cisco Nexus switching platforms
already included in the FlexPod Data Center. This section describes two enclave models, their
components, and their capabilities.
Figure 12 depicts an enclave using two VLANs, with one or more VXLANs used at the virtualization
layer. The VXLAN solution provides logical isolation within the hypervisor and removes the scale
limitations associated with VLANs. The enclave is constructed as follows:
Two VLANs are consumed on the physical switch for the entire enclave.
The Cisco Nexus Series Switch provides the policy enforcement point and default gateway
(SVI2001).
Cisco ASA provides the security group firewall for traffic control enforcement.
Cisco ASA provides virtual context bridging for two VLANs (VLANs 2001 to 3001 in the figure).
Consistent security policy is provided through universal security group tags (SGTs):
The import of the Cisco ISE protected access credential (PAC) file establishes a secure
channel for the download of security group access control lists (SGACLs).
Cisco ISE provides SGTs and downloadable SGACLs to the Cisco Nexus switch.
Cisco ISE provides authentication and authorization across the infrastructure.
The Cisco Nexus 1000V propagates IP address-to-SGT mapping across the fabric through the SGT
Exchange Protocol (SXP) for SGTs assigned to the enclave.
The Cisco VSG for each enclave provides Layer 2 firewall functions.
Load-balancing services are optional but readily integrated into the model.
Dedicated VMkernel NICs are available to meet dedicated NFS and iSCSI access requirements.
Figure 12
Enclave Model: Transparent VLAN with VXLAN (Cisco ASA Transparent Mode)
Figure 13 illustrates the logical structure of another enclave on the same shared infrastructure employing
the Cisco ASA routed virtual context as the default gateway for the web server. The construction of this
structure is identical to the previously documented enclave except for the firewall mode of operation.
Figure 13
Enclave Model: Routed Firewall with VXLAN (Cisco ASA Routed Mode)
Security Services
Firewall
Firewalls are the primary control point for access between two distinct network segments, commonly
referred to as inside and outside or public and private. The Cisco Secure Enclave Architecture uses two
categories of firewalls, zone and edge, for access control into, between, and within the enclave. The
enclave model promotes security "proximity", meaning that where possible, traffic patterns within an
enclave should remain close to the compute. The use of multiple policy enforcement points promotes
optimized paths.
Cisco Virtual Security Gateway
The Cisco Virtual Security Gateway (VSG) protects traffic within the enclave, enforcing security policy
at the VM level by applying policy based on VM or network attributes. Typically this traffic is
considered "east-west" in nature, though in reality any traffic into a VM is subject to the VSG security
policy. The enclave model calls for a single VSG instance per enclave, allowing the security operations
team to develop granular security rules based on the application and associated business requirements.
The Cisco Nexus 1000v Virtual Ethernet Module (VEM) redirects the initial packet destined to a VM
to the VSG, where policy evaluation occurs. The redirection of traffic occurs using vPath when the virtual
service is defined on the port profile of the VM. The VEM encapsulates the packet and forwards it to the
VSG assigned to the enclave. The Cisco VSG processes the packet and forwards the result to the vPath
on the VEM, where the policy decision is cached and enforced for subsequent packets. The vPath
maintains the cache until the flow is reset (RST), finished (FIN), or times out.
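The vPath redirection described above is enabled by defining the VSG as a vservice node on the Cisco Nexus 1000v and referencing it from the VM port profile. The following is a sketch only; the node name, addresses, VLANs, and security profile name are assumptions for illustration.

! Define the enclave VSG as a vservice node (values are examples)
vservice node enclave1-vsg type vsg
  ip address 192.168.99.10
  adjacency l2 vlan 99
! Bind VM traffic to the VSG through the port profile
port-profile type vethernet enclave1-web
  vmware port-group
  switchport mode access
  switchport access vlan 2001
  org root/Enclave1
  vservice node enclave1-vsg profile enclave1-sp
  no shutdown
  state enabled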
Note
The Cisco Virtual Security Gateway may be deployed adjacent to the Cisco Nexus 1000v VEM or across
a number of Layer 3 hops.
Cisco Adaptive Security Appliances
The edge of the enclave is protected using Cisco's Adaptive Security Appliance. The Cisco ASA can
be partitioned into multiple security contexts (up to 250), allowing each enclave to have a dedicated
virtual ASA to apply access control, intrusion prevention, and antivirus policy. The primary role of each
ASA enclave context is to control access between the "inside" and "outside" network segments. This
traffic is typically referred to as "north-south" in nature.
The Cisco ASA supports Cisco TrustSec. Cisco TrustSec is an intelligent solution providing secure
network access based on the context of a user or a device. Subsequently, network access is granted based
on contextual data such as who, what, where, when, and how. Cisco TrustSec in the enclave
architecture uses Security Group Tag (SGT) assignment on the Cisco Nexus 1000v and the ASA as a
Security Group Firewall (SGFW) to enforce the role-based access control policy.
The Cisco Identity Services Engine (ISE) is a required component in the Cisco TrustSec implementation,
providing centralized definition of the SGT-to-IP mappings. A Protected Access Credential (PAC) file
secures the communication between the ISE and ASA platforms and allows the ASA to download
the security group table. This table contains the SGT-to-security-group-name translations. The security
operations team can then create access rules based on the object tags (SGTs), simplifying policy
configuration in the data center.
The SGT is assigned at the VM port profile on the Cisco Nexus 1000v. The SGT assignment is
propagated to the ASA through the SGT Exchange Protocol (SXP). SXP is a secure conversation
between two devices, a speaker and a listener. The ASA may perform both roles, but in this design it is
strictly a listener, learning mappings and acting as an SGFW. If the IP-to-SGT mapping is part of a
security group policy, the ASA enforces the rule.
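On the ASA, the PAC import and SXP listener configuration take a form similar to the following sketch; the file name, peer address, passwords, and security group names are assumptions for illustration.

! Import the PAC file generated on Cisco ISE (file name is an example)
cts import-pac disk0:/enclave-asa.pac password <pac-password>
! Enable SXP and listen for IP-to-SGT mappings
cts sxp enable
cts sxp default password <sxp-password>
cts sxp connection peer 192.168.10.31 password default mode local listener
! SGT-based rules can then reference security-group names, for example:
access-list enclave-in extended permit tcp security-group name web any security-group name db any eq 1433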
Cyber threats are attacks focused on seizing information related to sensitive data, money, or ideas. The
Cisco Cyber Threat Defense (CTD) Solution provides greater visibility into these threats by identifying
suspicious network traffic patterns within the network, giving security analysts the contextual
information necessary to discern the level of threat these suspicious patterns represent. As shown in
Figure 14, the solution is easily integrated and readily enabled on the base FlexPod components,
protecting the entire FlexPod Data Center with Cisco Secure Enclaves solution.
The CTD solution employs three primary components to provide this crucial visibility:
Figure 14
NetFlow was developed by Cisco to collect network traffic information and enable monitoring of the
network. The data collected by NetFlow provides insight into specific traffic flows in the form of
records. The enclave framework uses several methods to reliably collect NetFlow data and provide a full
picture of the FlexPod Data Center environment, as described below.
The effectiveness of any monitoring system is dependent on the completeness of the data it
captures. With that in mind, the enclave model does not recommend using sampled NetFlow. Ideally, the
NetFlow records should reflect the FlexPod traffic in its entirety. To that end, the physical Cisco Nexus
switches are relieved of NetFlow responsibilities and implement line-rate SPAN. The NGAs are
connected to SPAN destination ports on the Cisco Nexus switches and Cisco UCS Fabric Interconnects.
The collection points are described in the NetFlow Generation Appliance (NGA) Extension section. The
NGA devices are promiscuous, supporting up to 40 Gbps of mirrored traffic to create NetFlow records
for export to the Lancope StealthWatch FlowCollectors.
Direct NetFlow sources generate and send flow records directly to the Lancope FlowCollectors. The
Cisco Nexus 1000v virtual distributed switch provides this functionality for the virtual access layer of
the enclave. It is recommended to enable NetFlow on the Cisco Nexus 1000v interfaces. In larger
environments where the limits of the Cisco Nexus 1000v NetFlow resources are reached, NetFlow
should be enabled only on the VM interfaces that are significant data sources.
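A minimal sketch of the Cisco Nexus 1000v flow exporter and monitor configuration follows; the exporter name, collector address, and port-profile name are assumptions for illustration.

flow exporter StealthWatch-FC
  ! FlowCollector address and port are examples only
  destination 192.168.20.25
  transport udp 2055
  source mgmt0
  version 9
flow monitor Enclave-Flows
  record netflow-original
  exporter StealthWatch-FC
  timeout active 60
port-profile type vethernet enclave1-web
  ip flow monitor Enclave-Flows input
  ip flow monitor Enclave-Flows output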
Another source of direct flow data is the Cisco ASA 5500. The Cisco ASA generates NetFlow Secure
Event Logging (NSEL) records. These records differ from traditional NetFlow but are fully supported
by the Lancope StealthWatch system. In fact, the records include the action (permit or deny) taken by
the ASA on the flow as well as NAT translations, adding another layer of depth to the telemetry of the
CTD system.
Threat Context through Cisco Identity Services Engine (ISE)
In order to provide some context or perspective, the Lancope StealthWatch system employs the services
of the Cisco Identity Services Engine. The ISE can provide device and user information, offering more
information for the security operations team to use during the process of threat analysis and potential
response. In addition to the device profile and user identity, the ISE can provide time, location, and
network data to create a contextual identity of who and what is on the network.
Unified Visibility, Analysis and Context through Lancope StealthWatch
The Lancope StealthWatch system collects, organizes and analyzes all of the incoming data points to
provide a cohesive view into the inner workings of the enclave. The StealthWatch Management Console
(SMC) is the central point of control, supporting millions of flows. The primary SMC dashboards offer
insight into network reconnaissance, malware propagation, command and control traffic, data
exfiltration, and internal host reputation. The combination of Cisco and Lancope technologies offers a
cohesive layer of protection for the enclave infrastructure.
Management Design
The communication between the management domain, the hardware infrastructure, and the enclaves is
established through traditional paths as well as through the use of private VLANs on the Cisco Nexus
1000V and Cisco UCS fabric interconnects. The use of dedicated out-of-band management VLANs for
the hardware infrastructure, including Cisco Nexus switching and the Cisco UCS fabric, is a
recommended practice. The enclave model suggests the use of a single isolated private VLAN that is
maintained between the bare-metal and virtual environments. This private isolated VLAN allows all
virtual machines and bare-metal servers to converse with the services in the management domain, which
is a promiscuous region. The private VLAN feature enforces separation between servers within a single
enclave and between enclaves.
Figure 15 shows the logical construction of this private VLAN environment, which supports directory,
DNS, Microsoft Windows Server Update Services (WSUS), and other common required services for an
organization.
Figure 15
Figure 16 shows the virtual machine connection points to the management domain and the data
domain. As illustrated, the traffic patterns are completely segmented through the use of traditional
VLANs, VXLANs, and isolated private VLANs. The figure also shows the use of dedicated PCIe
devices and logical PortChannels created on the Cisco Nexus 1000V to provide load balancing, high
availability, and additional traffic separation.
Figure 16
Management Services
The FlexPod Data Center with Cisco Secure Enclaves employs numerous domain level managers to
provision, organize and coordinate the operation of the enclaves on the shared infrastructure. The
domain level managers employed during the validation are listed in Table 2 and Table 3. Table 2
describes the role of the management product while Table 3 indicates the positioning of that product
within the architecture.
Table 2    FlexPod Data Center with Cisco Secure Enclaves Validated Management Platforms

Product: Cisco Unified Computing System Manager (UCSM)
Role: Provides administrators a single interface for performing server provisioning, device discovery,
inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection.

Product: Microsoft Active Directory, DNS, DHCP, WSUS, etc.
Role: Microsoft directory services provide centralized authentication and authorization for users and
computers. DNS services are centralized for TCP/IP name translation. DHCP provides automated IP
address assignment that is coordinated with the DNS records.

Table 3    FlexPod Data Center with Cisco Secure Enclaves Management Platform Positioning

Product                                                                     Positioned
Microsoft Active Directory, DNS, DHCP, WSUS, etc.                           VMware vSphere Management Cluster
VMware vSphere vCenter                                                      VMware vSphere Management Cluster
Cisco Security Manager                                                      VMware vSphere Management Cluster
Lancope StealthWatch System                                                 VMware vSphere Management Cluster
Cisco Identity Services Engine                                              VMware vSphere Management Cluster
Cisco Prime Network Services Controller                                     VMware vSphere Management Cluster
NetApp OnCommand System Manager                                             VMware vSphere Management Cluster
NetApp OnCommand Unified Manager                                            VMware vSphere Management Cluster
NetApp Virtual Storage Console (VSC)                                        VMware vSphere Management Cluster
NetApp NFS Plug-in for VMware vStorage APIs for Array Integration (VAAI)    VMware ESXi Host
NetApp OnCommand Balance                                                    VMware vSphere Management Cluster
Cisco Nexus 1000v Virtual Supervisor Module                                 Nexus 1110-X Platform
Cisco Virtual Security Gateway                                              Nexus 1110-X Platform
Cisco Prime Network Analysis Module (NAM)                                   Nexus 1110-X Platform
Figure 17
Figure 18 shows the interfaces that Cisco UCS Director employs. Ideally, the northbound APIs of the
various management domains are used, but UCS Director may also directly access devices to create
the Enclave environment. It should be noted that the Cyber Threat Defense components are not directly
accessed, as these protections are overlays encompassing the entire infrastructure.
Figure 18
The instantiation of multiple enclaves on the FlexPod Data Center platform through Cisco UCS Director
offers operational efficiency and consistency to the organization. Figure 19 illustrates the automation of
the infrastructure through a single-pane-of-glass approach.
Figure 19
Enclave Implementation
The implementation section of this document builds on the baseline FlexPod Data Center deployment
guides and assumes this baseline infrastructure is in place, containing the Cisco UCS, NetApp FAS, and
Cisco Nexus configuration. Refer to the following documents for FlexPod Data Center deployment
with the Cisco Nexus 7000 or Cisco Nexus 5000 Series switches.
VMware vSphere 5.1 on FlexPod Deployment Guide for Clustered ONTAP at
http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/esxi51_ucsm2_Clusterdeploy.html
VMware vSphere 5.1 on FlexPod with the Cisco Nexus 7000 Deployment Guide at
http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/flexpod_esxi_N7k.html
The deployment details provide example configurations necessary to achieve enclave functionality. It is
assumed that the reader has installed and has some familiarity with the products.
ISE Integration
Two Identity Services Engines are provisioned in a primary/secondary configuration for high
availability. Each ISE assumes the following personas:
Administration Node
Monitoring Node
The ISE provides RADIUS services to each of the Cisco Nexus 7000 VDCs, which are configured as
network devices.
radius distribute
server 172.26.164.187
server 172.26.164.239
use-vrf management
source-interface mgmt0
radius commit
Cisco TrustSec
Cisco TrustSec provides an access-control solution that builds upon an existing identity-aware
infrastructure to ensure data confidentiality between network devices and integrate security access
services on one platform. In the Cisco TrustSec solution, enforcement devices utilize a combination of
user attributes and end-point attributes to make role-based and identity-based access control decisions.
In this release, the ASA integrates with Cisco TrustSec to provide security group based policy
enforcement. Access policies within the Cisco TrustSec domain are topology-independent, based on the
roles of source and destination devices rather than on network IP addresses.
The ASA can utilize the Cisco TrustSec solution for other types of security group based policies, such
as application inspection; for example, you can configure a class map containing an access policy based
on a security group.
The Cisco TrustSec environment is enabled on the Nexus 7000. The Cisco Nexus 7000 aggregates
SGT Exchange Protocol (SXP) information and sends it to any listener. In the enclave design, the
Cisco Nexus 1000v is a speaker and the Cisco ASA virtual contexts are listener devices.
Figure 20
feature cts
!Enable SXP
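The SXP peering on the Cisco Nexus 7000 takes a form similar to the following sketch; the peer and source addresses and the password are assumptions for illustration, and on NX-OS the mode keyword describes the peer's role.

cts sxp enable
! Learn mappings from the Nexus 1000v VSM (peer is the speaker)
cts sxp connection peer 192.168.10.31 source 192.168.10.1 password required <key> mode speaker
! Advertise aggregated mappings to an ASA context (peer is the listener)
cts sxp connection peer 192.168.10.41 source 192.168.10.1 password required <key> mode listener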
Note
The SXP information is common across ASA virtual contexts. The SGT mappings are global and should
not overlap between contexts.
Private VLANs
The use of private VLANs allows for the complete isolation of control and management traffic within
an Enclave. The Cisco Nexus 7000 supports private VLANs; the following structure was used during
validation. In this sample, VLAN 3171 is the primary VLAN and 3172 is an isolated VLAN carried
across the infrastructure.
vlan 3171
  name core-services-primary
  private-vlan primary
  private-vlan association 3172
vlan 3172
  name core-services-isolated
  private-vlan isolated
Port Profiles
A port profile is a mechanism for simplifying the configuration of interfaces. A single port profile can
be assigned to multiple interfaces to give them all the same configuration. Changes to a port profile are
propagated to the configuration of any interface that is assigned to it.
In the validated architecture, three port profiles were created supporting the Cisco UCS fabric
interconnects, NetApp FAS controllers, and Cisco Nexus 1110 Cloud Services Platforms. The following
details the port profile configurations, which are applied to the virtual and physical interfaces on the
Cisco Nexus 7000.
port-profile type port-channel UCS-FI
  switchport
  mtu 9216
  state enabled
port-profile type port-channel FAS-Node
  switchport
  mtu 9216
  state enabled

interface port-channel11
  inherit port-profile FAS-Node
interface port-channel12
  inherit port-profile FAS-Node
interface port-channel13
  inherit port-profile UCS-FI
interface port-channel14
  inherit port-profile UCS-FI
interface Ethernet4/17
  inherit port-profile Cloud-Services-Platforms
interface Ethernet4/19
  inherit port-profile Cloud-Services-Platforms
Create traffic classes by classifying the incoming and outgoing packets that match criteria such as
IP address or QoS fields.
Create policies by specifying actions to take on the traffic classes, such as limiting, marking, or
dropping packets.
Queues are one method to manage network congestion. Ingress and egress queue selection is based on
CoS values. The default network-qos queue structure for a system with F2 line cards is
nq-7e-4Q1T-HQoS. The F2 line card supports four queues, each supporting specific traffic classes
assigned by CoS values.
Note
The local copy of the ingress queuing policy structure is redefined to address Ethernet traffic. The
"no-drop" or FCoE traffic is given a minimal amount of resources, as this traffic will not traverse this
Ethernet VDC but rather the VDC dedicated to storage traffic. Essentially, class of service (CoS) 3
no-drop traffic is not defined or expected within this domain.
In the following example, the FP-4q-7e-drop-in class is given 99% of the available resources.
queue-limit percent 99
The queuing policy maps are then adjusted to reflect the new percentage totals. For example, the
4q4t-7e-in-q1 class will receive 50% of the queue limit within the FP-4q-7e-drop-in class, but that is
really 50% of the 99% queue limit available in total, meaning 4q4t-7e-in-q1 will receive 49.5% of the
total available queue.
Note
Effective queue limit % = assigned queue-limit % from parent class * local queue limit %
The 4q4t-7e-in-q4 queue under the FP-4q-7e-ndrop-in class will receive 100% of the 1% effectively
assigned to it. Again, the lab implementation did not expect any CoS 3 traffic in the Ethernet VDC.
queue-limit percent 50
bandwidth percent 50
queue-limit percent 25
bandwidth percent 24
queue-limit percent 25
bandwidth percent 25
bandwidth percent 1
The bandwidth percentages should total 100% across the class queues. The no-drop queue was given the
least amount of resources, 1%. Note that zero resources is not an option for any queue.
Table 4

Queuing Class                      Queue-limit %    Effective %    Bandwidth %    Effective %
4q4t-7e-in-q1 (CoS 5-7)            50               49.5           50             50
4q4t-7e-in-q-default (CoS 0-1)     25               24.75          24             24
4q4t-7e-in-q3 (CoS 2,4)            25               24.75          25             25
4q4t-7e-in-q4 (no drop) (CoS 3)    100              1              1              1
The queuing policy can be applied to one or more interfaces. To simplify the deployment, the service
policy is applied to the relevant port profiles, namely the FAS and Cisco UCS ports.
Note
The egress queue buffer allocations are non-configurable for the F2 line cards used for validation.
Classification
The NAS traffic originating from the NetApp FAS controllers will be classified and marked to receive
the appropriate levels of service across the Enclave architecture. The FP-qos-fas policy map was created
to mark all packets with a CoS of 5 (Gold). Marking the traffic from the FAS is a recommended practice.
CoS 5 aligns with the policies created in the Cisco UCS and Cisco Nexus 1000v platforms.
policy-map type qos FP-qos-fas
  class class-default
    set cos 5
The ability to assign this policy at the VLAN level simplifies the classification of packets and aligns
well with the mapping of VLANs to NetApp Storage Virtual Machines (SVMs), which require dedicated
VLANs for processing on the controller. After this configuration, a CoS of 5 is effectively marked on
all frames within the VLANs listed. The VLANs in this example support Enclave NFS traffic.
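The application of this policy at the VLAN level takes a form similar to the following sketch; the VLAN range is an assumption standing in for the enclave NFS VLANs.

! VLAN IDs are examples only; use the enclave NFS VLANs
vlan configuration 3001-3019
  service-policy type qos input FP-qos-fas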
Monitoring
The ability to monitor network traffic within the Nexus platform is key to ensuring the efficient operation
of the solution. The design calls for the use of Switched Port Analyzer (SPAN) as well as NetFlow
services to provide visibility.
SPAN
Switched Port Analyzer (SPAN) sends a copy of traffic to a destination port. The network analyzer
attached to the destination port analyzes the traffic that passes through the source port. The Cisco
Nexus 7000 supports all SPAN sessions in hardware; the supervisor CPU is not involved.
The source port, also called the monitored port, can be a single port, multiple ports, or a VLAN.
You can monitor packets on the source port in the receive (rx), transmit (tx), or bidirectional (both)
direction. A replication of the packets is sent to the destination port for analysis.
The destination port is a port that connects to a probe or security device that can receive and analyze the
copied packets from single or multiple source ports. In the design, the SPAN destination ports are the
Cisco NetFlow Generation Appliances (NGA). It is important to note that the capacity of the destination
SPAN interfaces should equal or exceed the capacity of the source interfaces to avoid potential
SPAN drops obscuring network visibility.
Figure 21 describes the connectivity between the Cisco Nexus 7000 switches and the Cisco NGA
devices. Notice that a static port channel is configured on the Cisco Nexus 7000 to the NGAs. The NGAs
are promiscuous devices and do not participate in port aggregation protocols such as PAgP or LACP on
their data interfaces. Each of the links is 10 Gigabit Ethernet. The port channel may contain up to 16
active interfaces in the bundle, allowing for greater capacity. Because the NGA devices are independent
devices, adding more promiscuous endpoint devices to the port channel is not an issue. SPAN traffic
will be redirected and load balanced across the static link members of the port channel.
39
Enclave Implementation
Figure 21
interface port-channel8
  switchport monitor
monitor session 1
  no shut
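A complete session definition takes a form similar to the following sketch; the source port channels (the FAS and Cisco UCS bundles defined in the Port Profiles section) are examples only.

monitor session 1
  ! Mirror the FAS and UCS port channels (sources are examples)
  source interface port-channel11 both
  source interface port-channel13 both
  destination interface port-channel8
  no shut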
Note
SPAN may use the same replication engine as multicast on the module, and there is a physical limit to the
amount of replication each replication engine can perform. Cisco Nexus 7000 modules have multiple
replication engines per module, and under normal circumstances multicast is unaffected by a SPAN
session. It is possible, however, to impact multicast replication if a large number of high-rate multicast
streams are inbound to the module and the monitored port uses the same replication engine.
NetFlow
NetFlow technology efficiently provides accounting for various applications such as network traffic
accounting, usage-based network billing, network planning, denial-of-service monitoring, network
monitoring, outbound marketing, and data mining for both service provider and enterprise
organizations. The NetFlow architecture consists of flow records, flow exporters, and flow monitors.
NetFlow consumes hardware resources such as TCAM and CPU in the switching environment. It is
also not a recommended practice to use NetFlow sampling, as this provides an incomplete view of
network traffic.
To avoid NetFlow resource utilization in the Nexus switch and potential "blind spots", the NetFlow
service is offloaded to dedicated devices, namely the Cisco NetFlow Generation Appliances (NGA). The
NGAs consume SPAN traffic from the Nexus 7000 and are the promiscuous endpoints of port
channel 8 described above. Please see the Cisco NetFlow Generation Appliance section for details on
its implementation in the design.
ISE Integration
Two Identity Services Engines are provisioned in a primary secondary configuration for high
availability. Each ISE assumes the following personas:
Administration Node
Monitoring Node
The ISE provides RADIUS services to each of the Cisco Nexus 5000 switches, which are configured as
network devices. The Cisco Nexus 5000 configuration is identical to the Cisco Nexus 7000
implementation captured in the Cisco Nexus 7000 ISE Integration section.
Cisco TrustSec
Cisco TrustSec allows security operations teams to create role-based security policy. The Cisco Nexus
5500 platform supports TrustSec but cannot act as an SXP "listener". This means it cannot aggregate and
advertise through SXP the IP-to-SGT mappings learned from the Cisco Nexus 1000v. In light of this, the
Nexus 1000v will implement an SXP connection to each ASA virtual context directly to advertise the
CTS tag-to-IP information.
Note
The Cisco Nexus 7000 and 5000 support enforcement of Security Group ACLs in the network fabric.
This capability was not explored in this design.
Private VLANs
The use of private VLANs allows for the complete isolation of control and management traffic within
an Enclave. The Cisco Nexus 5548UP supports private VLANs; the following structure was used during
validation. In this sample, VLAN 3171 is the primary VLAN and 3172 is an isolated VLAN carried
across the infrastructure.
Nexus 5000-A
feature private-vlan
vlan 3171
  name core-services-primary
  private-vlan primary
  private-vlan association 3172
vlan 3172
  name core-services-isolated
  private-vlan isolated

Nexus 5000-B
feature private-vlan
vlan 3171
  name core-services-primary
  private-vlan primary
  private-vlan association 3172
vlan 3172
  name core-services-isolated
  private-vlan isolated
Port Profiles
A port profile is a mechanism for simplified configuration of interfaces. A port profile can be assigned
to multiple interfaces, giving them all the same configuration and providing consistency. Changes
to the port profile are propagated automatically to the configuration of any interface assigned to it.
Please use the port profile guidance provided in the Nexus 7000 Port Profiles section for configuration
details.
Nexus 5000-A
ip access-list acl-fas
  10 permit ip any any
class-map type qos match-any cm-qos-fas
  match access-group name acl-fas
policy-map type qos pm-qos-fas
  class cm-qos-fas
    set qos-group 4
vlan configuration 201-219
  service-policy type qos input pm-qos-fas

Nexus 5000-B
ip access-list acl-fas
  10 permit ip any any
class-map type qos match-any cm-qos-fas
  match access-group name acl-fas
policy-map type qos pm-qos-fas
  class cm-qos-fas
    set qos-group 4
vlan configuration 201-219
  service-policy type qos input pm-qos-fas
Note
Use the show hardware profile tcam feature qos command to display TCAM resource utilization.
The following configuration shows the classifications (qos) defined on the Nexus switch. A class map
matches a CoS value and is subsequently used to assign traffic to a system class, or qos-group, through
the system-applied policy map pm-qos-global.
Nexus 5000-A
class-map type qos match-all cm-qos-gold
  match cos 5
class-map type qos match-all cm-qos-bronze
  match cos 1
class-map type qos match-all cm-qos-silver
  match cos 2
class-map type qos match-all cm-qos-platinum
  match cos 6
policy-map type qos pm-qos-global
  class cm-qos-platinum
    set qos-group 5
  class cm-qos-gold
    set qos-group 4
  class cm-qos-silver
    set qos-group 3
  class cm-qos-bronze
    set qos-group 2
  class class-fcoe
    set qos-group 1
system qos
  service-policy type qos input pm-qos-global

Nexus 5000-B
class-map type qos match-all cm-qos-gold
  match cos 5
class-map type qos match-all cm-qos-bronze
  match cos 1
class-map type qos match-all cm-qos-silver
  match cos 2
class-map type qos match-all cm-qos-platinum
  match cos 6
policy-map type qos pm-qos-global
  class cm-qos-platinum
    set qos-group 5
  class cm-qos-gold
    set qos-group 4
  class cm-qos-silver
    set qos-group 3
  class cm-qos-bronze
    set qos-group 2
  class class-fcoe
    set qos-group 1
system qos
  service-policy type qos input pm-qos-global
The queuing and scheduling definitions are defined for ingress and egress traffic to the Nexus platform.
The available queues (2-5) are given bandwidth percentages that align with those defined on the Cisco
UCS system. The ingress and egress policies are applied at the system level through the service-policy
command.
Nexus 5000-A
class-map type queuing cm-que-qos-group-2
  match qos-group 2
class-map type queuing cm-que-qos-group-3
  match qos-group 3
class-map type queuing cm-que-qos-group-4
  match qos-group 4
class-map type queuing cm-que-qos-group-5
  match qos-group 5
policy-map type queuing pm-que-in-global
  class type queuing class-fcoe
    bandwidth percent 20
  class type queuing cm-que-qos-group-2
    bandwidth percent 10
  class type queuing cm-que-qos-group-3
    bandwidth percent 20
  class type queuing cm-que-qos-group-4
    bandwidth percent 30
  class type queuing cm-que-qos-group-5
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 10
policy-map type queuing pm-que-out-global
  class type queuing class-fcoe
    bandwidth percent 20
  class type queuing cm-que-qos-group-2
    bandwidth percent 10
  class type queuing cm-que-qos-group-3
    bandwidth percent 20
  class type queuing cm-que-qos-group-4
    bandwidth percent 30
  class type queuing cm-que-qos-group-5
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 10
system qos
  service-policy type queuing input pm-que-in-global
  service-policy type queuing output pm-que-out-global

Nexus 5000-B
class-map type queuing cm-que-qos-group-2
  match qos-group 2
class-map type queuing cm-que-qos-group-3
  match qos-group 3
class-map type queuing cm-que-qos-group-4
  match qos-group 4
class-map type queuing cm-que-qos-group-5
  match qos-group 5
policy-map type queuing pm-que-in-global
  class type queuing class-fcoe
    bandwidth percent 20
  class type queuing cm-que-qos-group-2
    bandwidth percent 10
  class type queuing cm-que-qos-group-3
    bandwidth percent 20
  class type queuing cm-que-qos-group-4
    bandwidth percent 30
  class type queuing cm-que-qos-group-5
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 10
policy-map type queuing pm-que-out-global
  class type queuing class-fcoe
    bandwidth percent 20
  class type queuing cm-que-qos-group-2
    bandwidth percent 10
  class type queuing cm-que-qos-group-3
    bandwidth percent 20
  class type queuing cm-que-qos-group-4
    bandwidth percent 30
  class type queuing cm-que-qos-group-5
    bandwidth percent 10
  class type queuing class-default
    bandwidth percent 10
system qos
  service-policy type queuing input pm-que-in-global
  service-policy type queuing output pm-que-out-global
The network-qos policy defines the attributes of each qos-group on the Nexus platform. Groups 2
through 5 are each assigned an MTU and associated CoS value. The MTU was set to the maximum in
this environment because the edge of the network will define acceptable frame transmission. The FCoE
class, qos-group 1, is assigned CoS 3 with a default MTU of 2158 along with Priority Flow Control
(PFC pause) and lossless Ethernet settings. The network-qos policy is applied at the system level.
Nexus 5000-A
class-map type network-qos cm-nq-qos-group-2
  match qos-group 2
class-map type network-qos cm-nq-qos-group-3
  match qos-group 3
class-map type network-qos cm-nq-qos-group-4
  match qos-group 4
class-map type network-qos cm-nq-qos-group-5
  match qos-group 5
policy-map type network-qos pm-nq-global
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos cm-nq-qos-group-5
    mtu 9216
    set cos 6
  class type network-qos cm-nq-qos-group-4
    mtu 9216
    set cos 5
  class type network-qos cm-nq-qos-group-3
    mtu 9216
    set cos 2
  class type network-qos cm-nq-qos-group-2
    mtu 9216
    set cos 1
system qos
  service-policy type network-qos pm-nq-global

Nexus 5000-B
class-map type network-qos cm-nq-qos-group-2
  match qos-group 2
class-map type network-qos cm-nq-qos-group-3
  match qos-group 3
class-map type network-qos cm-nq-qos-group-4
  match qos-group 4
class-map type network-qos cm-nq-qos-group-5
  match qos-group 5
policy-map type network-qos pm-nq-global
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos cm-nq-qos-group-5
    mtu 9216
    set cos 6
  class type network-qos cm-nq-qos-group-4
    mtu 9216
    set cos 5
  class type network-qos cm-nq-qos-group-3
    mtu 9216
    set cos 2
  class type network-qos cm-nq-qos-group-2
    mtu 9216
    set cos 1
system qos
  service-policy type network-qos pm-nq-global
Monitoring
The ability to monitor network traffic within the Nexus platform is key to ensuring the efficient operation
of the solution. The design calls for the use of Switched Port Analyzer (SPAN) as well as NetFlow
services to provide visibility.
SPAN
Switched Port Analyzer (SPAN) sources refer to the interfaces from which traffic can be monitored.
SPAN sources send a copy of the traffic to a destination port. The network analyzer attached to the
destination port analyzes the traffic that passes through the source ports.
monitor session 1
  no shut
The SPAN source positioning is at a critical juncture of the network allowing for full visibility of traffic
ingress and egress to the switch.
NetFlow
NetFlow technology efficiently provides accounting for various applications such as network traffic
accounting, usage-based network billing, network planning, denial-of-service monitoring, network
monitoring, outbound marketing, and data mining for both service provider and enterprise
organizations.
In this design, NetFlow services are offloaded to dedicated devices, namely the Cisco NetFlow
Generation Appliances (NGA). The NGAs consume SPAN traffic from the Nexus 5548UP. The SPAN
sources are implemented at network "choke points" to optimize the capture and ultimately visibility into
the environment. Please see the Cisco NetFlow Generation Appliance section for details on its
implementation in the design.
Figure 22
Figure 23 details the physical connections of the Cisco Nexus 1100 series platforms to the FlexPod. This
aligns with the traditional connectivity models. The CSP platforms are an active-standby pair with
trunked links supporting control and management traffic related to the virtual services. The control0 and
mgmt0 interfaces of the Nexus 1100 are seen originating from the active Nexus platform. The
configurations are automatically synced between the two Nexus 1100 cluster nodes.
Figure 23
Note
The Cisco Nexus 1100 can be provisioned in a Flexible Network Uplink configuration. This deployment
model is recommended for FlexPod Data Center moving forward. The flexible model allows for port
aggregation of CSP interfaces to provide enhanced link and device fault tolerance with minimal
convergence as well as maximum uplink utilization.
The virtual services blades are deployed in a redundant fashion across the Nexus 1100 devices. As shown
below, the NAM VSB does not support a high availability deployment model and is active only on the
primary platform.
A second VSM is provisioned to support the application enclaves deployed in the "production" VMware
vSphere cluster. The VSM is identified as sea-prod-vsm. The second VSM is not required to isolate the
management network infrastructure from the "production" environment, but with the available VSB
capacity on the Cisco Nexus 1100 platforms it makes the implementation much cleaner. As such, VLAN
3250 provides a dedicated segment for production control traffic.
The configuration of the VSG requires the definition of two VLAN interfaces, for data services (VLAN
99) and control traffic (VLAN 98). The VEM and VSG communicate over VLAN 99 (vPath) for policy
enforcement. The HA VLAN provides VSG node communication and takeover in the event of a failure.
SVS Domain
A Nexus 1000v DVS (sea-prod-vsm) is created with a unique SVS domain to support the new production enclave environment. This new virtual distributed switch is associated with the baseline FlexPod VMware vCenter Server.
interface control0
ip address 192.168.250.18/24
svs-domain
domain id 201
control vlan 3250
packet vlan 3250
svs mode L3 interface control0
svs connection vCenter
protocol vmware-vim
remote ip address 172.26.164.200 port 80
vmware dvs uuid "c5662d50b4a07c11-6d3bcb9fb19154c0" datacenter-name SEA Data Center
max-ports 8192
connect
Figure 24, Cisco Nexus 1000v "Production" VSM Topology, describes the use of the control0 interface on a unique VLAN to provide ESXi host isolation from the remaining management network. All VEM-to-VSM communication occurs over this dedicated VLAN. The svs mode L3 interface control0 command assigns communication between the VSM and VEM to the control interface.
Figure 24
The Nexus 1000v production enclave VSM is part of the same VMware vSphere vCenter deployment as
the FlexPod Data Center Nexus 1000v VSM (sea-vsm1) dedicated to management services. This image
of the vCenter Networking construct for the data center indicates the presence of the two virtual
distributed switches.
ISE Integration
The ISE provides RADIUS services to each of the Nexus 1000v VSMs, which are configured as network devices in the ISE tool.
For more deployment details on the ISE implementation, see the Cisco Identity Services Engine section.
The following AAA commands were used:
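The commands themselves appear as a screenshot in the original; a representative NX-OS sketch, assuming the ISE node at 172.26.164.187 referenced elsewhere in this design and an illustrative shared key, follows:
radius-server host 172.26.164.187 key <shared-key> authentication accounting
aaa group server radius ISE_Radius_Group
  server 172.26.164.187
  use-vrf management
aaa authentication login default group ISE_Radius_Group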
VXLAN
Virtual Extensible LAN (VXLAN) allows organizations to scale beyond the 4094-VLAN limit present in traditional switching environments by encapsulating MAC frames in IP. This approach allows a single overlay VLAN to support multiple VXLAN segments, simultaneously addressing VLAN scale issues and network segmentation requirements.
In the enclave architecture, the use of VXLAN is enabled through the segmentation feature, and the unicast-only mode was validated. Unicast-only mode distributes a list of IP addresses associated with a particular VXLAN to all Nexus 1000v VEMs. Each VEM requires at least one IP/MAC address pair to terminate VXLAN packets. This IP/MAC address pair is known as the VXLAN Tunnel End Point (VTEP) IP/MAC address. The MAC distribution feature enables the VSM to distribute a list of MAC-to-VTEP associations. The combination of these two features eliminates unicast flooding, as all MAC addresses are known to all VEMs under the same VSM.
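A minimal sketch of the global enablement of these two features on the VSM follows; the commands are shown as a representative example rather than the validated configuration:
feature segmentation
segment mode unicast-only
segment distribution mac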
Figure 25
VTEP Configuration
To create VXLAN segment IDs, or domains, it is necessary to construct bridge domains in the Nexus 1000v configuration. The bridge domains are referenced by virtual machine port profiles requiring VXLAN services. In the example below, six bridge domains are created; as the naming standard dictates, there are two VXLAN segments for each of the enclaves. The segment ID is assigned by the administrator. The enclave validation allows for a maximum of ten VXLAN segments per enclave, but this is adjustable based on each organization's requirements. The current version of the Nexus 1000v supports up to 2048 VXLAN bridge domains.
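The referenced configuration appears as a screenshot; a representative sketch of two of the six bridge domains, and of a port profile consuming one of them, with illustrative names and segment IDs, follows:
bridge-domain enc1-vxlan-1
  segment id 5001
bridge-domain enc1-vxlan-2
  segment id 5002
port-profile type vethernet enc1-vxlan-web
  switchport mode access
  switchport access bridge-domain enc1-vxlan-1
  no shutdown
  state enabled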
Figure 26
Visibility
The following Nexus 1000v features were enabled to provide virtual access layer visibility and awareness, and to support cyber threat defense technologies.
SPAN
The Nexus 1000v supports the mirroring of traffic within the virtual distributed switch as well as externally to third-party network analysis devices or probes. Each of these capabilities has been implemented within the Secure Enclave architecture to advance understanding of traffic patterns and the performance of the environment.
Local SPAN
The Switched Port Analyzer (SPAN) feature allows mirroring of traffic within the VEM to a vEthernet interface supporting a network analysis device. The SPAN sources can be ports (Ethernet, vEthernet, or port channels), VLANs, or port profiles. Traffic is directional in nature, and the SPAN configuration allows for ingress (rx), egress (tx), or both to be captured in relation to the source construct. The following example captures ingress traffic on the system-uplink port profile and sends the data to a promiscuous VM.
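The example itself appears as a screenshot; a representative local SPAN session, assuming Vethernet100 is the interface of the promiscuous analysis VM, might look like the following:
monitor session 2
  source port-profile system-uplink rx
  destination interface vethernet 100
  no shut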
NetFlow
The Nexus 1000v supports NetFlow. The data may be exported to the Lancope StealthWatch system for analysis. As shown below, the NetFlow feature is enabled. The destination of the flow records is defined as "nf-export-1", which points to the Lancope Cyber Threat Defense (CTD) solution. The flow record "sea-enclaves" defines the interesting parameters to be captured with each flow and indicates "nf-export-1" as the collector.
For more information on the Cyber Threat Defense system implemented for the Secure Enclave
architecture please visit the Cisco Cyber Threat Defense for the Data Center Solution: Cisco Validated
Design at
www.cisco.com/c/dam/en/us/solutions/collateral/enterprise/design-zone-security/ctd-first-look-designguide.pdf
vTracker
The vTracker feature on the Cisco Nexus 1000V switch provides information about the virtual network environment. vTracker provides various views that are based on data sourced from vCenter, the Cisco Discovery Protocol (CDP), and other related systems connected with the virtual switch. vTracker enhances troubleshooting, monitoring, and system maintenance. Using vTracker show commands, you can access consolidated network information across views such as the module (pNIC), VLAN, VM, vMotion, and upstream views.
For example, the show vtracker module-view command provides visibility into the ESXi pNICs defined as vNICs on the Cisco UCS system:
--------------------------------------------------------------------------------
Mod  EthIf   Adapter  Mac-Address     Driver  DriverVer  FwVer    Description
--------------------------------------------------------------------------------
3    Eth3/1  vmnic0   0050.5652.0a00  enic    2.1.2.38   2.1(3a)
3    Eth3/2  vmnic1   0050.5652.0b00  enic    2.1.2.38   2.1(3a)
3    Eth3/3  vmnic2   0050.5652.5a00  enic    2.1.2.38   2.1(3a)
3    Eth3/4  vmnic3   0050.5652.5b00  enic    2.1.2.38   2.1(3a)
3    Eth3/5  vmnic4   0050.5652.3a00  enic    2.1.2.38   2.1(3a)
3    Eth3/6  vmnic5   0050.5652.3b00  enic    2.1.2.38   2.1(3a)
Cisco TrustSec
The Cisco Nexus 1000v supports the Cisco TrustSec architecture by implementing the SGT Exchange Protocol (SXP). SXP is used to propagate the IP addresses of virtual machines and their corresponding SGTs up to the upstream Cisco TrustSec-capable switches or Cisco ASA firewalls. SXP provides secure communication between the speaker (Nexus 1000v) and listener devices.
The following configuration describes the enablement of the CTS feature on the Nexus 1000v. The feature is enabled with device tracking. CTS device tracking allows the switch to capture the IP address and the associated SGT assigned at the port profile of the virtual machine.
Switches such as the Cisco Nexus 5000 do not support the SXP listener role. In this scenario, the Nexus 1000v "speaks" directly to each ASA virtual context, providing SGT-to-IP mapping information for use in the access control service policies.
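The referenced configuration appears as a screenshot; a representative sketch, using the Enclave2 cluster management address from later in this document as the SXP peer and an illustrative tag value, follows:
feature cts
cts device tracking
cts sxp enable
cts sxp default password <shared-secret>
cts sxp connection peer 10.0.102.100 password default mode listener
port-profile type vethernet enc2-web
  cts sgt 20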
Figure 28
Private VLANs
The private VLAN configuration on the Nexus 1000v supports the isolation of enclave management traffic. This configuration requires the enablement of the feature and the definition of two VLANs. In this example, VLAN 3171 is the primary VLAN supporting the isolated VLAN 3172.
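A minimal sketch of this private VLAN configuration, with an illustrative port profile name, follows:
feature private-vlan
vlan 3171
  private-vlan primary
  private-vlan association 3172
vlan 3172
  private-vlan isolated
port-profile type vethernet enclave-mgmt
  switchport mode private-vlan host
  switchport private-vlan host-association 3171 3172
  no shutdown
  state enabled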
VLAN ID  Description
3250     Production Management VLAN
3251     vMotion VLAN
3254     vPath Data Service
3255     HA services
The enclave port profile uplinks support traffic directly associated with the enclaves. This includes NFS, iSCSI, and enclave data flows. Table 6 describes the VLANs created for the enclave validation effort. It is important to understand that these VLANs do not represent the limits of the environment.
VLAN ID    Description
201-219    Enclave NFS VLANs; one per enclave
3001-3019  Enclave public VLANs; one per enclave*
3253       VXLAN VTEP VLAN
*This is not indicative of the maximum number of VLANs supported.
The core-uplinks port profile supports the private VLANs, primary and isolated, that offer complete isolation of management traffic to all enclaves in the architecture. The port channel created in the design is dedicated to only these two VLANs. Please see the Cisco Unified Computing System section for more details regarding the construction of this secure traffic path.
VLAN ID  Description
3171     Enclave Primary Private VLAN
3172     Enclave Isolated Private VLAN
The show port-channel summary command output for a single VEM module (ESXi host) captures the three port channel uplinks created. Figure 29 illustrates the resulting uplink configurations.
--------------------------------------------------------------------------------
Group  Port-Channel  Type  Protocol  Member Ports
--------------------------------------------------------------------------------
8      Po8(SU)       Eth   NONE      Eth10/5(P)  Eth10/6(P)
16     Po16(SU)      Eth   NONE      Eth10/1(P)  Eth10/2(P)
32     Po32(SU)      Eth   NONE      Eth10/3(P)  Eth10/4(P)
Figure 29
Quality of Service
The Nexus 1000v uses the Cisco Modular QoS CLI (MQC), which defines a policy configuration process to identify and classify traffic at the virtual access layer. The MQC policy implementation can be summarized in three primary steps: define traffic classes with class maps, associate actions with those classes in a policy map, and apply the policy map with a service policy.
The Nexus 1000v, as an edge device, can apply a CoS value at the edge based on the VM's value or role in the organization. The first step in the process is to create a class-map construct. In the enclave architecture there are four class maps defined; the fifth class is best effort, which was not explicitly defined.
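The class and policy maps appear as screenshots in the original; a representative sketch with illustrative class names and ACL-based matching follows (the CoS values align with the network-qos policy shown earlier):
class-map type qos match-all cm-qos-control
  match access-group name acl-control
class-map type qos match-all cm-qos-nfs
  match access-group name acl-nfs
policy-map type qos pm-qos-enclave
  class cm-qos-control
    set cos 6
  class cm-qos-nfs
    set cos 5
port-profile type vethernet enc1-web
  service-policy type qos input pm-qos-enclave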
vPath is an encapsulation technique that adds 62 bytes when used in L2 mode or 82 bytes when used in L3 mode. To avoid fragmentation in a Layer 2 implementation, ensure the outgoing uplinks support the required MTU. If vPath is Layer 3 enabled, oversized packets will be dropped and ICMP error messages sent to the traffic source.
The port profile enc1-web uses the previously described service node. The vservice command binds a specific Cisco VSG (enc1-vsg) and security profile (enc1_web) to the port profile. This enables vPath to redirect the traffic to the Cisco VSG. The org command defines the tenant within the PNSC where the firewall is enabled.
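A minimal sketch of this binding, assuming the tenant path root/Enclave1 within PNSC, follows:
port-profile type vethernet enc1-web
  org root/Enclave1
  vservice node enc1-vsg profile enc1_web
  no shutdown
  state enabled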
The configuration adapts the IEEE 802.1Q-2006 CoS-use recommendations shown in Table 7. It is important to note that the voice CoS value 5 has been reassigned to support NFS, and that video traffic (CoS 4) is not in use.
Table 7
CoS Value  Acronym  Description               Priority
6          IC       Internetwork Control      Platinum
5          VO       Voice                     Gold
4          VI       Video Traffic             Not in use
3          CA       Critical Applications     Fibre Channel
2          EE       Excellent effort traffic  Silver
1          BK       Background traffic        Bronze
0          BE       Not used                  Best Effort
The MTU maximum (9216) has been set, allowing the edge devices to control frame sizing and reduce the potential for fragmentation, at least within the Cisco UCS domain. Service profiles determine the attributes of the server, including MTU settings and CoS assignment.
Figure 31
Note
The Cisco UCS Host Control "None" setting uses the CoS value associated with the priority selected in
the Priority drop-down list regardless of the CoS value assigned by the host.
The vNIC template uses the QoS policy to defer classification of traffic to the host, or in the enclave architecture, to the Nexus 1000v. Figure 32 is a sample vNIC template where the QoS policy and MTU are defined for any service profile using this template.
Note
If a QoS policy is undefined or not set, the system will use a CoS of 0, which aligns to the best-effort priority.
Figure 32
Figure 33 captures all of the vNIC templates defined for the production servers in the enclave VMware
DRS cluster. Each template uses the QoS_N1k QoS policy and an MTU of 9000. The naming standard
also indicates there is fabric alignment of the vNIC to Fabric Interconnect A or B. Figure 34 is the
example adapter summary for the enclave service profile.
Figure 33
Figure 34
User Management
The Cisco UCS domain is configured to use the RADIUS services of the ISE for user management, centralizing authentication and authorization policy in the organization. The Cisco Identity Services Engine section discusses the user authentication and authorization policy implementation. The following configurations were put in place to achieve this goal.
The following figures step through the UCS integration of ISE RADIUS services. Notice that the figures include the Cisco UCS navigation path.
VMware vSphere
ESXi
The ESXi hosts are uniform in their deployment, employing the FCoE boot practices established in FlexPod. The Cisco UCS service profile is altered to provide six vmnics for use by the hypervisor, as described in the previous section. The following sample from one of the ESXi hosts reflects the UCS vNIC construct and the MTU settings provided by the Cisco Nexus 1000v.
Name    PCI            Driver  Link  Speed           MAC Address        MTU
vmnic0  0000:06:00.00  enic    Up    40000Mbps Full  00:25:b5:02:0a:04  9000
vmnic1  0000:07:00.00  enic    Up    40000Mbps Full  00:25:b5:02:0b:04  9000
vmnic2  0000:08:00.00  enic    Up    40000Mbps Full  00:25:b5:02:5a:04  9000
vmnic3  0000:09:00.00  enic    Up    40000Mbps Full  00:25:b5:02:5b:04  9000
vmnic4  0000:0a:00.00  enic    Up    40000Mbps Full  00:25:b5:02:3a:04  9000
vmnic5  0000:0b:00.00  enic    Up    40000Mbps Full  00:25:b5:02:3b:04  9000
The vmknics vmk0, vmk1, and vmk2 are provisioned for infrastructure services management, vMotion, and VXLAN VTEP, respectively. Notice that the MTU on the VXLAN services interface is set to 1700 to account for the encapsulation overhead of VXLAN.
Interface  Port Group/DVPort  IP Family  IP Address      Netmask        TSO MSS  Enabled  Type
vmk0       38                 IPv4       192.168.250.15  255.255.255.0  65535    true     STATIC
vmk1       740                IPv4       192.168.251.15  255.255.255.0  65535    true     STATIC
vmk2       776                IPv4       192.168.253.15  255.255.255.0  65535    true     STATIC
Each enclave has dedicated NFS, or potentially iSCSI, services available to it in a NetApp Storage Virtual Machine (SVM); vmknics are required to support this transport. The following example shows a number of vmknics attached to distinct subnets, offering L2/L3 isolation of storage services to the enclave.
Interface  Port Group/DVPort  IP Family  IP Address    Netmask        Broadcast      MAC Address        MTU   TSO MSS  Enabled  Type
vmk7       516                IPv4       192.168.3.15  255.255.255.0  192.168.3.255  00:50:56:63:a4:0c  9000  65535    true     STATIC
vmk8       548                IPv4       192.168.4.15  255.255.255.0  192.168.4.255  00:50:56:6a:51:9e  9000  65535    true     STATIC
vmk9       580                IPv4       192.168.5.15  255.255.255.0  192.168.5.255  00:50:56:64:cd:2b  9000  65535    true     STATIC
vmk10      612                IPv4       192.168.6.15  255.255.255.0  192.168.6.255  00:50:56:62:77:7a  9000  65535    true     STATIC
vmk11      644                IPv4       192.168.7.15  255.255.255.0  192.168.7.255  00:50:56:68:64:41  9000  65535    true     STATIC
vmk12      676                IPv4       192.168.8.15  255.255.255.0  192.168.8.255  00:50:56:6c:b4:85  9000  65535    true     STATIC
vmk13      708                IPv4       192.168.9.15  255.255.255.0  192.168.9.255  00:50:56:62:05:7a  9000  65535    true     STATIC
Two DRS virtual machine rules were created defining the acceptable positioning of VSG services on the DRS cluster. As shown, the previously created DRS cluster VM and host groups are used to define two distinct placement policies in the cluster, essentially removing the ESXi host as a single point of failure for the identified services (VMs).
NetApp FAS
This section of the document builds on the FlexPod Data Center foundation to enable the creation of an enclave Storage Virtual Machine (SVM).
1. Build VLAN interfaces for NFS, iSCSI, and management on each node's interface group, and set appropriate MTUs.
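The commands for this and the following steps appear as screenshots in the original; a representative clustered Data ONTAP sketch for this step, assuming node sea-fas-01, interface group a0a, and the Enclave1 NFS VLAN 201, follows:
network port vlan create -node sea-fas-01 -vlan-name a0a-201
network port modify -node sea-fas-01 -port a0a-201 -mtu 9000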
2.
3.
4.
5. Turn on the SVM NFS vstorage parameter to enable NFS VAAI plug-in support.
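A representative command for this step, assuming an SVM named enclave1-svm, might be:
vserver nfs modify -vserver enclave1-svm -vstorage enabled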
6.
7.
8. Create a valid self-signed security certificate for the SVM or install a certificate from a Certificate Authority (CA).
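A representative command for generating the self-signed certificate, with an illustrative common name, might be:
security certificate create -vserver enclave1-svm -common-name enclave1-svm.corp.local -type server -size 2048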
9. Secure the SVM default export policy: create an SVM export policy and assign it to the SVM root volume.
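A representative sketch, assuming the SVM enclave1-svm, a policy named enclave1-nfs, and the Enclave1 NFS subnet, follows:
vserver export-policy create -vserver enclave1-svm -policyname enclave1-nfs
vserver export-policy rule create -vserver enclave1-svm -policyname enclave1-nfs -clientmatch 192.168.3.0/24 -rorule sys -rwrule sys -superuser sys -protocol nfs
volume modify -vserver enclave1-svm -volume rootvol -policy enclave1-nfs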
10. Create datastore volumes while assigning the junction path and export policy, and update the load-sharing mirrors.
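A representative sketch for this step, with illustrative volume and aggregate names, follows:
volume create -vserver enclave1-svm -volume enc1_datastore_1 -aggregate aggr1_node01 -size 500g -state online -junction-path /enc1_datastore_1 -policy enclave1-nfs
snapmirror update-ls-set -source-path enclave1-svm:rootvol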
Cisco Cyber Threat Defense for the Data Center Solution: First Look Guide at
http://www.cisco.com/en/US/solutions/collateral/ns340/ns414/ns742/ns744/docs/ctd-first-look-design-guide.pdf
This guide provides design details and guidance for detecting threats already operating in an internal
network or data center.
Cluster Mode
The ASA cluster model uses the Cisco Nexus 7000 as an aggregation point for the security service. Figure 35 details the connections. The figure shows four physical ASA devices connected to two Nexus 7000 switches; the four Nexus switch images represent the same two Nexus 7000 VDCs, 7000-A and 7000-B. The ASA clustered data links were configured as a spanned EtherChannel using a single port channel, PC-2, that supports both inside and outside VLANs. These channels connect to the pair of Nexus 7000s using a virtual PortChannel (vPC), vPC-20. The EtherChannel aggregates the traffic across all the available active interfaces in the channel. A spanned EtherChannel accommodates both routed and transparent firewall modes, in addition to single or multiple-context operation. The EtherChannel inherently provides load balancing as part of basic operation using the Cluster Link Aggregation Control Protocol (cLACP).
Figure 35
The cluster control links are local EtherChannels configured on each ASA device. In this example, each ASA port channel, PC-1, is dual-homed to the Nexus 7000 switches using vPC. A distinct vPC is defined on the Nexus 7000 pair to provide control traffic high availability. The cluster control links do not carry any enclave traffic VLANs; a single VLAN supports the cluster control traffic. In the following example it is defined as VLAN 20.
Nexus 7000-A
feature vpc
role priority 10
peer-gateway
auto-recovery
interface port-channel20
  vpc 20
interface port-channel21
  vpc 21
vlan 20
  name ASA-Cluster-Control
interface port-channel22
  vpc 22
interface port-channel23
  vpc 23
interface port-channel24
  vpc 24

Nexus 7000-B
feature vpc
role priority 20
peer-gateway
auto-recovery
interface port-channel20
  vpc 20
interface port-channel21
  vpc 21
vlan 20
  name ASA-Cluster-Control
interface port-channel22
  vpc 22
interface port-channel23
  vpc 23
interface port-channel24
  vpc 24
The ASA cluster defines the same interface configuration across the nodes to support the local and spanned EtherChannel configurations. The vss-id command is a locally significant ID for the ASA to use when connected to the vPC switches. It is important that the corresponding interfaces of each node connect to the same switch; in this case, all of the T0/8 interfaces attach to Cisco Nexus 7000-A and all of the T0/9 interfaces to Cisco Nexus 7000-B.
ASA Cluster
interface TenGigabitEthernet0/6
channel-group 1 mode active
!
interface TenGigabitEthernet0/7
channel-group 1 mode active
!
interface TenGigabitEthernet0/8
channel-group 2 mode active vss-id 1
!
interface TenGigabitEthernet0/9
channel-group 2 mode active vss-id 2
!
With the local and spanned EtherChannels formed, enclave data VLANs can be assigned through the sub-interface construct. The sample configuration shows three enclave data VLANs assigned to the spanned EtherChannel, port channel 2. The traffic is balanced across the bundled interfaces.
ASA Cluster
interface Port-channel1
description Clustering Interface
!
interface Port-channel2
description Cluster Spanned Data Link to PC-20
port-channel span-cluster vss-load-balance
!
interface Port-channel2.2001
description Enclave1-outside
vlan 2001
!
interface Port-channel2.2002
description Enclave2-outside
vlan 2002
!
interface Port-channel2.2003
description Enclave3-outside
vlan 2003
!
The Cisco ASA management traffic uses dedicated interfaces into the management domain. In a multiple-context environment this physical interface can be shared across virtual contexts through 802.1Q sub-interfaces. The trunked management interface allows each security context to have its own management interface. The IPS sensor on each platform has its own dedicated interface with connections into the management infrastructure.
Figure 36
ASA Cluster
interface Management0/0
!
interface Management0/0.101
description <<** Enclave 1 Management **>>
vlan 101
!
interface Management0/0.102
description <<** Enclave 2 Management **>>
vlan 102
!
interface Management0/0.103
description <<** Enclave 3 Management **>>
vlan 103
!
interface Management0/0.164
description <<** Cluster Management Interface **>>
vlan 164
!
The enclave model uses the ASA in multiple-context mode. The ASA is partitioned into multiple virtual devices, known as security contexts. Each context acts as an independent device, with its own security policy, interfaces, and administrators. Multiple contexts are similar to having multiple standalone devices, each dedicated to an enclave. The contexts are defined at the system level.
The primary administrative context is the "admin" context. This context is assigned a single management sub-interface for security operations.
Within the admin context, a pool of cluster addresses must be created for distribution to slave nodes as they are added to the ASA cluster. This IP pool construct is repeated for each security context created in the ASA cluster. In this example, a pool of four IP addresses is reserved for the admin context, indicating a four-node maximum configuration.
The management interface uses the sub-interface assigned in the system context. The cluster IP (172.26.164.191) is assigned and is "owned" only by the master node.
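The admin context configuration appears as a screenshot; a representative sketch, with an illustrative mapped interface name and an address pool surrounding the documented cluster IP, follows:
ASA Cluster (admin context)
ip local pool admin-pool 172.26.164.192-172.26.164.195 mask 255.255.255.0
!
interface Mgmt164
  management-only
  nameif management
  security-level 100
  ip address 172.26.164.191 255.255.255.0 cluster-pool admin-pool
!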
The cluster can now be instantiated in the system context. In this example, the K02-SEA ASA cluster is created on ASA-1. The cluster interface characteristics and associated attributes are defined, and this is repeated on each node of the cluster.
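The instantiation appears as a screenshot; a representative sketch, assuming an illustrative cluster control subnet on VLAN 20, follows:
ASA Cluster
cluster group K02-SEA
  local-unit ASA-1
  cluster-interface Port-channel1 ip 10.0.20.1 255.255.255.0
  priority 1
  key *****
  enable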
This configuration is repeated on each node added to the cluster. Notice the IP address is different for
the second node.
The Cisco ASDM Cluster Dashboard provides an overview of the cluster and role assignments.
Security Contexts
The security contexts are defined in the system context and allocated network resources. These resources were previously defined as sub-interfaces in the spanned EtherChannel. Names can be attached to the interfaces for use within the security context; in this sample, Mgmt, outside, and inside are in use.
The Cisco ASDM reflects each security context as an independent firewall. Each of these contexts is configured and active on each node in the cluster.
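A representative system-context definition for Enclave2 follows; the inside sub-interface shown is illustrative, as the inside VLAN assignments are not reproduced in the original:
ASA Cluster (system context)
context Enclave2
  allocate-interface Management0/0.102 Mgmt102
  allocate-interface Port-channel2.2002 outside
  allocate-interface Port-channel2.3002 inside
  config-url disk0:/enclave2.cfg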
Within the context, the operational mode is defined as routed or transparent. The security context requires its own management IP pool that is used by each Enclave2 instance across the ASA nodes in the cluster. The example below creates the IP pool enclave2-pool and assigns it to the Mgmt102 interface. The 10.0.102.100 address is the cluster IP interface. ASDM and CSM may access the system or enclave through the shared IP address. Records sourced from the ASA system or enclave reflect the locally significant address assigned through the pool construct.
ASA Cluster
firewall transparent
hostname Enclave2
!
ip local pool enclave2-pool 10.0.102.101-10.0.102.104 mask 255.255.255.0
!
interface BVI1
description Enclave2 BVI
ip address 10.2.1.251 255.255.255.0
!
interface Mgmt102
management-only
nameif management
security-level 0
ip address 10.0.102.100 255.255.255.0 cluster-pool enclave2-pool
!
interface outside
nameif outside
bridge-group 1
security-level 0
!
interface inside
nameif inside
bridge-group 1
security-level 100
!
The following command output indicates that Enclave2 is defined as a transparent security context across the cluster.
ASA Cluster
K02-ASA-Cluster/Enclave2# cluster exec show context
ASA-1(LOCAL):*********************************************************
Context Name   Class     Interfaces               Mode          URL
Enclave2       default   inside,Mgmt102,outside   Transparent   disk0:/enclave2.cfg
ASA-3:****************************************************************
Context Name   Class     Interfaces               Mode          URL
Enclave2       default   inside,Mgmt102,outside   Transparent   disk0:/enclave2.cfg
ASA-4:****************************************************************
Context Name   Class     Interfaces               Mode          URL
Enclave2       default   inside,Mgmt102,outside   Transparent   disk0:/enclave2.cfg
ASA-2:****************************************************************
Context Name   Class     Interfaces               Mode          URL
Enclave2       default   inside,Mgmt102,outside   Transparent   disk0:/enclave2.cfg
ISE Integration
The Cisco ASA security contexts communicate with the ISE over RADIUS for AAA and Cisco TrustSec related services. The AAA server group is created, and the ISE nodes are referenced with secure keys and passwords that are similarly defined on the ISE platform. The AAA authentication can then be assigned to connection types.
ASA Cluster
aaa-server ISE_Radius_Group protocol radius
aaa-server ISE_Radius_Group (management) host 172.26.164.187
key *****
radius-common-pw *****
aaa-server ISE_Radius_Group (management) host 172.26.164.239
key *****
radius-common-pw *****
!
aaa authentication enable console ISE_Radius_Group
aaa authentication http console ISE_Radius_Group LOCAL
aaa authentication ssh console ISE_Radius_Group LOCAL
Figure 37
Cisco TrustSec
As shown in Figure 38, each ASA security context communicates with the Cisco ISE platform and maintains its own database to enforce role-based access control policies. In Cisco TrustSec terms, the ASA is a Policy Enforcement Point (PEP) and the Cisco ISE is a Policy Decision Point (PDP). The ISE PDP shares the security group name and tag mappings (the security group table) with the ASA through a secure Protected Access Credential (PAC) RADIUS transaction. This information is commonly referred to as Cisco TrustSec environment data. The PDP provides Security Group Tag (SGT) information to build access policies in the ASA as the PEP.
The ASA PEP learns identity information through the Security group eXchange Protocol (SXP), which can come from multiple sources. The ASA creates a database to house the IP-to-SGT mappings. Only the master cluster unit learns security group tag (SGT) information; the master unit then populates the SGT information to the slaves, and the slaves can make match decisions for SGTs based on the security policy.
The following example references the ISE server group and establishes a connection to the group
through the shared cluster IP address 10.0.102.100. The ASA establishes two SXP connections to the
Nexus switches and listens for IP-to-SGT updates.
ASA Cluster
cts server-group ISE_Radius_Group
cts sxp enable
cts sxp default password *****
cts sxp default source-ip 10.0.102.100
cts sxp connection peer 172.26.164.218 password default mode local listener
cts sxp connection peer 172.26.164.217 password default mode local listener
Note
The ASA can also be configured as an SXP speaker to share data with the other members of the CTS
infrastructure.
Figure 38
The ASA as a PEP uses the security groups to create security policies. The following images capture rule creation through the Cisco ASDM. Notice the Security Group object as a criterion for both source and destination in Figure 39.
Figure 39
When selecting the Source Group as a criterion, the Security Group Browser window becomes available. This window lists all available security groups and their associated tags. The ability to filter on the security group name streamlines the creation of access control policy. In this case, the interesting tags are those for Enclave2.
Figure 40
Figure 41 is an example of the Security Group access rules. The role-based rules simplify rule creation
and understanding. The associated CLI is provided for completeness.
Figure 41
The Cisco ASA also exports flow data to the Lancope FlowCollector (172.26.164.240) using NetFlow Secure Event Logging (NSEL), configured in the global policy:
policy-map global_policy
class class-default
flow-export event-type all destination 172.26.164.240
!
flow-export template timeout-rate 2
logging flow-export syslogs disable
In a clustered ASA deployment, the local sensor monitors the traffic local to its ASA node; there is no traffic redirection or sharing across sensors in the cluster. This lack of IPS collaboration in the cluster configuration can prevent detection of certain types of scans, as the traffic may traverse a number of IPS devices due to load balancing across the ASA cluster.
The IPS implementation is fully documented in the Cisco Secure Data Center for Enterprise
Implementation Guide at
http://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise/design-zone-security/sdc-ig.pdf
Figure 45 depicts the implementation of the VSG in the enclave architecture. This deployment is based on a single FlexPod with Layer 2 adjacency; Layer 3 VSG implementations are also supported for more distributed environments. The VSG VLAN provisioning is as follows:
VLAN             Description
Management VLAN  Supports VMware vCenter, the Cisco Virtual Network Management Center, the Cisco Nexus 1000V VSM, and the managed Cisco VSGs
Service VLAN     Supports the Cisco Nexus 1000V VEM and Cisco VSGs. All the Cisco VSGs are part of the Service VLAN, and the VEM uses this VLAN for its interaction with Cisco VSGs
HA VLAN          Supports the heartbeat and state communication between the active and standby Cisco VSG nodes
Figure 46 captures the Enclave1 VSG network interface details. The HA service, which monitors state through heartbeats, has no IP addressing. The management interface is in the appropriate subnet, while the data (vPath) service has an IP of 111.111.111.111. This IP is only used to resolve the MAC address of the VSG; all other communication or redirection of enclave data traffic occurs at Layer 2. Layer 3 based vPath would require a vmknic with Layer 3 capabilities enabled.
Figure 46
The VSG firewall is assigned at the "Tenant" level in Cisco VSG terminology. The Tenant is defined as an Enclave instance. Figure 47 depicts enc1-vsg assigned to the Enclave1 "Tenant". It is recommended to provision the VSG in HA mode as shown below.
Note
It is not recommended to use VMware High Availability (HA) or fault tolerance with the Cisco VSG. Instead, use an HA pair of VSGs and VMware DRS groups as described in the DRS for Virtual Service Nodes section of this document. In situations where neither the primary nor the standby Cisco VSG is available to vPath, configure the failure mode as Fail Open or Fail Close as dictated by the security requirements of the enclave.
Figure 47
Three security profiles were created for the n-tier application, Microsoft SharePoint 2013, hosted within Enclave1. Each security profile is created within the PNSC and associated with a port profile.
The primary recommendation for SharePoint 2013 is to secure inter-farm communication by blocking the default ports (TCP 1433 and UDP 1434) used for SQL Server communication and establishing custom ports for this communication instead. The VSG enc1-db security profile uses an ACL to drop this service traffic.
Note
Security policies may be applied at the Virtual Data Center (VDC) or application role level. This level of granularity was not used in the Enclave framework but is certainly a viable option for an organization. A policy may be applied at a global or "root" level, at the Tenant level (Enclave), or at the VDC or application layer defined within PNSC. The definition of these layers and the assignment of firewall resources can become very granular for tight application security controls.
Figure 48
The remaining sections of this document capture the configurations that address the administrative and policy functionality implemented in the enclave. The primary areas of focus include:
Network Resources
Identity Management
Policy Elements
Authentication Policy
Authorization Policy
Note
The ISE is a powerful tool, and the configuration and capabilities captured in this document only scratch the surface. It is recommended that readers use the reference documents to fully explore the ISE platform.
Figure 49
Network Devices
Figure 50 summarizes the network device definitions and required elements. Figure 51 is the expanded view of the default RADIUS authentication settings for the device. These fields should correspond to the RADIUS definitions provided in each of the network element definitions. The name should be identical to the hostname of the device.
Figure 50
Figure 51
Figure 52 is the form for enabling Cisco TrustSec for a particular device. This section defines the Security Group Access attributes for the newly added network device. The PAC file is generated from this page to secure communications between the ISE and the network device.
Figure 52
Connection
The connection to the Active Directory external identity store is established by providing the domain and a locally significant name for the data source. Figure 53 shows the connection between the ISE active/standby pair and the CORP domain. After joining the domain, the Cisco ISE can access user, group, and device data.
Figure 53
Groups
The Active Directory connection allows the Cisco ISE to use the repository's group constructs. These groups can be referenced in authentication rules. For example, Figure 54 shows four groups defined in AD being used by the ISE.
Figure 54
Figure 55 is a snippet of the form to add these groups to the Cisco ISE. Notice the groups previously
selected.
Figure 55
The ISE can draw on several identity sources, including:
Internal Users
Guest Users
Active Directory
LDAP
RSA
The ISE uses a first-match policy across the identity sources for authentication and authorization purposes.
AD Sequence
The Active Directory service sequence is added referencing the previously joined domain. This sequence
will be used during authentication policy creation. Figure 56 illustrates the addition of an
"AD_Sequence" using the previously joined AD domain as an identity source.
Figure 56
Policy Elements: Results
The following policy elements were defined in the Secure Enclave architecture:
Authorization Profiles
Authorization Profiles
Policy elements are the components that construct the policies associated with authentication,
authorization, and secure group access. Authorization profiles define policy components related to
permissions. Authorization profiles are used when creating authorization policies. The authorization
profile returns permission attributes when the RADIUS request is accepted.
Figure 57 captures some of the default and custom authorization profiles used during validation. Figure 58 details the UCS_Admins profile; upon authentication, the UCS admin role is assigned through the cisco-av-pair RADIUS attribute value. Note that the cisco-av-pair value varies based on the Cisco device type; please refer to device-specific documentation for the proper syntax.
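For Cisco UCS Manager, the returned attribute takes a form similar to the following sketch (the role name shown is illustrative):
cisco-av-pair = shell:roles="admin"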
Figure 57
Figure 58
The integration of ISE into each network device's configuration is required. Please refer to the individual component sections for ISE or RADIUS configuration details.
Figure 59
Figure 60
Authentication Policy
The Cisco ISE authentication policy defines the acceptable communication protocol and identity source for network device authentication. This policy is built using conditions or device attributes previously defined, such as device type or location, as well as the acceptable network protocol. Figure 61 shows the authentication policy associated with the UCS system. Essentially, the rule states that if the device type is UCS and the communication uses the password authentication protocol (Pap_ASCII), the identity source defined in the AD_Sequence is used.
Figure 61
Figure 62 illustrates the definition of multiple ISE authentication policies each built to meet the specific
needs of the network device and the overall organization.
Figure 62
Authorization Policy
The ISE authorization policy enables the organization to set specific privileges and access rights based on any number of conditions. If the conditions are met, a permission level, or authorization profile, is assigned to the user and applied to the network device being accessed. For example, in Figure 63 the UCS Admins authorization policy has a number of conditions that must be met, including location, access protocol, and Active Directory group membership, before the UCS Admins authorization profile's permissions are assigned to that user session. The Cisco ISE allows organizations to capture the context of a user session and make decisions more intelligently. Figure 64 shows that multiple authorization policies are supported.
Figure 63
Figure 64
Cisco NetFlow Generation Appliance
The following screenshots capture a single NGA configuration used in the enclave validation effort. The NGA redirects all traffic to the Lancope FlowCollector at 172.26.164.240. The Figure 66 screenshot shows the collector defined using the quick form.
Figure 66
Figure 69, Example NGA Monitor Definition, shows the result of the quick setup implemented in the enclave architecture. The Lancope monitor is created with all four data ports of mirrored traffic being sent to the Lancope flow collector.
Figure 69
The complete design considerations and implementation details of the CTD system validated in this effort are captured in Cisco Cyber Threat Defense for the Data Center Solution at
http://www.cisco.com/en/US/solutions/collateral/ns340/ns414/ns742/ns744/docs/ctd-first-look-designguide.pdf
Conclusion
There are many challenges facing organizations today, including changing business models where workloads are moving to the cloud and users demand ubiquitous access from any device. This new reality places pressure on organizations to address a larger, dynamic threat landscape with consistent security policy and enforcement, where the perimeter of the network is no longer clearly defined. The edge of the data center has become vague.
The Secure Enclave architecture proposes a standard approach to application security. The Secure Enclave extends the FlexPod Data Center infrastructure by integrating and enabling security technologies uniformly, allowing application-specific policies to be consistently enforced. The standardization on the Enclave model facilitates operational efficiencies through automation. The Secure Enclave architecture allows the enterprise to consume the FlexPod infrastructure securely and address the complete attack continuum, from user to application.
References
Cisco Secure Enclaves Architecture Design Guide at
http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-manager/whitepaperc07-731204.html
Cisco Secure Data Center for Enterprise Solution Design Guide at
http://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise/design-zone-security/sdc-dg.pdf
Cisco Secure Data Center for Enterprise Implementation Guide at
http://www.cisco.com/c/dam/en/us/solutions/collateral/enterprise/design-zone-security/sdc-ig.pdf
Cisco Cyber Threat Defense for the Data Center Solution: First Look Guide at
http://www.cisco.com/en/US/solutions/collateral/ns340/ns414/ns742/ns744/docs/ctd-first-look-designguide.pdf