
IBM Flex System Solution for

Microsoft Hyper-V

Configuration and Implementation Guide using IBM
Flex System x240 Compute Nodes and Flex System
V7000 Storage Node with Converged Network
Infrastructure running Windows Server 2012 and
System Center 2012 SP1


Scott Smith
David Ye



Contents
IBM Flex System Solution for Microsoft Hyper-V .................................................................... 1
Business Problem ............................................................................................................................ 5
Business Value ................................................................................................................................ 5
Intended Audience .......................................................................................................................... 5
IBM Flex System Solution for Microsoft Hyper-V ........................................................................ 6
Components .................................................................................................................................... 7
IBM Flex System Enterprise Chassis ......................................................................................... 7
IBM Flex System Chassis Management Module........................................................................ 7
IBM Flex System x240 Compute Node ...................................................................................... 8
IBM Flex System V7000 Storage Node ..................................................................................... 9
IBM Flex System CN4093 Switches ........................................................................................ 10
IBM Flex System Manager ....................................................................................................... 10
Microsoft Windows Server 2012 .............................................................................................. 11
Microsoft System Center 2012 SP1 .......................................................................................... 11
Best Practice and Implementation Guidelines .............................................................................. 11
Racking and Power Distribution ................................................................................................... 11
Networking and VLANs ............................................................................................................... 12
Flex System Switch Positions and Network Connections ........................................................ 12
VLAN Description .................................................................................................................... 15
Management Cluster Private and CSV Networks (VLAN 30) ............................................. 16
Management Cluster Live Migration Network (VLAN 31) ................................................. 16
SQL Guest Cluster Private Network (VLAN 33) ................................................................. 16
SCVMM Guest Cluster Private Network (VLAN 35) .......................................................... 16
Production Cluster Private and CSV Networks (VLANs 37) ............................................... 16
Production Cluster Live Migration Network (VLAN 38) .................................................... 16
Cluster Public Network (VLAN 40) ..................................................................................... 16
Production VM Communication Network (VLAN 50) ........................................................ 17
Out of Band Management Network (VLAN 70) .................................................................. 17
FCoE Storage Network (VLAN 1002) ................................................................................. 17
Inter Switch Link Network (VLAN 4094)............................................................................ 17
x240 Compute Node Network Ports ......................................................................................... 17
Physical Host Data Access ............................................................................................... 17
Physical Host FCoE Storage Access .............................................................................. 18
Storage Controller FCoE Access ..................................................................................... 18
IBM Flex System CN4093 Converged Ethernet Configuration ............................................... 18
Using ISCLI to configure CN4093 switches ........................................................................ 23
Active Directory............................................................................................................................ 29
IBM Flex System V7000 Storage Node ....................................................................................... 29
Overview ................................................................................................................................... 29
Internal Flex Chassis Connections ............................................................................................ 29
Management .............................................................................................................................. 30
IBM Flex System V7000 Storage Node and Cluster Storage Considerations .......................... 30
Storage Pool and Volume Configuration .................................................................................. 31
Host Server Definition and Volume Mapping .......................................................................... 33
IBM Flex System x240 Management Fabric Setup ...................................................................... 34
Pre-OS Installation .................................................................................................................... 35
OS Installation and Configuration ............................................................................................ 38
Network Configuration ............................................................................................................. 39
Host Storage Connections ......................................................................................................... 42
Management Host Cluster Creation .......................................................................................... 43
Virtual Machine Fibre Channel Storage Connections .............................................................. 45
Virtual Machine Setup and Configuration ................................................................................ 48
System Center 2012 SP1 Setup and Configuration ...................................................................... 49
SQL Server 2012 Setup and Configuration .............................................................................. 49
SQL Clustered Instances ....................................................................................................... 50
SQL Cluster Storage ............................................................................................................. 51
SQL Server Guest Clustering................................................................................................ 52
System Center Virtual Machine Manager 2012 SP1 Setup and Configuration ........................ 53
SCVMM Guest Clustering for Virtual Machine Manager ................................................... 54
IBM Pro Pack for Microsoft System Center Virtual Machine Manager .............................. 54
Flex System V7000 Storage Automation with SMI-S .......................................................... 54
Bare Metal Provisioning ....................................................................................................... 57
System Center Operations Manager 2012 SP1 Setup and Configuration ................................. 57
IBM Upward Integration Modules for Microsoft System Center Operations Manager ....... 58
IBM Pro Pack for Microsoft System Center Virtual Machine Manager .............................. 58
IBM Flex System V7000 Storage Management Pack for Microsoft System Center
Operations Manager .............................................................................................................. 58
System Center Orchestrator 2012 SP1 Setup and Configuration ............................................. 59
System Center Service Manager 2012 SP1 Setup and Configuration ...................................... 60
WSUS Server Setup and Configuration .................................................................................... 61
Cluster Aware Updating Setup and Configuration ................................................................... 62
IBM Flex System x240 Compute Node Setup .............................................................................. 63
Pre-OS Installation .................................................................................................................... 63
OS Installation and Configuration ............................................................................................ 63
Network Configuration ............................................................................................................. 64
Host Storage Connections ......................................................................................................... 67
Compute Host Cluster Creation ................................................................................................ 67
Summary ....................................................................................................................................... 69
Appendix ....................................................................................................................................... 70
Related Links ............................................................................................................................ 70
Bill of Materials ........................................................................................................................ 72
Networking Worksheets............................................................................................................ 74
Switch Configuration ................................................................................................................ 75
Switch-1 ................................................................................................................................ 75
Switch-2 ................................................................................................................................ 83
PowerShell Scripts .................................................................................................................... 91
Management Node Network Configuration .......................................................................... 91
Compute Node Network Configuration ................................................................................ 92
Network Address Tables ........................................................................................................... 94
The team who wrote this paper ..................................................................................................... 96
Trademarks and special notices .................................................................................................... 97

Business Problem
Today's IT managers are looking for efficient ways to manage and grow their IT infrastructure with
confidence. Good IT practices recognize the need for high availability, simplified management, and cost
containment through maximum resource utilization. CIOs need to respond rapidly to changing business
needs with simple, easily deployed configurations that can scale on demand.
Natural disasters, malicious attacks, and even simple software upgrade patches can cripple services and
applications until administrators resolve the problems and restore any backed up data. The challenge of
maintaining healthy systems and services only becomes more critical as businesses consolidate physical
servers into a virtual server infrastructure to reduce data center costs, maximize utilization, and increase
workload performance.

Business Value
The IBM Flex System™ Solution for Microsoft Hyper-V provides businesses with an affordable,
interoperable, and reliable industry-leading virtualization solution. This IBM Flex System based
offering, built around the latest IBM Flex System Compute Nodes, storage, and networking, takes the
complexity out of the solution with this step-by-step implementation guide. Validated under the Microsoft
Private Cloud Fast Track program, this IBM reference architecture combines Microsoft software,
consolidated guidance, and validated configurations for computing, network, and storage. This reference
architecture provides certain minimum levels of redundancy and fault tolerance across the servers,
storage, and networking for the Windows Servers to help ensure a defined level of fault tolerance while
managing pooled resources.

Pooling computing, networking, and storage capacity with Microsoft Hyper-V in a Windows Failover
Cluster helps eliminate single points of failure, so users have near-continuous access to important server-
based, business-productivity applications. An independent cluster hosting the management fabric, based
on Microsoft System Center 2012 SP1 with IBM upward integration components, provides an
environment to deploy, maintain, and monitor the production private cloud.

IT administration can be improved by simplifying the hardware configuration to a corporate standard with
automated deployment and maintenance practices. Templates of pre-configured virtual machines can be
saved and deployed rapidly through self-service portals to the end customers. Virtual machines can be
migrated among clustered host servers to support resource balancing, scheduled maintenance, and in
the event of unplanned physical or logical outages, virtual machines can automatically be restarted on the
remaining cluster nodes. As a result, clients will minimize downtime, making this seamless operation
attractive to organizations that are trying to create new business and maintain healthy service level
agreements.

Intended Audience
This reference configuration and implementation guide targets organizations implementing
Hyper-V and IT engineers familiar with the hardware and software that make up the IBM Virtualization
Reference Architecture with Microsoft Hyper-V. Additionally, the System x sales teams and their
customers evaluating or pursuing Hyper-V virtualization solutions will benefit from this previously
validated configuration. Comprehensive experience with the various reference configuration
technologies is recommended.

IBM Flex System Solution for Microsoft
Hyper-V
Microsoft Hyper-V technology continues to gain competitive traction as a key component in many
customer virtualization environments. Hyper-V is a standard component in the Windows Server 2012
Standard and Datacenter editions. Windows Server 2012 Hyper-V virtual machines (VMs) support up
to 64 virtual processors and up to 1TB of memory.

Individual virtual machines (VMs) have their own operating system instance and are completely isolated
from the host operating system as well as other VMs. VM isolation helps promote higher business-critical
application availability, while the Microsoft failover clustering feature found in Windows Server 2012
can dramatically improve production system uptime.

IT administration can be improved by simplifying the hardware configuration to a corporate standard with
automated deployment and maintenance practices. Templates of pre-sized virtual machines can be
saved and rapidly deployed from self-service portals to compute nodes. Virtual machines can be migrated
among clustered host servers to support resource balancing and scheduled maintenance, and in the event
of unplanned physical or logical outages, virtual machines can be restarted automatically on the
remaining cluster nodes. As a result, clients can minimize downtime. This seamless operation is attractive
for organizations trying to develop or expand new business opportunities and maintain healthy service
level agreements.

This Hyper-V Reference Architecture and Implementation guide provides ordering, setup and
configuration details for the IBM highly available virtualization compute environment that has been
validated as a Microsoft Hyper-V Fast Track Medium configuration. The Microsoft Hyper-V Fast Track
Medium configuration provides a validated 2-node clustered management fabric built around Microsoft
System Center 2012 SP1, and an 8-node clustered compute fabric for deployment of production
resources. This is ideal for large organizations that are ready to take their virtualization to the next level.
The design consists of ten IBM Flex System x240 Compute Nodes, attached to an IBM Flex System
V7000 Storage Node. Networking leverages the Flex System CN4093 converged switches. This fault
tolerant hardware configuration is clustered using Microsoft's Windows Server 2012. This configuration
can be expanded to multiple chassis for additional compute capacity or storage.

A short summary of the IBM Hyper-V Reference Architecture software and hardware components is listed
below, followed by best practice implementation guidelines.

The IBM Hyper-V Reference Configuration is constructed with the following enterprise-class components:
• One IBM Flex System Enterprise Chassis
• Ten IBM Flex System x240 Compute Nodes in a Windows Failover Cluster running Hyper-V
  o Two IBM Flex System Compute Nodes will be used to build the highly available management fabric.
  o Eight IBM Flex System Compute Nodes will be used to build a highly available virtualization cluster.
• One IBM Flex System V7000 Storage Node with dual controllers (V7000 expansion options available)
• Two IBM Flex System CN4093 switches providing fully converged and redundant networking for data and storage (FCoE)

Together, these software and hardware components form a high-performance, cost-effective solution that
supports Microsoft Hyper-V environments for most business-critical applications and many custom third-
party solutions. Equally important, these components meet the criteria set by the Microsoft Private Cloud Fast
Track program, which promotes robust virtualization environments to help satisfy even the most
demanding virtualization requirements. A diagram of the overall configuration is illustrated in Figure 1.

Figure 1. IBM Hyper-V Fast Track reference configuration

Components
This highly available IBM virtualization architecture is comprised of the IBM Flex System Enterprise
chassis with IBM Flex System CN4093 converged switches, IBM Flex System x240 Compute Nodes
running Microsofts Windows Server 2012 operating system, and the IBM Flex System V7000 Storage
Node. Each component provides a key element to the overall solution.

IBM Flex System Enterprise Chassis
The IBM Flex System Enterprise Chassis is a simple, integrated infrastructure platform that supports a
combination of compute, storage, and networking resources to meet the demands of your application
workloads. Additional chassis can be added as workloads increase. The 14-node, 10U chassis
delivers high-speed performance that is complete with integrated servers, storage, and networking. This
flexible chassis is designed for a simple deployment now, and for scaling to meet future needs. With the
optional IBM Flex System Manager, multiple chassis can be monitored from a single screen. In addition,
the optional IBM Upward Integration Modules (UIM) for Microsoft System Center provides the integration
of the management features of the Flex System into an existing Microsoft System Center environment.
These IBM upward integration modules enhance Microsoft System Center server management
capabilities by integrating IBM hardware management functionality, providing affordable, basic
management of physical and virtual environments and reducing the time and effort required for routine
system administration. The UIM provides discovery, deployment, configuration, monitoring, event
management, and power monitoring needed to reduce cost and complexity through server consolidation
and simplified management.

IBM Flex System Chassis Management Module
The IBM Flex System Chassis Management Module (CMM) is a hot-swap module that configures and
manages all installed chassis components. The CMM provides resource discovery, inventory, monitoring,
and alerts for all compute nodes, switches, power supplies, and fans in a single chassis. The CMM
provides a communication link with each component's management processor to support power control
and out of band remote connectivity, as shown in Figure 2.


Figure 2. CMM Management Network

Note: The default IP address for the CMM is 192.168.70.100.
Default UserID and Password: USERID / PASSW0RD (with a zero)

IBM Flex System x240 Compute Node
At the core of this IBM reference configuration for Hyper-V, the IBM Flex System x240 Compute Nodes
deliver the performance and reliability required for virtualizing business-critical applications in Hyper-V
environments. To provide the expected virtualization performance for handling any Microsoft production
environment, IBM Flex System x240 Compute Nodes can be equipped with up to two eight-core E5-2600
series processors and up to 768GB of memory. The IBM Flex System x240 includes an on-board RAID
controller and the choice of either hot swap SAS or SATA disks as well as SFF hot swap solid state
drives. Two I/O slots provide ports for both data and storage connections through the Flex Enterprise
chassis switches. The x240 also supports remote management via the IBM Integrated Management
Module (IMM) which enables continuous out of band management capabilities. All of these key features,
including many that are not listed, help solidify the dependability that IBM customers have grown
accustomed to with System x servers.

By virtualizing with Microsoft Hyper-V technology on IBM Flex System x240 Compute Nodes (Figure 3),
businesses reduce physical server space, power consumption and the total cost of ownership (TCO).
Virtualizing the server environment can also result in lower server administration overhead, giving
administrators the ability to manage more systems than in a physical server environment. Highly available
critical applications residing on clustered host servers can be managed with greater flexibility and minimal
downtime with Microsoft Hyper-V Live Migration capabilities.



Figure 3. IBM Flex System x240

IBM Flex System V7000 Storage Node
IBM Flex System V7000 Storage Node combines best-of-breed storage development with leading 1/10
GbE iSCSI, FCoE, or FC host interfaces and SAS/SSD drive technology. With its simple, efficient and
flexible approach to storage, the Flex V7000 Storage Node is a cost-effective complement to the IBM Flex
System. The Flex V7000 Storage Node delivers superior price/performance ratios, functionality,
scalability, and ease of use for the mid-range storage user by offering substantial features at a price that
fits most budgets.

The V7000 Storage Node offers the ability to:
• Automate and speed deployment with integrated storage for the IBM PureFlex System or IBM Flex System
• Simplify management with an integrated, intuitive user interface for faster system accessibility
• Reduce network complexity with FCoE and iSCSI connectivity
• Store up to five times more active data in the same disk space using IBM Real-time Compression
• Virtualize third-party storage for investment protection of the current storage infrastructure
• Optimize costs for mixed workloads, with up to 200 percent performance improvement with solid-state drives (SSDs) using IBM System Storage Easy Tier
• Improve application availability and resource utilization for organizations of all sizes
• Support growing business needs while controlling costs with clustered systems

The IBM Flex System V7000 Storage Node (Figure 4) is well-suited for Microsoft Hyper-V environments.
The Flex V7000 Storage Node delivers proven disk storage in flexible, scalable configurations and
complements the IBM Flex System Enterprise Chassis, Flex System CN4093 Converged Network
switches, and x240 Compute Nodes in an end-to-end IBM solution for Microsoft Hyper-V. By connecting
optional EXP2500 enclosures, the Flex V7000 Storage Node can scale up to 240 SAS and SSD disks,
and up to 960 per clustered system. The Flex V7000 Storage Node has 8GB of cache per controller and
16GB for the whole system.

The IBM Flex System V7000 Storage Node comes with advanced features such as System Storage Easy
Tier, IBM FlashCopy, internal virtualization, thin provisioning, data migration, and system clustering.
Optional features include Remote Mirroring, Real-time Compression, and external virtualization.


Figure 4. IBM Flex System V7000 Storage Node

IBM Flex System CN4093 Switches
The IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch (Figure 5) provides unmatched
scalability, performance, convergence, and network virtualization, while delivering innovations to address
a number of networking concerns today and providing capabilities that will help prepare for the future. The
switch offers full Layer 2/3 switching as well as FCoE Full Fabric and Fibre Channel NPV Gateway
operations to deliver a truly converged integrated solution, and is designed to install within the I/O module
bays of the IBM Flex System Enterprise Chassis. The switch can help clients migrate to a 10Gb or 40Gb
converged Ethernet infrastructure.


Figure 5. IBM Flex CN4093 Switch

IBM Flex System Manager
The IBM Flex System Manager (FSM) is a systems management appliance that drives efficiency and
cost savings in the data center. The IBM Flex System Manager provides a pre-integrated and virtualized
management environment across servers, storage, and networking that can be easily managed from a
single interface. Providing a single focal point for seamless multi-chassis management, the Flex System
Manager offers an instant and resource-oriented view of chassis resources for both IBM System x and
IBM Power Systems compute nodes. Figure 6 displays this optional management node.


Figure 6. The optional IBM Flex System Manager Node


Microsoft Windows Server 2012
Windows Server 2012 with Hyper-V provides the enterprise with a scalable and highly dynamic platform
to support virtualization of most environments, with support for up to 4TB of RAM, 320 logical processors,
and 64 physical nodes per cluster. IT organizations can simplify virtualization resource pools using
key features such as high availability clustering, simultaneous Live Migration, in-box network teaming, and
improved network Quality of Service (QoS) features. Virtual machines running on Windows Server 2012 with
Hyper-V have also increased resource limits, with support for up to 64 vCPUs, 1TB of RAM, and
virtual HBAs (vHBAs).

Microsoft System Center 2012 SP1
Microsoft System Center 2012 with IBM Upward Integration Modules enables you to create a
comprehensive management environment for the IBM Flex System for Microsoft Hyper-V environment
with the following features:
Platform monitoring and management with System Center Operations Manager
Virtualization deployment and management with System Center Virtual Machine Manager.
Self Service Portal and incident tracking with System Center Service Manager
Automation management with System Center Orchestrator

Best Practice and Implementation Guidelines
Successful Microsoft Hyper-V deployment and operation is significantly attributable to a set of test-proven
planning and deployment techniques. Proper planning includes sizing the needed server resources
(CPU and memory), storage (capacity and IOPS), and network bandwidth required to support the
infrastructure. This information can then be implemented using industry-standard best practices to
achieve the optimal performance and reserve capacity necessary for the solution. The Microsoft Private
Cloud Fast Track program, combined with IBM's enterprise-class hardware, prepares IT administrators to
meet their virtualization performance and growth objectives by deploying highly available, elastic, and
flexible virtualized resource pools efficiently and securely.

A collection of Hyper-V configuration best practices and implementation guidelines, based on IBM and
Microsoft collaboration, that aid in the planning and configuration of your solution is shared in the
sections that follow. They are categorized into the following topics:
• Racking location and power distribution
• Networking and VLANs
• Storage setup and configuration
• Setup of IBM Flex System x240 Compute Node
• Windows Server Failover Cluster and Hyper-V setup
• System Center 2012 SP1 setup and configuration

Racking and Power Distribution
The installation of power distribution units (PDUs) and associated cables should be performed before any
system is racked. Before cabling the PDUs, consider the following:
• Make sure that there are separate electrical circuits and receptacles providing enough power to support the required PDUs.
• Redundant electrical circuits to support power to the PDUs are recommended to minimize the possibility of a single electrical circuit failure impacting this configuration.
• Plan for individual electrical cords from separate PDUs for devices that have redundant power supplies.
• Maintain appropriate shielding and surge suppression practices, and appropriate battery back-up techniques.

For questions, please refer to the IBM Flex System Enterprise Chassis & PureFlex Power Requirements Guide.

Networking and VLANs
Flex System Switch Positions and Network Connections
The IBM Flex System chassis contains up to four switches. The numbering of these switches is
interleaved as shown in Figure 7, and should be kept in mind when performing work on the switches or
adding cable connections to the external ports.

Figure 7. IBM Flex System Chassis Switch Position

Each compute node has a single four-port 10Gb CN4054 card. Each CN4054 has two ASICs, and each
ASIC supports two of the four 10Gb ports, as shown in Figure 8. Each compute node will maintain two
10Gb/s connections to each switch. Storage connections and network teams should be distributed across
both ASICs to maintain fault tolerance.


Figure 8. View of CN4093 network and storage connections

A visual representation of the connections between the CN4054 converged adapters and the CN4093
converged switches is shown in Figure 9.

Figure 9. Illustration of converged storage and server connections

Combinations of physical and virtual isolated networks are configured at the host, switch, and storage
layers to satisfy isolation requirements. At the physical host layer, there is a 4-port 10GbE Virtual Fabric
Adapter for each Hyper-V server (one Flex System CN4054 4-port 10Gb VFA module). At the physical
switch layer, there are two redundant Flex System CN4093 modules with up to 42 internal 10GbE ports, 2
external 10GbE SFP+ ports, 12 external SFP+ Omni ports, and 2 external 40GbE QSFP+ ports (which
can also be converted to eight 10GbE ports) for storage and host connectivity. To support all four
10GbE connections from each server, the CN4093 switches require the Upgrade 1 Feature on
Demand (FoD) option. The servers and storage maintain connectivity through two FCoE connections
using Multipath I/O (MPIO). The two 10GbE ports used for FCoE are also shared with the host management,
cluster private, and cluster public data networks, because these networks are generally not bandwidth intensive
and Windows QoS settings are applied to limit their bandwidth so storage traffic is not impeded. On the data
network side, Windows Server 2012 NIC teaming is used to provide fault tolerance and load balancing to
all of the communication networks: host management, cluster private, cluster public, live migration, and
virtual machine. This setup allows the most efficient use of network resources with a highly optimized
configuration for both network and storage connectivity.
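
For reference, a minimal PowerShell sketch of the Windows Server 2012 in-box NIC teaming described above is shown below. The adapter and team names (for example, "CNA-A1", "CNA-B1", and "MgmtTeam") are placeholders only; the PowerShell scripts used for the validated configuration are listed in the appendix.

# Switch-independent team across the two ports that also carry FCoE traffic
# (one port from each CN4054 ASIC); adapter names are hypothetical
New-NetLbfoTeam -Name "MgmtTeam" -TeamMembers "CNA-A1","CNA-B1" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts -Confirm:$false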

At the physical switch layer, VLANs are used to provide logical isolation between the various storage and
data traffic. A key element is to properly configure the switches to maximize the available bandwidth and
reduce congestion; however, based on individual environment preferences, there is flexibility regarding
how many VLANs are created and what type of role-based traffic they handle. Once a final selection is
made, make sure that the switch configurations are saved and backed up.

All switch ports with the exception of the Flex V7000 ports should be configured as tagged and the VLAN
definitions specified on each port as needed. Non-FCoE networks will need to have VLAN assignments
made in Windows Server or Hyper-V.

Inter-switch links are created between the two CN4093 switches. Link Aggregation Control Protocol
(LACP) is used to combine two 10GbE switch ports into a single entity, which is then connected to a
similar number of ports on the second switch. LACP teams provide higher bandwidth connections and
error correction between LACP team members. An LACP team can also be used to support the uplink
connections to a corporate network. In addition to LACP, a Virtual Link Aggregation Group (VLAG) is also
configured between the two switches. VLAGs allow multi-chassis link aggregation and facilitate active-active
uplinks of access layer switches. VLAG with spanning tree disabled helps avoid the wasted bandwidth
associated with links that spanning tree would otherwise block. An example of a VLAG configuration is illustrated in
Figure 10.



Figure 10. Typical Data Center Switching Layers with STP vs. VLAG


A high level network VLAN overview is shown in Figure 11.


Figure 11. IBM Flex System Solution for Microsoft Hyper-V Architecture

VLAN Description
The validated configuration uses the VLANs described in Table 1.

Network     Name                                        Description
VLAN 30     Management Cluster Private Network          Private cluster communication for the 2-node management cluster
VLAN 31     Management Cluster Live Migration Network   VM Live Migration traffic for the 2-node management cluster
VLAN 33     SQL Cluster Private                         Private cluster communication for the SQL guest cluster
VLAN 35     SCVMM Cluster Private                       Private cluster communication for the SCVMM guest cluster
VLAN 37     Production Cluster Private Network          Private cluster communication for the 8-node production cluster
VLAN 38     Production Cluster Live Migration Network   VM Live Migration traffic for the 8-node production cluster
VLAN 40     Cluster Public Network                      Used for host management and the cluster public network
VLAN 50     VM Communication Network                    VM communication
VLAN 70     Out of Band Management Network              Used for out of band connections to CMM and IMM devices
VLAN 1002   FCoE Storage Network                        Used for FCoE storage traffic
VLAN 4094   Inter-Switch Link (ISL) VLAN                Dedicated to the ISL
Table 1. VLAN definitions

Management Cluster Private and CSV Networks (VLAN 30)
This network is reserved for cluster private (heartbeat and cluster shared volume) communication between
clustered management servers. Switch ports should be configured to appropriately limit the scope of each
of these VLANs. This will require the appropriate switch ports (see Table 2) for each management x240
Compute Node to be set as tagged, and the VLAN definitions should include these ports for each switch.
The networks using these must specify VLAN 30 in Windows Server 2012. There should be no IP routing
or default gateways for cluster private networks.
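
As an illustrative sketch only (the team name, interface name, and IP address are hypothetical; the scripts used in the validated build are in the appendix), the tagged team interface for this VLAN can be created in Windows Server 2012 as follows:

# Add a team interface tagged with VLAN 30 for cluster private/CSV traffic
Add-NetLbfoTeamNic -Team "MgmtTeam" -VlanID 30 -Name "ClusterPrivate"
# Assign a non-routed address; no default gateway is configured for cluster private networks
New-NetIPAddress -InterfaceAlias "ClusterPrivate" -IPAddress 192.168.30.11 -PrefixLength 24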

Management Cluster Live Migration Network (VLAN 31)
A separate VLAN should be created to support Live Migration for the management cluster. Switch ports
should be configured to appropriately limit the scope of each of these VLANs. This will require the
appropriate switch ports used by each management x240 Compute Node to be set as tagged, and the
VLAN definitions should include these ports for each switch. The networks using these must specify
VLAN 31 in Windows Server. There should be no routing on the Live Migration VLAN.

SQL Guest Cluster Private Network (VLAN 33)
A separate VLAN should be created to support the SQL guest cluster private network communication.
Switch ports should be configured to appropriately limit the scope of each of these VLANs. This will
require the appropriate switch ports used by each management x240 Compute Node to be set as
tagged, and the VLAN definitions should include these ports for each switch. The networks using these
must specify VLAN 33 in the Hyper-V settings for the SQL virtual machines. There should be no routing
on the SQL Cluster Private VLAN.
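
A minimal Hyper-V PowerShell sketch of this setting is shown below; the VM and adapter names are hypothetical examples.

# Tag the SQL guest cluster private adapter with VLAN 33 in the Hyper-V settings
Set-VMNetworkAdapterVlan -VMName "SQL01" -VMNetworkAdapterName "ClusterPrivate" -Access -VlanId 33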

SCVMM Guest Cluster Private Network (VLAN 35)
A separate VLAN should be created to support the SCVMM guest cluster private network communication.
Switch ports should be configured to appropriately limit the scope of each of these VLANs. This will
require the appropriate switch ports used by each x240 Compute Node to be set as tagged, and the
VLAN definitions should include these ports for each switch. The networks using these must specify
VLAN 35 in Windows Server. There should be no routing on the SCVMM Cluster Private VLAN.

Production Cluster Private and CSV Networks (VLAN 37)
This network is reserved for cluster private (heartbeat and cluster shared volume) communication between
clustered production servers. Switch ports should be configured to appropriately limit the scope of each of
these VLANs. This will require the appropriate switch ports (see Table 2) for each production x240
Compute Node to be set as tagged, and the VLAN definitions should include these ports for each switch.
The networks using these must specify VLAN 37 in Windows Server 2012. There should be no IP routing
or default gateways for cluster private networks.

Production Cluster Live Migration Network (VLAN 38)
A separate VLAN should be created to support Live Migration for the production cluster. Switch ports
should be configured to appropriately limit the scope of each of these VLANs. This will require the
appropriate switch ports used by each production x240 Compute Node to be set as tagged, and the
VLAN definitions should include these ports for each switch. The networks using these must specify
VLAN 38 in Windows Server. There should be no routing on the Live Migration VLAN.

Cluster Public Network (VLAN 40)
This network supports communication for the host management servers of the management cluster, the
System Center components, and the compute cluster. One team over two 10GbE ports, created using the
Windows Server 2012 in-box NIC teaming feature, will be used to provide fault tolerance and load
balancing for the host management (cluster public) and cluster private networks. This NIC team will be
sharing bandwidth with the two FCoE 10Gb ports. Quality of Service (QoS) will be applied from
Windows Server to limit its bandwidth usage. The management cluster will also support VLAN 40 on the
VM communications network to allow the System Center components to manage the host servers. VLAN
identification will have to be set in either Windows Server or Hyper-V accordingly.
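
One possible way to express the Windows QoS limits mentioned above is with the in-box NetQos cmdlets; the policy names, subnet, and rates below are illustrative assumptions rather than the validated values.

# Hypothetical example: cap cluster public traffic destined for the management subnet
New-NetQosPolicy -Name "ClusterPublicCap" -IPDstPrefixMatchCondition "192.168.40.0/24" -ThrottleRateActionBitsPerSecond 2GB
# Built-in match condition for live migration traffic
New-NetQosPolicy -Name "LiveMigrationCap" -LiveMigration -ThrottleRateActionBitsPerSecond 4GB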

Production VM Communication Network (VLAN 50)
This network supports communication for the virtual machines. One LACP team over two 10GbE ports,
created using the Windows Server 2012 in-box NIC teaming feature, will be used to provide fault
tolerance and load balancing for live migration and virtual machine communication.
This will require the appropriate switch ports (see Table 2) for each production x240 Compute Node to be
set as tagged, and the VLAN definitions should include these ports for each switch. Network settings for
proper VLAN identification will need to be performed in each virtual machine's network interface.

If additional segregation between virtual machine networks is required then the VM Team network switch
ports can have additional VLAN IDs assigned as needed. Each VM can then set the necessary VLAN ID
as part of its network settings in Hyper-V.
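
A minimal PowerShell sketch of the LACP team, Hyper-V virtual switch, and per-VM VLAN tagging described above is shown below; adapter, team, switch, and VM names are placeholders (the validated scripts are in the appendix).

# LACP team across the two non-FCoE ports (one from each CN4054 ASIC); adapter names are hypothetical
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "CNA-A2","CNA-B2" -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort -Confirm:$false
# External Hyper-V switch bound to the team; the host OS does not share this switch
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false
# Tag an individual VM's adapter, for example with the production VLAN 50
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 50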

Out of Band Management Network (VLAN 70)
This network supports communication for the out of band management network. As shown in Figure 2 the
CMM provides the communication entry point for the Flex System x240 Integrated Management Module
(IMM), IO modules, and Flex System V7000 storage.

For best practices and security reasons, this network should be isolated and integrated into the customer's
existing management network environment.

Routing will have to be configured for VLAN 70 and VLAN 40 to support communication between System
Center components and the management environment.

FCoE Storage Network (VLAN 1002)
All FCoE storage traffic between the Flex System V7000 and Flex System ITEs should be isolated on
VLAN 1002.

Inter Switch Link Network (VLAN 4094)
A dedicated VLAN to support the ISL between the two switches should be implemented. There should be
no spanning tree protocol on the ISL VLAN.

x240 Compute Node Network Ports
Each host server has one CN4054 4-port 10Gb device that will be used for network connections and
FCoE storage connectivity, public and private cluster communication, and VM communication. The FCoE
connections to storage will use Multipath I/O drivers to ensure fault tolerance and load balancing.
Windows Server 2012 NIC teaming is used for all but the FCoE networks to provide fault tolerance and
spread the workload across the network communication interfaces. The NIC teams will follow best
practice by ensuring the team members are from each of the two ASICs on the CN4054 CNA card, so no
single ASIC failure can take down the team. See Figure 9 for more information.
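
The sketch below shows one hedged way to enable the Windows multipath support referenced above. This guide uses the IBM V7000 DSM, which is installed separately; the New-MSDSMSupportedHW line applies only if the native Microsoft DSM were used instead, and the vendor/product strings are assumptions.

# Add the in-box Windows Server 2012 MPIO feature
Install-WindowsFeature -Name Multipath-IO
# Only if the native Microsoft DSM were used instead of the IBM DSM (assumed IDs for the V7000)
New-MSDSMSupportedHW -VendorId "IBM" -ProductId "2145"
# Review the resulting MPIO settings
Get-MPIOSetting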

Physical Host Data Access
• Each compute node will utilize four connections to the Ethernet network(s). Figure 9 shows four active data connections, two from each ASIC.
• By default, the CN4093 switch ports are set as untagged. This will need to be changed to tagged, with VLAN IDs assigned according to Table 3. The default VLAN ID will remain with a PVID equal to 1.
• Windows Server 2012 NIC teaming will be used to form high-bandwidth, fault tolerant teams.

Physical Host FCoE Storage Access
• Each compute node will utilize two connections to the FCoE network. Figure 9 shows two active FCoE connections, one from each ASIC.
• Since the host servers will be using switch ports for both data and storage traffic, each CN4093 switch port will be changed from the default untagged mode to tagged. The default VLAN ID will remain with a PVID equal to 1. Correct VLAN IDs for storage and data must be specified for each switch port as shown in Table 3.

Storage Controller FCoE Access
At the physical storage layer, the V7000 Storage Node uses FCoE ports for storage connectivity. Each
controller has two 10GbE converged ports for FCoE traffic. The use of the V7000 Storage Node Device
Specific Module (DSM) manages the multiple I/O paths between the host servers and storage, and
optimizes the storage paths for maximum performance. VLANs are used to isolate storage traffic from
other data traffic occurring on the switches. FCoE traffic is prioritized by the CN4093 switches to
maximize storage traffic throughput.

• Each Flex V7000 Storage Node controller will maintain two FCoE connections to help balance storage workloads.
• One connection is provided from each controller to each switch (see Figure 8 above).
• By default, the CN4093 switch ports are set as untagged. The ports for the storage controller will need to be assigned to the FCoE VLAN ID 1002.

IBM Flex System CN4093 Converged Ethernet Configuration
The IBM Hyper-V Virtualization Reference Architecture uses two Flex System CN4093 switches
containing up to 64 10GbE ports each. The CN4093 provides primary storage access and data
communication services. Redundancy across the switches is achieved by creating an inter-switch link
(ISL) and VLAG between switches 1 and 2. The inter-switch link will be created using two external 10GbE
links from each switch to form an LACP team. Virtual Link Aggregation (VLAG) is a method to create
LACP teams between two independent switches. Corporate uplink connections can be achieved with
VLAGs and the Virtual Router Redundancy Protocol (VRRP), as shown in Figure 12, depending on the
customer configuration. Each of the CN4093 switches will require Upgrade 1 to activate the additional
ports needed to fully support all the CN4054 ports on each x240 Compute Node.


Figure 12. Active-Active Configuration using VRRP and VLAGs

Note: Switch ports used for FCoE do not support VLAG/LACP. Windows 2012 Switch Independent
NIC Teaming is used for those ports to support TCP/IP data traffic.

Note: Use of VLAGs between the two CN4093 switches will require all routing to be performed in
upstream switches.

Management of the CN4093 switches can be performed either through a command line interface or a
web based user interface (Figure 13). The default user name and password for the IBM CN4093 switches
is admin/admin. This should be changed to a unique password to meet your security requirements.


Figure 13. CN4093 Administration Interface

Spanning Tree should be enabled on all switches according to your organizational requirements.
Spanning Tree is disabled for the VLAG ISL VLAN.

By default, the switches are assigned the following management IP addresses:
• Switch 1: 192.168.70.120
• Switch 2: 192.168.70.121

Table 2 shows the roles of each switch port for the two CN4093 switches in the configuration.


Table 2. CN4093 Switch port roles

Table 3 describes the VLAN configuration of the ports for each of the two CN4093 switches in the
configuration.


Table 3. CN4093 switch port VLAN roles

Ports are set as untagged by default. All Flex x240 ports will be set to tagged in this configuration. The
port VLAN ID (PVID) should remain set to 1. This can be done from the switch GUI under
Configuration (Figure 14) or from the ISCLI as shown in the section Using ISCLI to configure CN4093 switches.


Figure 14. Setting VLAN tagging and Default VLAN ID

VLAN assignments for the CN4093 switch ports can be made in the GUI, as seen in Figure 15, or with the
ISCLI as shown in the section Using ISCLI to configure CN4093 switches.


Figure 15. Adding ports to VLAN Interface

Note: Regarding VLAG and LACP teams on the CN4093 switch: each LACP team has its own
unique port admin key (LACP key), and each port that is a member of that team is set to this unique
value. Spanning Tree Protocol is disabled on the VLAG ISL VLAN. VLAG and LACP configuration is
shown in the section Using ISCLI to configure CN4093 switches.

Figure 16 shows the concept of using VLAG to create LACP teams from the NIC interfaces.


Figure 16. VLAG/LACP configuration interfaces

A VLAG is only created between the two ports not being used for FCoE. Each server's VLAG should
consist of a team with one port from each CNA ASIC and span the two CN4093 switches. The VLAG ID
will be assigned automatically by the switch.

Table 4 below describes the VLAG/LACP configurations for both switch 1 and 2:

VLAG ID (Server) Switch1 Port Switch2 Port LACP Key
65 (Server 1) INTB1 INTA1 101
66 (Server 2) INTB2 INTA2 102
67 (Server 3) INTB3 INTA3 103
68 (Server 4) INTB4 INTA4 104
69 (Server 5) INTB5 INTA5 105
70 (Server 6) INTB6 INTA6 106
71 (Server 7) INTB7 INTA7 107
72 (Server 8) INTB8 INTA8 108
73 (Server 9) INTB9 INTA9 109
74 (Server 10) INTB10 INTA10 110
ISL EXT1 EXT1 100
ISL EXT2 EXT2 100
Table 4. Summary of VLAGs for Switch 1 and 2

Using ISCLI to configure CN4093 switches
This section provides guidance on switch configuration using the ISCLI command-line environment. This
is not an exhaustive step-by-step procedure, but it does provide details for each major component of the switch
configuration, such as the ISL, VLAN, port, and FCoE configuration. To access the
ISCLI, refer to the IBM Flex System Fabric CN4093 Industry Standard CLI Command Reference.

Grant Privilege and Enter Configuration mode

1. Grant Privilege Mode
CN 4093# enable

2. Enter Configuration Mode
CN 4093# configure terminal

Configure the ISL and VLAG Peer Relationship

1. Enable VLAG Globally.
CN 4093(config)# vlag enable

2. Configure the ISL ports of each switch and place them into a port trunk group:
CN 4093(config)# interface port ext1-ext2
CN 4093(config-if)# tagging
CN 4093(config-if)# pvid 4094
CN 4093(config-if)# lacp mode active
CN 4093(config-if)# lacp key 100
CN 4093(config-if)# exit

3. Place the ISL into a dedicated VLAN. VLAN 4094 is recommended:
CN 4093(config)# vlan 4094
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member ext1-ext2
CN 4093(config-vlan)# exit

4. If STP is used on the switch, turn STP off for the ISL:
CN 4093(config)# spanning-tree stp 20 vlan 4094
CN 4093(config)# no spanning-tree stp 20 enable

5. Configure VLAG Tier ID. This is used to identify the VLAG switch in a multi-tier environment.
CN 4093(config)# vlag tier-id 10

6. Define VLAG peer relationship:
CN 4093(config)# vlag isl vlan 4094
CN 4093(config)# vlag isl adminkey 100
CN 4093(config)# exit

7. Save the configuration changes
CN 4093# write


8. Configure the ISL and VLAG peer relationship for the second switch. Ensure the VLAG peer (VLAG
Peer 2) is configured using the same ISL trunk type (dynamic or static), VLAN, STP mode and tier ID
used on VLAG peer 1.

Configure the Host VLAG

1. Make each port from the ITEs an active LACP team member. This needs to be done for each of the
two ports per ITE (once on each switch). Refer to Table 4 as needed.
CN 4093(config)# interface port intb1 (Switch1)
CN 4093(config)# interface port inta1 (Switch2)
CN 4093(config-if)# lacp mode active
CN 4093(config-if)# lacp key 101 (For ITE1)
CN 4093(config-if)# exit

2. Enable the VLAG trunk on each switch. This allows LACP teams to be formed across the two
CN4093 switches. This should be done for each LACP key in Table 4 on each switch.
CN 4093(config)# vlag adminkey 101 enable

3. Continue by configuring all required VLAGs on VLAG Peer 1 (Switch 1), and then repeat the
configuration for VLAG Peer 2. For each corresponding VLAG on the peer, the port trunk type
(dynamic or static), VLAN, and STP mode and ID must be the same as on VLAG Peer 1.

4. Verify the completed configuration:
CN 4093(config)# show vlag

Note: The LACP teams will not show up as active on the switches until they are also formed in
Windows Server 2012 with NIC teaming.

Configure the VLANs
Each switch must be configured with the data VLANs described in Table 3.

1. From the ISCLI of Switch1:
CN 4093#enable
CN 4093#configure terminal

CN 4093(config)# interface port inta1-inta10,intb1-intb10,ext1-ext2
CN 4093(config-if)#tagging
CN 4093(config-if)#exit

CN 4093(config)# vlan 30
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member inta9-inta10,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 31
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member intb9-intb10,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 33
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member intb9-intb10,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 35
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member intb9-intb10,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 37
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member inta1-inta8,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 38
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member intb1-intb8,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 40
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member inta1-inta10,intb9-intb10,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 50
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member intb1-intb10,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config-vlan)# show vlan
CN 4093(config-vlan)# show interface status

CN 4093(config-vlan)# write


2. From the ISCLI of Switch2:
CN 4093# enable
CN 4093# configure terminal

CN 4093(config)# interface port inta1-inta10,intb1-intb10,ext1-ext2
CN 4093(config-if)# tagging
CN 4093(config-if)# exit

CN 4093(config)# vlan 30
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member intb9-intb10,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 31
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member inta9-inta10,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 33
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member inta9-inta10,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 35
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member inta9-inta10,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 37
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member intb1-intb8,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 38
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member inta1-inta8,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 40
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member intb1-intb10,inta9-inta10,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config)# vlan 50
CN 4093(config-vlan)# enable
CN 4093(config-vlan)# member inta1-inta10,ext1-ext2
CN 4093(config-vlan)# exit

CN 4093(config-vlan)# show vlan
CN 4093(config-vlan)# show interface status

CN 4093(config-vlan)# write


3. Backup configuration to TFTP server (xx.xx.xx.yy is the IP address of the TFTP server)
CN 4093# copy running-config tftp filename file.cfg address xx.xx.xx.yy mgt-port

Note: The switch ports Ext13 and Ext14 should be configured as a fault tolerant team and
used as uplink connections for Routing and CorpNet access. Routes will need to be
established for VLANs 40, 50, and 70.

Configure Fibre Channel over Ethernet (FCoE)
Each switch must be configured to support Fibre Channel over Ethernet.

Note: It is easiest to perform the switch FCoE configuration after the servers have been enabled
and configured for FCoE, the OS has been installed, and the PWWNs have been recorded for each
server.
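
As a convenience for gathering those PWWNs from each host, the in-box Windows Server 2012 Storage cmdlets can list the CNA initiator ports; this is a hedged alternative to the OneCommand Manager tool referenced later.

# List the Fibre Channel (FCoE) initiator world wide names on the local host
Get-InitiatorPort | Where-Object ConnectionType -eq "Fibre Channel" | Select-Object NodeAddress, PortAddress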

1. Enable FCoE on each switch from the ISCLI:
CN 4093(config)# cee enable
CN 4093(config)# fcoe fips enable

2. Enable FCoE Fibre Channel forwarding on each switch for VLAN 1002 from the ISCLI:
CN 4093# enable
CN 4093# configure terminal

CN 4093(config)# system port ext11-ext12 type fc
CN 4093(config)# vlan 1002
CN 4093(config-vlan)# enable
(Switch1)
CN 4093(config-vlan)# member inta1-inta10,inta13-inta14,ext11-ext12
(Switch2)
CN 4093(config-vlan)# member intb1-intb10,inta13-inta14,ext11-ext12
CN 4093(config-vlan)# fcf enable

3. An FC alias is assigned to each HBA PWWN for easier name identification. Examples of FC
alias assignments are shown in Table 5 and Table 6. Note that there is one port that is activated
at the switch from each of the ASICs on the CN4054 CNA adapter. PWWNs can be viewed from the
OneCommand Manager tool as shown in Figure 8.

FC Alias PWWN
ITE7-PortA7 10:00:00:90:fa:07:84:21
ITE8-PortA8 10:00:00:90:fa:0d:4d:93
ITE9-PortA9 10:00:00:90:fa:0d:2a:27
ITE10-PortA10 10:00:00:90:fa:0d:33:75
V7KLeft-PortA13 50:05:07:68:05:04:01:96
V7KRight-PortA14 50:05:07:68:05:04:01:97
Table 5) Switch 1 Fibre Channel alias example

FC Alias PWWN
ITE7-PortB7 10:00:00:90:fa:07:84:2d
ITE8-PortB8 10:00:00:90:fa:0d:4d:9f
ITE9-PortB9 10:00:00:90:fa:0d:2a:33
ITE10-PortB10 10:00:00:90:fa:0d:33:81
V7KLeft-PortA13 50:05:07:68:05:08:01:96
V7KRight-PortA14 50:05:07:68:05:08:01:97
Table 6) Switch 2 Fibre Channel alias example

Create FC aliases for each World Wide Port Name (WWPN) on each switch. Each Flex System ITE and
storage canister will present one WWPN per switch for storage connections. Additional WWPNs will be
created for virtual machines that need direct storage access (VMs must be created with virtual HBAs
before their PWWNs can be viewed). Examples of FC alias definitions are below.
CN 4093(config)# show fcns database

CN 4093(config)# fcalias ITE9_PortA9 wwn 10:00:00:90:fa:0d:2a:27
CN 4093(config)# fcalias ITE10_PortA10 wwn 10:00:00:90:fa:0d:33:75
CN 4093(config)# fcalias V7KLeft_PortA13 wwn 50:05:07:68:05:04:01:96
CN 4093(config)# fcalias V7KRight_PortA14 wwn 50:05:07:68:05:04:01:97
CN 4093(config)# show fcalias
CN 4093(config)# write

Note: Virtual machine PWWN acquisition is shown in the Virtual Machine Fibre Channel
Storage Connections section below.

4. Create and populate fibre channel zones on each switch. These must contain all of the FC aliases
previously created.
Zone1 should include the storage and compute servers
Zone2 should include the storage and management servers
Zone3 should include the storage and SQL virtual machines (after VM configuration)
Zone4 should include the storage and VMM virtual machines (after VM configuration)

Note: Virtual machines using vHBAs will not register in the Fibre Channel name
service until the VM is running.

CN 4093(config-zone)# zone name SW1Zone_MgmtSvrs
CN 4093(config-zone)# member fcalias ITE9_PortA9
CN 4093(config-zone)# member fcalias ITE10_PortA10
CN 4093(config-zone)# member fcalias V7KLeft_PortA13
CN 4093(config-zone)# member fcalias V7KRight_PortA14
CN 4093(config-zone)# exit
CN 4093(config)# show zone
CN 4093(config)# write

5. An FC zoneset can contain multiple FC zones. Create and activate a Fibre Channel zoneset on each
switch that contains the Fibre Channel zone(s) previously created.
CN 4093(config)# zoneset name SW1_ZoneSet
CN 4093(config-zoneset)# member SW1Zone_MgmtSvrs
CN 4093(config-zoneset)# member SW1Zone_ComputeSvrs
CN 4093(config-zoneset)# member SW1Zone_SQL_Cluster (After VM
Configuration)
CN 4093(config-zoneset)# member SW1Zone_VMM_Cluster (After VM
Configuration)
CN 4093(config-zoneset)# exit
CN 4093(config)# show zoneset
CN 4093(config)# zoneset activate name SW1_ZoneSet
CN 4093(config)# write

6. Back up the configuration to a TFTP server (xx.xx.xx.yy is the IP address of the TFTP server)
CN 4093# copy running-config tftp mgt-port
Enter the IP address of the TFTP Server: xx.xx.xx.yy
Enter the filename: SW1-June-24-2013.cfg

Active Directory
The IBM Hyper-V Fast Track reference configuration must be part of an Active Directory (AD) domain.
This is required to form the Microsoft Windows Server 2012 clusters. An AD server must already exist
and be reachable from this configuration.

IBM Flex System V7000 Storage Node
Overview
The reference guide for the IBM Flex V7000 can be found in the Redbook IBM Flex System V7000
Storage Node Introduction and Implementation Guide.

Internal Flex Chassis Connections
The Flex System V7000 Storage Node is connected to the CN4093 switches through the internal Flex
Chassis backplane. No physical cables are required. There are two 10GbE connections per controller.
Each controller has one connection to each switch (switch 1 and switch 2), creating a mesh topology for
fault tolerance and performance.

The Flex V7000 Storage Node initial setup and IP configuration are done through the CMM web-based
management UI. After the initial setup there will be a single management IP interface that can be used
to manage the storage node through the UI. Figures 17 and 18 show the initial setup of the Flex V7000
storage.


Figure 17. Initial Flex V7000 Storage setup via CMM


Figure 18. Initial Flex V7000 Storage wizard

Management
Management of the V7000 Storage Node is performed using the web-based user interface (Figure 19).
To begin management of the V7000 Storage Node, perform the following actions:
1. From a web browser, establish a connection to the V7000 Storage Node cluster TCP/IP address
(Figure 18)
V7000 Storage Node Cluster Management IP Address example: 192.168.70.200
Default UserID and Password are superuser / passw0rd (with a zero)



Figure 19. Establish connection to V7000 Storage Node management UI

IBM Flex System V7000 Storage Node and Cluster Storage Considerations
The V7000 Storage Node supports a concept called Managed Disks (MDisk). An MDisk is a unit of
storage that the V7000 Storage Node virtualizes. This unit is typically a RAID group created from the
available V7000 Storage Node disks, but can also be logical volumes from external third party storage.
MDisks can be allocated to various storage pools in V7000 Storage Node for different uses and
workloads.

A storage pool is a collection of up to 128 MDisks that are grouped to provide the capacity to create virtual
volumes (LUNs), which can be mapped to the hosts. MDisks can be added to a storage pool at any time
to increase capacity and performance.

Microsoft Windows Failover Clustering supports Cluster Shared Volumes (CSVs). Cluster Shared
Volumes provide shared primary storage for virtual machine configuration files and virtual hard disks
consistently across the entire cluster. All CSVs are visible to all cluster nodes, and allow simultaneous
access from each node.

A single storage pool will be created from the twenty-four storage devices on the Flex V7000 Storage
Node. This will allow the Flex V7000 to fully monitor and manage I/O utilization across the volumes and
automatically move frequently accessed portions of files to the SSDs for better performance. Host mapping
will use a host's worldwide port names to limit host access to only those volumes it is authorized to
access. Thin provisioning is a technology that allows you to create a volume of a specific size and present
that volume to an application or host; the Flex V7000 will only consume the provisioned space on an
as-needed basis. This allows the IT administrator to set initial volume sizes, then monitor and add storage
as needed to support the volumes.

Figure 20 shows a recommended initial disk configuration for the Flex V7000 Storage Node, and Table 7
lists the volumes, recommended sizes per the Microsoft deployment guide, and host mappings.


Figure 20. Flex V7000 Storage

Note: This storage configuration is sufficient for initial setup and deployment. Disk
configuration and performance can be highly workload dependent. It is recommended to
profile and analyze your specific environment to ensure adequate space and performance
for your organizational needs.

Volume Name | Size | Mapped Hosts
Management CSV1 | 2TB | Management Host1 - Host2
Management Quorum | 1GB | Management Host1 - Host2
Compute CSV1 | 4TB | Compute Host1 - Host8
Compute Quorum | 1GB | Compute Host1 - Host8
SQL Quorum | 1GB | SQLVM1-SQLVM2
SQL - Service Manager Management Database | 145GB | SQLVM1-SQLVM2
SQL - Service Manager Management Logs | 70GB | SQLVM1-SQLVM2
SQL - Service Manager Data Warehouse Data | 1TB | SQLVM1-SQLVM2
SQL - Service Manager Data Warehouse Logs | 500GB | SQLVM1-SQLVM2
SQL - Service Manager Analysis Service Data | 8GB | SQLVM1-SQLVM2
SQL - Service Manager Analysis Service Logs | | SQLVM1-SQLVM2
SQL - Service Manager SharePoint, Orchestrator, App Controller Data | 10GB | SQLVM1-SQLVM2
SQL - Service Manager SharePoint, Orchestrator, App Controller Logs | 5GB | SQLVM1-SQLVM2
SQL - Virtual Machine Manager and Update Services Data | 6GB | SQLVM1-SQLVM2
SQL - Virtual Machine Manager and Update Services Logs | 3GB | SQLVM1-SQLVM2
SQL - Operations Manager Data | 130GB | SQLVM1-SQLVM2
SQL - Operations Manager Logs | 65GB | SQLVM1-SQLVM2
SQL - Operations Manager Data Warehouse Data | 1TB | SQLVM1-SQLVM2
SQL - Operations Manager Data Warehouse Logs | 500GB | SQLVM1-SQLVM2
System Center Virtual Machine Manager Quorum | 1GB | SCVMM_VM1-SCVMM_VM2
SCVMM Library Volume | 500GB | SQLVM1-SQLVM2
Table 7. Volume roles, sizes, and mappings

Note: The Flex V7000 supports Thin Provisioning as a standard feature. The volumes above
can be Thin Provisioned to optimize the use of storage resources.

Storage Pool and Volume Configuration
The following step-by-step processes are accompanied by sample screenshots to illustrate how quick and
easy it is to configure the V7000 Storage Node.

1. Create the MDisks (Figure 21) from Pools->Internal Storage. Each class of storage is shown
with all the devices in that class. Figure 20 defines a general configuration for most initial
workloads.

Figure 21. Flex V7000 MDisk Creation

2. Create a new storage pool using the MDisks that were created in the previous step, as shown in
Figure 22.

Figure 22. V7000 Storage Node Pool Creation

3. Logical Disks can now be created from the pool created in the previous step (Figure 23).

Figure 23. V7000 Storage Node Logical Drives Creation

Host Server Definition and Volume Mapping
The next step is to define the host servers to the IBM Flex V7000 storage node and map volumes to
them. This process will limit access to the volumes to only authorized host servers.

Note: It is easiest to perform volume mapping after the switch and hosts have been configured
for FCoE, the OS has been installed, and PWWNs have been recorded for each server.

1. Host definitions are created in the Flex System V7000 storage console. Hosts should be created as
Fibre Channel Hosts, and include their WWPNs in the definition as shown in Figure 24.

Figure 24. Host definition with PWWNs associated.

2. Map the logical drives to one or more host servers (Figure 25) as shown in Table 7.

Figure 25. Flex V7000 Logical Drive Host Mapping

Note: The IBM Flex V7000 will present a warning when mapping volumes to more than one
host.

Volumes mapped to hosts should now be visible in Windows Disk Manager. A rescan may be
required. Only one host server can have the storage online at a time until the cluster is
configured. One server per cluster must be chosen to initially bring each volume online and format it.

IBM Flex System x240 Management Fabric
Setup
The management fabric for this environment is built around a two node Windows Server cluster
consisting of two dual socket IBM Flex System x240 Compute Nodes with 256GB of RAM, and one
CN4054 4-port converged network card for each node. This independent cluster hosts the management
fabric, based on Microsoft System Center 2012 SP1 with IBM upward integration modules; it helps eliminate
single points of failure and allows near-continuous access to the management applications used to
deploy, maintain, and monitor the production environment. The High Availability (HA) configuration used
for each management component varies depending on the needs and capabilities of the individual
component. A high level overview of the management fabric can be seen in Figure 26.

Figure 26. Microsoft System Center 2012 Management Fabric for IBM Flex System for Microsoft Hyper-V

Setup involves the installation and configuration of Windows Server 2012 Datacenter edition, networking,
and storage on each node. Highly available VMs can then be created to perform the various management
tasks for the management framework.

Pre-OS Installation
The following configuration steps should be performed before installing an operating system.
1. Confirm that the CN4054 4-port Ethernet devices are installed in each compute node
The FCoE Feature on Demand (FoD) must be imported on each node through an IMM
connection as shown in Figure 26.

Note: The default IMM address for each x240 Compute Node is 192.168.70.1xx where xx is
equal to the two digit slot number that the compute node is installed in (Slot 1 = 01)


Figure 26. Feature on Demand Key Activation

2. It is best to include any Features on Demand (FoD) keys needed with the initial order so they can be
installed at the factory. If ordering IBM FoD separately please contact your IBM Sales representative
for assistance.

For questions regarding Features on Demand refer to the Redpaper Using IBM Features on Demand.

3. If the Feature on Demand Unique Identifier (FUI) is needed, it can be found in UEFI under System
Settings->Network Emulex Device->Feature on Demand. There should be two unique FUIs per
CN4054 card, one for each ASIC.
Once Windows and the Emulex OneCommand tool are installed, the HbaCmd utility can be used
to retrieve the FUI with the format:
C:\Program Files\Emulex\Util\OCManager> HbaCmd getfodinfo xx-xx-xx-xx-xx-xx
Where xx-xx-xx-xx-xx-xx is the MAC address of the first NIC port on each ASIC

4. The Flex x240 firmware for the following devices should be evaluated and, if necessary, flashed to
the latest level: UEFI, IMM, DSA, SAS, and CNA.
For out-of-band updates, IBM Bootable Media Creator will create a bootable image of the latest
IBM x240 updates (downloaded previously).
o An external DVD device will be required, or the image can be mounted to the server using the
virtual media capabilities of the IMM.
o Please refer to the IBM Tools Center Bootable Media Creator Installation and Users Guide 9.41
for assistance.
IBM Fast Setup is an optional tool that can be downloaded and used to configure multiple IBM
System x, BladeCenter, or Flex systems simultaneously.
In-band updates can be applied using the IBM UpdateXpress tool to download and install IBM
recommended updates on the IBM Flex System platform.
o Learn more about IBM UpdateXpress.
o Please refer to the IBM UpdateXpress System Pack Installation and Users Guide for assistance.

5. By default the x240 Compute Node settings are set to balance power consumption and performance.
To change these settings, boot to UEFI mode, and select System Settings-> Operating Mode
(Figure 27). Change the settings to fit your organizational needs.


Figure 27. Operating Mode settings in UEFI

6. Disable Multichannel mode on the CN4054. This can be done in the UEFI under System Settings
->Network Emulex Device as shown in Figure 28.


Figure 28. Disabling Multichannel Mode in CN4054 in UEFI


7. Enable FCoE on the CN4054 converged network adapter. This is done in the UEFI under System
Settings->Network->Network Device List. Select the top device on each bus as shown in Figure
29.

Figure 29. Identifying CNA devices to be configured

8. Change the setting from NIC to FCoE as shown in Figure 30.


Figure 30. Changing CNA property to support FCoE

9. Repeat this action for the second device on the other bus (second ASIC).

10. The two local disks should be configured as a RAID 1 array.

11. Confirm the CN4093 switches have been configured.
Inter-switch links, VLAGs, and VLANs should have been created and assigned as described above.
Switch FCoE configuration is easiest after the operating system is installed and PWWNs can be
collected for each server.

12. The V7000 Storage Node MDisks, Pools, and Volumes must be configured as defined in the storage
section above to be ready for host mapping assignments.
Volume mapping is easiest after the operating system is installed and PWWNs can be collected
for each server.

OS Installation and Configuration
Windows Server 2012 Datacenter allows unlimited Windows Server virtual machine instances or licenses
on the host servers and is the preferred version for deploying Hyper-V compute configurations. Windows
Server 2012 Standard now supports clustering also, but only provides licensing rights for up to two
Windows Server virtual machine instances (additional licenses would be needed for additional VMs).
Windows Server 2012 Standard Edition is intended for physical servers that have few or no virtual
machines running on them.

1. Install Windows Server 2012 Datacenter Edition.

2. Set your server name, and join the domain.

3. Install the Hyper-V role and Failover Clustering feature.
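For example, both can be added in a single step with PowerShell (a minimal sketch; run from an elevated session, and note the server restarts when the Hyper-V role is installed):
# Add the Hyper-V role and the Failover Clustering feature, including the management tools
Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart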

4. Run Windows Update to ensure any new patches are installed.

5. The latest CN4054 NIC and FCoE drivers should be downloaded from IBM Fix Central and installed.
A complete package of all platform-related updates by operating system is available from IBM Fix
Central as an UpdateXpress System Pack.
The associated Windows installer is downloaded separately, also from IBM Fix Central.

6. Install the Emulex OneCommand Manager utility to provide additional information for the CN4054
converged network adapter.

7. Multipath I/O is used to provide balanced and fault tolerant paths to V7000 Storage Node. This
requires an additional Flex V7000 MPIO DSM specific driver to be installed on the host servers before
attaching the storage.
The Microsoft MPIO prerequisite will also be installed if it is not already on the system.
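If desired, the built-in Windows Multipath I/O feature can be enabled with PowerShell before running the IBM Flex V7000 DSM installer; a minimal sketch:
# Enable the native Windows MPIO feature (the IBM Flex V7000 DSM is installed separately)
Install-WindowsFeature -Name Multipath-IO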

8. Install the IBM Systems Director platform agent 6.3.3

Network Configuration
One key new feature of Windows Server 2012 is in-box NIC teaming. In-box teaming can provide fault
tolerance, link aggregation, and can be tailored to host or virtual machine (VM) connectivity. Two
separate Windows Server 2012 teams will be created in the following configuration. One team is used to
support host server management and cluster private traffic. A second team is used to support Live
Migration and VM Communication.

Note: Be careful when identifying and enumerating the network interfaces in each host to ensure
teams are spread across the two network interfaces and properly routed to the correct switches.
Use Emulex OneCommand Manager Utility to review each network interface and MAC address.

Note: Refer to Figure 8 and 9 to clarify the mapping of server ports to the two CN4093 switch
ports. Each ASIC has two network interfaces, the top one is for switch 1 and the bottom for switch
2.

The following PowerShell commands can also be useful (additional PowerShell scripts can be found in
the PowerShell Scripts section in the appendix). Each team should contain a member from each bus.
Get-NetAdapterHardwareInfo | Sort-Object -Property Bus,Function
Get-NetAdapter -InterfaceDescription "Emulex*"
Rename-NetAdapter -Name Ethernet -NewName PortA9_SW1

Figure 31 is a NIC interface naming example.

Figure 31. Displaying all Emulex network adapters with PowerShell

Windows Server 2012 in-box NIC teaming can be found in the Server Manager console as shown in
Figure 32. The basic NIC teaming tools are available in the Server Manager GUI; however, it is better to
use PowerShell for the added options and flexibility.

Note: Execute LBFOAdmin.exe from the command line to see the Network Teaming graphical user
interface.


Figure 32. NIC teaming in Server Manager

The team that is created to support cluster public and private communication between the host servers
will share the same ports that are being used for FCoE traffic. The CN4093 switches prioritize FCoE
traffic over Ethernet data traffic. To further reduce the potential for bandwidth oversubscription, Quality of
Service (QoS) limits will be placed on these interfaces in Windows Server 2012. This team should be
created using the two NIC ports as described in Table 2.

1. The ClusterTeam should be created using the default Switch Independent mode and Address
Hash mode with the following PowerShell commands.
New-NetLbfoTeam -Name ClusterTeam -TeamMembers PortA9_SW1,PortB9_SW2 -TeamingMode SwitchIndependent

Note: The team member names will vary by host server

2. The second team will be created from the NIC interfaces that are configured to support LACP. With
LACP teaming, the team is backed by the full aggregated bandwidth of the two ports.
New-NetLbfoTeam -Name VLAG -TeamMembers "VLAG PortA9_SW2","VLAG PortB9_SW1" -TeamingMode Lacp

3. When Windows Server 2012 NIC teaming is complete there should be two teams displayed when
queried in PowerShell (Figure 33).


Figure 33. Windows Server NIC teaming.
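A quick PowerShell check of the teams and their member adapters, for example:
# Both teams and their member NICs should be listed
Get-NetLbfoTeam
Get-NetLbfoTeamMember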

4. Virtual switches will be created on each of these teams. Generally each Hyper-V virtual switch can
provide one interface to the management operating system. A second virtual NIC for use by the host
operating system will be created on the ClusterTeam in order to provide a segregated network path
for the Cluster Private/CSV network. The following PowerShell commands are used to create the
virtual switches and second virtual NIC.
New-VMSwitch -Name MgmtClusterPublic -NetAdapterName ClusterTeam -MinimumBandwidthMode Absolute -AllowManagementOS $true
Add-VMNetworkAdapter -ManagementOS -Name MgmtClusterPrivate -SwitchName MgmtClusterPublic
New-VMSwitch -Name VM_Communication -NetAdapterName VLAG -MinimumBandwidthMode Weight -AllowManagementOS $true

5. Rename the management-facing network interface on the VM_Communication team to reflect the role
that it is fulfilling.
Rename-VMNetworkAdapter -ManagementOS -Name VM_Communication -NewName MgmtLiveMigration

6. Confirm the network interfaces are available to the management operating system with the following
PowerShell command, as shown in Figure 34.
Get-VMNetworkAdapter -ManagementOS


Figure 34. Displaying all network interfaces created for host partition from vSwitches

7. Assign VLAN IDs to each of these interfaces with the following PowerShell commands (Figure 35).
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName MgmtClusterPublic -Access -VlanId 40
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName MgmtClusterPrivate -Access -VlanId 30
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName MgmtLiveMigration -Access -VlanId 31

8. Confirm your Management OS network adapter names and VLAN assignments (Figure 35).
Get-VMNetworkAdapterVlan -ManagementOS


Figure 35. VLAN Assignments on Management OS NIC interfaces

9. Bandwidth limits should be placed on these network interfaces. The MgmtClusterPublic virtual switch
was created with the Absolute minimum bandwidth mode. This allows a maximum bandwidth cap to be
placed on these network interfaces with the following PowerShell commands. Maximum bandwidth is
defined in bits per second.
Set-VMNetworkAdapter -ManagementOS -Name MgmtClusterPublic -MaximumBandwidth 2GB
Set-VMNetworkAdapter -ManagementOS -Name MgmtClusterPrivate -MaximumBandwidth 1GB

10. The network interface used for Hyper-V Live Migration is on the virtual switch that was created with
the Weight bandwidth mode. A minimum bandwidth weight of 30 will be set for the MgmtLiveMigration
network.
Set-VMNetworkAdapter -ManagementOS -Name MgmtLiveMigration -MinimumBandwidthWeight 30

11. Assign TCP/IP addresses and confirm network connectivity for all of the network connections on each
VLAN.
New-NetIPAddress -InterfaceAlias "vEthernet (MgmtClusterPublic)" -IPAddress 192.168.40.21 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (MgmtClusterPrivate)" -IPAddress 192.168.30.21 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (MgmtLiveMigration)" -IPAddress 192.168.31.21 -PrefixLength 24

12. Confirm the cluster public network (VLAN 40) is at the top of the network binding order.

13. The Cluster Private and Live Migration networks should not have any default gateway defined.

Host Storage Connections
Two volumes are required to support the Management host cluster as follows:
2TB Volume to be used as a Cluster Shared Volume
1GB Volume to be used as the Management Cluster Quorum

Once the switch FCoE configuration and Flex V7000 volume mapping have been completed, the storage
volumes should be visible to the host servers.
1. Confirm the disks are visible in Windows Disk Manager
A disk rescan may be required

2. From one server, bring each disk online, and format it as a GPT disk for use by the cluster. Assigning
drive letters is optional since the disks will be used for specific clustering roles such as CSV and quorum.
Validate that each potential host server can see the disks and bring them online.

Note: Only one server can have the disks online at a time, until they have been added to
Cluster Shared Volumes.
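The disk preparation in step 2 can also be scripted; a minimal sketch, assuming the new volume appears as disk number 2 (confirm the disk number with Get-Disk before making changes):
# Bring the disk online, clear read-only, initialize as GPT, and create an NTFS volume
Set-Disk -Number 2 -IsOffline $false
Set-Disk -Number 2 -IsReadOnly $false
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "MgmtCSV1" -Confirm:$false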

Management Host Cluster Creation
Microsoft Windows clustering will be used to join the two host servers together in a highly available
configuration that will allow both servers to run virtual machines to support a production environment.
Virtual machine workloads should be balanced across all hosts and careful attention should be paid to
ensure that the combined resources of all virtual machines do not exceed those available on N-1 cluster
nodes. Staying below this threshold will allow any single server to be taken out of the cluster while
minimizing the impact to your management servers.

A policy of monitoring resource utilization such as CPU, Memory, and Disk (space, and I/O) will help keep
the cluster running at optimal levels, and allow for proper planning for additional resources as needed.

1. Temporarily disable the default IBM USB Remote NDIS Network Device on all cluster nodes. If left
enabled, it will cause the validation wizard to issue a warning during network detection because all nodes
share the same IP address. The devices can be re-enabled after validation.

2. Using the Failover Cluster Manager on one of the two management nodes, run the Cluster Validation
Wizard to assess the two physical host servers as potential cluster candidates and address any
errors.
The cluster validation wizard checks for available cluster compatible host servers, storage, and
networking (Figure 36).
Make sure the intended cluster storage is online in only one of the cluster nodes.
Address any issues that are flagged during the validation.


Figure 36. Cluster Validation Wizard

Use the Failover Cluster Manager to create a cluster with the two physical host servers that are to
be used for the management cluster. Step through the cluster creation wizard.
o You will need a cluster name and IP address
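Validation and cluster creation can also be scripted with PowerShell; a minimal sketch, where the node names, cluster name, and IP address are examples only:
# Validate the candidate nodes, then create the management cluster
Test-Cluster -Node MgmtHost1, MgmtHost2
New-Cluster -Name MgmtCluster -Node MgmtHost1, MgmtHost2 -StaticAddress 192.168.40.25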


Figure 37. Failover Cluster Manager

3. Add the previously created storage volume that will be dedicated as a Cluster Shared Volume
(Figure 38).


Figure 38. Changing cluster storage volume to a Cluster Shared Volume

Note: The cluster quorum volume should not be configured as a CSV.
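A scripted equivalent for step 3, assuming the data disk appears as the resource named "Cluster Disk 1" (check with Get-ClusterResource first):
# List the available disk resources, then promote the data disk to a Cluster Shared Volume
Get-ClusterResource
Add-ClusterSharedVolume -Name "Cluster Disk 1"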

4. Using Hyper-V Manager, set the default paths for VM creation to use the Cluster Shared
Volume.

5. Configure the Live Migration settings to route this network traffic over the dedicated link as
shown in Figure 39.


Figure 39. Changing Live Migration settings
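The same preference can be scripted by excluding every cluster network except the Live Migration network from migration traffic; a sketch, assuming the Live Migration cluster network carries the 192.168.31.0 subnet:
# Exclude all cluster networks except the one carrying 192.168.31.x from Live Migration
$lm = Get-ClusterNetwork | Where-Object { $_.Address -eq "192.168.31.0" }
Get-ClusterResourceType -Name "Virtual Machine" |
    Set-ClusterParameter -Name MigrationExcludeNetworks `
    -Value ([String]::Join(";", (Get-ClusterNetwork | Where-Object { $_.Id -ne $lm.Id } | ForEach-Object { $_.Id })))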

Virtual Machine Fibre Channel Storage Connections
The majority of the storage used in a virtual environment will consist of virtual hard drives in the form of
VHDX files, which are used by Hyper-V. The VHDX files typically reside on storage provided by the
Cluster Shared Volume and managed by the host server. In some cases you need to create direct Fibre
Channel (FC) connections from the virtual machine to the storage. An example of this is to support the
creation of a cluster between two virtual machines also known as Guest Clustering. The steps listed
below will provide information about the setup and configuration of direct Fibre Channel connections to a
virtual machine.
1. Install the Flex V7000 MPIO driver on the virtual machine as shown previously, and then shut down
the virtual machine.
2. Using the Emulex OneCommand tool, enable NPIV on each of the Fibre Channel HBA ports used for
storage access in the host operating system, as seen in Figure 40. The default setting for NPIV is
disabled. The system must then be restarted.


Figure 40. Enabling NPIV on FC ports

3. Using the Hyper-V Manager Virtual SAN Manager, create two virtual SANs with one HBA port in
each virtual SAN (Figure 41). Previous WWPN information can be reviewed to determine the proper
port-to-switch correlation.

Figure 41. Creation of vSAN with Hyper-V Manager
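The virtual SANs can also be created with PowerShell; a sketch, assuming the first and second Fibre Channel initiator ports returned by Get-InitiatorPort map to switch 1 and switch 2 respectively (verify each WWPN against Tables 5 and 6 before running):
# Review the host FC ports and their WWPNs, then create one virtual SAN per physical port
$ports = Get-InitiatorPort    # note: may also list non-FC initiators on some systems
$ports | Select-Object NodeAddress, PortAddress, ConnectionType
New-VMSan -Name vSAN1 -HostBusAdapter $ports[0]
New-VMSan -Name vSAN2 -HostBusAdapter $ports[1]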

4. From the Hyper-V Manager console, open the settings for the virtual machine that will have FC
connectivity, add two vHBAs to the VM under Add Hardware, and associate each vHBA with one
of the two vSANs as shown in Figure 42.


Figure 42. Adding vHBA to Hyper-V virtual machine
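The equivalent PowerShell, assuming the VM is named vmSQL1 (adjust per virtual machine):
# Add one virtual HBA per virtual SAN to the powered-off virtual machine
Add-VMFibreChannelHba -VMName vmSQL1 -SanName vSAN1
Add-VMFibreChannelHba -VMName vmSQL1 -SanName vSAN2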

5. Record the WWPN that will be provided to the new vHBA under Address Set A. If this virtual
machine will be Live Migrated also record the WWPN seen under Address Set B.
Hyper-V Live Migration will alternate the WWPN port names between Address Set A and
Address Set B during migration between host servers.
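The assigned address sets can also be read with PowerShell, for example:
# Show the WWPN address sets generated for each vHBA on the VM
Get-VMFibreChannelHba -VMName vmSQL1 |
    Format-List SanName, WorldWidePortNameSetA, WorldWidePortNameSetB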

6. Start the virtual machine

7. The new active WWPN from the virtual machine should be seen by each Flex System CN4093 switch
at this point. A new fcalias will need to be created for this new WWPN on each switch as shown
previously. The second WWPN will not become visible until it becomes active, but should be added to
the fcalias list, zone, and zoneset at this time.

CN 4093(config)# fcalias vmSQL1_vSAN1_Addr_A wwn c0:03:ff:d1:90:f4:00:00
CN 4093(config)# fcalias vmSQL1_vSAN1_Addr_B wwn c0:03:ff:d1:90:f4:00:01
CN 4093(config)# fcalias vmSQL2_vSAN1_Addr_A wwn c0:03:ff:d1:90:f4:00:00
CN 4093(config)# fcalias vmSQL2_vSAN1_Addr_B wwn c0:03:ff:d1:90:f4:00:01
CN 4093(config)# write

8. A new zone will need to be created on each switch that contains the fcaliases of the two storage
ports, and new virtual machine WWPNs.

CN 4093(config-zone)# zone name SW1Zone_SQL_Cluster
CN 4093(config-zone)# member fcalias vmSQL1_vSAN1_Addr_A
CN 4093(config-zone)# member fcalias vmSQL1_vSAN1_Addr_B
CN 4093(config-zone)# member fcalias vmSQL2_vSAN1_Addr_A
CN 4093(config-zone)# member fcalias vmSQL2_vSAN1_Addr_B
CN 4093(config-zone)# member fcalias V7KLeft_PortA13
CN 4093(config-zone)# member fcalias V7KRight_PortA14
CN 4093(config-zone)# exit
CN 4093(config)# show zone
CN 4093(config)# write

9. Add the zone(s) just created to the current zoneset and activate.

CN 4093(config)# zoneset name SW1_ZoneSet
CN 4093(config-zoneset)# member SW1Zone_SQL_Cluster
CN 4093(config-zoneset)# exit
CN 4093(config)# show zoneset
CN 4093(config)# zoneset activate name SW1_ZoneSet
CN 4093(config)# write

10. Repeat the process for the second vHBA (vSAN2) on the other CN4093 switch.

Note: SAN connectivity will need to be configured for four of the management fabric virtual machines
(vmSQL1, vmSQL2, vmVMM1, vmVMM2) that will be using Guest Clustering.

The Flex V7000 storage should now be able to see the new WWPNs, which can be mapped to a new
host definition with storage assigned as needed.
The alternate B addresses should be included in the host mapping. They will show as unverified
until they are brought online, as seen in Figure 43.

Figure 43. Flex System V7000 Host Mappings of virtual machine

Volumes can then be assigned to the host virtual machine in the Flex V7000 storage manager, as
shown previously.
The new volumes should be visible in Windows Disk Manager. A disk rescan may be required.

Virtual Machine Setup and Configuration
Setup of the management fabric virtual machines will utilize several methods of Windows clustering to
create a robust and fault tolerant management environment.

The operating system can be installed on a virtual machine by using a variety of methods. One approach
is to modify the VM DVD drive settings to specify an image file that points to the Windows installation ISO
image, then start the VM to begin the install. Other deployment methods, such as using a VHD file with a
Sysprepped image, a WDS server, or SCCM, are acceptable as well.
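For the ISO approach, a minimal PowerShell sketch (the VM name and ISO path are examples only):
# Attach the Windows Server 2012 installation ISO and start the VM to begin setup
Set-VMDvdDrive -VMName vmSQL1 -Path "C:\ISO\WindowsServer2012.iso"
Start-VM -Name vmSQL1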

After the operating system is installed and while the VM is running, the following actions should be
performed before installing application software:
1. Run Windows Update

2. Install the integration services on the VM. Although the latest Windows Server builds have integration
services built-in, it is important to make sure the Hyper-V child and parent run the same version of
integration components.

3. Activate Windows

Additional steps for specific components of the management fabric can be found in the section below.

Hyper-V supports Dynamic Memory in virtual machines. This allows some flexibility in the assignment of
memory resources to VMs; however, some applications may experience performance-related issues if the
virtual machine's memory settings are not configured correctly. We suggest that the management fabric
VMs use static memory assignments to ensure optimal performance.
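Static memory can be enforced per VM with PowerShell, for example (shut the VM down first; the VM name and size are examples):
# Disable Dynamic Memory and assign a fixed amount of RAM
Set-VMMemory -VMName vmSQL1 -DynamicMemoryEnabled $false -StartupBytes 16GB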

System Center 2012 SP1 Setup and
Configuration
Microsoft System Center 2012 SP1 with IBM Upward Integration Modules (UIM) provides an environment
to configure, monitor, and manage a Windows Server with Hyper-V based configuration.

System Center 2012 SP1 is a collection of several server applications that are integrated to provide a
management fabric. Each component's scale limits are well beyond the needs of the present configuration;
however, they are listed in Table 8 to provide a reference point as your virtual environment scales up.

Component | Scalability | Reference Notes
Virtual Machine Manager | 800 hosts / 25,000 virtual machines per instance. | A Virtual Machine Manager instance is defined as a standalone or cluster installation. While not required, scalability is limited to 5,000 virtual machines when Service Provider Foundation (SPF) is installed. A single SPF installation can support up to five Virtual Machine Manager instances.
App Controller | Scalability is proportional to Virtual Machine Manager. | Supports 250 virtual machines per Virtual Machine Manager user role.
Operations Manager | 3,000 agents per management server; 15,000 agents per management group; 50,000 agentless managed devices per management group. |
Orchestrator | Simultaneous execution of 50 runbooks per Runbook server. |
Service Manager | Large deployment supports up to 20,000 computers. | Topology dependent. Note that in Fast Track, Service Manager is used solely for virtual machine management. An advanced deployment topology can support up to 50,000 computers.
Table 8. Scale points for System Center 2012 components

The Fast Track fabric management architecture utilizes System Center 2012 SP1 Datacenter edition. For
more information about System Center 2012, refer to What's New in System Center 2012 SP1.

SQL Server 2012 Setup and Configuration
Microsoft SQL Server 2012 performs the data repository functions for all the key components of System
Center management fabric. Each System Center component will store its data on its own clustered SQL
Server database instance.

Following the Microsoft Hyper-V Private Cloud Fast Track best practices, our solution consists of two
SQL Server 2012 virtual machines configured as a failover cluster using guest clustering. Seven instances
of SQL Server will be installed on each of these two machines, with each instance supporting a specific
role in the management fabric. Figure 44 shows the high-level architecture of a SQL Server guest failover
cluster.


Figure 44. Microsoft SQL Server 2012 Cluster

The base VM configuration for both of these VMs should be as follows:
Eight vCPU
16GB RAM
Two virtual network interfaces
o One interface configured for the Host Management network (ClusterPublic/VLAN40)
Utilize VM_Communications vSwitch to connect to vNIC
o Second interface for SQL Cluster Private (VLAN33)
Utilize VM_Communications vSwitch to connect to vNIC
One 60GB virtual hard disk for OS installation (VHDX format)
One 1GB Shared Storage LUN (for Guest Cluster Quorum)
Fourteen Shared Storage LUNs (for Database Instances)
Windows Server 2012 Datacenter edition installed and updated
High availability achieved through the use of guest clustering between VMs.

Note: Do not use host clustering.
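A sketch of creating one of the SQL VMs with this profile from a management host; the VM name, paths, and VLAN IDs follow the examples used in this guide and should be adjusted for your environment:
# Create the VM on the management CSV with a 60GB VHDX, connected to the VM_Communication switch
New-VM -Name vmSQL1 -MemoryStartupBytes 16GB -SwitchName VM_Communication `
    -NewVHDPath "C:\ClusterStorage\Volume1\vmSQL1\vmSQL1.vhdx" -NewVHDSizeBytes 60GB `
    -Path "C:\ClusterStorage\Volume1"
Set-VMProcessor -VMName vmSQL1 -Count 8
# Second NIC for the SQL guest cluster private network, plus VLAN assignments for both interfaces
Add-VMNetworkAdapter -VMName vmSQL1 -SwitchName VM_Communication -Name "SQLClusterPrivate"
Set-VMNetworkAdapterVlan -VMName vmSQL1 -VMNetworkAdapterName "Network Adapter" -Access -VlanId 40
Set-VMNetworkAdapterVlan -VMName vmSQL1 -VMNetworkAdapterName "SQLClusterPrivate" -Access -VlanId 33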

SQL Clustered Instances
A total of seven clustered SQL Server instances will be created to service the management fabric. A
high-level overview of each instance is presented in Table 9.

Fabric Management Component | Instance Name (Suggested) | Components | Collation (1) | Storage Volumes (2)
Virtual Machine Manager | SCVMMDB | Database Engine | SQL_Latin1_General_CP1_CI_AS | 2 LUNs
Windows Server Update Services (optional) | SCVMMDB | Database Engine | SQL_Latin1_General_CP1_CI_AS | N/A - shared instance with Virtual Machine Manager
Operations Manager | SCOMDB | Database Engine, Full-Text Search | SQL_Latin1_General_CP1_CI_AS | 2 LUNs
Operations Manager Data Warehouse | SCOMDW | Database Engine, Full-Text Search | SQL_Latin1_General_CP1_CI_AS | 2 LUNs
Service Manager | SCSMDB | Database Engine, Full-Text Search | Latin1_General_100_CI_AS | 2 LUNs
Service Manager Data Warehouse | SCSMDW | Database Engine, Full-Text Search | Latin1_General_100_CI_AS | 2 LUNs
Service Manager Data Warehouse | SCSMAS | Analysis Services | Latin1_General_100_CI_AS | 2 LUNs
Service Manager Web Parts and Portal | SCDB | Database Engine | SQL_Latin1_General_CP1_CI_AS | N/A - shared instance with Orchestrator and App Controller
Orchestrator | SCDB | Database Engine | SQL_Latin1_General_CP1_CI_AS | 2 LUNs
App Controller | SCDB | Database Engine | SQL_Latin1_General_CP1_CI_AS | N/A - shared instance with Orchestrator and Service Manager Portal
Table 9. Database Instance Requirements
(1) The default SQL collation settings are not supported for multi-lingual installations of the Service Manager component. Only use the default SQL collation if multiple languages are not required. Note that the same collation must be used for all Service Manager databases (management, DW, and reporting services).
(2) Additional LUNs may be required for TempDB management in larger scale configurations.

SQL Cluster Storage
A total of fifteen storage volumes (LUNs) will need to be created in the Flex V7000 storage and mapped
to the two SQL Server VMs: one for the cluster quorum and fourteen for the database instances (two per
instance). Information on enabling a direct SAN connection to virtual machines through the use of virtual
HBAs can be found above in the section Virtual Machine Fibre Channel Storage Connections.
The SQL virtual machines should be created and configured for networking and storage before
proceeding with the creation of the SQL cluster. Table 10 provides additional guidance on suggested SQL
Server instance names, disk sizes, assignments, and IP addresses.

Component (columns): Service Manager Management Server | Service Manager Data Warehouse Server | Service Manager Analysis Server | App Controller, Orchestrator, Microsoft SharePoint services Farm and WSUS | Virtual Machine Manager | Operations Manager | Operations Manager Data Warehouse
SQL Server Instance Name: SCSMDB | SCSMDW | SCSMAS | SCDB | SCVMMDB | SCOMDB | SCOMDW
SQL Server Instance Failover Cluster Network Name: SCSMDB | SCSMDW | SCSMAS | SCDB | SCVMMDB | SCOMDB | SCOMDW
SQL Server Instance DATA Cluster Disk Resource: Cluster Disk 2 (145GB) | Cluster Disk 4 (1TB) | Cluster Disk 6 (8GB) | Cluster Disk 8 (10GB) | Cluster Disk 10 (6GB) | Cluster Disk 12 (130GB) | Cluster Disk 14 (1TB)
SQL Server Instance LOG Cluster Disk Resource: Cluster Disk 3 (70GB) | Cluster Disk 5 (500GB) | Cluster Disk 7 (4GB) | Cluster Disk 9 (5GB) | Cluster Disk 11 (3GB) | Cluster Disk 13 (65GB) | Cluster Disk 15 (500GB)
SQL Server Instance Install Drive: E: | G: | I: | K: | M: | O: | Q:
SQL Server Instance DATA Drive: E: | G: | I: | K: | M: | O: | Q:
SQL Server Instance LOG Drive: F: | H: | J: | L: | N: | P: | R:
SQL Server Instance TEMPDB Drive: F: | H: | J: | L: | N: | P: | R:
Cluster Service Name: SQL Server (SCSMDB) | SQL Server (SCSMDW) | SQL Server (SCSMAS) | SQL Server (SCDB) | SQL Server (SCVMMDB) | SQL Server (SCOMDB) | SQL Server (SCOMDW)
Clustered SQL Server Instance IP Address: 192.168.40.43 | 192.168.40.44 | 192.168.40.45 | 192.168.40.46 | 192.168.40.47 | 192.168.40.48 | 192.168.40.49
Host Cluster Public Network Interface Subnet Mask: 255.255.255.0 for all seven instances
Host vSwitch for Network Interface Name: VM_Comm for all seven instances
SQL Server Instance Listening TCP/IP Port: 10437 | 10438 | 10439 | 1433 (3) | 10434 | 10435 | 10436
SQL Server Instance Preferred Owners: Node2, Node1 | Node2, Node1 | Node2, Node1 | Node1, Node2 | Node1, Node2 | Node1, Node2 | Node1, Node2
Table 10. Database Instance guidance
(3) The SCDB instance must be configured to port 1433 if the Cloud Services Process Pack will be used.

SQL Server Guest Clustering
The two virtual machines hosting SQL Server 2012 will utilize guest clustering to create a highly available
fault tolerant configuration. This requires that the two VMs be assigned virtual HBAs. Refer to the
preceding section Virtual Machine Fibre Channel Storage Connections for additional information.

1. Add the two vHBAs, and record the WWPN information from Hyper-V Manager.

2. Create the fcaliases, and zones on both Flex CN4093 switches

3. Add new FC zones to the active Flex CN4093 zonesets.




4. Map the storage volumes to the two virtual machines.

5. Validate network and storage connections.

6. Create the Windows cluster.

Please refer to the Microsoft Infrastructure as a Service (IaaS) Deployment Guide containing detailed
installation instructions for the SQL Server 2012 configuration:

System Center Virtual Machine Manager 2012 SP1 Setup and Configuration
The System Center 2012 Virtual Machine Manager component of this configuration consists of two
virtual machines utilizing guest clustering, and running Microsoft System Center 2012 SP1 Virtual
Machine Manager (VMM) to provide virtualization management services to this environment. Figure 45
shows the high level architecture of a Virtual Machine Manager high availability cluster.


Figure 45. Virtual Machine Manager high availability cluster

The base VM configuration for both of these VMs should be as follows:
Four vCPU
8GB RAM
Two virtual network interfaces
o One interface configured for the Host Management network (ClusterPublic/VLAN40)
Utilize VM_Communications vSwitch to connect to vNIC
o Second interface for SCVMM Cluster Private
Utilize VM_Communications vSwitch to connect to vNIC
One 60GB virtual hard disk for OS and application installation (VHDX format)
One 1GB Shared Storage LUN (for Guest Cluster Quorum)
One 500GB Shared Storage LUN (for VMM Library)
o This LUN is actually owned and shared out by the SQL Cluster also serving as a File Share
Cluster.
Windows Server 2012 Datacenter edition installed and updated
High availability achieved through the use of guest clustering between VMs.

Note: Do not use host clustering.

Note: A dedicated SQL Server 2012 instance running on a separate previously configured
SQL Server cluster is required to support this workload.

Note: The Microsoft IaaS Deployment Guide recommends installing the Virtual Machine
Manager Library as a Clustered file resource using the two SQL servers. Prescriptive
guidance for this is included in the above document.


SCVMM Guest Clustering for Virtual Machine Manager
The two virtual machines hosting SCVMM will utilize guest clustering to create a highly available fault
tolerant configuration. This requires that the two VMs be assigned virtual HBAs. Refer to the preceding
section Virtual Machine Fibre Channel Storage Connections for additional information.

1. Add the two vHBAs, and record the WWPN information with Hyper-V Manager.

2. Create the fcaliases, and zones on both Flex CN4093 switches.

3. Add new FC zones to the active Flex CN4093 zonesets.

4. A small (1GB) volume should then be created on the Flex V7000 storage and mapped to both of the
VMs for use as the cluster quorum.

5. Map the storage volumes to the two virtual machines.

6. Validate network and storage connections.

7. Create the Windows cluster.

Please refer to the Microsoft Infrastructure as a Service (IaaS) Deployment Guide containing detailed
installation instructions for the System Center Virtual Machine Manager 2012 SP1 configuration:

IBM Pro Pack for Microsoft System Center Virtual Machine Manager
The IBM Pro Pack modules for System Center Virtual Machine Manager provide performance, health
alerting, and virtual machine migration recommendations for the IBM Flex Systems that are used in this
configuration. This package will be installed on the System Center Operations Manager server using the
instructions found in IBM Hardware Performance and Resource Optimization Pack for Microsoft System
Center Virtual Machine Manager.

Flex System V7000 Storage Automation with SMI-S
The IBM Flex System V7000 storage nodes support storage automation through the implementation of
the Storage Management Initiative Specification (SMI-S). Core storage management tasks such as
discovery, classification, provisioning/de-provisioning, and mapping can be completed from the System Center
Virtual Machine Manager 2012 SP1 console.

1. Select the Fabric panel on the bottom left side of the SCVMM console, and then choose Providers
as shown in Figure 46.


Figure 46. SCVMM SMI-S Provider Setup

2. Specify the Discovery Scope on the next step of the SCVMM Provider setup wizard. Make sure SSL
is selected for communication with the Flex System V7000 storage as seen in Figure 47.


Figure 47. SCVMM Provider Discovery Scope

3. Provide the account for SCVMM to use in order to log into the Flex System V7000 (Figure 48).

Note: This should have been set up in advance on the Flex V7000.


Figure 48. SCVMM account login for Flex System V7000 Storage

4. Discover the storage and import the Flex V7000 security certificate (Figure 49).


Figure 49. Flex System V7000 Storage Discover and Security Certificate importation

5. Classify the discovered storage and complete the SMI-S setup wizard (Figure 50)

Figure 50. SCVMM storage classification and setup completion

Bare Metal Provisioning
Bare metal provisioning of compute nodes can be accomplished using several methods. Windows
Deployment Services (included in Windows Server) provides a straightforward bare-metal deployment
framework. A more comprehensive deployment and update solution is found in Microsoft System Center
Configuration Manager (SCCM) 2012 SP1. IBM offers an upward integration module that plugs into
SCCM to better manage, deploy, and update IBM hardware.

System Center Operations Manager 2012 SP1 Setup and Configuration
The System Center 2012 Operations Manager component of this configuration consists of two virtual
machines utilizing native application failover and redundancy features and operating as a single
management group. A third virtual machine will be configured as Highly Available (HA) under the host
cluster and utilized as the Operations Manager Reporting Server. Figure 51 shows the high level
architecture of the Operations Manager components in the management cluster.


Figure 51. System Center Operations Manager virtual machines

The base VM configuration for all three of these VMs should be as follows:
Eight vCPU
16GB RAM
One virtual network interface
o Configured for the Host Management network (ClusterPublic/VLAN40)
Utilize VM_Communications vSwitch to connect to vNIC
o Routing will need to be provided between OOB Network and the Host Management Network
(MgmtClusterPublic) for Operations Manager to be able to communicate to the IBM Flex
CMM.
One 60GB virtual hard disk for OS and application installation (VHDX format)
Windows Server 2012 Datacenter edition installed and updated.
High availability is achieved through native application interfaces for the SCOM servers.
High availability is achieved through host clustering for the SCOM Data Warehouse.

Note: Two dedicated SQL Server 2012 instances running on the separate, previously configured
SQL Server cluster are required to support these workloads.
One instance for the Operations Manager Database
One instance for the Operations Manager Data Warehouse

Please refer to the Microsoft Infrastructure as a Service (IaaS) Deployment Guide containing detailed
installation instructions for the System Center Operations Manager 2012 SP1 configuration:

IBM Upward Integration Modules for Microsoft System Center Operations Manager
Upon completion of the System Center Operations Manager setup and configuration, the IBM Upward
Integration Modules SCOM management package can be installed and configured.

The IBM Upward Integration Modules for System Center expand the server management capability by
integrating IBM hardware management functionality, reducing the time and effort required for routine
system administration. They provide discovery, deployment, configuration, monitoring, and event and
power management. The IBM UIM package can be downloaded from:

IBM Hardware Management Pack for Microsoft System Center Operations Manager

The following steps found in the Installation Guide should be executed:
Install the IBM SCOM package on the Operations Management Server
Install the optional Power CIM provider
Install and activate the IBM License tool on the SCOM server
Discover and manage the IBM Flex x240 servers
Create connection to IBM Flex Chassis through the Flex System CMM using SNMP V3

IBM Pro Pack for Microsoft System Center Virtual Machine Manager
The following steps found in the previously mentioned Installation Guide should be executed:
Verify the SCOM and SCVMM servers have been integrated
Install the IBM Hardware Performance and Resource Optimization Pack for Microsoft System
Center Virtual Machine Manager on the Operations Management Server.

IBM Flex System V7000 Storage Management Pack for Microsoft System Center
Operations Manager
A management package for System Center Operations Manager for the IBM Flex V7000 Storage
Controller is also available on the IBM Fix Central website.

Note: Systems Storage->Disk Systems->Mid-range disk systems->IBM Storwize V7000->IBM
Storage Management Pack for Microsoft SCOM v2.1. The 64-Bit version should be used.

The following steps found in the Installation Guide should be executed:
Install the IBM SCOM package on the Operations Management Server
Confirm network connectivity to the IBM Flex V7000 from the SCOM server
Configure the management pack to monitor the Flex V7000 storage using the instructions
provided.

System Center Orchestrator 2012 SP1 Setup and Configuration
Microsoft System Center Orchestrator 2012 is a workflow management solution for the datacenter.
Orchestrator enables you to automate the creation, monitoring, and deployment of resources in your
environment.

System Center Orchestrator 2012 is configured as a highly available platform by deploying two virtual
machines and using the native application high-availability services built into the product. Figure 52 shows
the high-level architecture of the Orchestrator native application high availability running on the
management cluster.


Figure 52. System Center Orchestrator Highly Available Cluster

The base VM configuration for both of these VMs should be as follows:
Four vCPU
8GB RAM
One virtual network interface
o Configured for the Host Management network (ClusterPublic/VLAN40)
Utilize VM_Communications vSwitch to connect to vNIC
One 60GB virtual hard disk for OS and application installation (VHDX format)
Windows Server 2012 Datacenter edition installed and updated
Native application interface used for high availability.

Note: Do not use host clustering.

Note: A dedicated SQL Server 2012 instance running on a separate, previously configured
SQL Server cluster is required to support this workload.

Please refer to the Microsoft Infrastructure as a Service (IaaS) Deployment Guide containing detailed
installation instructions for the System Center Orchestrator 2012 SP1 configuration:


System Center Service Manager 2012 SP1 Setup and Configuration
Microsoft System Center Service Manager 2012 provides an integrated management platform with a
robust set of capabilities that provide built-in processes for incident and problem resolution, change
control, and asset lifecycle management.

The Microsoft Hyper-V Private Cloud Fast Track solution consists of four Service Manager components.
Each component runs on its own VM instance as seen in Figure 53. High availability for each of these
VMs is achieved through Windows Failover Clustering on the hosts with the ability to rapidly Live Migrate
or Failover the virtual machine between hosts.


Figure 53. Service Manager Architecture

The base VM configuration for the two Service Manager VMs should be as follows:
Four vCPU
16GB RAM
One virtual network interface
o Configured for the Host Management network (ClusterPublic/VLAN40)
Utilize VM_Communications vSwitch to connect to vNIC
One 60GB virtual hard disk for OS and application installation (VHDX format)
Windows Server 2012 Datacenter edition installed and updated
Highly available through Windows Failover host clustering

The base VM configuration for the two Service Manager Data Warehouse VMs should be as follows:
Eight vCPU
16GB RAM
One virtual network interface
o Configured for the Host Management network (ClusterPublic/VLAN40)
Utilize VM_Communications vSwitch to connect to vNIC
One 60GB virtual hard disk for OS and application installation (VHDX format)
Windows Server 2012 Datacenter edition installed and updated
Highly available through Windows Failover host clustering

The base VM configuration for the Service Manager Self Service Portal VMs should be as follows:
Eight vCPU
16GB RAM
One virtual network interface
o Configured for the Host Management network (ClusterPublic/VLAN40)
Utilize VM_Communications vSwitch to connect to vNIC
One 60GB virtual hard disk for OS and application installation (VHDX format)
Windows Server 2008 R2 SP1 Datacenter edition installed and updated
Highly available through Windows Failover host clustering

Note: Two dedicated SQL Server 2012 instances running on a separate previously configured SQL
Server cluster are required to support these workloads.
One instance for the Service Manager Database
One instance for the Service Manager Data Warehouse

Please refer to the Microsoft Infrastructure as a Service (IaaS) Deployment Guide containing detailed
installation instructions for the System Center Service Manager 2012 SP1 configuration:

WSUS Server Setup and Configuration
Windows Server Update Services (WSUS) enables you to deploy the latest Microsoft critical updates to
servers that are running the Windows operating system. By using WSUS combined with Cluster Aware
Updating, you can control which specific updates are applied consistently across the clustered Windows
Server environment.

Figure 54 shows the high-level architecture of the highly available WSUS VM running on the Windows failover cluster.


Figure 54. WSUS Highly Available VM

The base VM configuration for the WSUS VM should be as follows:
• Two vCPUs
• 4GB RAM
• One virtual network interface
o Configured for the Host Management network (ClusterPublic/VLAN40)
o Utilize VM_Communications vSwitch to connect to vNIC
• One 60GB virtual hard disk for OS and application installation (VHDX format)
• Windows Server 2012 Datacenter edition installed and updated
• Highly available through Windows Failover host clustering

Note: WSUS requires a small database. It is suggested to host the WSUS database on the
SCDB instance of the previously configured SQL Server cluster to support this workload.

For detailed instructions on how to set up and configure WSUS, consult the Microsoft TechNet document Deploy
Windows Server Update Services in Your Organization.
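As a hedged illustration, the WSUS role itself can be installed on the VM from PowerShell rather than Server Manager. The following minimal sketch assumes the database is hosted on an external SQL Server instance, as suggested in the note above; the SQL instance name (SQLCLUSTER\SCDB) and content directory (D:\WSUS) are placeholders for your environment.

#Minimal sketch: install the WSUS role services configured for an external SQL Server
#database rather than the Windows Internal Database.
Install-WindowsFeature -Name UpdateServices-Services, UpdateServices-DB -IncludeManagementTools

#Post-installation: point WSUS at the SQL instance and a content directory (placeholders)
& "$env:ProgramFiles\Update Services\Tools\wsusutil.exe" postinstall SQL_INSTANCE_NAME="SQLCLUSTER\SCDB" CONTENT_DIR=D:\WSUS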

Cluster Aware Updating Setup and Configuration
Cluster Aware Updating (CAU) is a feature in Windows Server 2012 that simplifies the management and
updating of cluster nodes. With CAU managing the Windows Server cluster updates, one node is paused
at a time and its virtual machines are migrated off the node. Updates are then applied, and the cluster node
is rebooted and brought back online. This process is repeated until the entire cluster has been updated.

The Cluster Aware Updating feature is installed as part of the cluster management administrative tools
along with the Failover Clustering feature, or as part of the Remote Server Administration Tools if installed
on a separate management console.

From within the CAU console (Figure 55), select the cluster to be updated and connect.

Figure 55. CAU Console Cluster Selection

Install the CAU role on the cluster nodes (Figure 56)

Figure 56. Installing CAU on cluster

Complete the wizard by selecting the self-updating mode, the self-updating schedule, and any optional
parameters that are needed, and then confirm (Figure 57).

Figure 57. Complete CAU wizard
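
The same CAU configuration can also be performed from PowerShell. The following is a minimal sketch, assuming the compute cluster is named ComputeCluster; the self-updating schedule values are examples only.

#Minimal sketch: add the CAU clustered role in self-updating mode and trigger an
#on-demand updating run. "ComputeCluster" is a placeholder cluster name.
Import-Module ClusterAwareUpdating

#Enable self-updating mode with an example schedule (third week of the month, Sunday)
Add-CauClusterRole -ClusterName "ComputeCluster" -DaysOfWeek Sunday -WeeksOfMonth 3 -Force

#Optionally start an immediate updating run across the cluster
Invoke-CauRun -ClusterName "ComputeCluster" -Force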

IBM Flex System x240 Compute Node Setup
The production fabric for this configuration will consist of eight dual socket IBM Flex System x240
Compute Nodes with 128GB of RAM, and one CN4054 4-port converged network card each. These Flex
System compute nodes will be configured into a highly available Windows Server cluster to support rapid
deployment, resource balancing, and failover by the management fabric.

Setup involves the installation and configuration of Windows Server 2012 Datacenter edition, networking,
and storage on each node. After setup, and discovery by the previously configured management fabric,
production virtual machines can be deployed on this compute cluster.

Pre-OS Installation
1. Confirm the CN4054 4-port Ethernet devices are installed in each compute node.
o The FCoE Feature on Demand (FoD) must be imported on each node via an IMM connection as
shown previously in the management fabric setup.

2. Validate FW levels are consistent across all compute nodes and update if needed.

3. Verify UEFI performance settings are consistent with corporate operation procedures.

4. Configure the CN4054 card to disable multichannel and enable FCoE as documented previously in
the management fabric setup.

5. Confirm the CN4093 switches have been configured.
o Inter-switch links, VLAGs, and VLANs should have been created and assigned as previously
described.

6. The V7000 Storage Node should be configured as defined in the storage section and be ready for
host mapping assignments to map the volumes when the WWPNs become available.

7. The two local disks should be configured as a RAID 1 array.

OS Installation and Configuration
Windows Server 2012 Datacenter allows unlimited Windows virtual machine rights on the host servers
and is the preferred version for building Hyper-V compute configurations.

1. Install Windows Server 2012 Datacenter Edition.

2. Set your server name, and join the domain.

3. Install the Hyper-V role and Failover Clustering feature.

4. Update Windows Server to ensure any new patches are installed.

5. The latest CN4054 NIC and FCoE drivers should be downloaded from IBM Fix Central and installed.

6. Install the Emulex OneCommand Manager utility to provide additional information for the CN4054
converged network adapter.

7. Multipath I/O is used to provide balanced and fault-tolerant paths to the V7000 Storage Node. This
requires the Flex System V7000-specific MPIO DSM driver to be installed on the host servers before
attaching the storage.
o The Microsoft MPIO prerequisite will also be installed if it is not already on the system.

8. Install the IBM Systems Director platform agent 6.3.3
http://www-03.ibm.com/systems/software/director/downloads/agents.html
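
Several of the operating system configuration steps above can be scripted. The following is a minimal PowerShell sketch; the server name and domain shown are placeholders, and the driver and agent installers in steps 5 through 8 still follow their own setup routines.

#Minimal sketch of steps 2-3: join the domain under a new name, then add the
#Hyper-V, Failover Clustering, and Multipath I/O features.
#"Compute01" and "contoso.local" are placeholder values.
Add-Computer -DomainName "contoso.local" -NewName "Compute01" -Restart

#After the restart, install the roles and features (Multipath I/O supports step 7)
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart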

Network Configuration
Two separate Windows Server 2012 NIC teams will be created in the following configuration. One team is
used to support host server management and cluster private traffic; a second team is used to support
Live Migration and VM communication.

Pay careful attention when identifying and enumerating the network interfaces in each host to make sure
the teams are spread across the two network interfaces and properly routed to the correct switch ports. Use
the Emulex OneCommand Manager utility to expose each network interface and MAC address.

Note: Refer to Figures 8 and 9 to clarify the mapping of server ports to the two CN4093 switch
ports. Each ASIC has two network interfaces; the top one is for switch 1 and the bottom one for switch
2.

The following PowerShell commands can also be useful (additional PowerShell
scripts can be found in the Appendix):
Get-NetAdapterHardwareInfo | Sort-Object -Property Bus,Function
Get-NetAdapter -InterfaceDescription Emulex*
Rename-NetAdapter -Name Ethernet -NewName PortA1_SW1

Figure 58 is a NIC interface naming example.

Figure 58. Displaying all Emulex network adapters with PowerShell

The team created to support cluster public and private communication with the host servers will share
the same ports being used for FCoE traffic. The CN4093 switches prioritize FCoE traffic over Ethernet
data traffic. To further reduce the potential for bandwidth oversubscription, Quality of Service (QoS)
bandwidth limits will be placed on these interfaces in Windows Server 2012. This team should be
created using the two NIC ports described in Table 2.

1. The ClusterTeam team should be created using the default Switch Independent mode and Address
Hash load balancing with the following PowerShell command.
New-NetLbfoTeam -Name ClusterTeam -TeamMembers PortA1_SW1, PortB1_SW2 -TeamingMode SwitchIndependent

Note: The team member names will vary by host server

2. The second team will be created from the NIC interfaces configured to support VLAG. This allows the
use of LACP teaming and is then backed by the full aggregated bandwidth of the two ports.
New-NetLbfoTeam -Name VLAG -TeamMembers "PortA1_SW2 VLAG", "PortB1_SW1 VLAG" -TeamingMode Lacp

3. When Windows Server 2012 NIC teaming is complete, there should be two teams visible
when queried in PowerShell, as seen in Figure 59.


Figure 59. Windows Server NIC teaming.

4. Virtual switches will be created on each of these teams. Generally each Hyper-V virtual switch can
provide one interface to the management operating system. A second virtual NIC for use by the host
operating system will be created on the ClusterTeam in order to provide a segregated network path
for the Cluster Private/CSV network. The following PowerShell commands are used to create the
virtual switches and the second virtual NIC.
New-VMSwitch -Name ComputeClusterPublic -NetAdapterName ClusterTeam -MinimumBandwidthMode Absolute -AllowManagementOS $true
Add-VMNetworkAdapter -ManagementOS -Name ComputeClusterPrivate -SwitchName ComputeClusterPublic
New-VMSwitch -Name VM_Communication -NetAdapterName VLAG -MinimumBandwidthMode Weight -AllowManagementOS $true

5. Rename the management-facing network interface on the VM_Communication team to reflect the role
that it is fulfilling.
Rename-VMNetworkAdapter -NewName ComputeLiveMigration -ManagementOS -Name VM_Communication

6. Confirm the network interfaces are available to the management operating system with the following
PowerShell command, as shown in Figure 60.
Get-VMNetworkAdapter -ManagementOS


Figure 60. Displaying all network interfaces created for host partition from vSwitches

7. Assign VLAN IDs to each of these interfaces with the following PowerShell commands.
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName ComputeClusterPublic -Access -VlanId 40
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName ComputeClusterPrivate -Access -VlanId 37
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName ComputeLiveMigration -Access -VlanId 38

8. Confirm the VLAN IDs of the network interfaces that are available to the management operating
system with the following PowerShell command, as shown in Figure 61.
Get-VMNetworkAdapterVlan -ManagementOS


Figure 61. VLAN Assignments on Management OS NIC interfaces

9. Bandwidth limits should be placed on these network interfaces. The ComputeClusterPublic virtual switch was
created with Absolute bandwidth mode. This allows a maximum bandwidth cap to be placed on these
network interfaces with the following PowerShell commands. Maximum bandwidth is defined in bits
per second.
Set-VMNetworkAdapter -ManagementOS -Name ComputeClusterPublic -MaximumBandwidth 2GB
Set-VMNetworkAdapter -ManagementOS -Name ComputeClusterPrivate -MaximumBandwidth 1GB

10. The network interface used for Hyper-V Live Migration uses the team that was created with
Weight mode. A minimum bandwidth weight of 30 will be set for the Live Migration network.
Set-VMNetworkAdapter -ManagementOS -Name ComputeLiveMigration -MinimumBandwidthWeight 30
When complete, confirm the management OS network adapter names and VLAN assignments.

11. Assign TCP/IP addresses and confirm network connectivity for all network connections on each
VLAN.
New-NetIPAddress -InterfaceAlias "vEthernet (ComputeClusterPublic)" -IPAddress 192.168.40.31 -PrefixLength 24

12. The cluster public network (VLAN 40) should be at the top of the network binding order

13. The Cluster Private and Live Migration networks should not have a default gateway defined.

Host Storage Connections
Two volumes are required to support the compute host cluster:
• 4TB volume to be used as a Cluster Shared Volume
• 1GB volume to be used as the compute cluster quorum

Once the switch FCoE configuration and Flex System V7000 volume mapping have been completed, the storage
volumes should be visible to the host servers.
1. Confirm the disks are visible in Windows Disk Management.
o A disk rescan may be required.

2. From one server, bring each disk online and format it as a GPT disk for use by the cluster. Assigning
drive letters is optional since the disks will be used for specific clustering roles such as CSV and quorum.
Validate that each potential host server can see the disks and bring them online.

Note: Only one server can have the disks online at a time, until they have been added to
Cluster Shared Volumes.
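
The disk preparation in step 2 can also be scripted. The following is a minimal sketch, assuming the two V7000 volumes surface as disk numbers 1 and 2 on the node; verify the actual numbers with Get-Disk and adjust accordingly.

#Minimal sketch: bring the mapped V7000 volumes online and prepare them for clustering.
#Disk numbers 1 and 2 are assumptions; verify with Get-Disk.
Update-HostStorageCache

foreach ($diskNumber in 1, 2) {
    Set-Disk -Number $diskNumber -IsOffline $false
    Set-Disk -Number $diskNumber -IsReadOnly $false
    Initialize-Disk -Number $diskNumber -PartitionStyle GPT
    New-Partition -DiskNumber $diskNumber -UseMaximumSize | Format-Volume -FileSystem NTFS -Confirm:$false
}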

Compute Host Cluster Creation
Microsoft Windows failover clustering will be used to join the eight host servers into a highly available
configuration that allows them to run virtual machines in support of a production environment.
Virtual machine workloads should be balanced across all hosts, with careful attention to ensure that the
combined resources of all virtual machines do not exceed those available on N-1 cluster nodes. For
example, with eight nodes of 128GB each, plan virtual machine memory so that the total fits within seven
nodes (roughly 896GB, less host overhead). Remaining below this threshold will minimize the impact to
your production servers if any single server is taken out of the cluster.

A policy of monitoring resource utilization such as CPU, memory, and disk (space and I/O) will help keep
the cluster running at optimal levels and allow for proper planning to add additional resources as needed.

1. Temporarily disable the default IBM USB Remote NDIS Network Device on all cluster nodes, because
it causes the validation wizard to issue a warning during network detection due to all nodes sharing the same
IP address. These devices can be re-enabled after validation.

2. Open Failover Cluster Manager on one of the eight compute nodes, run the Cluster Validation
Wizard to assess the eight physical host servers as potential cluster candidates, and address any
errors.
o The Cluster Validation Wizard checks for available cluster-compatible host servers, storage, and
networking (Figure 62).
o Make sure the intended cluster storage is online on only one of the cluster nodes.
o Address any issues that are flagged during validation.


Figure 62. Cluster Validation Wizard

3. Use Failover Cluster Manager to create a cluster with the eight physical host servers to be used
for the compute cluster. Follow the steps found in the Create Cluster Wizard (a scripted alternative is
shown after these steps).
o You will need a cluster name and IP address.
o Figure 63 shows Failover Cluster Manager with the eight-node compute cluster visible.


Figure 63. Compute Cluster Failover Cluster Manager

4. Add the storage volume that will be dedicated to Cluster Shared Volumes.
5. Using Hyper-V Manager, set the default paths for VM creation to use the Cluster Shared Volumes on
each host.

6. Configure the Live Migration settings to route this network traffic over the dedicated link as shown
in Figure 64.

Figure 64. Changing Live Migration settings
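
As a hedged alternative to the wizard-driven steps above, cluster validation, cluster creation, CSV addition, and the default VM paths can be scripted with the FailoverClusters and Hyper-V modules. This is a minimal sketch; the node names, cluster name, cluster disk name, and CSV path are placeholders, and the cluster IP address shown matches the production cluster entry in the network address tables.

#Minimal sketch of steps 2-5. All names are placeholders.
Import-Module FailoverClusters

$nodes = "Compute01","Compute02","Compute03","Compute04",
         "Compute05","Compute06","Compute07","Compute08"

#Step 2: validate the candidate nodes and review the report for errors
Test-Cluster -Node $nodes

#Step 3: create the eight-node compute cluster on the cluster public network
New-Cluster -Name "ComputeCluster" -Node $nodes -StaticAddress 192.168.40.39

#Step 4: add the 4TB volume to Cluster Shared Volumes (the disk resource name will vary)
Add-ClusterSharedVolume -Cluster "ComputeCluster" -Name "Cluster Disk 1"

#Step 5: point default VM and VHD paths at the CSV on each host
Set-VMHost -ComputerName $nodes -VirtualMachinePath "C:\ClusterStorage\Volume1" -VirtualHardDiskPath "C:\ClusterStorage\Volume1"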


Summary
Upon completing the implementation steps, a highly available, elastic, and flexible virtualization environment
based on IBM Flex System and Microsoft Hyper-V is created. It consists of a two-node cluster forming
the management fabric based on Microsoft System Center 2012 SP1 and IBM Upward Integration
Modules, and an eight-node compute cluster for deployment of production resources. With enterprise-
class, multi-level software and hardware fault tolerance, this robust collection of industry-leading IBM
Flex System Compute Nodes, Storage Nodes, and networking components is configured to meet
Microsoft's Private Cloud Fast Track program guidelines. The program's unique framework promotes
standardized and highly manageable virtualization environments that can help satisfy even the most
challenging business-critical virtualization demands, such as high availability, rapid deployment with self-
service provisioning, load balancing, and the ability to easily scale the compute cluster as business
needs grow.


Appendix
Related Links

IBM Support:
http://www.ibm.com/support

Pure Systems Updates:
https://www-304.ibm.com/software/brandcatalog/puresystems/centre/update

IBM Flex System x240 Compute Node Installation and Service Guide
http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.8737.doc/dw1ko_book.pdf

IBM Flex System Chassis Management Module Installation Guide
http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.cmm.doc/dw1ku_cmm_ig_
book.pdf

IBM Flex System Chassis Management Module User's Guide
http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.cmm.doc/dw1kt_cmm_ug_
pdf.pdf

IBM Flex System Chassis Management Module Command-Line Interface Reference Guide
http://publib.boulder.ibm.com/infocenter/flexsys/information/topic/com.ibm.acc.cmm.doc/dw1ku_cmm_ig_
book.pdf

IBM Flex System Power Guide:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102111

Using IBM Features on Demand:
http://www.redbooks.ibm.com/redpapers/pdfs/redp4895.pdf

IBM Server Guide:
http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=serv-guide

IBM Firmware update and best practices guide:
http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5082923

IBM Bootable Media Creator:
http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-BOMC

IBM Director Agent Download (Platform Agent)
http://www-03.ibm.com/systems/software/director/downloads/agents.html

IBM Upward Integration for Microsoft System Center Operations Manager, v4.5
http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=migr-5082204

IBM Upward Integration for Microsoft System Center, v4.5
http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-5087849

IBM Flex System V7000 Storage Node Management and Configuration
http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp?topic=%2Fcom.ibm.acc.8731.d
oc%2Fconfiguring_and_managing_storage_node.html

IBM Flex System V7000 Storage Node SCOM Management Pack (under Storage/V7000)
http://www.ibm.com/support/fixcentral

IBM Flex System V7000 Storage Node MPIO driver
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4000350

IBM Flex System CN4093 10Gb Converged Scalable Switch
http://publib.boulder.ibm.com/infocenter/flexsys/information/index.jsp?topic=%2Fcom.ibm.acc.networkdev
ices.doc%2FIo_module_compassFC.html

IBM Storage Automation with SMI-S and SCVMM 2012
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102071

IBM Reseller Option Kit for Windows Server 2012
http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=AN&subtype=CA&htmlfid=897/ENUS212-
513&appname=totalstorage

IBM Fast Setup
http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=TOOL-FASTSET

IBM x86 Server Cloud Solutions
http://www-03.ibm.com/systems/x/solutions/cloud/index.html

Emulex OneCommand Utility
http://www.emulex.com/downloads/oem-qualified-downloads/ibm/vfafc-software-kits/

Microsoft Fast Track Deployment Guide
http://www.microsoft.com/en-us/download/details.aspx?id=39088

Microsoft System Center Licensing
http://www.microsoft.com/en-us/server-cloud/system-center

Microsoft IaaS Fabric Management Architecture Guide
http://www.microsoft.com/en-us/download/confirmation.aspx?id=38813

Microsoft Fast Track Fabric Manager Architecture Guide
http://www.microsoft.com/en-us/download/details.aspx?id=38813
Bill of Materials

PN Description Quantity
Flex System Enterprise Chassis

8721HC1 IBM Flex System Enterprise Chassis Base Model 1
A0UE IBM Flex System Chassis Management Module 1
A3HH IBM Flex System Fabric CN4093 10Gb Scalable Switch 2
5053 IBM SFP+ SR Transceiver 4
3701 5m LC-LC Fibre Cable (networking) 4
3268 IBM SFP RJ45 Transceiver 2
3793 3m Yellow Cat5e Cable 2
A3HL IBM Flex System Fabric CN4093 10Gb Scalable Switch (Upgrade 1) 2
A0UC IBM Flex System Enterprise Chassis 2500W Power Module Standard 2
6252 2.5m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable 6
A0TW System Documentation and Software - US English 1
A0TA IBM Flex System Enterprise Chassis 1
A0UA IBM Flex System Enterprise Chassis 80mm Fan Module 4
A1NF IBM Flex System Console Breakout Cable 1
A1PH 1m IBM Passive DAC SFP+ Cable 2
A0UD IBM Flex System Enterprise Chassis 2500W Power Module 4
2300 BladeCenter Chassis Configuration 1
2306 Rack Installation >1U Component 1


Flex System V7000 Storage Node

4939X49 IBM Flex System V7000 Control Enclosure 1
AD24 900 GB 10,000 RPM 6Gbps 2.5-inch SAS HDD 16
AD41 200GB 6Gbps 2.5-inch SAS SSD 8
ADB1 10Gb Converged Network Adapter 2 Port Daughter Card 2
ADA6 External Expansion Cable 6M HD-SAS to Mini-SAS 2

Flex System x240 Management Node
8737MC1 Flex System node x240 Base Model 2
A1BB Intel Xeon Processor E5-2680 8C 2.7GHz 20MB Cache 1600MHz 130W 2
A1D9 2nd Intel Xeon Processor E5-2680 8C 2.7GHz 20MB Cache 1600MHz 130W 2
A2U5 16GB (1x16GB, 2Rx4, 1.5V) 1600 MHz LP RDIMM 32
5599 IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD 4
A1BL IBM Flex System Compute Node 2.5" SAS 2.0 Backplane 2
A1C2 System Documentation and Software-US English 2
A1BD IBM Flex System x240 Compute Node 2
A1R1 IBM Flex System CN4054 10Gb Virtual Fabric Adapter 2
A1R0 IBM Flex System Virtual Fabric Adapter Upgrade (SW Upgrade) 2

Flex System x240 Compute Node

8737MC1 Flex System node x240 Base Model 4
A2ER Intel Xeon Processor E5-2690 8C 2.9GHz 20MB Cache 1600MHz 135W 8
A2ES 2nd Intel Xeon Processor E5-2690 8C 2.9GHz 20MB Cache 1600MHz 135W 8
A2U5 16GB (1x16GB, 2Rx4, 1.5V) 1600 MHz LP RDIMM 64
5599 IBM 300GB 10K 6Gbps SAS 2.5" SFF Slim-HS HDD 16
A1BL IBM Flex System Compute Node 2.5" SAS 2.0 Backplane 8
A1C2 System Documentation and Software-US English 8
A1BD IBM Flex System x240 Compute Node 8
A1R1 IBM Flex System CN4054 10Gb Virtual Fabric Adapter 8
A1R0 IBM Flex System CN4054 Virtual Fabric Adapter (SW Upgrade) 8


Rack

9363RC4 IBM 42U 1100mm Enterprise V2 Dynamic Rack 1
A2EV RFID Tag, AG/AP: 902-928Mhz 1

DPI Single phase 60A/208V C19 Enterprise PDU (US) 2
4275 5U black plastic filler panel 6
4271 1U black plastic filler panel 2
2304 Rack Assembly - 42U Rack 1


Software
0027 IBM Flex System V7000 Base SW Per Storage Device with 1 Year SW Maintenance 1
0036 IBM Flex System V7000 Base SW Per Storage Device SW Maintenance 3 Yr Registration 1
0672 Windows Server 2012 Datacenter Per 2 Processor Server 4


Networking Worksheets
The table below provides a reference for the mapping of hosts to switch ports, and VLAN
assignments.



Switch Configuration
Switch-1
version "7.7.5"
switch-type "IBM Flex System Fabric CN4093 10Gb Converged Scalable
Switch(Upgrade1)"
!
!

!
system port EXT11,EXT12 type fc
!
system idle 60
!
!
interface port INTA1
tagging
no flowcontrol
exit
!
interface port INTA2
tagging
no flowcontrol
exit
!
interface port INTA3
tagging
no flowcontrol
exit
!
interface port INTA4
tagging
no flowcontrol
exit
!
interface port INTA5
tagging
no flowcontrol
exit
!
interface port INTA6
tagging
no flowcontrol
exit
!
interface port INTA7
tagging
no flowcontrol
exit
!
interface port INTA8
tagging
no flowcontrol
exit
!
interface port INTA9
tagging
no flowcontrol
exit
!
interface port INTA10
tagging
no flowcontrol
exit
!
interface port INTA11
no flowcontrol
exit
!
interface port INTA12
no flowcontrol
exit
!
interface port INTA13
pvid 1002
no flowcontrol
exit
!
interface port INTA14
pvid 1002
no flowcontrol
exit
!
interface port INTB1
tagging
no flowcontrol
exit
!
interface port INTB2
tagging
no flowcontrol
exit
!
interface port INTB3
tagging
no flowcontrol
exit
!
interface port INTB4
tagging
no flowcontrol
exit
!
interface port INTB5
tagging
no flowcontrol
exit
!
interface port INTB6
tagging
no flowcontrol
exit
!
interface port INTB7
tagging
no flowcontrol
exit
!
interface port INTB8
tagging
no flowcontrol
exit
!
interface port INTB9
tagging
no flowcontrol
exit
!
interface port INTB10
tagging
no flowcontrol
exit
!
interface port INTB11
no flowcontrol
exit
!
interface port INTB12
no flowcontrol
exit
!
interface port INTB13
no flowcontrol
exit
!
interface port INTB14
no flowcontrol
exit
!
interface port EXT1
tagging
pvid 4094
exit
!
interface port EXT2
tagging
pvid 4094
exit
!
interface port EXT13
tagging
exit
!
vlan 1
member INTA1-INTA12,INTB1-INTB14,EXT3-EXT16
no member INTA13-INTA14,EXT1-EXT2
!
vlan 2
enable
name "VLAN 2"
!
vlan 30
enable
name "VLAN 30"
member INTA9-INTA10,EXT1-EXT2
!
vlan 31
enable
name "VLAN 31"
member INTB9-INTB10,EXT1-EXT2
!
vlan 33
enable
name "VLAN 33"
member INTB9-INTB10,EXT1-EXT2
!
vlan 35
enable
name "VLAN 35"
member INTB9-INTB10,EXT1-EXT2
!
vlan 37
enable
name "VLAN 37"
member INTA1-INTA8,EXT1-EXT2
!
vlan 38
enable
name "VLAN 38"
member INTB1-INTB8,EXT1-EXT2
!
vlan 40
enable
name "VLAN 40"
member INTA1-INTA10,INTB9-INTB10,EXT1-EXT2,EXT13
!
vlan 50
enable
name "VLAN 50"
member INTB1-INTB8,EXT1-EXT2,EXT13
!
vlan 1002
enable
name "VLAN 1002"
member INTA1-INTA10,INTA13-INTA14,EXT11-EXT12
fcf enable
!
vlan 4094
enable
name "VLAN 4094"
member EXT1-EXT2
!
!
spanning-tree stp 2 vlan 2
!
no spanning-tree stp 20 enable
spanning-tree stp 20 vlan 4094
!
spanning-tree stp 30 vlan 30
!
spanning-tree stp 31 vlan 31
!
spanning-tree stp 33 vlan 33
!
spanning-tree stp 35 vlan 35
!
spanning-tree stp 37 vlan 37
!
spanning-tree stp 38 vlan 38
!
spanning-tree stp 40 vlan 40
!
spanning-tree stp 50 vlan 50
!
spanning-tree stp 113 vlan 1002
!
!
interface port INTB1
lacp mode active
lacp key 101
!
interface port INTB2
lacp mode active
lacp key 102
!
interface port INTB3
lacp mode active
lacp key 103
!
interface port INTB4
lacp mode active
lacp key 104
!
interface port INTB5
lacp mode active
lacp key 105
!
interface port INTB6
lacp mode active
lacp key 106
!
interface port INTB7
lacp mode active
lacp key 107
!
interface port INTB8
lacp mode active
lacp key 108
!
interface port INTB9
lacp mode active
lacp key 109
!
interface port INTB10
lacp mode active
lacp key 110
!
interface port EXT1
lacp mode active
lacp key 100
!
interface port EXT2
lacp mode active
lacp key 100
!
!
!
vlag enable
vlag tier-id 10
vlag isl adminkey 100
vlag adminkey 101 enable
vlag adminkey 102 enable
vlag adminkey 103 enable
vlag adminkey 104 enable
vlag adminkey 105 enable
vlag adminkey 106 enable
vlag adminkey 107 enable
vlag adminkey 108 enable
vlag adminkey 109 enable
vlag adminkey 110 enable
!
!
fcoe fips enable
!
cee enable
!
!
!
!
!
!
fcalias ITE1_PortA1 wwn 10:00:00:90:fa:1e:cd:21
fcalias ITE2_PortA2 wwn 10:00:00:90:fa:1e:d0:81
fcalias ITE3_PortA3 wwn 10:00:00:90:fa:1e:d7:49
fcalias ITE4_PortA4 wwn 10:00:00:90:fa:1e:cd:31
fcalias ITE5_PortA5 wwn 10:00:00:90:fa:1e:d1:b1
fcalias ITE6_PortA6 wwn 10:00:00:90:fa:1e:dd:1f
fcalias ITE7_PortA7 wwn 10:00:00:90:fa:1e:8a:71
fcalias ITE8_PortA8 wwn 10:00:00:90:fa:1e:19:cd
fcalias ITE9_PortA9 wwn 10:00:00:90:fa:1e:cd:61
fcalias ITE10_PortA10 wwn 10:00:00:90:fa:1e:d1:c1
fcalias V7KLeft_PortA13 wwn 50:05:07:68:05:04:05:b2
fcalias V7KRight_PortA14 wwn 50:05:07:68:05:04:05:b3
fcalias vmSQL1_vSAN1_Addr_A wwn c0:03:ff:d1:90:f4:00:04
fcalias vmSQL2_vSAN1_Addr_B wwn c0:03:ff:03:74:0b:00:01
fcalias vmSQL2_vSAN1_Addr_A wwn c0:03:ff:03:74:0b:00:00
fcalias vmSQL1_vSAN1_Addr_B wwn c0:03:ff:d1:90:f4:00:05
fcalias vmVMM1_vSAN1_Addr_A wwn c0:03:ff:d1:90:f4:00:08
fcalias vmVMM1_vSAN1_Addr_B wwn c0:03:ff:d1:90:f4:00:09
fcalias vmVMM2_vSAN1_Addr_A wwn c0:03:ff:03:74:0b:00:04
fcalias vmVMM2_vSAN1_Addr_B wwn c0:03:ff:03:74:0b:00:05
zone name SW1_MgmtSvrs
member fcalias ITE9_PortA9
member fcalias ITE10_PortA10
member fcalias V7KLeft_PortA13
member fcalias V7KRight_PortA14
zone name SW1_ComputeSvrs
member fcalias ITE1_PortA1
member fcalias ITE2_PortA2
member fcalias ITE3_PortA3
member fcalias ITE4_PortA4
member fcalias ITE5_PortA5
member fcalias ITE6_PortA6
member fcalias ITE7_PortA7
member fcalias ITE8_PortA8
member fcalias V7KLeft_PortA13
member fcalias V7KRight_PortA14
zone name SW1_SQL_Cluster
member fcalias vmSQL1_vSAN1_Addr_A
member fcalias vmSQL1_vSAN1_Addr_B
member fcalias vmSQL2_vSAN1_Addr_A
member fcalias vmSQL2_vSAN1_Addr_B
member fcalias V7KLeft_PortA13
member fcalias V7KRight_PortA14
zone name SW1_VMM_Cluster
member fcalias vmVMM1_vSAN1_Addr_A
member fcalias vmVMM1_vSAN1_Addr_B
member fcalias vmVMM2_vSAN1_Addr_A
member fcalias vmVMM2_vSAN1_Addr_B
member fcalias V7KLeft_PortA13
member fcalias V7KRight_PortA14
zoneset name SW1ZoneSet
member SW1_ComputeSvrs
member SW1_MgmtSvrs
member SW1_SQL_Cluster
member SW1_VMM_Cluster
zoneset activate name SW1ZoneSet
!
!
!
!
!
!
!
!
!
ntp enable
ntp ipv6 primary-server fe80::211:25ff:fec3:72ba MGT
ntp interval 15
ntp authenticate
ntp primary-key 11217
!
ntp message-digest-key 11217 md5-ekey
"656d252c616820283c34e6e7a2d883dad38188dd2476303c4d4f7830c536ada56f22e
78e3eaaebd0324b467f3bb79431038f95d8f04a6ec9d36fbae29ffe0ecb"
!
ntp message-digest-key 25226 md5-ekey
"528812c9428802881fd4f3a28138a17ac3a3f4ed98510677a93ff4b7ec49907bf83a3
cb36eff6f6d946eed01480cb91d1cec86fbe6b911b3abc8940fb6a146ae"
!
ntp message-digest-key 38739 md5-ekey
"42db029a42ca028a1f96e3f3817aa178ce0368a79d2b18cb74541f762fff171c2a239
5e3416bba0683710cdaa9044f50547a5effb387e04250d059e3eb2c8421"
!
ntp trusted-key 11217,25226,38739
!
end




Switch-2
version "7.7.5"
switch-type "IBM Flex System Fabric CN4093 10Gb Converged Scalable
Switch(Upgrade1)"
!
!

!
system port EXT11,EXT12 type fc
!
system idle 60
!
!
interface port INTA1
tagging
no flowcontrol
exit
!
interface port INTA2
tagging
no flowcontrol
exit
!
interface port INTA3
tagging
no flowcontrol
exit
!
interface port INTA4
tagging
no flowcontrol
exit
!
interface port INTA5
tagging
no flowcontrol
exit
!
interface port INTA6
tagging
no flowcontrol
exit
!
interface port INTA7
tagging
no flowcontrol
exit
!
interface port INTA8
tagging
no flowcontrol
exit
!
interface port INTA9
tagging
no flowcontrol
exit
!
interface port INTA10
tagging
no flowcontrol
exit
!
interface port INTA11
no flowcontrol
exit
!
interface port INTA12
no flowcontrol
exit
!
interface port INTA13
pvid 1002
no flowcontrol
exit
!
interface port INTA14
pvid 1002
no flowcontrol
exit
!
interface port INTB1
tagging
no flowcontrol
exit
!
interface port INTB2
tagging
no flowcontrol
exit
!
interface port INTB3
tagging
no flowcontrol
exit
!
interface port INTB4
tagging
no flowcontrol
exit
!
interface port INTB5
tagging
no flowcontrol
exit
!
interface port INTB6
tagging
no flowcontrol
exit
!
interface port INTB7
tagging
no flowcontrol
exit
!
interface port INTB8
tagging
no flowcontrol
exit
!
interface port INTB9
tagging
no flowcontrol
exit
!
interface port INTB10
tagging
pvid 2
no flowcontrol
exit
!
interface port INTB11
no flowcontrol
exit
!
interface port INTB12
no flowcontrol
exit
!
interface port INTB13
no flowcontrol
exit
!
interface port INTB14
no flowcontrol
exit
!
interface port EXT1
tagging
pvid 4094
exit
!
interface port EXT2
tagging
pvid 4094
exit
!
interface port EXT13
tagging
exit
!
vlan 1
member INTA1-INTA12,INTB1-INTB14,EXT3-EXT16
no member INTA13-INTA14,EXT1-EXT2
!
vlan 2
enable
name "VLAN 2"
member INTA9,INTB9-INTB10
!
vlan 30
enable
name "VLAN 30"
member INTB9-INTB10,EXT1-EXT2
!
vlan 31
enable
name "VLAN 31"
member INTA9-INTA10,EXT1-EXT2
!
vlan 33
enable
name "VLAN 33"
member INTA9-INTA10,EXT1-EXT2
!
vlan 35
enable
name "VLAN 35"
member INTA9-INTA10,EXT1-EXT2
!
vlan 37
enable
name "VLAN 37"
member INTB1-INTB8,EXT1-EXT2
!
vlan 38
enable
name "VLAN 38"
member INTA1-INTA8,EXT1-EXT2
!
vlan 40
enable
name "VLAN 40"
member INTA9-INTA10,INTB1-INTB10,EXT1-EXT2,EXT13
!
vlan 50
enable
name "VLAN 50"
member INTA1-INTA8,EXT1-EXT2,EXT13
!
vlan 1002
enable
name "VLAN 1002"
member INTA13-INTB10,EXT11-EXT12
fcf enable
!
vlan 4094
enable
name "VLAN 4094"
member EXT1-EXT2
!
!
spanning-tree stp 2 vlan 2
!
no spanning-tree stp 20 enable
spanning-tree stp 20 vlan 4094
!
spanning-tree stp 30 vlan 30
!
spanning-tree stp 31 vlan 31
!
spanning-tree stp 33 vlan 33
!
spanning-tree stp 35 vlan 35
!
spanning-tree stp 37 vlan 37
!
spanning-tree stp 38 vlan 38
!
spanning-tree stp 40 vlan 40
!
spanning-tree stp 50 vlan 50
!
spanning-tree stp 113 vlan 1002
!
!
no logging console
!
interface port INTA1
lacp mode active
lacp key 101
!
interface port INTA2
lacp mode active
lacp key 102
!
interface port INTA3
lacp mode active
lacp key 103
!
interface port INTA4
lacp mode active
lacp key 104
!
interface port INTA5
lacp mode active
lacp key 105
!
interface port INTA6
lacp mode active
lacp key 106
!
interface port INTA7
lacp mode active
lacp key 107
!
interface port INTA8
lacp mode active
lacp key 108
!
interface port INTA9
lacp mode active
lacp key 109
!
interface port INTA10
lacp mode active
lacp key 110
!
interface port EXT1
lacp mode active
lacp key 100
!
interface port EXT2
lacp mode active
lacp key 100
!
!
!
vlag enable
vlag tier-id 10
vlag isl adminkey 100
vlag adminkey 101 enable
vlag adminkey 102 enable
vlag adminkey 103 enable
vlag adminkey 104 enable
vlag adminkey 105 enable
vlag adminkey 106 enable
vlag adminkey 107 enable
vlag adminkey 108 enable
vlag adminkey 109 enable
vlag adminkey 110 enable
!
!
fcoe fips enable
!
cee enable
!
!
!
!
!
!
fcalias INT1_PortB1 wwn 10:00:00:90:fa:1e:cd:2d
fcalias ITE2_PortB2 wwn 10:00:00:90:fa:1e:d0:8d
fcalias ITE3_PortB3 wwn 10:00:00:90:fa:1e:d7:55
fcalias ITE4_PortB4 wwn 10:00:00:90:fa:1e:cd:3d
fcalias ITE5_PortB5 wwn 10:00:00:90:fa:1e:d1:bd
fcalias ITE6_PortB6 wwn 10:00:00:90:fa:1e:dd:2b
fcalias ITE7_PortB7 wwn 10:00:00:90:fa:1e:8a:7d
fcalias ITE8_PortB8 wwn 10:00:00:90:fa:1e:19:d9
fcalias ITE9_PortB9 wwn 10:00:00:90:fa:1e:cd:6d
fcalias ITE10_PortB10 wwn 10:00:00:90:fa:1e:d1:cd
fcalias V7KLeft_PortA13 wwn 50:05:07:68:05:08:05:b2
fcalias V7KRight_PortA14 wwn 50:05:07:68:05:08:05:b3
fcalias vmSQL1_vSAN2_Addr_A wwn c0:03:ff:d1:90:f4:00:06
fcalias vmSQL1_vSAN2_Addr_B wwn c0:03:ff:d1:90:f4:00:07
fcalias vmSQL2_vSAN2_Addr_A wwn c0:03:ff:03:74:0b:00:02
fcalias vmSQL2_vSAN2_Addr_B wwn c0:03:ff:03:74:0b:00:03
fcalias vmVMM1_vSAN2_Addr_A wwn c0:03:ff:d1:90:f4:00:0a
fcalias vmVMM1_vSAN2_Addr_B wwn c0:03:ff:d1:90:f4:00:0b
fcalias vmVMM2_vSAN2_Addr_A wwn c0:03:ff:03:74:0b:00:06
fcalias vmVMM2_vSAN2_Addr_B wwn c0:03:ff:03:74:0b:00:07
zone name SW2_MgmtSvrs
member fcalias ITE9_PortB9
member fcalias ITE10_PortB10
member fcalias V7KLeft_PortA13
member fcalias V7KRight_PortA14
zone name SW2_ComputeSvrs
member fcalias INT1_PortB1
member fcalias ITE2_PortB2
member fcalias ITE3_PortB3
member fcalias ITE4_PortB4
member fcalias ITE5_PortB5
member fcalias ITE6_PortB6
member fcalias ITE7_PortB7
member fcalias ITE8_PortB8
member fcalias V7KRight_PortA14
member fcalias V7KLeft_PortA13
zone name SW2_SQL_Cluster
member fcalias vmSQL1_vSAN2_Addr_A
member fcalias vmSQL1_vSAN2_Addr_B
member fcalias vmSQL2_vSAN2_Addr_A
member fcalias vmSQL2_vSAN2_Addr_B
member fcalias V7KLeft_PortA13
member fcalias V7KRight_PortA14
zone name SW2_VMM_Cluster
member fcalias vmVMM1_vSAN2_Addr_A
member fcalias vmVMM1_vSAN2_Addr_B
member fcalias vmVMM2_vSAN2_Addr_A
member fcalias vmVMM2_vSAN2_Addr_B
member fcalias V7KLeft_PortA13
member fcalias V7KRight_PortA14
zoneset name SW2ZoneSet
member SW2_ComputeSvrs
member SW2_MgmtSvrs
member SW2_SQL_Cluster
member SW2_VMM_Cluster
zoneset activate name SW2ZoneSet
!
!
!
!
!
!
!
!
!
ntp enable
ntp ipv6 primary-server fe80::211:25ff:fec3:72ba MGT
ntp interval 15
ntp authenticate
ntp primary-key 11217
!
ntp message-digest-key 11217 md5-ekey
"cc487ae70c002aa2a7b2b3a6cfb08950530557d3fe9dc8148470d9b1fe20f37141315
a7c3ea22f848dec77ee11bde9e6bc4098cc760a8848ead2ec8bac2d4c2d"
!
ntp message-digest-key 25226 md5-ekey
"db7d6dd219042882b2b6a6b3dab48b70bb59a20a43e5702d0ec06b5bd05538bf192b7
1d37bcc9918cfad2c4ae734090ca45029be04339bd75668dd5c4a7548a6"
!
ntp message-digest-key 26424 md5-ekey
"dc2e6a811c042a80b7b6a3e2dfb48972b761487e30d045c804fce00f62bb05ce844a4
77877718285981e922f20872edb9be73ea1de5d332457e865d381cecccd"
!
ntp message-digest-key 38739 md5-ekey
"d6d7607814042028bfb6a3b3d7b483da24747141da244318a3b827c147569dab9fdf2
b2e1bd6b60cc363014f71f3bfea9d51df3bc242e499557c404ac1e199cb"
!
ntp message-digest-key 52895 md5-ekey
"e58c532305040222aeb6b2e2c6b4a1d03d5bb5240090fba701cbe91321b26eaf03506
f69b8bddf53968584c3505e6f4a2f5bd84fa564d6e8cd1ada709dba25b7"
!
ntp trusted-key 11217,25226,26424,38739,52895
!
end


PowerShell Scripts
Management Node Network Configuration
#Build the string list for the Physical NIC Interface Names. This needs to be modified per ITE.
$NewNameList = "PortA9_SW1", "PortA9_SW2 VLAG", "PortB9_SW1 VLAG", "PortB9_SW2"

$NIC_List=Get-NetAdapterHardwareInfo | Sort-Object -Property Bus,Function

#Assumes that the list returned will be in the same order since sorted by Bus & Function.
Rename-NetAdapter $NIC_List[0].name -NewName $NewNameList[0]
Rename-NetAdapter $NIC_List[1].name -NewName $NewNameList[1]
Rename-NetAdapter $NIC_List[2].name -NewName $NewNameList[2]
Rename-NetAdapter $NIC_List[3].name -NewName $NewNameList[3]

#Create the Management Team, vSwitch, and vNICs
New-NetLbfoTeam -name "ClusterTeam" -TeamMembers $NIC_List[0].name, $NIC_List[3].name -TeamingMode SwitchIndependent
New-VMSwitch "MgmtClusterPublic" -NetAdapterName "ClusterTeam" -MinimumBandwidthMode Absolute -AllowManagementOS $true
Add-VMNetworkAdapter -ManagementOS -Name "MgmtClusterPrivate" -SwitchName "MgmtClusterPublic"

#Create the VLAG Team, vSwitch, and vNIC
New-NetLbfoTeam -name "VLAG" -TeamMembers $NIC_List[1].name, $NIC_List[2].name -TeamingMode Lacp
New-VMSwitch "VM_Communication" -NetAdapterName "VLAG" -MinimumBandwidthMode Weight -AllowManagementOS $true
Rename-VMNetworkAdapter -NewName "MgmtLiveMigration" -ManagementOS -Name "VM_Communication"

#Set the Bandwidth restrictions for each VMNetworkAdapter facing the ManagementOS
Set-VMNetworkAdapter -ManagementOS -Name "MgmtClusterPublic" -MaximumBandwidth 2GB
Set-VMNetworkAdapter -ManagementOS -Name "MgmtClusterPrivate" -MaximumBandwidth 1GB
Set-VMNetworkAdapter -ManagementOS -Name "MgmtLiveMigration" -MinimumBandwidthWeight 30

#Set the VLANs for the above devices (Compute will be different than Mgmt)
Set-VMNetworkAdapterVLAN -ManagementOS -VMNetworkAdapterName "MgmtClusterPublic" -Access -VlanId 40
Set-VMNetworkAdapterVLAN -ManagementOS -VMNetworkAdapterName "MgmtClusterPrivate" -Access -VlanId 30
Set-VMNetworkAdapterVLAN -ManagementOS -VMNetworkAdapterName "MgmtLiveMigration" -Access -VlanId 31

#Get-NetAdapterHardwareInfo | Sort-Object -Property bus,Function
Get-VMNetworkAdapter -ManagementOS

#Set the IP Addresses for the Management Interfaces
New-NetIPAddress -InterfaceAlias "vEthernet (MgmtClusterPublic)" -IPAddress 192.168.40.21 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (MgmtClusterPrivate)" -IPAddress 192.168.30.21 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (MgmtLiveMigration)" -IPAddress 192.168.31.21 -PrefixLength 24

#Set the DNS Address
Set-DnsClientServerAddress -Interfacealias "vEthernet (MgmtClusterPublic)" -ServerAddress 192.168.40.10

Compute Node Network Configuration
#Build the string list for the Physical NIC Interface Names. This needs to be modified per ITE.
$NewNameList = "PortA1_SW1", "PortA1_SW2 VLAG", "PortB1_SW1 VLAG", "PortB1_SW2"

$NIC_List=Get-NetAdapterHardwareInfo | Sort-Object -Property Bus,Function

#Assumes that the list returned will be in the same order since sorted by Bus & Function.
Rename-NetAdapter $NIC_List[0].name -NewName $NewNameList[0]
Rename-NetAdapter $NIC_List[1].name -NewName $NewNameList[1]
Rename-NetAdapter $NIC_List[2].name -NewName $NewNameList[2]
Rename-NetAdapter $NIC_List[3].name -NewName $NewNameList[3]

#Create the Management Team, vSwitch, and vNICs
New-NetLbfoTeam -name "ClusterTeam" -TeamMembers $NIC_List[0].name, $NIC_List[3].name -TeamingMode SwitchIndependent
New-VMSwitch -Name "ComputeClusterPublic" -NetAdapterName "ClusterTeam" -MinimumBandwidthMode Absolute -AllowManagementOS $true
Add-VMNetworkAdapter -ManagementOS -Name "ComputeClusterPrivate" -SwitchName "ComputeClusterPublic"

#Create the VLAG Team, vSwitch, and vNIC
New-NetLbfoTeam -name "VLAG" -TeamMembers $NIC_List[1].name, $NIC_List[2].name -TeamingMode Lacp
New-VMSwitch -Name "VM_Communication" -NetAdapterName "VLAG" -MinimumBandwidthMode Weight -AllowManagementOS $true
Rename-VMNetworkAdapter -NewName "ComputeLiveMigration" -ManagementOS -Name "VM_Communication"

#Set the Bandwidth restrictions for each VMNetworkAdapter facing the ManagementOS
Set-VMNetworkAdapter -ManagementOS -Name "ComputeClusterPublic" -MaximumBandwidth 2GB
Set-VMNetworkAdapter -ManagementOS -Name "ComputeClusterPrivate" -MaximumBandwidth 1GB
Set-VMNetworkAdapter -ManagementOS -Name "ComputeLiveMigration" -MinimumBandwidthWeight 30

#Set the VLANs for the above devices (Compute will be different than Mgmt)
Set-VMNetworkAdapterVLAN -ManagementOS -VMNetworkAdapterName "ComputeClusterPublic" -Access -VlanId 40
Set-VMNetworkAdapterVLAN -ManagementOS -VMNetworkAdapterName "ComputeClusterPrivate" -Access -VlanId 37
Set-VMNetworkAdapterVLAN -ManagementOS -VMNetworkAdapterName "ComputeLiveMigration" -Access -VlanId 38

#Set the IP Addresses for the Management Interfaces. This will have to be modified per ITE at this time.
New-NetIPAddress -InterfaceAlias "vEthernet (ComputeClusterPublic)" -IPAddress 192.168.40.31 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (ComputeClusterPrivate)" -IPAddress 192.168.37.31 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (ComputeLiveMigration)" -IPAddress 192.168.38.31 -PrefixLength 24

#Set the DNS Address
Set-DnsClientServerAddress -Interfacealias "vEthernet (ComputeClusterPublic)" -ServerAddress 192.168.40.10

#Get-NetAdapterHardwareInfo | Sort-Object -Property bus,Function
Get-VMNetworkAdapter -ManagementOS
Get-VMNetworkAdapterVLAN -ManagementOS

Network Address Tables
VLAN 30 Addresses (Management Cluster Private) IP Addresses
Management Server1 192.168.30.21
Management Server2 192.168.30.22

VLAN 31 Addresses (Management Cluster Live Migration) IP Addresses
Management Server1 192.168.31.21
Management Server2 192.168.31.22

VLAN 33 Addresses (SQL Cluster Private) IP Addresses
SQL1 192.168.33.41
SQL2 192.168.33.42

VLAN 35 Addresses (SCVMM Cluster Private) IP Addresses
SCVMM1 192.168.35.51
SCVMM2 192.168.35.52

VLAN 37 Addresses (Production Cluster Private) IP Addresses
Hyper-V Compute Node1 192.168.37.31
Hyper-V Compute Node2 192.168.37.32
Hyper-V Compute Node3 192.168.37.33
Hyper-V Compute Node4 192.168.37.34
Hyper-V Compute Node5 192.168.37.35
Hyper-V Compute Node6 192.168.37.36
Hyper-V Compute Node7 192.168.37.37
Hyper-V Compute Node8 192.168.37.38

VLAN 38 Addresses (Production Cluster Live Migration) IP Addresses
Hyper-V Compute Node1 192.168.38.31
Hyper-V Compute Node2 192.168.38.32
Hyper-V Compute Node3 192.168.38.33
Hyper-V Compute Node4 192.168.38.34
Hyper-V Compute Node5 192.168.38.35
Hyper-V Compute Node6 192.168.38.36
Hyper-V Compute Node7 192.168.38.37
Hyper-V Compute Node8 192.168.38.38

VLAN 40 Addresses (Management/Cluster Public) IP Addresses
AD Server 192.168.40.10
AD Server (ALT) 192.168.40.11
Management Server 1 192.168.40.21
Management Server 2 192.168.40.22
Management Cluster 192.168.40.29
SQL Server VM1 192.168.40.41
SQL Server VM2 192.168.40.42
SQL Service Manager Cluster 192.168.40.43
SQL Service Manager Data Warehouse Cluster 192.168.40.44
SQL Service Manager Analysis Cluster 192.168.40.45
SQL Service App Controller/Orch/Sharepoint/WSUS Cluster 192.168.40.46
SQL Service Virtual Machine Manager Cluster 192.168.40.47
SQL Service Operations Manager Cluster 192.168.40.48
SQL Service Operations Manager Data Warehouse Cluster 192.168.40.49
SCVMM Server VM1 192.168.40.51
SCVMM Server VM2 192.168.40.52
SCVMM Cluster 192.168.40.59
System Center Operations Manager VM1 192.168.40.61
System Center Operations Manager VM2 192.168.40.62
System Center Ops Manager Reporting Server 192.168.40.66
System Center Orchestrator (Mgmt & Action Svr) 192.168.40.71
System Center Orchestrator (Supplemental Action Svr) 192.168.40.72
Service Manager (Change Management) 192.168.40.81
Service Manager (Datawarehouse) 192.168.40.82
Service Manager (Portal) 192.168.40.83
Optional Windows Deployment Server (& WSUS) 192.168.40.91
Hyper-V Production Server 1 192.168.40.31
Hyper-V Production Server 2 192.168.40.32
Hyper-V Production Server 3 192.168.40.33
Hyper-V Production Server 4 192.168.40.34
Hyper-V Production Server 5 192.168.40.35
Hyper-V Production Server 6 192.168.40.36
Hyper-V Production Server 7 192.168.40.37
Hyper-V Production Server 8 192.168.40.38
Hyper-V Production Cluster 192.168.40.39

VLAN 50 Addresses (Production)
IP Address
Range
Production VMs as needed 192.168.50.xx

The team who wrote this paper
Thanks to the authors of this paper working with the International Technical Support Organization,
Raleigh Center.

Scott Smith is a Senior Solutions Architect working at the IBM Center for Microsoft Technology. Over
the past 15 years, Scott has worked to optimize the performance of IBM x86-based servers running
the Microsoft Windows Server operating system and Microsoft application software. Recently his
focus has been on Microsoft Hyper-V based solutions with IBM System x servers, storage, and
networking. He has extensive experience in helping IBM customers understand the issues that they
are facing and developing solutions that address them.

David Ye is a Senior Solutions Architect and has been working at IBM Center for Microsoft
Technologies for 15 years. He started his career at IBM as a Worldwide Windows Level 3 Support
Engineer. In this role, he helped IBM customers solve complex problems and was involved in many
critical customer support cases. He is now a Senior Solutions Architect in the IBM System x
Enterprise Solutions Technical Services group, where he works with customers on Proof of Concepts,
solution sizing, performance optimization, and solution reviews. His areas of expertise are Windows
Server, SAN Storage, Virtualization, and Microsoft Exchange Server.

Thanks to the following people for their contributions to this project:
Dan Ghidali, IBM Senior Development Engineer
Mike Miller, IBM Advisory Software Engineer
Terry Broman, IBM Software Developer
Anh Tran, IBM Software Engineer
Vinay Kulkarni, IBM Senior Performance Engineer


Trademarks and special notices
Copyright IBM Corporation 2013. All rights Reserved.
References in this document to IBM products or services do not imply that IBM intends to make them
available in every country.
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked
terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these
symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information
was published. Such trademarks may also be registered or common law trademarks in other countries. A
current list of IBM trademarks is available on the Web at "Copyright and trademark information" at
www.ibm.com/legal/copytrade.shtml.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or
its affiliates.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
SET and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC.
Other company, product, or service names may be trademarks or service marks of others.
Information is provided "AS IS" without warranty of any kind.
All customer examples described are presented as illustrations of how those customers have used IBM
products and the results they may have achieved. Actual environmental costs and performance
characteristics may vary by customer.
Information concerning non-IBM products was obtained from a supplier of these products, published
announcement material, or other publicly available sources and does not constitute an endorsement of
such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly
available information, including vendor announcements and vendor worldwide homepages. IBM has not
tested these products and cannot confirm the accuracy of performance, capability, or any other claims
related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the
supplier of those products.
All statements regarding IBM future direction and intent are subject to change or withdrawal without
notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller
for the full text of the specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a
definitive statement of a commitment to specific levels of performance, function or delivery schedules with
respect to any future products. Such commitments are only made in IBM product announcements. The
information is presented here to communicate IBM's current investment and development activities as a
good faith effort to help with our customers' future planning.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled
environment. The actual throughput or performance that any user will experience will vary depending
upon considerations such as the amount of multiprogramming in the user's job stream, the I/O
configuration, the storage configuration, and the workload processed. Therefore, no assurance can be
given that an individual user will achieve throughput or performance improvements equivalent to the
ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in production models.
Any references in this information to non-IBM websites are provided for convenience only and do not in
any manner serve as an endorsement of those websites. The materials at those websites are not part of
the materials for this IBM product and use of those websites is at your own risk.





Copyright 2013 by International Business Machines Corporation.
IBM Systems and Technology Group
Dept. U2SA
3039 Cornwallis Road
Research Triangle Park, NC 27709
Produced in the USA
November 2013
All rights reserved
Warranty Information: For a copy of applicable product warranties, write to: Warranty Information, P.O. Box 12195, RTP, NC 27709, Attn: Dept. JDJA/B203. IBM
makes no representation or warranty regarding third-party products or services including those designated as ServerProven or ClusterProven. Telephone support may
be subject to additional charges. For onsite labor, IBM will attempt to diagnose and resolve the problem remotely before sending a technician.
IBM, the IBM logo, System x, X-Architecture and System Storage are trademarks or registered trademarks of IBM Corporation in the United States and/or other
countries. For a list of additional IBM trademarks, please see http://ibm.com/legal/copytrade.shtml.
Intel and Xeon are registered trademarks of Intel Corporation.
Microsoft, Windows, SQL Server, Hyper-V are registered trademarks of Microsoft Corporation in the United States and/or other countries.
All other company/product names and service marks may be trademarks or registered trademarks of their respective companies.
This document could include technical inaccuracies or typographical errors. IBM may make changes, improvements or alterations to the products, programs and
services described in this document, including termination of such products, programs and services, at any time and without notice. Any statements regarding IBM's
future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. The information contained in this document is
current as of the initial date of publication only, and IBM shall have no responsibility to update such information. Performance data for IBM and non-IBM products
and services contained in this document was derived under specific operating and environmental conditions. The actual results obtained by any party implementing
and such products or services will depend on a large number of factors specific to such party's operating environment and may vary significantly. IBM makes no
representation that these results can be expected or obtained in any implementation of any such products or services.
THE INFORMATION IN THIS DOCUMENT IS PROVIDED AS-IS WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY
DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR INFRINGEMENT. References in this document
to IBM products, programs, or services does not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates
or does business. Any reference to an IBM program or product in this document is not intended to state or imply that only that program or product may be used. Any
functionally equivalent program or product that does not infringe IBM's intellectual property rights may be used instead. It is the user's responsibility to evaluate
and verify the operation of any non-IBM product, program or service.
Information in this presentation concerning non-IBM products was obtained from the suppliers of these products, published announcement material or other publicly
available sources. IBM has not tested these products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
The provision of the information contained herein is not intended to, and does not grant any right or license under any IBM patents or copyrights. Inquiries regarding
patent or copyright licenses should be made, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 U.S.A.
