White Paper for Microsoft Private Cloud Fast Track on HUAWEI Server and Storage System
Contents
1 Introduction
2 Microsoft Private Cloud Fast Track Program Description
2.1 The Program
2.2 Business Benefits
5 Conclusion
6 Terms and Abbreviations
Ver 1.0
Introduction
The Microsoft Private Cloud Fast Track Program is a joint effort between
Microsoft and its partner original equipment manufacturers (OEMs) to help
organizations quickly develop and implement private clouds while reducing both
cost and risk. It provides a reference architecture that combines Microsoft
software, consolidated guidance, and validated configurations with partner
technology, such as computing power, network and storage architectures, and
value-added software components.
Each Private Cloud Fast Track solution is an integrated solution from Microsoft
and its OEM partner. This document describes the Microsoft implementation and
does not include information about partner implementations. Instead, it is
designed to provide an overview of Microsoft guiding principles and design
criteria for this solution, and how our Hyper-V technology cloud solution
conforms to these principles.
This private cloud fast track solution combines Microsoft software, consolidated
guidance, and validated configurations with HUAWEI computing power, network,
and storage architectures:
The joint solution for server, network, storage management, and out-of-band
management in the private cloud;
The SAN's cloning capabilities, use of data deduplication and thin provisioning, and Hyper-V integration;
The network interface card (NIC) teaming solution, and the host
adapters' Fibre Channel (FC) and Internet Small Computer System Interface
(iSCSI) settings;
The private cloud model provides much of the efficiency and agility of cloud
computing along with the increased control and customization achieved through
dedicated private resources. With the Microsoft Private Cloud Fast Track
program, Microsoft and its hardware partners provide organizations both the
control and the flexibility required to reap the full benefits of the private cloud.
Dynamic pooling that can enhance the use of virtualization resources with
Hyper-V and with supported storage and network devices
Utilization of low-cost switches that consume less power and deliver high
throughput for large bandwidth requirements
Benefit from high performance and scalability with HUAWEI Tecal RH2288
V2 Server and T Series Unified Storage solutions along with the Microsoft
Windows Server 2012 operating system and Microsoft Hyper-V technology.
Technical Overview
3.1.4 Predictability
Whether you're a consumer or a provider, predictability is a fundamental cloud
principle. From the vantage point of the consumer, cloud services should be
consistent; they should have the same quality and functionality any time they
are used. To achieve predictability, a provider must deliver an underlying
infrastructure that assures a consistent experience to the hosted workloads. This
consistency is achieved through the homogenization of underlying physical
servers, network devices, and storage systems.
From the provider's service management perspective, this predictability is driven
through the standardization of service offerings and processes. The principle of
predictability is needed to ensure service quality.
3.1.6 Multitenancy
Multitenancy refers to the ability of the infrastructure to be logically subdivided
and provisioned to different organizations or organizational units. The traditional
example is a hosting company that provides servers to multiple customer
software, to determine the optimum scale units for input to the overall
architecture. Scale units enable the documentation of all the requirements (such
as space, power, HVAC, and connectivity) that are needed for implementation.
3.3 Servers
The hardware architecture choices that are available to data center architects are
constantly evolving. Choices range from rack-mounted servers, to tightly
integrated, highly redundant blade systems, to container models. The same
spectrum exists for storage and networking equipment.
Server scale limits are well documented and include factors such as the number
and speed of CPU cores, maximum amount and speed of RAM, and the number
and type of expansion slots. Particularly important are the number and type of
onboard input-output (I/O) ports, as well as the number and type of supported I/O
cards. Both Ethernet and Fibre Channel expansion cards often provide multiport
options where a single card can have four ports. Additionally, in blade server
architectures, there are often limitations on the number of I/O card and
supported combinations. It is important to be aware of these limitations, as well
as the oversubscription ratio between blade I/O ports and any blade chassis
switch modules.
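As a quick illustration of the oversubscription concern above, the ratio can be computed from edge port counts and speeds versus uplink capacity; the figures below are hypothetical, not drawn from any particular blade chassis.

```python
# Illustrative sketch (hypothetical values): oversubscription ratio between
# blade I/O ports and the chassis switch-module uplink capacity.

def oversubscription_ratio(blades, ports_per_blade, port_gbps,
                           uplinks, uplink_gbps):
    """Total edge bandwidth divided by total uplink bandwidth."""
    edge = blades * ports_per_blade * port_gbps
    uplink = uplinks * uplink_gbps
    return edge / uplink

# Example: 16 blades, each with 2 x 10 Gbps ports, feeding 4 x 10 Gbps uplinks.
ratio = oversubscription_ratio(16, 2, 10, 4, 10)
print(ratio)  # 8.0 -> an 8:1 oversubscription ratio
```

A ratio well above 1:1 means the edge can offer more traffic than the uplinks can carry, which matters when sizing scale units.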
A single server is not typically a good scale unit for a Private Cloud solution, due
to the amount of overhead and cost required to install and configure an
individual server, as well as the lack of high availability.
3.4 Storage
Storage architecture is a critical design consideration for Private Cloud solutions.
The topic is challenging, because it is rapidly evolving in terms of new standards,
protocols, and implementations. Storage and the support of storage networking
are critical to the overall performance of the environment; however, they also can
play a large role in the overall cost, because storage tends to be one of the more
expensive items in a Private Cloud solution.
Storage architectures today have several layers including the storage arrays, the
storage network, the storage protocol, and for virtualization, the clustered
volume manager that utilizes the physical storage.
One of the primary objectives of the private cloud is to enable rapid provisioning
and deprovisioning of VMs. Doing so at large scale requires tight integration with
the storage architecture and robust automation. Provisioning a new VM on an
already existing logical unit number (LUN) is a simple operation. However,
provisioning a new LUN, adding it to a host cluster, and then provisioning a new
VM on that LUN involves relatively complex tasks that also greatly benefit from
automation.
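The provisioning sequence described above can be sketched as follows. The classes are in-memory stand-ins rather than a real storage or Hyper-V API; they only illustrate the ordering of steps that the automation must drive.

```python
# Hypothetical sketch of the LUN-plus-VM provisioning workflow. StorageArray
# and HostCluster are made-up stand-ins, not a real storage or Hyper-V API.

class StorageArray:
    def __init__(self):
        self.luns = []
    def create_lun(self, size_gb):
        lun = {"id": len(self.luns), "size_gb": size_gb}
        self.luns.append(lun)
        return lun

class HostCluster:
    def __init__(self):
        self.luns, self.vms = [], []
    def add_lun(self, lun):
        self.luns.append(lun)          # present the LUN to every cluster node
    def create_vm(self, name, lun):
        vm = {"name": name, "lun": lun["id"]}
        self.vms.append(vm)
        return vm

def provision_vm(storage, cluster, lun_size_gb, vm_name):
    lun = storage.create_lun(lun_size_gb)   # 1. provision a new LUN
    cluster.add_lun(lun)                    # 2. add it to the host cluster
    return cluster.create_vm(vm_name, lun)  # 3. place the new VM on that LUN

array, cluster = StorageArray(), HostCluster()
vm = provision_vm(array, cluster, 500, "web01")
print(vm["name"], vm["lun"])  # web01 0
```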
For example, the HUAWEI OceanStor T series unified storage system supports
all major storage protocols (including Fibre Channel, FCoE, and iSCSI).
Additionally, all HUAWEI OceanStor T series storage systems use the same
integrated storage management software (ISM).
3.5 Networking
Many network architectures include a tiered design with three or more layers
such as core, distribution, and access. Designs are driven by the port bandwidth
and quantity required at the edge, as well as the ability of the distribution and
core layers to provide higher-speed uplinks to aggregate traffic. Additional
considerations include Ethernet broadcast boundaries and limitations, and
spanning tree and/or other loop-avoidance technologies.
A dedicated management network is a common feature of advanced data center
virtualization solutions. Most virtualization vendors recommend that hosts be
managed via a dedicated network to avoid competition with guest traffic needs
and to provide a degree of separation for security and ease of management
purposes. This typically implies dedicating one network interface card (NIC) per
host and one port per network device to the management network.
With advanced data center virtualization, a frequent use case is to provide
isolated networks where different owners such as particular departments or
applications are provided their own dedicated networks. Multitenant networking
refers to the use of technologies such as virtual local area networks (VLANs) or
Internet Protocol security (IPSec) isolation techniques to provide dedicated
networks that utilize a single network infrastructure or wire.
Managing the network environment in an advanced data center virtualization
solution can present challenges that must be addressed. Ideally, network settings
and policies are defined centrally and applied universally by the management
solution. In the case of IPSec-based isolation, this can be accomplished using the
Active Directory service and Group Policy to control firewall settings across the
hosts and guests, as well as the IPSec policies controlling network communication.
For VLAN-based network segmentation, several components, including the host
servers, host clusters, Microsoft System Center Virtual Machine Manager, and
the network switches, must be configured correctly to enable both rapid
provisioning and network segmentation. With Hyper-V and host clusters,
identical virtual networks must be defined on all nodes so that a VM can fail over
to any node and maintain its connection to the network. At large scale, this can
be accomplished via scripting with the Windows PowerShell command-line
interface.
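A minimal sketch of the consistency check implied above, assuming made-up node and network names; in practice this is typically scripted with Windows PowerShell, as noted.

```python
# Illustrative sketch: verify that identical virtual networks are defined on
# every Hyper-V cluster node, so a VM can fail over to any node and keep its
# network connection. Node and network names here are hypothetical.

def missing_networks(nodes):
    """Return, per node, the virtual networks it lacks relative to the union."""
    required = set().union(*nodes.values())
    return {name: sorted(required - nets)
            for name, nets in nodes.items() if required - nets}

nodes = {
    "host1": {"Management", "LiveMigration", "VM-VLAN-10"},
    "host2": {"Management", "LiveMigration"},   # missing VM-VLAN-10
}
print(missing_networks(nodes))  # {'host2': ['VM-VLAN-10']}
```

Any non-empty result flags a node that would break VM connectivity after failover.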
3.6 Virtualization
The virtualization layer is one of the primary enablers in mature IT
environments. The decoupling of hardware, operating systems, data, applications,
and user state opens up a wide range of options for better management and
distribution of workloads across the physical infrastructure. The ability of the
virtualization layer to migrate running VMs from one server to another with zero
downtime, as well as many other features that are provided by hypervisor-based
virtualization technologies, provides a rich set of capabilities. These capabilities
can be utilized by the automation, management, and orchestration layers to
maintain desired states (such as load distribution) or to proactively address
decaying hardware (such as prefailure detection) or other issues that would
otherwise cause faults or service disruptions.
As with the hardware layer, the virtualization layer must be able to be managed
by the automation, management, and orchestration layers. The abstraction of
software from hardware that virtualization provides moves the majority of
management and automation into the software space, instead of requiring people
to perform manual operations on physical hardware.
Reference Architecture
The HUAWEI and Microsoft Windows Server 2012 Hyper-V Private Cloud Fast
Track solution provides a reference architecture for building private clouds
according to the customer organization's unique requirements. This private
cloud fast track solution combines Microsoft software, consolidated guidance, and
validated configurations with HUAWEI computing power, network, and storage
architectures and value-added software components.
This solution integrates all the components necessary for small infrastructure
implementation. It consists of Microsoft Windows Server 2012 Hyper-V,
HUAWEI Tecal RH2288 V2 series rack servers (RH2288 V2 for short), HUAWEI
Quidway S5700 series network switches (S5700 for short) and HUAWEI
OceanStor S5000T series unified storage systems (S5000T for short). All
components are configured for high availability and reliability.
This solution provides a highly scalable and reliable platform and enables small
businesses and branch offices to take advantage of the power of cloud computing
without the complexity entailed in designing and implementing custom solutions.
The reference architecture for the HUAWEI solution for Microsoft Private Cloud
Fast Track is shown below.
Figure 4-1 Reference Architecture for Microsoft Private Cloud Fast Track
(Diagram: a WS2012 failover cluster of two HUAWEI RH2288 V2 rack servers connects through the internal network switch and data center switch to a HUAWEI OceanStor S5500T unified storage system; VLAN A carries iSCSI traffic and VLAN B carries VM vSwitch traffic.)
Helps save money, because less management, less space, and fewer kilowatt-hours
are needed.
Virtualization can also help to simplify and accelerate provisioning, one of the
tenets of the private cloud. The acquisition of workload resources and hardware
can be decoupled. Adding the capability that is required for a particular business
process (say, a web commerce engine) becomes streamlined and immediate. In a
more advanced virtualized environment, workload requirements can be self-provisioning, which results in dynamic resource allocation.
Figure 4-2 Server-based Consolidation through Virtualization (Supplied by Microsoft)
while allowing users to access and run their desktop applications wherever they
may be. Flexible desktop scenarios using the Windows operating system give
organizations the ability to choose the client computing scenarios that best meet
the unique needs of their business.
When an organization manages its virtual infrastructure with the same tools it
uses to manage its physical assets, system complexity can be reduced, and
changes made to the overall infrastructure can be streamlined. By using some or
all of these technologies together, organizations can provide very flexible
solutions to support many user scenarios.
Figure 4-3 Virtual Desktop Infrastructure (Supplied by Microsoft)
The detailed information for HUAWEI RH2288 V2 servers used in the HUAWEI
and Microsoft Private Cloud Fast Track solution is shown in Table 4-1.
Table 4-1 Detailed Information for HUAWEI Tecal RH2288 V2 Rack Server

Hardware: HUAWEI Tecal RH2288 V2 Rack Server
Description:
Memory: 8 * 16 GB or 16 * 16 GB DDR3
Network adapters:
4 * built-in Intel 82580 Gigabit network adapters
4 * extended Intel 82576 Gigabit network adapters
8 Intel 1 Gbps network ports in total
One network dedicated to the clustered shared volumes (CSV) and cluster
communication network;
One or more networks dedicated to the guest VMs (use 10Gbps network
adapters for highest consolidation);
For the recommended configuration by quantity and type of NIC, see the Hyper-V:
Live Migration Network Configuration Guide in the Microsoft TechNet Library.
On each HUAWEI Tecal RH2288 V2 server, eight 1 Gbps Ethernet network ports
are configured: two ports serve as the iSCSI connection to the storage system,
two ports form a NIC team for the management console, two ports serve the
virtual machine virtual switches, and the last two ports form a NIC team for
virtual machine live migration and heartbeat.
The detailed configuration information for the HUAWEI Tecal RH2288 V2 is
shown below.
Figure 4-2 Networking Configuration for HUAWEI RH2288 V2 Servers
As shown above, the eight network ports are divided into four groups; the two
ports in each group form a team and are located on different network adapters,
which implements high availability for network access. This configuration
effectively avoids a single point of failure even if one network adapter is down.
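The port layout above can be restated compactly as follows; the pairing of each role across the two physical adapters follows the text, while the port numbering itself is illustrative.

```python
# Compact restatement of the eight-port layout: ports 0-3 on the built-in
# adapter, ports 4-7 on the expansion card (numbering is illustrative).

TEAMS = {
    "iSCSI":                   (0, 4),  # storage connectivity (MPIO, not teamed)
    "Management":              (1, 5),  # NIC team for the management console
    "VM vSwitch":              (2, 6),  # virtual machine virtual switches
    "LiveMigration/Heartbeat": (3, 7),  # NIC team for migration and heartbeat
}

# Each pair spans both adapters, so one failed adapter never takes down a role.
for role, (a, b) in TEAMS.items():
    assert (a < 4) != (b < 4), f"{role} has both ports on one adapter"
print("all roles span both adapters")
```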
In addition to redundancy, you need a platform that has optimal virtualization
support through hardware virtualization capabilities such as Intel
Virtualization Technology (Intel VT) as the baseline, along with the Second
Level Address Translation (SLAT) capabilities of Windows Server 2012 Hyper-V,
to achieve the best possible performance. All of these features are available
with the HUAWEI Tecal RH2288 V2 series rack server.
The detailed information for the HUAWEI S5500T storage system used in the
HUAWEI and Microsoft Private Cloud Fast Track solution is shown in Table 4-2.

Table 4-2 HUAWEI S5500T Series Unified Storage System Configuration

Hardware: HUAWEI OceanStor S5500T Unified Storage System
Description:
System enclosure
Controller module: Dual controllers
Fibre Channel has historically been the storage protocol of choice for enterprise
data centers for a variety of reasons, including good performance and low latency.
Over the past several years, however, the advancing performance of Ethernet
from 1Gbps to 10Gbps and beyond has led to great interest in storage protocols
that make use of Ethernet transport, such as Internet Small Computer System
Interface (iSCSI) and, more recently, Fibre Channel over Ethernet (FCoE).
A key advantage of the protocols that use the Ethernet transport is the ability to
use a converged network architecture in which a single Ethernet infrastructure
serves as the transport for both LAN and storage traffic. FCoE is an emerging
technology that brings the benefits of using an Ethernet transport while
retaining the advantages of the Fibre Channel protocol and the ability to use
Fibre Channel storage arrays.
Several enhancements to standard Ethernet are required for FCoE. This is
commonly referred to as enhanced Ethernet or data center Ethernet. These
enhancements require Ethernet switches that are capable of supporting
enhanced Ethernet. A common practice in large-scale Hyper-V deployments is
to use both Fibre Channel and iSCSI: Fibre Channel provides the host storage
connectivity, while iSCSI is used directly by guests, for example, for the shared
disks in a guest cluster. In this case, although Ethernet and some storage I/O
will share the same pipe, segregation is achieved by VLANs, and quality of
service can be further applied by the OEM's networking software.
In this solution, iSCSI is used as the interconnection protocol between the
Hyper-V hosts and the storage system. Multiple 1 Gbps network links and two
HUAWEI S5700 switches implement the interconnection between the HUAWEI
RH2288 V2 and the S5500T, as shown below.
Figure 4-7 iSCSI Network Schema Configuration for S5500T Unified Storage System
Each S5700 switch is connected to both controllers so that even if one
controller goes down, the other controller can take over all services,
without a single point of failure.
2. Storage Network
Both iSCSI and FCoE utilize an Ethernet transport for storage networking. This
provides another architecture choice in terms of whether to use a dedicated
Ethernet network with separate switches, cables, paths, and other infrastructure,
or instead to use a converged network where multiple traffic types are run over
the same cabling and infrastructure.
The storage solution must provide logical or physical isolation between storage
and Ethernet I/O. If it is a converged network, QoS must be provided to guarantee
storage performance. The storage solution must provide iSCSI connectivity for
guest clustering, and there must be fully redundant, independent paths for
storage I/O.
Standards-based converged network adapters, switches, and Fibre Channel
storage arrays should be used for FCoE. The selected storage arrays should also
provide iSCSI connectivity over standard Ethernet so that Hyper-V guest
clusters can be used. If iSCSI or Fibre Channel is used, make sure that there are
dedicated network adapters, switches, and paths for the storage traffic.
CSV Characteristics
Table 4-3 below shows the characteristics that are defined by the New
Technology File System (NTFS) and are inherited by CSV.
Characteristic: Value
Maximum volume size: 256 TB
Maximum partitions: 128
Directory structure: Unrestricted
Maximum files per volume: 4+ billion
Maximum VMs: Unlimited
Because all cluster nodes can access all CSV volumes simultaneously, IT
managers can now use standard LUN allocation methodologies based on
performance and capacity requirements of the expected workloads. Generally
speaking, isolating the VM operating system I/O from the application data I/O is
a good start. In addition, implement application-specific considerations, such as
segregating the database I/O from the logging I/O, and create SAN volumes and
storage pools that factor in the I/O profile itself (that is, random read and write
operations versus sequential write operations).
The architecture of CSV differs from traditional clustered file systems, which
frees it from common scalability limitations. As a result, there is no special
guidance for scaling the number of Hyper-V nodes or VMs on a CSV volume. The
important thing to keep in mind is that all VM virtual disks running on a
particular CSV will contend for storage I/O.
Also worth noting is that individual SAN LUNs do not necessarily equate to
dedicated disk spindles. A SAN storage pool or RAID array might contain many
LUNs. A LUN is simply a logical representation of a disk provisioned from a pool
of disks. Therefore, if an enterprise application requires specific storage I/O
operations per second (IOPS) or disk response times, IT managers must consider
all the LUNs in use on that storage pool. An application that requires dedicated
physical disks when not virtualized might require dedicated storage pools and
CSV volumes when running within a VM.
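A small sketch of the sizing point above, with hypothetical numbers: the IOPS demands of all LUNs carved from one pool must be summed against the pool's capability, not judged per LUN.

```python
# Illustrative sketch: LUNs carved from one storage pool share the same
# spindles, so per-application IOPS budgets are summed against the pool.
# All numbers below are hypothetical.

def pool_headroom(pool_iops, lun_demands):
    """Remaining IOPS after all LUNs on the pool are accounted for."""
    return pool_iops - sum(lun_demands.values())

demands = {"sql-data": 3000, "sql-log": 1200, "vm-os": 1800}
print(pool_headroom(8000, demands))  # 2000
```

A negative result would mean the pool is oversubscribed for the expected workloads.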
Consider the following when setting up your CSV infrastructure:
At least four CSVs per host cluster are recommended for segregating
operating system I/O, random R/W I/O, sequential I/O, and other VM-specific
data.
Create a standard size and IOPS profile for each type of CSV LUN to use for
capacity planning. When additional capacity is needed, provision additional
standard CSV LUNs.
Consider prioritizing the network used for CSV traffic. For more information,
refer to Designating a Preferred Network for Cluster Shared Volumes
Communication in the Microsoft TechNet Library.
Figure 4-9 Example of a common CSV design for a large Hyper-V cluster
Data storage redundancy, such as that which occurs with volume mirroring, or
synchronous or asynchronous replication
Address the following elements when designing or modifying your SAN as the
basis of your Private Cloud storage infrastructure:
Performance
Drive types
Multipathing
iSCSI SAN
Thin provisioning
Volume cloning
Volume snapshots
1. Performance
2. Drive Types
The type of hard drive utilized in the host server or the storage array has the
most significant impact on the overall storage architecture performance. As with
the storage connectivity, high IOPS and low latency are more critical than
maximum sustained throughput when it comes to host server sizing and guest
performance. When selecting drives, this translates into selecting those with the
highest rotational speed and lowest latency possible. Utilizing 15K RPM drives
over 10K RPM drives can result in up to 35 percent more IOPS per drive.
The HUAWEI OceanStor S5000T series storage system supports SAS, NL-SAS,
SATA, FC, and SSD disks, fitting various scenarios. For high performance, solid
state disks (SSDs) are recommended; for both performance and capacity, SSD and
SAS disks are recommended; for capacity, NL-SAS and SATA disks are
recommended.
For this solution, SSDs and traditional 2.5-inch 10K RPM SAS disks are used for
both performance and capacity.
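The drive-type trade-offs above can be made concrete with rough per-drive IOPS estimates; the baseline figures below are ballpark assumptions for illustration, not vendor specifications.

```python
# Rough per-drive random-IOPS estimates (ballpark assumptions, not vendor
# specifications) behind the drive-type guidance above.

DRIVE_IOPS = {"10K SAS": 140, "15K SAS": 180, "NL-SAS 7.2K": 80, "SSD": 5000}

uplift = DRIVE_IOPS["15K SAS"] / DRIVE_IOPS["10K SAS"] - 1
print(f"15K over 10K uplift: {uplift:.0%}")  # 29% with these estimates

def array_iops(drive_type, count):
    """Naive aggregate random IOPS for `count` drives of one type."""
    return DRIVE_IOPS[drive_type] * count

# Mixed tier, as in this solution: a few SSDs plus 10K SAS for capacity.
print(array_iops("SSD", 4) + array_iops("10K SAS", 20))  # 22800
```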
3. Multipathing
iSCSI SAN
Jumbo frames. If supported at all points in the entire path of the iSCSI
network, jumbo frames can increase throughput by up to 20 percent. Jumbo
frames are supported in Hyper-V at the host and guest levels.
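A back-of-the-envelope check on the jumbo-frame figure: raw wire efficiency improves only a few percent when moving from a 1500-byte to a 9000-byte MTU, so much of the "up to 20 percent" throughput gain comes from the reduced per-packet CPU and interrupt cost rather than header savings alone.

```python
# Back-of-the-envelope payload efficiency for standard vs jumbo frames,
# assuming 40 bytes of IP + TCP headers inside the MTU and 38 bytes of
# Ethernet preamble/header/FCS/interframe gap outside it.

def payload_efficiency(mtu):
    """Fraction of wire bytes that are TCP payload for a given MTU."""
    payload = mtu - 40            # IP + TCP headers live inside the MTU
    return payload / (mtu + 38)   # Ethernet framing lives outside it

gain = payload_efficiency(9000) / payload_efficiency(1500) - 1
print(f"wire-efficiency gain from 9000-byte frames: {gain:.1%}")  # 4.4%
```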
6. Thin Provisioning
7. Volume Cloning
Volume Snapshots
As shown in Figure 4-12, the HUAWEI RH2288 V2 servers can connect to the
existing live migration and heartbeat LAN, the iSCSI SAN, and the virtual
machine vSwitch WAN network.
The following list outlines the general considerations for the Hyper-V host
operating system. Note that these are not meant to be installation instructions
but rather the process requirements.
Use Windows Server 2012 Datacenter Edition, with either the full or server
core installation option.
Apply relevant Windows updates, and all relevant out-of-band updates that
are not offered on Microsoft Update. For more information, see Hyper-V
Update List for Windows Server 2012 in the Microsoft TechNet Library.
All nodes, networks, and storage must pass the Cluster Validation Wizard in
Windows Server 2012.
2. Performance Settings
TCP checksum offload benefits both CPU and overall network throughput
performance, and is fully supported by live migration.
Virtual machine queue (VMQ) allows the host's single NIC to appear as
multiple NICs to the VMs by allowing the host's NIC to direct memory access
(DMA) packets directly into individual VM memory stacks. Each VM device
buffer is assigned a VMQ, which avoids needless packet copies and route
lookups in the virtual switch. The result is less data in the host's buffers and
an overall performance improvement in I/O operations.
3. IP Networks Configuration
Ensure that the following rules are followed when setting up the IP network:
The VM network adapter should not be shared with the host operating
system and therefore should not have the TCP/IP protocols (IPv4 and IPv6)
bound to it.
4. MPIO Configuration
The Microsoft MPIO architecture supports iSCSI, Fibre Channel, and Serial
Attached SCSI (SAS) SAN connectivity by establishing multiple sessions or
connections to the storage array.
Multipathing solutions use redundant physical path components (adapters,
cables, and switches) to create logical paths between the server and the storage
device. In the event that one or more of these components fails, causing the path
to fail, multipathing logic uses an alternate path for I/O so that applications can
still access their data. Each network interface card (in the iSCSI case) or HBA
should be connected using redundant switch infrastructures to provide continued
access to storage in the event of a failure in a storage fabric component.
Failover times vary by storage vendor, and can be configured using timers in the
Microsoft iSCSI Software Initiator driver, or by modifying the Fibre Channel host
bus adapter driver parameter settings.
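The failover behavior described above can be sketched as a simple first-healthy-path policy; the path names are illustrative, and real MPIO policies (round robin, least queue depth, and so on) are richer than this.

```python
# Minimal sketch of multipath failover: I/O goes over the first healthy
# logical path, so a failed adapter, cable, or switch removes only one path
# while I/O continues on the alternate. Path names are illustrative.

class Path:
    def __init__(self, name):
        self.name, self.healthy = name, True

def send_io(paths, data):
    """Failover policy: use the first healthy path, else raise."""
    for p in paths:
        if p.healthy:
            return f"{data} via {p.name}"
    raise IOError("all paths failed")

paths = [Path("iSCSI-A (switch 1)"), Path("iSCSI-B (switch 2)")]
print(send_io(paths, "write"))   # write via iSCSI-A (switch 1)
paths[0].healthy = False         # e.g. switch 1 or its cable fails
print(send_io(paths, "write"))   # write via iSCSI-B (switch 2)
```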
Microsoft Multipath I/O (MPIO) must be used by all storage adapters, both
iSCSI and Fibre Channel.
In this solution, Microsoft MPIO is used for the iSCSI interconnection as the
default option. It is recommended to use HUAWEI UltraPath in Windows Server
2012 as the default multipath software.
5. NIC Teaming
Figure 4-14 NIC Teaming for HUAWEI and Microsoft Fast Track Solution on Server Hosts
Server Topology
A Microsoft private cloud consists of at least two Hyper-V host clusters. The first
consists of at least two nodes and is referred to as the management cluster. The
second and any additional clusters are referred to as host clusters. Each host
cluster can contain up to 64 nodes with Windows Server 2012. Host clusters
require some form of shared storage, such as a Fibre Channel or iSCSI SAN.
Figure 4-15 Example of the topology of a Microsoft private cloud (supplied by Microsoft)
The Microsoft Private Cloud Fast Track configurations are made up of the
following elements:
for CSV) to the node where the disk is currently mounted. This is called CSV
redirected I/O mode.
Live migration network. During live migration, the contents of the memory
of the VM running on the source node needs to be transferred to the
destination node over a LAN connection. To provide a high-speed transfer, a
dedicated, redundant, 10-Gbps live migration network is required.
2. Storage Topology
Cluster shared volumes (CSV) is a feature that simplifies the configuration and
management of Hyper-V VMs in failover clusters. With CSV on a failover cluster
that runs Hyper-V, multiple VMs can use the same LUN (disk) yet fail over (or
move from node to node) independently of one another. CSV provides increased
flexibility for volumes in clustered storage; for example, it allows IT managers to
keep system files separate from data to optimize disk performance, even if the
system files and the data are contained within virtual hard disk (VHD) files. If
you choose to use live migration for your clustered VMs, CSV can also provide
performance improvements for the live migration process.
Cluster Shared Volumes in a Windows Server 2012 failover cluster allow
multiple nodes in the cluster to simultaneously have read-write access to the
same LUN (disk) that is provisioned as an NTFS volume. With CSVs, clustered
roles can fail over quickly from one node to another node without requiring a
change in drive ownership, or dismounting and remounting a volume. CSVs also
help simplify managing a potentially large number of LUNs in a failover cluster.
The diagram below illustrates a design in which a Hyper-V host cluster utilizes
three CSV LUNs to store different data types: one stores the guest VM operating
system partitions, another stores the guest VM data partitions, and the third
stores guest VM application data such as database files. CSV must be enabled
and available for storing multiple VMs on a single LUN.
Figure 4-16 Example of a common CSV design for a large Hyper-V cluster
Template     Specs   Network   OS          Unit Cost
Template 1   Small   VLAN x    WS2003 R2
Template 2   Med     VLAN x    WS2003 R2
Template 3   Large   VLAN x    WS2003 R2
Template 4   Small   VLAN x    WS2008 R2
Template 5   Med     VLAN x    WS2008 R2
Template 6   Large   VLAN x    WS2008 R2
VM storage
VM networking
Virtual processors
1. VM Storage
Fixed-Size Disks
Fixed virtual hard disks provide storage capacity by using a VHD file that is
expanded to the size specified for the VHD when the disk is created. The size of
the VHD file remains fixed regardless of the amount of data stored. However, you
can use the Edit Virtual Hard Disk Wizard to increase the size of the VHD, which
increases the size of the VHD file. By allocating the full capacity at the time of
creation, fragmentation at the host level is typically not an issue (fragmentation
inside the VHD itself must be managed within the guest). Use fixed-size disks for
production environments: they provide better performance than the other
options and ease the monitoring of storage availability.
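As a simplified illustration of the allocation behavior described above, the sketch below contrasts a fixed VHD (full size consumed at creation) with a dynamically expanding one. The dynamic case is included only as a hypothetical contrast; the model ignores block-allocation granularity and file metadata overhead.

```python
def fixed_vhd_footprint(provisioned_gib: float, data_gib: float) -> float:
    """A fixed VHD consumes its full provisioned size at creation,
    regardless of how much data the guest has actually written."""
    assert data_gib <= provisioned_gib
    return provisioned_gib

def dynamic_vhd_footprint(provisioned_gib: float, data_gib: float) -> float:
    """A dynamically expanding VHD grows roughly with the data written
    (simplified: ignores allocation granularity and metadata)."""
    assert data_gib <= provisioned_gib
    return data_gib

# A 100-GiB disk holding only 10 GiB of guest data:
print(fixed_vhd_footprint(100, 10))    # 100 -> full allocation, so no host-level fragmentation
print(dynamic_vhd_footprint(100, 10))  # 10  -> smaller file now, but it grows over time
```

The trade-off mirrors the guidance in the text: paying the full footprint up front buys predictable performance and simpler capacity monitoring.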
Differencing Disks
Differencing VHDs provide storage that enables you to preserve the content of a
VHD and record all changes in a separate VHD file linked to it. In that
configuration, the parent disk/parent VHD is read-only. Differencing disks are
not recommended for production server workloads because they introduce
additional challenges with respect to storage management and storage subsystem
efficiency. Use pass-through disks for cases in which maximum size and extreme
performance are required and portability (in the VHD sense) is not required.
Pass-Through Disks
Hyper-V enables VM guests to directly access local disks or SAN LUNs that are
attached to the physical server, without requiring the volume to be presented to
the host server. The VM guest accesses the disk directly (using the disk's globally
unique identifier, or GUID) without having to use the host's file system. Given
that the performance difference between fixed disks and pass-through disks is
now negligible, the decision should be based mainly on manageability. For
instance, if the data on the volume is very large (hundreds of gigabytes), a VHD
is hardly portable at that size given the extreme amount of time it takes to copy.
Hence, a pass-through disk would be a reasonable alternative. The backup
scheme should also be considered. With pass-through disks, the data can only be
backed up from within the guest. A VHD file is not created when using pass-through disks; the LUN is used directly by the guest.
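To make the portability argument concrete, a quick estimate of how long copying a large VHD file takes. The file size and sustained throughput below are assumed values for illustration; real transfer times depend on storage and network conditions.

```python
def copy_hours(size_gib: float, throughput_mib_s: float) -> float:
    """Hours needed to copy a file at a sustained throughput."""
    return (size_gib * 1024) / throughput_mib_s / 3600

# Copying a hypothetical 500-GiB VHD at a sustained 100 MiB/s:
print(round(copy_hours(500, 100), 2))  # ~1.42 hours
```

At multi-hour copy times, moving such a volume as a VHD file stops being practical, which is why a pass-through disk becomes the reasonable alternative at that scale.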
Hyper-V can also use iSCSI storage by directly connecting to iSCSI LUNs
through the guest's virtual network adapters. This connection is mainly used for
access to large volumes, for access to volumes on SANs that the Hyper-V host
itself is not connected to, or for guest clustering.
For in-guest iSCSI, use a separate virtual network for access to the iSCSI storage
to obtain acceptable performance. If the VM iSCSI network is shared with
Ethernet traffic, use QoS to provide performance guarantees to the different
networks. Consider using jumbo frames within the guest to improve iSCSI
performance.
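The benefit of jumbo frames for iSCSI can be illustrated by comparing per-frame header overhead at standard and jumbo MTU sizes. This is a simplified model that counts only IP and TCP headers and ignores Ethernet framing and iSCSI PDU structure.

```python
def frames_and_overhead(payload_mib: float, mtu: int, headers: int = 40):
    """Frames needed and total header bytes to move a payload.

    headers: IP (20) + TCP (20) bytes per frame; Ethernet framing
    and iSCSI PDU overhead are ignored in this simplified model.
    """
    payload_per_frame = mtu - headers
    total_bytes = payload_mib * 1024 * 1024
    frames = -(-total_bytes // payload_per_frame)  # ceiling division
    return int(frames), int(frames) * headers

# Moving 100 MiB at a standard 1500-byte MTU vs. a 9000-byte jumbo MTU:
std = frames_and_overhead(100, 1500)
jumbo = frames_and_overhead(100, 9000)
print(std)    # (frames, header bytes) at standard MTU
print(jumbo)  # roughly 6x fewer frames to build and process
```

Fewer frames means less per-packet CPU work and less header overhead on the wire, which is the mechanism behind the jumbo-frame recommendation.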
2. VM Networking
Hyper-V guests support two types of virtual network adapters: synthetic and
emulated. The faster performing of the two, synthetic, makes use of the Hyper-V
VMBus architecture and is the high-performance, native device in the VM.
Synthetic devices require that the Hyper-V integration components be installed
within the guest. Emulated adapters are available to all guests even if
integration components are not available.
Use synthetic virtual network adapters whenever possible. Because all supported
Hyper-V guest operating systems have integration services, the primary reason
to use the emulated network adapter is for preboot execution environment (PXE)
booting.
You can create many virtual networks on the server running Hyper-V to provide
a variety of communications channels. For example, you can create networks to
provide the following:
Communications between the host server and VMs; this type of virtual
network is called an internal network
HUAWEI Technologies Co., Ltd
For the private cloud scenario, use one or more external networks per VM and
segregate the networks with VLANs and other network security infrastructure as
needed.
3. Virtual Processors
A logical processor (LP) is defined as a processing core seen by the host operating
system or parent partition. For example, in the case of Intel Hyper-Threading
Technology, each thread is considered an LP. Hyper-V supports a maximum ratio
of 8 virtual processors (VPs) per logical processor for server workloads, and a
maximum ratio of 12 virtual processors for virtual desktop infrastructure (VDI)
workloads.
Therefore, looking at a server workload scenario, a 16-LP server supports a
maximum of 128 VPs. That would in turn equate to 128 single-processor VMs, 64
dual-processor VMs, or 32 quad-processor VMs. The VP/LP ratio is a maximum
supported limit. Different workloads will require different ratios.
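The supported-limit arithmetic above can be checked with a short Python snippet, using the stated 8:1 VP/LP ratio for server workloads:

```python
def max_vps(logical_processors: int, ratio: int = 8) -> int:
    """Maximum virtual processors supported on a host at a given VP:LP ratio."""
    return logical_processors * ratio

def max_vms(logical_processors: int, vps_per_vm: int, ratio: int = 8) -> int:
    """Maximum VMs of a given size that fit within the supported VP limit."""
    return max_vps(logical_processors, ratio) // vps_per_vm

# A 16-LP server at the 8:1 server-workload ratio:
print(max_vps(16))      # 128 virtual processors
print(max_vms(16, 1))   # 128 single-processor VMs
print(max_vms(16, 2))   # 64 dual-processor VMs
print(max_vms(16, 4))   # 32 quad-processor VMs
```

As the text notes, these are supported maximums, not sizing targets; actual consolidation ratios should be derived from workload measurements.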
For a table showing the supported number of VPs in a Hyper-V guest, see About
Virtual Machines and Guest Operating Systems in the Microsoft TechNet
Library.
Conclusion
AD DS    Active Directory Domain Services
CSV      Cluster Shared Volumes
HA       High Availability
IDC      Internet Data Center
iSCSI    Internet Small Computer System Interface (SCSI over IP)
ISM
LUN      Logical Unit Number
MSDTC    Microsoft Distributed Transaction Coordinator
MPIO     Multipath I/O
NIC      Network Interface Card
NL-SAS   Near-Line Serial Attached SCSI
SAS      Serial Attached SCSI
SSD      Solid State Drive
VLAN     Virtual Local Area Network