Mike Perks
Kenny Bain
Pawan Sharma
Table of Contents

1 Introduction
3.1.1 Provisioning Services
4.2.2 Citrix XenServer
4.3.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO
4.4.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO
Intel Xeon E5-2600 v2 processor family servers with local storage
Intel Xeon E5-2600 v3 processor family servers with Atlantis USX
4.12.1 Deployment example 1: Flex Solution with single Flex System chassis
4.12.3 Deployment example 3: System x server with Storwize V7000 and FCoE
Resources
Document history
1 Introduction
The intended audience for this document is technical IT architects, system administrators, and managers who
are interested in server-based desktop virtualization and server-based computing (terminal services or
application virtualization) that uses Citrix XenDesktop. In this document, the term client virtualization is used
to refer to all of these variations. Compare this term to server virtualization, which refers to the virtualization of
server-based business logic and databases.
This document describes the reference architecture for Citrix XenDesktop 7.6 and also supports the previous
versions of Citrix XenDesktop 5.6, 7.0, and 7.1. This document should be read with the Lenovo Client
Virtualization (LCV) base reference architecture document that is available at this website:
lenovopress.com/tips1275
The business problem, business value, requirements, and hardware details are described in the LCV base
reference architecture document and are not repeated here for brevity.
This document gives an architecture overview and logical component model of Citrix XenDesktop. The
document also provides the operational model of Citrix XenDesktop by combining Lenovo hardware
platforms such as Flex System, System x, NeXtScale System, and RackSwitch networking with OEM
hardware and software such as IBM Storwize and FlashSystem storage, and Atlantis Computing software. The
operational model presents performance benchmark measurements and discussion, sizing guidance, and
some example deployment models. The last section contains detailed bill of materials (BOM) configurations for
each piece of hardware.
2 Architectural overview
Figure 1 shows all of the main features of the Lenovo Client Virtualization reference architecture with Citrix
XenDesktop. This reference architecture does not address the issues of remote access and authorization, data
traffic reduction, traffic monitoring, and general issues of multi-site deployment and network management. This
document limits the description to the components that are inside the customer's intranet.
Figure 1: Lenovo Client Virtualization reference architecture with Citrix XenDesktop. External clients connect over the Internet, through a firewall and a third-party VPN, to the Connection Broker (Delivery Controller), Web Interface, License Server, Provisioning Server (PVS), and Machine Creation Services, which manage the hypervisor-hosted XenDesktop pools. Internal clients connect directly. Supporting infrastructure includes Active Directory, DNS, and shared storage.
3 Component model
Figure 2 is a layered view of the LCV solution that is mapped to the Citrix XenDesktop virtualization
infrastructure.
Figure 2: Lenovo Client Virtualization component model for Citrix XenDesktop. The layers are: client devices that run the Citrix Receiver and communicate over HTTP/HTTPS and ICA; the Delivery Controller and its management services (Web Interface, the Desktop Studio administrator GUI, and vCenter Server with its management protocols); hypervisors that host dedicated virtual desktops and stateless virtual desktops (each with a VM agent), hosted shared desktops and applications, and accelerator VMs with local SSD storage; support services (directory, DNS, DHCP, OS licensing, and Lenovo Thin Client Manager); and shared storage for the VM repository, difference and identity disks, user profiles, and user data files.
Desktop Studio
Desktop Studio is the main administrator GUI for Citrix XenDesktop. It is used
to configure and manage all of the main entities, including servers, desktop
pools and provisioning, policy, and licensing.
Web Interface
Delivery controller
The Delivery controller is responsible for maintaining the proper level of idle
desktops to allow for instantaneous connections, monitoring the state of online
and connected desktops, and shutting down desktops as needed.
A XenDesktop farm is a larger grouping of virtual machine servers. Each
delivery controller in the farm acts as an XML server that is responsible
for brokering user authentication, resource enumeration, and desktop startup.
Because a failure in the XML service results in users being unable to start their
desktops, it is recommended that you configure multiple controllers per farm.
Provisioning Services (PVS) is used to provision stateless desktops at a large scale.
License Server
Each Citrix XenDesktop site requires an SQL Server database that is called
the data store, which is used to centralize farm configuration information and
transaction logs. The data store maintains all static and dynamic information
about the XenDesktop environment. Because the XenDesktop SQL server is a
critical component, redundant servers must be available to provide fault
tolerance.
vCenter Server
By using a single console, vCenter Server provides centralized management
of the virtual machines (VMs) for the VMware ESXi hypervisor. VMware
vCenter can be used to perform live migration (called VMware vMotion), which
allows a running VM to be moved from one physical server to another without
downtime.
Redundancy for vCenter Server is achieved through VMware high availability
(HA). The vCenter Server also contains a licensing server for VMware ESXi.
vCenter SQL Server
vCenter Server for VMware ESXi hypervisor requires an SQL database. The
vCenter SQL server might be Microsoft Data Engine (MSDE), Oracle, or SQL
Server. Because the vCenter SQL server is a critical component, redundant
servers must be available to provide fault tolerance. Customer SQL databases
(including respective redundancy) can be used.
Client devices
Citrix XenDesktop supports a broad set of devices and all major device
operating platforms, including Apple iOS, Google Android, and Google
ChromeOS. XenDesktop enables a rich, native experience on each device,
including support for gestures and multi-touch features, which customizes the
experience based on the type of device. Each client device has a Citrix
Receiver, which acts as the agent to communicate with the virtual desktop by
using the ICA/HDX protocol.
VDA
Each VM needs a Citrix Virtual Desktop Agent (VDA) to capture desktop data
and send it to the Citrix Receiver on the client device. The VDA also emulates
the keyboard and gestures that are sent from the Receiver. ICA is the Citrix
remote display protocol for VDI.
Hypervisor
Accelerator VM
Shared storage
Shared storage is used to store user profiles and user data files. Depending on
the provisioning model that is used, different data is stored for VM images. For
more information, see the Storage model section.
For more information, see the Lenovo Client Virtualization base reference architecture document that is
available at this website: lenovopress.com/tips1275.
The XenDesktop storage model includes the virtual desktop write cache, the master image and its snapshots, and the PVS vDisk.
- Pooled-random: Desktops are assigned randomly. When a user logs off, the desktop is freed for another user. When the desktop is rebooted, any changes that were made are destroyed.
- Pooled-static: Desktops are permanently assigned to a single user. When a user logs off, only that user can use the desktop, regardless of whether it is rebooted. During reboots, any changes that were made are destroyed.
- Dedicated: Desktops are permanently assigned to a single user. When a user logs off, only that user can use the desktop, regardless of whether it is rebooted. During reboots, any changes that were made persist across subsequent restarts.
MCS thin provisions each desktop from a master image by using built-in technology to provide each desktop
with a unique identity. Only the changes that are made to the desktop use more disk space. For this reason,
MCS thin provisioning is typically used for dedicated desktops.
- The master VM image and snapshots are stored by using Network File System (NFS) or block I/O shared storage.
- The paging file (or vSwap) is transient data that can be redirected to NFS storage. In general, it is recommended to disable swapping, which reduces storage use (shared or local). The desktop memory size should be chosen to match the user workload rather than depending on a smaller image and swapping, which reduces overall desktop performance.
- User profiles (from MSRP) are stored by using the Common Internet File System (CIFS).
Dedicated virtual desktops or stateless virtual desktops that need mobility require the following items to be on
NFS or block I/O shared storage:
- Difference disks are used to store users' changes to the base VM image. The difference disks are per user and can become quite large for dedicated desktops.
- Identity disks are used to store the computer name and password and are small.
Stateless desktops can use local solid-state drive (SSD) storage for the PVS write cache, which is
used to store all image writes on local SSD storage. These image writes are discarded when the VM is
shut down.
| Type | Min number of servers | Performance tier | Capacity tier | HA | Comments |
| Hyperconverged | Cluster of 3 | Memory or local flash | DAS | USX HA | Good balance between performance and capacity |
| Simple Hybrid | 1 | Memory or flash | Shared storage | Hypervisor HA | Functionally equivalent to Atlantis ILIO Persistent VDI |
| Simple All Flash | 1 | Local flash | Shared flash | Hypervisor HA | Good performance, but lower capacity |
| Simple In-Memory | 1 | Memory or flash | Memory or flash | N/A (daily backup) | Functionally equivalent to Atlantis ILIO Diskless VDI |
4 Operational model
This section describes the options for mapping the logical components of a client virtualization solution onto
hardware and software. The Operational model scenarios section gives an overview of the available
mappings and has pointers into the other sections for the related hardware. Each subsection contains
performance data, has recommendations on how to size for that particular hardware, and a pointer to the BOM
configurations that are described in section 5 on page 43. The last part of this section contains some
deployment models for example customer scenarios.
Traditional
- Compute/management servers: x3550, x3650, nx360
- Hypervisor: ESXi, XenServer
- Graphics acceleration: NVIDIA GRID K1 or K2
- Shared storage: IBM Storwize V7000, IBM FlashSystem 840
- Networking: 10GbE, 10GbE FCoE, 8 or 16 Gb FC

Converged (Flex System chassis)
- Compute/management servers: Flex System x240
- Hypervisor: ESXi
- Shared storage: IBM Storwize V7000, IBM FlashSystem 840
- Networking: 10GbE, 10GbE FCoE, 8 or 16 Gb FC

SMB
- Compute/management servers: x3550, x3650, nx360
- Hypervisor: ESXi, XenServer, Hyper-V
- Graphics acceleration: NVIDIA GRID K1 or K2
- Shared storage: IBM Storwize V3700
- Networking: 10GbE

Hyper-converged
- Compute/management servers: x3650
- Hypervisor: ESXi, XenServer
- Graphics acceleration: NVIDIA GRID K1 or K2
- Shared storage: Not applicable
- Networking: 10GbE
The right-most column represents hyper-converged systems and the software that is used in these systems. For
the purposes of this reference architecture, the traditional and converged columns are merged for enterprise
solutions; the only significant differences are the networking, form factor, and capabilities of the compute
servers.
Converged systems are not generally recommended for the SMB space because the converged hardware
chassis adds overhead when only a few compute nodes are needed. However, spare compute nodes in a
converged chassis can host other workloads, which can make this hardware architecture more cost-effective.
4.10 Networking
4.11 Racks
To show the enterprise operational model for different sized customer environments, four different sizing
models are provided for supporting 600, 1500, 4500, and 10000 users.
To show the SMB operational model for different sized customer environments, four different sizing models are
provided for supporting 75, 150, 300, and 600 users.
To show the hyper-converged operational model for different sized customer environments, four different sizing
models are provided for supporting 300, 600, 1500, and 3000 users. The management server VMs for a
hyper-converged cluster can either be in a separate hyper-converged cluster or on traditional shared storage.
- Disable Large Send Offload (LSO) by using the Disable-NetAdapterLso command on the Hyper-V compute server.
- Disable virtual machine queue (VMQ) on all interfaces by using the Disable-NetAdapterVmq command on the Hyper-V compute server.
- Apply registry changes as described in the Microsoft article at this website: support.microsoft.com/kb/2681638. The changes apply to Windows Server 2008 and Windows Server 2012.
- Disable the VMQ and Internet Protocol Security (IPsec) task offloading flags in the Hyper-V settings for the base VM.
- By default, storage is shared as hidden admin shares (for example, e$) on the Hyper-V compute server, and XenDesktop does not list admin shares while adding the host. To make shared storage available to XenDesktop, the volume should be explicitly shared on the Hyper-V compute server.
- Because the SCVMM library is large, it is recommended that it is accessed by using a remote share.
| MCS Stateless | MCS Dedicated |
| 239 users | 234 users |
| 283 users | 291 users |
| 284 users | 301 users |
| 301 users | 306 users |
Table 3 lists the results for the Login VSI 4.1 knowledge worker workload.
Table 3: Performance with knowledge worker workload
| MCS Stateless | MCS Dedicated |
| 244 users | 237 users |
| 252 users | 246 users |
These results indicate the comparative processor performance. The following conclusions can be drawn:
The Xeon E5-2650v3 processor has performance that is similar to the previously recommended Xeon
E5-2690v2 processor (Ivy Bridge), but uses less power and is less expensive.
The Xeon E5-2690v3 processor does not have significantly better performance than the Xeon
E5-2680v3 processor; therefore, the E5-2680v3 is preferred because of the lower cost.
Between the Xeon E5-2650v3 (2.30 GHz, 10C 105W) and the Xeon E5-2680v3 (2.50 GHz, 12C 120W) series
processors are the Xeon E5-2660v3 (2.6 GHz 10C 105W) and the Xeon E5-2670v3 (2.3GHz 12C 120W)
series processors. The cost per user increases with each processor but with a corresponding increase in user
density. The Xeon E5-2680v3 processor has good user density, but the significant increase in cost might
outweigh this advantage. Also, many configurations are bound by memory; therefore, a faster processor might
not provide any added value. Some users require the fastest processor and for those users the Xeon
E5-2680v3 processor is the best choice. However, the Xeon E5-2650v3 processor is recommended for an
average configuration.
Previous Reference Architectures used Login VSI 3.7 medium and heavy workloads. Table 4 gives a
comparison with the newer Login VSI 4.1 office worker and knowledge worker workloads. The table shows that
Login VSI 3.7 is on average 20% to 30% higher than Login VSI 4.1.
Table 4: Comparison of Login VSI 3.7 and 4.1 workloads

| Workload | MCS Stateless | MCS Dedicated |
| 4.1 Office worker | 239 users | 234 users |
| 3.7 Medium | 286 users | 286 users |
| 4.1 Office worker | 301 users | 306 users |
| 3.7 Medium | 394 users | 379 users |
| 4.1 Knowledge worker | 252 users | 246 users |
| 3.7 Heavy | 348 users | 319 users |
Table 5 compares the E5-2600 v3 processors with the previous generation E5-2600 v2 processors by using
the Login VSI 3.7 workloads to show the relative performance improvement. On average, the E5-2600 v3
processors are 25% - 40% faster than the previous generation with the equivalent processor names.
Table 5: Comparison of E5-2600 v2 and E5-2600 v3 processors
| Workload | MCS Stateless | MCS Dedicated |
| 3.7 Medium | 204 users | 204 users |
| 3.7 Medium | 286 users | 286 users |
| 3.7 Medium | 268 users | 257 users |
| 3.7 Medium | 394 users | 379 users |
| 3.7 Heavy | 224 users | 229 users |
| 3.7 Heavy | 348 users | 319 users |
Table 6 lists the Login VSI performance of the Intel Xeon E5-2600 v3 processors that use the Office worker
workload with XenServer 6.5.
Table 6: XenServer 6.5 performance with Office worker workload
| Hypervisor | MCS stateless | MCS dedicated |
| XenServer 6.5 | 225 users | 224 users |
| XenServer 6.5 | 274 users | 278 users |
Table 7 shows the results for the same comparison that uses the Knowledge worker workload.
Table 7: XenServer 6.5 performance with Knowledge worker workload
| Hypervisor | MCS stateless | MCS dedicated |
| XenServer 6.5 | 210 users | 208 users |
The default recommendation for this processor family is the Xeon E5-2650v3 processor and 512 GB of system
memory because this configuration provides the best coverage for a range of users. For users who need VMs
that are larger than 3 GB, Lenovo recommends the use of 768 GB and the Xeon E5-2680v3 processor.
Lenovo testing shows that 150 users per server is a good baseline and has an average of 76% usage of the
processors in the server. If a server goes down, users on that server must be transferred to the remaining
servers. For this degraded failover case, Lenovo testing shows that 180 users per server have an average of
89% usage of the processor. It is important to keep this 25% headroom on servers to cope with possible
failover scenarios. Lenovo recommends a general failover ratio of 5:1.
Table 8 lists the processor usage with ESXi for the recommended user counts for normal mode and failover
mode.
Table 8: Processor usage
| Processor | Workload | Mode | Stateless utilization | Dedicated utilization |
| Two E5-2650 v3 | Office worker | Normal | 78% | 75% |
| Two E5-2650 v3 | Office worker | Failover | 88% | 87% |
| Two E5-2680 v3 | Knowledge worker | Normal | 78% | 77% |
| Two E5-2680 v3 | Knowledge worker | Failover | 86% | 86% |
Table 9 lists the recommended number of virtual desktops per server for different VM memory. The number of
users is reduced in some cases to fit within the available memory and still maintain a reasonably balanced
system of compute and memory.
Table 9: Recommended number of virtual desktops per server
| Processor | E5-2650v3 | E5-2650v3 | E5-2680v3 |
| VM memory size | 2 GB (default) | 3 GB | 4 GB |
| System memory | 384 GB | 512 GB | 768 GB |
| Desktops per server (normal mode) | 150 | 140 | 150 |
| Desktops per server (failover mode) | 180 | 168 | 180 |
Table 10 lists the approximate number of compute servers that are needed for different numbers of users and
VM sizes.
Table 10: Compute servers needed for different numbers of users and VM sizes
Desktop memory size: 2 GB or 4 GB

| Users | 600 | 1500 | 4500 | 10000 |
| Compute servers (normal mode) | — | 10 | 30 | 68 |
| Compute servers (failover mode) | — | — | 25 | 56 |
| Failover ratio | 4:1 | 4:1 | 5:1 | 5:1 |

Desktop memory size: 3 GB

| Users | 600 | 1500 | 4500 | 10000 |
| Compute servers (normal mode) | — | 11 | 33 | 72 |
| Compute servers (failover mode) | — | — | 27 | 60 |
| Failover ratio | 4:1 | 4.5:1 | 4.5:1 | 5:1 |
For stateless desktops, local SSDs can be used to store the write-back cache for improved performance. Each
stateless virtual desktop requires a cache, which tends to grow over time until the virtual desktop is rebooted.
The size of the write-back cache depends on the environment. Two enterprise high-speed 200 GB SSDs in a
RAID 0 configuration should be sufficient for most user scenarios; however, 400 GB (or even 800 GB) SSDs
might be needed. Because of the stateless nature of the architecture, there is little added value in configuring
reliable SSDs in more redundant configurations.
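As a rough sketch of the cache arithmetic (RAID 0 capacity is the sum of the member drives; the 150 desktops-per-server density is an assumption that is taken from the sizing guidance earlier in this document):

```python
# Two enterprise 200 GB SSDs in RAID 0: capacity is additive, no redundancy
ssd_count = 2
ssd_size_gb = 200
total_cache_gb = ssd_count * ssd_size_gb   # 400 GB of write-back cache

# Assumed density of 150 stateless desktops per server
desktops_per_server = 150
cache_per_desktop_gb = total_cache_gb / desktops_per_server

print(total_cache_gb, round(cache_per_desktop_gb, 2))  # 400 2.67
```

If the per-desktop cache grows beyond a few gigabytes between reboots, the larger 400 GB or 800 GB drives become necessary.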
4.3.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO
Atlantis ILIO provides storage optimization by using a 100% software solution. There is a cost for processor
and memory usage while offering decreased storage usage and increased input/output operations per second
(IOPS). This section contains performance measurements for processor and memory utilization of ILIO
technology and gives an indication of the storage usage and performance. Dedicated and stateless virtual
desktops have different performance measurements and recommendations.
VMs under ILIO are deployed on a per server basis. It is also recommended to use a separate storage logical
unit number (LUN) for each ILIO VM to support failover. Therefore, the performance measurements and
recommendations in this section are on a per server basis. Note that these measurements are currently for the
E5-2600 v2 processor using Login VSI 3.7.
| Workload | Dedicated without ILIO | Dedicated with ILIO |
| Medium | 218 users | 175 users |
| Medium | 282 users | 241 users |
| Heavy | 249 users | 186 users |
There is an average difference of 20% - 30% in the work that is done by the two vCPUs of the Atlantis ILIO VM.
It is recommended that higher-end processors (such as E5-2690v2) are used to maximize density.
The ILIO Persistent VDI VM uses 5 GB of RAM. In addition, the ILIO RAM cache requires more RAM and
Atlantis Computing provides a calculator for this RAM. Lenovo testing found that 275 VMs used 35 GB out of
the 50 GB RAM. In practice, most servers host fewer VMs, but each VM is much larger. Proof of concept (POC)
testing can help determine the amount of RAM, but for most situations 50 GB of RAM should be sufficient.
Assuming 4 GB for the hypervisor, 59 GB (50 + 5 + 4) of system memory should be reserved. It is
recommended that at least 384 GB of server memory is used for ILIO Persistent VDI deployments.
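The reservation arithmetic can be sketched as follows (the 50 GB RAM-cache figure is the Lenovo test value quoted above; in practice, the Atlantis calculator should supply it):

```python
ilio_vm_gb = 5       # ILIO Persistent VDI VM
ram_cache_gb = 50    # ILIO RAM cache (from the Atlantis calculator in practice)
hypervisor_gb = 4    # assumed hypervisor overhead

# 50 + 5 + 4 = 59 GB of system memory reserved
reserved_gb = ram_cache_gb + ilio_vm_gb + hypervisor_gb

# Memory that remains for desktop VMs on the minimum 384 GB server
available_gb = 384 - reserved_gb
print(reserved_gb, available_gb)  # 59 325
```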
Table 12 lists the recommended number of virtual desktops per server for different VM memory sizes for a
medium workload. This configuration can be a more cost-effective, higher-density route for larger VMs that
balance RAM and processor utilization.
Table 12: Recommended number of virtual desktops per server with ILIO Persistent VDI
| Processor | E5-2690v2 | E5-2690v2 | E5-2690v2 |
| VM memory size | 2 GB (default) | 3 GB | 4 GB |
| System memory | 384 GB | 512 GB | 768 GB |
| Reserved memory | 59 GB | 59 GB | 59 GB |
| Memory for desktop VMs | 325 GB | 452 GB | 709 GB |
| Desktops per server (normal mode) | 125 | 125 | 125 |
| Desktops per server (failover mode) | 150 | 150 | 150 |
Table 13 lists the number of compute servers that are needed for different numbers of users and VM sizes. A
server with 384 GB system memory is used for 2 GB VMs, 512 GB system memory is used for 3 GB VMs, and
768 GB system memory is used for 4 GB VMs.
Table 13: Compute servers needed for different numbers of users with ILIO Persistent VDI
| Users | 600 | 1500 | 4500 | 10000 |
| Compute servers (normal mode) | — | 12 | 36 | 80 |
| Compute servers (failover mode) | — | 10 | 30 | 67 |
| Failover ratio | 4:1 | 5:1 | 5:1 | 4:1 |
The amount of disk storage that is used depends on several factors, including the size of the original image,
the amount of user unique storage, and the de-duplication and compression ratios that can be achieved.
Here is a best case example: A Windows 7 image uses 21 GB out of an allocated 30 GB. For 160 VMs that are
using full clones, the actual storage space that is needed is 3360 GB. For ILIO, the storage space that is used
is 60 GB out of an allocated datastore of 250 GB. This configuration is a saving of 98% and is best case, even
if you add the 50 GB of disk space that is needed by the ILIO VM.
It is still a best practice to separate the user folder and any other shared folders onto separate storage. All of
the other changes that might occur in a full clone must then be stored in the ILIO data store.
This configuration is highly dependent on the environment. Testing by Atlantis Computing suggests that 3.5 GB
of unique data per persistent VM is sufficient. Comparing against the 4800 GB that is needed for 160 full clone
VMs, this configuration still represents a saving of 88%. It is recommended to reserve 10% - 20% of the total
storage that is required for the ILIO data store.
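The savings figures in this section follow from simple arithmetic, sketched here with the example values from the text:

```python
vms = 160

# Best case: 160 full clones at 21 GB used each vs. a 60 GB ILIO data store
full_clone_used_gb = vms * 21                   # 3360 GB
ilio_used_gb = 60
best_saving = 1 - ilio_used_gb / full_clone_used_gb

# Conservative case: 160 full clones at 30 GB allocated each vs.
# 3.5 GB of unique data per persistent VM under ILIO
full_clone_alloc_gb = vms * 30                  # 4800 GB
ilio_unique_gb = vms * 3.5                      # 560 GB
conservative_saving = 1 - ilio_unique_gb / full_clone_alloc_gb

print(f"{best_saving:.0%} {conservative_saving:.0%}")  # 98% 88%
```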
As a result of the use of ILIO Persistent VDI, the only read operations are to fill the cache for the first time. For
all practical purposes, the remaining reads are few and at most 1 IOPS per VM. Writes to persistent storage
are still needed for starting, logging in, remaining in steady state, and logging off, but the overall IOPS count is
substantially reduced.
Assuming the use of a fast, low-latency shared storage device, such as the IBM FlashSystem 840 system, a
single VM boot can take 20 - 25 seconds to get past the display of the logon window and get all of the other
services fully loaded. The boot takes this long because boot operations are mainly read operations, although
the actual boot time can vary depending on the VM. Citrix XenDesktop boots VMs in batches of 10 at
a time, which reduces IOPS for most storage systems but is actually an inhibitor for Atlantis ILIO. Without the
use of XenDesktop, a boot of 100 VMs in a single ILIO data store is completed in 3.5 minutes; that is, only 11
times longer than the boot of a single VM and far superior to existing storage solutions.
Login time for a single desktop varies, depending on the VM image but can be extremely quick. In some cases,
the login will take less than 6 seconds. Scale-out testing across a cluster of servers shows that one new login
every 6 seconds can be supported over a long period. Therefore, at any one instant, there can be multiple
logins underway and the main bottleneck is the processor.
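The timing claims above can be checked with simple arithmetic (a sketch; all values are from the text):

```python
# Boot: 100 VMs in one ILIO data store complete in 3.5 minutes,
# versus roughly 20 seconds for a single VM
batch_boot_s = 3.5 * 60          # 210 seconds
single_boot_s = 20
boot_ratio = batch_boot_s / single_boot_s   # 10.5, i.e. about 11x one boot

# Login: one new login every 6 seconds, sustained across the cluster
logins_per_hour = 3600 // 6      # 600 logins per hour
print(round(boot_ratio, 1), logins_per_hour)
```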
| Workload | MCS stateless | MCS stateless with ILIO Persistent VDI | MCS stateless with ILIO Diskless VDI |
| Medium | 218 users | 178 users | 180 users |
| Medium | 282 users | 226 users | 236 users |
| Heavy | 247 users | 178 users | 189 users |
There is an average difference of 20% - 35% in the work that is done by the two vCPUs of the Atlantis ILIO VM.
It is recommended that higher-end processors (such as the E5-2690v2) are used to maximize density. The
maximum number of users that is supported is slightly higher for ILIO Diskless VDI, but the RAM requirement
is also much higher.
For the ILIO Persistent VDI that uses local SSDs, the memory calculation is similar to that for persistent virtual
desktops. It is recommended that at least 384 GB of server memory is used for ILIO Persistent VDI
deployments. For more information about recommendations for ILIO Persistent VDI that use local SSDs for
stateless virtual desktops, see Table 12 and Table 13. The same configuration can also be used for stateless
desktops with shared storage; however, the performance of the write operations likely becomes much worse.
The ILIO Diskless VDI VM uses 5 GB of RAM. In addition, the ILIO RAM cache and RAM data store requires
extra RAM. Atlantis Computing provides a calculator for this RAM. Lenovo testing found that 230 VMs used 69
GB of RAM. In practice, most servers host fewer VMs and each VM has more differences. POC testing can help
determine the amount of RAM, but 128 GB should be sufficient for most situations. Assuming 4 GB for the
hypervisor, 137 GB (128 + 5 + 4) of system memory should be reserved. In general, it is recommended that a
minimum of 512 GB of server memory is used for ILIO Diskless VDI deployments.
Table 15 lists the recommended number of stateless virtual desktops per server for different VM memory sizes
for a medium workload.
Table 15: Recommended number of virtual desktops per server with ILIO Diskless VDI
| Processor | E5-2690v2 | E5-2690v2 | E5-2690v2 |
| VM memory size | 2 GB (default) | 3 GB | 4 GB |
| System memory | 512 GB | 512 GB | 768 GB |
| Reserved memory | 137 GB | 137 GB | 137 GB |
| Memory for desktop VMs | 375 GB | 375 GB | 631 GB |
| Desktops per server (normal mode) | 125 | 100 | 125 |
| Desktops per server (failover mode) | 150 | 125 | 150 |
Table 16 shows the number of compute servers that are needed for different numbers of users and VM sizes. A
server with 512 GB system memory is used for 2 GB and 3 GB VMs, and 768 GB system memory is used for 4
GB VMs.
Table 16: Compute servers needed for different numbers of users and VM sizes with ILIO Diskless VDI
Desktop memory size: 2 GB or 4 GB

| Users | 600 | 1500 | 4500 | 10000 |
| Compute servers (normal mode) | — | 11 | 30 | 67 |
| Compute servers (failover mode) | — | — | 25 | 56 |
| Failover ratio | 4:1 | 4.5:1 | 5:1 | 5:1 |

| Users | 600 | 1500 | 4500 | 10000 |
| Compute servers (normal mode) | — | 12 | 36 | 80 |
| Compute servers (failover mode) | — | 10 | 30 | 67 |
| Failover ratio | 4:1 | 5:1 | 5:1 | 4:1 |
Disk storage is needed for the master images and each SnapClone data store for ILIO Diskless VDI VMs. This
storage does not need to be fast because it is used only to initially load the master image or to recover an ILIO
Diskless VDI VM that was rebooted.
As with persistent virtual desktops, the addition of the ILIO technology reduces the IOPS that is needed for
boot, login, remaining in steady state, and logoff. This reduces the time to bring a VM online and improves user
response time.
As the name implies, multiple desktops share a single VM; however, because of this sharing, the compute
resources often are exhausted before memory. Lenovo testing showed that 128 GB of memory is sufficient for
servers with two processors.
Other testing showed that the performance difference between four, six, or eight VMs is minimal; therefore,
four VMs are recommended to reduce the license costs for Windows Server 2012 R2.
For more information, see BOM for hosted desktops section on page 48.
| Hypervisor | Workload | Hosted desktops |
| ESXi 6.0 | Office worker | 222 users |
| ESXi 6.0 | Office worker | 255 users |
| ESXi 6.0 | Office worker | 264 users |
| ESXi 6.0 | Office worker | 280 users |
| ESXi 6.0 | Knowledge worker | 231 users |
| ESXi 6.0 | Knowledge worker | 237 users |
Table 18 lists the processor performance results for different size workloads that use four Windows Server
2012 R2 VMs with the Xeon E5-2600v3 series processors and XenServer 6.5 hypervisor.
Table 18: XenServer 6.5 results for shared hosted desktops using the E5-2600 v3 processors
| Hypervisor | Workload | Hosted desktops |
| XenServer 6.5 | Office worker | 225 users |
| XenServer 6.5 | Office worker | 243 users |
| XenServer 6.5 | Office worker | 262 users |
| XenServer 6.5 | Office worker | 271 users |
| XenServer 6.5 | Knowledge worker | 223 users |
| XenServer 6.5 | Knowledge worker | 226 users |
Lenovo testing shows that 170 hosted desktops per server is a good baseline. If a server goes down, users on
that server must be transferred to the remaining servers. For this degraded failover case, Lenovo recommends
204 hosted desktops per server. It is important to keep a 25% headroom on servers to cope with possible
failover scenarios. Lenovo recommends a general failover ratio of 5:1.
Table 19 lists the processor usage for the recommended number of users.
| Processor | Workload | Mode | Utilization |
| Two E5-2650 v3 | Office worker | Normal | 78% |
| Two E5-2650 v3 | Office worker | Failover | 88% |
| Two E5-2680 v3 | Knowledge worker | Normal | 65% |
| Two E5-2680 v3 | Knowledge worker | Failover | 78% |
Table 20 lists the number of compute servers that are needed for different numbers of users. Each compute
server has 128 GB of system memory for the four VMs.
Table 20: Compute servers needed for different numbers of users and VM sizes
| Users | 600 | 1500 | 4500 | 10000 |
| Compute servers (normal mode) | — | 10 | 27 | 59 |
| Compute servers (failover mode) | — | — | 22 | 49 |
| Failover ratio | 3:1 | 4:1 | 4.5:1 | 5:1 |
4.4.2 Intel Xeon E5-2600 v2 processor family servers with Atlantis ILIO
Atlantis ILIO provides in-memory storage optimization by using a 100% software solution. There is an effect on
processor and memory usage while offering decreased storage usage and increased IOPS. This section
contains performance measurements for processor and memory utilization of ILIO technology and describes
the storage usage and performance.
VMs under ILIO are deployed on a per server basis. It is recommended to use a separate storage LUN for
each ILIO VM to support failover. The performance measurements and recommendations in this section are on
a per server basis. Note that these measurements are currently for the E5-2600 v2 processor using Login VSI
3.7.
The performance measurements and recommendations are for the use of ILIO Persistent VDI with hosted
shared desktops. Table 21 lists the processor performance results for the Xeon E5-2600 v2 series of
processors.
Table 21: Performance results for shared hosted desktops using the E5-2600 v2 processors
| Workload | Processor | Desktops without ILIO | Desktops with ILIO Persistent VM |
| Medium | Two E5-2650v2 | 214 users | 193 users |
| Medium | Two E5-2690v2 | 283 users | 257 users |
| Heavy | Two E5-2690v2 | 241 users | 220 users |
On average, there is a difference of 20% - 30% that can be attributed to work that is done by the two vCPUs of
the Atlantis ILIO VM. It is recommended that higher-end processors (such as E5-2690v2) are used to maximize
density.
22
The ILIO Persistent VDI VM uses 5 GB of RAM. In addition, the ILIO RAM cache requires more RAM, and Atlantis Computing provides a calculator for this RAM. Lenovo testing found that the four VMs used 32 GB. In practice, most servers host fewer VMs and each VM is much larger. POC testing can help determine the amount of RAM; however, for most circumstances, 60 GB should be enough. It is recommended that at least 192 GB of server memory is used for ILIO Persistent VDI deployments of hosted desktops.
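One way to see how a figure near the 192 GB recommendation arises is to add up the components named above; the four 32 GB session-host VMs are an assumption in this sketch, not a quote from the document, so treat it as illustrative rather than the official Atlantis calculator:

```python
# Rough server-memory budget for ILIO Persistent VDI (illustrative).
# ilio_vm_gb and ram_cache_gb come from the text; the session-host VM
# sizing (four VMs of 32 GB each) is an assumed example.

def ilio_server_memory_gb(ilio_vm_gb: int = 5,
                          ram_cache_gb: int = 60,
                          session_vm_count: int = 4,
                          session_vm_gb: int = 32) -> int:
    return ilio_vm_gb + ram_cache_gb + session_vm_count * session_vm_gb

total = ilio_server_memory_gb()  # 5 + 60 + 128 = 193 GB, near the 192 GB guidance
```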
Table 22 shows the recommended number of shared hosted desktops per compute server that uses two Xeon
E5-2690v2 series processors, which allows for some processor headroom for the hypervisor and a 5:1 failover
ratio in the compute servers.
Table 22: Recommended number of shared hosted desktops per server
Workload  Normal case  Normal utilization  Failover case  Failover utilization
Medium    180          70%                 216            84%
Heavy     160          73%                 192            87%
Table 23 shows the number of compute servers that are needed for different numbers of users. Each compute
server has 256 GB of system memory for the four VMs and the ILIO Persistent VDI VM.
Table 23: Compute servers needed for different numbers of users and VM sizes
                                  600 users  1500 users  4500 users  10000 users
Compute servers (180 users each)  4          9           25          56
Compute servers (216 users each)  3          7           21          46
Failover ratio                    3:1        3.5:1       5:1         4.5:1
The amount of disk storage that is used depends on several factors, including the size of the original Windows
Server image, the amount of unique storage and the de-duplication and compression ratios that can be
achieved. A Windows 2008 R2 image uses 19 GB. For four VMs, the actual storage space that is needed is
76 GB. For ILIO, the storage space that is used is 25 GB, which is a saving of 67%.
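The 67% figure is simple arithmetic on the numbers above (4 x 19 GB = 76 GB of raw space versus 25 GB after de-duplication and compression):

```python
# Storage saving from ILIO de-duplication and compression, per the text.
def storage_saving(raw_gb: float, used_gb: float) -> float:
    """Fraction of raw capacity saved."""
    return 1 - used_gb / raw_gb

raw = 4 * 19                       # four VMs of a 19 GB Windows Server image
saving = storage_saving(raw, 25)   # about 0.67, i.e. a 67% saving
```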
As a result of the use of ILIO Persistent VDI, the only read I/O operations that are needed are those to fill the
cache for the first time. For all practical purposes, the remaining reads are few and at most 1 IOPS per VM.
Writes to persistent storage are still needed for booting, logging in, remaining in steady state, and logging off,
but the overall IOPS count is substantially reduced.
Use local HDDs in compute servers and use shared storage only when necessary
The Citrix XenDesktop VDI edition has fewer features than the other XenDesktop versions and might be
sufficient for the customer SMB environment. For more information and a comparison, see this website:
citrix.com/go/products/xendesktop/feature-matrix.html
Citrix XenServer has no additional license cost and is an alternative to other hypervisors. The performance measurements in this section show XenServer and ESXi.
Providing that there is some kind of HA for the management server VMs, the number of compute servers that
are needed can be reduced at the cost of less user density. There is a cross-over point on the number of users
where it makes sense to have dedicated compute servers for the management VMs. That cross-over point
varies by customer, but often it is in the range of 300 - 600 users. A good assumption is a reduction in user
density of 20% for the management VMs; for example, 125 users reduces to 100 per compute server.
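The 20% assumption can be captured as a one-liner; the example numbers match the 125-to-100 reduction quoted above:

```python
# Reduced compute-server density when management VMs share the server.
def reduced_density(users_per_server: int, mgmt_overhead: float = 0.2) -> int:
    """User density after reserving capacity for management VMs."""
    return int(users_per_server * (1 - mgmt_overhead))

reduced = reduced_density(125)   # 100 users per compute server
```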
Shared storage is expensive. Some shared storage is needed to ensure user recovery if there is a failure, and the IBM Storwize V3700 with an iSCSI connection is recommended. Dedicated virtual desktops must be on shared storage so that they can be recovered if a server fails, but stateless virtual desktops can be provisioned to HDDs on the local server. Only the user data and profile information must be on shared storage.
4.5.1 Intel Xeon E5-2600 v2 processor family servers with local storage
The performance measurements and recommendations in this section are for the use of stateless virtual machines with local storage on the compute server. Persistent virtual desktops are not covered here because persistent users must use shared storage for resilience; for more information, see Compute servers for virtual desktops on page 13. Note that these measurements are currently for the E5-2600 v2 processor using Login VSI 3.7.
XenServer 6.5 formats a local datastore by using the LVMoHBA file system. As a consequence XenServer 6.5
supports only thick provisioning and not thin provisioning. This fact means that VMs that are created with MCS
are large and take up too much local disk space. Instead, only PVS was used to provision stateless virtual
desktops.
Table 24 shows the processor performance results for the Xeon E5-2650v2 and Xeon E5-2690v2 series of
processors by using stateless virtual machines on local HDDs with XenServer 6.5. This configuration used twelve 300 GB 15k rpm HDDs in a RAID 10 array. Measurements showed that eight HDDs were barely sufficient for the required IOPS and capacity. Table 24 also shows the performance with SSDs to compare against the overhead of local HDD usage.
Table 24: Performance results for stateless virtual desktops using the E5-2600 v2 processors
Processor      Workload  Local HDDs  Local SSDs
Two E5-2650v2  Medium    168 users   215 users
Two E5-2690v2  Medium    221 users   268 users
Two E5-2690v2  Heavy     192 users   213 users
Because an SMB environment is used with a range of user sizes from less than 75 to 600, the following
configurations are recommended:
For small user counts, each server can support at most 75 users by using two E5-2650v2 processors, 256 GB of memory, and 8 HDDs. The extra memory is needed for the management server VMs.
For average user counts, each server supports 125 users at most by using two E5-2650v2 processors,
256 GB of memory, and 12 HDDs.
For heavy users the E5-2690v2 processor can be used with 384 GB of memory.
These configurations need two HDDs for XenServer or a USB key for ESXi.
Table 25 shows the number of compute servers that are needed for different numbers of medium users.
Table 25: SMB Compute servers needed for different numbers of users and VM sizes
75 users
150 users
250 users
500 users
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
Yes
Yes
No
No
For more information, see BOM for SMB compute servers on page 43.
4.6.1 Intel Xeon E5-2600 v3 processor family servers with Atlantis USX
Atlantis USX is tested by using the knowledge worker workload of Login VSI 4.1. Four Lenovo x3650 M5
servers with E5-2680v3 processors were networked together by using a 10 GbE TOR switch. Atlantis USX was
installed and four 400 GB SSDs per server were used to create an all-flash hyper-converged volume across
the four servers that were running ESXi 5.5 U2.
This configuration was tested with 500 dedicated virtual desktops on four servers and then three servers to see
the difference if one server is unavailable. Table 26 lists the processor usage for the recommended number of
users.
Table 26: Processor usage for Atlantis USX
Processor       Workload          Servers  Utilization
Two E5-2680 v3  Knowledge worker  4        66%
Two E5-2680 v3  Knowledge worker  3        89%
From these measurements, Lenovo recommends 125 users per server in normal mode and 150 users per server in failover mode. Lenovo recommends a general failover ratio of 5:1.
Table 27 lists the recommended number of virtual desktops per server for different VM memory sizes.
Table 27: Recommended number of virtual desktops per server for Atlantis USX
Processor   VM memory size  System memory  Reserved memory  Desktop memory  Normal mode  Failover mode
E5-2680 v3  2 GB (default)  384 GB         63 GB            321 GB          125          150
E5-2680 v3  3 GB            512 GB         63 GB            449 GB          125          150
E5-2680 v3  4 GB            768 GB         63 GB            705 GB          125          150
Table 28 lists the approximate number of compute servers that are needed for different numbers of users and
VM sizes.
Table 28: Compute servers needed for different numbers of users for Atlantis USX
                                  300 users  600 users  1500 users  3000 users
Compute servers (125 users each)  4          5          12          24
Compute servers (150 users each)  3          4          10          20
Failover ratio                    3:1        4:1        5:1         5:1
An important attribute of a hyper-converged system is its resiliency when a compute server becomes unavailable. To test this, Login VSI was run and one compute server was powered off during the steady state phase. As a result, 114 - 120 VMs were migrated from the failed server to the other three servers, with each server gaining 38 - 40 VMs.
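The 38 - 40 per-server figure is just the failed server's VMs spread evenly over the three survivors; a minimal sketch of that redistribution:

```python
# Evenly redistribute VMs from a failed host across the surviving hosts.
def redistribute(vm_count: int, survivors: int) -> list[int]:
    base, extra = divmod(vm_count, survivors)
    return [base + 1 if i < extra else base for i in range(survivors)]

# 114 - 120 migrated VMs over three servers -> 38 - 40 VMs gained each.
low = redistribute(114, 3)    # [38, 38, 38]
high = redistribute(120, 3)   # [40, 40, 40]
```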
Figure 7 shows the processor usage for the four servers during the steady state phase when one of the servers
is powered off. The processor spike for the three remaining servers is noticeable.
There is an impact on performance and a time lag if a hyper-converged server suffers a catastrophic failure, yet the system can recover quite quickly. However, this situation is best avoided, as it is important to build in redundancy at multiple levels for all mission-critical systems.
Two modes of graphics acceleration are available:
Dedicated GPU with one GPU per user, which is called pass-through mode.
GPU hardware virtualization (vGPU), which partitions each GPU for 1 to 8 users.
The performance of graphics acceleration was tested on the NVIDIA GRID K1 and GRID K2 adapters by using
the Lenovo System x3650 M5 server and the Lenovo NeXtScale nx360 M5 server. Each of these servers
supports up to two GRID adapters. No significant performance differences were found between these two
servers when they were used for graphics acceleration and the results apply to both.
Because pass-through mode offers a low user density (eight for GRID K1 and four for GRID K2), it is
recommended that this mode is only used for power users, designers, engineers, or scientists that require
powerful graphics acceleration.
Lenovo recommends that a high-powered CPU, such as the E5-2680v3, is used for vDGA and vGPU because
accelerated graphics tends to put an extra load on the processor. For the vDGA option, with only four or eight
users per server, 128 GB of server memory should be sufficient even for the high end GRID K2 users who
might need 16 GB or even 24 GB per VM.
The Heaven benchmark is used to measure the per user frame rate for different GPUs, resolutions, and image
quality. This benchmark is graphics-heavy and is fairly realistic for designers and engineers. Power users or
knowledge workers usually have less intense graphics workloads and can achieve higher frame rates.
Table 29 lists the results of the Heaven benchmark as frames per second (FPS) that are available to each user
with the GRID K1 adapter by using pass-through mode with DirectX 11.
Table 29: Performance of GRID K1 pass-through mode by using DirectX 11
Quality  Tessellation  Anti-Aliasing  Resolution  K1
High     Normal                       1024x768    14.3
High     Normal                       1280x768    12.3
High     Normal                       1280x1024   11.4
High     Normal                       1680x1050   7.8
High     Normal                       1920x1200   5.5
Table 30 lists the results of the Heaven benchmark as FPS that are available to each user with the GRID K2
adapter by using pass-through mode with DirectX 11.
Table 30: Performance of GRID K2 pass-through mode by using DirectX 11
Quality  Tessellation  Anti-Aliasing  Resolution  K2
High     Normal                       1680x1050   52.6
Ultra    Extreme                      1680x1050   29.2
Ultra    Extreme                      1920x1080   25.9
Ultra    Extreme                      1920x1200   23.9
Ultra    Extreme                      2560x1600   14.8
Table 31 lists the results of the Heaven benchmark as FPS that are available to each user with the GRID K1
adapter by using vGPU mode with DirectX 11. The K180Q profile has similar performance to the K1
pass-through mode.
Table 31: Performance of GRID K1 vGPU modes by using DirectX 11
Quality  Tessellation  Anti-Aliasing  Resolution  K180Q  K160Q  K140Q
High     Normal                       1024x768    14.3   7.8    4.4
High     Normal                       1280x768    12.3   6.7    3.7
High     Normal                       1280x1024   11.4   5.5    3.1
High     Normal                       1680x1050   7.8    4.3    N/A
High     Normal                       1920x1200   5.5    3.5    N/A
Table 32 lists the results of the Heaven benchmark as FPS that are available to each user with the GRID K2
adapter by using vGPU mode with DirectX 11. The K280Q profile has similar performance to the K2
pass-through mode.
Table 32: Performance of GRID K2 vGPU modes by using DirectX 11
Quality  Tessellation  Anti-Aliasing  Resolution  K280Q  K260Q  K240Q
High     Normal                       1680x1050   52.6   26.3   13.3
High     Normal                       1920x1200   N/A    20.3   10.0
Ultra    Extreme                      1680x1050   29.2   N/A    N/A
Ultra    Extreme                      1920x1080   25.9   N/A    N/A
Ultra    Extreme                      1920x1200   23.9   11.5   N/A
Ultra    Extreme                      2560x1600   14.8   N/A    N/A
The GRID K2 GPU has more than twice the performance of the GRID K1 GPU, even with the high quality,
tessellation, and anti-aliasing options. This result is expected because of the relative performance
characteristics of the GRID K1 and GRID K2 GPUs. The frame rate decreases as the display resolution
increases.
Because there are many variables when graphics acceleration is used, Lenovo recommends that testing is
done in the customer environment to verify the performance for the required user workloads.
For more information about the bill of materials (BOM) for GRID K1 and K2 GPUs for Lenovo System x3650
M5 and NeXtScale nx360 M5 servers, see the following corresponding BOMs:
Table 33: Characteristics of the management VMs
Management service VM    Virtual processors  System memory  Storage                              Windows OS  HA needed  Performance characteristic
Delivery controller                          4 GB           15 GB                                2008 R2     Yes
Web Interface                                4 GB           15 GB                                2008 R2     Yes
Citrix licensing server                      4 GB           15 GB                                2008 R2     No
XenDesktop SQL server                        4 GB           15 GB                                2008 R2     Yes
PVS servers                                  32 GB          40 GB (depends on number of images)  2008 R2     Yes        Up to 1000 desktops; memory should be a minimum of 2 GB plus
vCenter server                               4 GB           15 GB                                2008 R2     No         Up to 2000 desktops
vCenter SQL server                           4 GB           15 GB                                2008 R2     Yes
PVS servers are often run natively on Windows servers. The testing showed that they can run well inside a VM, if it is sized per Table 33. The disk space for PVS servers is related to the number of provisioned images.
Table 34 lists the number of management VMs for each user count, following the high-availability and performance characteristics listed in Table 33. The number of vCenter servers is half of the number of vCenter clusters because each vCenter server can handle two clusters of up to 1000 desktops.
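The rule above (clusters of up to 1000 desktops, two clusters per vCenter server) can be sketched as:

```python
import math

# Number of vCenter servers per the rule in the text: each vCenter server
# handles two clusters, and each cluster holds up to 1000 desktops.
def vcenter_servers(desktops: int,
                    desktops_per_cluster: int = 1000,
                    clusters_per_vcenter: int = 2) -> int:
    clusters = math.ceil(desktops / desktops_per_cluster)
    return math.ceil(clusters / clusters_per_vcenter)

count = vcenter_servers(2000)   # 2 clusters -> 1 vCenter server
```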
Table 34: Management VMs needed for different numbers of users
                          600 users  1500 users  4500 users  10000 users
Delivery Controllers      2 (1+1)    2 (1+1)     2 (1+1)     2 (1+1)
Web Interface             N/A        2 (1+1)     2 (1+1)     2 (1+1)
Citrix licensing servers  N/A        2 (1+1)     2 (1+1)     2 (1+1)
XenDesktop SQL servers    2 (1+1)
PVS servers               2 (1+1)    4 (2+2)     8 (6+2)     14 (10+4)

                          600 users  1500 users  4500 users  10000 users
vCenter servers           2 (1+1)    2 (1+1)     2 (1+1)     2 (1+1)
Each management VM requires a certain amount of virtual processors, memory, and disk. There is enough
capacity in the management servers for all of these VMs. Table 35 lists an example mapping of the
management VMs to the four physical management servers for 4500 users.
Table 35: Management server VM mapping (4500 users)
Management service for 4500 stateless users  Management server 1  Management server 2  Management server 3  Management server 4
                                             1                    1                    1                    2
It is assumed that common services, such as Microsoft Active Directory, Dynamic Host Configuration Protocol
(DHCP), domain name server (DNS), and Microsoft licensing servers exist in the customer environment.
For shared storage systems that support block data transfers only, it is also necessary to provide some file I/O
servers that support CIFS or NFS shares and translate file requests to the block storage system. For high
availability, two or more Windows storage servers are clustered.
Based on the number and type of desktops, Table 36 lists the recommended number of physical management
servers. In all cases, there is redundancy in the physical management servers and the management VMs.
600 users
1500 users
4500 users
10000 users
For more information, see BOM for enterprise and SMB management servers on page 56.
Table 37: Stateless virtual desktop shared storage performance requirements
Stateless virtual desktops   Protocol      Size    IOPS  Write %
User AppData folder          NFS or Block
User files                   CIFS/NFS      5 GB          75%
User profile (through MSRP)  CIFS          100 MB  0.8   75%
Table 38 summarizes the peak IOPS and disk space requirements for dedicated or shared stateless virtual
desktops on a per-user basis. Persistent virtual desktops require a high number of IOPS and a large amount of
disk space. Stateless users that require mobility and have no local SSDs also fall into this category. The last
three rows of Table 38 are the same as Table 37 for stateless desktops.
Table 38: Dedicated or shared stateless virtual desktop shared storage performance requirements
Dedicated virtual desktops   Protocol      Size    IOPS  Write %
Master image                 NFS or Block  30 GB
Difference disks             NFS or Block  10 GB   18    85%
User AppData folder          NFS or Block
User files                   CIFS/NFS      5 GB          75%
User profile (through MSRP)  CIFS          100 MB  0.8   75%
The sizes and IOPS for user data files and user profiles that are listed in Table 37 and Table 38 can vary
depending on the customer environment. For example, power users might require 10 GB and five IOPS for
user files because of the applications they use. It is assumed that 100% of the users at peak load times require
concurrent access to user data files and profiles.
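These per-user assumptions aggregate linearly; a small sketch (the 2 IOPS default mirrors the average used elsewhere in this reference architecture, and the 10 GB / 5 IOPS power-user values are the example quoted above):

```python
# Aggregate shared-storage capacity and peak IOPS for user files/profiles.
# Assumes 100% of users need concurrent access at peak, per the text.
def user_storage_totals(users: int,
                        gb_per_user: float = 5.0,
                        iops_per_user: float = 2.0) -> tuple[float, float]:
    return users * gb_per_user, users * iops_per_user

capacity_gb, peak_iops = user_storage_totals(600)           # 3000 GB, 1200 IOPS
power_gb, power_iops = user_storage_totals(600, 10.0, 5.0)  # 6000 GB, 3000 IOPS
```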
Many customers need a hybrid environment of stateless and dedicated desktops for their users. The IOPS for
dedicated users outweigh those for stateless users; therefore, it is best to bias towards dedicated users in any
storage controller configuration.
The storage configurations that are presented in this section include conservative assumptions about the VM
size, changes to the VM, and user data sizes to ensure that the configurations can cope with the most
demanding user scenarios.
This reference architecture describes the following different shared storage solutions:
Block I/O to IBM Storwize V7000 / Storwize V3700 storage using Fibre Channel (FC)
Block I/O to IBM Storwize V7000 / Storwize V3700 storage using FC over Ethernet (FCoE)
Block I/O to IBM Storwize V7000 / Storwize V3700 storage using Internet Small Computer System
Interface (iSCSI)
Block I/O to IBM FlashSystem 840 with Atlantis ILIO storage acceleration
With IBM Easy Tier, frequently accessed data can be transparently stored on SSDs. There is a noticeable improvement in performance when Easy Tier is used, which tends to tail off as more SSDs are added. It is recommended that approximately 10% of the storage space is on SSDs to give the best balance between price and performance.
The tiered storage support of Storwize storage also allows a mixture of different disk drives. Slower drives can
be used for shared folders and profiles; faster drives and SSDs can be used for persistent virtual desktops and
desktop images.
To support file I/O (CIFS and NFS) into Storwize storage, Windows storage servers must be added, as
described in Management servers on page 29.
The fastest HDDs that are available for Storwize storage are 15k rpm drives in a RAID 10 array. Storage
performance can be significantly improved with the use of Easy Tier. If this performance is insufficient, SSDs or
alternatives (such as a flash storage system) are required.
For this reference architecture, it is assumed that each user has 5 GB for shared folders and profile data and
uses an average of 2 IOPS to access those files. Investigation into the performance shows that 600 GB
10k rpm drives in a RAID 10 array give the best ratio of input/output operation performance to disk space. If
users need more than 5 GB for shared folders and profile data then 900 GB (or even 1.2 TB), 10k rpm drives
can be used instead of 600 GB. If less capacity is needed, the 300 GB 15k rpm drives can be used for shared
folders and profile data.
Persistent virtual desktops require both a high number of IOPS and a large amount of disk space for the linked clones. The linked clones can also grow in size over time. For persistent desktops, 300 GB 15k rpm drives
configured as RAID 10 were not sufficient and extra drives were required to achieve the necessary
performance. Therefore, it is recommended to use a mixture of both speeds of drives for persistent desktops
and shared folders and profile data.
Depending on the number of master images, one or more RAID 1 arrays of SSDs can be used to store the VM master images. This configuration helps with the performance of provisioning virtual desktops; that is, a boot storm. Each master image requires at least double its space. The actual number of SSDs in the array depends on the number and size of images. In general, more users require more images.
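Applying the rule above (each 30 GB master image needs at least double its size on the SSD array) gives the capacities used for sizing; a quick sketch:

```python
# SSD space needed for master images: at least double each image's size.
def image_ssd_space_gb(image_count: int,
                       image_size_gb: int = 30,
                       factor: int = 2) -> int:
    return image_count * image_size_gb * factor

small = image_ssd_space_gb(2)    # 120 GB
large = image_ssd_space_gb(16)   # 960 GB
```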
Table 39: VM images and SSDs
                   600 users   1500 users  4500 users             10000 users
Image size         30 GB       30 GB       30 GB                  30 GB
Number of images   2           4           8                      16
Space required     120 GB      240 GB      480 GB                 960 GB
SSD configuration  RAID 1 (2)  RAID 1 (2)  Two RAID 1 arrays (4)  Four RAID 1 arrays (8)
arrays (8)
Table 40 lists the Storwize storage configuration that is needed for each of the stateless user counts. Only one
Storwize control enclosure is needed for a range of user counts. Based on the assumptions in Table 40, the
IBM Storwize V3700 storage system can support at most 7000 users.
600 users
1500 users
4500 users
10000 users
12
28
80
168
12
Table 41 lists the Storwize storage configuration that is needed for each of the dedicated or shared stateless
user counts. The top four rows of Table 41 are the same as for stateless desktops. Lenovo recommends
clustering the IBM Storwize V7000 storage system and the use of a separate control enclosure for every 2500
or so dedicated virtual desktops. For the 4500 and 10000 user solutions, the drives are divided equally across
all of the controllers. Based on the assumptions in Table 41, the IBM Storwize V3700 storage system can
support up to 1200 users.
Table 41: Storwize storage configuration for dedicated or shared stateless users
Dedicated or shared stateless storage
600 users
1500 users
4500 users
10000 users
12
28
80
168
12
40
104
304
672
12
12
32
64
16 (2 x 8)
36 (4 x 9)
desktops
Refer to the BOM for shared storage on page 60 for more details.
It is not recommended to use this device for smaller user counts because it is not cost-efficient.
Persistent virtual desktops require the most storage space and are the best candidate for this storage device.
The device also can be used for user folders, snap clones, and image management, although these items can
be placed on other slower shared storage.
The amount of required storage for persistent virtual desktops varies and depends on the environment. Table
42 is provided for guidance purposes only.
Table 42: FlashSystem 840 storage configuration for dedicated users with Atlantis ILIO VM
Dedicated storage
1000 users
3000 users
5000 users
10000 users
2 TB flash module
12
4 TB flash module
12
Capacity
4 TB
12 TB
20 TB
40 TB
Refer to the BOM for OEM storage hardware on page 64 for more details.
4.10 Networking
The main driver for the type of networking that is needed for VDI is the connection to shared storage. If the
shared storage is block-based (such as the IBM Storwize V7000), it is likely that a SAN that is based on 8 or
16 Gbps FC, 10 GbE FCoE, or 10 GbE iSCSI connection is needed. Other types of storage can be network
attached by using 1 Gb or 10 Gb Ethernet.
Also, there are user and management virtual local area networks (VLANs) that require 1 Gb or 10 Gb Ethernet, as described in the Lenovo Client Virtualization reference architecture, which is available at this website: lenovopress.com/tips1275.
Automated failover and redundancy of the entire network infrastructure and shared storage is important. This
failover and redundancy is achieved by having at least two of everything and ensuring that there are dual paths
between the compute servers, management servers, and shared storage.
If only a single Flex System Enterprise Chassis is used, the chassis switches are sufficient and no other TOR switch is needed. For rack servers or more than one Flex System Enterprise Chassis, TOR switches are required.
For more information, see BOM for networking on page 62.
600 users
1500 users
4500 users
10000 users
Table 47 shows the number of 1 GbE connections that are needed for the administration network and switches for each type of device, including the TOR switches. The total number of connections is the sum of the device counts multiplied by the number of connections for each device.
4.11 Racks
The number of racks and chassis for Flex System compute nodes depends upon the precise configuration that
is supported and the total height of all of the component parts: servers, storage, networking switches, and Flex
System Enterprise Chassis (if applicable). The number of racks for System x servers is also dependent on the
total height of all of the components. For more information, see the BOM for racks section on page 63.
4.12.1 Deployment example 1: Flex Solution with single Flex System chassis
As shown in Table 48, this example is for 1250 stateless users that are using a single Flex System chassis.
There are 10 compute nodes supporting 125 users in normal mode and 156 users in the failover case of up to
two nodes not being available. The IBM Storwize V7000 storage is connected by using FC directly to the Flex
System chassis.
Table 48: Deployment configuration for 1250 stateless users with Flex System x240 Compute Nodes
Stateless virtual desktop
1250 users
10
40
Compute
Compute
Compute
Compute
Compute
Compute
Compute
Compute
Compute
Manage
Manage
Total height
14U
WSS
WSS
4500 users
Compute servers                 36
Management servers              4
V7000 Storwize controller       1
V7000 Storwize expansion        3
Flex System EN4093R switches    6
Flex System FC3171 switches     6
Flex System Enterprise Chassis  3
10 GbE network switches         2 x G8264R
1 GbE network switches          2 x G8052
SAN network switches            2 x SAN24B-5
Total height                    44U
Number of racks                 2
Figure 8 shows the deployment diagram for this configuration. The first rack contains the compute and
management servers and the second rack contains the shared storage.
[Figure 8 detail: management VM placement across servers M1 to M4 (vCenter servers, vCenter and Desktop SQL servers, Desktop controller, License server, Web server, and PVS server pairs); TOR switches 2 x G8052, 2 x SAN24B-5, and 2 x G8124; each compute server (Cxx) hosts 125 user VMs.]
Figure 8: Deployment diagram for 4500 stateless users using Storwize V7000 shared storage
Figure 9 shows the 10 GbE and Fibre Channel networking that is required to connect the three Flex System
Enterprise Chassis to the Storwize V7000 shared storage. The detail is shown for one chassis in the middle
and abbreviated for the other two chassis. The 1 GbE management infrastructure network is not shown for the
purpose of clarity.
Redundant 10 GbE networking is provided at the chassis level with two EN4093R switches and at the rack
level by using two G8264R TOR switches. Redundant SAN networking is also used with two FC3171 switches
and two top of rack SAN24B-5 switches. The two controllers in the Storwize V7000 are redundantly connected
to each of the SAN24B-5 switches.
Figure 9: Network diagram for 4500 stateless users using Storwize V7000 shared storage
4.12.3 Deployment example 3: System x server with Storwize V7000 and FCoE
This deployment example is derived from an actual customer deployment with 3000 users, 90% of which are
stateless and need a 2 GB VM. The remaining 10% (300 users) need a dedicated VM of 3 GB. Therefore, the
average VM size is 2.1 GB.
Assuming 125 users per server in the normal case and 150 users in the failover case, then 3000 users need 24
compute servers. A maximum of four compute servers can be down for a 5:1 failover ratio. Each compute
server needs at least 315 GB of RAM (150 x 2.1), not including the hypervisor. This figure is rounded up to
384 GB, which should be more than enough and can cope with up to 125 users, all with 3 GB VMs.
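The sizing arithmetic in this example can be checked directly:

```python
import math

# Deployment example 3 sizing, following the text's arithmetic.
avg_vm_gb = round(0.9 * 2 + 0.1 * 3, 1)   # 90% at 2 GB + 10% at 3 GB = 2.1 GB
compute_servers = math.ceil(3000 / 125)   # 125 users/server normal -> 24 servers
failover_ram_gb = round(150 * avg_vm_gb)  # 150 users/server failover -> 315 GB
```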
Each compute server is a System x3550 server with two Xeon E5-2650v2 series processors, 24 x 16 GB of 1866 MHz RAM, an embedded dual port 10 GbE virtual fabric adapter (A4MC), and a license for FCoE/iSCSI
(A2TE). For interchangeability between the servers, all of them have a RAID controller with 1 GB flash upgrade
and two S3700 400 GB MLC enterprise SSDs that are configured as RAID 0 for the stateless VMs.
In addition, there are three management servers. For interchangeability in case of server failure, these extra
servers are configured in the same way as the compute servers. All the servers have a USB key with ESXi.
There also are two Windows storage servers that are configured differently with HDDs in RAID 1 array for the
operating system. Some spare, preloaded drives are kept to quickly deploy a replacement Windows storage
server if one should fail. The replacement server can be one of the compute servers. The idea is to quickly get
a replacement online if the second one fails. Although there is a low likelihood of this situation occurring, it
reduces the window of failure for the two critical Windows storage servers.
All of the servers communicate with the Storwize V7000 shared storage by using FCoE through two TOR
RackSwitch G8264CS 10GbE converged switches. All 10 GbE and FC connections are configured to be fully
redundant. As an alternative, iSCSI with G8264 10GbE switches can be used.
For 300 persistent users and 2700 stateless users, a mixture of disk configurations is needed. All of the users
require space for user folders and profile data. Stateless users need space for master images and persistent
users need space for the virtual clones. Stateless users have local SSDs to cache everything else, which
substantially decreases the amount of shared storage. For stateless servers with local SSDs, maintenance requires logging off all of the users and then taking the server offline, rather than using vMotion. If a server crashes, this issue is immaterial.
It is estimated that this configuration requires 96 IBM Storwize V7000 drives, which fit into one Storwize V7000 control enclosure and three expansion enclosures.
Figure 10 shows the deployment configuration for this example in a single rack. Because the rack has 36 items, it should have the capability for six power distribution units for 1+1 power redundancy, where each PDU has 12 C13 sockets.
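The PDU count follows from the socket arithmetic (36 devices, 12 C13 sockets per PDU, two feeds per device for 1+1 redundancy):

```python
import math

# PDUs needed for 1+1 power redundancy: every device gets two power feeds,
# and each PDU provides 12 C13 sockets.
def pdus_needed(devices: int, sockets_per_pdu: int = 12, feeds: int = 2) -> int:
    return math.ceil(devices / sockets_per_pdu) * feeds

count = pdus_needed(36)   # 3 PDUs per feed x 2 feeds = 6 PDUs
```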
Figure 10: Deployment configuration for 3000 stateless users with System x servers
Description
9532AC1
A5SX
A5TE
A5RM
A5S0
A5SG
Quantity
1
1
1
1
1
1
1
5978
Select Storage devices - Lenovo-configured RAID
7860
Integrated Solid State Striping
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
Select amount of system memory
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5TJ
Lenovo SD Media Adapter for System x
ASCH
RAID Adapter for SD Media w/ VMware ESXi 5.5 U2 (1 SD Media)
A4TR
300GB 15K 6Gbps SAS 2.5" G3HS HDD
1
1
2
1
1
1
24
16
24
1
1
2
System x3550 M5
Code
Description
5463AC1
A5BL
A5C0
A58X
A59V
A5AG
A5AX
System x3550 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3550 M5 8x 2.5" Base Chassis
System x3550 M5 Planar
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x Advanced LCD Light path Kit
6311
A1ML
A5AB
A5AK
A59F
A59W
A3YZ
A3Z2
A4TR
A5UT
A5AF
A5AG
Quantity
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
2
1
1
1
1
1
1
2
1
2
1
1
1
1
2
24
16
24
1
2
System x3650 M5
Code
Description
5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5
System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System Documentation and Software-US English
x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
A5AX
6311
A1ML
A5FY
A5G3
A4VH
A5EY
A5G6
A3YZ
A3Z2
A4TR
300GB 15K 6Gbps SAS 2.5" G3HS HDD
A5UT
Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x
9297
2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)
A5UV
Brocade 8Gb FC Dual-port HBA for System x
3591
7595
2U Bracket for Brocade 8GB FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Select SSD storage for stateless virtual desktops
5978
Select Storage devices - Lenovo-configured RAID
2302
RAID configuration
A2K6
Primary Array - RAID 0 (2 drives required)
2499
Enable selection of Solid State Drives for Primary Array
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
Select amount of system memory
A5B7
16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7
32GB Enterprise Value USB Memory Key
A4TR
300GB 15K 6Gbps SAS 2.5" G3HS HDD
Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
2
1
1
1
1
1
2
1
2
1
1
1
1
2
24
16
24
1
2
NeXtScale nx360 M5
Code
Description
5465AC1
A5HH
A5J0
A5JU
A5JX
A1MK
A1ML
Quantity
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
2
24
16
1
2
Code      Quantity
5456HC1   1
A41D      1
A4MM      6
6201      6
A42S      1
A4AK      1
Description
5465AC1
A5HH
A5J0
A5JU
A4MB
A5JX
A1MK
Quantity
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
2
24
16
2
2
1
2
Description
Quantity
1
1
1
1
1
1
1
1
1
1
8
16
1
1
2
System x3550 M5
Code
Description
5463AC1
A5BL
A5C0
A58X
A59V
A5AG
A5AX
System x3550 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3550 M5 8x 2.5" Base Chassis
System x3550 M5 Planar
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x Advanced LCD Light path Kit
6311
A1ML
A5AB
A5AK
A59F
A59W
A3YZ
A3Z2
A5UT
A5AF
A5AG
Quantity
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
8
16
1
2
System x3650 M5
Code
Description
5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5
System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System Documentation and Software-US English
x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
A5AX
6311
A1ML
A5FY
A5G3
A4VH
A5EY
A5G6
A3YZ
A3Z2
A5UT
9297
Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
8
16
1
2
NeXtScale nx360 M5
Code
Description
5465AC1
A5HH
A5J0
A5JU
A5JX
A1MK
A1ML
Quantity
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
2
1
2
1
8
16
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5TJ
Lenovo SD Media Adapter for System x
ASBZ
300GB 15K 12Gbps SAS 2.5" 512e HDD for NeXtScale System
1
2
Code      Quantity
5456HC1   1
A41D      1
A4MM      6
6201      6
A42S      1
A4AK      1
System x3650 M5
Code
Description
5462AC1
A5GU
A5EM
A5FD
A5EA
A5FN
A5R5
A5AX
6311
A1ML
A5EY
A5GG
A3YZ
A3Z2
2302
A2KB
A4TR
A5UT
9297
System x3650 M5
Intel Xeon Processor E5-2650 v3 10C 2.3GHz 25MB 2133MHz 105W
Intel Xeon Processor E5-2650 v3 10C 2.3GHz 25MB 2133MHz 105W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System Documentation and Software-US English
x3650 M5 16x 2.5" HS HDD Assembly Kit (Dual RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
RAID Configuration
Primary Array - RAID 10 (minimum of 4 drives required)
Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
12
1
1
1
1
1
2
1
2
16
24
System x3550 M5
Code
Description
5463AC1
A5BL
A5C0
A58X
A59V
A5AG
A5AX
6311
A1ML
A5AB
A5AK
A59F
A59W
A59X
A3YZ
A3Z2
5977
A5UT
A5AF
A5AG
System x3550 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3550 M5 8x 2.5" Base Chassis
System x3550 M5 Planar
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x Advanced LCD Light path Kit
System x3550 M5 Slide Kit G4
System Documentation and Software-US English
System x3550 M5 4x 2.5" HS HDD Kit
System x3550 M5 4x 2.5" HS HDD Kit PLUS
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
Quantity
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
1
1
24
16
24
4
2
6
1
1
2
System x3650 M5
Code
Description
5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5
System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 900W High Efficiency Platinum AC Power Supply
2.8m, 13A/125-10A/250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System x Enterprise Slides Kit
System Documentation and Software-US English
x3650 M5 16x 2.5" HS HDD Assembly Kit (Dual RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
A5EW
6400
A1ML
A5FY
A5G3
A4VH
A5FV
A5EY
A5GG
A3YZ
A3Z2
5977
A5UT
9297
Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
2
2
1
1
1
24
16
24
4
2
14
1
1
2
Description
5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5
System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 900W High Efficiency Platinum AC Power Supply
2.8m, 13A/125-10A/250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System x Enterprise Slides Kit
System Documentation and Software-US English
x3650 M5 16x 2.5" HS HDD Assembly Kit (Dual RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
A5EW
6400
A1ML
A5FY
A5G3
A4VH
A5FV
A5EY
A5GG
A3YZ
A3Z2
5977
A5UT
9297
A5B9
32GB TruDDR4 Memory (4Rx4, 1.2V) PC4-17000 CL15 2133MHz LP LRDIMM
Select drive configuration for hyper-converged system (all flash or SSD/HDD combination)
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
A4U4
S3700 400GB SATA 2.5" MLC G3HS Enterprise SSD for System x
A4TP
1.2TB 10K 6Gbps SAS 2.5" G3HS HDD
2498
Install largest capacity, fastest drives starting in Array 1
Select GRID K1 or GRID K2 for graphics acceleration
AS3G
NVIDIA Grid K1 (Actively Cooled)
A470
NVIDIA Grid K2 (Actively Cooled)
Select flash memory for ESXi hypervisor or 2 drives (RAID 1) for other hypervisors
A5R7
32GB Enterprise Value USB Memory Key
A4TR
300GB 15K 6Gbps SAS 2.5" G3HS HDD
Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
2
2
1
1
1
12
4
2
14
1
2
2
1
2
Description
9532AC1
A5SX
A5TE
A5RM
A5S0
A5SG
5978
8039
A4TR
A5B8
A5RP
Quantity
1
1
1
1
1
1
1
1
1
8
1
1
1
1
System x3550 M5
Code
Description
5463AC1
A5BL
A5C0
A58X
A59V
A5AG
A5AX
System x3550 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3550 M5 8x 2.5" Base Chassis
System x3550 M5 Planar
System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x Advanced LCD Light path Kit
6311
A1ML
A5AB
A5AK
A59F
A59W
A3YZ
A3Z2
5978
A2K7
A4TR
A5B8
A5UT
A5AF
A5AG
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)
A5UV
Brocade 8Gb FC Dual-port HBA for System x
3591
7595
2U Bracket for Brocade 8GB FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Quantity
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
8
1
1
1
1
1
1
2
1
2
System x3650 M5
Code
Description
5462AC1
A5GW
A5EP
A5FD
A5EA
A5FN
A5R5
System x3650 M5
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
Intel Xeon Processor E5-2680 v3 12C 2.5GHz 30MB 2133MHz 120W
System x3650 M5 2.5" Base without Power Supply
System x3650 M5 Planar
System x3650 M5 PCIe Riser 1 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x3650 M5 PCIe Riser 2 (1 x16 FH/FL + 1 x8 FH/HL Slots)
System x 550W High Efficiency Platinum AC Power Supply
2.8m, 10A/100-250V, C13 to IEC 320-C14 Rack Power Cable
Lenovo Integrated Management Module Advanced Upgrade
System x3650 M5 2.5" ODD/LCD Light Path Bay
System x3650 M5 2.5" ODD Bezel with LCD Light Path
Lightpath LCD Op Panel
System Documentation and Software-US English
x3650 M5 8x 2.5" HS HDD Assembly Kit (Single RAID)
ServeRAID M5210 SAS/SATA Controller for System x
ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade
A5AX
6311
A1ML
A5FY
A5G3
A4VH
A5EY
A5G6
A3YZ
A3Z2
5978
A2K7
A4TR
A5B8
A5UT
Emulex VFA5 2x10 GbE SFP+ PCIe Adapter for System x
9297
2U Bracket for Emulex 10GbE Virtual Fabric Adapter for System x
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
Emulex VFA5 FCoE/iSCSI SW for PCIe Adapter for System x (FoD)
A5UV
Brocade 8Gb FC Dual-port HBA for System x
3591
7595
2U Bracket for Brocade 8GB FC Dual-port HBA for System x
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Quantity
1
1
1
1
1
1
1
2
2
1
1
1
1
1
1
1
1
1
1
1
8
1
1
1
1
1
2
1
2
NeXtScale nx360 M5
Code
Description
5465AC1
A5HH
A5J0
A5JU
A5JX
A1MK
A1ML
A5KD
A5JF
5978
A2K7
ASBZ
A5JZ
nx360 M5 RAID Riser
A5V2
nx360 M5 2.5" Rear Drive Cage
A5K3
nx360 M5 1x2, 2.5" 12G HDD short cable, HW RAID (stack-up)
A5B8
8GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM
A40Q
Emulex VFA5 ML2 Dual Port 10GbE SFP+ Adapter for System x
A5UX
nx360 M5 ML2 Bracket for Emulex VFA5 ML2 Dual Port 10GbE SFP+
A5JV
nx360 M5 ML2 Riser
Select extra network connectivity for FCoE or iSCSI, 8Gb FC, or 16 Gb FC
A4NZ
Emulex VFA5 ML2 FCoE/iSCSI License for System x (FoD)
Brocade 8Gb FC Dual-port HBA for System x
3591
88Y6854 5m LC-LC fiber cable (networking)
Brocade 16Gb FC Dual-port HBA for System x
A2XV
88Y6854 5m LC-LC fiber cable (networking)
Quantity
1
1
1
1
1
1
1
1
1
1
1
2
1
1
1
8
1
1
1
1
1
2
1
2
Code      Quantity
5456HC1   1
A41D      1
A4MM      6
6201      6
A42S      1
A4AK      1
Code      Description                                    Quantity
619524F   IBM Storwize V7000 Disk Expansion Enclosure    1
AHE1      300 GB 2.5-inch 15K RPM SAS HDD                20
AHE2      600 GB 2.5-inch 15K RPM SAS HDD                0
AHF9      600 GB 10K 2.5-inch HDD                        0
AHH2      400 GB 2.5-inch SSD (E-MLC)                    4
AS26      Power Cord PDU connection                      1
Code      Description                                    Quantity
6195524   IBM Storwize V7000 Disk Control Enclosure      1
AHE1      300 GB 2.5-inch 15K RPM SAS HDD                20
AHE2      600 GB 2.5-inch 15K RPM SAS HDD                0
AHF9      600 GB 10K 2.5-inch HDD                        0
AHH2      400 GB 2.5-inch SSD (E-MLC)                    4
AHCB      64 GB to 128 GB Cache Upgrade                  2
AS26      Power Cord PDU connection                      1
Select network connectivity of 10 GbE iSCSI or 8 Gb FC
AHB5      10Gb Ethernet 4 port Adapter Cards (Pair)      1
AHB1      8Gb 4 port FC Adapter Cards (Pair)             1
Code      Description                                    Quantity
609924C   IBM Storwize V3700 Disk Control Enclosure      1
ACLB      300 GB 2.5-inch 15K RPM SAS HDD                20
ACLC      600 GB 2.5-inch 15K RPM SAS HDD                0
ACLK      600 GB 10K 2.5-inch HDD                        0
ACME      400 GB 2.5-inch SSD (E-MLC)                    4
ACHB      Cache 8 GB                                     2
ACFA      Turbo Performance                              1
ACFN      Easy Tier                                      1
Select network connectivity of 10 GbE iSCSI or 8 Gb FC
ACHM      10Gb iSCSI - FCoE 2 Port Host Interface Card   2
ACHK      8Gb FC 4 Port Host Interface Card              2
ACHS      8Gb FC SW SFP Transceivers (Pair)              2
Code      Description                                    Quantity
609924E   IBM Storwize V3700 Disk Expansion Enclosure    1
ACLB      300 GB 2.5-inch 15K RPM SAS HDD                20
ACLC      600 GB 2.5-inch 15K RPM SAS HDD                0
ACLK      600 GB 10K 2.5-inch HDD                        0
ACME      400 GB 2.5-inch SSD (E-MLC)                    4
RackSwitch G8052
Code      Quantity
7309HC1   1
6201      2
3802      3
A3KP      1

RackSwitch G8124E
Code      Quantity
7309HC6   1
6201      2
3802      1
A1DK      1

RackSwitch G8264
Code      Quantity
7309HC3   1
6201      2
A3KP      1
5053      2
A1DP      1
A1DM      0

RackSwitch G8264CS
Code      Quantity
7309HCK   1
6201      2
A2ME      2
A1DK      1
A1DP      1
A1DM      0
5075      12
Code      Quantity
9363RC4   1
5897      6

System x rack
Code      Quantity
9363RC4   1
6012      6
Code      Description                                                       Quantity
8721HC1   IBM Flex System Enterprise Chassis Base Model                     1
A0TA      IBM Flex System Enterprise Chassis                                1
A0UC      IBM Flex System Enterprise Chassis 2500W Power Module Standard    2
6252      2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable          2
A1PH      IBM Flex System Enterprise Chassis 2500W Power Module             4
3803      2.5 m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable          4
A0UA      IBM Flex System Enterprise Chassis 80mm Fan Module                4
A0UE      IBM Flex System Chassis Management Module                         1
3803      3 m Blue Cat5e Cable                                              2
5053      IBM SFP+ SR Transceiver                                           2
A1DP      1 m IBM QSFP+ to QSFP+ Cable                                      2
A1PJ      3 m IBM Passive DAC SFP+ Cable                                    4
A1NF      IBM Flex System Console Breakout Cable                            1
5075      BladeCenter Chassis Configuration                                 4
6756ND0   Rack Installation >1U Component                                   1
675686H   IBM Fabric Manager Manufacturing Instruction                      1
Select network connectivity for 10 GbE, 10 GbE FCoE or iSCSI, 8 Gb FC, or 16 Gb FC
A3J6      IBM Flex System Fabric EN4093R 10Gb Scalable Switch               2
A3HH      IBM Flex System Fabric CN4093 10Gb Scalable Switch                2
5075      IBM 8 Gb SFP+ SW Optical Transceiver                              4
5605      Fiber Cable, 5 meter multimode LC-LC                              4
A0UD      IBM Flex System FC3171 8Gb SAN Switch                             2
5075      IBM 8 Gb SFP+ SW Optical Transceiver                              4
5605      Fiber Cable, 5 meter multimode LC-LC                              4
A3DP      IBM Flex System FC5022 16Gb SAN Scalable Switch                   2
A22R      Brocade 16Gb SFP+ SW Optical Transceiver                          4
5605      Fiber Cable, 5 meter multimode LC-LC                              4
Code      Description                                    Quantity
9840-AE1  IBM FlashSystem 840                            1
AF11      4TB eMLC Flash Module                          12
AF10      2TB eMLC Flash Module                          0
AF1B      1 TB eMLC Flash Module                         0
AF14      Encryption Enablement Pack                     1
Select network connectivity for 10 GbE iSCSI, 10 GbE FCoE, 8 Gb FC, or 16 Gb FC
AF17      iSCSI Host Interface Card                      2
AF1D      10 Gb iSCSI 8 Port Host Optics                 2
AF15      FC/FCoE Host Interface Card                    2
AF1D      10 Gb iSCSI 8 Port Host Optics                 2
AF15      FC/FCoE Host Interface Card                    2
AF18      8 Gb FC 8 Port Host Optics                     2
3701      5 m Fiber Cable (LC-LC)                        8
AF15      FC/FCoE Host Interface Card                    2
AF19      16 Gb FC 4 Port Host Optics                    2
3701      5 m Fiber Cable (LC-LC)                        4
Code      Description                                    Quantity
3873HC2   Brocade 6505 FC SAN Switch                     1
00MT457   Brocade 6505 12 Port Software License Pack     1
Select network connectivity for 8 Gb FC or 16 Gb FC
88Y6416   Brocade 8Gb SFP+ Optical Transceiver           24
88Y6393   Brocade 16Gb SFP+ Optical Transceiver          24
Code      Description                                    Quantity
3873HC3   Brocade 6510 FC SAN Switch                     1
00MT459   Brocade 6510 12 Port Software License Pack     2
Select network connectivity for 8 Gb FC or 16 Gb FC
88Y6416   Brocade 8Gb SFP+ Optical Transceiver           48
88Y6393   Brocade 16Gb SFP+ Optical Transceiver          48
Resources
For more information, see the following resources:
Citrix XenDesktop
citrix.com/products/xendesktop
Citrix XenServer
citrix.com/products/xenserver
VMware vSphere
vmware.com/products/datacenter-virtualization/vsphere
Document history
Version 1.0   30 Jan 2015
Version 1.1   5 May 2015