ABSTRACT
This guide provides an overview of best practices for EMC Data Domain Virtual Edition (DD VE).
July, 2017
WHITE PAPER
To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local
representative or authorized reseller, visit www.emc.com, or explore and compare products in the EMC Store
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with respect
to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
VMware, ESXi, and vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks
used herein are the property of their respective owners.
Contents
VIRTUAL ENVIRONMENT BEST PRACTICES FOR DD VE
STORAGE BEST PRACTICES FOR DD VE
NETWORKING BEST PRACTICES FOR DD VE
Virtual Environment Best Practices for DD VE
OVERVIEW
DD VE is hosted on a hypervisor. This document provides tips and information about how DD VE should be configured and used in a vSphere
hypervisor environment.
Table 1: Memory size for each DD VE capacity
Capacity (TiB)     4   8   16  32  48  64  96
Memory Size (GiB)  6   8   16  24  36  48  64
CPU CONFIGURATION
CPU settings other than reservation and limit must not be changed. The DD VE CPU configuration must meet the following criteria:
CPU RESERVATION
The total CPU reservation assigned must be 1500 MHz x the number of CPU cores: 3000 for a 2-CPU, 6000 for a 4-CPU, and 12000
for an 8-CPU DD VE configuration.
MEMORY CONFIGURATION
Memory settings other than memory size and reservation must not be changed.
DD VE requires the “Reserve all guest memory (All locked)” option to be checked.
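The CPU reservation rule and the per-capacity memory sizes from Table 1 can be captured in a small helper. This is an illustrative sketch only; it assumes the reservation values are expressed in MHz, which is the standard unit for vSphere CPU reservations:

```python
# Memory size (GiB) required for each supported DD VE capacity (TiB), per Table 1.
MEMORY_GIB_BY_CAPACITY_TIB = {4: 6, 8: 8, 16: 16, 32: 24, 48: 36, 64: 48, 96: 64}


def cpu_reservation_mhz(cpu_cores: int) -> int:
    """Total CPU reservation: 1500 MHz per core, so 3000 for 2 CPUs,
    6000 for 4 CPUs, and 12000 for 8 CPUs."""
    return 1500 * cpu_cores


def required_memory_gib(capacity_tib: int) -> int:
    """Memory size for a given DD VE capacity, looked up from Table 1."""
    return MEMORY_GIB_BY_CAPACITY_TIB[capacity_tib]
```

For example, an 8-CPU, 96 TiB configuration needs a 12000 MHz reservation and 64 GiB of reserved memory.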
DD VE uses open-source VMware Tools (open-vm-tools) version 10.0.0. Upgrading VMware Tools is not supported; if you try to
upgrade it, you might see the following error.
POWER OPERATIONS
Use only Shut Down Guest and Restart Guest when needed. They are highlighted in green in the picture below.
The Power Off and Reset operations must be avoided, because they act like pulling the power cable on a bare-metal machine. Using them
risks partial data loss, because in-flight data may not get synced to disk. DD VE does not support the Suspend operation; using it
may cause unexpected results. As quick guidance, the operations highlighted in the red box in the picture must be avoided.
TIME MANAGEMENT
There are four possible sources of time in DD VE.
4. DD VE syncing time to an Active Directory domain controller, if DD VE has joined Active Directory.
DD VE must use only one time source at any time. Multiple time sources would cause the DD VE clock to jump between them,
producing inconsistent file timestamps and access issues over the CIFS protocol.
o When you enable Active Directory authentication, the following operations must be performed in the order listed
below.
2. Disable “Time synchronization with host” for the DD VE virtual machine on the ESXi server: right-click the VM ->
Edit Settings -> Options -> VMware Tools -> uncheck “Synchronize guest time with host”.
To prevent snapshots of DD VE, the root disk and NVRAM disk (Hard disk 1 and Hard disk 2) are set to Independent - Persistent mode.
This prevents users from taking DD VE snapshots; a snapshot request fails with the following error.
Cannot take a memory snapshot, since the virtual machine is configured with independent disks.
PERFORMANCE COUNTER COLLECTION
Troubleshooting DD VE performance issues requires DD VE to collect performance counters from the hypervisor. Performance counter
collection is configured by setting vCenter credentials in DD VE using the vserver command lines. DD VE collects performance counters
every 5 minutes. Use the following CLIs to configure and start performance counter collection. Once troubleshooting is
complete, stop the performance counter collection.
The vCenter user must have at least read-only privileges on the data center object that contains the DD VE virtual machine's ESXi
server or cluster.
Set vCenter credential on DD VE: vserver config set host <vserver-host> [port <port-number>]
VAPP OPTIONS
1. Do not change default vApp options.
vCenter Server 4.x and vCenter Server 5.x support 1 virtual CPU per protected virtual machine. vCenter Server 6.0 supports up to 4
CPUs, depending on licensing.
VM SUPPORT BUNDLE
A VM support bundle may be needed to resolve DD VE issues related to the virtual infrastructure. The following link describes different
methods for collecting a VM support bundle on an ESXi server.
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=653
LICENSING
The DD VE license is node-locked, which means the same license cannot be used on multiple DD VE instances. The DD VE license may
become invalid in the following cases.
1. Do not remove the ethV0 interface; doing so might disable the license on the DD VE. See the DD VE Install and Admin guide for
adding and removing network interfaces.
2. When a clone of the DD VE virtual machine is created, a separate license needs to be applied to the cloned virtual machine.
3. Removing DD VE from inventory and adding it back may change the DD VE virtual machine attributes to which the license was tied,
resulting in a disabled license.
Storage Best Practices for DD VE
PHYSICAL STORAGE CONSIDERATIONS
To ensure data safety, it is strongly recommended that when disk caching is enabled, the disks be protected by a UPS (Uninterruptible
Power Supply); otherwise, an unexpected power failure can cause data loss. For the RAID-on-LUN
feature in DD VE to be effective, make sure that the stripe unit size of the storage array RAID is no larger than 128 KB.
RESOURCE—HARDWARE REQUIREMENT
When deploying a DD VE in a customer environment, make sure the storage hardware meets the minimum performance
requirements. Because the DDFS performance requirement scales with DDFS capacity, customers should size their storage hardware
based on DDFS capacity.
As noted in the section “Data protection considerations,” customers are recommended to configure their physical storage array with RAID
groups, so the performance requirement is effectively a requirement on the RAID group.
Table 2:
Capacity (TiB)  Random Read IOPS  Random Read IO Latency (ms)  Sequential Write Throughput (MiB/s)  NVRAM Write IOPS
4               160               14                           40                                   150
8               320               14                           80                                   300
When a customer plans to deploy a DD VE of a certain size, the storage must be assured to meet the minimum requirements shown in
the table above. The details differ depending on the underlying storage type:
1) DAS
In the case of DAS, local storage devices are used to create DDFS, so the local storage needs to meet the performance requirements above. As
indicated in the previous section, we recommend building a RAID on top of the raw disks to ensure data safety. Assuming the RAID
group is dedicated to DD VE, the RAID group should provide at least the above performance to DD VE. Hard disk
performance is usually predictable, so customers can estimate the storage hardware requirements.
Below is a suggested way of designing your RAID for performance. The write penalty and minimum number of disks for common RAID levels are listed
below:
Table 3:
RAID Type Write Penalty Minimum Number of Disks
RAID 1 2 2
RAID 10 2 4
RAID 5 4 3
RAID 6 6 4
First, calculate the minimum required backend IOPS for your system based on the required read and write IOPS:
backend IOPS = read IOPS + (write penalty x write IOPS)
Then calculate the minimum required number of disks based on the maximum per-disk performance:
disks = ceil(backend IOPS / per-disk IOPS)
The number of disks should also satisfy the minimum number of disks for the corresponding RAID level, and a spare disk is recommended for
redundancy. Therefore, the minimum disk count for your RAID is:
total disks = max(disks, RAID minimum) + 1 spare
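The disk-count calculation can be sketched in a few lines of Python, using the write penalties and minimum disk counts from Table 3. This is an illustrative estimate under the stated assumptions (RAID group dedicated to DD VE, predictable per-disk IOPS), not a definitive sizing tool:

```python
import math

# Write penalty and minimum disk count for common RAID levels (Table 3).
RAID = {"RAID1": (2, 2), "RAID10": (2, 4), "RAID5": (4, 3), "RAID6": (6, 4)}


def min_disks(read_iops, write_iops, per_disk_iops, raid_type):
    """Estimate the minimum number of disks for a RAID group.

    Backend IOPS = read IOPS + write penalty * write IOPS; this is divided
    by the per-disk IOPS and rounded up, raised to the RAID level's minimum
    disk count if needed, and one spare disk is added for redundancy.
    """
    penalty, raid_min = RAID[raid_type]
    backend_iops = read_iops + penalty * write_iops
    disks = math.ceil(backend_iops / per_disk_iops)
    return max(disks, raid_min) + 1  # +1 spare disk
```

For example, a workload of 320 read IOPS and 100 write IOPS on RAID 6 with 150-IOPS disks needs ceil((320 + 6*100)/150) = 7 data disks, plus one spare.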
2) NAS/SAN
In the case of NAS and SAN, customers need to make sure that the storage can meet the minimum performance requirements for DD VE (see
the previous table).
If an iSCSI datastore is used, hardware iSCSI is recommended. This ensures that minimal CPU resources are consumed by
iSCSI.
To isolate storage traffic from other networking traffic, it is considered best practice to use either dedicated switches or VLANs for your
NFS and iSCSI ESX server traffic. The minimum NIC speed should be 1 gigabit Ethernet. Hardware acceleration should be enabled for
NFS datastores for performance reasons.
Please refer to the VMware documentation for more information about NFS datastore best practices.
Data Domain strongly recommends that customers enable the vserver monitor feature in DD VE. Combined with other performance statistics in
DDOS, it makes triaging storage performance issues much easier.
DATASTORE CONSIDERATIONS
Data disks of DD VE can be carved out of both types of datastores. Sharing a datastore with other virtual machines risks
degrading DD VE performance, because all virtual machines residing on the same datastore share its I/O resources. A dedicated datastore for
DD VE is important for getting the best DD VE performance.
When shared storage is used, Storage I/O control (SIOC) can be used to resolve the issue of resource contention. SIOC provides I/O
prioritization for virtual machines running on a cluster of ESX servers that share a common pool of storage. During periods of high I/O
congestion, SIOC will engage to dynamically adjust the I/O queue slots on each ESX server accessing that shared resource to align the
available throughput to the prioritization of virtual machines running on the shared datastore. Customers should ensure that DD VE data disks get
enough shares (including bandwidth, IOPS, and latency) when configuring SIOC. Refer to the VMware documentation for more details.
When DD VE is used in a Storage DRS-enabled datastore cluster, it is recommended to use a VMDK anti-affinity rule. This
ensures that the virtual disks of a virtual machine with multiple virtual disks are placed on different datastores. For more details about
Storage DRS-related configuration, please refer to the VMware documentation.
VMFS DATASTORE
• Always create only one VMFS datastore for each LUN.
• Do NOT expand a VMFS datastore with a LUN from a different disk group.
For example, suppose datastore1 originally has only one extent, from LUN1, and a second extent from LUN2 expands datastore1 later. Disks carved from datastore1
in DD VE might then come from LUN1 or LUN2, but this is not deterministic. In this case, the customer cannot specify the correct spindle group number
for data disks. More details about the spindle group number of data disks are discussed in “Spindle Group of DD VE Data Disk”.
Figure 1:
NFS DATASTORE
• Always export NFS storage to the ESXi hosts so that it appears as a single, unique datastore on all ESXi hosts. An NFS datastore is identified by its server and
folder names. For example, if the server name is given as “nfssrv1” on one ESXi host and “nfssrv1.doman.com” on another, the two
hosts will see the same NFS volume as different datastores. In this case, it is very difficult to dedicate the NFS datastore to DD VE.
DD VE is a normal virtual machine and should work well through vMotion, Storage vMotion, or DRS procedures. Please refer to the
VMware documentation on vMotion, Storage vMotion, and DRS.
It is recommended to perform vMotion, Storage vMotion, and DRS operations on DD VE when the DDFS workload is low, to avoid
degrading DDFS performance.
Though the “spindle-group” parameter is optional, Data Domain strongly recommends that customers provide the right spindle-group number.
Correct spindle-group numbers for data disks take advantage of the load-balancing and capacity-balancing mechanisms built into DDOS
to efficiently spread the workload across data disks.
The following best practices for specifying spindle-group in DD VE are strongly recommended:
• Always dedicate an NFS volume to DD VE; other VMs should NOT use this NFS volume at the same time.
• Data disks from the same datastore must be set with the same spindle group number.
• Data disks from different datastores must be set with different spindle group numbers, unless the LUNs for these datastores are
created from the same RAID group.
• Never add multiple LUNs created from different RAID groups to one datastore. The spindle group number of data
disks from such a datastore cannot be set correctly, which might result in performance degradation.
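The spindle-group rules above can be expressed as a small consistency check. This is a sketch under the assumption that the administrator knows which datastore each data disk comes from and which RAID group backs each datastore; the function and dictionary names are hypothetical:

```python
def validate_spindle_groups(disk_to_group, disk_to_datastore, datastore_to_raid_group):
    """Check the spindle-group rules: disks from the same datastore must share
    one spindle-group number, and disks from different datastores must use
    different numbers unless their datastores' LUNs come from the same RAID group.
    Returns a list of (disk_a, disk_b, reason) violations."""
    errors = []
    for a in disk_to_group:
        for b in disk_to_group:
            if a >= b:  # check each unordered pair once
                continue
            same_ds = disk_to_datastore[a] == disk_to_datastore[b]
            same_raid = (datastore_to_raid_group[disk_to_datastore[a]]
                         == datastore_to_raid_group[disk_to_datastore[b]])
            same_group = disk_to_group[a] == disk_to_group[b]
            if same_ds and not same_group:
                errors.append((a, b, "same datastore, different spindle groups"))
            if not same_ds and same_group and not same_raid:
                errors.append((a, b, "same spindle group across unrelated datastores"))
    return errors
```

An empty result means the assignment follows the rules; each violation names the offending disk pair.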
1. Thick provision lazy zeroed
In this model, disk space is allocated when the disk is created, so the disk-adding process does not take long. Sectors are
initialized to 0 when they are first accessed, so there is a small performance impact the first time they are written.
2. Thick provision eager zeroed
In this model, disk space is allocated when the disk is created, and all space is initialized to 0 at creation time. The disk
creation process therefore takes a long time, but there is no performance impact when using the disk.
3. Thin provision
Storage is allocated and zeroed when the disk is accessed, with a significant performance impact.
Thick provision lazy zeroed is the recommended model: the performance impact is tiny and the disk creation time is relatively short.
DISK MODE
By default, the root disk and NVRAM disk are set to Independent - Persistent mode so that users cannot accidentally take a
snapshot or linked clone, since performance is impacted when a snapshot or linked clone is created.
However, users are free to choose the mode for their data disks; just keep in mind that performance will be impacted by
clone/snapshot operations.
To qualify storage for a particular DD VE capacity, the storage must meet the minimum IOPS, throughput, and latency numbers
shown in the performance requirement table (Table 2) above.
You can also use the command “disk benchmark show requirements” on your DD VE to display the minimum performance
requirements for both data and NVRAM disks, as shown below.
For help on running DAT test, type
sysadmin@localhost## disk benchmark help
sysadmin@localhost## disk benchmark show requirements
File System    Write Sequential    Read Random  Read Random   vNVRAM
Capacity (TiB) Throughput (MiB/s)  IOPS         Latency (ms)  Write IOPS
-------------- ------------------- ------------- -------------- ------------
4 40 160 14 150
8 80 320 14 300
16 160 640 14 600
32 320 1280 14 1200
48 480 1920 14 1800
64 640 2560 14 2400
96 960 3840 14 3600
-------------- ------------------- ------------- -------------- ------------
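The rows of the table scale linearly with capacity, so the requirement for any supported capacity can be derived from the 4 TiB row: 10 MiB/s of sequential-write throughput, 40 random-read IOPS, and 37.5 vNVRAM write IOPS per TiB, with a flat 14 ms latency ceiling. A sketch of that relationship, offered as a convenience rather than an official formula:

```python
def min_requirements(capacity_tib):
    """Per-capacity minimum storage requirements, derived from the linear
    scaling visible in the table above (each row is capacity * the 4 TiB
    per-TiB rates; random-read latency stays fixed at 14 ms)."""
    return {
        "write_throughput_mib_s": 10 * capacity_tib,
        "random_read_iops": 40 * capacity_tib,
        "random_read_latency_ms": 14,
        "vnvram_write_iops": int(37.5 * capacity_tib),
    }
```

For example, min_requirements(96) reproduces the last row of the table: 960 MiB/s, 3840 IOPS, 14 ms, and 3600 vNVRAM write IOPS.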
The DAT test can be performed with-vnvram or with no vnvram. If you are using the DD Boost protocol, run the DAT test with no vnvram.
By default, DAT runs with no vnvram, as shown in the example below on a 16 TB DDVE with 8 x 2 TB disks.
sysadmin@localhost## disk benchmark start dev[3-10]
This will take about 40 minutes to complete.
Are you sure? (yes|no) [no]: yes
ok, proceeding.
Benchmark test 1 was completed.
Devices: dev3+dev4+dev5+dev6+dev7+dev8+dev9+dev10
Start Time: 2017/06/23 14:08:36
Duration (hh:mm:ss): 00:38:14
If you are using the NFS or CIFS protocol, run the DAT test with the with-vnvram option. To do so, pass
the option “with-vnvram”, as shown in the example below on a 16 TB DDVE with 8 x 2 TB vdisks.
sysadmin@localhost## disk benchmark start dev[3-10] with-vnvram
This will take about 40 minutes to complete.
Are you sure? (yes|no) [no]: yes
ok, proceeding.
If the performance of the storage is lower than the expected minimum requirement, the “disk benchmark show” command displays the message
“** This set of devices does not meet the criteria for any file system capacity” after the DAT test completes.
In this case, the user is advised NOT to use this storage to create or expand the file system, or to create vdisks from
faster storage to achieve better storage performance. For existing data disks used by the file system, it is strongly suggested that the
user migrate the data to higher-performance disks. However, this is not enforced by DD VE: users can still use virtual disks that do not
meet the expected performance requirements to create or expand the file system, but they are warned that they may experience performance issues
during backup/restore.
In the DAT performance runs above, tests are performed serially on each vdisk. It is strongly recommended to run DAT tests serially
on each vdisk if the vdisks are carved from the same physical disks.
When can DAT tests be run in parallel? If the vdisks are carved from different physical disks, the DAT tests can be
performed on them in parallel.
If the datastores come from a traditional SAN storage array, DAT tests can be run in parallel on vdisks from different
LUNs; if the vdisks are from the same LUN, run the DAT tests in serial mode.
Note: Before running DAT tests in parallel, make sure the vdisks are not competing for the same physical disk. Whether to run DAT tests
serially or in parallel depends on how the vdisks use the physical disks.
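The serial-versus-parallel guidance above amounts to a simple rule: vdisks that share a LUN (or physical disk) must run one at a time, while vdisks on different LUNs may run together. A sketch that turns a vdisk-to-LUN mapping into batches that can each be run in parallel; the mapping itself is assumed to be known from the storage layout:

```python
from collections import defaultdict
from itertools import zip_longest


def dat_batches(vdisk_to_lun):
    """Group vdisks into batches for DAT testing: each batch contains at most
    one vdisk per LUN, so vdisks sharing a LUN land in different (serial)
    batches while vdisks on different LUNs can run in the same batch."""
    by_lun = defaultdict(list)
    for vdisk in sorted(vdisk_to_lun):
        by_lun[vdisk_to_lun[vdisk]].append(vdisk)
    batches = []
    # Take one vdisk from each LUN's queue per round.
    for batch in zip_longest(*by_lun.values()):
        batches.append([v for v in batch if v is not None])
    return batches
```

With dev3 and dev4 on lun1 and dev5 on lun2, this yields two rounds: dev3 and dev5 together, then dev4 alone.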
The example below shows DAT tests performed in parallel on the vdisks of a 4 TB DDVE with 2 x 2 TB vdisks.
sysadmin@localhost## disk benchmark start dev[3-4]
This will take about 5 minutes to complete.
Are you sure? (yes|no) [no]: yes
ok, proceeding.
Devices: dev3 dev4
Benchmark 31 progress: 8%
dev3 test: 1 of 3, sequential write, duration: 94s , blk-size: 4608KiB, test-size: 2017GiB, threads: 4,
dev4 test: 1 of 3, sequential write, duration: 94s , blk-size: 4608KiB, test-size: 2017GiB, threads: 4,
Benchmark 31 progress: 41%
dev3 test: 2 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 8,
dev4 test: 2 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 8,
Benchmark 31 progress: 75%
dev3 test: 3 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 1,
dev4 test: 3 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 1,
Benchmark 31 has completed
Use `disk benchmark show 31' to view results
sysadmin@localhost## disk benchmark show 31
Checking devices, please wait.
Benchmark test 31 was completed.
Devices: dev3 dev4
Start Time: 2017/06/23 15:59:05
Duration (hh:mm:ss): 00:04:51
It is highly recommended that higher-capacity DDVEs such as 48 TB and 64 TB have a minimum of 2 spindle groups, and that 96 TB
DDVEs have a minimum of 3 spindle groups (refer to Figure 1). When configuring more than one spindle group, ensure that each
spindle group has the same capacity, for better performance.
BASIC CONFIGURATION
Data Domain Virtual Edition comes with two pre-configured interfaces with IPv4 DHCP clients enabled. Users can modify the interface settings later as desired.
2) The adaptor type must be selected as VMXNET3. Other adaptor types like E1000 are not supported by DD VE.
After the virtual adaptors are added and the DD VE is powered on, it identifies the new adaptors by the PCI ID provided by the VMware/VMXNET3
driver. DD VE then assigns new interface names in the order of the PCI IDs presented to it, or reuses an existing interface name if the PCI ID is
already known to it.
IDENTIFYING THE DD VE INTERFACE NAME
Whenever a virtual adapter setting is modified in VMware, the user may want to know which interface in the DD VE will be affected.
For example, a virtual adapter in vSphere can be deleted or its settings can be modified.
Users can identify the interface name in the DD VE that corresponds to a virtual adaptor in VMware by comparing MAC addresses.
The MAC address on the VMware/vSphere side can be found in the virtual adaptor settings.
The MAC address in the DD VE can be found using the commands ‘net show hardware’, ‘net show config’, or ‘ifconfig’.
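The MAC comparison above can be automated once the two MAC lists have been gathered by hand from the vSphere adaptor settings and from 'net show hardware'. A sketch, where the input dictionaries are hypothetical stand-ins for those manually collected lists:

```python
def normalize_mac(mac):
    """Canonical lower-case, colon-separated form, tolerating '-' separators
    so MACs copied from different tools compare equal."""
    return mac.lower().replace("-", ":")


def map_adaptors_to_interfaces(vsphere_adaptors, ddve_interfaces):
    """Match vSphere virtual adaptors to DD VE interfaces by MAC address.

    Both arguments are {name: mac} dicts, e.g. collected from the virtual
    adaptor settings and from 'net show hardware' output respectively.
    Unmatched adaptors map to None.
    """
    by_mac = {normalize_mac(m): name for name, m in ddve_interfaces.items()}
    return {adaptor: by_mac.get(normalize_mac(m))
            for adaptor, m in vsphere_adaptors.items()}
```

For example, an adaptor listed as 00-50-56-AB-CD-EF in vSphere matches a DD VE interface reporting 00:50:56:ab:cd:ef.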
DELETING ADAPTORS
Deleting an adaptor must not be attempted while DD VE is up and running; shut down the DD VE first. After the adaptor is deleted and the DD VE is
brought back up, the DD VE no longer lists the interface associated with that adaptor.
Note: Licensing might be tied to the ethV0 interface. If so, do not delete the adaptor associated with the ethV0 interface without understanding the licensing
dependencies.
The adaptor associated with the ethV0 interface can be identified by comparing the MAC address displayed against the ethV0 interface in DD VE with
the MAC address on the virtual adaptor in the vSphere console.
VLAN CONFIGURATION
VMware supports three types of VLAN configuration. They are External Switch Tagging, Virtual Switch Tagging and Virtual Guest Tagging. For more
information on VLAN configuration, please refer to the VMware KB article, http://kb.vmware.com/kb/1003806
In practice, the EST and VST modes are transparent to DD VE and are managed at the ESXi host and/or the external switch. DD VE is aware of VLAN-tagged
packets only in VGT mode. The details are below.
1) On the VMware side, the port groups that are connected to the virtual switch should have appropriate VLAN IDs.
2) The ESXi host’s physical network adaptors must be connected to trunk ports on the physical switch to carry VLAN tagging.
1) The virtual switch must be configured with VLAN ID 4095 to carry VLAN tags from DD VE.
2) In the case of a distributed vSwitch, set the VLAN type to VLAN trunking and specify the range of VLANs.
3) The physical switch port that connects the ESXi hosts must also be set to trunk mode to transport VLAN tags.
INTERFACE BONDING
Bonding interfaces for failover or load sharing within the DD VE is not supported.
Because of the facts mentioned in the paragraph above, if two VMs connected to two different vSwitches want to communicate with
each other, their traffic has to go through an external medium or external switch.
DD VE is not aware of whether the switch it is connected to is a standard switch (vSwitch) or a distributed switch (dvSwitch). In general, any issues
with distributed switch configuration are outside the scope of DD VE and lie within the VMware environment.
Please refer to the following link for the detailed information on creating, configuring and managing the distributed switch: https://docs.vmware.com/en/VMware-
vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-375B45C7-684C-4C51-BA3C-70E48DFABF04.html
Make sure that the distributed switch has at least one uplink port per ESXi host if a VM on an ESXi host needs to communicate with an external network.
If a VM using a distributed switch on one ESXi host needs to talk to a VM on a different ESXi host, or to any other machine not on the
same ESXi host, make sure that the distributed switch's uplink port group is associated with at least one physical NIC per ESXi host. This ensures that a VM on one ESXi host can
communicate with a VM on another ESXi host when a single distributed switch connects both VMs.
TROUBLESHOOTING
1) On the DD VE, ‘ethtool’ should work just as on the DDR. Interface statistics can be checked in the ethtool log file.
2) Other networking tools such as tcpdump and congestion check are expected to run as usual.
3) For network connectivity issues, please refer to the following VMware KB article for troubleshooting in the
VMware environment.
http://kb.vmware.com/kb/1003893
DD VE NETWORKING PROTOCOLS
The networking protocols like DHCP, DNS/DDNS and IPv6 are expected to work in DD VE, just as they do in the DDR.
DD VE MTU
As of today, DD VE supports an MTU range of 350-9000 for IPv4 and 1280-9000 for IPv6. No packet loss is observed when pinging with a packet size
of 9000; this is probably because the ESXi host takes care of fragmentation.
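The supported MTU ranges stated above can be checked before applying a value. A minimal sketch; the function name is illustrative, not a DD VE command:

```python
def mtu_valid(mtu, ip_version):
    """Check an MTU against the DD VE supported ranges stated above:
    350-9000 for IPv4 and 1280-9000 for IPv6."""
    low = 350 if ip_version == 4 else 1280
    return low <= mtu <= 9000
```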
MAC CHANGES
To change the MAC address on a virtual adaptor, the virtual machine must be powered off. After setting the new MAC address and powering the DD VE back on,
the changed MAC address can be observed in the DD VE.
1) To the extent possible, separate the VM kernel port group traffic and DD VE port group traffic, preferably onto different
physical NICs of the ESXi host. This prevents VM kernel control traffic from being impacted at peak DD VE traffic times.
2) The same rule applies to vmNICs: separate traffic so that all the physical interfaces are used effectively. This helps
even out the traffic distribution among the interfaces, so that some are not overloaded while others are idle.
3) Be aware that ESXi performs automatic rollbacks when a configuration is not valid. For
example, setting an invalid MTU on a distributed switch, or applying invalid teaming or traffic-shaping parameters,
may trigger an automatic rollback of the configuration.
To disable automatic rollback in the VMware infrastructure, please refer to the links below:
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-C50DBAAC-89A7-4BF9-A7F8-
70EAB37E53C7.html
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-995E627C-284C-492A-A988-
D7ABA2A665AE.html
APPENDIX
Although the section below is not within the scope of DD VE, it may be helpful for finding quick references to networking concepts while working in a VMware
environment.
VMware provides various other networking policies, for example Security, Traffic Shaping, Resource Allocation, Monitoring, Teaming,
and Failover.
Some policies can be configured at the vSwitch level and some at the port group level. For more information, please refer to the following link:
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-B5218294-9838-475F-8B28-B7EA077BE45C.html
VMware supports various MAC address allocation mechanisms: generated by the vCenter Server, assigned by the ESXi host, or manual. These addresses are
assigned to the virtual adaptor. For more information, please refer to the link below.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-1C9C9FA5-2D2D-48DA-9AD5-110171E8FD36.html
VMware provides some best practices for configuring and setting up networks. Please refer to its recommendations at the link below.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-B57FBE96-21EA-401C-BAA6-BDE88108E4BB.html
CAPTURING PACKETS
VMware provides packet capture tools and procedures for capturing packets at the virtual adapter, the physical adapter, or the VMkernel adapter. Please
refer to the following link for more information:
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-5CE50870-81A9-457E-BE56-C3FCEEF3D0D5.html