
EMC DATA DOMAIN VIRTUAL EDITION

Best Practices Guide: Virtual Infrastructure, Storage and


Networking

ABSTRACT
This guide provides an overview of best practices for EMC Data Domain Virtual Edition (DD VE).

July, 2017

WHITE PAPER
To learn more about how EMC products, services, and solutions can help solve your business and IT challenges, contact your local
representative or authorized reseller, visit www.emc.com, or explore and compare products in the EMC Store

Copyright © 2014 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.

The information in this publication is provided “as is.” EMC Corporation makes no representations or warranties of any kind with respect
to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

VMware, ESXi, vCenter Server, vMotion, and vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks
used herein are the property of their respective owners.

Contents
VIRTUAL ENVIRONMENT BEST PRACTICES FOR DD VE
STORAGE BEST PRACTICES FOR DD VE
NETWORKING BEST PRACTICES FOR DD VE

Virtual Environment Best Practices for DD VE
OVERVIEW

DD VE is hosted on a hypervisor. This document provides tips and information about how DD VE should be configured and used in a vSphere
hypervisor environment.

DATA DOMAIN VIRTUAL EDITION CONFIGURATION

VIRTUAL HARDWARE VERSION


DD VE uses virtual hardware version 9. It is recommended not to upgrade the virtual hardware version. Table 1 summarizes the DD VE virtual hardware configuration by maximum storage capacity.

Table 1: DD VE virtual hardware configuration

Maximum storage capacity (TB):   4, 8, 16, 32, 48, 64, or 96
CPU count:                       2, 4, or 8, depending on capacity
CPU reservation:                 2 x 1.5 GHz, 4 x 1.5 GHz, or 8 x 1.5 GHz
Memory size (GB):                6, 8, 16, 24, 36, 48, or 64, in order of capacity
System disks:                    One 250 GB root disk and one 10 GB NVRAM disk
First data disk:                 Minimum size of 200 GB or 500 GB, depending on capacity
Other data disks:                Minimum size of 100 GB. We strongly recommend using larger virtual disks when possible. The maximum DD VE size is defined by the DD VE license and the maximum virtual disk size supported by the hypervisor.
NVRAM simulation file size:      512 MB, 1 GB, or 2 GB, depending on capacity
Network cards:                   Up to 4 or up to 8 NICs, type VMXNET3
Storage controllers:             Up to 4 VMware ParaVirtual SCSI controllers

CPU CONFIGURATION
CPU settings other than reservation and limit must not be changed. The DD VE CPU configuration must meet the following criteria:

CPU RESERVATION
The total CPU reservation assigned must be 1500 MHz x the number of CPU cores: 3000 MHz for a 2-CPU, 6000 MHz for a 4-CPU, and 12000 MHz for an 8-CPU DD VE configuration.

MEMORY CONFIGURATION
Memory settings other than memory size and reservation must not be changed.

DD VE requires the “Reserve all guest memory (All locked)” option to be checked. A scripted way of applying the CPU and memory reservation settings is sketched below.
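The reservation settings above can also be applied programmatically. The following is a minimal pyVmomi sketch; the vCenter address, credentials, and VM name are illustrative assumptions. It sets the CPU reservation to 1500 MHz per configured vCPU and locks all guest memory.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (credentials and names below are placeholders).
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ssl._create_unverified_context())
try:
    # Locate the DD VE virtual machine by name.
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "ddve-01")

    spec = vim.vm.ConfigSpec()
    # CPU reservation: 1500 MHz per configured vCPU (e.g. 6000 MHz for 4 vCPUs).
    spec.cpuAllocation = vim.ResourceAllocationInfo(
        reservation=1500 * vm.config.hardware.numCPU)
    # Equivalent of checking "Reserve all guest memory (All locked)".
    spec.memoryReservationLockedToMax = True

    task = vm.ReconfigVM_Task(spec=spec)
    # In a real script, wait for the task to finish before powering on the VM.
finally:
    Disconnect(si)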

DD VE uses open source VMware Tools version 10.0.0. Upgrading VMware Tools is not supported. If you try to upgrade VMware Tools, you might see the following error:

Call "VirtualMachine.MountToolsInstaller" for object "<DD VEname>" on


vCenter Server "<vcentername>" failed.

No VMware Tools package is available. The VMware Tools package is not


available for this guest operating system.

POWER OPERATIONS

Use only the Shut Down Guest and Restart Guest operations when needed.

The Power Off and Reset operations must be avoided, as they act like pulling the power cable on a bare-metal machine. Using these operations raises the possibility of partial data loss because in-flight data may not be synced to disk. DD VE does not support the Suspend operation; using it may cause unexpected results.

TIME MANAGEMENT
There are four possible time sources for DD VE:

1. The DD VE virtual machine’s own clock.

2. DD VE syncing time to ESXi host.

3. DD VE syncing time to NTP server.

4. DD VE syncing time to an Active Directory domain controller, if DD VE has joined Active Directory.

DD VE must use only one time source at any given time. Having multiple time sources would cause DD VE to flip between different times, leading to inconsistent file timestamps and access issues over the CIFS protocol.

Behavior of different time sync mechanisms in DD VE:

• Time Sync with ESXi Host: the default time source of DD VE.

  o “Time synchronization with host” is enabled by default.

• NTP: NTP can be enabled on DD VE using the ntp command lines.

  o “ntp enable” disables “Time synchronization with host”.

  o “ntp disable” enables “Time synchronization with host”.

• Time sync with a domain controller, if Active Directory authentication is enabled.

  o When you enable Active Directory authentication, the following operations must be performed in the order below (a scripted alternative to step 2 is sketched after this list).

  1. NTP must be disabled using “ntp disable”.

  2. Disable “Time synchronization with host” for the DD VE virtual machine on the ESXi server: right-click the VM -> Edit Settings -> Options -> VMware Tools -> uncheck “Synchronize guest time with host”.
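A minimal pyVmomi sketch of step 2, assuming “vm” already refers to the DD VE VirtualMachine object (obtained as in the earlier reservation sketch); it unchecks “Synchronize guest time with host” without using the vSphere client UI.

from pyVmomi import vim

# Disable "Synchronize guest time with host" on the DD VE virtual machine.
tools_spec = vim.vm.ToolsConfigInfo()
tools_spec.syncTimeWithHost = False

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(tools=tools_spec))
# Wait for the task to complete before enabling Active Directory authentication.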

VMWARE VIRTUAL MACHINE SNAPSHOT


Taking a virtual machine snapshot of DD VE is not supported. There will be serious performance issues if DD VE snapshots are taken. This is a vSphere technical limitation.

To prevent DD VE snapshots, the root disk and NVRAM disk (Hard disk 1 and Hard disk 2) are set to independent persistent mode. This prevents users from taking a DD VE snapshot. When a snapshot request is issued, the following error is displayed:

Cannot take a memory snapshot, since the virtual machine is configured with independent disks.

CPU USAGE ALARMS


Backing up to DD VE is a CPU-intensive operation. You might see the “Virtual machine cpu usage” alarm being triggered; this is expected behavior. vSphere does not support disabling alarms for individual virtual machines. No user action is required; this alarm can be safely ignored.

PERFORMANCE COUNTER COLLECTION
To troubleshoot DD VE performance issues, DD VE needs to collect performance counters from the hypervisor. Performance counter collection is configured by setting vCenter credentials in DD VE using the vserver command lines. DD VE collects performance counters every 5 minutes. You can use the following CLIs to configure and start performance counter collection. Once troubleshooting is completed, performance counter collection should be stopped.

The vCenter user must have at least read-only privileges on the data center object that contains the DD VE virtual machine's ESXi server or cluster.

Set vCenter credential on DD VE: vserver config set host <vserver-host> [port <port-number>]

Start performance counter collection: vserver perf-stats start

Stop performance counter collection: vserver perf-stats stop

VAPP OPTIONS
1. Do not change default vApp options.

2. Do not add OVF properties to DD VE.

VMWARE VSPHERE FAULT TOLERANCE (FT)


DD VE uses 2 to 4 CPUs. vCenter Server 6.0 or higher is needed to support the VMware vSphere Fault Tolerance (FT) feature for DD VE.

vCenter Server 4.x and 5.x support FT for only 1 virtual CPU per protected virtual machine; vCenter Server 6.0 supports up to 4 virtual CPUs, depending on licensing.

VMWARE VSPHERE HIGH AVAILABILITY (HA)


There are no specific requirements to use VMware vSphere High Availability (HA) feature with DD VE.

VM SUPPORT BUNDLE
A VM support bundle may be needed to resolve DD VE issues related to the virtual infrastructure. The following link describes different methods of collecting a VM support bundle on an ESXi server.

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=653

LICENSING:
The DD VE license is node-locked, which means the same license cannot be used on multiple DD VE instances. The DD VE license may become invalid in the following cases:

1. Do not remove the ethV0 interface; doing so might disable the license on the DD VE. Please see the DD VE Install and Admin guide for adding or removing network interfaces.

2. When a clone of the DD VE virtual machine is created, a separate license needs to be applied to the cloned virtual machine.

3. Removing the DD VE from inventory and adding it back may change the DD VE virtual machine attributes to which the license was tied, which would result in a disabled license.

UPGRADING FROM TRY AND BUY:


• A Try and Buy DD VE instance can be upgraded to a paid version by applying the new license. Best practice recommendation: discard the test data when the instance is upgraded to a paid version.

Storage Best Practices for DD VE
PHYSICAL STORAGE CONSIDERATIONS

DATA PROTECTION CONSIDERATIONS


Though DD VE has many features to ensure data integrity at the virtual machine layer, customers must configure their physical storage arrays with RAID groups to provide data protection. In terms of RAID level, RAID 6 is recommended for maximum data protection. If the storage array does not support RAID 6, then RAID 5 or RAID 10 (mirroring) can be used. If the RAID controller has write-back cache enabled, ensure that the cache is battery-backed to avoid data loss in the case of a power failure.

To ensure data safety when the disk cache is enabled, it is strongly recommended that a UPS (uninterruptible power supply) is available for the disks; otherwise, data loss might occur in the event of an unexpected power failure. For the RAID-on-LUN feature in DD VE to be effective, make sure that the stripe unit size of the storage array RAID is no larger than 128 KB.

RESOURCE—HARDWARE REQUIREMENT
When deploying a DD VE in a customer environment, make sure that the storage hardware meets the minimum performance requirements. Since the DDFS performance requirement scales with DDFS capacity, customers should prepare their storage hardware based on DDFS capacity.

As described in the “Data Protection Considerations” section, customers are recommended to configure their physical storage arrays with RAID groups, so the performance requirement is effectively a requirement on the RAID group.

Here are the minimum performance requirements for DD VE:

Table 2:
Capacity    Random Read    Random Read      Sequential Write       NVRAM Write
(TiB)       IOPS           Latency (ms)     Throughput (MiB/s)     IOPS
--------    -----------    -------------    -------------------    -----------
4           160            14               40                     150
8           320            14               80                     300
16          640            14               160                    600
32          1280           14               320                    1200
48          1920           14               480                    1800
64          2560           14               640                    2400
96          3840           14               960                    3600
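As a convenience, Table 2 can be encoded in a small helper (not part of DD VE) so that a planned deployment can be checked against these minimums before provisioning. The measured values in the usage line are illustrative assumptions.

MIN_STORAGE_REQUIREMENTS = {
    # capacity (TiB): (random read IOPS, max random read latency ms,
    #                  sequential write MiB/s, NVRAM write IOPS)
    4:  (160, 14, 40, 150),
    8:  (320, 14, 80, 300),
    16: (640, 14, 160, 600),
    32: (1280, 14, 320, 1200),
    48: (1920, 14, 480, 1800),
    64: (2560, 14, 640, 2400),
    96: (3840, 14, 960, 3600),
}

def meets_requirements(capacity_tib, read_iops, read_latency_ms, write_mibs, nvram_iops):
    """Return True if the measured storage numbers meet the Table 2 minimums."""
    req_iops, max_latency, req_write, req_nvram = MIN_STORAGE_REQUIREMENTS[capacity_tib]
    return (read_iops >= req_iops and read_latency_ms <= max_latency
            and write_mibs >= req_write and nvram_iops >= req_nvram)

# Example: a 16 TiB DD VE on storage measured at 700 read IOPS, 12 ms latency,
# 200 MiB/s sequential write and 800 NVRAM write IOPS (illustrative numbers).
print(meets_requirements(16, 700, 12, 200, 800))   # True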

When a customer plans to deploy a DD VE of a certain size, the storage should be verified to meet the minimum requirements shown in the table above. Things are a bit different depending on the underlying storage type:

1) DAS
In the case of DAS, local storage devices are used to create DDFS, so the local storage needs to meet the performance requirements above. As indicated in the previous section, we recommend that customers build a RAID group on top of the raw disks to ensure data safety. Assuming the RAID group is dedicated to DD VE, the RAID group should provide at least the performance above to DD VE. A hard disk's performance is usually predictable, so customers can estimate the storage hardware requirement.

Below is a suggested way of designing your RAID group for performance. The write penalty and minimum number of disks for common RAID types are listed below:

Table 3:
RAID Type    Write Penalty    Minimum Number of Disks
---------    -------------    -----------------------
RAID 1       2                2
RAID 10      2                4
RAID 5       4                3
RAID 6       6                4

First calculate the minimum required IOPS for your system based on the required read and write IOPS:

minReqIops = reqdReadIops + (reqdWriteIops * raidWritePenalty)

Then calculate the minimum number of disks needed for performance, based on the maximum IOPS a single disk can deliver:

minNumDisksMinPerf = minReqIops / maxIopsByDisk

The number of disks should also satisfy the minimum number of disks for the corresponding RAID type, and a spare disk is recommended for redundancy. Therefore, the minimum number of disks for your RAID group is:

minNumDisks = max( ceil(minReqIops / maxIopsByDisk), raidMinDisks ) + 1

where raidMinDisks is the minimum number of disks for the RAID type (Table 3) and the added 1 is the spare disk.
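The sizing arithmetic above can be put together in a short sketch. The write penalties and RAID minimums come from Table 3; the required write IOPS and the per-disk IOPS rating are illustrative assumptions (Table 2 does not list a random write IOPS requirement, so it must be estimated for the workload).

import math

RAID_WRITE_PENALTY = {"RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}
RAID_MIN_DISKS     = {"RAID1": 2, "RAID10": 4, "RAID5": 3, "RAID6": 4}

def min_disks(raid_type, reqd_read_iops, reqd_write_iops, max_iops_per_disk):
    # minReqIops = reqdReadIops + (reqdWriteIops * raidWritePenalty)
    min_req_iops = reqd_read_iops + reqd_write_iops * RAID_WRITE_PENALTY[raid_type]
    # Disks needed for performance, rounded up.
    disks_for_perf = math.ceil(min_req_iops / max_iops_per_disk)
    # Also satisfy the RAID minimum, then add one spare disk.
    return max(disks_for_perf, RAID_MIN_DISKS[raid_type]) + 1

# Example: 640 required read IOPS (16 TiB, Table 2), an assumed 200 write IOPS,
# RAID 6, and disks rated at roughly 150 IOPS each.
print(min_disks("RAID6", 640, 200, 150))   # 13 disks for performance, plus 1 spare -> 14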

2) NAS/SAN

In the case of NAS or SAN, customers need to make sure that the storage meets the minimum performance requirements for DD VE (see the previous table).

If an iSCSI datastore is used, hardware iSCSI is recommended. This ensures that minimal CPU resources are consumed by iSCSI.

3) NFS DATA STORE

To isolate storage traffic from other networking traffic, it is considered best practice to use either dedicated switches or VLANs for your NFS and iSCSI ESX server traffic. The minimum NIC speed should be 1 Gigabit Ethernet. Hardware acceleration should be enabled for NFS datastores for performance reasons.

Please refer to VMware documentation for more information about best practices for NFS datastores.

STORAGE PERFORMANCE MONITORING CONSIDERATIONS


When there is a DD VE performance problem, monitoring the physical storage performance is useful for troubleshooting. If performance degradation is already occurring at the physical storage layer, customers should resolve performance problems at that layer first; this can save a lot of effort in making DD VE perform well.

Data Domain strongly recommends that customers enable the vserver monitoring feature in DD VE. Combined with other performance statistics in DD OS, this makes it much easier to triage storage performance issues.

DATASTORE CONSIDERATIONS

PHYSICAL STORAGE OF DATASTORE


VMware supports different types of datastores: DAS, NAS, and SAN. All of these can be used as long as they meet the minimum performance requirements. When NAS is used, it is better to have hardware acceleration enabled; otherwise thick provisioning is not supported, in which case good performance cannot be achieved.

DATASTORE SHARING CONSIDERATION


For performance reasons, Data Domain strongly recommends that DD VE NOT share a datastore with other I/O-intensive virtual machines.

VMware supports two types of datastores, depending on the type of backend storage:

• Virtual Machine File System (VMFS) datastore

• Network File System (NFS) datastore

Data disks of DD VE can be carved out of both types of datastores. Sharing a datastore with other virtual machines risks degrading DD VE performance, because all virtual machines residing on the same datastore share I/O resources. A dedicated datastore for DD VE is important to get the best DD VE performance.

When shared storage is used, Storage I/O Control (SIOC) can be used to resolve resource contention. SIOC provides I/O prioritization for virtual machines running on a cluster of ESX servers that share a common pool of storage. During periods of high I/O congestion, SIOC dynamically adjusts the I/O queue slots on each ESX server accessing the shared resource to align the available throughput with the prioritization of the virtual machines running on the shared datastore. Customers should ensure that the DD VE data disks get enough shares (including bandwidth, IOPS, and latency) when configuring SIOC; a scripted sketch follows. Refer to VMware documentation for more details.
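If the SIOC shares of the DD VE disks need to be raised, the following hedged pyVmomi sketch (assuming “vm” is the DD VE VirtualMachine object, as in the earlier sketches) edits every virtual disk to use “high” storage I/O shares. Whether “high” is sufficient depends on the other workloads on the datastore; treat this as an illustration rather than a sizing recommendation.

from pyVmomi import vim

# Raise the SIOC shares on every virtual disk of the DD VE.
device_changes = []
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        # "high" shares; the numeric shares field is only honored for the "custom" level.
        dev.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
            shares=vim.SharesInfo(level="high", shares=0))
        device_changes.append(vim.vm.device.VirtualDeviceSpec(
            operation="edit", device=dev))

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=device_changes))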

When DD VE is used in a Storage DRS-enabled datastore cluster, it is recommended that the VMDK anti-affinity rule be used. This ensures that the virtual disks of a virtual machine with multiple virtual disks are placed on different datastores. For more details about Storage DRS configuration, please refer to VMware documentation.

DATASTORE CREATING AND INCREASING CONSIDERATION

VMFS DATASTORE
• Always create only one VMFS datastore for each LUN.

• Do NOT expand a VMFS datastore with a LUN from a different disk group.

For example, datastore1 originally has only one extent, from LUN1, and a second extent from LUN2 expands datastore1 later. Disks from datastore1 in DD VE might then be backed by LUN1 or LUN2, but this is not deterministic. In this case, customers cannot specify the correct spindle group number for data disks. More details about the spindle group number of data disks are discussed in “Spindle Group of DD VE Data Disk”.

Figure 1:

NFS DATASTORE
• Always mount NFS storage as the same datastore (identical server and folder names) on all ESXi hosts. An NFS datastore is identified by its server and folder names. For example, if the server name is given as “nfssrv1” on one ESXi host and “nfssrv1.domain.com” on another, the two hosts will see the same NFS volume as different datastores. In this case, it is very difficult to dedicate the NFS datastore to DD VE.

vMotion, Storage vMotion and DRS

DD VE is a normal virtual machine and should work well during vMotion, Storage vMotion, or DRS operations. Please refer to VMware documentation about vMotion, Storage vMotion, and DRS.

It is recommended to perform vMotion, Storage vMotion, and DRS operations on DD VE when the DDFS workload is low, to avoid DDFS performance degradation.

VIRTUAL DISK SETTINGS


SPINDLE GROUP OF DDVE DATA DISK
Data disks of DD VE are used to create the Data Domain file system, which is used for backup and restore. When adding new data disks to the storage tier, the spindle-group parameter is used:

#storage add <LUN-list> [spindle-group <1-16>]

Though the “spindle-group” parameter is optional, Data Domain strongly recommends that customers provide the right spindle-group number. A correct spindle-group number for the data disks takes advantage of the load-balancing and capacity-balancing mechanisms built into DD OS to efficiently spread the workload across data disks.

The following best practices for specifying spindle-group in DD VE are strongly recommended:

• Always create only one VMFS datastore on each LUN.

• Always dedicate an NFS volume to DD VE; other VMs should NOT use this NFS volume at the same time.

• Data disks from the same datastore must be set with the same spindle group number.

• Data disks from different datastores must be set with different spindle group numbers, unless the LUNs for these datastores are created from the same RAID group.

• Never add multiple LUNs created from different RAID groups to one datastore. The spindle group number of data disks from such a datastore cannot be set correctly, which might result in performance degradation.

DISK PROVISIONING CONSIDERATION


VMware provides three disk provisioning options:

1. Thick provision lazy zeroed

In this model, disk space is allocated when the disk is created, so adding the disk does not take long. Sectors are zeroed the first time they are accessed, so there is a small performance impact the first time they are written.

2. Thick provision eager zeroed

In this model, disk space is allocated and fully zeroed when the disk is created, so disk creation takes a long time, but there is no performance impact when the disk is used.

3. Thin provision

Storage is allocated and zeroed when the disk is accessed, which has a significant performance impact.

Thick provision lazy zeroed is the recommended model: the performance impact is small and the disk creation time is relatively short. A scripted sketch of adding a lazy-zeroed data disk follows.
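A hedged pyVmomi sketch of adding a thick provisioned lazy zeroed data disk (assuming “vm” is the DD VE VirtualMachine object; the 1 TiB size and unit number are illustrative assumptions). Production scripts should pick a free unit number on one of the DD VE's ParaVirtual SCSI controllers and then add the new device to DDFS with the “storage add” command.

from pyVmomi import vim

# Use the first SCSI controller found on the VM (DD VE uses ParaVirtual SCSI controllers).
controller = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualSCSIController))

backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
    diskMode="persistent",
    thinProvisioned=False,   # thick ...
    eagerlyScrub=False)      # ... lazy zeroed (True here would mean eager zeroed)

disk = vim.vm.device.VirtualDisk(
    backing=backing,
    controllerKey=controller.key,
    unitNumber=5,                          # assumed free slot; adjust for your VM
    capacityInKB=1 * 1024 * 1024 * 1024)   # 1 TiB

disk_spec = vim.vm.device.VirtualDeviceSpec(
    operation="add", fileOperation="create", device=disk)

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_spec]))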

DISK MODE
By default, the root disk and NVRAM disk are set to independent persistent mode so that users cannot accidentally take a snapshot or linked clone of them; performance is impacted when a snapshot or linked clone is created.

However, users are free to choose the mode of their data disks; just keep in mind that performance will be impacted when doing a clone or snapshot.

DISK RESOURCE ALLOCATION


Non-data disks: after initial deployment, there are by default two disks, dev1 and dev2, which are used as non-data disks by DD VE. Customers should NOT remove these two disks or change any of their attributes; otherwise, DD VE will not run.

DATA DISK WITH RDMS


RDM is not supported in DD VE.

NEW SCSI CONTROLLER


Customers should not add an extra SCSI HBA to DD VE on a live system; it is not supported. DD VE needs to be powered off before a new SCSI HBA is added.

DATA DISK SIZE/COUNT CHOSEN


The first data disk must be at least 200 GB (a DD OS restriction). For ESXi 5.1 and earlier, the largest virtual disk capacity is 2 TB minus 512 bytes; for ESXi 5.5 and later, the maximum size is 62 TB. At most 14 data disks are supported in a single DD VE instance. Customers need to provision their storage properly to make sure the total number of data disks does not exceed this threshold.

DATA DISK SIZE AND USABLE FILE SYSTEM SIZE


When adding a disk to the file system, the usable file system size is a bit smaller than the disk size. This is due to RAID overhead (around 5.6%), EXT3 overhead (3 GB), and DDFS overhead. The table below shows the mapping between disk size and usable file system size:
Table 4:
Disk size         4 TB (4096 GiB)          8 TB (8192 GiB)          16 TB (16384 GiB)
Usable FS size    3716.3 GiB (3.629 TiB)   7585.7 GiB (7.408 TiB)   14892.0 GiB (14.543 TiB)

USE DAT TOOL TO EVALUATE DISK PERFORMANCE BEFORE DEPLOYING DDFS.

To qualify storage for a particular DD VE capacity, the storage must meet the minimum IOPS, throughput, and latency numbers shown in the DD VE performance requirements in Table 2 above.

You can also use the command “disk benchmark show requirements” on your DD VE to display the minimum performance requirements for both data and NVRAM disks, as shown below.
For help on running a DAT test, type:
sysadmin@localhost## disk benchmark help
sysadmin@localhost## disk benchmark show requirements
File System      Write Sequential      Read Random    Read Random     vNVRAM
Capacity (TiB)   Throughput (MiB/s)    IOPS           Latency (ms)    Write IOPS
--------------   -------------------   ------------   -------------   ------------
4                40                    160            14              150
8                80                    320            14              300
16               160                   640            14              600
32               320                   1280           14              1200
48               480                   1920           14              1800
64               640                   2560           14              2400
96               960                   3840           14              3600
--------------   -------------------   ------------   -------------   ------------

A DAT test can be performed with vNVRAM (“with-vnvram”) or without vNVRAM. If you are using the DD Boost protocol, run the DAT test without vNVRAM. By default, DAT runs without vNVRAM, as shown in the example below on a 16 TB DD VE with 8 x 2 TB disks.
sysadmin@localhost## disk benchmark start dev[3-10]
This will take about 40 minutes to complete.
Are you sure? (yes|no) [no]: yes

ok, proceeding.

Checking devices, please wait.


Benchmark test 1 started, use 'disk benchmark watch' to monitor its progress.
sysadmin@localhost## disk benchmark watch
Benchmark 1
Devices: dev3+dev4+dev5+dev6+dev7+dev8+dev9+dev10
Benchmark 1 progress: 1%
dev3 test: 1 of 3, sequential write, duration: 94s , blk-size: 4608KiB, test-size: 2017GiB, threads: 4,
Benchmark 1 progress: 5%
dev3 test: 2 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 8,
Benchmark 1 progress: 9%
dev3 test: 3 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 1,
…………
Benchmark 1 progress: 92%
dev10 test: 2 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 8,
Benchmark 1 progress: 96%
dev10 test: 3 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 1,
Benchmark 1 has completed
Use `disk benchmark show 1' to view results
sysadmin@localhost## disk benchmark show 1
Checking devices, please wait.

Benchmark test 1 was completed.
Devices: dev3+dev4+dev5+dev6+dev7+dev8+dev9+dev10
Start Time: 2017/06/23 14:08:36
Duration (hh:mm:ss): 00:38:14

Write Sequential      Read Random    Read Random     vNVRAM
Throughput (MiB/s)    IOPS           Latency (ms)    Write IOPS
------------------    -----------    ------------    ----------
1139                  1042           6.10            n/a
------------------    -----------    ------------    ----------
This set of devices is suitable for use in a 16 TiB file system.
sysadmin@localhost##

If you are using the NFS or CIFS protocol, run the DAT test with vNVRAM by passing the “with-vnvram” option, as shown in the example below on a 16 TB DD VE with 8 x 2 TB vdisks.
sysadmin@localhost## disk benchmark start dev[3-10] with-vnvram
This will take about 40 minutes to complete.
Are you sure? (yes|no) [no]: yes

ok, proceeding.

Checking devices, please wait.


Benchmark test 10 started, use 'disk benchmark watch' to monitor its progress.
sysadmin@localhost## disk benchmark watch
Benchmark 10
Devices: vNVRAM+vNVRAM+vNVRAM+vNVRAM+vNVRAM+vNVRAM+vNVRAM+vNVRAM
dev3+dev4+dev5+dev6+dev7+dev8+dev9+dev10
Benchmark 10 progress: 1%
vNVRAM test: 1 of 3, random write, duration: 94s , blk-size: 32KiB, test-size: 524256KiB, threads: 1,
dev3 test: 1 of 3, sequential write, duration: 94s , blk-size: 4608KiB, test-size: 2017GiB, threads: 4,
Benchmark 10 progress: 5%
vNVRAM test: 2 of 3, random write, duration: 94s , blk-size: 32KiB, test-size: 524256KiB, threads: 1,
dev3 test: 2 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 8,
dev3 test: 3 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 1,
…….
Benchmark 10 progress: 92%
vNVRAM test: 2 of 3, random write, duration: 94s , blk-size: 32KiB, test-size: 524256KiB, threads: 1,
dev10 test: 2 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 8,
Benchmark 10 progress: 96%
vNVRAM test: 3 of 3, random write, duration: 94s , blk-size: 32KiB, test-size: 524256KiB, threads: 1,
dev10 test: 3 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 1,
Benchmark 10 has completed
Use `disk benchmark show 10' to view results
sysadmin@localhost## disk benchmark show 10
Checking devices, please wait.
Benchmark test 10 was completed.
Devices: vNVRAM+vNVRAM+vNVRAM+vNVRAM+vNVRAM+vNVRAM+vNVRAM+vNVRAM
dev3+dev4+dev5+dev6+dev7+dev8+dev9+dev10
Start Time: 2017/06/23 14:58:33
Duration (hh:mm:ss): 00:39:07

Write Sequential      Read Random    Read Random     vNVRAM
Throughput (MiB/s)    IOPS           Latency (ms)    Write IOPS
------------------    -----------    ------------    ----------
1049                  620            13.00           2434
------------------    -----------    ------------    ----------
This set of devices is suitable for use in a 16 TiB file system.

If the storage performance is lower than the expected minimum requirement, the “disk benchmark show” command run after the DAT test completes will report “** This set of devices does not meet the criteria for any file system capacity”. In this case, it is suggested that the user NOT use this storage to create or expand the file system, or that the user create vdisks from faster storage to achieve better storage performance. For existing data disks already used by the file system, it is strongly suggested that the user migrate the data to higher-performance disks. However, this is not enforced by DD VE: the user can still use virtual disks that do not meet the expected performance requirements to create or expand the file system, but should be warned to expect possible performance issues during backup and restore.

In the above DAT performance runs, tests are performed serially on each vdisk. It is strongly recommended to run DAT tests serially on each vdisk if the vdisks are backed by the same physical disks.

When can DAT tests be run in parallel on vdisks? If the vdisks are carved from different physical disks, the DAT tests can be performed on them in parallel.

If the datastores come from traditional SAN storage, the user can run DAT tests in parallel on vdisks from different LUNs; if the vdisks are from the same LUN, run the DAT tests in serial mode.

Note: when running DAT tests in parallel, make sure the vdisks are not competing for the same physical disks. Whether to run DAT tests in serial or parallel mode depends on how the vdisks use the physical disks.

The example below shows a DAT test performed in parallel on the vdisks of a 4 TB DD VE with 2 x 2 TB vdisks.
sysadmin@localhost## disk benchmark start dev(3-4)
This will take about 5 minutes to complete.
Are you sure? (yes|no) [no]: yes

ok, proceeding.

Checking devices, please wait.


Benchmark test 31 started, use 'disk benchmark watch' to monitor its progress.
sysadmin@localhost## disk benchmark watch
Benchmark 31

Devices: dev3 dev4
Benchmark 31 progress: 8%
dev3 test: 1 of 3, sequential write, duration: 94s , blk-size: 4608KiB, test-size: 2017GiB, threads: 4,
dev4 test: 1 of 3, sequential write, duration: 94s , blk-size: 4608KiB, test-size: 2017GiB, threads: 4,
Benchmark 31 progress: 41%
dev3 test: 2 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 8,
dev4 test: 2 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 8,
Benchmark 31 progress: 75%
dev3 test: 3 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 1,
dev4 test: 3 of 3, random read, duration: 94s , blk-size: 64KiB, test-size: 2017GiB, threads: 1,
Benchmark 31 has completed
Use `disk benchmark show 31' to view results
sysadmin@localhost## disk benchmark show 31
Checking devices, please wait.
Benchmark test 31 was completed.
Devices: dev3 dev4
Start Time: 2017/06/23 15:59:05
Duration (hh:mm:ss): 00:04:51

Write Sequential      Read Random    Read Random     vNVRAM
Throughput (MiB/s)    IOPS           Latency (ms)    Write IOPS
------------------    -----------    ------------    ----------
880                   1199           8.57            n/a
------------------    -----------    ------------    ----------
This set of devices is suitable for use in a 4 TiB file system.

It is highly recommended that higher-capacity DD VEs such as 48 TB and 64 TB have a minimum of 2 spindle groups, and that a 96 TB DD VE have a minimum of 3 spindle groups (refer to Figure 1). If the user is configuring more than one spindle group, ensure that each spindle group has the same capacity for better performance.

Networking Best Practices for DD VE


SCOPE
The scope of this document is limited to Data Domain Virtual Edition networking functionality with VMware infrastructure. As DD VE is a virtual appliance, most of the configuration challenges are part of the VMware infrastructure. This document discusses the items that are related to DD VE; exploring the extensive networking features of VMware is not in scope.

BASIC CONFIGURATION
Data Domain Virtual Edition comes with two pre-configured interfaces with IPv4 DHCP clients enabled. Users can modify the interface settings later as desired.

ADDING NEW ADAPTORS


The following rules must be followed while adding a new adaptor:

1) DD VE must be shut down while adding any new adaptor.

2) The adaptor type must be VMXNET3. Other adaptor types such as E1000 are not supported by DD VE.

After the virtual adaptors are added and the DD VE is powered on, it identifies the new adaptors based on the PCI ID provided by the VMware VMXNET3 driver. DD VE then assigns new interface names in the order of the PCI IDs presented to it, or keeps the existing interface name if a PCI ID is already known to it.

IDENTIFYING THE DD VE INTERFACE NAME
Whenever a virtual adapter setting is modified in VMware, the user may want to know which interface in the DD VE will be affected; for example, a virtual adapter in vSphere can be deleted or its settings can be modified.
The user can identify the interface name in the DD VE that corresponds to a virtual adaptor in VMware by comparing the MAC addresses (a scripted sketch follows).

The MAC address on the VMware/vSphere side can be identified from the virtual adaptor settings.

The MAC address in the DD VE can be found using the commands ‘net show hardware’, ‘net show config’, or ‘ifconfig’.
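As an alternative to reading the virtual adaptor settings in the vSphere client, the following small pyVmomi sketch (assuming “vm” is the DD VE VirtualMachine object, as in the earlier sketches) prints the MAC address of each virtual adapter so it can be matched against the output of ‘net show hardware’ on the DD VE.

from pyVmomi import vim

# Print the label and MAC address of every virtual network adapter on the DD VE VM.
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualEthernetCard):
        print(dev.deviceInfo.label, dev.macAddress)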

DELETING ADAPTORS
Deleting an adaptor must not be attempted while DD VE is up and running; one must shut down the DD VE first. After the adaptor is deleted and the DD VE is brought up, the DD VE will no longer list the interface associated with that adaptor.

Note: there might be licensing tied to the ethV0 interface. If so, do not delete the adaptor associated with the ethV0 interface without knowing the licensing dependencies.

One can identify the adaptor associated with the ethV0 interface by comparing the MAC address displayed for the ethV0 interface in DD VE with the MAC address of the virtual adaptor in the vSphere console.

VLAN CONFIGURATION
VMware supports three types of VLAN configuration. They are External Switch Tagging, Virtual Switch Tagging and Virtual Guest Tagging. For more
information on VLAN configuration, please refer to the VMware KB article, http://kb.vmware.com/kb/1003806

DD VE is compatible with all three modes of VLAN configuration.

In practice, the EST and VST modes are transparent to DD VE and are managed at the ESXi host and/or at the external switch. DD VE is aware of VLAN-tagged packets only in VGT mode. Below are the details.

EXTERNAL SWITCH TAGGING (EST)


In this mode, the VLAN configuration at the external switch is transparent to the ESXi host and to the VMs. However, as recommended by VMware, the port groups connected to the virtual switch must have their VLAN ID set to 0.

VIRTUAL SWITCH TAGGING (VST)


In this mode the virtual switch does the VLAN tagging. Below are some important notes.

1) On the VMware side, the port groups that are connected to the virtual switch should have appropriate VLAN IDs.
2) The ESXi host’s physical network adaptors must be connected to trunk ports on the physical switch to carry VLAN tagging.

VIRTUAL GUEST TAGGING (VGT)


In this mode, the guest (DD VE) performs the VLAN tagging. Below are some important notes (a configuration sketch follows this list).

1) The virtual switch port group must be configured with VLAN ID 4095 to carry VLAN tags from DD VE.
2) In the case of a distributed vSwitch, set the VLAN type to VLAN trunking and specify the range of VLANs.
3) The physical switch ports that connect the ESXi hosts must be set to trunk mode to transport VLAN tags.
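For a standard vSwitch, note 1 above can also be applied programmatically. The following is a hedged pyVmomi sketch (the host object and port group name are assumptions); it rewrites an existing port group specification with VLAN ID 4095. For a distributed switch, configure VLAN trunking on the distributed port group instead, as in note 2.

from pyVmomi import vim

def enable_vgt(host, portgroup_name):
    """Set VLAN ID 4095 on an existing standard-vSwitch port group (VGT)."""
    net_sys = host.configManager.networkSystem
    pg = next(p for p in net_sys.networkInfo.portgroup
              if p.spec.name == portgroup_name)
    spec = pg.spec            # reuse the existing vSwitch name and policy
    spec.vlanId = 4095        # 4095 passes all VLAN tags through to the guest
    net_sys.UpdatePortGroup(portgroup_name, spec)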

INTERFACE BONDING
Bonding interfaces for failover or load sharing within the DD VE is not supported.

WORKING WITH MULTIPLE VSWITCHES


Each vSwitch in an ESXi host has zero or more uplinks, and each uplink is associated with one physical NIC in the ESXi host. The important point to note is that these vSwitches are not connected to each other.

As a result, if two VMs connected to two different vSwitches want to communicate with each other, their traffic has to go through an external medium or external switch.

VSPHERE DISTRIBUTED SWITCH CONSIDERATIONS


The vSphere distributed switch simplifies and centralizes vSphere network management. One can even configure per-port policies and monitoring settings.

DD VE is not aware of whether the switch it is connected to is a standard switch (vSwitch) or a distributed switch (dvSwitch). So, in general, any issues with distributed switch configuration are not within the scope of DD VE but rather of the VMware environment.

Please refer to the following link for the detailed information on creating, configuring and managing the distributed switch: https://docs.vmware.com/en/VMware-
vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-375B45C7-684C-4C51-BA3C-70E48DFABF04.html

Make sure that the distributed switch has at least one uplink port per ESXi host if a VM on that ESXi host needs to communicate with the external network.
If a VM on a particular ESXi host that uses the distributed switch needs to talk to a VM on a different ESXi host, or to any other machine that is not on the same ESXi host, make sure that the distributed switch's uplink port group is associated with at least one physical NIC per ESXi host. This ensures that a VM on one ESXi host can communicate with a VM on another ESXi host when a single distributed switch connects both VMs.

TROUBLESHOOTING
1) On the DD VE, ‘ethtool’ works just as it does on a DDR. One can check interface statistics in the ethtool log file.

2) Other networking tools such as tcpdump, congestion check, etc., are expected to run as usual.

3) When there is a network connectivity issue, please refer to the following VMware KB article for troubleshooting in the
VMware environment.
http://kb.vmware.com/kb/1003893

DD VE NETWORKING PROTOCOLS
Networking protocols such as DHCP, DNS/DDNS, and IPv6 are expected to work in DD VE just as they do in the DDR.

DD VE MTU
As of today, DD VE supports an MTU range of 350-9000 for IPv4 and 1280-9000 for IPv6. No packet loss is observed when pinging with a packet size of 9000; this is probably because the ESXi host takes care of fragmentation.

MAC CHANGES
To change the MAC address on a virtual adaptor, one has to power off the virtual machine. After setting the new MAC address and powering on the DD VE, one can observe the changed MAC address in the DD VE.

DO’S AND DON’TS OF VMWARE NETWORKING W.R.T DD VE


Below are some of the Do’s and Don’ts of VMware networking.

1) To the extent possible, separate the VMkernel port group traffic and the DD VE port group traffic, preferably onto different physical NICs of the ESXi host. This prevents VMkernel control traffic from being impacted at peak DD VE traffic times.

2) The same rule applies to vmnics: separate traffic so that all the physical interfaces are effectively utilized. This helps even out the traffic distribution among the interfaces so that some are not overloaded while others sit idle.

3) Be aware that automatic rollbacks are performed on ESXi when a configuration is not valid. For example, on a distributed switch, changing the MTU to an invalid value or setting invalid teaming or traffic shaping parameters may trigger an automatic rollback of the configuration.

To disable automatic rollback on VMware infrastructure please refer to the below link:
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-C50DBAAC-89A7-4BF9-A7F8-
70EAB37E53C7.html
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-995E627C-284C-492A-A988-
D7ABA2A665AE.html

APPENDIX
Although the section below is not within the scope of DD VE, it may provide helpful quick references to networking concepts while working in a VMware environment.

VMWARE NETWORKING POLICIES:

VMware provides various other networking policies. For example, security, traffic shaping, resource allocation, monitoring, teaming, and failover are some useful policies that can be configured.

Some policies can be configured at the vSwitch level and some can be applied at the port group level. For more information, please refer to the following link:
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-B5218294-9838-475F-8B28-B7EA077BE45C.html

MAC ADDRESS MANAGEMENT

VMware supports various MAC address allocation mechanisms: vCenter Server generated, assigned by the ESXi host, or manual. These addresses are assigned to the virtual adaptor. For more information, please refer to the link below.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-1C9C9FA5-2D2D-48DA-9AD5-110171E8FD36.html

VMWARE NETWORKING BEST PRACTICES

VMware provides best practices for configuring and setting up networks. Please refer to their recommendations at the link below.
https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-B57FBE96-21EA-401C-BAA6-BDE88108E4BB.html

CAPTURING PACKETS

VMware provides packet capture tools and procedures for capturing packets at the virtual adapter, the physical adapter, or the VMkernel adapter. Please refer to the following link for more information:

https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.networking.doc/GUID-5CE50870-81A9-457E-BE56-C3FCEEF3D0D5.html

