
EMC VPLEX

with GeoSynchrony 5.2

Release Notes
302-000-035-01

June 25, 2013

These release notes contain supplemental information for EMC VPLEX with GeoSynchrony 5.2.

Revision history
Product description
New features in this release
Configuration limits
Fixed problems in Release 5.2
Known problems and expected behaviors
Documentation updates
Documentation
Upgrading GeoSynchrony
Software packages
Troubleshooting and getting help

Revision history

The following table presents the revision history of this document.

Revision   Date            Description
01         June 25, 2013   First draft

Product description
The EMC VPLEX family removes physical barriers within, across and between data centers. VPLEX Local provides simplified management and non-disruptive data mobility across heterogeneous arrays. VPLEX Metro provides data access and mobility between two VPLEX clusters within synchronous distances. VPLEX Geo further dissolves those distances by extending data access across asynchronous distances. With a unique scale-up and scale-out architecture, VPLEX's advanced data caching and distributed cache coherency provides workload resiliency, automatic sharing, balancing and failover of storage domains, and enables both local and remote data access with predictable service levels.

New features in this release


Release 5.2 includes the following new features:

New performance dashboard and CLI-based performance capabilities

A new customizable performance monitoring dashboard provides a view into the performance of your VPLEX system. You decide which aspects of the system's performance to view and compare. Alternatively, you can use the CLI to create a toolbox of custom monitors to operate under varying conditions including debugging, capacity planning, and workload characterization. The following new dashboards are provided by default:
- System Resources
- End To End Dashboard
- Front End Dashboard
- Back End Dashboard
- Rebuild Dashboard


- WAN Dashboard

A number of new charts are also available in the GUI.

Improved diagnostics

Enhancements include the following:

Collect diagnostics improvements:
- Prevent more than one user from running Log Collection at any one time, thus optimizing resources and maintaining the validity of the collected logs.
- Accelerate performance by combining the multiple scripts that gather tower debug data into a single script.
These improvements decrease log collection time and log size by not collecting redundant information.

Health check improvements:
- Include consistency group information in the overall health check.
- Include WAN link information in the overall health check.

Storage array based volume expansion

Storage array based volume expansion enables storage administrators to expand the size of any virtual volume by expanding the underlying storage volume. The supported device geometries include virtual volumes mapped 1:1 to storage volumes, virtual volumes on multi-legged RAID-1, and distributed RAID-1, RAID-0, and RAID-C devices under certain conditions. The expansion operation is supported through expanding the corresponding Logical Unit Numbers (LUNs) on the back-end (BE) array. Storage array based volume expansion might require that you increase the capacity of the LUN on the back-end array. Procedures for doing this on supported third-party LUNs are available with the storage array based volume expansion procedures in the generator.
Note: Virtual volume expansion is not supported on RecoverPoint enabled volumes.


VAAI

VMware API for Array Integration (VAAI) now supports WriteSame (16) calls. The WriteSame (16) SCSI command provides a mechanism to offload initializing virtual disks to VPLEX. WriteSame (16) requests that the server write blocks of data transferred by the application client to consecutive logical blocks multiple times.

Cluster repair and recover

In the event of a disaster that destroys the VPLEX cluster, but leaves the storage (including metadata) and the rest of the infrastructure intact, the cluster recover procedure restores the full configuration after the VPLEX cluster hardware is replaced.

FRU procedure A field replaceable unit (FRU) procedure is implemented that automates engine chassis replacement in VS2 configurations.

Emerson 350VA UPS support Either APC or Emerson uninterruptible power supplies (UPS) can be used in a VPLEX cluster. SYR collects the same data for Emerson units as it currently does for APC units.

SYR Reporting SYR Reporting is enhanced to collect Local COM switch information.

Element Manager API VPLEX Element Manager API has been enhanced to support additional external management interfaces. Supported interfaces include: ProSphere for discovery and capacity reporting/chargeback UIM for provisioning and reporting on VPLEX in a Vblock Foundation Management for discovery of VPLEX in a Vblock Archway for application consistent PiT copies with RecoverPoint Splitter

Event message severities All VPLEX events with a severity of ERROR in previous releases of GeoSynchrony have been re-evaluated to ensure the accuracy of their severity with respect to Service Level Agreement requirements.

Back end (BE) Logical Unit Number (LUN) swapping


The system detects and corrects BE LUN swaps automatically. Once a LUN remap is detected, VPLEX corrects its mapping to prevent data corruption and sends a call-home event.

Invalidate cache procedure This is a procedure to invalidate the cache associated with a virtual-volume or a set of virtual-volumes within a consistency-group that has experienced a data corruption and needs data to be restored from backup.

Customer settable password policy You can set the password policy for all VPLEX administrators, for example, specifying the minimum password length and the password expiration date.

VPLEX presentation of fractured state for DR1 replica volumes

DR1 RecoverPoint replica volumes do not function as DR1s while in use; the CLI and GUI reflect this by reporting their status as disconnected.

Performance improvement for larger IO block sizes System performance of write operations is improved for block sizes greater than 128KB.

VPLEX presentation of Fake Size

Fake Size is the ability to use replica volumes that are larger than the production volume. The limitation that the source LUN and target LUN of a RecoverPoint replica must be of identical size has been removed. Using the VPLEX RecoverPoint Splitter, you can now replicate to a target LUN that is larger than the source LUN. To use the Fake Size feature, you must be running RecoverPoint 4.0 or higher.
Note: If a RecoverPoint fail-over operation is used, which swaps the production/replica roles, it is possible for the new production volume to have a fake size instead.

RecoverPoint Splitter support for 8K LUNs

Putting 8K volumes into use requires RecoverPoint 4.0 or later builds to be configured.


OS2007 bit

In Release 5.2, VPLEX supports the OS2007 bit on Symmetrix arrays. This setting is vital to detect LUN swap situations and storage volume expansion automatically on a Symmetrix array. The Symmetrix section of the Configuring Arrays procedure in the generator contains instructions on setting the OS2007 bit.

Configuration limits
Table 1 lists the configuration limits in the current release.
Table 1  Configuration limits

Object                                                          Maximum
Virtual volumes                                                 8000
Storage volumes                                                 8000
IT nexuses (a) per cluster in VPLEX Local                       3200
IT nexuses (a) per cluster in VPLEX Metro                       3200
IT nexuses (a) per cluster in VPLEX Geo                         400
IT nexuses per back-end port                                    256
IT nexuses per front-end port                                   400
Extents                                                         24000
Extents per storage volume                                      128
RAID-1 mirror legs (b)                                          2
Local top-level devices                                         8000
Distributed devices (includes total number of distributed
devices and local devices with global visibility)               8000
Storage volume size                                             32 TB
Virtual volume size                                             32 TB


Table 1  Configuration limits (continued)

Object                                                          Maximum
Total storage provisioned in a system                           8 PB
Extent block size                                               4 KB
Active intra-cluster rebuilds                                   25
Active inter-cluster rebuilds (on distributed devices)          25
Clusters                                                        2
Synchronous Consistency Groups                                  1024
Asynchronous Consistency Groups                                 16
Volumes per Consistency Group                                   1000
Paths per storage volume per VPLEX director                     4
Minimum bandwidth for VPLEX Geo IP WAN link                     1 Gbps
Minimum bandwidth for VPLEX Metro IP WAN link                   3 Gbps
Minimum bandwidth for VPLEX Metro with RAPIDPath IP WAN link    1 Gbps
Maximum WAN latency (RTT) in a VPLEX Metro                      5 ms
Maximum latency in a VPLEX Geo                                  50 ms

Note: A path is a connection from an initiator back-end port on the director to the target port on the array (IT connection).

a. A combination of host initiator and VPLEX front-end target port. b. The number of mirror legs directly underneath a RAID-1 device. A RAID-1 device can contain a RAID-1 device as a mirror leg (up to one level deep).
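The limits above lend themselves to a simple pre-deployment sanity check. The sketch below is illustrative only (not an EMC tool); it compares a planned object count against a few of the Table 1 maximums.

```python
# Illustrative sketch (not an EMC tool): checking planned object counts
# against a subset of the Table 1 configuration limits.
LIMITS = {
    "virtual_volumes": 8000,
    "storage_volumes": 8000,
    "extents": 24000,
    "volumes_per_consistency_group": 1000,
    "synchronous_consistency_groups": 1024,
    "asynchronous_consistency_groups": 16,
}

def over_limit(planned: dict) -> list:
    """Return the names of any objects whose planned count exceeds its limit."""
    return [name for name, count in planned.items()
            if count > LIMITS.get(name, float("inf"))]

print(over_limit({"virtual_volumes": 9000, "extents": 12000}))
# → ['virtual_volumes']
```

Counts at or below the documented maximum pass; anything above is flagged by name.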

Software versions

The software version for Release 5.2 is: 5.2.0.00.00.05. The software version number can be interpreted as follows: VPLEX A.B.C.DD.EE.FF


Where each position has the following meaning:

Digit Position   Description
A                Major release number
B                Minor release number
C                Service Pack number
DD               Patch number
EE               Hot Fix number
FF               Build number

For example, in VPLEX 5.2.0.00.00.38:

5    Major release number
2    Minor release number
38   Build number
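The version layout above can be expressed as a small parser. This is a minimal sketch assuming the documented A.B.C.DD.EE.FF format; the field names are taken from the table above and are not part of any VPLEX API.

```python
# Illustrative sketch: splitting a GeoSynchrony version string into the
# documented A.B.C.DD.EE.FF fields. Field names follow the table above.
FIELDS = ("major", "minor", "service_pack", "patch", "hot_fix", "build")

def parse_version(version: str) -> dict:
    parts = version.split(".")
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} dot-separated fields, got {len(parts)}")
    return dict(zip(FIELDS, (int(p) for p in parts)))

print(parse_version("5.2.0.00.00.38"))
# → {'major': 5, 'minor': 2, 'service_pack': 0, 'patch': 0, 'hot_fix': 0, 'build': 38}
```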

Interoperability

For Control Center support, refer to ECC Support Matrix to obtain the correct ECC version. For all other VPLEX interoperability, refer to the EMC Simple Support Matrix, which is available from: https://elabnavigator.emc.com

Fixed problems in Release 5.2


The following problems have been fixed in Release 5.2:


11467qVPX
Previously released versions of PowerPath do not fully support devices with two unique array serial numbers when running powermt display. Although multipathing functionality is unaffected, in cross-connect configurations PowerPath has the following cosmetic issues with distributed RAID-1 (DR-1) devices:
- powermt display dev=device only shows one of the VPLEX IDs
- powermt display paths might not show the correct number of VPLEX paths
- powermt display ports might not show the correct number of VPLEX ports
Refer to the ESSM for information on the environments supported by VPLEX.

15690qVPX
LUN swapping issues could result if storage volumes were deleted and created together. Release 5.2 detects and corrects LUN re-mapping on back-end storage volumes.

18228qVPX
A duplicate UUID was produced following an SSD replacement procedure if the most recent director state backup occurred before a configuration activity. This has been fixed in GeoSynchrony 5.2.

18522qVPX
The error reported by the health-check or VPLEXPlatformHealthCheck command for the "Checking port Status" check for B3-FC02, B3-FC03, A3-FC02, and A3-FC03 was incorrect. This error has been corrected in Release 5.2.

18653qVPX
VPLEX NDU caused disruption to a virtualized MSCS 2008 cluster running in a VMware environment on NMP. This disruption has been fixed.

18719qVPX
After certain boot-up conditions, a distributed device with two healthy legs could refuse to detach one of its legs. In Release 5.2, after all boot-up conditions, distributed devices re-attach all legs successfully.

19188qVPX
In previous versions of VPLEX, health-check full displayed an incorrect error about sub-page writes. In Release 5.2, the sub-page writes check is still performed, but the message is a warning, not an error.

19540qVPX
When NDU was performed on a quad-engine configuration on Local and Metro deployments, if there were metadata changes due to a WAN link failure, back-end failure, or any other reason, an abort message was displayed. In Release 5.2 the director stays up and there is no abort message.

19546qVPX
If local COM ports were disabled in a staggered fashion starting with director-1-1-A and so on, there was a temporary split brain between the directors in a cluster. This has been fixed in Release 5.2.


19945qVPX
EMC does not recommend or support the shrinking of volumes. However, in previous releases, if a volume was shrunk below 8 SCSI blocks, all directors would fail and end up in a reboot loop. This fix prevents that director failure if this recommendation is not adhered to.

19967qVPX
When there was asymmetrical COM visibility across sites (timing dependent), the clusters temporarily departed from each other (and a detach could happen depending on detach rules). This issue has been fixed in Release 5.2.

22035qVPN
During a metadata backup operation, an asynchronous task is queued to persist that change of metadata backup in the active metadata. If during that time there were many admin operations (such as removing a virtual volume), the director could assert due to a bug in the firmware. This defect has been fixed in Release 5.2.

20093qVPX
Normally, when the batteries are operational and there is a power failure, the cluster vaults. When power is restored, it unvaults, restoring its dirty data. However, if while the first cluster is down the other cluster was active (servicing I/O) and had an abrupt failure (perhaps it also had a power failure but its battery was not operational), it simply stops and does not vault. This cluster will lose data. When both clusters come up again, there will be a mismatch in the data between the cluster that has a vault and the one that does not. This should just result in the consistency groups becoming suspended with data loss. However, due to this defect, it also resulted in failures of the directors due to the data mismatch during synchronization of the data between the clusters.

20698qVPX
During the initial creation of a DR-1, the CLI and GUI reported that the DR-1 had a health state of major-failure. The health status of a DR-1 is now reported as minor-failure or degraded.

20759qVPX
If a system encountered director failures on the 2nd upgraders, ndu recover could engage the CLI to update the VPLEX Witness context information from directors that were in the process of initialization, resulting in incorrect information in the VPLEX Witness CLI context. If this happened, the VPLEX Witness upgrade failed as part of the ndu recover. In Release 5.2 the ndu recover works correctly through this issue.

22095qVPX
In some circumstances, RecoverPoint reported a failure attaching to the splitter. This occurred when attempting to attach the splitter to a VPLEX volume after some volumes were deleted from the environment. This issue has been fixed in Release 5.2.


22227qVPX
A consistency group with single-legged distributed virtual volumes would be suspended at both clusters if VPLEX Witness became isolated from the cluster with the healthy storage leg and the clusters partitioned. This issue was fixed in Release 5.2.

22733qVPX
In some cases, NDU failed in the Finishing NDU phase with the following error:
" * waiting for management connectivity: ..................ERROR
Encountered a problem while rebooting the second upgraders: interrupted sleep"
This error no longer occurs in upgrades to Release 5.2 or later.

22837qVPX
A timing issue under certain conditions was causing a full-sweep (loss of history) on RecoverPoint Consistency Groups. This issue has been fixed in Release 5.2.

23021qVPX
Interruption in VPLEX director connectivity to all RecoverPoint appliances resulted in corrupt bookmarks for replica images. This issue has been fixed in Release 5.2 and requires the use of RecoverPoint Release 4.0 SP1 or later.


Known problems and expected behaviors


This section describes known problems, expected behaviors, and documentation issues.

Known problems

The following issues should be noted:


7910qVPX
Windows 2003 and Windows 2008 hosts with the Storport storage driver experienced I/O disruption during the I/O transfer phase of a VPLEX GeoSynchrony upgrade (NDU). Upgrades from Release 5.2 no longer see this issue.

12403qVPX
A conflicting detach can fail with "No conflict detected in the set". Contact EMC support for a workaround procedure to clear the consistency group.

13466qVPX
During a device or extent migration, the virtual volume or devices built from the migration source report degraded for operational status, minor failure for health status, and rebuilding for health indication. This is due to the data copy from the source to the target of the migration. When the migration is in the commit-pending state, the operational status and health indications should return to their original values.

14386qVPX
When the VPlexPlatformHealthCheck command is run before configuring the system, it reports the SPS status as OK even if the serial cable is not connected between the director serial port and the SPS. Sample output:
. . .
Sps : OK
stand-by-power-supply-A : OK
Note: Check the serial cable connections before proceeding.

14459qVPX
When a third-party array registers with VPLEX with zero LUNs, a critical call home message is sent: "Target device is detected to be unsuitable for operation with VPlex."
Workaround: Once the LUN is exported from the third-party array to the VPLEX, it recognizes the array and identifies it as a supported storage device.


14500qVPX
After an NDU failure, the ndu recover command might fail to enable the VPLEX Witness.
Workaround: After running ndu recover, ensure that the VPLEX Witness is running. If it is not, manually enable the VPLEX Witness. Refer to the Generator for troubleshooting information for this procedure.

15559qVPX
NetApp array names might not be unique because NetApp does not provide the array serial number in any inquiry data. VPLEX identifies an array using a portion of its World Wide Port Name (WWPN):
NetApp Name = <Vendor ID>~<Product ID>~<lower 28 bits of WWPN>
Example:
T10 Vendor ID: NETAPP
Product ID: LUN
ITL = "x fcp i 0x5000144240014d20 t 0x500a098587792de8 0x0002000000000000" (t = NetApp port name)
Name = NETAPP~LUN~7792de8
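The naming rule above can be reproduced with a short calculation. This sketch is illustrative only (the helper name is hypothetical, not VPLEX code); it keeps the lower 28 bits of the target-port WWPN, which for the example WWPN are the last seven hex digits.

```python
# Hypothetical helper illustrating the naming rule above (not VPLEX code).
def netapp_array_name(vendor_id: str, product_id: str, target_wwpn: int) -> str:
    """Build the VPLEX-style name from the lower 28 bits of the target WWPN."""
    low28 = target_wwpn & ((1 << 28) - 1)   # mask keeps the lower 28 bits
    return f"{vendor_id}~{product_id}~{low28:x}"

# Target port WWPN (t) from the example ITL above:
print(netapp_array_name("NETAPP", "LUN", 0x500a098587792de8))
# → NETAPP~LUN~7792de8
```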

15823qVPX
Storage volumes responding to I/Os with continuous "SCSI Busy" on storage arrays connected to VPLEX can negatively affect performance on a VPLEX system.

15985qVPX
In rare cases, a VPLEX director can fail during NDU Geo if discovery of the port state takes too long, causing failure of the NDU Geo process. This issue does not cause data unavailability.
Workaround:
1. Issue the ndu recover command.
2. Re-start the upgrade using the ndu start --force-geo option.

16196qVPX, 17126qVPX
In a cross-connected host configuration, when the clusters rejoin after a partition, the losing cluster for the distributed device suspends I/O (until it is manually resumed) if auto-resume is false. This causes cross-connected hosts to continuously try the I/Os on the path to the losing cluster. To avoid this scenario, set auto-resume to true for the consistency groups corresponding to the volumes provisioned to the cross-connected hosts.

17218qVPX
While the clusters are in contact, the VPLEX system prevents the same storage volume from being claimed at each cluster. However, if the clusters are partitioned, VPLEX cannot prevent the same storage volume from being claimed at both clusters. If this happens, once VPLEX detects it, a call home is sent. This issue should be corrected as soon as it is detected.


17963qVPX
In some cases, when a data migration is started on one cluster and is viewed from the management server on the other cluster, the migration can appear to be in an error state on the management server of the second cluster. In fact, the migration has completed, been committed, and been removed on the original management server.
Workaround: On the management server that displays the error, cancel and then remove the migration jobs.

18029qVPX, 21987qVPX
When attempting to reduce the number of back-end paths from VPLEX to logical units on the storage array, be extremely careful with zoning, LUN masking, and the reported size of the logical unit. This process can cause storage views to enter an error state, LUN shrinking messages, or unreachable volumes if the steps are not executed correctly. If a storage volume on VPLEX has its LUN unmasked on the array, and the VPLEX storage volume is not unclaimed and forgotten and an array re-discover is not run, a LUN swapping condition can occur if the back-end LUN on the array is re-masked to the VPLEX with a different LUN ID. A non-disruptive procedure to reduce the number of paths to a storage array volume is available in the Generator. Problem 21987qVPX was fixed in Patch 4 but is present in previous releases.

18502qVPX
In some cases, when one leg of a distributed device within a consistency group is unhealthy and marked for a rebuild, removal of the unhealthy leg results in an error.
Workaround:
1. Find the consistency groups containing the distributed virtual volume.
2. Remove the distributed virtual volume from the consistency groups.
3. Re-try the detach mirror command.
4. Replace the unhealthy leg with a healthy leg.

20446qVPX
When running configuration system-setup after pre-configuring call-home on VPLEX using configuration event-notices-reports-config, the system-setup command comes up immediately to the 'Review and Finish' screen with incomplete answers. This is due to the configuration event-notices-reports-config command incorrectly marking the interview as completed.
Workaround: Do one of the following:
1. Exit the VPlexcli. Remove the file /var/log/VPlex/cli/VPlexconfig.xml. Then go back into the CLI, and execute configuration system-setup. You should be brought to the start of the interview process.
2. Answer no to the question "Would you like to run the setup process now? (yes/no)". This will bring you back to the start of the interview process, and it can be completed from there.


20632qVPX
On very large scale systems, collect-diagnostics can take a long time.
Workaround: Use collect-diagnostics --large-scale for large scale systems where collect-diagnostics takes longer than expected.

21014qVPX
When adding LUNs to an array, if I/O is not running to all VPLEX directors, the new LUNs will not be detected. When a LUN is not visible from all directors, the array's connectivity status is set to error.
Workaround: If this status is reported after adding LUNs, run array re-discover. Once all directors detect the new LUNs, the array connectivity status will be OK.

22327qVPX
When clusters are joined, if one or more directors loses WAN connectivity, the two clusters partition. If a director without WAN connectivity attempts to join a system with joined clusters, that director is prevented from joining the system until its WAN connectivity is restored.

22584qVPX
In some cases, VMAX LUNs with a device ID greater than A000 are claimed by Unisphere for VPLEX with an incorrect name starting with 0000.
Workaround: Use the VPlexcli claiming wizard instead to claim these volumes with correctly generated names.

22943qVPX
Rebuild transfer sizes larger than 2 MB may cause performance issues on host I/O.

22962qVPX
If the auto-boot flag of the director is set to false when you are performing the SSD field replacement, the replacement fails while restarting the director firmware.
Workaround: Before performing the SSD replacement procedure, enter the following command:
ll /engines/*/directors/<Director Involved in Replacement>
If auto-boot is set to false, enter the following command:
set /engines/*/directors/<Director Involved in Replacement>::auto-boot true
Run the ll command again to ensure that the change was saved.


22963qVPX
If the auto-boot flag on the director is set to false when you are performing a director replacement, the director replacement fails while restarting the director firmware.
Workaround: Before performing the director replacement procedure, enter the following command:
ll /engines/*/directors/<Director Involved in Replacement>
If auto-boot is set to false, enter the following command:
set /engines/*/directors/<Director Involved in Replacement>::auto-boot true
Run the ll command again to ensure that the change was saved.

23050qVPX
An extent whose underlying storage volume has been removed from the system cannot be destroyed. See Primus solution emc319893.

23058qVPX
Paths to VPLEX storage are not restored automatically on Solaris 11 on x86 with PowerPath multipathing when the initiator is removed and added back in the VPLEX storage view.
Workaround: To recover from this problem, on the host, execute the following commands:
1. cfgadm -al
2. devfsadm -C
3. powermt restore

23223qVPX
Paths on the standby node of a Windows Server Failover Cluster 2012 with MPIO do not recover automatically after the HBA initiators on the standby node are removed and added back to a VPLEX storage view.
Workaround: To recover the paths, reboot the standby node.

23280qVPX
If a RecoverPoint journal is full while a replica is in image access mode, director restarts may be delayed. RecoverPoint blocks I/O to replica volumes if the associated journal is full. A VPLEX director restart will be delayed if the following conditions are true:
1. There are active I/Os to a VPLEX volume that is serving as a replica volume.
2. The RecoverPoint journal associated with the replica volume is full.


23332qVPX
When the subnet address for any subnet under cluster connectivity is set to an address NOT ending in ".0", the health-check command output for "IP WAN COM connectivity" reports that the remote cluster is "not-configured" for the associated peer cluster port group. This may lead to the concern that IP WAN COM connectivity is not configured.
Workaround: When a subnet address is set to a value not ending in ".0", use the connectivity validate-wan-com command to check connectivity after the configuration is completed. If this command reports OK for all port groups, this is an assurance that IP WAN COM connectivity is successfully configured.

23362qVPX
Clustered Windows hosts configured with native MPIO may fail I/O when VPLEX GeoSynchrony is upgraded (NDU). The native MPIO path failover can take more than 45 seconds to discover the new paths presented by the NDU process, causing I/O to fail. I/O continues after about 45 seconds.

23451qVPX
During an NDU session, right at the end of the I/O transfer phase, the NDU procedure can roll back because of metadata updates seen on second upgraders. Below is sample output from an NDU session:
Enabling front-end on 1st upgraders (IOFWD is active): .........................DONE
WARNING: A meta-volume update was observed on 2nd upgraders after 1st upgraders had already processed meta-data.
If there is a metadata update made by the second upgraders within the I/O transfer phase, then the first and second upgraders will have different (inconsistent) views of the metadata. Because the first upgraders cannot read the update made by the second upgraders, NDU has to roll back the first upgraders. This issue is mostly observed on Geo systems because the inter-cluster link is intentionally disabled during NDU. If a virtual volume gets write I/O during NDU, the first write requires VPLEX to mark the volume out-of-date, which in turn requires a metadata update. Depending on the I/O workload pattern, a virtual volume can get its first write I/O (since NDU disabled the inter-cluster link) within the I/O transfer phase.


Issue Number 23565qVPX

Issue Number: 23570qVPX
Known Problem: VPLEX supports only block-based storage devices that use a 512-byte sector for allocation and addressing; ensure that any storage array connected to VPLEX supports or emulates 512-byte sectors. Storage devices that do not use 512-byte sectors may be discovered by VPLEX, but they cannot be claimed for use within VPLEX and cannot be used to create a meta-volume. When you try to claim such a discovered storage volume, or to create a meta-volume from it, using the appropriate VPLEX CLI commands, the command fails with this error: the disks has an unsupported disk block size and thus can't be moved to a non-default spare pool. Similarly, a storage device whose capacity is not divisible by 4 KB (4096 bytes) may be discovered by VPLEX but cannot be claimed. Attempting to claim such a volume fails with this error: The storage-volume <storageVolumeName> does not have a capacity divisible by the system block size (4K) and cannot be claimed.

Issue Number: 23602qVPX
Known Problem: A large number of I/O timeouts on an array to different storage volumes can potentially impact I/O to healthy arrays, causing both the healthy and the unhealthy storage volumes to be marked hardware-dead. The healthy storage volumes are auto-resurrected.

Issue Number: 23609qVPX
Known Problem: Observe the following restrictions for RecoverPoint volumes:
- The visibility property of RecoverPoint repository and journal volumes should never be set to "global".
- RecoverPoint repository and journal volumes should never be placed in a storage view with any hosts other than the RecoverPoint appliances that use them.
- If distributed volumes are ever used as RecoverPoint repository and journal volumes, their winner-loser rules should be set such that the site with the RecoverPoint appliance is the winner.

Issue Number: 23737qVPX
Known Problem: If the storage volume that makes up a leg of a VPLEX Raid-1 begins to perform poorly, VPLEX does not isolate that leg. The performance problems on the storage volume result in a performance problem on the virtual volume and, ultimately, on the host that is accessing that virtual volume. The problem persists until performance to that storage volume improves or the storage volume is removed from the Raid-1. Refer to the troubleshooting procedures for information on how to find and remedy degraded disks. VPLEX currently uses all degraded I/O paths if there are no healthy I/O paths left; it does not try to distinguish more degraded paths from less degraded paths.
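The claim-eligibility rules above (512-byte sectors, capacity divisible by the 4K system block size) can be sketched as a simple check. This is an illustrative model only, not VPLEX code; the function name and return format are invented for the example.

```python
SECTOR_SIZE = 512    # the only sector size VPLEX supports
SYSTEM_BLOCK = 4096  # claimed capacities must be divisible by 4 KB

def claim_errors(sector_size: int, capacity_bytes: int) -> list:
    """Return the reasons (if any) a discovered storage volume cannot be claimed."""
    errors = []
    if sector_size != SECTOR_SIZE:
        errors.append("unsupported disk block size")
    if capacity_bytes % SYSTEM_BLOCK != 0:
        errors.append("capacity not divisible by the 4K system block size")
    return errors

# A 512-byte-sector volume with a 4 KB-aligned capacity is claimable:
print(claim_errors(512, 10 * 1024**3))  # -> []
# A 520-byte-sector volume is discovered but cannot be claimed:
print(claim_errors(520, 10 * 1024**3))  # -> ['unsupported disk block size']
```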


EMC VPLEX with GeoSynchrony 5.2 Release Notes

Known problems and expected behaviors

Issue Number: 23826qVPX
Known Problem: When configuration system-setup is run, the command has been observed to hang on completion. This can also occur on completion of configuration complete-system-setup or NDU. The issue is due to a deadlock that can occur when call home is enabled and versions are refreshed for the first time. Workaround:
1. Open a new console window on the management station.
2. Restart the VPLEX CLI process. At the management station unix prompt, issue:
sudo /etc/init.d/VPlexManagementConsole restart
3. Log back in to the VPlexcli and continue the bring-up procedure from the point of the problem. You do not need to rerun the command that hung.

Issue Number: 23836qVPX
Known Problem: The VPLEX CLI cluster-status command currently reports only unhealthy devices and volumes. Unhealthy extents are not reported.

Issue Number: 23928qVPX
Known Problem: The log rebuild to a DR1 can get stuck and never complete if all of the following conditions occur:
- The logging volume and the rebuilding leg of the DR1 fail simultaneously.
- A write to the rebuilding leg was outstanding before those failures and is pending completion.
- The log rebuild region overlaps with the pending write completion region.
Contact EMC support for remedial steps.

Issue Number: 23929qVPX
Known Problem: Incorrect information in the portroles and portlayout files can cause the EZ-Setup process to get stuck without discovering the back-end ports and storage. The side effect is that the meta-volume cannot be configured and the directors cannot gain quorum. Workaround: Contact EMC support for help.

Issue Number: 23964qVPX
Known Problem: Distributed devices with only one leg are reported as healthy rather than degraded.

Issue Number: 23973qVPX
Known Problem: The VPN service does not restart automatically after the Ethernet port on the management server goes down and comes back up. Workaround: Restart the VPN using:
VPlexcli:/> vpn restart


Issue Number: 24005qVPX
Known Problem: In vSphere 5.0, a new datastore created with VMFS5 (the default) uses ATS only for all VM operations (VM creation, snapshots, and so on). When ATS (CAW) is disabled in VPLEX, all operations to the datastore therefore fail. Workaround: Reset this mode on the datastore formatted for VMFS5. This can be done only through the ESX CLI, not through the vSphere client GUI. There is a troubleshooting procedure for this issue in the generator.

Issue Number: 24114qVPX
Known Problem: The current FRU replacement script for I/O modules does not warn you if a different type of I/O module was inserted. Workaround: Before beginning the replacement, double-check that the replacement I/O module is of the same type (Fibre Channel or IP) as the one you are replacing.

Issue Number: 24123qVPX
Known Problem: During the Engine Chassis Replacement procedure, the "Verifying that the FRU replacement was successful" step might fail if the director attributes operational-status and health-status are not "ok" in the director context in the VPLEX CLI. This is observed when the FRU replacement script verifies the director attributes before the VPLEX CLI has updated them with the correct values. In this case the verification step might fail even though the Engine Chassis Replacement completed successfully. Workaround: See the related troubleshooting issue in the generator.

Issue Number: 24166qVPX
Known Problem: A GeoSynchrony upgrade rolls back because of a director failure. Rollback fails for an upgrade from 5.1 Patch 4 to 5.2 on Geo configurations when there is a director failure in the second upgrade set. Workaround:
1. Resolve the director failure issue.
2. Check all WAN-COM ports. Enable any disabled ports and wait for all directors to join the quorum.
3. Issue ndu recover to complete the recovery of the failed NDU and to restore the system settings.

Issue Number: 24199qVPX
Known Problem: If a distributed virtual volume has inconsistent logging settings, removing a mirror on that distributed virtual volume without the --discard option can lead to a director firmware failure. Workaround: In the generator, see the troubleshooting issues for distributed devices.


Issue Number: 24207qVPX
Known Problem: During Engine Expansion, the procedure fails when not all of the directors of the expansion engine(s) netboot, and the following error message is displayed:
The replacement procedure has encountered a fatal error and cannot continue. The collect-diagnostics command has been initiated and will be needed in the investigation of this issue.
Perform the following steps to retry the Engine Expansion procedure:
1. Shut down all the directors of the expansion engine(s).
2. Verify the cable connections and make sure all cables are connected as described in the procedure document.
3. Import the VPlexadmin module.
4. Re-run the VPlexadmin command add-engines and power on the directors as instructed by the script.
If the problem persists, contact the EMC Support Center.

Issue Number: 24245qVPX
Known Problem: While logging in to the VPlexcli, localhost resolution might fail, resulting in a failed login. This issue can occur if there are incorrect permissions on the /etc/hosts file or if the localhost information has been removed or commented out of /etc/hosts. Workaround: Correct the permissions or the localhost information in the /etc/hosts file.

Issue Number: 24321qVPX
Known Problem: Event 0x8A4830DC lists an incorrect remedy. The remedy for event 0x8A4830DC should read: Ensure both Fibre Channel switches are plugged in to different UPSs and that the management server is plugged in to one of the UPSs.

Issue Number: 24324qVPX
Known Problem: The /var/log/ partition can fill up over time because of a large number of capture log files. Workaround: To reduce usage of the /var/log partition:
- Move the capture logs out of the /var/log/ partition.
- Restart the management server.


Issue Number: 24213qVPX
Known Problem: During the execution of VPLEX EZ-Setup on a Geo or Metro system, the warning "The Fibre Channel switch points to an incorrect NTP Server" is displayed, indicating that the NTP server was not configured correctly on the Fibre Channel switch. Workaround:
1. Get the IP address of eth2 on the management server.
2. Use the procedure generator to create the document and follow Task 15: Establish a HyperTerminal connection to the switch.
3. Log in to side A of the switch and execute the tsclockserver command with the eth2 IP address to point to the correct NTP server. If the command fails, retry the command.
4. Repeat the same procedure on side B of the switch.

Issue Number: 24484qVPX
Known Problem: On an upgrade from Release 5.1 to Release 5.2, changes to the password minimum length attribute do not get updated. Workaround:
1. Log in as 'admin' to the VPlexcli.
2. Navigate to the /security/authentication/password-policy context.
3. Execute the reset command, enter the admin password, and agree at the warning prompt. The values are reset to the Kiev password policy.
4. After the above steps, any of the password policy attributes can be modified using the set attribute-name value command in the password-policy context.

Issue Number: 24550qVPX
Known Problem: During NDU in a VPLEX Metro, if the upgrading cluster cannot see its peer during the post-NDU tasks, the NDU displays an error message indicating that it failed to configure call home. Workaround: Run the configuration update-callhome-properties command.

Expected behaviors

If you intend to use the entire capacity of a storage volume, configure a full-sized extent on top of that storage volume. That way, when the storage volume grows, it is likely that you want the extent to grow as well; when the storage volume grows, the Current Capacity of the extent increases as well and is available for expansion. In this case, the volume displays the amount of actual expandable capacity. If you expect to use only a specific amount of the storage volume and no more, configure a less-than-full-sized extent on that storage volume. In this case, the volume displays 0B expandable space even if it displays expandable = true.


This does not mean that you cannot use this extra storage. In the GUI, you can expand the volume: use the Show Available button to find the available space on that same volume, and then use that space to expand with.
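The distinction between full-sized and less-than-full-sized extents can be sketched as a small model. This is an illustrative approximation of the reporting rule described above, not actual VPLEX code; the function name and parameters are invented.

```python
def expandable_capacity(extent_capacity: int,
                        original_volume_capacity: int,
                        grown_volume_capacity: int) -> int:
    """Model the expandable space a volume reports after its backing
    storage volume grows.

    A full-sized extent tracks the storage volume, so the grown space
    becomes available for expansion; a less-than-full-sized extent
    reports 0B even though the volume may still show expandable = true.
    """
    if extent_capacity == original_volume_capacity:  # full-sized extent
        return grown_volume_capacity - original_volume_capacity
    return 0  # less-than-full-sized extent

GiB = 1024**3
print(expandable_capacity(10 * GiB, 10 * GiB, 12 * GiB))  # full-sized extent
print(expandable_capacity(5 * GiB, 10 * GiB, 12 * GiB))   # partial extent
```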

Using the CLARiiON Navisphere Management Suite, if you change the active storage processor (SP) for a LUN, the incorrect SP may be reported as active in the VPLEX user interface. For example, SPA may be reported as active when in fact SPB is active. To correct this reporting inaccuracy, start I/O. After I/O initiates, the system recognizes which SP is active and reports it correctly.

If host I/O performance is impacted during a data migration or during a rebuild, lower the rebuild transfer-size setting for the devices, or reduce the number of concurrent migrations/rebuilds.

Ensure that the host resources are sufficient to handle the number of paths provisioned for your VPLEX system.

Poor QoS on the WAN-COM link in a Metro or Geo configuration can lead to non-deterministic behavior and, in extreme cases, data unavailability. Follow the Best Practices to configure and monitor WAN-COM links.

VPLEX in Metro and Geo configurations does not provide native encryption over the IP WAN-COM link. Customers should deploy an external appliance to achieve data encryption over the IP WAN links between clusters.

When a storage volume becomes hardware-dead, VPLEX automatically probes the storage volume within 20 seconds. If the probe succeeds, VPLEX removes the dead status from the volume, returning it to a healthy state.

WARNING During the time that a device is hw-dead, do not perform operations that change data on the storage volumes underneath a VPLEX Raid-1 (through maintenance or by replacing disks within the array). If such operations need to be performed, first detach the storage volumes from the VPLEX Raid-1, perform the data-changing operations, and then re-add the storage volumes to the VPLEX Raid-1 as necessary to trigger a rebuild. Failure to follow these steps changes data underneath VPLEX without its knowledge; without a data rebuild the Raid-1 legs might be inconsistent, which may lead to data corruption upon resurrection.

By default, the account of any user created on the management server who has not changed their password in the last 91 days is locked. The admin user account is never locked out, but the admin user is forced to change their password on the next login. Refer to the Password Policy section of the Generator troubleshooting section to overcome account lockouts. Policies are not enforced for the service user.

A back-end failure on both legs of a distributed RAID (a back-end failure at each cluster) that belongs to an asynchronous consistency group causes the operational status for the consistency group to display: "requires-resume-after-data-loss-failure".

Under the rare circumstances where both clusters completely fail at the same time, simultaneously vault, or one cluster vaults and the other cluster completely fails before the vault is recovered, contact EMC support for assistance with recovery.
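The 91-day password-age policy described above can be summarized in a small model. This is illustrative only; the account names, states, and function signature are invented for the example, and the real policy is enforced by the management server.

```python
from datetime import date, timedelta

LOCKOUT_AGE = timedelta(days=91)  # passwords older than this trigger the policy

def account_state(user: str, last_password_change: date, today: date) -> str:
    """Hypothetical model of the password-age policy for management-server users."""
    expired = today - last_password_change > LOCKOUT_AGE
    if user == "service":
        return "active"  # policies are not enforced for the service user
    if user == "admin":
        # admin is never locked out, but must change the password on next login
        return "must-change-password" if expired else "active"
    return "locked" if expired else "active"

print(account_state("monitor", date(2013, 1, 1), date(2013, 6, 25)))  # -> locked
```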

WARNING When performing maintenance activities on a VPLEX Geo configuration, service personnel must not remove the power in one or more engines unless both directors in those engines have been shut down and are no longer monitoring power. Failure to do so leads to data unavailability in the affected cluster. To avoid unintended vaults, always follow the official maintenance procedures.

Devices used as system volumes (the VPLEX meta-volume, its mirrored copy, logging volumes, and backups for the meta-volume and mirror) must be formatted/zeroed out before being used by VPLEX as a meta-volume.

There are two types of failure handling for back-end array interactions: unambiguous failure responses, such as requests rejected by the storage volume or a port leaving the back-end fabric; and the condition where a storage array enters a fault mode such that one or more of its target ports remain on the fabric while all SCSI commands initiated to them by the initiator (VPLEX) time out.


Prior to Release 5.1 Patch 3, VPLEX handled the unambiguous failure responses by isolating the failed storage volume or path (Initiator-Target Nexus). In the second condition, VPLEX would not take any isolation action for these paths. In Release 5.1 Patch 3 and later, VPLEX now isolates the paths which remain on the fabric but stay unresponsive. In this case, I/O requests initiated by a host initiator to VPLEX virtual volumes are redirected away from unresponsive paths to the back-end array, onto paths that are responsive. At the time of isolation, VPLEX issues a call home event.
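The isolation behavior since Release 5.1 Patch 3 can be sketched as a simplified selection rule. The function name, path states, and data shapes below are illustrative only, not VPLEX internals.

```python
def eligible_paths(paths: dict) -> list:
    """Pick back-end paths eligible for redirected I/O.

    Paths that stay on the fabric but time out all SCSI commands are
    isolated (the 5.1 Patch 3 behavior), alongside unambiguous failures;
    I/O initiated by a host to VPLEX virtual volumes is redirected onto
    the remaining responsive paths, and a call-home event is issued at
    the time of isolation.
    """
    return [name for name, state in paths.items() if state == "responsive"]

# Two of three initiator-target paths remain responsive:
print(eligible_paths({"A0": "responsive", "B0": "unresponsive", "A1": "responsive"}))
```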

Changing the time-zone on VPLEX components is not supported.

If a RecoverPoint appliance and a VPLEX director restart at the same time, a full sweep may occur.

In VPLEX Metro configurations, RecoverPoint Appliances (RPAs) can be configured at only one VPLEX cluster. Data from the cluster where the RPAs are configured is replicated to the peer VPLEX cluster (by VPLEX), and to a third site (by RecoverPoint).

Virtual image access is not supported.

Device migrations between two VPLEX clusters are not supported if one leg of the device is replicated by RecoverPoint.

Veritas DMP settings with VPLEX


If a host attached to VPLEX is running Veritas DMP Multipathing, change the following values of the DMP tunable parameters on the host to improve the way DMP handles transient errors at the VPLEX array in certain failure scenarios:
1. Set the dmp_lun_retry_timeout for the VPLEX array to 60 seconds:
vxdmpadm setattr enclosure emc-vplex0 dmp_lun_retry_timeout=60
2. Set the recoveryoption to throttle and iotimeout to 30:
vxdmpadm setattr enclosure emc-vplex0 recoveryoption=throttle iotimeout=30

Expected behaviors - VPLEX Geo


The following expected behaviors are specific to VPLEX Geo:


Migration of volumes between clusters in a VPLEX Geo environment should be performed during a maintenance window, when host access to the volumes can be stopped. Currently, VPLEX volume migrations are always synchronous, even across VPLEX Geo asynchronous distances. For the duration of the migration, this synchronous replication can lead to data unavailability for applications sensitive to Geo latency. As such, this operation has been disallowed in the VPLEX GUI; the migration can be done using the CLI only.

Replication of volumes between clusters in a Geo environment should be performed during a maintenance window, when host access to the volumes can be stopped. Currently, VPLEX volume replication (converting a local volume to a distributed asynchronous volume) temporarily changes the asynchronous volumes to synchronous volumes. This change in latency is disruptive to applications and risks data unavailability. The replication operation is available in both the GUI and the CLI. However, EMC does not support this operation when hosts are accessing the volumes.

Use of remote volumes in VPLEX Geo is not supported. While both the VPLEX GUI and the CLI allow creation of remote volumes in VPLEX Geo, the only cache mode available at this time is synchronous. If the user makes use of remote volumes, there is a risk of data unavailability and for this reason, remote volumes in VPLEX Geo are not supported. Furthermore, should remote volumes be present, the upgrade software prevents Non-Disruptive Upgrade of the VPLEX software to the next release until the remote volumes are either converted to local volumes or to distributed volumes.
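The upgrade precondition above can be modeled as a small check. This is an illustrative sketch, not the actual NDU pre-check; the function name and the locality labels are invented for the example.

```python
def ndu_allowed(system_type: str, volume_localities: list) -> bool:
    """In a Geo system, Non-Disruptive Upgrade is blocked while any remote
    volume exists; such volumes must first be converted to local or
    distributed volumes."""
    if system_type != "geo":
        return True
    return "remote" not in volume_localities

print(ndu_allowed("geo", ["local", "remote"]))       # blocked
print(ndu_allowed("geo", ["local", "distributed"]))  # allowed
```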

Deletion of asynchronous consistency groups requires that the applications accessing the volumes in the asynchronous consistency group be halted or shut down. When deleting the asynchronous consistency group, the volumes in the group are removed and the cache mode of the volumes is changed to synchronous. If the applications using these volumes are up and running, there is a risk of data unavailability. While neither the GUI nor the CLI prevents this action, EMC Support will be limited to those cases where the applications using the volumes have been halted or the volumes removed from the storage view.


Removal of a volume from an asynchronous consistency group without first halting the applications accessing the volume is not supported. Removing a volume from an asynchronous consistency group automatically changes its cache mode from asynchronous to synchronous. If the application attached to the volume is up and running, there is a risk of data unavailability. The GUI allows this operation only if the volume has been removed from its storage view, which makes it inaccessible to the host. Shut the application down before removing the volume from the storage view to avoid unexpected data unavailability. The CLI does not provide this safeguard, and EMC does not support the removal of volumes from an asynchronous consistency group unless the volumes have been removed from the storage view or the application has been shut down.

Modification of the cache mode of an asynchronous consistency group to synchronous is not supported. Changing the cache mode of a consistency group from asynchronous to synchronous risks data unavailability. The VPLEX GUI does not allow the user to make such a change. While the CLI still allows the user to change the cache mode from asynchronous to synchronous, it provides no safeguard against data unavailability. EMC does not support changing the cache mode of a consistency group from asynchronous to synchronous.

Exposure of synchronous volumes (remote and synchronous distributed RAID-1) to views could cause data unavailability. EMC does not support this configuration.

Expected behaviors RecoverPoint


In a VPLEX Metro/RecoverPoint environment, the detach rule for a VPLEX consistency group protected by RecoverPoint must be configured so that the winner is the VPLEX cluster with RecoverPoint attached. During a WAN link outage, the other cluster suspends. If the suspended cluster resumes, WAN link restoration results in a conflicting detach. If the conflicting detach is to be resolved from the VPLEX cluster without RecoverPoint protection, the RecoverPoint protection for the VPLEX consistency group must be torn down, and the RecoverPoint-enabled flag removed from the VPLEX consistency group. If an attempt is made to resolve the conflicting detach before the RecoverPoint protection is torn down, the following error is seen:

In the presence of splitting, cannot declare a winning cluster other than the one doing splitting.

At that point, you can resolve the conflicting detach and recreate the RecoverPoint protection. If VPLEX resolves the conflicting detach from the VPLEX cluster with RecoverPoint attached, no teardown of RecoverPoint protection is required.

Documentation updates

Documentation errata
The following errors were found in the VPLEX documentation for Release 5.2 since product release: In the CLI Guide, in the password-policy set command reference page, the password-minimum-length value is 14, not 8.

Documentation
The following documentation is available to support VPLEX:

EMC VPLEX Hardware Installation Guide: High-level overview of the steps to configure a new VPLEX installation, and references to the applicable documents.
EMC VPLEX Site Preparation Guide: Steps to prepare the customer site for VPLEX installation.
EMC VPLEX Configuration Worksheet: Tables of parameters for which specific values are required to configure a VPLEX cluster. The tables include space to enter the required values.
EMC VPLEX Configuration Guide: Detailed steps to configure a VPLEX implementation at the customer site.
Unisphere for VPLEX Online Help: Information on performing various VPLEX operations available in the VPLEX GUI.
EMC VPLEX Administration Guide: High-level information on system administration topics specific to VPLEX.
EMC VPLEX CLI Guide: Descriptions of VPlexcli commands.
EMC VPLEX Element Manager API Guide: Describes the element manager programming interface.
EMC VPLEX Security Configuration Guide: Provides an overview of security configuration settings available in VPLEX.
EMC VPLEX Open-Source Licenses: For reference. This document contains the content of the open-source licenses used by VPLEX.
EMC Regulatory Statement for EMC VPLEX: Provides regulatory statements in a single document, eliminating the need to duplicate them across documents.
SolVe Desktop generator: Replaces the EMC VPLEX Procedure Generator. For use in performing upgrades, component replacement, troubleshooting, and miscellaneous management procedures. The SolVe Desktop is available on the EMC Online Support website for download and use on local PCs. This tool is also referred to in this document as the Generator.
Implementation and Planning Best Practices for EMC VPLEX Technical Notes


EMC VPLEX Product Guide: High-level overview of the VPLEX hardware and GeoSynchrony 5.2 software, including descriptions of common use cases.
EMC Best Practices Guide for AC Power Connections in Two-PDP Bays

Installation
To install and set up a new VPLEX implementation, use the documents in the following order:
1. EMC VPLEX GeoSynchrony Release Notes
2. EMC VPLEX Site Preparation Guide
3. EMC VPLEX Configuration Worksheet
4. EMC VPLEX Configuration Guide

Installing VASA Provider

If you are installing VASA provider for the first time, follow the instructions in the Generator and use the following files for each release of VPLEX:
GeoSynchrony Release 5.2 VASA OVA file: VPlex-5.2.0.00.00.05_D10_VASA_9-vasa.ova

You can download these files from http://support.EMC.com/downloads > VPLEX. If the VASA Provider is already installed on your VPLEX system, there is no need to upgrade.

Upgrading GeoSynchrony
EMC provides a Procedure Generator for generating custom procedures to assist you in managing your system. A new tool, the SolVe Desktop, combines the functionality of the Procedure Generators from different EMC products into one desktop tool and integrates those Procedure Generators with other support tools.


You can use the Procedure Generator or SolVe Desktop (available on the EMC Online Support website) to produce the upgrade document.
1. If you have not already done so, download the Procedure Generator or SolVe Desktop from the downloads area on EMC Online Support.
2. Do one of the following to start the tool:
If you are using the Procedure Generator, start the Procedure Generator.
If you are using the SolVe Desktop:
a. Start the SolVe Desktop.
b. Log in to the tool.
c. Accept the download of the VPLEX module.
d. Run the VPLEX module.
3. In either the Procedure Generator or the VPLEX module of the SolVe Desktop, select Procedures for EMC USE ONLY: Installation and Upgrade.
4. Click Next.
5. Select Upgrade GeoSynchrony > Upgrade to GeoSynchrony to produce the upgrade document.

Upgrade package location

The VPLEX GeoSynchrony upgrade files are available on EMC Online Support (registration required).
1. Navigate to https://support.EMC.com/products
2. In the Find a product field, type VPLEX Series and press Enter.
3. Select Downloads >>.
4. Locate and download the following files.
GeoSynchrony Release 5.2 files to download:
VPlex-5.2.0.00.00.05-management-server-package.tar
VPlex-5.2.0.00.00.05-director-firmware-package.tar


Upgrade paths for Release 5.2


Upgrade to Release 5.2 is supported from the following releases:
4.2 Patch 1
5.0
5.0.1
5.0.1 Patch 1
5.0.1 Patch 2
5.1
5.1 Patch 1
5.1 Patch 2
5.1 Patch 3
5.1 Patch 4

For those new systems running Release 4.2 or above that have not been configured yet, the upgrade to 5.2 can be accomplished using the NDU pre-configuration upgrade procedure.


Software packages
Software packages that may be contained in this kit:

EMC GeoSynchrony: GeoSynchrony provides simplified management and non-disruptive data mobility across heterogeneous arrays with a unique scale-up and scale-out architecture. VPLEX's advanced data caching and distributed cache coherency provide workload resiliency, automatic sharing, balancing, and failover of storage domains with predictable service levels.

Storage Management System (SMS): SMS provides serviceability capabilities and enables phone home, secure interconnect, data logging, error logging, and secure communications paths for local and remote clusters.

Troubleshooting and getting help


EMC support, product, and licensing information can be obtained as follows.

Product information

For documentation, release notes, software updates, or information about EMC products, licensing, and service, go to the EMC Online Support website (registration is required) at:
http://support.EMC.com

Technical support

For technical support, go to EMC Online Support. To open a service request through EMC Online Support, you must have a valid support agreement. Please contact your EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.

Your Comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send your opinion of this document to:

techpubcomments@EMC.com

If you have issues, comments or questions about specific information or procedures, please include the title and, if available, the part number, the revision (for example, -01), the page numbers, and any other details that will help us locate the subject you are addressing.

Copyright 2013 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.
