
IBM System Storage SAN Volume Controller Enhanced Stretched Cluster

Evaluation guide v1.0

Sarvesh S. Patel, Bill Scales
IBM Systems and Technology Group ISV Enablement
January 2014

Copyright IBM Corporation, 2014

Table of contents
Abstract
Getting started
    About IBM SAN Volume Controller Stretched Cluster services
    About this guide
    Assumptions
How Stretched Cluster works
    Prerequisites and configuration considerations
    Setup description
    Connectivity details
    Configuring site awareness
    Disaster recovery sites
    Assigning sites to IBM SVC nodes
    Controller site assignment
    Multi-WWNN controller
    Quorum disk placement
    Enabling / Disabling the site disaster recovery capability
Invoking the site disaster recovery feature
    Returning to normal operation after invoking the site disaster recovery feature
Resources
About the authors
Trademarks and special notices


Abstract
This white paper describes the procedure for creating an IBM System Storage SAN Volume Controller (SVC) cluster with the Enhanced Stretched Cluster topology. It gives a brief introduction to the Enhanced Stretched Cluster feature, explains the stretched topology and the site disaster recovery capability, and provides guidance on assigning site awareness to controllers and other entities. It also describes how to enable and disable the feature and the procedure to follow in case of a disaster.

Getting started
This section gives a brief overview of the IBM System Storage SAN Volume Controller (SVC) Stretched Cluster implementation and the different topologies that can be used to configure it.

About IBM SAN Volume Controller Stretched Cluster services


IBM SVC, a storage virtualization system, supports a stretched implementation of a cluster. This feature has been supported since version 6.3. The main advantage of configuring a cluster as a stretched implementation is added redundancy across power-failure domains. The existing SVC Stretched Cluster solution is based on the concept of two locations, with each location holding one node from each I/O group pair; in addition, quorum disks are usually held in a third location. In a Stretched Cluster implementation, the two nodes in an I/O group are separated by distance between two locations. These two locations can be two racks in a data center, two buildings in a campus, or two labs, up to the supported distances. A copy of the volume (VDisk) is stored at both locations, so in case of a power failure or storage area network (SAN) failure at one site, at least one copy of the volume remains accessible to the user. Both copies of the storage are kept synchronized by SAN Volume Controller; therefore, the loss of one location causes no disruption at the alternate location.

The key benefit of Stretched Cluster compared to Metro Mirror is that it allows fast, nondisruptive failover in the event of small-scale outages. For example, if there is an impact to just a single storage device, SVC fails over internally with minimal delay. If there is a failure of a fabric element or SVC node, a host can fail over to another SVC node and continue performing I/O; the host sees half of the paths active and half dead, because at least one node is still running. SVC always has an automatic quorum to act as a tie-break, which means no external management software or human intervention is ever required to perform a failover. This simplicity is another key advantage.

Throughout this guide, IBM SAN Volume Controller (SVC) and SVC cluster are general terms that apply to the IBM SVC platform only.

The following two types of configurations are supported:

Stretched Cluster configuration without using inter-switch links
As explained in the product documentation, this topology has direct connections from the IBM SVC nodes to the switches in the different power domains.

Stretched Cluster configuration using inter-switch links
This implementation has inter-switch links between the two sites, which are in different power domains.

For the documentation on the solution as included in 6.3.0, refer to the following link: ftp://ftp.software.ibm.com/storage/san/sanvc/V6.3.0/SVC_Split_IO_Group_requirements_Errata_V1.pdf

About this guide


The purpose of this guide is to support a self-guided, hands-on evaluation of Enhanced Stretched Cluster deployments by storage administrators and IT professionals, and to walk them through the configuration considerations and workflow required to set up SVC clusters in an enhanced stretched implementation. Version 1.0 of the guide demonstrates how to deploy Enhanced Stretched Cluster configurations with site awareness and, most importantly, the site disaster recovery feature. This paper provides an overview of the steps required to successfully evaluate and deploy site awareness on SVC. It is not a substitute for the product documentation; users should refer to the product documentation, information center, and command-line interface (CLI) guide for the Enhanced Stretched Cluster for more details.

Assumptions
The following assumptions were made while writing this white paper:

- The SVC clusters are successfully installed with the latest (at the time of this publication) IBM SVC 7.2.0 code level or above.
- The SVC clusters have the required licenses. (No separate license is required to enable the Enhanced Stretched Cluster site awareness and site disaster recovery features.)
- The storage SAN is configured according to the product documentation, and the infrastructure to support SVC clusters in a stretched cluster using 8 Gb Fibre Channel is in place.
- The user has a basic understanding of SVC stretched and split cluster concepts, SVC storage concepts, and host attachment configurations.
- The user knows the different heterogeneous SVC platforms that can be added in FC partnerships. The same applies to IP partnerships.

For SVC documentation, refer to: http://pic.dhe.ibm.com/infocenter/svc/ic/index.jsp Note: Refer to the configuration section in the above documentation for details of how Stretched Cluster works.


The following table summarizes the IP partnership and SVC terminology used in this paper.

Metro Mirror, Global Mirror, and Global Mirror with Change Volumes: The different remote copy services supported on SVC platforms.

SAN: Storage area network.

NAS: Network-attached storage.

Failover: When a node within an I/O group fails, virtual disk access continues through the surviving node. The IP addresses fail over to the surviving node in the I/O group. When the configuration node of the system fails, the management IPs also fail over to an alternate node.

Failback: When the failed node rejoins the system, all failed-over IP addresses are failed back from the surviving node to the rejoined node, and virtual disk access is restored through this node.

I/O group: Two nodes or two canisters form an I/O group. A single SVC system supports four I/O groups, that is, eight nodes.

FC: Fibre Channel.

SVC or IBM SAN Volume Controller: Unless explicitly specified, a general term used to describe all applicable SVC platforms: IBM SVC (CG8, CF8, 8G4), IBM Storwize V7000, Storwize V5000, Storwize V3700, and IBM PureFlex System storage nodes.

FCoE: Fibre Channel over Ethernet.

Table 1: Terminology and abbreviations

How Stretched Cluster works


The Stretched Cluster feature enables one SVC cluster to be configured across two sites that are in different power domains, and provides a disaster recovery feature that can be used to recover the cluster when fewer resources are available after a disaster.

Prerequisites and configuration considerations


To enable the Enhanced Stretched Cluster site disaster recovery feature, the following points should be considered and understood:

- The SVC clustered system should be configured as described in the information center. Note that no extra hardware configuration changes are required to enable the site disaster recovery feature.
- A cluster can be connected using an FC or FCoE SAN.
- Image-mode virtual disks (VDisks) should not be configured on a controller that has a site assignment. Details are given in a separate section on restrictions.
- For the first release (7.2.0), the Enhanced Stretched Cluster site disaster recovery feature is supported only on SVC nodes without internal SSDs.


Setup description
Hardware summary:

- Minimum of two IBM SVC nodes
- Brocade Fibre Channel switches (IBM Brocade 2498)
- Back-end storage controllers (IBM System Storage DS3400)

Connectivity details
The Stretched Cluster system feature is supported with two types of implementations:

- Without inter-switch links
- With inter-switch links

Neither implementation affects the behavior of site awareness or the site disaster recovery feature; the choice depends entirely on how the administrator wants to configure the system implementation. As mentioned in the earlier sections, the connectivity does not change and the feature is optional; the administrator can choose to use it for disaster recovery. An important point to note is that the site disaster recovery feature can be invoked only if the Stretched Cluster implementation has site awareness.

Implementation details

Without inter-switch links
This hardware implementation is the same as recommended for a non-stretched cluster; no change in connectivity is needed. Refer to the information center documentation for more details regarding the recommended connections.

With inter-switch links
In the implementation shown in the following figure, the two production sites are connected using inter-switch links. These two sites can be in the same rack in different power domains, across racks, across data centers, and so on, as was supported earlier.


Figure 1: Enhanced Stretched Cluster implementation using inter switch links

Configuring site awareness


Assuming that the connectivity is complete and a Stretched Cluster is implemented, this section describes how to set up site awareness for the different entities in the cluster. Note that the features described here are available on SVC systems only; they are hidden on other platforms.

Disaster recovery sites


The earlier stretched cluster implementation already required three sites; the same concept is now exposed through the new CLI commands, lssite and chsite.


A new set of 'site' objects is defined. These are created implicitly for every system. There are always exactly three sites, numbered 1, 2, and 3 (site index 0 is never reported). There is no means of deleting sites or creating extra sites. The only configurable attribute of each site is its name; the default names are 'site1', 'site2', and 'site3'. Site 1 and site 2 are where the two halves of the Stretched Cluster are located, and site 3 is the optional third site for a quorum tie-break disk.

The appropriate 'site' instance is referenced when a site value is defined for an object. Objects can also leave their 'site' value undefined, which is the default setting for an object. Enabling the site disaster recovery feature, and correct operation of the disaster recovery feature, requires assigning objects to sites. The three mandatory sites for a Stretched Cluster implementation are:

- Site 1: Production site 1
- Site 2: Production site 2
- Site 3: A site at a different location that houses the quorum disk

From IBM SVC version 7.2 onwards, these three sites are present by default. A new CLI command, lssite, has been introduced to list the sites.

Figure 2: Site listing using lssite CLI
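As a minimal sketch of the default output (the cluster name and prompt are hypothetical), lssite reports the three built-in sites:

    IBM_2145:svc_cluster:superuser>lssite
    id site_name
    1  site1
    2  site2
    3  site3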

The site names can be changed using the chsite CLI command. For example, if the Stretched Cluster implementation is spread across two data centers, the sites can be named accordingly.

Figure 3: Site name allocation using chsite CLI

After the names are assigned to the sites, they can be viewed using the lssite command.

Figure 4: Changed site information
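A hedged example of the renaming shown in Figures 3 and 4; the names DataCenterA, DataCenterB, and QuorumSite are hypothetical, and chsite takes the new name followed by the site ID:

    IBM_2145:svc_cluster:superuser>chsite -name DataCenterA 1
    IBM_2145:svc_cluster:superuser>chsite -name DataCenterB 2
    IBM_2145:svc_cluster:superuser>chsite -name QuorumSite 3
    IBM_2145:svc_cluster:superuser>lssite
    id site_name
    1  DataCenterA
    2  DataCenterB
    3  QuorumSite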


Assigning sites to IBM SVC nodes


The site can optionally be defined for each node. The default for a node is to have its site undefined; this is also the initial state on upgrade to version 7.2.0. Now that the sites are renamed, the next task is to assign sites to the nodes in the cluster. For example, consider a two-node SVC cluster. Site awareness can be assigned to nodes using the addnode command when adding a new node to the cluster, or using the chnode command for existing nodes. The site can be specified when a node is added to the system, and it can also be specified or changed after that time, or set back to undefined. Nodes can only be assigned to site 1 or site 2; nodes cannot be assigned to site 3.

When the disaster recovery feature has been enabled using chsystem, extra policing is added: every node must have its site defined, and changes to the site of any configured node are disallowed. Therefore, the site must be specified in addnode, otherwise addnode will fail. Additionally, when the disaster recovery feature is enabled, within an I/O group with two configured nodes, one node must be assigned to each of sites 1 and 2. In this example, one node is assigned to site 1 and the other to site 2.

Figure 5: Assigning site attributes to IBM SAN Volume Controller nodes
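As an illustrative sketch (node names and the WWNN value are hypothetical; the -site parameter accepts a site ID or name), sites can be assigned to existing nodes with chnode:

    IBM_2145:svc_cluster:superuser>chnode -site DataCenterA node1
    IBM_2145:svc_cluster:superuser>chnode -site DataCenterB node2

When adding a new node while the disaster recovery feature is enabled, the site must be supplied at addnode time, for example:

    IBM_2145:svc_cluster:superuser>addnode -wwnodename 50050768010027E2 -iogrp io_grp0 -site DataCenterA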

After assigning appropriate site awareness to nodes, the user can verify the site assignment using the lsnode command in both the concise and detailed views.


Figure 6: Listing node site attributes using the lsnode CLI
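A hedged sketch of the detailed view (output truncated to the relevant fields; node and cluster names are hypothetical):

    IBM_2145:svc_cluster:superuser>lsnode node1
    id 1
    name node1
    status online
    ...
    site_id 1
    site_name DataCenterA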


Controller site assignment


After assigning sites to the nodes, the user needs to assign site attributes to the controllers. The site can optionally be specified for each controller; the default for a controller is for its site to be undefined, and this is the default for pre-existing controllers on upgrade to 7.2.0. A controller can be assigned to any of the sites (site 1, site 2, or site 3), or set back to 'undefined' again.

A managed disk (MDisk) derives its 'site' value from the controller that it is associated with at the time. Some back-end storage devices are presented by the SVC system as multiple controller objects, and an MDisk might be associated with any of these from time to time. The user is responsible for ensuring that all such 'controller' objects have the same 'site' specified, so that any MDisk associated with that controller is associated with a well-defined single site.

The site for a controller can be changed while the disaster recovery feature is disabled, or if the controller has no managed (or image-mode) MDisks. The site for a controller cannot be changed when the disaster recovery feature is enabled (that is, the topology is stretched) or if the controller has managed (or image-mode) disks.

The site property for a controller adjusts the I/O routing and error reporting for connectivity between nodes and the associated MDisks. These changes are effective for any MDisk whose controller has a site defined, even if the disaster recovery feature is disabled. The use of solid-state drives (SSDs) within SVC nodes is not supported in the configurations described. The software does not enforce a requirement that all MDisks have a site defined.

The site attribute can be assigned to a controller using the chcontroller command, and the assignment can be verified using the lscontroller command.


Figure 7: Listing site attributes for a controller
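A minimal sketch, assuming a controller object named controller0 (hypothetical) that represents the site 3 quorum storage; output is truncated to the relevant fields:

    IBM_2145:svc_cluster:superuser>chcontroller -site QuorumSite controller0
    IBM_2145:svc_cluster:superuser>lscontroller controller0
    id 0
    controller_name controller0
    ...
    site_id 3
    site_name QuorumSite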

Connectivity is permitted between:

- Any node and controllers in site 3, or controllers with no site defined
- A node with no site defined and any controller

And, of course, I/O is permitted in the following most important cases:

- A node configured in a site and a controller MDisk configured to the same site
- A node configured in a site and a controller MDisk configured to site 3

The fault reporting algorithms that raise event logs in the case of missing connectivity are also adjusted to allow for these rules. When a controller is configured to site 1, connectivity to nodes in site 2 is not expected or required, and is disregarded; faults are only reported if any node in site 1 has inadequate connectivity (that is, if any node in site 1 has fewer than two SVC ports with visibility to the controller). Similarly, if a controller is configured to site 2, then connectivity to nodes in site 1 is disregarded.


Multi-WWNN controller
When the site is changed on a multi-worldwide node name (WWNN) controller, all of the affected controller objects are updated with the new site setting at the same time.

Quorum disk placement


If the site disaster recovery feature is disabled, the quorum selection algorithm operates as in the SVC 7.1.0 release. When the disaster recovery feature is enabled and automatic quorum disk selection is also enabled, the SVC system elects three quorum disks (one in each of the three sites) and makes the quorum disk at site 3 the active quorum disk. If a site has no suitable MDisks, fewer than three quorum disks are configured. Note that before the cluster topology is changed to stretched (that is, before the disaster recovery feature is activated), the system ignores the site parameter when selecting quorum disks. This means that the quorum selection can change when the disaster recovery feature is enabled for an existing installation. If the user is controlling quorum using chquorum, the user's choice of quorum disks must also follow the one-disk-per-site rule.
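A hedged example of manual quorum placement, assuming MDisk IDs 2, 7, and 12 (hypothetical) reside in sites 1, 2, and 3 respectively; chquorum assigns an MDisk to one of the three quorum slots (0, 1, and 2):

    IBM_2145:svc_cluster:superuser>chquorum -mdisk 2 0
    IBM_2145:svc_cluster:superuser>chquorum -mdisk 7 1
    IBM_2145:svc_cluster:superuser>chquorum -mdisk 12 2
    IBM_2145:svc_cluster:superuser>lsquorum
    quorum_index status id name    controller_id active
    0            online 2  mdisk2  0             no
    1            online 7  mdisk7  1             no
    2            online 12 mdisk12 2             yes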

Enabling / Disabling the site disaster recovery capability


A system setting enables and disables the disaster recovery feature. The preconditions for enabling the disaster recovery feature are:

- All nodes are assigned to a site.
- All I/O groups with two nodes have one node in site 1 and one node in site 2.

There is no precondition on sites being configured for controllers. The feature will not be operable on nodes that were absent until they rejoin the cluster. New clusters, and clusters upgrading to 7.2.0 and later versions, have the disaster recovery feature disabled by default. The site disaster recovery feature can be enabled or disabled using the chsystem command, and its state can be checked using the lssystem command.

Figure 8: Output of the lssystem command when the feature is disabled

Figure 9: Enabling the site disaster recovery feature using the chsystem command

Figure 10: Output of the lssystem command when the feature is enabled
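As a sketch (the prompt and cluster name are hypothetical; lssystem output is truncated to the relevant fields), the feature is controlled through the system topology attribute:

    IBM_2145:svc_cluster:superuser>chsystem -topology stretched
    IBM_2145:svc_cluster:superuser>lssystem
    ...
    topology stretched
    topology_status dual_site

To disable the feature again, set the topology back to standard:

    IBM_2145:svc_cluster:superuser>chsystem -topology standard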


Invoking the site disaster recovery feature


When the inter-site link fails in a Stretched Cluster, the two halves of the cluster race each other to the quorum disk to resolve the tie-break. The half that successfully reserves and updates the quorum disk keeps operating; the remaining half of the cluster stops, with each of its nodes suffering a lease expiry. Normally, these nodes display node error 550, indicating that there are not sufficient nodes to form a quorum. For a stretched cluster that has been configured with a topology of 'stretched', the code determines whether sufficient nodes are present and whether invoking the disaster recovery feature is an option, and modifies the node error that is displayed (the new node error is 551). If there are insufficient nodes to allow the overridequorum command to be used, the nodes continue to display the existing node error 550. The site that keeps running continues to display a cluster topology of 'stretched' and a topology status of 'dual_site', even though one site is offline.

For a simple inter-site link failure, the normal recovery option is to allow the site that won the quorum race to keep running and to fix the link; the other nodes are then automatically added back, copies are resynchronized, and so on. The disaster recovery feature is only expected to be invoked if the surviving site suffers a disaster just after the inter-site link fails, leaving the other site as the only option for continuing I/O.

There are several more complex failure scenarios. If access to the site 3 quorum disk fails at the same time as the inter-site link between site 1 and site 2, both halves of the cluster display node error 551 (the new version of node error 550). If access to the quorum disk is restored, whichever site regains access first is able to restart automatically. Alternatively, the inter-site link between site 1 and site 2 can be restored to allow the cluster to reform.

A new service CLI subcommand is defined that attempts to invoke the disaster recovery feature. If accepted, the node that runs the subcommand generates a new worldwide-unique cluster ID, informs each of the visible nodes at the local site to change their cluster ID to the new value, sets a flag in each node indicating that disaster recovery has been invoked, and then warm starts. Note that the cluster alias, which by default is set to the cluster ID when a cluster is created, is not changed; this ensures that the VDisk UIDs do not change. The subcommand does not affect the nodes at the remote site. They cannot be involved in the initial recovery of the local site because their state would corrupt the consistent freeze of the local site. The remote nodes can only be introduced to the local system later, using the following procedure (which requires disconnecting all FC/FCoE connectivity of the remote node).

The code then disables the disaster recovery feature in the newly recovered cluster. The topology (reported by lssystem) remains 'stretched', but the topology_status changes from dual_site to either recovered_site_1 or recovered_site_2 to indicate that disaster recovery has been invoked. The status only returns to 'dual_site' after the user has performed the recovery actions to reintroduce the nodes from the other site. Nodes that did not participate in the recovery process (and hence are still members of the old cluster) are not automatically re-added to the cluster.
The user must explicitly delete these nodes from the cluster (returning them to the candidate state) before they can be auto-added. This is achieved by using the rmnode command from within the management interface of the old cluster (if it is still operating), or by using the satask leavecluster -force command. Before running these commands, the user must disconnect all the FC/FCoE cables from all the nodes that are to be re-added to the cluster.
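As a hedged illustration of the command flow (all object names are hypothetical; overridequorum is the service CLI subcommand referenced above, run against a local-site node that reports error 551):

On a surviving node of the site to be recovered, via its service IP:

    satask overridequorum

On each node still belonging to the old cluster, after unplugging its FC/FCoE cables:

    satask leavecluster -force

Or, from the management interface of the old cluster, if it is still operating:

    svctask rmnode node2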

Returning to normal operation after invoking the site disaster recovery feature
The user must follow a careful set of steps to ensure that the system maintains integrity as connectivity between the two sites is recovered. In particular, care is needed not to conflict with the activity of any still-active nodes in the failed site, for example, if power is recovered after a failure. The following sequence of steps is required; a sketch of the cleanup commands for steps 6 and 8 follows this list.

1. The disaster recovery feature is invoked; an alert is raised indicating that this process must be used.
2. Access to all 'recovered site' volume copies is recovered. This includes the mirror half of stretched volumes plus any single-copy volumes with a defined local site.
3. Access to all other volume copies is lost. The user must treat all such storage as suspect and potentially corrupted. The conservative approach is to delete all such volume copies; some users might choose to retain access to such volumes to attempt to recover some data.
4. Mirrored volumes with one online, fresh local copy can be retained.
5. Access to all other-site quorum disks is lost. All such quorum disks must be deleted.
6. This can be achieved using rmmdisk if the MDisk no longer holds any volume copy. If volume copies are being retained, the process must use chquorum to select new quorum disks and prevent attempts to use the other-site quorum disks.
7. All inter-system remote copy relationships, consistency groups, and partnerships must be destroyed (partnerships will be in the partially-configured state).
8. At this point, the user can address the missing nodes. This requires disconnecting the FC/FCoE connectivity of the missing nodes, then either unconfiguring each node using svctask rmnode (in the abandoned cluster) or satask leavecluster as described earlier, or decommissioning the node so that it can no longer access the shared storage and then issuing rmnode in the recovered cluster to inform it that this step has been performed.
9. When the last offline node from the failed site is repaired, the alert auto-fixes and any non-local-site volume copies come online. The process of reconstructing the system objects can then begin, including:
   - Defining quorum disks in the correct sites
   - Re-creating volumes that were not automatically recovered earlier
   - Re-creating any intra-system copy services that were deleted because their volumes were deleted
   - Re-creating any inter-system Metro Mirror or Global Mirror objects

Note that there is no need to explicitly re-enable the disaster recovery feature. The cluster topology remains 'stretched', and when the event log auto-fixes, the topology status returns to 'dual_site'; assuming that there are online nodes at both sites, the voting set is manipulated to prepare for the next disaster recovery.
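A minimal sketch of the cleanup commands referenced in steps 6 and 8, assuming hypothetical object names (mdisk9, Pool0, node2) and quorum slots numbered 0 to 2:

Remove an other-site quorum MDisk that no longer holds any volume copy:

    IBM_2145:svc_cluster:superuser>rmmdisk -mdisk mdisk9 -force Pool0

Otherwise, reassign the quorum slot to a surviving MDisk:

    IBM_2145:svc_cluster:superuser>chquorum -mdisk 4 2

Inform the recovered cluster that a decommissioned node has been dealt with:

    IBM_2145:svc_cluster:superuser>rmnode node2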


Resources
The following websites provide useful references to supplement the information contained in this paper:

- IBM Systems on IBM PartnerWorld: ibm.com/partnerworld/systems/
- IBM Publications Center: http://www.elink.ibmlink.ibm.com/public/applications/publications/cgibin/pbi.cgi?CTY=US
- IBM Redbooks: ibm.com/redbooks
- IBM developerWorks: ibm.com/developerworks
- IBM SAN and SVC Stretched Cluster and VMware Solution Implementation: ibm.com/redbooks/redbooks/pdfs/sg248072.pdf
- IBM SAN Volume Controller Stretched Cluster with PowerVM and PowerHA: ibm.com/redbooks/redbooks/pdfs/sg248142.pdf
- SVC Split Cluster: How it Works: ibm.com/developerworks/community/blogs/storagevirtualization/entry/split_cluster?lang=en

About the authors


Sarvesh S Patel is a software engineer in the IBM Systems and Technology Group, working on the SVC and Storwize family. He has six years of experience in storage test and was the functional test lead for the Enhanced Stretched Cluster feature. You can reach Sarvesh at sarvpate@in.ibm.com.

Bill Scales is a software engineer in the IBM Systems and Technology Group, working on the SVC and Storwize family. He was the functional development architect and lead for the Enhanced Stretched Cluster feature.


Trademarks and special notices


Copyright IBM Corporation 2014. References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml. Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of the specific Statement of Direction.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function, or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.


Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

