Agenda
A little history
Options and issues
Requirements and restrictions
Terminology
SVC Split I/O Group = SVC Stretched Cluster = SVC Split Cluster
Two independent SVC nodes in two independent sites, plus a third independent site for the quorum
Acts just like a single I/O Group with distributed high availability
Distributed I/O Groups are NOT an HA configuration and are not recommended; if one site fails:
Manual volume move required
Some data is still in the cache of the offline I/O Group
[Diagram: Split I/O Group without ISLs — servers and I/O Group 1 stretched across Site 1 and Site 2, with every node cross-cabled into the Fabric A and Fabric B pairs at both sites; lots of cross-cabling]
Ok, but?
[Diagram: the same cross-cabled configuration with the inter-site links carried over active or passive DWDM on shared single-mode fibre(s) between the Fabric A and Fabric B pairs]
SVC V6.3 option 2: Dedicated ISLs for nodes (can use DWDM)
User chooses the number of ISLs on the public SAN
Only half of all SVC ports are used for host I/O
[Diagram: a server cluster at each site; each site has dedicated public and private fabrics (Fabric A and Fabric B pairs) with SVC + UPS attached; the private fabrics are joined by at least 1 ISL, trunked if more than one]
Note: ISLs/trunks for the private SANs are dedicated rather than shared, to guarantee dedicated bandwidth for node-to-node traffic
[Diagram: the same layout built with VSANs instead of separate fabrics — public and private VSANs A and B at each site, SVC + UPS attached, and 1 private ISL per I/O Group, configured as a trunk]
Note: ISLs/trunks for the private VSANs are dedicated rather than shared, to guarantee dedicated bandwidth for node-to-node traffic
Metro/Global Mirror, SVC nodes to backend storage at both sites:
7a/7b) Write request from SVC
8a/8b) Xfer ready to SVC
9a/9b) Data transfer from SVC
10a/10b) Write completed to SVC
Steps 1-6 affect application latency; steps 7-10 should not affect the application
Split I/O Group, Preferred Node local: a write uses 1 round trip
4) Cache mirror data transfer to the remote site (1 round trip)
5) Acknowledgment
[Diagram: Node 1 and Node 2 of the SVC Split I/O Group, one node per site]
Split I/O Group, Preferred Node remote with help: a write uses 2 round trips
1) Write request from host
2) Xfer ready to host
3) Data transfer from host
4) Write + data transfer to remote site (1 round trip)
5) Write request to SVC
6) Xfer ready from SVC
7) Data transfer to SVC
8) Cache mirror data transfer to remote site (1 round trip)
9) Acknowledgment
10) Write completed from SVC
11) Write completion to remote site
12) Write completed to host
[Diagram: Server Cluster 1 and Server Cluster 2, Node 1 and Node 2 of the SVC Split I/O Group]
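To make the round-trip counts concrete, here is a minimal latency sketch in Python, assuming roughly 5 microseconds of one-way fibre latency per km (a common rule of thumb) and ignoring node and switch processing time; the function names are illustrative, not from any SVC documentation.

# Rough inter-site write-latency model for an SVC Split I/O Group.
# Assumption: ~5 us one-way fibre latency per km (rule of thumb only).
US_PER_KM_ONE_WAY = 5.0

def round_trip_ms(distance_km: float) -> float:
    """One inter-site round trip in milliseconds."""
    return 2 * distance_km * US_PER_KM_ONE_WAY / 1000.0

def write_penalty_ms(distance_km: float, preferred_node_local: bool) -> float:
    """Extra write latency added by distance: 1 round trip if the
    preferred node is local, 2 round trips if it is remote."""
    trips = 1 if preferred_node_local else 2
    return trips * round_trip_ms(distance_km)

for d in (10, 100, 300):
    print(f"{d:>3} km: local {write_penalty_ms(d, True):.1f} ms, "
          f"remote {write_penalty_ms(d, False):.1f} ms")

At 100 km this gives about 1 ms of extra write latency with a local preferred node and about 2 ms with a remote one, which is why the preferred-node placement matters at distance.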
SVC 6.3:
Similar to the support statement in SVC 6.2
Additional: support for active WDM devices
Quorum disk requirements similar to Remote Copy (MM/GM) requirements:
Max. 80 ms round-trip delay time, 40 ms each direction
FCIP connectivity supported for the quorum disk
No support for iSCSI storage systems

Minimum distance | Maximum distance | Maximum link speed
>= 0 km          | <= 10 km         | 8 Gbps
> 10 km          | <= 20 km         | 4 Gbps
> 20 km          | <= 40 km         | 2 Gbps
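As a quick planning aid, these limits can be encoded directly; a minimal sketch under the rules quoted above (the helper names are my own):

# SVC 6.3 limits quoted above: max. 80 ms round trip, and link speed
# capped by inter-site distance.
MAX_ROUND_TRIP_MS = 80.0

def max_link_speed_gbps(distance_km: float) -> float:
    """Maximum link speed allowed for a given inter-site distance."""
    if distance_km <= 10:
        return 8.0
    if distance_km <= 20:
        return 4.0
    if distance_km <= 40:
        return 2.0
    raise ValueError("more than 40 km is outside this table")

def within_latency_limit(measured_round_trip_ms: float) -> bool:
    """Check a measured round-trip delay against the 80 ms maximum."""
    return measured_round_trip_ms <= MAX_ROUND_TRIP_MS

print(max_link_speed_gbps(15))    # 4.0 Gbps
print(within_latency_limit(3.0))  # True, e.g. ~300 km of fibre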
[Diagrams: three example quorum configurations — Site 1 and Site 2 each with two switches (Switch 1-4) and local storage; the active quorum disk sits at a third site (Site 3), attached either directly or through extra switches (Switch 5, Switch 6); the larger examples also show Server 1/Server 2 and Storage 2/Storage 3, with Storage 3 holding the active quorum]
Recommendation 1:
Use CF8 / CG8 nodes for more than 4 km distance for best performance
Recommendation 2:
SAN switches do not auto-negotiate B2B credits, and 8 B2B credits is the default setting, so change the B2B credits on the switch to 41 as well
Link speed | FC frame length | Required B2B credits for 10 km distance | Max distance with 8 B2B credits
1 Gb/s     | 1 km            | 5                                       | 16 km
2 Gb/s     | 0.5 km          | 10                                      | 8 km
4 Gb/s     | 0.25 km         | 20                                      | 4 km
8 Gb/s     | 0.125 km        | 40                                      | 2 km
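The pattern in this table reduces to a small formula: one B2B credit covers two "frame lengths" of fibre, and the frame length halves each time the link speed doubles. A minimal sketch consistent with the figures above (the helper names are my own):

import math

# B2B credit arithmetic consistent with the table above.
# Assumption: frame length ~1 km at 1 Gb/s, halving as speed doubles,
# with one credit needed per two frame lengths of link distance.

def frame_length_km(speed_gbps: float) -> float:
    return 1.0 / speed_gbps

def required_credits(distance_km: float, speed_gbps: float) -> int:
    """B2B credits needed to keep the link busy over this distance."""
    return math.ceil(distance_km / (2 * frame_length_km(speed_gbps)))

def max_distance_km(credits: int, speed_gbps: float) -> float:
    """Distance fully usable with a given number of credits."""
    return credits * 2 * frame_length_km(speed_gbps)

print(required_credits(10, 8))  # 40 -> hence the recommendation of 41
print(max_distance_km(8, 8))    # 2.0 km with the default 8 credits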
[Diagram: Split I/O Group with ISLs over WDM — Server 2, SVC-01, and storage Ctl. A at Site 1; Server 4, SVC-02, and storage Ctl. B at Site 2; Publ.SAN1/Publ.SAN2 and Priv.SAN1/Priv.SAN2 at each site joined by ISLs across the WDM links; active quorum at a third site]
Consequences:
Metro Mirror is deployed for shorter distances (up to 300 km)
Global Mirror is used for longer distances
Split I/O Group supported distance will depend on application latency restrictions:
100 km for live data mobility (150 km with distance extenders)
300 km for fail-over / recovery scenarios
SVC supports up to 80 ms latency, far greater than most application workloads would tolerate
VMware ESX with VMotion, or AIX with Live Partition Mobility
Example 2)
-> Only SVC 6.3 Split I/O Group with ISLs is supported.
Example 3)
[Diagram: Site 1 and Site 2 with Server 2 / Server 4, SVC-01 / SVC-02, switches, and storage Ctl. A / Ctl. B]
Summary
SVC Split I/O Group:
Is a very powerful solution for automatic and fast handling of storage failures
Transparent for servers
Perfect fit in a virtualized environment (like VMware VMotion, AIX Live Partition Mobility)
Transparent for all OS-based clusters
Distances up to 300 km (SVC 6.3) are supported