[Slide: today's storage challenges: increased risk, performance throttled (30%), complicated and inflexible. Source: HP research]
The world has changed
And storage must change with it
[Timeline diagram, 1999-2012: storage tiers Tier 0 SSD, Tier 1 FC, Tier 2 Nearline; E5000 for Exchange]
- HP Networking: Wired, Wireless, Data Center, Security & Management; Enterprise Switches Portfolio
- HP SAN Connection: B-, C- & H-Series FC Switches & Directors
- Nearline: VLS virtual libraries, EML & ESL tape libraries, RDX, tape drives & tape autoloaders, MSL tape libraries, D2D Backup Systems
- Software: Business Copy, Continuous Access, Data Protector Express, Storage Mirroring, Storage Array Software, Data Protector, Storage Essentials, Cluster Extension
- Services: SupportPlus 24, Proactive Select, SAN Assessment, Proactive 24, Backup & Recovery, Critical Service, Entry Data Migration, SAN Implementation, Installation & Start-up, Data Migration, Storage Performance Analysis, Data Protection Remote Support, Consulting services (Consolidation, Virtualization, SAN Design)
Array portfolio positioning

P2000 MSA
- Architecture: Dual Controller
- Connectivity: SAS, iSCSI, FC
- Performance: 30K random read IOPS; 1.5 GB/s sequential reads
- Application sweet spot: SMB; enterprise consolidation/virtualization; server attach; video surveillance
- Capacity: 600 GB - 192 TB; 6 TB average
- Key features: price/performance; controller choice; replication; server attach
- OS support: Windows, vSphere, Linux, OVMS, Mac OS X, Solaris, Hyper-V

P4000
- Architecture: Scale-out Cluster
- Connectivity: iSCSI
- Performance: 35K random read IOPS; 2.6 GB/s sequential reads
- Application sweet spot: ROBO, SMB; virtualized incl. VDI; Microsoft apps; BladeSystem SAN (P4800)
- Capacity: 7 TB - 768 TB; 72 TB average
- Key features: all-inclusive SW; multi-site DR included; virtualization; VM integration; Virtual SAN Appliance
- OS support: HP-UX, vSphere, Mac OS X, AIX, Solaris, XenServer

P6000 EVA
- Architecture: Dual Controller
- Connectivity: FC, iSCSI, FCoE
- Performance: 55K random read IOPS; 1.7 GB/s sequential reads
- Application sweet spot: ROBO and Enterprise; Microsoft; virtualized; OLTP; mixed workloads
- Capacity: 2 TB - 480 TB; 36 TB average
- Key features: ease of use and simplicity; integration/compatibility; multi-site failover
- OS support: Windows, VMware, HP-UX, Linux, OVMS, Mac OS X, Solaris, AIX

P10000 3PAR
- Architecture: Mesh-Active Cluster
- Connectivity: iSCSI, FC, (FCoE)
- Performance: >400K random IOPS; >10 GB/s sequential reads
- Application sweet spot: Enterprise and Service Provider; utilities; cloud; virtualized environments; multi-site DR
- Capacity: 5 TB - 1,600 TB; 120 TB average
- Key features: multi-tenancy; efficiency (Thin Provisioning); performance; autonomic tiering and management
- OS support: vSphere, Windows, Linux, HP-UX, AIX, Solaris

P9500
- Architecture: Fully Redundant
- Connectivity: FC, FCoE
- Performance: >300K random IOPS; >10 GB/s sequential reads
- Application sweet spot: Large Enterprise; mission critical with extreme availability; virtualized environments; multi-site DR
- Capacity: 10 TB - 2,000 TB; 150 TB average
- Key features: constant data availability; heterogeneous virtualization; multi-site disaster recovery; Application QoS (APEX); Smart Tiers
- OS support: all major OSs, including Mainframe and NonStop
B-Series SAN portfolio
- DC SAN Backbone Director: enhanced capabilities; 32 - 512 8Gb FC ports; 8Gb FC & FICON; plus 4x 128Gb ICL; 16, 32, 48 & 64 port blades
- DC04 SAN Director: 32 - 256 8Gb FC ports; plus 4x 64Gb ICL
- Director blades: 10/24 FCoE; DC Encryption
- SN6000B FC Switch: 24 - 48 16Gb FC ports
- 8/8 & 8/24 SAN Switch: 8 - 24 8Gb ports
- Encryption Switch (32x 8Gb FC ports)
- MP Router / MP Extension: HP 400 MP-Router (16x 4Gb FC + 2GbE IP ports); 16 FC + 2 IP ports
- 8Gb SAN Switch for HP c-Class BladeSystem
- Host Bus Adapters: 4Gbps and 8Gbps single and dual port HBAs
- Data Center Fabric Manager
C-Series SAN Portfolio
Cisco MDS9000 and Nexus 5000 family
SN6000C (MDS 9148)
*planned
P10000 3PAR: Bigger, Faster, Better... all round!
[Chart: P10000 3PAR vs. T-Class: 2x raw capacity (e.g. T800 800 TB to V800 1,600 TB), 1.5x host ports (128 to 192) and 1.5x disk IOPS (240K to 360K)]
Node Mid-Plane
- Cache-coherent interconnect
- Completely passive, encased in steel
- Defines scalability
Drive Chassis
- Capacity building block
- F-Class chassis: 3U, 16 disks; T- & V-Class chassis: 4U, 40 disks
Service Processor
- One 1U SVP per system
- Only for service and monitoring
HP 3PAR Architectural differentiation
Purpose-built on native virtualization: HP 3PAR Utility Storage
3PAR Software:
- Thin Suite (F-, T-, V-Class): Thin Provisioning, Thin Conversion, Thin Persistence
- Optimization Suite (V-Class): Dynamic Optimization, Adaptive Optimization, System Tuner, Peer Motion
- Recovery: Virtual Copy, Remote Copy
- Virtual Domains, Virtual Lock, System Reporter
- Additional HP software: Cluster Extension, storage managers
- InForm OS (F-, T- & V-Class): fine-grained manageability, OS instrumentation, utilization and performance features, Fast RAID 5/6, ASIC Zero Detection
HP 3PAR ASIC: hardware-based for performance
- Thin built in: Zero Detect
- Mixed workload: independent metadata and data processing
- Distributed controller functions
Cost-effective, scalable and resilient architecture. Meets cloud-computing requirements for efficiency, multi-tenancy and autonomic management.
[Diagram: controller nodes with host connectivity, 3PAR ASICs, data cache and disk connectivity over a passive backplane]
[Diagram: mixed workload comparison]
- Traditional array (unified processor and/or memory between host and disk interfaces): when heavy throughput and heavy transaction workloads are applied together, small IOPs wait for large IOPs to be processed.
- 3PAR (control processor & memory handle metadata independently of the 3PAR ASIC & memory data path): heavy throughput and heavy transaction workloads are both sustained.
[Chart: MBs of cache dedicated to writes per node (0-2,500) vs. % read IOPS from host (0-100), at host loads of 20K, 30K and 40K IOPs. Measured system: 2-node T800 with 320 15K FC disks and 12 GB data cache per node]
HP 3PAR High Availability
Spare Disk Drives vs. Distributed Sparing
- Traditional arrays: a dedicated spare drive; few-to-one rebuild causes hotspots and long rebuild exposure
- 3PAR InServ: spare chunklets spread across all drives; many-to-many rebuild runs parallel rebuilds in less time
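The time advantage of many-to-many rebuild can be seen with a toy model (all numbers are assumptions for illustration, not HP figures):

```python
# Toy model of rebuild time after a single drive failure.
DRIVE_GB = 600   # data to re-create from RAID redundancy, assumed
DRIVE_MBS = 50   # sustained rebuild write bandwidth per drive (MB/s), assumed

def few_to_one_hours() -> float:
    """Traditional spare drive: one spare's write bandwidth is the bottleneck."""
    return DRIVE_GB * 1024 / DRIVE_MBS / 3600

def many_to_many_hours(n_drives: int) -> float:
    """Distributed sparing: every surviving drive absorbs a slice of the rebuild."""
    return DRIVE_GB * 1024 / (DRIVE_MBS * (n_drives - 1)) / 3600

for n in (16, 40, 320):
    print(f"{n:3d} drives: few-to-one {few_to_one_hours():.1f} h, "
          f"many-to-many {many_to_many_hours(n):.2f} h")
```

The few-to-one time is constant no matter how many drives the system has; the many-to-many time shrinks roughly linearly with spindle count.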
[Diagram: RAID group placement across shelves]
- Traditional arrays: a RAID group's members live in one shelf; shelf-dependent RAID means a shelf failure might mean no access to data.
- 3PAR InServ: raidlet group members are spread across shelves; shelf-independent RAID preserves data access despite a shelf failure.
[Diagram: traditional arrays dedicate whole drives (0-7) to individual LUNs plus a spare; 3PAR InServ wide-stripes all LUNs and RAID levels (R1, R5, R6) as chunklets across all drives]
Full-mesh Back-plane
- Post-switch architecture
- High performance, tightly coupled
- Completely passive
T-Class Controller Node
- 6 GB data cache (DIMMs); SATA local boot disk
- Gen3 ASIC: data movement, XOR RAID processing, built-in Thin Provisioning
- I/O per node: 3 PCI-X buses, 2 PCI-X slots and one onboard 4-port FC HBA; high-speed data links
Minimum configuration
- 2 Drive Chassis, 16 identical drives; minimum upgrade is 8 drives
* The diagram is not intended to show all components in the 2M cabinet, but rather how controllers and drive chassis scale. Controllers and drive chassis are populated from bottom to top.
T400 Configuration examples
How do we grow? After looking at the performance requirements, it is decided that adding capacity to the existing nodes is the best option. This offers a good balance of capacity and performance.
We decide that the next upgrade should be filling out the first two nodes.
The next upgrade is going to require additional Controller Nodes, Drive Chassis and Drive Magazines. The minimum upgrade allowed is:
- 2 Controller Nodes
- 4 Drive Chassis
- 8 Drive Magazines
Just because you can do something doesn't mean it is a good idea. This upgrade makes the node pairs very unbalanced:
- Over 50,000 IOPs on 2 nodes and 6,400 on the other 2
- Over 320 TB on one node pair and 19 TB on the other
A much cleaner upgrade would be to add a lot more FC capacity. This brings the node IOP balance much closer: 44,800 vs. 32,000 FC IOPs. There will still be a lot more capacity behind 2 nodes, but the volumes that need more IOPs can be balanced across all FC disks.
Due to power distribution limits in a 3PAR rack you can only have 8 chassis per rack. A T400 with 8 chassis requires 2 full racks and a mostly unfilled 3rd rack.
T400 Configuration examples
You'll notice that the T400 has space for 6 Drive Chassis, but the normal building block is 4 Chassis.
Disk Chassis/Frames may be up to 100m apart from the Controllers (1st Frame)
T-Class redundant power
V-Class Controller Node(s)
- 2 to 8 per system, installed in pairs
- 2x Intel Quad-Core processors per node
- Dual 3PAR Gen4 ASICs: 2.0 GB/s dedicated ASIC-to-ASIC bandwidth; 112 GB/s total backplane bandwidth
- Control cache: 16 or 32 GB; data cache: 32 or 64 GB (V400: 48 GB cache; V800: 96 GB maximum cache)
- Inline fat-to-thin processing in the DMA engine
- 8Gb/s FC host/drive adapters; 10Gb/s FCoE/iSCSI host adapter (planned); warm-plug adapters
- PCIe slots and PCIe switches on the data paths
Minimum upgrade: 4 Drive Magazines (16 disks)
Maximum configuration: 4 racks, 4 Controller Nodes, 24 Drive Chassis, 960 disks
Disk Chassis/Frames may be up to 100m apart from the Controllers (1st Frame)
HP 3PAR InForm OS: Virtualization Concepts
[Diagram: physical disks are carved into chunklets, built into RAID sets (e.g. RAID5 (3+1)) on the InServ controller nodes, and exported as a LUN]
- Enables easy mobility between physical disks, RAID types and service levels by using Dynamic or Adaptive Optimization
Performance
- Enables wide-striping across hundreds of disks
- Avoids hot-spots
- Allows data restriping after disk installations
High Availability
- HA Cage: protect against a cage (disk tray) failure
- HA Magazine: protect against magazine failure
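A minimal sketch of the chunklet idea above: physical disks are diced into fixed-size chunklets, and a logical disk draws its chunklets round-robin from every spindle so no single disk becomes a hot-spot. The placement policy below is a simplification for illustration, not the InForm OS algorithm.

```python
from itertools import cycle

def wide_stripe(n_disks: int, n_chunklets: int) -> list[int]:
    """Assign each chunklet of a logical disk to the next physical disk in turn."""
    disks = cycle(range(n_disks))
    return [next(disks) for _ in range(n_chunklets)]

# A 100 GB logical disk built from 256 MB chunklets over 96 spindles:
layout = wide_stripe(n_disks=96, n_chunklets=400)
per_disk = [layout.count(d) for d in range(96)]
print(max(per_disk) - min(per_disk))  # prints 1: near-perfectly even spread
```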
Common Provisioning Groups (CPG)
Multiple CPGs can be configured and can optionally overlap the same drives, e.g. a system with 200 drives can have one CPG containing all 200 drives and other CPGs with overlapping subsets of these 200 drives.
[Diagram: SSD, FC and Nearline tiers hosting RAID 1, RAID 5 and RAID 6 CPGs that back fat and thin-provisioned (ThP) volumes and Adaptive Optimization (AO) regions]
CPG creation parameters: disk speed and RAID type. By selecting advanced options, more granular settings can be defined:
- Availability level
- Step size
- Preferred chunklets
- Dedicated disks
Optionally:
- Select specific array host ports
- Specify LUN ID
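A toy model of overlapping CPGs (class and field names are invented for the sketch): each CPG is just a template of disk type, RAID layout and drive filter over the same shared pool. Because space is drawn on demand, overlap costs nothing until chunklets are actually consumed.

```python
from dataclasses import dataclass

@dataclass
class CPG:
    name: str
    disk_type: str    # "SSD", "FC" or "NL"
    raid: str         # e.g. "RAID5 (3+1)"
    drives: set[int]  # physical drives this CPG may draw chunklets from

drives_all = set(range(200))                 # one system, 200 drives
cpgs = [
    CPG("cpg_fc_r5", "FC", "RAID5 (3+1)", drives_all),           # all 200 drives
    CPG("cpg_fc_r1", "FC", "RAID1",       set(range(120))),      # overlapping subset
    CPG("cpg_nl_r6", "NL", "RAID6 (6+2)", set(range(80, 200))),  # overlapping subset
]
for cpg in cpgs:
    print(f"{cpg.name}: {cpg.raid} over {len(cpg.drives)} drives")
```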
Care Pack Support Services for spindle-based licenses are charged by the number of magazines.
Support Services costs increase incrementally until they reach a predefined threshold/cap and then stay flat, i.e. they will not increase anymore.
Capping threshold by array:
- F200: 11 magazines
- F400: 13 magazines
- T400 / V400: 33 magazines
- T800 / V800: 41 magazines
Capping occurs for each software title per magazine type.
Example for InForm OS on a V800 with 3 Years Critical Service:
- 50 x 600GB Disk Magazine -> 41 x HA112A3 - QQ6 - 3PAR InForm V800/4x600GB Mag LTU Support
- 24 x 2TB Disk Magazine -> 24 x HA112A3 - QQ6 - 3PAR InForm V800/4x2TB Mag LTU Support
- 24 x 200GB SSD Magazine -> 24 x HA112A3 - QQ6 - 3PAR InForm V800/4x200GB SSD Mag LTU Support
The Thin Suite, Thin Provisioning, Thin Conversion and Thin Persistence do not have any associated support cost.
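The capping rule boils down to a min() per software title and magazine type. A sketch reproducing the V800 example above:

```python
# Support-cost capping: per software title and magazine type,
# support is charged on at most the array's capping threshold.
CAPS = {"F200": 11, "F400": 13, "T400": 33, "V400": 33, "T800": 41, "V800": 41}

def support_quantities(array: str, magazines: dict[str, int]) -> dict[str, int]:
    cap = CAPS[array]
    return {mag: min(count, cap) for mag, count in magazines.items()}

print(support_quantities("V800", {"600GB": 50, "2TB": 24, "200GB SSD": 24}))
# {'600GB': 41, '2TB': 24, '200GB SSD': 24}  -> matches the example above
```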
[Diagram: server-presented capacities (LUNs) vs. required net array capacities, free chunklets and physically installed disks]
b) If you are using filesystem copies to do the migration, the copy will defragment the files as it copies, eliminating the need to defragment the source filesystem.
[Diagram: fat-to-thin conversion: in the fat LUNs, unused space and zero-filled blocks sit alongside Data 1 and Data 2; after conversion the thin LUNs hold only Data 1 and Data 2, and the rest is returned as free chunklets]
[Diagram: reclaiming datastore space after VM deletions]
- Other arrays: slow, post-process T10 UNMAP with overhead (coarse 768kB-2MB granularity); 20GB VMDKs post deletions consume 40+ GB.
- 3PAR: rapid, inline T10 UNMAP with ASIC Zero Detect (16kB granularity); 20GB VMDKs post deletions consume ~20GB.
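A software sketch of the inline zero-detect idea (the real work happens in the ASIC's DMA path; only the 16kB granularity is taken from the slide): all-zero 16kB pages are never allocated, and writing zeros over existing data frees the page.

```python
PAGE = 16 * 1024  # 16kB detection granularity

def thin_write(allocated: dict[int, bytes], offset: int, data: bytes) -> None:
    """Store only non-zero 16kB pages; zero pages stay (or become) unallocated."""
    for i in range(0, len(data), PAGE):
        page = data[i:i + PAGE]
        lba = (offset + i) // PAGE
        if page.count(0) == len(page):   # all-zero page: skip/reclaim
            allocated.pop(lba, None)
        else:
            allocated[lba] = page

vol: dict[int, bytes] = {}
thin_write(vol, 0, bytes(PAGE) + b"x" * PAGE)  # one zero page + one data page
print(len(vol))  # 1 -> only the non-zero page consumed physical space
```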
HP 3PAR Thin Provisioning positioning: built in, not bolted on
HP 3PAR Virtual Copy snapshots
Smart
- Promotable snapshots; a base volume can carry 100s of snaps
- Individually deletable snapshots
- Scheduled creation/deletion
- Consistency groups, with just one CoW copy
Thin
- No reservations needed
- Non-duplicative snapshots
- Thin Provisioning aware
- Variable QoS
Ready
- Instant readable or writeable snapshots
- Snapshots of snapshots
- Control given to the end user for snapshot management
- Integration with Oracle, SQL, Exchange, VMware
- Virtual Lock for retention of read-only snaps
The base volume space and the virtual copy space can grow
independently without impacting each other (each space has its own
allocation warning and limit).
Dynamic optimization can tune the base volume space and the virtual
copy space independently.
HP 3PAR Remote Copy topologies
[Diagram: Primary Sites A-D replicating to secondary/tertiary targets; Site D acts as both primary and target]
- You can use Remote Copy over IP (RCIP) and/or Fibre Channel (RCFC) connections
- InServ requirements: maximum supported fan-in is 4 to 1
- One of the 4 can mirror bi-directionally
- Each RC relationship requires dedicated RC ports
Synchronous Remote Copy: real-time mirror
- Highest I/O currency
- Lock-step data consistency
- Space efficient: thin provisioning aware
- Targeted use: campus-wide business continuity, single volume
[Diagram: primary volume P and secondary volume S with numbered steps 1-4]
- Step 1: host server writes I/Os to primary cache
- Step 2: InServ writes I/Os to secondary cache
All writes to the secondary volume are completed in the same order as they were written on the primary volume.
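A minimal sketch of that synchronous write path (lists stand in for the mirrored caches; the RCIP/RCFC protocol itself is not modeled): the host acknowledgement is withheld until the secondary holds the write, which is what keeps both sides in lock-step order.

```python
class SyncMirror:
    """Toy synchronous mirror: ack to host only after the secondary has the write."""
    def __init__(self) -> None:
        self.primary: list[tuple[int, bytes]] = []
        self.secondary: list[tuple[int, bytes]] = []
        self.seq = 0

    def host_write(self, lba: int, data: bytes) -> int:
        self.seq += 1
        self.primary.append((lba, data))    # step 1: write to primary cache
        self.secondary.append((lba, data))  # step 2: mirror to secondary cache
        return self.seq                     # steps 3-4: acks flow back; host unblocks

m = SyncMirror()
m.host_write(10, b"a")
m.host_write(11, b"b")
assert m.secondary == m.primary  # same data, same order: lock-step consistency
```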
Asynchronous periodic resynchronization
1. Initial copy: snapshot A is taken on the primary (SA) and the base is copied to the target.
2. Resynchronization, delta copy: a new snapshot B is taken (SB) and only the B-A delta is sent.
3. Upon completion: the old snapshot A (SA) is deleted.
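The resynchronization logic reduces to a set difference between two snapshots. A sketch with dicts standing in for volumes and snapshots:

```python
def delta_resync(snap_a: dict[int, bytes], snap_b: dict[int, bytes]) -> dict[int, bytes]:
    """Return only the blocks that differ between the two snapshots (the B-A delta)."""
    return {lba: data for lba, data in snap_b.items() if snap_a.get(lba) != data}

snap_a = {0: b"base", 1: b"base"}              # 1. initial copy built from snapshot A
snap_b = {0: b"base", 1: b"new", 2: b"added"}  # 2. later snapshot B on the primary
target = dict(snap_a)
target.update(delta_resync(snap_a, snap_b))    #    ship only the delta
assert target == snap_b                        # 3. done; old snapshot A can be deleted
```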
HP Serviceguard Metrocluster for HP-UX
[Diagram: Data Center 1 and Data Center 2 up to 210km apart, packages A and B failing over, with a quorum in Data Center 3]
What does it do?
- Provides manual or automated site-failover for server and storage resources
Supported environments:
- HP-UX 11i v2 & v3 with Serviceguard
- HP Metrocluster up to 210km (RC supported max)
Requirements:
- HP 3PAR Disk Arrays with 3PAR Remote Copy
- HP Serviceguard Metrocluster
- Max 200ms network round-trip delay
VMware vCenter Site Recovery Manager with HP 3PAR
[Diagram: servers at the protected and recovery sites; production LUNs replicate via Remote Copy to DR LUNs; Virtual Copy test LUNs support failover testing]
Requirements:
- HP 3PAR arrays at both sites
- VMware vSphere and VMware vCenter
- VMware vCenter Site Recovery Manager
- HP 3PAR Replication Adapter for VMware vCenter Site Recovery Manager
- HP 3PAR Remote Copy Software
- HP 3PAR Virtual Copy Software (for DR failover testing)
HP 3PAR Dynamic and Adaptive Optimization
A new way of autonomic data placement and cost/performance optimization is required: an approach for leveraging SSDs.
HP 3PAR Adaptive Optimization:
[Diagram: a multi-tiered volume/LUN spanning Tier 0 SSD, Tier 1 FC and Tier 2 NL/SATA, with autonomic data tiering and region-level data movement]
[Diagram: Dynamic Optimization matrix: drive tiers (SSD, FC, Nearline) vs. RAID levels (RAID 1; RAID 5 (2+1), (3+1), (7+1); RAID 6 (6+2), (14+2)), trading performance against cost]
In a single command: non-disruptively optimize and adapt cost, performance, efficiency and resiliency.
[Diagram: the same 10TB net capacity delivered with ~50% and ~80% savings by shifting RAID level and tier]
[Chart: chunklets per physical disk (disks 1-96) before and after REBALANCE; rebalancing evens out chunklet counts across all spindles]
[Chart from System Reporter: cumulative access rate % vs. cumulative space % per CPG (ex2k7db_cpg, ex2k7log_cpg, oracle, oracle-stage, oracle1-fc, windows-fc, unix-fc, vmware, vmware2, vmware5, windows)]
[Chart: IO distribution vs. required capacity. One tier without Adaptive Optimization: a single medium-speed media pool sized for the required IOPS wastes space. Two tiers with Adaptive Optimization running: cold capacity moves to a low-speed media pool]
[Chart: used space (GiB) per tier]
- Before: this chart out of System Reporter shows that most of the capacity has very low IO activity; adding Nearline disks would lower cost without compromising overall performance.
- After: a Nearline tier has been added and Adaptive Optimization enabled; Adaptive Optimization has moved the least used chunklets to the Nearline tier.
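A sketch of the tiering decision these charts illustrate (region size, threshold and access rates are invented for the example): Adaptive Optimization samples per-region access rates and demotes the coldest regions to the Nearline tier.

```python
def place_regions(access_per_gib: dict[str, float], nl_budget_gib: float) -> dict[str, str]:
    """Demote the least-accessed 1 GiB regions (size assumed) to NL until it is full."""
    tiers, budget = {}, nl_budget_gib
    for region, rate in sorted(access_per_gib.items(), key=lambda kv: kv[1]):
        if budget >= 1 and rate < 1.0:   # cold region and NL space left
            tiers[region], budget = "NL", budget - 1
        else:
            tiers[region] = "FC"
    return tiers

rates = {"r1": 0.01, "r2": 0.02, "r3": 8.0, "r4": 0.5, "r5": 120.0}
print(place_regions(rates, nl_budget_gib=2))  # r1, r2 land on NL; the rest stay on FC
```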
[Comparison: migration the traditional way needs extra tools, software and downtime; Peer Motion is simple and fool-proof]
Peer Motion Migration Phases
Non-disruptive array migration: the steps behind the scenes
1. Install new 3PAR array
2. Configure array Peer Ports on target
3. Create new source-destination zone
4. Configure destination as host on source
5. Export volumes from source to destination
6. Create new destination-host zone
7. Admit source volumes on destination (admitvv)
8. Export destination volumes to host (this adds additional paths to source)
Supported hosts: Windows, Linux, Solaris (more to come)
Source volume preconditions: no existing snapshots; not part of a replication group
Peer Motion Manager menu:
1 ==> Copy Source Array Configuration Data to Destination Array
2 ==> Analyze Migration Links
3 ==> Migrate Volumes
4 ==> Display Array Information
Enter ==> Refresh screen
x ==> Exit
Note: Find more details and the most current support matrix in HP SPOCK.
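A sketch of the orchestration implied by the eight steps and the two volume preconditions (data structures invented for the example):

```python
PHASES = [
    "install new 3PAR array",
    "configure Peer Ports on target",
    "create source-destination zone",
    "configure destination as host on source",
    "export volumes from source to destination",
    "create destination-host zone",
    "admit source volumes on destination (admitvv)",
    "export destination volumes to host",
]

def migrate(volume: dict) -> None:
    # Preconditions from the support notes above
    if volume["snapshots"]:
        raise ValueError("volume has existing snapshots")
    if volume["replication_group"] is not None:
        raise ValueError("volume is part of a replication group")
    for step, phase in enumerate(PHASES, 1):  # phases are strictly ordered
        print(f"{step}. {phase}")

migrate({"snapshots": [], "replication_group": None})
```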
HP 3PAR Virtual Domains
[Diagram: centralized storage administration of virtually provisioned Domains A, B and C; each domain contains its own CPG(s), host(s) and VLUNs, with VVs & TPVVs, VCs & FCs & RCs, and chunklets & LDs beneath them; unassigned elements sit outside any domain ("no domain") and can be any element]
- Requires a license
- Each user may have privileges over one, up to 32 selected, or all domains
- The system provides different privileges to different users for domain objects, with no limit on the maximum number of users per domain
Step 1: User initiates login to the 3PAR InServ via the 3PAR CLI/GUI or SSH
Step 2: InServ searches local user entries first; upon mismatch, the configured LDAP server is checked
Step 3: LDAP server authenticates the user
Step 4: InServ requests the user's group information
Step 5: LDAP server provides LDAP group information for the user
Step 6: InServ authorizes the user for a privilege level based on the user's group-to-role mapping
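The six steps reduce to a local-first fallback chain with a group-to-role mapping at the end. A sketch (the user store, directory contents and role map are stand-ins, not the InForm API):

```python
LOCAL_USERS = {"3paradm": ("secret", "super")}                    # checked first (step 2)
GROUP_TO_ROLE = {"storage-admins": "edit", "auditors": "browse"}  # step 6 mapping

def ldap_authenticate(user: str, password: str) -> list[str]:
    """Stand-in for the LDAP bind + group lookup (steps 3-5)."""
    directory = {("alice", "pw1"): ["storage-admins"]}
    groups = directory.get((user, password))
    if groups is None:
        raise PermissionError("LDAP bind failed")
    return groups

def login(user: str, password: str) -> str:
    if user in LOCAL_USERS:                       # local entries win
        pw, role = LOCAL_USERS[user]
        if pw == password:
            return role
        raise PermissionError("bad local password")
    groups = ldap_authenticate(user, password)    # fall back to LDAP
    for g in groups:                              # group-to-role mapping
        if g in GROUP_TO_ROLE:
            return GROUP_TO_ROLE[g]
    raise PermissionError("no role mapped")

print(login("alice", "pw1"))  # -> 'edit'
```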
Improved visibility
- VM-to-Datastore-to-LUN mapping
Storage properties
- View LUN properties, including thin versus fat
- See capacity utilized
Integration with 3PAR Recovery Manager
- Seamless, rapid online recovery
Solution composed of:
- 3PAR Recovery Manager for VMware
- 3PAR Virtual Copy
- VMware vCenter
Use cases:
- Expedite provisioning of new virtual machines from VM copies
- Snapshot copies for testing and development
Benefits:
- Hundreds of VM snapshots: granular, rapid online recovery
- Reservation-less, non-duplicative, without agents
- vCenter integration: superior ease of use
VAAI primitives
vSphere 4.1 (3PAR support introduced with InForm 2.3.1 MU2+):
- ATS: Atomic Test and Set. Stop locking the entire LUN and only lock blocks.
- XCOPY: Also known as Fast or Full Copy. Leverages the array's ability to mass copy and move blocks within the array.
- WRITE SAME: Eliminates redundant and repetitive write commands.
- TP Stun: Reports the array's TP state to ESX so a VM can gracefully pause if out of space.
vSphere 5.0 (3PAR support introduced with InForm 3.1.1):
- UNMAP: Used for space reclamation rather than WRITE_SAME; reclaims space after a VMDK is deleted within the VMFS environment.
- TP LUN Reporting: a TP LUN is identified via the TP enabled (TPE) bit from the READ CAPACITY (16) response, as described in section 5.16.2 of SBC-3 r27.
- Out of Space Condition: uses CHECK CONDITION status with either NOT READY or DATA PROTECT sense condition.
- Quota Exceeded Behavior: done through THIN PROVISIONING SOFT THRESHOLD REACHED (described in 4.6.3.6 of SBC-3 r22).
[Chart: frontend vs. backend disk IO during a clone with DataMover.HardwareAcceleratedMove=1 (XCOPY offloaded to the array) and =0 (software data movement through the host)]
vStorage API for Array Integration (VAAI): Hardware Assisted Locking
Increases I/O performance and scalability by offloading the block locking mechanism. VMFS metadata operations that benefit include (see the sketch after this list):
- Moving a VM with VMotion
- Creating a new VM or deploying a VM from a template
- Powering a VM ON or OFF
- Creating a template
- Creating or deleting a file, including snapshots
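A sketch of why ATS scales better than whole-LUN SCSI reservations: each host performs an atomic compare-and-swap on just the on-disk lock record it needs, so unrelated operations on the same LUN no longer serialize. The in-memory lock table below stands in for VMFS on-disk locks.

```python
import threading

class Lun:
    """Toy LUN: ATS = compare-and-swap on one 'sector' instead of a whole-LUN lock."""
    def __init__(self) -> None:
        self._sectors: dict[int, bytes] = {}
        self._mutex = threading.Lock()  # models the atomicity the array guarantees

    def ats(self, sector: int, expected: bytes, new: bytes) -> bool:
        with self._mutex:
            if self._sectors.get(sector, b"") != expected:
                return False            # lock record changed: caller retries
            self._sectors[sector] = new
            return True

lun = Lun()
print(lun.ats(7, b"", b"host-A"))  # True: host A takes the lock for one file
print(lun.ats(7, b"", b"host-B"))  # False: host B retries, only sector 7 contended
print(lun.ats(8, b"", b"host-B"))  # True: an unrelated lock proceeds concurrently
```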
[Diagram: ESX hosts issue T10 UNMAP after VMDK deletions; ASIC Zero Detect reclaims the zeroed space inline]
Granular: reclamation granularity is as low as 16kB, compared to 768kB with EMC. Rapid, inline T10 UNMAP via ASIC Zero Detect (16kB granularity).
The VMFS data mover does not leverage hardware offloads and instead uses software data movement if (see the sketch after this list):
- The source and destination VMFS volumes have different block sizes
- The source file type is RDM and the destination file type is non-RDM (regular file)
- The source VMDK type is eagerzeroedthick and the destination VMDK type is thin
- The source or destination VMDK is any sort of sparse or hosted format
- The source virtual machine has a snapshot
- The logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere Client are aligned automatically)
- The VMFS has multiple LUNs/extents and they are all on different arrays
Hardware cloning between arrays (even if within the same VMFS volume) does not work.
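The conditions above amount to a simple predicate. A sketch (field names are invented for the example) that mirrors the list:

```python
def uses_hardware_offload(src: dict, dst: dict) -> bool:
    """True only when none of the software-fallback conditions above applies."""
    if src["block_size"] != dst["block_size"]:
        return False
    if src["type"] == "RDM" and dst["type"] != "RDM":
        return False
    if src["vmdk"] == "eagerzeroedthick" and dst["vmdk"] == "thin":
        return False
    if src["sparse_or_hosted"] or dst["sparse_or_hosted"]:
        return False
    if src["has_snapshot"]:
        return False
    if not src["aligned"] or not dst["aligned"]:
        return False
    if src["array"] != dst["array"]:  # cross-array clones always fall back
        return False
    return True

disk = dict(block_size=1, type="flat", vmdk="thin", sparse_or_hosted=False,
            has_snapshot=False, aligned=True, array="3par-1")
print(uses_hardware_offload(disk, dict(disk)))  # True: XCOPY can be used
```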
3PAR literature
http://h20195.www2.hp.com/v2/erl.aspx?keywords=3PAR&numberitems=25&query=yes
HP Storage Videos
http://www.youtube.com/hewlettpackardvideos#p/c/4884891E6822C9C6
HP 3PAR: the right choice!
Thank you
Serving Information. Simply.