Course Overview
Module 1
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.


Module objectives

After completing this module, you should be able to:


• Describe the objectives and purpose of this course
• Discuss the topics addressed in this course

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.

Course introduction

• This course provides training for the HP 3PAR Disk Array family

• It is designed for HP 3PAR administrators with an emphasis on basic concepts and best
practices needed to administer the array

• An understanding of array and SAN concepts is beneficial

• Emphasis is placed on hands-on labs that reinforce the course concepts. Labs are performed
on a Windows host, and the Command Line labs are cross-platform.

3 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Course agenda
• Module 1: Course Overview
• Module 2: HP 3PAR Solution Overview
• Module 3: HP 3PAR Array Management: MC, SSMC, and CLI Introduction
• Module 4: StoreServ 7000 Hardware Overview
• Module 5: StoreServ 10000 Hardware Overview
• Module 6: Storage Concepts and Terminology
• Module 7: Storage Configuration
• Module 8: Host Connectivity and Storage Allocation
• Module 9: Autonomic Groups and Virtual Lock
• Module 10: Dynamic Optimization and Online Volume Conversion
• Module 11: Thin Features
• Module 12: Local Replication: Virtual Copy and Physical Copy
• Appendix A: Dedup
• Appendix B: Adaptive Flash Cache
• Appendix C: File Persona, Part 1: Concepts and Configuration
• Appendix D: File Persona, Part 2: Snapshots, Antivirus, Quotas

4 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Solution Overview
Module 2
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

After completing this module, you should be able to:


• Name the current HP 3PAR hardware offerings
• Explain the benefits and advantages of HP 3PAR technology versus traditional arrays
and competitive arrays
• Describe some of the basic HP 3PAR high availability advantages
• Explain the advantages of Persistent Ports
• Explain the advantages of Cache Persistence
• Explain the advantages of Virtual Domains

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR current product line

All models contain controller nodes, drive cages, and physical disks.
• Hardware specifics will be addressed in upcoming modules

Current models (figure): 10800, 10400, 7450c, 7450, 7400c, 7400, 7200c, 7200

3 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR architectural differentiation: Purpose built

Diagram: the HP 3PAR Utility Storage feature set layered on purpose-built hardware (7000 and V-Series/10000):
• Software features: Thin Provisioning, Thin Conversion, Thin Persistence, Thin Dedup; Virtual Copy, Physical Copy, Remote Copy, Peer Motion; Virtual Domains, Virtual Lock; Adaptive, Priority, and Dynamic Optimization; Adaptive Flash Cache; System Reporter
• Autonomic policy management: self-configuring, self-healing, self-optimizing, self-monitoring
• OS: fine-grained utilization, performance instrumentation, manageability
• Mesh-Active design and mixed workload support
• Gen4 ASIC: Fast RAID 5/6, zero detection/thin dedup
4 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Traditional array vs. HP 3PAR architecture comparison
Traditional modular storage
• Pros: lower cost
• Cons: limited scalability, limited performance

Traditional monolithic storage
• Pros: very high availability/reliability, high performance
• Cons: high cost, high management burden

HP 3PAR meshed and active/active
• Host connectivity, cache, distributed controller functions, and disk connectivity joined by a switched backplane
• Cost-effective, scalable, and resilient architecture that meets cloud-computing requirements for efficiency, multitenancy, and autonomic management
5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR virtualized storage architecture benefits (1 of 3)

Highly available, simple, high performance, cost effective, feature rich


• Fine grained virtualization
• System-wide striping
• Mesh-active/active controller node design
• Mixed workload optimization
• Ease of management
• Self-adapting algorithms
• Express writes
• T10 DIF

6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR virtualized storage architecture benefits (2 of 3)
Traditional controllers/arrays
• Each RAID level requires dedicated drives
• Dedicated spare disk required
• Limited single LUN performance

HP 3PAR array
• All RAID levels can reside on the same drives
• Distributed sparing, no dedicated spare drives
• Built-in wide striping based on disk chunklets

Diagram: traditional controllers with dedicated RAID 1 and RAID 5 sets, LUNs 0 through 7, and spare drives, versus HP 3PAR controller nodes with RAID 1, RAID 5, and RAID 6 sets striped across the same physical drives (0 through 7).

8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR virtualized storage architecture benefits (3 of 3)
Traditional controllers/array
• Each volume is active on only one controller (really Active-Passive)
• Volume is restricted to drives behind a single controller
• Manual planning and load balancing for each controller
• Cache not necessarily shared

HP 3PAR Mesh Active-Active controllers
• Each volume is active on all controllers (true Active-Active)
• Volume is evenly spread across all resources: disks, controller nodes, cache, I/O
• Autonomically provisioned
• Cache coherent

9 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR ASIC
The Gen4 ASIC is the heart of every HP 3PAR controller node. It is hardware based for performance, and all I/O goes through the ASIC.
• Built-in zero detection and data hashing for dedup
• Fast RAID 5/6
• Rapid RAID rebuild
• Integrated XOR engine
• Tightly coupled clustering: high-bandwidth, low-latency interconnect
• Mixed workload and CPU offload: independent metadata and data processing
10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR mixed workload support

11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Thin Provisioning: Start thin and stay thin
Traditional controllers/array: dedicate capacity on allocation
HP 3PAR array: dedicate capacity on write only (no pools)

Diagram: server-presented LUNs compared with the required net array capacities. In the traditional array, the physically installed disks must cover the full allocated capacity; in the HP 3PAR array, physically installed disks need only cover the data actually written (plus zero detection, plus deduplication on SSDs), with the remainder held as free chunklets.

12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Virtual Domains: Administrative security
Multitenancy with traditional storage: each admin, application, department, or customer (A, B, C) requires separate, physically secured storage.

Multitenancy with HP 3PAR Virtual Domains: admins, applications, departments, and customers (A, B, C) share logically secured HP 3PAR Storage, each within its own domain (Domain A, B, C).

13 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR high availability: Loss of disk and rebuild
Traditional controllers/array: dedicated spare disk drives
• Few-to-one rebuild
• Bottleneck and hotspots
• Non-optimal from a cost point of view
• Full (long) rebuild exposure

HP 3PAR array: distributed sparing (spare chunklets, no dedicated spare drives)
• Many-to-many rebuild
• Parallel rebuilds in less time
• Only used chunklets need to be rebuilt
14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR high availability: Cage reduced as SPF
Tier 1 availability feature in a modular class array

• Most modular arrays stripe data and parity across drives. If multiple drives are lost (loss of an enclosure), data loss can occur.
• HP 3PAR arrays stripe data and parity across drives and drive cages, ensuring data integrity even in the event of a drive cage failure (certain conditions must be met).
• Cage availability (-ha cage) will be discussed in detail in upcoming modules.

Figure: four drive cages from the 7x00 series, with RAID sets striped across the cages.

15 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Persistent Ports
Behavior of HP 3PAR arrays during path loss, controller node maintenance, or node loss

• FC, iSCSI, and FCoE ports supported
• In FC SAN environments, all paths stay online in case of loss of signal on an FC path, during node maintenance, or in case of node failure
• No user intervention required
• The server does not see the swap of the HP 3PAR port WWN, so no multipathing path failover is required

Diagram: host paths through the fabric to Ctrl 0 and Ctrl 1.

16 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Persistent Cache
Four or more controller nodes provide cache coherence/persistence

Traditional controller/array: traditional write-cache mirroring
• Write cache is mirrored between the two controllers
• On a controller failure, either poor performance because of write-through mode, or risk of write data loss (write cache turned off for data security)

HP 3PAR array: persistent write-cache mirroring
• Write cache stays on thanks to redistribution
• No write-through mode: consistent performance
• Works with 4 or more nodes**

18 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR recent enhancements (1 of 2)

• Deduplication
For SSD thin provisioned volumes, deduplication of data results in cost savings and reduces wear and tear on SSD physical disks

• Adaptive Flash Cache (AFC)


Using SSD capacity as an extension of DRAM improves performance of random reads

• StoreServ Management Console (SSMC)


Modern, browser-based user interface for HP 3PAR array management

19 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR recent enhancements (2 of 2)
• File Persona
Extends the HP 3PAR block storage concept to allow users file access through File
Services using a StoreServ file controller

Provides NFS and CIFS access plus object access and is ideal for workloads such as
home directory consolidation, group/department shares, corporate shares, and
custom cloud applications

• Introduction of the 7000 “c” series models: 7200c, 7400c, 7440c, 7450c

20 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR component connectivity
Figure (two-node 7400 example): front-end hosts connect through fabric switches (FC, iSCSI, FCoE) to controller nodes Node 0 and Node 1; an expansion drive cage with disks attaches to the controller nodes on the back end.


21 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR remote support

• Allows remote service connections for assistance and troubleshooting to deliver effective and reliable
customer support
• Transmits only diagnostic data
• Uses secure service communication over Secure Socket Layer (HTTPS) protocol between HP 3PAR array and
HP 3PAR Central
• With the optional Secure Service Policy Manager remote support capabilities and logging of remote support
activities can be enabled/disabled
• If security rules do not allow a secure Internet connection, a support file can be generated on the SP and sent to HP 3PAR Central by email or FTP
22 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP Storage interoperability
SPOCK – Single Point of Connectivity Knowledge for HP Storage Products
SPOCK provides the information to determine interoperability between HP storage components
in order to:
• Integrate new products
and features
• Maintain active installations

SPOCK can be accessed by


• HP internal users
• HP customers
• HP Partners

http://www.hp.com/storage/spock

23 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Array Management:
MC, SSMC, and CLI Introduction
Module 3
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

After completing this module, you should be able to:


• Install the HP 3PAR Management Console (MC) and manage an array
• Install and manage an HP 3PAR array using StoreServ Management Console (SSMC)
• Install the Command Line Interface (CLI) and use CLI commands to manage an array
• Log into the MC, SSMC and CLI
• Use some of the basic features of the SSMC and MC
• Use some of the basic CLI commands
• Understand the high level functionality of HP OneView

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Array Management introduction
Array administration can be done using the new StoreServ Management Console (SSMC), the Management Console (MC) GUI, or a very robust Command Line Interface (CLI)

3 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
MC: HP 3PAR Management Console

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Log in and connect to Management Console

• Initial login to the array requires an IP address or hostname
• Username/password-based authentication
• Secure connection (TCP port 5783) is the default; an unsecured connection uses TCP port 5782
• Available permissions are allocated by domain (if configured)
• Administrators can connect to multiple HP 3PAR arrays according to permissions and privileges
• Default login: 3paradm/3pardata

5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Authentication and Authorization
LDAP Login

Components: management workstation, HP 3PAR array, LDAP server

Step 1: User initiates login through HP 3PAR SSMC/MC/CLI or SSH
Step 2: HP 3PAR OS searches local user entries first; on a mismatch, the configured LDAP server is checked
Step 3: LDAP server authenticates the user
Step 4: HP 3PAR OS queries the LDAP server for group membership
Step 5: LDAP server provides LDAP group information for the user
Step 6: HP 3PAR OS authorizes the user for a privilege level based on the user's group-to-role mapping
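As a rough CLI sketch of verifying this configuration (informational commands only; option names and output vary by HP 3PAR OS version, and exampleuser is a placeholder LDAP account):

cli% showauthparam
(displays the configured LDAP authentication parameters)
cli% checkpassword exampleuser
(prompts for the password, tests LDAP authentication, and reports the resulting group-to-role mapping)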

6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Management Console dashboard
Dashboard areas (figure):
• Main menu bar and main tool menu
• Management tree and management window
• Common Actions panel
• Management pane/navigation pane
• Alert/Task pane, Connection pane, and status bar

7 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Customizing the Management Pane

The Management pane can be customized


• Click the >> button in the bottom corner
• Select desired options to display

8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Management Console navigation

Context-sensitive Management pane and Common Actions tasks

Content pane with tabbed view

9 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Management Console Help
Access Help
• By pressing F1
• From Help in the Main menu bar

• By clicking the ? icon next to a field

Field sensitive help


10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Management Console wizards
Wizards available for most
configuration options
• Easy to use
• Intuitive
• Simplification of common
tasks

Note: screen
truncated to fit on
slide
11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Management Console Welcome screens

• Welcome screens display for some


wizards

• Gives helpful information about


the task being performed

• To suppress display of Welcome


screens, select the checkbox at the
bottom of the Welcome screen

12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Viewing alerts in Management Console

• Management Console displays alerts


• View alerts by
− Viewing the Alert pane from the main
dashboard
− Selecting Events and Alerts from the
Navigation pane

13 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Setting preferences for Management Console

• The Preferences option allows you to set global


preferences for all systems displayed in the
Management Console

• Access the Preferences option from the Main


menu bar at View > Preferences

• Any settings made using the Preferences option


are saved on the system and used the next time
any user logs in to the Management Console

14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Management Console New user creation
Users are assigned a specific role

Create new users from the Security Manager > Create User option
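The same task can be done from the CLI. A minimal sketch, assuming a local user named user1 who should receive the edit role (role names and exact syntax should be confirmed with the CLI help for your HP 3PAR OS version):

cli% createuser user1 all edit
(creates local user user1 in all domains with the edit role; the command prompts for a password)
cli% showuser
(lists the configured users and their roles)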

15 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: StoreServ Management Console

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC Introduction
• Newest and modern GUI for HP 3PAR array
management
• Not a complete replacement for Management
Console (MC) but selected functionality
incorporated into SSMC
• Supported on HP 3PAR arrays running HP 3PAR
OS 3.1.3 or higher
• Supports a variety of browsers
− Google Chrome
− Mozilla Firefox
− Microsoft Internet Explorer
• Easy to use and intuitive
• Functionality includes
− Working with CPGs, Virtual Volumes, Snapshots,
Host Sets, Virtual Volume Sets, Remote Copy
− Performance Monitoring and Reporting
− File Persona management
17 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC Installation

• Launch the Windows or Linux


installer
• Follow the on-screen wizard
• Configurable installation options
– Installation folder

– Port number (default port is 8443)

• Up to 16 HP 3PAR arrays can be


managed at a time
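Once the installer finishes, the console is opened from a supported browser. For example, assuming the default port and a hypothetical server name:

https://ssmc-server.example.com:8443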

18 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC Administrator Console (1 of 2)

20 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC Administrator Console (2 of 2)

Arrays must be added through the Administrator Console

21 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Logging into SSMC and the Dashboard
• The Dashboard is the entry point for SSMC
• High-level overview of all added/connected storage systems

22 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Mega menu/Main menu
Primary navigation tool

24 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC wizards

• Wizards will be launched for tasks using


intuitive hyperlinks or the Actions pulldown
• Example is the wizard to create a virtual
volume

25 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Local searching
• Search for resources locally, on the current page
• Searching is critical for large sets of data

Example shows searching for CPG

26 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Global searching
• Search for information throughout all data (not case sensitive)
• Results include the object type

• Example from Common Provisioning Group area


• Search results include hosts and users

Search results can include actions
• Example shows a search on the word create
• Options to create CPGs, VVs, Users, and more

27 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Area Filtering

• Filtering can be done in all areas


• Example shows display of virtual volumes and filtering by Provisioning type
28 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Detail maps
• Available for many resources
• Selecting a resource displays page and map/tree
for that resource

29 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC Session Menu and Help
Session menu
• Opens a menu to log out of SSMC
• Allows opening of the Administrator Console in another tab or session
• Displays user name and session duration

Help
• Click the ? icon at the top right to get Help
• Two options available
– Help for the current page
– Browse help brings up SSMC online help page

30 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC Settings (1 of 2)

31 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC Settings (2 of 2)

Note: screen
truncated to fit on
slide

32 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC New user creation (1 of 2)

33 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC New user creation (2 of 2)

Supply Name, Password and add user role/authorizations

34 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Activity

View Task details

• Displays tasks
• Expand to view additional details
– Start and end times
– Actions taken by task

35 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC 2.0 vs. 2.1 Feature Support
Feature                   2.0   2.1
Dashboard                  Y     Y
Search                     Y     Y
Alerts                     Y     Y
Events                     Y     Y
Provisioning               Y     Y
Virtual Domains            N     Y
File Persona               Y     Y
Hardware                   Y     Y
System Reporter            Y     Y
Remote Copy                Y*    Y
Adaptive Optimization      N     Y
DO/Tunesys                 N     Y
VV Conversion              N     N
Virtual Copy               Y     Y
Physical Copy              N     Y
PO/QoS                     N     N
Adaptive Flash Cache       N     Y
DAR Encryption             N     N
FIPS 140-2                 Y     Y
Peer Persistence           N     Y
Peer Motion                N     N

*Limited Remote Copy functionality
36 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Command Line Interface

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
CLI installation

• Install from HP 3PAR OS CLI and SNMP CD

• PuTTY, SecureCRT, and OpenSSH can also be used for CLI access

• Supported on HP-UX, Linux, Oracle Solaris


(Solaris), and Microsoft Windows
operating systems

• Best practice: Install the CLI on the


Management Console server
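Because the HP 3PAR CLI is also reachable over SSH, any of the clients above can open a session directly against the array. A sketch, using the default 3paradm account and a placeholder management IP address:

$ ssh 3paradm@192.0.2.10
3paradm@192.0.2.10's password:
cli% showversion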

38 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
CLI login

Use the HP 3PAR OS CLI to monitor,


manage, and configure HP 3PAR Storage
Systems
• Access the CLI from the desktop icon created during
installation
• The example shows:
− A login (IP, user, and password)
− The showversion command
• Use cmore before a command for page breaks
in the output
cli% cmore showport
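A few other read-only commands are commonly used right after login to get oriented; this is a sketch and output is omitted because it depends on the array (see the CLI help for details): showsys (system name, model, and capacity summary), shownode (controller node status), showpd (physical disks), showcpg (CPGs), and showvv (virtual volumes).

cli% showsys
cli% shownode
cli% showpd
cli% showcpg
cli% showvv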

39 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
CLI Help

40 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
CLI examples (1 of 4)

Use showlicense to view


installed licenses

41 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
CLI examples (2 of 4)

42 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
CLI examples (3 of 4)

43 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
CLI examples (4 of 4)

44 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP OneView

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP OneView Introduction

Diagram: HP OneView exposes RESTful APIs and a message bus to CloudSystem and third-party integrations, and manages servers, storage, and networks; the example deployment is driven with just 4 API commands.

• APIs built with modern RESTful interfaces, the language of the web
• Automate any step or process, integrate with any device or tool
• Program processes to run in parallel to drastically reduce project times
• The UI is built on the same APIs, so anything done through the UI can be done via the APIs

46 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
OneView Provision infrastructure: 2 times faster
Deploying infrastructure for an ESX cluster (x16)

With traditional tools = 170 minutes of administrator time*
1. Manually identify available capacity
2. Update enclosure firmware
3. Configure enclosure settings
4. Update Virtual Connect firmware
5. Configure Virtual Connect base settings
6. Update server firmware
7. Configure BIOS
8. Configure iLO
9. Configure SmartArray
10. Configure Virtual Connect

With HP OneView = 75 minutes of administrator time*
1. Configure network sets
2. Configure enclosure group
3. Configure CI template
4. Identify available capacity and provision server via template

*Source: HP testing, based on 640 servers and 60 profiles


47 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP OneView Features
Storage management
• Automated storage provisioning for HP 3PAR StoreServ storage
• Server profile integration for automated SAN zoning and provisioning
• Supports Brocade Fibre Channel fabrics and direct-attach with HP Virtual Connect

Network management using Virtual Connect


• Support for new HP Virtual Connect FlexFabric-20/40 F8 modules
• Support for HP Virtual Connect Fibre Channel modules
• Support for Cisco Nexus 5000 switches and Cisco Fabric Extenders (inventory only)
• Untagged traffic, VLAN tunneling, LACP timers configuration, bulk Ethernet networks, MAC Address tables

48 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP OneView high-level architecture
Diagram: the single management appliance presents the CI UI and API (CI profiles) to Storage Resource Manager, HP Matrix, Microsoft System Center, and VMware vCenter. Inside the appliance, resource managers (environment, storage, deployment, physical server, connectivity, and network services) sit on CI Common Services and manage power and cooling, storage targets and volumes, OS images, servers and enclosures, switches (LAN, SAN), and network services (IPS, firewall, routing, and so on).

49 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP OneView use cases
Easily set up and consume storage resources

1. Storage admin: Set up, configure, monitor, and modify storage infrastructure for OneView consumption
2. Storage admin: Import and display storage arrays and FC pools into OneView
3. Storage admin: Import and display SAN fabrics
4. Storage admin: Create storage templates for provisioning
5. Server admin: Create storage volumes
6. Server admin: Provision storage via server profiles

50 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
StoreServ 7000
Hardware Overview
Module 4
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

After completing this module, you should be able to:


• Explain the HP 3PAR Controller options for the 7000 and 7000c series models
• Explain the drive cage expandability options for the 7000 and 7000c series
• Describe the Node:Slot:Port (N:S:P) naming convention for controller node ports
• Describe the Cage:Magazine:Disk (C:M:D) convention for physical disks
• Understand the rules regarding physical disk population in the 7000/7000c series drive cages

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
7000 hardware architecture

• The base enclosure SFF drive chassis holds up to 24 SFF drives per enclosure
• The 7000 series is the only series to include controller nodes inside the primary drive chassis
• Models cannot be upgraded (for example, a 7200c cannot be upgraded to a 7400c or 7450c)

3 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR 7000 Series Storage introduction

• Start with base enclosure(s)
• Add expansion drive cages
• The 7000 series models have different hardware configurations

Figure: 7200/7200c 2-controller model, 7400 2-controller, and 7400/7400c 4-controller configurations.

4 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR 7000 Series Hardware Building Blocks
Base storage systems
• 7200/7200c: 2 nodes, 4 FC ports, 24 SFF slots
• 7400/7400c/7440c: 2/4 nodes, 4/8 FC ports, 24/48 SFF slots
• 7450/7450c (SSDs only): 2/4 nodes, 4/8 FC ports, 24/48 SFF slots

Expansion drive cages
• HP M6710 2.5 in 2U SAS: SFF SAS HDDs and SSDs
• HP M6720 3.5 in 4U SAS: LFF SAS HDDs and SSDs

Host adapters
• 4-port 8Gb/s FC HBA
• 2-port 16Gb/s FC HBA
• 2-port 10Gb/s iSCSI/FCoE CNA
• 2-port 10Gb/s Ethernet NIC ("c" models only)
• 4-port 1Gb/s Ethernet NIC ("c" models only)

Service processor
• VM-based (default) or physical (optional)

Racks
• HP G3 rack or customer-supplied rack (4-post, square hole, EIA standard, 19 in., from HP or other suppliers)
5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR StoreServ 7000 Controller Nodes
Two to four nodes per system, installed in pairs
Per node configuration
• Thin Built In Gen4 ASIC
• Intel processor
7200/7200c 4-core
7400/7400c 6-core
7440c/7450/7450c 8-core
• Data cache and Control Cache (amount depends on model)
• Two built-in/on-board 8Gb/s FC ports
• Two built-in/on-board 6Gb/s SAS ports
• One built-in 1Gb/s Ethernet Remote Copy (RCIP) port
• 50GB MLC SATA boot SSD
• Optional PCI-e adapter
Four-port 8Gb FC or Two-port 16Gb FC
Two-port 10Gb iSCSI
Two-port 10Gb Ethernet Adapter (“c” models only)
Four-port 1Gb Ethernet Adapter (“c” models only)

6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR 7000 Series Base Enclosure: Front and rear

Base enclosure contains:
• 24 SFF slots for disks (front view)
• Two controller nodes, Node 0 and Node 1, plus power cooling modules PCM0 and PCM1 (rear view)

7 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR 7400/7400c/7450/7450c four-node configuration

All upgrades are nondisruptive
• From a 74x0 2-node to a 4-node configuration
• Nodes are interconnected with interconnect cables

Autonomic rebalance
• System-wide striping across controller nodes and disks
• Double the performance of existing LUNs
• Double the capacity available to all LUNs

Figure: stacked enclosures with Node 0 through Node 3.

8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
7200c, 7400c, and 7440c comparison (1 of 2)
Summary                              7200c           7400c             7440c
Number of controller nodes           2               2 or 4            2 or 4
Processors                           2 six-core      2-4 six-core      2-4 eight-core
                                     1.8 GHz         1.8 GHz           2.3 GHz
Total on-node cache                  40 GB           96 GB             192 GB
Total on-node cache per node pair    40 GB           48 GB             96 GB
Maximum host ports                   12              24                24
8Gb/s Fibre Channel host ports       4-12            4-24              4-24
16Gb/s Fibre Channel host ports
10Gb/s iSCSI host ports              0-4             0-8               0-8
10Gb/s Ethernet adapter ports *
1Gb/s Ethernet adapter ports *       0-8             0-16              0-16

* NIC used with the File Persona feature


9 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
7200c, 7400c, and 7440c comparison (2 of 2)

Summary                                     7200c        7400c         7440c
Maximum initiators supported                1024         2048          2048
2U controller enclosure SAS drive capacity  24           24            24
Number of disk drives                       8-240        8-576         8-960
Number of solid state drives                8-120        8-240         8-240
Raw capacity (approx.)                      1.2-500 TB   1.2-1600 TB   1.2-2000 TB
Usable file capacity                        2-64 TB      2-128 TB      2-128 TB
Number of expansion cages                   0-9          0-22          0-38
                                            enclosures   enclosures    enclosures

10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
7200 and 7400 comparison
Summary                                     7200          7400
Number of controller nodes                  2             2 or 4
Processors                                  2 four-core   2-4 six-core
                                            1.8 GHz       1.8 GHz
Total on-node cache                         24 GB         64 GB
Total on-node cache per node pair           24 GB         32 GB
Maximum host ports                          12            24
8Gb/s Fibre Channel host ports              4-12          4-24
16Gb/s Fibre Channel host ports
10Gb/s iSCSI host ports                     0-4           0-8
Maximum initiators supported                1024          1024 or 2048
2U controller enclosure SAS drive capacity  24            24
Number of disk drives                       8-240         4-480
Number of solid state drives                8-120         8-240
Raw capacity (approx.)                      1.2-400 TB    1.2-1200 TB
Number of expansion cages                   0-9           0-18
                                            enclosures    enclosures

11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
7450c Specifications
Summary                                     7450c 2-node    7450c 4-node
Number of controller nodes                  2 or 4
Processors                                  2-4 eight-core 2.3 GHz
Total on-node cache                         96 GB           192 GB
Maximum host ports                          4-12            8-24
8Gb/s Fibre Channel host ports              4-12            8-24
16Gb/s Fibre Channel host ports
10Gb/s iSCSI host ports                     0-4             0-8
10Gb/s Ethernet adapter ports *
1Gb/s Ethernet adapter ports *              0-8             0-16
Maximum initiators supported                1024            2048
2U controller enclosure SAS drive capacity  24              24
Number of solid state drives                8-120           8-240
Raw capacity (approx.)                      0.8-260.4 TB    0.8-460.8 TB
Number of expansion cages                   0-9             0-18
                                            enclosures      enclosures

* NIC used with the File Services/Persona feature


12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
7450 Specifications
Summary                                     7450 2-node     7450 4-node
Number of controller nodes                  2 or 4
Processors                                  2-4 eight-core 2.3 GHz
Total on-node cache                         64 GB           128 GB
Maximum host ports                          4-12            8-24
8Gb/s Fibre Channel host ports              4-12            8-24
16Gb/s Fibre Channel host ports
10Gb/s iSCSI host ports                     0-4             0-8
Maximum initiators supported                1024            2048
2U controller enclosure SAS drive capacity  24              24
Number of solid state drives                8-120           8-240
Raw capacity (approx.)                      0.8-120 TB      0.8-220 TB
Number of expansion cages                   0-4             0-8
                                            enclosures      enclosures

13 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
7000 node port locations and numbering (N:S:P)
Figure: rear view of a 7000 controller node showing slots 0 through 3 and the N:S:P numbering.

• Slot 0: onboard 6Gb SAS drive ports (2x 4-wide expansion ports)
• Slot 1: onboard 8Gb Fibre Channel host ports (2x)
• Slot 2: standard HBA slot for an optional 4-port 8Gb/2-port 16Gb FC adapter, 2-port 10Gb iSCSI/FCoE adapter, or 4-port 1Gb/2-port 10Gb Ethernet adapter
• Slot 3: 1Gb Ethernet replication (RCIP) port and management port
• 2x cluster expansion ports are used for the optional 4-way configuration

Ports are addressed as Node:Slot:Port (N:S:P); for example, 0:1:2 is node 0, slot 1, port 2.
14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
74x0/74x0c: Four-node interconnect cabling

The interconnect link cables are directional and labeled A and C.

15 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Expansion cage option 1: Small form factor (SFF)

The M6710 SFF drive cages (including the primary chassis with controllers) support 2.5 inch disks:
• 200 GB, 480 GB, and 920 GB MLC SSDs, plus 480 GB and 1.92 TB cMLC SSDs
• 300 GB/15K, 450 GB/10K, 600 GB/10K, 900 GB/10K, and 1.2 TB/10K SAS Fast Class (FC) disks
• 1 TB/7.2K NL disks

Model number is M6710

16 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SFF expansion cage details

Figure: the M6710 cage with drive slots 0 through 23, interface cards IFC0 and IFC1, and power cooling modules PCM0 and PCM1.


17 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Expansion cage option 2: Large form factor (LFF)

The M6720 LFF drive cages support 3.5 inch disks:


• 200 GB, 480 GB, and 920 GB MLC SSDs, plus 480 GB and 1.92 TB cMLC SSDs
• 2 TB, 3 TB, 4 TB 7.2K NL disks

Model number is M6720

18 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
LFF expansion cage details

• 4U
• All disks are 3.5 inch disks
• Each cage has 2 IFCs
• The SAS expander on each IFC has a dedicated and direct connection to each drive in the cage
• Two IFC slots and two application slots; application slots that are unused are filled with blanking modules

Figure: front of the M6720 showing drive slots 0 through 3 (bottom row) up to 20 through 23 (top row); rear showing IFC0, IFC1, blanking modules, PCM0, and PCM1.
19 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
7000 Series disk encryption
• Two types of self-encrypting drives (SED) are supported:
− Data-at-Rest encryption (DAR)
− Federal Information Processing Standards (FIPS) 140-2 (includes a tamper-evident seal)
• Both types of drives leverage industry-standard SEDs
• Both DAR and FIPS drives offer the same level of encryption and can be mixed
− HP 3PAR OS 3.1.2 MU2 or higher (DAR drives)
− HP 3PAR OS 3.1.3 or higher (FIPS 140-2 drives)

SED Type Description


DAR 400GB SFF MLC SSD, 450GB/900GB 10K SFF SAS, 1TB 7.2K NL SFF SAS

FIPS 920GB SFF MLC SSD, 450GB/900GB 10K SFF SAS, 2TB/4TB 7.2K NL LFF SAS

20 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
7000 series disk numbering convention: C:M:D
One disk per magazine, so the disk number is always 0 (naming convention: C:M:0)

Example: 1:18:0 is cage 1, magazine (slot) 18, disk 0.

22 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Controller node to expansion cage cabling (1 of 2)

• Each node has two SAS controller-to-enclosure connectors with 4 x 6Gb/s lanes each
• Controller enclosure internal drives are connected to controller port DP-1
• First external enclosure is added to controller ports DP-2
• Additional enclosures then alternate between controller ports DP-1 and DP-2
• Each drive enclosure comes with mounting rail kit, 2 x IO-modules, 2 x power cables and 2 x
1m SAS cables
• Enclosures can be installed in adjacent racks
• 2m and 6m SAS expansion cable kits are available

23 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Controller node to expansion cage cabling (2 of 2)
Figure: cabling from Node 0 and Node 1 ports DP-1 and DP-2 to the expansion cages.

• Node 0 chassis internal disks are tied to port DP-1 and count as one enclosure
• Node 0 port DP-1 connects to 4 additional disk enclosures
• Node 0 port DP-2 connects to 5 additional disk enclosures
• Node 1 port DP-1 connects to 4 additional disk enclosures
• Node 1 port DP-2 connects to 5 additional disk enclosures
• Odd node to odd IFCs, connecting to the highest numbered disk enclosure first
• Even node to even IFCs, connecting to the lowest numbered disk enclosure first

24 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Physical disk guidelines (1 of 2)

• Before proceeding with the drive population, verify that all applicable licenses associated with the system
are registered.
• If the array is enabled with the Data-at-Rest (DAR) encryption feature, only use the self-encrypting drives
(SED). Using a non-self-encrypting drive can cause errors during the upgrade process.
• The first expansion drive enclosure added to a system must be populated with the same number of drives as
the node enclosure.
• The drives must be identical pairs.
• The same number of drives and type should be added to all of the drive enclosures in the system.
• The minimum addition to a two-node system without expansion drive enclosures is two identical drives.
• The minimum addition to a four-node system without expansion drive enclosures is four identical drives.

25 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Physical disk guidelines (2 of 2)
SFF drive cage hard drive population
• PD pairs should be placed in the lowest available slot numbers

LFF drive cage hard drive population
• PD pairs should be populated in columns and in the lowest available vertical slots in that column

26 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Adding physical disks

There are five steps to adding physical disks:


• Checking initial status
• Inserting hard drives
• Checking status
• Checking progress
• Completing the upgrade

27 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Checking initial status
Under Systems, select Physical Disks, and in the right pane, select the Physical Disks tab

28 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Inserting hard drives and checking status
• Two hard drives are added
to each of the three drive
cages

• The inserted hard drives


display as New in the State
column

• Disks are ready to be


admitted into the system,
which occurs automatically

29 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Checking progress

• On the Physical Disks tab, in the drop-down list, select Chunklet Usage
• New disks are broken down into chunklets

30 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Completing the upgrade
• In the Systems pane, select Storage Systems > Physical Disks, and then select the Physical Disks tab
• On the Physical Disks tab, in the drop-down list, select Chunklet Usage
• New disks display no chunklets being used; to rebalance across all PDs, perform a tunesys after disk addition
• Chunklet initialization can take hours to complete, and it can be hours before the available capacity is displayed
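The same check and rebalance can be sketched from the CLI (run tunesys during a quiet period, because it moves data in the background):

cli% showpd -c
(shows per-disk chunklet usage, so newly added disks appear with free, unused chunklets)
cli% tunesys
(rebalances used capacity across all physical disks, including the new ones)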

31 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
StoreServ 10000
Hardware Overview
Module 5
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

After completing this module, you should be able to:


• Explain the basics of HP 3PAR hardware components for the 10000 series models
• Know the numbering schemes for the HP 3PAR hardware components including:
− Controller node numbering scheme
− Slot numbering scheme
− Port numbering scheme (N:S:P)
− Cage/Magazine/Disk numbering scheme (C:M:D)

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
10000 hardware overview: 10400 and 10800

• 10000 series also referred to as the V series

• Two models:
10400 (also referred to as the V400)
10800 (also referred to as the V800)

• Major capacity, availability, and performance


improvements compared to its predecessors

10000 series controller nodes and cabling in rear of rack

3 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
10000 Hardware QuickSpecs
Summary                      10400                     10800
Controller nodes             2 or 4                    2, 4, 6 or 8
8Gb/s FC host ports          0-96                      0-192
16Gb/s FC host ports         0-48                      0-96
10Gb/s iSCSI host ports      0-16                      0-32
10Gb/s FCoE ports            0-48                      0-96
Built-in Remote Copy ports   2 or 4                    2, 4, 6 or 8
Total on-node cache          96-192 GB (old nodes)     192-768 GB
                             192-384 GB (new nodes)
Disk drives                  16-960                    16-1920
Raw capacity (approx.)       1.6 PB                    3.2 PB
Number of drive cages        2-24 chassis              2-48 chassis

Drive types *: 480GB and 920GB MLC SSD; 480GB and 1.92TB cMLC SSD; 300/600GB 15K FC; 300GB 15K SAS; 450/900GB 10K SAS; 1.2TB 10K SAS; 2/4/6TB 7.2K SAS NL

* Drive types reflect current types/sizes/speeds


4 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
10000 hardware components overview
Controller node: performance and connectivity building block
• Has CPUs and 2x Gen4 ASICs
• System management
• RAID and thin calculations/dedup
• Installed in pairs (node pairs)

Node mid-plane: cache-coherent interconnect
• Completely passive
• Defines scalability

Drive cages and drive magazines
• Drive cages are 4U/40 disks
• Up to 24 per 10400, up to 48 per 10800
• Contain magazines that house up to 4 disks

Service processor
• One per system, 1U
• Service and monitoring

5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
10000 controller nodes (1 of 2)
• 2 x Intel 2.83 Ghz Quad-Core Processors
• 2 x Gen4 ASICs
− 2.0 GB/s dedicated ASIC-to-ASIC bandwidth
− 112 GB/s total backplane bandwidth
− Inline Fat-to-Thin processing in DMA engine 2
• 10400: 48 GB (old nodes)/96 GB (new nodes) cache
• 10800: 96 GB cache
• 8 and 16 Gb/s FC ports
• 10 Gb/s iSCSI ports
• 10 Gb/s FCoE ports
• Built-in RCIP port
• Internal SSD drive
− HP 3PAR OS
− Cache destaging in case of power failure
• Processor for diagnostics monitoring

6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
10000 controller nodes (2 of 2)

10800 nodes replaced 10400 nodes


• Control cache increased from 16 GB to 32 GB per node
• Data cache increased from 32 GB to 64 GB per node
• Supported with HP 3PAR OS 3.1.2 and higher
• Upgrades to higher cache nodes not supported on existing systems
• Mixing of old and new nodes unsupported

7 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
10000 Full-mesh Controller backplane

8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
10000 controller node numbering

Figure: V-class controller nodes in the rear of the rack, split between an upper chassis and a lower chassis. Controller nodes are mounted vertically, versus horizontally as in other models.


9 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
10000 controller slot and port numbering (N:S:P)
Figure: rear view showing Node 0 through Node 7 (grouped as nodes 0-3 and nodes 4-7).

PCI-e card installation order (slot numbers):
• Drive cage FC connections: 6, 3, 0
• Host connections (FC, iSCSI/FCoE): 2, 5, 8, 1, 4, 7
• Remote Copy FC connections: 1, 4, 2, 3

Port E1 is the RCIP port. Ports are addressed as N:S:P; the example shown is 7:5:4 (node 7, slot 5, port 4).
10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
10000 drive cages and physical disks (1 of 2)

• Holds from 2 to 10 drive magazines

• (1+1) redundant power supplies

• Redundant dual FC paths

• Redundant dual FC switches

• Each magazine always holds 4 drives of the same drive type
• Each magazine in a cage can be a different drive type

SFF magazine LFF magazine

11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
10000 drive cages and physical disks (2 of 2)
Figure: drive cage front view with magazine numbers.

Drive cage contains:
• 10 drive bays; each bay holds one magazine
• Each magazine holds a maximum of 4 drives

Drive cage provides:
• (1+1) redundant power supplies
• Dual FC-AL loops
• Dual switches for dual-ported drives

12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
10000 disk numbering convention: C:M:D

Figure: a drive magazine holding four disks, numbered 3, 2, 1, 0.

The physical disk numbering convention is Cage:Magazine:Disk (C:M:D).

13 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
10000 Series disk encryption
• Two types of self-encrypting drives (SED) are supported:
− Data-at-Rest encryption (DAR)
− Federal Information Processing Standards (FIPS) 140-2 (includes a tamper-evident seal)
• Both types of drives leverage industry-standard SEDs
• Both DAR and FIPS drives offer the same level of encryption and can be mixed
− HP 3PAR OS 3.1.2 MU2 or higher (DAR drives)
− HP 3PAR OS 3.1.3 or higher (FIPS 140-2 drives)

SED Type Description


DAR 4x400GB SSD Magazine, 4x450GB 10K Magazine, 4x900GB 10K Magazine

FIPS 4x450GB 10K Drive Magazine, 4x900GB 10K Drive Magazine


4x2TB NL 7.2K Drive Magazine, 4x4TB NL 7.2K Drive Magazine

14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Storage Concepts and
Terminology
Module 6
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

After completing this module, you should be able to:


• Explain HP 3PAR provisioning terminology
• Understand the HP 3PAR concepts of chunklets and logical disks
• Explain the concept of common provisioning groups (CPGs)
• Explain the three types of HP 3PAR Virtual Volumes (VV): Fully Provisioned, Thin Provisioned, Thin Deduped

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Virtual Volume concept (1 of 2)

• A virtual volume (VV) is a virtual disk and is assembled by policy


• A VV is exported to a host normally as a VLUN
• The host sees the VLUN as a disk and this is the only layer visible to the host

Figure: a virtual volume exported to a host as a VLUN.
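As a CLI sketch of creating and exporting a volume (the CPG, volume, and host names plus the size are made-up examples; confirm flags against the CLI help for your HP 3PAR OS version):

cli% createvv FC_r5 vol_full 100g
(fully provisioned virtual volume drawn from the FC_r5 CPG)
cli% createvv -tpvv FC_r5 vol_thin 100g
(thin provisioned virtual volume)
cli% createvv -tdvv SSD_r5 vol_dedup 100g
(thin deduplicated virtual volume; requires an SSD CPG)
cli% createvlun vol_thin 1 host01
(exports vol_thin to host01 as LUN 1, that is, as a VLUN)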

3 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Virtual Volume concept (2 of 2)
Hosts/Host Sets

Virtual volumes (VV)


Thick/Thin/Thin Dedup storage space exported to hosts

Common provisioning groups (CPG)


Virtual pools of logical disk space manage the creation
and dedication of virtual volumes

Logical disks (LD)


Intelligent combinations of chunklets
(arranged as rows of RAID sets) and
tailored for cost, performance,
availability

Chunklets
Physical disk space allocated in 1 GB units

Physical disks
• PDs are populated into drive cages
• Chunklets are drawn from any available physical disks, and logical disks are created from any available chunklets (matching colors in the figure indicate this)

4 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Chunklet concept (1 of 2)
• Every physical disk is broken down into allocation units called chunklets
• Chunklet size is fixed at 1 GB for all drive sizes on the 10000 and 7000 series (256 MB chunklet size on all earlier models)
• Users do not create chunklets, nor can the chunklet size be changed
• Smallest unit of measure and makeup of a VV at the physical level
• Chunklets are the very first step in building HP 3PAR Storage
• Chunklets are like miniature physical disks, the lowest common denominator
• Some chunklets are set aside as spare chunklets for data rebuilds

Figure legend: C = 1 GB data chunklet, SC = 1 GB spare chunklet

5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Chunklet concept (2 of 2)
Importance of Chunklets:
• Same drive spindle can service many different VVs, and many different RAID types at the same time
• Enables array to be managed by policy, not by administrative planning
• Drives up utilization rate of the disks (no leftover space)
• Improved high availability and sparing
• Enables easy mobility between physical disks, RAID types, service levels

Chunklet Grouping: chunklets are autonomically grouped into virtual pools by drive
rotational speed and type
• Nearline (NL) disk chunklet pool (used for less performance critical applications)
• Fast Class (FC) disk chunklet pool
• Solid State Disk (SSD) chunklet pool (used for high-value/performance critical applications)

7 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Logical disk (LD) concept and controller node ownership
• An LD is a collection of physical disk chunklets (1 GB) arranged as rows of RAID sets
• Each RAID set is made up of chunklets from different physical disks of the same type
• A chunklet can only be assigned to one logical disk
• LDs are bound to a controller node, so VVs are load balanced across all controller nodes
• Enables a TRUE active/active configuration on a per VV/LUN basis, not "owned" by one controller (Active/Passive) as in most traditional arrays

Figure: LDs distributed across Node 0 through Node 3.

8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Common provisioning group (CPG)

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Common provisioning group (CPG)

• An understanding of CPGs is critical to administering an HP 3PAR Array


• CPGs are the combination of a RAID type and a drive type, which equals availability level and service level
• They are policies that act as a template for VVs and ultimately determine a VV's characteristics
• CPGs have many functions including:
− They are the policies by which free chunklets are assembled into logical disks (basis of a VV)
− They are a container for existing volumes and used for reporting purposes
− They are the basis for optimization products such as Adaptive Optimization (AO) and Dynamic Optimization (DO)

10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Default CPGs Created on HP 3PAR Arrays

Default CPGs are created depending on the type of disks, number of disks, and number of cages in the array
11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
CPG examples
Assume 48 15K/300 GB FC disks populated across 4 cages

CPG1: all 48 15K FC/RAID 1/Setsize=2/Availability=Cage


CPG2: all 48 15K FC/RAID 5/Setsize=3/Availability=Cage/Growth Limit=4TB
CPG3: all 48 15K FC/RAID 5/Setsize=6/Availability=Mag/Growth Limit=3TB
CPG4: all 48 15K FC/RAID 5/Setsize=9/Availability=Mag/Slower Chunklets
CPG5: all 48 15K FC/RAID 6/Setsize=8/Availability=Cage/Growth Limit=2TB
CPG6: all 48 15K FC/RAID 6/Setsize=16/Availability=Mag
CPG77: 24 of 48 15K FC/RAID 1/Setsize=2/Availability=Cage ***(using subset of PDs)

Assume 32 7.2K /1 TB NL disks populated across 4 cages

CPG123: all 32 7.2K NL/RAID 1/Setsize=2/Availability=Cage/Growth Limit=6TB


CPG562: all 32 7.2K NL/RAID 5/Setsize=3/Availability=Cage
CPG63: all 32 7.2K NL/RAID 5/Setsize=7/Availability=Mag/Slower Chunklets
CPGfd4: all 32 7.2K NL/RAID 5/Setsize=9/Availability=Mag
CPG235: all 32 7.2K NL/RAID 6/Setsize=8/Availability=Cage
CPG6616: all 32 7.2K NL/RAID 6/Setsize=16/Availability=Mag/Growth Limit=4TB
CPGX: 16 of 32 7.2K NL/RAID 5/Setsize=9/Availability=Mag***(using subset of PDs)
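• As a hedged illustration, the first example in each list above could be created with createcpg commands along these lines (the options used here are the same ones shown in the CLI module later in this course):
cli% createcpg -t r1 -ssz 2 -ha cage CPG1
cli% createcpg -p -devtype NL -t r1 -ssz 2 -ha cage -sdgl 6t CPG123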

12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR CPG: Device Type
Three types of disks are supported in the HP 3PAR disk array:
• SSD: Best for performance-critical applications, but most costly media type
• FC (10/15K online disks): Mid-range media (most commonly used), slower than SSDs but cheaper
• NL (7.2K disks): Slowest form of media and least highly available, but cheapest

SSD

10/15K FC

7.2K NL

14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR CPG: RAID Type

• HP 3PAR supports the following RAID levels:


− RAID 0 (disabled by default)
− RAID 1
− RAID 5 (disabled by default on NL disk CPGs)
− RAID 6

• RAID 1 with 2-way mirror (default) or 3-way or 4-way mirror depending on set size

• Data-to-parity ratios based on set size


− RAID 5 from 3(2D+1P) to 9(8D+1P)
− RAID 6 at 6(4D+2P), 8(6D+2P), 10(8D+2P), 12(10D+2P) or 16(14D+2P)

15 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR RAID 1 concepts

• RAID 1 is mirrored data


• Data is written into paired chunklets on different disks
• By default, data and the mirror of the data are written to two chunklets on physical disks in different cages (assuming 2 cages or more)
• Set size can be set to 3 or 4 for a triple mirror or quad mirror (default set size is 2)

10000 and 7000 series: set size = 2 (default), 3, or 4
Usable space per RAID 1 set with the default set size = 1 GB

16 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR RAID 5 concepts

• RAID 5 uses parity to reconstruct data


• RAID 5 uses a set size (SSZ) of 4 (3D+1P) by default

Set size = 4 (3+1)
C C C P
C C P C
C P C C
Usable space per RAID set = 3 GB (3 × 1 GB)

Set size = 6 (5+1)
C C C C C P
Usable space per RAID set = 5 GB (5 × 1 GB)
Set size can range from 3 (2 data + 1 parity) to 9 (8 data + 1 parity)


17 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR RAID 6/multi-parity (MP) concepts

RAID 6 uses double parity


• Can rebuild data into the sparing area (spare chunklets) in the event of a double disk failure; to potentially lose RAID 6 virtual volume data, three or more physical disks must be lost
• RAID 6 supports set sizes as follows:
6 (4D+2P), 8 (6D+2P), 10 (8D+2P), 12 (10D+2P), 16 (14D+2P)

Set size = 6 (4+2)
P C C C C P
Usable space per RAID set = 4 GB (4 × 1 GB)

18 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
CPG and VV RAID level considerations

When choosing CPG/VV RAID level consider:


• Cost
• Availability (including number of disks and drive cages)
• Performance

19 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR CPG: Set size (SSZ)
Set size has performance, cost, and availability implications and should be
considered in conjunction with:
• RAID levels
• Number of disks and disk population
• Number of cages
• Availability options (cage vs. magazine)

Set size options shown for RAID 5

20 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR CPG: High availability options
Policy by which the system lays out RAID sets in order to protect against hardware failures
• Cage (default)
− Protect against a cage failure
− Safest level
• Magazine (lower)
− Protect against a magazine failure
• Port (higher)
− Protect against a back-end port pair failure

Cage availability minimums
No. of cages   RAID 1   RAID 5   RAID 6
2              OK       NA       NA
3              OK       2+1      4+2
4              OK       3+1      6+2
5              OK       4+1      8+2
8              OK       7+1      14+2

21 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
RAID 5 set size and availability example

Assume 48 FC disks across 4 drive cages (the diagram shows data and parity chunklet placement across the cages)
• RAID 5/SSZ=4 (3D+1P) would survive a drive cage failure because data and parity are guaranteed to be on disks from different cages (shown in blue)
• RAID 5/SSZ=6 (5D+1P) would not survive a drive cage failure because data and parity would be put on different disks but possibly on the same cage (shown in pink)


22 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR CPG: Growth Settings
• Growth limit (and optional growth warning) can be set for a CPG limiting the amount of space VVs can use
− Can be set higher than actual physical capacity
− If set incorrectly can impact provisioning, snapshots, and Remote Copy
• Growth increment is the increment used to determine when the system will allocate additional logical disks for the VVs using the CPG
• Growth increment defaults and the minimum increment vary depending on the number of controller nodes
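• A hedged example: create a CPG with a growth warning of 3 TB and a growth limit of 4 TB (the CPG name is a placeholder; the -sdgw/-sdgl options are the same ones shown in the CLI examples later in this course)
cli% createcpg -sdgw 3t -sdgl 4t CPG_FC_R1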

23 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR CPG: Step size

• Step size is the number of contiguous bytes that the system accesses before moving to the next chunklet

• Step size defaults are based on RAID type


− R1 default is 256K
− R5 default is 128K
− R6 default varies by set size

• Adjust step size to avoid hot spots on the back end disks

• Best practice: Accept defaults

25 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR CPG: Subset of Disks/Disk Filter

Use a filter to select a subset of PDs for the CPG

26 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual volumes (VV)

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual volume overview
• A virtual volume (VV) is HP 3PAR terminology for the capacity unit that is presented or exported to a host as a VLUN
• The hosts see the exported VV (VLUN) as a disk
• VVs are created using a CPG, and the VV “inherits” the characteristics (RAID levels, set size,
availability levels, step size, etc.)
• Three types of VVs:
− Fully Provisioned (Fat/Thick/FPVV)
− Thinly Provisioned (TPVV)
− Thin Dedup (Deduplicated/TDVV)

28 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Full Provisioning overview

• Fully provisioned VV space is allocated up front from the CPG before it is actually used, leading
to over-provisioning and resulting in a cost disadvantage
• Example: A newly created, fully provisioned 2 TB VV will immediately allocate 2 TB (plus space
for parity for VRAID5/VRAID6 and mirror for VRAID1) from the CPG allocating chunklets or
logical disks.
• Fully provisioned VVs are supported but are not the default when creating a VV using
Management Console (assuming that the Thin Provisioning license is installed)

29 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Thin Provisioning overview
• HP 3PAR Thin Provisioning provides a host with a virtual amount of space (VV size), allocating actual space for the VV/VLUN using CPG capacity just-enough and just-in-time for a write request from the host

• Benefits:
− No server reconfiguration required: configure future capacity requirements upfront
− Only actually needed and near-time growth capacity must be purchased, resulting in power and cost savings
− Allows you to present hosts with more capacity than is actually available in the array: over-committing/oversubscribing
− Additional license required (7000 Series Thin Provisioning is bundled with purchase)

(Diagram: host writes to a 100 GB TPVV, which consumes PD/CPG capacity only as data is written)
30 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Thin Provisioning advantages

• No upfront allocation of storage for TPVVs


• No performance impact when using TPVVs, unlike competing storage products
• No restrictions on where TPVVs should be used, unlike many other storage arrays
• Allocation size of 16k, which is much smaller than most competitors’ thin implementations
• TPVVs can be created in under 30 seconds without any disk layout or configuration planning
required
• TPVVs are autonomically wide striped over all drives within a certain tier of storage
• Thin provisioned VVs (from SSD capacity) that are deduplicated save space, reduce costs, and
extend the life of SSDs

32 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Thin Provisioning disadvantages

• Risk of over-allocating and running out of physical disk space or reaching the CPG growth
limit—no new writes will be allowed to the TPVV (warning and limits can be set, though)
• Not good for thin-unfriendly applications: systems with a high file system utilization, Oracle
redo log files, small capacity requirements
• A Thin Provisioning license must be purchased (except for 7000 series, which is included in
bundle)
• Additional administration required (versus fully provisioned) to write zeroes/re-thin and
remove deleted space from the array

33 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Thin Provisioning Dedup introduction
• Inline deduplication of SSD tier TPVVs with no performance penalty
• Performed as part of the process of flushing the acknowledged host IO
• Less SSD capacity used as there is no redundant data
• Life of SSDs is extended because there are fewer writes to the SSD disks
• Duplicate data between all TDVVs that use the same usr CPG will be detected and eliminated
35 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
TDVV: Estimating Space Savings (1 of 2)
cli% checkvv -dedup_dryrun vv1 vv2

37 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
TDVV: Estimating Space Savings (2 of 2)

cli% showtask -d 11705


Id Type Name Status Phase Step -------StartTime-------- -------FinishTime------- -Priority- -User—
11705 dedup_dryrun checkvv done --- --- 2014-10-12 22:25:45 CEST 2014-10-12 22:26:15 CEST n/a 3parsvc

----(MB)---- (DedupRatio)
Id Name Usr Estimated Estimated
395 vv1 10240 -- --
431 vv2 10240 -- --
----------------------------------
2 total 20480 10239 1.82
If vv1 and vv2 were converted to TDVVs
(using the same CPG) the dedup ratio
would be 1.82 to 1 for the CPG

38 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual volume for RAID 5 (set size 4): how a 24 GB VV is built
• Each logical disk contains 4 RAID sets; 4 RAID sets × 3 GB usable data = 12 GB usable per LD
• One 12 GB LD per node across two nodes = 24 GB virtual volume
• C = 1 GB chunklet, P = parity chunklet; each RAID set is laid out as rotated rows such as C C C P, C C P C, C P C C, P C C C
39 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
How a volume maps from VV to chunklet
RAID 5/Set Size= 4 (3D+1P)

40 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
The three levels of capacity allocation
• Chunklets (1 GB) are assembled into LDs to form the CPG; multiple VVs will share the CPG
• LDs are divided into regions (128 MB); regions are allocated to only one VV
• Regions are divided into 16 KB pages

41 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Storage Configuration
Module 7
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

Upon completing this module, you should be able to:


• Work with common provisioning groups (CPGs) using the Management Console, SSMC, and the CLI
• Work with fully provisioned, thin provisioned, and thin dedup virtual volumes using Management Console,
SSMC, and the CLI

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Common provisioning group
configuration

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Creating CPGs in Management Console (1 of 3)

4 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Creating CPGs in Management Console (2 of 3)

5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Creating CPGs in Management Console (3 of 3)

Use a filter to select a


subset of PDs for the
CPG

6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
CPG additional functionality in Management Console

7 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Creating CPGs in SSMC (1 of 2)

8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Creating CPGs in SSMC (2 of 2)

9 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Displaying CPGs in SSMC
Allows you to change view including
Overview (default), Settings (shown),
Activity, and Map views

Actions:
• Create
• Edit
• Delete

10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
CPG functionality using the CLI (1 of 2)
• Create a CPG named CPG_DeptA using all defaults including all FC drives, RAID 1, setsize=2, -ha cage
cli% createcpg CPG_DeptA
• Create a CPG named CPG_R6 RAID 6 using all defaults
cli% createcpg -t r6 CPG_R6
• Create a CPG named CPG_R5_Mag that is RAID 5 with high availability magazine
cli% createcpg –t r5 –ha mag CPG_R5_Mag
• Create a CPG named CPG_DeptA that is RAID 5 with high availability magazine with a set size of 8
cli% createcpg –t r5 –ha mag –ssz 8 CPG_DeptA
• Create a CPG named CPG_X that is RAID 1 (default) using all NL disks
cli% createcpg -p -devtype NL CPG_X
• Create a CPG named SSD_CPG1 that is RAID 1 (default) using all SSD disks with a growth limit of 32G and a
growth warning of 24G
cli% createcpg -p -devtype SSD -sdgl 32g -sdgw 24g SSD_CPG1
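• Verify the result by listing all CPGs, or a specific CPG by name (a brief example; output columns vary by HP 3PAR OS version)
cli% showcpg
cli% showcpg CPG_DeptA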

11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
CPG functionality using the CLI (2 of 2)

Set/Change the growth limit and growth warning for CPG named std2NL_r6
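A hedged sketch of the equivalent CLI command, assuming setcpg accepts the same -sdgw/-sdgl growth options as createcpg:
cli% setcpg -sdgw 3t -sdgl 4t std2NL_r6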

12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual volume configuration

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV creation in Management Console (1 of 3)

14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV creation in Management Console (2 of 3)

Only displays/applies to TPVVs and TDVVs


Not Fully Provisioned VVs

Example shows creation of a thin provisioned volume with the Show advanced options option selected

15 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV creation in Management Console (3 of 3)

Only displays/applies to TPVVs,


Not Fully Provisioned or TDVVs

16 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV additional functionality in MC (1 of 2)

17 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV additional functionality in MC (2 of 2)

18 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV creation in SSMC (1 of 3)

19 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV creation in SSMC (2 of 3)

Thinly Provisioned
Thinly Deduped
Fully Provisioned

Only displays/applies to TPVVs and TDVVs


Not Fully Provisioned VVs

20 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV creation in SSMC (3 of 3)

Only displays/applies to TPVVs,


Not Fully Provisioned or TDVVs

A count can be entered to


create up to 999 volumes

21 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV functionality using the CLI (1 of 5)

• Create a 1G fully provisioned volume named hp whose user space is allocated from CPG cpg1:
cli% createvv cpg1 hp 1G
• Create a 10G TPVV named tpvv1 whose user space is allocated from CPG cpg1, with an allocation warning of
80% and an allocation limit of 90%:
cli% createvv –tpvv –usr_aw 80 -usr_al 90 cpg1 tpvv1 10G
• The following example creates three TPVVs: vv1.2, vv1.3, and vv1.4 (zero detect enabled):
cli% createvv –tpvv –pol zero_detect -cnt 3 cpg1 vv1.2 1G
• The following example creates a TDVV with the restrict export to one host attribute set using the CPG SSD_cpg1
with a size of 12TB and the name vv4
cli% createvv –tdvv –pol one_host SSD_cpg1 vv4 12T
• The following example creates a TPVV named tpvv1 with the template temp2:
cli% createvv -tpvv -templ temp2 cpg2 tpvv1

22 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV functionality using the CLI (2 of 5)
Example shows creation of a
thinly provisioned virtual
volume of 10G named
oracle_vv using the CPG
named CPG1_NL, followed by
the showvv command using a
wildcard

23 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV functionality using the CLI (3 of 5)
Compaction and Dedup Ratio
cli% showvv -space vv1 vv2 vv3
---Adm---- --------Snp---------- ----------Usr-----------
---(MB)--- --(MB)-- -(% VSize)-- ---(MB)---- -(% VSize)-- -----(MB)----- -Capacity Efficiency-
Id Name Prov Type Rsvd Used Rsvd Used Used Wrn Lim Rsvd Used Used Wrn Lim Tot_Rsvd VSize Compaction Dedup
483 vv1 tpvv base 256 9 0 0 0.0 -- -- 12800 10240 10.0 0 0 13056 102400 10.0 --
485 vv2 tdvv base 5184 4428 0 0 0.0 -- -- 13056 5129 5.0 0 0 18240 102400 10.7 1.0
486 vv3 tdvv base 5184 4433 0 0 0.0 -- -- 13056 5129 5.0 0 0 18240 102400 10.7 2.0
-----------------------------------------------------------------------------------------------------------------
3 total 10624 8870 0 0 38912 20498 49536 307200 10.5 1.5

24 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV functionality using the CLI (4 of 5)
Example shows all volumes
exported to the host DL36_049

Example shows volumes and


the User CPG for the volumes
25 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV functionality using the CLI (5 of 5)
• Change the name from test to newtest
cli% setvv -name newtest test
• Disable zero detect for a thinly provisioned volume named vv6
cli% setvv -pol no_zero_detect vv6
• Enable zero detect for a thinly provisioned volume named vv5
cli% setvv -pol zero_detect vv5
• Increase the size of virtual volume vv1 by 2 terabytes
cli% growvv vv1 2t
• Remove the virtual volume vv1 using the –f (force) option
cli% removevv –f vv1
• Remove all virtual volumes that start with test using the –f (force) option
cli% removevv –f –pat test*

26 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host Connectivity and
Storage Allocation
Module 8
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

After completing this module, you should be able to:


• Prepare a host to access an HP 3PAR Storage Array
• Create hosts in an HP 3PAR Storage Array
• Explain how to add Fibre Channel (FC) ports to a host
• Export virtual volumes (VV) to a host as VLUNs
• Unexport virtual volumes (VV) from a host
• Use Management Console, SSMC, and CLI to work with hosts and storage
• Use HP3PARInfo to obtain information about visible resources

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host to HP 3PAR zoning configuration: FC example

7x00 with
two controller
nodes

Fabric A Fabric B

Hosts normally have two HBAs connected to two separate switches, with the switches connected to FC host-interfacing ports on two HP 3PAR controller nodes. iSCSI, FCoE, direct connect, and single-HBA connections are also supported.

3 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR WWN format of host ports

Format of HP 3PAR WWN:
2 – Ignore
0 – Node
2 – Slot
3 – Port
0002AC – HP 3PAR standard
000E48 – Last four digits of the array serial number (000E48 hex = 3656 decimal)
5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Configuring HP 3PAR host ports for fabric connection

To configure HP 3PAR ports for a fabric attach


connection:
• The host port must be set to Connection Type “Point” (default host port
setting)
• Take the host port offline
cli% controlport offline 0:0:1
• Set the host port to connection type point
cli% controlport config host -ct point 0:0:1
• Bring the host port online
cli% controlport rst 0:0:1
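• Afterwards, the port state and configuration can be checked (a brief, hedged example; output columns vary by HP 3PAR OS version)
cli% showport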

6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Configuring HP 3PAR host ports for direct connection

To configure HP 3PAR ports for a direct connect host:


• The host port must be set to Connection Type “Loop”
• Take the host port offline
cli% controlport offline 0:0:1
• Set the host port to connection type loop
cli% controlport config host -ct loop 0:0:1
• Bring the host port online
cli% controlport rst 0:0:1

7 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host preparation

Install supported FC adapters (HBAs/FCAs)


• Determine host supported type and driver version
• Install supported FC HBA drivers and tools
• Use HBA tools/OS commands, or log in to the switch, to obtain the port WWNs
• The WWNs are needed to add hosts and export VLUNs to the host
• Every host platform has a different method
Install appropriate version of multipath software if necessary
• Software to manage multiple paths between hosts and storage systems
• Enables high availability through robust path management
• Optimizes performance with I/O load balancing
• Multipath software is optional for single HBA hosts

8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Obtaining host HBA WWNs

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Finding WWN for hosts: Log into switch
Using switch port number, obtain host HBA WWN

10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Finding WWN for VMware: ESXi (1 of 2)
Use the Web Client or vSphere Client to get the HBA WWN

11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Finding WWN for VMware: ESXi (2 of 2)
Use the esxcli command

~ # esxcli storage core adapter list


HBA Name Driver Link State UID Description
-------- ------ ---------- ------------------------------------ --------------------
vmhba0 megaraid_sas link-n/a unknown.vmhba0 (0:1:0.0) LSI / Symb
vmhba1 fnic link-up fc.20000025b5010106:20000025b501ac0c (0:8:0.0) Cisco Syst
vmhba2 fnic link-up fc.20000025b5010106:20000025b501ac0d (0:9:0.0) Cisco Syst

Output truncated to fit on slide

12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Finding WWN for Windows: Emulex LightPulse and
OneCommand Manager
Use LightPulse or OneCommand Manager

13 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Finding WWN for Windows: Qlogic QConverge

14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Finding WWN for Windows: Qlogic SANsurfer

15 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Finding WWN for Windows: Storage Explorer

• Tool included in Windows Server 2008 (not included with


Windows 2012)
• Helps you understand your server SAN Storage configuration
• Server must be a member of a domain and not a workgroup
• Must be able to resolve names of servers
• Uses Windows Management Instrumentation (WMI)
• Scripts can also be used to collect WWN information (see
Student Notes)

16 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Finding WWN for HP-UX: Agilent/fcmsutil
For HP-UX, use the ioscan –fnC fc command to obtain the device
file name of the HBA, and then use the fcmsutil command to
obtain the WWN
# ioscan -fnC fc

Class I H/W Path Driver S/W State H/W Type Description


=================================================================
fc 0 0/1/0 td CLAIMED INTERFACE HP Tachyon TL/TS Fibre Channel Mass Storage Adapter
/dev/td0

# fcmsutil /dev/td0
N_Port Node World Wide Name = 0x50060b000007d199
N_Port Port World Wide Name = 0x50060b000007d198

17 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Finding WWN for Linux

# ls /sys/class/fc_host
host0 host1
# cat /sys/class/fc_host/host0/port_name
0x500143800637bc40
# cat /sys/class/fc_host/host1/port_name
0x500143800637bc42
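A shell glob can collect all FC host port WWNs in one step (a minimal sketch of the same approach shown above):
# cat /sys/class/fc_host/host*/port_name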

18 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts and storage
using Management Console

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Adding hosts in Management Console (1 of 5)

20 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Adding hosts in Management Console (2 of 5)

21 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Adding hosts in Management Console (3 of 5)

22 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Adding hosts in Management Console (4 of 5)

23 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Adding hosts in Management Console (5 of 5)

24 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts in Management Console: Export (1 of 4)

25 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts in Management Console: Export (2 of 4)

26 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts in Management Console: Export (3 of 4)

27 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts in Management Console: Export (4 of 4)

28 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts in MC: Unexport (1 of 2)

29 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts in MC: Unexport (2 of 2)

30 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts and storage
using SSMC

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Adding hosts in SSMC (1 of 3)

32 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Adding hosts in SSMC (2 of 3)

Top of screen
33 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Adding hosts in SSMC (3 of 3)

Bottom of screen
34 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Displaying hosts in SSMC

View can be
changed

35 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts in SSMC: Export (1 of 3)

36 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts in SSMC: Export (2 of 3)

Top of screen

37 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts in SSMC: Export (3 of 3)

Bottom of screen

38 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts in SSMC: Unexport (1 of 2)

39 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts in SSMC: Unexport (2 of 2)

40 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with hosts and storage
using the CLI

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host and storage functionality using the CLI (1 of 3)
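A hedged example of creating a host entry from the CLI; the host name, persona number, and WWNs below are placeholders (personas are listed on the next slide):
cli% createhost -persona 11 ESXhost01 10000000C9123456 10000000C9123457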

42 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host and storage functionality using the CLI (2 of 3)
Persona No. Persona Name Host Operating System Additional capabilities
1 Generic Linux, Windows, and Solaris UARepLun, SESLun
2 Generic-ALUA Linux, Windows, and Solaris UARepLun, ALUA, SESLun
6 Generic-Legacy Linux, Windows, and Solaris None
7 HPUX-Legacy HP-UX VolSetAddr
8 AIX-Legacy AIX NACA
9 Egenera Egenera, NetApp SoftInq
10 NetApp ONTAP Data ONTAP SoftInq
11 VMware Linux and Windows SubLun, ALUA
12 OpenVMS OpenVMS UARepLun, RTPG, SESLun, LunoSCC
13 HPUX HP-UX UARepLun, VolSetAddr, SESLun, ALUA, LunoSCC
15 Windows Server Windows UARepLun, SESLun, ALUA, WSC

43 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host and storage functionality using the CLI (3 of 3)
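Brief examples of reviewing the result from the CLI: list the defined hosts and the VLUNs exported to them (output columns vary by HP 3PAR OS version):
cli% showhost
cli% showvlun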

44 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Making VLUNs visible to hosts

HP-UX Solaris/Linux
• ioscan –fnCdisk • Varies on HBA
• insf –eCdisk (not required for 11iv3) AIX
• No reboot required • cfgmgr or SMIT
Windows • No reboot required
• Scan for new hardware, run Disk Management OpenVMS
(rescan)
• $mcr sysman
• No reboot required
− SYSMAN> io autoconfigure
VMware ESXi
• Rescan devices using the vSphere client or Web client
• No reboot required Check SPOCK for supported
multipathing solutions

45 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host Explorer

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host Explorer Introduction
• Runs as a service on Windows and as a daemon on Linux and Solaris operating systems
• No license is required
• Communicates with the system over an FC or iSCSI connection and enables the host to send
detailed host configuration information to the system
• The information gathered from the Host Explorer agent is visible for uncreated hosts and
assists with host creation and diagnosing host connectivity issues
• When a host is created on the system, unassigned WWNs or iSCSI names are presented to the
system
• Without the Host Explorer agents running on the attached hosts, the system is unable to determine which
host the WWN or iSCSI names belongs to and you must manually assign each WWN or iSCSI name to a
host.
• With Host Explorer agents running, the system automatically groups WWNs or iSCSI names for the host
together, assisting with host creation

47 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host Explorer Example

The Host Explorer collects the following information and


sends it to the storage server:
• Host operating system and version
• Fibre Channel and iSCSI HBA details
• Multipath driver and current multipath configuration
• Cluster configuration information
• Device and path Information

48 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host Explorer Supported Platforms

For the most current Host Explorer support matrix:
• Visit SPOCK (http://www.hp.com/storage/spock)
• Scroll down to Software and select Array SW: 3PAR
• Select HP 3PAR Host Explorer Software

49 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP3PARInfo

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP3PARInfo introduction
• Command line utility that provides useful information on
the volume mapping between the host and the array

• Light-weight application that is installed on the host

• Uses host-specific commands to identify the LUNs

• Provides LUN-to-device file mapping information

• Provides information to easily identify the HP 3PAR LUNs that are exported to the host

• Communication is in-band so there is no login required, unlike CLI or MC

51 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP3PARInfo installation
Supported Operating Systems
• Red Hat Enterprise Linux 5.7, 5.8, 6.1, 6.2, 6.3 (x64 and x86_32)
• SUSE LINUX Enterprise Server 10.0 SP4, 11.0 SP1 and SP2 (x64 and x86_32)
• HP-UX 11i v3, 11i v2 (PA-RISC and IA64)
• Windows 2008, 2012 (x64 and X86_32)
• ESX 4.0 and ESXi 5.1
• AIX 6.1, 7.1

Easy to install and uninstall


• Pick appropriate package
• Follow README file included with ISO or HP3PARInfo User Guide
• For Windows, run setup.exe
• For other platforms, run unix_local_install.sh

52 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP3PARInfo -i option

Use the -i option to view the list of LUNs that are exported to a host
HP3PARInfo -i

53 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP3PARInfo -f option

Use the -f option to view device specific details for a VLUN


HP3PARInfo –f [devicefile]

54 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP3PARInfo -d option

Use the -d option to view detailed information about the HP 3PAR LUNs exported to a host
separated by the user-specified delimiter. Output shows one LUN per line.
HP3PARInfo –d{char}
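For example, to use a comma as the delimiter (a hedged illustration based on the usage line above, which appends the delimiter character directly to -d):
HP3PARInfo -d,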

55 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP3PARInfo usage and options

Command option Option details

-v Version information of HP 3PARInfo

-h Help for HP 3PARInfo

-i List of LUNs that are exported to hosts

-ea List of LUNs that are exported to hosts (same as –i)

-f Detailed information about a specified LUN specifying a device file name

-d Detailed information about all the HP 3PAR LUNs delimited by a specified character

56 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Autonomic Groups and
Virtual Lock
Module 9
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

After completing this module, you should be able to:


• Describe the advantages of host sets and volume sets
• Create and maintain host sets and volume sets
• Use the Management Console, SSMC, and CLI to work with host sets and volume sets
• Discuss the guidelines and rules regarding host sets and volume sets
• Explain the Virtual Lock feature

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Autonomic groups:
Host sets and virtual volume sets

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Autonomic Sets: Host sets and volume sets
Simplify provisioning
Example: a cluster of VMware ESX servers sharing ten volumes (V1 through V10)

Traditional storage (individual volumes):
• Initial provisioning of the cluster
− Requires 50 provisioning actions (1 per host-to-volume relationship)
• Add another host
− Requires 10 provisioning actions (1 per volume)
• Add another volume
− Requires 5 provisioning actions (1 per host)

Autonomic HP 3PAR Storage:
• Initial provisioning of the cluster
− Add hosts to the host set
− Add volumes to the volume set
− Export the volume set to the host set
• Add another host
− Just add the host to the host set
• Add another volume
− Just add the volume to the volume set
• Volumes are exported automatically
• Private storage can be exported individually to hosts in a host set
4 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Autonomic groups use cases

In some instances, a number of objects in the system that often go through the same set of
administrative transitions are logically grouped together
• Virtual volume set: Oracle Database
− Comprises a number of volumes that, together, serve all the data that is needed by that database
• Host set: VMware or Serviceguard cluster
− Nodes in a cluster that share the same storage for application failover can be logically grouped together for
ease of provisioning

5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host set specifics

• When virtual volumes or virtual volume sets are exported to a host set, the volumes are exported to each
member of the host set with the same LUN ID, which is critical for certain clustering failover solutions
• An individual virtual volume can be exported to members of a host set as private storage
• A host set can only be removed if no virtual volumes or volume sets are exported to the host set
• Removing a host set does not remove the individual private storage exports from the hosts
• If a host is removed from a host set, all private storage remains exported to that host
• A host can be a member of more than one host set
• Hosts sets are visible in the Management Console, SSMC, and output from CLI commands such as showvlun

6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual volume set specifics
• To create a new volume set using existing volumes, the volumes can already be exported to hosts or host sets
• When creating virtual volumes in the Management Console, SSMC or with the createvv command using a count, the
volumes are put into a virtual volume set by default
• To remove a virtual volume set, the set must be unexported
• Removing a virtual volume set does not remove the virtual volumes
• To remove a virtual volume, if it is a member of a set, it must be removed from the set first
• A virtual volume can be a member of up to eight virtual volume sets
• Virtual volume sets are used in implementation of the features Priority Optimization/QoS and Adaptive Flash Cache
• When creating a remote copy group for remote replication both primary and target virtual volumes are added to
virtual volume sets at both the local and remote site automatically
• If any virtual volume in a set has the Restrict export to one host attribute set, the volume set cannot be exported to
more than one host or a host set
• Virtual volume sets are visible in the Management Console, SSMC, and output from CLI commands such as showvv

7 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host sets in Management Console (1 of 5)

8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host sets in Management Console (2 of 5)

9 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host sets in Management Console (3 of 5)

10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host sets in Management Console (4 of 5)

11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host sets in Management Console (5 of 5)

12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual volume sets in Management Console (1 of 4)

13 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual volume sets in Management Console (2 of 4)

14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual volume sets in Management Console (3 of 4)

15 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual volume sets in Management Console (4 of 4)

16 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host sets and virtual volume sets in
Management Console: Export screen

On the Export screen you can display virtual volumes or virtual volume sets, and export to a host or host set using the radio buttons

17 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host Sets and Virtual Volume Sets using SSMC

18 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Host Sets (1 of 4)

19 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Host Sets (2 of 4)

After selecting hosts in the Add Host window, the selected hosts will display in the Host Set Members area

20 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Host Sets (3 of 4)
• Display of host sets using Map view
• Actions include Edit a host set or Delete a host set

21 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Host Sets (4 of 4)
From the Virtual Volumes area volumes
can be exported to hosts or host sets
using the Add window

22 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Virtual Volume Sets (1 of 5)

23 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Virtual Volume Sets (2 of 5)
In this example, a search was performed to display just the volumes with the naming convention Raid5

4 VVs selected

24 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Virtual Volume Sets (3 of 5)

25 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Virtual Volume Sets (4 of 5)
Display of host sets
using the Exports view

26 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Virtual Volume Sets (5 of 5)

Choose an existing volume set to add the virtual volume(s) to

Add existing volumes to a VV set from the Virtual Volumes area

27 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host sets and Virtual Volume sets CLI overview
Command Summary
createhostset creates a new set of hosts and provides the option of assigning one or more existing hosts to that set
removehostset removes a host set or removes hosts from an existing set
sethostset sets the parameters and modifies the properties of a host set
showhostset displays the host sets defined on the HP 3PAR array and their members
createvvset creates a new set of virtual volumes and provides the option of assigning one or more existing virtual volumes to that set
removevvset removes a virtual volume set or virtual volumes from an existing set
setvvset sets the parameters and modifies the properties of a virtual volume set
showvvset displays the virtual volume sets defined on the HP 3PAR array and their members

28 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host sets using the CLI (1 of 2)

• Create an empty host set called hostset1


cli% createhostset hostset1

• Add a host called host1 to the host set hostset1


cli% createhostset –add hostset1 host1

• Create a host set called orahostset with a comment and containing one host called ora1
cli% createhostset -comment “My Domain Set” orahostset ora1

• Export the virtual volume vv1 to the members of the host set called orahostset with a LUN number of 2
cli% createvlun vv1 2 set:orahostset

29 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Host sets using the CLI (2 of 2)

Show all host sets Show host sets containing hosts


cli% showhostset matching the pattern ora*
Id Name Members cli% showhostset –host ora*
3 exchhosts exchh.0 Id Name Members
exchh.1 19 orahosts ora-12-h1
19 orahosts ora-12-h1 ora-12-h2
ora-12-h2 ora-12-h3
ora-12-h3

30 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual volume sets using the CLI (1 of 2)
• Create an empty VV set called oradb_1
cli% createvvset oradb_1

• Add a VV called oravol1 to the VV set


cli% createvvset –add oradb_1 oravol1

• Create a VV set called oravvs with a comment, and add 10 sequentially named VVs starting with the VV called
oravl.0 through oravl.9
cli% createvvset -comment “Our Oracle VVs” -cnt 10 oravvs oravl.0

• Export the virtual volume set oravvs to the members of the host set called orahostset with LUN numbers of 1-10
cli% createvlun set:oravvs 1-10 set:orahostset

31 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual volume sets using the CLI (2 of 2)

Show all VV sets Show VV sets containing VVs matching


cli% showvvset the pattern test*
Id Name Members cli% showvvset –vv test*
0 oravv oravv.0
oravv.1 Id Name Members
20 sia-1 test 20 sia-1 test
ttpvv.rw ttpvv.rw
test-sv test-sv

32 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
CLI commands using a set: parameter
Commands with set: support
• Host sets
− createvlun
− removevlun
• VV sets
− createvlun
− removevlun
− createsv
− createvvcopy
− promotesv
− promotevvcopy
− updatevv

Commands that display host set and volume set information:
− showvlun
− showvv

33 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual Lock

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Virtual Lock overview

• HP 3PAR Virtual Lock Software is an optional feature that enforces the retention period of any
volume or copy of a volume
• The Virtual Lock license is required
• Locking a volume prevents the volume data from being deleted, overwritten, or altered
(intentionally or unintentionally) before the retention period lapses
• Virtual Lock can be used on virtual volumes, virtual copies, full copies, and remote copies

35 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Virtual Lock options

• Retention Time: How long to keep the VV or virtual copy


− The -retain option can be used to specify the amount of time, relative to the current time, that the volume will be retained.

• Expiration Time: When the VV or VC will expire


− The -exp option can be used to specify the amount of time, relative to the current time, before the volume will expire. Expired virtual volumes are not automatically removed, so you must use the removevv command, the System Scheduler, or the Management Console to remove the expired volumes.

36 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Virtual Lock guidelines
• The retention time can be set during the volume creation time or applied to an existing volume
• The retention time can be increased, but it cannot be decreased or removed until the end of the retention time
period
• The expiration time can be changed at any time
• If both the retention time and expiration time are specified, the retention time cannot be greater than the
expiration time
• The retention and expiration time can be set in hours or days
− The minimum time is 1 hour, and the maximum time is 43,800 hours (1,825 days or 5 years)
− The default is 336 hours (14 days)
− The vvMaxRetentionTime system parameter determines the maximum retention
• The maximum retention or expiration time for a volume in a domain can be set during the virtual domain
creation time or applied to an existing domain
• If the volume belongs to a virtual domain, the volume’s retention time cannot exceed the domain's maximum
retention time

37 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual Lock example: Create virtual volume

MC

SSMC

38 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual Lock example: Create virtual copy

MC
SSMC
39 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Virtual Lock CLI

You can use the -retain and -exp options to set volume retention times with the following
commands:
• createvv
• setvv
• createsv
• creategroupsv

40 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Virtual Lock CLI examples

• Create a thin provisioned volume of 10G named hp2 using the CPG cpg1 as the user CPG with a retention
time of 24 hours and expiration time of 72 hours:
cli% createvv –tpvv -retain 24h –exp 72h cpg1 hp2 10G

• Set/change the volume hp2 retention time to 36 hours and expiration time to 48 hours:
cli% setvv –retain 36h –exp 48h hp2

• Display all volumes with retention time/expired volumes:


cli% showvv –retained
cli% showvv –expired

• Remove all expired volumes and do not ask for confirmation (force):
cli% removevv -f -expired

41 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dynamic Optimization and
Online Volume Conversion
Module 10
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

After completing this module, you should be able to:


• Explain the benefits of HP 3PAR Dynamic Optimization (DO)
• Change a VV RAID level, set size, availability level, and service level
• Change a VV user data and copy space
• Perform an Online Volume Conversion using Management Console and the CLI

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dynamic Optimization

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dynamic Optimization introduction (1 of 2)

DO is an optional HP 3PAR OS feature that enables the following:


• A simple method for online and nondisruptive service level optimization
• The virtual volumes being tuned can be exported to a host (no downtime) or unexported
• Volumes can be dynamically tuned by changing volume parameters
• A cost-effective way to manage a large, scalable, tiered storage array
• A Dynamic Optimization license is required
• DO administration can be done using Management Console, CLI or SSMC 2.1 or higher
• Multiple tunes of VVs can be done simultaneously
• Intentionally throttled to avoid host I/O impact

4 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dynamic Optimization introduction (2 of 2)
(Diagram: performance versus cost per usable TB for SSD, Fast Class, and Nearline tiers, each available as RAID 1, RAID 5, or RAID 6; set sizes can be changed as well)

In a single command, nondisruptively optimize:
• Cost
• Performance
• Efficiency
• Resiliency
• Availability

5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dynamic Optimization: Data service level control

• RAID type: RAID 0, RAID 1, RAID 5, and RAID 6
• Set size/step size: the data-to-parity ratio, impacting cost, availability, and performance
• Drive type: various sizes and speeds of disks (SSD/FC/NL)
• Availability: cage and magazine availability
6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dynamic Optimization vs. Adaptive Optimization
Manual full versus automatic granular tiering
• Dynamic Optimization: manual, full-volume tiering; a volume is moved from one CPG to another (for example, CPG 1, CPG 2, or CPG 3)
• Adaptive Optimization: automatic, granular tiering; regions are moved between tiers (Tier 0/CPG A, Tier 1/CPG B, Tier 2/CPG C)
7 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Dynamic Optimization use cases (1 of 2)
Deliver the required service levels for the lowest possible cost throughout the data life cycle
• 10 TB net on RAID 1 (900 GB FC drives)
• 10 TB net on RAID 5 (3+1) (900 GB FC drives): ~50% savings
• 10 TB net on RAID 5 (7+1) (2 TB NL disks): ~80% savings

Accommodate rapid or unexpected application growth on demand by freeing raw capacity:
• Converting 20 TB raw from RAID 1 (10 TB net) to RAID 5 keeps the same 10 TB net and frees 7.5 TB of net capacity on demand

8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Dynamic Optimization use cases (2 of 2)

Deliver the required service levels for the highest possible performance throughout the data life cycle
• Normal operation: 10 TB net on RAID 6 (2 TB NL disks)
• During the peak period, tune for performance: 10 TB net on RAID 1 (900 GB FC disks)
• After the peak period, tune back: 10 TB net on RAID 6 (2 TB NL disks)

9 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dynamic Optimization in Management Console (1 of 2)

10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dynamic Optimization in Management Console (2 of 2)

User Space or Copy


Space (space used
for snapshots) can be
changed on the fly

11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dynamic Optimization: Monitoring task progress

12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dynamic Optimization performance example
Volume tune from RAID 5, set size 8 (7+1) NL to RAID 1, set size 2 FC 15K

13 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dynamic Optimization using the CLI

• Tune the user space to CPG cpg_r6_nl for VV hp1


cli% tunevv usr_cpg cpg_r6_nl hp1
Task 998 started

• Tune the user space to CPG cpg_r6_nl for sequentially named VVs hp1.0 through hp1.5 (6 volumes):
cli% tunevv usr_cpg cpg_r6_nl -cnt 6 hp1.0
Task 999 started

NOTE: The createsched command can be used to schedule or automate tasks, including scheduling a Dynamic Optimization tune using the tunevv command.
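
For example, a minimal sketch of scheduling a weekly off-hours tune (the CPG, volume, and schedule name are hypothetical, and the time specification is assumed to follow the crontab-style format accepted by createsched):

cli% createsched "tunevv usr_cpg cpg_r6_nl hp1" "0 2 * * 0" weekly_tune_hp1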

14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Online Virtual Volume
Conversion

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Online Virtual Volume Conversion
• Convert virtual volumes in real time with no downtime:
− Thick to TPVV (thick to thin)
− Thick to TDVV
− TPVV to TDVV (thin to dedupe)
− TPVV to Thick (thin to thick)
− TDVV to Thick
− TDVV to TPVV (rehydration to thin)
• Enables customers to purchase systems with HDDs and move workloads to SSDs when ready
• Allows customers to revert if they want
• A volume can be converted, or converted and the original VV kept
• Requires a Dynamic Optimization license

16 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Conversion considerations and steps

The conversion process involves four steps and considerations:


• Assessment
− Is it worthwhile to convert?
− Consider pros and cons of fully provisioned volumes versus TPVVs/TDVVs
• Data preparation
− Empty trashcans, delete unnecessary files, and so on
• Zero unused space
− Use tools such as UNMAP or dd to write zeros to, or unmap, the unused space (see the example after this list)
− During conversion, zero detect is performed, and zeroed data is not migrated or converted
• Conversion
− Use Management Console, the CLI, or SSMC 2.1 or higher to convert the virtual volume
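
As an illustration of the zeroing step on a Linux host before a conversion (the mount point and file name are hypothetical; dd stops when the file system fills, and the fill file is deleted afterward):

# dd if=/dev/zero of=/mnt/data/zerofill bs=1M
# rm /mnt/data/zerofill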

17 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV conversion in Management Console (1 of 3)

18 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV conversion in Management Console (2 of 3)

Discard or keep original VV

Use original CPG


or new CPG

Note: screen truncated to fit on slide
19 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV conversion in Management Console (3 of 3)

20 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV online conversion using the CLI
• Tune the VV and convert the fully provisioned or thin deduped volume hp3 to a thinly provisioned VV using
the same user space CPG (or a different one)
cli% tunevv usr_cpg cpg_r6_nl -tpvv hp3
• Tune the VV and convert the thin or fully provisioned volume hp4 to a thinly deduped VV using an SSD CPG
cli% tunevv usr_cpg SSD_r5de -tdvv hp4
• Tune the VV and convert the fully provisioned or thinly deduped volume hp5 to a thinly provisioned VV using
the same user space CPG (or a different one). Keep the original fully provisioned VV with the name hp5.orig.
cli% tunevv usr_cpg cpg_r6_nl -keepvv hp5.orig -tpvv hp5
• Tune the VV and convert the thinly provisioned or thin deduped volume hp6 to a fully provisioned VV using
the same user space CPG (or a different one)
cli% tunevv usr_cpg cpg_r6_nl -full hp6

21 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Thin Features
Module 11
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

After completing this module, you should be able to:


• Describe the benefits of the zero detect/thin persistence feature when working with thinly provisioned virtual
volumes (TPVVs )
• Work with TPVVs and zero detect using the Management Console, SSMC and CLI

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Zero detect and thin persistence

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Zero detect for TPVV introduction

• A disadvantage of TPVVs is that they can become un-thin over time; deleting files at the OS level
does not free up the deleted space on the array
• Writing a string of zeros to a TPVV wastes space on the array; with zero detect, the zeros can be
discarded (not written) to the VV, saving space
• During data transfers (creation of a physical copy, remote copy, or a conversion), zeros are
detected and not written to the destination
• Zero detect is not applicable to TDVVs or fully provisioned volumes
• Zero detect is a function of the HP 3PAR ASIC

4 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Zero detect enabled on TPVV

Zero detect enabled by default


for Thinly Provisioned volumes

cli% createvv -tpvv -pol zero_detect cpg1 vv1 10G


cli% setvv -pol zero_detect vv1

5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Zero detect use case 1
Data written is checked for zeros and discarded/not written

(Incoming 16K pages containing only zeros are detected and discarded rather than written)

Zero detect is performed during data transfers such as:
• Creation of a physical copy
• Creation of a Remote Copy group and Remote Copy replication
• VV conversion
6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Zero detect/thin persistence use case 2 (1 of 4)
Host view and array view match
• A 4G TPVV is exported to the host
• Host view: 2G data written, 2G free
• Array view: 2G data written, 2G available


7 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Zero detect/thin persistence use case 2 (2 of 4)
Deleting data from TPVV does not free up space on array
• The host deletes 1G of data from the 4G TPVV
• Host view: 1G data written, 3G free
• Array view: still 2G data written, 2G available; the deleted space is not removed from the array allocation


8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Zero detect/thin persistence use case 2 (3 of 4)
Solution: Write zeros to deleted space; zeros will be discarded and space reclaimed
• The host writes zeros over the deleted space
• The zeros written to the deleted space tell the array which pages can be reclaimed
• The reclaimed space is returned to the CPG


9 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Zero detect/thin persistence use case 2 (4 of 4)
Post re-thinning
• Host view: 1G data written, 3G free on the 4G TPVV
• Array view: 1G data written, 3G available


10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Thin Persistence Methods: Windows Server 2012
• Use UNMAP
− Can be scheduled to run
automatically (Weekly frequency
recommended)
− Can also run at time of permanent
file deletion (when not put into the
recycle bin: Shift-Del)
− Can be run manually using
Defragment and Optimize Drives or
optimize-volume command
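
As a hedged illustration (the drive letter is hypothetical), a manual retrim and a check that delete notification (UNMAP/TRIM) is enabled might look like this:

PS C:\> Optimize-Volume -DriveLetter E -ReTrim -Verbose
C:\> fsutil behavior query DisableDeleteNotify

A DisableDeleteNotify value of 0 indicates that delete notifications are enabled.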

11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Thin Persistence Methods: Windows Server 2003 and 2008

• Use sdelete to securely overwrite the deleted data space with 000’s and reclaim the space

• Use fsutil to create a balloon file to reclaim deleted space
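
For instance, a hedged sketch of the sdelete approach (drive letter hypothetical; assumes sdelete v1.6 or later, which provides the -z option to zero free space):

C:\> sdelete.exe -z E: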

12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Thin Persistence Methods: Linux

• Real-time reclaim by an ext4 or an XFS file system when mounted with the -o discard option
− UNMAPs are generated to free up the deleted storage on the TPVV

# mount -t ext4 -o discard /dev/mapper/tpvv_lun /mnt

• The mke2fs, e2fsck, and resize2fs utilities can also be used to discard unused blocks

• Use the fstrim command for ext3 and ext4 (when not mounted with -o discard) file systems
# fstrim -v /mnt
/mnt: 21567070208 bytes were trimmed
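
To run the reclaim on a schedule rather than manually, a crontab entry along these lines could be used (the fstrim path and mount point are assumptions; adjust for your distribution):

# run fstrim on /mnt every Sunday at 02:00
0 2 * * 0 /usr/sbin/fstrim /mnt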

13 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Thin Persistence Methods: HP-UX and UNIX
HP-UX
• If using Veritas Storage Foundations 5.1 or higher, space can be reclaimed with file deletion or
shrinking
• For non-VXFS file systems or VXFS file systems on LVM volumes write explicit 000’s using dd

UNIX
• On UNIX systems or Linux systems that don’t support discard, write explicit 000’s using dd or
shred
# dd if=/dev/zero of=/path/10GB_zerofile bs=128K count=81920
# shred -n 0 -z -u /path/file
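
After the dd fill completes, the zero file is typically deleted so the file system space is usable again; the array has already discarded the zeroed pages (path matches the hypothetical dd example above):

# rm /path/10GB_zerofile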

14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VMware Thin Persistence
HP 3PAR Thin Persistence in VMware Environments
Introduced with vSphere 4.1
• vStorage API for Array Integration (VAAI)
• Features enabled by the 3PAR Plug-in for VAAI or natively:
− Thin VMotion—XCOPY
− Active Thin Reclamation—WRITE_SAME
− Atomic Test and Set—ATS

Introduced with vSphere 5.0


• Automated VM Space Reclamation
• Leverages industry-standard T10 UNMAP
• Supported with VMware vSphere 5.x and
HP 3PAR OS 3.1.1 or higher

16 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VMware VAAI—Full copy offload: XCOPY
Enables the storage array to make full copies of data within the array without the ESX host having to
read and write the data
Without VAAI: ESX host based data movement between the array LUNs through the ESX host
With VAAI + 3PAR (3PAR Plug-In for VAAI or native): array based data movement

Without VAAI:
• ESX host involved in data movement when cloning VMs or using Storage vMotion
• ESX host consumes precious memory and CPU cycles performing “non-ESX” related tasks
• Significant network overhead incurred during data movement through the ESX host

With VAAI + 3PAR:
• Offloads data movement from the ESX host to the array
• Improved performance when cloning VMs or using Storage vMotion
• Further increases VM density as ESX memory and CPU are not taxed with data movement
18 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VMware VAAI—Block zero offload: WRITE_SAME
Enables the storage array to zero out a large number of blocks within the array
without having to have the ESX host write the zeros as data
Without VAAI: the ESX host writes large, expensive blocks of zeros through VMFS to the LUN
With VAAI + 3PAR (3PAR Plug-In for VAAI or native): zeroing is offloaded to the array with 3PAR zero detect

Without VAAI:
• ESX host sends large blocks of zeros to disk when initializing VMs
• Thin and thick VMDKs: blocks initialized at run time
• EZT VMDKs: entire VMDK initialized at creation time

With VAAI + 3PAR:
• ESX host offloads writing large blocks of zeros to the array when initializing VMs
• Faster provisioning of VMs with no ESX CPU cycles consumed writing zeros
• On HP 3PAR, zero detect in the ASIC means no zeros written to disk and no space consumed
19 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VMware VAAI—Hardware Assisted Locking
Provides an alternative to SCSI reservations as a means to protect the metadata for
VMFS cluster file systems and helps improve the scalability of large ESX host farms
sharing a datastore
Without VAAI: a SCSI reservation locks the entire LUN during VMFS metadata updates
With VAAI + 3PAR (3PAR Plug-In for VAAI or native): ATS locks at the block level

Without VAAI:
• SCSI reservations used for VMFS metadata updates
• Entire LUN locked during a VMFS metadata update, impacting all I/O for every VM on that LUN
• Lock/release mechanism delay time (and bugs) can leave ESX in a hung state

With VAAI + 3PAR:
• Locking occurs at a block level, allowing concurrent LUN access from other VMs
• Enables bigger LUNs and more VMs per LUN (higher VM density)
• 3PAR enables 10x faster data comparisons for ATS using the HP 3PAR ASIC versus array CPU
20 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VMware: esxcli storage vmfs unmap

Beginning with ESXi 5.5


• Use esxcli storage vmfs unmap to reclaim space manually
• UNMAP is disabled by default
• Open an SSH session to the host and run
# esxcli storage vmfs unmap -l <volume_label> | -u <volume_uuid> [-n <number>]
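
For example (the datastore label is hypothetical; -n specifies how many VMFS blocks are unmapped per iteration):

# esxcli storage vmfs unmap -l Datastore01 -n 200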

21 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Thin Persistence: Other apps and operating systems

Integrated/Automated Space Reclamation with:


• Symantec Storage Foundation
• Oracle ASM Storage Reclamation Utility
• Others that use WRITE_SAME or UNMAP

22 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Local Replication:
Virtual Copy and Physical Copy
Module 12
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

After completing this module, you should be able to:


• Explain the benefits of Virtual Copy (VC) and Physical Copy (PC)
• Create, export, unexport, and delete a VC
• Describe the rules of VC relationships
• Promote (restore) from a VC
• Resynchronize a PC to a base volume
• Promote a PC to become a base volume
• Use the Management Console, SSMC and CLI to manage virtual copies
• Use the Management Console and CLI to manage physical copies

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR local replication types
The HP 3PAR disk array supports two types of local replicas

Virtual copy (VC) Physical copy (PC)

• Snapshot
pointer-based replica
using a copy-on-write • Point-in-time full copy
technique replica of a virtual
volume
• Not a full copy replica

3 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual Copy introduction

A snapshot of another virtual volume (referred to as a base volume) or another


snapshot (snap-of-a-snap) or a physical copy
• Created using copy-on-write (COW) techniques
• VC license required

5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Virtual Copy: Snapshot at its best

Features:
• Individually removed, exported, and promoted
• Scheduled creation and deletion
• Consistency groups
• Thin aware
• Instantaneous read-write (rw) and read-only (ro) snapshots
• Snapshots of snapshots
• Virtual Lock for retention of snaps
• Many snapshots of a base volume, but only one CoW required

Integration with:
• MS SQL
• MS Exchange
• Oracle
• vSphere
• HP Data Protector

6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy characteristics

• Records only the changes to the original volume (copy-on-write)


• Administrator can make hundreds of virtual copies of a virtual volume, assuming there is enough
storage space
• Virtual copies use CPG space referred to as copy space
• Snapshots can be promoted (used to roll back data to a point-in-time)
• Read/write (RW) snapshots can be created if source (base/snapshot volume) is read-only (RO) or RW
• RO snapshots can only be taken if the source is RW, so an RO snapshot of an RO snapshot is not possible
• The base volume must be RW

7 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy limits

Specifications (by model group: 7200, 7200c | 7400, 7400c, 7440c, 7450, 7450c | 10000):
• Maximum number of virtual volumes (bases and snapshots): 8192 | 16384 | 65536
• Maximum number of base volumes: 4096 | 8192 | 32768
• Maximum number of rw snapshots per base volume: 256 (all models)
• Maximum number of ro snapshots per base volume: 2048 (all models)

8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy relationships
A complex VC scenario

• A snapshot (VC) can be read-only (RO) or read/write (RW)


• Base volumes are always RW
9 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy: Copy space/copy space CPG

• To create a VC of a VV, copy space


must be defined
• This is where the copy-on-writes will
be stored
• Can be a different CPG than the User
CPG

Using the CLI:


cli% createvv -tpvv -snp_cpg NL_r6 FC_r1 oracle_44 8G

10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy snapshot: How it works (1 of 6)
Create a read-only (RO) snapshot S0 of the read/write (RW) base virtual volume:
• A Snapshot Admin Space (SA) is created to hold the logical block pointers
• All pointers initially point to the base virtual volume chunklets; no data is copied to the snapshot CPG

11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy snapshot: How it works (2 of 6)
Update the base VV after creating the read-only snapshot (new data B is written to the base VV):
• The old data is copied to Snapshot Data Space (SD) in the snapshot CPG (copy-on-write)
• The SA pointer is changed from the base VV to the SD
• The base VV is then updated with the new data

12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy snapshot: How it works (3 of 6)
Create a read/write (RW) snapshot S0_0 of the read-only snapshot S0:
• A Snapshot Admin Space is created for the RW snapshot
• No data is copied at creation time
13 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy snapshot: How it works (4 of 6)
Update the base VV after creating the snapshots (new data B is written to the base VV):
• The old data is copied to Snapshot Data Space (SD) (copy-on-write)
• The SA pointer is changed from the base VV to the SD
• The base VV is then updated with the new data


14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy snapshot: How it works (5 of 6)
Update the RW snapshot S0_0 after creating the snapshots (first write of new data C to the RW snapshot):
• The new data is written to Snapshot Data Space (SD)
• The RW snapshot SA pointer is changed to the SD
15 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy snapshot: How it works (6 of 6)
Update the RW snapshot S0_0 after creating the snapshots (second write of new data D to the RW snapshot):
• The new data is written to Snapshot Data Space (SD)
• The RW snapshot SA pointer is changed to the SD


16 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Creating a VC using Management Console (1 of 3)

17 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Creating a VC using Management Console (2 of 3)

VC name: Default uses a


Virtual Lock options date/time format

18 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Creating a VC using Management Console (3 of 3)

19 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Displaying VCs in Management Console

Base

VC

20 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Creating a VC using SSMC (1 of 2)

21 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Creating a VC using SSMC (2 of 2)

22 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Displaying VCs in SSMC

Base

VC

23 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy promotion

• A virtual copy (snapshot) can be promoted to any read/write parent within the same virtual volume family tree
• The promotion detects the differences between the snapshot and the RW parent and then copies these
differences back to the RW parent
• By default, a promotion will promote back to the base volume
• The -target option can be used with the promotesv command (or within MC or SSMC) to specify any RW parent
(snapshot) within the same virtual volume tree to promote to
• A promotion is essentially a way to restore data back to a particular point-in-time (PIT)
• The volume (either the base or parent VC) being promoted to must be RW
• To promote, both the parent being promoted to (the base or RW VC) and VC being used to promote from
(RO or RW VC) must be unexported

24 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual Copy Promotion: Example
showvv
29 myvv cpvv base --- 29 RW normal 512 18432 10240 10240
30 ro1 snp vcopy myvv 29 RO normal -- -- -- 10240
31 rw1 snp vcopy ro1 29 RW normal -- -- -- 10240
32 ro2 snp vcopy rw1 29 RO normal -- -- -- 10240
33 rw2 snp vcopy ro2 29 RW normal -- -- -- 10240

Promote to the Base


promotesv ro2
Task 6474 has been started to promote virtual copy ro2

showtask -d 6474
Id Type Name Status Phase Step -------StartTime------- ------FinishTime-------
6474 promote_sv ro2->myvv done --- --- 2015-02-28 06:08:20 PDT 2015-02-28 06:09:03 PDT

Promote to a specified read-write parent


promotesv -target rw1 ro2
Task 6475 has been started to promote virtual copy ro2 to parent rw1

showtask -d 6475
Id Type Name Status Phase Step -------StartTime------- ------FinishTime-------
6475 promote_sv ro2->rw1 done --- --- 2015-02-28 06:09:32 PDT 2015-02-28 06:09:34 PDT

25 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Promotion of a VC in Management Console (1 of 2)

Select the VC to
promote from

NOTE: The VC you are promoting from AND the


volume being promoted to MUST be unexported

26 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Promotion of a VC in Management Console (2 of 2)

Confirmation screens during promotion

27 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Promotion of a VC in SSMC (1 of 2)

Select VC to promote
from

Snapshot selected to use for promotion must be unexported


28 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Promotion of a VC in SSMC (2 of 2)

Select base volume or rw snapshot to


promote to (must be unexported)

29 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy Stale snapshots policy
SSMC example
Possible scenario: No space
remains to record copy-on-
write changes, which could
make the VC invalid or stale
cli% createvv -pol no_stale_ss

Stale snapshots policy and summary of behavior:
• Allowed: Specifies that invalid or stale snapshot volumes are permitted. Failure to update snapshot data does not affect the write to the base volume, but the snapshot is considered invalid or stale. This is the default policy setting.
• Not Allowed: Specifies that invalid snapshot volumes are not permitted. Failure to update a snapshot is considered a failure, and writes to the base volume are not permitted.
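
To change the policy on an existing volume, the setvv -pol option can be used; a minimal sketch (the volume name is hypothetical, and stale_ss is assumed to be the keyword that restores the default behavior):

cli% setvv -pol no_stale_ss oracle_1
cli% setvv -pol stale_ss oracle_1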

30 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VV copy space: Dynamic Optimization tune

A VV's copy space CPG can be tuned using Dynamic Optimization

Tune the copy space for the VV oracle_1 to the CPG called NL_r6
cli% tunevv snp_cpg NL_r6 oracle_1
31 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual Copy CLI overview

Command Summary
createsv create a virtual copy
promotesv promote from/restore from a virtual copy
creategroupsv create consistency group virtual copies
promotegroupsv copy the differences of virtual copies back to their base volumes, allowing the base
volumes to be reverted to an earlier point in time
updatevv update a virtual copy with a new virtual copy
createsched schedule the creation of a virtual copy; used in conjunction with createsv

32 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy functionality using the CLI (1 of 2)
• Create a read/write virtual copy named hp1_snap.wed of the base virtual volume hp1:
cli% createsv -rw hp1_snap.wed hp1
• Create a read-only virtual copy named test_snap of the virtual copy named hp1_snap.wed:
cli% createsv -ro test_snap hp1_snap.wed
• Create a read-only virtual copy named snap_backup that has a retention time of 48 hours and an expiration time
of 72 hours of the base virtual volume hp2:
cli% createsv -ro snap_backup -retain 48 -exp 72 hp2
• Create a read-only virtual copy of each member of the virtual volume set vvcopies and each virtual copy will be
named svro-<name of parent volume>:
cli% createsv -ro svro-@vvname@ set:vvcopies
• Promote the virtual copy named test_snap back to the base volume hp1:
cli% promotesv test_snap
• Promote the virtual copy named test_snap back to the rw virtual copy named hp1_snap.wed
cli% promotesv -target hp1_snap.wed test_snap

33 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual copy functionality using the CLI (2 of 2)
• Create read-only consistency group virtual copies of the virtual volumes vv1, vv2, vv3, vv4:
cli% creategroupsv -ro vv1 vv2 vv3 vv4
• Promote the virtual volumes vv1, vv2, vv3, vv4 with the differences from their base volume
cli% promotegroupsv vv1 vv2 vv3 vv4
• Use the createsched command to schedule creation of a virtual copy every hour that expires in 2 hours for
the volume vvname
cli% createsched "createsv -ro -exp 2h @vvname@.@s@ vvname" @hourly snp_vv
• Update the existing virtual copy named test_snap with a new snapshot using the -f (force) option
cli% updatevv -f test_snap
Updating VV test_snap
• Remove the volumes that start with test and are snapshots, using the -f (force) option
cli% removevv -f -snaponly -pat test*
• Remove the virtual copy vv1_snap if it is a snapshot, and all its descendants, using the -f (force) option
cli% removevv -f -snaponly -cascade vv1_snap

34 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Physical copy

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Physical Copy introduction

• A physical copy is a full copy (point-in-time) replica of the base data


• A physical copy duplicates all the data from one original base volume to another volume called the destination
volume
• Any changes to either volume causes them to lose synchronization
• After a relationship between the base and the destination/physical copy is created, the data in the physical copy is
point-in-time (PIT)—Not a “real-time” mirror
• No additional license is required to create a physical copy
• Physical copy administration can be done using Management Console, CLI, and SSMC 2.1 or later

36 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Physical copy terms and specifics

Base volume:
• Can be a Thick/TPVV/TDVV
• Can be exported or unexported
• Must have a copy space CPG defined (for a saved VC)
• Can itself be a physical copy or a VC (snapshot)

Destination volume:
• Can be a Thick/TPVV/TDVV
• Can be the same size or larger than the base
• Can use a different CPG than the base (different RAID level, type of media, and so on)
• Cannot be exported
• When created and the data is copied, the destination volume type transitions to Physical Copy
37 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with physical copies in MC (1 of 3)

Destination volumes must be created first


and must be the same size or bigger than
the base. Destination cannot be exported
during initial creation.

38 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with physical copies in MC (2 of 3)

39 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with physical copies in MC (3 of 3)

40 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with physical copies in MC: Resync (1 of 2)

41 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with physical copies in MC: Resync (2 of 2)

42 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with physical copies in MC: Promote (1 of 2)

CANNOT export physical copy to host:


you must first promote

43 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Working with physical copies in MC: Promote (2 of 2)
• After promotion a physical copy becomes a base
volume and can be exported

• The relationship between the original base and the volume is severed (i.e., it can no longer be resynced)

44 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Physical Copy CLI overview

Command Summary
createvvcopy copies a virtual volume
promotevvcopy promotes a physical copy to be a base volume
creategroupvvcopy creates consistent group physical copies of a list of virtual volumes

45 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Physical copy functionality using the CLI
• Create a physical copy of the VV called vv1 using the destination VV called vv2, and save a snapshot for later resync:
cli% createvvcopy -p vv1 -s vv2
• Create a physical copy of vv1 that is named vv2, which is thin provisioned, using cpg1 as its user space and cpg2 as its
snapshot space:
cli% createvvcopy -p vv1 -online -tpvv -snp_cpg cpg2 cpg1 vv2
• Create a set of copies for the volumes in virtual volume set vvcopyset, keeping snapshots for resynchronization:
cli% createvvcopy -s -p set:vvcopyset set:copys
• Resynchronize a physical copy named vv2 to the base volume vv1:
cli% createvvcopy -r vv2
• Promote the physical copy named vv2 to become a base volume:
cli% promotevvcopy vv2

46 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Take control of your future with training from
HP Education Services
Visit: www.hp.com/learn
You can also follow us on Facebook and Twitter –
use the links below and join the
HP Education Services Virtual Community

www.facebook.com/HPEnterpriseBusiness/app_11007063052
www.facebook.com/HPTechnologyServices
www.twitter.com/HPEducation

47 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dedup
Appendix A
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

After completing this module, you should be able to:


• Explain the benefits of Deduplication
• Understand the deduplication process
• Interpret the dedup output from CLI commands
• Explain the purpose of the Garbage Collector
• Understand how dedup works with Virtual Copies, Physical Copies, and Remote Copies

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Deduplication introduction
• Deduplication has become increasingly important with the acceleration in the use of SSDs in
disk arrays
• To reduce the cost implications associated with SSDs vs. spinning media (FC and NL disks)
solutions like deduplication and data compression can reduce the cost per byte of SSD
capacity
• By eliminating duplicate writes, dedup helps save space and extend the life of SSDs
• Dedup is supported on the 7000 series and V400/V800 platforms; the array must have SSDs, as
deduplication can currently only be performed on thin volumes created from SSD capacity
• Array must be running HP 3PAR OS 3.2.1 and Management Console 4.6 or SSMC 2.0
• Thin provisioning license is required but no additional licensing required
• The impact of deduplication on IO performance is determined by various parameters, such as whether
deduplication is performed inline or in the background, and the granularity of deduplication

3 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Deduplication definition
Deduplication can be explained as:
A space reduction technique that identifies redundant/duplicate data in physical
storage and maintains only a single copy of data for all the redundant copies

4 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dedup terminology
• TDVV (Thin Deduped Virtual Volume): a thin provisioned virtual volume allocated from SSD capacity which is deduped; also referred to as a DDCV (Dedup Client Volume)
• TV (Thin Volume): a generic reference to either a TPVV or a TDVV
• DDS (Dedup Data Store): the system automatically creates this when the first TDVV is created in a CPG; subsequent TDVVs created in the same CPG use this DDS
• GC (Garbage Collector): the reclaim process that frees up unreferenced pages in the DDS
• CRC (Cyclic Redundancy Check): the CRC32 dedup process implemented by the ASIC

5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dedup details

• A thin provisioned virtual volume called the dedup data store (DDS) will be created
when the first TDVV is created using an SSD CPG
• The DDS is used to hold the data (both unique and duplicate) for all TDVVs using
the CPG
• Only TDVVs using the same CPG are compared for deduplication
• The DDS volume will have the naming convention .sysvv_0<CPG name> but is
not visible in standard output (such as showvv)

6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR Thin Deduplication with Express Indexing

Inline deduplication with instant metadata lookups:
1. Hash signatures are generated inline by the ASIC
2. On a signature match, a bit-by-bit compare is offloaded to the ASIC
3. Only unique data is written to SSD

• HP 3PAR is the only solution in the industry to use silicon-based hash key generation
• Express Indexing provides instant metadata lookups
• 16 KB granularity matched to host I/O size
• Scalable: not limited to a maximum amount of raw capacity on a system

7 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. HP Confidential – For training purposes only.
HP 3PAR Express Indexing
Fast metadata lookups
• Uses metadata tables
• Compares signatures of incoming request to stored signatures
• If signatures match, duplicate request is flagged
− Pointer is added to metadata table to existing blocks

• ASIC performs bit-to-bit comparison of data


− Before any new write is marked as duplicate
− Prevents hash collisions

HP 3PAR is the most scalable all-flash array available


• Up to 460 TB of raw capacity
• Up to 1.3 PB of usable all-flash capacity
8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. HP Confidential – For training purposes only.
HP 3PAR Thin Deduplication: How it works
Accelerated by the ASIC and Express Indexing
1. A host write arrives
2. The ASIC computes the hash signature
3. A fast metadata lookup is performed with Express Indexing (L1/L2/L3 hash tables)
4. On a match, the data is compared against the existing, potentially duplicate page; the ASIC performs a bit-by-bit compare using an inline XOR operation
5. XOR result = 0000000
6. A dedupe match results in the XOR outcome being a page of zeros, which is detected inline by the ASIC

9 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. HP Confidential – For training purposes only.
Garbage Collector (GC)
• For TDVVs, zero detection is not performed and is not applicable as it is for a TPVV

• However, the DDS for a CPG containing TDVVs can accumulate pages that have no references in any TDVV

• Thin deduplication uses a garbage collection process that identifies data in the DDS that is not used by
any TDVV in the CPG and releases/reclaims the unreferenced pages

• The garbage collection process is an automated system task and will not impact performance while
volumes are online

10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Dedupe considerations
• Only thin provisioned virtual volumes allocated from SSD CPGs can be deduped

• Any SSD CPG containing even one TDVV cannot be used as part of an Adaptive Optimization
configuration; likewise, a TDVV cannot be created using any SSD CPG that is part of an AO configuration

• Deduplication is done at the 16K page level and is best utilized where the I/Os are aligned to this
16K granularity (or a multiple of 16K)

• Deduplication is performed on TDVVs in the same CPG only: for maximum deduplication
efficiency, store data with duplicate affinity on TDVVs in the same CPG

• Dedup is ideal for data with a high level of redundancy; data that has been previously deduped or
encrypted is not a good candidate for deduplication

• There is a 256-TDVV limit per CPG, but an SSD CPG can have a mix of TPVVs, TDVVs, and
fully provisioned volumes
11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Creating a TDVV (1 of 2)

Create a thin deduplicated virtual volume (TDVV)

cli% createvv -tdvv <cpg_name> <tdvv_name> <size>

cli% createvv -tdvv SSD_CPG1 tdvv0 10G

12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Creating a TDVV (2 of 2)

Example using SSMC


to create a TDVV

13 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
showvv

SSMC

14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
showvv -space
Showing Compaction and Dedup Ratio
cli% showvv -space vv1 vv2 vv3
---Adm---- --------Snp---------- ----------Usr-----------
---(MB)--- --(MB)-- -(% VSize)-- ---(MB)---- -(% VSize)-- -----(MB)----- -Capacity Efficiency-
Id Name Prov Type Rsvd Used Rsvd Used Used Wrn Lim Rsvd Used Used Wrn Lim Tot_Rsvd VSize Compaction Dedup
483 vv1 tpvv base 256 9 0 0 0.0 -- -- 12800 10240 10.0 0 0 13056 102400 10.0 --
485 vv2 tdvv base 5184 4428 0 0 0.0 -- -- 13056 5129 5.0 0 0 18240 102400 10.7 1.0
486 vv3 tdvv base 5184 4433 0 0 0.0 -- -- 13056 5129 5.0 0 0 18240 102400 10.7 2.0
-----------------------------------------------------------------------------------------------------------------
3 total 10624 8870 0 0 38912 20498 49536 307200 10.5 1.5

15 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
showcpg –d

• The showcpg -d command displays detailed CPG information

A CPG with TDVVs can be shared with Fully Provisioned volumes


and/or TPVVs but only TDVVs will be deduped

16 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
showspace

showspace -cpg displays CPGs and the capacity efficiency


cli% showspace -cpg *

------------------------------(MB)----------------------------

CPG -----EstFree------- --------Usr------ -----Snp---- ----Adm---- -Capacity Efficiency-


Name RawFree LDFree Total Used Total Used Total Used Compaction Dedup
TPVV_CPG 18499584 9249792 16896 16896 15872 512 8192 256 10.0 -
TDVV_CPG 18499584 9249792 34304 34304 31232 0 24576 10496 10.7 1.6
FC_r5 176078848 154068992 21951616 21951616 104320 3072 32768 8480 13.1 -

17 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
showsys
System efficiencies: showsys -space displays DDS usage in all CPGs (output truncated)

cli% showsys -space
------------- System Capacity (MB) -------------
Total Capacity                  : 20054016
  Allocated                     : 1542144
    Volumes                     : 385024
      Non-CPGs                  : 0
        User                    : 0
        Snapshot                : 0
        Admin                   : 0
      CPGs (TPVVs & TDVVs & CPVVs): 385024
        User                    : 136192
          Used                  : 136192
          Unused                : 0
        Snapshot                : 125952
          Used                  : 1024
          Unused                : 124928
        Admin                   : 122880
          Used                  : 33024
          Unused                : 89856
………………

------------- Capacity Efficiency --------------
Compaction : 10.3
Dedup      : 3.0

18 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
srvvspace

cli% srvvspace -hourly -btsecs -1h -usr_cpg TDVV_CPG

------RawRsvd(MB)------ ----User(MB)----- ------Snap(MB)------


Time Secs User Snap Admin Total Used Free Rsvd Used Free Rsvd Vcopy
2015-03-03 23:00:00 CEST 1413493200 68608 0 31488 100096 10244 24060 34304 0 0 0 0

------Admin(MB)------- ----------Total(MB)---------- -Capacity Efficiency-


Used Free Rsvd Vcopy Vcopy Used Rsvd VirtualSize Compaction Dedup
8896 1600 10496 0 0 19140 44800 67313664 10.0 1.5

19 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Viewing Compaction Saving and Dedup Savings

MC

SSMC

20 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
TDVV: Estimating Space Savings (1 of 3)

21 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
TDVV: Estimating Space Savings (2 of 3)

22 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
TDVV: Estimating Space Savings (3 of 3)

In Tasks & Schedules area

23 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
TDVV: Estimating Space Savings
checkvv -dedup_dryrun (1 of 2)

24 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
TDVV: Estimating Space Savings
checkvv -dedup_dryrun (2 of 2)
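
The same estimate can also be started from the CLI; a minimal sketch (volume names are hypothetical), with progress monitored through the task system:

cli% checkvv -dedup_dryrun oravol.0 oravol.1
cli% showtask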

25 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Online Volume Conversion to/from TDVV (1 of 3)

26 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Online Volume Conversion to/from TDVV (2 of 3)

Note: screen truncated to fit on slide

27 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Online Volume Conversion to/from TDVV (3 of 3)

• Tune the VV and convert the thin provisioned volume hp4 (allocated from FC capacity) to a thin deduped VV
using an SSD CPG
cli% tunevv usr_cpg SSD_r5de -tdvv hp4
• Tune the VV and convert the fully provisioned volume hp5 to a TDVV. Keep the original fully
provisioned VV with the name hp5.orig.
cli% tunevv usr_cpg SSD_r5de -keepvv hp5.orig -tdvv hp5
• Tune the VV and convert the TDVV hp99 to a fully provisioned VV using the CPG NL_r6
cli% tunevv usr_cpg NL_r6 -full hp99

28 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
TDVVs and Snapshots/Physical Copies

Snapshots made from TDVVs work similarly to snapshots created from fully provisioned
and TPVVs except:
• If a TDVV is created using a User CPG and the Copy Space CPG is dedup aware (SSD) then user
data and snapshot copy-on-write data will be deduped
• If Copy Space CPG is different than the User CPG for the TDVV the deduplicated data will reside in
the CPG associated with the TDVV (User CPG) and only “collision” data will reside in the Copy
Space CPG

When creating a Physical Copy, the base volume can be any provisioning type (fully
provisioned, TDVV, TPVV) and the destination volume can be a TDVV

29 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
TDVVs and Remote Copy
• Thin Dedup is supported as either the primary volume or the target volume
in a remote copy group
• Supported for both synchronous and periodic-asynchronous replication

P T

TDVV TDVV

30 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Remote Copy TDVV/TPVV Specifics
• If Target/Remote array does not
support TDVV, TPVV Target volumes
will be created for the remote copy
group

• If Target/Remote system doesn’t


support thin provisioning, a Warning
is displayed and fully provisioned
volumes will be used

31 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Thin Clones for non-duplicative VM Cloning
VM cloning leverages XCOPY and ODX

(Diagram: multiple VMs on a hypervisor cloned from an HP 3PAR virtual volume)
• Clones are created on-the-fly without pre-allocating any storage
• New data is deduped inline

32 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Adaptive Flash Cache
Appendix B
HK902 E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives

After completing this module, you should be able to:


• Explain the benefits of Adaptive Flash Cache (AFC)
• Understand what can and cannot be moved into AFC
• Explain the different LRU (Least Recently Used) queues and the concept of LRU queue demotion
• Use the appropriate CLI commands to set up, enable, disable, remove, and monitor AFC
• Understand the guidelines and rules regarding AFC
• Monitor AFC using the statcache and srstatcache commands

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Adaptive Flash Cache introduction

When a read request comes to the array from a host, the read can be either sequential or random:
• With a sequential read stream, an HP 3PAR OS algorithm detects the sequential read pattern and pre-fetches
data into cache, resulting in a cache-read hit
• With a random read stream that has no predictable pattern, the read usually results in a cache-read miss, and
the requested data must be read into cache from the back-end disks, which is non-optimal from a
performance point of view

Adaptive Flash Cache adds a second level of cache (between DRAM and the back-end disks) using
SSD capacity to increase the probability that a random read can be serviced at a much more
effective rate, improving read performance.
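
As a hedged preview of the CLI workflow behind this module's objectives (the 64 GiB size and the system-wide scope are illustrative assumptions; consult the HP 3PAR CLI reference for the exact options supported by your HP 3PAR OS version):

cli% createflashcache 64g
cli% setflashcache enable sys:all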

3 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Adaptive Flash Cache explained
Without Adaptive Flash Cache:
• The host sends reads or writes to DRAM cache (16K page size)
• Reads and writes are serviced from/to the HDDs

With Adaptive Flash Cache:
• The host sends reads or writes to DRAM cache (16K page size)
• An SSD flash cache tier (16K page size) sits between DRAM cache and the HDDs and services cache reads
• Writes continue to go to the HDDs
5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
What does/does not get cached in AFC

• Only small block random read data (64K or less IO size) from a node’s DRAM cache
is a candidate for moving to AFC
• Data that is pre-fetched using the array sequential read-ahead algorithm into
DRAM cache and data in DRAM cache with IO size >= 64K are not candidates to be
moved to AFC
• Data is only placed into AFC after having been resident in DRAM first—data is
never put in AFC directly from FC and NL back end disks
• Data read into DRAM from SSD media will never be placed in AFC
• AFC is not intended to be an extension for write data

6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Cache Memory Page (CMP) vs. Flash Memory Page (FMP)
DRAM cache:
• DRAM cache on a node is broken down into 16K cache memory pages (CMPs)
• A CMP can be utilized by host writes, mirroring of writes from other controllers, or for sequential or random read data requested by hosts
• When DRAM cache utilization reaches 90% (10% of CMPs are free/clean), 16K CMPs used for random read IO are candidates to be moved out to SSD AFC

Adaptive Flash Cache:
• AFC is broken down into 16K flash memory pages (FMPs) carved from 1 GB chunklets on SSD disks

8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AFC data flow in the controllers
[Figure: Adaptive Optimization moves 128MB regions between Nearline (NL), Fast Class (FC), and Solid State (SSD) tiers; AFC moves 16KB CMPs from the controller read cache into flash cache LDs on SSD (e.g. 800GB)]

• AFC leverages SSD capacity in an array as a level-2 read cache extension
− Small block random read data that is to be removed from DRAM cache is copied to AFC
− Provides a second-level caching layer between DRAM and spinning disks (HDD)
• If data in flash cache is accessed, it is copied into DRAM cache and remains there until it is once again removed
• AFC is fully compatible with Adaptive Optimization (AO)
• If an array contains both cMLC and eMLC SSDs, AFC will be created on the eMLC SSDs
9 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AFC data flow in the controllers: how it works
[Figure: Server <-> DRAM read cache (16k CMPs) <-> Flash Cache (16k FMPs organized into LRU queues: Dormant, Cold, Norm, Warm, Hot) <-> Virtual Volume]

First read of LBA 0x9abc6h (data not yet in AFC):
1. The server sends a read request for LBA 0x9abc6h
2. DRAM cache read miss on LBA 0x9abc6h
3. LBA 0x9abc6h is read from the VV into a 16k CMP in DRAM cache
4. The read request is completed to the server
5. The system later wants to remove the 16k CMP for LBA 0x9abc6h from DRAM cache
6. A 16k FMP is allocated from the “Dormant” LRU queue to the “Normal” LRU queue
7. The 16k CMP is copied to the AFC FMP and removed from DRAM cache

Subsequent read of LBA 0x9abc6h (AFC hit):
8. The server sends another read request for LBA 0x9abc6h
9. DRAM cache read miss on LBA 0x9abc6h
10. LBA 0x9abc6h is read from flash cache into DRAM cache (AFC hit)
11. The read request is completed to the server
12. The 16k FMP is promoted from “Normal” to “Warm” because of the flash cache hit
13. When the 16k CMP is later removed from DRAM cache, it is again copied to the AFC FMP and removed from DRAM cache

23 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
LRU (Least Recently Used) Queues

Hot Warm Norm Cold Dormant

• When a 16K CMP of random read data moves to AFC it is copied to a 16K FMP and into one of five
least recently used queues to track how hot the data is
• A 16K CMP that needs to be moved from DRAM to AFC (16K FMP) is placed in the NORM LRU queue
• FMPs in AFC that are accessed by a host can be promoted to hotter/higher priority LRU queues
• FMPs in AFC can be placed in a lower priority LRU queue as a result of queue demotion

24 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AFC data flow in the controllers: LRU Queue Demotion
[Figure: same Server / DRAM read cache / Flash Cache LRU queue / Virtual Volume data flow, with the “Dormant” LRU queue nearly empty]

1. The server sends a read request for LBA 0xc619ab
2. DRAM cache read miss on LBA 0xc619ab
3. LBA 0xc619ab is read from the VV into a 16k CMP in DRAM read cache
4. The read request is completed to the server
5. The system now wants to remove the 16k CMP for LBA 0xc619ab from DRAM read cache
6. A 16k FMP is allocated from the “Dormant” LRU queue to the “Normal” LRU queue
7. The 16k CMP is written to the AFC FMP and removed from DRAM cache
8. The AFC “Dormant” LRU queue has now run out of FMPs
9. So all AFC LRU queues are demoted one level (FMPs move one queue colder: Hot to Warm, Warm to Norm, Norm to Cold, Cold to Dormant), replenishing the “Dormant” queue

36 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
LRU movement examples (1 of 2)
Excerpts from statcache output showing LRU queues

----------------- FMP Queue Statistics ------------------

Node Dormant Cold Norm Warm Hot Destage Read Flush WrtBack
0 24056921 0 1107007 1484 412 0 0 0 0

----------------- FMP Queue Statistics ------------------

Node Dormant Cold Norm Warm Hot Destage Read Flush WrtBack
0 0 0 25132171 7000 26653 0 0 0 0

37 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
LRU movement examples (2 of 2)
Excerpts from statcache output showing LRU queues
----------------- FMP Queue Statistics ------------------

Node Dormant Cold Norm Warm Hot Destage Read Flush WrtBack
0 0 25132171 7000 26653 0 0 0 0 0

----------------- FMP Queue Statistics ------------------

Node Dormant Cold Norm Warm Hot Destage Read Flush WrtBack
0 25132171 7000 26653 0 0 0 0 0 0

38 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AFC specifics
• No license required
• Must be running HP 3PAR OS 3.2.1 or higher
• Not supported on the HP 3PAR 7450/7450c models
• Minimum amount of AFC configurable per controller node pair is 64 GB for all models
• Maximum amount of AFC configurable per controller node pair depends on the hardware model:
7200/7200c: 768 GB
7400/7400c: 768 GB (1500 GB max for 4-node models)
7440c: 1500 GB (3000 GB max for 4-node models)
V400 and V800: 2064 GB
• Minimum 4 SSDs per controller node pair for 7000 Series and 8 SSDs per controller node pair for
V400 and V800 required for AFC configuration

39 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Configuring and monitoring AFC

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Adaptive Flash Cache CLI commands
Command Summary
createflashcache Specify amount of SSD capacity per node pair to be allocated for AFC

setflashcache Enable/disable AFC for VV Sets or per array

showflashcache Display how much AFC has been allocated per controller node

removeflashcache Disable/remove all AFC from array

statcache Display cache statistics, including AFC stats

srstatcache Displays historical performance data reports for flash cache and data cache

41 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
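A minimal end-to-end sequence, composed only from the commands summarized above; the 128g size and the ESX5ii VVset name are reused from examples elsewhere in this module and are illustrative only:
cli% createflashcache 128g        (allocate 128 GB of flash cache per node pair)
cli% setflashcache enable vvset:ESX5ii        (enable AFC for one VVset)
cli% showflashcache        (verify the per-node allocation)
cli% showflashcache -vvset        (verify the per-VVset policy)
cli% statcache        (watch CMP and FMP hit rates while the workload runs)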
AFC CLI: createflashcache and showflashcache
• Create 128 GB of Flash Cache for each node pair in the array (must be allocated in multiples of 16 GB):
cli% createflashcache 128g

• Display the status of Flash Cache for all nodes on the system (example output shown):
cli% showflashcache
-(MB)-
Node Mode State Size Used%
0 SSD normal 65536 0
1 SSD normal 65536 0
-------------------------------
2 total 131072

42 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AFC CLI: setflashcache System Level Mode (1 of 2)
• There are two “targets” the setflashcache subcommands can be executed against:
“sys:all”
“vvset:<vvset name>”

• The “sys:all” target is used to enter System Level Flash Cache Mode and enables/disables
flash cache for all VVs and VVsets on a system globally
cli% setflashcache enable sys:all
cli% setflashcache disable sys:all

• The system level mode is global and overrides any settings applied to VVset targets while not in
system level mode

• To exit system level flash cache mode and get back into VVset mode you must use the “clear”
subcommand
cli% setflashcache clear sys:all

43 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AFC CLI: setflashcache System Level Mode (2 of 2)
• When flash cache system level mode is entered, any flash cache setting for vvset:<VVset> targets is overridden

• While in system level mode the showflashcache command with either the -vv or -vvset option will not display information for individual VVs or VVsets: these options only display individual VV and VVset data when not in system level mode

44 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
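A short illustrative sequence of the system level mode behavior described above, using only commands shown in this module:
cli% setflashcache enable sys:all        (enter system level mode; flash cache enabled globally)
cli% showflashcache -vvset        (individual VVset data is not displayed while in system level mode)
cli% setflashcache clear sys:all        (exit system level mode; prior VVset-level settings are applied again)
cli% showflashcache -vvset        (VVset-level policies are displayed again)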
AFC CLI: setflashcache when NOT in System Level Mode
• Use the “vvset:<VVset name>” target when not in flash cache system level mode to enable/disable flash cache for VVsets
 If changes are made while in system level mode they will occur but will not take effect until you clear system level mode
• Any flash cache setting specified using the “vvset:<VVset name>” target will be overridden (have no effect) if you go into system level mode
• When system level mode is cleared, any prior “vvset:<VVset name>” target configurations that were created or modified while in system level mode are applied
• showflashcache with either the -vv or -vvset option only displays VV and VVset data when not in system level mode
• Changes specified to a VVset target while in system level mode will not have any effect on the non-system mode settings, but you will not be able to see the effect until you clear system level mode
• Examples:
cli% setflashcache enable vvset:ESX5ii
cli% setflashcache disable vvset:ESX5ii

45 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AFC CLI command examples (1 of 2)
• Display the status of VVsets with Flash Cache enabled on the system (example output shown)
cli% showflashcache -vvset

Id VVSetName AFCPolicy
1 ESX5ii enabled
----------------------
1 total

• Display the status of VVs with Flash Cache enabled (example output shown)
cli% showflashcache -vv
VVid VVName AFCPolicy
50 ESX5ii.0 enabled
51 ESX5ii.1 enabled
52 ESX5ii.2 enabled
-------------------------
3 total
46 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AFC CLI command examples (2 of 2)
• Display the status of Flash Cache for all nodes on the system (example output shown):
cli% showflashcache
-(MB)-
Node Mode State Size Used%
0 SSD normal 65536 35
1 SSD normal 65536 35
-------------------------------
2 total 131072

• Disable and remove all flash cache from the array


cli% removeflashcache
Are you sure you want to remove the flash cache?
Select q=quit y=yes n=no: y

• Display the status of Flash Cache for all nodes on the system (example output shown):
cli% showflashcache

Flash Cache is not present.


47 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Monitoring Cache: statcache (1 of 5)
statcache shows CMP (DRAM) statistics and FMP statistics for data held in AFC

statcache: when run with no options, reports CMP and FMP statistics on a per-node basis

48 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Monitoring Cache: statcache (2 of 5)
Read and Write statistics details

• Node: Node ID on the storage system


• Type: Data access type either Read or Write
AFC does not cache write data so the “write” counter for FMP will always be 0
• Accesses: Number of Current and Total Read/Write I/Os
• Hit%: Hits divided by accesses (displayed in percentages)

49 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Monitoring Cache: statcache (3 of 5)
Read Back and Destaged Write

• Read Back: Data reads from flash cache back into DRAM cache; these represent flash cache read hits
• Destaged Write: Writes of CMPs from DRAM into flash cache; these occur when a CMP is being removed from DRAM read cache and written into flash cache -- these writes are mirrored in flash cache (RAID 1) even though AFC only holds read data

50 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Monitoring Cache: statcache (4 of 5)
FMP Queue Statistics

Displays FMPs in LRU queues per controller node

51 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Monitoring Cache: statcache (5 of 5)
CMP Queue Statistics

• Free: Number of cache pages without valid data on them


• Clean: Number of clean cache pages (valid data on page)
A page is clean when data in cache matches data on disk
• Write1: Number of dirty pages that have been modified exactly 1 time
A page is dirty when it has been modified in cache but not written to disk
• WriteN: Number of dirty pages that have been modified more than 1 time
• WrtSched: Number of pages scheduled to be written to disk
• Writing: Number of pages currently being written by the flusher to disk
• DcowPend: Number of pages waiting for delayed copy on write resolution
• DcowProc: Number of pages currently being processed for delayed copy on write
resolution
52 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Monitoring Cache: statcache -v

statcache -v reports CMP and FMP statistics by virtual volume

53 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Monitoring Cache: statcache -v -metadata

statcache -v -metadata reports CMP, FMP, and metadata statistics by virtual volume

54 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Monitoring Cache: srstatcache (1 of 2)
Displays historical cache statistic reports for both CMPs and FMPs

55 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Monitoring Cache: srstatcache (2 of 2)

cli% srstatcache -internal_flashcache -fmp_queue -btsecs -10m

----CMP---- ----FMP---- -Read Back- -Dstg Wrt- ----------------------FMP Queue-----------------------


Time Secs r/s w/s rhit% whit% rhit% whit% IO/s MB/s IO/s MB/s Dormant Cold Norm Warm Hot Destage Read Flush WrtBack
2015-02-07 13:05:00 MDT 1404759900 2730.0 2721.0 5.4 0.5 0.1 0.0 2.0 0.0 0.0 0.0 16770454 0 5643 884 235 0 0 0 0
2015-02-07 13:10:00 MDT 1404760200 2580.0 2583.0 5.3 0.5 0.1 0.0 2.0 0.0 0.0 0.0 16770685 0 5147 1193 191 0 0 0 0

Display the internal flashcache activity including FMP queue statistics beginning 10 minutes ago

56 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Warm Up time for AFC

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Estimating Adaptive Flash Cache Warmup (1 of 3)
• Just like a normal DRAM based cache, flash cache requires time to warm up before an application or benchmark sees a noticeable improvement in I/O latency

• The warmup time can be anywhere from minutes to hours depending on


factors such as workload and cache size

• To estimate how long it will take to warm up flash cache:

First: Estimate the cache size

Cache Size = (DRAM Read Cache) + (FLASH Read Cache)

58 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Estimating Adaptive Flash Cache Warmup (2 of 3)
• Second: Estimate the demand for Flash Cache in MB/sec
This demand depends on the I/O size
All read I/Os to FC drives on HP 3PAR are 16kb aligned/16kb in size
All sequential reads and reads that are 64kb in size should not be included in estimating flash cache warm up

IO size:          <=16kb  <=32kb  <=48kb  <64kb
IO Size Modifier:     16      32      48     64

Flash Cache Fill Rate = Read IOPS * (IO Size Modifier)

59 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Estimating Adaptive Flash Cache Warmup (3 of 3)
Estimating Cache Hit Rate

• To understand the effectiveness of Flash Cache, you also need to be able to


estimate the cache hit rate

• To do this, take the size of the arrays cache and divide by the Working Set Size

• Working Set Size: a measurement of the total amount of unique blocks being
accessed over a unit of time

60 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Warmup time and Cache Hit Rate: Example
• 8TB of VVs (working set)
• 32GB of read cache
• 256GB of flash cache
• 288GB total of array cache

Array Cache Size = (DRAM Read Cache) + (FLASH Read Cache) = 32GB + 256GB = 288GB

Cache Hit Rate = Array Cache Size / Working Set Size
288GB / 8TB = 0.036, so for a random workload the best flash cache hit % would be 3.6%

Flash Cache Fill Rate = Read IOPS * (IO Size Modifier)
Host workload: 20,000 4kb random reads
20,000 * 16 = 320 MB/sec for filling flash cache

Time to Fill = Array Cache Size / Flash Cache Fill Rate
288GB / (320 MB/sec) = 900 seconds = 15 minutes, so the maximum cache hit rate can be obtained in about 15 minutes

61 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AFC Impact Example (1 of 2)
Pre-configuration of AFC
Workload stats:
IOs/Sec: ~1000 Service time (ms) ~6 ms

Approx. 10 minutes after configuration of AFC


Workload stats:
IOs/Sec: ~1500 (50% increase)
Service time (ms) ~4 ms (33% decrease)

62 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AFC Impact Example (2 of 2)

Approx. 35 minutes after configuration of AFC


Workload stats:
IOs/Sec: ~4500 (450% increase)
Service time (ms) ~1 ms (83% decrease)

63 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AFC tidbits
• Once the amount of AFC is specified using the createflashcache command, the amount cannot be changed: the removeflashcache command must be used to remove the designated AFC, which can then be recreated using createflashcache
• When adding new SSDs, the tunesys operation cannot be used to rebalance FMPs across all SSDs used for AFC: to use all SSDs (including the newly added ones), the removeflashcache command must be used to remove the designated AFC, which can then be recreated using createflashcache
• AFC and Adaptive Optimization can co-exist on the same array but serve different purposes: both improve performance and reduce cost
• The createflashcache -sim <size> command can be used to track flash cache statistics even if the array does not have SSDs, to determine if AFC would be beneficial
• If the array contains both eMLC and cMLC SSDs, flash cache will be created on the eMLC drives
• Flash Cache is not supported on the 480GB cMLC SSDs
• Administration and configuration of AFC can be done using SSMC 2.1 or higher

64 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona Part 1:
Concepts and Configuration
Appendix C
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives
After completing this module, you should be able to:
• Explain the benefits of File Persona
• Install and configure File Persona
• Work with File Persona Authorization and Authentication
• Understand the logical layers of File Persona
− File Shares
− File Stores
− Virtual File Servers (VFS)
− File Provisioning Group (FPG)

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
What is File Persona?
The two most popular storage system technologies are file level storage and
block level storage
• File level storage is often seen and deployed in Network Attached Storage (NAS) systems
This storage technology is most commonly found in NAS systems and file-serving storage arrays; the storage is configured with a protocol such as NFS or SMB, and files are stored on and accessed from it in bulk

• Block level storage is often seen and deployed in Storage Area Network (SAN) storage
Storage technology where raw volumes of storage are created; each block can be controlled like an individual hard drive, the blocks are controlled by server based operating systems (such as VMware), and each block can be individually formatted with the required file system.

Prior releases of HP 3PAR OS only supported block level storage

3 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona Use Cases
• Home directory consolidation
• Group/departmental shares
• Remote/Branch Office shares
• Sync and share
• Custom cloud applications
• Archiving
• Backup and Recovery

4 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona Hardware requirements
• Series 7000 “c” model: 7200c, 7400c, 7440c, or 7450c
2-Port 10Gb Optical Ethernet NIC, or
4-Port 1Gb 1000Base-T Ethernet NIC
[Figure: 2-Port 10Gb Optical Ethernet NIC example, installed in slot 2]
• HP 3PAR OS 3.2.1 MU2

5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR converged file services logical view

6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona Scalability Limits
Item                                              Limit
Concurrent SMB users per node pair                1500
Max File Provisioning Groups per node pair        16
Max size per File Provisioning Group              32TB
Max aggregate capacity of all FPGs per node pair  64TB
Shares per node pair                              4000
FPG, VFS per node pair                            16

7 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR File Persona: Enablement
and Setup

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona Enablement and Setup: SSMC (1 of 3)
• Advanced file objects off

• Advanced file objects on

9 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona Enablement and Setup: SSMC (2 of 3)

10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona Enablement and Setup: SSMC (3 of 3)

11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona Enablement and Setup : CLI (1 of 2)
1. Verify File Persona license available
− showlicense – look for File Persona
2. Ensure File Persona is not already running
− showfs
3. Start File Persona, specifying target
nodes/ports
− startfs 0:2:1 1:2:1
4. Configure FP node addresses, networking
− setfs nodeip …; setfs nodeip …
− setfs gw …
− setfs dns …
5. Configure authentication (CLI or SSMC)
− setfs auth …
− setfs ad …
12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona Enablement and Setup : CLI (2 of 2)
• File Services included in TPD install image
− No user interaction changes to install process
− TPD image significantly larger due to inclusion of file services code
• .iso installed into /common/fsvc

• File Services feature enabled from SSMC or CLI


− Requires appropriate file services license
− Screens to collect IP Address and Node/Port information
− Invokes ‘startfs’ on InForm to perform actual installation. Internally:
1. controlport fs add node:slot:port
2. createfsvm …
3. createfsquorumdevice …
4. Validate manageability and connectivity
− Must be enabled on a node pair basis
− stopfs to suspend or disable file services
14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR File Persona:
Authentication and Authorization

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
What is Authentication and Authorization?
• Provides a mechanism for users connecting via the supported protocols (SMB, NFS, and Object
Access API) to authenticate themselves
• Unique access permissions for each file or folder can be specified for each user and group
provided through these mechanisms
• Support is provided for authentication via Active Directory, LDAP, or Local Authentication
Active Directory for primarily Windows environments
RFC2307 is supported for cross-protocol access with active directory
LDAP for primarily Linux/Unix environments
Local Authentication for smaller environments of either type
• The providers can also be stacked for using a combination of these
• Identity Mapping can be configured for NFSv4 access
• The data path security discussed here is completely independent of the management path
security for the array

16 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Show Authentication Settings

Configuration and setup of Active Directory and


LDAP can be done from these screens as well
17 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Configure Authentication Settings (1 of 2)

18 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Configure Authentication Settings (2 of 2)

19 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Add Local User

20 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
SSMC: Add Local Group

21 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
HP 3PAR File Persona:
Provisioning and Management

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Persona Configuration Explained
• Manage file services node pairs
Start/Stop Service
Networking
Authentication Inserv038
Antivirus

node 0 node 1

23 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Provisioning Groups Explained

• Manage File Storage


Activate/Deactivate Inserv038
Grow
Failover
Reassign node 0 node 1
Snapshot space reclaim

markfpg jimfpg

24 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Create FPG: SSMC

NOTE: by default the FPG is created


when the createvfs command is run

25 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Provisioning Group: Overview
• Roughly equivalent to CPG, but for
file

• Main unit of HA, DR

• Contains a set of associated VVs for


actually storing data

• May be individually
activated/deactivated to control
availability of associated shares

• Minimum Size: 1TB


Maximum Size: 32TB

• Active on a single FP controller node


FPG Creation from CLI
SYNTAX
createfpg [options] <cpgname> <fpgname> <size>{g|G|t|T}
createfpg -recover [-wait]

OPTIONS
-comment <comment>
Specifies the textual description of the file provisioning group.
-full
Create the file provisioning group using fully provisioned volumes.
-node <nodeid>
Bind the created file provisioning group to the specified node.
-recover
Recovers the file provisioning group which is involved in Remote DR and that was removed using the -forget option.
-wait
Wait until the associated task is completed before proceeding. This option will produce verbose task information.

SPECIFIERS
<cpgname>
The CPG where the VVs associated with the file provisioning group will be created.
<fpgname>
The name of the file provisioning group to be created.
<size>
The size of the file provisioning group to be created. The specified size must be between 1T and 32T. A suffix (with no whitespace before the suffix) will modify the units to GB (g or G suffix) or TB (t or T suffix).
27 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
FPG Creation Example from CLI

28 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
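A hedged companion to the example above, based on the createfpg syntax on the previous page; the CPG name FC_r6 is hypothetical, while markfpg matches the FPG name used in the diagrams in this module:
cli% createfpg -node 0 FC_r6 markfpg 16t        (create a 16 TB FPG bound to node 0, using CPG FC_r6)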
FPG Modification: setfpg
SYNTAX
setfpg [options] <fpgname>

OPTIONS
-comment <comment string>
Specifies any additional textual information.
-rmcomment
Clears the comment string.
-activate
Makes the File Provisioning Group available.
-deactivate
Makes the File Provisioning Group unavailable.
-primarynode <nodeid>
Specifies the primary node to which the File Provisioning Group will be assigned. Appropriate <nodeid> values are defined as those on which file services has been enabled.
-failover
Specifies that the File Provisioning Group should be failed over to its alternate node. If it has previously failed over to the secondary, this will cause it to fail back to the primary node. Will fail if a graceful failover is not possible.
-forced
In the event of failure to failover, this will attempt a forced failover.

SPECIFIERS
<fpgname>
The name of the file provisioning group to be modified.
29 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
FPG Growth: growfpg

• Implemented with the creation of an additional VV
− New VV is associated with the existing FPG
− Creation and growth increment limited to 1T “chunks”

30 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
FPG Grow: SSMC

31 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Virtual File Servers (VFS) Explained

• Manage network endpoints


Inserv038
IP addresses
• Manage user/group capacity
Quota Limits
node 0 node 1
• Manage antivirus
Define default Policies and start scans
markfpg jimfpg

markvfs

32 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VFS: SSMC (1 of 3)

33 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VFS: SSMC (2 of 3)

34 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VFS: SSMC (3 of 3)

35 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VFS Creation: CLI
SYNTAX
createvfs [options] <ipaddr> <subnet> <vfsname>

OPTIONS
-comment
Specifies any additional textual information.
-bgrace <time>
The block grace time in minutes for quotas within the VFS.
-igrace <time>
The inode grace time in minutes for quotas within the VFS.
-fpg <fpgname>
The name of the File Provisioning Group in which the VFS should be created.
-cpg <cpgname>
The CPG in which the File Provisioning Group should be created.
-size <size>
The size of the File Provisioning Group to be created.
-node <nodeid>
The node to which the File Provisioning Group should be assigned.
-vlan <vlanid>
The VLAN ID associated with the VFSIP.
-wait
Wait until the associated task is completed before proceeding. This option will produce verbose task information.

SPECIFIERS
<ipaddr>
The IP address to which the VFS should be assigned.
<subnet>
The subnet for the IP Address.
<vfsname>
The name of the VFS to be created.

36 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
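A hedged example based on the createvfs syntax above; the IP address and subnet are hypothetical values, while markvfs and markfpg match the names used in the diagrams in this module:
cli% createvfs -fpg markfpg 192.168.10.50 255.255.255.0 markvfs        (create VFS markvfs in FPG markfpg with one network endpoint)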
VFS IP Addresses CLI

• IP address information required at VFS configuration time


− VFS IP address provides share accessibility to the outside world
− Potentially many IP addresses associated with a single VFS
− IP specified in createvfs is not special

• Managed using showfsip, createfsip, setfsip, removefsip


− ID auto-assigned at creation time, used for set/remove, along with VFS name

37 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Stores Explained

• Manage store capacity Inserv038


Quota Limits
• Manage antivirus
node 0 node 1
Policies and Scans per file store
• Manage file snapshots markfpg jimfpg
Create/Delete file snapshots
markvfs

.admin home

38 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Stores: SSMC (1 of 4)

39 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Stores: SSMC (2 of 4)

40 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Stores: SSMC (3 of 4)

41 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Stores: SSMC (4 of 4)

42 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Stores Management: CLI

• File Stores are primarily a grouping mechanism

• Typical HP 3PAR file store CLI commands include:


− showfstore, createfstore, setfstore, removefstore

• May be explicitly created, or implicitly created as part of share creation

43 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
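A hedged sketch of explicit file store creation; createfstore is listed above, but the argument order (VFS name followed by file store name) and the -fpg option are assumptions based on the other file store commands in this module, and home matches the file store name shown in the diagrams:
cli% createfstore -fpg markfpg markvfs home        (create file store home in VFS markvfs)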
File Shares Explained

• Share file to network clients Inserv038

node 0 node 1

markfpg jimfpg

markvfs

.admin home

markm theom

44 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Shares: SSMC (1 of 3)

45 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Shares: SSMC (2 of 3)
Share type can be:
• NFS share
• SMB share

46 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Shares: SSMC (3 of 3)

Share path settings include


• Client List
• Permissions

48 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Shares: CLI

• CLI management commands include: showfshare, createfshare, setfshare, removefshare

• Examples:
cli% setfshare smb -allowip 100.1.1.1 myvfs myshare
cli% setfshare nfs -options rw -fstore myfstore myvfs myshare
cli% setfshare obj -readonly true -fstore myfstore myvfs myshare

49 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
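Hedged share creation examples to complement the setfshare examples above; createfshare is listed above, and the argument pattern (protocol subcommand, options, VFS name, share name) is assumed to mirror setfshare. The share names markm and theom come from the diagrams in this module:
cli% createfshare smb -fstore home markvfs markm        (create an SMB share in file store home)
cli% createfshare nfs -options rw -fstore home markvfs theom        (create a read/write NFS share in file store home)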
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona Part 2:
Snapshots, Antivirus, Quotas
Appendix D
HK902S E.00

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Module objectives
After completing this module, you should be able to:
• Explain how snapshots work with File Persona
• Configure and use built in Antivirus capabilities of File Persona
• Work with File Services Quota Management

2 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Snapshots

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Snapshots Explained (1 of 2)
• Snapshots are instant and space efficient replicas used for data protection
• Snapshots are the point-in-time (PIT) replica of an active data set/file system
• Snapshots are the most common method for local data protection
• Snapshots enable user driven self recovery
• Snapshots don’t occupy any additional space -- unless the data is changing
• Snapshots provide data continuity and enhanced application availability
• Snapshots eliminate backup windows by simplifying backup management of large volumes of data
• Snapshots provide the opportunity for more frequent backups and thus more recovery points (RPOs)
• Snapshots offer virtual elimination of backup gaps, overall lowering TCO

4 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Snapshots Explained (2 of 2)
• Snapshots offer a versioning mechanism that allows a view of the present and past
point-in-time states of the file system
• Snapshots preserve the previous states of files and folders
• Snapshots provide granular restoration at the file/folder level (vs. volume level for
block snapshots)
• Snapshots require a reclamation process to free up space within an FPG after deleting
snapshots
• Snapshots are read-only
• Snapshots use copy-on-write (COW) to enable fast snaps and minimal memory usage

5 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Snapshot Functionality
• During file store creation, the file store is configured for snapshots, with a .snapshot directory in the root of the file store
• When a snapshot is created, an entry in this directory is added matching the snapshot name
[Figure: file store root containing dir1, dir2, and the .snapshot directory]
• When a file/folder preserved by a snapshot is
modified, a copy-on-write operation occurs where
the current version points to new blocks and the
old blocks are left intact

• After removing snapshots, a snapshot reclamation


process should be executed at the FPG level to
reclaim any now-unused blocks
6 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Snapshot Design Criteria
Scalability
• 1 snapshot directory tree per file store (16 per FPG currently)
• 1024 snapshots per snapshot directory tree
• Designed for almost limitless snapshot capability (half a million on a 4 node model)

Flexibility
• Supports all protocols used to access the file system: SMB, NFS, Object Access API
• Snapshots can be exported/presented as an independent share for backup and replication
needs

Efficiency
• Fast snapshot invocation and minimal memory usage (no file system freeze)
• Space allocation equivalent to free block and inode pool for the file system
• No over-provisioning required for snapshot reservation
7 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Snapshot Basic Concepts
Snapshots are built on top of two core technologies:
Name Space Versioning
• Used to track file and directory creation/deletion
• Introduces the concept of birth and death epochs to a file
• Files that are deleted after a snapshot was taken are not deleted from the directory
• Deleted files are hidden from standard enumeration, but continue to be available

Remap on write
• Used to track changes to a single file
• All historical information is linked to the present version of the object
• When a file is modified after a snap is taken:
A new inode is created to reflect the old view
New writes are remapped to a new area
Block pointers are adjusted as needed
8 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Snapshot Lifecycle
• Mark directory as a snap tree implicitly as part of creating a file store (.snapshot
directory)

• Creation of snapshots (on demand or scheduled)

• Snapshot deletion

• Purging of deleted snapshots: reclamation

• Deleting a snapshot tree (done implicitly when deleting a file store)

9 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: File Snapshots View in SSMC

10 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Creating a Snapshot in SSMC (1 of 2)

11 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Creating a Snapshot in SSMC (2 of 2)

12 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Snapshot Delete and Reclaim
Snapshots are deleted/removed:
• If a retention period is set and the retention period is reached
• Deletion of snapshots will happen automatically if the -retain option was used with the CLI
• Snapshots can be deleted manually via CLI or SSMC
• When a snapshot is deleted, the file system space will not be released until a snapshot reclamation is done for the FPG.

13 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Create Snapshot Reclaim in SSMC

MAXSPEED = release of only entire files


MAXSPACE = release of blocks and files

14 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Snapshot Reclamation Tasks in SSMC

15 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Snapshots using the CLI (1 of 2)
• Create snapshots
createfsnap [-retain <rcnt>] [-f] [-fpg <fpgname>] <vfs> <fstore> <tag>

• Delete snapshots
removefsnap [-f] [-fpg <fpgname>] [-snapname <name>] <vfs> <fstore>

• List snapshots
showfsnap [-fpg <fpgname>] [-vfs <vfs>] [-fstore <fstore>] [-pat] <snapname>|<pattern>]

16 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
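A brief illustrative use of the createfsnap syntax above; the tag dailysnap is hypothetical, while markvfs and home are the VFS and file store names used in this module:
cli% createfsnap -retain 7 markvfs home dailysnap        (snapshot file store home with a retain count of 7)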
File Persona: Snapshots using the CLI (2 of 2)

• Start/Resume snapshot reclamation task


startfsnapclean [-resume] [-reclaimStrategy {<maxspeed>|<maxspace>}] -fpg <fpg-name>

• Stop/Pause snapshot reclamation task


stopfsnapclean [-pause] -fpg <fpg-name>

• List snapshot reclamation tasks


showfsnapclean -fpg <fpgname>

Snapshot reclamation can be scheduled using the createsched


command in conjunction with the startfsnapclean command

17 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
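A short illustrative sequence using the reclamation commands above; markfpg is the FPG name used in this module's diagrams:
cli% startfsnapclean -reclaimStrategy maxspace -fpg markfpg        (reclaim both whole files and individual blocks)
cli% showfsnapclean -fpg markfpg        (check the reclamation task)
cli% stopfsnapclean -pause -fpg markfpg        (pause the reclamation task if needed)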
File Persona: Snapshots statfs
statfs –snapshot [-iter <number>] [-d <secs>] [-node <nodeid>[,<nodeid>]...] [-verbose]

cli% statfs -snapshot -verbose


02:46:57 03/15/2015
Snapshots
02:46:57 03/15/2015
Node RedirectOnWrite
0 0
0 118
1 12
1 0
1 2
-----------------------------------
total 2 132

18 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Antivirus Scanning

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Antivirus Scanning explained
• Antivirus scanning is a data protection method against destructive virus and malware
• Antivirus scanning on a network share or home directory is critical for data protection
as the incoming data is from multiple users and PCs
• Ensures data security of any removable data on a PC
• Quarantines the infected file(s) for an offline action to maintain business continuity
• Reduces outages by preventing virus attacks
• Prevents viruses/malware from propagating further
• Protects against Denial of Service attacks

Key acronyms:
AV: Antivirus
VSE: Virus Scan Engine

21 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus Solution Overview
• AV solution is independent of the Antivirus stack provided by third party vendors
• AV seamlessly integrates with HP 3PAR arrays and server based third party vendors: McAfee and Symantec
• Supports configuration to control scanning (AV scan policies)
• Supports on-access (real time) and scheduled scanning (on demand scanning)
• Supports multiple virus scan servers for redundancy and improved throughput performance
• Scanned file information is persisted to eliminate redundant scans
• Supports different protocols: NFS/HTTP/CIFS
• Single vendor supported for the cluster (currently)
• Files are scanned with Virus Scan Engines (VSEs) using the latest virus definition updates
• Infected files are quarantined by default

22 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus Scanning: On Access Scan (1 of 2)
• CIFS: Policy can be set to scan on Open or on both Open and Close and scan is performed
only when needed
• NFS/HTTP: Files are scanned on READ (No other policy need to be explicitly defined)
• Access to infected files is denied to the user and file(s) quarantined
• Support for automatic and manual start/stop of AV service
• AV policies (SCAN ENABLE, VENDOR, EXCLUSION (size and extension type), CIFS scan, AV UNAVAILABLE) can be set at the Virtual File Server (VFS) level
• AV Policy override at File Store allowed
• Files that need to be scanned are scanned as per above mentioned policies set by the
user
• Files will need to be rescanned when the virus definitions are updated on the VSE or
when files are modified

23 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus Scanning: On Access Scan (2 of 2)
• Hot files are not scanned for all protocols (Except for CIFS)
• When OPEN and CLOSE policy is set for CIFS, there will be a delay of 35 seconds before
a scan is triggered on CLOSE
• AV Scan statistics are provided at the VFS level
• AV Unavailable policy defines the action taken when the configured VSEs are unavailable or when incoming simultaneous scans exceed the max threads in the AV service (currently 512): ALLOW will return reads without a scan being triggered, DENY will return “access denied”
• File type exclusions can be defined to exclude multiple file extensions from scanning
• File size exclusions can be defined to exclude files exceeding the specified size from
scanning

24 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus Scanning: On Demand Scan
• Support for starting AV scan task on a particular Virtual File Server/File store/directory
path
• Support for starting AV scan task with duration option
• Pause/Resume/Stop of an AV scan task supported
• Supports a maximum of eight AV scan tasks running at any point in time
• Listing of all active AV scan tasks for a particular VFS/File store
• AV Policies, once defined, are applicable to both On-access and On demand scanning
• Reasons for Files skipped during AV scan are logged
• Already scanned files are not scanned and reported under skipped files
• Scheduling of scans using the standard HP 3PAR task schedule

25 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus Scanning: Managing Quarantine Files
• Support for List of Quarantined files names under VFS/File store
• Deleting Quarantined files present under VFS/File store
• Resetting Quarantined files present under VFS/File store
• Moving Quarantined files present under VFS/File store to a default location in the .admin folder of the VFS with a timestamp
• Directory structure and file attributes will be preserved on a move operation
• Users can clean the moved infected files manually through an external virus scanner
• Files marked as infected are not rescanned unless the files are
explicitly reset by the user

26 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus scanning process
1. The client requests an open (read) or close (write) of an SMB file, or a read for an NFS or HTTP file.
2. The HP 3PAR array determines if the file needs to be scanned based on the policies that have been set and notifies the AV Scan Server.
3. The VSE scans the file and reports the scan results back to the array.
4. If no virus is found, access is allowed to the file. If a virus is found, an “Access Denied” is returned to an SMB client, a “Permission Denied” to an NFS client, or “transfer closed” to an HTTP client. The file is then quarantined and the scan messages are logged in /var/log/ade.

27 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus Architecture
Protocols
CIFS
• Scan on OPEN
• Scan on OPEN and CLOSE

NFS
• Scan on Read

HTTP
• Scan on Read

28 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Show/Setup scan engines (VSE) using SSMC

29 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VFS level Antivirus configuration using SSMC (1 of 3)

30 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VFS level Antivirus configuration using SSMC (2 of 3)

View VFS policies

31 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
VFS level Antivirus configuration using SSMC (3 of 3)

Set VFS
policies

32 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Stores level Antivirus configuration using SSMC (1 of 7)

34 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Stores level Antivirus configuration using SSMC (2 of 7)

View file store policies

35 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Stores level Antivirus configuration using SSMC (3 of 7)

Set file store policies

36 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Stores level Antivirus configuration using SSMC (4 of 7)

Create AV scan

37 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Stores level Antivirus configuration using SSMC (5 of 7)

Manage scan

38 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Stores level Antivirus configuration using SSMC (6 of 7)

Manage quarantined files

39 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Stores level Antivirus configuration using SSMC (7 of 7)

Delete quarantined files

40 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus CLI: Scan Engine Management

Add/Remove/Set Scan Engines
setfsav vse [+|-]<vselist>

List Scan Engines
showfsav

NOTE: Use caution when running setfsav vse without any options, as this will clear out the list of VSEs.

41 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus CLI: AV configuration
Start/stop the AV service manually
(the AV service will not be running on the cluster by default)
startfsav, stopfsav
cli% startfsav svc
cli% stopfsav svc

Display the list of the VSEs configured on the array
cli% showfsav
Vendor IPAddress  PortNum Status
MCAFEE 10.2.2.156 1344    Up
MCAFEE 10.2.2.157 1344    Down

Add/remove a virus scan engine (VSE)
setfsav
cli% setfsav vse +10.2.2.157:9444
cli% setfsav vse -10.2.2.156:9443

42 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus CLI: Policy Management
Set Antivirus Policy
setfsav pol

Show Antivirus Policies


showfsav pol

43 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AV policies using the CLI: Examples (1 of 3)
Configuring AV polices for the Virtual File Server
Enable scanning on the VFS unity.3pardata.com on the File Provisioning Group TESTFPG, setting the vendor type to
SYMANTEC and the AV Unavailable policy to ALLOW, excluding the file extensions HTM and JPG, and excluding files
larger than 10 MB from scanning.
cli% setfsav pol -scan enable -vendor SYMANTEC -fileop open -unavail allow
-excludesize 10 -excludeext htm,jpg unity.3pardata.com

Overriding AV policies for a File Store


Override the VFS antivirus properties fileop and excludesize for the File Store
cli% setfsav pol -fstore engineering -fileop both -excludesize 100 -fpg testfpg
unity.3pardata.com

44 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AV policies using the CLI: Examples (2 of 3)

Inherit the VFS antivirus properties fileop and excludesize for the File Store
cli% setfsav pol -fstore engineering -fileop inherit -excludesize inherit
-fpg testfpg unity.3pardata.com

Inherit all of the VFS antivirus properties for the File Store


cli% setfsav pol -inheritall -fstore engineering -fpg testfpg unity.3pardata.com

Move quarantined files under VFS unityvfs


cli% setfsav quar move unityvfs

45 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
AV policies using the CLI: Examples (3 of 3)
Display AV statistics for the VFS
cli% showfsav quar -vfs unity.3pardata.com

                   ---Scan Engines--- ------------Result-------------
--------VFS------- Configured Active  Scanned Infections Quarantined
unity.3pardata.com          1      1    10567         34          10

The output displays:
Scan statistics such as the configured VSEs for a particular vendor
The number of active VSEs for a particular vendor
Files scanned, files infected, and files quarantined for a particular VFS
The statistics counters for files scanned/files infected/files quarantined can be reset.

46 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus CLI: Scan management (1 of 2)
Start Scan
startfsav scan

Pause/Stop Scan
stopfsav scan [-pause]

Resume Scan
startfsav scan -resume

List Scans
showfsav scan

47 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus CLI: Scan management (2 of 2)

Show the list of all AV scan tasks running on a VFS/File store

cli% showfsav scan -vfs unity.3pardata.com -fstore engineering

                                              -----Inodes----  ----------Averages----------
--------VFS------- --FileStore-- ID State     Scanned Skipped  FileSize(kb) ScanRate(Mbps) --Path--
unity.3pardata.com engineering   24 STARTING     1024       6         10567             34 /foo/bar
unity.3pardata.com engineering   25 RUNNING     10256       0          9577             34 /foo/zib

48 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus CLI: Quarantined file management (1 of 2)

Move Quarantined files
setfsav quar move

Export Quarantined file list
setfsav quar exportlist

Delete Quarantined files
setfsav quar delete

List Quarantined file information
showfsav quar

Reset Quarantine indication on files
setfsav quar reset

49 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus CLI: Quarantined file management (2 of 2)
Export list of quarantined files under unityvfs
cli% setfsav quar exportlist unityvfs

Move quarantined files under VFS unityvfs


cli% setfsav quar move unityvfs

Reset quarantined files under File Store engineering under VFS unityvfs
cli% setfsav quar reset -fstore engineering unityvfs

Delete quarantined files under File Store engineering under VFS unityvfs
cli% setfsav quar delete -fstore engineering unityvfs

50 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Antivirus White Paper

Technical white paper:


Antivirus scanning best practices guide for HP 3PAR File Persona

http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA5-6079ENW&cc=us&lc=en

51 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Quota Management

© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Quota management explained (1 of 2)

Quota management provides better control and planning for data growth:
• Quota management enables storage administrators to control user space
• Quotas help reduce the business cost for backups and archival of data
• Quotas help manage the growth and allocation of system resources
• User quotas reduce the likelihood of undesired data being stored in home directories
• Quotas can be combined with alerts and logged events for maintaining records
• Quotas enable a chargeback environment in large organizations

53 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Quota management explained (2 of 2)
Quotas are a restriction on the maximum disk usage by a user, group, or other business unit. Quotas:
• Allow restriction of file capacity or number of files
• Can be configured per user, or per group within a Virtual File Server (VFS)
• Can be configured per File Store (independent of user/group)
• Allow specification of users/groups from any of the supported authentication providers
• Provide for reporting of current usage, and alerts/events as certain consumption thresholds are
reached
• Can either be hard (enforced immediately once exceeded) or soft, with a grace period during which
continued writes are allowed
• Can be configured in bulk using an export/import mechanism through the .admin file store

54 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Quota management functionality
• User/group quotas are calculated at the virtual file server level and file store quotas are calculated at the
file store level
• The hard limit is the maximum capacity (MB)/number of files allotted to the user, group, or file store
• The soft limit is the MB/number of files that, when reached, triggers a countdown timer (default
is 7 days); see the worked example below
• The quotamonitor daemon runs on every node to do the accounting and the kernel does the actual
enforcement
• If a user has a user quota and a group quota for the same virtual server, the first quota reached
takes precedence
• Nested quotas are not supported
• Quotas are enabled on the file system by default
• Setting a quota to zero removes the quota
• The existing quota configuration can be exported/imported to a file at any time
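
As a worked illustration of these limits (the numbers are hypothetical): with a soft capacity limit of
8192 MB and a hard capacity limit of 10240 MB for a user, writes proceed normally until usage reaches
8192 MB; at that point the grace countdown (7 days by default) starts and writes are still allowed.
Writes are refused as soon as the grace period expires or usage reaches the 10240 MB hard limit,
whichever comes first.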

55 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Quotas in SSMC

Quotas View

56 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Quota management in SSMC

57 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Import/Export quotas in SSMC

58 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Quotas in SSMC

59 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
Create/Edit File Store Quotas in SSMC

60 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Managing quotas using the CLI (1 of 2)
Configure User/Group Quota
setfsquota -username
setfsquota -groupname

Configure File Store Quota
setfsquota -fstore

Configure Quota Limits
setfsquota -scapacity (soft capacity limit)
setfsquota -hcapacity (hard capacity limit)
setfsquota -sfile (soft file limit)
setfsquota -hfile (hard file limit)

Clear a Quota
setfsquota -clear

Restore Quotas
setfsquota -restore

Archive Quotas
setfsquota -archive

Display Quotas
showfsquota
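
As an illustration of combining these options, a user capacity quota might be set and then displayed as
follows. This is a sketch only: the user name, the limit values (in MB), and the trailing VFS argument
are assumptions based on the option list above, not verbatim array syntax.

cli% setfsquota -username alice -scapacity 8192 -hcapacity 10240 unityvfs   (soft limit 8 GB, hard limit 10 GB; arguments assumed)
cli% showfsquota                                                            (display the configured quotas and current usage)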
61 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
File Persona: Managing quotas using the CLI (2 of 2)
Set Grace Time Initially (block grace time/inode grace time)
createvfs -bgrace
createvfs -igrace

Update Grace Time (block grace time/inode grace time)
setvfs -bgrace
setvfs -igrace

Show Grace Time
showvfs -d
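
A minimal sketch of adjusting the grace period on an existing VFS, assuming the setvfs options above;
the VFS name, the value, its unit, and the argument order are assumptions, so consult the CLI help for
the exact syntax:

cli% setvfs -bgrace 14 unityvfs   (update the block grace time; value, unit, and positional VFS argument assumed)
cli% showvfs -d                   (verify the configured grace times)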

63 © Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
© Copyright 2015 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.