
RELEASE NOTES

EMC VMAX3™ Family with HYPERMAX OS 5977


Release Level HYPERMAX OS 5977.691.684

Release Notes
REV 10

August 22, 2016

These release notes contain information about EMC HYPERMAX OS 5977.691.684 (and
previous releases) for the VMAX3 Family. Topics include:
Revision history ........................................................................................................ 2
Audience .................................................................................................................. 2
Product description................................................................................................... 3
Features.................................................................................................................. 10
Enhancements for HYPERMAX OS 5977.691.684..................................................... 44
Enhancements for HYPERMAX OS 5977.596.583..................................................... 44
Enhancements for HYPERMAX OS 5977.498.472..................................................... 46
Fixed problems for HYPERMAX OS 5977.691.684 .................................................... 47
Fixed problems for HYPERMAX OS 5977.596.583 .................................................... 57
Fixed problems for HYPERMAX OS 5977.498.472 .................................................... 67
Related documentation........................................................................................... 68
Troubleshooting and getting help............................................................................ 69
Glossary.................................................................................................................. 70
Revision history
Table 1 presents the revision history of this document.

Table 1 Revision history

Revision Date Description

01 September 26, 2014 First release of EMC HYPERMAX OS 5977.250.189 for the
VMAX3™ Family.

02 December 04, 2014 First release of HYPERMAX OS 5977.497.471.

03 December 23, 2014 First release of HYPERMAX OS 5977.498.472.

04 December 24, 2014 Updated to include additional fix details for HYPERMAX
OS 5977.498.472.

05 March 16, 2015 First release of HYPERMAX OS 5977.596.583.

06 March 17, 2015 Updated to include additional fix details for HYPERMAX
OS 5977.596.583.

07 April 02, 2015 Updated to include additional details for Bronze Service
Level Objective.

08 September 25, 2015 First release of HYPERMAX OS 5977.691.684.

09 October 09, 2015 Updated to clarify SRDF N-X support details.

10 August 22, 2016 Updated to remove references to the VMAX 100K four-engine
configuration.

Audience
This document is primarily intended for EMC customers, but it can also be used by EMC
Customer Service personnel.

Note: The HYPERMAX OS 5977 Release Notes document contains technical information
that requires the reader to have a basic working knowledge of EMC hardware and
software products.

This document was accurate at publication time. New versions of this document might be
released on EMC Online Support https://support.EMC.com. Check to ensure that you are
using the latest version of this document.

2 EMC VMAX3 Family with HYPERMAX OS 5977 Release Notes


Product description

VMAX3 Family
This section provides high-level information on the features and benefits of the
next-generation VMAX3 Family (100K, 200K, and 400K) arrays. More detailed array-
specific information is provided in "Provisioning limits" on page 9.
VMAX3 arrays are purpose-built for the hybrid cloud, providing the world's first and only
Dynamic Virtual Matrix, which allows resources to be allocated in real time to applications
(front-end hosts), information (back-end/storage devices), and rich data services. The
VMAX3 arrays deliver mission-critical storage with the scale, performance, availability,
and agility required to meet the high demands of extreme data growth for the current and
future Hybrid cloud. Ranging from the single or dual-engine VMAX 100K up to the
eight-engine VMAX 400K, the VMAX3 arrays offer dramatic increases in floor tile density
with engines, and high-capacity disk enclosures for both 2.5" and 3.5" drives
consolidated in the same system bay. In addition, VMAX3 arrays support:
Hybrid or all flash configurations
System bay dispersion of up to 82 feet (25 meters) from the first system bay
Optional third-party racking

VMAX3 arrays support the use of native 6 Gb/s SAS 2.5" drives, 3.5" drives, or a mix of
both drive types. Individual system bays can house either one or two engines and up to six
high-density disk array enclosures (DAEs) per engine available in either 3.5" (60 slot) or
2.5" (120 slot) formats. Each system bay can support up to 720 2.5" drives, 360 3.5"
drives, or a mix of both. Dual-engine configurations support a maximum of two DAEs in
either 3.5" (120) or 2.5" (240) formats. VMAX3 arrays come fully pre-configured out of the
factory to significantly shorten the time to first I/O during installation.

Array management
VMAX3 arrays can be managed via:
EMC Unisphere for VMAX V8.1: A Graphical User Interface (GUI) that provides a
common EMC user experience across storage platforms. Unisphere for VMAX enables
you to easily provision, manage, and monitor VMAX environments. Refer to the EMC
Unisphere for VMAX V8.1 Documentation Set at https://support.EMC.com.
EMC Solutions Enabler V8.1: Intended for use by advanced command-line users and
script programmers to manage various types of control operations on VMAX arrays
and devices using the SYMCLI commands of the Solutions Enabler software. Refer to
the EMC Solutions Enabler V8.1 Documentation Set at https://support.EMC.com.
EMC SRDF/TimeFinder Manager for IBM i V8.1: A set of host-based utilities that
provides an IBM i interface to EMC Symmetrix Remote Data Facility (SRDF) and EMC
TimeFinder. The SRDF/TimeFinder interface enables you to configure and control SRDF
or TimeFinder operations on DMX or VMAX Family arrays attached to IBM i hosts. Refer
to the EMC SRDF/TimeFinder Manager for IBM i Product Guide at
https://support.EMC.com.


Benefits

Radical agility
Service Level Objectives and pre-configuration of Fully Automated Storage Tiering
(FAST) technology change the industry paradigm by allowing you to better manage
your VMAX3 array workloads by Service Level Objective: Diamond, Platinum, Gold,
Silver, Bronze, and Optimized (default setting). Service Level Objective Management
provides predictable service level delivery throughout the management lifecycle by
enabling you to plan, provision, monitor and manage your Service Level Objectives.
FAST Array Advisor identifies workloads that are not meeting their Service Level
Objective and issues a report on where the workload could move to, and how to move
it. FAST Array Advisor also allows you to check all arrays for a potential move, with
Unisphere for VMAX highlighting the array most suitable to perform the move.
FAST Hinting allows the VMAX3 array to proactively respond to the forecast
performance needs of databases and applications, ensuring the Service Level
Objective policy is not impacted by a burst of activity.
Improved system agility to instantaneously support per-port spikes in workloads.
A VMAX3 array comes fully pre-configured, providing a significantly reduced
installation time and simplified implementation process; support for thin devices
provides a more intuitive storage system.
VMAX3 unifies file and block data services via Embedded NAS (eNAS), reducing the
capital and operational expense, as well as the rack space consumed by an external
NAS Gateway.
HYPERMAX OS 5977.691.684 provides the option of managing VMAX3 arrays using
the Embedded Management (eManagement) container application. This feature
enables customers to further simplify management, reduce cost, and increase
availability by running VMAX3 management software directly on VMAX3 arrays.
eManagement embeds VMAX3 management software (Solutions Enabler, SMI-S,
Unisphere for VMAX) on the VMAX3 array, enabling customers to manage the array
without software installed on a host. eManagement manages a single VMAX3 array
and its SRDF attachments; customers with multiple VMAX3 arrays who want a single
point of control can use the traditional host-based management interfaces.
Other important enhancements include: 6 Gb/s SAS back end, Dynamic Virtual Matrix
Infiniband (IB) Interconnect to allow engine separation, increased geographical
dispersion, and environmental data collection and notification through various GUI
applications.
FAST.X offers automated tiering beyond and across the data center, extending
enterprise data services to multiple platforms and extending service level objectives
from VMAX3 to other storage devices.
VMAX3 integration with EMC XtremIO flash storage allows customers to benefit from
the data reduction of XtremIO while leveraging VMAX3's reliability and simplicity. The
optional XtremIO X-Brick high performance data reduction tier creates the ultimate
enterprise platform for mixed workloads.


VMAX3 integration with EMC CloudArray cloud-integrated storage provides the ability
to move less active workloads to more cost-efficient cloud storage, resulting in up to
40 percent lower storage costs and highly scalable back-end capacity.

Extreme performance
Superior next-generation hardware consisting of new engines with Ivy Bridge-based
processors, PCIe Gen 3 interconnects, 48 to 96 cores per engine, and a new Full
Data Rate (FDR) Dynamic Virtual Matrix operating at 56 Gb/s. With the addition of
VMAX3 100K support for up to 4 engines, the maximum drive configuration is now:
720 for a single engine
1,440 for 2 engines
2,160 for 3 engines
2,880 for 4 engines

Local RAID places all the RAID members of a device behind the same engine, which
facilitates local access and control over I/O for all RAID members, reduces I/O
overhead, improves performance, and maintains expected levels of high availability.
128 KB front-end track size results in improved performance of large I/O.
VMAX3 arrays can be configured as either hybrid or all flash configurations.
Dispersion of up to 82 ft (25 meters) from the first system bay provides you with
greater flexibility in deploying VMAX3 arrays in your data center.
Third-party rack support integrates the VMAX3 array in a preferred third-party rack to
support standardization of cabinets and is available across the product family.
Dual-engine system bay support provides you with the flexibility to further reduce your
storage array footprint depending on your performance vs. capacity requirements.
Improved reporting for exceeded thresholds provides you with an easy way to
determine the current performance health of your arrays.
Unisphere Performance Viewer supports Signal Transfer Point (STP) messages and can
send the performance dashboards via email (EMC personnel only). Unisphere for
VMAX REST API provides functionality for Service Level Policy provisioning,
performance, and FAST information.
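The per-engine drive scaling listed earlier in this section is linear (720 drives per engine). As an illustration only, and not an EMC sizing tool, the quoted figures can be reproduced with a short arithmetic sketch:

```python
# Illustrative sketch of the linear per-engine drive scaling quoted above.
# Not an EMC sizing tool; it only reproduces the 720-drives-per-engine figures.

DRIVES_PER_ENGINE = 720  # maximum 2.5" drives per engine, per the text

def max_drives(engines: int) -> int:
    """Maximum drive count for a configuration with the given engine count."""
    if not 1 <= engines <= 4:
        raise ValueError("the quoted figures cover 1 to 4 engines")
    return DRIVES_PER_ENGINE * engines

print([max_drives(n) for n in (1, 2, 3, 4)])  # [720, 1440, 2160, 2880]
```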

Trusted architecture
VMAX3 arrays provide Data at Rest Encryption (D@RE) ensuring controller-based
encryption (CBE) for maximum protection. D@RE protects against unauthorized data
access when drives are lost, complies with Federal and Industry requirements, and
eliminates drive shredding.
No single point of failure; all VMAX3 array components are fully redundant to
withstand any component failure; always-on availability architecture with advanced
fault isolation and robust data integrity checking.
An all flash cache data vault that is capable of surviving two VMAX3 array component
failures.


SRDF enhancements incorporate performance improvements for synchronous and
asynchronous replication, and improved parallelism in writes to SRDF-protected LUNs.
VMAX3 Family arrays support SRDF/Star and Cascaded SRDF for VMAX 10K, 20K, and
40K to VMAX3 (N-X support), and VMAX3 to VMAX3.
SRDF/Metro with Witness support provides concurrent access to VMAX3 SRDF-replicated
storage over synchronous distances to a host or clustered hosts, enabling higher
availability for applications and data at metro distances.
TimeFinder is a local replication solution designed to non-disruptively create
point-in-time copies (snapshots) of critical data. The underlying technology for
TimeFinder is SnapVX. SnapVX creates snapshots by storing changed tracks (deltas)
directly in the Storage Resource Pool of the source device. With SnapVX, you do not
need to specify a target device and source/target pairs when you create a snapshot.
ProtectPoint provides faster, more efficient backups while eliminating the backup
impact on application servers. By integrating VMAX3 with Data Domain storage,
ProtectPoint reduces cost and complexity by eliminating traditional backup
applications while still providing the benefits of native backups. Key benefits include:
achieve faster, more frequent backups to meet stringent SLOs; instantly access
application backups from Data Domain for simple granular recovery; recover faster
from native full backups; virtually eliminate backup impact on application servers;
and eliminate the need for a dedicated backup server. Unisphere for VMAX provides
an enhanced Protection dashboard, where you can view all ProtectPoint-configured
storage groups in one location.
File Auto Recovery (FAR) with SRDF/S is a two-way, synchronous-only replication
solution that provides seamless failover of Virtual Data Movers (VDMs) from one site to
another.
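The SnapVX behavior described above (changed tracks preserved as deltas in the source device's Storage Resource Pool, with no target device or source/target pairing) can be modeled conceptually. The following is a toy copy-on-first-write sketch, not EMC code; the class and method names are hypothetical:

```python
# Toy model of delta-based snapshots in the spirit of SnapVX: a snapshot
# preserves the pre-update contents of changed tracks in a shared delta set,
# so no target device or source/target pairing is needed. Not EMC code.

class Device:
    def __init__(self, tracks):
        self.tracks = dict(tracks)     # track number -> current data
        self.snapshots = {}            # snapshot name -> {track: original data}

    def snapshot(self, name):
        self.snapshots[name] = {}      # empty: nothing has changed yet

    def write(self, track, data):
        # Before overwriting, save the original track into every snapshot
        # that has not yet preserved it (copy-on-first-write).
        for deltas in self.snapshots.values():
            deltas.setdefault(track, self.tracks.get(track))
        self.tracks[track] = data

    def read_snapshot(self, name, track):
        deltas = self.snapshots[name]
        # A track appears in the delta set only if it changed after the snap;
        # otherwise the point-in-time view reads straight from the source.
        return deltas[track] if track in deltas else self.tracks.get(track)

dev = Device({0: "a", 1: "b"})
dev.snapshot("snap1")
dev.write(0, "A")
```

Note how the snapshot consumes space only for tracks that change after it is taken, which is why no target device needs to be specified up front.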

Scale and density


Support for 64K front-end devices allows you to scale your applications and simplify
management of environments that have a large number of devices.
VMAX3 array hardware provides differentiated functionality and the ability to scale
mission-critical applications and support new cloud infrastructures.
The HYPERMAX OS Data Services emulation1 provides the ability to scale front-end
and back-end resources independently.
Continued improvements in software management ensure that you can easily manage
rapidly scaling requirements and data centers.
Up to 32 front-end ports per engine and 256 ports per array (a two-fold increase)
enable you to further scale out host connectivity (FC, iSCSI, or FCoE) without adding
more engines.
Consolidation of CPU resources and ports used for common functions provide efficient
resource utilization.

1. May also be referred to as EDS or ED.


HYPERMAX OS
VMAX3 arrays introduce the industry's first open storage and hypervisor converged
operating system, HYPERMAX OS.
HYPERMAX OS combines industry-leading high availability, I/O management, quality of
service, data integrity validation, storage tiering, and data security with an open
application platform. It features the first real-time, non-disruptive storage hypervisor that
manages and protects embedded services by extending VMAX high availability to services
that traditionally would have run external to the array. It also provides direct access to
hardware resources to maximize performance and can be non-disruptively upgraded.
HYPERMAX OS runs on top of the Dynamic Virtual Matrix leveraging its scale-out flexibility
of cores, cache, and host interfaces. The VMAX3 hypervisor reduces external hardware
and networking requirements, delivers higher levels of availability, and dramatically
lowers latency.
HYPERMAX OS provides the following services:
Manages system resources to intelligently optimize performance across a wide range
of I/O requirements.
Ensures system availability through advanced fault monitoring, detection, and
correction capabilities. HYPERMAX OS also provides concurrent maintenance and
serviceability features.
Interrupts and prioritizes tasks from microprocessors. For example, HYPERMAX OS
ensures that fencing off failed areas takes precedence over other operations.
Offers the foundation for specific software features available through EMC's disaster
recovery, business continuity, and storage management software.
Provides functional services for both the host VMAX3 array and for a large suite of EMC
storage application software.
Defines the priority of each task, including basic system maintenance, I/O processing,
and application processing.


HYPERMAX OS emulations
HYPERMAX OS provides a range of emulations that operate between the array, host,
management, and back-end functions. Table 2 provides a summary of the HYPERMAX OS
emulations that are available in VMAX3 arrays.
Table 2 HYPERMAX OS emulations

Back end
  DS: A back-end connection in the array that communicates with the disk drives; DS is
  also known as an internal disk controller. Protocol: SAS 6 Gb/s.
  DX: Back-end connections that are not used to connect to hosts. ProtectPoint leverages
  FAST.X to link Data Domain to the VMAX3 array; FAST.X extends storage tiering and
  service level management to other storage platforms (including XtremIO, CloudArray,
  and third-party arrays). On the VMAX3 array, DX ports must be configured for the FC
  protocol. Protocol: FC 8 or 16 Gb/s (1).

Management
  IM: Enables the separation of infrastructure tasks and emulations. By separating these
  tasks, emulations can focus on I/O-specific work only, while IM manages and executes
  common infrastructure tasks, such as environmental monitoring, Field Replacement
  Unit (FRU) monitoring, and vaulting. Protocol: N/A.
  ED: A new middle layer used to separate front-end and back-end communications. It
  acts as a translation layer between the front end, which is what the host knows about,
  and the back end, which is the layer that reads, writes, and communicates with
  physical storage in the VMAX3 array. Protocol: N/A.

Host / Open Systems
  FA (Fibre Channel), SE (iSCSI), FE (FCoE): Front-end emulations that receive data from
  the host (network) and commit it to the array, and retrieve data from the array for the
  host/network. Protocols: FA: FC 8 or 16 Gb/s (1); SE and FE: 10 Gb/s.

Remote copy
  RF (Fibre Channel): Interconnects arrays for Symmetrix Remote Data Facility (SRDF).
  Protocol: RF: 8 Gb/s FC SRDF.
  RE (GbE): Interconnects arrays for SRDF over Ethernet. Protocols: RE: 1 GbE SRDF;
  RE: 10 GbE SRDF.

1. The 8 Gb/s module auto-negotiates to 2/4/8 Gb/s and the 16 Gb/s module
auto-negotiates to 4/8/16 Gb/s using optical SFPs and OM2/OM3/OM4 cabling.


Provisioning limits

Table 3 Provisioning limits for arrays running HYPERMAX OS 5977 and later

Devices
  Maximum 16K devices per director
  Maximum 4K devices per storage group

Initiator group
  Maximum 16K initiator groups per array
  Maximum 64 initiator addresses (or 64 child initiator group names) per group
  Note: Using multiple child initiator groups in a cascaded initiator group with
  multiple initiators allows a masking view to exceed the limit of 64 initiators.
  Note: An initiator group is comprised of either World Wide Name (WWN) initiators
  or iSCSI initiators; a mix of initiator types in an initiator group is not supported.

Storage group
  Maximum 16K storage groups per array
  Maximum 64 child storage groups per parent storage group
  Maximum 4K storage groups with host I/O limits defined

Port group
  Maximum 16K port groups per array
  Maximum 32 ports in a port group
  Note: A port group is comprised of either physical ports (Fibre Channel) or virtual
  targets (iSCSI); a mix of port types in a port group is not supported.

Masking view
  Maximum 16K masking views per array

LUN addresses
  Maximum 4K LUN addresses per director port
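As an illustration only, and not an EMC tool, the per-group limits in Table 3 can be captured in a small validation sketch. The function name, group contents, and example counts below are hypothetical:

```python
# Illustrative check of masking-view components against the Table 3 limits.
# This is a sketch, not EMC software; names and example values are hypothetical.

MAX_INITIATORS_PER_GROUP = 64       # or 64 child initiator-group names
MAX_PORTS_PER_PORT_GROUP = 32
MAX_DEVICES_PER_STORAGE_GROUP = 4 * 1024

def validate_masking_view(initiators, ports, devices):
    """Return a list of limit violations for one masking view's components."""
    problems = []
    if len(initiators) > MAX_INITIATORS_PER_GROUP:
        problems.append(
            f"initiator group has {len(initiators)} members (limit "
            f"{MAX_INITIATORS_PER_GROUP}; cascaded child groups can exceed it)")
    if len(ports) > MAX_PORTS_PER_PORT_GROUP:
        problems.append(
            f"port group has {len(ports)} ports (limit {MAX_PORTS_PER_PORT_GROUP})")
    if len(devices) > MAX_DEVICES_PER_STORAGE_GROUP:
        problems.append(
            f"storage group has {len(devices)} devices (limit 4K)")
    return problems

# Hypothetical example: 70 initiators in a single (non-cascaded) group.
issues = validate_masking_view([f"wwn{i}" for i in range(70)], ["port1"], ["dev1"])
```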

Features
This section details the features introduced for the VMAX3 Family. It also contains
information on features that have been superseded or removed with the introduction of
HYPERMAX OS 5977.
Features are categorized per area:
"Platform and infrastructure" on page 11
"HYPERMAX OS data services" on page 17
"Back-end/drive infrastructure" on page 29
"Open systems front end" on page 29
"Management software" on page 30
"Serviceability" on page 32
"ProtectPoint" on page 35
"VMAX3 unified" on page 36
"Security" on page 40
Table 4 provides a high-level list of the key features introduced in HYPERMAX OS
5977.691.684. Refer to the specific area for more information.
Table 4 New features

Platform and infrastructure
  Additional upgrade capabilities
  Improved VMAX3 performance
  Front-end I/O module, FCoE, and iSCSI support
  File I/O module upgrade to a Block I/O module or another File I/O module type

HYPERMAX OS data services
  Non-disruptive software upgrade
  Metadata enhancements
  Ability to create large devices
  Online device expansion
  SRDF enhancements
  SRDF/Metro with Witness support
  FAST.X integration with CloudArray and XtremIO

Management software
  Improved Unisphere performance
  Additional Unisphere reporting capabilities
  Unisphere support for iSCSI, enhanced SRDF, SRDF/Metro, and DSA hinting
  Embedded Management

ProtectPoint
  Ability to terminate a full LUN restore with minimal impact

VMAX3 unified
  eNAS support for all HYPERMAX OS code releases
  Additional eNAS upgrade capabilities
  eNAS File Auto Recovery (FAR) with SRDF/S


Platform and infrastructure
The following features are available for the VMAX3 Family.

Multi-core emulation
Table 5 Multi-core emulation details

Area: Platform and infrastructure
Feature: Multi-core emulation
Description: Multi-core emulation provides additional CPU and physical port utilization
capabilities to HYPERMAX OS emulations, extending the existing code architecture and
improving overall performance. Features include pre-defined core mappings that allow
you to specify performance characteristics based on expected I/O profiles and usage.

Dynamic Virtual Matrix


Table 6 Dynamic Virtual Matrix details

Area: Platform and infrastructure
Feature: Dynamic Virtual Matrix
Description: Dynamic Virtual Matrix provides the Global Memory interface between
directors with more than one enclosure. Dynamic Virtual Matrix is composed of multiple
elements, including Infiniband Host Channel Adaptor (HCA) endpoints, Infiniband
Interconnects (switches), and high-speed passive, active copper, and optical serial
cables to provide a Virtual Matrix interconnect.
A fabric Application Specific Integrated Circuit (ASIC) switch resides within a special
Management Interface Board Enclosure (MIBE), which is responsible for Virtual Matrix
initialization and management.

Table 7 Dynamic Virtual Matrix limitation

Limitation First affected release

Diagnostic loopback is not supported. HYPERMAX OS 5977.250.189

Dynamic Virtual Matrix Interconnect


Table 8 Dynamic Virtual Matrix Interconnect details

Area: Platform and infrastructure
Feature: Dynamic Virtual Matrix Interconnect
Description: The Dynamic Virtual Matrix Interconnect provides a fabric interconnect for
Direct Memory Access (DMA) transfers and communication between directors. The
SwitchOS software on the MIBE supports Dynamic Virtual Matrix Interconnect bring-up,
MIBE chassis environment monitoring, and communication. The VMAX 100K and 200K
arrays support 8 Interconnect ports; the 400K array supports 16 Interconnect ports.


VMAX3 Family engines


Table 9 VMAX3 Family engine details

Area: Platform and infrastructure
Feature: VMAX3 Family engines
Description: The VMAX3 hardware infrastructure for the 100K, 200K, and 400K arrays is
designed around the new engine. A significant difference from previous platforms is
that all processors use Intel 64 architecture.
The engine hardware is based on an Intel x86-64 CPU architecture, using the Intel Ivy
Bridge-EP CPU and an Intel Patsburg Platform Controller Hub (PCH). There are two
directors per engine, and each director board supports 24 (100K), 32 (200K), or 48
(400K) Ivy Bridge cores, and Intel Hyper-Threading technology. The only physical
differences between the engines in a 100K, 200K, and 400K array are the dual inline
memory module (DIMM) populations and the CPUs (in both core frequency and number
of cores).

HYPERMAX OS 5977.691.684 supports:


Non-disruptive online add and upgrade of DAEs
Non-disruptive online add of engines
Non-disruptive online system bay add and upgrade for dual engine system bay
configuration
Drive upgrades of different drive and RAID protection types
Memory replacement and upgrades to existing engines
Flash I/O module additions and upgrades to existing engines
Online I/O module add and conversion of all protocol types, including eNAS and
compression, to existing engines
Non-disruptive online fabric add
Multi-protocol I/O modules (Fibre Channel, FCoE, and iSCSI)
Configuring a slot as Admin-Disabled Slot / Defective_slot in a DAE
File I/O module upgrade to a Block I/O module or another File I/O module type.

Table 10 VMAX3 Family engine serviceability limitations

Drive upgrades must fit within the existing infrastructure and cache resources, and
have the same drive and RAID protection type.
  First affected release: HYPERMAX OS 5977.596.583
  Limitation lifted: HYPERMAX OS 5977.691.684
  Note: It is possible to add new disk groups with different drive and RAID protection
  types via RPQ. Contact your EMC representative for more information.

DAE upgrades are not supported.
  First affected release: HYPERMAX OS 5977.596.583
  Limitation lifted: HYPERMAX OS 5977.691.684

Online Virtual Matrix (fabric) upgrades are not supported.
  First affected release: HYPERMAX OS 5977.250.189
  Limitation lifted: HYPERMAX OS 5977.691.684

Flash I/O module additions or upgrades to existing engines are not supported.
  First affected release: HYPERMAX OS 5977.250.189
  Limitation lifted: HYPERMAX OS 5977.691.684

Online add of 16 Gb Fibre I/O modules to existing engines is not supported.
  First affected release: HYPERMAX OS 5977.250.189
  Limitation lifted: HYPERMAX OS 5977.691.684

Memory upgrades to existing engines are not supported.
  First affected release: HYPERMAX OS 5977.250.189
  Limitation lifted: HYPERMAX OS 5977.691.684

DAE additions into empty DAE slots are not supported.
  First affected release: HYPERMAX OS 5977.250.189
  Limitation lifted: HYPERMAX OS 5977.691.684

Hardware replacement support plan: no script-automated engine replacement; no
script-automated DAE replacement; Engineering will support a manual procedure when
required.
  First affected release: HYPERMAX OS 5977.250.189

Online PDU replacement script is not supported (fly-and-fix workaround).
  First affected release: HYPERMAX OS 5977.250.189

Online firmware upgrade script is not supported; a workaround is available.
  First affected release: HYPERMAX OS 5977.250.189
  Limitation lifted: HYPERMAX OS 5977.596.583

The online add of engines is not supported.
  First affected release: HYPERMAX OS 5977.250.189
  Limitation lifted: HYPERMAX OS 5977.691.684

FCoE and iSCSI protocols are not supported on the same front-end I/O module; a
separate front-end I/O module is needed for each protocol.
  First affected release: HYPERMAX OS 5977.691.684

Dynamic Virtual Matrix data engine driver


Table 11 Dynamic Virtual Matrix data engine driver details

Area: Platform and infrastructure
Feature: Dynamic Virtual Matrix data engine driver
Description: The Dynamic Virtual Matrix data engine driver (DEDD) is a new, single
interface to fabric-attach hardware. This single interface enables the sharing of the
fabric-attach hardware with all CPUs and instances, and enables all reads and writes
to Global Memory to utilize the DEDD interface.

Local RAID
Table 12 Local RAID details

Area: Platform and infrastructure
Feature: Local RAID
Description: Local RAID configures all members of a RAID group so that they are behind
a single engine and controlled by a single back-end initiator.
New benefits are:
Local access and control over I/O for all RAID members.
A reduction in the number of messages and Global Memory operations.
Less I/O overhead and improved RAID performance.
Additional benefits resulting from local RAID are:
Elimination of cross-bay cabling for direct/daisy-chain DAE cabling.
Dispersion at the engine/bay level (position around any obstacles or across an aisle).
Ability to configure new systems with any combination of contiguous or dispersed
engines/bays.


64K front-end devices


Table 13 64K front-end devices details

Area: Platform and infrastructure
Feature: 64K front-end devices
Description: This feature introduces support for up to 64K front-end devices. Device
creation and mapping for thin devices (TDEVs) is supported by Solutions Enabler,
SymmWin, and all required emulations.

Fabric-less systems
Table 14 Fabric-less systems details

Area: Platform and infrastructure
Feature: Fabric-less systems
Description: Fabric-less systems provide support for single-enclosure VMAX3 arrays
without fabric hardware. Fabric-less VMAX3 arrays have unique scalability
characteristics, greater memory capability, and speed.

Infrastructure Manager emulation


Table 15 Infrastructure Manager emulation details

Area: Platform and infrastructure
Feature: IM emulation
Description: IM emulation enables the separation of infrastructure tasks and
emulations. This separation enables emulations to focus on I/O-specific work only,
while IM manages and executes common infrastructure tasks such as environmental
monitoring, FRU monitoring, and vaulting. IM emulation runs on each physical director
board of the array and has its own CPU resources.

Internal networking enhancements


Table 16 Internal networking enhancements details

Area: Platform and infrastructure
Feature: Internal networking enhancements
Description: Internal networking enhancements provide the VMAX3 array with
significantly improved network resources, better resource management, and the
following advanced features:
The VMAX3 Hypervisor allows guests to run on VMAX3 arrays in an isolated
environment while providing controlled access to VMAX3 hardware resources.
A virtual guest network that interfaces with assigned IP addresses from the guest
subnet, and the ability for guests to communicate with each other without routing
services.
Two separate physical LAN segments, each connected with Ethernet LAN switches.
Baseboard Management Controllers (BMCs) that provide an Intelligent Platform
Management Interface (IPMI) over Ethernet for baseboard management.
Support for a virtual network interface (VNI) layer that allows applications to
virtualize Ethernet hardware on director boards.
Advanced MAC filtering infrastructure.


Environmental support
Table 17 Environmental support details

Area: Platform and infrastructure
Feature: Environmental support
Description: Environmental support improves the environmental monitoring
infrastructure. It is based on the IM emulation, and the monitoring service collects
system-wide environmental information from hardware elements and emulations. The
following advanced functions are provided:
Monitors every power subsystem in the array.
Monitors cooling and temperature information in the system.
Collects and reports system FRU Vital Product Data (VPD).
Reports errors for faulty power and cooling components.
Sends notifications of power and cooling-related events for power vaulting.
Monitors vault device counts for power vaulting.
Provides APIs used for power vault-related functionality.
Provides support for system calls for environmental GUI and FRU replacement scripts.
Relays overall environmental health information to every system emulation.

Vault to flash
Table 18 Vault to flash details

Area: Platform and infrastructure
Feature: Vault to flash
Description: Vaulting is the process of saving Global Memory data to a reserved space
during an offline event. Vault to flash provides vaulting of Global Memory data to an
internal flash I/O module. The feature provides the following advantages:
Provides better performance by allowing larger Global Memory per director that can be
saved within 5 minutes.
The system is lighter and requires fewer batteries.
The system is easier to configure, as there is no longer a requirement to reserve
capacity on back-end drives for vault space.

Software support for DAE intermixing


Table 19 Software support for DAE intermixing details

Area: Platform and infrastructure
Feature: Software support for DAE intermixing
Description: Software support for DAE intermixing provides the configuration rules for intermixing VMAX DAE60s and DAE120s, supporting the provisioning of all RAID members within a single engine.


VMAX3 performance improvements


Table 20 VMAX3 performance improvement details

Area: Platform and infrastructure
Feature: VMAX3 performance improvements
Description: VMAX3 performance improvements include continuous enhancements across a variety of workloads, including the addition of the FlashBoost feature, which reduces read-miss latency and boosts maximum read IOPS and throughput in the array.

Table 21 VMAX3 performance improvement limitation

Limitation First affected release

FlashBoost does not support the disabling of asynchronous cache. HYPERMAX OS 5977.691.684


HYPERMAX OS data services
The following features are available for the VMAX3 Family.

HYPERMAX OS data services emulation


Table 22 HYPERMAX OS data services details

Area: HYPERMAX OS data services
Feature: HYPERMAX OS data services emulation
Description: HYPERMAX OS data services emulation consolidates various functionalities to ensure a simplified and more scalable process for adding applications, while also providing a common host for data services that are not directly in the data path. Some features and benefits are:
Simplifies the data path code, making it more efficient.
Facilitates system scaling of processing power without requiring the addition of front-end ports or drives.
Provides I/O scalability with the independent addition of cores.
With HYPERMAX OS 5977.691.684, the emulation also:
Improves metadata efficiency by reducing the on-flash metadata.
Supports online device expansion.
Improves response time by adding FlashBoost capabilities.
Provides the ability to create large devices; the creation of TDEVs up to 64 TB is now supported.
Reduces device creation time.

Next-generation FAST engine


Table 23 Next-generation FAST engine details

Area: HYPERMAX OS data services
Feature: Next-generation FAST engine
Description: The next-generation FAST engine:
Automatically re-balances pools when new capacity is added.
Enables you to set performance levels by Service Level Policy.
Actively manages and delivers the specified performance levels.
Provides high-availability capacity to the FAST process.
Delivers defined storage services based on a mixed drive configuration.


VMAX3 Hypervisor
Table 24 VMAX3 Hypervisor details

Area: HYPERMAX OS data services
Feature: VMAX3 Hypervisor
Description: The VMAX3 Hypervisor allows non-VMAX3 operating system environments, for example Linux, to run as virtual machines (VMs) within a VMAX3 array. It provides a private tools guest that supports:
Rapid and efficient deployment of data services.
An embedded application foundation that comes pre-installed to improve time to value, uses VMAX3 High Availability (HA), and reduces future physical footprint.
A traceability tool.
Solutions Enabler.

Table 25 VMAX3 Hypervisor limitation

Limitation First affected release

Access restricted to EMC personnel. HYPERMAX OS 5977.250.189

SRDF modes of operation


Table 26 SRDF modes of operation details

Area: HYPERMAX OS data services
Feature: SRDF modes of operation
Description: HYPERMAX OS 5977 provides the following SRDF modes of operation:
SRDF/Metro
SRDF/Synchronous (SRDF/S)
SRDF/Asynchronous (SRDF/A)
SRDF Adaptive Copy Disk


SRDF topologies
Table 27 SRDF topology details

Area: HYPERMAX OS data services
Feature: SRDF topologies
Description: HYPERMAX OS 5977 provides the following SRDF topologies:
Concurrent SRDF
Cascaded SRDF
SRDF/Star
SRDF/Metro

Concurrent SRDF provides:
Three-site disaster recovery and advanced multi-site business continuity protection.
Concurrent replication of data on the primary site to two secondary sites.
Replication to each remote site using SRDF/S, SRDF/A, or adaptive copy.

Cascaded SRDF provides:
A three-way data mirroring and recovery solution with enhanced replication capabilities, greater interoperability, and multiple ease-of-use improvements.
A combination of SRDF/S and SRDF/A.

SRDF/Star provides:
A data protection and failure recovery solution that covers three geographically dispersed data centers in a triangular topology.
Two modes of operation:
  Concurrent SRDF/Star
  Cascaded SRDF/Star

For SRDF N-X support, see "SRDF N-X topology support matrix" on page 22.

HYPERMAX OS 5977.691.684 supports SRDF Consistency with xCopy/ODX.

SRDF/Metro is supported on VMAX3 arrays running HYPERMAX OS 5977.691.684. This feature provides the following advantages:
A high availability solution at Metro distances, leveraging and extending SRDF/S functionality.
Active-Active replication capabilities on both the source and target sites.
Witness support to enable full high availability, resiliency, and seamless failover.


SRDF/A improvements
Table 28 SRDF/A improvements details

Area: HYPERMAX OS data services
Feature: SRDF/A improvements
Description: SRDF/A has the following improvements:
Support for multiple SRDF/A cycles on the R1 (source) device. Previously, SRDF/A used a single cycle when buffering data. Now multiple smaller cycles provide incremental updates to the R2 (target) device, improving the Recovery Point Objective (RPO).
Improved performance and a simplified Delta Set Extension (DSE, or spillover) mode that can eliminate the requirement for DSE on the R2 device and packs the data to be spilled more efficiently. DSE space is allocated from a Storage Resource Pool (SRP), and DSE is enabled by default.
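The cycle-based buffering described above can be sketched with a toy model: with roughly two cycles in flight at any time (the capture cycle plus the transmit cycle), the worst-case lag of the R2 image scales with the cycle interval, which is why multiple smaller cycles improve the RPO. This is an illustrative approximation, not EMC code; the function name and the two-cycles-in-flight assumption are ours.

```python
# Toy model: worst-case RPO for cycle-based asynchronous replication.
# The R2 image can lag the R1 by roughly the number of cycles in flight
# multiplied by the cycle interval, so shrinking the interval shrinks
# the worst-case Recovery Point Objective.

def worst_case_rpo(cycle_s: float, cycles_in_flight: int = 2) -> float:
    """Approximate worst-case data lag (seconds) behind the R1."""
    return cycle_s * cycles_in_flight

# A single large 30-second cycle vs. smaller 15-second cycles:
legacy = worst_case_rpo(30.0)   # single-cycle style buffering
multi = worst_case_rpo(15.0)    # smaller incremental cycles
assert multi < legacy           # smaller cycles give a tighter RPO
```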

Table 29 SRDF/A improvements limitation

Limitation First affected release

It is not possible for a host read from the R1 device to receive its data from the remote SRDF/A R2 device when DSE has any deltas saved to disk and there is a dual-drive failure (R1 device local mirrors are unavailable). HYPERMAX OS 5977.497.471

SRDF/S performance
Table 30 SRDF/S response time details

Area: HYPERMAX OS data services
Feature: SRDF/S performance
Description: SRDF/S performance has been improved by reducing latency and increasing IOPS.

SRDF/Metro
Table 31 SRDF/Metro

Area: HYPERMAX OS data services
Feature: SRDF/Metro
Description: SRDF/Metro provides high availability with Instant Site Failure Recovery and supports Active-Active replication at Metro distances between VMAX3 arrays running HYPERMAX OS 5977.691.684. For more information, see the EMC VMAX3 SRDF/Metro Overview and Best Practices Technical Notes.

Table 32 SRDF/Metro limitations

All of the following SRDF/Metro limitations are first affected in HYPERMAX OS 5977.691.684:
SRDF/Metro configurations do not support Concurrent or Cascaded SRDF devices.
Both the source (R1) and target (R2) arrays must be running HYPERMAX OS 5977.691.684.
Existing SRDF device pairs that participate in another SRDF mode of operation cannot be part of an SRDF/Metro configuration; SRDF device pairs that participate in an SRDF/Metro configuration cannot participate in any other SRDF mode of operation.
The R2 cannot be larger than the R1.
Devices cannot have Geometry Compatibility Mode (GCM) set.
Devices cannot have User Geometry set.
Open Replicator is not supported.
Devices cannot be Business Continuance Volumes (BCVs).
Devices cannot be used as the target (R2) devices when the SRDF devices are RW on the SRDF link with a SyncInProg or Active-Active SRDF pair state.
SRDF Consistency with xCopy/ODX is not supported.
Online device expansion is not supported.
FAST.X is not supported.
vSphere API for Array Integration (VAAI) commands are not supported. Atomic Test and Set (ATS) commands are supported; see the EMC Support Matrix (ESM) for more information.
SCSI-2 and SCSI-3 clusters are not supported; see the EMC Support Matrix (ESM) for more information.
The only valid SRDF mode is Active; this mode cannot be changed once the device pairs are in an SRDF/Metro configuration.
The SRDF Consistency state cannot be changed once it is enabled for all SRDF devices.
Controlling devices in an SRDF group that contains a mixture of source (R1) and target (R2) devices is not supported.
The SRDF pair bias device in an SRDF/Metro configuration can only be changed when the SRDF pair state is Active-Active.
Consistency Group (CG) SRDF control and set operations are allowed on one SRDF group at a time.
The symrecover command is not available with SRDF/Metro.
SRDF/Metro does not support FCoE or iSCSI front-end capabilities.


SRDF N-X support


Table 33 SRDF N-X support details

Product type: HYPERMAX OS data services
Feature: SRDF N-X
Description: SRDF N-X support enables replication to/from existing VMAX arrays running Enginuity 5876 to/from VMAX3 arrays running HYPERMAX OS 5977. See the SRDF N-X topology support matrix table below for more information.
For more details on SRDF interfamily connectivity and limitations, see:
The EMC VMAX3 Family with HYPERMAX OS Product Guide within the EMC VMAX3 Family Documentation Set at https://support.EMC.com.
The EMC SolVe desktop tool at https://support.EMC.com.

SRDF N-X topology support matrix


Table 34 SRDF N-X topology support matrix

Concurrent SRDF, Cascaded SRDF, and SRDF/Star: HYPERMAX OS 5977.691.684 with Enginuity 5876.286.194 plus the required N-X fixes. Please contact your EMC representative for details on requesting SRDF N-X topology support.
SRDF/Metro: HYPERMAX OS 5977.691.684 with Enginuity 5876.286.194 plus the required N-X and SRDF/Metro Witness fixes. Please contact your EMC representative for details on requesting SRDF N-X topology support.
Concurrent SRDF, Cascaded SRDF, and SRDF/Star: HYPERMAX OS 5977.596.583 with Enginuity 5876.272.177 plus the required N-X fixes. Please contact your EMC representative for details on requesting SRDF N-X topology support.
SRDF/Metro: Not available in HYPERMAX OS 5977.596.583.


Table 35 SRDF N-X support limitations

Cascaded SRDF topologies are not supported with VMAX arrays running HYPERMAX OS 5977. (First affected: HYPERMAX OS 5977.497.471; lifted in HYPERMAX OS 5977.596.583)
SRDF/Star is not supported with arrays running HYPERMAX OS 5977. (First affected: HYPERMAX OS 5977.497.471; lifted in HYPERMAX OS 5977.596.583)
A VMAX array running Enginuity 5876 whose R1 has RecoverPoint co-existence is not supported when connected to the R2 of a VMAX3 array running HYPERMAX OS 5977. (First affected: HYPERMAX OS 5977.497.471)
There is no support for EMC FAST coordination propagation between a VMAX3 array running HYPERMAX OS 5977 and a VMAX array running Enginuity 5876. (First affected: HYPERMAX OS 5977.497.471)
VMAX N-X connectivity for HYPERMAX OS 5977 to Enginuity 5876 is supported for 2-site and concurrent SRDF configurations. The Enginuity 5876 code level that connects to the 5977 VMAX array must be at a minimum of 5876.272.177 with the required N-X fixes. (First affected: HYPERMAX OS 5977.497.471; lifted in HYPERMAX OS 5977.596.583)
N-X connectivity for HYPERMAX OS 5977.691.684 to Enginuity 5876 is supported for 2-site and concurrent SRDF configurations. The Enginuity 5876 code level that connects to the 5977 VMAX3 array must be at a minimum of 5876.286.194 with the required N-X fixes. (First affected: HYPERMAX OS 5977.691.684)
SRDF control operations on compressed Virtual Provisioned devices on a VMAX array running Enginuity 5876 may cause data loss. This limitation is lifted in HYPERMAX OS 5977.596.583, and the Enginuity 5876 code level must contain the required fixes; please contact your EMC representative for details. (First affected: HYPERMAX OS 5977.497.471; lifted in HYPERMAX OS 5977.596.583)
An update operation on the source (R1) side after a failover, where the target (R2) side is still operational to the hosts, is not supported when the source (R1) side is a VMAX 10K, 20K, or 40K running Enginuity 5876 code levels. (First affected: HYPERMAX OS 5977.497.471)
N-X connectivity for Cascaded SRDF is supported for HYPERMAX OS 5977.596.583 to Enginuity 5876. The Enginuity 5876 code level that connects to the 5977 VMAX3 array must be at a minimum of 5876.272.177 with the required N-X fixes. (First affected: HYPERMAX OS 5977.596.583)
N-X connectivity for Cascaded SRDF is supported for HYPERMAX OS 5977.691.684 to Enginuity 5876. The Enginuity 5876 code level that connects to the 5977 VMAX3 array must be at a minimum of 5876.286.194 with the required N-X fixes. (First affected: HYPERMAX OS 5977.691.684)
N-X connectivity for SRDF/Star is supported for HYPERMAX OS 5977.596.583 to Enginuity 5876. The Enginuity 5876 code level that connects to the 5977 VMAX3 array must be at a minimum of 5876.272.177 with the required N-X fixes. (First affected: HYPERMAX OS 5977.596.583)
N-X connectivity for SRDF/Star is supported for HYPERMAX OS 5977.691.684 to Enginuity 5876. The Enginuity 5876 code level that connects to the 5977 VMAX3 array must be at a minimum of 5876.286.194 with the required N-X fixes. (First affected: HYPERMAX OS 5977.691.684)


Data compression
Table 36 Data compression details

Area: HYPERMAX OS data services
Feature: SRDF data compression
Description: Compression minimizes the amount of data transmitted over an SRDF link. Both software and hardware compression can be active simultaneously for SRDF traffic over GigE and Fibre Channel: data is first compressed by software and then further compressed by hardware. Hardware compression is an SRDF group-level attribute and is pre-configured. Software compression can be enabled or disabled using SRDF software. Software and hardware compression can be enabled on both the R1 and R2 sides, but the actual compression happens on the side initiating the I/O (typically the R1 side).
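The two-stage pipeline described above (software compression first, then hardware compression, applied on the initiating side) can be sketched as follows. This is an illustration only: zlib stands in for both stages, since the actual algorithms used by HYPERMAX OS are not public, and the function names are ours.

```python
# Illustration of a two-stage compression pipeline for replication traffic.
# zlib is a stand-in for both the software and hardware stages.
import zlib

def software_compress(data: bytes) -> bytes:
    return zlib.compress(data, level=6)   # stand-in for the software stage

def hardware_compress(data: bytes) -> bytes:
    return zlib.compress(data, level=1)   # stand-in for the hardware stage

def srdf_link_payload(track: bytes) -> bytes:
    # Data is first compressed by software, then further compressed by
    # hardware, before it crosses the SRDF link.
    return hardware_compress(software_compress(track))

track = b"host write data " * 512          # highly redundant sample track
payload = srdf_link_payload(track)
assert len(payload) < len(track)           # less data crosses the link
```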

Table 37 Data compression limitations

Write Acceleration (WA) and/or FAST Write on Fibre Channel SRDF links require that software and hardware compression be disabled. (First affected: HYPERMAX OS 5977.497.471)
The online removal of an SRDF compression front-end I/O module is not supported. (First affected: HYPERMAX OS 5977.497.471; lifted in HYPERMAX OS 5977.596.583)
The online addition of an SRDF compression front-end I/O module is supported, with a limitation, if a copper port in a GigE SRDF 1 Gb/s front-end I/O module is configured: after a SymmWin script failure, the copper ports must be reset manually in order to activate hardware compression. (First affected: HYPERMAX OS 5977.596.583)
Adding a GigE SRDF front-end I/O module to a director with no ports mapped can lead to the director no longer detecting the available SRDF compression front-end I/O module. (First affected: HYPERMAX OS 5977.497.471; lifted in HYPERMAX OS 5977.596.583)

Other SRDF features


The following SRDF features are not available or have been deprecated.
Table 38 SRDF features not available

Adaptive Copy Write Pending is deprecated (first affected: HYPERMAX OS 5977.497.471). Note: Adaptive Copy Disk mode is still supported.
16 Gb/s front-end I/O modules are not available for SRDF functionality (first affected: HYPERMAX OS 5977.497.471).
SNMP with GigE SRDF is not available (first affected: HYPERMAX OS 5977.497.471).
IPSec is not available for SRDF functionality over GigE (first affected: HYPERMAX OS 5977.497.471).
SRDF with Embedded NAS was not supported (first affected: HYPERMAX OS 5977.497.471); this limitation is lifted in HYPERMAX OS 5977.691.684, with support available from Q3 2015.


TimeFinder SnapVX
Table 39 SnapVX details

Area: HYPERMAX OS data services
Feature: SnapVX
Description: TimeFinder SnapVX is a local replication solution designed to non-disruptively create point-in-time copies (snapshots) of critical data. SnapVX creates snapshots by storing changed tracks (deltas) directly in the SRP of the source device. With SnapVX, you do not need to specify a target device and source/target pairs when you create a snapshot. If there is ever a need for the application to use the point-in-time data, you can create links from the snapshot to one or more target devices. If there are multiple snapshots and the application needs to find a particular point-in-time copy for host access, you can link and relink until the correct snapshot is located.

The following features are provided:
Supports up to 256 snapshots per source device, which are tracked as versions with less overhead and simple relationship tracking.
Up to 1,024 target volumes can be linked per source device, providing read/write access as pointer (snap) or full (clone) copies.
Supports target-less snapshots.
Increases scalability and improves performance.
Provides quick create and terminate times.
Removes the dependency on cache when scaling.
Maintains CLI and functional backwards compatibility for TimeFinder/Mirror, TimeFinder/Clone, and TimeFinder/VP Snap.
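The workflow above (targetless snapshots per source device, then link/relink to a target when host access is needed) can be sketched with a toy model. This is not EMC code and the class and method names are illustrative, not Solutions Enabler APIs; it only models the version-tracking idea.

```python
# Toy model of the SnapVX workflow: targetless snapshots tracked as
# versions of a source device, with link/relink for host access.

class SnapVXDevice:
    MAX_SNAPSHOTS = 256                     # snapshots per source device

    def __init__(self):
        self.snapshots = {}                 # (name, generation) -> image
        self.generation = 0
        self.data = {}                      # track -> contents

    def write(self, track, contents):
        self.data[track] = contents

    def establish(self, name):
        # Targetless: no target device or source/target pair is required.
        if len(self.snapshots) >= self.MAX_SNAPSHOTS:
            raise RuntimeError("snapshot limit reached")
        self.generation += 1
        self.snapshots[(name, self.generation)] = dict(self.data)
        return self.generation

    def link(self, name, generation):
        # Linking presents a point-in-time image for host access;
        # a relink is simply a link to a different snapshot generation.
        return dict(self.snapshots[(name, generation)])

src = SnapVXDevice()
src.write(0, "v1")
gen1 = src.establish("daily")
src.write(0, "v2")
gen2 = src.establish("daily")
assert src.link("daily", gen1)[0] == "v1"   # relink back to the older copy
assert src.link("daily", gen2)[0] == "v2"
```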

Open Replicator
Table 40 Open Replicator details

Product type: HYPERMAX OS data services
Feature: Open Replicator (ORS)
Description: Open Replicator (ORS) allows you to perform data migration between two VMAX arrays, with the destination array being a VMAX3. ORS provides support for:
Hot pull
Cold pull
Front-end zero detection
Donor update

Note: For information on third-party array support for ORS, see the E-Lab Interoperability Navigator at http://elabnavigator.emc.com.

Table 41 Open Replicator limitations

Limitation First affected release

ORS pace is not supported. HYPERMAX OS 5977.497.471
ORS multi-target is not supported. HYPERMAX OS 5977.497.471
ORS push is not supported. HYPERMAX OS 5977.497.471
Interoperability with local replication and SRDF is not supported. HYPERMAX OS 5977.497.471
A maximum of 512 concurrent sessions is supported. HYPERMAX OS 5977.497.471
ORS I/O may not be distributed evenly across ports. HYPERMAX OS 5977.497.471
ORS I/O is not distributed evenly across participating Fibre Channel directors. HYPERMAX OS 5977.497.471
The symsan command may return incomplete results on all-flash arrays. HYPERMAX OS 5977.497.471
ORS may not protect against torn pages when doing a hot pull with donor update. HYPERMAX OS 5977.497.471
A symrcopy pull from a larger device to a smaller device, with or without the -force_copy option, is not supported in HYPERMAX OS 5977. HYPERMAX OS 5977.497.471
ORS may experience performance degradation on devices greater than 16 TB. HYPERMAX OS 5977.497.471
ORS with donor update may report a B514.06 error, which may be ignored if the session completes. HYPERMAX OS 5977.691.684
A single discovery of 512 concurrent sessions may result in a discovery error; separate the sessions into two or more files of fewer than 256 concurrent sessions per discovery. HYPERMAX OS 5977.691.684

Online addition of DX emulation


Table 42 Online addition of DX emulation

Product type: HYPERMAX OS data services
Feature: Online addition of DX emulation
Description: Online addition of DX emulation to a director, and online addition of DX ports to a DX emulation, are supported.

Table 43 Online addition of DX emulation limitation

Limitation First affected release

The online addition of DX emulation is not supported in Solutions Enabler or Unisphere for VMAX. HYPERMAX OS 5977.596.583

Unisys OS2200 host support


Table 44 Unisys OS2200 host support

Product type: HYPERMAX OS data services
Feature: Unisys OS2200 host support
Description: Unisys OS2200 hosts and Multi Host File Sharing (MHFS) technology are supported.


FAST.X
Table 45 FAST.X

Product type: HYPERMAX OS data services
Feature: FAST.X
Description: FAST.X introduces the seamless integration of VMAX storage and heterogeneous arrays. It extends storage tiering and service level management to other storage platforms (including XtremIO and third-party arrays) and to the cloud (CloudArray), enabling the use of LUNs on external storage as raw capacity. Data services such as SRDF, TimeFinder, and Open Replicator may be supported on the external device.

Table 46 FAST.X limitation

Limitation First affected release

Data services are not supported with CloudArray. HYPERMAX OS 5977.691.684

Non-disruptive software upgrade


Table 47 Non-disruptive software upgrade

Product type: HYPERMAX OS data services
Feature: Non-disruptive software upgrade
Description: This feature enables the customer to non-disruptively upgrade from HYPERMAX OS 5977.596.583 to HYPERMAX OS 5977.691.684, and from the HYPERMAX OS 5977.596.583 FAST.X ePack to HYPERMAX OS 5977.691.684. Customers running HYPERMAX OS 5977.498.472 must upgrade to HYPERMAX OS 5977.596.583 prior to upgrading to HYPERMAX OS 5977.691.684.

Non-disruptive software downgrade


Table 48 Non-disruptive software downgrade

Product type: HYPERMAX OS data services
Feature: Non-disruptive software downgrade
Description: This feature enables the customer to non-disruptively downgrade from HYPERMAX OS 5977.691.684 to HYPERMAX OS 5977.596.583. Downgrade limitations apply; please contact your EMC representative for details.

Metadata enhancements
Table 49 Metadata enhancements

Product type: HYPERMAX OS data services
Feature: Metadata enhancements
Description: This feature improves the efficiency of metadata in the VMAX3 by reducing on-flash metadata, allowing customers to run larger configurations for the same amount of flash.

Table 50 Metadata enhancements limitation

Limitation First affected release

This feature is available for new installs with HYPERMAX OS 5977.691.684 and later only. HYPERMAX OS 5977.691.684


Online device expansion


Table 51 Online device expansion

Product type: HYPERMAX OS data services
Feature: Online device expansion
Description: This feature allows the online expansion of TDEVs, to any supported size (up to 64 TB), while the application is running and the devices are visible to hosts.

Table 52 Online device expansion limitation

Online device expansion:
Only applies to TDEVs with sufficient SRP capacity.
Does not apply to the following external devices: iSeries devices; Celerra/NAS devices; ProtectPoint external backup and restore devices.
Provides the ability to increase the device size only.
Is not available if the devices are being replicated via SRDF, TimeFinder, ProtectPoint, or ORS.
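The rules above can be distilled into a hypothetical pre-check. The function and parameter names below are illustrative, not part of any EMC API, and the SRP free-capacity check is a simplifying assumption.

```python
# Hypothetical pre-check for an online TDEV expansion request, distilled
# from the documented limitations. Illustrative only.
MAX_TDEV_GB = 64 * 1024   # devices may grow to any supported size up to 64 TB

def can_expand_online(current_gb, requested_gb, free_srp_gb,
                      is_tdev=True, replicated=False, external_type=None):
    """Return True if the expansion request satisfies the documented rules."""
    if not is_tdev:
        return False                                  # only TDEVs qualify
    if external_type in {"iSeries", "Celerra/NAS", "ProtectPoint"}:
        return False                                  # excluded external devices
    if replicated:
        return False                                  # no SRDF/TimeFinder/ProtectPoint/ORS
    if requested_gb <= current_gb:
        return False                                  # size can only increase
    if requested_gb > MAX_TDEV_GB:
        return False                                  # 64 TB ceiling
    # Simplifying assumption: the growth must fit in free SRP capacity.
    return (requested_gb - current_gb) <= free_srp_gb

assert can_expand_online(100, 200, free_srp_gb=500)
assert not can_expand_online(100, 50, free_srp_gb=500)            # shrink rejected
assert not can_expand_online(100, 200, free_srp_gb=500, replicated=True)
```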


Back-end/drive infrastructure
The following feature is available for the VMAX3 Family.

6 Gb/s SAS back end


Table 53 6 Gb/s SAS back-end details

Area: Back-end/drive infrastructure
Feature: 6 Gb/s SAS back end
Description: This feature introduces support for 6 Gb/s Serial Attached SCSI (SAS) drives with a back-end configuration that provides improved performance. SAS is a high-speed, reliable protocol that uses the same low-level technology as Fibre Channel encoding. SAS topology differs from Fibre Channel in that SAS uses a connectionless tree structure with unique paths to individual devices; routing tables store these paths and help route I/O to the required locations.
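The routing-table idea mentioned above can be sketched schematically: a connectionless tree in which a table maps each device address to the unique path of hops leading to it. This is a toy illustration of the concept, not SAS expander firmware; the class and names are ours.

```python
# Toy illustration of routed paths in a connectionless tree topology:
# a table maps each device address to its unique sequence of PHY hops.

class SasExpander:
    def __init__(self):
        self.route_table = {}        # SAS address -> tuple of PHY hops

    def add_route(self, sas_address, path):
        self.route_table[sas_address] = tuple(path)

    def route(self, sas_address):
        # Unlike a connection-oriented model, the path is resolved directly
        # from the table; no session needs to be established first.
        return self.route_table[sas_address]

exp = SasExpander()
exp.add_route("5000c5001234abcd", ["phy0", "phy4", "phy7"])
assert exp.route("5000c5001234abcd") == ("phy0", "phy4", "phy7")
```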

Open systems front end


The following feature is available for the VMAX3 Family.

Host I/O limits per storage group


Table 54 Host I/O limits per storage group details

Area: Open systems front end
Feature: Host I/O limits per storage group
Description: This feature allows every storage group (up to the maximum number of storage groups per array) to be associated with a host I/O limit quota. A maximum of 4K quotas can be set on the VMAX3 array (more than one storage group can be associated through cascaded relations when the quota is set on a parent storage group). Limits are evenly distributed across the available directors within the associated port group.
Some benefits are:
Ensures that applications cannot exceed their set limit, reducing the potential to impact other applications.
Provides greater levels of control on performance allocation in multi-tenant or cloud environments.
Enables the predictability required to service more customers on the same array.
Manages expectations of application administrators with regard to performance, and provides incentives for users to upgrade their performance service levels.
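The even distribution described above can be sketched as simple arithmetic: a storage group's quota is split across the available directors in the associated port group. The helper name is illustrative, not an EMC API, and integer division is a simplifying assumption.

```python
# Sketch: evenly distribute a storage group's host I/O limit quota
# across the directors of the associated port group.

def per_director_limit(group_iops_limit: int, director_count: int) -> int:
    """Even per-director share of a storage group's IOPS quota."""
    if director_count < 1:
        raise ValueError("port group must contain at least one director")
    return group_iops_limit // director_count

# A 40,000 IOPS quota spread over 4 directors gives each a 10,000 IOPS
# share; redistributing over 3 directors raises the per-director share
# while keeping the group total capped.
assert per_director_limit(40_000, 4) == 10_000
assert per_director_limit(40_000, 3) == 13_333
```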


Management software
The following feature is available for the VMAX3 Family.

Scalability of management software


Table 55 Scalability of management software details

Area: Management software
Feature: Scalability of management software
Description: As part of the next-generation FAST engine, scalability of management software is supported in the following VMAX software management products:
Solutions Enabler V8.1
Unisphere for VMAX V8.1

Service Level Objective management


Table 56 Service Level Objective management details

Area: Management software
Feature: Service Level Objective management
Description: Service Level Objective management provides the ability to:
Plan
  Use the current workload as a reference workload
  Rename Service Level Objectives
Provision
  Provide host I/O limits for provisioning
  Unisphere for VMAX support for Cascaded SRDF in VMAX3
Monitor
  ProtectPoint in the Unisphere for VMAX Protection Dashboard
  Service Level Objective compliance reports and alerts
  Performance Analysis enhancements
  Database Storage Analyzer (DSA) Dashboard
Manage
  Check suitability for a Service Level Objective change
  World Wide Name (WWN) global search
  SLO demotion, available from HYPERMAX OS 5977.691.684 for the Platinum, Gold, Silver, and Bronze SLOs
For more details, refer to the EMC Unisphere for VMAX documentation set on support.emc.com.


Embedded management
Table 57 Embedded management

Area: Management software
Feature: Embedded management
Description: Embedded management is available for VMAX3 arrays running HYPERMAX OS 5977.691.684. Pre-installed in the factory, eManagement enables customers to further simplify management, reduce cost, and increase availability by running VMAX3 management software directly on VMAX3 arrays.
eManagement provides the ability to:
Manage the VMAX3 array in which it is embedded and any SRDF-connected arrays (limited support).
Support Unisphere, Solutions Enabler, vApp, and SMI-S without the requirement of a separate server.

Note: Customers can continue to run VMAX3 management software on a dedicated server if they plan to manage several VMAX3 arrays from a single management console.

Table 58 Embedded management limitation

Limitation First affected release

This feature is available for new installs with HYPERMAX OS 5977.691.684 and above only. HYPERMAX OS 5977.691.684

IPv6 is not supported. HYPERMAX OS 5977.691.684

Embedded Management, Unisphere, Solutions Enabler, Virtual Appliance, and SMI-S Provider
Table 59 Embedded Management, Unisphere, Solutions Enabler, vApp and SMI-S details

Area: Management software
Feature: Embedded Management, Unisphere, Solutions Enabler, vApp, and SMI-S
Description: Embedded Management, Unisphere, Solutions Enabler, vApp, and SMI-S provide management support for the features available in HYPERMAX OS 5977.691.684. For more information, refer to the EMC Unisphere for VMAX Documentation Set, the EMC Solutions Enabler Documentation Set, and the EMC Solutions Enabler Release Notes document.


Serviceability
The following features are available for the VMAX3 Family.

VMAX point-of-service interface


Table 60 Point-of-service interface details

Area: Serviceability
Feature: Point-of-service interface
Description: This feature provides a single point-of-service interface that supports the functionality of a Keyboard, Video, Mouse (KVM) in system bay 1. It enables system bays to be dispersed throughout a data center without sacrificing service effectiveness.
The point-of-service interface consists of an RJ45 Ethernet connection, a power cord, and a laptop work tray. It allows maintenance tasks and Management Module Control Station (MMCS) activities to be controlled from any system bay by simply connecting a laptop or tablet PC to a Management Module (MM) and using the RemotelyAnywhere application to control the MMCS in system bay 1.

Simplified SymmWin
Table 61 Simplified SymmWin details

Area: Serviceability
Feature: Simplified SymmWin
Description: Simplified SymmWin is a GUI application for VMAX hardware replacement tasks. It uses images, animations, and guided wizards to reduce user mistakes and minimize training.
The target users of Simplified SymmWin include Customer Engineers (CEs), Product Support Engineers (PSEs), and script developers. Different sets of functionality may be accessed, based on authorization privileges and the MMCS role definition.
New features of Simplified SymmWin include:
Multiple PSE and script developer functionality.
Support for: the environmental physical view in SymmWin; the configuration physical view in SymmWin; starting Simplified SymmWin and invoking a script from a URL; starting Simplified SymmWin and resuming a script from a URL.

Table 62 Simplified SymmWin limitation

Limitation First affected release

Simplified SymmWin is the only interface to run replacement scripts. HYPERMAX OS 5977.250.189


Dual Management Module Control Station


Table 63 Dual Management Module Control Station details

Area: Serviceability
Feature: Dual MMCS
Description: The Dual MMCS improves the infrastructure of the new VMAX3 arrays. An MMCS is an MM with an embedded service processor. Every VMAX3 array has two functional MMCSs, one primary and one secondary. Both are located in the first engine and provide the following services:
Easy serviceability, by performing configuration, installation, and setup of the secondary MMCS from the primary MMCS.
The ability to run maintenance tasks on the secondary MMCS that affect the primary.
Improved redundancy by:
  Collecting errors and providing a call home facility from the secondary MMCS if the primary is not available.
  The ability to dial in to either of the MMCSs during connection issues.
  Allowing the host Solutions Enabler to run configuration changes on any of the MMCSs in the system.
  The ability to perform health checks of all MMCS peers.
HYPERMAX OS 5977.691.684 provides added MMCS health check capabilities.

Table 64 Dual Management Module Control Station limitations

Limitations First affected release

MMCS failover is triggered manually after EMC Customer Service has determined that the primary HYPERMAX OS 5977.250.189
MMCS is not functioning.

The secondary MMCS can only run maintenance that affects the primary. If the primary MMCS is HYPERMAX OS 5977.250.189
not functioning it needs to be replaced.

The MMCS locations are fixed on engine 1, with the primary on side A and the secondary on HYPERMAX OS 5977.250.189
side B.

Action scripts can only run on the primary MMCS. HYPERMAX OS 5977.250.189

A failed procedure can only be recovered on the MMCS it failed on. HYPERMAX OS 5977.250.189


Pre-configuration of the VMAX3 Family arrays


Table 65 Pre-configuration of the VMAX3 Family array details

Area Feature Description

Serviceability Pre-configuration of The VMAX3 Family arrays are custom-built and pre-configured with array-based software
the VMAX3 Family applications, including a factory pre-configuration for FAST that includes:
arrays DATA devices (TDATs)
Data pools
Disk groups
Storage Resource Pool (one by default is pre-configured)
Five Service Level Objectives: Diamond, Platinum, Gold, Silver, Bronze*
Additional factory pre-configuration includes:
Embedded NAS
SRDF (hardware aspects are pre-configured)
FAST.X & ProtectPoint (hardware aspects are pre-configured)
Embedded Management
D@RE (enabled by default on VMAX 200K and 400K arrays)
FCoE & iSCSI
VMAX3 100K - 4 engine

Note: Optimized is the default Service Level Objective setting.

Note: * Bronze SLO no longer requires 7.2K drives.
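
As an illustration of how the pre-configured Storage Resource Pool and Service Level Objectives are consumed, the following Solutions Enabler 8.x SYMCLI sketch creates a storage group associated with an SLO. The array ID, group name, and device range are hypothetical placeholders, and exact option syntax may vary by SYMCLI version; consult the Solutions Enabler documentation before use.

```bash
# List the pre-configured Storage Resource Pools and the available
# Service Level Objectives on the array (SID 0123 is a placeholder).
symcfg -sid 0123 list -srp
symcfg -sid 0123 list -slo

# Create a storage group bound to the default SRP with the Gold SLO,
# then add thin devices to it (device range is illustrative).
symsg -sid 0123 create app_sg -slo Gold -srp SRP_1
symsg -sid 0123 -sg app_sg addall -devs 0100:0103
```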


ProtectPoint
The following feature is available for the VMAX3 Family.
Table 66 ProtectPoint details

Area Feature Description

ProtectPoint ProtectPoint ProtectPoint integrates primary storage on a VMAX3 array with protection storage for
backups on an EMC Data Domain system. ProtectPoint allows you to back up your Oracle
database directly from a VMAX3 array to Data Domain, and to restore images from Data
Domain to a VMAX3 array, without impacting the performance of the application server.
This reduces costs and architecture complexity, and shortens backup and
recovery times.

Integration is conducted on the Data Domain system via:


vdisk services
FastCopy

And from a VMAX3 array perspective, via:


SnapVX

HYPERMAX OS 5977.691.684 supports the ability to terminate a full LUN restore with
minimal impact.
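
Because the VMAX3 side of a ProtectPoint backup is driven by SnapVX, the flow can be sketched with SYMCLI as follows. The array ID, storage group names, and snapshot name are hypothetical; the Data Domain vdisk and FastCopy steps are performed separately on the Data Domain system and are not shown. Exact options may vary by Solutions Enabler version.

```bash
# Take a SnapVX snapshot of the production storage group.
symsnapvx -sid 0123 -sg prod_sg -name pp_backup establish

# Link the snapshot to the storage group containing the encapsulated
# Data Domain backup devices, so the point-in-time image is copied
# to protection storage.
symsnapvx -sid 0123 -sg prod_sg -lnsg dd_backup_sg \
    -snapshot_name pp_backup link -copy
```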

Table 67 ProtectPoint limitations

Limitation First affected release Limitation lifted

The maximum number of eDisks per VMAX3 engine is 2,048. HYPERMAX OS 5977.497.471

You cannot encapsulate more than 16 TB of device capacity in one step (it needs to be HYPERMAX OS HYPERMAX OS
split into multiple steps). 5977.497.471 5977.596.583

Encapsulated eDisks cannot be deleted. HYPERMAX OS 5977.497.471 HYPERMAX OS 5977.596.583


VMAX3 unified
The following feature is available for the VMAX3 Family.

Embedded NAS
VMAX3 unifies file and block data services via eNAS, reducing the capital and operational expense,
as well as the rack space consumed by an external NAS Gateway. eNAS provides the following
advantages:
Leverages VMAX3 features (FAST, Host I/O limits).
Supports pre-installed network-ready software that can be activated after the array is installed.
Provides Unisphere for VMAX streamlined block and file management capabilities, creating a
unified user experience that supports:
Creating storage pools
Managing storage pools
Summary view of block and file assets
File to block mapping
Correlating file and block events and alerts
The following front-end I/O modules1 are supported for eNAS:
4-port 1GbE BaseT
2-port 10 GbE BaseT (copper)
2-port 10 GbE (optical)
4-port 8 Gb FC2 (maximum one)
eNAS replication is performed using:
VNX Replicator for File - An asynchronous file system-level replication technology.
VNX SnapSure - An EMC VNX software feature that enables you to create and manage
checkpoints, which are point-in-time, logical images of a Production File System (PFS).
File Auto Recovery (FAR) with SRDF/S - A two-way synchronous only replication which
provides seamless failover of Virtual Data Movers (VDM) from one site to another
(HYPERMAX OS 5977.691.684 and later).
HYPERMAX OS 5977.691.684 validates eNAS on all supported HYPERMAX OS code and supports:
eNAS software upgrade to the latest VNX code base
I/O module upgrades
I/O module firmware upgrades
FAST.X (external provisioning mode only)
File Auto Recovery (FAR) with SRDF/S

Note: File Auto Recovery (FAR) with SRDF/S requires two-way synchronous only replication
(asynchronous replication is provided with IP Replicator for File) and Split Log file system.
Split Log file system is the default file system for HYPERMAX OS 5977.691.684, replacing the
Common Log file system used in previous HYPERMAX OS code.
FAR with SRDF/S is initially limited to one source and one destination array and limited to VMAX3
arrays with eNAS only.
Please contact your EMC customer service representative for more information on FAR with SRDF/S.
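
For orientation, FAR sessions are managed from the eNAS Control Station with the nas_syncrep command. The sketch below is illustrative only: the session name is hypothetical, and option spellings may differ between eNAS releases, so consult the eNAS command reference before use.

```bash
# List the configured VDM synchronous replication (FAR) sessions.
nas_syncrep -list

# Show details for one session, then fail it over to the R2 site
# (vdm1_session is a placeholder session name).
nas_syncrep -info vdm1_session
nas_syncrep -failover vdm1_session
```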

Please note the following items:


The eNAS Auto-diskmark feature is disabled by default; this feature must be enabled to automatically
discover the provisioned VMAX LUNs.
eNAS Control Station IP addresses must be configured prior to upgrading HYPERMAX OS
5977.498.472 code to a later version.

1. One I/O module per eNAS instance, per standard block configuration.
2. Backup to tape is optional and does not count against the single I/O module
requirement.


Table 68 eNAS limitations

Limitations First affected release Limitation lifted

The RunningGuests.ini file is not updated if you choose to manually start the Data HYPERMAX OS
Mover after the upgrade, rather than allowing the upgrade to automatically restart the 5977.596.583
Data Mover.

Auto-clear of the user home directories is not occurring, resulting in multiple user home HYPERMAX OS
directories. This issue does not impact eNAS functionality. 5977.596.583

When using IPv4, a Control Station can be reached from both NAT IP addresses. HYPERMAX OS
However, for IPv6, a Control Station can only be reached through active NAT IP 5977.596.583
addresses.

The IMPL system_performance_profile setting should not be set to back-end centric HYPERMAX OS
(BE-centric) mode when eNAS is configured. 5977.596.583

TimeFinder SnapVX is not supported. HYPERMAX OS 5977.497.471

ProtectPoint interoperability is not supported. HYPERMAX OS 5977.497.471

SRDF is not supported. HYPERMAX OS HYPERMAX OS
5977.497.471 5977.691.684

A maximum of four Software Data Movers is supported. HYPERMAX OS 5977.497.471

Upgrade information: HYPERMAX OS
eNAS cannot be added to an existing VMAX3 system. 5977.497.471
eNAS software upgrades are completed via a Service Pack. Please contact your EMC
customer service representative for details.
Upgrading a single Software Data Mover is not supported.

A front-end eNAS I/O module that requires replacement due to a fault can only be HYPERMAX OS HYPERMAX OS
replaced by an eNAS I/O module of the same type. 5977.497.471 5977.691.684

Front-end eNAS I/O modules cannot be added to empty slots in an existing eNAS HYPERMAX OS HYPERMAX OS
configuration. 5977.497.471 5977.691.684

eNAS I/O module firmware upgrade is not supported. HYPERMAX OS 5977.497.471 HYPERMAX OS 5977.691.684

Restore of a Software Data Mover may fail if new devices are added prior to the HYPERMAX OS
successful restore of the Software Data Mover. 5977.691.684

File Auto Recovery (FAR) information: HYPERMAX OS


The /nas/tools/dbchk -p command may report an error if the FAR service is 5977.691.684
configured.
The nas_syncrep command may fail if session related commands are running on
the R2 site.
The nas_syncrep -Clean command may fail when the R1 filesystem has File
Level Retention enabled and the Software Data Mover was not online during cleanup
mitigation.
Each CIFS server in the VDM and physical Data Mover must have its own interface
explicitly specified.
If the R1 and R2 sites are sharing IP/interfaces, a session delete on the R1 site will
set the R2 interfaces to a "down" state.

File Auto Recovery Manager (FARM) may incorrectly report the status of VDM sessions HYPERMAX OS
discovered on R2 sites. 5977.691.684


Embedded NAS -
VNX for File derivative
A derivative of the VNX Operating Environment for File 8.1.7.xx and EMC Unisphere
1.3.3.1.0072-1 was created for eNAS 8.1.4.53. Table 69 details the areas in the VNX for
File derivative that were modified to support Embedded NAS.
Table 69 Modifications in VNX for File derivative for eNAS

Area Feature Description

High-level Architecture Updates to:


architecture Deployment model
Networking orchestration
High-availability in the virtual machine environment

Installation and Scripts Updates to:


packaging NAS express install scripts
SymmWin GUI/scripts

Upgrades Scripts and Control Updates to:


Station NAS upgrade scripts
SymmWin GUI/scripts
Creation of the Concierge tool on the Control Station

Base platform Several Updates to:


Tier2 layer for cluster management
Networking support (DHCP, NAT orchestration, PCI passthrough)
Orchestration of the access to the Control Station from an external host for NAS
management
Storage access (Cut Through Driver)

Note: Only an N+1 failover model is supported, not the N+M model used in VNX for File

System Embedded Cabinet Updates to VNX CLIs such as server_xxx, server_sysconfig


management Type support

NAS inventory Updates to nas_inventory and Unisphere inventory page

Symmetrix storage Update to Symm plugin


group identification Description: The Symm plugin service enables the Link and Launch capability in Unisphere
for a specified file VNX to access Unisphere for VMAX forms. This plugin controls the entire functionality of
system Link and Launch.

Model Number /nas/sbin/model script


Support

System Procedures Updates to:


operations Power up and power down procedures
Emergency shutdown and recovery procedures
Call Home xml file

Security Control Station Updates to access to Control Station security.

Performance Profiling Updates to I/O path performance profiling.


Table 69 Modifications in VNX for File derivative for eNAS (continued)

Area Feature Description

EMC Unisphere New model number Updates to File plugin


updates Description: A collection of core Unisphere plugin components that define and load UI
components, Servlets, Unisphere Appliance Layer (APL), and common objects.

Link and Launch Updates to File plugin, Symm plugin, APL


enhancements

Link and Launch Updates to File plugin, Symm plugin script, persistence enablement
registration
parameters
persistence

Planned bug fixes Updates to File plugin, Symm plugin

Enhanced Link and Updates to:


Launch support Expansion of LUNs for existing storage groups.
One time Link and Launch credential setup for single sign on.
Link and Launch requests into Unisphere for VMAX File dashboard forms and/or wizards.

Solutions Solutions Enabler Solutions Enabler for VNX for File is upgraded to 8.0.1
Enabler for VNX for File

VNX CLI Command Updates to the following commands:


Reference and Man nas_diskmark
pages nas_fs
nas_pool
nas_volume
nas_disk
server_devconfig
fs_ckpt
cel_fs
nas_inventory
nas_cs
server_sysconfig


Security
The following feature is available for the VMAX3 Family.
Table 70 Data at Rest Encryption details

Area Feature Description

Security Data at Rest Data at Rest Encryption (D@RE) provides hardware-based, on array, back-end encryption
Encryption using I/O modules to encrypt and decrypt data written to the disk drives. The following
benefits are provided:
Encrypts all user data on the array - at the drive level; vault data is encrypted on Flash
modules
No impact to performance
All VMAX3 data services are supported
Includes an embedded RSA Key Manager
Provides Advanced Encryption Standard (AES) 256-bit encryption and is FIPS
140-2 compliant
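
Since D@RE is transparent to hosts, verification is typically done from the management interfaces. Assuming Solutions Enabler 8.x, the verbose array listing includes the encryption status; the exact field name and output format vary by version, so treat this as a sketch (SID 0123 is a placeholder).

```bash
# Check whether Data at Rest Encryption is enabled on the array.
symcfg -sid 0123 list -v | grep -i encryption
```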

Table 71 Data at Rest Encryption limitation

Limitation First affected release

Existing VMAX3 arrays do not support a non-disruptive online upgrade of D@RE; an upgrade HYPERMAX OS 5977.596.583
license is available for existing arrays that wish to re-install with D@RE enabled.

External key manager is not supported. HYPERMAX OS 5977.596.583


Legacy feature limitations


Table 72 lists the legacy feature limitations that exist in HYPERMAX OS 5977.
Table 72 Legacy feature limitations in HYPERMAX OS 5977

Feature Name Limitation Affected VMAX Platform Limitation lifted

ACLX The Access Control Logix (ACLX) device default factory HYPERMAX OS
configuration setting is the lowest ACLX enabled port on the 5977.250.189
lowest FA or FCoE director to prevent device discovery
unavailability with some operating systems; this can cause
excessive device discovery completion times and/or device
inaccessibility.
You can change the default setting to make the device accessible
on a different port.

ACLX If an ACLX database restore is attempted for greater than 1,000 HYPERMAX OS HYPERMAX OS
elements (Initiator Groups, Storage Groups, Port Groups, 5977.497.471 5977.596.583
Masking Views) you may see 00AA.55 - "Syscall took excessive
time to run" errors logged. If the restore operation takes longer
than 60 minutes, 0D21.F4 errors will also be seen.

ACLX 01.00F2.12 errors may be logged if a Fibre Channel director is HYPERMAX OS HYPERMAX OS
removed from the configuration in an ACLX-enabled system. 5977.497.471 5977.596.583

ACLX Before deleting a masking, it is necessary to ensure all ports in HYPERMAX OS HYPERMAX OS
that view are online. 5977.596.583 5977.691.684

Script support There is a system limitation of two concurrent upgrade HYPERMAX OS
procedures (Solutions Enabler's Configuration Manager allows 5977.250.189
customers to run a maximum of two concurrent upgrade
procedures).

FA statistics lptask_time, management_time, and link_port_time are always HYPERMAX OS
reported as zero. 5977.250.189

Data Services The HYPERMAX OS Data Services emulation I/O statistics for read HYPERMAX OS
Emulation miss KBs are always reported as zero. 5977.250.189

Parity writes Parity writes are not counted in back-end optimized writes. HYPERMAX OS HYPERMAX OS
5977.250.189 5977.596.583

Device The device-per-pool allocated capacity may revert to zero for HYPERMAX OS
allocation inactive devices. 5977.250.189

Microsoft Due to the increase in track size (from 64K to 128K) in VMAX3 HYPERMAX OS
2012 (Win8) Family arrays, Windows 2012 issues ODX extents that do not 5977.497.471
ODX Support align on these boundaries. As a result, offload copies fail
with a check condition, forcing the host to
perform a software copy, and you may experience a
longer than expected time for the copy request to complete.

NDU Upgrading from HYPERMAX OS 5977.250.189 to HYPERMAX OS HYPERMAX OS HYPERMAX OS
5977.497.471 may result in xx13 disk performance errors 5977.497.471 5977.596.583
occurring when there is high workload on the system.

TDEV size The maximum allowed TDEV size is 16 TB. This applies to both HYPERMAX OS HYPERMAX OS
internal TDEVs and eTDEVs (FAST.X TDEV external device). 5977.250.189 5977.691.684


Table 72 Legacy feature limitations in HYPERMAX OS 5977 (continued)

Feature Name Limitation Affected VMAX Platform Limitation lifted

16 Gb/s Fibre Ports on the 16 Gb/s Fibre Channel front-end I/O module may HYPERMAX OS
Channel I/O take longer than expected to recover if a link bounce occurs 5977.250.189
module during active I/O. This issue may occur at any of the module's
supported speeds (4, 8, or 16 Gb/s).

SymmWin SymmWin times out after 5 minutes when the number of HYPERMAX OS
Solutions Enabler configuration change requests are beyond 5977.596.583
system processing capabilities.


Other feature updates


The intelligence and improved capability introduced with HYPERMAX OS 5977 resulted in
some functionality being superseded or removed.
Table 73 details the areas that were either superseded or removed with the introduction of
HYPERMAX OS 5977.
Table 73 Superseded or removed functionality

Type Description

Device types Removed: Thick device types are now superseded by thin device types.
Improvement: Thin device types reduce cost, improve capacity utilization,
and simplify storage management.

Removed: The functionality within Vault to consume Symmetrix device


numbers.

Note: Vault still exists, only the functionality mentioned above has been
removed.

Removed: The functionality within Symmetrix File System to consume


Symmetrix device numbers.

Note: Symmetrix File System still exists, only the functionality mentioned
above has been removed.

Device attributes Removed: Fixed Block Architecture Metas

Removed: Multiple local mirrors

Removed: Legacy IBM i models (2107 emulations for VMAX 20K and 40K)

Other functionality Removed: Permacache

Removed: SymmIP

Removed: VLUN (v2)

Removed: Priority Quality of Service (QoS)


Enhancements for HYPERMAX OS 5977.691.684


The following enhancements are included in the HYPERMAX OS 5977.691.684 release.

Table 74 Enhancement

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

72754540 84247 This fix improves the incremental clone copy time by reducing Potential N/A
the time taken to create the precopy clone session and Performance
activate the clone session. Issue

70419566 473762 This fix adds a DCWP option to the 98,SCAN,FBNK scan to No Impact 201099
discard Write Pending slots that log 8710 mismatch errors.

481518SYM fw_pack_full_5977_150918_110905.exe This fix adds full support for No Impact N/A
HGST Cobra-F 10K drives.

Enhancements for HYPERMAX OS 5977.596.583


The following enhancements are included in the HYPERMAX OS 5977.596.583 release.

Table 75 Enhancements

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

68803210 20016465 This enhancement ensures the SymmWin Volume Delete No Impact 197440
script can successfully delete volumes; this is done by
increasing the script timeout value which allows sufficient
time for the deallocation of cache devices.

68763840 81107 Recovery Enhancement: this fix introduces the EMC internal No Impact N/A
inline command 8c,,,dfos,free,<type>,<num>,<inst>,<split> used
to delete Meta Data File System splits. Large meta data
objects, like the Front End track ID table (FE TID), are divided
into smaller objects (called splits); the inline command allows
these splits to be deleted.

68335494 80975 Recovery Enhancement: this fix improves the EMC internal No Impact N/A
Inline command 98,SCAN,DISK,E8C to print an error message
when it fails to read meta data from a disk instead of
reporting stale information.

67927486 81070 The inline command DA,,ENCL,LCCB,C,,STAT is enhanced to No Impact N/A
display the "no LCC found on port 12" message.

67923904 20016307 This enhancement ensures the Link Controller Card (LCC) Potential N/A
replacement script confirms that only ports related to the Maintenance
replacement are offline before continuing the script. It also Issue
ensures that the script attempts to enable both internal disk
controller (DS) ports before presenting a SymmWin script
failure message.


Table 75 Enhancements (continued)

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

67357806 20016351 Enhancement of the Disk replacement and Spare replacement No Impact N/A
SymmWin scripts to check LCC ports and drive Not Ready at
the beginning of the script and alert the user of any
unexpected state before replacing the drive.

66463620 20016250 This enhancement provides the option to open SymmWin No Impact N/A
logfiles using LogZilla instead of word or notepad by adding a
shortcut to the Windows right-click menu.

66463620 80572 This fix enables the VMAX3 inlines command, DA,,SAS,LINK, No Impact N/A
to display port pairs.

66463620 79796 This fix improves the short trace information provided for No Impact N/A
errors, 600B, 700B, and 800B.

68644438, 20016183 This enhancement improves the summary page information Potential N/A
68639390, provided when the Simplified SymmWin Online Code Load Maintenance
68756402, script fails to take a lock. Issue
69186412


Enhancements for HYPERMAX OS 5977.498.472


The following enhancements are included in the HYPERMAX OS 5977.498.472 release:
Contains some enhanced resiliency fixes for ProtectPoint, TimeFinder, and SRDF.
Provides improvements to the factory pre-configuration code and cache efficiency for
TimeFinder.


Fixed problems for HYPERMAX OS 5977.691.684


The following fixes are included in the HYPERMAX OS 5977.691.684 release.

Base functionality
Table 76 Base functionality

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

73548696 84409 The Symmwin System Interface Board (SIB) replacement Potential Data N/A
script may leave the Host Channel Adaptor (HCA) mask (that Unavailability
contains fabric paths status) in an incorrect state. A
subsequent full initialization process for any director may
cause the director affected by the SIB replacement to become
unavailable.

69601428 471924 After a MMCS1 replacement on a VMAX3 system running Potential N/A
HYPERMAX OS 5977, communication to the door LED via USB Maintenance
may be lost. Issue

61566038, 441226 Negative cache partition Write Pending (WP) counters can Potential Data N/A
63491798, cause host I/O timeouts, because the Cache Partitioning (CP) Loss/Data
67135918, count, which is expected to be zero ("0"), may actually be at Unavailability
68180452, its maximum value or higher. This can cause ab3e and
69081036 xx3c timeouts on the front end.

Exposed environment: Fibre Channel, Enginuity.

Errors: AB3A.

71148264 475793 In an environment running HYPERMAX OS 5977.596 or later, Potential Data N/A
deleting a Masking View with a PMR (Port Mask Record) where Unavailability
the ports are in an offline state causes a pending delete state
and inconsistencies in the masking view records. Performing
masking changes after this might not be possible.

Special Conditions: This only applies to HYPERMAX OS


5977.596 or later.

72135042 83976 After an external power event affects one or more disk Potential Data N/A
enclosures, the array continues to stay offline even after the Loss/Data
external power is restored. Unavailability

Errors: B5.100B.1F.

71608284 83090 A back-end director may stop destaging data received due to Potential Data N/A
an internal WP management problem. Eventually, the system Loss/Data
write pending limit is reached and the system fails. Unavailability

Errors: 5A13.0C, C33C.

71302604 82658 During Meta Data Recovery, the MDFU,OFFR scan fails when it Potential Data N/A
encounters consecutive deleted objects in a file system. The Unavailability
scan failure occurs due to a large number of deleted inodes that
are not at the end of the file system.


Table 76 Base functionality (continued)

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

71276606 83276 This fix prevents SymmWin Dynamic Member Sparing (DMS) Potential Data N/A
from removing a Not Ready member that may lead to an Unavailability
invalid Ready state, which in turn may cause stuck read
requests, invalid stripe member masks, and may result in a
host timeout.

Errors: B9.AD10.33, AD10.33, B4.BB10.12, BB10.12.

71151618 82564 During Meta Data Recovery, the DFOS,IFSC scan may corrupt Potential Data N/A
file system flash allocator counters in the allocator pyramid Unavailability
causing the flash allocations to fail.

70939272 20016806 The Offline LCC Firmware Upgrade script does not check if Potential Data N/A
both LCC paths to the drive are ready before loading the LCC Unavailability
firmware on the LCC port. This missing check may cause the
drive to drop Not Ready during the LCC firmware upgrade
when there is a hardware issue on one of the LCC ports.

69678386 81880 A non-disruptive upgrade from 5977.498.472 to Potential Data 201161


5977.596.583 may corrupt the memory heap which will Unavailability
impact the SymmWin scripts Offline Meta Data Recovery and
Offline DAE replacement; these scripts invoke the full IML
type $FF,STOP.

70419566 82188 Host I/Os (reads and writes) can time out against tracks Potential Data N/A
where the Front End (FE) Track ID (TID) points to a cache slot it Unavailability
doesn't own.

Errors: 283C, 2A3C.

70419566 82014 This fix improves the EMC internal inline FTID scan (inline No Impact 201097
command 98,SCAN,CACH,FTID) to fix a slot/track pointer
mismatch if there is a CRC error on a target ID read.

70419566 81983 This fix ensures that the EMC internal inline recovery scan No Impact 201052
(inline command 98,VPR,RCVR,<dv>,<track>) recovers the
corrupted target ID of TDEV/TDAT unallocated tracks.

70419566 82098 The VMAX3 array may trigger I/O host aborts and host Potential Data 201051
timeouts due to the incorrect handling of the DTR Recovery by Unavailability
the HYPERMAX OS Data Services Emulation (EDS).

70419566 81883 An EMC internal inline FXBE scan may cause 2A2C errors to Potential Data 201042
log against diagonal parity tracks for slices with IZERO tracks. Loss/Data
Unavailability
Errors: 2A2C.73

70419566 81806 The EMC internal inline SDTR (Setting Deferred Table Rebuild) Potential Data 201041
scan (98,SCAN,MDPG,SDTR,,1) on a TDEV device that has Loss/Data
more than 54,000 cylinders is triggering Track ID CRC Unavailability
corruption. This happens for any Track ID that is beyond Split
0.

Errors: A72C.50

70235660 81763 The EMC internal inline command DA,,ESES,DB,ALL,ALL Potential Data N/A
command might cause multiple drives to become not ready. Unavailability


Table 76 Base functionality (continued)

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

70223024 82077 Performing a single-director initialization of an Infrastructure Potential Data 201830
Management (IM) emulation after adding devices to a VMAX3 Unavailability
system causes corruption of system memory.

Errors: xx3C, A72C, xx10.

70203274 81789 Under rare circumstances an ACLX Unmap operation may Potential Data 200530
trigger a Fibre Channel director to log multiple 0EFF errors and Unavailability
may become unavailable.

67804802 81350 Access from VPLEX to VMAX arrays with 16Gb/s (front-end I/O Potential Data N/A
modules) Fibre Channel front-end ports may be lost after Unavailability
temporary link disruptions.

63272934, 81224 The recovery of incomplete IO handling has been improved. Potential Data N/A
63368660, Unavailability
64235598, Errors: C12C, 2A26.
64287388,
64332522,
64441412,
64457734,
64847218,
65479948,
65624732,
65782428,
65897934,
65940426,
66622306,
67226018,
67276794,
67385754,
68261406,
68766698,
68770338,
68794426,
68799358,
69576468,
69695808,
70033966,
70251386,
71857412

72819710 84051 When a SymmWin script issues syscall 920E_85 or 920E_84 Potential Data N/A
to clear fabric statistics on a VMAX system that has a fabric Loss/Data
link down or whose switch is initializing (for example, when Unavailability
one MIBE is unavailable), all directors may drop to DD. Under
these conditions the syscall takes a long time to complete and
prevents the emulation thread from updating the lifesigns,
which causes a director to drop with a status of DD. The
condition can occur in systems with three or more engines
when half the fabric is unavailable (one MIBE is unavailable).

Errors: 10.05F2.1E, 2AD80B.7C, 60.D80B.ED.


Table 76 Base functionality (continued)

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

72787478, 83938 Including the track# parameter in the following EMC No Impact N/A
73155172 internal inline command generates error 20CE.21:

98,SNVX,TGST.DISP.ALL,device,cylinder#,head#,track#

Here, device identifies a disk device, cylinder# identifies a
cylinder on the device, head# identifies a disk head within
the cylinder, and track# identifies the number of tracks to
process.

Errors: 20CE.21, 01B1.01, 643E.21

71827190 83141 HYPERMAX OS does not validate the number of devices Potential N/A
specified on the command line for a FATS or CATS scan. This Recovery Issue
could result in the scan never finishing.

71827190 83153 The EMC internal EDS scan scheduler does not run a queued Potential N/A
scan if the scan_next_f function returns "skip". In this case, Recovery Issue
the scheduler continues to poll and does not execute the scan
function. This can prevent scans from running correctly. When
the problem occurs, the inline command 98,SCAN,STAT
shows that the value of the Count for Next field increases
while the value of the Item field is unchanged.

71361048 82736 Overflow in statistics metadata can occur when there are a Potential N/A
large number of storage groups that have a Service Level Performance
Objective (SLO) specified. Issue

Special Conditions: Several hundred storage groups that have


a Service Level Objective specified.

Errors: 7D3A.02, 7D3A.81, 7D3A.FD

70419566, 81882 After an upgrade to HYPERMAX OS 5977.596.583, some No Impact 201056
70836452 device delete operations may not clean up internal metadata
cache pages for thin devices, resulting in
streaming metadata scrubbing mechanism errors (BE1E).

Errors: BE1D, BE1E.02.

70419566 83034 An EMC internal FTID scan might not fix a cache pointer error if Potential Data N/A
the cache slot is write pending when the Loss/Data
98,SCAN,CACH,FTID,<dev>,1 inline command is issued. Unavailability

69049624 81400 During a Vault restore, drives may log wrong disk - mismatch Potential Data N/A
in serial number (error DD10) and drop Not Ready (error Unavailability
100B). The problem is triggered when HYPERMAX OS
incorrectly clears the drive World Wide Name (WWN) from the
back-end director drive WWN table.

Errors: DD10, 100B


Table 76 Base functionality (continued)

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

72063528 84289 A memory failover event may trigger 8810.34, 283C.07, and Potential Data N/A
AB3E errors. Unavailability

480378SYM 83518 Front-end and back-end track IDs are not being destaged to Potential Data N/A
flash after FAST data movements, resulting in flash contents Unavailability
not being viable for Metadata Recovery.
For HYPERMAX OS 5977.596.583 the equivalent fix number is
83741.
CUSTOMER: ETA 205176: VMAX3: FAST compliance moves
may not be saved when the VMAX3 powers off without Power
Vaulting or when Meta Data Recovery is executed, resulting in
potential data loss.

478547SYM 83743 This is an enhancement to the 98,SCAN,MDPU scan adding No Impact N/A
the VIPD option. This option is used to correct Meta Data
inconsistency between Global Memory (GM) and Flash.
CUSTOMER: ETA 205176: VMAX3: FAST compliance moves
may not be saved when the VMAX3 powers off without Power
Vaulting or when Meta Data Recovery is executed, resulting in
potential data loss.

GigE
Table 77 GigE

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

69619218 82044 A syscall 9235_24 including an SRDF GigE (RE) port parameter Potential Data N/A
that is not mapped to the RE may cause a software exception Unavailability
on the RE directors.

Special conditions: Systems with RDF RE Gig-E hardware
compression.

Exposed environment: SRDF RE directors.

Errors: 0EFF.FF.


Open Replicator
Table 78 Open Replicator

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

69753324, 81724 Performing a migration from CLARiiON to VMAX3 using Open Potential Data 199781
69753738 Replicator and Asymmetric Logical Unit Access (ALUA) set may Unavailability
cause the CX/VNX array to panic, resulting in the loss of the
remote session.

Errors: B511, B513, B514, 243E.

Open Systems
Table 79 Open Systems

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

69550848 472045 When creating a masking view, HYPERMAX OS might Potential Data 199637
incorrectly predict that the operation would impact Loss/Data
connectivity to the relevant multipath gatekeeper and block Unavailability
the operation. This results in the masking view creation failing
with the following error: "The operation cannot be performed
because it will impact the gatekeeper".

Errors: 0x2E.

RAID
Table 80 RAID

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

69049484 470192 Using allocation on host access on a VMAX3 array may show Potential 199277
inconsistent I/O patterns and percent busy numbers on disks Performance
that are in the same RAID group. This may occur when the VPI Issue
performs short allocations which may take multiple
BE-Slice-Locks on slices which are partially allocated. If
subsequent short allocations fail, the Free Space
Management (FSM) will continue to try every available
OpenSSL connection and where necessary purge the initial
failing connection leading to an imbalance between TDAT
members.

Exposed environment: RAID 5, RAID 6.


SRDF Family
Table 81 SRDF Family

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

39495156, 361378 SRDF performance degradation may occur when running SRDF Potential 75828
58749064 in adaptive copy disk mode between a Symmetrix DMX array Performance
running Enginuity 5773 and a Symmetrix VMAX array. Issue

71733436, 477723 SRDF directors may experience high CPU utilization and log Potential N/A
71763356, 02F2.01, 01F2.14 errors when I/O is low. Performance
72223996 Issue
Errors:2A.02F2.01, 2A.01F2.14.

71306844 83998 When the R1 device of an SRDF configuration is on a VMAX Potential Data N/A
5876 system and the R2 device is on a VMAX3 5977 system, Loss/Data
a host application writing zeros to partial tracks on the R1 Unavailability
device might trigger SRDF data inconsistency.

Errors: 2A2C, EC85, BC10.

72117094 83552 In an SRDF environment, a path could get stuck in the "going Potential Data 204011
offline" state resulting in a loss of SRDF paths. Unavailability

Errors: 0E22.2B, 0E22.2C

71733436, 83393 Lower than expected SRDF throughput rates and latency could Potential 203741
72223996 be experienced on VMAX3 systems with a small number of Performance
devices and large distances. This is caused by the Copy Issue
Window feature, which is not optimized to allow small numbers
of device copies to fill all of the available bandwidth.

71930798 83250 In an SRDF environment, the SRDF full establish command Potential Data N/A
fails after a new device pair is created. This is due to a stray Unavailability
secondary mirror that is left behind after a partially-failed
deletepair or a half_deletepair on the primary side.

Errors: SYMAPI_C_NEED_MERGE_TO_RESUME.

71733436 83289 When creating an SRDF pair, the synchronization operation Potential 203740
may be very slow to complete. This happens only if the R1 Performance
device is on a HYPERMAX OS 5977 system and the R2 device Issue
is on an Enginuity 5876 system that had previous SRDF pairs
that have since been deleted. Null tracks are marked as
invalid, thus impacting the synchronization time.

71129808 82652 During an SRDF/A restore operation, the RF directors may run No Impact N/A
out of stack space when handling invalid tracks, triggering an
exception.

Errors: 22A.93FF.FF.

71733436, 83459 During a SRDF copy operation, individual I/O requests are not Potential 204114
72223996 distributed evenly among the SRDF directors. Directors with Performance
the slowest response times are allocated the majority Issue
of the I/O requests. This results in the available bandwidth
being underused causing potential performance problems
and longer synchronization times.


Table 81 SRDF Family (continued)

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

72117094 83562 In an SRDF/Synchronous environment, overlapping writes can Potential 204468
lead to an excessive number of 2A3E errors being logged on Performance
the SRDF directors. This can lead to ITAC errors, performance Issue
degradation and the possible loss of SRDF paths.

Exposed environment: SRDF/Synchronous J0 mode with a
heavy workload that performs many small writes to the same
tracks on a number of devices over and over again.

Errors: 2A3E.2A, 22CE.02, 03CE.00, 05CE.11, 26CE.03.

70998948, 82333 After an SRDF establish, the SRDF links may start streaming Potential Data 202080
72470600, 053F, 053C and F03F errors and the SRDF devices enter a Not Unavailability
72547116, Ready state on the links.
73155172
Exposed environment: All SRDF arrays running HYPERMAX OS
5977 that have local replication SNAPVX sessions on an R2
remote device.

Errors: A0.F03F.05, F03F.xx, 053C.xx, 053F.xx, 083F.xx,
0813.xx.

69895110 81767 When the R1 device of an SRDF configuration is on a VMAX No Impact N/A
system and the R2 device is on a VMAX3 system, if an RDC
scan running on the R1 device detects a data mismatch, it
correctly logs an EC85 error. Kernel slab allocation (01F2)
errors might be logged incorrectly on the RDF director of the
R2 device.

SRDF/A
Table 82 SRDF/A

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

71763356 83952 In an SRDF environment, if an SRDF/A group is defined on Potential N/A
multiple directors but is not operational on one of them Performance
(possibly due to lack of communication to the remote VMAX Issue
array), the director may waste CPU cycles performing checks
for I/O related to this group.


SymmWin
Table 83 SymmWin

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

483020SYM 20017260 The SymmWin script "Online code load" doesn't display the Potential Data N/A
list of temporary fixes being loaded, and does not warn the Unavailability
user of the potential loss of temporary fixes already loaded on
the array.

70532202, 20017114 This fix improves the SymmWin Config and New Install No Impact N/A
71495128 script to better handle failed drives on a DARE configured
array.

71276606 20016997 When the SymmWin Dynamic Member Sparing (DMS) script Potential Data N/A
drops a drive to the Not Ready state, the drive could stay in a Unavailability
Pending Not Ready state (error 140B.0E). The script
continues without detecting this condition, which can result
in some data unavailability issues.

Errors: AD10.33 (incorrect member mask).

71532972, 20017058 SymmWin fails to commit online configuration changes Potential N/A
71811952 made to the FA Topology or the FA Loop ID. Maintenance
Issue

70518856 20016850 On a VMAX3 system running HYPERMAX OS 5977, the last No Impact N/A
step of the configure and install script does not include an
additional environmental health check that checks the RS232
to MMCS2 cable connectivity.

69263716 471815 The SymmWin Online Configuration Changes script, when No Impact N/A
running in simulation mode, may fail to validate eNAS
network configuration changes.

72401644 20017068 The SymmWin script Online Configuration Change does not No Impact N/A
verify that both physical paths are communicating for newly
added drives.

Errors: B0.600B.C5.


TimeFinder-SNAP
Table 84 TimeFinder-SNAP

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

730326668 84370 This fix resolves an issue where the Local Replication Potential Data N/A
background scrubbing mechanism may fail to free certain Unavailability
Replication Data Pointer (RDP) objects.

Errors: 3D10.3F.

72874428 84022 A system running HYPERMAX OS 5977 or later may not be Potential Data N/A
able to handle new SnapVX sessions internally, even though Unavailability
there is plenty of capacity from both a
physical (disks and SRP) and a logical (internal metadata)
perspective. The result is that new SnapVX sessions are
created via the software, but will almost immediately
appear in a Failed state after creation.

Special conditions: The issue occurs at around 75% logical
internal metadata capacity flash full.

Errors: 82.3D10.3F, 3D10.3F.

69930212, 81732 An ESX Server may lose the paths to LUNs when running Potential 199853
71289140 SnapVX snapshot sessions on large devices. This can result in Performance
a high number of write pendings on the devices and Issue
subsequent performance issues.

Errors: AB3E.00

Virtual provisioning
Table 85 Virtual provisioning

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

70979864 82876 FAST VP moves may fail, logging 7D3B 33 errors, when the Potential N/A
number of extents on the devices is greater than the actual Performance
device size. Issue

Special Conditions: This issue may occur when a config
change deletes a large device and replaces it with a smaller
device but the extent statistics are not deleted.

Errors: 7D3B.33


Fixed problems for HYPERMAX OS 5977.596.583


The following fixes are included in the HYPERMAX OS 5977.596.583 release.

Base functionality
Table 86 Base functionality

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

68986330 81344 This fix introduces the engineering inline command No Impact N/A
DA,SET,ESES,MDNL,<port>,<eses>,ABRT, which can be
used to recover from a firmware download where the drive
expanders inside the LCC are stuck in a WAIT state.

68001906 81001 Yellow Jacket drives are logging too many BMS 03/xx errors, Potential Data N/A
and when the back-end director writes to a BMS 03/xx Unavailability
detected block, the drive returns 03/0C/03 (reassign
required). This may lead to the back-end director counting too
many fatal errors and dropping the drive with a 0D0B error (Write
Disabled), or the drive may be dropped with a BADD trace
error for excessive BMS detected 03/xx errors.

Errors: 03/xx, 0D0B, BADD.

68335494 80675 When a drive is performing a recovery operation known as Potential Data 196771
long CDB blocking mode, it may fail to send read messages to Unavailability
the other RAID group member to initiate a rebuild operation,
resulting in Read I/O timeouts.

Errors: BExx, BE2E, 0113.

69507888 80660 It is not possible to recover an unavailable director if an Potential 198806
Online configuration change to add volumes Maintenance
(QuickCacheVolsAdd) was performed while the director was Issue
already unavailable.

68028736 20016505 SymmWin script Online and Offline LCC firmware upgrade may Potential N/A
fail due to incorrect timeout values. The fix increases the Maintenance
script timeout values. Issue

67907926 80443 When the MDP hash table scrubber is run in FIX mode, it No Impact 194822
does not properly return page nodes to the free list when
deleting a node from the hash table.

Errors: BE1F.xx.

69419762 80146 In some cases during a director replacement, Vault due to Potential Data N/A
Fabric Availability errors (02b1.22) are logged and the Loss/Data
VMAX3 system vaults when shutting down/recovering Unavailability
Infrastructure Manager (IM) 1A or 2A in a multi-engine
system.

Errors: 02b1.22.


Table 86 Base functionality (continued)

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

67271660 79960 Locking an already locked Replication Data Pointer (RDP) No Impact N/A
tree during a late copy failure fails with an A410.10 error.

Exposed environment: SnapVX

Errors: A410.10.

67291080 79944 The syscall (920B_50) used to collect Device Mirror Statistics No Impact N/A
on DATA devices may incorrectly attempt to retrieve
information on TDEV devices, triggering 230.DC10.40 errors
on ED (management) directors.

Errors: 230.DC10.40.

68335494 80297 When a drive is performing a recovery operation known as Potential Data 196771
long CDB blocking mode, it may stop serving I/Os on every Unavailability
logical device configured on the drive.

Errors: BExx, BE2E, 0113.

66463620 80868 This enhancement adds port, index, and D_ID information to No Impact N/A
the short trace associated with BE3E and B53E errors.

66463620 80045 If a DAE becomes unavailable, a VMAX3 system starts a No Impact N/A
5-second timer. Should that timer expire, the system vaults.
This enhancement posts error C03E.87 when the timer starts.

64911174 80788 Recovery enhancement: the EMC internal Inline command No Impact N/A
D1,,4 incorrectly displays the arrival time of the I/O commands
that are in the device queue.

60975616 72142 The FCoE emulation, under certain workloads, might report a Potential Data 180154
Non-Maskable Interrupt (NMI) error and become unavailable. Unavailability

Errors: 20.023E.01, 20.023E.02, xx.023E.xx

58739374 69191 The SYMCLI symchg command may post the error "An internal No Impact 172833
error occurred during a SYMAPI operation. Please report to
EMC" due to HYPERMAX OS failing to read/reset the Symmetrix
Differential Data Facility (SDDF) bit via Syscall 9243_03(8D).

58102254 68601 The internal debug inline command A4,EE,<dev> does not No Impact 171193
display non-data slots on a Thin device, which will prevent the
discovery of duplicate cache slots against a device.

68717162, 20016427 In a newly installed VMAX3 environment, the verify VMAX Potential 197810
68557336 setup script may fail with the error message "Empty part Maintenance
number, can't validate component". This is because Issue
HYPERMAX OS has no information about the VPDs of all field
replacement units (FRU) at this stage of the setup procedure.


Table 86 Base functionality (continued)

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

68997442, 80696 The VMAX3 system logs page node errors (BE1D.82) after a No Impact 197766
69419762 director replacement or a director recovery.
This error is generated by the Page Node scrubber, which
detects a discrepancy between the pointer in the hash table
and the pointer in the Meta Data page.
In order to recover from this error, the director should be
made unavailable and a full IML should then be performed on
the Infrastructure Manager (IM) director.

Errors: 30.BE1D.82, 30.BE1F.01

68022012 79654 In rare cases, message transmission buffers are Potential N/A
inappropriately freed, resulting in corrupted buffers. Maintenance
Issue
Errors: 10.00F2.54, 01.00F2.10

66463620 80066 This enhancement improves the information provided for Potential N/A
environmental errors 28.xx80.xx by including location Maintenance
information in the generated message. For example, when the Issue
power supply to a DAE fails, the message includes the
identifier of the failed DAE.

67405112 80004 When a drive is replaced because of BADD errors, the old Potential N/A
state might not be cleared correctly. This can result in the Maintenance
same BADD errors being logged for the new drive. Issue

62885804, 74907 Previously, it was possible for a director to complete a full IML Potential Data 187318
67144040, even if it had a faulty time-of-day clock that was not Loss/Data
68131046 synchronized with the rest of the system. This could result in Unavailability
possible data loss or data unavailability. This fix ensures that
if any director fails to synchronize its time clock with the rest
of the system, the IML on that director fails, thus maintaining
the overall data integrity of the system.

64796960, 77179 The inlines command 8F,MODX,TOKN,XHOW,ID,token-id Potential N/A
68565434 displays incorrect information. The output is labeled as Maintenance
relating to the specified token identifier (token-id). However, Issue
the data displayed always relates to token 0x0000. This
occurs regardless of the token identifier the user supplies.

67693968, 20016190 Associating a front-end port with a logical device using the Potential N/A
67693934, Online Configuration Change function in simulation mode Maintenance
67693546, could cause the function to loop endlessly. Issue
67693890

67502252 80764 Periodic Syscalls or Inlines issued to check the Ethernet Potential N/A
status may trigger exception IMLs (0EFFs) when a bad Maintenance
Network Interface Controller (NIC) chip becomes unresponsive Issue
in the middle of a NIC reset.

Errors: 0EFF


Table 86 Base functionality (continued)

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

68939398 81283 A DS director slice may become unavailable after failing to Potential N/A
take one of the ports offline. Maintenance
Issue
Errors: 003f.a5, 800.2C, BF3E.30

67415386 80291 In rare cases, a non-blocking single-port SCSI reset fails to Potential N/A
completely recover the targeted link, resulting in degraded Performance
I/O performance. Issue

67870658, 80695 HYPERMAX OS is improperly penalizing Seagate Yellow Jacket Potential Data N/A
68001906 drives when they log 03/0C/03 (request reassign) sense Unavailability
data. The drives are dropped into a Not Ready state and a
0D0B error is logged.

Errors: 0D0B

Fibre Channel
Table 87 Fibre Channel

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

67114036 78538 Under rare conditions, after an I/O Unit Check condition the No Impact N/A
Fibre Channel director may report improper sense codes to
the host.

Online Configuration Change
Table 88 Online Configuration Change

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

68440708 20016525 Online loading of a new configuration via a SymmWin script Potential Data N/A
may cause the loss of configuration changes made via Loss/Data
SYMCLI/API. The SymmWin protection against this scenario is Unavailability
not working as intended; this is now fixed.


Online Upgrade (NDU)
Table 89 Online Upgrade (NDU)

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

66900244 80541 During a non-disruptive upgrade to HYPERMAX OS Potential Data 196581
5977.498.472, an attempt to release a memory chunk Unavailability
defined by a corrupted pointer causes an exception. This can
result in a director becoming unavailable. In very rare
instances, more than one director may become unavailable.

Errors: 0EFF

Open Replicator
Table 90 Open Replicator

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

57350950 69226 In a Symmetrix Open Replicator (ORS) environment, when a Potential 170577
process login (PRLI) request is rejected by a Host Bus Adapter Performance
(HBA), the front-end host might continually retry the port login Issue
(PLOGI) and PRLI commands for up to eight seconds,
impacting performance.

Exposed Environment: Open Systems


Open Systems
Table 91 Open Systems

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

66900244 80563 Windows Cluster Validation may fail due to HYPERMAX OS Potential Data N/A
reporting the incorrect list of registered hosts. Unavailability

68128276 79716 The SYMCLI command symaccess list logins doesn't report Potential 196731
the initiators connected to the Fibre Channel directors. The Application
problem is caused by HYPERMAX OS not correctly updating Failure
the login history table (OLOG) of the front-end director.

Errors: FC22.06

66900244 80769 Windows Cluster Validation may fail if more than 32 Potential Data N/A
PowerPath hosts or VM hosts are connected to one Fibre Unavailability
Channel Emulation.

Exposed environment: PowerPath hosts attached to front-end
ports.

Errors: F9.133E.00

58021944, 67854 Online configuration changes and Direct Member Sparing Potential Data 168938
57973310, (DMS) operations may incorrectly trigger a SCSI Unit Unavailability
58109528, Attention. This may cause certain hosts to lose access to their
58025354, disks.
58310266,
60205034,
57422134,
56818620,
57412454,
54823012,
56743920,
57698786,
56896368,
57434400

RAID 6
Table 92 RAID 6

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

55122006 67477 During a RAID 6 drive rebuild, the maximum system write Potential N/A
pending (WP) limit could be reached, resulting in delayed Performance
host I/Os. Issue


SRDF-Family
Table 93 SRDF-Family

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

67856842 80441 In an SRDF environment, the D0 control commands may fail Potential Data N/A
with D022.0D timeout errors. The problem triggers when Unavailability
there is a large discrepancy between the directors' timers.

Exposed environment: SRDF configured systems.

Errors: D022.0D, D03E.25

65117800, 79189 SRDF adaptors and all disk adaptors may stop operating soon Potential Data 191199
66832748 after moving SRDF devices from Adaptive Copy Disk mode to Loss/Data
Synchronous mode. This was due to the resources of the Unavailability
adaptors being overwhelmed by the very large number of
synchronous write requests that can occur. In addition, a
large number of 9A1C.07 and 0E22.07 errors are logged on
the SRDF adaptors.

SRDF/A
Table 94 SRDF/A

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

40632458, 58934 If a cleanup command (inline F0,CE,SNOW,CLEN,CE) is issued Potential Data 76834
64183866 on an SRDF/A group while there is still an active SRDF/A Loss
session, it may cause the loss of SRDF/A information. This
results in SRDF/A devices entering a target not ready state,
causing the resynchronization of the whole SRDF/A group.

SRDF-Dynamic
Table 95 SRDF-Dynamic

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

59632942 70516 SRDF mirror position on members within the same meta Potential 175842
volume may be misaligned when performing an SRDF Create Performance
Pair operation. This can lead to undesired behavior, including Issue
problems with Dynamic SRDF swap.

Exposed environment: Dynamic SRDF on CKD meta volumes.


SymmWin
Table 96 SymmWin

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

69197778 20016490 SymmWin fails to start the user listener service for some user Potential N/A
levels. As a result, the VMAX3 array calls home with error Application
04DC.1A because the system is not able to detect that the Failure
user is no longer active in order to log the user out automatically.

Errors: 04DC.1A

68737928 20016445 VMAX3 systems are pre-configured with a small number of Potential N/A
CPU cores allocated to the Fibre Channel directors. This has a Performance
severe impact on performance. This fix detects this Issue
configuration problem, posts a hint to the user, and sends
error 04DB.86 over the Call Home facility.

68387664 20016396 This fix provides instructions on how to handle the cable No Impact N/A
management arms (CMA) while replacing a Disk Array
Enclosure (DAE) of VMAX3 systems.

68312650 20016310 When the Simplified SymmWin Online Code Load script fails, Potential 196732
a master lock is set, disabling the resume script functionality. Maintenance
This fix allows the user to resume the script from the point of Issue
failure.

68312650 20016476 The SymmWin environmentals interface is reporting fabric Potential N/A
Matrix Interface Board Enclosure (MIBE) issues when the Maintenance
inline command 84,mibe,stat does not report any. Issue

468075SYM 20016354 The Simplified SymmWin Online Code Load script may fail Potential N/A
due to the loss of the network route which was caused by the Maintenance
re-image of the MMCS. This fix ensures the network route Issue
persists after the MMCS is re-imaged.

465589SYM 20016474 During the manufacturing configure and install process for No Impact 194895
VMAX3, the Deferred Maintenance Threshold is left blank; the
fix sets it to the expected default value of 0.

67196950 20016190 The AutoPSE_Disk SymmWin script, upon discovering that Potential N/A
hardware errors are associated with a spare drive, triggers a Maintenance
dial home to replace a single spare disk even though deferred Issue
service (indicating that a failed drive should not be
immediately replaced) is enabled.

Special conditions:
This only occurs when Auto_PSE executes for a configured
spare drive failure.

Exposed environment:
Systems that use deferred maintenance.

69361480 20016190 The SymmWin script Online Code Load may post the error Potential 198722
"Message 0x00008817.0xAB48: Unable to update the other Maintenance
MMCS due to missing files: - InstSettings.ini". Issue


Table 96 SymmWin (continued)

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

66463620 20016245 The SymmWin error analyzer incorrectly reports the BE38 error; No Impact N/A
the byte layout does not match the layout used by the
HYPERMAX inline command EB.

Errors: BE38

68916934 20016478 Some VMAX systems have been delivered pre-configured with Potential Data 197950
an incorrect RDF_MODE setting, preventing the user from Unavailability
successfully adding SRDF dynamic groups and devices.

69052332 81266 Online deletion of a high number of devices (more than Potential 197982
2,000) may fail leaving orphaned Distributed Flash Objects Maintenance
System (DFOS) objects. The system will log File Systems (FS) Issue
lock timeouts (BE20.40) and the Online Configuration
Change script will fail, logging F4EE and F4EF errors.

Errors: BE20.40, F4EE, F4EF

68737928 20016415 During the creation of a new configuration file, the Fibre Potential N/A
Channel director may be configured with a very low number of Performance
CPU cores, causing a severe performance impact. Issue

68730646 20016399 The Configure and Install New Symmetrix script may fail at the Potential 197398
Wait_for_GuestOS_Install step if all of the GuestOS image Application
files were not copied to the correct directory. This fix adds the Delay
step Verify_GOS_Dir_and_Image_Exist to verify that the files
are copied in the correct location before initiating the
installation.

68131046 20016349 When adding a Field Replaceable Unit (FRU) to the SymmWin No Impact N/A
bad FRU tab, the Returned Material Authorization (RMA) file
does not include the part number.

66032266 20016190 When comparing SymmWin reports, missing-file error No Impact N/A
messages might be logged. This is caused by a missing
file path entry. To locate and open the created file, use
Tools > GT > View Report.


Virtual Provisioning
Table 97 Virtual Provisioning

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

68012188 80553 If a device (that is a member of a group of devices being Potential 196052
freed) is deleted during the free-all request, the free-all Maintenance
request for some of the devices may be aborted and the Issue
request may need to be re-issued.

Special conditions: The device being deleted is part of a
multi-device free-all request and was freed and deleted before
other devices completed the de-allocation task. The error
window increases if the device being deleted is small
compared to other devices in the free-all request.

Errors: 7F7A.34, 7F7A.09.

66608214, 79620 The Microsoft HyperV feature uses the offload data transfer Potential 194215
69078874, (ODX) functionality when moving files with Windows Explorer Application
67083634, or performing virtual machine migrations. Windows will send Failure
67303458, the ODX request which is received by the front-end director,
68649828, creating a session and linked tokens. This request may fail
68565434, with error 8313.23 due to misaligned extents or error
68492260, 8313.15 if all command indexes are IN_USE due to
67618490, insufficient command buffer and token index cleanup. This
68773600 issue may also impact the ability to unbind or delete thin
devices.


Fixed Problems for HYPERMAX OS 5977.498.472


The following fixes are included in the HYPERMAX OS 5977.498.472 release.

Base functionality
Table 98 Base functionality

Fix number
Service request /OPT Knowledgebase
number number Problem summary Impact article

465628SYM 80000 If a director board fails to power up when the VMAX3 array is Potential Data N/A
powering up, the back-end emulation Dual Initiator may Unavailability
become unavailable, which can result in the VMAX3 array
vaulting. This failing condition is based on the configuration
of the VMAX3 array and the size and configuration of the
memory.

Configuration changes
Table 99 Configuration changes

Service request Fix number Knowledgebase
number /OPT number Problem summary Impact article

465580SYM 80034 The HYPERMAX OS Data Services emulation (also referred to Potential Data N/A
as EDS) could result in a director becoming unavailable during Unavailability
any large configuration changes that involve adding or
deleting devices.

TimeFinder
Table 100 TimeFinder

Service request Fix number Knowledgebase
number /OPT number Problem summary Impact article

466465SYM 80161 The VMAX3 array may log timeout errors (2A3C) when writing Potential Data N/A
to a clone target device where the slot is set to versioned write Unavailability
pending; this will result in the write I/O being failed back to
the host.


Related documentation
The following list describes the general platform and host documentation. The documents are available from EMC Online Support.

- EMC VMAX3 Family Documentation Set: Contains the product guide, physical planning guide, and power documentation for the VMAX3 arrays.
- EMC VMAX Family Viewer for Desktop and iPad: Illustrates the system hardware and system configurations offered for VMAX and VMAX3 arrays.
- EMC Solutions Enabler Documentation Set: Contains all the product guides and installation manuals needed to manage your array using the Solutions Enabler SYMCLI mechanisms.
- EMC Solutions Enabler Release Notes: Describes the contents of the kit and how to prepare for an installation. These release notes identify any known functionality restrictions and performance issues that may exist with the current version and your specific storage environment.
- EMC Unisphere for VMAX Documentation Set: Explains how to use EMC Unisphere for VMAX for storage system configuration, management, and monitoring.
- EMC Unisphere for VMAX Release Notes: Describes the contents of the kit and how to prepare for an installation. These release notes identify any known functionality restrictions and performance issues that may exist with the current version and your specific storage environment.

SolVe Desktop
SolVe Desktop provides procedures for common tasks, as well as the supported SRDF features and applicable limitations for 2-site and 3-site solutions. To download the tool, go to EMC Online Support and search for SolVe Desktop. Download SolVe Desktop and load the VMAX Family and DMX procedure generator.

Note: You need to authenticate (authorize) your SolVe Desktop. Once installed, please
familiarize yourself with the information under the Help tab.

ProtectPoint documentation
The following guides provide additional information on the EMC ProtectPoint solution:

- EMC ProtectPoint Implementation Guide
- EMC ProtectPoint Solutions Guide
- EMC ProtectPoint File System Agent Command Reference
- EMC ProtectPoint Release Notes


Troubleshooting and getting help


EMC support, product, and licensing information can be obtained at EMC Online Support.

Note: To open a service request through EMC Online Support, you must have a valid
support agreement. Contact your EMC sales representative for details about obtaining a
valid support agreement or to answer any questions about your account.

E-Lab navigator
To ensure your host operating system is supported, check all specifications and
limitations defined in E-Lab Interoperability Navigator, which can be reached at:
https://elabnavigator.EMC.com

Product information and support


For information about EMC products, licensing, service, documentation, release notes,
software updates, or for technical support, go to the EMC Online Support site (registration
required) at: https://support.EMC.com.

eLicensing support
To activate your entitlements and obtain your VMAX3 Family or VMAX 10K, 20K, 40K
Family license files, visit the Service Center at https://support.EMC.com, as directed on
your License Authorization Code (LAC) letter emailed to you.
For help with missing or incorrect entitlements after activation (that is, expected
functionality remains unavailable because it is not licensed), contact your EMC Account
Representative or Authorized Reseller.
For help with any errors applying license files through Solutions Enabler, contact the EMC
Customer Support Center.
If you are missing an LAC letter, or require further instructions on activating your licenses
through Online Support, contact EMC's worldwide Licensing team at licensing@emc.com
or call:
- North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362); follow the voice prompts.
- EMEA: +353 (0) 21 4879862; follow the voice prompts.

Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall
quality of our content. Send your opinions of this document to:
VMAXContentFeedback@emc.com


Glossary
Table 101 details the most common terms and acronyms used in this HYPERMAX OS 5977
Release Notes document.

Table 101 Terms and acronyms

Term or acronym: Description

eNAS: Embedded NAS
ePack: A bundle of fixes.
D@RE: Data at Rest Encryption
DMA: Direct Memory Access
DSS: Decision Support System
EDS/ED: HYPERMAX OS Data Services emulation
eManagement: Embedded Management
eTDEV: Encapsulated TDEV
FAR: File Auto Recovery
FAST: Fully Automated Storage Tiering
FBA: Fixed Block Architecture (open systems)
HCA: Host Channel Adapter
IM: Infrastructure Manager
MIBE: Management Interface Board Enclosure
MMCS: Management Module Control Station
MM: Management Module
PBu: Petabyte, usable
REST: Representational State Transfer
TDEVs: Thin devices
TBr: Terabyte, raw
vApp: Virtual Appliance
VDM: Virtual Data Mover
Virtual Matrix: Refers to the fabric/switch


Copyright © 2016 EMC Corporation. All rights reserved. Published in the USA.

Published August 2016

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries.
All other trademarks used herein are the property of their respective owners.

