EMC® VNX®

VNX Operating Environment for Block 05.33.021.5.256


VNX Operating Environment for File 8.1.21.256
EMC Unisphere 1.3.21.1.0256-1

Release Notes
P/N 302-000-403
REV 37
March 2020

The software described in this document is intended for the VNX5200, VNX5400,
VNX5600, VNX5800, VNX7600, VNX8000, VNX-F5000, and VNX-F7000, but the following
software packages are also intended for general use with earlier VNX and CLARiiON
products:
 Unisphere Host software, CLI, and Utilities
 Unisphere Service Manager
 ESRS IP Client
The software versions are the same for all platforms. Topics include:
Revision history ............................................................................................. 2
Software media, organization, and files ....................................................... 4
New features and enhancements ................................................................. 6
Fixed problems ............................................................................................. 7
VNX Operating Environment for Block 05.33.021.5.256, VNX Operating
Environment for File 8.1.21.256, and Unisphere 1.3.21.1.0256-1 . 7
Fixed in previous releases ........................................................................... 10
Known problems and limitations ................................................................ 75
Documentation ......................................................................................... 117
Configuring and Managing CIFS on VNX ............................................ 117
Configuring VNX Naming Services ..................................................... 117
Configuring Virtual Data Movers on VNX .......................................... 117
Parameters Guide for VNX for File .................................................... 118
Using FTP, TFTP, and SFTP on VNX .................................................... 119
Using VNX Replicator ......................................................................... 120
VNX 5400 Parts Location Guide ......................................................... 120
Where to get help ..................................................................................... 120

Revision history
Rev Date Description
37 March, 2020 Updated for 05.33.021.5.256 (Block OE), 8.1.21.256 (File OE), and 1.3.21.1.0256-1
(Unisphere).
36 July, 2019 Updated for 1.3.9.1.0239 (USM).
35 May, 2019 Updated for 05.33.009.5.238 (Block OE).
34 January, 2019 Updated for 05.33.009.5.236 (Block OE), 8.1.9.236 (File OE), and 1.3.9.1.0236-1
(Unisphere).
33 July, 2018 Updated for 8.1.9.232 (File OE).
32 April, 2018 Updated the New features and enhancements section.
31 April, 2018 Updated for 05.33.009.5.231 (Block OE), 8.1.9.231 (File OE), and 1.3.9.1.0231
(Unisphere).
30 January, 2018 Updated for 05.33.009.5.218 (Block OE).
29 December, 2017 Updated for 05.33.009.5.217 (Block OE), 8.1.9.217 (File OE), and 1.3.9.1.0217-1
(Unisphere).
28 March, 2017 Updated for 05.33.009.5.186 (Block OE), 8.1.9.211 (File OE), and 1.3.9.1.0210 (USM).
27 December, 2016 Updated Java support content.
26 September, 2016 Updated for 05.33.009.5.184 (Block OE), 8.1.9.184 (File OE), and 1.3.9.1.184 (Unisphere).
25 March, 2016 Updated SSD FAST Cache content.
24 March, 2016 Updated for 05.33.009.5.155 (Block OE), 8.1.9.155 (File OE), and 1.3.9.1.155 (Unisphere).
23 November, 2015 Updated for VNX for File OE 8.1.8.132 and VNX for Block OE 05.33.008.5.132 for the VDM
MetroSync Manager.
22 October, 2015 Updated for VNX for File OE 8.1.8.121 with editorial updates.
21 August, 2015 Updated for VNX for File OE 8.1.8.121.
20 August, 2015 Updated for 05.33.008.5.119 (Block OE), 8.1.8.119 (File OE), and 1.3.8.1.0119 (Unisphere)
19 April, 2015 Updated for 05.33.006.5.102 (Block OE) and 8.1.6.101 (File OE).
18 April, 2015 Updated Block OE fixes from previous document version.
17 March, 2015 Updated for 05.33.006.5.096 (Block OE), 8.1.6.96 (File OE), and 1.3.6.1.0096 (Unisphere).
16 December, 2014 Updated for the release of VNX for Block 05.33.000.5.081.
15 December, 2014 Updated the Data at Rest Encryption (D@RE) feature description
14 December, 2014 Editorial update.
13 November, 2014 Updated fixes and known issues for release:
• 05.33.000.5.079 (Block)
• 8.1.3.79 (File)
12 October, 2014 Editorial update.
11 October, 2014 Updated for a security advisory.
10 September, 2014 The known issues list has been updated.
09 September, 2014 Updated software revision numbers:
• 05.33.000.5.074 (Block)
08 July, 2014 Updated for release:
• 5.33.000.5.072 (Block)
• 8.1.3.72 (File)
• 1.3.3.1.0072-1 (Unisphere)
06 March 2014 Updated fixes from previous document version.
05 February 2014 Updated for release:
• 05.33.000.5.051 (Block)
• 8.1.2.51 (File)
• 1.3.2.1.0051 (Unisphere)
04 January 2014 Changed symptom description for AR607962 in the Fixed problems section under VNX
Operating Environment for Block 05.33.000.5.038.
03 January 2014 Updated for release 05.33.000.5.038 (Block)
02 November 2013 Updated for release 05.33.000.5.035 (Block)
01 October 2013 Initial release of 05.33.000.5.034 (Block), 8.1.1.33 (File), 1.3.1.1.0033 (Unisphere),
1.3.1.1.0033 (ESRS IP Client), 8.1.1.33 (VIA), and 1.3.1.1.0033 (USM)
These release notes contain supplemental information about:
 EMC VNX Operating Environment (OE) for Block
 EMC VNX Operating Environment (OE) for File
System Management
Unisphere UI
Platform
Security
Replication
CIFS
ESRS (Control Station)
RecoverPoint FS
Migration
 Unisphere
Unisphere Analyzer
Unisphere Host software, CLI, and Utilities
Unisphere Quality of Service (QoS) Manager
 Virtual Provisioning
 FAST Cache and FAST VP
 EMC SnapView for:
VNX OE for Block
Admsnap
Admhost
 EMC SAN Copy for VNX OE for Block
 EMC MirrorView/Asynchronous and MirrorView/Synchronous for VNX OE for Block

 EMC Serviceability for VNX OE for Block, VNX OE for File, and Unisphere
Unisphere Service Manager (USM)
VNX Installation Assistant (VIA)
EMC Secure Remote Support (ESRS) IP Client
 EMC Snapshots for:
VNX OE for Block
SnapCLI
 Virtualization for EMC VNX for:
VNX OE for Block
VNX OE for File

Software media, organization, and files


The VNX OE for Block version 05.33.021.5.256 and the VNX OE for File version 8.1.21.256 are available in
their respective upgrade bundles.
To upgrade the VNX OE for Block or the VNX OE for File, use the Unisphere Service Manager
(USM) System Software wizards. For the latest version of USM, go to Online Support and choose Support
by Product > VNX2 Series > Downloads.
Note: You must perform VNX File OE upgrades before performing upgrades for any attached
VNX Block systems.
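If you want to confirm the currently installed versions before you upgrade, you can query them from the command line. The following is a minimal sketch, assuming Navisphere Secure CLI (naviseccli) on a management host and a default Control Station installation; the SP address is a placeholder:

naviseccli -h <SP_IP_address> getagent (the Revision field shows the running Block OE version)
/nas/bin/nas_version (run on the Control Station; displays the installed File OE version)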
You can also obtain updated versions of the following software at Online Support:
 Unisphere version 1.3.21.1.0256-1
Unisphere Host Agent version 1.3.21.1.0256-1
VNX for Block CLI version 7.33.9.2.236
Unisphere Storage System Initialization Utility version 1.3.21.1.0256-1
Unisphere Server Utility version 1.3.21.1.0256-1
 Unisphere Service Manager version 1.3.21.1.0256-1
 VNX Installation Assistant version 8.1.21.256
 EMC Secure Remote Support IP Client version 1.3.21.1.0256-1

Java support
The following Java Platforms are verified by Dell EMC as compatible for use with Unisphere, Unisphere Service
Manager (USM), and VNX Installation Assistant (VIA):
• Standard Edition 1.8
• Standard Edition 9.0
• Standard Edition 10.0
The Unisphere off-array GUI is not supported on JRE 9 and later, due to Java issues. Instead, use Unisphere
Launcher 1.3.9.1.0999 for Windows when running on systems with JRE 9 and later.

IMPORTANT: Some new features or changes may not take effect in the Unisphere GUI after an upgrade. To avoid
this, it is recommended that you clear your Java cache after an upgrade.
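As one possible approach, on a Windows management station you can clear the cache from the Java Control Panel (General > Temporary Internet Files > Settings > Delete Files), or, assuming Java Web Start is installed and on the PATH:

javaws -uninstall (removes all applications from the Java Web Start cache)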

Firmware
The following firmware variants are included with this release:
• If a lower revision is installed, the firmware is automatically upgraded to the revision contained in
this version.
• If a higher revision is running, the firmware is not downgraded to the revision contained in this
version.

Enclosure type Current firmware version
15 Drive 3U DAE (DAE6S) 1.55
25 Drive 2U DAE (DAE5S) 1.55
60 Drive 4U DAE (DAE7S) 8.08
120 Drive 3U DAE (DAE8S) 15.12

Platform BIOS BMC FW POST
MT 33.60 25.60 69.30
JF 33.51 25.60 61.00

Updating drive firmware


It is highly recommended that you perform a drive firmware update directly after any update of the VNX OE
software. The latest version of USM automatically scans for drive firmware updates, and notifies you if any are
available for the drives present in your system. If an update is available, a popup message is displayed at the
bottom of USM’s System page. USM assigns a priority rating to your updates, and allows you to install multiple
firmware updates in a sequential manner.
To generate a customized procedure for updating your drive firmware, go to mydocuments.emc.com, select
VNX series, and then Update VNX software. Answer the questions about your system, and choose the “Install
disk firmware” task to generate the procedure.
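If you want to check the firmware revision currently running on each drive before and after the update, one option is to query the array from a management host. A minimal sketch using Navisphere Secure CLI (the SP address is a placeholder; the output includes a revision field for every disk):

naviseccli -h <SP_IP_address> getdisk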

Online access to VNX installation documents


VNX installation manuals are available exclusively online. It is recommended to download the latest version of
the documentation from the VNX product page at support.EMC.com.

Security information
For information on an individual technical or security advisory, go to the Online Support website and search by
using the DSA number or “Dell Security Advisories” as the keyword. For a list of DSAs in the current year, refer
to Dell Security Advisories – All Dell Products – Current Year. For a list of older DSAs, refer to Dell Security
Advisories – All Dell Products – Archive.

Set up the “My Advisory Alerts” option to receive alerts for Dell Technical Advisories (DTAs) and Dell Security
Advisories (DSAs) to stay informed of critical issues and prevent potential impact to your environment. Go to
Account Settings and Preferences, type the name of an individual product, click to select it from the list, and
then click Add Alert. For the individual product or All Dell Products, select DTAs and/or DSAs.

VDM synchronous replication operations


When you need to perform VDM synchronous replication operations, use VDM MetroSync, which uses
MirrorView replication.

New features and enhancements

Security Enhancements
Support for the TLS 1.2 protocol and disabling the TLS 1.1/1.0 protocol
Management communication into and out of VNX2 systems is encrypted by using SSL. By default, the Storage
Management Server and on-array clients support TLS 1.0, TLS 1.1, and TLS 1.2 protocols for SSL
communications. Disabling the TLS 1.0 protocol means that the Storage Management Server and on-array
clients (except for ESRS Device Client) will only support SSL communications using the TLS 1.1 and TLS 1.2
protocols, and TLS 1.0 will not be considered a valid protocol. Disabling the TLS 1.1 protocol means that
the Storage Management Server and on-array clients (except for ESRS Device Client) will only support SSL
communications using the TLS 1.2 protocol, and TLS 1.0 and TLS 1.1 will not be considered valid protocols.
Disabling TLS 1.0 or 1.1 on VNX2 systems may impact existing applications that are not compatible with the
higher-level protocols. In this case, the lower-level TLS support should remain enabled. The following
functionality will not work when TLS 1.0 is disabled:
• Replication to and from VNX2 systems using software versions earlier than 05.33.021.5.256 and
8.1.21.256.
• Domain management containing a VNX1/VNX2 Control Station using versions earlier than 8.1.21.256.
• Navisphere CLI (on systems using versions earlier than 7.33.x.x.x) cannot connect to the Management
Server. Replication Manager, RPA, ViPR SRM, AppSync, and ESA installations integrated with an older
Navisphere CLI cannot connect to the Management Server either.
If TLS 1.0 is disabled in the network environment (for example, if a switch blocks TLS 1.0 packets), the following
functions will be impacted:
• Unisphere Service Manager (which cannot receive software, drive firmware, and language pack
upgrade notifications)
• ESRS IP Client
• ESRS Device Client on Control Station and Storage Processors
Refer to the Security Configuration Guide for VNX for more information about TLS.
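If you need to confirm which protocol versions the Storage Management Server accepts after a change, one option is to probe the management port from a host. The following is a minimal sketch using the standard OpenSSL client; the SP address is a placeholder, a completed handshake indicates the protocol is still enabled, and a handshake failure indicates it is disabled:

openssl s_client -connect <SP_IP_address>:443 -tls1 (tests TLS 1.0)
openssl s_client -connect <SP_IP_address>:443 -tls1_1 (tests TLS 1.1)
openssl s_client -connect <SP_IP_address>:443 -tls1_2 (tests TLS 1.2)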

Fixed problems

VNX Operating Environment for Block 05.33.021.5.256, VNX Operating Environment for
File 8.1.21.256, and Unisphere 1.3.21.1.0256-1
VNX Block OE (Platform: VNX for Block; Severity: Low; Tracking: 12892375/985130, 13913733/996969, 14828377/1006848)
Symptom: Single storage processor rebooted with bugcheck (0x0340408E) when the SAS controller firmware failed with an outstanding unsuccessful I/O.
Fix: Fixed the code. Fixed in version: 05.33.021.5.256

VNX Block OE (Platform: VNX for Block; Severity: Medium; Tracking: 14140015/999895, 1000481)
Symptom: Data unavailability occurred when one storage processor was down and the other storage processor panicked. An I/O request was stuck and timed out.
Fix: Fixed in code. Fixed in version: 05.33.021.5.256

VNX Block OE (Platform: VNX for Block; Severity: High; Tracking: 1005658)
Symptom: Drives aged around 2 years gave the incorrect alert "Pool has disks with less than 30 days remaining before end of life (EOL)", which led to premature replacement of drives.
Fix: Fixed in code. Fixed in version: 05.33.021.5.256

VNX File OE (Platform: VNX for File; Severity: High; Tracking: 994515, 13669459/993953, 13880217/996324)
Symptom: Rolling Data Mover panics for I/O failure against RDF10.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE (Platform: VNX for File; Severity: High; Tracking: 8266562/922174, 921910)
Symptom: The NFS server became unresponsive when mounted with NFSv4.1.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE (Platform: VNX for File; Severity: Medium; Tracking: 831075)
Symptom: Virtual Machine clone failed with the VAAI-NAS plugin when attempted with NFSv4.1 Kerberos.
Fix: Made code changes to allow the VAAI-NAS plugin to access the datastore when Kerberos is enabled on the NFS export. Fixed in version: 8.1.21.256

VNX File OE (Platform: VNX for File; Severity: Medium; Tracking: 10608226/955589, 956042)
Symptom: Data Mover rebooted when a CIFS client sent an SMB create/open request using the SMB 2.002 protocol version.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE (Platform: VNX for File; Severity: Medium; Tracking: 11479613/967475, 967489)
Symptom: System rebooted when the CEPP service was restarted frequently during CIFS high load.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE (Platform: VNX for File; Severity: High; Tracking: 12314351/976669, 976781)
Symptom: User could not get the correct access time on files when they were copied from a client to a VNX2 NFS share using NFSv4.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE (Platform: VNX for File; Severity: Low; Tracking: 13197238/998848, 1000020)
Symptom: File locking issues when using NFSv4.1. When the server recalled a file delegation, the client replied NFS4ERR_REP_TOO_BIG_TO_CACHE to the CB_SEQUENCE operation.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE (Platform: VNX for File; Severity: Medium; Tracking: 13758935/995202, 97026942/1017345, 1025611)
Symptom: Data Mover was unresponsive due to blocked NFS threads.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE (Platform: VNX for File; Severity: High; Tracking: 14188501/1000024, 1000451)
Symptom: Data Mover was unresponsive to server commands due to maxing out the number of TCP streams.
Fix: Fixed in code. For more information, refer to KnowledgeBase article 000540115. Fixed in version: 8.1.21.256

VNX File OE (Platform: VNX for File; Severity: Medium; Tracking: 14889392/1006912, 1006535/946495, 1020968/921394)
Symptom: Data Mover panicked because of synchronization issues in the SMB Hash layer when BranchCache was enabled.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE (Platform: VNX for File; Severity: Medium; Tracking: 14883236/1007240, 1006208/871106, 1012342)
Symptom: Client was unable to maintain NFSv4.1 sessions to multiple VDMs running on the same Data Mover.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE (Platform: VNX for File; Severity: Medium; Tracking: 14883236/1012342)
Symptom: Data Mover panicked when an NFS client tried to mount/unmount a file system after a syncrep failover or reverse.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE (Platform: VNX for File; Severity: Low; Tracking: 15528354/1013948, 1014088)
Symptom: NFS users (AD-LDAP domain) were denied access to a file system shared in NT mode, causing temporary data unavailability.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, FileSystem (Platform: VNX for File; Severity: Low; Tracking: 11963749/971972, 984884)
Symptom: Data Mover rebooted due to RPC re-transmission.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, FileSystem (Platform: VNX for File; Severity: Medium; Tracking: 12273515/975533, 976136)
Symptom: Data Mover panic/fault message reported during space reclaim.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, FileSystem (Platform: VNX for File; Severity: Medium; Tracking: 12592255/980874)
Symptom: A File System extension caused a Data Mover panic.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, FileSystem (Platform: VNX for File; Severity: Medium; Tracking: 12884419/984278, 984373)
Symptom: Data Mover rebooted due to a failure in initializing the state service.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, SnapSure (Platform: VNX for File; Severity: Medium; Tracking: 10816479/958671)
Symptom: During high memory pressure, the Data Mover panicked while freeing up memory.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, SnapSure (Platform: VNX for File; Severity: Low; Tracking: 11637637/968837, 968955)
Symptom: Data Mover rebooted due to a duplicate entry of a logical volume, with the following message in the server log: Duplicate entry for Logical Volume.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, SnapSure (Platform: VNX for File; Severity: Low; Tracking: 12275320/975559, 975669)
Symptom: Data Mover rebooted during unmount, rename, and mount of File System SnapSure volumes.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, SnapSure (Platform: VNX for File; Severity: Medium; Tracking: 12273515/975767)
Symptom: A refresh session hung while refreshing the remote replication with a user checkpoint.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, SnapSure (Platform: VNX for File; Severity: Low; Tracking: 14225495/1000279, 1000418)
Symptom: Data Mover rebooted due to internal issues while using File System Snapshots.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, Storage (Platform: VNX for File; Severity: Medium; Tracking: 12318493/976175, 976723, 91602714/991560)
Symptom: While running server_stats on the Control Station for the stat path store.volume, the Data Mover panicked due to running out of memory.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, System Management (Platform: VNX for File; Severity: High; Tracking: 11607859/968703)
Symptom: NDMP backups failed.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, System Management (Platform: VNX for File; Severity: Medium; Tracking: 11602043/972107, 972146)
Symptom: File Systems and CIFS Shares XML API queries did not respond to SRM, causing stats collection to fail on the Control Station.
Fix: The issue was resolved by optimizing XML API query processing. Fixed in version: 8.1.21.256

VNX File OE, System Management (Platform: VNX for File; Severity: Medium; Tracking: 11888292/981230)
Symptom: An XML API NFS Exports query failed with the following error: The XML API server was unable to map the mover name to a mover ID.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, System Management (Platform: VNX for File; Severity: Medium; Tracking: 13669459/993953, 13880217/996324)
Symptom: All Data Movers in the system went into rolling panics for I/O failure against an un-activated snap LUN due to a race condition.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, System Management (Platform: VNX for File; Severity: Medium; Tracking: 13942037/998047, 997442)
Symptom: In a File-only (VG) array, after an upgrade to 8.1.9.236, an Internal Error message was reported in the browser when modifying LDAP settings using the GUI.
Fix: Fixed in code. Fixed in version: 8.1.21.256

VNX File OE, System Management (Platform: VNX for File; Severity: Medium; Tracking: 14253279/1001988)
Symptom: An XML API VDMs query failed to return the NFS interface attached to a VDM, as there was no support for NFS interfaces on VDMs in the XML API query.
Fix: Fixed in code. Fixed in version: 8.1.21.256

Backend Features (Platform: VNX for File; Severity: Medium; Tracking: 13523124/992256, 14263910/1001310)
Symptom: nas_rp -cg commands failed with RP_SESSION.open.
Fix: Fixed in code. For more information, refer to KnowledgeBase article 000530951. Fixed in version: 8.1.21.256

Unisphere Backend/CLI (Platform: VNX for Block; Severity: Low; Tracking: 14239238/1001890)
Symptom: In Unified systems, after exiting the ECOM process, some events were generated because a dial home failed.
Fix: Fixed in code. Fixed in version: 8.1.21.256

Unisphere Off Array Tools (Platform: VNX for Block; Severity: Medium; Tracking: 13765890/995478, 1004623)
Symptom: Unisphere Service Manager (USM) 1.3.9.1.0236 was not able to download new software. The following error displayed when trying to download software via USM: Could not connect to sso.emc.com.
Fix: Fixed in code in 1.3.9.239 and later. Since USM 1.3.9.0236 users cannot upgrade from within USM, Dell EMC recommends downloading USM 1.3.9.0239 from Online Support directly. After it is installed, you will be able to download files from within USM. KnowledgeBase Id (Primus ID): 520607. Fixed in version: 8.1.21.256

Fixed in previous releases

VNX Block OE: On systems running 05.33.009.5.236, storage processor bugchecks will not generate dump files for root cause analysis. (Tracking: 995993; Fixed in version: 05.33.009.5.238)
VNX Block OE: Storage Processor did not boot after re-imaging to 05.33.009.5.236. (Tracking: 998884; Fixed in version: 05.33.009.5.238)
VNX Block OE: Some Solid State Drives displayed reduced performance, depending on the usage rate. (Tracking: 900927; Fixed in version: 05.33.009.5.236)
VNX Block OE: One or both storage processors rebooted with bugcheck code 0x0000001E when a drive was streaming errors. (Tracking: 09223868/939650, 937718; Fixed in version: 05.33.009.5.236)

VNX Block OE: When both storage processors rebooted, a storage pool remained either offline or in a degraded state. (Tracking: 09767593/944994, 944607; Fixed in version: 05.33.009.5.236)
VNX Block OE: After some SSD drives reported a 04/xx hardware error, data integrity issues occurred. (Tracking: 954414, 79934524/832634; Fixed in version: 05.33.009.5.236)
VNX Block OE: A storage processor bugcheck (0x05900000) occurred due to corruption in the persistent reservation information. After the bugcheck, the front end ports were not enabled on either SP. (Tracking: 82413498/901270; Fixed in version: 05.33.009.5.236)
VNX Block OE: After a reboot of one storage processor, the Data Mover could not access LUNs by using the peer storage processor. (Tracking: 07761276/967960, 913529; Fixed in version: 05.33.009.5.236)
VNX Block OE: Both storage processors rebooted with bugcheck 0x05900000. (Tracking: 69753738/955693; Fixed in version: 05.33.009.5.236)
VNX Block OE: When a write completed to only some of the drives in a stripe or RAID group, pools and/or LUNs went offline with possible loss of data. (Tracking: 11753742/969796, 969786; Fixed in version: 05.33.009.5.236)
VNX Block OE: On D@RE-enabled systems, if a pool had been created, and later destroyed, while a system verification was in progress, and another pool was created on the same drives, a data loss occurred the next time a storage processor was rebooted for any reason. (Tracking: 10235124/974331; Fixed in version: 05.33.009.5.236)
VNX Block OE: A storage processor rebooted unexpectedly with code 0x01902003. A new Fibre Channel command was initiated during the recovery of an old command which was in progress that used the same IDs. (Tracking: 74128930/770210; Fixed in version: 05.33.009.5.236)
VNX Block OE: The array dial-home did not occur for a Power Supply failure because there were extraneous characters at the end of the Part Number. (Tracking: 73421820/774509; Fixed in version: 05.33.009.5.236)
VNX Block OE: When a storage pool became full, VNX Snapshot entered an error state while attempting to create a Snapshot Mount Point (SMP). The SMP could not be removed to recover the pool. (Tracking: 78550032/822737; Fixed in version: 05.33.009.5.236)
VNX Block OE: A storage processor bugcheck (0x03B03009) occurred when a front end port was trying to respond to a port login when it should not have. (Tracking: 81112690/847947, 848527; Fixed in version: 05.33.009.5.236)
VNX Block OE: A storage processor rebooted with bugcheck 0x0000001E when one of the drives reported numerous errors and could not shut down due to an ongoing proactive sparing operation. (Tracking: 81523532/851720, 851994; Fixed in version: 05.33.009.5.236)
VNX Block OE: A storage processor bugcheck (0xE115801D) occurred during the slice evacuation process. (Tracking: 82666234/866541, 870429; Fixed in version: 05.33.009.5.236)
VNX Block OE: VNX Unisphere reported faulted hardware, but the faults cleared automatically. Refer to KnowledgeBase article 000469382 for more details. (Tracking: 05425865/870923, 10100561/951969, 05425865/954720; Fixed in version: 05.33.009.5.236)
VNX Block OE: A RAID group could not be rebuilt, and a proactive copy could not be aborted, when a source drive failure occurred during a proactive sparing operation. (Tracking: 85103080/893472, 893397; Fixed in version: 05.33.009.5.236)

VNX Block OE: A single storage processor bugcheck (0xDAAAEEEE) occurred during a data movement completion operation under heavy I/O load. (Tracking: 67825708/901680; Fixed in version: 05.33.009.5.236)
VNX Block OE: System drive replacement failed when the system reached the maximum number of drives to be installed. (Tracking: 7207718/905237, 904707; Fixed in version: 05.33.009.5.236)
VNX Block OE: A storage processor rebooted unexpectedly with code 0x01902009. The data structures indicated some level of corruption in data received from the Fibre Channel chip. (Tracking: 91235510/950137, 10240546/950524; Fixed in version: 05.33.009.5.236)
VNX Block OE: During an NDU, a storage processor bugcheck occurred (0xE111801D) in a configuration where a storage pool contained a large number of disks. (Tracking: 10283288/952703, 950964; Fixed in version: 05.33.009.5.236)
VNX Block OE: A storage processor rebooted with bugcheck (0x03404090) when the SAS controller firmware failed with an outstanding unsuccessful I/O request. (Tracking: 12214701/975248, 974756; Fixed in version: 05.33.009.5.236)
VNX Block OE, System Management: Call home failed to report a failed LCC on a unified system. (Tracking: 01230852/982798; Fixed in version: 05.33.009.5.236)
VNX Block OE, System Management, UDoctor, Serviceability: Event 0x712789a3 could not be configured in the Call Home template. (Tracking: 01230852/979451; Fixed in version: 05.33.009.5.236)
VNX Block OE: Write cache was disabled during DIMM replacement on a storage processor. (Tracking: 84293394/885838, 886222; Fixed in version: 05.33.009.5.231)
VNX Block OE: EMC Midrange array products have historically reported values for the "maximum transfer length" and "optimal transfer length" reported in response to inquiry commands on vital product data pages as 65535 blocks. This value is 1 block less than 32 MB. With the release of the Unity product line, we now report our SCSI protocol support level as revision 4. This means that some host operating systems now interpret these values as alignment hints as allowed in the SPC-4 protocol specifications. Disk labeling, file system creation, and volume management utilities that used these values would create labels, file systems, or LVM volumes that were misaligned when using the default settings derived from these values. (Tracking: 919425; Fixed in version: 05.33.009.5.231)
VNX Block OE: Toshiba PM3/PM4 or Samsung REXa drives failed with hardware errors. (Tracking: 939526; Fixed in version: 05.33.009.5.231)
VNX Block OE: On a storage system running version 05.33.006.5.096 (or later), a single storage processor bugcheck (0x01901050) occurred during a period of heavy I/O with link up/down transitions. (Tracking: 76730658/798541, 798576; Fixed in version: 05.33.009.5.231)

VNX Block OE: A single storage processor bugcheck (0x03404090) occurred when the SAS controller firmware failed with an outstanding unsuccessful I/O. (Tracking: 83157332/873250, 874974; Fixed in version: 05.33.009.5.231)
VNX Block OE: Drives reported event 71678050 or 71678053 and were recommended for Proactive Spare (User Correctable) even though no soft media errors were reported. KnowledgeBase ID: 462159. (Tracking: 83755284/880264, 880792; Fixed in version: 05.33.009.5.231)
VNX Block OE: A single storage processor bugcheck (0x0340201A) occurred when the SAS firmware experienced an error. (Tracking: 07100852/902953, 904324; Fixed in version: 05.33.009.5.231)
VNX Block OE: After encryption is enabled on a system, unexpected multi-bit CRC errors can occur, resulting in one or more LUNs going offline. (Tracking: 944762; Fixed in version: 05.33.009.5.218)
VNX Block OE: On a system with encryption enabled, while a storage processor is rebooting, in rare cases an incorrect encryption key can be used, resulting in data loss. (Tracking: 90181412/942106; Fixed in version: 05.33.009.5.218)
VNX Block OE: On a system with encryption enabled, moving a drive from one slot to another can result in an unexpected SP reboot. (Tracking: 944761; Fixed in version: 05.33.009.5.218)
VNX Block OE: User LUNs on the system drives, or System LUNs, were unable to process I/O, went offline, or reported bad blocks after a system drive was replaced. (Tracking: 915524; Fixed in version: 05.33.009.5.217)
VNX Block OE: A host experienced high I/O latency when iSCSI ports took about 10 to 30 seconds to respond to an Abort Task command. (Tracking: 81593210/901502; Fixed in version: 05.33.009.5.217)
VNX Block OE: Storage processor bugchecks occurred (0x03507000) on a 16G Fibre Channel system after a host HBA configuration change. Fixed the code to handle the situation when the FC driver received an invalid command. (Tracking: 6755460/898717; Fixed in version: 05.33.009.5.217)
VNX Block OE: AIX attaches could not boot from a 16G Fibre Channel connection. (Tracking: 5998020/894783; Fixed in version: 05.33.009.5.217)
VNX Block OE: Storage processor bugchecks occurred (0x03b03024) on a 16G Fibre Channel system when internal resources were exhausted. Updated the code to reset the FC chip when this condition is detected. (Tracking: 84941048/891686; Fixed in version: 05.33.009.5.217)
VNX Block OE: Storage processor bugchecks occurred (0x000000d1) on a 16G Fibre Channel system when the host sent a Fibre Channel Packet Response field set to non-zero. (Tracking: 84305006/886163; Fixed in version: 05.33.009.5.217)
VNX Block OE: A drive was faulted when the backend hung because of a SAS backend issue. Updated the drive health check code to handle this condition. (Tracking: 80329754/842508; Fixed in version: 05.33.009.5.217)

VNX Block OE: Incorrect handling of "Test And Clear Volume Attribute" event cancellations resulted in delayed event processing that caused an FF_ASSERT_PANIC on one Storage Processor (SP), and corruption that caused a panic (bugcheck E1258007) on the other SP. Updated the code to properly handle "Test And Clear Volume Attribute" event cancellation requests. (Tracking: 85171126/894304; Fixed in version: 05.33.009.5.217)
VNX Block OE: Affects encrypted systems only. A race condition caused the keys to become corrupted for the system drives, which resulted in inaccessibility to the units on those drives, including the cache vault. Updated the code to fix the race condition in order to avoid key corruption. (Tracking: 83130490/872173; Fixed in version: 05.33.009.5.217)
VNX Block OE: A Storage Processor FF_ASSERT_PANIC occurred, caused by incorrect handling of the "token invalidation state" in the token driver. Updated the code to fix the "token invalidation state" handling to avoid unexpected reboots. (Tracking: 827213; Fixed in version: 05.33.009.5.217)
VNX Block OE: A Storage Processor (SP) panic occurred (bugcheck code 0000007E). Accessing a corrupt path caused the RecoverPoint (RP) Splitter to crash. This was a single SP panic, where the RP Splitter did not recover until the SP came back up. Updated the code to check for RPA path validity to avoid crashing when a path is corrupted. (Tracking: 7428129/920761; Fixed in version: 05.33.009.5.217)
VNX Block OE: The following message was seen in the logs: ERROR: In Cluster 0xxxxxxxxxxxxxxxxx, SP cannot see RPA1. Verify that SP can see all RPAs at the relevant site. The RecoverPoint (RP) Splitter's RecoverPoint Appliance (RPA) discovery mechanism might hang when trying to disconnect a faulty RPA path. (Tracking: 7033910/920760; Fixed in version: 05.33.009.5.217)
VNX Block OE: A Storage Processor (SP) panic (bugcheck D1) occurred due to an access violation (access to an already freed path) while trying to cancel an I/O that was sent to the RecoverPoint Appliance (RPA). This was a single SP panic, where the RecoverPoint (RP) Splitter did not recover until the SP came back up. (Tracking: 85511004/920759; Fixed in version: 05.33.009.5.217)
VNX Block OE: Evacuation objects were stuck in a transition state due to incorrect handling of an abort request during evacuation. Updated the code to handle abort requests during evacuation operations. (Tracking: 6404188/890455; Fixed in version: 05.33.009.5.217)

VNX Block OE: A remote storage pool's available space was incorrectly calculated. This prevented a consistency group synchronization from starting, and incorrectly reported insufficient space on the remote array. Updated the code to correctly calculate the remote array pool space and allow consistency group synchronization to start when pool space is sufficient. (Tracking: 6283141/888294; Fixed in version: 05.33.009.5.217)
VNX Block OE: When a target LUN was trespassed from one Storage Processor (SP) to the other, while the LUN was being relocated and while being accessed by a host, an issue occurred that left the target LUN in a quiesced state, so that the host could no longer access it. (Tracking: 5917925/882929; Fixed in version: 05.33.009.5.217)
VNX Block OE: While internal databases on the vault drives were being changed during an array upgrade, a drive link failure occurred on one Storage Processor (SP), making the drive connection fail over to the peer SP. This resulted in a panic (bugcheck E10581A4) on both SPs. Updated the code to handle mismatched database sizes. (Tracking: 83522860/877849; Fixed in version: 05.33.009.5.217)
VNX Block OE: A recoverable disk error was mishandled, which temporarily caused access loss to storage. Updated the code to correctly attempt a disk error recovery. (Tracking: 81684562/853398; Fixed in version: 05.33.009.5.217)
VNX Block OE: A LUN became unavailable while a Storage Processor (SP) was rebooting, which caused incorrect pool system information on the peer SP and caused the pool to go offline and remain offline. (Tracking: 78095046/814165; Fixed in version: 05.33.009.5.217)
VNX Block OE: Affects encrypted systems only. While a Storage Processor (SP) was rebooting, a drive link failed and that drive's connection was failed over to the peer SP. The SP was then unable to push the encryption keys to that drive, which prevented it from coming online. Fixed the code to check the status of drive link faults before pushing the keys. (Tracking: 74883542/782035; Fixed in version: 05.33.009.5.217)
VNX Block OE: A fresh install on a VNX5200 or VNX5400 in Manufacturing failed. The OE did not load because of a missing or corrupt driver. (Tracking: 885954; Fixed in version: 05.33.009.5.186)
VNX Block OE: Brief (less than 45 seconds) failures of multiple drives may cause LUNs to go offline. (Tracking: 835959; Fixed in version: 05.33.009.5.184)
VNX Block OE: After an NDU, during the LCC firmware upgrade, drives may fail, enclosures may go offline, or the LCC firmware upgrade may fail. (Tracking: 79612508/828617; Fixed in version: 05.33.009.5.184)
VNX Block OE: During an NDU, a single storage processor bugcheck (0000001E) occurred when the peer SP was being upgraded. (Tracking: 79147712/828510; Fixed in version: 05.33.009.5.184)
VNX Block OE: Storage processor bugchecks (0x05900000) occurred due to internal resource starvation. (Tracking: 78491320/821326; Fixed in version: 05.33.009.5.184)

VNX Block OE: During a period of backend instability, multiple internal drive health checks were initiated on the same RAID group, causing the RAID group to be broken temporarily. (Tracking: 78799806/820737; Fixed in version: 05.33.009.5.184)
VNX Block OE: When one SP bugchecked during configuration changes, if the peer SP had low memory or was out of memory, pools and pool LUNs may be erased from persistent storage. (Tracking: 78220474/817161; Fixed in version: 05.33.009.5.184)
VNX Block OE: A RAID group rebuild stopped while in process because of media errors. This caused a portion of the RAID group to be degraded, which impacted performance. (Tracking: 77523988/806755; Fixed in version: 05.33.009.5.184)
VNX Block OE: A drive that was being proactively spared failed at a time when other drives in the RAID group were already faulted (two other drives in RAID-6, and one in RAID-5). However, the RAID group was not marked as broken. This could lead to unexpected results, including, but not limited to, an SP bugcheck. (Tracking: 77001192/801467; Fixed in version: 05.33.009.5.184)
VNX Block OE: LUNs went offline after power was resumed from a failure, and one of the drives came online later than the others. (Tracking: 798952; Fixed in version: 05.33.009.5.184)
VNX Block OE: After the system was upgraded to version R33.155 and the LCC firmware was being upgraded, multiple drives reported as faulted. (Tracking: 798649; Fixed in version: 05.33.009.5.184)
VNX Block OE: A RAID group got stuck in the degraded state when a drive in a redundant RAID group experienced timeouts (which initiated an internal drive health check and rebuild logging). (Tracking: 795742; Fixed in version: 05.33.009.5.184)
VNX Block OE: While an SP was rebooting, the peer SP was manually rebooted, causing the first SP to crash and reboot. (Tracking: 791238; Fixed in version: 05.33.009.5.184)
VNX Block OE: Enclosures may go offline or become faulted as a result of a race condition between Drive and Enclosure handling during an LCC upgrade. (Tracking: 79119078/828579; Fixed in version: 05.33.009.5.184)
VNX Block OE: A single storage processor bugcheck occurred when the SAS controller firmware crashed while it was processing SAS topology changes. (Tracking: 78953658/822547; Fixed in version: 05.33.009.5.184)
VNX Block OE: A direct LUN (DLU) might become inaccessible because a snapshot was created on the DLU during a relocation, or a snapshot was created immediately after the last snapshot on the DLU was destroyed. (Tracking: 78989068/821891; Fixed in version: 05.33.009.5.184)
VNX Block OE: A 15-Drive 3U Disk Array Enclosure (DAE6S) may shut down after a power supply fuse blows. (Tracking: 819633; Fixed in version: 05.33.009.5.184)
VNX Block OE: A LUN became disabled on both SPs after the LUN recovered. (Tracking: 817366; Fixed in version: 05.33.009.5.184)
VNX Block OE: Various issues occurred due to incorrect internal memory allocation handling during cancellation of I/O requests. (Tracking: 72382254/813574; Fixed in version: 05.33.009.5.184)
VNX Block OE: The BBU fault LED did not clear after the BBU was replaced. (Tracking: 813153; Fixed in version: 05.33.009.5.184)

VNX Block OE: After converting from a VNX5600 or VNX5800, a VNX7600 had incorrect LUN limits. (Tracking: 77635160/812007; Fixed in version: 05.33.009.5.184)
VNX Block OE: Could not perform clone operations (such as AddClone, Sync, and Reverse-Sync) on deduplication-enabled LUNs due to insufficient pool space. (Tracking: 77870966/810992; Fixed in version: 05.33.009.5.184)
VNX Block OE: Both SPs rebooted from bugcheck code E117B264. (Tracking: 801555; Fixed in version: 05.33.009.5.184)
VNX Block OE: Access to LUNs on systems with encryption enabled failed after performing a conversion that removed but did not restore KEK-KEKs. (Tracking: 76248726/798707; Fixed in version: 05.33.009.5.184)
VNX Block OE: Users could not attach to snapshots they created in order to allow I/O to and from the snapshots. (Tracking: 75857678/795850; Fixed in version: 05.33.009.5.184)
VNX Block OE: A race condition in auto tiering may occur during slice relocation on a DLU and result in data unavailability or an SP bugcheck. (Tracking: 794838; Fixed in version: 05.33.009.5.184)
VNX Block OE: During a period of heavy I/O, a storage processor bugcheck (0xe117b264) occurred when hosts cancelled I/O during data movement between LUNs (such as that initiated by FAST Cache, migration, and so on). (Tracking: 69090538/778872; Fixed in version: 05.33.009.5.184)
VNX Block OE: SP reboots may occur if some hardware components fail to respond for 180 seconds. (Tracking: 73944754/774563; Fixed in version: 05.33.009.5.184)
VNX Block OE: When FAST Cache was enabled, an NDU failed after a storage processor bugchecked multiple times. (Tracking: 74260452/774197; Fixed in version: 05.33.009.5.184)
VNX Block OE: A single SP bugchecked and rebooted. Several different bugcheck types were possible, depending on the sequence of events. (Tracking: 73271614/766108; Fixed in version: 05.33.009.5.184)
VNX Block OE: After a Block OE upgrade, if the RecoveryImage or UtilityPartition package was installed within one hour, the peer SP might not upgrade the LCC firmware. The firmware version on the SPs would be different. (Tracking: 71437626/734107; Fixed in version: 05.33.009.5.184)
VNX Block OE: A bugcheck (0x000000D1) occurred in some rare instances where the SAS backend experienced a hardware issue. (Tracking: 819434; Fixed in version: 05.33.009.5.184)
VNX Block OE: In rare cases, an SP bugcheck may occur because a thread deadlock could occur in a background job responsible for free space reclamation from LUNs in a pool. (Tracking: 76975134/804021; Fixed in version: 05.33.009.5.184)
VNX Block OE: A Samsung SAS Flash 3 drive failed to come back online after a very brief (less than 3 seconds) power down event to an enclosure. (Tracking: 798651; Fixed in Samsung SAS Flash 3 drive firmware revision EQP6 or later)
VNX Block OE, CBFS: Failover can be delayed during a planned storage processor reboot or shutdown (including during NDU), possibly causing temporary loss of access to data. (Tracking: 825289; Fixed in version: 05.33.009.5.184)
VNX Block OE, MirrorView: Add or sync mirror operations might fail on deduplication-enabled LUNs that have insufficient pool space. (Tracking: 77870966/814187; Fixed in version: 05.33.009.5.184)

VNX Block OE: On a 64-bit system, a user may find that a new repository location specified during USM installation doesn't take effect. The default location (C:\EMC\repository) is used instead. (Tracking: 71961938/745926; Fixed in version: 05.33.009.5.155)
VNX Block OE: Coherency errors were reported with a configured FAST Cache after powering down the system. (Tracking: 69309944/745344; Fixed in version: 05.33.009.5.155)
VNX Block OE: The read and write cache hit ratio showed a value larger than 100% in both the Unisphere GUI and naviseccli. (Tracking: 71746328/739480; Fixed in version: 05.33.009.5.155)
VNX Block OE: Concurrent reads that were not aligned to 64K boundaries and read from overlapped 64K areas experienced degraded performance due to read serialization. This issue was most noticeable when using iSCSI with TCP delayed ACK enabled. (Tracking: 63726358/682360, 66010424/682364, 67624288/694624/682220; Fixed in version: 05.33.009.5.155)
VNX Block OE: When a peer SP was booting and requested the FAST Cache clean/dirty status for the LUNs in the system, the thread that processed the peer-to-peer messaging completed the request before the thread that sent the message exited. (Tracking: 75056638/780445/742695; Fixed in version: 05.33.009.5.155)
VNX Block OE: When performing an NDU from USM, the user couldn't control which SP was primary or secondary. (Tracking: 71829328/758565, 73794720/765110/750158; Fixed in version: 05.33.009.5.155)
VNX Block OE: A user reported that their Data Mover bugchecked. When a user configured FAST VP on DLUs, their I/O operations failed, reporting a device-not-ready error. If the I/O operations came from the Data Mover, a bugcheck occurred. (Tracking: 75852870/790439, 76172048/793963, 76204854/794982, 76330100/795458, 77279582/804586, 77088816/806101, 76558198/806103, 77419972/807071; Fixed in version: 05.33.009.5.155)
VNX Block OE: The storage processor returned the following error message: EV_Agent::Process -- Outstream data xfer error. Err: EMULSocket::send() (Tracking: 68342174/742549; Fixed in version: 05.33.009.5.155)
VNX Block OE: A user was unable to delete a call home notification template when the Graphical User Interface (GUI) language was not set to English. (Tracking: 72744294/756098; Fixed in version: 05.33.009.5.155)
VNX Block OE: A VNX snap restore/destroy operation failed and the user was presented with the following error: The operation failed because a snapshot restore operation is in progress. (Tracking: 69934054/761868; Fixed in version: 05.33.009.5.155)
VNX Block OE: A hardware exception was generated. (Tracking: 72417248/749613; Fixed in version: 05.33.009.5.155)
VNX Block OE: The USM System Verification Wizard timed out on some block-only arrays whose Capture Configuration takes longer than 3 or 4 minutes. (Tracking: 73632390/766201; Fixed in version: 05.33.009.5.155)
VNX Block OE: If a system exceeds the limit for the maximum number of slots, the enclosures exceeding this limit will fail (as well as the drives within these enclosures). The drive faults are persisted, so the drives are not allowed back into the system without manual intervention. This usually happens when a new enclosure is added, which causes the slot count to increase above the maximum. (Tracking: 60659572/619114; Fixed in version: 05.33.009.5.155)

VNX Block OE: After a Storage Processor reboot, the MCR driver is slow in reporting that one or more LUNs comprising the FAST Cache are online. FAST Cache will restart its load of cache pages but will put them into a state that is not in sync with the peer SP. This out-of-sync condition can cause a single SP bugcheck. (Tracking: 61132026/627657, 68372910/698048, 72119400/743317, 73495202/760229, 73947128/767034, 74283170/773365, 74703166/781571; Fixed in version: 05.33.009.5.155)
VNX Block OE: The Management Server restarted because more than one naviseccli getall command was running at the same time. (Tracking: 65651032/670754; Fixed in version: 05.33.009.5.155)
VNX Block OE: After a pool expansion, only part of the expanded capacity is available. (Tracking: 70506790/728910; Fixed in version: 05.33.009.5.155)
VNX Block OE: A disk drive metadata error caused incorrect location and serial numbers. The disk would not be accessed correctly. (Tracking: 65700452/670381; Fixed in version: 05.33.009.5.155)
VNX Block OE: A storage processor bugchecked (0x0000007E or similar). (Tracking: 69468616/710577; Fixed in version: 05.33.009.5.155)
VNX Block OE: A naviseccli request to expand a storage pool may still be performed, even if the rules check fails. (Tracking: 69856422/714631; Fixed in version: 05.33.009.5.155)
VNX Block OE: When a RAID group is degraded, I/Os are suspended and can't be finished. (Tracking: 70736120/724320, 70736120/726015; Fixed in version: 05.33.009.5.155)
VNX Block OE: An LDAP user was able to log into the GUI after the account was disabled on the LDAP server. (Tracking: 7041380/735840; Fixed in version: 05.33.009.5.155)
VNX Block OE: Some offline pool LUNs were marked for recovery due to an underlying metadata corruption. (Tracking: 70598948/737559, 72703416/747627, 73967238/765254; Fixed in version: 05.33.009.5.155)
VNX Block OE: During a non-disruptive upgrade, a Storage Processor bugchecked with code 7E. (Tracking: 72013458/739426; Fixed in version: 05.33.009.5.155)
VNX Block OE: A single storage processor bugchecked when a power failure occurred. (Tracking: 73924290/775848; Fixed in version: 05.33.009.5.155)
VNX Block OE: An ESRS username/password longer than 30 characters was not supported in the GUI. (Tracking: 667307; Fixed in version: 05.33.009.5.155)
VNX Block OE: A naviseccli process caused high CPU utilization. (Tracking: 73009628/751451; Fixed in version: 05.33.009.5.155)
VNX Block OE: A temporary file was stuck in C:\temp and subsequent Config/Capture schedules were affected. (Tracking: 73009628/753515; Fixed in version: 05.33.009.5.155)
VNX Block OE: A clone source LUN and clone LUN might become inconsistent even though they appear in a synchronized/normal state. (Tracking: 760129; Fixed in version: 05.33.009.5.155)
VNX Block OE: When running VAAI I/O for protected volumes, while the volume was in split mode and the backlog was full, the system triggered an SP bugcheck (0xE117B264). (Tracking: 74599468/780452; Fixed in version: 05.33.009.5.155)
VNX Block OE: An NTP time synchronization failed. (Tracking: 69259750/716725; Fixed in version: 05.33.009.5.155)
VNX Block OE: High CPU usage was seen when enabling an NQM Policy. (Tracking: 70692738/732432; Fixed in version: 05.33.009.5.155)
VNX Block OE: A drive was faulted after returning an unexpected error. (Tracking: 71956862/740943; Fixed in version: 05.33.009.5.155)
VNX Block OE: A host lost access during an upgrade. (Tracking: 69230038/746586; Fixed in version: 05.33.009.5.155)
VNX Block OE: The naviseccli port -diagnose -host command didn't work. (Tracking: 74752816/778599; Fixed in version: 05.33.009.5.155)
VNX Block OE: The LUN Provisioning Wizard displayed the "Write caching will be enabled for the LUN, but not for the storage system" pop-up warning, even though Write Cache was enabled. (Tracking: 68538732/715855; Fixed in version: 05.33.009.5.155)

VNX Block OE: A storage processor bugchecked and rebooted with C000021E. (Tracking: 69880048/723766; Fixed in version: 05.33.009.5.155)
VNX Block OE: When I/O was canceled, the storage processor responded with multiple bugchecks. (Tracking: 71359452/738710; Fixed in version: 05.33.009.5.155)
VNX Block OE: A RAID group double faulted after an LCC firmware upgrade. (Tracking: 71380624/749264; Fixed in version: 05.33.009.5.155)
VNX Block OE: When an end-of-life or drive fault is cleared, there is no message in the event log. (Tracking: 72640808/749499; Fixed in version: 05.33.009.5.155)
VNX Block OE: A storage processor rebooted with a bugcheck code of 0x0000001e. (Tracking: 73078514/752546; Fixed in version: 05.33.009.5.155)
VNX Block OE: A hard reset on one or both storage processors occurred following a drive failure. (Tracking: 73689542/761654, 74579368/774552, 74621678/775580, 74585484/775907, 74923400/778812, 75266646/785037, 75587520/787732, 75342362/788163, 72391200/789881, 76568192/797369; Fixed in version: 05.33.009.5.155)
VNX Block OE: A storage processor bugchecked with E1158018. (Tracking: 73717128/763900; Fixed in version: 05.33.009.5.155)
VNX Block OE: A storage processor bugchecked with the code 0x01901005 during an upgrade. (Tracking: 70024168/715256, 76163286/799793, 72875350/755719, 72628862/755259, 72634096/749179; Fixed in version: 05.33.009.5.155)
VNX Block OE: A storage processor rebooted with bugcheck C000021E. (Tracking: 71880024/737140, 70396450/732631, 72924344/750541, 75295458/784584, 75757686/789090, 76346712/795886, 76826284/800160; Fixed in version: 05.33.009.5.155)
VNX Block OE: During the process of converting un-encrypted RAID groups to encrypted RAID groups, there is a window of time where encryption could hang and lead to either a single or dual SP bugcheck. (Tracking: 72481812/745528; Fixed in version: 05.33.009.5.155)
VNX Block OE: SPA responded with a bugcheck code C000021E. (Tracking: 72730540/750297; Fixed in version: 05.33.009.5.155)
VNX Block OE: When an NQM provider set a small delay value (less than 1 ms) for a driver, the NQM driver could keep scanning the I/O list, consuming excessive CPU resources. This could cause the CLI/GUI to stop responding. (Tracking: 71670692/756388, 75296136/790048; Fixed in version: 05.33.009.5.155)
VNX Block OE: A user received an alert that Unisphere could no longer manage the SPs. Navi CLI commands responded very slowly; admin_tlddump.txt showed the TLD response time to be around 4 seconds. (Tracking: 72477754/746981, 69568684/716485, 74911744/782113, 74250132/787804, 74697790/787983; Fixed in version: 05.33.009.5.155)

VNX Block OE: USM displayed an incorrect message after a drive replacement was complete. (Tracking: 742030/729304; Fixed in version: 05.33.009.5.155)
VNX Block OE: When trying to connect a host to a storage group, a message was returned saying: Results from call to add host(s) to the storage group: The overall operation failed. Error details: Success (Tracking: 744343; Fixed in version: 05.33.009.5.155)
VNX Block OE: When the CLI command server_reclaim was executed, error message 2237 was displayed. (Tracking: 747548; Fixed in version: 05.33.009.5.155)
VNX Block OE: A speed change from AUTO to 16G-only will not log in without a link bounce. A speed change from 16G-only to AUTO will not log in without some link bounce. (Tracking: 742243; Fixed in version: 05.33.009.5.155)
VNX Block OE: When a power failure for the entire array (DPE and DAEs) happened, a bugcheck (0x0340406a) occurred. (Tracking: 720833; Fixed in version: 05.33.009.5.155)
VNX Block OE: Attempting to delete a LUN while specific operations are active results in a bugcheck (0x05900000) on both VNX Storage Processors (SPs). These operations include: Clone - Sync; CPM - Copy/Resume; SanCopy - Rollback/SnapLU activate; MLU - Attach/Rollback snap. (Tracking: 68041418/662213; Fixed in version: 05.33.009.5.155)
VNX Block OE: The ODFU wizard did not progress and was stuck on the Prepare for Disk Firmware Installation page. (Tracking: 745970/745613; Fixed in version: 05.33.009.5.155)
VNX Block OE: When a user clicked the Disk Firmware Upgrade notification to initiate an online firmware upgrade and also clicked Software > Disk Firmware > Install Disk Firmware Online, a second window was opened. (Tracking: 746384/746702; Fixed in version: 05.33.009.5.155)
VNX Block OE: User received notifications for information events while only "Error and Critical Error" events were selected in the template. (Tracking: 74078752/771646; Fixed in version: 05.33.009.5.155)
VNX Block OE: After creating a VNX storage pool, the following combination of actions created a system bugcheck (0x76008301): 1. Swapping out a disk in the storage pool; 2. Deleting the storage pool; 3. Creating a new storage pool. (Tracking: 694090; Fixed in version: 05.33.009.5.155)
VNX Block OE: When targeting R32 arrays, USM's LCC Status window can time out if the event log has excessive events. (Tracking: 73119584/753846; Fixed in version: 05.33.009.5.155)
VNX Block OE: When using QoS to limit I/O throughput to a certain value, I/O jumped significantly at regular intervals. (Tracking: 742112; Fixed in version: 05.33.009.5.155)

VNX Block OE: When a read error occurred during a disk copy operation (such as Proactive Copy or rebuild to hot spare), and the data was directly promoted into FAST Cache before the read error could be corrected by a background verify, the FAST Cache reported an error on a host read. (Tracking: 768685, 71024192/737634, 73554468/761400, 74300706/774479, 74465916/775374, 75225728/782594, 75658014/788635, 75825358/790383, 75831040/793192, 76112472/793799, 76366310/798205; Fixed in version: 05.33.009.5.155)
VNX Block OE: A single storage processor bugcheck (0x000000D1) occurred when a clone image was synchronizing and there was an I/O cancellation on the clone source LUN. (Tracking: 71599430/733934; Fixed in version: 05.33.009.5.155)
VNX Block OE: During a period of heavy I/O, a storage processor encountered a bugcheck (0x05900000). (Tracking: 67095570/687329, 66236706/676748, 70235702/720402, 72772694/748981; Fixed in version: 05.33.009.5.155)
VNX Block OE: A service processor bugcheck (0x05900000) occurred while processing aborted host I/O. (Tracking: 69537274/709736, 73524350/772981; Fixed in version: 05.33.009.5.155)
VNX Block OE: A storage processor bugcheck (0xe117b164) occurred when hosts cancelled I/O during data movement between LUNs (such as that initiated by FAST Cache, migration, and so on). (Tracking: 786680, 75965998/791585; Fixed in version: 05.33.009.5.155)
VNX Block OE: A set of deduplication-enabled LUNs in a pool went offline after an NDU from previous versions of the OE. Recovery was run without the benefit of UFSLog Replay. Persistent user data in the dedup container was lost. (Tracking: 789059; Fixed in version: 05.33.009.5.155)
VNX Block OE: RAID groups were broken due to a failed drive in an already degraded RAID group. Refer to the New features and enhancements section to read details about this feature. (Tracking: 740193, 70041390/719629, 70239252/720587, 75873498/790773; Fixed in version: 05.33.009.5.155)
VNX Block OE: There was an I/O timeout due to a CBFS internal deadlock. (Tracking: 757363, 74317178/773445/771591; Fixed in version: 05.33.009.5.155)
VNX Block OE: The network port did not properly release. When this issue occurred, the navi cimom process failed to start. The cimomlog.txt reported that network port 443 was occupied by another process. (Tracking: 75087372/757469, 75087372/782710, 75346340/783692, 75882842/794733; Fixed in version: 05.33.009.5.155)

VNX Block OE Single, and in isolated cases dual, storage processor bugchecks 745992/68583616/711908/ 05.33.009.5.155
(various including 0x00000000, 0x0000001E and 0x06000000) 70124318/718260/70160028/
occurred. 719042/70579306/722143/
70410880/722435/70460808/
727069/71031670/730140/
70720506/729441/71433446/
732583/71032226/732636/
71122914/734498/71653810/
735589/71707848/737354/
71710906/737416/71934826/
741293/72210706/741597/
72218658/745901/72603090/
746812/72449556/746830/
72537124/747406/72716654/
748918/72786516/749463/
72786516/749463/71723488/
750105/72361548/750419/
72460654/750722/72576654/
751103/72562244/751137/
72867486/751836/73034174/
753425/73008808/753836/
73198930/756123/73242084/
757322/73221472/757836/
73344740/758947/73549316/
759556/73522806/761511/
73584960/762244/73788884/
763710/73782220/764006/
73942168/767280/74097100/
767551/73833426/772940/
74354674/773506/74603914/
775232/74685002/775921/
74529834/775942/74633358/
775949/74690526/776281/
74709630/777106/74557752/
777152/74760414/778038/
74914482/780093/74841632/
780320/75000926/780344/
75006366/780590/74933114/
781763/75385234/784760/
75034794/785168/75501106/
785249/75567772/787746/
75536858/788021/75660370/
788621/75532270/789271/
75903016/791076/75987138/
793707/76444362/795984/
76444362/795985/76289730/
796396/76386254/797748/
76579342/802414
VNX Block OE LUNs went offline when an NDU was performed while FAST VP 751240 05.33.009.5.155
was relocating slices to another storage tier.
VNX Block OE, When enabling support options in Unisphere, single sign-on 73132226/ 750931 05.33.009.5.155
Unisphere failed and a popup message stating "an error occurred" was
displayed.
VNX Block OE Both SPs rebooted due to a bugcheck (7E) during recovery. 74485638 / 776467 / 733790 05.33.008.5.119
VNX Block OE During an SP reboot, an error message was displayed within the 72327522/ 745720 05.33.008.5.119
RecoverPoint user interface that said: Splitter XX is down.
VNX Block OE A single storage processor bugcheck (various including 66800310/ 688607 05.33.008.5.119
0xc000021e) occurred following an error on the Fibre Channel
hardware component.
VNX Block OE LUNs with little or no I/O were implicitly trespassed. 62840900/ 643410 05.33.008.5.119
VNX Block OE A single SP bugcheck (0x5900000) occurred due to a race 64975010/ 661855 05.33.008.5.119
condition between a host I/O cancellation and a data movement
completion operation.
VNX Block OE A single SP rebooted with bugcheck 7E 65263512/ 666561 05.33.008.5.119
[SYSTEM_THREAD_EXCEPTION_NOT_HANDLED].
VNX Block OE Relocation failed when a scheduled window ended. This could 673991 05.33.008.5.119
cause LUNs to go offline.
VNX Block OE A single SP bugchecked due to a race condition between the 67016532/ 684704 05.33.008.5.119
aggregate I/O cancellation path and the normal I/O path.
VNX Block OE A single storage processor bugcheck (0x03302004) occurred. 62922652/ 686495 05.33.008.5.119
VNX Block OE The FEDisk code path was executed multiple times, eventually 67384810/ 688757 05.33.008.5.119
exhausting memory.
VNX Block OE A single storage processor bugcheck occurred while replacing a 68250170/ 700491 05.33.008.5.119
system drive.
VNX Block OE Errors occurred when a drive reached end of life and reported 68458206/ 701988 05.33.008.5.119
drive faults at the same time.
VNX Block OE A badly formed log entry appeared in the customer's log. 69505944/ 709249 05.33.008.5.119
VNX Block OE Hosts could experience timeouts due to a bad drive that was 69490164/ 711777 05.33.008.5.119
streaming hardware errors.
VNX Block OE A log entry was recorded in the Windows event log when BBU 69682770/ 712701 05.33.008.5.119
state was temporarily unknown.
VNX Block OE An ESX Storage vMotion failed at 40% while migrating Virtual 70271682/ 721881 05.33.008.5.119
Machines between LUNs in the same Storage Pool.
VNX Block OE When a single storage processor bugcheck occurred, a LUN was 70634516/ 722759 05.33.008.5.119
inaccessible on the other storage processor.
VNX Block OE A storage processor rebooted due to a bugcheck code 64252492/ 663772 05.33.008.5.119
0x03006001.
VNX Block OE A single storage processor bugcheck (0xE111805F) occurred 68492010/ 700300 05.33.008.5.119
when creating/destroying many LUNs and performing LUN
migration at the same time.
VNX Block OE An NDU attempt from a previous version of VNX Block OE failed 69194110/ 715753 05.33.008.5.119
a rule check due to a default SAS address being detected.
VNX Block OE Storage processor B encountered a bugcheck code 0x0000001E. 70890884/ 745687 05.33.008.5.119
Storage processor A encountered a bugcheck code 0x05900000.
VNX Block OE LUNs with little to no I/O were implicitly trespassed. 61018100/ 636333 05.33.008.5.119
VNX Block OE A single storage processor bugcheck occurred due to the 67626278/ 699803 05.33.008.5.119
handling of a rare SAS controller/firmware issue.
VNX Block OE When configuring IPv4 on an iSCSI or management port, the IP 72704554/ 749117 05.33.008.5.119
configuration failed if the fourth octet was 255 and the subnet
mask was shorter than 24 bits.
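Note: such addresses are in fact valid host addresses. With a mask shorter than 24 bits, a fourth octet of 255 is not the subnet broadcast address, so rejecting it was a defect. A minimal sketch of the check using Python's ipaddress module (the address below is hypothetical, chosen only to illustrate):

    import ipaddress

    # 192.168.2.255 inside a /23 is an ordinary host address, not the broadcast.
    iface = ipaddress.ip_interface("192.168.2.255/23")
    print(iface.network.broadcast_address)               # 192.168.3.255
    print(iface.ip == iface.network.broadcast_address)   # False: valid to assign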
VNX Block OE VNX could not be used in a Read-Only Domain Controller 67585474/ 734750 05.33.008.5.119
(RODC) environment for W2K8_R2/W2K12 domains.
VNX Block CLI An error was returned when several naviseccli connection 693686 05.33.008.5.119
-pingnode or naviseccli connection -traceroute commands
were executed at the same time.
VNX Block OE, Unisphere Analyzer disk statistics were not always updated 667981 05.33.008.5.119
Analyzer (could show 0).
VNX Block OE, The LUN prepare process was frozen. 68016422 / 700220 05.33.008.5.119
CBFS
VNX Block OE, A service processor bugcheck (0x0000007E) occurred when a 69776356/ 713410 05.33.008.5.119
CBFS LUN recovery was running.
VNX Block OE, A single storage processor bugcheck occurred because the log 69319126 / 719632 05.33.008.5.119
CBFS space could not be released normally.
VNX Block OE, Pool LUNs were offline after creating a large number of LUNs at 64781490 / 659363 05.33.008.5.119
CBFS the same time.
VNX Block OE, An added system disappeared after relaunching ESRS. 71611654/724452/619114 05.33.008.5.119
ESRS
VNX Block OE, A single storage processor bugcheck (0x01901004, 0x0190101A) 61787212/687024/687026 05.33.008.5.119
Fibre Channel occurred when a Fibre Channel logout command was received
while processing a link down event.
VNX Block OE, A race condition between the removal of a CAS command and 64825240/659963 05.33.008.5.119
Host its completion resulted in an SP bugcheck (0x05900000).
VNX Block OE, Storage processor bugchecks occurred (various, including 65654434/711927/671492 05.33.008.5.119
iSCSI Driver 0x05900000 and 0x0000001E) on an iSCSI connected storage
system due to a packet storm received by the iSCSI data ports.
VNX Block OE, A single processor bugcheck (0xE1198006) occurred. 68061026/ 695711 05.33.008.5.119
MirrorView
VNX Block OE, If a storage processor was shut down, the peer storage 69898952/ 715224 05.33.008.5.119
MirrorView processor rebooted unexpectedly due to incorrect handling of
an error condition within MirrorView/S.
VNX Block OE, An error message with 0x715281e8 was reported in the navi 64921968/ 662696 05.33.008.5.119
MirrorView log.
VNX Block OE, Incomplete I/O requests were seen by an ESX server under a 60859568/ 669717 05.33.008.5.119
MirrorView workload with overlapping write requests.
VNX Block OE, A Navi Cimom dump was seen due to memory leaks in 55507470/ 702768 05.33.008.5.119
MirrorView MirrorView/S admin.
VNX Block OE, Attempting to destroy a mirror resulted in one or both storage 69335508/ 707403 05.33.008.5.119
MirrorView processors rebooting unexpectedly.
VNX Block OE, During a power failure, the storage pool was incorrectly marked 64781490/ 706840 05.33.008.5.119
NFS as recovery needed.
VNX Block OE, A single SP bugcheck occurred. 69045616/ 704235 05.33.008.5.119
Platforms
VNX Block OE, SnapView sessions were stopped with an error message: 62970312/ 724669 05.33.008.5.119
SnapView a100402d.
VNX Block OE, A single SP bugcheck occurred (0x000000d1) and host I/O was 67742876/ 691992 05.33.008.5.119
Snap Clones removed for the clone source LUN.
VNX Block OE, The management servers restarted frequently on both SPs. 60933588/ 662081 05.33.008.5.119
System
Management
VNX Block OE, Storage Processor B was very slow to respond to any commands 65228246/ 666980 05.33.008.5.119
System and had to be restarted.
Management
VNX Block OE, UDoctor received .xml files, but the .xml files were not 69353288/ 709946 05.33.008.5.119
System processed and were stuck in the UDoctor folder.
Management,
UDoctor,
Serviceability
VNX Block OE, The system was unable to create FAST Cache during a non- 65169048/ 664516 05.33.008.5.119
Unisphere disruptive upgrade.
VNX Block OE, A pool expansion failed. 64586292/ 686517 05.33.008.5.119
Unisphere
VNX Block OE, A clone could not be created if the clone destination LUN was a 68384336/ 698899/ 703769 05.33.008.5.119
Unisphere metaLUN and the metaLUN had a configured capacity.
VNX Block OE, LDAP login failed if there was no leaf certificate in an LDAP 68540890/ 701325/680720 05.33.008.5.119
Unisphere configuration.
VNX Block OE, The USM DAE install wizard's cabling suggestion was incorrect. 66354048/ 677685 05.33.008.5.119
USM
VNX Block OE, The USM report wizard created folders with multiple version 65466552/ 668662 05.33.008.5.119
USM, strings when it created and saved the reports.
Unisphere
VNX Block OE, When a storage pool encountered an error and went offline, the 67699060/ 692639 05.33.008.5.119
Virtual naviseccli storagepool -list command could still
Provisioning show the status as OK.
VNX Block OE, A storage processor rebooted due to bugcheck code 68222912/ 697328 05.33.008.5.119
Virtual 0xE111805F.
Provisioning
VNX Block OE, In a rare instance, a single SP bugcheck (0xC000021E) 653263 05.33.008.5.119
Virtual occurred.
Provisioning
VNX Block OE SP A rebooted due to a bugcheck. 702129 05.33.006.5.102
VNX Block OE, A single SP bugcheck occurred when the array was under high 692912 05.33.006.5.102
Virtual pressure from external I/Os and internal background
Provisioning operations, while at the same time the other SP rebooted.
VNX Block OE When deduplication was not enabled on a LUN in a pool, the 69046416 / 710022 / 653311 05.33.006.5.096
query command on the pool returned an error (2237).
VNX Block OE SP A bugchecked when SPB had 250 virtual desktops that had 68197756 /698062 / 690469 05.33.006.5.096
been deduped and completed the I/O run.
VNX Block OE A Link Control Card (LCC) or an Inter Connect Module (ICM) in a 599765 / 614380 05.33.006.5.096
VNX DAE reported a power supply unit (PSU) fault or fan fault
because it did not detect the replaced module.
VNX Block OE Under certain conditions, reverse read was reported as several 65649146/ 695091 05.33.006.5.096
times slower than read.
VNX Block OE In certain scenarios, the key manager could enter a deadlock 66912942/ 683415 05.33.006.5.096
that prevented the peer SP from coming up.
VNX Block OE FAST VP relocation (moving data within one storage pool) would 665509, 670416, 676377 / 05.33.006.5.096
occasionally fail with error code 0xe12d8417 when its scheduled 64183616
relocation window ended, causing the LUN to go offline.
VNX Block OE On rare occasions, VNX disks reported faults because the VNX 687676 / 67152848 05.33.006.5.096
system did not handle IO fault states correctly.
VNX Block OE Initiating a non-disruptive Upgrade (NDU) of the VNX OE bundle 66051036/673828 05.33.006.5.096
while also installing a large number of enablers and creating a
Synchronous Mirror led to failures when creating legacy
SnapView sessions.
VNX Block OE Non-disruptive Upgrade (NDU) failed due to the Windows 613202 05.33.006.5.096
command bcdedit failing to run. The error message "The data
area passed to a system call is too small" should exist in
c:\temp\ndu-bcdedit-log.out.
VNX Block OE One SP would not go online after it was restarted. The SP 645883 05.33.006.5.096
became pingable, but the getagent command failed with a
timeout.
VNX Block OE If a user sent IO to a RAID Group before expanding a LUN, only a 633096 05.33.006.5.096
small number of slices were moved to the new RAID group.
VNX Block OE Event ID 10 from source WMI was logged in the Application log 657111 05.33.006.5.096
after every reboot.
VNX Block OE If a user modified a property of a 10Gb iSCSI port, the modified 648479/ 659444 05.33.006.5.096
port entered a degraded state, resulting in suboptimal
performance.
VNX Block OE A user migration session changed to a system migration 656149 05.33.006.5.096
(compression/deduplication) session unexpectedly, then became
stuck in the SYNCHRONIZED state.
VNX Block OE Creation of a snapshot or snap session was supported only on 640680 05.33.006.5.096
LUNs smaller than 256 TB.
VNX Block OE Failure mode and effects analysis was improperly handling 546173/ 548762 05.33.006.5.096
internal fault reporting for the battery backup unit.
VNX Block OE Any failure of the read for a RAID Group capacity resulted in an 565361/ 567110 05.33.006.5.096
“insufficient capacity” error message, even if capacity was not
the cause of the error.
VNX Block OE Seagate drive write performance was not optimal. 53114380/ 564310 05.33.006.5.096
584313
VNX Block OE In rare occurrences, when a LUN was reset, the trespass 47877908/ 607594 05.33.006.5.096
operation did not complete, causing bugcheck 0xE111805F.
VNX Block OE The network port for the VNX management interface could not 61370926/ 628608 05.33.006.5.096
be changed to auto.
VNX Block OE The VNX Management Server was not accessible because a 61640548/ 627975 05.33.006.5.096
connection limit was reached. 62520788/ 638971
63182168/ 643064
63607866/ 647050
64209578/ 667181
60322976/ 613498
67604626/690953
VNX Block OE If both the VNX A1 and B1 power supplies were missing or 637601 05.33.006.5.096
faulted, the system incorrectly reported fans 1, 2, 5, and 6 as
"Ok" even though they were faulted and no longer running. If
both the VNX A0 and B0 power supplies were missing or faulted,
the system incorrectly reported fans 3, 4, 8, and 9 as faulted.
VNX Block OE Unisphere returned a battery "Not Ready" alert during the 58482710 /642288 05.33.006.5.096
weekly battery tests even when the battery was healthy.
VNX Block OE If the Storage Processor (SP) did not have a serviceable power 62898824/ 640321 05.33.006.5.096
supply, the SP would shut down during local SPS testing.
VNX Block OE If a system battery backup unit (BBU) was not ready, and the 63502640/ 647811 05.33.006.5.096
other system BBU was in test mode, the cache status was
reported as failed.
VNX Block OE If connections to one of the VNX Storage Processor's (SP) link 63823874/ 649179 05.33.006.5.096
control cards (LCCs) became unreliable, the IO was not properly
redirected to the other system SP. This could result in system
unavailability, with event logs showing the LCC going up and
down.
VNX Block OE The maximum transmission unit (MTU) size for VNX Storage 63910430/ 650471 05.33.006.5.096
Processor (SP) iSCSI ports operated at 14 bytes less than the
value set for them.
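The 14-byte gap matches the size of an Ethernet II header (6-byte destination MAC, 6-byte source MAC, 2-byte EtherType); that attribution is an assumption on our part, not a statement from the fix. A worked example in Python:

    # Assumption: the shortfall equals an Ethernet II header (6 + 6 + 2 bytes).
    ETH_HEADER_BYTES = 6 + 6 + 2
    configured_mtu = 9000                   # e.g., jumbo frames on an iSCSI port
    effective_mtu = configured_mtu - ETH_HEADER_BYTES
    print(effective_mtu)                    # 8986, the size actually honored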
VNX Block OE During a period of heavy I/O, storage processor bugchecks 60487186/ 650678 05.33.006.5.096
(various including 0xDAAAEEEE, 0x0000007E) occurred when
hosts cancelled I/O during data movement between LUNs (such
as that initiated by FAST Cache, migration and so on).
VNX Block OE When attempting a non-disruptive upgrade (NDU) for a VNX 63260516/ 651506 05.33.006.5.096
system, connectivity to the system was temporarily interrupted
while the system detected the default expander SAS address.
VNX Block OE Requests from Storage Resource Management (SRM) software 654611 05.33.006.5.096
issued using the EMC XML API failed with an HTTP 503 [Service
unavailable] error.
VNX Block OE The Storage Processor (SP) Setport command did not reboot 64483380 / 656693 05.33.006.5.096
the SP when the command completed running.
VNX Block OE When upgrading the VNX OE, if a drive failed (and was building 667695 05.33.006.5.096
to a hotspare) while one system Storage Processor (SP) was
upgraded and the other was not, the hotspare operation failed
and the RAID group became degraded.
VNX Block OE A single Storage Processor (SP) bugcheck (various, including 65771190/ 671378/717629 05.33.006.5.096
0x05900000 and 0x0000001e) occurred in the FAST Cache driver
when it was unable to acquire an internal resource.
VNX Block OE Attempts to expand a virtual provisioning pool for a direct LUN 64499178/ 657304 05.33.006.5.096
(DLU) failed with error code 712d8e0e because an internal
create-mapping request and it’s I/O write operation required
the same memory.
VNX Block OE An attempt to expand a LUN failed with the 712d8e0e error 64472012/ 657569 05.33.006.5.096
code.
VNX Block OE A VNX system IO module could not be reconfigured. 64483380/ 661513 05.33.006.5.096
VNX Block OE Service interruption sometimes occurred during a non- 63260516/ 665700 05.33.006.5.096
disruptive upgrade of an array if a default expander SAS address
existed.
VNX Block OE After a non-disruptive upgrade to the VNX Block OE package 66051036/ 673856 05.33.006.5.096
05.33.000.5.072 or 05.33.000.5.074, with enablers installed,
when a MirrorView/S session was created, the SnapView
session failed.
VNX Block OE Rebuild rates were slow, sometimes taking days or weeks 65259884 / 664411/680711 05.33.006.5.096
to complete.
VNX Block OE A RAID group double faulted. 661327 05.33.006.5.096
VNX Block OE, If the kernel decided to move the deduplication domain from 649682 05.33.006.5.096
Block one SP to another, the kernel would try to stop this effort. As a
Deduplication result, the second SP experienced a sharing violation.
VNX Block OE, In VNX systems where FAST VP was enabled, intermittent 649085 / 63732104 05.33.006.5.096
CBFS service interruption could occur because of a single SP bugcheck
(code C000021E).
VNX Block OE, VNX systems occasionally experienced precise one-second gaps 646038 / 60964096 05.33.006.5.096
Platform in CMI peer-to-peer traffic flow (IO) in latency scenarios, where
Storage Processor A (SPA) would reboot repeatedly until SPB
was rebooted.
VNX Block OE, If a RecoverPoint Splitter failed to read from the storage while a 683826 / 64500382 05.33.006.5.096
RecoverPoint volume was in Virtual Access, a Storage Processor bugcheck
could potentially occur.
VNX Block OE, When Data Copy Avoidance (DCA) was enabled, a deadlock 683830, 654290 / 65434254 05.33.006.5.096
RecoverPoint could occur when IO to the RecoverPoint Appliance (RPA) failed
while the VNX Storage Processor had insufficient resources to
accommodate the IO demands. After 180 seconds, the IO
interruption caused a Storage Processor bugcheck.
VNX Block OE, After enabling NQM, an ESX server seemed to lose connectivity 691525 / 66260842 05.33.006.5.096
Unisphere due to performance issues.
NQM When this issue occurred, multiple NQM control engine threads
continually set poor performance parameters every 20
seconds.
VNX Block OE, LUNs sometimes failed to come online after a power failure or 579483, 545677 05.33.006.5.096
Virtual after resolving a hardware failure that led to the offline
Provisioning condition.
VNX Block OE Unexpected results occurred when a storage processor 67209396/688738 05.33.000.5.081
rebooted during data in place encryption. 67136530/686425
67173272/686779
VNX Block OE A storage processor became unresponsive when updating SAS 66040194/674144 05.33.000.5.079
controller firmware during an NDU from an OE version earlier
than 05.33.000.5.072 to an OE version 05.33.000.5.072 (or
later).
VNX Block OE A storage processor bugcheck (0x01901004) occurred when the 65185426/663684 05.33.000.5.079
Fibre Channel speed setting was manually changed via 65365958/665952
Unisphere GUI or CLI.
VNX Block OE Coherency errors were reported during an NDU to 65298752/ 664729 05.33.000.5.079
05.33.000.5.074 from an earlier version of code (prior to
05.33.000.5.072).
VNX Block OE Repeated storage processor bugcheck (including 0x0000001e) 65270166/672994 05.33.000.5.079
occurred following an NDU failure.
VNX Block OE Coherency errors were reported during an NDU when one SP 684145 05.33.000.5.079
was running 05.33.000.5.074 and the other SP was running an
earlier version of code (prior to 05.33.000.5.072).
VNX Block OE Unisphere allowed users to set the speed on the port of a 1Gb 649466 05.33.000.5.072
iSCSI/TOE IO Module to either 10Mb, 100Mb, or 1Gb.
VNX Block OE A subset of LUNs in a storage pool remained offline following 542529, 545677 05.33.000.5.072
double-faulted disk removal.
VNX Block OE When a VNX5200 or VNX5400 system was installed with the 618635 05.33.000.5.072
maximum number of I/O modules, but at least one I/O module
was uninitialized, an alert message occasionally occurred indicating
that an uninitialized I/O module had exceeded the system limit.
VNX Block OE An SP bug check occurred during software upgrade or 556166, 562545 05.33.000.5.072
installation operations.
VNX Block OE Running the naviseccli ndu -list command in engineering mode 567026 05.33.000.5.072
sometimes showed a default version.
VNX Block OE Incorrect current output for the DC power supply was reported. 591576, 593267 05.33.000.5.072
VNX Block OE A single storage processor bugcheck (various, including 65789046/671288 05.33.000.5.072
0x05900000 and 0x0000007e) occurred. 66357332/677368
VNX Block OE After upgrading to VNX for Block OE version 05.33.000.5.072, 664045 05.33.000.5.072
coherency errors were reported in the system logs.
VNX Block OE Hardware modules were being indicted in logs/traces by ESP 637319 05.33.000.5.072
due to underlying issues in the environment path.
VNX Block OE A user received a data corrupted error even though the disk 627961/635306 05.33.000.5.072
array had recovered those sectors.
VNX Block OE A LUN's Address Offset value was incorrect. 621573 05.33.000.5.072
VNX Block OE The RAID group entered a cycle of full rebuilds, followed by a 621540 05.33.000.5.072
short period where the copy started and the drive position went
offline.
VNX Block OE Under heavy I/O load utilizing FAST Cache, an internal timer 622564 05.33.000.5.072
could trigger a single processor bug check.
VNX Block OE Thin LUN 131 of Pool 1 went offline with error code 62859966/639466 05.33.000.5.072
0xE12D8D0D.
VNX Block OE Drives were faulted if their firmware did not support Enhanced 580959 05.33.000.5.072
Queuing. The drives were marked as Faulted with a reason
indicating enhanced queuing check.
VNX Block OE During powerup of an enclosure or during a power glitch, one or 613776, 629616 05.33.000.5.072
more drives did not progress to the “ready” state.
VNX Block OE During an LCC, SLIC, or Base Module replacement, multiple 622581, 626540 05.33.000.5.072
drives within a RAID group went offline, causing rebuild logging
to initiate.
VNX Block OE SPA and SPB encountered error code 0x05900000 in the Data 62778786/639100 05.33.000.5.072
Mover Library (DML).
VNX Block OE Users experienced intermittent 1-second latency spikes. 60964096/648029 05.33.000.5.072
VNX Block OE A user saw the same enclosure reported twice when only one 60659572/616891 05.33.000.5.072
was present.
VNX Block OE Scheduled Battery Backup Unit (BBU) checks showed a BBU was 612219 05.33.000.5.072
faulted, even though the component was operational.
VNX Block OE An SP bugchecked during shutdown if there were many 627774, 616729, 627776 05.33.000.5.072
TLUs/DLUs in failover mode.
VNX Block OE Requests for overlapping cache pages between the FAST Cache 05.33.000.5.072
Idle Cleaner and host requests resulted in a deadlock that
timed out after 10 seconds.
VNX Block OE The amber LED was illuminated when a cable was connected to 612801 05.33.000.5.072
iSCSI ports.
VNX Block OE Unisphere and FBECLI Peer Info reported IO module limits 618635 05.33.000.5.072
incorrectly.
VNX Block OE Under heavy front-end FCoE I/O, a single storage processor bug 593740 05.33.000.5.072
check (0x01901008) occurred.
VNX Block OE A bugcheck occurred with error code 0x0340201a. 62294340/637286 05.33.000.5.072
VNX Block OE, I/O to the affected LUN hung when the error path was hit. The 646502 05.33.000.5.072
CBFS system could bugcheck due to an IO timeout.
VNX Block OE, SCSI GET_LBA_STATUS commands map to VMFR commands 628196 05.33.000.5.072
CBFS from MLU to CBFS. When many of these commands were sent
to the system, the CPU became busy, leading to a bugcheck.
This happened when running:
- Win2K12 with TRIM enabled
- RecoverPoint with Thin Extender enabled
VNX Block OE, A watchdog panic occurred because two cores held a spinlock 619482 05.33.000.5.072
CBFS for a long time.
VNX Block OE, Some mappings could not proceed and the following error was 626128 05.33.000.5.072
CBFS returned:
CBFSA: UFS: 4: IndUsableList::Load failed to get free entry.
System may be <NL>
CBFSA: stressed! CurrentNumAllocatedEntries (368260)
MaxEntries (368255)
VNX Block OE, A false message was presented to the user saying that the 59070064/602429 05.33.000.5.072
Platform management switch type had changed, when no change had
been made.
VNX Block OE, IPv6 would not work on SPA. 59876582/610818 05.33.000.5.072
Platform
VNX Block OE, IPv6 configuration failed when the network prefix started with 59876582/614231, 633470 05.33.000.5.072
Platform fc00::
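For background, fc00::/7 is the IPv6 unique local address (ULA) range defined by RFC 4193, so prefixes beginning with fc00:: are legitimate for private addressing and should be accepted. A quick check with Python's ipaddress module (the address is hypothetical):

    import ipaddress

    # Unique local addresses (fc00::/7) are private and valid to assign.
    addr = ipaddress.ip_address("fc00::1")
    print(addr.is_private)                            # True
    print(addr in ipaddress.ip_network("fc00::/7"))   # True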
VNX Block OE, Data was unavailable when one of the SPs was rebooting. Error 61296778/622987 05.33.000.5.072
Platform Code 0x25 was displayed.
VNX Block OE, When more than 64 FC front-end ports were configured on an 63411674/645909 05.33.000.5.072
Platform array, not all of the ports were visible to the hosts and/or
switch. Several were assigned duplicate WWNs.
VNX Block OE, SP queue statistics were disabled and displayed as zero in the 565385, 588543 05.33.000.5.072
Platform GUI and CLI.
VNX Block OE, An incremental SAN Copy session or a MirrorView/A session 560498, 561780 05.33.000.5.072
SANCopy for sometimes failed to be removed.
Block
VNX Block OE, LDAP host queries did not report IPv6 addresses. 602169 05.33.000.5.072
Security
VNX Block OE, False-positive logs and user traces indicated MVA thread 545496 05.33.000.5.072
MirrorView/A, starvation.
MirrorView/S
for Block
VNX Block OE, After a LUN encountered an internal error during a trespass, it 534327, 539374 05.33.000.5.072
Virtual could not be brought back online.
Provisioning
VNX Block OE, A single SP bugcheck occurred because of a CDCA (cache 557018, 632291 05.33.000.5.072
Virtual dirty/can't assign) condition after dual SP reboots.
Provisioning
VNX Block OE The serial attached SCSI (SAS) status LED for Port 2 on the 566542 05.33.000.5.051
Base Module was not ON.
VNX Block OE A single VNX Storage Processor (SP) rebooted when a host 594685 05.33.000.5.051
performed a 1MB write operation to a system attached to a
RecoverPoint Appliance (RPA) for replication.
VNX Block OE MirrorView Asynchronous connections were fractured on a VNX 595166 05.33.000.5.051
system. The following symptoms could occur:
Deduplication LUNs were temporarily unable to provide I/O, and
multiple snapshots remained in a destroying state in the
deduplication domain.
A single storage processor (SP) rebooted after LUN expansion.
VNX Block OE When a drive was removed, no event was logged to indicate the 593320 05.33.000.5.051
reason for the drive fault.
The Taken offline event (ID 7167802f) was dropped from both
the Windows event log and the VNX event log because the string
length exceeded the maximum character limit.
VNX Block OE After a driver reset, an unexpected SP reboot occurred, 593659 05.33.000.5.051
producing an alert/log message such as the following: CPD
PANIC - FCOEDMQL 1 (FE5)
VNX Block OE An inaccurate VNX call-home message was generated indicating 595715 05.33.000.5.051
that a backup battery unit (BBU) was missing when it was
actually present. This could cause BBU status indications to
show the component status as degraded.
VNX Block OE Storage processor bugchecks (0x00000041) occurred due to 596197 05.33.000.5.051
excessive traffic on the management ports.
VNX Block OE The VNX generated warning messages (BMS found 184 entries) 597131 05.33.000.5.051
because the number of Background Media Scan (BMS) log
entries recorded for vault drives quickly reached defined
thresholds.
VNX Block OE Coherency errors for RAID-1 and RAID-6 occasionally did not 599355 05.33.000.5.051
generate the proper XOR sector trace messages in debug
operations (performed by EMC Support).
VNX Block OE A dial home event was generated even though no fault had 599721 05.33.000.5.051
occurred.
VNX Block OE, A single SP bugcheck could occur when multiple rollback 583999 05.33.000.5.051
Platforms processes were simultaneously active.
VNX Block OE, A single SP reboot could occur when multiple rollback 583999 05.33.000.5.051
SnapView processes were simultaneously active.
VNX Block OE mgmtd used almost 100% CPU on the Disaster Recovery-side 645614 / SR63207028 05.33.000.5.038
Control Station.
VNX Block OE SPA and SPB bugchecked within minutes of each other, and their 607962 05.33.000.5.038
associated LUNs and DMs went offline.
This problem occurred every 90-99 days in the following systems:
VNX5200, VNX5400, VNX5600, VNX5800, VNX7600
This problem occurred in a VNX8000 system every 80 days.
VNX Block OE When the weekly BBU test time arrived, SPA and SPB could start 601545 05.33.000.5.038
their BBU tests at the same time. When the BBU test later
completed, it was not marked as completed, and the BBU test
started again repeatedly.
VNX Block OE After a reboot, pool LUNs could go offline. The mount could fail 606436 05.33.000.5.038
due to an unexpected slice object state initialized from the slice 608736
cache.
VNX Block OE, In a system configured for large deduplication, a mismatched 605245 05.33.000.5.038
Deduplication deduplication counter size/cast (64-bit/32-bit) could cause data
loss.
VNX Block OE When copper twinaxial cables that supported iSCSI connections 58805510 05.33.000.5.035
were in use, iSCSI hosts experienced data unavailable (DU) after /599567
a non-disruptive upgrade (NDU) to VNX OE for Block version
05.33.000.5.034.
VNX Block OE Some disks remained in the Transitioning state for more than 1 577106 05.33.000.5.034
day after being reenabled.
VNX Block OE Could not upgrade the drive firmware for Micron Buckhorn SAS 573654 05.33.000.5.034
drives which had a drive firmware size of over 2 MB.
VNX Block OE A panic occurred while shutting the system down. 574220 05.33.000.5.034
VNX Block OE SP B reported a panic and created a dump file. 574235 05.33.000.5.034
VNX Block OE The SP Fault LED blink rates were half the expected rate during 577670 05.33.000.5.034
normal and degraded boot scenarios.
VNX Block OE Physical power switches on the SPs were automatically turned 578752 05.33.000.5.034
OFF during a power down. The array did not power up after a
power glitch (or short-term power loss).
VNX Block OE, There was not enough free space in a pool to recover LUNs. 580536 / 569009 05.33.000.5.034
Virtual
Provisioning
VNX Block OE Battery Backup Unit (BBU) A: Not Ready || Battery Backup Unit 593666 05.33.000.5.034
B: Not Ready was reported in Unisphere.
VNX Block OE The BBU self test was in a cycle where the tests repeated 584042 05.33.000.5.034
continuously.
VNX Block OE An on-array debug utility/process was inadvertently left enabled 504517 05.33.000.5.034
in the shipping version of the VNX OE for Block code.
VNX Block OE, When attempting to delete a deduplication LUN while there 574723 05.33.000.5.034
Deduplication were some active copy requests in-progress, the deduplication
LUN could get stuck in the destroying state.
VNX Block OE, When shrink was performed while deduplication was running 577233 05.33.000.5.034
Deduplication and in the process of collecting data to be deduplicated, the
deduplication job became stuck in a loop and made no
progress.
VNX Block OE, Snap creation failed with the error message: “The operation 577314 05.33.000.5.034
Snapshots cannot be performed because the LUN is ‘Preparing’. Wait for
the LUN’s Current Operation to complete ‘Preparing’ and retry
the operation.”
VNX Block OE, If an attempt was made to activate a snap session on a snapshot 548616/539292 05.33.000.5.034
Snapshots which already had an activated session, the operation failed but
no error was returned.
VNX Block OE, After attempting to destroy a storage pool, the destroy 578172 05.33.000.5.034
Virtual operation would fail. The storagepool –list command displayed:
Provisioning Current Operation: Destroying
Current Operation State: Failed
Current Operation Status: Subroutine failure (0x4000803c)
VNX Block OE, A LUN shrink on a deduplication-enabled LUN failed to complete 578421 05.33.000.5.034
Virtual only when all the LUNs in the deduplication domain were empty
Provisioning (had no data). In this case lun -list showed the "Current
Operation" as Shrink indefinitely. In this state, the LUN could
not be shrunk, expanded, or snapped; it could only be
destroyed. I/O to the LUN was unaffected.
VNX Block OE, A single SP bugcheck occurred with BugCheck code 7E. 578490 05.33.000.5.034
Virtual
Provisioning
VNX Block OE, LUNs failed to come online following an SP reboot. 579050 05.33.000.5.034
Virtual
Provisioning
VNX Block OE, A create LUN operation could result in an internal error if the 573637 05.33.000.5.034
Virtual operation happened when the peer SP was rebooting and the
Provisioning peer SP was also the preferred path to service IO for the LUN.
VNX Block OE, LUN expand failed with “Unrecognized Thin Provisioning Driver 575603 05.33.000.5.034
Virtual error code: 36365”. This could happen if an SP bugchecked
Provisioning when hitting a particular boundary condition during LUN shrink;
the symptom was observed during a subsequent LUN expand.
VNX Block OE, A pool and all its LUNs went offline following an SP reboot. 576389 05.33.000.5.034
Virtual
Provisioning
VNX Block OE, Host I/Os failed to a Pool LUN which was reporting ready after 576467 05.33.000.5.034
Virtual the system recovered from a dual SP reboot/bugcheck.
Provisioning
VNX Block OE, A LUN destroy command completed successfully, but the LUN 567511, 585191 05.33.000.5.034
Virtual still remained in the system.
Provisioning
VNX Block OE, Snap creation failed and the snapshot was left in an errored 571738, 574415 05.33.000.5.034
Virtual state when the source LUN was in the middle of a trespass.
Provisioning,
VNX
Snapshots
VNX Block OE, Could not perform provisioning of ESRS. 8683358 / 938405 05.33.009.5.231
VNX File OE 939252 8.1.9.231
VNX File OE Data Mover panicked while truncating a file. 798909, 76686754/801364 8.1.9.236
VNX File OE User commands that caused a bus scan may have hung or timed 76939918/803847 8.1.9.236
out.
VNX File OE A Data Mover panic occurred after threads stalled when 77301130/807241 8.1.9.236
lockstats execution conflicted with ongoing file operations.
VNX File OE While accessing a previous version of a file by using VSS, which 78757798/819170 8.1.9.236
is a check point on the server, the Data Mover panicked when
unmounting a file system.
VNX File OE A Data Mover panicked when accessing a Distributed File 06969332/903450 8.1.9.236
System link folder in a CIFS share, where that share was
mounted by NFS from a Linux client.
VNX File OE When running a file system check on a writable file system 08003198/924674 8.1.9.236
checkpoint, “Zero References” could be incorrectly reported.
VNX File OE Under a heavy NFS and CIFS load, the Data Mover hung. 07265752/925172 8.1.9.236
VNX File OE A CIFS outage resulted from a non-existent path to event logs 08621365/929399 8.1.9.236
set by the user.
VNX File OE VNX system reported an NFS read error when stale file handles 08224670/931194 8.1.9.236
were received from an Isilon system being used as the offline
file server.
VNX File OE Data Mover unexpectedly panicked with a “counters out 08811375/932315 8.1.9.236
of sync error”, then incorrectly reported a file system to
be corrupt.
VNX File OE When using SMB3, a large MTU value resulted in slow 08727811/932447 8.1.9.236
performance.
VNX File OE When an SMB encrypted share was configured on a NAS server, 937758 8.1.9.236
a storage processor rebooted unexpectedly.
VNX File OE Unresponsive clients holding file oplocks caused an unmount 10555753/956089 8.1.9.236
failure and the Data Mover to hang.
VNX File OE When the TCP receive queue for one unresponsive NFS client 11180132/964574 8.1.9.236
became full, access to all NFS clients was blocked.
VNX File OE Data Mover panicked after frequent disconnect operations from 11143302/964691 8.1.9.236
the Domain Controller.
VNX File OE After upgrading to 8.1.9.231 or 8.1.9.232, SID/UID mapping 12549542/979745 8.1.9.236
conflicts occurred due to duplicate UIDs in the Usermapper
database.
VNX File OE On starting a NAS DB backup, intermittent NAS command 72292338/753068 8.1.9.236
failures occurred. "Error 13422428165: server_2 :
The user is not authorized to perform the
specified operation" and "A Celerra database
error was detected" alerts were displayed with
"CS_PLATFORM:NASDB:ERROR:310:::::nasdb_backu
p: failed to backup BDB dbms files for Data
Movers" error message in /nas/log/sys_log on NAS DB backup.
VNX File OE Some Linux clients could not mount NFS exports using NFSv4.1. 75213994/791577 8.1.9.236
The mount failed with error "mount.nfs4: mount(2):
Operation not permitted".
VNX File OE When one Control Station of a dual Control Station server 76858778/801658 8.1.9.236
became unavailable, the surviving Control Station did not send a
callhome email notification.
VNX File OE The pathname of a corrupted container was not displayed when 70888060/812667 8.1.9.236
running a file system check.
VNX File OE The server_export command with "type=None" option 78020254/813740 8.1.9.236
failed with error "Error 22: server_2 : Invalid
argument Share error: Invalid types".
VNX File OE Internal information was incorrectly displayed in file system 78359324/815542 8.1.9.236
check output.
VNX File OE When using CIFS compression with the deduplication feature, 76262192/817209 8.1.9.236
the Data Mover unexpectedly panicked.
VNX File OE Data Mover panicked when Virtual Data Movers were mounted. 80293826/841044 8.1.9.236
VNX File OE Data Mover panicked while attempting a backup and restore 74272806/864761 8.1.9.236
operation.
VNX File OE Data Mover panicked when the command nas_fs -I 83439392/878267, 876303 8.1.9.236
<fs_name> -o mpd was issued while a synchronous
replication reverse operation was in progress.
VNX File OE Attempting to reverse or failover a synchronous replication 84272720/891366 8.1.9.236
session in the ‘sync in progress’ state failed with a misleading
error message.
VNX File OE Data Mover panicked unexpectedly when the server resent an 898063, 83660498/903830 8.1.9.236
RPC message after the client failed to respond.
VNX File OE After reversing a synchronous replication session, some NFS 07567469/915400 8.1.9.236
exports were missing, specifically in cases where a file system
had only subdirectories exported but not the root directory.
VNX File OE Data Mover panicked when it received a request with no 08045270/919698 8.1.9.236
ServerName. See KnowledgeBase article 519822.
VNX File OE Data Mover panicked due to low memory when using CIFS and 08004784/921050 8.1.9.236
Kerberos.
VNX File OE Data Mover panicked with "watchdog: no progress 08079866/923408 8.1.9.236
reducing dirty list" error.
VNX File OE Apple clients using OS X El Capitan 10.11.6 failed to resolve 932798 8.1.9.236
VNX DFS paths correctly due to incorrect characters being
added to the path.
VNX File OE Unexpected Data Mover panics occurred, especially with heavy 08978736/934341 8.1.9.236
snapshot usage.
VNX File OE A Data Mover panicked when an SMB client attempted to 09362228/939531 8.1.9.236
cancel a request while also closing the SMB connection.
VNX File OE Data Mover panicked after receiving a malformed packet. 90635928/947857, 945692 8.1.9.236
VNX File OE An All Paths Down event occurred when re-mounting a file 10073678/949081 8.1.9.236
system during a NAS system upgrade.
VNX File OE Data Mover panicked while processing an LDAP policy request. 10286028/951119 8.1.9.236
VNX File OE Data Mover with CIFS services panicked during a failover and 10420959/952740 8.1.9.236
failback operation.
VNX File OE A Data Mover panicked when a client attempted to close an 08513596/956440 8.1.9.236
SFTP or SSH session.
VNX File OE A management switch reset was observed when three 07200355/967603 8.1.9.236
consecutive unanswered pings occurred on an internal network.
VNX File OE CIFS server did not work with SMB encrypted messages when 11429257/967620 8.1.9.236
accessing encrypted Windows shares.
VNX File OE Data Mover panicked when DHSM attempted to read an offline 08589434/968371 8.1.9.236
file from an Isilon system using NFSv3.
VNX File OE Shares could not be accessed because the nas_server 12094664/976675 8.1.9.236
command could not retrieve the Kerberos tickets.
VNX File OE The nas_checkup command reported a "failed to get root file 74979910/792103 8.1.9.236
system space usage" error when two or more VDMs had similar
names separated by a "-", for example: ABC and ABC-XYZ.
VNX File OE After an illegal mv command was issued from a Solaris client 09010892/938341 8.1.9.236
using NFSv4, an error was not returned to the client.
VNX File OE, Data Mover panicked while attempting an NDMP Volume-Based 85048234/895144 8.1.9.236
NDMP backup operation.
VNX File OE, NDMP backup failed to backup a symbolic link to a file whose 951598 8.1.9.236
NDMP name was 1024 characters long.
VNX File OE, Attempting to use the -file option for the 62991742/795396 8.1.9.236
Performance server_stats command when running it with a newly-
Statistics created user (other than nasadmin/root) caused "ERROR
(150860922881): Internal error.
(/home/tanaka: Permission denied)" to be
reported.
VNX File OE, File system features which use SnapSure became hung when 07084484/903404 8.1.9.236
SnapSure the SavVol reached full capacity.
VNX File OE, On large configurations, USM’s Collect Diagnostic Files 80328536/874115, 838190, 8.1.9.236
System operation could fail to obtain some requested information. 869481
Management
VNX File OE, Trying to configure a 5-digit proxy port number failed. 74274738/779729 8.1.9.236
System
Management,
Network
VNX File OE, After upgrading to 8.1.9.231 or 8.1.9.232, ConnectEMC with RSA 11712334/969498 8.1.9.236
System encrypted FTP configured stopped working.
Management,
UDoctor,
Serviceability
VNX File OE, VMs rebooted during a Data Mover failover that took longer 81462896 / 852343 8.1.9.217
Security, LDAP than 30 seconds because ESX hosts received stale file handles.
Several parameters were added that help avoid stale NFS file
handles. Refer to the “Parameters Guide for VNX for File”
section for additional information about LDAP parameters.
VNX File OE, A Data Mover panic occurred when performance statistics 9650729 / 937515 8.1.9.231
Performance commands were used.
Statistics
VNX File OE, NFS and CIFS threads were blocked. 83367458 / 877512 8.1.9.217
SnapSure
VNX File OE, The Data Mover panicked with a VolPoolManager In- 78432378 / 817067 8.1.9.217
SnapSure memory superblock is bad before flush error
when the last checkpoint of one SavVol was deleted.
VNX File OE, After deleting a replication session, a file system was 7202734 / 912422 8.1.9.217
SnapSure unmounted from the VDM and renamed. Mounting the
renamed file system on the VDM caused the Data Mover to
reboot unexpectedly.
VNX File OE, The Data Mover panicked with the following error: Value in 82440738 / 883479 8.1.9.217
SnapSure TOC is different from the value in btree.
Updated the code to handle the btree issue.
VNX File OE, The Data Mover panicked with a Bad magic in chunk 83433366 / 878149 8.1.9.217
SnapSure error.
Updated the code to handle a race condition.
VNX File OE, The Data Mover experienced either a SYSTEM WATCHDOG 83252522 / 873964 8.1.9.217
SnapSure panic or Page Fault interrupt due to invalid snapshot
context. Server log showed a refresh failed because a duplicate
entry was found.
VNX File OE, The Data Mover experienced a Page Fault Interrupt 82905974 / 871231 8.1.9.217
SnapSure error.
Updated the code to handle invalid checkpoints.
VNX File OE, The Data Mover panicked with a Page Fault interrupt 75008226 / 780692 8.1.9.217
SnapSure error when a file system was paused with write operations in
progress.
VNX File OE, After deleting a file system by using the -reclaim option, dbchk 8911507 / 940691 8.1.9.231
System returned multiple errors such as the following:
Management Error: Volume 26734 on server server_2 is
missing in Control Station volumes
database.
VNX File OE, During a Data Mover reboot, slow file system performance 6259826 / 937658 8.1.9.231
UFS made it appear that NFS threads were blocked.
VNX File OE, The Data Mover panicked when file system corruption was 07716671 / 912824 8.1.9.217
UFS encountered. The corruption was caused by an earlier issue
where space reclaim was run while file system replication was
active.
VNX File OE, CIFS and NFS access was lost. Messages similar to the following 77308854 / 830682 8.1.9.217
UFS were seen in the server_log and the sys_log:
SMB2 BLOCKED for nnn seconds.
VNX File OE, Due to a race condition, the NFS and CIFS getAttr option for 77609246 / 808352 8.1.9.217
File System, a file returned the same ctime or mtime, even when the size
UFS of the file changed.
VNX File OE, The Data Mover failback took longer than 5 minutes and caused 6723306 / 895922 8.1.9.217
UFS VCloud servers to crash.
VNX File OE, EMC VAAI NAS plugin could not successfully reserve space on 5486217 / 880225 8.1.9.217
UFS the array for a provisioning action. The log showed the
following: RESERVE SPACE thin-or-thick-2
(595c37de-af24-4d8c-a19d-8da55f2247df)-
000001-flat.vmdk] failed.
VNX File OE, Concurrent rename operations on the same file system within 83307948 / 883559 / 874686 8.1.9.217
UFS many directories saw latency gaps of 2 to 4 seconds.
VNX File OE, CIFS or NFS threads hung, and NAS stopped service. 79977930 / 866347 8.1.9.217
UFS
VNX File OE, The Data Mover experienced a Page Fault Interrupt. 81926330 / 856541 8.1.9.217
UFS Virt ADDRESS: 0x000084cc53 Err code: 0 panic.
VNX File OE, NFS clients reported RPC timeouts with the following seen in 840575 8.1.9.217
UFS the server logs:
hashalloc degraded to search sequential
VNX File OE, Unicode long file names that were not legal 8.3 filenames would 839992 8.1.9.217
UFS have 8.3 filenames generated in the form filena~1.ext for the
first four filenames. If ~1 through ~4 were in use, <number>.ext
would be used, with <number> being any unused 8 digit or less
value. Migration of files could fail if the Unicode long filename
of a file to be copied matched the 8.3 generated filename of
another file already copied to the same folder.
Added a new parameter ufs.mangleM83ByNumber to
optionally restrict the 8.3 filename creation policy to not include
~n names.
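A rough sketch of the naming policy described above (illustrative only, not the Data Mover's actual implementation; the in_use callback is a hypothetical stand-in for the directory lookup):

    # Sketch of the documented policy: try NAME~1 .. NAME~4, then <number>.ext.
    def mangle_83(long_name, ext, in_use):
        base = long_name[:6]
        for n in range(1, 5):                 # filena~1.ext through filena~4.ext
            candidate = f"{base}~{n}.{ext}"
            if not in_use(candidate):
                return candidate
        num = 1                               # fall back to <number>.ext, where
        while in_use(f"{num}.{ext}"):         # <number> is any unused value of
            num += 1                          # up to 8 digits
        return f"{num}.{ext}"

With the ufs.mangleM83ByNumber parameter mentioned above, the ~n step can be skipped so that only <number>.ext names are generated.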
VNX File OE, When parallel writes targeted the same file, latency was high. 79418490 / 830879 8.1.9.217
UFS
VNX File OE The Data Mover experienced an Invalid Opcode exception panic. 78640888 / 819977 8.1.9.184
A CIFS client was performing a large read.
VNX File OE An abort operation on an NDMP two-way backup hung, causing 78374138 / 817807 8.1.9.184
the backup threads to stay in a hung state. This eventually led to
a situation where backups were no longer possible on the Data
Mover.
VNX File OE The Data Mover was using NFSv3 file locks (NLM). It 78331038 / 816632 8.1.9.184
experienced a Page Fault Interrupt panic with Virt
ADDRESS: 0x0000839713 Err code: 0 Target
addr: 0x00000000 in routine
lockd_buildGrantedRequest().
VNX File OE A file was offline due to being deduplicated. Writes to the file 78422692 / 816217 8.1.9.184
were buffered while it was offline. When the file came back
online, the thread deadlocked itself, causing other blocked SMB
and NFS threads. Messages similar to: Service:CIFS
Pool:SMB2 BLOCKED for 405 seconds: Server
operations may be impacted. were seen in the
sys_log.
VNX File OE The Data Mover was not replying to CIFS or NFS clients, and 73685670 / 814454 8.1.9.184
SMB threads on the Data Mover were blocked. A DHSM
connection was being modified.
VNX File OE Because of periodically blocked Data Mover threads, the Data 73005388/812664 8.1.9.184
Mover did not respond to any command.
VNX File OE A variety of symptoms were seen when mounting a file system 77774706 / 810965 8.1.9.184
with FLR enabled, including Data Mover panics and file system
hangs.
VNX File OE Some SMB threads were blocked in 77205588 / 804314 8.1.9.184
File_LocksData::checkOplock() or
File_LocksData::checkConflictWithRangeLock(
).
VNX File OE One or more NFSv3 clients reached the maximum number of 73356688 / 799234 8.1.9.184
file locks allowed. The clients could not write to the NFS
filesystem. There were messages similar to the following in the
sys_log file: LOCK: 3: Client x.x.x.x(NLM) can't
be granted new range locks, it already owns
30000.
VNX File OE The Control Station did not send a CallHome when multiple 76620428 / 798681 8.1.9.184
Data Movers panicked at the same time due to an issue with the
SPs.
VNX File OE The hidden parameter rpcusesplitmsg was enabled on the 76293352 / 796606 8.1.9.184
Data Mover. Later, the Data Mover stopped responding to NFS
writes.
VNX File OE The Data Mover experienced a Page Fault Interrupt panic in 76198338 / 794533 8.1.9.184
routine DP_RepSecondary::sendPostEventToVer().
VNX File OE SMB threads were blocked on the Data Mover. This caused 74094606 / 767230 8.1.9.184
users on the client systems to experience timeouts when
attempting to access any of the CIFS shares on the Data Mover.
Messages similar to SMB2 BLOCKED for 412 seconds:
Server operations may be impacted. were seen in
the sys_log.
VNX File OE The Data Mover experienced a isAValidIndBlkBuf: bad 72421770 / 745772 8.1.9.184
bn panic while deduplication was in process on a file system.
VNX File OE Under some circumstances, file system metadata was not being 71812052 / 736183 8.1.9.184
flushed often enough on the Data Mover. This led to a Data
Mover NOT reducing dirty list panic.
VNX File OE A file system with deduplication enabled was nearly full. The 51266376 / 645057 8.1.9.184
Data Mover experienced an alloc failed: counters
out of sync panic.
VNX File OE A RecoverPoint system was incorrectly configured with 75112000 / 809286 8.1.9.184
improper HLU to ALU mapping (for example, a data LUN had an
HLU that was less than 16). The Disaster Recovery (DR)
initialization operation did not detect the misconfiguration. The
DR failover detected the misconfiguration and failed to failover.
VNX File OE With deduplication enabled, an FLR file system could not be 63911682 / 807186 8.1.9.184
unmounted because the operation hung.
VNX File OE In IPv6 environments, an error occurred when the 76730694 / 801351 8.1.9.184
nas_migrate command was run with the -dr option.
VNX File OE CallHome operation failed from UDoctor. 76620428 / 799563 8.1.9.184
VNX File OE Messages similar to: mcd_helper: failed to set 76595382 / 798940 8.1.9.184
low-space threshold on <name> were seen in the
/var/log/messages file even though setting a low-space
threshold is not valid for the VNX hardware configuration.
VNX File OE There were many VDMs defined on the VNX. The 76197150 / 794195 8.1.9.184
server_stat command caused the Data Mover to panic with
Page Fault Interrupt in the routine
VDM_StatSessionListList::findSessionList().
VNX File OE There was a high load on the Control Station with no obvious 75811178 / 792101 8.1.9.184
cause. This caused the Control Station to respond to commands
very slowly. A server_stats session was in process and
there were VDMs defined on the system.
VNX File OE The Data Mover experienced a SYSTEM WATCHDOG panic in 58342356 / 791958 8.1.9.184
Mem_RuntimeAlloc::allocVMextFindPages. This
occurred with a high level of NFS over TCP on a network
interface with a large mtu set (for example mtu=9000). The fix
was included in a previous release, but was disabled by default
by hidden parameters.
VNX File OE On Windows clients with some third-party applications 73063620 / 781762 8.1.9.184
installed, a file cut and paste operation from one location to
another did not prompt the user for a file overwrite if the file
already existed. In this situation, the operation seemed to be
successful, but nothing was actually moved or renamed.
VNX File OE, An NDMP restore failed with the following error in the server 76698370 / 815305 8.1.9.211
Backup, log:
NDMP NDMP: 3: Session 329 (thread nasw00)
remoteRead network timeout! networkto=7200
VNX File OE, CIFS notifications for upper level directories were missing when 79006294 / 836926 8.1.9.211
CIFS the permissions on a child directory were changed.
VNX File OE, An attempt to mount a CIFS share from a Linux client by using a 79858216 / 853159 8.1.9.211
CIFS Samba mount command failed when NTLMv2 authentication
was enabled. This occurred even though the client sent the
proper NTLMv2 authentication.
VNX File OE, Some accesses to CIFS file systems from Windows 10 and 78787312 / 865740 8.1.9.211
CIFS Windows Server 2012 clients were very slow, taking several minutes to
complete. Newer versions of Windows 10 and Windows Server 2012
added some new Security Identifiers (SIDs) to the Kerberos
credential. Multiple requests were being sent from the VNX to
the Domain Controller to process these new SIDs.
VNX File OE, The Data Mover experienced a panic while the Viruschecker was 816581 8.1.9.184
CIFS enabled. The routine AppLibNT_updateCEPP appeared in
the panic backtrace.
VNX File OE, Intermittently, UNIX users were temporarily denied access to 76988178 / 815045 8.1.9.184
CIFS their NFS exports when the Active Directory service did not
provide mapping.
VNX File OE, A Page Fault Interrupt panic occurred in the 74740808 / 796825 8.1.9.184
CIFS sddlParser::resolveNames() routine when running
the ACLGPOS command for a standalone server.
VNX File OE, An application on the Windows client was using the MSDN API 809207 8.1.9.184
CIFS GetCompressedFileSize function to request the size of a
compressed file on the VNX. For a compressed file that did not
have the sparse attribute set, the VNX returned the size of the
original uncompressed file, rather than the size of the
compressed file. This caused a variety of symptoms on the
client, depending on how the application was using the size
information.
VNX File OE, Confusing messages similar to SPN mismatch for the 72902326 / 799135 8.1.9.184
CIFS server 'aaa.bbb.com is possible' were seen in
the server log. They are a proactive warning that there might be
Service Principal Names (SPNs) in use that should be configured
in the Active Directory, but the messages do not display the SPN
that was not found in the AD. They also do not specify that the
SPN could be missing rather than mismatched.
VNX File OE, If the system had HTTPS configured for the 68225782 / 700859 8.1.9.184
Install, nas_connecthome command and an upgrade was
Configure performed, the system lost either the HTTPS setting or the
entire nas_connecthome configuration.
VNX File OE, Access denied errors were intermittently received while using 77873008 / 830925 8.1.9.211
NFS the ntcredential mount option on file systems.
VNX File OE, Synchronous replication was in use on the system. A Nested 5686068 / 877713 8.1.9.211
NFS Mount File System (NMFS) was created with a command similar
to nas_fs enforce_fsid=yes, id=<xxxx>. The NMFS
was created, but the ID was not <xxxx> even though the ID
was not already in use by another file system.
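As a minimal sketch of the command form quoted above (the file system name and ID here are made up, and the exact option placement should be confirmed against the nas_fs man page):
    nas_fs -name nmfs01 -type nmfs -create -option enforce_fsid=yes,id=2048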
VNX File OE, An NFSv4 client read the root directory of a VDM. This caused a 76888636 / 817548 8.1.9.184
NFS deadlock, and eventually all NFS activity was blocked on the
Data Mover.
VNX File OE, With VAAI, if a VMDK file was converted to a VERSION file, it 794704 8.1.9.184
NFS could not be read with NFSV4.
VNX File OE, The sqlite3 application reported disk I/O error. A 74262086 / 815877 8.1.9.184
NFS network trace identified an NFSv4 BAD_STATEID error that
was related to a lock state being revoked in error.
VNX File OE, An NFS 4.1 client was not able to mount file systems located in 75213994 / 815871 8.1.9.184
NFS different VDMs simultaneously.
VNX File OE, When using NFS V4.1, VAAI fast cloned files could not be read 789776 8.1.9.184
NFS on the ESX host.
VNX File OE, The server_mount -a command sometimes caused a page fault panic 53766452 / 751328 / 831929 8.1.9.184
ReplicationV2 while a replication internal checkpoint was being refreshed.
VNX File OE, A ReplicationV2 destination Data Mover did not detect a 72505350 / 817183 8.1.9.184
ReplicationV2 message corruption, and the Data Mover panicked.
VNX File OE, The Data Mover panicked with a GP exception. Virt ADDRESS: 836139 8.1.9.184
SnapSure 0x000015666e. Err code: 0 error.
VNX File OE, The Data Mover panicked with messages similar to free() called 79742550 / 830238 8.1.9.184
SnapSure with invalid header - not pointing to valid
free list.
VNX File OE, A SnapSure checkpoint was not deleted correctly and caused a 73775698 / 814685 8.1.9.184
SnapSure Data Mover SYSTEM WATCHDOG panic.
VNX File OE, The system had SnapSure enabled. I/Os were slower than 814683 8.1.9.184
SnapSure expected.
VNX File OE, Deleting the last SavVol checkpoint created a large number of 234567891 / 770367 8.1.9.184
SnapSure unnecessary log messages, filling the log files and causing
earlier important information to be overwritten.
VNX File OE, When a file system was mounted on a subdirectory of another, 77177502 / 803454 8.1.9.211
System already exported, file system (a nested mount) and synchronous
Management replication of the file systems was reversed, the export for the
nested file system was lost. The client could not access the
nested file system.
VNX File OE, The upgrade process disabled fixed-block deduplication of a 78926906 / 821332 8.1.9.184
System mapped pool. This caused the clients of the mapped pool to use
Management an excessive amount of storage.
VNX File OE, Automatic Data Mover failover failed with error failed to 70607876 / 732806 8.1.9.184
System complete command.
Management
VNX File OE, The name of a remote storage system was changed, for 78915918 / 821672 8.1.9.184
System example with the nas_storage -rename command.
Management Following this, all nas_cel -syncrep commands for that
remote storage system failed.
VNX File OE, A Restore checkpoint schedules on the 77612720 / 810561 8.1.9.184
System destination failed error occurred because a
Management nas_migrate operation failed to restore checkpoint
schedules on the destination VDM.
VNX File OE, A VDM synchronous replication relationship existed between 801924 8.1.9.184
System two VNX systems. The active system also had a VDM replication
Management (Replicator V2/ RepV2) relationship with a third VNX system.
The active system went down due to an unrelated reason. The
attempt to use nas_syncrep -failover to fail over to the
standby system failed with the error: Error 5005:
Device or resource busy.
VNX File OE, The nas_volume -info 76599686 / 797510 8.1.9.184
System -size -all command took over 30 minutes to complete.
Management
VNX File OE, The dbchk command failed with multiple errors similar to the 74765252 / 781997 8.1.9.184
System following: Error: Non Zero exit Status while
Management running .Server_config for server_5.
VNX File OE, A Control Station’s LDAP users and groups did not get disabled 1243568790 / 773650 8.1.9.184
System after reconfiguring the LDAP service connection to a different
Management domain.
VNX File OE, An LDAP user failed to log in to Unisphere with a User Principal 72983448 / 766687 8.1.9.184
System Name.
Management
VNX File OE, The -disableUser option was no longer a valid option and 75568228 / 786945 8.1.9.184
System was removed from the admin_management.pl command.
Management However, it was not removed from the usage error message,
causing confusion. For example, the command:
/nas/http/webui/bin/admin_management.pl
-disableUser <uid> properly displayed the usage error
message, but the usage error message incorrectly showed
-disableUser as a valid option.
VNX File OE, A client system was attempting to create a file on an NFS file 77205462 / 807579 8.1.9.184
UFS system on the VNX system. The VNX system took longer than
the client expected to respond to the NFSV3 create request
from the client system. This caused the client application to
report an RPC timeout, or there was some other indication from
the client system that it failed to receive a response to the
NFSV3 create request in the expected timeframe.
VNX File OE, The server log was flooded with multiple UFS: 6: hashalloc 75246402 / 785019 8.1.9.184
UFS directly alloc failed messages.
VNX File OE, The VDM MetroSync Manager experienced an issue when 81674712 / 856793 3.0.17
VDM special characters such as “#” were used in a VNX password.
MetroSync
Manager
VNX File OE, The Control Station failed over from CS0 to CS1, triggering a 81706942 / 867581 3.0.17
VDM VDM MetroSync session failover. A VDM MetroSync Manager
MetroSync file system service check that was done as part of the
Manager MetroSync failover operation failed because the Control Station
failover was not complete. Subsequent VDM MetroSync
Manager commands displayed a warning message requesting
that MetroSync sessions be cleaned up.
VNX File OE, The VDM MetroSync Manager log file ran out of space and 80871072 / 868184 3.0.17
VDM could not add more information without manually removing
MetroSync data from the log.
Manager
VNX File OE, When both source-side SPs rebooted, if the nas_syncrep 817722 8.1.9.184
VDM -Clean command was run, the VDM with the synchronous
MetroSync replication session, and the file systems on the VDM, could be
Manager deleted. Because of the SP reboot, the consistency group
information could not be retrieved. An error should occur, but
the synchronous replication session, and the file systems on the
VDM were deleted by mistake.
VNX File OE, A synchronous replication reverse or failover operation failed 817600 8.1.9.184
VDM after an importing sync replica of NAS
MetroSync database error occurred. After running a nas_syncrep
Manager -Clean -all command, the next synchronous replication
reverse or failover operation failed.
VNX File OE, Special characters and spaces in storage pool names caused 81706942 / 859373 8.1.9.211
VDM synchronous replication commands to fail.
MetroSync
Manager, CLI
VNX File OE A Data Mover on the user’s system had blocked SMB2 threads. 75775996 / 793529 8.1.9.155
VNX File OE With VAAI, when a VMDK file was converted to a VERSION file, 794704 8.1.9.155
it could not be read with NFSV4.
VNX File OE UDP access to the KDC was blocked. The Data Mover waited for 715070 8.1.9.155
UDP to time out before trying again with TCP. This resulted in
long delays when remote clients were accessing files.
VNX File OE Automatic checkpoint extensions did not stop after the %full 731941 / 12345679 / 729241 8.1.9.155
parameter was increased from 75% to 90%.
VNX File OE Domain administrators who belonged to multiple groups could 71024074 / 788471 8.1.9.155
not join new CIFS servers to the active directory.
VNX File OE Backups of VM snapshots were failing in a Hyper-V environment 73360626 / 774424 8.1.9.155
using the Remote Volume Shadow copy Service (RVSS) feature.
This was caused by interoperability issues with third-party
backup software vendors.
VNX File OE Attempts to extend the destination pool associated with a VDM 781945 8.1.9.155
sync replication session failed and resulted in an error condition,
Error 3027: d74 : inconsistent disktype.
VNX File OE Enabling the synchronous replication service failed with the following: 777414 8.1.9.155
Error 13431996489: NBS configuration operation failed on
server server_2. Failed to add volumes. Error 13431996606: Add
NBS.
VNX File OE An LDAP user was unable to log in to Unisphere. 70928040 / 728569 8.1.9.155
VNX File OE While fetching all stats of a single volume with the server_stats 63530180 / 748541 8.1.9.155
command, the WriteKiB, WriteRequests, and WriteOps values were
incorrect.
VNX File OE A VNX File OE upgrade failed. 66693454 / 755202 8.1.9.155
VNX File OE A CIFS server did not accept an SMB NULL session request 71014830 / 773990 8.1.9.155
when the cifs.nullSession parameter was set to 0.
VNX File OE A file was deleted in the window between FLR scanning it and 72292110 / 746170 8.1.9.155
locking it.
VNX File OE The get_backend_status cron job failed with the error: Error 68342174 / 736577 8.1.9.155
running command:
/nas/opt/Navisphere/bin/navicli -h -t 60
getall -array
VNX File OE When FSCK was executed on an existing file system with a size 72380056 / 748944 8.1.9.155
larger than 16TB-64MB, it removed blocks within the last
cylinder group (CG) of the 16TB. This corrupted some data.
VNX File OE The Data Mover responded with "couldn't get a free page", 60657782 / 749618 8.1.9.155
"Out of memory", or another message indicating an inability to
allocate memory.
VNX File OE An error occurred while establishing a TCP connection to an 62981026 / 750965 8.1.9.155
(external) KDC and Kerberos closed the stream. Later, it tried to
close the stream again, resulting in a Data Mover panic.
VNX File OE When the Data Mover requested a Kerberos ticket using UDP, 62844700 / 750967 8.1.9.155
the response was too large for a UDP packet and an error was
returned. The request was sent again using TCP, but the
previous reply was not deleted, which caused a memory leak.
VNX File OE A customer was not able to access a file system; all NFS/CIFS 68864834 / 751007 8.1.9.155
requests were blocked.
VNX File OE When a replication session stopped while data was being 65001990 / 751316 8.1.9.155
transferred, a second version restore happened when the session
was restarted. This made the session restart take longer.
VNX File OE The statsd daemon crashed when an invalid response was received 69126648 / 763056 8.1.9.155
from the nas_server -query command.
VNX File OE When an LDAP user upgraded the File OE for a Unified system, 67819406 / 724815 8.1.9.155
USM failed with the message: Insufficient permission
to run.
VNX File OE There was a risk of a Data Mover bugcheck when rebooting the 70157824 / 751017 8.1.9.155
system while a backup was in progress.
VNX File OE The dbchk command was unable to detect the problem that caused 71785850 / 763620 8.1.9.155
a file system extension failure: Error 10264: There is a
volume definition mismatch between the
Control Station and server_4 for v8774.
Component volume(s) v28277 on the Control
Station do not exist on server_4.
VNX File OE A USM health check failed on "File System Usage on the Data 74363306 / 773551 8.1.9.155
Mover" on systems whose "avail" space exceeded
2,147,483,647 (the 32-bit signed integer maximum).
VNX File OE Under rare circumstances when restarting the SMB service, the 74078324 / 791878 8.1.9.155
Windows event log auto-archive could stop working.
VNX File OE Under rare circumstances, when a very large number of 75581540 / 792684 8.1.9.155
event logs was generated and retention was not set to infinite,
a memory leak occurred.
VNX File OE SMB server performance temporarily decreased when an 75186726 / 790431 8.1.9.155
application used the File change notify feature on an
EMC server share.
VNX File OE A customer was unable to mount NFS exports on an ESXi host. 73591176 / 768798 8.1.9.155
VNX File OE The VNX for File OE crashed due to the use of a non-referenced 74726358 / 784972 8.1.9.155
class instance.
VNX File OE The system failed to dial home when the configuration 68388050 / 750826 8.1.9.155
(configuration files and log files) had invalid UTF-8 characters. 68209328 /702964
VNX File OE NFSv4 Access Control Entries (ACEs) were displayed in the 60262164 / 750838 8.1.9.155
format of OWNER@ or GROUP@ if the ACE belonged to a user or
group that matched with the current owner user or owner
group of the file or directory. The OWNER@ and GROUP@
formats were misleading to the user.
VNX File OE When a Data Mover unexpectedly rebooted, there was no call 57606950 / 777372 8.1.9.155
home event sent back to EMC.
VNX File OE Unisphere was inaccessible after a Control Station failover 60525818 / 615062 8.1.9.155
occurred.
VNX File OE There was a race window in which the FLR log was written before 68012370 / 694787 8.1.9.155
a mount was complete.
VNX File OE The nas_fs -info -all command failed with a backtrace. 67458600 / 717261 8.1.9.155
VNX File OE An SMB2 client was unable to change ownership or permissions 68538084 / 717435 8.1.9.155
on symlinks in mixed Windows/UNIX file systems. The operation
hung.
VNX File OE Applications on SMB2 clients could be confused by deduplicated 72122142 / 760436 8.1.9.155
files, because of the "sparse" file attribute.
VNX File OE In situations where CEPP over MSRPC was used, the user 72894094 / 764319 8.1.9.155
experienced a Data Mover bugcheck. 72894094 / 764890
VNX File OE Regular I/O pauses were observed with snapshots enabled, 67618804 / 717238 8.1.9.155
causing a 20-25% performance reduction.
VNX File OE NFS/SMB write request latency exceeded 20 seconds on 729965 8.1.9.155
filesystems that were involved in a Data Mover failover or 67433430 / 690491
restart.
VNX File OE The nas_migrate command failed with the message, 68439794 / 732568 8.1.9.155
failed to validate or generate a migration 71928238 / 751248
plan.
VNX File OE The File System Checkpoints page in the Unisphere GUI had an 71194884 / 733687 8.1.9.155
empty Schedule Name column.
VNX File OE An AIX host failed when executing commands within a 70805746 / 734409 8.1.9.155
checkpoint directory.
VNX File OE A panic occurred when a large file (above 1GB) was uploaded to 71428114 / 743850 8.1.9.155
the VNX for File OE through SFTP, due to a bug in the
management of data buffers.
VNX File OE When using NFSv4, the Data Mover bugchecked under a heavy 73808368 / 769160 8.1.9.155
load that led to network packet fragmentation.
VNX File OE After the creation of a new user role, many management 72368390 / 744622 8.1.9.155
commands run by the nasadmin user failed with error The
user is not authorized to perform the
specified operation.
VNX File OE Informational log messages produced during the 71522612 / 747102 8.1.9.155
server_devconfig probe operation added unnecessary noise
to the log, which could distract from real issues and could be
interpreted as failures.
VNX File OE The VNX for File OE bugchecked (0x20b8b28) due to a very busy 73375412 / 757841 8.1.9.155
IPv6 network with a large destination cache.
VNX File OE On a nearly full file system, a customer was truncating a dense 73651308 / 761153 8.1.9.155
file while trying to write another file. The modification sequence
in the truncate process led to a counter out of sync
issue.
VNX File OE With all the Data Movers in standby mode, the arrayconfig 763116 8.1.9.155
script hung when querying the replication information.
VNX File OE A large percentage of CPU was consumed when handling a large 74030184 / 773991 8.1.9.155
number of SMB2 durable handles.
VNX File OE A Data Mover bugchecked while running a Codenomicon 72386430 / 745392 8.1.9.155
test suite.
VNX File OE When a checkpoint schedule was created using the GUI, the 71880786 / 745936 8.1.9.155
nas_syncrep -reverse command failed.
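For reference, reversing a synchronous replication session is run from the Control Station along these lines (the session name is hypothetical; see the nas_syncrep man page for the exact argument form):
    nas_syncrep -reverse vdm1_syncrep_session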
VNX File OE A Create Replication task failed with the following error: Error 72728264 / 751235 8.1.9.155
13422034977: Operation task id=3285 on DEFTHW991X3STO at 72150002 / 751826
first internal checkpoint create state failed with error: 73212910 / 754586
13421840573: Execution failed: Segmentation fault: Operating
system signal. [C_STRING.c_strlen].
VNX File OE A Data Mover bugchecked with a SYSTEM WATCHDOG string. 72770820 / 752714 8.1.9.155
74713150 / 776397
VNX File OE The nas_storage -check -all command failed with a 72728264 / 753113 8.1.9.155
backtrace. 74513944 / 774532
75319920 / 789151
VNX File OE A Data Mover could reboot or fail over very slowly, and orphan 70653478 / 755323 8.1.9.155
files could be created that could only be removed by running fsck.
VNX File OE New Linux clients were unable to mount the file system using 75213994 / 791579 8.1.9.155
NFSv4.1 protocol.
VNX File OE A machine connected via the NFS protocol hung, and the following 73206702 / 754306 8.1.9.155
message was displayed in the server_log:
2015-08-06 10:16:28: KERNEL: 3: 3:
ThreadsServicesSupervisor: Service:NFSD
Pool:NFSD_Exec BLOCKED for 409 seconds:
Server operations may be impacted.
VNX File OE A user was unable to mount an NFS export from a Windows 2008 72878838 / 756120 8.1.9.155
R2 system running the Windows NFS client over IPv6.
VNX File OE A Data Mover bugcheck occurred when using the 73249794 / 756898 8.1.9.155
server_statmon command for monitoring NFS export 73931218 / 766962
access. It was more likely to happen when using snap
file systems.
VNX File OE After upgrading to 8.1.8.119, users were unable to access 71623722 / 737659 8.1.9.155
certain CIFS shares. An extra "root_vdm_x" directory was added 71652346 / 757815
to the export path, which is invalid. Attempts to re-export 73633222 / 769189
without that path would not resolve the problem; the invalid 75334706 / 785301
directory would be re-added.
76828706 / 801007
This occurred if the share was exported via both NFS and CIFS, if
the share was exported from a VDM via CIFS but from a physical
Data Mover via NFS, or if the share was exported on NFS using an
NFS alias that matched its mount point name on the VDM.
VNX File OE An auto-fsck was initiated on a system. The reason given for 74066422 / 767110 8.1.9.155
triggering the auto-fsck was bad dir read. However, the 73903652 / 769772
auto-fsck did not detect any on-disk directory inconsistencies on 74087822 / 769774
the file system. 74144132 / 769775
74317178 / 771591
74265562 / 771837
74329672 / 771956
74118550 / 772045
74362446 / 772442
VNX File OE When a file system was frozen and the structure of the file system 74317178 / 771485 8.1.9.155
was not initialized correctly, space reclaim could access an invalid
NULL pointer.
VNX File OE When a user tried to create a file system larger than 16TB-8G, 99999999 / 772881 8.1.9.155
a file system of only 2TB was created.
VNX File OE CIFS threads were blocked and the user received unexpected output. 75098050 / 781069 8.1.9.155
73533420 / 767732
VNX File OE The SSL-enabled LDAP service failed to connect to the LDAP server 74774892 / 784540 8.1.9.155
with 91 / Connect error. The server log reported the following
error:
LDAP/SSL protocol error: The LDAP server
certificate verification failed, the
signature is not valid.
VNX File OE Creating a file system replication to a destination system failed 75066808 / 789103 8.1.9.155
due to the following error: Query storage pools All. 74144666 / 793619
Remote command failed.
VNX File OE A customer experienced an out of memory bugcheck when 68140920 / 740878 8.1.9.155
using the CEPP feature.
VNX File OE When using NFSv4 with GSS Kerberos integrity from SuSE 70524574 / 748032 8.1.9.155
clients, CIFS and NFS might be unable to connect to the server.
VNX File OE When DHSM over SMB was used and the system was under heavy 71648738 / 751190 8.1.9.155
load or in a misconfigured environment, the server bugchecked.
VNX File OE The Data Mover bugchecked. After failover to the standby Data 68810662 / 735334 8.1.9.155
Mover, messages could not be written to the FLR log file and 73453320 / 759296
messages similar to the following were seen in the server log:
"Error opening the activity log file, status = 17".
VNX File OE When using NFS V4.1, VAAI fast cloned files were not readable on 791145 8.1.9.155
the ESX host.
VNX File OE If the SavVol reached full capacity and could not be extended 751010 8.1.9.155
automatically, the oldest checkpoints were deleted.
VNX File OE Files with partial corruption could be unreadable by FSCK on a 41472302 / 429397 8.1.9.155
deduplication-enabled file system.
VNX File OE GetAttr could return the same ChangeID but different file 751009 8.1.9.155
sizes.
VNX File OE Customers were unable to join a compname to a domain's Active 776257 8.1.9.155
Directory when using an administrative account with a
non-ASCII password.
VNX File OE If the first disk in a mixed thin_storage file system was not a thin 779839 8.1.9.155
disk, the file system might not be handled properly.
VNX File OE Modules such as CAVA and RDE FSCK failed unexpectedly without 67479716 / 778506 8.1.9.155
notifying the user.
VNX File OE The Data Mover bugchecked with the following message: Page 75739878 / 791144 / 785696 8.1.9.155
Fault Interrupt. Virt ADDRESS: 0x0000e595ec
Err code: 0 Target addr: 0x0000000064.
VNX File OE Unable to mount an NFS export from Windows 2008 R2 when 72878838 / 769713 8.1.8.132
running a Windows NFS client over IPv6.
VNX File OE An AIX host failed when executing commands such as du, cp, 70805746 / 756461 8.1.8.132
and tar within a checkpoint directory. The following error was
generated when accessing the checkpoint directory from the PFS
share folder exported by the VNX NFS server:
root@GERDC1AIX03:/pfsmnt/.ckpt/2015_05_31_07.51.01_GMT
/data/dir1# du
du: 0653-175 Cannot find the current directory.
VNX File OE When two Data Movers (DMs) with the same IPv6 broadcast 71777016 / 756450 8.1.8.132
domain were brought up serially, if the first DM lost
connectivity, the IPv6 network neighbor client's cache was not
updated.
VNX File OE Modules like CAVA aborted unexpectedly without notification, and 74066422 / 780387 8.1.8.132
generated log messages such as bad dir read in the server_log.
Although the logged messages implied that the directory was
corrupt, the file system did not always contain any
inconsistencies.
VNX File OE When running FSCK on a file system larger than 16TB- 72380056 / 769719 8.1.8.132
64MB, FSCK could remove blocks within the last cylinder group
(CG), potentially corrupting some data within the file system.
VNX File OE During VNX2 migration operations, if the source virtual data 72284998 / 763440 8.1.8.132
mover (vdm) was attached to more than 200 interfaces,
migration operations could hang.
VNX File OE I/O pauses occurred during snapshot operations, causing a 20-25% 756456 8.1.8.132
performance reduction.
VNX File OE Tests that sent corrupted NFS requests to the server caused 755234 8.1.8.132
Data Mover bug checks. This did not occur with standard NFSV4
clients.
VNX File OE NFS/SMB write request latency sometimes exceeded 20 756459/ 729965 8.1.8.132
seconds on filesystems that were involved in Data Mover
failover or restart.
VNX File OE A warm reboot of a Data Mover could not be executed, and a cold 758179 8.1.8.121
reboot occurred instead. When this happened, client access to the
data could be temporarily disrupted.
VNX File OE An automatic file system check was initiated on a file or unified 74066422/ 767110 8.1.8.121
system. The reason given for triggering the auto-file system
check was bad dir read. However, the auto-file system check
did not detect any on-disk inconsistencies on the file system.
VNX File OE Linux clients were misconfigured. The file system service hung 67020256 / 689865 8.1.8.119
and the server log showed messages similar to:
NFSD Pool:NFSD_v4Daemons BLOCKED NFSD Pool:NFSD_Exec
BLOCKED
VNX File OE A memory corruption occurred while trying to rename a file. 67999806/ 699293 8.1.8.119
VNX File OE In a failover situation when there is a heavy load, some NFS 67433430/ 691027 8.1.8.119
clients could get StaleHandle errors.
VNX File OE A Data Mover rebooted with the following error code: 64607060/ 678801 8.1.8.119
0x00027e4c19.
VNX File OE A system reboot occurred with a hardlink pointer to a file. 68506210/ 716653 8.1.8.119
VNX File OE When checkpoint FS corruption was detected and the 63195614/ 715376 8.1.8.119
corruption came from the production FS, only the checkpoint FS
was marked corrupted. The Data Mover would bugcheck again,
since the production FS was not marked corrupted.
VNX File OE With NFSv4, UNIX permissions (mode bits) were generated from 60262164/ 715367 8.1.8.119
the ACL; the group setting was then propagated to the owner.
VNX File OE NFS deadlocked when a file written with pNFS was truncated. 68729530/ 710839 8.1.8.119
Thread blocked messages were seen in the server log.
VNX File OE When NIS was used as the name resolver and there was an entry 68579608/ 701865 8.1.8.119
without a name in the group map, the Data Mover bugchecked.
VNX File OE Permissions set on DFS Folders through MMC or by using CLI 64120168/ 699142 8.1.8.119
commands were not accepted.
VNX File OE NFSv4 performance slowed, with some operations lasting about 65738438/ 693366 8.1.8.119
20 seconds.
VNX File OE A slow response time was seen on clients using specific 66739608/ 689050 8.1.8.119
applications.
VNX File OE The NTXMAP feature did not support UNIX user names defined 68878404/ 718434 8.1.8.119
in NIS.
VNX File OE The server_date command did not reflect the sync_delay option 56244034/ 600639 8.1.8.119
set by the customer.
VNX File OE When nas_checkup was run, a warning was returned for the 56242026/613387 8.1.8.119
canRunRT parameter, whose current and default values were not
the same.
VNX File OE When a file was overwritten and the file system was full, the 54995532/ 613405 8.1.8.119
new file creation was allowed, even if the new file size was larger
than the old file size. Writing this much data generated a
QUOTA_EXCEEDED error. The end result was that the old file
was lost and the new file could not be created.
VNX File OE Execution of the nas_replicate -switchover command failed. 47184462 / 613479/ 485477 8.1.8.119
VNX File OE A dbchk run failed to detect an invalid entry that was inserted 54593730/ 613485 8.1.8.119
manually into the /nas/server/slot_2/ufs file.
VNX File OE The Control Station rebooted unexpectedly after 485 days of 53513236/ 649331 8.1.8.119
uptime.
VNX File OE The command nas_cs -set -dns_domain backup failed. 649841 8.1.8.119
VNX File OE When a file was detected by the antivirus, there was no information 64309006/ 693813 8.1.8.119
about which antivirus engine had detected the virus.
VNX File OE An error was encountered: ../ufslog.cxx at line: 11125 : sync: no 66537256/ 693832 8.1.8.119
progress reducing dirty list.
VNX File OE An inconsistency was seen in the NFS statistics counters. 706261 8.1.8.119
VNX File OE A customer got STATUS_INTERNAL_ERROR for an 69662042/ 713354 8.1.8.119
FSCTL_QUERY_ALLOCATED_RANGES request on a compressed
file.
VNX File OE When executing cpio -m on a Solaris client to migrate files to an 63811662/ 655693 8.1.8.119
NFS share folder, the modified times of the files were not kept
as expected.
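For context, cpio's pass-through mode with -m is what preserves modification times during such a migration; a generic invocation (the mount point is illustrative) looks like:
    find . -depth -print | cpio -pdm /mnt/nfs_share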

VNX File OE When a system running an earlier version of code was used as a 722333 8.1.8.119
destination to create a replicationV2 session, the destination file
system had a mismatched log type between the Control Station
and VNX for File OE.
VNX File OE An LDAP user failed to log in via SSH to the Control Station 60586898/ 617060 60893710/ 8.1.8.119
following a Control Station failover. 623870/ 724173
VNX File OE Storage processor B rebooted due to bugcheck code 71419220/ 737617/ 745754 8.1.8.119
0x0000007E.
VNX File OE A customer received an out-of-memory bugcheck with text 65340286/ 669498 8.1.8.119
stating couldn't get a free page, due to a bug in the data
cache flush algorithm.
VNX File OE A backup report showed 0MB processed after a successful 64121286/ 680458 8.1.8.119
backup.
VNX File OE Under certain network configurations, ndp x route commands 66737082/ 682013 8.1.8.119
generated a route that was missing a next hop.
VNX File OE A single SP bugcheck occurred when the VNX2 array was under 643402 8.1.8.119
pressure that included external I/Os and internal background
operations while, at the same time, the other SP was rebooting.
VNX File OE The VNX for File OE bugchecked (0x000068df41, 0x0000000008) 68781348/ 704740 8.1.8.119
during NFSV4 access with a message saying, Page Fault
Interrupt.
VNX File OE, User experienced a slow host write response time or a timeout. 384143 8.1.8.119
CBFS
VNX File OE, A performance issue was experienced due to a high CPU 67303398/ 690463 8.1.8.119
CIFS utilization when using CIFS file-filtering.
VNX File OE, Share paths were not canonicalized before being stored in the share 61227688/ 688267 8.1.8.119
CIFS databases.
VNX File OE, When Vista or Windows 2008 Server clients accessed a share 67948752/ 706949 8.1.8.119
CIFS and compressed files, an attempt to rename a directory resulted in
blocked SMB2 threads on the Data Mover and a loss of access to
the CIFS share.
VNX File OE, Audit security settings were not correctly restored after a 708253 8.1.8.119
CIFS reboot on Data Movers with many VDMs and large File System
configurations.
VNX File OE, A server might receive invalid Kerberos tickets during SMB 66615986/ 713081 8.1.8.119
CIFS authentication, leading to an unexpected error.
VNX File OE, The LDAP configuration file (ldap.conf) was present in the /.etc 69680002/ 718807 8.1.8.119
CIFS directory, but was incomplete.
VNX File OE, A permissions error occurred for CIFS users when changing a 69929998/ 721410 8.1.8.119
CIFS directory.
VNX File OE, On a compressed file, the following message was received: 69662042/ 722593 8.1.8.119
CIFS STATUS_INTERNAL_ERROR for
FSCTL_QUERY_ALLOCATED_RANGES.
VNX File OE, A bug check occurred (0x000072e499) when unmounting a file 58454314/ 724284 8.1.8.119
CIFS system and stopping the virus checker at the same time.
VNX File OE, CIFS users could not access SFTP with a homedir and FQDN domain. 724359/ 70440966 8.1.8.119
CIFS
VNX File OE, CEPA reported the actual path when deleting a file instead of the 65618098/ 736528 8.1.8.119
CIFS symlink path.
VNX File OE, When the FS was paused, the nas_fs -info <fs_name> -o mpd 54065574/ 652192 8.1.8.119
Install, command could not retrieve the Multi-Protocol Directory
Configure information.
VNX File OE, An external Open LDAP server configured to deny anonymous 58961190/ 720929 8.1.8.119
LDAP, VNX connection on the RootDSE showed error messages in the log
file.
VNX File OE, An incremental NDMP backup running through Avamar of a 65043266/ 715375 8.1.8.119
NDMP large file system with many files hung. The backup hung after
Avamar attempted to stop and restart the backup. The backup
threads started to terminate immediately after the backup
restarted and the transfer rate showed as 0 kbs.
VNX File OE, A backup report showed 0MB processed after a successful 64121286/ 680458 8.1.8.119
NDMP backup.
VNX File OE, An IPv6 network tool could cause a condition on the Data Mover. 69685320/ 713567 8.1.8.119
Network
VNX File OE, A user was unable to mount replication destination checkpoints 54333266/ 8.1.8.119
NFS via NFSv4 if the access policy of the file system was not NATIVE. 610491
VNX File OE, After a client reboot, a server panicked when NFSV4 locks were 68927792/ 704534 8.1.8.119
NFS held by the client.
VNX File OE, Under NFSv4 load, the server could crash in some special 70410652/ 720785 8.1.8.119
NFS circumstances.
VNX File OE, NFS writes were slow after a pNFS client switched the I/O over to 68729530/ 725727 8.1.8.119
NFS NFS after a failure.
VNX File OE, pNFS: The Data Mover returned NFS4_ERR_TOOSMALL to the 68729530/ 738744 8.1.8.119
NFS Linux client when the layout description (list of extents) did not
fit in the client buffer size. The client fell back to NFS.
VNX File OE, When the dbms_backup script was run as nasadmin, a failure 55783542/ 651834 8.1.8.119
Platform occurred because nasadmin could not see the server when an ACL
Services was set.
VNX File OE, As the CPU load reached 12%, mail was rejected with error 61526254/ 715294 8.1.8.119
Platform messages, visible with grep "rejecting connections"
Services /var/log/maillog | tail -20, for example:
Apr 25 02:15:46 WKPSVNN-PTCC-2165 sendmail[2543]: rejecting
connections on daemon MTA: load average: 14
Apr 25 02:16:01 WKPSVNN-PTCC-2165 sendmail[2543]: rejecting
connections on daemon MTA: load average: 14
VNX File OE, The Installation process failed when the same IP was assigned to 67650918/ 693125 8.1.8.119
Platforms SPA and SPB.
VNX File OE, A bug check was initiated on the Data Mover when PMDV was 59553716/ 651820 8.1.8.119
SnapSure deleted while the prefetch I/O still exists.
VNX File OE, The destination file system replication or migration was 63316290/ 669268 8.1.8.119
SnapSure corrupt, but the source file system was clean.
VNX File OE, When the SavVol was larger than 2TB, auto-extension could not stop 69343968/ 711230/ 712801 8.1.8.119
SnapSure until all the space in a pool was consumed.
VNX File OE, A Data Mover bugchecked with error code 0x00009d5fd7. 61627400/ 652203 8.1.8.119
Storage
VNX File OE, A server exceeded the timeout value during a boot operation. 60532776/ 652204 8.1.8.119
Storage
VNX File OE, The creation of a 2TB file system from ViPR failed. 64298388/ 654578/ 655198 8.1.8.119
System
Management
VNX File OE, Thousands of defunct sysadmin accounts were created on the 64507704/ 661838/ 665714 8.1.8.119
System Control Station.
Management
VNX File OE, An NFS export created on a VDM appeared in the Unisphere 64979880/ 666806 8.1.8.119
System GUI, but then disappeared later.
Management
VNX File OE, Five weekly historical statistics files were found in the 62954784/ 669470 8.1.8.119
System /nas/jserver/sdb/control_station/data_movers/fs_capacity/
Management directory, instead of the expected files.
VNX File OE, A Data Mover failover command failed with an unclear error 64647466/ 692625 8.1.8.119
System message, Execution failed: valid_entry: Routine failure.
Management [EXTRACTOR.set_entry.
VNX File OE, After a VNX for File upgrade, the CIFS server was no longer 68106062/ 698162 68191680/ 8.1.8.119
System available. 700842/ 700096
Management
VNX File OE, The nas_fs -info -all command failed with a backtrace. 67458600/ 699981/ 701067 8.1.8.119
System
Management
VNX File OE, A MoverStats statSet="NFS-All" XML API query did not return 706255 8.1.8.119
System the "v3aai" counter.
Management
VNX File OE, ViPR SRM Client was unable to retrieve NFS v4.1 statistics via 717380 8.1.8.119
System XML API.
Management
VNX File OE, SRM queries caused frequent Control Station reboots due to an 65172254/ 699436/ 699435 8.1.8.119
System out-of-memory condition.
Mangement
VNX File OE, A user encountered an error stating the following when 703739 8.1.8.119
System upgrading from a previous version of code:
Management Error: File system has mount record present, but is not in use in
filesys table.
Error: File system is found not in use, yet there is rw/ro server in
its entry in $NAS_DB/volume/filesys file.
VNX File OE, The /nas/log/cmd_log file showed that the nas_storage -sync 60933588/ 667347 8.1.8.119
System command was executed every 5 minutes.
Management
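As a generic way to confirm the frequency of such entries (standard tools, not a documented procedure):
    grep 'nas_storage -sync' /nas/log/cmd_log | tail -20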
VNX File OE, Symmetrix poller shut down and never restarted. 63091868/ 696681 8.1.8.119
System
Management
VNX File OE, UDoctor could not take the correct recommended action. 64430016/ 715296 8.1.8.119
UDoctor
VNX File OE, When executing cpio -m on a Solaris client to migrate files to an 63811662/ 655693 8.1.8.119
UFS NFS share folder, the modified times of the files were not kept as
expected.
VNX File OE, A customer encountered a bugcheck with a divide exception. 65637828/ 676161 8.1.8.119
UFS
VNX File OE, A customer encountered a bad bn error on the destination side 60825130/ 680460 8.1.8.119
UFS when performing an offload copy operation.
VNX File OE, When creating a fast clone (for a vmdk image), the Data Mover 66254656/ 687722 8.1.8.119
UFS hung, or returned with a message, sync: no progress.
VNX File OE, In the time zone of UTC+14 (or UTC+13 during daylight saving 68621960/ 700810 8.1.8.119
Unisphere time), when upgrading the Unisphere File software, the post-
upgrade check failed with "Error 3501: Storage API
code=4562: SYMAPI_C_CLAR_CIL_FAILURE Failure during
Clariion Interface Layer's attempt to obtain data from Clariion
array". The control station "nas_storage -c -a" command also
failed, with the same error. The naviseccli command
autotiering -info -schedule (from the control station or to the
array) failed with: -GMTOFF:Value 840 is greater than
maximum. The maximum value is 780.
VNX File OE VNX OE for File 8.1.6.96 did not include the Cabinet-level 779900, 708754, 708759, 8.1.6.101
disaster recovery feature. VNX OE for File Cabinet-level disaster 708756
recovery utilizes RecoverPoint, MirrorView, or SRDF
technologies to replicate the underlying storage of File
resources to a secondary array. It provided NAS file system and
VDM remote replication to a secondary site by implementing
Data Mover level replication and Cabinet-level failover. This
allows for recovery and continued business operation in the
event of a disaster to the primary array.
VNX File OE, A storage pool reached an out of space condition resulting in a 717000 8.1.6.101
System Data Mover panic when a file system whose name contained a
Management space tried to use the pool. This file system was not
automatically unmounted after the panic which resulted in
recurring panics on the Data Mover.
VNX File OE Migration failed because a Replication V2 session could not be 65997274/ 69876844 8.1.6.96
created due to a mismatch of file attributes.
VNX File OE A customer encountered a page fault error when creating a 61554224/ 678167 8.1.6.96
16TB file system using MVM.
VNX File OE Clients were not able to log onto the server using NTLMSSP 659697 8.1.6.96
authentication after changing NTLM Authentication Level
Security policies to:
0 LAN Manager Auth Level: Send LM & NTLM responses
2 LAN Manager Auth Level: Send NTLM response only
VNX File OE An access rights issue occurred when the facility to use more 63892708/ 661114 8.1.6.96
than 16 groups in an NFS credential was used and the inherited
parent ACL included an ACE with Creator_Owner/Creator_Group.
VNX File OE Users could not configure FT files larger than the current 60261046/ 658493 8.1.6.96
maximum size of 100MB.
VNX File OE The DNS records for a CIFS server interface were not updated in 62112774/ 654895 8.1.6.96
the DNS server secured zone. DNS updates for the CIFS interface
failed.
VNX File OE An SP reboot occurred when converting a UNIX mount path into 63917158/ 654476 8.1.6.96
Unicode.
VNX File OE When a PFS was created with a size that was not a multiple of 63685744/ 651226 8.1.6.96
1MB, the nas_migrate creation for that PFS failed on the
Create Replication step.
VNX File OE A file rename occasionally resulted in an incorrect audit log 63603546/677639 8.1.6.96
entry.
VNX File OE When incorrect host names were present in the export options, 64529252/661903 8.1.6.96
name resolution was executed every time the export options
were checked.
VNX File OE The Data Mover hung when a file system was unmounted. 64873316/ 672774 8.1.6.96
VNX File OE An SMB2 client received a STATUS_FILE_CLOSE message on file 60876796/ 654183 8.1.6.96
operations.
VNX File OE The Data Mover experienced multiple "Invalid Opcode 651824 / 58854686 8.1.6.96
exception. Virt ADDRESS: 0x0001be2471" bugchecks on routine
calls from "UFS_FileSystem::findExistingNode()" after
experiencing file system corruption.
VNX File OE A SavVol used for either SnapSure or Replication was 711230/ 69343968 8.1.6.96
automatically extended due to reaching the HWM. However,
even if the SavVol had sufficient free space after the auto-
extension, if the SavVol size was larger than 2TB, the auto-
extension kept repeating automatically until the size reached
16TB or all of the free space in the pool was used if there was
not enough free space to reach 16TB. There were many
informational messages similar to:
CS_PLATFORM:SVFS:INFO:20:::::Slot2:1424053636:
ckpt_3Xday_for30days_054 autoextension succeeded. in the
sys_log.
VNX File OE A file system was created directly on top of meta volumes that 704369 8.1.6.96
did not belong to any pools. A command to extend this file
system by specifying another volume failed with the message:
Error 2237: Execution failed: Segmentation fault: Operating
system signal. [FS_EXEC.pre_exec].
VNX File OE The server_mount command did not validate command options. 506271 8.1.6.96
The command attempted to run, even with invalid (misspelled)
options, and did not work.
VNX File OE When adding two volumes with the same name, the Data Mover 54676948/ 613383 8.1.6.96
(DM) performed a GP exception bugcheck.
VNX File OE The trunk.LoadBalance parameter accepted invalid values, 57768114/ 613397 8.1.6.96
preventing the load balancing feature from working as
expected.
VNX File OE A large number of messages similar to the following 58237744/ 613403 8.1.6.96
accumulated in the system server_log:
NFS3:NfsGssRequest::doStreamProcessing: RPC protocol
violation, verf size <n>
In addition, the Data Mover (DM) performed an out of msgb
bugcheck.

VNX File OE During a file system failover, when a file system with DFS shares 59133330/ 613406 8.1.6.96
was replicated from version 5.6.49 (or earlier) to a later version,
the destination side running the higher version failed to
unmount, leaving it in a frozen state.
VNX File OE When the VNX control station accumulated multiple weeks of 58504016/ 618756 8.1.6.96
historical data, the Jserver ran out of internal memory and
restarted. This prevented accurate file system statistics from
appearing in Unisphere and interrupted the system from
generating appropriate file system alerts.
VNX File OE File storage rescan failed to diskmark LUNs with identifiers 61572438/ 625553 8.1.6.96
equal to or greater than 10000, preventing users from using the
LUNs as File-side disk volumes.
VNX File OE When automatic file system extension was enabled for a file 680653 8.1.6.96
system, the Data Mover (DM) on which the file system was
mounted was not eligible for remote DR.
VNX File OE VBB restore did not restore CIFS-related attributes if a file was 61065884/ 631900 8.1.6.96
created on an NFS share.
VNX File OE Setting the NDMP module log level to LOG_DBG2, caused a File 62110602/ 632115 8.1.6.96
side OE bugcheck with the following alert:
NDMP: 10: Thread rest004 DAR: waiting for finish of previous
file, thrdId=4, write_owner=3, bufpt=0x1c58a9ffb,
bufend=0x1c58aa000, bufpt string
VNX File OE When generating user quotas, Unisphere took an extended 61862190/ 631383/634746 8.1.6.96
period of time to populate the user name information.
VNX File OE When using Unisphere to view a VNX system running a 05.33 61727774 /636371 8.1.6.96
version of File OE targeting a VNX system running a 05.32
version of the File OE, the Search LUN page did not return any
results.
VNX File OE CIFS clients were unable to delete symbolic links with targets 61689146/ 639434 8.1.6.96
that pointed to absolute path (that is, paths beginning with /).
VNX File OE Using the File CLI server_stats cifs.user command to monitor a 62007318/ 643364 8.1.6.96
user name with the following format: DOMAIN\\\\username
caused a File OE "Page Fault Interrupt" bugcheck or caused
memory corruption.
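For reference, a monitoring session of that statistic path would be started along these lines (the Data Mover name, interval, and count are illustrative):
    server_stats server_2 -monitor cifs.user -interval 10 -count 6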
VNX File OE When configuring the LDAP Service to use Kerberos 60419804/ 646763 8.1.6.96
authentication, the vbb restore process caused a system
bugcheck.
VNX File OE Log files generated by the server_stats/nas_stats process and 60520098/ 647406 8.1.6.96
the statsd daemon were not properly cleaned up in the
directory /nbsnas/stat_groups. Eventually the number of files
grew to excessive size.
VNX File OE The checkpoint auto extension was occasionally skipped, 59015588/ 651818 8.1.6.96
producing log entries such as:
2013:96104480787::Slot4:1384521607: ckpt_h_f00385_001
autoextension skipped: used 89, HWM 90%.
This led to a condition where the save volume was full and the
checkpoint remained inactive.
VNX File OE In reverse replication operations, systems encountered an error 57418482/ 651819 8.1.6.96
that forced them to retry the operation. This produced
messages such as:
“priIp:10.10.16.78, secIp:127.0.0.1 not found";
"CmdReplicatev2ReverseSec::startingRep fsId: 0 not found";
and/or "VersionSetContext::removeReplicaContext() failed".
If the reverse replication operation issue was not resolved in a
timely manner, the Data Mover (DM) performed a "couldn't get
a free page" bugcheck or an out of memory bugcheck.
VNX File OE While performing a diagnostic check, USM displayed incorrect 63613130/ 651218 8.1.6.96
information about pool space.
VNX File OE A Data Mover (DM) bugcheck occurred with the following 652194 8.1.6.96
message:
Memcpy size 0x7fff8 is too large
VNX File OE A corrupted server message block (SMB) request contained a 59848028/ 652199 8.1.6.96
corrupted length value that was larger than the available data.
This caused a large memory allocation that failed and resulted
in a Data Mover (DM) bugcheck.
VNX File OE An incorrectly formatted Service Principal Name (SPN) 60584052/ 652201 8.1.6.96
identifier, lacking either the service component or hostname
value, caused a Data Mover (DM) to bugcheck.
VNX File OE When the system experienced a hardware resume checksum 57606950/ 652210 8.1.6.96
error, a call home message was not generated.
VNX File OE SRM queries caused frequent Control Station reboots due to an 65172254/ 679760 8.1.6.96
out-of-memory condition.
VNX File OE The VNX File CLI command nas_fs -info -all failed with a 67458600/ 699981/700863 8.1.6.96
backtrace.
VNX File OE The VNX File CLI command nas_pool -info id=42 failed with the 56276112/ 579242 8.1.6.96
following error: 55669828 / 587686 657156
Error 2237: Execution failed: Segmentation fault: Operating
system signal. [LINKED_LIST.first]
VNX File OE When working with a deduplicated file system, the VNX OE 58909740/ 662444 8.1.6.96
incorrectly reported that the file system was extended to its
maximum size and generated log messages such as the
following:
Couldn't reserve -1 blocks for inode 3976589. Error encountered
- NoSpace
The condition most commonly occurred on file systems where
the autoextension feature was enabled.
VNX File OE When attempting to mount a file system named with an invalid 64938512/ 663866 8.1.6.96
character, everything after the invalid character was ignored.
This sometimes led to unexpected behavior for the file system.
For example, MIXED file systems could be treated as native file
systems because the accesspolicy=MIXED setting was ignored.
VNX File OE After a standby operation in an IPv6 environment, when a VNX 64936446/ 668537 8.1.6.96
Data Mover (DM) took over as the primary system DM, its IP
address could not be accessed.
VNX File OE In an IPv6 environment on a VNX system with static routes 64936446/ 668538 8.1.6.96
configured, the VNX Data Mover (DM) experienced a "Page
Fault Interrupt" bugcheck.
VNX File OE In a statically routed environment with supernetted subnets, 64635042/ 668539 8.1.6.96
IPv6 networking did not work on a VNX Data Mover (DM)
because the IPv6 routing logic did not select the appropriate
source IP address for a given remote destination.
VNX File OE IPv6 static routes to supernetted networks were created using 64635042/ 674559 8.1.6.96
the "server_ip -route -create" command. Under some
circumstances the Data Mover (DM) did not recognize these
static routes until it was rebooted.
VNX File OE IPv6 traffic from a VNX Data Mover (DM) failed in particular 64635042/ 674560 8.1.6.96
configurations.
VNX File OE Unisphere returned an alert that the Block OE software version 66124824 / 676340 8.1.6.96
was not compatible with the File OE software version.
VNX File OE Occasionally during synchronous replication failover operations, 700784 8.1.6.96
the source Data Mover (DM) experienced a bugcheck because
the source LUN became read-only.
VNX File OE, Limitations for nas_halt. 566413, 566887 8.1.6.96
Replication
VNX File OE, Some SFTP client tools such as "Hiteksoftware JaSFTP" or "Avaya 63667372/ 652531 8.1.6.96
Security Aura" open two SSH channels per TCP connection to the SFTP
server and use both SSH channels to transfer files over the SFTP
protocol.
SSH multichannels are not supported by VNX. As a consequence,
DART may have some blocked SSHD threads, visible by running
'server_thread -list -pool SSHD'.
VNX File OE, When installing the ESRS IP Client on a Windows Server 2012 or 64018344/ 656572 8.1.6.96
Serviceability above system (which is not supported), the ESRS environment
ESRS check incorrectly reported the OS as Windows 2008 (which is
supported).
VNX File OE, The command /nas/bin/nas_cs -info reported the IPv4 Gateway 655932 8.1.6.96
System value incorrectly.
Management
VNX File OE, Startup and termination errors appeared when the 652523 8.1.6.96
System deduplication enabler was installed on the system but was not
Management in use.
VNX File OE Applying the latest security-related hot fix to any current VNX2 675307 8.1.3.79
release (8.1.0, 8.1.1, 8.1.2, and 8.1.3) caused upgrades using
USM (Unisphere Service Manager) to a later release to fail.
Upgrades using the CLI removed the hot fix, which had to be
re-applied after completing the upgrade.
VNX File OE When the pool name contained a space, the fs_extend_handler 610831 8.1.3.72
wrote error messages to apl_tm.log.
VNX File OE Oracle Siebel clusters crashed the application when trying to 60238902/634648 8.1.3.72
access an NFS export.
VNX File OE A file system went offline with corruption detected, and recovery 621305, 614391 8.1.3.72
(FSCK) reported corruption on the file system SuperBlock.
The file system marked a SuperBlock-related transaction complete
even though the modification to the SuperBlock had not been made
persistent. If a panic occurred before the system had a chance to
write the SuperBlock again (periodically, every 30 seconds), the
result was a lost-write condition on the SuperBlock, and thus the
corruption.
VNX File OE LDAPS could not be configured if the LDAP server certificate's 572543 8.1.3.72
'Subject Name' property did not contain the LDAP server URL and
the URL was present only in the certificate's 'Subject
Alternative Name' property.
VNX File OE ProcessMFCExtent error handling 583674 8.1.3.72
VNX File OE When all the initiators were visible in a storage group except one 584290 8.1.3.72
initiator (which is in the ~management storage group), the
Connect button on the Initiators tab on the Host List page
showed the error "Found no user created Storage groups to
reconnect on subsystem 184" and did not put the initiator in the
requested storage group.
VNX File OE All dedup-enabled LUNs on pool 0 went offline. 603082, 602262, 611854, 8.1.3.72
621903, 626956, 644194
VNX File OE The following error was displayed: 632514 8.1.3.72
DEDUP OFFLINE: Pool Gold, all luns are offline and need
recovery
VNX File OE Subfolder contents were not restored when the Restore 47151218/613372 8.1.3.72
option on the Previous Versions tab in Windows Explorer was
used with a C$ path.
VNX File OE A SYSTEM WATCHDOG panic, a GP exception panic, an 61243802/623117 8.1.3.72
'isAValidIndBlkBuf: bad bn' panic, a corrupted indirect block
panic, a mangled directory entry panic, a Page Fault
Interrupt panic, and/or a number of other symptoms occurred.
VNX File OE LDAP Service Connection configuration failed on an attempt to 52658438/613483 8.1.3.72
add a server certificate in PKCS7 format.
VNX File OE An install issue occurred when using a non-default account for 54342902/613486 8.1.3.72
the nasadmin role.
VNX File OE GUI showed a faulted DM but nas_server -l did not. 60433682/ 618249 8.1.3.72
VNX File OE A loss of CIFS access occurred and the Data Mover did not 642440 8.1.3.72
respond to any commands.
VNX File OE Users could not initiate ROBV on a RAID group with a number 62946704/ 8.1.3.72
larger than 256. 649286
VNX File OE When performing an EI recover, an incorrect message that the 53678158/ 8.1.3.72
Data Mover needed to be rebooted was received. 613386
VNX File OE When CS0 failed over to CS1 and the Pre-Upgrade Health 57195216/ 8.1.3.72
Check (PUHC) had been run on CS1, the PUHC reported a failure 613396
because it could not find some commands under
/nas_standby.
VNX File OE Reading a DHSM offline file could cause a Data Mover panic if 61225828/ 8.1.3.72
there were connection issues between the primary storage and 629252
secondary storage, or if another failure occurred while reading
remote data.
VNX File OE, Server lost connectivity to Domain Controllers on the reboot 60226850/617052 8.1.3.72
CIFS after adding a specific Service Principal Name to a server using
the server_cifs server_N -setspn -add command.
VNX File OE, CIFS clients failed to access a file due to a lock conflict. File 59688542/619315 8.1.3.72
CIFS system unmounts or freeze operations hung.
VNX File OE, When enumerating open sessions or open files on a CIFS server 6037558/620402 8.1.3.72
CIFS which belonged to a Data Mover with many servers, the reports
may have included information that belonged to other CIFS
servers.
VNX File OE, Users saw a panic when using a file system that was mounted 639006 8.1.3.72
CIFS with the ceppnfs option.
VNX File OE, Users encountered a 'bad bn' error code when performing an 632014 8.1.3.72
CIFS offload copy operation.
VNX File OE, YP requests hung indefinitely when the domain was removed. 61583520/638168 8.1.3.72
CIFS
VNX File OE, Could not access CIFS server with NULL session. 618728 8.1.3.72
CIFS
VNX File OE, Under certain circumstances, the internal FS layer returned an 641492 8.1.3.72
NFS invalid, uninitialized status on an NFS write. This resulted in
memory corruption.
VNX File OE, In NFS 4.1, the system may hang when it reconnects. 629752 8.1.3.72
NFS
VNX File OE, When the parameter nfs.manageGids was used, the system could 63288760/646572 8.1.3.72
NFS panic if the client accessed an NFS export as the root user.
VNX File OE, When the user mapping service was not started or not 59549160/621157 8.1.3.72
NFS configured on a Linux client, the client had access issues through
NFSV4.
VNX File OE, Inability to access the Data Mover via NFS V4 from a client using 58917286/621161 8.1.3.72
NFS UDP for NFSV4.
VNX File OE, A System Watchdog panic occurred with the error: 61114502/622178 8.1.3.72
NFS file: ../sched.cxx at line: 1909 : SYSTEM WATCHDOG
VNX File OE, When there was invalid syntax in the host list (access, ro, rw, or 638559 8.1.3.72
NFS root), the export was accepted without errors. When the export
options were processed, the host list was analyzed and the error
was only printed in the log, so it was not clear to the user that
the host list was wrong.
VNX File OE, The Data Mover hung when a file-system was mounted with the 54489848/641100 8.1.3.72
NFS option ntcredential and when replication was used.
VNX File OE, A file system was taken offline because of a reAcquire failure: 627477, 620510, 651332 8.1.3.72
System cbfs:CBFSA: UFS: 3: Unmounting fs 5: Reason: VBM processToBeModified
Management reAcquire failure
VNX File OE, A LUN was taken offline, and during LUN recovery, FSCK did not 619255, 596690, 608904, 8.1.3.72
System report corresponding corruption in the CBFS SliceMap metadata. 610756, 618451, 618971,
Management 637794, 639663, 644557,
646120
VNX File OE, SP panicked due to IOD watchdog timeout. 601275, 615600 8.1.3.72
System
Management
VNX File OE After changing the admin password for a CIFS client connection, 568365 8.1.2.51
the original credential was retained in the Kerberos cache for
the file system. This could cause a password mismatch and
make the CIFS client unable to access CIFS data.
VNX File OE Windows XP (and earlier) clients cannot navigate directories 591553 8.1.2.51
created with newer clients (for example, Windows 8 or
Windows 2012 server OS) that support the SMB3 protocol.
VNX File OE The Windows Event Viewer could not open a VNX security audit 591565 8.1.2.51
log file that was previously opened in UNIX (NFS).
VNX File OE Unable to access CIFS shares from Windows 7/Windows 2008, 592737 8.1.2.51
although Windows XP/2003 clients could access the shares
without problem. Unable to unmount checkpoint file systems
that were mounted in a VNX virtual data mover (VDM)
environment. The unmount process was interrupted when a CIFS
client accessed the checkpoint through the .ckpt directory.
VNX File OE In Windows, previous versions of deleted files cannot be 593570 8.1.2.51
retrieved with the Restore Previous Version option if the
widelinks feature is enabled for a CIFS server. Deleted files can
only be retrieved through the .ckpt directory.
VNX File OE When the VNX default server names (for example, server_2) 598131 8.1.2.51
are renamed (for example, to server_hr), the
migrate_system_conf command generates the following error:
Error! Invalid source <movername> is provided
VNX File OE A VNX File OE reboot occurred after a failover with a message 603272 8.1.2.51
similar to the following:
GP exception. Virt ADDRESS: <addr>. Err code: 0
VNX File OE When the Windows audit auto-archive feature was enabled for a 607285 8.1.2.51
VNX virtual data mover (VDM) but disabled on a VNX data
mover (DM), auto-archive did not work correctly after:
• Restarting the DM
• Unloading/loading the VDM
VNX File OE A VNX Data Mover (DM) experienced a "couldn't get a free page" 602155 8.1.2.51
bug check.
VNX File OE The Data Mover experienced a "GP exception" bug check. 595437 8.1.2.51
VNX File OE When permanently unmounting a split-log file system and all its 52621636/ 8.1.2.51
checkpoints, a VNX Data Mover (DM) failover failed with the 597422
following error:
replace_ufslog: failed to complete command
VNX File OE When one or more Data Movers (DMs) are powered off or 600826 8.1.2.51
pulled out from the system, the nas_rdf -restore function
finishes without returning an error code.
VNX File OE Windows 8 or 8.1 and W2012/R2 servers are occasionally not 602082 8.1.2.51
allowed to access a CIFS share on a VNX Data Mover (DM)
when browsing the server UNC path \\<server or IP>.
A pop-up window appears with a \\server\share is not
accessible message.
VNX File OE The Data Mover experienced a rolling panic when the nas_fs - 602139 8.1.2.51
translate command was run on the Control Station to convert
the access policy of a file system from UNIX or SECURE to
MIXED.
A workaround involved removing the following option from the
boot.cfg file:
accesspolicy=MIXED
VNX File OE Avamar backup fails, and server log is filled with alert messages. 602168 8.1.2.51
VNX File OE NFS mounts did not complete successfully. NFS mount threads 604117 8.1.2.51
were hung. If all of the NFS mount threads were hung the Data
Mover could panic.
VNX File OE When a VNX Data Mover (DM) was associated with a large 605740 8.1.2.51
number of file systems, and the file systems contained a large
number of directories that remained open or were frequently
opened, VNX system memory could be exhausted, causing a
Data Mover (DM) bug check.
VNX File OE When a SavVol is unavailable at boot time, any relevant 607082 8.1.2.51
Production File Systems (PFS) are mounted, invalidating any
associated checkpoints.
VNX File OE On a Data Mover (DM) with several concurrent backup sessions 605163 8.1.2.51
running for a long time, it was possible for the DM to reboot.
VNX File OE, The VNX System Management Daemon crashes with an 601412 8.1.2.51
System OutOfMemoryException message recorded in the
Management /nas/log/nas_log.al.mgmtd.crash file.
VNX File OE, FC port did not function after P2P to tape (DDR) was attempted. 587501, 591030 8.1.2.51
System
Management
VNX File OE When using Filemover (DHSM), files with a modification time 53114380 8.1.1.33
earlier than 1970/01/01 could not be recalled. /563271
VNX File OE, The Windows Quotas view of a CIFS share was not displayed from 52223586 8.1.1.33
CIFS one VNX when the drive was mapped. The user quota /554095
information was missing (using the Windows MMC Quota
management tool) when several CIFS users were mapped to the
same Unix ID.
VNX File OE, RecoverPoint Failover Consistency Group failed in both 563551, 571761 8.1.1.33
RecoverPoint directions
FS
VNX File OE, DART panicked while downloading a large file (more than a few 55032486 8.1.1.33
Security GB) using SCP (secure copy over ssh). /578783
VNX File OE, SECURITY: Oracle JRE/JDK Multiple Vulnerabilities (CPU-FEB- 541017, 543319 8.1.1.33
System 2013) - UNIX/Linux – JRE.
Management
VNX File OE, Data Mover completely hung and was unresponsive. FileMover 56871206 8.1.1.33
System (DHSM) users using the partial recall policy could experience a /580075
Management hang if they were reading a file that was larger than 4GB.
VNX File OE, Temporary Checkpoints flooded the Control Station. Users were 570984 8.1.1.33
System unable to create their own checkpoints or file systems.
Management
Unisphere Management Server crashed when performing host polling. 09046266/935385 1.3.9.1.0236-1
Backend/CLI
Unisphere/CLI The pool expansion cancellation failed because an internal LUN 80444558/847075 1.3.9.1.0236-1
failed to be created.
Unisphere/CLI Attempting to create a Certificate Signing Request (CSR) with a 81978348/858981 1.3.9.1.0236-1
Common Name more than 40 characters failed.
Unisphere/CLI An incorrect call home event (a31-Enclosure not 83906062/882024 1.3.9.1.0236-1
capable of changing speed to match the
current loop speed) was generated or sent.
Unisphere/CLI The naviseccli command "environment -list" did not 06009014/84063320/ 1.3.9.1.0236-1
work when an enclosure number greater than 8 was specified. 884573
Unisphere/CLI Invalid Write Hit Ratio was displayed. 85098392/893642 1.3.9.1.0236-1
Unisphere/CLI Incorrect SnapView sourceWWN displayed in 06538633/893993 1.3.9.1.0236-1
SPX_arrayconfig.xml in SPcollect when more than two snapshot
sessions occurred in a system.
Unisphere/CLI Attempting to start NTP Time Synchronization failed. 07349775/910471, 1.3.9.1.0236-1
86414586/908597
Unisphere/CLI Could not use an SSL certificate that contained a wildcard for a 10666856/957828 1.3.9.1.0236-1
storage processor.
Unisphere GUI When exporting .csv files from Unisphere for file performance, 78848432/827272 1.3.9.1.0236-1
the exported units were incorrectly labeled.
Unisphere GUI In Unisphere, reclaim schedules were incorrectly displayed on 828220 1.3.9.1.0236-1
the "File Systems Checkpoints" window.
Unisphere GUI More than 16 disks could not be added manually when 79625942/830694 1.3.9.1.0236-1
expanding a storage pool but could be added automatically.
Unisphere Off When USM's DAE Installation Wizard encountered a bus full of 81627300/854186 1.3.9.1.0236-1
Array Tools enclosures, it ignored any remaining buses, preventing USM
from displaying the proper enclosures.
Unisphere Off USM failed to correctly display upgrade or conversion paths for 81473664/863045 1.3.9.1.0236-1
Array Tools a VNX5200 with 3 Data Movers, or a VNX5400 or VNX5600 with
3 or 4 Data Movers.
Unisphere Off When a Unisphere Central system was in an array's domain, 83437464/881211, 876800 1.3.9.1.0236-1
Array Tools USM's Firmware Download Wizard failed with the error
message "Could not retrieve catalog. Reason:
Xml file not found".
Unisphere Off USM intermittently encountered problems at startup, showing 971156 1.3.9.1.0236-1
Array Tools an error in the Advisories or USM Upgrade icons.
Unisphere Replication destination SavVol extension failed and was not 99999999 / 901986 1.3.9.1.0217-1
retried. The replication session stopped transferring data.
Unisphere After a file system was deleted by using the -reclaim option, 6911919 / 900398 1.3.9.1.0217-1
dbchk reported the following error: Error: Volume 7878
on server server_3 is missing in Control
Station volumes database.
Unisphere NDMP backups randomly failed. 6200178 / 897358 1.3.9.1.0217-1
Unisphere The VDM move command failed with the following: Error 83437886 / 896524 1.3.9.1.0217-1
2237: Execution failed:
filtered_path_valid: Precondition violated.
[MOUNTPOINT_ENTRY.header].
Unisphere SP panicked with the following: BugCheck C2, 72349244 / 887979 1.3.9.1.0217-1
{0000000000000007, [ BugcheckCode: C2
Definition: BAD_POOL_CALLER ]
Unisphere NDMP checkpoint creation intermittently failed with error: 81207880 / 868823 1.3.9.1.0217-1
Needed resource unavailable, try later.
Unisphere Poor performance of NDMP operations on a gateway system 80044620 / 847929 1.3.9.1.0217-1
with a Symmetrix backend caused frequent database lock
contentions.
Unisphere XML API Volumes query time exceeded 5 minutes and caused 80044620 / 847926 1.3.9.1.0217-1
an SRM timeout.
Unisphere SPA responded slowly to CLI getagent and ndu -list 76185008 / 797796 1.3.9.1.0217-1
commands.
Unisphere No File data was returned to CIM client. Multiple errors were 85387966 / 901666 1.3.9.1.0217-1
observed in /nas/log/smis/CIMNASPlugin.log.
Unisphere After a file system was deleted by using the nas_fs - 81345534 / 867309 1.3.9.1.0217-1
delete -reclaim command, nas_task reported the
following: In Progress: Operation is still
running.
Unisphere After a file system was deleted by using the nas_fs - 81345534 / 860284 1.3.9.1.0217-1
delete -reclaim command, nas_task reported the
operation failed.
Unisphere, SPA was unmanaged, and a hardware exception was generated 73850044 / 763820 1.3.9.1.0231
CLI on the SP.
Unisphere, Pool LUN creation in Unisphere occurred slowly in large 79607490 / 829849 1.3.9.1.0231
CLI configurations.
Unisphere, Migrating a LUN in a RAID group with the power saving feature 80804824 / 851434 1.3.9.1.0231
CLI enabled resulted in a faulted migration session.
Unisphere, When running the CLI backendbus -analyze command, 81846812 / 855986 1.3.9.1.0231
CLI the speed was incorrectly reported as 0 instead of 12 Gbps.
Unisphere, CLI Duplicate storage group names occurred. 6173226 / 906227 1.3.9.1.0217-1
Unisphere, CLI LUN migration issues occurred after upgrading to Unisphere 5498496 / 893482 1.3.9.1.0217-1
version 1.3.9.1.184.
Unisphere, CLI LDAPS login failed when a certificate in the certificate chain was 855344 1.3.9.1.0217-1
signed by the MD5 algorithm.
Unisphere ESRS IP Client / Management Utility's Add System always failed 841501 1.3.9.1.184
with Error Processing connection request.
Unisphere User could not power off a VNX1 array by using VNX2 77592216 / 807380 1.3.9.1.184
Unisphere.
Unisphere CallHome operation failed to report a reboot of both SPs with a 76620428 / 805738 05.33.009.5.184
bugcheck.
Unisphere A LUN Pool size was greater than the maximum parameter type 74995212 / 791420 05.33.009.5.184
value of 4,294,967,295 blocks. The LUN size was truncated to a
small number, making the LUN Pool size smaller than the used
LUN Pool size. The percentage usage of the Reserved LUN Pool
was an invalid value.
Unisphere MirrorView/S Attention state alert was not displayed in 71876700 / 741908 05.33.009.5.184
Unisphere’s Dashboard.
Unisphere, Analyzer could not be started. 74422332 / 775267 1.3.9.1.184
Analyzer
Unisphere, Could not run the naviseccli analyzer archivedump 76441094 / 807102 1.3.9.1.184
Analyzer, command against more than one nar file in Linux.
Block CLI
Unisphere, A "The destination LUN is not available for 791829 1.3.9.1.184
Block CLI migration" error occurred when trying to perform a migration
operation by using naviseccli.
Unisphere, LUN migrations were leaving residual internal private LUNs. 78233834 / 76526568 / 1.3.9.1.184
Block CLI 816959
Unisphere, ManagementServer restarted intermittently. 64755754 / 686500 / 781866 1.3.9.1.184
Block CLI
Unisphere, User could not specify a Customer Contact email address field 77133490 / 803817 1.3.9.1.184
Block CLI that had a leading underscore or a leading dash.
Unisphere, If a drive was faulted and a hot spare was activated, the NQM 72595066 / 807560 1.3.9.1.184
NQM, Block policy stopped and restarted on every poll cycle.
CLI
Unisphere, After an upgrade, enabling QoS caused a high LUN response 78713486 / 823234 1.3.9.1.184
NQM time.
Unisphere, Unisphere Central incorrectly reported performance impact 72368842 / 789286 1.3.9.1.184
Unisphere every day at 10:30 and 17:00 PST. 75860206 / 798966
Central
Unisphere, If a Data Mover in a primary-site VNX had a standby Data Mover 797255 1.3.9.1.184
VDM configured, and the standby Data Mover failed, VDM MetroSync
MetroSync Manager did not trigger a failover for network failures or file
Manager system failures.
Unisphere, A Data Mover in a primary-site VNX had a standby Data Mover 797250 8.1.9.184
VDM configured. During a local Data Mover failover, if the file system
MetroSync service failed but the network interface was available, VDM
Manager MetroSync Manager could trigger a failover without waiting for a
local Data Mover timeout.
USM "View System Config" in USM failed with the following error: 77044638 / 804774 1.3.9.1.184
Error: Generating Config Report Could not
capture the storage system configuration
XML file from selected system.
USM The Check System Upgrade Readiness test reported an incorrect 75939780 / 795398 1.3.9.1.184
power supply of 110_220_op.
USM, While using VIA to set a host name with FQDN, communication 736612/740247 1.3.9.1.155
Unisphere Off with the VNX repeatedly failed. 8.1.9.155
Array Tools
Unisphere Some new features/changes do not take effect in the Unisphere 746018 05.33.009.5.155
GUI after the array is updated.
Unisphere An error message was returned when generating a certificate 64280272/ 684304 05.33.008.5.119
signing request via the Setup page.
Unisphere If Snapshots were taken on dedup enabled LUNs in a pool, the 706194 05.33.008.5.119
GUI incorrectly showed values in the Snapshot Allocations and
Snapshot Subscriptions fields under LUN properties.
Unisphere Analyzer CLI commands would not dump the pool's performance 66484864/ 682179 05.33.008.5.119
Analyzer statistics data if the pool only had RLP (Reserved LUN Pool)
LUNs.
Unisphere Analyzer did not show LUN utilization when it reached 100%. 69878898/ 720666 05.33.008.5.119
Analyzer
Unisphere When Unisphere Central is in the system’s domain, the GUI's 64983040/ 686352/ 673140 05.33.008.5.119
Analyzer, Statistics/Retrieve Archive page did not display the NAR files for
Unisphere retrieval. Users were required to use CLI as a workaround to list
Central and retrieve NAR files.
Unisphere, CLI The naviseccli storagegroup XML output did not report the host 65503296/ 678064 05.33.008.5.119
name within the XML elements.
Unisphere, CLI Creating or expanding a pool failed if the Management Server was 69873438/ 714173 05.33.008.5.119
restarted.
Unisphere, Unisphere Host Agent logged message: 65814450/ 678938 05.33.008.5.119
Host EV_RAIDGroupState::RAIDGroupOpPriorityConvert -
Enum out of range, -1.
Software, CLI,
Utilities
Unisphere, The Host Agent service could not be started successfully on Red 67949012/ 696546 05.33.008.5.119
Host Hat Enterprise Linux 7.
Software, CLI,
Utilities
Unisphere An unknown exception error message was encountered when 66928156/ 713632 05.33.008.5.119
QoS Manager stopping NQM.
Unisphere The User Names column of the User Quotas List in Unisphere 58851466/ 696532 8.1.8.119
populated with names very slowly.
Unisphere The Tree Quotas drop-down list on the User Quotas GUI page was 71623624/ 739543 8.1.8.119
missing existing tree quotas.
Unisphere If Unisphere with the Analyzer was enabled and USM was 560704, 649788 05.33.006.5.096
running on the same host, Analyzer Real-Time statistics
sometimes failed to update.
Unisphere, ESX server lost its path to the storage system. 63964894/651722 05.33.006.5.096
NQM, VNX
Block OE
Unisphere Using Unisphere to create MirrorView secondary LUN images 59188084/ 654433 1.3.6.1.0096-1
failed for LUNs with mixed RAID types if no disks were available.
Unisphere CLI An incorrect error message - I/O module - was reported when 61304630/ 654455 1.3.6.1.0096-1
a mezzanine card was in a faulted state.

Unisphere CLI Error 0x60000500 was reported when restoring an offline 62707090/ 654457 1.3.6.1.0096-1
Storage Pool that used RAID 3 or RAID 6.

Unisphere CLI SNMP MIB read requests reported incorrect information about 63916164/ 655157 1.3.6.1.0096-1
VNX2 arrays.

Unisphere CLI Sending test SNMP traps produced core dumps. 63316528/ 654719 1.3.6.1.0096-1
Unisphere LDAPS did not work if the LDAPS server's certificate contained a 65187126/ 666544 1.3.6.1.0096-1
host software certificate chain.

Unisphere, Unisphere, the Unisphere Service Manager (USM), or the VNX 697449/ 697215/699525 1.3.6.1.0096-1
host software Installation Assistant (VIA) did not work correctly because the
Java Runtime Environment (JRE) on the host was not supported.
Unisphere In the Unisphere "Storage Group Advanced Properties" panel, in 642279 / 63020558 1.3.6.1.0096
cases where multiple iSCSI initiators were in different storage
groups but were connected to the same host, when users
sorted the host's initiators before attempting to remove a
particular initiator from a specific storage group, all of the
initiators associated with the storage group were removed.
Unisphere Could not create a FS on Windows 2012 with Chrome. 642579 1.3.6.1.0096
Unisphere CLI command response time was slow in some conditions. 60933588/ 619647 1.3.6.1.0096
Unisphere When running the getlun Block CLI command to retrieve disk 58886038/ 624200 1.3.6.1.0096
statistics, the returned Prct Idle and Prct Busy values were
inaccurate.
Unisphere In some circumstances, the Unisphere Search LUN page 61727774/ 628463 1.3.6.1.0096
returned no results for searches, even when the Search criteria
matched existing LUNs.
Unisphere The Tree Quota usage did not display properly in Unisphere, 61551390/ 631797 1.3.6.1.0096
when the soft and hard limit were set to zero.
Unisphere Unisphere did not allow users to create hidden CIFS shares. 61918032/ 631865 1.3.6.1.0096
Unisphere During the user login/authentication process, user accounts 58357456/ 652208 1.3.6.1.0096
were locked out after three incorrect attempts.
Unisphere When a CIFS share was created with the VNX File CLI and then 498866/ 654499 1.3.6.1.0096
modified with Unisphere, some configuration options were not
available.
Unisphere In Unisphere, the "Copy to Hotspare" option was grayed out for 62067244/59395272/ 1.3.6.1.0096
disks associated with mixed RAID type storage groups. 606329/654723
Unisphere The Unisphere Analyzer application stopped without apparent 64649720/ 660841 1.3.6.1.0096
reason.
Unisphere, Unisphere and USM did not operate correctly in Chrome. 635310 1.3.6.1.0096
USM
Unisphere Unable to create a network device using the Unisphere wizards 612485, 614365 1.3.3.1.0072-1
or the Unisphere management screens.
Unisphere Unisphere and the CLI did not accurately display the 60-drive 563290, 564080 1.3.3.1.0072-1
Disk Array Enclosure (DAE) information after replacing a 25-
drive DAE with a 60-drive DAE.
Unisphere Creating FAST Cache could fail if created at the same time as 561400, 597279 1.3.3.1.0072-1
the storage pool.
Unisphere Unisphere would sometimes hang after connecting a newly- 563392, 564522 1.3.3.1.0072-1
created LUN to a storage group when the creation of more than
100 LUNs was in process.
Unisphere Unisphere hung after attaching LUNs to a storage group on a 574400, 574986 1.3.3.1.0072-1
VNX5800 with a large configuration.
Unisphere Users were unable to log in with LDAP accounts when using 65187126/665689 1.3.3.1.0072-1
certificate chains.
Unisphere From within the GUI, a user could create a VNX Snapshot using a 637800, 618792 1.3.3.1.0072-1
single quote mark in the name (for example, te'st), but was then
unable to delete the snapshot using that name. The user
received the error message: te'st - Cannot destroy
snapshot. the specified snapshot does not exist.
Unisphere The Unisphere GUI Get Diagnostic Files option truncated 555002, 537437, 628171 1.3.3.1.0072-1
transfers of files larger than 4 GB.
Unisphere Unauthenticated HTTP requests were sent to Tomcat. Error 578520 1.3.3.1.0072-1
pages were displayed with HTTP 404 for non-existent addresses.
Unisphere Failed to change the SP name. 8215634/ 59420, 594205 1.3.3.1.0072-1
Unisphere FAST Cache creation failed at 14%. 58483984/597279 1.3.3.1.0072-1
Unisphere Issues with ECOM could result in the following symptoms: 597295, 62770, 637243 1.3.3.1.0072-1
1. CQ 601785: hardware exception dump. When two GUI
sessions were opened on the same laptop, a hardware exception
dump could be generated.
2. CQ 602079: ECOM returned the inaccurate message "No
provider found to handle this request".
3. CQ 606857: A memory leak occurred due to BSAFE/SSL.
Unisphere Unable to log in to a Block array using an LDAP account. 53681376/ 600034 1.3.3.1.0072-1
Unisphere The Peer Boot State showed UNKNOWN in Unisphere. 59060556 / 602760 1.3.3.1.0072-1
Unisphere LDAP timeout. 58765738/604355 1.3.3.1.0072-1
Unisphere Unable to view the Tiering info [LUN Properties/Tiering > Tier 58106360/604472 1.3.3.1.0072-1
Details].
Unisphere The Data Mover experienced an "updateDouble failure in 61897752/ 1.3.3.1.0072-1
relocateIndexSlot" panic. 632237
Unisphere LDAP did not work across multiple Unisphere storage domains. 6128260/ 622499 1.3.3.1.0072-1
Unisphere Not able to list the Symmetrix LUN View for the file systems in 615316 1.3.3.1.0072-1
Unisphere.
Unisphere The Edit and Use Configuration buttons were disabled on the 615749 1.3.3.1.0072-1
Available Configurations page of Unisphere.
Unisphere When creating a RAID Group with automatic disk selection 616360 1.3.3.1.0072-1
mode, the creation process failed with an error message
indicating that the RAID Group could not be created with the
selected drive types.
Unisphere Using Java 1.7 versions on a PC/host to access Unisphere 61766992/627950 1.3.3.1.0072-1
prevented a new language setting from taking effect.
Unisphere, Unisphere Host Agent, ConnectEMC, and VNX for Block CLI were 623176 1.3.3.1.0072-1
ESRS IP Client always installed on the system’s drive, even if the installation
path was changed to another drive.
Unisphere CQECC00606857-CIMOM_PerfCounter_Exceeded.dmp.zip was 576807, 59729 1.3.3.1.0072-1
Host software, generated during DAE testing on a secondary array.
CLI, and
Utilities
Unisphere If the host timezone was 'UTC + x', the command 'naviseccli 616366, 631684 1.3.3.1.0072-1
Host analyzer -status' failed with an error.
Software, CLI,
and Utilities
Unisphere Unable to configure LDAPS on Off Array Management server. 58009364/593000, 591485 1.3.3.1.0072-1
Host
Software, CLI,
and Utilities
Unisphere Fixed the output of the 'Get Drives for a Tier' audit log message. 58005674/ 1.3.3.1.0072-1
Host 613959
Software, CLI,
and Utilities
Unisphere The Storage Processor (SP) was unmanaged when the SR numbers: 51019358, 1.3.3.1.0072-1
Host Management Server restarted due to improper memory 53107146, 56043768,
Software, CLI, manipulation. 56047852, 56774788,
and Utilities 58603098, 59247626,
59892166, 60874054,
60951330, 61083608
AR numbers: 528255,543184,
577835,78959, 586192,93053,
593077,96368, 603192,05252,
610054,616203,
618190,619026, 622270,
622015
Unisphere Using VASA with Unisphere sometimes resulted in a memory 59002892/616454 1.3.3.1.0072-1
Host leak due to the misuse of an internal string, where the
Software, CLI, management process eventually produced a dump file and
and Utilities restarted.
Unisphere Improper internal library management resulted in a restart of 58312648/ 622224 1.3.3.1.0072-1
Host the management process, during which the SP was temporarily
Software, CLI, unmanaged.
and Utilities
Unisphere Navisphere CLI (naviseccli) segfaulted and produced dumps. 60874280/623692 1.3.3.1.0072-1
Host
Software, CLI,
and Utilities
Unisphere Pool creation failed on Unisphere with Japanese language pack. 59815336/ 616387 1.3.3.1.0072-1
Host
Software, CLI,
and Utilities
Unisphere Storage Pool creation failed when a removed drive was 618360 1.3.3.1.0072-1
Host selected.
Software, CLI,
and Utilities
Unisphere The Fail Safe Network feature for bxgv2 devices could not be 647510 1.3.3.1.0072-1
Host tested by bringing down the ports using software commands.
Software, CLI,
and Utilities
Unisphere A blue screen error occurred in a Windows Server 2012 64-bit 632609 1.3.3.1.0072-1
Host environment with MPIO enabled when running the Unisphere Host
Software, CLI, Agent.
and Utilities
Unisphere When the power source was restored from the failure state, LUNs 539374, 646381 1.3.3.1.0072-1
Host may have stayed offline after the backend returned to the normal
Software, CLI, state.
and Utilities
Unisphere The Virtual Provisioning feature is in a degraded state. 616361 1.3.3.1.0072-1
Host
Software, CLI,
and Utilities
Unisphere Unexpected SP reboot. 619259 1.3.3.1.0072-1
Host
Software, CLI,
and Utilities
Unisphere, When the power source was restored from the failure state, LUNs 539374, 646381 1.3.3.1.0072-1
Virtual may have stayed offline after the backend returned to the normal
Provisioning state.
Unisphere, Unexpected SP reboot. 619259 1.3.3.1.0072-1
Virtual
Provisioning
Unisphere, Pool statistics are not updated after destroying LUNs. 612574 1.3.3.1.0072-1
Virtual
Provisioning,
Unisphere
Host
Software, CLI,
and Utilities
Unisphere After completing installation with the VNX Installation Assistant 566078 1.3.2.1.0051
(VIA), the system sometimes remained in an unfused state.
Unisphere, Unable to install ESRS IP Client version 1.3 when selecting proxy 600457 1.3.2.1.0051
ESRS IP Client server connection option.
Unisphere, In FAST Cache configurations, when a drive is removed from the 592310 1.3.2.1.0051
FAST Cache, enclosure and then reinserted after a hot spare has been
FAST VP activated, both the new hot spare and the removed disk appear
when you run the cache -fast -info CLI command or when you
view system information through Unisphere.
Unisphere The Unisphere SP Properties dialog box shows the speed of the 593974 1.3.2.1.0051
management port as 10Mbps, even if it is actually set to Auto.
Any modifications made in the SP Properties dialog change the
requested speed from Auto to 10Mbps.
Unisphere When using Unisphere version 1.3.0 to log in to a domain with a 598046 1.3.2.1.0051
legacy Celerra system, the client reports that the Celerra system
is “Not Logged in”.
Unisphere Unisphere off-array applications will not launch in Java 7 Update 595857 1.3.2.1.0051
45. This is caused by a known issue of Java 7 Update 45.
Unisphere When running VNX Unisphere, a Java warning appears saying 596623 1.3.2.1.0051
that future Java versions will not work with the application
Unisphere When opening a Unisphere Analyzer (NAR) file obtained from a 593280 1.3.2.1.0051
Analyzer VNX system with no defined pools or LUNs the following error
message is generated:
An unknown error has occurred.
Unisphere When VNX LUNs are connected to multiple hosts, searching 579127 1.3.2.1.0051
Analyzer Unisphere Analyzer (NAR) files based on a Host criteria does not
show all connections.
Unisphere, The USM registration wizard sometimes fails with the error: 595816 1.3.2.1.0051
Serviceability The version of management software installed on the storage system
does not support the storage system registration ….
Unisphere, The USM application stops at the VNX OE for File Installation 594553 1.3.2.1.0051
Serviceability Status stage of the Install Software wizard. It displays the
following message:
Retrieving data. Please wait ...
Unisphere The user received a security message when closing Unisphere if 568652, 570148 1.3.1.1.0033
using JRE 7u21 or later.
Unisphere When running a compression command, the following error 592568, 589781 1.3.1.1.0033
code was sometimes returned:
Output: Unrecognized Error Code: (0xe12d8110)
Unisphere IPv6 addresses appeared in the domain list when IPv6 was 575343 1.3.1.1.0033
configured on the CS and any operation that involved
setup_slot was performed.
Unisphere, When uninstalling the ESRS IP Client, the ESRS IP Client would 577404 1.3.1.1.0033
Serviceability, sometimes report that ConnectEMC, one of the ESRS IP Client
ESRS IP Client components, could not be uninstalled.
Unisphere Unisphere sometimes loads slowly or does not load completely. 538390 1.3.0.1.0015
Unisphere On a Windows 2012 server, Windows STOP errors can occur 519934 1.3.0.1.0015
Host software, when running Server Utility 1.3 or Host Agent 1.3.
CLI, and
Utilities
Unisphere There are missing columns in the UI table if the language setting 472986 1.3.0.1.0015
has been recently changed.
Unisphere, Two hosts with the same name were displayed in the host list in 548376, 566546 Power Path 5.7 SP2
Powerpath Unisphere.
USM USM was unable to retrieve build data from Dell. 13765890/995821 1.3.9.1.0239
USM The pop up notifications for array software upgrades and disk 749831 1.3.9.1.0152-1
drive upgrades can display at the same time. The user may need
more time to read both notifications.
USM An unexpected error occurred during USM Online Disk Firmware 644788 1.3.6.1.0096
Upgrade (ODFU).
USM Unisphere Service Manager (USM) showed incorrect information 63613130/ 651218 1.3.6.1.0096
about available pool space while performing a diagnostic check.
USM USM was unable to view certain information in the report page. 61047728/621079 1.3.3.1.0072-1
USM A Data Mover error occurred during Installation. 51004590/613374 1.3.3.1.0072-1
USM Customers are unable to complete an NDU via USM because of 643460 1.3.3.1.0072-1
a timeout.
USM Information in the Unisphere Service Manager (USM) Available 603183 1.3.2.1.0051
Storage tab does not show all bus/loop information associated
with the VNX configuration.
USM When running the Disk Replacement wizard in USM, the state of 576837, 577469 1.3.1.1.0033
a replaced disk remains unchanged.
USM, An unexpected error was received while running the Install DAE 574069 1.3.1.1.0033
Serviceability wizard. The error was seen during the Connect to SPB/SPA
Bus/Loop step. USM stopped the wizard with a message saying
a fault had been detected on the storage system.
USM, The Install Software wizard kept reporting a Control Station reboot 575966 1.3.1.1.0033
Serviceability timeout, and waiting for the suggested 3 minutes did not solve
the problem.
USM, The VNX for Block OE packages were not automatically 577921 1.3.1.1.0033
Serviceability committed after running the USM software wizards to perform
an upgrade.
USM An internal error appears during a language pack installation. 561544/551522 1.3.0.1.0015
Known problems and limitations

Category Details Description Symptom Workaround/Version

Category: VNX Block OE
Platform: VNX for Block. Severity: Medium. Tracking: 1030603
Description: A Dell API used by VNX SRS has been updated to TLS 1.2, causing ESRS to be unable to provision for the first time. For more information, refer to KnowledgeBase article 000541662.
Symptom: ESRS provisioning does not complete successfully.
Workaround: No workaround.
Exists in versions: All
Category: Unisphere
Platform: All. Severity: Medium. Tracking: N/A
Description: Unisphere Central releases of version 4.0 SP5 and earlier do not support VNX2 Unisphere version 1.3.9.1.0231.
Symptom: A TLS incompatibility between VNX2 version 1.3.9.1.0231 and Unisphere Central prohibits the array from rendering in Unisphere Central.
Workaround: Versions previous to 1.3.9.1.0231 are supported. Support for version 1.3.9.1.0231 will be added to a future Unisphere Central release.
Exists in versions: All Unisphere Central versions.
Category: VNX OE
Platform: All. Severity: Medium. Tracking: 905463
Description: USM and VIA do not load on clients running JRE 9 or later. For more information, refer to ETA 500891.
Symptom: Attempting to start USM or VIA fails. A blank screen is displayed.
Workaround: Use JRE 8.
Exists in versions: All
Category: VNX OE
Platform: All. Severity: Medium. Tracking: 909803
Description: Unisphere does not load on clients running JRE 9 or later. For more information, refer to ETA 500891.
Symptom: Attempting to start Unisphere fails. A blank screen is displayed.
Workaround: Use JRE 8.
Exists in versions: All
Category: VNX File OE, NDMP
Platform: VNX for File. Severity: High. Tracking: 950896
Description: While performing an NDMP backup, files that have a soft (symbolic) link whose name length is 1024 will not be backed up.
Symptom: Files that have a soft (symbolic) link whose name length is 1024 will not be included in an NDMP backup.
Workaround: Do not use a soft (symbolic) link whose name length is 1024.
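As an illustration only (not an EMC-documented procedure), such links can be located ahead of a backup with a standard shell one-liner; the path /mnt/fs1 is a placeholder for the file system being backed up:

    # List symbolic links whose file name is exactly 1024 characters long,
    # since these will be silently skipped by the NDMP backup.
    find /mnt/fs1 -type l | awk -F/ 'length($NF) == 1024'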
Category: VNX File OE, CIFS
Platform: VNX for File. Severity: Low. Tracking: 79510888/855078
Description: The secmap database is a permanent cache of all mappings between SIDs and UIDs/GIDs used by the Data Mover. The origin field for a secmap entry is for informational purposes only and may not always be set. An "unknown" origin does not indicate a security issue with the secmap entry.
Symptom: The CIFS secure mapping (secmap) database does not contain origin information for some entries. When these entries are listed, the origin shows as "unknown". For example, the message "Mapped from unknown" will be displayed.
Workaround: The value of the origin field for a secmap entry can safely be ignored. It is not necessary, and it is not recommended, to update or delete the secmap entries that show an "unknown" origin.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155, 8.1.8.132, 8.1.8.121
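For reference, the cached secmap entries (including the origin field described above) can typically be listed from the Control Station with the server_cifssupport command; a sketch, where server_2 is a placeholder Data Mover name and the exact option set may vary by release:

    # List the SID-to-UID/GID mappings cached in the secmap database;
    # entries whose origin shows "unknown" can safely be left alone.
    server_cifssupport server_2 -secmap -list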
Category: VNX File OE, VDM MetroSync Manager
Platform: VNX for File. Severity: Medium. Tracking: 892250
Description: If the source and destination sites are running different versions of VDM MetroSync, one site at v3.0.17 and the other site at a version earlier than v3.0.17, reversing a synchronous replication session fails.
Symptom: The following messages are received in VDM MetroSync Manager during a Clean operation on the remote site: Error 13422034949: Internal error. and Error 13431996587: Failed to reverse sync replication session.
Workaround: Perform one of the following actions:
• On the source, run /nas/sbin/syncrep/syncrep_clean_pool -name <pool_name> -skip_pool. Then run the reverse synchronous command on the destination.
• To avoid this issue, make sure that both source and destination sites have VDM MetroSync version 3.0.17 installed.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 3.0.17
Category: VNX OE
Platform: Unified. Severity: Medium. Tracking: 748971
Description: The Unisphere GUI may display stale information after an array software upgrade.
Symptom: Some new features/changes may not be correctly displayed by the Unisphere GUI after an array upgrade.
Workaround: Clear the Java cache after the array upgrade.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155, 8.1.8.119
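The Java cache can be cleared from the Java Control Panel (General > Temporary Internet Files > Delete Files). On hosts with Java Web Start installed, a command-line sketch, assuming a standard JRE installation:

    # Remove cached Java Web Start applications and applets so the
    # Unisphere client is re-downloaded from the upgraded array.
    javaws -uninstall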
Category: Install, Upgrade, VNX Block OE, VNX File OE
Platforms: All. Severity: Medium. Tracking: 652467
Description: A Block-to-Unified upgrade failed due to misconnected FC cables from one Data Mover to SPA and SPB.
Symptom: The Block-to-Unified upgrade failed.
Workaround: Follow the instructions in VIA to reconnect the FC cable and then continue with the upgrade. If the BTU upgrade fails later due to the ndf partition being offline, retry the upgrade.
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119
Category: VNX Block OE
Platform: VNX for Block. Severity: High. Tracking: 796010
Description: A LUN compression state is listed as faulted. Compression is not automatically restarted following resolution of a failure of the LUN being compressed.
Symptom: The user received the following message in the GUI: Compression encountered an I/O failure. Please resolve any issues with the LUN for compression to continue. (0x71658221)
Workaround: Trespassing the LUN will restart the compression.
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155
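The LUN can be trespassed from Unisphere or with naviseccli. A minimal sketch, assuming the classic trespass subcommand; the SP address and LUN number are placeholders:

    # Have the SP at <SP_IP> take ownership of LUN 42; per the workaround
    # above, trespassing the LUN restarts the stalled compression.
    naviseccli -h <SP_IP> trespass lun 42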
Category: VNX Block OE
Platform: VNX. Severity: High. Tracking: 789919
Description: A LUN may show Faulted status while deduplication is being enabled.
Symptom: Enabling deduplication failed.
Workaround: If the dedup migration progress increases, wait for the migration to complete. If the problem persists, contact customer support.
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155
Category: VNX Block OE
Platform: VNX for Block. Severity: High. Tracking: 775029
Description: The reserved space of a DLU is more than the pool can provide.
Symptom: The user sees an SP panic.
Workaround: 1. Find the storage pool whose consumed space is larger than its usable space. 2. Expand the storage pool or destroy some DLUs in the pool.
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155
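To identify such a pool, the per-pool capacities can be compared with naviseccli; a hedged sketch, where <SP_IP> is a placeholder and the capacity options shown are assumptions that may vary by release:

    # Compare user, consumed, and available capacity for every pool;
    # a pool whose consumed space exceeds its usable space is the one to fix.
    naviseccli -h <SP_IP> storagepool -list -userCap -consumedCap -availableCap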
Category: VNX Block OE
Platform: VNX for Block. Severity: Medium. Tracking: 757945
Description: A deduplicated LUN migration did not progress or was faulted. The user had trouble deleting the LUN.
Symptom: Deduplication failed to enable correctly on a LUN.
Workaround: Delete the private migration LUN.
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155
Category: VNX Block OE
Platforms: VNX. Severity: Medium. Tracking: 701406
Description: After removing a VNX enclosure, Unisphere may still show the enclosure as present.
Workaround: Try to restart the management server via the Unisphere Setup page. If the removed enclosure still appears, reboot the Storage Processor (SP).
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096
Category: VNX Block OE
Platforms: All. Severity: Medium. Frequency: Rarely, under a rare set of circumstances. Tracking: 686838
Description: A single VNX Storage Processor (SP) bugcheck occurs when a migration process attempts to start a new session on a storage pool LUN.
Symptom: A VNX Storage Processor (SP) bugcheck occurs during a migration process.
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096
Category: VNX Block OE
Platforms: All. Severity: Medium. Tracking: 649384
Symptom: The encryption percentage does not change.
Workaround: Look for the following activities in the system:
• Faulted disk
• Disk zeroing in progress
• Disk rebuild in progress
• Disk verify in progress
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096, 05.33.000.5.081, 05.33.000.5.079, 05.33.000.5.074, 05.33.000.5.072
Category: VNX Block OE
Platforms: All. Severity: Medium. Frequency: Occasionally. Tracking: 611558
Description: When migrating LUNs within the same deduplication domain, deduplication savings are lower than expected when the migration is complete.
Symptom: For more information, refer to KnowledgeBase article 176653.
Workaround: Avoid migrating LUNs within a deduplication domain. If you need to move the current LUN to a larger one, use the LUN expansion functionality instead.
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096, 05.33.000.5.081, 05.33.000.5.079, 05.33.000.5.074, 05.33.000.5.072, 05.33.000.5.051
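For the expansion alternative, a pool LUN can usually be grown in place with naviseccli; a sketch with placeholder values (verify the exact -expand syntax for your release before use):

    # Expand pool LUN 17 to 500 GB instead of migrating it to a larger LUN.
    naviseccli -h <SP_IP> lun -expand -l 17 -capacity 500 -sq gb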
Category: VNX Block OE
Platforms: All. Severity: Medium. Frequency: Rarely. Tracking: 616702
Description: When downloading firmware to drives using the Online Disk Firmware Upgrade (ODFU) process, if a drive in a RAID group is faulted during the download process, it can take an extended time (up to several days) for the download process to fully complete. ODFU shows that it is in the ACTIVATE state for the remaining drives in the degraded RAID group.
Symptom: In this case, the ODFU process is not hung but is paused waiting for the RAID group rebuild to complete.
Workaround: Wait for the RAID group rebuild to complete, and the ODFU process will automatically resume.
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096, 05.33.000.5.081, 05.33.000.5.079, 05.33.000.5.074, 05.33.000.5.072, 05.33.000.5.051
Category: VNX Block OE
Platforms: All. Severity: Minor. Frequency: Always. Tracking: 617430
Description: The VNX Security Configuration Guide lists steps required to change the Unisphere session timeout period. These steps only apply to VNX Unified and VNX File-only systems. VNX Block-only systems do not provide a way to adjust the Unisphere session timeout period.
Symptom: The timeout value for Unisphere on VNX Block-only systems cannot be changed with VNX Unisphere or the VNX for Block CLI.
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096, 05.33.000.5.081, 05.33.000.5.079, 05.33.000.5.074, 05.33.000.5.072, 05.33.000.5.051
Category: VNX Block OE
Platforms: All. Severity: Low. Frequency: Seldom. Tracking: 613348
Description: After a non-disruptive upgrade (NDU), the management port for one of the VNX system's Storage Processors (SPs) may transition to disconnected status.
Symptom: In rare cases, a Broadcom driver can cause issues when bringing up VNX system SP management ports after an NDU.
Workaround: Log in through the VNX system service port, then first disable and then re-enable the disconnected VNX SP management port. If disabling and then re-enabling the SP port does not work, try rebooting the SP.
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096, 05.33.000.5.081, 05.33.000.5.079, 05.33.000.5.074, 05.33.000.5.072, 05.33.000.5.051
Category: VNX Block OE
Platform: VNX for Block. Severity: Low. Tracking: 794267
Description: When an automatic failover is initiated in VMSM, the failover text output is not displayed in real time.
Symptom: The failover command reports an elapsed time longer than the timestamp in the Main Log window.
Workaround: Check the output of the commands via ssh only after the commands have finished.
Exists in versions: VDM MetroSync Manager 2.0.6
Category: VNX Block OE
Platforms: All. Severity: Medium. Frequency: Rarely, under specific circumstances. Tracking: 569147
Description: The Service Processor Enclosure (SPE) fault LED does not turn on after SPE battery removal.
Symptom: This issue occurs when both the xPE SPA battery and the DAE 0_0 SPB battery are removed. In this case, the fault LED on DAE 0_0 will assert, but the fault LED on the xPE will not. In this specific case, the fault LED should assert on the xPE because a second SPS on both the xPE and DAE allows cache to remain enabled.
Workaround: No workaround.
Exists in versions: All 5.33 versions.
Category: VNX Block OE
Platforms: All. Severity: Medium. Frequency: Rarely, under specific circumstances. Tracking: 568692
Description: Online Drive Firmware Upgrade (ODFU) will wait for a degraded RAID group to become healthy before it completes the firmware upgrade of all drives in that degraded RAID group.
Symptom: This problem occurs when using USM to install firmware on drives that are in a degraded RAID group. Some drives in the degraded RAID group may be shown as 'Non-qualified', while other drives won't be listed by USM and won't be upgraded. The ODFU screen may show 100% in the progress bar, although the installation status will still show 'In progress'.
Workaround: Repair the degraded RAID group so that it is healthy again and the firmware upgrade can continue and complete, or cancel the outstanding ODFU from USM.
Exists in versions: All 5.33 versions.
VNX Block OE (Platforms: All; Severity: Medium; Frequency: Rarely under specific circumstances; Tracking: 566151)
Description: An unexpected error is displayed during the online disk firmware upgrade (ODFU).
Symptom: When using the ODFU Wizard, if you cancel the operation after it has already started, the non-disruptive upgrade (NDU) installation that is part of the underlying implementation will continue. The Wizard confirms the cancellation even though the NDU operation is ongoing and cannot be interrupted. If you retry the ODFU Wizard and it attempts to perform the underlying NDU operation again, it may result in an error saying that a package is already installed. The error is an accurate reflection of what is happening on the array.
Workaround: Wait until the NDU operation that is part of the ODFU Wizard's activities has actually completed before retrying the Wizard. This can be determined by running the "naviseccli ndu -status" command and confirming the status is "Status: Operation completed successfully." If so, retry the ODFU Wizard.
Exists in versions: All 5.33 versions.

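A minimal sketch of that status check, run from a management host; the SP address and any credentials are placeholders:

    # Query the status of the in-progress NDU operation (either SP can be targeted)
    naviseccli -h <SP_IP_address> ndu -status
    # Retry the ODFU Wizard only once the output reports:
    #   Status: Operation completed successfully.
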

VNX Block OE (Platforms: All; Severity: Medium; Frequency: Always under specific circumstances; Tracking: 558151)
Description: When a module is removed from an enclosure, approximately 3 seconds elapse before the removal detection occurs.
Symptom: It takes about 3 seconds for the drive or the enclosure to be reported as removed after a drive pull or an enclosure pull. In other words, there is a 3-second delay between the time the drive is rendered out of service and the time the drive is detected as removed.
Workaround: No workaround.
Exists in versions: All 5.33 versions.

VNX Block OE (Platforms: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600; Severity: Medium; Frequency: Always under specific circumstances; Tracking: 554516)
Description: On the battery backup unit (BBU), the marker LED does not get set for certain fault conditions.
Symptom: On the battery backup unit, in certain battery failure cases, the fault LED (located on the battery backup unit) will not assert. In most cases, the firmware will assert the LED, but if the BBU fails due to specific self-test faults, or the battery backup unit fails to charge fully after 16 hours, the fault LED will not be set.
Workaround: No workaround.
Exists in versions: All 5.33 versions.

VNX Block OE (Platforms: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, VNX8000; Severity: Low; Frequency: Infrequently under specific circumstances; Tracking: 564282)
Description: The "Dirty Cache Pages (MB):" value displayed by the Unisphere CLI after issuing the "cache -sp -info" command may be inexact.
Symptom: The Dirty Cache Pages (MB) message displays an estimate that may be higher than the actual value for short time periods, especially while the SP Cache is in the Disabling state.
Workaround: No workaround.
Exists in versions: All 5.33 versions.

VNX Block OE (Platforms: All; Severity: Low; Tracking: 561589)
Description: A Windows 2012 server may unexpectedly reboot.
Symptom: A Windows 2012 server reboots unexpectedly when the specified threshold of 70% is crossed on the storage pool. This event only happens with native Windows MPIO and when no multipathing is enabled on the server.
Workaround: No workaround.
Exists in versions: All 5.33 versions.


VNX Block OE (Platforms: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, VNX8000; Severity: Low; Frequency: Frequently under specific circumstances; Tracking: 555603)
Description: A Background Verify operation may not complete while a RAID type 10 LUN is initializing after a bind operation.
Symptom: A Background Verify operation may not complete as expected; it is not normal to issue a Background Verify operation immediately after a bind operation.
Workaround: Reissue the Background Verify operation after the initialization is complete.
Exists in versions: All 5.33 versions.

VNX Block OE, Compression (Platforms: All; Severity: Low; Frequency: Always under specific circumstances; Tracking: 549338)
Description: A decompressing LUN cannot be destroyed.
Symptom: Compression uses LUN Migrator technology for decompression, and destroying a LUN under this condition is not allowed.
Workaround: Enable compression on the LUN before destroying it, or wait for decompression to complete.
Exists in versions: All 5.33 versions.

VNX Block OE, Compression (Platforms: All; Severity: Low; Frequency: Infrequently under specific circumstances; Tracking: 543919)
Description: A LUN shrink operation on a LUN with compression enabled takes a long time.
Symptom: Compression has internal logic to iterate many times for the shrink operation to properly clear the data.
Workaround: No workaround.
Exists in versions: All 5.33 versions.

VNX Block OE, Virtual Provisioning (Platforms: All; Severity: Medium; Frequency: Always during a specific event; Tracking: 556191)
Description: In the case of single-fault disk failures, the pool LUN state is reported by the non-owning SP as ready when the LUNs are actually faulted.
Symptom: On an idle array (where no I/Os are running), in the case of single-fault disk failures (RAID protection is still intact), the pool LUNs on the non-owning SP show as ready when they should show as faulted like the LUNs on the owning SP.
Workaround: If I/Os are running on the array, or even a single write is done to the LUN right after the disk fault occurred, the situation will correct itself after two minutes and both SPs will report the pool LUNs as faulted.
Exists in versions: All 5.33 versions.

VNX Block OE, Virtual Provisioning (Platforms: All; Severity: Medium; Frequency: Always during a specific event; Tracking: 539153)
Description: The Windows Server 2012 storage pool low space warning shows the incorrect used and available capacity of the pool.
Symptom: When the storage pool low space warning threshold is crossed, the event logged on a Windows 2012 server does not show the correct used and available capacity of the pool.
Workaround: This is a reporting issue with the Windows Server event log. The correct used and available capacity are displayed in the Unisphere UI.
Exists in versions: All 5.33 versions.

VNX Block OE, VNX Snapshots (Platforms: All; Severity: Medium; Frequency: Rarely under a rare set of circumstances; Tracking: 481887)
Description: A LUN -destroy operation fails with a "snapshots exist" error if the LUN is destroyed immediately after destroying the snapshot(s) associated with it.
Symptom: Snapshot destroy is an asynchronous operation. If a LUN is destroyed immediately after destroying its snapshots, there is a possibility of a snapshot being in the destroying state, which prevents the LUN from being destroyed.
Workaround: Wait for the snapshots to be destroyed, or include the -destroySnapshots switch while destroying the LUN.
Exists in versions: All 5.33 versions.


VNX File OE (Platform: VNX for File; Severity: High; Tracking: 772096)
Description: A Preserved RepV2 restore failed with the error "13422034976: Internal communication error".
Symptom: The following error is displayed: 13422034976: Internal communication error...
Workaround: After the apl_task_mgr has automatically restarted, retry the operation.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155, 8.1.8.132

VNX File OE (Platforms: All; Severity: High; Tracking: 661800)
Description: After disabling deduplication on a LUN, it can take longer than expected to enable deduplication on the same LUN again.
Symptom: When attempting to enable or disable deduplication on a LUN while it is disengaging from its deduplication destination, the system returns the following error:
Could not set properties:(Deduplication: SP A: Cannot enable/disable Deduplication on the LUN %x which is not ready. If problems persist please gather SP Collects and contact your service provider. (0x716a841b)).
Workaround: Wait for the enable or disable deduplication operation to complete, and then try again.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96

VNX File OE (Platforms: All; Severity: High; Tracking: 763988)
Description: During a failover, if the failed source system is not stable during the operation, IP address conflicts can occur.
Symptom: IP address conflicts after failover.
Workaround: Disconnect the source interfaces from the network.

VNX File OE (Platforms: All; Severity: High; Tracking: 758202)
Description: When there are several simultaneous active RepV2 sessions running, an apl_task_mgr bugcheck can occur.
Symptom: An error such as the following is generated:
repv2_tsys_fs_16_1,Failed, Error 13432061960: Repv2 session repv2_tsys_fs_16_1 on remote system ID=0 cannot be created with the response: Operation task id=196490 on Secondary-VNX8000 at first internal checkpoint mount state
Workaround: Retry the failed command after the apl_task_mgr process automatically restarts.

VNX File OE (Platform: VNX for File; Severity: Medium; Frequency: Rarely; Tracking: 838196)
Description: The latest home directory for the VNX administration account is not always created on the standby Control Station. When the standby Control Station becomes the primary Control Station, this VNX administration account does not have its home directory, preventing some administrative functions.
Symptom: Some USM operations (for example: System Verification Wizard, Registration Wizard, View System Config Report) fail, with the USM log file showing the error: Could not chdir to home directory.
Workaround: Create the missing home directory (/home/sysadmin<n>) for the administration account with the command:
    # mkdir /home/sysadmin<n> && chmod 700 /home/sysadmin<n>; chown sysadmin<n>:nasadmin /home/sysadmin<n>
This command must be run on the Control Station that is missing the home directory, and that Control Station must be configured as the primary Control Station.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184

VNX File OE, NDMP (Platform: VNX for File; Severity: Medium; Frequency: Always; Tracking: 836772)
Description: An NDMP restore operation does not overwrite a symlink if a symlink with the same name already exists on the target file system.
Symptom: During an NDMP restore, an error similar to the following occurs:
42619:nsrndmp_recover: NDMP Service Warning: Cannot restore soft link {link_name} File exists.
Workaround: Delete the symlink on the target file system before performing the NDMP restore.
Exists in versions: All 8.1 versions

VNX File OE, System Management (Platform: VNX for File; Severity: Medium; Frequency: Rarely under a specific set of circumstances; Tracking: 843948)
Description: CLI commands that require using the System Management daemon fail.
Symptom: CLI commands that require the System Management daemon fail with an error similar to the following:
Error 13958709251: For task Query control stations, an invalid list was returned.
Workaround: Restart the System Management daemon.
Exists in versions: All 8.1 versions

VNX File OE, System Management (Platform: VNX for File; Severity: Medium; Frequency: Always under a specific set of circumstances; Tracking: 841438)
Description: An out of space indication or event occurred while restoring SnapSure checkpoints.
Symptom: After a "server_mount server_2 -restore -all" restore operation, some of the related NFS pathnames or CIFS shares were no longer available to client systems. Upon investigation using the "server_export ALL" command, it was found that the unavailable NFS pathnames and CIFS shares were no longer defined on the VNX as exported file system mount points.
Workaround: Use the appropriate server_export command to export each missing NFS pathname and each missing CIFS share. For example:
    # server_export vdm1 -Protocol nfs -option anon=0 /fs1_ckpt1_writeable1
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184

VNX File OE (Platform: VNX for File; Severity: Medium; Tracking: 802648)
Description: After a RepV2 operation such as failover/reverse, an interface attached to the source Virtual Data Mover (VDM) and replicated to the target VDM was not created on the target Data Mover.
Symptom: The VDM replication failed due to a source-attached interface that had not been synchronized to the destination.
Workaround: Create the missing interface manually so that all interfaces attached to the VDM exist on both the source and the destination.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155

VNX File OE (Platforms: All; Severity: Medium; Tracking: 701819)
Description: After a cache lost event, running Storage Pool recovery failed, and the Storage Pool was inaccessible.
Symptom: Storage pool resources are temporarily unavailable.
Workaround: Manually fix the corruption and run Pool Recovery again.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96


VNX File OE (Platforms: All; Severity: Medium; Tracking: 661888)
Description: When a VNX Storage Processor (SP) reboot is initiated, the SP can stop booting because of a POST error. The SP will appear to be faulted by its peer Storage Processor.
Symptom: Connecting to the faulted SP with a console server or terminal server displays a message such as ErrorCode: 0x00000310.
Workaround: Manually reset the faulted Storage Processor.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96

VNX File OE (Platforms: All; Severity: Medium; Tracking: 564012)
Description: The following error occurred when the user renamed a mapped pool on the backend: com.emc.cs.symmpoller.AgentException: The specified object does not exist.
Symptom: The pool name on the Control Station (CS) is different from the pool name on the backend. This error results when a mapped pool is renamed on the backend, but the nas_diskmark command is not run on the Control Station to synchronize the pool names.
Workaround: Run the "nas_diskmark -m -a" command on the CS after performing any of the following operations related to the ~filestorage group:
• Add/remove a LUN.
• Turn on, turn off, pause, resume, or modify Compression on a LUN.
• Modify the Tiering Policy or Initial Tier of a LUN.
• Create/destroy a mirror (both sync and async) on a LUN.
Exists in versions: All 8.1 releases.

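A minimal sketch of that resynchronization step, run on the Control Station; the long-form flags are assumed here to be the equivalents of the abbreviated -m -a shown above:

    # Re-mark all storage so Control Station pool names match the backend
    nas_diskmark -mark -all
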
VNX File OE, CIFS (Platforms: All; Severity: Medium; Tracking: 566436)
Description: A VNX CIFS server cannot be managed by Windows 2012 Server Manager.
Symptom: When a VNX CIFS server is added to a Windows 2012 Server Manager, the following error is displayed: cannot manage the operating system of the target computer.
Workaround: Microsoft Windows 2012 Server Manager requires the Microsoft Agent on the server; EMC does not support third-party installation on its server. To access information on a server managed by a VNX storage system, you must use 'Computer Management' instead of 'Server Manager'.
Exists in versions: All 8.1 releases.


VNX File OE, ESRS (Control Station) (Platforms: All; Severity: Critical; Tracking: 557982)
Description: Unisphere failed to access the Control Station.
Symptom: When attempting to connect to the array by using Unisphere and the ESRS Service Link, a Certificate Error may occur.
Workaround: Close or terminate the existing session using the Terminate button. Alternatively, go to Service Link > View All > End (for all existing sessions) and start a new session.
Exists in versions: All 8.1 releases.

VNX File OE, MetroSync Manager, Synchronous Replication (Platforms: All; Severity: Medium; Tracking: 769649)
Description: When the VDM MetroSync Manager service is stopped, the replication session status shown in VDM MetroSync Manager does not update in a timely manner.
Workaround: Perform a 'check status' operation to obtain the current sync replication task status.

VNX File OE, Migration (Platforms: All; Severity: Critical; Tracking: 565968)
Description: The Usermapper service was enabled on the destination while it was disabled on the source.
Symptom: On a global configuration migration, if usermapper is disabled on the source cabinet, a warning is reported and the user is asked to manually disable the target usermapper service.
Workaround: Manually disable the usermapper service on the target.
Exists in versions: All 8.1 releases.

VNX File OE, Migration (Platforms: All; Severity: Low; Tracking: 565886)
Description: Migration information does not accurately reflect the task information and percent complete on the source system.
Symptom: The UI provides inaccurate progress information on migration tasks.
Workaround: The CLI and log files provide more accurate information.
Exists in versions: All 8.1 releases.

VNX File OE, Migration (Platforms: All; Severity: Low; Tracking: 565259)
Description: "nas_migrate -i <migration name>" printed other migrations' warning messages.
Symptom: Warning messages relative to other migration sessions running on the same box could appear when information is requested on a particular session.
Workaround: These messages are only warnings and will not impact the execution of the current session.
Exists in versions: All 8.1 releases.

VNX File OE, RecoverPoint FS (Platforms: All; Severity: Critical; Tracking: 570980)
Description: The RecoverPoint init command failed while discovering storage on a source site.
Symptom: Running "/nas/sbin/nas_rp -cabinetdr -init <cel_name>" failed with the following error:
Error 5008: Error discovering storage on src site.
Workaround: This issue rarely occurs. Rerun /nas/sbin/nas_rp -cabinetdr -init <cel_name>.
Exists in versions: All 8.1 releases.


VNX File OE, RecoverPoint FS (Platforms: All; Severity: Medium; Tracking: 644251)
Description: RecoverPoint failback fails.
Symptom: When running a RecoverPoint failback, the following error appears, then the operation fails:
13421849338: File system FS1 is made up of disk volumes with inconsistent disk types: d7 (dense UNSET),d8 (dense UNSET),d9 (dense UNSET),d10 (dense UNSET),d11 (dense Mirrored_performance),d12 (dense UNSET),d13 (dense Mirrored_performance),d14 (dense UNSET),d15 (dense UNSET),d16 (dense Mirrored_performance)
Workaround: No workaround.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72

VNX File OE, Replication (Platforms: All; Severity: Critical; Tracking: 586433)
Description: RPA reboot regulation and detach from cluster.
Symptom: The replication replica cluster will become unstable and the RPA will reboot until the affected CG is expelled or the Journal Volume LUN configuration is fixed.
Workaround: Disabling the group will stabilize the RP system.
Exists in versions: All 8.1 releases.

VNX File OE, Replication (Platforms: All; Severity: Critical; Tracking: 568516)
Description: Replication switchover stops after the source Data Mover fails over.
Symptom: The replication session fails; no transfer will continue although the session status shows fine. A message such as the following appears in /nas/log/sys_log.txt:
CS_PLATFORM:SVFS:ERROR:21:::::Slot4:1372057453: /nas/sbin/rootnas_fs -x root_rep_ckpt_978_56827_1 QOSsize=20000M Error 3024: There are not enough free disks available to satisfy the request.
Workaround: Add disk space to the SavVol pool.
Exists in versions: All 8.1 releases.

VNX File OE, Replication (Platforms: All; Severity: Critical; Tracking: 558919)
Description: RecoverPoint File: nas_rp -failback fails with "Error 13421904947: Unable to communicate with Control Station on source site."
Symptom: The primary system has not completely rebooted, resulting in the error.
Workaround: Wait for the primary Control Station to complete its reboot operation before running nas_rp -failback.
Exists in versions: All 8.1 releases.


VNX File OE, Synchronous Replication (Platforms: File and Unified; Severity: High; Tracking: 768954)
Description: When a user creates a nas_syncrep session for an interface attached to a Virtual Data Mover (VDM), the network mapping is not displayed when running the nas_syncrep -info command.
Symptom: Network interface information is not displayed in the nas_syncrep -info details.
Workaround: Check the device mapping by running the following File CLI commands:
1. On the active system, run "nas_syncrep -i <sync_id>" (in this example, the VDM ID is 155124).
2. Check the interface on the VDM by running "nas_server -i -v vdm_155124".
3. Then find the interface on the destination system by running "server_ifconfig server_n -all"; the output begins with the Data Mover name, for example:
server_2:

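The same lookup, sketched as a command sequence run from the Control Stations; the session ID and the VDM ID vdm_155124 are the example values from this entry and would differ on another system:

    # On the active system: show the sync replication session details
    nas_syncrep -i <sync_id>
    # List the interfaces attached to the VDM named in the session output
    nas_server -i -v vdm_155124
    # On the destination system: list all interfaces configured on the Data Mover
    server_ifconfig server_2 -all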

VNX File OE, Platform, Synchronous Replication (Platforms: File and Unified; Severity: High; Tracking: 737780)
Description: If you delete a synchronous replication session locally on the standby side by using the nas_syncrep -delete command and the nas_syncrep -delete -local command, the session fails.
Symptom: The following errors are returned:
Error 13431997103: The mirror <mirror_name> is half removed.
Error 13431997108: Error occur when remove consistency group.
Workaround: Power down the remote site and use the nas_syncrep -delete -local command to delete the sync replication session locally at the active site.

VNX File OE, Platform, Synchronous Replication (Platforms: File and Unified; Severity: Medium; Tracking: 730159)
Description: When creating a synchronous replication session, the process fails because storage is "locked" on the remote end.
Symptom: The following error is returned:
Error 13431997070: Failed to synchronize local storage on remote site when creating sync replication session.
Workaround: Wait and then retry the operation.

VNX File OE, Platform, Synchronous Replication (Platforms: File and Unified; Severity: Low; Tracking: 751082)
Description: When reversing or failing over a sync replication session, if any active or inactive interface at the remote end uses the same IP address as the source VDM, the reverse/failover fails with the following error:
Error 13431996668: IP address conflict of network interface <interface_name> on Data Mover <server_name>.
Symptom: If the interface at the remote side is active, the error is expected. However, if the interface at the remote side is down, there is no impact on reverse/failover.
Workaround: At the remote side, delete the interfaces that use the same IP address as the source VDM.

VNX File OE, System Management (Platforms: All; Severity: Critical; Tracking: 563493)
Description: Failure to create a file system from a NAS pool. The following error message displays:
File system cannot be built because it would contain space from multiple mapped pools or a mix of both mapped pool and RAID group based pool(s).
Symptom: Some LUNs were bound from a RAID group. The user ran diskmark and then created file systems on them. Some of these LUNs were moved to another thin pool via LUN compression or migration. The new file system could not be created on these LUNs and diskmark failed.
Workaround: Avoid operations that lead to LUN migration when file systems are built on the LUNs. Verify that the disk type of all disk volumes under the file system is from the same mapped pool or compatible RAID group based pool. For file system extension, choose the pool matching the current storage of the file system.
Exists in versions: All 8.1 releases.

VNX File OE, System Management (Platforms: All; Severity: Critical; Tracking: 558325)
Description: During an upgrade, if there are heavy writes to a thin file system, operations can fail with an out of space error.
Symptom: The out of space error may occur during an upgrade when the system encounters write operations on file systems. The file system auto-extension cannot complete because the NAS service is down during the upgrade.
Workaround: Limit or suspend write operations during upgrades.
Exists in versions: All 8.1 releases.

VNX File OE, System Management (Platforms: All; Severity: Medium; Tracking: 648929)
Description: The CS cannot be added to the master domain. File functions in Unisphere were disabled as a result.
Symptom: If the hostname was not mapped to a public operating system IP, the CS could not be added to the master domain. As a result, the File function in the Unisphere GUI was disabled if the user logged in with the domain user account.
Workaround: Ensure there is a public CS IP entry in /etc/hosts. If there is not, add one.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72


VNX File OE, System Management (Platforms: All; Severity: Medium; Tracking: 645204)
Description: SSL certificate weak public key strength.
Symptom: SSL Certificate Weak Public Key Strength Found Value: bits [=] 1024.
Workaround: If the current certificate uses a weak key, a new certificate can be generated by executing "nas_config -ssl" on the Control Station in order to generate a new key with 2048-bit length.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72

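A minimal sketch of that regeneration, run as root on the Control Station; per this entry it replaces the 1024-bit key with a 2048-bit one, and the command may prompt for confirmation before replacing the certificate:

    # Regenerate the Control Station SSL certificate with a stronger key
    nas_config -ssl
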
VNX File OE, System Management (Platforms: All; Severity: Medium; Tracking: 624401)
Description: A deduplication LUN is left in an inconsistent state when a Storage Processor (SP) reboots while an enabling/disabling operation is on its way to completion.
Symptom: Deduplication on a private LUN hangs at enabling for weeks.
Workaround: Contact your service provider to recover the LUNs to a normal state.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72


VNX File OE, System Management (Platforms: All; Severity: Medium; Tracking: 577681)
Description: An error posts during rescan stating that root_disk(s) are unavailable when the limit is set on the SG that contains the control volumes.
Symptom: An error occurs during rescan after setting Host I/O Limits (previously called Front-End I/O Quota) on a UVMAX SG that contains the control volumes.
Workaround: No workaround.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72

VNX File OE, System Management (Platforms: All; Severity: Medium; Tracking: 546410)
Description: Error 5007: server_5: /nas/sbin/repquota -u /nas/server/server_4/quotas_fs_file /nas/server/server_4/quotas_uid_file .etc/rpt_file .etc/rpt_file.sids : : exec failed
Symptom: Running nas_quotas occasionally produces the following error:
nas_quotas CMD failed: /nas/bin/nas_quotas -user -report -fs pbrfs_0005 501 Error 5007: server_5: /nas/sbin/repquota -u /nas/server/server_4/quotas_fs_file /nas/server/server_4/quotas_uid_file .etc/rpt_file .etc/rpt_file.sids : : exec failed
Workaround: If the error occurs, retry the operation.
Exists in versions: All 8.1 releases.

VNX File OE, System Management (Platforms: All; Severity: Medium; Tracking: 540001)
Description: The battery properties in the inventory page do not display input power.
Symptom: Users are unable to obtain detailed power information when using the GUI or CLI.
Workaround: No workaround.
Exists in versions: All 8.1 releases.

VNX File OE, System Management (Platforms: All; Severity: Low; Tracking: 646174)
Description: Unisphere incorrectly displays alerts that the VNX Block OE and VNX File OE versions are not compatible.
Symptom: After a compatibility problem was fixed, Unisphere still displayed alerts that the VNX Block OE and VNX File OE versions are not compatible.
Workaround: Manually delete the alert from the GUI.
Exists in versions: 8.1.9.236, 8.1.9.232, 8.1.9.231, 8.1.9.217, 8.1.9.211, 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72


VNX File OE, System Management (Platforms: All; Severity: Low; Tracking: 569073)
Description: SavVol auto-extension does not use up all space left in the pool if the pool is smaller than 20 GB.
Symptom: SnapSure SavVol auto-extend asks for a certain amount of storage (for example, 20 GB), which is calculated from storage pool properties (such as HWM). If the storage pool has less storage than that amount (such as 19 GB), the SavVol auto-extend will fail and some existing checkpoints may become inactive. The following alert displays in Unisphere:
/nas/sbin/rootnas_fs -x <ckpt_name> QOSsize=20G Error 12010: The requested space 20000 is not available from the pool <pool_name>.
There will always be a small amount of storage (for example, 19 GB) that cannot be fully consumed by SavVol auto-extend, but which can be consumed by other operations (such as Production File System auto-extend).
Workaround: Add more storage to the storage pool used by the SnapSure SavVol (make sure at least 20 GB is free) and the SavVol auto-extend will succeed on the next attempt.
Exists in versions: All 8.1 releases.

VNX File OE, System Management (Platforms: All; Severity: Low; Tracking: 573536)
Description: Storage Pools for File is empty in the GUI even though multiple pools exist.
Symptom: When running nas_diskmark on a system with a massive number of LUNs, there is a very low possibility that, although diskmark succeeds, the GUI does not receive data about its Storage Pools for File. The Storage Pools for File page shows no change.
Workaround: Press the "Rescan Storage System" button or manually run nas_diskmark again.
Exists in versions: All 8.1 releases.

VNX File OE, System Management (Platforms: All; Severity: Low; Tracking: 532380)
Description: The following error can occur when using naviseccli.bin after a fresh installation:
./naviseccli.bin: error while loading shared libraries: libNonFipsUtilities.so: cannot open shared object file: No such file or directory
Symptom: The file /usr/lib/libNonFipsUtilities.so is missing from the Control Station after an Express installation.
Workaround: Do not use naviseccli.bin. Use naviseccli instead.
Exists in versions: All 8.1 releases.


VNX File OE, Unisphere (Platforms: All; Severity: Low; Tracking: 573444)
Description: After a File install and VIA execution, the initial Unisphere login to a Unified system fails with a Connection Error. Subsequent login attempts succeed.
Symptom: When connecting to a VNX via a CS IP where the CS's certificate cannot be verified, a web browser generates a popup, "Do you wish to continue?" It also provides a checkbox, "Always trust content from this publisher". If it is unchecked, the login screen will not display and there will be communication errors with the CS. Note that the Java popup warning does not appear until you click Show Options.
Workaround: Check the checkbox "Always trust content from this publisher".
Exists in versions: All 8.1 releases.

VNX File OE, VNX Installation Assistant (VIA) (Platforms: All; Severity: Medium; Frequency: Rarely under specific circumstances; Tracking: 593647)
Description: The VIA window closes after clicking Next from the Welcome screen.
Symptom: After launching VIA, the Welcome screen appears. If Next is clicked, the VIA application closes.
Workaround: Run the Windows ipconfig command to determine if the system has more than six connections. If so, disable the unused connections to make the total number no more than six and retry the VIA operation.
Exists in versions: All 8.1 releases.

VNX File OE, VNX Installation Assistant (VIA) (Platforms: All; Severity: Medium; Tracking: 566667)
Description: VIA fails to change the sysadmin password on the Apply page.
Symptom: An error message displays continuously when trying to change the password on the storage system, and the retry fails.
Workaround: Repair any network connectivity issues on the storage system. Wait 10 minutes, then click Retry.
Exists in versions: All 8.1 releases

Unisphere (Platforms: All; Severity: Critical; Frequency: Occasionally; Tracking: 616169)
Description: When creating a storage pool of "MAX" size while I/O is running to other LUNs in that pool, Unisphere can return an error that states that the requested LUN size is too large.
Symptom: Since I/O is running, the amount of available space can change from the time the request to create the pool is initiated until the time it is processed by the system.
Workaround: Try to create the storage pool again, or select a specific size for the storage pool.
Exists in versions: 1.3.9.1.0236-1, 1.3.9.1.0231, 1.3.9.1.0217-1, 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.3.1.0072-1, 1.3.2.1.0051


Unisphere (Platforms: All; Severity: Critical; Frequency: Occasionally; Tracking: 612365)
Description: When managing background tasks in Unisphere, the Abort and Delete buttons appear disabled in the Background Tasks tab.
Symptom: Users must have admin privileges to abort or delete background tasks.
Workaround: Ensure that anyone who wants to delete or abort a background task has admin privileges in Unisphere.
Exists in versions: 1.3.9.1.0236-1, 1.3.9.1.0231, 1.3.9.1.0217-1, 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1, 1.3.2.1.0051

Unisphere (Platforms: All; Severity: Critical; Frequency: Always under a specific set of circumstances; Tracking: 611766)
Description: When a VNX system is assigned a hostname composed strictly of numeric characters, the system will not start and users cannot access the system through Unisphere.
Workaround: Change the VNX system hostname so that the hostname contains one or more non-numeric characters.
Exists in versions: 1.3.9.1.0236-1, 1.3.9.1.0231, 1.3.9.1.0217-1, 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1, 1.3.2.1.0051, 1.3.1.1.0033

Unisphere (Platforms: All; Severity: Critical; Frequency: Occasionally; Tracking: 606043)
Description: When changing the Recommended Ratio of Keep Unused setting from 30 to 60, insufficient disk space may be reserved for hot spares, which may increase the risk of failure.
Workaround: Use the VNX for Block CLI hotsparepolicy -set command to overwrite the recommended value with the desired value (for example, overwriting 60 with 30). For example:
    hotsparepolicy -set <Policy ID> -keep1unusedper 30 -o
Exists in versions: 1.3.9.1.0236-1, 1.3.9.1.0231, 1.3.9.1.0217-1, 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1, 1.3.2.1.0051, 1.3.1.1.0033


Unisphere (Platforms: All; Severity: Medium; Tracking: 741477)
Description: The progress of a pool or LUN operation is frozen. On the other SP, the display is as expected.
Symptom: The progress of a pool's deduplication operation hangs.
Workaround: Restart Unisphere by using "net stop k10governor" and "net start k10governor".
Exists in versions: 1.3.9.1.0236-1, 1.3.9.1.0231, 1.3.9.1.0217-1, 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119

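A sketch of that restart; the assumption here is that the commands are issued at the command prompt of the SP whose display is frozen:

    rem Restart the Unisphere management service on the affected SP
    net stop k10governor
    net start k10governor
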
Unisphere (Platforms: All; Severity: Medium; Tracking: 745895)
Description: Some thin LUNs in a pool do not show the Consumed Capacity in the LUN properties field.
Symptom: It is normal that consumed capacity changes to N/A after deduplication is enabled.
Workaround: Functions as designed.
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119

Unisphere (Platforms: All; Severity: Medium; Tracking: 635311)
Description: When running VNX Unisphere in off-array mode, the Unisphere online help will not load if Chrome is set as the default browser.
Symptom: When Unisphere off-array online help is invoked in Chrome, the browser displays a blank page.
Workaround: Either of the following:
• Use Internet Explorer or Firefox to open Unisphere.
• Prior to accessing Unisphere, open Chrome with the following command: "<Chrome-path>" --allow-file-access-from-files
Exists in versions: 1.3.9.1.0236-1, 1.3.9.1.0231, 1.3.9.1.0217-1, 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096


Unisphere (Platforms: All; Severity: Medium; Tracking: 652886)
Description: The Storage Capacity displayed in the Dashboard section shows File free capacity as smaller than the Free capacity displayed in the Storage Pool table.
Symptom: There is more free space than what is displayed.
Workaround: The displayed free capacity of the storage pool accounts for the File side only. The File free capacity displayed on the Dashboard is from the point of view of the whole system, including File and Block.
Exists in versions: 1.3.9.1.0236-1, 1.3.9.1.0231, 1.3.9.1.0217-1, 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1

Unisphere (Platforms: All; Severity: Medium; Tracking: 651009)
Description: After BBU B is removed from the enclosure, "navi faults -list" fails to report the expected message "Bus 0 Enclosure 0 BBU B: Removed". It only reports the first line of the error, "Bus 0 Enclosure 0: Faulted", and does not report the complete message.
Symptom: Only the first line of the error is reported, "Bus 0 Enclosure 0: Faulted". The complete message is not reported.
Workaround: Use the naviseccli account to log in to <SP IP>/debug, and click Force A Full Poll.
Exists in versions: 1.3.9.1.0236-1, 1.3.9.1.0231, 1.3.9.1.0217-1, 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1


Unisphere (Platforms: All; Severity: Medium; Tracking: 641746)
Description: With PowerPath installed, the user wants to disconnect the storage group from the host, but the host cannot be found.
Symptom: The host could not be found when the user wanted to disconnect the storage group from the host.
Workaround: There are two workarounds for this issue:
1) Set the DWORD registry value AutoHostRegistration located under the HKLM\System\CurrentControlSet\Control\EmcPowerPath registry key to 0. Then, reboot the host.
2) Uninstall PowerPath. Then, install PowerPath using the <Setup.exe> /v"AUTO_HOST_REGISTRATION=0" option.
Exists in versions: 1.3.9.1.0236-1, 1.3.9.1.0231, 1.3.9.1.0217-1, 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1

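A hedged sketch of the first workaround using the standard Windows reg utility; the key and value are the ones named in this entry, and the host still needs a reboot afterwards:

    rem Disable PowerPath automatic host registration
    reg add "HKLM\System\CurrentControlSet\Control\EmcPowerPath" /v AutoHostRegistration /t REG_DWORD /d 0 /f
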
Unisphere (Platforms: All; Severity: Medium; Tracking: 567075)
Description: In the Unisphere UI, an asynchronous mirror with a fractured secondary image and no primary image cannot be deleted.
Symptom: When an asynchronous mirror only has a fractured secondary image but no primary image, the secondary image cannot be deleted from the Unisphere UI. After selecting the secondary image and clicking the Delete button, nothing will happen and there will be no failure message.
Workaround: There are two workarounds for this issue:
1. Select the asynchronous mirror and click Properties to open the Remote Mirror Properties dialog. Go to the Secondary Image tab and click Force Delete to force-destroy this secondary image.
2. Use the CLI to destroy this secondary image by using the -force switch.
Exists in versions: All 1.3 versions.

Unisphere (Platforms: All; Severity: Medium; Tracking: 564905)
Description: A faulted LUN does not display in the output of the faults -list command unless it is associated with a system feature.
Symptom: A faulted LUN will not be shown in the faults -list output unless it is associated with a system feature such as a storage group, mirror, snap, and so on.
Workaround: No workaround.
Exists in versions: All 1.3 versions.


Unisphere (Platforms: All; Severity: Medium; Tracking: 563384)
Description: After a bus cable is moved from one port to another, faults may be seen on both the original bus and the new bus.
Symptom: If a bus cable is moved from one port to another, there will be faults from both the original bus and the new bus. This will remain even after moving the bus cable back to the first port.
Workaround: This is a display issue only. Restart the management server from the setup page or reboot the SP.
Exists in versions: All 1.3 versions.

Unisphere (Platforms: All; Severity: Medium; Tracking: 559676)
Description: A host appears registered with unexpected information and may be unavailable.
Symptom: After attaching a cloned host to the array, the host might be registered with unexpected information, such as host IP, host ID, or host name. This may be inherited from the image that was used to create the host. Without the right host information, the new host might be unavailable on the array.
Workaround: To create a host from a cloned image, do not include a Host Agent in the image. Alternatively, before attaching the new host to the array, uninstall the Host Agent with all the configuration information removed and install a new Host Agent from scratch. Refer to KnowledgeBase articles emc66921 and emc63749 for details about how to clean up the Host Agent inherited from the image.
Exists in versions: All 1.3 versions.

Unisphere (Platforms: Windows Server 2008; Severity: Medium; Tracking: 526482)
Description: Two Windows 2008 iSCSI hosts connected at the same time do not display under Hosts > Host List.
Symptom: When running Unisphere, after two Windows 2008 iSCSI hosts are connected to an array and rebooted, although they are shown as logged in and registered under Hosts > Initiators, the following warning is shown in the status column: The initiator is not fully connected to the Storage System. As a result, they do not show under Hosts > Host List.
Workaround: Log out of Unisphere and log in again.
Exists in versions: All 1.3 versions.

Unisphere (Platforms: All; Severity: Medium; Tracking: 522202)
Description: An error occurs when pinging/tracing the route to the SP management IP from an iSCSI data port in the Unisphere UI.
Symptom: The following error message displays when pinging/tracing the route to the SP management IP from an iSCSI data port in the Unisphere UI:
Ping/traceroute to peer SP or management workstation from iSCSI port disrupts management communication and is not permitted.
Workaround: Do not ping/trace route the SP management IP from an iSCSI data port; it is not recommended.
Exists in versions: All 1.3 versions.


Unisphere (Platforms: All; Severity: Low; Tracking: 574138)
Description: More LUNs cannot be created in the Storage > LUNs table by clicking Create when either the pool LUNs or RAID group LUNs have reached the limit.
Symptom: In the Storage > LUNs table, more LUNs cannot be created by clicking Create when either the pool LUNs or RAID group LUNs reach the limit. A popup warning message appears and then the Create LUN dialog closes.
Workaround: In the case where pool LUNs reach the limit but RAID group LUNs do not, create RAID group LUNs in Storage > Storage Pools > RAID Groups. In the case where RAID group LUNs reach the limit but pool LUNs do not, create pool LUNs in Storage > Storage Pools > Pools.
Exists in versions: All 1.3 versions.

Unisphere (Platforms: All; Severity: Low; Tracking: 551081)
Description: It can take multiple attempts to access the Control Station.
Symptom: When a user launches Unisphere, the applet may hang. The user has to try multiple times before launching Unisphere successfully.
Workaround: Increase the amount of RAM in the computer running Unisphere and disable on-access scanning of anti-virus applications when launching Unisphere for the first time.
Exists in versions: All 1.3 versions.

Unisphere (Platforms: All; Severity: Low; Tracking: 532907)
Description: A timeout error occurs when deleting LUNs at the same time as binding a large number of LUNs to a RAID group.
Symptom: In rare cases, binding a large number of LUNs to a RAID group using naviseccli at nearly the same time as deleting LUNs in the UI will result in a timeout error, although the LUNs are deleted successfully.
Workaround: No workaround. LUNs are deleted successfully, despite this timeout error.
Exists in versions: All 1.3 versions.

Unisphere (Platforms: All; Severity: Low; Tracking: 490729)
Description: LUN compression fails if the system is configured with the maximum number of LUNs.
Symptom: If the system is configured with the maximum number of LUNs, enabling LUN compression will fail.
Workaround: Delete another LUN, then compress the target LUN.
Exists in versions: All 1.3 versions.

Unisphere (Platforms: All; Severity: Low; Tracking: 488940)
Description: After issuing the getlog command where there are many event log entries, CIMOM may restart and thus stop service temporarily.
Symptom: If there are many log entries in the event log, the getlog command may output a lot of information, which can take a long time. If the getlog command takes over half an hour, a dump will be generated, and CIMOM will stop service for about two minutes due to CIMOM's restarting.
Workaround: There are two steps to this workaround:
1. Ensure the network bandwidth between the host that issued the getlog command and the target array is sufficient.
2. Redirect the command output to a file instead of outputting to the screen. For example:
    naviseccli -h xx.xx.xx.x getlog > naviloginfo.txt
Exists in versions: All 1.3 versions.


Unisphere Analyzer (Platforms: All; Severity: Critical; Tracking: 572930)
Description: Unisphere may crash when using Analyzer on a system with a large configuration (hundreds of LUNs).
Symptom: For systems with large configurations, the Unisphere UI may crash if there are hundreds of LUNs.
Workaround: Increase Java memory:
1. Go to the Start menu in Windows.
2. Select Settings > Control Panel.
3. Double-click the Java Plugin icon.
4. Select the Advanced tab.
5. In the Java Runtime Parameters, type -Xmx1024m.
Exists in versions: All 1.3 versions.

Unisphere Analyzer (Platforms: All; Severity: Medium; Tracking: 562742)
Description: The deduplication feature-level state of the array is inaccurate in both the UI and the CLI.
Symptom: The deduplication feature state of the array is inaccurate when opening the archive dump file in the UI or CLI. In the UI, the deduplication tab of the specific mapped LUN should display the deduplication feature-level state, but it actually displays the state of the pool in which the mapped LUN is contained. In the CLI, the deduplication feature state is incorrect in the dump for the LUNs configuration: it shows the states of the LUN instead of the feature-level state.
Workaround: No workaround.
Exists in versions: All 1.3 versions.

Unisphere Analyzer (Platforms: All; Severity: Low; Tracking: 559104)
Description: Analyzer does not work when there are two open sessions for the same system.
Symptom: Unisphere Real-Time Analyzer does not work when there are two sessions for the same host system.
Workaround: Close both sessions and start only one Unisphere Analyzer UI session.
Exists in versions: All 1.3 versions.

Unisphere, CLI (Platforms: File and Unified; Severity: Medium; Tracking: 772431)
Description: Before performing a non-disruptive upgrade (NDU), when running the ndu -runrules command to check the array status, the command fails and reports that the stats logging process is turned on.
Workaround: Run the setstats -off command to turn off stats logging, and then run the ndu -runrules command again for the NDU.

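One possible form of that sequence, run from a management host; the SP address is a placeholder, and the exact ndu -runrules invocation used by the upgrade tooling may differ:

    # Turn off statistics logging so the NDU pre-check passes
    naviseccli -h <SP_IP_address> setstats -off
    # ...re-run the ndu -runrules check, then perform the NDU...
    # Statistics logging can be re-enabled after the upgrade:
    naviseccli -h <SP_IP_address> setstats -on
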
Unisphere, MetroSync Manager (Platforms: All; Severity: High; Tracking: 773994)
Description: When an SMTP Send Test Email operation fails in MetroSync Manager, there is little information to troubleshoot the cause of the failure.
Workaround: Use tools such as ping, tracert, and telnet to diagnose the network environment and server behavior.

Unisphere Host software, CLI, and Utilities (Platforms: All Linux and Unix platforms; Severity: Critical; Tracking: 572417)
Description: The Linux Server Utility update command failed with the error message: No such host.
Symptom: The Unisphere Server Utility serverutilcli update command fails with the error message:
NAVLInetHostNotFound: No such host...
Workaround: Add the hostname-ipaddress mapping to /etc/hosts.
Exists in versions: All 1.3 versions.

Unisphere Host software, CLI, and Utilities (Platforms: Linux and UNIX platforms; Severity: Critical; Tracking: 547526)
Description: The Unisphere Host Agent cannot be started on an AsianUx host.
Symptom: The Unisphere Host Agent cannot be started on an AsianUx host. An error message such as the following appears in /var/log/agent.log:
mm/dd/yyyy 16:43:56 Agent Main -- Net or File event. Err: Local hostname/address mapping unknown; see installation notes.
Workaround: Add the hostname-ipaddress mapping to /etc/hosts. The Host Agent requires the hostname-ipaddress mapping for a network connection.
Exists in versions: All 1.3 versions.

Unisphere Host software, CLI, and Utilities (Platforms: Windows; Severity: Critical; Tracking: 537021)
Description: When configuring an iSCSI connection, the Server Utility may fail to connect if the wrong network adapter is selected.
Symptom: The Server Utility may fail to connect if the server has several network adapters, not all of them reachable to each other, and the wrong network adapter is selected when configuring an iSCSI connection. The failure message may also take a while to appear.
Workaround: If multiple network adapters are configured on the server, EMC recommends that the default adapter is selected while using the Server Utility to create an iSCSI connection. Alternatively, ensure that the right network adapter is selected; otherwise, the selected subnet may not be able to access the target. Make sure that the selected network adapter can access the target configured on the array.
Exists in versions: All 1.3 versions.

Unisphere Host software, CLI, and Utilities (Platforms: Windows; Severity: Medium; Tracking: 573772)
Description: Removal of an iSNS target is reported by the Server Utility as successful when the operation was actually unsuccessful.
Symptom: Although the Server Utility cannot remove an iSNS target, the iSNS target still can be selected for removal. The remove operation will then be reported as completing successfully, even though it was unsuccessful and the target is still present.
Workaround: No workaround. The Server Utility cannot be used to remove an iSNS target.
Exists in versions: All 1.3 versions.

Unisphere Host software, CLI, and Utilities (Platforms: All; Severity: Low; Frequency: Always under specific circumstances; Tracking: 62974296/641084, 642898)
Description: Naviseccli commands on the Control Station that have a "$" in the username or password fail authentication.
Symptom: Naviseccli commands fail authentication even with a password that is confirmed to work with Unisphere.
Workaround: The naviseccli string that contains a "$" should be enclosed in single quotes (such as 'test$me'), so the entire string will be passed to naviseccli, passing authentication.
Exists in versions: 1.3.9.1.0236-1, 1.3.9.1.0231, 1.3.9.1.0217-1, 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1

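A hedged example of that quoting on the Control Station shell; the credentials, SP address, and getagent command are placeholders, with 'test$me' borrowed from this entry:

    # Single quotes stop the shell from expanding "$me" before naviseccli sees it
    naviseccli -h <SP_IP_address> -User admin -Password 'test$me' -Scope 0 getagent
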
Unisphere Host software, CLI, and Utilities (Platforms: VNX5400; Severity: Low; Tracking: 573776)
Description: Pool creation failed on a VNX5400.
Symptom: When attempting to create a storage pool, the following pool status message may be received if the system LUN limits have been reached:
Current Operation: Creating
Current Operation State: Failed
Current Operation Status: Illegal unit number
Current Operation Percent Completed: 24
This message indicates that the system configuration has reached its maximum configuration for FLUs and the pool cannot be created.
Workaround: No workaround. This is expected behavior when the system limits are exceeded. The storage pool can be destroyed.
Exists in versions: All 1.3 versions.

Unisphere Host software, CLI, and Utilities (Platforms: Windows Server 2012; Severity: Low; Tracking: 521220)
Description: The virtual Fibre Channel port configured for a Hyper-V virtual machine (VM) cannot be recognized by the VM.
Symptom: This is an environmental issue caused by the management operating system and the virtual machine being run on different versions of integration services. The management operating system (which runs the Hyper-V role) and virtual machines should run the same version of integration services. Otherwise, some new features of the management operating system might not be supported on the VM.
Workaround: Ensure that the management operating system and virtual machine are running the same version of integration services. For more information, refer to:
Version compatibility of Integration Services: http://technet.microsoft.com/en-us/library/ee207413%28v=WS.10%29.aspx
How to upgrade the Integration Services: http://technet.microsoft.com/en-us/library/ee941103%28v=WS.10%29.aspx
Exists in versions: All 1.3 versions.

Unisphere QoS Manager (Platforms: All; Severity: Critical; Tracking: 562832)
Description: In rare cases, Unisphere Quality of Service Manager may lose connection to the array for two or three minutes.
Symptom: In rare conditions, if the failback policy is enabled, the Unisphere QoS Manager GUI/CLI will lose connection to the array for two or three minutes.
Workaround: Disable the failback policy and reconnect to the array after three minutes.
Exists in versions: All 1.3 versions.

USM (Platforms: All; Severity: Medium; Tracking: 645113)
Description: USM Online Disk Firmware Upgrade (ODFU) failed to report the information about failed disks.
Symptom: The information about failed disks presented to the user is not accurate.
Exists in versions: 1.3.9.1.0236-1, 1.3.9.1.0231, 1.3.9.1.0217-1, 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1

USM (Platforms: All; Severity: Medium; Tracking: 572236)
Description: The USM Online Disk Firmware Upgrade (ODFU) wizard inaccurately reports the disk firmware upgrade status and progress.
Symptom: While watching the status in the USM ODFU wizard, some reports are inaccurate. The completed disks number increases while the Remaining disks and Upgrading disks numbers remain at 0; the progress bar remains at 100% while the upgrade is still in progress; and once the overall status is complete, it is possible that not all disks have been reported.
Workaround: No workaround.
Exists in versions: All 1.3 versions.

USM (Platforms: All; Severity: Medium; Tracking: 558207)
Description: Cannot log into USM when one SP is down.
Symptom: When one SP is down, USM takes several minutes to log into the system.
Workaround: None.
Exists in versions: All 1.3 versions.

USM Platforms: All The software upgrade USM does not support No workaround.
Severity: Medium process hangs after a file upgrades of VNX OE in a Exists in versions:
Tracking: 559742 transfer. proxy configuration. All 1.3 versions.

USM Platforms: CLARiiON An error is presented when The wizard returns an error Java 64 does not support loading
Severity: Medium running the Storage System when run against a CLARiiON 32-bit DLL. Install 32-bit Java.
Tracking: 550578 Verification wizard on a system.
CLARiiON system.
USM Platforms: VNX The Install Software wizard The Install Software wizard Under Tools, click Software
Severity: Medium fails to upgrade VNX Block fails to upgrade the VNX for Maintenance Status to view the
Tracking: 578647 OE. Block OE. An error states that upgrade status. When the upgrade
USM is unable to obtain the finishes, commit the package and
software maintenance status. re-enable statistics, if needed.
Exists in versions:
All 1.3 versions.

Category: USM, Unisphere Off Array Tools
Platforms: All
Severity: Medium
Tracking: 748953
Description: A USM upgrade failed and the user had insufficient permissions to run a CLI command. The message ID was 15569322006.
Symptom: A control station upgrade operation failed in USM and returned the following error message: stderr: Could not chdir to home directory /home/GlobalAdmin1: No such file or directory.
Workaround: To restore the missing home directories, fail over to the previously activated control station.
Exists in versions: 05.33.009.5.238, 05.33.009.5.236, 05.33.009.5.231, 05.33.009.5.218, 05.33.009.5.217, 05.33.009.5.186, 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.008.5.117

Category: MirrorView Asynch
Platforms: All
Severity: Medium
Frequency: Likely under specific circumstances
Tracking: 329924
Description: MirrorView/A: An attempt to promote a large consistency group when the storage systems are connected by iSCSI can fail. This issue can occur during heavy I/O traffic and/or other mirror updates.
Symptom: An uncleared deadlock prevention flag (error 0x7152807) may prevent further promote or administrator operations.
Workaround: When you promote a large group, reduce other loads on the storage system where possible. If this issue is found, destroy and recreate the affected group. An SP restart of the primary and/or secondary storage system clears the deadlock prevention flag.
Exists in versions: All 5.33 versions.

Category: MirrorView Asynch
Platforms: All
Severity: Low
Frequency: Rarely, under specific circumstances
Tracking: 334575
Description: MirrorView/A: Secure CLI may time out during a mirror creation operation.
Symptom: When Secure CLI processes a large number of system actions, it may return the following error: Request failed. The force polling failed, because of timeout - the system may be busy, please try again. (334575)
Workaround: Retry the mirror create operation.
Exists in versions: All 5.33 versions.

Category: MirrorView Asynch
Platforms: All
Severity: Low
Frequency: Rarely, under specific circumstances
Tracking: 333132
Description: MirrorView/A: Mirrors may become administratively fractured unexpectedly.
Symptom: Internal data structures can become inconsistent, which results in the mirrors being administratively fractured.
Workaround: Issue a synchronize command for the mirror. If this fails the first time, retry the command.
Exists in versions: All 5.33 versions.
Category: MirrorView Synch
Platforms: All
Severity: Medium
Frequency: Rarely, under specific circumstances
Tracking: 237638
Description: MirrorView/S: A storage processor can reboot unexpectedly when a mirror is destroyed.
Symptom: If a mirror is destroyed while a trespass occurs, it is possible for internal data structures to become inconsistent and result in a storage processor reboot.
Workaround: Avoid destroying a mirror when a trespass is likely, such as just after a storage processor boots, when hosts with failover software may attempt to rebalance the load.
Exists in versions: All 5.33 versions.

Category: MirrorView Synch
Platforms: All
Severity: Low
Frequency: Rarely, under specific circumstances
Tracking: 389452
Description: MirrorView/S: A mirror may fracture unexpectedly.
Symptom: If one storage processor shuts down at the same time the other storage processor sends a synchronize command, the mirror may fracture.
Workaround: Reissue the synchronize command.
Exists in versions: All 5.33 versions.

Category: SANCopy for Block
Platforms: All
Severity: Medium
Frequency: Rarely, under specific circumstances
Tracking: 311047
Description: Remote devices in the SAN Copy session may fail if iSCSI ports are used as a combination of initiator ports and target ports.
Symptom: If SAN Copy sessions start between two storage systems connected over iSCSI, and both storage systems act as SAN Copy systems for some SAN Copy sessions and remote systems for other SAN Copy sessions, remote devices in the SAN Copy session may fail with the error: Unable to locate the device. Check that the device with this WWN exists.
Workaround: Schedule the SAN Copy sessions on both storage systems so that they do not run at the same time. Or configure the iSCSI connections between the storage systems so that the iSCSI ports used for SAN Copy in the I/O module are all used either as initiator ports or as target ports, but not in combination.
Exists in versions: All 5.33 versions.

Category: SANCopy for Block
Platforms: All
Severity: Low
Frequency: Always
Tracking: 215061
Description: A SAN Copy modify command fails if the only destination of a SAN Copy session fails.
Symptom: If the only destination of a SAN Copy session fails and this session is modified to replace it with a new destination, the modify fails. This issue happens because the modify command checks for the new destination before checking that the failed destination is removed in the modified session. The modify fails because it determines that a failed destination still exists.
Workaround: Modify the copy session to a full session and then back to an incremental session. Once the modify is successful, add your new destination to the session. Once the addition of the destination is successful, remove the previously failed destination.
Exists in versions: All 5.33 versions.

Category: SANCopy for Block
Platforms: All
Severity: Low
Frequency: Under specific circumstances
Tracking: 475390
Description: If you change the max concurrent setting and there are more active sessions than the new setting allows, the new setting does not take effect immediately.
Symptom: The max concurrent SAN Copy setting does not take effect immediately when there are already more active sessions than the new setting allows.
Workaround: The new setting takes effect after the existing active sessions stop.
Exists in versions: All 5.33 versions.
Category: SANCopy for Block
Platforms: All
Severity: Low
Frequency: Always
Tracking: 184420
Description: SAN Copy does not verify the selected LUN size when the size of the source LUN changes.
Symptom: When you use SAN Copy with Navisphere CLI, you are able to change the source LUN to be incorrectly larger than the destination LUNs. The SAN Copy session eventually fails because the destination LUNs are too small. SAN Copy does not verify that the selected LUN size is valid.
Workaround: When you modify a copy descriptor with Navisphere CLI, ensure any destination LUNs are always the same size or larger than the source LUN.
Exists in versions: All 5.33 versions.

Category: SnapView clones
Platforms: All
Severity: Low
Frequency: Rarely, under specific circumstances
Tracking: 294717
Description: A Unisphere command will fail during a single SP reboot.
Symptom: Navisphere CLI and Unisphere Manager return an error message when starting a protected restore. If a clone's protected restore operation is initiated while one of the storage processors is booting, the SP that owns the clone source may not be able to communicate with the peer SP for a short period of time.
Workaround: Reissue the protected restore operation after both SPs have completed booting.
Exists in versions: All 5.33 versions.

Category: SnapView clones
Platforms: All
Severity: Low
Frequency: Always under a rare set of circumstances
Tracking: 247637, 248149, 251731
Description: Synchronizing and reverse-synchronizing start in automatic mode when running with a single SP.
Symptom: SnapView clones restart synchronizing/reverse-synchronizing an unfractured clone after an SP failure, even if the recovery policy is set to manual. This behavior occurs only if the system is running with a single SP, or when the clone source is also a MirrorView secondary.
Workaround: Fracture the clone after the SP recovers from the failure.
Exists in versions: All 5.33 versions.

Category: SnapCLI
Platforms: Windows
Severity: Low
Frequency: Always
Tracking: 286529
Description: SnapCLI fails when the operation is targeted for a dynamic disk.
Symptom: VNX Snapshots containing a dynamic disk, which is rolled back to the production host, cannot be imported on the production host using SnapCLI.
Workaround: Dynamic disks must be imported using the Disk Administrator or the Microsoft Diskpart utility.
Exists in versions: 3.32.0.0.5 and previous

Category: Admhost
Platforms: Windows
Severity: Medium
Frequency: Always
Tracking: 203400
Description: Admhost does not support Windows Dynamic Drives.
Symptom: Admhost operations fail if you attempt them on a Dynamic Drive.
Workaround: Import Dynamic Drives using the Disk Administrator or the Microsoft Diskpart utility.
Exists in versions: 1.32.0.0.5 and previous
Category: Admsnap
Platforms: All
Severity: Low
Frequency: Rarely, under specific circumstances
Tracking: 286532
Description: A snapcli command may report a failure, but the command succeeds if the target LUN is trespassed during the operation.
Symptom: A snapcli command may report a failure, but the command succeeds if the target LUN was trespassed during the operation.
Workaround: The actual state of the operation can be verified using Unisphere® software.
Exists in versions: 3.32.0.0.5 and previous

Category: Admsnap
Platforms: All
Severity: Low
Frequency: Rarely, under specific circumstances
Tracking: 286532
Description: An admsnap command may incorrectly report a failure, but the command succeeds if the target LUN is trespassed during the operation.
Symptom: The admsnap command may report a failure, but the operation succeeds if the target LUN of the operation is trespassed during the admsnap operation.
Workaround: The actual state of the operation can be verified by using Navisphere®/Unisphere® software.
Exists in versions: 2.32.0.0.5 and previous

Category: Admsnap
Platforms: AIX
Severity: Low
Frequency: Likely under specific circumstances
Tracking: 215474
Description: The snapcli attach command executed in an AIX PowerPath® environment can generate SC_DISK_ERR2 messages in the system log file.
Symptom: The error messages are generated due to the presence of a detached VNX Snapshots mount point in the storage group. Whenever VNX Snapshots mount points are in the storage group, running cfgmgr causes ASC 2051 problems in the errpt file. The messages are harmless and can be ignored.
Workaround: Activate the VNX Snapshots mount point using Navisphere Secure CLI, then execute SnapCLI on the host to prevent these messages from being generated.
Exists in versions: 3.32.0.0.5 and previous

Category: Admsnap
Platforms: AIX
Severity: Low
Frequency: Likely under specific circumstances
Tracking: 215474
Description: Admsnap activate commands executed in an AIX PowerPath® environment may generate SC_DISK_ERR2 messages in the system log file.
Symptom: The messages are generated because a deactivated snapshot is present in the storage group. Whenever snapshots are in the storage group, running cfgmgr causes ASC 2051 problems in the errpt file. The messages are harmless and can be ignored.
Workaround: To prevent these messages from being generated, activate the snapshot using Secure CLI, which allows execution of admsnap on the host.
Exists in versions: 2.32.0.0.5 and previous

Category: Admsnap
Platforms: Linux
Severity: Medium
Frequency: Always under specific circumstances
Tracking: 286533
Description: VNX Snapshots are not attached on a Linux system if more than eight device paths are required to complete the operation.
Symptom: SnapCLI cannot access more than eight device paths on the backup host.
Workaround: The Linux kernel creates only eight SG devices (SCSI generic devices) by default. Additional SG devices must be created and then linked to the SD devices. (The internal disk uses one of the SG devices.) Use the Linux utility /dev/MAKEDEV to create additional SCSI generic devices.
Exists in versions: 3.32.0.0.6 and previous
Category: Admsnap
Platforms: Linux
Severity: Medium
Frequency: Always under specific circumstances
Tracking: 286533
Description: Admsnap sessions are not activated on a Linux system if more than eight device paths are required to complete the operation.
Symptom: Admsnap cannot access more than eight device paths on the backup host.
Workaround: The Linux kernel creates only eight SG devices (SCSI generic devices) by default. Additional SG devices must be created and then linked to the SD devices. (The internal disk uses one of the SG devices.) Use the Linux utility /dev/MAKEDEV to create additional SCSI generic devices, as shown in the sketch below.
Exists in versions: 2.32.0.0.6 and previous
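A minimal sketch of the MAKEDEV workaround for the two Linux issues above, run as root on the backup host; the availability of the MAKEDEV script and the number of device nodes it creates vary by distribution, so verify the resulting /dev/sgN nodes afterward:
$ cd /dev
$ ./MAKEDEV sg     # creates additional /dev/sgN SCSI generic device nodes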
Category: Admsnap
Platforms: Solaris
Severity: Low
Frequency: Always
Tracking: 207655
Description: The snapcli attach command causes LUNs to trespass on the host in a DMP environment.
Symptom: The snapcli attach command (when issued without the -o <device> parameter) scans the SCSI bus using the primary and secondary paths for the devices, causing the LUNs to trespass.
Workaround: Use the -o <device> option for the snapcli attach command.
Exists in versions: 3.32.0.0.5 and previous

Category: Admsnap
Platforms: Solaris
Severity: Low
Frequency: Always
Tracking: 207655
Description: The admsnap activate command causes LUNs to trespass on the host in a DMP environment.
Symptom: The admsnap activate command (when issued without the -o <device> parameter) scans the SCSI bus using the primary and secondary paths for the devices, causing the LUNs to trespass.
Workaround: Use the -o <device> option for the admsnap activate command.
Exists in versions: 2.32.0.0.5 and previous

Category: Admsnap
Platforms: SuSE
Severity: Low
Frequency: Always under specific circumstances
Tracking: 230902
Description: If rpm is used on SuSE systems during the installation or removal process, warning messages are generated or there are problems installing or uninstalling SnapCLI.
Symptom: The package fails to complete the operation (installation or removal) or warning messages are generated.
Workaround: Use the yast2 command on SuSE Linux systems to install or remove the SnapCLI package.
Exists in versions: 3.32.0.0.6 and previous

Category: Admsnap
Platforms: SuSE
Severity: Low
Frequency: Always under specific circumstances
Tracking: 230902
Description: Warning messages are generated during the installation or removal process if admsnap is installed or uninstalled with rpm on SuSE systems.
Symptom: The package fails to complete the operation (installation or removal) or warning messages are generated.
Workaround: Use the yast2 command on SuSE Linux systems to install or remove the admsnap package.
Exists in versions: 2.32.0.0.6 and previous
Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always
Tracking: 286529
Description: Admsnap fails when the operation is targeted for a dynamic disk.
Symptom: An admsnap session containing a dynamic disk that is rolled back to the production host cannot be imported on the production host using admsnap.
Workaround: Dynamic disks must be imported using the Disk Administrator or the Microsoft Diskpart utility.
Exists in versions: 2.32.0.0.5 and previous

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always under specific circumstances
Tracking: 226043
Description: SnapCLI fails to create or destroy a VNX Snapshot in a DMP environment.
Symptom: SnapCLI fails to create or destroy a VNX Snapshot in a DMP environment. The trespassing of the LUN that is the target of the snapcli command causes the failure. The DMP/Volume Manager keeps track of the current storage processor owner independently from the operating system.
Workaround: Trespass the volume to the peer storage processor and use the snapcli command.
Exists in versions: 3.32.0.0.5 and previous

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always under specific circumstances
Tracking: 226043
Description: The admsnap command fails to start or stop a session in a DMP environment.
Symptom: The admsnap command fails to start or stop a session in a DMP environment. The failure is due to the trespassing of the LUN that is the target of the admsnap operation. The DMP/Volume Manager keeps track of the current storage processor owner independently from the operating system.
Workaround: Trespass the volume to the peer storage processor and reissue the admsnap command.
Exists in versions: 2.32.0.0.5 and previous

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always under specific circumstances
Tracking: 225937
Description: The snapcli attach command might return a warning if the operating system maintains a drive letter mapping for the volume being brought online.
Symptom: The snapcli attach command generates a warning that one or more devices were not assigned drive letters, even though the other volumes were assigned drive letters.
Workaround: This usually occurs when the registry contains stale device mapping information. Update the registry by using the "scrubber" utility from Microsoft to remove stale entries. The stale device mapping information generates a condition that prevents SnapCLI from determining whether all attached volumes were assigned drive letters, so SnapCLI generates a warning.
Exists in versions: 3.32.0.0.5 and previous
Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always under specific circumstances
Tracking: 225937
Description: The admsnap activate command may inaccurately return a warning if the operating system maintains a drive letter mapping for the volume being brought online.
Symptom: The admsnap activate command generates a warning that one or more devices were not assigned a drive letter, even though all volumes were assigned drive letters.
Workaround: This usually occurs when the registry contains stale device mapping information. Update the registry by using the Microsoft "scrubber" utility to remove stale entries. The stale device mapping information generates a condition that prevents admsnap from determining whether all activated volumes were assigned drive letters, so admsnap generates a warning.
Exists in versions: 2.32.0.0.5 and previous

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always under specific circumstances
Tracking: 225832
Description: The snapcli list command might list devices multiple times when executed in a DMP environment.
Symptom: The snapcli list command might list devices multiple times when executed in a DMP environment.
Workaround: Use the -d option on the command line to suppress duplicate entries for a particular device or volume.
Exists in versions: 3.32.0.0.5 and previous

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always under specific circumstances
Tracking: 225832
Description: The admsnap list command may list devices multiple times when executed in a DMP environment.
Symptom: The admsnap list command may list devices multiple times when executed in a DMP environment.
Workaround: Use the -d option on the command line to suppress duplicate entries for a particular device or volume.
Exists in versions: 2.32.0.0.5 and previous

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always
Tracking: 206345
Description: The snapcli create command fails if the volume contains a Veritas volume and the LUNs for that volume are trespassed to the secondary path.
Symptom: The snapcli create command fails with the following error message if the volume contains a Veritas volume and the LUNs for that volume are trespassed to the secondary path: Error: 0x3E050011 (One or more devices need manual attention by the operator).
Workaround: Trespass the volume to the primary path or use Navisphere Secure CLI to create the VNX Snapshot.
Exists in versions: 3.32.0.0.5 and previous

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always
Tracking: 206345
Description: The admsnap start command fails if the volume contains a Veritas volume and the LUNs for that volume are trespassed to the secondary path.
Symptom: The admsnap start command fails with Error: 0x3E050011 (One or more devices need manual attention by the operator), if the volume contains a Veritas volume and the LUNs for that volume are trespassed to the secondary path.
Workaround: Trespass the volume to the primary path or use Navisphere Secure CLI to start the SnapView session.
Exists in versions: 2.32.0.0.5 and previous
Category: Virtualization
Platforms: All
Severity: Low
Tracking: 565093
Description: VASA Provider goes offline and the certificate is missing. Information does not update in Virtual Center.
Symptom: Adding a VNX Block storage system twice to the list of Vendor Providers (once for each Storage Processor) is not a valid configuration. It results in duplicate data being returned to vCenter Server. Removing one of these duplicate connections also de-authorizes the vCenter Server on the other (remaining) connection, which causes that connection to go offline.
Workaround: Remove both VASA connections to the storage system, then re-add a single VASA connection to either of the SPs.

Category: Virtualization
Platforms: All
Severity: Low
Frequency: Always under specific circumstances
Tracking: 496273
Description: VASA: LDAP users are not properly mapped to the correct VASA privileges on VNX for File.
Symptom: Unable to add a Vendor Provider in vSphere using the credentials of an LDAP user on VNX for File.
Workaround: To allow an LDAP user to log in through VASA, it is not sufficient for the LDAP group to be mapped to an appropriate group.
1. Create an LDAP user account, if it does not exist, by having the user log in through Unisphere (through the Control Station IP). This auto-creates the account.
2. When the appropriate mapped account exists on the Control Station, the administrator can add the user to any of the three groups: 'VM Administrator', 'Security Administrator', or 'Administrator'.
Note: The account should be mapped to the least privileged account that allows all the necessary operations to take place.
Exists in versions: All versions
Category: Virtualization
Platforms: All
Severity: Low
Tracking: 465905
Description: VMware: Sync is not performed after a session is re-established.
Symptom: VASA information displayed in vSphere may be out of sync with the current state of the storage system due to errors in the Storage Provider, errors in the vCenter Server, or communication errors between the two components.
Workaround: If you suspect that this information may be out of date, perform a manual Sync via the Storage Providers page and refresh/update the appropriate pages in the vSphere Client. If this does not fix the issue, remove and re-add the Storage Provider(s) in question.

Category: Virtualization
Platforms: All
Severity: Low
Frequency: Always under specific circumstances
Description: VAAI: Copy offload between SPs causes implicit trespasses.
Symptom: The copy offload operation generates requests on the array that trigger load balancing. If the source and destination devices are not owned by the same SP, the array moves ownership of the destination LUN to match the source.
Workaround: No action necessary. After the copy offload operation completes, normal host I/O to the destination LUN triggers the load balancing back to the correct user-assigned SP.
Exists in versions: All 1.3 versions.

Category: VNX File OE, NDMP
Platforms: VNX for File
Severity: Low
Tracking: 953466
Description: When performing an NDMP backup, the original pathname is prefixed with the NDMP checkpoint name. If the final pathname length exceeds 1023 bytes, the backup job fails.
Symptom: While performing an NDMP backup, the complete name passed for backup, together with the checkpoint name and mount point name, adds up to more than the 1023-byte limit.
Workaround: If backing up a directory will result in a final pathname length longer than 1023 bytes, consider backing up its parent directory instead.
Documentation
Dell provides the ability to create step-by-step planning, installation, and maintenance instructions
tailored to your environment. To create customized VNX documentation, go to:
https://mydocuments.emc.com/VNX.
For the most up-to-date documentation and help, go to Online Support at https://Support.EMC.com.
Configuring and Managing CIFS on VNX

Please note the following for the manual, Configuring and Managing CIFS on VNX (P/N 300-014-332 Rev. 04):

Configure SMB Signing with the Windows Registry

In the “Client-side signing” subsection, the settings are located in:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\lanmanworkstation\parameters\
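To verify the client-side signing configuration, the values under this key can be inspected with the reg utility; a minimal sketch, assuming the standard RequireSecuritySignature value name is present on the client:
C:\> reg query HKLM\System\CurrentControlSet\Services\lanmanworkstation\parameters /v RequireSecuritySignature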
Rename a compname

Step 4 is incorrect, and should be replaced with the following:
4. To rename a NetBIOS server to the new name, type:
$ server_cifs server_2 -rename -netbios W2kTemp W2kProd
Configuring VNX Naming Services

This manual refers to iPlanet support rather than support for Oracle Directory Server Enterprise Edition (ODSEE) in VNX. ODSEE is the new version of the product formerly known as iPlanet, Sun Java System Directory Server, and Sun ONE Directory Server. Version 11.x is supported by VNX. This manual will be updated in a subsequent release.
Configuring Virtual Data Movers on VNX

Please note the following for the manual, Configuring Virtual Data Movers on VNX (P/N 300-014-560 Rev 01):

Attaching interfaces to a VDM

The “Attach one or more interfaces to a VDM” section needs to be updated to include details for the nas_server attach command:
| -vdm <vdm_name> -attach <interface>[,<interface2>…]
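For example, a minimal sketch of attaching two interfaces with this option; the VDM and interface names shown are illustrative only:
$ nas_server -vdm vdm_hr -attach if1,if2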
Detaching interfaces from a VDM

A new topic called “Detach a network interface from a VDM” needs to be added, which will include details for the nas_server detach command:
| -vdm <vdm_name> -detach <interface>[,<interface2>…]
Also, the “Assign an interface to a VDM” section needs to be updated to include a cross-reference to the new “Detach a network interface from a VDM” topic.
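A minimal sketch of the corresponding detach invocation, using the same illustrative VDM and interface names as the attach example above:
$ nas_server -vdm vdm_hr -detach if2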
Querying the interfaces attached to a VDM

The output in the “Query the NFS export list on a VDM” section needs to be updated to include network attached interface configuration information.
Editing VDM domain configurations

A new topic called “Clear the domain configuration for a VDM” needs to be added, which will include details for the server_nsdomains unset command:
| -unset -resolver <resolver> [-resolver <resolver>...]
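A minimal sketch of clearing a resolver with this option; the VDM name and the resolver value shown are illustrative assumptions, not taken from the manual:
$ server_nsdomains vdm_hr -unset -resolver DNS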
Parameters Guide for VNX for File

Please note the following for the manual, Parameters Guide for VNX for File (P/N 300-151-171 Rev 01):
The following parameter has been added for the ftpd facility:
Facility : ftpd
Parameter : showSysInfo
Value : 0 or 1
Default Value : 1
Comments/description : Displays the system information in the FTP server banner.
0 = Disable the display of the system version info in the FTP server banner.
1 = Enable the display of the system version info in the FTP server banner.
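This parameter, like the others in this section, is set with the server_param command; a minimal sketch of disabling the banner information on a hypothetical Data Mover named server_2:
$ server_param server_2 -facility ftpd -modify showSysInfo -value 0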
The following parameters have been added for the ldap facility:
Facility : ldap
Parameter : cachePersistent
Value : 0 or 1
Default Value : 0
Comments/description : Enables caching of hostnames inside the LDAP service and controls whether the cache persists across a Data Mover reboot.
0 = The LDAP cache is cleared at Data Mover boot.
1 = The LDAP cache is not cleared at Data Mover boot.
Facility : ldap
Parameter : maxClientSetupDelay
Value : 30 to 90 seconds
Default Value : 30 seconds
Comments/description : Sets the maximum time period that LDAP requests are held while the LDAP
service initializes its configuration when a Data Mover boots. All LDAP requests received during this
maxClientSetupDelay period are held. Once the LDAP service configuration completes, any held
and subsequent LDAP requests are accepted immediately. If the LDAP service fails to complete its
configuration within the maxClientSetupDelay period, it will respond to any held LDAP requests
with a service could not provide information error, and will continue responding to any
additional LDAP requests with that error until its configuration has completed.
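Before changing either ldap parameter, the current, default, and configured values can be checked with the -info form of server_param; a sketch, again assuming a Data Mover named server_2:
$ server_param server_2 -facility ldap -info maxClientSetupDelay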
The following parameter has been added for the NDMP facility:
Facility : NDMP
Parameter : remoteMoverNetworkTimeOut
Value : 600–86400
Default Value : 7200
Comments/description : Specifies the maximum time (in seconds) that the Data Mover waits for the
remote Data Mover during a three-way backup or restore operation. It is a read timeout during a restore
operation, and a write timeout during a backup operation, while waiting for a network connection. Some
systems may take an exceptionally long time to respond during restore or backup operations.
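A sketch of raising this timeout for three-way backups over a slow link; the value of 14400 seconds (4 hours) and the Data Mover name are illustrative:
$ server_param server_2 -facility NDMP -modify remoteMoverNetworkTimeOut -value 14400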
The following parameters have been added for the ufs facility:
Facility : ufs
Parameter : dirLinkAlertThreshold
Value : 1 to 101 percent
Default Value : 101 (disables the threshold)
Comments/description : An alert message is generated when a directory’s subdirectory limit crosses the
specified threshold.
Facility : ufs
Parameter : dirLinkAlertTimer
Value : 0 to 10000000 minutes; setting 0 disables the timer
Default Value : 15 minutes
Comments/description : Prevents redundant alert messages from being generated. The subdirectory
limit count must stay above the specified threshold for this timer value before another alert is generated.
If a directory repeatedly crosses above and below the threshold, an alert is sent at most every 15 minutes
(or the time set for this parameter).
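A sketch of enabling the subdirectory-limit alert at 90 percent with a 30-minute suppression window; the values and Data Mover name are illustrative:
$ server_param server_2 -facility ufs -modify dirLinkAlertThreshold -value 90
$ server_param server_2 -facility ufs -modify dirLinkAlertTimer -value 30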
Using FTP, TFTP, and SFTP on VNX

Please note the following for the manual, Using FTP, TFTP, and SFTP on VNX (P/N 300-015-134 Rev 01):
FTP does not support client certificate authentication. On page 21, the corrected information for the last
bullet in the “Authentication methods for FTP” section is as follows:
• If you do not specify the username option but enable SSL and configure the sslpersona,
anonymous authentication without SSL is used.
Using VNX Replicator

Please note the following for the manual, Using VNX Replicator (P/N 300-014-567 Rev 01):
The syntax for the SnapSure configuration has changed to: ckpt:10:200:20
When changing the SnapSure configuration, the updated process is as follows:
1. Locate this SnapSure configuration line in the file:
ckpt:10:200:20, where:
10 = Control Station polling interval rate in seconds
200 = maximum rate at which a file system is written, in MB/second
20 = percentage of the entire system’s volume allotted to the creation and extension of all the
SavVols used by the VNX system.
Note: If this line does not exist, it means the SavVol-space-allotment parameter is currently set to its
default value of 20, which means 20 percent of the system space can be used for SavVols. To change this
setting, you must first add the line: ckpt:10:200:20.
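For example, to raise the SavVol space allotment from the default 20 percent to 30 percent, only the third field of the line is changed; this assumes the line is edited in the NAS parameters file described in the manual (commonly /nas/sys/nas_param, which should be verified for your release):
ckpt:10:200:30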
VNX5400 Parts Location Guide

Please note the following for the manual, VNX5400 Parts Location Guide (P/N 300-015-013 Rev 04):
Table 5, SP CPU module part number, on page 15 is incorrect, and should be changed to the following
(the part number label location is shown in Figure 8 on page 14):

Part number      Description                                          FRU  CRU
110-201-003B-01  SP 4-core 1.8-GHz CPU module with 16 GB of memory
110-201-006B-01  SP 4-core 1.8-GHz CPU module without memory          √
Where to get help

Support, product, and licensing information can be obtained as follows:
Product information
For documentation, release notes, software updates, or for information about EMC products, licensing,
and service, go to Online Support (registration required) at: https://support.emc.com/.
Troubleshooting
Go to Online Support. After logging in, locate the appropriate Support by Product page.
Technical support
For technical support and service requests, go to Online Support. After logging in, locate the appropriate
Support by Product page and choose either Join Live Chat or Create Service Request. To open a service
request through Online Support, you must have a valid support agreement. Contact your Sales
Representative for details about obtaining a valid support agreement or with any questions about your
account.
Copyright © 2013-2020 Dell Inc. or its subsidiaries. All rights reserved.
Published March 2020
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS-IS”. DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE
DESCRIBED IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell Technologies, Dell, EMC, Dell EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may
be the property of their respective owners. Published in the USA.
Dell EMC
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 In North America 1-866-464-7381
www.DellEMC.com