Veritas Storage Foundation Scalable File Server 5.5
Legal Notice
Copyright 2009 Symantec Corporation. All rights reserved. Symantec, the Symantec Logo are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners. This Symantec product may contain third party software for which Symantec is required to provide attribution to the third party (Third Party Programs). Some of the Third Party Programs are available under open source or free software licenses. The License Agreement accompanying the Software does not alter any rights or obligations you may have under those open source or free software licenses. Please see the Third Party Legal Notice Appendix to this Documentation or TPIP ReadMe File accompanying this Symantec product for more information on the Third Party Programs. The product described in this document is distributed under licenses restricting its use, copying, distribution, and decompilation/reverse engineering. No part of this document may be reproduced in any form by any means without prior written authorization of Symantec Corporation and its licensors, if any. THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE. 
The Licensed Software and Documentation are deemed to be commercial computer software as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19 "Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in Commercial Computer Software or Commercial Computer Software Documentation", as applicable, and any successor regulations. Any use, modification, reproduction release, performance, display or disclosure of the Licensed Software and Documentation by the U.S. Government shall be solely in accordance with the terms of this Agreement.
Technical Support
Symantec Technical Support maintains support centers globally. Technical Support's primary role is to respond to specific queries about product features and functionality. The Technical Support group also creates content for our online Knowledge Base. The Technical Support group works collaboratively with the other functional areas within Symantec to answer your questions in a timely fashion. For example, the Technical Support group works with Product Engineering and Symantec Security Response to provide alerting services and virus definition updates. Symantec's maintenance offerings include the following:
- A range of support options that give you the flexibility to select the right amount of service for any size organization
- Telephone and Web-based support that provides rapid response and up-to-the-minute information
- Upgrade assurance that delivers automatic software upgrade protection
- Global support that is available 24 hours a day, 7 days a week
- Advanced features, including Account Management Services
For information about Symantec's Maintenance Programs, you can visit our Web site at the following URL: www.symantec.com/techsupp/
When you contact Technical Support, please have the following information available:
- Product release level
- Hardware information
- Available memory, disk space, and NIC information
- Operating system version and patch level
- Network topology
- Router, gateway, and IP address information
- Problem description:
  - Error messages and log files
  - Troubleshooting that was performed before contacting Symantec
  - Recent software configuration changes and network changes
Customer service
Customer service information is available at the following URL: www.symantec.com/techsupp/

Customer Service is available to assist with the following types of issues:
- Questions regarding product licensing or serialization
- Product registration updates, such as address or name changes
- General product information (features, language availability, local dealers)
- Latest information about product updates and upgrades
- Information about upgrade assurance and maintenance contracts
- Information about the Symantec Buying Programs
- Advice about Symantec's technical support options
- Nontechnical presales questions
- Issues that are related to CD-ROMs or manuals
Consulting Services
Educational Services
To access more information about Enterprise services, please visit our Web site at the following URL: www.symantec.com. Select your country or language from the site index.
Contents

Technical Support

Chapter 1: Introducing the Veritas Storage Foundation Scalable File Server
- About Storage Foundation Scalable File Server
- About the core strengths of SFS
- About SFS features: Simple installation; Administration; Scalable NFS; NFS Lock Management (NLM); Active/Active CIFS; Storage tiering
- SFS key benefits and other applications: High performance scaling and seamless growth; High availability; Consolidating and reducing costs of storage; Enabling scale-out compute clusters and heterogeneous sharing of data

Chapter 2: Creating users based on roles

Chapter 3: Displaying and adding nodes to a cluster

Chapter 7:
- About creating and maintaining file systems
- Listing all file systems and associated information
- About creating file systems; Creating a file system
- Adding or removing a mirror to a file system
- Configuring FastResync for a file system; Disabling the FastResync option for a file system
- Increasing the size of a file system; Decreasing the size of a file system
- Checking and repairing a file system
- Changing the status of a file system
- Destroying a file system
- About snapshots; Configuring snapshots
- About snapshot schedules; Configuring snapshot schedules

Chapter 8:
- About NFS file sharing
- Displaying exported file systems
- Adding an NFS share
- Sharing file systems using CIFS and NFS protocols
- Unexporting a file system or deleting NFS options

Chapter 10: Using FTP
- About FTP
- Displaying FTP server
- About FTP server commands; Using the FTP server commands
- About FTP set commands; Using the set commands
- About FTP session commands; Using the FTP session commands
- Using the logupload command
Chapter 1
Introducing the Veritas Storage Foundation Scalable File Server

- About Storage Foundation Scalable File Server
- About the core strengths of SFS
- About SFS features
- SFS key benefits and other applications
- Backup operations using NDMP and/or the built-in NetBackup client
- Active/Active CIFS, including integration with Active Directory operations
- Global cluster administration through a single interface
- Active/Active shared data NFS sharing, including shared read/write and LDAP/NIS support
- Simple administration of Fibre Channel Host Bus Adapters (HBAs), file systems, disks, snapshots, and Dynamic Storage Tiering (DST)
- SNMP, syslog, and email notification
- Seamless upgrade and patch management
- Support information
- Online man pages
- Simple help
SFS provides sharing of NFS and CIFS file systems in a simple, highly scalable, and highly available manner. The components of SFS include a security-hardened, custom-install SLES 10 SP2 operating system, core Storage Foundation services including Cluster File System, and the SFS software platform. These components are provided on a single DVD or DVD ISO image.
- Dynamic Multipathing (DMP)
- Cluster Volume Manager
- Cluster File System (CFS)
- Veritas Cluster Server (VCS)
- Dynamic Storage Tiering (DST)
- I/O Fencing
DMP provides Fibre Channel Host Bus Adapter load balancing policies and tight integration with array vendors to provide in-depth failure detection and path failover logic. DMP is compatible with more hardware than any similar product. Cluster Volume Manager provides a cluster-wide consistent virtualization layer that leverages all the strengths of Veritas Volume Manager (VxVM) including online re-layout and resizing of volumes, and online array migrations. You can mirror your underlying SFS file systems across separate physical frames to ensure maximum availability on the storage tier. This technique seamlessly adds or removes new storage, whether single drives or entire arrays.
Cluster File System complies with the Portable Operating System Interface (POSIX) standard. It also provides full cache consistency and global lock management at a file or sub-file level. CFS lets all nodes in the cluster perform metadata or data transactions, which allows linear scalability in terms of NFS operations per second. VCS monitors communication and manages failover for all nodes in the cluster and their associated critical resources, including virtual IP address failover for all client connections regardless of the client protocol. DST dynamically and transparently moves files to different storage tiers to respond to changing business needs. DST is used in Storage Foundation Scalable File Server as SFS Storage Tiering. I/O fencing further helps to guarantee data integrity in the event of a multiple network failure by using the SFS storage to ensure that cluster membership can be determined correctly. This virtually eliminates the chance of cluster split-brain occurring.
Simple installation
A single node in the cluster is booted from a DVD containing the operating system image, core Storage Foundation, and SFS modules. While the node boots, the other nodes are defined using IP addresses. After you install SFS and the first node is up and running, the rest of the cluster nodes are automatically installed with all necessary components. The key services are then automatically started to allow the cluster to begin discovering storage and creating file shares.
Administration
SFS contains a role-based administration model consisting of the following key roles:
- Master
- System Administrator
- Storage Administrator

These roles are consistent with the operational roles in many data centers.
For each role, the administrator uses a simple menu-driven text interface. This interface provides a single point of administration for the entire cluster. A user logs in as one of those roles on one of the nodes in the cluster and runs commands that perform the same tasks on all nodes in the cluster. You do not need to have any knowledge of the Veritas Storage Foundation technology to install or administer an SFS cluster. If you are currently familiar with core SFCFS or Storage Foundation in general, you will be familiar with the basic management concepts.
Scalable NFS
With SFS, all nodes in the cluster can serve the same NFS shares as both read and write. This creates very high aggregated throughput rates, because you can use the sum of the bandwidth of all nodes. Cache coherency is maintained throughout the cluster.
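As a rough illustration of this aggregation (the node count and interface speeds below are hypothetical examples, not SFS specifications):

```python
# Back-of-the-envelope sketch: with every node serving the same NFS share
# read/write, client-side throughput can approach the sum of the nodes'
# network bandwidth (cache-coherency overhead ignored).
nodes = 8                  # assumed cluster size
per_node_gbps = 2 * 1.0    # assumed: two 1 Gb/s public interfaces per node
aggregate_gbps = nodes * per_node_gbps
print(aggregate_gbps)      # 16.0
```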
Active/Active CIFS
CIFS is active on all nodes within the SFS cluster. The specific shares are read/write on the node they reside on, but can failover to any other node in the cluster. SFS supports CIFS home directory shares.
Storage tiering
SFS's built-in Dynamic Storage Tiering (DST) feature can reduce the cost of storage by moving data to lower cost storage. SFS storage tiering also facilitates the moving of data between different drive architectures. DST lets you do the following:
- Create each file in its optimal storage tier, based on pre-defined rules and policies.
- Relocate files between storage tiers automatically as optimal storage changes, to take advantage of storage economies.
- Retain original file access paths to minimize operational disruption, for applications, backup procedures, and other custom scripts.
- Handle millions of files that are typical in large data centers.
- Automate these features quickly and accurately.
Figure 1-1
Because SFS cluster scalability is near linear, extremely high throughput can be obtained with 16-node clusters.
High availability
SFS has an "always on" file service that provides zero interruption of file services for company-critical data. The loss of single or even multiple nodes does not interrupt I/O operations on the client tier. This is in stark contrast to the traditional NFS active/passive failover paradigm. The SFS architecture provides transparent failover for other key services such as NFS lock state, CIFS and FTP daemons, reporting, logging, and backup/restore operations. The console service that provides access to the centralized menu-driven interface is automatically failed over to another node. The installation service is also highly available and can seamlessly recover if the initially installed node fails during the installation of the remaining nodes in the cluster. The use of Veritas Cluster Server technology and software within SFS is key to the ability of SFS to provide best-of-breed high availability, in addition to class-leading scale-out performance.
With SFS, you can group storage assets into fewer, larger shared pools. This increases the utilization of backend LUNs and overall storage. SFS also has built-in, pre-configured heterogeneous storage tiering, which lets you use different types of storage in a primary and secondary tier configuration. Using simple policies, data can be transparently moved from the primary storage tier to the secondary tier. This is ideal when mixing drive types and architectures, such as high-speed SAS drives with cheaper storage, such as SATA-based drives. Furthermore, data can be stored initially on the secondary tier and then promoted to the primary tier dynamically based on a pattern of I/O. This creates an optimal scenario when you use Solid State Disks (SSDs), because there is often a significant difference between the amount of SSD storage available and the amount of other storage, such as SATA drives. Data and files that are promoted to the primary tier are transferred back to the secondary tier in accordance with the configured access-time policy. All of this results in substantially increased efficiency, and it can save you money because you make better use of the storage you already have.
Chapter 2
Creating users based on roles

- About user roles and privileges
- About the naming requirements for adding new users
- About using the SFS command-line interface
- Logging in to the SFS CLI
- About accessing the online man pages
- About creating Master, System Administrator, and Storage Administrator users
- About the support user
- Displaying the command history
System Administrator
Storage Administrator
The Support account is reserved for Technical Support use only, and it cannot be created by administrators. See Using the support login on page 325.
Length: User names can be up to 31 characters. If a user name is longer than 31 characters, you receive the error "Invalid user name."

Case: Command names are case-insensitive: username and USERNAME are treated as the same. However, user-provided variables are case-sensitive.

Characters: Hyphens (-) and underscores (_) are allowed. Valid user names include:
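The rules above can be sketched as a small validation function (a hypothetical illustration, not SFS code; it assumes the allowed characters are letters, digits, hyphens, and underscores):

```python
import re

# Hypothetical check mirroring the stated rules: at most 31 characters,
# with hyphens (-) and underscores (_) allowed alongside letters and digits.
VALID_USER = re.compile(r"^[A-Za-z0-9_-]{1,31}$")

def is_valid_user_name(name: str) -> bool:
    return bool(VALID_USER.match(name))

print(is_valid_user_name("storage_admin-1"))  # True
print(is_valid_user_name("x" * 32))           # False: "Invalid user name"
```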
See Creating Master, System Administrator, and Storage Administrator users on page 33.
You can access:
- Command-line help, by typing a command and then a question mark (?)
- Command-line manual (man) pages, by typing man and the name of the command you are trying to find

The SFS online command-line man pages use the following conventions:
- | (pipe): you must choose one of the elements on either side of the pipe.
- [ ] (brackets): the element inside the brackets is optional.
- { } (braces): the element inside the braces is part of a group.
- italics: a variable for which you need to supply a value.
Log in to SFS using the appropriate user role, System Admin, Storage Admin, or Master. See Logging in to the SFS CLI on page 25.
Enter the name of the mode you want to enter. For example, to enter the admin mode, you would enter the following:
admin
You can tell you are in the admin mode because you will see the following:
Admin>
The following tables describe all the available modes, the commands associated with each mode, and which roles to use depending on the operation you are performing. Tables 2-4 through 2-16 map each mode's commands to the roles (System Admin, Storage Admin, Master) that can run them; Table 2-4, for example, covers the Admin mode commands. (The role matrices are not reproduced here.)
Network> man ldap
NAME
    ldap - configure LDAP client for authentication
SYNOPSIS
    ldap enable
    ldap disable
    ldap show [users|groups|netgroups]
    ldap set {server|port|basedn|binddn|ssl|rootbinddn|users-basedn|groups-basedn|netgroups-basedn|password-hash} value
    ldap get {server|port|basedn|binddn|ssl|rootbinddn|users-basedn|groups-basedn|netgroups-basedn|password-hash}
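Putting the synopsis together, a configuration session might look like the following sketch (the server name and base DN are placeholder values, not defaults):

```text
Network> ldap set server ldap1.example.com
Network> ldap set basedn dc=example,dc=com
Network> ldap enable
Network> ldap show users
```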
You can also type a question mark (?) at the prompt for a list of all the commands that are available for the command mode that you are in. For example, if you are within the admin mode, if you type a question mark (?), you will see a list of the available commands for the admin mode.
sfs> admin ? Entering admin mode... sfs.Admin> exit logout man passwd show supportuser user --return to the previous menus --logout of the current CLI session --display on-line reference manuals --change the administrator password --show the administrator details --enable or disable the support user --add or delete an administrator
To exit the command mode, enter the following: exit. For example:
sfs.Admin> exit sfs>
To exit the system console, enter the following: logout. For example:
sfs> logout
passwd
Creates a password. Passwords should be eight characters or less. If you enter a password that exceeds eight characters, the password is truncated, and you need to specify the truncated password when re-entering the password. For example, if you entered "elephants" as the password, the password is truncated to "elephant," and you will need to re-enter "elephant" instead of "elephants" for the system to accept your password. By default, the initial password for any user is the same as the username. For example, if you logged in as user1, your default password would also be user1. You will not be prompted to supply the old password. See To change a user's password on page 34.
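The truncation behavior described above can be illustrated with a short sketch (illustrative only; the actual truncation happens inside the appliance's passwd command):

```python
# Passwords longer than eight characters are silently truncated, so the
# effective password is the first eight characters of what you typed.
def effective_password(entered: str) -> str:
    return entered[:8]

print(effective_password("elephants"))  # "elephant"
print(effective_password("lion"))       # "lion" (short passwords unchanged)
```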
show
Displays a list of current users, or you can specify a particular username and display both the username and its associated privilege. See To display a list of current users on page 34.
user delete

Deletes the specified user.
For example:
Admin> user add master1 master Creating Master: master1 Success: User master1 created successfully
For example:
Admin> user add systemadmin1 system-admin Creating System Admin: systemadmin1 Success: User systemadmin1 created successfully
For example:
Admin> user add storageadmin1 storage-admin Creating Storage Admin: storageadmin1 Success: User storageadmin1 created successfully
To change the password for the current user, enter the following command:
Admin> passwd
You will be prompted to enter the new password for the current user.
To change the password for a user other than the current user, enter the following command:
Admin> passwd [username]
You will be prompted to enter the new password for the user.

To display a list of current users
For example:
Admin> show List of Users ------------master user1 user2
To display the details of the administrator with the username master, enter the following:
Admin> show master Username : master Privileges : Master Admin>
If you want to display the list of all the current users prior to deleting a user, enter the following:
Admin> show
For example:
Admin> user delete user1 Deleting User: user1 Success: User user1 deleted successfully
supportuser password
Changes the support user password. The password can be changed at any time. See To change the support user password on page 36.
supportuser status Checks the status of the support user (whether it is enabled or disabled).
For example:
Admin> supportuser enable Enabling support user. support user enabled. Admin>
If you want to change the support user password, enter the following:
Admin> supportuser password
For example:
Admin> supportuser password Changing password for support. New password: Re-enter new password: Password changed Admin>
If you want to check the status of the support user, enter the following:
Admin> supportuser status
For example:
Admin> supportuser status support user status : Enabled Admin>
For example:
Admin> supportuser disable Disabling support user. support user disabled. Admin>
For example:
SFS> history master 7
Username   : master
Privileges : Master
Time              Status   Message                    Command
02-12-2009 11:09  Success  NFS> server status         (server status)
02-12-2009 11:10  Success  NFS> server start          (server start)
02-12-2009 11:19  Success  NFS> server stop           (server stop)
02-12-2009 11:28  Success  NFS> fs show               (show fs)
02-12-2009 15:00  SUCCESS  Disk list stats completed  (disk list)
02-12-2009 15:31  Success  Network shows success      (show)
02-12-2009 15:49  Success  Network shows success      (show)
SFS>
Chapter 3
Displaying and adding nodes to a cluster

- About the cluster commands
- Displaying the nodes in the cluster
- About adding a new node to the cluster
- Installing the SFS software onto a new node
- Adding a node to the cluster
- Deleting a node from the cluster
- Shutting down the cluster nodes
- Rebooting the nodes in the cluster
Installs the SFS software onto the new node. See Installing the SFS software onto a new node on page 43.

Adds a new node to the SFS cluster. See Adding a node to the cluster on page 44.
cluster> delete
Deletes a node from the SFS cluster. See Deleting a node from the cluster on page 45.
cluster> shutdown

Shuts down one or all of the nodes in the SFS cluster. See Shutting down the cluster nodes on page 47.

cluster> reboot

Reboots a single node or all of the nodes in the SFS cluster. Use the nodename(s) displayed in the show command. See Rebooting the nodes in the cluster on page 47.
To display a list of nodes that are part of a cluster, and the systems that are available to add to the cluster, enter the following:
Cluster> show
Nodes that have not yet been added to the cluster are displayed with unique identifiers.
Node                                   State
----                                   -----
4dd5a565-de6c-4904-aa27-3645cf557119   INSTALLED 5.0SP2 (172.16.113.118)
bafd13c1-536a-411a-b3ab-3e3253006209   INSTALLING-Stage-4-of-4
To display the CPU and network loads collected from now to the next five seconds, enter the following:
Cluster> show currentload
Example output:
pubeth0 (5 sec)        pubeth1 (5 sec)
rx(MB/s)  tx(MB/s)     rx(MB/s)  tx(MB/s)
--------  --------     --------  --------
0.01      0.00         0.01      0.00
0.01      0.00         0.01      0.00
27.83     12.54        0.01      0.00
Node: Displays the node name if the node has already been added to the cluster; displays the unique identifier for the node if it has not been added. Example: node_1 or 35557d4c-6c05-4718-8691-a2224b621920

State: Displays the state of the node, or the installation state of the system along with an IP address of the system if it is installed. Examples: INSTALLED (172.16.113.118), RUNNING, FAULTED, EXITED, LEAVING, UNKNOWN

CPU: Indicates the CPU load.

pubeth0: Indicates the network load for Public Interface 0.

pubeth1: Indicates the network load for Public Interface 1.
If a system is physically removed from the cluster, or if you power off the system, you will not see the unique identifier, installation state, or IP address for the system when you issue the cluster> show command. If you power the system back on, the unique identifier, installation state, and IP address are displayed again. You can then use the IP address to add the node back to the cluster. See About adding a new node to the cluster on page 43.
You first need to install the SFS software binaries on the node. You then add the node to your existing cluster. After the SFS software has been installed, the node enters the INSTALLED state. It can then be added to the cluster and become operational.
Note: Before proceeding, make sure that all of the nodes are physically connected to the private and public networks. This allows the software installation to run concurrently on each node. See the Veritas Storage Foundation Scalable File Server Installation Guide for more information.
Log in to the master account through the SFS console and access the network mode. To log in to the SFS console:
Use ssh master@consoleipaddr where consoleipaddr is the console IP address. For the password, enter the default password for the master account, master. You can change the password later by using the Admin> passwd command.
If the nodes have not been preconfigured, you need to preconfigure them. To preconfigure nodes:
Obtain the IP address ranges, as described in the Veritas Storage Foundation Scalable File Server Installation Guide, for the public network interfaces of the nodes to be installed. Add each IP address using the following command:
Network> ip addr add ipaddr netmask type
IP is a protocol that allows addresses to be attached to an Ethernet interface. Each Ethernet interface must have at least one address to use the protocol, and several different addresses can be attached to one Ethernet interface. Specify the ipaddr and the netmask; type is the type of IP address (virtual or physical).
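Before issuing ip addr add, it can help to sanity-check the address and netmask; a sketch using Python's standard ipaddress module (this pre-check is not part of SFS itself):

```python
import ipaddress

def valid_ip_and_netmask(ipaddr: str, netmask: str) -> bool:
    # IPv4Interface accepts "address/netmask" and raises ValueError
    # for malformed addresses or netmasks.
    try:
        ipaddress.IPv4Interface(f"{ipaddr}/{netmask}")
        return True
    except ValueError:
        return False

print(valid_ip_and_netmask("172.16.113.118", "255.255.255.0"))  # True
print(valid_ip_and_netmask("172.16.113.999", "255.255.255.0"))  # False
```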
Power up and press F12 for each new node to initiate a network boot. The SFS software is automatically installed on all of the nodes.
Enter Cluster> show to display the status of the node installation as it progresses.
Cluster> show
Use the Cluster> add command to add the new node to the cluster. Only the nodes in the INSTALLED state can be added to the cluster.

Note: This command is not supported in a single-node cluster.

The coordinator disks must be visible on the newly added node as a prerequisite for I/O fencing to be configured successfully. Without the coordinator disks, I/O fencing does not load properly and the node will not be able to obtain cluster membership. For more information about I/O fencing, see About I/O fencing.

To add the new node to the cluster
1. Log in to SFS using the master user role.
2. Enter the cluster mode.
3. To add the new node to the cluster, enter the following:
Cluster> add nodeip
where nodeip is the IP address assigned to the INSTALLED node. For example:
Cluster> add 172.16.113.118 Checking ssh communication with 172.16.113.118 ...done Configuring the new node .....done Adding node to the cluster.........done Node added to the cluster New node's name is: sfs_1
A deleted node cannot simply be added back into the cluster. You must first reinstall the operating system and SFS software (using the PXE installation) onto the node before adding it to the cluster. Refer to the Veritas Storage Foundation Scalable File Server Installation Guide. After the node is deleted from the cluster, that node's IP address is free for use by the cluster for new nodes. The state of each node can be:
To show the current state of all nodes in the cluster, enter the following:
Cluster> show
To delete a node from the cluster, enter the following:

Cluster> delete nodename

where nodename is the node name that appeared in the listing from the show command. For example:
Cluster> delete sfs_1 Stopping Cluster processes on sfs_1 ...........done deleting sfs_1's configuration from the cluster .....done Node sfs_1 deleted from the cluster
If you try to delete a node that is unreachable, you will receive the following warning message:
This SFS node is not reachable, you have to re-install the SFS software via PXE boot after deleting it. Do you want to delete it now? (y/n)
Displaying and adding nodes to a cluster Shutting down the cluster nodes
To shut down a node, enter the following:

Cluster> shutdown nodename

where nodename indicates the name of the node you want to shut down. For example:
Cluster> shutdown sfs_1
Stopping Cluster processes on sfs_1 .......done
Sent shutdown command to sfs_1
To shut down all of the nodes in the cluster, enter the following:
Cluster> shutdown all
Use all as the nodename if you want to shut down all of the nodes in the cluster. For example:
Cluster> shutdown all
Stopping Cluster processes on all ...done
Sent shutdown command to sfs_1
Sent shutdown command to sfs_2
Displaying and adding nodes to a cluster Rebooting the nodes in the cluster
To reboot a node
Enter the following:

Cluster> reboot nodename

where nodename indicates the name of the node you want to reboot. For example:
Cluster> reboot sfs_1
Stopping Cluster processes on sfs_1 .......done
Sent reboot command to sfs_1
Use all as the nodename if you want to reboot all of the nodes in the cluster. For example:
Cluster> reboot all
Stopping Cluster processes on all ...done
Sent reboot command to sfs_1
Sent reboot command to sfs_2
Chapter

Configuring SFS network settings

This chapter includes the following topics:

About network mode commands
Displaying the network configuration and statistics
About bonding Ethernet interfaces
About DNS
About IP commands
About configuring IP addresses
About configuring Ethernet interfaces
About configuring routing tables
About LDAP
Before configuring LDAP settings
About configuring LDAP server settings
About administering SFS cluster's LDAP client
About NIS
About NSS
About VLAN
DNS
Identifies enterprise DNS servers for SFS use. See About DNS on page 54.
IP
Manages the SFS cluster IP addresses. See About IP commands on page 58.
LDAP
Identifies the LDAP servers that SFS can use. See About LDAP on page 72.
NIS
Identifies the NIS server that SFS can use. See About NIS on page 81.
NSS
Provides a single configuration location to identify the services (such as NIS or LDAP) for network information such as hosts, groups, or passwords. See About NSS on page 84.
VLAN
Views, adds, or deletes VLAN interfaces. See Configuring VLAN on page 86.
Configuring SFS network settings Displaying the network configuration and statistics
To display the cluster's network configuration and statistics, enter the following:
Network> show
Interface Statistics
--------------------
sfs_1
-----
Interfaces  MTU    Metric  TX-OK   TX-DROP  TX-ERR  RX-OK     RX-DROP  RX-ERR  RX-FRAME  TX-CAR  Flag
lo          16436  1       13766   0        0       13766     0        0       0         0       LRU
priveth0    1500   1       953273  0        0       452390    0        0       0         0       BMR
priveth1    1500   1       506641  0        0       325940    0        0       0         0       BMRU
pubeth0     1500   1       152817  0        0       25806318  0        0       0         0       BMRU
pubeth1     1500   1       673     0        0       25755262  0        0       0         0       BMRU
Routing Table
-------------
sfs_1
-----
Destination   Gateway
172.27.75.0   0.0.0.0
10.182.96.0   0.0.0.0
10.182.96.0   0.0.0.0
127.0.0.0     0.0.0.0
0.0.0.0       10.182.96.1
For definitions of the column headings in the Routing Table, see To display the routing tables of the nodes in the cluster.
create
Creates a bond between sets of two or more correspondingly named Ethernet interfaces on all SFS cluster nodes. See To create a bond on page 53.
remove
Removes a bond between two or more correspondingly named Ethernet interfaces on all SFS cluster nodes. The bond show command displays the names. See To remove a bond on page 54.
To display a bond and the algorithm used to distribute traffic among the bonded interfaces, enter the following:
Network> bond show
To create a bond
To create a bond between sets of two or more Ethernet interfaces on all SFS cluster nodes, enter the following:
Network> bond create interfacelist mode

interfacelist - Specifies a comma-separated list of public Ethernet interfaces to bond. Bonds are created on correspondingly named sets of Ethernet interfaces on each cluster node.
mode - Specifies how the bonded Ethernet interfaces divide the traffic.
For example:
Network> bond create pubeth1,pubeth2 broadcast
100% [#] Bonding interfaces. Please wait...
bond created, the bond name is: bond0
active-backup - Only one slave interface in the bond is active at a time. A different slave becomes active if the active slave fails. This mode provides fault tolerance.

balance-xor - Transmits based on the selected transmit hash policy. The default policy is a simple XOR of the source and destination MAC addresses, modulo the number of slave interfaces. This mode provides load balancing and fault tolerance. You can use the xmit_hash_policy option to select alternate transmit policies.

broadcast - Transmits everything on all slave interfaces and provides fault tolerance.

802.3ad - Creates aggregation groups with the same speed and duplex settings. It uses all slaves in the active aggregator based on the 802.3ad specification.

balance-tlb - Provides channel bonding that does not require special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. The current slave receives incoming traffic. If the receiving slave fails, another slave takes over its MAC address.

balance-alb - Includes balance-tlb plus Receive Load Balancing (RLB) for IPv4 traffic. This mode does not require any special switch support. ARP negotiation load balances the receive traffic.
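The balance-xor default hash described above can be sketched in a few lines. This is a minimal illustration of the Linux bonding driver's layer2 transmit policy (XOR of the MAC addresses, modulo the slave count), not SFS-specific code; the MAC values are made up:

```python
def layer2_xmit_slave(src_mac: bytes, dst_mac: bytes, n_slaves: int) -> int:
    """Pick the slave index for a frame, mimicking the balance-xor
    default (layer2) hash: XOR of the last MAC octets modulo the
    number of slave interfaces."""
    return (src_mac[-1] ^ dst_mac[-1]) % n_slaves

# A given src/dst MAC pair always maps to the same slave, which keeps
# one conversation on one interface while spreading flows overall.
src = bytes.fromhex("001122334455")   # hypothetical MACs
dst = bytes.fromhex("66778899aabb")
slave = layer2_xmit_slave(src, dst, 2)
```

Because the hash is deterministic per address pair, packet ordering within a flow is preserved, at the cost of uneven balancing when few peers are involved.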
To remove a bond
To remove a bond from all of the nodes in a cluster, enter the following:
Network> bond remove bondname
About DNS
The Domain Name System (DNS) service translates between numeric IP addresses and their associated host names. The DNS commands let you view or change an SFS cluster's DNS settings. You can configure an SFS cluster's DNS lookup service to use up to three DNS servers.
You must enable the SFS cluster's DNS name service before you specify the DNS servers it is to use for lookups. Table 4-3 lists the DNS commands.
dns show - Displays the current DNS settings.
dns enable
Enables SFS to perform DNS lookups. When DNS is enabled, the SFS cluster's DNS service uses the data center's DNS server(s) to determine the IP addresses of network entities such as SNMP, NTP, LDAP, and NIS servers with which the cluster must communicate. See To enable DNS settings on page 56.
dns disable
Disables DNS lookups. If the DNS services are already disabled, the command does not respond. See To disable DNS settings on page 56.
dns set nameservers - Specifies the IP addresses of DNS name servers to be used by the SFS DNS lookup service. The order of the IP addresses is the order in which the name servers are to be used. See To specify IP addresses of DNS name servers on page 57.

dns clear nameservers - Removes the IP addresses of DNS name servers from the cluster's DNS lookup service database. See To remove name servers list used by DNS on page 57.

dns set domainname - Sets the domain name that the SFS cluster is in. For the required information, contact your Network Administrator. Setting a new domain name replaces any previously set domain name. Before you use this command, you must enable the DNS server. See To set the domain name for the DNS server on page 57.

dns clear domainname - Removes the DNS domain name. See To remove domain name used by DNS on page 58.
To enable DNS settings to allow SFS hosts to do lookups and verify the results, enter the following commands:
Network> dns enable
Network> dns show
DNS Status : Enabled
domain : cluster1.com
nameserver : 10.216.50.132
To specify the IP addresses of DNS name servers to be used by the SFS DNS service and verify the results, enter the following commands:
Network> dns set nameservers nameserver1 [nameserver2] [nameserver3]
For example:
Network> dns set nameservers 10.216.50.199 10.216.50.200
Network> dns show
DNS Status : Enabled
nameserver : 10.216.50.199
nameserver : 10.216.50.200
To remove the name servers list used by DNS and verify the results, enter the following commands:
Network> dns clear nameservers
Network> dns show
DNS Status : Enabled
To set the domain name for the DNS server, enter the following:
Network> dns set domainname domainname
where domainname is the domain name for the DNS server. For example:
Network> dns set domainname example.com
Network> dns show
DNS Status : Enabled
domain : example.com
nameserver : 10.216.50.132
About IP commands
Internet Protocol (IP) commands configure your routing tables, Ethernet interfaces, and IP addresses, and display the settings. The following sections describe how to configure the IP commands:
About configuring IP addresses
About configuring Ethernet interfaces
About configuring routing tables
IP command - Definition
ip addr show - Displays the IP addresses, the devices (Ethernet interfaces) they are assigned to, and their attributes.
Configuring IP addresses
To configure your IP addresses, use the following commands.

To display all of the IP addresses for the cluster, enter the following:

Network> ip addr show
Device   Node   Type      Status
pubeth0  sfs_1  Physical
pubeth1  sfs_1  Physical
pubeth0  sfs_2  Physical
pubeth1  sfs_2  Physical
pubeth0  sfs_1  Virtual
pubeth0  sfs_2  Virtual
pubeth0  sfs_1  Virtual
pubeth1  sfs_2  Virtual
pubeth1  sfs_1  Virtual
Status - A virtual IP can be in the FAULTED state if it is already being used. It can also be in the FAULTED state if the corresponding device is not working on all nodes in the cluster (for example, a disconnected cable).
For example, to add a virtual IP address on a normal device, enter the following:
Network> ip addr add 10.10.10.10 255.255.255.0 virtual pubeth0
SFS ip addr Success V-288-0 ip addr add successful.
Network>
For example, to add a virtual IP address on a bond device, enter the following:
Network> ip addr add 10.10.10.10 255.255.255.0 virtual bond0
SFS ip addr Success V-288-0 ip addr add successful.
Network>
For example, to add a virtual IP address on a VLAN device created over a normal device with VLAN ID 3, enter the following:
Network> ip addr add 10.10.10.10 255.255.255.0 virtual pubeth0.3
SFS ip addr Success V-288-0 ip addr add successful.
Network>
For example, to add a virtual IP address on a VLAN device created over a bond device with VLAN ID 3, enter the following:
Network> ip addr add 10.10.10.10 255.255.255.0 virtual bond0.3
SFS ip addr Success V-288-0 ip addr add successful.
Network>
To change an IP address to the online mode on a specified node, enter the following:

Network> ip addr online ipaddr nodename

ipaddr - Specifies the IP address that needs to be brought online.
nodename - Specifies the node on which the IP address needs to be brought online. If you do not want to enter a specific nodename, enter any with the IP address.
For example:
Network> ip addr online 10.10.10.15 node5_2
Network> ip addr show
IP              Netmask        Device   Node     Type      Status
10.216.114.212  255.255.248.0  pubeth0  node5_1  Physical
10.216.114.213  255.255.248.0  pubeth1  node5_1  Physical
10.216.114.214  255.255.248.0  pubeth0  node5_2  Physical
10.216.114.215  255.255.248.0  pubeth1  node5_2  Physical
10.216.114.217  255.255.248.0  pubeth0  node5_1  Virtual
10.10.10.10     255.255.248.0  pubeth0  node5_1  Virtual
10.10.10.11     255.255.248.0  pubeth1  node5_1  Virtual
10.10.10.12     255.255.248.0  pubeth0  node5_2  Virtual
10.10.10.13     255.255.248.0  pubeth1  node5_2  Virtual
10.10.10.15     255.255.248.0  pubeth0  node5_2  Virtual
To modify an IP address, enter the following:

Network> ip addr modify oldipaddr newipaddr netmask

In a valid netmask, the binary 1-bits are contiguous from the left; every 1 bit has only 1s to its left (for example, 255.255.240.0). If the specified oldipaddr is not assigned to the cluster, an error message is displayed. If you enter an invalid IP address (one that is not four bytes or has a byte value greater than 255), an error message is displayed. If the new IP address is already being used, an error message is displayed. For example:

Network> ip addr modify 10.10.10.15 10.10.10.16 255.255.240.0
SFS ip addr Success V-288-0 ip addr modify successful.
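The netmask validity rule (contiguous 1-bits from the left) can be checked programmatically. A minimal sketch using only the Python standard library:

```python
import ipaddress

def is_valid_netmask(mask: str) -> bool:
    """A valid netmask is a run of 1-bits followed only by 0-bits
    (e.g. 255.255.240.0); any 0 appearing before a 1 makes it invalid."""
    bits = int(ipaddress.IPv4Address(mask))
    # Invert the mask: a contiguous mask inverts to a value of the
    # form 0...01...1, and such values satisfy inv & (inv + 1) == 0.
    inv = bits ^ 0xFFFFFFFF
    return (inv & (inv + 1)) == 0
```

For example, 255.255.240.0 passes the check, while a non-contiguous pattern such as 255.0.255.0 is rejected, which mirrors the error the CLI reports for an invalid netmask.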
To delete an IP address from the cluster, enter the following:

Network> ip addr del ipaddr

where ipaddr is the IP address to remove from the cluster. For example:

Network> ip addr del 10.10.10.15
SFS ip addr Success V-288-0 ip addr del successful.
Network>
For example:
Network> ip link show sfs_1 pubeth0
Nodename  Device   Status  MTU   Detect  Speed
sfs_1     pubeth0  UP      1500  yes     100Mb/s
up - Brings the Ethernet interface online.
down - Takes the Ethernet interface offline.
mtu MTU - Changes the Ethernet interface's Maximum Transmission Unit (MTU) to the value that is specified in the argument field.
detect - Displays whether the Ethernet interface is physically connected or not.
speed - Displays the device speed.
argument - The argument field is used only when you enter mtu in the operation field. It specifies the value to which the MTU of the specified Ethernet interface on the specified node is changed. The MTU value must be an unsigned integer between 46 and 9216. Setting an incorrect MTU value can cause the console IP to become unavailable. If you enter the argument field but do not enter mtu in the operation field, the argument is ignored.
For example:
Network> ip link set all pubeth0 mtu 1600
sfs_1 : mtu updated on pubeth0
sfs_2 : mtu updated on pubeth0
Network> ip link show
Nodename  Device   Status  MTU   Detect  Speed
sfs_1     pubeth0  UP      1600  yes     100Mb/s
sfs_1     pubeth1  UP      1500  yes     100Mb/s
sfs_2     pubeth0  UP      1600  yes     100Mb/s
sfs_2     pubeth1  UP      1500  yes     100Mb/s
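The MTU range stated above (an unsigned integer between 46 and 9216) is the kind of check worth performing before issuing the command. A minimal validation sketch:

```python
def validate_mtu(value) -> int:
    """Validate an MTU argument against the range the ip link set
    command accepts (46 to 9216, inclusive); raise otherwise."""
    mtu = int(value)
    if not 46 <= mtu <= 9216:
        raise ValueError(f"MTU {mtu} out of range 46-9216")
    return mtu
```

Validating up front matters here because, as the text notes, setting an incorrect MTU can make the console IP unavailable.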
A routing table entry specifies:

The target network node's IP address and accompanying netmask.
The gateway's IP address.
Optionally, a specific Ethernet interface via which to communicate with the target. This is useful, for example, if the demands of multiple remote clients are likely to exceed a single gateway's throughput capacity.
You add or remove routing table entries using the ip route command in the Network> mode. Table 4-6 lists the commands used to configure the routing tables of the nodes in the cluster.

route show - Displays the routing tables of the nodes in the cluster. See To display the routing tables of the nodes in the cluster.
route del
Deletes a route used by the cluster. Use all for nodename to delete the route from all of the nodes in the cluster. The combination of ipaddr and netmask specifies the network or host for which the route is deleted. Use a value of 255.255.255.255 for the netmask to delete a host route to ipaddr. See To delete route entries from the routing tables of nodes in the cluster on page 72.
To display the routing tables of the nodes in the cluster, enter the following:
Network> ip route show [nodename]
where nodename is the node whose routing tables you want to display. To see the routing table for all of the nodes in the cluster, enter all. For example:
Network> ip route show all
sfs_1
------------
Destination   Gateway
172.27.75.0   0.0.0.0
10.182.96.0   0.0.0.0
10.182.96.0   0.0.0.0
127.0.0.0     0.0.0.0
0.0.0.0       10.182.96.1

sfs_2
------------
Destination   Gateway
172.27.75.0   0.0.0.0
10.182.96.0   0.0.0.0
10.182.96.0   0.0.0.0
127.0.0.0     0.0.0.0
0.0.0.0       10.182.96.1
Destination - Displays the destination network or destination host for which the route is defined.
Gateway - Displays a network node equipped for interfacing with another network.
Genmask - Displays the netmask.
Flags - Displays the flags set for the route.
MSS - Displays the maximum segment size. The default is 0. You cannot modify this attribute.
Window - Displays the maximum amount of data the system accepts in a single burst from the remote host. The default is 0. You cannot modify this attribute.
irtt - Displays the initial round trip time with which TCP connections start. The default is 0. You cannot modify this attribute.
Iface - Displays the interface. On UNIX systems, the device name lo refers to the loopback interface.
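The way a routing table like the one shown above is consulted can be sketched as a longest-prefix match: the most specific Destination/Genmask pair that covers the packet wins, and the 0.0.0.0 entry acts as the default route. This is a generic illustration (the 255.255.240.0 netmask for the connected network is assumed for the example), not SFS internals:

```python
import ipaddress

def pick_route(routes, dest: str):
    """Select the gateway of the most specific matching route:
    longest Genmask (prefix length) wins; 0.0.0.0/0 is the default."""
    best = None
    for destination, genmask, gateway in routes:
        net = ipaddress.IPv4Network(f"{destination}/{genmask}")
        if ipaddress.IPv4Address(dest) in net:
            if best is None or net.prefixlen > best[0]:
                best = (net.prefixlen, gateway)
    return best[1] if best else None

routes = [
    ("10.182.96.0", "255.255.240.0", "0.0.0.0"),     # directly connected (assumed mask)
    ("0.0.0.0",     "0.0.0.0",       "10.182.96.1"), # default route
]
```

A destination inside 10.182.96.0 matches the connected entry (gateway 0.0.0.0, meaning no gateway), while anything else falls through to the default gateway 10.182.96.1.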
To add a route entry to the routing table of nodes in the cluster, enter the following:
Network> ip route add nodename ipaddr netmask via gateway [dev device]

nodename - Specifies the node to whose routing table the route is to be added. To add a route path to all the nodes, use all in the nodename field. If you enter a node that is not a part of the cluster, an error message is displayed.
ipaddr - Specifies the destination IP address. If you enter an invalid IP address, a message notifies you before you fill in other fields.
netmask - Specifies the netmask associated with the IP address that is entered for the ipaddr field. Use a netmask value of 255.255.255.255 to add a host route to ipaddr.
via - This is a required keyword. You must type in the word.
gateway - Specifies the gateway IP address used for the route. If you enter an invalid gateway IP address, an error message is displayed. To add a route that does not use a gateway, enter a value of 0.0.0.0.
dev - Specifies the route device option. You must type in the word.
device - Specifies which Ethernet interface on the node the route path is added to. This variable is optional. You can specify the following values:

any - Default
pubeth0 - Public Ethernet interface
pubeth1 - Public Ethernet interface

The device field is required only when you specify dev. If you omit the dev and device fields, SFS uses a default Ethernet interface.
For example:
Network> ip route add sfs_1 10.10.10.10 255.255.255.255 via 0.0.0.0 dev pubeth0
sfs_1: Route added successfully
To delete route entries from the routing tables of nodes in the cluster
To delete route entries from the routing tables of nodes in the cluster, enter the following:
Network> ip route del nodename ipaddr netmask

nodename - Specifies the node from whose routing table the route entry is deleted. To delete the route entry from all nodes, use the all option in this field.
ipaddr - Specifies the destination IP address of the route entry to be deleted. If you enter an invalid IP address, a message notifies you before you enter other fields.
netmask - Specifies the netmask associated with the IP address that is entered for the ipaddr field.
For example:
Network> ip route del sfs_1 10.216.128.0 255.255.255.255
sfs_1: Route deleted successfully
About LDAP
The Lightweight Directory Access Protocol (LDAP) is the protocol used to communicate with LDAP servers. The LDAP servers are the entities that perform the service. In SFS, the most common use of LDAP is user authentication. For sites that use an LDAP server for access or authentication, SFS provides a simple LDAP client configuration interface.
Before you configure the LDAP settings, obtain the following information from your LDAP administrator:

The IP address or host name of the LDAP server, and the port number of the LDAP server.
The base (or root) distinguished name (DN), for example, cn=employees,c=us. LDAP database searches start here.
The bind distinguished name (DN) and password, for example, ou=engineering,c=us. This allows read access to portions of the LDAP database to search for information.
The base DN for users, for example, ou=users,dc=com. This allows access to the LDAP directory to search for and authenticate users.
The base DN for groups, for example, ou=groups,dc=com. This allows access to the LDAP database to search for groups.
The root bind DN and password. This allows write access to the LDAP database, to modify information, such as changing a user's password.
Whether to use the Secure Sockets Layer (SSL) protocol to communicate with the LDAP server.
The password hash algorithm, for example md5, if a specific password encryption method is used with your LDAP server.
Configuring LDAP server settings Administering the SFS cluster's LDAP client
set binddn
Sets the bind Distinguished Name (DN) and its password for the LDAP server. This DN is used to bind with the LDAP server for read access. For LDAP authentication, most attributes need read access.
Note: You must set the LDAP users, groups, and netgroups base DN.
set users-basedn, set groups-basedn, set netgroups-basedn - Sets the LDAP users, groups, or netgroups base DN. See To set the LDAP users, groups, or netgroups base DN on page 78.
set password-hash Sets the LDAP password hash algorithm used when you set or change the LDAP user's password. The password is encrypted with the configured hash algorithm before it is sent to the LDAP server and stored in the LDAP directory.
To set the base DN for the LDAP server, enter the following:
Network> ldap set basedn value
For example:
Network> ldap set basedn dc=example,dc=com
OK Completed
For example, if you enter an IP address for the value:

Network> ldap set server 10.10.10.10
OK Completed
For example:
Network> ldap set ssl on
OK Completed
To set the bind DN for the LDAP server, enter the following:
Network> ldap set binddn value
The value setting is mandatory. You are prompted to supply a password. You must use your LDAP server password. For example:
Network> ldap set binddn cn
Enter password for 'cn': ***
OK Completed
To set the root bind DN for the LDAP server, enter the following:
Network> ldap set rootbinddn value
You are prompted to supply a password. You must use your LDAP server password. For example:
Network> ldap set rootbinddn dc
Enter password for 'dc': ***
OK Completed
To set the LDAP users, groups, or netgroups base DN, enter the following:
Network> ldap set users-basedn value
Network> ldap set groups-basedn value
Network> ldap set netgroups-basedn value

users-basedn value - Specifies the value for the users-basedn. For example: ou=users,dc=example,dc=com (default)
groups-basedn value - Specifies the value for the groups-basedn. For example: ou=groups,dc=example,dc=com (default)
netgroups-basedn value - Specifies the value for the netgroups-basedn. For example: ou=netgroups,dc=example,dc=com (default)
For example:
Network> ldap set users-basedn ou=Users,dc=example,dc=com
OK Completed
For example:
Network> ldap set password-hash clear
OK Completed
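When the md5 password-hash setting is in effect, the stored value follows the common RFC 2307-style userPassword encoding: the literal {MD5} prefix followed by the base64 of the raw digest. The SFS CLI performs this hashing itself; the sketch below only illustrates the encoding, assuming the RFC 2307 convention:

```python
import base64
import hashlib

def ldap_md5_userpassword(password: str) -> str:
    """Produce an RFC 2307-style userPassword value for the md5
    hash setting: '{MD5}' plus the base64 of the raw MD5 digest."""
    digest = hashlib.md5(password.encode()).digest()
    return "{MD5}" + base64.b64encode(digest).decode()
```

With the clear setting shown in the example above, the password is instead stored without this transformation.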
Configuring SFS network settings About administering SFS cluster's LDAP client
For example:
Network> ldap get server
LDAP server: ldap-server.example.com
OK Completed
For example:
Network> ldap clear binddn
OK Completed
ldap enable
Enables the LDAP client configuration. See To enable LDAP client configuration on page 81.
ldap disable
Disables the LDAP client configuration. This command stops SFS from querying the LDAP service. See To disable LDAP client configuration on page 81.
Network> ldap show [users|groups|netgroups]

If you do not include one of the optional variables, the command displays all the configured settings for the LDAP client. For example:
Network> ldap show
LDAP client is enabled.
=======================
LDAP server:            ldap_server
LDAP port:              389 (default)
LDAP base DN:           dc=example,dc=com
LDAP over SSL:          on
LDAP bind DN:           cn=binduser,dc=example,dc=com
LDAP root bind DN:      cn=admin,dc=example,dc=com
LDAP password hash:     md5
LDAP users base DN:     ou=Users,dc=example,dc=com
LDAP groups base DN:    ou=Groups,dc=example,dc=com
LDAP netgroups base DN: ou=Netgroups,dc=example,dc=com
OK Completed
Network>
LDAP clients use the LDAPv3 protocol for communicating with the server. Enabling the LDAP client configures the Pluggable Authentication Module (PAM) files to use LDAP. PAM is the standard authentication framework for Linux.
For example:
Network> ldap enable
Network>
LDAP clients use the LDAPv3 protocol for communicating with the server. This command configures the PAM configuration files so that they do not use LDAP. To disable LDAP client configuration
For example:
Network> ldap disable
Network>
About NIS
SFS supports Network Information Service (NIS), implemented in a NIS server, as an authentication authority. You can use NIS to authenticate computers. If your environment uses NIS, enable NIS-based authentication on the SFS cluster. Table 4-9 lists the NIS commands.
nis show - Displays the existing NIS settings (status, domain name, and server).
nis set domainname - Sets the NIS domain name in the SFS cluster. See To set the NIS domain name on all nodes in the cluster on page 82.
nis set servername - Sets the NIS server name in the SFS cluster. See To set NIS server name on all nodes in the cluster on page 83.
nis enable - Enables the NIS clients in the SFS cluster.
nis disable - Disables the NIS clients in the SFS cluster. See To disable NIS clients on page 83.
To display the NIS settings, enter the following:

Network> nis show [users|groups|netgroups]

For example:
Network> nis show
NIS Status : Disabled
domain :
NIS Server :
To set the NIS domain name on the cluster nodes, enter the following:
Network> nis set domainname [domainname]
To set the NIS server name on all cluster nodes, enter the following:
Network> nis set servername servername
where servername is the NIS server name. You can use the server's name or IP address. For example:
Network> nis set servername 10.10.10.10
Setting NIS Server "10.10.10.10"
For example:
Network> nis enable
Enabling NIS Client on all the nodes..... Done.
Please enable NIS in nsswitch settings for required services.
For example:
Network> nis disable
Disabling NIS Client on all nodes
Please disable NIS in nsswitch settings for required services.
About NSS
Name Service Switch (NSS) is an SFS cluster service which provides a single configuration location to identify the services (such as NIS or LDAP) for network information such as hosts, groups, or passwords. For example, host information may be on an NIS server. Group information may be in an LDAP database. The NSS configuration specifies which network services the SFS cluster should use to authenticate hosts, users, groups, and netgroups. The configuration also specifies the order in which multiple services should be queried. Table 4-10 lists the NSS commands.
nsswitch show - Displays the current NSS configuration.
nsswitch conf - Configures the order of the NSS services. See To configure the NSS lookup order on page 84.
Network> nsswitch conf {group|hosts|netgroup|passwd|shadow} value1 [value2] [value3] [value4]

hosts - Selects the hosts file.
netgroup - Selects the netgroups file.
passwd - Selects the password file.
shadow - Selects the shadow file.

value1 (required) - { files/nis/winbind/ldap }
value2 (optional) - { files/nis/winbind/ldap }
value3 (optional) - { files/nis/winbind/ldap }
value4 (optional) - { files/nis/winbind/ldap }

To select DNS, you must use the following command:

Network> nsswitch conf hosts <value1> [value2] [value3]

where each value is one of files, nis, or dns.
For example:
Network> nsswitch conf shadow files ldap
Network> nsswitch show
group:    files nis winbind
hosts:    files nis dns
netgroup: nis
passwd:   files nis winbind
shadow:   files ldap
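The lookup order shown above works like a chain of sources tried in sequence: each configured service is queried in turn, and the first one that answers wins. A minimal sketch of this behavior, using hypothetical in-memory stores in place of the real files and LDAP sources:

```python
def nss_lookup(key, order, services):
    """Query each configured service in the nsswitch order and
    return the first non-None answer, mirroring how an order like
    'shadow: files ldap' is consulted."""
    for name in order:
        result = services[name](key)
        if result is not None:
            return result
    return None

# Hypothetical backing stores standing in for files and LDAP.
files_db = {"root": "x"}
ldap_db = {"alice": "{MD5}..."}
services = {
    "files": files_db.get,
    "ldap": ldap_db.get,
}
```

With the order ["files", "ldap"], an entry present in the local files store shadows the LDAP one, which is exactly why the order of the values in the nsswitch configuration matters.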
About VLAN
The virtual LAN (VLAN) feature lets you create VLAN interfaces on the SFS nodes and administer them like any other VLAN interfaces. The VLAN interfaces are created using the Linux support for VLAN interfaces. The Network> vlan commands let you view, add, or delete VLAN interfaces.
vlan show - Displays the VLAN interfaces.
vlan add - Adds a VLAN interface.
vlan del - Deletes a VLAN interface.
Configuring VLAN
To display the VLAN interfaces, enter the following:

Network> vlan show

For example:

VLAN       DEVICE    VLAN id
---------  -------   -------
pubeth0.2  pubeth0   2
To add a VLAN interface, enter the following:

Network> vlan add device vlan_id

where device is the Ethernet interface on which the VLAN interface is created and vlan_id is the VLAN ID. For example:

Network> vlan add pubeth1 2
Network> vlan show
VLAN       DEVICE    VLAN id
---------  -------   -------
pubeth0.2  pubeth0   2
pubeth1.2  pubeth1   2
To delete a VLAN interface, enter the following:

Network> vlan del vlan_device

where the vlan_device name combines the interface on which the VLAN is based and the VLAN ID, separated by a period (.). For example:

Network> vlan del pubeth0.2
Network> vlan show
VLAN       DEVICE    VLAN id
---------  -------   -------
pubeth1.2  pubeth1   2
Chapter
server start
Starts the NFS server. See Starting the NFS server on page 91.
server stop
Stops the NFS server. See To stop the NFS server on page 91.
show fs
Displays all of the online file systems and snapshots that can be exported. See To display a file system and snapshots that can be exported on page 93.
Prior to starting the NFS server, check on the status of the server by entering:
NFS> server status
For example:
NFS> server status
NFS Status on sfs_1 : OFFLINE
NFS Status on sfs_2 : OFFLINE
The states (ONLINE, OFFLINE, and FAULTED) correspond to each SFS node identified by the node name. The state of each node may vary depending on the situation for that particular node. The possible states of the NFS> server status command are:

ONLINE - Indicates that the node can serve NFS protocols to the client.
OFFLINE - Indicates that the NFS services on that node are down.
FAULTED - Indicates that something is wrong with the NFS service on the node.
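The status output follows a regular "NFS Status on <node> : <STATE>" line format, so it can be parsed mechanically, for example when scripting health checks against the console. A hypothetical sketch (the parsing helpers are not part of the SFS product):

```python
def parse_nfs_status(output: str) -> dict:
    """Parse 'NFS Status on <node> : <STATE>' lines from the
    NFS> server status output into a node-to-state mapping."""
    states = {}
    for line in output.splitlines():
        if line.startswith("NFS Status on"):
            left, _, state = line.partition(":")
            node = left.replace("NFS Status on", "").strip()
            states[node] = state.strip()
    return states

def nodes_needing_restart(states: dict) -> list:
    """OFFLINE or FAULTED nodes are the ones server start restarts."""
    return [n for n, s in states.items() if s in ("OFFLINE", "FAULTED")]

sample = """NFS Status on sfs_1 : OFFLINE
NFS Status on sfs_2 : ONLINE"""
states = parse_nfs_status(sample)
```

This mirrors the behavior described below: server start only acts on the nodes whose services are not ONLINE.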
You can run the NFS> server start command to restart the NFS services; only the nodes where the NFS services have problems are restarted.
You can use the NFS> server start command to clear an OFFLINE state from the NFS> server status output by only restarting the services that are offline. You can run the NFS> server start command multiple times without it affecting the already-started NFS server. For example:
NFS> server start
..Success.
Run the NFS> server status command again to confirm the change.
NFS> server status
NFS Status on sfs_1 : ONLINE
NFS Status on sfs_2 : ONLINE
For example:
NFS> server stop
..Success.
You will receive an error if you try to stop an already stopped NFS server.
where nodename specifies the node name for which you are trying to obtain the statistical information. If the nodename is not specified, statistics for all the nodes in the cluster are displayed. For example:
NFS> stat sfs_01
sfs_01
----------------
Server rpc stats:
calls    badcalls   badauth   badclnt   xdrcall
52517    0          0         0         0

Server nfs v2:
null      getattr   read      wrcache   link      symlink
10 100%   0 0%      0 0%      0 0%      0 0%      0 0%

Server nfs v3:
null      getattr     read      write     remove    rmdir     fsstat    fsinfo
11 0%     17973 35%   4138 8%   4137 8%   0 0%      1 0%      0 0%      2 0%
To display online file systems and the snapshots that can be exported, enter the following:
NFS> show fs
For example:
NFS> show fs
FS/Snapshot
===========
fs1
Chapter
Configuring storage
This chapter includes the following topics:
About storage provisioning and management
About configuring storage pools
About configuring disks
About displaying information for all disk devices
Increasing the storage capacity of a LUN
Printing WWN information
Initiating SFS host discovery of LUNs
About I/O fencing
Use the SFS Storage> pool commands to create storage pools using disks (the named LUNs). Each disk can only belong to one storage pool. If you try to add a disk that is already in use, an error message is displayed. With these storage pools, use the Storage> fs commands to create file systems with different layouts (for example, mirrored, striped, or striped-mirror). The storage commands are defined in Table 6-1. To access the commands, log in to the administrative console (master, system-admin, or storage-admin) and enter the Storage> mode. For login instructions, go to About using the SFS command-line interface.
pool - Creates and manages storage pools.
pool adddisk, pool mvdisk, pool rmdisk - Configures the disk(s) in the pool. See About configuring disks on page 101.
hba - Prints the World Wide Name (WWN) information for all of the nodes in the cluster. See Printing WWN information on page 109.
scanbus - Scans all of the SCSI devices connected to all of the nodes in the cluster. See Initiating SFS host discovery of LUNs on page 110.
fencing - Protects the data integrity if the split-brain condition occurs. See About I/O fencing on page 111.
disk list - Lists all of the available disks, and identifies which ones you want to assign to which pools. See About displaying information for all disk devices on page 105.
SFS discovers the disks and lets you assign them to pools. Disk discovery and pool assignment are done once. SFS propagates disk information to all cluster nodes. You must first create storage pools that can be used to build file systems on. Disks and pools can be specified in the same command provided the disks are part of an existing storage pool. The pool and disk specified first are allocated space before other pools and disks. If the specified disk is larger than the space allocated, the remainder of the space is still utilized when another file system is created spanning the same disk. Table 6-2 lists the pool commands.
pool create - Creates a storage pool.

Note: Disks being used for the pool create command must support SCSI-3 PGR registrations if I/O fencing is enabled.

Note: The minimum size of disks required for creating a pool or adding a disk to the pool is 10 MB.

See To create the storage pool used to create a file system on page 99.

pool list - Lists the available storage pools and the disks they contain. A storage pool is a collection of disks from shared storage; the pool is used as the source for adding file system capacity as needed.

Note: Your output for the pool list command depends upon the node on which the console is running.

See To list your pools on page 100.

pool rename - Renames a pool. See To rename a pool on page 100.

pool destroy - Destroys storage pools used to create file systems. Destroying a pool does not delete the data on the disks that make up the storage pool. See To destroy a storage pool on page 101.
List all of the available disks, and identify which ones you want to assign to which pools.
Storage> disk list
Disk    sfs_01
====    ========
disk1   OK
disk1, disk2,...
For example:
Storage> pool create pool1 Disk_0,Disk_1
100% [#] Creating pool pool1
SFS pool Success V-288-1015 Pool pool1 created successfully
For example:
Storage> pool list
Pool    List of disks
----------------------------
pool1   Disk_0 Disk_1
pool2   Disk_2 Disk_3
pool3   Disk_4 Disk_5
To rename a pool
new_name
For example:
Storage> pool rename pool1 p01
SFS pool Success V-288-0 Pool rename successful.
where pool_name specifies the storage pool to delete. If the specified pool_name is not an existing storage pool, an error message is displayed. For example:
Storage> pool destroy pool1
SFS pool Success V-288-988 Pool pool1 is destroyed.
Because you cannot destroy an Unallocated storage pool, you need to remove the disk from the storage pool using the Storage> pool rmdisk command prior to trying to destroy the storage pool. See To remove a disk. If you want to move the disk from the unallocated pool to another existing pool, you can use the Storage> pool mvdisk command. See To move disks from one pool to another.

To list free space for pools
where pool_name specifies the pool for which you want to display free space information. If a specified pool does not exist, an error message is displayed. If pool_name is omitted, the free space for every pool is displayed, but information for specific disks is not displayed. For example:
Storage> pool free
Pool     Free Space
====     ==========
pool_1   0 KB
pool_2   0 KB
pool_3   57.46M
The pool and disk that are specified first are allocated space before other pools and disks. If the specified disk is larger than the space allocated, the remainder of the space is still utilized when another file system is created spanning the same disk.

Table 6-3 Commands
pool adddisk - Adds a disk to a pool.
Note: Disks being used for the pool adddisk command must support SCSI-3 PGR registrations if I/O fencing is enabled.

See To add a disk on page 103.

pool mvdisk - Moves disks from one storage pool to another.

Note: You cannot move a disk from one storage pool to another if the disk has data on it.

See To move disks from one pool to another on page 104.

pool rmdisk - Removes a disk from a pool.

Note: You cannot remove a disk from a pool if the disk has data on it.

See To remove a disk on page 105.

If a specified disk does not exist, an error message is displayed. If one of the disks does not exist, then none of the disks are removed. A pool cannot exist if there are no disks assigned to it. If a disk specified to be removed is the only disk for that pool, the pool is removed along with the assigned disk. If the specified disk to be removed is being used by a file system, that disk will not be removed.
Configuring disks
To add a disk
disk1,disk2,...
For example:
Storage> pool adddisk pool2 Disk_2
SFS pool Success V-288-0 Disk(s) Disk_2 are added to pool2 successfully.
To move a disk from one pool to another, or from an unallocated pool to an existing pool, enter the following:
Storage> pool mvdisk src_pool dest_pool disk1[,disk2,...]

src_pool - Specifies the source pool to move the disks from. If the specified source pool does not exist, an error message is displayed.

dest_pool - Specifies the destination pool to move the disks to. If the specified destination pool does not exist, a new pool is created with the specified name and the disk is moved to that pool.

disk1,disk2,... - Specifies the disks to be moved. To specify multiple disks, use a comma with no space in between. If a specified disk is not part of the source pool or does not exist, an error message is displayed. If one of the disks to be moved does not exist, none of the specified disks are moved. If all of the disks for the pool are moved, the pool is removed (deleted from the system), since there are no disks associated with it.
For example:
Storage> pool mvdisk p01 pool2 Disk_0
SFS pool Success V-288-0 Disk(s) moved successfully.
To remove a disk
where disk1,disk2 specifies the disk(s) to be removed from the pool. An unallocated pool is a reserved pool for holding disks that are removed from other pools. For example:
Storage> pool list
Pool Name     List of disks
------------------------------
pool1         Disk_0 Disk_1
pool2         Disk_2 Disk_5
pool3         Disk_3 Disk_4
Unallocated   Disk_6

Storage> pool rmdisk Disk_6
SFS pool Success V-288-987 Disk(s) Disk_6 are removed successfully.

Storage> pool list
Pool Name     List of disks
------------------------------
pool1         Disk_0 Disk_1
pool2         Disk_2 Disk_5
pool3         Disk_3 Disk_4
To remove additional disks, use a comma with no spaces in between. For example:
Storage> pool rmdisk disk1,disk2
Storage>
disk list stats - Displays a list of disks and nodes in tabular form. See To display a list of disks and nodes in tabular form on page 107.

disk list detail - Displays the disk information, including a list of disks and their properties. If the console server is unable to access a disk, but any other node in the cluster is able to access that disk, that disk is shown as "---." See To display the disk information on page 108.

disk list paths - Displays the list of multiple paths of disks connected to all of the nodes in the cluster. It also shows the status of each path on each node in the cluster. See To display the disk list paths on page 108.

disk list types - Displays the enclosure name, array name, and array type for a particular disk that is present on all of the nodes in the cluster. See To display information for all disk devices associated with nodes in a cluster on page 108.
Displaying information for all disk devices associated with nodes in a cluster
Depending on which command variable you use, the column headings will differ.
Disk - Indicates the disk name.
Serial Number - Indicates the serial number for the disk.
Enclosure - Indicates the type of storage enclosure.
Size - Indicates the size of the disk.
Use% - Indicates the percentage of the disk that is being used.
ID - The ID column consists of the following four fields, separated by ":".
VendorID - Specifies the name of the storage vendor, for example, NETAPP, HITACHI, IBM, EMC, HP, and so on.
ProductID - Specifies the ProductID based on vendor. Each vendor manufactures different products. For example, HITACHI has HDS5700, HDS5800, and HDS9200 products. These products have ProductIDs such as DF350, DF400, and DF500.

TargetID - Specifies the TargetID. Each port of an array is a target. Two different arrays, or two ports of the same array, have different TargetIDs. TargetIDs start from 0.

LunID - Specifies the ID of the LUN. This should not be confused with the LUN serial number. LUN serial numbers uniquely identify a LUN in a target, whereas a LunID uniquely identifies a LUN in an initiator group (or host group). Two LUNs in the same initiator group cannot have the same LunID. For example, if a LUN is assigned to two clusters, the LunID of that LUN can be different in different clusters, but the serial number is the same.
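Putting the four fields together, a complete ID value takes the form VendorID:ProductID:TargetID:LunID. For example (a hypothetical value, for illustration only):

HITACHI:DF400:0:5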
Enclosure - Name of the enclosure; distinguishes between arrays having the same array name.
Array Name - Indicates the name of the storage array.
Array Type - Indicates the type of storage array; contains one of three values: Disk (for JBODs), Active-Active, or Active-Passive.
To display a list of disks and nodes in tabular form, enter the following:
Storage> disk list stats
Disk    sfs_1      sfs_2
====    ========   ========
disk1   OK         OK
To display information for all disk devices associated with nodes in a cluster
To display information for all of the disk devices connected to all of the nodes in a cluster, enter the following:
Storage> disk list types
Disk     Enclosure    Array Name   Array Type
====     ==========   ==========   ==========
Disk_0   Disk         Disk         Disk
Disk_1   Disk         Disk         Disk
Disk_3   Disk         Disk         Disk
Disk_4   Disk         Disk         Disk
Disk_5   Disk         Disk         Disk
Warning: When increasing the storage capacity of a disk, make sure that the storage array does not reformat it. Reformatting destroys the data. For help, contact your Storage Administrator.

To increase the storage capacity of a LUN
1. Increase the storage capacity of the disk on your storage array. Contact your Storage Administrator for assistance.
2. Run the SFS Storage> scanbus command to make sure that the disk is connected to the SFS cluster. See Initiating SFS host discovery of LUNs on page 110.
where you can use the host_name variable if you want to find WWN information for a particular node. Example output:
Storage> hba
Node    Host Initiator HBA WWNs
====    =======================
sfs_1   21:00:00:e0:8b:9d:85:27, 21:01:00:e0:8b:bd:85:27
sfs_2   21:00:00:e0:8b:9d:65:1c, 21:01:00:e0:8b:bd:65:1c
sfs_3   21:00:00:e0:8b:9d:88:27, 21:01:00:e0:8b:bd:88:27

There are two WWNs on each row, representing the two HBAs for each node.
To scan the SCSI devices connected to all of the nodes in the cluster, enter the following:
Storage> scanbus
For example:
Storage> scanbus
100% [#] Scanning the bus for disks
Storage>
fencing replace
Replaces a coordinator disk with another disk. The command first checks whether the replacement disk is in a failed state; if it is, an error appears. After the command verifies that the replacement disk is not in a failed state, it checks whether the replacement disk is already being used by an existing pool (storage or coordinator). If it is not being used by any pool, the original disk is replaced. See To replace an existing coordinator disk on page 115.
fencing off
Disables I/O fencing on all of the nodes. This command does not free up the coordinator disks. See To disable I/O fencing on page 115.
fencing destroy
Destroys the coordinator pool if I/O fencing is disabled. This command is not supported on a single-node setup. See To destroy the coordinator pool on page 115.
In the following example, I/O fencing is configured on the three disks Disk_0, Disk_1, and Disk_2. The column header Coord Flag On indicates that the coordinator disk group is in an imported state and these disks are in good condition. If you check the Storage> disk list output, they will be in the OK state.
IO Fencing Status
=================
Disabled

Disk Name   Coord Flag On
=========   =============
Disk_0      Yes
Disk_1      Yes
Disk_2      Yes
The three disks are optional arguments and are required only if the coordinator pool does not contain any disks. You may still provide three disks for fencing when the coordinator pool already contains three disks; however, this removes the three disks previously used for fencing from the coordinator pool and configures I/O fencing on the new disks. For example:
Storage> fencing on
100% [#] Enabling fencing
SFS fencing Success V-288-0 IO Fencing feature now Enabled

Storage> fencing status
IO Fencing Status
=================
Enabled

Disk Name   Coord Flag On
=========   =============
Disk_0      Yes
Disk_1      Yes
Disk_2      Yes
where src_disk is the source disk and dest_disk is the destination disk. For example:
Storage> fencing replace Disk_2 Disk_3
100% [#] Replacing disk Disk_2 with Disk_3
SFS fencing Success V-288-0 Replaced disk Disk_2 with Disk_3 successfully.

Storage> fencing status
IO Fencing Status
=================
Enabled

Disk Name   Coord Flag On
=========   =============
Disk_0      Yes
Disk_1      Yes
Disk_3      Yes
Chapter

About creating and maintaining file systems
Listing all file systems and associated information
About creating file systems
Adding or removing a mirror to a file system
Configuring FastResync for a file system
Disabling the FastResync option for a file system
Increasing the size of a file system
Decreasing the size of a file system
Checking and repairing a file system
Changing the status of a file system
Destroying a file system
About snapshots
About snapshot schedules
Creating and maintaining file systems

About creating and maintaining file systems
For more information on the fs commands, see Table 7-1 on page 118.

File systems consist of both metadata and file system data. Metadata contains information such as the last modification date, creation time, permissions, and so on. The total amount of space required for the metadata depends on the number of files in the file system. A file system with many small files requires more space to store metadata. A file system with fewer larger files requires less space for handling the metadata.

When you create a file system, you need to set aside some space for handling the metadata. The space required is generally proportional to the size of the file system. For this reason, after you create a file system, the Storage> fs list output includes non-zero use percentages. The space set aside for handling metadata may increase or decrease as needed. For example, a file system on a 1 GB volume takes approximately 35 MB (about 3%) initially for storing metadata. In contrast, a file system of 10 MB requires approximately 3.3 MB (30%) initially for storing the metadata.

To access the commands, log into the administrative console (as a master, system-admin, or storage-admin) and enter Storage> mode. For login instructions, see About using the SFS command-line interface.

Table 7-1 Commands
fs list - Lists all file systems and associated information. See Listing all file systems and associated information on page 120.

fs create - Creates a file system. See About creating file systems on page 120.
fs addmirror - Adds a mirror to a file system. See To add a mirror to a file system on page 124.

fs rmmirror - Removes a mirror from a file system. See To remove a mirror from a file system on page 126.

fs setfastresync - Keeps the mirrors in the file system in a consistent state. See To enable the FastResync option on page 127.

fs unsetfastresync - Disables the FastResync option for a file system. See To disable the FastResync option on page 127.
fs growto - Increases the size of a file system to a specified size. See To increase the size of a file system to a specified size on page 128.

fs growby - Increases the size of a file system by a specified size. See To increase the size of a file system by a specified size on page 128.

fs shrinkto - Decreases the size of a file system to a specified size. See To decrease the size of a file system to a specified size on page 129.

fs shrinkby - Decreases the size of a file system by a specified size. See To decrease the size of a file system by a specified size on page 130.

fs fsck - Checks and repairs a file system. See To check and repair a file system on page 131.

fs online - Mounts (places online) a file system. See To change the status of a file system on page 132.

fs offline - Unmounts (places offline) a file system. See To change the status of a file system on page 132.

fs destroy - Destroys a file system. See To destroy a file system on page 133.

snapshot - Copies a set of files and directories as they were at a particular point in the past. See About snapshots on page 133.

snapshot schedule - Creates or removes a snapshot schedule. See About snapshot schedules on page 138.
Listing all file systems and associated information
To list all file systems and associated information, enter the following:
Storage> fs list [fs_name]
where fs_name is optional. If you enter a file system that does not exist, an error message is displayed. If you do not specify a file system, a list of all file systems is displayed. For example:
Storage> fs list fs1
General Info:
===============
Block Size: 1024 Bytes

Primary Tier
============
Size:
Use%:
Layout:
Mirrors:
Columns:
Stripe Unit:
FastResync:

Mirror 1:
List of pools: p2
List of disks: sda
fs create mirrored - Creates a mirrored file system with a specified number of mirrors, a list of pools, and online status. Each mirror uses the disks from the corresponding pools as listed. See To create a mirrored file system on page 122.

fs create mirrored-stripe - Creates a mirrored-stripe file system with a specified number of columns, mirrors, pools, and protection options. See To create a mirrored-stripe file system on page 122.

fs create striped-mirror - Creates a striped-mirror file system with a specified number of mirrors and stripes. See To create a striped-mirror file system on page 122.

fs create striped - Creates a striped file system. A striped file system is a file system that stores its data across multiple disks rather than storing the data on one disk. See To create a striped file system on page 122.
To create a simple file system with a specified size, enter the following:
Storage> fs create simple fs_name size pool1[,disk1,...] [blksize=bytes]
For example:
Storage> fs create simple fs2 10m sda
100% [#] Creating simple filesystem
For example:
Storage> fs create mirrored fs1 100M 2 pool1,pool2
100% [#] Creating mirrored filesystem
fs_name - Specifies the name of the file system being created. The file system name should be a string. If you enter a file system name that already exists, you receive an error message and the file system is not created.
size - Specifies the size of a file system. To create a file system, you need at least 10 MB of space. Available units are MB, GB, and TB. You can enter the units with either uppercase (10M) or lowercase (10m) letters. To see how much space is available on a pool, use the Storage> pool free command. See About configuring storage pools on page 96.

nmirrors - Specifies the number of mirrors the file system has. You must enter a positive integer.

ncolumns - Specifies the number of columns for the striped file system. The number of columns represents the number of disks to stripe the information across. If the number of columns exceeds the number of disks for the entered pools, an error message is displayed. This message indicates that there is not enough space to create the striped file system.

pool1[,disk1,...] - Specifies the pool(s) or disk(s) for the file system. If you specify a pool or disk that does not exist, you receive an error message. Specify more than one pool or disk by separating the names with a comma; however, do not include a space between the comma and the name. To find a list of pools, use the Storage> pool list command. To find a list of disks, use the Storage> disk list command. The disk must be part of the pool or an error message is displayed.

protection - If you do not specify a protection option, the default is "disk." The available options for this field are:
disk - Creates mirrors on separate disks.
pool - Creates mirrors in separate pools.
If there is not enough space to create the mirrors, an error message is displayed, and the file system is not created.
Adding or removing a mirror to a file system
stripeunit=kilobytes - Specifies a stripe width (in kilobytes). Possible values are the following:

blksize=bytes - Specifies the block size for the file system. Possible values of bytes are the following:
pool1[,disk1,...] - Specifies the pool(s) or disk(s) to use for the file system. If the specified pool or disk does not exist, an error message is displayed, and the file system is not created. You can specify more than one pool or disk by separating the names with a comma, but do not include a space between the comma and the name. To find a list of existing pools and disks, use the Storage> pool list command. See About configuring storage pools on page 96. To find a list of the existing disks, use the Storage> disk list command. See About displaying information for all disk devices on page 105. The disk needs to be part of the pool or an error message is displayed.

protection - The default value for the protection field is "disk." Available options are:
disk - Creates mirrors on separate disks.
pool - Uses pools from any available pool.
For example:
Storage> fs addmirror fs1 pool3,pool4
Storage>
Configuring FastResync for a file system
pool_or_disk_name
For a striped-mirror file system, if any of the disks are bad, the Storage> fs rmmirror command disables the mirrors on the disks that have failed. If no disks have failed, SFS chooses a mirror to remove. For example:
Storage> fs rmmirror fs1 AMS_WMS0_0
Storage>
Disabling the FastResync option for a file system
pool_or_disk_name
where fs_name specifies the name of the file system for which to disable FastResync. If you specify a file system that does not exist, an error message is displayed. For example:

Storage> fs unsetfastresync fs6
Storage>
Increasing the size of a file system
To increase the size of a file system to a specified size, enter the following:
Storage> fs growto {primary|secondary} fs_name new_length [pool1[,disk1,...]] [protection=disk|pool]
For example:
Storage> fs growto primary fs1 1G
Storage>
To increase the size of a file system by a specified size, enter the following:
Storage> fs growby {primary|secondary} fs_name length_change [pool1[,disk1,...]] [protection=disk|pool]
For example:
Storage> fs growby primary fs1 50M
Storage>

primary|secondary - Specifies the primary or secondary tier.

fs_name - Specifies the file system whose size will be increased. If you specify a file system that does not exist, an error message is displayed.

new_length - Expands the file system to a specified size. The size specified must be a positive number, and it must be bigger than the size of the existing file system. If the new size is not larger than the size of the existing file system, an error message is displayed, and no action is taken. This variable is used with the Storage> fs growto command.

length_change - Expands the file system by a specified size. The size specified must be a positive number. If the resulting size is not larger than the size of the existing file system, an error message is displayed, and no action is taken. This variable is used with the Storage> fs growby command.
Decreasing the size of a file system
pool1[,disk1,...] - Specifies the pool(s) or disk(s) to use for the file system. If you specify a pool or disk that does not exist, an error message is displayed, and the file system is not resized. You can specify more than one pool or disk by separating the names with a comma; however, do not include a space between the comma and the name. To find a list of existing pools and disks, use the Storage> pool list command. See About configuring storage pools on page 96. To find a list of the existing disks, use the Storage> disk list command. See About displaying information for all disk devices on page 105. The disk needs to be part of the pool or an error message is displayed.

protection - The default value for the protection field is "disk." Available options are:
disk - New disks required for increasing the size of the file system must come from the same pool.
pool - Pools are used from any available pool.
For example:
Storage> fs shrinkto primary fs1 10M
Storage>
Checking and repairing a file system
For example:
Storage> fs shrinkby primary fs1 10M
Storage>

primary|secondary - Specifies the primary or secondary tier.

fs_name - Specifies the file system whose size will decrease. If you specify a file system that does not exist, an error message is displayed.

new_length - Specifies the size to decrease the file system to. The size specified must be a positive number, and it must be smaller than the size of the existing file system. If the new file system size is not smaller than the size of the existing file system, an error message is displayed, and no action is taken. This variable is used with the Storage> fs shrinkto command.

length_change - Decreases the file system by a specified size. The size specified must be a positive number, and it must be smaller than the size of the existing file system. If the new file system size is not smaller than the size of the existing file system, an error message is displayed, and no action is taken. This variable is used with the Storage> fs shrinkby command.
Changing the status of a file system
where fs_name specifies the file system for which to check and repair. For example:
Storage> fs fsck fs1
SFS fs ERROR V-288-693 fs1 must be offline to perform fsck.
To change the status of a file system, enter one of the following, depending on which status you are using:
Storage> fs online fs_name Storage> fs offline fs_name
where fs_name specifies the name of the file system that you want to mount (online) or unmount (offline). If you specify a file system that does not exist, an error message is displayed. For example, to bring a file system online:
Storage> fs list
FS    STATUS    SIZE     LAYOUT   MIRRORS   COLUMNS   USE%   NFS SHARED   CIFS SHARED
===   ======    ====     ======   =======   =======   ====   ==========   ===========
fs1   online    5.00G    simple   -         -         10%    no           no
fs2   offline   10.00M   simple   -         -         100%   no           no

Storage> fs online fs2
100% [#] Online filesystem

Storage> fs list
FS    STATUS    SIZE     LAYOUT   MIRRORS   COLUMNS   USE%   NFS SHARED   CIFS SHARED
===   ======    ====     ======   =======   =======   ====   ==========   ===========
fs1   online    5.00G    simple   -         -         10%    no           no
fs2   online    10.00M   simple   -         -         100%   no           no
where fs_name specifies the name of the file system that you want to destroy. For example:
Storage> fs destroy fs1
100% [#] Destroy filesystem
About snapshots
A snapshot is a virtual image of the entire file system. You can create snapshots of a parent file system on demand. Physically, a snapshot contains only data that corresponds to changes made in the parent, and so consumes significantly less space than a detachable full mirror.

Snapshots are used to recover from data corruption. If files, or an entire file system, are deleted or become corrupted, you can replace them from the latest uncorrupted snapshot. You can mount a snapshot and export it as if it were a complete file system. Users can then recover their own deleted or corrupted files. You can limit the space consumed by snapshots by setting a quota on them. If the total space consumed by snapshots exceeds the quota, SFS rejects attempts to create additional ones.

You can create a snapshot either by using the snapshot create command or by creating a schedule that runs the snapshot create command after a specified number of hours or minutes. A schedule automatically creates the snapshot by storing the following values in the crontab: minutes, hour, day-of-month, month, and day-of-week.
snapshot list - Lists all the snapshots for the specified file system. If you do not specify a file system, snapshots of all the file systems are displayed. See To list snapshots on page 136.

snapshot destroy - Destroys a snapshot. See To destroy a snapshot on page 137.

snapshot online - Mounts (places online) a snapshot. See To mount or unmount snapshots on page 137.

snapshot offline - Unmounts (places offline) a snapshot. See To mount or unmount snapshots on page 137.

snapshot quota list - Displays snapshot quota information for all the file systems. See To display snapshot quotas on page 137.

snapshot quota on - Enables the snapshot quota for the given file system. When the quota is on, snapshots cannot be created once the space used by all of the snapshots of that file system exceeds a given capacity. See To enable or disable a quota limit on page 138.

snapshot quota off - Disables the snapshot quota for the given file system; the space used by the snapshots is not restricted. See To enable or disable a quota limit on page 138.
Configuring snapshots
To create a snapshot
snapshot_name - Specifies the name for the snapshot.

fs_name - Specifies the name for the file system.

removable - Valid values are yes and no. If the removable attribute is yes, and the file system is offline, the snapshot is removed automatically if the file system runs out of space. The default value is removable=no.
For example:
Storage> snapshot create snapshot1 fs1
100% [#] Create snapshot
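The removable attribute, when used, is entered as the optional third argument. For example (the snapshot name here is illustrative):

Storage> snapshot create snapshot2 fs1 yes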
To list snapshots
schedule_name
For example:
Storage> snapshot list
Snapshot                                 FS   Status    ctime                  Removable   Preserved
======================================   ==   ======    =====                  =========   =========
schedule2_26_Feb_2009_00_15_01                offline   2009.Feb.26.00:15:04   no          No
schedule2_26_Feb_2009_00_10_01                offline   2009.Feb.26.00:10:03   no          No
presnap_schedule2_25_Feb_2009_18_00_02        offline   2009.Feb.25.18:00:04   no          Yes
Snapshot - Displays the name of the created snapshot.
FS - Displays the file system that corresponds to each created snapshot.
Status - Displays whether or not the snapshot is mounted (that is, online or offline).
ctime - Displays the time the snapshot was created.
mtime - Displays the time the snapshot was modified.
Removable - Determines if the snapshot is automatically removed in case the underlying file system runs out of space. You entered either yes or no in the snapshot create snapshot_name fs_name [removable] command.
Preserved - Determines if the snapshot is preserved when all of the automated snapshots are destroyed.
To destroy a snapshot
For example:
Storage> snapshot destroy snapshot1 fs1
100% [#] Destroy snapshot
To mount or unmount snapshots, enter one of the following commands, depending on which operation you want to perform:
Storage> snapshot online|offline snapshot_name fs_name

snapshot_name - Specifies the name of the snapshot.
fs_name - Specifies the name of the file system.
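For example, to bring a snapshot online and then take it offline again (the snapshot and file system names are illustrative):

Storage> snapshot online snapshot1 fs1
Storage> snapshot offline snapshot1 fs1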
fs_name capacity_limit
off
The crontab interprets the numeric values in a different manner than SFS interprets the same values. For example, snapshot schedule create schedule1 fs1 30 2 * * * automatically creates a snapshot every day at 2:30 AM; it does not create snapshots every two and a half hours. If you want to create a snapshot every two and a half hours with at most 50 snapshots per schedule name, run snapshot schedule create schedule1 fs1 50 */30 */2 * * *, where the value */2 implies that the schedule runs every two hours. You can also specify a step value for the other parameters, such as day-of-month, month, and day-of-week, and can use a range along with a step value. Specifying a range in addition to the numeric value implies the number of times the crontab skips for a given parameter. For example, to create a snapshot every two and a half hours with no restrictions on the maximum number of snapshots per schedule name, run snapshot schedule create schedule1 fs1 0 0-59/30 0-23/2 * * *, as crontab interprets a step value and a step-and-range combination in a similar manner.

Table 7-4 Snapshot schedule commands
snapshot schedule create - Creates a schedule to automatically create a snapshot of a particular file system. See To create a snapshot schedule on page 140.

snapshot schedule modify - Modifies the snapshot schedule of a particular file system. See To modify a snapshot schedule on page 141.

snapshot schedule destroyall - Destroys all of the automated snapshots created under a given schedule. This excludes the preserved and online snapshots. See To remove all snapshots on page 141.

snapshot schedule preserve - Preserves a limited number of snapshots corresponding to an existing schedule and specific file system name. These snapshots are not removed as part of the snapshot schedule autoremove command. See To preserve snapshots on page 142.

snapshot schedule show - Displays all schedules that have been set for automatically creating snapshots. See To display a snapshot schedule on page 142.

snapshot schedule delete - Deletes the schedule set for automatically creating snapshots for a particular file system or for a particular schedule. See To delete a snapshot schedule on page 142.
For example, to create a schedule for an automated snapshot creation of a given file system every 3 hours on a daily basis, enter the following:
Storage> snapshot schedule create schedule1 fs1 * 3 * * *
Storage>
When an automated snapshot is created, the entire date value is appended, including the time zone.
schedule_name - Specifies the name of the schedule corresponding to the automatically created snapshot. The schedule_name cannot contain an underscore ('_') as part of its value. For example, sch_1 is not allowed.

fs_name - Specifies the name of the file system. The file system name should be a string.

max_snapshot_limit - Specifies the number of snapshots that can be created for a given file system and schedule name. This field only accepts numeric input. Entering 0 implies that snapshots can be created on a given file system and schedule name without any restriction. Any other value implies that only that number of snapshots can be created for a given file system and schedule name. If the number of snapshots corresponding to the schedule name is equal to or greater than the value of this field, then snapshots that are more than an hour old are automatically destroyed until the number of snapshots is less than the maximum snapshot limit value. The range allowed for this parameter is 0-999.

minute - May contain either an asterisk (*), which implies "every minute," or a numeric value between 0-59. You can enter */(0-59), a range such as 23-43, or just the *.

hour - May contain either an asterisk (*), which implies "run every hour," or a numeric value between 0-23. You can enter */(0-23), a range such as 12-21, or just the *.
day_of_the_month
    This parameter may contain either an asterisk (*), which implies "run every day of the month," or a numeric value between 1-31. You can enter */(1-31), a range such as 3-22, or just the *.
month
    This parameter may contain either an asterisk (*), which implies "run every month," or a numeric value between 1-12. You can enter */(1-12), a range such as 1-5, or just the *. You can also enter the first three letters of any month (lowercase only).
day_of_the_week
    This parameter may contain either an asterisk (*), which implies "run every day of the week," or a numeric value between 0-6. Crontab interprets 0 as Sunday. You can also enter the first three letters of a day of the week (lowercase only).
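The auto-remove rule described for max_snapshot_limit can be sketched in Python. This is an illustrative model, not SFS code; the function name and data layout are hypothetical.

```python
from datetime import datetime, timedelta

def prune_snapshots(snapshots, max_snapshot_limit, now):
    """Illustrative model of the max_snapshot_limit rule: if the number of
    snapshots for a schedule is at or above the limit, destroy snapshots
    more than an hour old until the count drops below the limit.
    snapshots is a list of creation datetimes; a limit of 0 means unlimited."""
    if max_snapshot_limit == 0:
        return sorted(snapshots)              # 0: no restriction, keep everything
    kept = sorted(snapshots)                  # oldest first
    cutoff = now - timedelta(hours=1)
    while len(kept) >= max_snapshot_limit:
        # only snapshots more than an hour old are eligible for removal
        if kept and kept[0] < cutoff:
            kept.pop(0)                       # destroy the oldest snapshot
        else:
            break                             # nothing old enough to remove
    return kept

now = datetime(2009, 2, 27, 16, 42)
snaps = [now - timedelta(hours=h) for h in (5, 4, 3, 0)]
print(len(prune_snapshots(snaps, 4, now)))  # limit of 4 prunes down to 3
```

Note that a snapshot created within the last hour is never destroyed, so the count can temporarily remain at the limit.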
For example, to modify the existing schedule so that a snapshot is created every 2 hours on the first day of the week, enter the following:
Storage> snapshot schedule modify schedule1 fs1 * 2 * * 1
Storage>
To automatically remove all of the snapshots created under a given schedule and file system name (excluding the preserved and online snapshots), enter the following:
Storage> snapshot schedule destroyall schedule_name fs_name
For example:
Storage> snapshot schedule destroyall schedule1 fs1
Storage>
To preserve snapshots
To preserve a number of snapshots corresponding to an existing schedule and specific file system name, enter the following:
Storage> snapshot schedule preserve schedule_name fs_name snapshot_name
For example, to preserve a snapshot created according to a given schedule and file system name, enter the following:
Storage> snapshot schedule preserve schedule fs1 schedule1_Feb_27_16_42_IST
Storage>
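The interaction between preserve and destroyall described above can be modeled as follows. This is an illustrative Python sketch, not SFS code, and all names are hypothetical.

```python
# Illustrative model: destroyall removes all automated snapshots for a
# schedule and file system EXCEPT those that are preserved or online.
def destroyall(snapshots, preserved, online):
    """snapshots: list of snapshot names for one schedule and file system.
    preserved / online: sets of snapshot names that must survive."""
    return [s for s in snapshots if s in preserved or s in online]

snaps = ["schedule1_Feb_27_16_42_IST", "schedule1_Feb_27_19_42_IST"]
# Only the preserved snapshot survives the destroyall operation.
print(destroyall(snaps, preserved={"schedule1_Feb_27_16_42_IST"}, online=set()))
```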
To display all of the schedules for automated snapshots, enter the following:
Storage> snapshot schedule show [fs_name] [schedule_name]

fs_name
    Displays all of the schedules of the specified file system. If no file system is specified, schedules of all of the file systems are displayed.
schedule_name
    Displays the schedules with the given schedule name. If no schedule name is specified, then all of the schedules created under fs_name are displayed.
For example, to display all of the schedules for creating or removing snapshots to an existing file system, enter the following:
Storage> snapshot schedule show fs2
FS   Schedule Name  Max Snapshot  Minute  Hour  Day  Month  WeekDay
==   =============  ============  ======  ====  ===  =====  =======
fs2  schedule2      0             0       2     *    *      *
fs2  schedule2      10            5       *     *    *      *
fs2  schedule1      20            30      16    *    *      5
For example, to delete the schedules for a particular file system, enter the following:

Storage> snapshot schedule delete fs1
Storage>
To access the commands, log in to the administrative console (as master, system-admin, or storage-admin) and enter the NFS> mode. For login instructions, go to About using the SFS command-line interface.

Table 8-1 NFS share commands

share show
    Displays the exported file systems and the NFS options with which they are exported.
share add
    Exports a file system with the specified NFS options.
share delete
    Unexports the exported file system. See Unexporting a file system or deleting NFS options on page 151.
For example:
NFS> share show
/vx/fs2  * (sync)
/vx/fs3  * (secure,ro,no_root_squash)
Right-hand column
    Displays the system that the file system is exported to, and the NFS options with which the file system was exported. For example: * (secure,ro,no_root_squash)
When sharing a file system, SFS does not check whether the client exists. If you add a share for an unknown client, an entry appears in the NFS> share show command output. If the file system does not exist, you cannot export it to any client. SFS gives the following error:
SFS nfs ERROR V-288-0 File system file_system_name is offline or does not exist
You cannot export a non-existent file system. The NFS> show fs command displays the list of exportable file systems. Valid NFS options include the following:
rw
    Grants read and write permission to the file system. Hosts mounting this file system can make changes to it.
ro
    Grants read-only permission to the file system. Hosts mounting this file system cannot change it.
sync
    Grants synchronous write access to the file system. Forces the server to perform a disk write before the request is considered complete.
async
    Grants asynchronous write access to the file system. Allows the server to write data to the disk when appropriate.
secure
    Grants secure access to the file system. Requires that clients originate from a secure port. A secure port is a port below 1024.
insecure
    Grants insecure access to the file system. Permits client requests to originate from unprivileged ports (those above 1024).
root_squash
    Prevents the root user on an NFS client from having root privileges on an NFS mount. This effectively "squashes" the power of the remote root user to the lowest local user, preventing remote root users from acting as though they were the root user on the local system.
no_root_squash
    Disables the root_squash option. Allows root users on the NFS client to have root privileges on the NFS server.
wdelay
    Causes the NFS server to delay writing to the disk if another write request is imminent. This can improve performance by reducing the number of separate write commands the disk must service, reducing write overhead.
no_wdelay
    Disables the wdelay option.
The default NFS export options are: sync, ro, root_squash, and wdelay. The no_wdelay option has no effect if the async option is set. For example, you could issue the following commands:
NFS> share add rw,async fs2
NFS> share add rw,sync,secure,root_squash fs3 10.10.10.10
Note: With root_squash, the root user can access the share, but with 'nobody' permissions.
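How the default export options combine with user-supplied options can be sketched as follows. This is an illustrative Python model, not SFS code; the option-pairing table is an assumption for illustration.

```python
# Each NFS export option has an opposite that it overrides when supplied.
# This pairing table is an assumption used only for this sketch.
OPPOSITES = {
    "rw": "ro", "ro": "rw",
    "sync": "async", "async": "sync",
    "secure": "insecure", "insecure": "secure",
    "root_squash": "no_root_squash", "no_root_squash": "root_squash",
    "wdelay": "no_wdelay", "no_wdelay": "wdelay",
}
# The documented SFS defaults: sync, ro, root_squash, and wdelay.
DEFAULTS = ["sync", "ro", "root_squash", "wdelay"]

def effective_options(user_options):
    """Start from the defaults and let each user option replace its opposite."""
    opts = list(DEFAULTS)
    for opt in user_options.split(","):
        opt = opt.strip()
        if not opt:
            continue
        opposite = OPPOSITES.get(opt)
        if opposite in opts:
            opts.remove(opposite)
        if opt not in opts:
            opts.append(opt)
    # The manual notes that no_wdelay has no effect if async is set.
    if "async" in opts and "no_wdelay" in opts:
        opts.remove("no_wdelay")
    return opts

print(effective_options("rw,async"))
```

For example, `share add rw,async fs2` leaves root_squash in effect because no opposite was supplied for it.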
To see your exportable online file systems and snapshots, enter the following:
NFS> show fs
For example:
NFS> show fs
FS/Snapshot
===========
fs2
fs3
For example:
NFS> share show
/vx/fs2  * (sync)
/vx/fs3  * (secure,ro,no_root_squash)
If the client is not given, then the specified file system can be mounted or accessed by any client. If you run the share add command again for an existing share with new options, the new options are applied after the command is run.
NFS> share add sync fs4
Exporting *:/vx/fs4 with options sync ..Success.
[Figure: A 2-node SFS cluster providing data access to a Windows user over the CIFS protocol and to a UNIX user over the NFS protocol.]
Note: When a share is exported over both the NFS and CIFS protocols, applications running on the NFS and CIFS clients may attempt to read or write the same file concurrently. This may lead to unexpected results, since the locking models used by these protocols are different; for example, an application may read stale data. For this reason, SFS warns you when a share export is requested over one protocol and the same share has already been exported over the other, if at least one of these exports allows write access.
To export a file system to Windows and UNIX users with read-only and read-write permission respectively, go to CIFS mode and enter the following commands:
CIFS> show
Name                    Value
----                    -----
netbios name            mycluster
ntlm auth               yes
allow trusted domains   no
homedirfs
quota                   0
idmap backend           rid:10000-20000
workgroup               SYMANTECDOMAIN
security                ads
Domain                  SYMANTECDOMAIN.COM
Domain user             administrator
Domain Controller       SYMSERVER

CIFS> share add fs1 share1 ro
Exporting CIFS filesystem : share1...
CIFS> share show
ShareName    FileSystem    ShareOptions
share1       fs1           owner=root,group=root,ro
CIFS> exit
> nfs
Entering share mode...
NFS> share add rw fs1
SFS nfs WARNING V-288-0 Filesystem (fs1) is already shared over CIFS with 'ro' permission.
Do you want to proceed (y/n): y
Exporting *:/vx/fs1 with options rw ..Success.
NFS> share show
/vx/fs1  * (rw)
NFS>
To see your existing exported file systems, enter the following command:
NFS> share show
Only the file systems that are displayed can be unexported. For example:
NFS> share show
/vx/fs2  * (sync)
/vx/fs3  * (secure,ro,no_root_squash)
To delete a file system from the export path, enter the following command:
NFS> share delete filesystem [client]
For example:
NFS> share delete fs3
Removing export path *:/vx/fs3 ..Success.

filesystem
    Specifies the name of the file system you want to delete. The file system name can be a string of characters, but the following characters are not allowed: / \ ( ) < >
    For example:
    NFS> share delete "*:/vx/example"
    You cannot include single or double quotes that do not enclose characters. You cannot use an unmatched single quote or double quote, as in the following example:
    NFS> share delete ' "filesystem
client
    Single host: specify a host either by an abbreviated name that is recognized by the resolver (DNS is the resolver), the fully qualified domain name, or an IP address.
    Netgroups: netgroups may be given as @group. Only the host part of each netgroup member is considered when checking membership.
If client is included, the file system is removed from the export path that was directed at the client. If a file system is being exported to a specific client, the NFS> share delete command must specify the client to remove that export path. If the client is not specified, then the specified file system can be mounted or accessed by any client.
About configuring SFS for CIFS
About configuring CIFS for standalone mode
About configuring CIFS for NT domain mode
About leaving an NT domain
Changing NT domain settings
Changing security settings
Changing security settings after the CIFS server is stopped
About configuring CIFS for AD domain mode
Leaving an AD domain
Changing domain settings for AD domain mode
Removing the AD interface
About setting NTLM
About setting trusted domains
About storing account information
About reconfiguring the CIFS service
About managing CIFS shares
Sharing file systems using CIFS and NFS protocols
About SFS cluster and load balancing
About managing home directories
About managing local users and groups
About configuring local groups
Windows 2000 Server
Windows XP
Windows Server 2003
Older Windows NT
Windows 9.x operating systems
You can control and manage the network resources by using Active Directory or NT workgroup domain controllers. Before you use SFS with CIFS, you must have administrator-level knowledge of the Microsoft operating systems, Microsoft services, and Microsoft protocols (including Active Directory and NT services and protocols). You can find more information about them at: www.microsoft.com. To access the commands, log into your administrative console (master, system-admin, or storage-admin) and enter CIFS> mode. For login instructions, go to About using the SFS command-line interface. When serving the CIFS clients, SFS can be configured to operate in one of the modes described in Table 9-1.
Using SFS as a CIFS server About configuring CIFS for standalone mode
NT Domain
Active Directory
When SFS operates in the NT or AD domain mode, it acts as a domain member server and not as the domain controller.
1 Make sure that the CIFS server is not running.
2 Set security to user.
3 Start the CIFS server.
4 Check the server status.
5 Display the server settings.

Configure CIFS for standalone mode commands
server status
    Checks the status of the server. See To check the CIFS server status on page 156.
show
    Checks the security setting. See To check the security setting on page 157.
set security user
    Sets security to user. This is the default value. In standalone mode you do not need to set the domaincontroller, domainuser, or domain. See To check the security setting on page 157.
server start
    Starts the service in standalone mode. See To start the CIFS service in standalone mode on page 158.
By default, security is set to user, which is the required setting for standalone mode. The following example shows output where security was previously set to ads:
CIFS> server status
CIFS Status on sfs_1 : ONLINE
CIFS Status on sfs_2 : ONLINE
Security                  : ads
Domain membership status  : Disabled
Domain                    : SYMANTECDOMAIN.COM
Domain Controller         : symantecdomain_ad
Domain User               : administrator
To check the current settings before setting security, enter the following:
CIFS> show
For example:
Name                    Value
----                    -----
netbios name            mycluster
ntlm auth               yes
allow trusted domains   no
homedirfs
quota                   0
idmap backend           rid:10000-20000
workgroup               SYMANTECDOMAIN
security                ads
Domain                  SYMANTECDOMAIN.COM
Domain user             administrator
Domain Controller       SYMSERVER
For example:
Name                    Value
----                    -----
netbios name            mycluster
ntlm auth               yes
allow trusted domains   no
homedirfs
quota                   0
idmap backend           rid:10000-20000
workgroup               SYMANTECDOMAIN
security                user
Domain                  SYMANTECDOMAIN.COM
Domain user             administrator
Domain Controller       SYMSERVER
To make sure that the server is running in standalone mode, enter the following:
CIFS> server status
For example:
CIFS> server status
CIFS Status on sfs_1 : ONLINE
CIFS Status on sfs_2 : ONLINE
Security : user
The CIFS service is now running in standalone mode. To create local users and groups, go to About managing local users and groups. To export the shares, go to About managing CIFS shares.
1 Make sure that an NT domain has already been configured.
2 Make sure that SFS can communicate with the domain controller (DC) over the network.
3 Make sure that the CIFS server is stopped.
4 Set the domain user, domain, and domain controller.
5 Set the security to domain.
6 Start the CIFS server.
7 Check the server status.
8 Display the server settings.

Configure CIFS for NT domain mode commands
set domainuser
    Sets the name of the domain user. The credentials of the domain user are used at the domain controller while joining the domain. Therefore, the domain user should be an existing NT domain user who has permission to perform the join domain operation. See To set the domain user name for NT mode on page 160.
set domain
    Sets the name of the NT domain that you would like SFS to join and become a member of. See To set the domain for the NT domain mode on page 160.
set domaincontroller
    Sets the domain controller server name.
    Note: If security is set to domain, you can use both the AD server and the Windows NT 4.0 domain controller as domain controllers. However, if you use the Windows NT 4.0 domain controller, you can only use the netbios name of the domain controller to set the domaincontroller parameter.
    See To set the domain controller for the NT domain mode on page 161.
set security domain
    Sets security to domain. Before you set the security for the domain, you must set the domaincontroller, domainuser, and domain. See To set security to domain for the NT domain mode on page 161.
where username is an existing NT domain user who has permission to perform the join domain operation. For example:
CIFS> set domainuser administrator
Global option updated.
Note: Restart the CIFS server.
where domainname is the name of the domain that SFS will join. For example:
CIFS> set domain SYMANTECDOMAIN.COM
Global option updated.
Note: Restart the CIFS server.
where servername is the netbios name if it is a Windows NT 4.0 domain controller. For example, if the domain controller is Windows NT 4.0, enter the server name SYMSERVER:
CIFS> set domaincontroller SYMSERVER
Global option updated.
Note: Restart the CIFS server.
When you enter the correct password, the following messages appear:
Joined domain SYMANTECDOMAIN.COM OK
Starting CIFS Server.....Success.
To find the current settings for the domain name, domain controller name, and domain user name, enter the following:
CIFS> show
To make sure that the service is running as a member of the NT domain, enter the following:
CIFS> server status
For example:
CIFS> server status
CIFS Status on sfs_1 : ONLINE
CIFS Status on sfs_2 : ONLINE
Security                  : domain
Domain membership status  : Enabled
Domain                    : SYMANTECDOMAIN.COM
Domain Controller         : SYMSERVER
Domain User               : administrator
The CIFS service is now running in the NT domain mode. You can export the shares, and domain users can access the shares subject to authentication and authorization control.
set security user
    Sets security to user. When you change the security setting and then start or stop the CIFS server, the CIFS server leaves the existing NT domain. For example, if you change the security setting from domain to user and you stop or restart the CIFS server, it leaves the NT domain. See To change security settings on page 165.
    If the CIFS server is already stopped, and you change the security to a value other than domain, SFS leaves the domain. This method of leaving the domain is provided so that if a CIFS server is already stopped, and may not be restarted soon, you still have a way to leave an existing NT domain. See To change security settings for a CIFS server that has been stopped on page 165.
where newdomain.com is the new domain name. When you start the CIFS server, the CIFS server tries to leave the existing domain. This requires the old domainuser to enter their password. After the password is supplied, and the domain leave operation succeeds, the CIFS server joins an NT domain with the new settings.
If the server is stopped, then changing the security mode will disable the membership of the existing domain.
1 Make sure that the SFS and AD server clocks are reasonably synchronized with each other. The most commonly allowed maximum clock difference is 5 minutes, but it depends on the AD server settings. One way to ensure this is to configure SFS to use the NTP service running on the AD server. You can change the clock settings on the AD server by modifying the Kerberos Policy, which is part of the Domain Security Policy.
2 Make sure that SFS is configured to use a DNS service that has entries for the AD domain controller and the SFS nodes. You can also use the DNS service running on the AD domain controller.
3 Make sure that the CIFS server is not running.
4 Set the AD domain user, AD domain, and domain controller.
5 Set security to ads.
6 Start the CIFS server.
7 Check the server status.
8 Display the server settings.

Configure CIFS for AD domain mode commands
set domainuser
    Sets the name of the domain user. The domain user's credentials are used at the domain controller while joining the domain. Therefore, the domain user should be an existing AD user who has permission to perform the join domain operation. See To set the domain user for AD domain mode on page 167.
set domain
Sets the name of the domain for the AD domain mode that SFS will join. See To set the domain for AD domain mode on page 167.
set domaincontroller
    Sets the domain controller server name. See To set the domain controller for AD domain mode on page 168.
set security ads
    Sets security to ads. You must first set the domaincontroller, domainuser, and domain. See To set security to ads on page 168.
where username is the name of an existing AD domain user who has permission to perform the join domain operation. For example:
CIFS> set domainuser administrator
Global option updated.
Note: Restart the CIFS server.
where servername is the server's IP address or DNS name. For example, if the server SYMSERVER has an IP address of 172.16.113.118, you can specify one of the following:
CIFS> set domaincontroller 172.16.113.118
Global option updated.
Note: Restart the CIFS server.
or
CIFS> set domaincontroller SYMSERVER
Global option updated.
Note: Restart the CIFS server.
After you enter the correct password for the user administrator belonging to AD domain SYMANTECDOMAIN.COM, the following message appears:
Joined domain SYMANTECDOMAIN.COM OK
Starting CIFS Server.....Success.
The CIFS server is now running in the AD domain mode. You can export the shares, and the domain users can access the shares subject to the AD authentication and authorization control.
Leaving an AD domain
There is no SFS command that explicitly lets you leave an AD domain. Leaving happens automatically as part of a change in the security or domain settings, followed by starting or stopping the CIFS server. Thus, SFS performs the domain leave operation based on the existing security and domain settings and the new administrative commands. The leave operation requires the credentials of the old domain user. All of the cases for a domain leave operation are documented in Table 9-6.

Table 9-6
set domain
set security user
    Sets security to user. If you change the security setting from ads to user and you stop or restart the CIFS server, it leaves the AD domain. In general, when you change the security setting and then stop or restart the CIFS server, the CIFS server leaves the existing AD domain. See To change the security settings for the AD domain mode on page 173.
    If the CIFS server is already stopped, changing the security to a value other than ads causes SFS to leave the domain. The methods mentioned earlier require either stopping or starting the CIFS server. This method of leaving the domain is provided so that if a CIFS server is already stopped, and may not be restarted in the near future, you still have a way of leaving an existing AD domain. See Changing security settings with stopped server on the AD domain mode on page 173.
When you start the CIFS server, it tries to leave the existing domain. This requires the old domainuser to enter its password. After the password is supplied, and the domain leave operation succeeds, the CIFS server joins an AD domain with the new settings.
1 Open the Active Directory Users and Computers interface.
2 In the domain hierarchy tree, click Computers.
3 In the details pane, right-click the computer entry corresponding to SFS (it can be identified by the SFS cluster name) and click Delete.
When the SFS CIFS service is running in standalone mode (with security set to user), some versions of the Windows clients require NTLM authentication to be enabled. You can do this by setting CIFS> set ntlm_auth to yes. When NTLM is disabled and you use SFS in the NT domain mode, the only protocol available for user authentication is Microsoft NTLMv2. When NTLM is disabled and you use SFS in the AD domain mode, the available authentication protocols are Kerberos and NTLMv2. The one used depends on the capabilities of both the SFS clients and the domain controller. If no special action is taken, SFS allows the NTLM protocol to be used. For any specific CIFS connection, all the participants (the client machine, SFS, and the domain controller) select the protocol that they all support and that provides the highest security. In the AD domain mode, Kerberos provides the highest security. In the NT domain mode, NTLMv2 provides the highest security.

Table 9-7
set ntlm_auth no
    Disables NTLM. See To disable NTLM on page 175.
set ntlm_auth yes
    Enables NTLM. See To enable NTLM on page 175.
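The protocol-selection rule described above can be sketched as follows. This is an illustrative Python model, not SFS code; the function and variable names are hypothetical.

```python
# Every participant (client machine, SFS, domain controller) must support
# the protocol, and the strongest mutually supported one is selected.
STRENGTH = ["NTLM", "NTLMv2", "Kerberos"]  # weakest to strongest

def select_protocol(client, sfs, domain_controller):
    """Return the highest-security protocol supported by all participants."""
    common = set(client) & set(sfs) & set(domain_controller)
    if not common:
        raise ValueError("no mutually supported authentication protocol")
    return max(common, key=STRENGTH.index)

# AD domain mode: Kerberos is supported by everyone, so it wins.
print(select_protocol(["NTLMv2", "Kerberos"],
                      ["NTLM", "NTLMv2", "Kerberos"],
                      ["NTLMv2", "Kerberos"]))
```

In NT domain mode, where Kerberos is unavailable, the same rule selects NTLMv2 (or NTLM, if NTLM is enabled and a participant supports nothing stronger).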
Setting NTLM
To disable NTLM
For example:
CIFS> set ntlm_auth no
Global option updated.
Note: Restart the CIFS server.
To enable NTLM
For example:
CIFS> set ntlm_auth yes
Global option updated.
Note: Restart the CIFS server.
set allow_trusted_domains yes
    Enables the use of trusted domains in the AD domain mode.
    Note: Depending on the value you specify for idmap_backend, it may or may not be possible to enable AD trusted domains.
    See To enable AD trusted domains on page 176.
set allow_trusted_domains no
    Disables the use of trusted domains in the AD domain mode. See To disable trusted domains on page 177.
For example:
CIFS> set allow_trusted_domains yes
Global option updated.
Note: Restart the CIFS server.
For example:
CIFS> set allow_trusted_domains no
Global option updated.
Note: Restart the CIFS server.
The rid value can be used in any of the following modes of operation: standalone, NT domain, or AD domain. It is the default value for idmap_backend in all of these operational modes. The ldap value can be used only in the AD domain mode.
set idmap_backend rid
    Configures SFS to store information about users and groups locally.
    Note: This command requires that the allow_trusted_domains variable be set to no, as the command is not compatible with trusted domains.
    See To set idmap_backend to rid on page 179.
set idmap_backend ldap
    Configures SFS to store information about users and groups in a remote LDAP service. You can only use this command when SFS is operating in the AD domain mode. The LDAP service can run on the domain controller or it can be external to the domain controller.
    Note: For SFS to use the LDAP service, the LDAP service must include both the RFC 2307 and Samba schema extensions. When idmap_backend is set to ldap, you can enable or disable trusted domains. If idmap_backend is set to ldap, you must first configure the SFS LDAP options using the Network> ldap commands. See About LDAP on page 72.
    See To set idmap_backend to LDAP on page 179.
To store information about user and group accounts locally, enter the following:
CIFS> set idmap_backend rid [uid_range]
where the uid_range represents the range of identifiers which are used by SFS when mapping domain users and groups to local users and groups. The default range is 10000-20000.
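The idea behind a RID-based mapping can be sketched as follows. This is an illustrative Python model assuming a simple base-plus-RID scheme; it is not Samba's or SFS's actual algorithm, and the function name is hypothetical.

```python
# Illustrative sketch: map a domain RID (relative identifier) to a local
# UID inside the configured uid_range (default 10000-20000). The
# base-plus-RID arithmetic is an assumption made for illustration.
def rid_to_uid(rid, uid_range="10000-20000"):
    """Deterministically place a domain RID inside the local UID range."""
    low, high = (int(x) for x in uid_range.split("-"))
    uid = low + rid
    if uid > high:
        raise ValueError("uid_range exhausted; configure a larger range")
    return uid

print(rid_to_uid(512))  # a RID of 512 lands at 10512 in the default range
```

Because the mapping is purely arithmetic, no shared database is needed, which is why the rid backend stores information locally; the trade-off is that the range must be large enough for all domain RIDs.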
To make sure that LDAP has first been configured, enter the following:
Network> ldap
To use the remote LDAP store for information about the user and group accounts, enter the following:
CIFS> set idmap_backend ldap
1 Make sure that the server is not running.
2 Set the domain user, domain, and domain controller.
3 Start the CIFS server.

Reconfigure the CIFS service commands
set domainuser
    Changes the configuration option to reflect the value appropriate for the new domain. See To set the user name for the AD on page 181.
set domain
Changes the configuration option to reflect the values appropriate for the new domain. See To set the AD domain on page 181.
set domaincontroller
Changes the configuration option to reflect the values appropriate for the new domain. See To set the AD server on page 182.
server start
Starts the server and causes it to leave the old domain and join the new Active Directory domain. You can only issue this command after you enter the CIFS> set security command. See To start the CIFS server on page 183.
To set the user name for the AD, enter the following:
CIFS> set domainuser username
where username is the name of an existing AD domain user who has permission to perform the join domain operation. For example:
CIFS> set domainuser administrator
Global option updated.
Note: Restart the CIFS server.
where domainname is the name of the domain. This command also sets the system workgroup. For example:
CIFS> set domain NEWDOMAIN.COM
Global option updated.
Note: Restart the CIFS server.
where servername is the AD server IP address or DNS name. For example, if the AD server SYMSERVER has an IP address of 172.16.113.118, you can specify one of the following:
CIFS> set domaincontroller 172.16.113.118
Global option updated.
Note: Restart the CIFS server.
or
CIFS> set domaincontroller SYMSERVER
Global option updated.
Note: Restart the CIFS server.
If you use the AD server name, you must configure SFS to use a DNS server which can resolve this name.
share add
Exports a file system with the given sharename, or re-exports an existing share with new options; the new options are updated after the command is run. This CIFS command, which creates and exports a share, takes as input the name of the file system being exported, the share name, and optional attributes. You can use the same command for a share that is already exported, if you need to modify the attributes of the exported share. A file system used for storing users' home directories cannot be exported as a CIFS share, and a file system that is exported as a CIFS share cannot be used for storing users' home directories. See To export a file system on page 184.
share delete
Stops the associated file system from being exported. Any files and directories which may have been created in this file system remain intact; they are not deleted as a result of this operation. See To delete a CIFS share on page 187.
sharename
    Specifies the name with which the file system is exported.
cifsoptions
    A comma-separated list of export options. This part of the command is optional. If it is not given, SFS uses the default values. (Example options: ro, rw, guest, noguest, oplocks, nooplocks, owner=ownername, group=groupname, ip=virtualip.) The default values are: ro, noguest, oplocks, owner=root, group=root.
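The way the defaults combine with a cifsoptions list can be sketched as follows. This is an illustrative Python model, not SFS code; the grouping of mutually exclusive flags is an assumption made for illustration.

```python
# Documented defaults: ro, noguest, oplocks, owner=root, group=root.
DEFAULTS = {"access": "ro", "guest": "noguest", "oplocks": "oplocks",
            "owner": "root", "group": "root"}
# Each bare flag belongs to a group; a new flag replaces its group's value.
FLAG_GROUPS = {"ro": "access", "rw": "access",
               "guest": "guest", "noguest": "guest",
               "oplocks": "oplocks", "nooplocks": "oplocks"}

def effective_share_options(cifsoptions=""):
    """Merge a comma-separated cifsoptions string with the defaults."""
    opts = dict(DEFAULTS)
    for item in cifsoptions.split(","):
        item = item.strip()
        if not item:
            continue
        if "=" in item:                     # owner=, group=, ip= style options
            key, value = item.split("=", 1)
            opts[key] = value
        else:                               # bare flag replaces its group
            opts[FLAG_GROUPS[item]] = item
    return opts

print(effective_share_options("rw,guest,owner=john"))
```

For example, `rw,guest,owner=john` overrides ro, noguest, and owner=root while group=root and oplocks keep their defaults.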
For example, to export an existing file system called fsA as a share called ABC:
CIFS> share add fsA ABC rw,guest,owner=john,group=abcdev
A share option specifies whether the files in the share are read-only or whether both read and write access is possible, subject to the authentication and authorization checks when a specific access is attempted. This share option can be given one of these values:
ro
    Grants read-only permission to the exported share. Files cannot be created or modified. This is the default value.
rw
    Grants read and write permission to the exported share.
Another configuration option specifies whether a user trying to establish a CIFS connection with the share must always provide a user name and password, or whether they can connect without one. In the latter case, only restricted access to the share is allowed; the same kind of access is allowed to anonymous or guest user accounts. This share option can have one of the following values:
guest
    SFS allows restricted access to the share when no user name or password is provided.
noguest
    SFS always requires a user name and password for all connections to this share. This is the default value.
SFS supports CIFS opportunistic locks (oplocks). You can enable or disable them for a specific share. Opportunistic locks improve performance for some workloads. The corresponding share configuration option can be given one of the following values:
oplocks
    SFS supports opportunistic locks on the files in this share. This is the default value.
nooplocks
    No opportunistic locks will be used for this share. Disable oplocks when:
    1) A file system is exported over both the CIFS and NFS protocols.
    2) Either the CIFS or NFS protocol has read and write access.
There are more share configuration options that can be used to specify the user and group who own the share. If you do not specify these options for a share, SFS uses the default values for these options, which are the privileged or root SFS user and group. You may want to change the default values to allow a specific user or group to be the share owner.
owner
    By default, the SFS root user owns the root directory of the exported share. This lets CIFS clients create folders and files in the share. However, some operations require owner privileges; for example, changing the owner itself, and changing permissions of the top-level folder (that is, the root directory in UNIX terms). To enable these operations, you can set the owner option to a specific user name, and that user can perform the privileged operations.
group
    By default, the SFS root group is the primary group owner of the root directory of the exported share. This lets CIFS clients create folders and files in the share. However, some operations require group privileges; for example, changing the group itself, and changing permissions of the top-level folder (that is, the root directory in UNIX terms). To enable these operations, you can set the group option to a specific group name, and that group can perform the privileged operations.
ip
    SFS lets you specify a virtual IP address. This address must be part of the SFS cluster, and is used by the system to serve the share internally.
After a file system is exported as a CIFS share, you can change one or more share options. You do this with the same share add command, giving the name of an existing share and the name of the file system exported with this share. SFS recognizes that the share is already exported and only changes the values of the share options. For example, to export the file system fs1 with the name share1, enter the following:
CIFS> share add fs1 share1 "owner=administrator,group=domain users,rw"
Exporting CIFS filesystem : share1 ...
CIFS> share show
ShareName  FileSystem
share1     fs1
To display the information about all of the exported shares, enter the following:
CIFS> share show
For example:
CIFS> share show
ShareName  FileSystem  ShareOptions
share1     fs1         owner=root,group=root
To display the information about one specific share, enter the following:
CIFS> share show sharename
For example:
CIFS> share show share1
ShareName  VIP Address
share1     10.10.10.10
To delete a share, enter the following:

CIFS> share delete sharename

where sharename is the name of the share you want to delete. For example:
CIFS> share delete share1
Unexporting CIFS filesystem : share1 ..
CIFS>
Using SFS as a CIFS server Sharing file systems using CIFS and NFS protocols
[Figure: a 2-node SFS cluster; a Windows user accesses data by the CIFS protocol and a UNIX user accesses data by the NFS protocol.]
It is recommended that you disable the oplocks option when both of the following are true:
A file system is exported over both the CIFS and NFS protocols.
Either the CIFS or the NFS export is set with read and write permission.
To disable oplocks, see "Setting share properties."
Note: When a share is exported over both the NFS and CIFS protocols, applications running on the NFS and CIFS clients may attempt to read or write the same file concurrently. Because the locking models used by these protocols differ, this may lead to unexpected results, for example, an application reading stale data. For this reason, SFS warns you when a share export is requested over NFS or CIFS and the same share has already been exported over the other protocol, if at least one of these exports allows write access.
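The hazard described in the note can be pictured with a small Python sketch (illustrative only, not SFS code): two writers that honor the same lock serialize their updates cleanly, but a CIFS client and an NFS client do not share a locking model, so no such shared lock exists between them and updates can interleave unpredictably.

```python
import threading

# Two "clients" appending records to the same shared buffer. Because
# both honor the SAME lock, no records are lost or torn. CIFS oplocks
# and NFS locks are different mechanisms, so this guarantee does not
# hold across the two protocols.
buf = []
lock = threading.Lock()

def writer(name, count):
    for i in range(count):
        with lock:                      # both writers honor one lock
            buf.append(f"{name}:{i}")

t1 = threading.Thread(target=writer, args=("cifs", 100))
t2 = threading.Thread(target=writer, args=("nfs", 100))
t1.start(); t2.start()
t1.join(); t2.join()
assert len(buf) == 200                  # no lost updates
```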
To export a file system to Windows and UNIX users with read-only permission, go to CIFS mode and enter the following commands:
CIFS> show
Name                   Value
----                   -----
netbios name           mycluster
ntlm auth              yes
allow trusted domains  no
homedirfs
quota                  0
idmap backend          rid:10000-20000
workgroup              SYMANTECDOMAIN
security               ads
Domain                 SYMANTECDOMAIN.COM
Domain user            administrator
Domain Controller      SYMSERVER

CIFS> share add fs1 share1 rw
SFS cifs WARNING V-288-0 Filesystem (fs1) is already shared over NFS with 'ro' permission.
Do you want to proceed (y/n): y
Exporting CIFS filesystem : share1 ..
CIFS> share show
ShareName  FileSystem  ShareOptions
share1     fs1         owner=root,group=root,rw
CIFS>
When the file system in CIFS is set to homedirfs, the SFS software assumes that the file system is exported to CIFS users in read and write mode. SFS does not allow you to export the same file system as both a CIFS share and a home directory file system (homedirfs). For example, if the file system fs1 is already exported as a CIFS share, you cannot set it as homedirfs.
Using SFS as a CIFS server About SFS cluster and load balancing
To request that a file system be used for home directories, you need to export the file system. Go to the CIFS mode and enter the following:
CIFS> share show
ShareName  FileSystem  ShareOptions
share1     fs1         owner=root,group=root,rw
CIFS> set homedirfs fs1
SFS cifs ERROR V-288-615 Filesystem (fs1) is already exported by another CIFS share.
CIFS>
Each of the share's top-level directories is treated as a single share. Each top-level directory becomes like the root of a new share, and only one node at a time can perform file operations on this new share. The ownership of different top-level directories is assigned to different nodes in the SFS cluster, balancing the CIFS-related workload.
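The distribution described above can be sketched in a few lines of Python. This is a hypothetical illustration of round-robin ownership assignment, not SFS's internal algorithm; the directory and node names are made up.

```python
def assign_top_level_dirs(dirs, nodes):
    """Assign each top-level directory of a split share to a cluster
    node in round-robin order, spreading the CIFS workload."""
    return {d: nodes[i % len(nodes)] for i, d in enumerate(dirs)}

# Three top-level directories spread across a 2-node cluster.
owners = assign_top_level_dirs(["dir1", "dir2", "dir3"],
                               ["sfs_1", "sfs_2"])
```

Note that, as the Caution below states, this style of assignment is independent of each node's actual load: it only balances the number of directories, not the work they generate.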
Caution: You cannot specify which node owns the split share. If the node getting the ownership already has a heavy load, the new load distribution may worsen your situation.
Use the CIFS> share show command to view which virtual IP is assigned to a share. Use the Network> ip addr show command to view which node is assigned a virtual IP. This shows which node is the current owner of the exported shares.
Splitting a share
You can split an exported share with the split command. This changes the way the CIFS-related workload is allocated to the SFS nodes. The purpose of the split command is to have multiple nodes serving one large share. Although the command distributes the subdirectory shares in a round-robin fashion, the split is not based on the actual load. Restrictions of the split command include the following:
You cannot split a sharename more than once.
You cannot delete the subdirectory share of a split share.
You cannot undo the effects of the split command.
To split a share
For example:
CIFS> split share1
Splitting share splitshare : .........Success.
To display the list of all of the CIFS shares, enter the following command. In the output, an asterisk after the share name indicates that the share is split.
CIFS> share show
ShareName  FileSystem
share1*    fs3
share2     fs2
share3     fs3
To create a new top-level directory called newdir in an already split share called share1, enter the following:
CIFS> split share1 newdir
Creating directory: newdir
Success: Directory 'newdir' created
homedir quota
Enables use of quotas on home directory file systems. See To enable use of quotas on home directory file systems on page 197.
homedir set
Manually creates a home directory. See To manually create a home directory on page 198.
homedir setall
Sets the quota for all of the users. The command also modifies the value of the global quota. See To set the quota value for all of the home directories on page 199.
homedir show
Displays information about home directories. See To display information about home directories on page 200.
homedir deleteall
Deletes the home directories. See To delete the home directories on page 201.
To reserve one or more file systems for home directories, enter the following:
CIFS> set homedirfs [filesystemlist]
where filesystemlist is a comma-separated list of names of the file systems which are used for the home directories. For example:
CIFS> set homedirfs fs1,fs2,fs3
Global option updated.
Note: Restart the CIFS server.
If you want to remove the file systems you previously set up, enter the command again, without any file systems:
CIFS> set homedirfs
To find which file systems (if any) are currently used for home directories, enter the following:
CIFS> show
After you select one or more of the file systems to be used in this way, you cannot export the same file systems as ordinary CIFS shares. If you want to change the current selection, for example, to add an additional file system to the list of home directory file systems or to specify that no file system should be used for home directories, you have to use the same CIFS> set homedirfs command. In each case you must enter the entire new list of home directory file systems, which may be an empty list when no home directory file systems are required. SFS treats home directories differently from ordinary shares. The differences are as follows:
An ordinary share is used to export a file system, while a number of home directories can be stored in a single file system.
The file systems used for home directories cannot be exported as ordinary shares.
The CIFS> split command can be used for an ordinary share but not for a home directory share.
Exporting a home directory share is done differently than exporting an ordinary share. Removing these two kinds of shares is also done differently.
The configuration options you specify for an ordinary share (such as read-only or use of opportunistic locks) are different from the ones you specify for a home directory share.
where quotaoption is the variable you want to enter for the command. To enable the use of quotas, enter the following:
CIFS> homedir quota on
To find the current settings for a home directory, including the quota, enter the following:
CIFS> homedir show [username] [domainname]

username    The name of the home directory.
domainname  The Active Directory/Windows NT domain name, or 'local' for the SFS local user (the default is local).
To find the current settings for all home directories, including quotas, enter the following:
CIFS> homedir show
When you connect to your home directory for the first time, and if the home directory has not already been created, SFS selects one of the available home directory file systems and creates the home directory there. The file system is selected in a way that tries to keep the number of home directories balanced across all available home directory file systems. The automatic creation of a home directory does not require any commands, and is transparent to both the users and the SFS administrators. The quota limits the amount of disk space you can allocate for the files in a home directory. You can set the same quota value for all home directories using the CIFS> homedir setall command.
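The balanced placement described above can be sketched as follows. This is an illustrative guess at the selection logic, not SFS's actual implementation; the file system names and counts are invented.

```python
def pick_filesystem(homedir_counts):
    """Pick the home-directory file system currently holding the fewest
    home directories, keeping the count balanced across file systems."""
    return min(homedir_counts, key=homedir_counts.get)

# fs2 currently holds the fewest home directories, so a new user's
# home directory would be created there.
counts = {"fs1": 12, "fs2": 7, "fs3": 9}
fs = pick_filesystem(counts)
```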
To set the quota value which will be applied to all home directories, enter the following:
CIFS> homedir setall quota
For example:
CIFS> homedir setall 6M
Setting quota for CIFS local user: usr1
Setting quota for CIFS local user: usr2
Setting quota for SFSQA domain user: administrator
Setting quota for SFSQA domain user: smith
Done
CIFS>
SFS CIFS currently uses soft quotas for home directories. This means that the storage space quota can be exceeded, but only for a period of time. This period is seven days and it cannot be changed. After this period has expired, if the allocated space is still over the limit, any new request to allocate space for files in the same home directory fails.
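The soft-quota behavior above can be expressed as a short sketch. This is a hypothetical model of the rule (exceeding the limit is tolerated for at most seven days), not SFS's code; the function and parameter names are invented.

```python
from datetime import datetime, timedelta

GRACE = timedelta(days=7)   # SFS's fixed, non-configurable grace period

def can_allocate(used, request, quota, over_since, now):
    """Soft-quota check for a home directory.

    used        current space consumed
    request     additional space requested
    quota       configured soft limit (e.g. from homedir setall)
    over_since  when the directory first exceeded the limit, or None
    now         current time
    """
    if used + request <= quota:
        return True                 # within the limit: always allowed
    if over_since is None:
        return True                 # first excursion over the limit
    return now - over_since <= GRACE  # allowed only inside the grace window
```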
To display information about a specific user's home directory, enter the following:
CIFS> homedir show [username] [domainname]

username    The name of the home directory.
domainname  The domain where the home directory is located.
Click the Save button, which saves the file to a new home directory.

To delete a home directory share
You can delete all of the home directory shares with the CIFS> homedir deleteall command. This also deletes all files and subdirectories in these shares.
Using SFS as a CIFS server About managing local users and groups
After you delete the existing home directories, you can again create the home directories manually or automatically.

To delete the home directories
Respond with y(es) or n(o) to confirm the deletion. After you delete the home directories, you can stop SFS from serving home directories by using the CIFS> set homedirfs command.

To disable creation of home directories
To specify that there are no home directory file systems, enter the following:
CIFS> set homedirfs
local user delete
Deletes local user accounts. See To delete the local CIFS user on page 204.

local user show
Displays the user ID and lists the groups to which the user belongs. If you do not enter the optional username, the command lists all existing CIFS users. See To display the local CIFS user(s) on page 203.

local user members
Adds a user to one or more groups. For existing users, this command changes the user's group membership. See To change a user's group membership on page 204.
where username is the name of the user. The grouplist is a comma-separated list of group names. For example:
CIFS> local user add usr1 grp1,grp2
Adding USER : usr1
Success: User usr1 created successfully
where username is the name of the user whose password you are changing. For example, to reset the local user password for usr1, enter the following:
CIFS> local password usr1
Changing password for usr1
New password:*****
Re-enter new password:*****
Password changed for user: 'usr1'
where username is the name of the user. For example, to list all local users:
CIFS> local user show
List of Users
-------------
usr1
usr2
usr3
where username is the name of the local user you want to delete. For example:
CIFS> local user delete usr1
Deleting User: usr1
Success: User usr1 deleted successfully
where username is the local user name being added to the grouplist. Group names in the grouplist must be separated by commas. For example:
CIFS> local user members usr3 grp1,grp2
Success: usr3's group modified successfully
local group show
Displays the list of available local groups you created. See To list all local groups on page 205.
local group delete
Deletes a local CIFS group. See To delete the local CIFS groups on page 206.
where groupname, if specified, lists all of the users that belong to that specific group. For example, to list all local groups:
CIFS> local group show
List of groups
--------------
grp1
grp2
grp3
For example:
CIFS> local group show grp1
GroupName  UsersList
---------  ---------
grp1       usr1, usr2, usr3, usr4
where groupname is the name of the local CIFS group. For example:
CIFS> local group delete grp1
Deleting Group: grp1
Success: Group grp1 deleted successfully
Chapter
10
Using FTP
This chapter includes the following topics:
About FTP
Displaying FTP server
About FTP server commands
About FTP set commands
About FTP session commands
Using the logupload command
About FTP
The File Transfer Protocol (FTP) server feature allows clients to access files on the SFS servers using the FTP protocol. The FTP service provides secure and non-secure access to files on the SFS servers. The FTP service runs on all of the nodes in the cluster and provides simultaneous read and write access to the files. The FTP service also provides configurable anonymous access to the filer. The FTP commands are used to configure the FTP server. By default, the FTP server is not running. You can start the FTP server using the FTP> server start command. The FTP server starts on the standard FTP port 21. FTP mode commands are listed in Table 10-1. To access the commands, log into the administrative console (master, system-admin, or storage-admin) and enter FTP> mode. For login instructions, see About using the SFS command-line interface.
server
Starts, stops, and displays the status of the FTP server. See About FTP server commands on page 208.
set
Configures the FTP server. See About FTP set commands on page 210.
session
Displays and terminates the FTP sessions. See About FTP session commands on page 216.
logupload
Uploads the FTP logs to a URL. See Using the logupload command on page 219.
Note: All configuration changes made using the FTP> set commands take effect only when the FTP server is restarted.

Table 10-2
server status
Displays the status of the FTP server.
server start
Starts the FTP server on all nodes. If the FTP server is already started, the SFS software clears any faults and tries to start the FTP server. See To start the FTP server on page 209.
server stop
Stops the FTP server and terminates any existing FTP sessions. By default, the FTP server is not running. See To stop the FTP server on page 210.
set anonymous_login_dir
Specifies the login directory for anonymous users. The default value of this parameter is /vx/. Valid values of this parameter start with /vx/. Make sure that the anonymous user (UID:40 GID:49 UNAME:ftp) has the appropriate permissions to read files in login_directory. For the changes to take effect, you need to restart the FTP server. Enter FTP> server stop followed by FTP> server start. See To set anonymous logins on page 214.
set anonymous_write
set allow_non_ssl
Specifies whether or not to allow non-secure (plain-text) logins into the FTP server. Enter yes (default) to allow non-secure (plain-text) logins to succeed. Enter no to allow non-secure (plain-text) logins to fail. For the changes to take effect you need to restart the FTP server. Enter FTP> server stop followed by FTP> server start. See To set non-secure logins on page 214.
set max_connections
Specifies the maximum number of simultaneous FTP clients allowed. Valid values for this parameter range from 1-9999. The default value is 2000. It affects the entire cluster. For the changes to take effect, you need to restart the FTP server. Enter FTP> server stop followed by FTP> server start. See To set maximum connections on page 215.
set passive_port_range
set idle_timeout
Specifies the amount of time in minutes after which an idle connection is disconnected. Valid values for time_in_minutes range from 1 to 600 (default value is 15 minutes). For the changes to take effect, you need to restart the FTP server. Enter FTP> server stop followed by FTP> server start. See To set idle timeout on page 215.
You need to stop and then start the server for the new setting to take effect. For example:
FTP> set anonymous_logon yes
FTP> show
Parameter            Current Value  New Value
---------            -------------  ---------
max_connections      2000
anonymous_logon      no             yes
anonymous_write      no
allow_non_ssl        yes
anonymous_login_dir  /vx/
passive_port_range   30000:40000
idle_timeout         15 minutes
FTP> server stop
FTP> server start
FTP> show
Parameter            Current Value
---------            -------------
max_connections      2000
anonymous_logon      yes
anonymous_write      no
allow_non_ssl        yes
anonymous_login_dir  /vx/
passive_port_range   30000:40000
idle_timeout         15 minutes
where login_directory is the login directory of the anonymous users on the FTP server.

To set anonymous write access
no (default)
For example:
FTP> set anonymous_write yes
FTP>
To set non-secure login access to the FTP server, enter the following:
FTP> set allow_non_ssl yes|no

yes (default)  Allows non-secure (plain-text) logins to succeed.
no             Causes non-secure (plain-text) logins to fail.
For example:
FTP> set allow_non_ssl no
FTP>
To set the maximum number of allowed simultaneous FTP clients, enter the following:
FTP> set max_connections connections_number
where connections_number is the number of concurrent FTP connections allowed on the FTP server. For example:
FTP> set max_connections 3000
FTP>
To set the range of port numbers to listen on for passive FTP transfers, enter the following:
FTP> set passive_port_range port_range
where port_range is the range of port numbers to listen on for passive FTP transfers. For example:
FTP> set passive_port_range 35000:45000
FTP>
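The low:high format used by passive_port_range can be validated with a short sketch. The format is taken from the examples above; the exact bounds SFS enforces are not documented here, so the checks below (ports 1-65535, low not greater than high) are assumptions.

```python
def parse_port_range(spec):
    """Parse a passive_port_range value such as '30000:40000' into a
    (low, high) tuple, rejecting malformed or inverted ranges."""
    low_s, sep, high_s = spec.partition(":")
    if not sep:
        raise ValueError("expected a range in low:high form")
    low, high = int(low_s), int(high_s)
    if not (0 < low <= high <= 65535):
        raise ValueError("invalid port range")
    return low, high
```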
To set the amount of time a connection can stay idle before disconnecting, enter the following:
FTP> set idle_timeout time_in_minutes
where time_in_minutes is the amount of time you want the connection to stay idle before disconnecting. For example:
FTP> set idle_timeout 30
FTP>
To view all of the FTP> set command changes, enter the following:
FTP> show
Parameter            Current Value  New Value
---------            -------------  ---------
max_connections      2000           3000
anonymous_logon      no             yes
anonymous_write      no             yes
allow_non_ssl        yes            no
anonymous_login_dir  /vx/
passive_port_range   30000:40000    35000:45000
idle_timeout         15 minutes     30 minutes
session showdetail
Displays the details of each session that matches the filter_options criteria. If no filter_options are specified, all sessions are displayed. If multiple filter options are provided, only sessions matching all of the filter options are displayed. Filter options are combined with ','. The details displayed include: Session ID, User, Client IP, Server IP, State (UL for uploading, DL for downloading, or IDLE), and File (the name of the file being uploaded or downloaded). If a '?' appears under User, the session is not yet authenticated. See To display the FTP session details on page 218.

session terminate
Terminates the session given by the session_id variable. What you enter is the same session identifier displayed under Session ID by the FTP> session showdetail command. See To terminate an FTP session on page 218.
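The AND semantics of comma-combined filter options can be sketched as follows. This is an illustrative model of the matching rule described above, not SFS code; the session dictionaries and field names mirror the showdetail output columns but are invented data.

```python
def match_session(session, filter_spec):
    """Return True only if the session matches EVERY key=value filter
    option in the comma-separated filter_spec."""
    for opt in filter_spec.split(","):
        key, _, value = opt.partition("=")
        if session.get(key) != value:
            return False
    return True

sessions = [
    {"id": "sfs_2.1113", "client_ip": "10.209.107.21",
     "server_ip": "10.209.105.112", "state": "IDLE"},
    {"id": "sfs_1.1112", "client_ip": "10.209.106.11",
     "server_ip": "10.209.105.112", "state": "DL"},
]

# Both filters must hold, so only sfs_2.1113 matches.
hits = [s for s in sessions
        if match_session(s, "server_ip=10.209.105.112,client_ip=10.209.107.21")]
```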
where filter_options display the details of the sessions under specific headings. Filter options can be combined by using ','. If multiple filter options are used, sessions matching all of the filter options are displayed. For example, to display all of the session details, enter the following:
FTP> session showdetail
Session ID  User   Client IP       File
----------  ----   ---------       ----
sfs_1.1111  user1  10.209.105.219
sfs_1.1112  user2  10.209.106.11   file123
sfs_2.1113  user3  10.209.107.21
sfs_1.1117  user4  10.209.105.219  file345
sfs_2.1118  user1  10.209.105.219
sfs_1.1121  user5  10.209.111.219
For example, to display the details of the current FTP sessions to the Server IP (10.209.105.112), originating from the Client IP (10.209.107.21), enter the following:
FTP> session showdetail server_ip=10.209.105.112,client_ip=10.209.107.21
Session ID  User   Client IP      Server IP       State  File
----------  ----   ---------      ---------       -----  ----
sfs_2.1113  user3  10.209.107.21  10.209.105.112  IDLE
To terminate one of the FTP sessions displayed in the FTP> session showdetail command, enter the following:
FTP> session terminate session_id
where session_id is the unique identifier for each FTP session displayed in the FTP> session showdetail output.
FTP> session terminate sfs_2.1113
Session sfs_2.1113 terminated
To upload the FTP server logs to a specified URL, enter the following:
FTP> logupload url [nodename]

url       The URL where the FTP logs are uploaded. The URL supports both FTP and SCP (secure copy protocol). If a nodename is specified, only the logs from that node are uploaded. The default name for the uploaded file is ftp_log.tar.gz.
nodename  The node on which the operation occurs. Enter the value all for the operation to occur on all of the nodes in the cluster.
password  Use the password you already set up on the node to which you upload the logs.
For example, to upload the logs from all of the nodes to an SCP-based URL:
FTP> logupload scp://user@host:/path/to/directory all
Password:
Collecting FTP logs, please wait.....
Uploading the logs to scp://root@host:/path/to/directory, please wait...done
Chapter
11
Configuring event notifications
This chapter includes the following topics:
About configuring event notifications
About severity levels and filters
About email groups
About syslog event logging
Displaying events
About SNMP notifications
Configuring events for event reporting
Exporting events in syslog format to a given URL
syslog
showevents
snmp
Configures an SNMP management server. See Configuring an SNMP management server on page 233.
event
Configures events for event reporting. See Configuring events for event reporting on page 236.
exportevents
Exports events in syslog format to a given URL. See Exporting events in syslog format to a given URL on page 237.
network - If an alert is for a networking event, selecting the "network" filter triggers that alert. If you select only the "network" filter and an alert is for a storage-related event, the alert is not sent.
storage - For storage-related events, for example, file systems, snapshots, disks, and pools.
all - For all events.
Adding email groups.
Adding filters to the group.
Adding email addresses to the email group.
Adding event severity to the group.
Configuring an external email server for sending the event notification emails.

Email group commands    Definition
email show
Displays an existing email group or details for the email group. See To display an existing email group or details for the email group on page 225.

email add group
Uses email groups to group multiple email addresses into one entity; the email group is used as the destination of the SFS email notification. Email notification properties can be configured for each email group. When an email group is first added, it has the default filter all and the default severity info. See To add an email group on page 225.
email add severity
Adds a severity level to an email group. See To add a severity level to an email group on page 226.

email add filter
Adds a filter to a group. See To add a filter to a group on page 227.

email del email-address
Deletes an email address from a specified group. See To delete an email address from a specified group on page 227.

email del filter
Deletes a filter from a specified group. See To delete a filter from a specified group on page 228.

email del group
Deletes an email group. See To delete an email group on page 228.

email del severity
Deletes a severity from a specified group. See To delete a severity from a specified group on page 228.

email get
Displays the details of the configured email server: the name of the configured email server, the email user's name, and the email user's password. See To display the details of the configured email server on page 229.

email set
Sets the details of the email server and the email user. See To set the details of the email server on page 229.

email set (without options)
Deletes the configured email server. See To delete the configured email server on page 229.
To display an existing email group or details for the email group, enter the following:
Report> email show [group]
group is optional, and it specifies the group for which to display details. If the specified group does not exist, an error message is displayed. For example:
Report> email show root
Group Name: root
Severity of the events: info,debug
Filter of the events: all,storage
Email addresses in the group: adminuser@localhost
OK Completed
where group specifies the name of the added group and can only contain the following characters:
Entering invalid characters results in an error message. If the entered group already exists, then no error message is displayed. For example:
Report> email add group alert-grp OK Completed
For example:
Report> email add email-address alert-grp symantecexample.com
OK Completed

group          Specifies the group to which the email address is added. If the specified email group does not exist, an error message is displayed.
email-address  Specifies the email address to add to the group. If the email address is not a valid email address (for example, of the form name@symantecexample.com), a message is displayed. If the email address has already been added to the specified group, a message is displayed.
For example:
Report> email add severity alert-grp alert
OK Completed

group     Specifies the email group for which to add the severity. If the specified email group does not exist, an error message is displayed.
severity  Indicates the severity level to add to the email group. See About severity levels and filters on page 222. Entering an invalid severity results in an error message prompting you to enter a valid severity. Only one severity level is allowed at a time. Two different groups can have the same severity levels and filters, and each group can have its own severity definition. The severity you define is the lowest level that triggers notifications; all severities higher than it also trigger notifications.
filter
For example:
Report> email add filter root storage
OK Completed
email-address
For example, to delete an existing email address from the email group, enter the following:
Report> email del email-address root testuser@localhost
filter
group specifies the name of the email group to be deleted. If the specified email group does not exist, an error message is displayed.

To delete a severity from a specified group
severity
To display the details of the configured email server, enter the following:
Report> email get
E-Mail Server: smtp.symantec.com
E-Mail Username: adminuser
E-mail User's Password: ********
OK Completed
email-user
Specifies the email user whose details you want to display. For example:
Report> email set smtp.symantec.com adminuser
Enter password for user 'adminuser': ********
To delete the configured email server, enter the following command without any options:
Report> email set
For syslog messages, you can select options to report about storage, network, or all events. For the list of severities for syslog messages, see Table 11-2.

Table 11-4    Commands
syslog show
syslog add
syslog set severity
Sets the severity for the syslog server. See To set the severity of the syslog server on page 231.

syslog set filter
Sets the syslog server filter. See To set the filter of the syslog server on page 231.

syslog get
Displays the values of the configured syslog server. See To display the values of the configured syslog server on page 231.

syslog delete
Deletes a syslog server. See To delete a syslog server on page 231.
syslog-server-ipaddr specifies the hostname or the IP address of the external syslog server.
For example:
Report> syslog set severity warning
where the value indicates the severity for the syslog server. See About severity levels and filters on page 222.

To set the filter of the syslog server
where the value indicates the filter for the syslog server. See About severity levels and filters on page 222.

To display the values of the configured syslog server
To display the values of the configured syslog server, enter the following:
Report> syslog get filter|severity
Displaying events
To display events
number_of_events specifies the number of events that you want to display. If you leave number_of_events blank, or if you enter 0, SFS displays all of the events.
snmp show
Displays the current list of SNMP management servers. See To display the current list of SNMP management servers on page 233.
snmp delete
Deletes an already configured SNMP management server. See To delete an already configured SNMP server on page 234.
snmp set severity
Sets the severity for SNMP notifications. See To set the severity for SNMP notifications on page 234.

snmp set filter
Sets the filter for SNMP notifications. See To set the filter for SNMP notifications on page 235.

snmp get
Displays the values of the configured SNMP notifications. See To display the values of the configured SNMP notifications on page 235.
snmp-mgmtserver-ipaddr specifies the host name or the IP address of the SNMP management server. For example, if using the IP address, enter the following:
Report> snmp add 10.10.10.10
OK Completed
To display the current list of SNMP management servers, enter the following:
Report> snmp show
Configured SNMP management servers: 10.10.10.10,mgmtserv1.symantec.com
OK Completed
snmp-mgmtserver-ipaddr specifies the host name or the IP address of the SNMP management server. For example:
Report> snmp delete 10.10.10.10
OK Completed
If you enter an incorrect value for snmp-mgmtserver-ipaddr, an error message is displayed. For example:

Report> snmp delete mgmtserv22.symantec.com
SFS snmp delete ERROR V-288-26 Cannot delete SNMP management server, it doesn't exist.
where the value indicates the severity level of the notification. For example:
Report> snmp set severity warning
OK Completed
See About severity levels and filters on page 222. Notifications are sent for events having the same or higher severity.
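The "same or higher severity" rule can be sketched as a simple threshold over an ordered severity list. The ordering below follows common syslog conventions; the exact SFS severity list is defined in Table 11-2, so treat this ordering as an assumption of the sketch.

```python
# Assumed ordering, lowest to highest severity.
SEVERITIES = ["debug", "info", "warning", "error", "crit", "alert", "emerg"]

def should_notify(event_severity, configured_severity):
    """Send a notification only for events whose severity is the same
    as, or higher than, the configured level."""
    return (SEVERITIES.index(event_severity)
            >= SEVERITIES.index(configured_severity))
```

For example, with the severity set to warning, an error event is reported but an info event is not.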
For example:
Report> snmp set filter network
OK Completed
where the value indicates the filter for the notification. See About severity levels and filters on page 222. Notifications are sent for events matching the given filter.

To display the values of the configured SNMP notifications
To display the values of the configured SNMP notifications, enter the following:
Report> snmp get filter|severity
For example:
Report> snmp get severity
Severity of the events: warning
OK Completed

Report> snmp get filter
Filter for the events: network
OK Completed
To export the SNMP MIB file to a given URL, enter the following:
Report> snmp exportmib url
url specifies the location the SNMP MIB file is exported to. For example:
Report> snmp exportmib scp://admin@server1.symantec.com:/tmp/sfsfs_mib.txt
Password: *****
OK Completed
To set the time interval or the number of duplicate events sent for notifications, enter the following:
Report> event set dup-frequency number
For the event set dup-frequency command, number indicates the time interval for which only one event of duplicate events is sent for notifications. For example:
Report> event set dup-frequency 120
OK Completed
For the event set dup-number command, number indicates the number of duplicate events to ignore during notifications.
Report> event set dup-number number
For example:
Report> event set dup-number 10
OK Completed
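Taken together, dup-frequency and dup-number describe a suppression window for duplicate events. The following sketch shows one way to read those semantics; the class and method names are hypothetical and SFS's internal implementation is not documented here:

```python
class DuplicateEventFilter:
    """Sketch of dup-frequency / dup-number suppression: within one
    dup_frequency window only the first of a run of duplicate events is
    sent, and after dup_number duplicates have been ignored the event
    is sent again. Names are illustrative, not SFS internals."""

    def __init__(self, dup_frequency: int, dup_number: int):
        self.dup_frequency = dup_frequency   # window in seconds, e.g. 120
        self.dup_number = dup_number         # duplicates to ignore, e.g. 10
        self._state = {}                     # event id -> (last sent time, ignored count)

    def should_send(self, event_id: str, now: float) -> bool:
        sent_at, ignored = self._state.get(event_id, (None, 0))
        if (sent_at is None
                or now - sent_at >= self.dup_frequency
                or ignored >= self.dup_number):
            self._state[event_id] = (now, 0)   # send, and reset the window
            return True
        self._state[event_id] = (sent_at, ignored + 1)
        return False
```

For example, with dup-frequency 120 a duplicate arriving 10 seconds after the first is suppressed, while one arriving 130 seconds later is sent.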
To display the time interval or the number of duplicate events sent for notifications
For example:
Report> event get dup-frequency
Duplicate events frequency (in seconds): 120
OK Completed
To display the number of duplicate events sent for notifications, enter the following:
Report> event get dup-number
For example:
Report> event get dup-number
Duplicate number of events: 10
OK Completed
Supported protocols for the export URL: FTP, SCP
url specifies the location to which the events in syslog format are exported. For example: scp://root@server1.symantecexample.com:/exportevents/event.1. If the URL specifies a remote directory, the default filename is sfsfs_event.log.
To export audit events in syslog format to a given URL, enter the following:
Report> exportevents url [audit]
url specifies the location to which the audit events in syslog format are exported. For example: scp://root@server1.symantecexample.com:/exportauditevents/auditevent.1. If the URL specifies a remote directory, the default filename is sfsfs_audit.log.
Chapter
12
Configuring backup
This chapter includes the following topics:
About backup
Configuring backups using NetBackup or other third-party backup applications
About NetBackup
Adding a NetBackup master server to work with SFS
Configuring or changing the virtual IP address used by NetBackup and NDMP data server installation
Configuring the virtual name of NetBackup
About Network Data Management Protocol
About NDMP supported configurations
About the NDMP policies
Displaying all NDMP policies
About retrieving the NDMP data
Restoring the default NDMP policies
About backup configurations
About backup
The Backup commands are defined in Table 12-1. To access the commands, log into the administrative console (for master, system-admin, or storage-admin) and enter Backup> mode. For login instructions, go to About using the SFS command-line interface.
virtual-ip
Configures the NetBackup and NDMP data server installation on SFS nodes to use ipaddr as its virtual IP address. See Configuring or changing the virtual IP address used by NetBackup and NDMP data server installation on page 244.
virtual-name
Configures the NetBackup installation on SFS nodes to use name as its hostname. See Configuring the virtual name of NetBackup on page 245.
ndmp
Transfers data between the data server and the tape server under the control of a client. The Network Data Management Protocol (NDMP) is used for data backup and recovery. See About Network Data Management Protocol on page 246.
show
Displays settings of the configured servers. See About backup configurations on page 259.
status
Displays status of configured servers. See About backup configurations on page 259.
start
Starts the configured servers. See About backup configurations on page 259.
stop
Stops the configured servers. See About backup configurations on page 259.
For information about the Veritas NetBackup 6.5 client capability, refer to the Veritas NetBackup 6.5 product documentation set. The Backup> netbackup commands configure the local NetBackup installation of SFS to use an external NetBackup master server, Enterprise Media Manager (EMM) server, or media server. When NetBackup is installed on SFS, it acts as a NetBackup client to perform IP-based backups of SFS file systems.

Note: A new public IP address, not an IP address that is currently in use, is required for configuring the NetBackup client. Use the Backup> virtual-ip and Backup> virtual-name commands to configure the NetBackup client.
About NetBackup
SFS includes built-in client software for Symantec's NetBackup data protection suite. If NetBackup is the enterprise's data protection suite of choice, file systems hosted by SFS can be backed up to a NetBackup media server. To configure the built-in NetBackup client, you need the names and IP addresses of the NetBackup master and media servers. Backups are scheduled from those servers, using NetBackup's administrative console. Consolidating storage reduces the administrative overhead of backing up and restoring many separate file systems. With a 256 TB maximum file system size, SFS makes it possible to collapse file storage into fewer administrative units, thus reducing the number of backup interfaces and operations necessary. All critical file data can be backed up and restored through the NetBackup client software included with SFS (separately licensed NetBackup master and media servers running on separate computers are required), or through any backup management software that supports NAS systems as data sources.
netbackup emm-server
Adds an external NetBackup Enterprise Media Manager (EMM) server (which can be the same as the NetBackup master server) to work with SFS.
Note: If you want to use NetBackup to back up SFS file systems, you must add an external NetBackup EMM server. See To add a NetBackup EMM server on page 243.

netbackup media-server add

Adds an external NetBackup media server (if the NetBackup media server is not co-located with the NetBackup master server).
where server is the hostname of the NetBackup master server. Make sure that server can be resolved through DNS, and its IP address can be resolved back to server through the DNS reverse lookup. For example:
Backup> netbackup master-server nbumaster.symantecexample.com
OK Completed
where server is the hostname of the NetBackup EMM server. Make sure that server can be resolved through DNS, and its IP address can be resolved back to server through the DNS reverse lookup. For example:
Backup> netbackup emm-server nbumedia.symantecexample.com
OK Completed
where server is the hostname of the NetBackup media server. Make sure that server can be resolved through DNS, and its IP address can be resolved back to server through the DNS reverse lookup. For example:
Backup> netbackup media-server add nbumedia.symantecexample.com
OK Completed
where server is the hostname of the NetBackup media server you want to delete. For example:
Backup> netbackup media-server delete nbumedia.symantecexample.com
OK Completed
Configuring or changing the virtual IP address used by NetBackup and NDMP data server installation
You can configure or change the virtual IP address used by NetBackup and the NDMP data server installation on SFS nodes. This is a highly available virtual IP address in the cluster. For information about the Veritas NetBackup 6.5 client capability, refer to the Veritas NetBackup 6.5 product documentation set. Note: If you are using NetBackup and the NDMP data server installation on SFS nodes, configure the virtual IP address using the Backup> virtual-ip command so that it is different from all of the virtual IP addresses, including the console server IP address and the physical IP addresses used to install SFS.
To configure or change the virtual IP address used by NetBackup and NDMP data server installation
To configure or change the virtual IP address used by NetBackup and the NDMP data server installation on SFS nodes, enter the following:
Backup> virtual-ip ipaddr
where ipaddr is the virtual IP address to be used with the NetBackup and the NDMP data server installation on the SFS nodes. Make sure that ipaddr can be resolved back to the hostname that is configured by using the Backup> virtual-name command. For example:
Backup> virtual-ip 10.10.10.10
OK Completed
To configure the NetBackup installation on SFS nodes to use name as its hostname, enter the following:
Backup> virtual-name name
where name is the hostname to be used by the NetBackup installation on SFS nodes.
Make sure that name can be resolved through DNS, and its IP address can be resolved back to name through the DNS reverse lookup. Also, make sure that name resolves to an IP address configured by using the Backup> virtual-ip command. For example:
Backup> virtual-name nbuclient.symantecexample.com
OK Completed
See Configuring or changing the virtual IP address used by NetBackup and NDMP data server installation on page 244.
Defines a mechanism and protocol for controlling backup, recovery, and other transfers of data between the data server and the tape server.
Separates the network attached Data Management Application, Data Servers, and Tape Servers participating in archival, recovery, or data migration operations.
Provides low-level control of tape devices and SCSI media changers.

NDMP terminology and definitions:

host - The host computer system that executes the NDMP server application. Data is backed up from the NDMP host to either a local tape drive or to a backup device on a remote NDMP host.
service - The virtual state machine on the NDMP host that is controlled using the NDMP protocol. This term is used independently of implementation. There are three types of NDMP services: data service, tape service, and SCSI service.
server - An instance of one or more distinct NDMP services controlled by a single NDMP control connection. Thus a Data/Tape/SCSI Server is an NDMP server providing data, tape, and SCSI services.
session - The configuration of one client and two NDMP services to perform a data management operation such as a backup or a recovery.
Data Management Application - An application that controls the NDMP session. In NDMP there is a master-slave relationship. The Data Management Application is the session master; the NDMP services are the slaves. In NDMP versions 1, 2, and 3 the term "NDMP client" is used instead of the Data Management Application.
The Backup> ndmp commands configure the default policies that will be used during the NDMP backup and restore sessions. In SFS, NDMP supports two sets of commands.
setenv commands. The set environment commands let you configure the variables that make up the NDMP backup policies for your environment.
getenv commands. The get environment commands display what you have set up with the setenv commands or the default values of all of the NDMP environment variables.
showenv command. The show environment command displays all of the NDMP policies.
Figure 12-1 NDMP configuration: NFS clients and a Data Management Application (NBU / TSM / EMC Legato with NDMP) send control flow to the NDMP server on the SFSFS cluster; data flows to an NBU media server with NDMP and to the tape library.
The NDMP commands configure the default policies that are used during the NDMP backup or restore sessions. The Data Management Application (client) that initiates the connection for NDMP backup and restore operations to the NDMP data/tape server can override these default policies by setting an environment variable with the same name as the policy to any suitable value. The SFS NDMP server supports MD5 and text authentication. The Data Management Application that initiates the connection to the server uses master for the username, and the master user's password, for the NDMP backup session authentication. The password can be changed using the Admin> passwd command. To change the password, see Creating Master, System Administrator, and Storage Administrator users.
Continues the backup and restore session even if an error condition occurs. During a backup or restore session, if a file or directory cannot be backed up or restored, setting value to yes lets the session continue with the remaining specified files and directories in the list. A log message is sent to the Data Management Application about the error. Refer to the Data Management Application documentation for the location of the NDMP logs. Some conditions, such as an I/O error, will not let the command continue the backup and restore session. See To configure the failure resilient policy on page 251.
Note: During the restore session, the DST policy only applies to the file system, but it does not become effective until you run it through the storage tier policy commands. See To configure the restore DST policy on page 252.
Configures the NDMP recursive restore policy to restore the contents of a directory each time you restore. See To configure the recursive restore policy on page 252.
ndmp setenv update_dumpdates

Contains the file system backup information for the backup command. In the SFS NDMP environment, the dumpdates file is /etc/ndmp.dumpdates. See To configure the update dumpdates policy on page 253.
Lets you bring back previous versions of the files for review or to be used. A snapshot is a virtual copy of a set of files and directories taken at a particular point in time. The NDMP use snapshot policy enables the backup of a point-in-time image of a set of files and directories instead of a continuous changing set of files and directories. See To configure the use snapshot policy on page 253.
Enables the configuration of the NDMP backup method policy. This policy enables an incremental backup. See To configure the backup method policy on page 254.
ndmp setenv masquerade_as_emc

Configures the masquerade as EMC policy. See To configure the masquerade as EMC policy on page 254.
where the variables for value are listed in the following table.
no_overwrite - Checks if the file or directory to be restored already exists. If it does, the command responds with an error message, and a log message is returned to the Data Management Application. Refer to the Data Management Application documentation for the location of the NDMP log messages. The file or directory is not overwritten.

rename_old (default) - Checks if the file or directory already exists. If it does, it is renamed with the suffix .#ndmp_old and a new file or directory is created.

overwrite_always - If the file or directory already exists, it is overwritten. It is recommended that while doing a restore from incremental backups, the value is set to overwrite_always. No checks are made when overwriting a directory with files; the destination path being overwritten is removed recursively.
For example:
Backup> ndmp setenv overwrite_policy rename_old
OK Completed
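The three overwrite policies can be sketched as follows. The helper's shape is hypothetical; only the policy semantics (error on an existing file, the documented .#ndmp_old rename suffix, and recursive overwrite) come from the text above:

```python
import os
from typing import Optional

def apply_overwrite_policy(path: str, policy: str) -> Optional[str]:
    """Sketch of the NDMP restore overwrite policies. Returns the path
    to restore into, or None when the restore of this entry is skipped."""
    if not os.path.exists(path):
        return path                            # nothing to protect; restore normally
    if policy == "no_overwrite":
        return None                            # keep the existing file; an error is logged
    if policy == "rename_old":
        os.rename(path, path + ".#ndmp_old")   # keep the old copy under the documented suffix
        return path
    if policy == "overwrite_always":
        return path                            # existing destination is removed/overwritten
    raise ValueError("unknown overwrite policy: " + policy)
```

For a restore from incremental backups, overwrite_always avoids the per-file existence checks, which is why the manual recommends it for that case.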
For example:
Backup> ndmp setenv backup_method mtime
OK Completed
For example:
Backup> ndmp setenv masquerade_as_emc yes
OK Completed
Backup>
For example:
Backup> ndmp showenv
Overwrite policy: Rename old
Failure Resilient: yes
Restore DST policies: yes
Recursive restore: yes
Update dumpdates: yes
Send history: yes
Use snapshot: yes
Backup method: fcl
Masquerade as EMC: yes
OK Completed
Enables the continuation of the backup and restore session even if an error condition occurs because a file or directory cannot be backed up or restored. To retrieve the settings for the policy that you set up, use the ndmp getenv failure_resilient command. See To retrieve the failure resilient backup data on page 257.
Configures the dynamic storage tiering (DST) restore policy. To retrieve the settings for the policy that you set up, use the ndmp getenv restore_dst command. See To retrieve the restore DST data on page 257.
ndmp getenv update_dumpdates - Enables the configuration of the dumpdates file. To retrieve the settings for the policy that you set up, use the ndmp getenv update_dumpdates command. See To retrieve the update dumpdates data on page 258.

ndmp getenv send_history - States whether or not you want the file history of the backed up data to be sent to the Data Management Application. To retrieve the settings for the policy that you set up, use the ndmp getenv send_history command. See To retrieve the send history data on page 258.

ndmp getenv use_snapshot - States whether a point-in-time snapshot of the files and directories is used during the backup session. To retrieve the settings for the policy that you set up, use the ndmp getenv use_snapshot command. See To retrieve the NDMP use snapshot data on page 258.

ndmp getenv backup_method - Enables the configuration of the method used to back up the file system. To retrieve the settings for the policy that you set up, use the ndmp getenv backup_method command. See To retrieve the NDMP backup method on page 258.

ndmp getenv masquerade_as_emc - Configures the NDMP server to masquerade as an EMC-compatible device for certain NDMP backup applications. See To retrieve the masquerade as EMC policy on page 259.
For example:
Backup> ndmp getenv overwrite_policy
Overwrite policy: Rename old
OK Completed
For example:
Backup> ndmp getenv failure_resilient
Failure Resilient: yes
OK Completed
For example:
Backup> ndmp getenv restore_dst
Restore DST policies: no
OK Completed
For example:
Backup> ndmp getenv recursive_restore
Recursive restore: yes
OK Completed
For example:
Backup> ndmp getenv update_dumpdates
Update dumpdates: yes
OK Completed
For example:
Backup> ndmp getenv send_history
Send history: no
OK Completed
For example:
Backup> ndmp getenv use_snapshot
Use snapshot: yes
OK Completed
For example:
Backup> ndmp getenv backup_method
Backup Method: fcl
OK Completed
For example:
Backup> ndmp getenv masquerade_as_emc
Masquerade as EMC: yes
OK Completed
Backup>
status
Displays whether the NetBackup and NDMP data servers have started or stopped on the SFS nodes. If the NetBackup and NDMP data servers are currently started and running, Backup> status also displays any ongoing backup or restore jobs. See Configuring the virtual name of NetBackup on page 245. See To display the status of backup services on page 261.
stop
Stops the processes that handle backup and restore, and changes the status of the virtual IP address to offline after it has been configured using the Backup> virtual-ip command. The Backup> stop command does nothing if backup jobs that involve SFS file systems are running. See To stop backup services on page 262.
To display NetBackup configurations
For example:
Backup> show
Virtual name:
Virtual IP:
NetBackup Master Server:
NetBackup EMM Server:
NetBackup Media Server(s):
OK Completed
An example of the status command when the backup jobs that are running involve file systems on the SFS nodes using NDMP:
Backup> status
Virtual IP state : up
NDMP server state : running
NetBackup client state : running
Following filesystems are currently busy in backup/restore jobs by NDMP:
myfs1
OK Completed
An example of the status command when the backup jobs that are running involve file systems using the NetBackup client.
Backup> status
Virtual IP state : up
NDMP server state : running
NetBackup client state : running
For example:
Backup> start
OK Completed
For example:
Backup> stop
SFS backup ERROR V-288-0 Cannot stop, some backup jobs are running.
Chapter
13
Configuring SFS Dynamic Storage Tiering

This chapter includes the following topics:

About SFS Dynamic Storage Tiering (DST)
How SFS uses Dynamic Storage Tiering
About policies
About adding tiers to file systems
Removing a tier from a file system
About configuring a mirror on the tier of a file system
Listing all of the files on the specified tier
Displaying a list of DST file systems
Displaying the tier location of a specified file
About configuring the policy of each tiered file system
Relocating a file or directory of a tiered file system
About configuring schedules for all tiered file systems
Displaying files that will be moved by running a policy
The following features are part of the SFS Dynamic Storage Tiering Solution:
Relocate files between primary and secondary tiers automatically as files age and become less business critical.
Promote files from a secondary storage tier to a primary storage tier based on I/O temperature.
Retain original file access paths to eliminate operational disruption, for applications, backup procedures, and other custom scripts.
Allow you to manually move folders/files and other data between storage tiers.
Enforce policies that automatically scan the file system and relocate files that match the appropriate tiering policy.
Current active tier 1 (primary) storage.
Tier 2 (secondary) storage for aged or older data.
To configure SFS DST, add tier 2 (secondary) storage to the configuration. Specify where the archival storage will reside (storage pool) and the total size. Files can be moved from the active storage after they have aged for a specified number of days, depending on the policy selected. The number of days for files to age (not accessed) before relocation can be changed at any time.

Note: An aged file is a file that has not been accessed for the specified number of days.

Figure 13-1 depicts the features of SFS and how it maintains application transparency.
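The aging criterion can be sketched as a scan for files whose last access time is older than the configured number of days. This is a stand-alone illustration only; SFS tracks file age internally rather than through a user-level scan:

```python
import os
import time

def aged_files(root: str, days: int):
    """Yield files under `root` that have not been accessed for more
    than `days` days, mirroring the relocation criterion described
    above. Reads atime purely for illustration."""
    cutoff = time.time() - days * 24 * 60 * 60
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:   # last access older than cutoff
                yield path
```

A file untouched for 20 days would be selected by a 10-day policy but not by a 30-day policy.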
Figure 13-1 Application transparency: directory paths such as /financial/sales/current and /development/sales/forecast are unchanged while the underlying storage is split between a mirrored Primary Tier and a RAID5 Secondary Tier.
If you are familiar with Veritas Volume Manager (VxVM), every SFS file system is a multi-volume file system (one file system resides on two volumes). The DST tiers are predefined to simplify the interface. When an administrator wants to add storage tiering, a second volume is added to the volume set, and the existing file system is encapsulated around all of the volumes in the file system. This chapter discusses the SFS storage commands. You use these commands to configure tiers on your file systems. The Storage commands are defined in Table 13-1. You log into the administrative console (for master, system-admin, or storage-admin) and enter Storage> mode to access the commands. For login instructions, go to About using the SFS command-line interface.
tier remove
Removes a tier from a file system. See Removing a tier from a file system on page 270.
tier addmirror
Adds a mirror to a tier of a file system. See About configuring a mirror on the tier of a file system on page 271.
tier rmmirror
Removes a mirror from a tier of a file system. See About configuring a mirror on the tier of a file system on page 271.
tier listfiles
Lists all of the files on the specified tier. See Listing all of the files on the specified tier on page 273.
tier mapfile
Displays the tier location of a specified file. See Displaying the tier location of a specified file on page 274.
tier policy
Configures the policy of each tiered file system. See About configuring the policy of each tiered file system on page 274.
tier relocate
Relocates a file or directory. See Relocating a file or directory of a tiered file system on page 277.
tier schedule
Creates schedules for all tiered file systems. See About configuring schedules for all tiered file systems on page 277.
tier query
Displays a list of files that will be moved by running a policy. See Displaying files that will be moved by running a policy on page 280.
Primary tier
Secondary Tier
Each newly created file system has only one primary tier initially. This tier cannot be removed. For example, the following operations are applied to the primary tier:
Storage> fs addmirror
Storage> fs growto
Storage> fs shrinkto
The Storage> tier commands manage file system DST tiers. All Storage> tier commands take a file system name as an argument and perform operations on the combined construct of that file system. The SFS file system default is to have a single storage tier. An additional storage tier can be added to enable storage tiering. A file system can only support a maximum of two storage tiers.
Storage> tier commands can be used to perform the following:
Adding/removing/modifying the secondary tier
Setting policies
Scheduling policies
Locating tier locations of files
Listing files that are located on the primary or secondary tier
Moving files from the secondary tier to the primary tier
About policies
Each tier can be assigned a policy. The policies include:
Specify on which tier (primary or secondary) the new files get created.
Relocate files from the primary tier to the secondary tier based on any number of days of inactivity of a file.
Relocate files from the secondary tier to the primary tier based on the Access Temperature of the file.
Adds a mirrored second tier to a file system. See To add a mirrored tier to a file system on page 268.
Adds a striped second tier to a file system. See To add a striped tier to a file system on page 269.
Adds a mirrored-striped second tier to a file system. See To add a mirrored-striped tier to a file system on page 269.

Adds a striped-mirror second tier to a file system. See To add a striped-mirror tier to a file system on page 269.
To add a tier to a file system where the volume layout is "simple" (concatenated), enter the following:
Storage> tier add simple fs_name size pool1[,disk1,...]
For definitions of the command variables, go to Table 13-3.

To add a mirrored tier to a file system
For definitions of the command variables, go to Table 13-3.

To add a mirrored-striped tier to a file system
For definitions of the command variables, go to Table 13-3.

To add a striped-mirror tier to a file system
For definitions of the command variables, go to Table 13-3.

Table 13-3 Command variables
fs_name
size
ncolumns
nmirrors
pool1[,disk1,...]
stripeunit=kilobytes Specifies a stripe width of 128K, 256K, 512K, 1M, or 2M. The default stripe width is 512K.
where fs_name specifies the name of the tiered file system that you want to remove. For example:
Storage> tier remove fs1
Storage>
tier rmmirror
Note: For a striped-mirror file system, if any of the disks are bad, this command disables the mirrors from the tiered file system for which the disks have failed. If no disks have failed, SFS chooses a mirror to remove from the tiered file system. See To remove a mirror from a tier of a file system on page 272.
pool1[,disk1,...]
protection
If no protection level is specified, disk is the default protection level. Available options are:

disk - If disk is entered for the protection field, then mirrors are created on separate disks. The disks may or may not be in the same pool.
pool - If pool is entered for the protection field, then mirrors are created in separate pools. If not enough space is available, then the file system will not be created.
For example:
Storage> tier addmirror fs1 pool5
100% [#] Adding mirror to secondary tier of filesystem
where fs_name specifies the name of the tiered file system from which you want to remove a mirror. For example:
Storage> tier rmmirror fs1
Storage>
This command provides another level of detail for the remove mirror operation. You can use the command to specify which mirror you want to remove by specifying the pool name or disk name. Note: The disk must be part of a specified pool.
To remove a mirror from a tier that spans a specified pool or disk, enter the following:
Storage> tier rmmirror fs_name [pool_or_disk_name]

fs_name - Specifies the name of the file system from which to remove a mirror. If the specified file system does not exist, an error message is displayed.
pool_or_disk_name - Specifies the pool or disk that the mirror of the tiered file system spans.
The syntax for the Storage> tier rmmirror command is the same for both pool and disk. If you try to remove a mirror using Storage> tier rmmirror fs1 abc, SFS first checks for a pool named abc; if one exists, SFS removes the mirror spanning that pool. If there is no pool named abc, then SFS removes the mirror that is on the abc disk. If there is no disk named abc either, an error message is displayed.
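The pool-first, disk-second lookup order can be sketched as a small resolver (a hypothetical helper, not SFS code):

```python
def resolve_rmmirror_target(name, pools, disks):
    """Resolve the rmmirror argument the way described above: a pool
    named `name` takes precedence over a disk of the same name; if
    neither exists, it is an error."""
    if name in pools:
        return ("pool", name)
    if name in disks:
        return ("disk", name)
    raise ValueError(name + ": no such pool or disk")
```

So when both a pool and a disk are called abc, the mirror spanning the pool is the one removed.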
To list all of the files on the specified tier, enter the following:
Storage> tier listfiles fs_name {primary|secondary}
where fs_name indicates the name of the tiered file system from which you want to list the files. You can specify to list files from either the primary or secondary tier. For example:
Storage> tier listfiles fs1 secondary
Storage>
file_path
For example, to show the location of a.txt, which is in the root directory of the fs1 file system, enter the following:
Storage> tier mapfile fs1 /a.txt
Tier     Extent Type
====     ===========
Primary  Data
Modifies the policy of a tiered file system. See To modify the policy of a tiered file system on page 276.
tier policy remove Removes the policy of a tiered file system. See To remove the policy of a tiered file system on page 277.
To display the policy of each tiered file system, enter the following:
Storage> tier policy list
For example:
Storage> tier policy list
FS    Create on   Days   MinAccess Temp   PERIOD
==    =========   ====   ==============   ======
fs1   primary     2      3                4
Each tier can be assigned a policy. A policy assigned to a file system has three parts:
file creation - Specifies on which tier the new files are created.

inactive files - Indicates when a file has to be moved from the primary tier to the secondary tier. For example, if the days option of the tier is set to 10, and if a file has not been accessed for more than 10 days, then it is moved from the primary tier of the file system to the secondary tier.

access temperature - Measures the number of I/O requests to the file during the period designated by the period. In other words, it is the number of read or write requests made to a file over a specified number of 24-hour periods, divided by the number of periods. If the access temperature of a file exceeds minacctemp (where the access temperature is calculated over a period of time previously specified), then the file is moved from the secondary tier to the primary tier.
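The access temperature calculation (read/write requests over a number of 24-hour periods, divided by the number of periods) can be sketched as follows; the function names are illustrative:

```python
def access_temperature(io_requests_per_period, nperiods):
    """Access temperature: total read/write requests to a file over the
    last `nperiods` 24-hour periods, divided by the number of periods."""
    recent = io_requests_per_period[-nperiods:]
    return sum(recent) / nperiods

def should_promote(io_requests_per_period, nperiods, minacctemp):
    """A file moves from the secondary to the primary tier when its
    access temperature exceeds the policy's minacctemp threshold."""
    return access_temperature(io_requests_per_period, nperiods) > minacctemp
```

For example, a file that received 4, 8, and 12 requests over the last three days has an access temperature of 8, so it would be promoted under a minacctemp of 5.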
tier
days
minacctemp
period
For example:
Storage> tier policy modify fs1 primary 6 5 3
SFS fs SUCCESS V-288-0 Successfully modified tiering policy for File system fs1
where fs_name indicates the name of the tiered file system for which you want to run a policy. For example:
Storage> tier policy run fs1
SFS fs SUCCESS V-288-0 Successfully ran tiering policy for File system fs1
where fs_name indicates the name of the tiered file system from which you want to remove a policy. For example:
Storage> tier policy remove fs1
SFS fs SUCCESS V-288-0 Successfully removed tiering policy for File system fs1
You can run the policy of a tiered file system, which would be similar to scheduling a job to run your policies, except in this case running the policy is initiated manually. The Storage> tier policy run command moves the older files from the primary tier to the secondary tier according to the policy setting.
dirPath
Removes the schedule of a tiered file system. See To remove the schedule of a tiered file system on page 280.
minute
To display schedules for all tiered file systems, enter the following:
Storage> tier schedule list [fs_name]
where fs_name indicates the name of the tiered file system whose schedule you want to display. For example:
Storage> tier schedule list
FS   Minute  Hour  Day  Month  WeekDay
===  ======  ====  ===  =====  =======
fs1  1       1     1    *      *
where fs_name is the name of the tiered file system from which you want to remove a schedule. For example:
Storage> tier schedule remove fs1
SFS fs SUCCESS V-288-0 Command tier schedule remove executed successfully for fs1
To display a list of files that will be moved by running a policy, enter the following:
Storage> tier query fs_name
where fs_name is the name of the tiered file system for which you want to display a list of files that will be moved by running a policy. For example:
Storage> tier query fs1
/a.txt
/b.txt
/c.txt
/d.txt
Chapter 14: Configuring system information

This chapter includes the following topics:

About system commands
About setting the clock commands
About configuring the locally saved configuration files
Using the more command
About coordinating cluster nodes to work with NTP servers
Displaying the system statistics
Using the swap command
About the option commands
config
Imports or exports the SFS configuration settings. See About configuring the locally saved configuration files on page 288.
more
Enables, disables, or checks the status of the more filter. See Using the more command on page 292.
ntp
Sets the Network Time Protocol (NTP) server on all of the nodes in the cluster. See About coordinating cluster nodes to work with NTP servers on page 292.
stat
Displays the system, Dynamic Multipathing (DMP), and process-related node wide statistics. See Displaying the system statistics on page 294.
swap
Swaps two network interfaces of a node in a cluster. See Using the swap command on page 295.
option
Adjusts a variety of tunable variables that affect the global SFS settings. See Using the option commands on page 299.
To display the current system date and time, enter the following:
System> clock show
For example:
System> clock show
Fri Feb 20 12:16:30 PST 2009
You can set the current date and time of the system on all of the nodes in the cluster.
For example:
System> clock set 12:00:00 17 July 2009
.Done.
Fri Jul 17 12:00:00 PDT 2009
System>
To set the time zone for the system, enter the following:
System> clock timezone timezone
The system will reset to the time zone for that specific region. For example:
System> clock show
Thu Apr 3 09:40:26 PDT 2008
System> clock timezone GMT
Setting time zone to: GMT
..Done.
Thu Apr 3 16:40:37 GMT 2008
System> clock show
Thu Apr 3 16:40:47 GMT 2008
System> clock timezone Los_Angeles
Setting time zone to: Los_Angeles
..Done.
Thu Apr 3 09:41:06 PDT 2008
Configuring system information About configuring the locally saved configuration files
where region is one of the following:

Africa
America
Asia
Australia
Canada
Europe
GMT-offset (this includes GMT, GMT +1, GMT +2)
Pacific
US
For example:
System> clock regions US
config export local
Exports configuration settings locally. See To export configuration settings either locally or remotely on page 290.

config export remote
Exports configuration settings remotely. See To export configuration settings either locally or remotely on page 290.
config delete
Deletes the locally saved configuration file. See To delete the locally saved configuration file on page 291.
For example:
System> config export local 2007_July_20
For example:
System> config export remote ftp://admin@ftp.docserver.symantec.com/configs/config1.tar.gz
Password: *******

file_name
Specifies the saved configuration file.

URL
Specifies the URL of the export file (supported protocols are FTP and SCP).
You can import the configuration settings saved in a local file or saved to a remote machine specified by a URL.

To import configuration settings either locally or remotely
For example:
System> config import local 2008_July_20 network
Backup of current configuration was saved as 200907150515
network configuration was imported
Configuration files are replicated to all the nodes
For example:
System> config import remote ftp://user1@server.com/home/user1/2008_July_20.tar.gz report
Password: *******

file_name
Specifies the saved configuration file.

URL
Specifies the saved configuration at a remote machine specified by a URL.

Available import configuration options are:

network - Imports DNS, LDAP, NIS, and nsswitch settings (does not include IP addresses).
admin - Imports the list of users and passwords.
all - Imports all configuration information.
report - Imports report settings.
system - Imports NTP settings.
cluster_specific - Imports public IP addresses, virtual IP addresses, and console IP addresses. Be careful before using this import option. The network connection to the console server is lost after performing an import. You need to reconnect to the console server after importing this configuration option.
all_except_cluster_specific - Imports all configuration information except for cluster-specific information.
nfs - Imports NFS settings.
backup - Imports the NBU client and NDMP configuration, excluding the virtual-name and virtual-ip.
replication - Imports replication settings.
storage_schedules - Imports Dynamic Storage Tiering (DST) and automated snapshot schedules.
file_name specifies the locally saved configuration file to delete.
For example:
System> more status
Status : Enabled
System> more disable
SFS more Success V-288-748 more deactivated on console
System> more enable
SFS more Success V-288-751 more activated on console
Configuring system information About coordinating cluster nodes to work with NTP servers
ntp show
Displays NTP status and server name. See To display the status of the NTP server on page 293.
ntp enable
Enables the NTP server on all of the nodes in the cluster. See To enable the NTP server on page 294.
ntp disable
Disables the NTP server on all of the nodes in the cluster. See To disable the NTP server on page 294.
To set the NTP server on all of the nodes in the cluster, enter the following:
System> ntp servername server-name
where server-name specifies the name of the server or IP address you want to set. For example:
System> ntp servername ntp.symantec.com
Setting NTP server = ntp.symantec.com
..Done.
Example output:
System> ntp show
Status : Enabled
Server Name: ntp.symantec.com
To enable the NTP server on all of the nodes in the cluster, enter the following:
System> ntp enable
For example:
System> ntp enable
Enabling ntp server: ntp.symantec.com
..Done.
To disable the NTP server on all of the nodes in the cluster, enter the following:
System> ntp disable
For example:
System> ntp disable
Disabling ntp server:..Done.
System> ntp show
Status : Disabled
Server Name: ntp.symantec.com
To view the cluster-wide network and I/O throughput, enter the following:
System> stat cluster
Gathering statistics...
Cluster wide statistics::::
=======================================
IO throughput      :: 0
Network throughput :: 1.205
Note: Do not use this command if you have exported CIFS/NFS shares.

To use the swap command
For example:
System> swap pubeth0 priveth0
All ssh connection(s) need to start again after this command.
Do you want to continue [Enter "y/yes" to continue]...
Check status of this command in history.
Wait.......
option modify nfsd
Modifies the number of Network File System (NFS) daemons on all of the nodes in the cluster. The range for the number of daemons you can modify is 16 to 1892.
option show dmpio
Displays the type of Dynamic Multipathing (DMP) I/O policy and the enclosure for each node in a cluster. See To display the DMP I/O policy on page 300.

option modify dmpio
Modifies the Dynamic Multipathing (DMP) I/O policy, corresponding to the enclosure, arrayname, and arraytype.

Warning: Check the sequence before modifying the I/O policy. The policies need to be applied in the following sequence: arraytype, arrayname, and enclosure. The enclosure-based modification of the I/O policy overwrites the I/O policy set using the arrayname and the arraytype for that particular enclosure. In turn, the arrayname-based modification of the I/O policy overwrites the I/O policy set using the arraytype for that particular arrayname.

See To change the DMP I/O policy on page 300.

option reset dmpio
Resets the Dynamic Multipathing (DMP) I/O policy setting for the given input (enclosure, arrayname, and arraytype). Use this command when you want to change the I/O policy from the previously set enclosure to arrayname. The settings hierarchy is enclosure, arrayname, and arraytype, so to modify the I/O policy to arraytype, you need to reset arrayname and enclosure.

Note: This command does not set the default I/O policy.

See To reset the DMP I/O policy on page 301.

option show ninodes
Displays the ninodes cache size in the cluster. See To display the ninodes cache size on page 302.

option modify ninodes
Changes the cache size of the global inodes. If your system is caching a large number of metadata transactions, or if there is significant virtual memory manager usage, modifying some of the variables may improve performance. The range for the inode cache size is from 10000 to 2097151.
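A minimal sketch (not the DMP implementation) of the precedence described above: an enclosure-level I/O policy overrides an arrayname-level one, which in turn overrides an arraytype-level one. The data model is hypothetical.

```python
def effective_iopolicy(settings, enclosure, arrayname, arraytype):
    """settings maps (level, name) pairs to an I/O policy name.

    Most specific level wins: enclosure, then arrayname, then arraytype.
    """
    for level, name in (("enclosure", enclosure),
                        ("arrayname", arrayname),
                        ("arraytype", arraytype)):
        policy = settings.get((level, name))
        if policy is not None:
            return policy
    return None  # no policy configured at any level
```

This also shows why the reset command exists: while an enclosure-level entry is present, arrayname- and arraytype-level changes have no visible effect until the more specific entries are reset.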
For example:
System> option show nfsd
NODENAME  NUMBER_DAEMONS
--------  --------------
sfs_1     96
sfs_2     96
If you want to view your current enclosure names, use the following command:
Storage> disk list detail
For example:
Storage> disk list detail
Disk  Pool  Enclosure    Size    ID                                  Serial Number
====  ====  =========    ====    ==                                  =============
sda   p1    OTHER_DISKS  10.00G  VMware%2C:VMware%20Virtual%20S:0:0  -
For example:
System> option modify nfsd 97
System>
For example:
NODENAME  TYPE       ENCLR/ARRAY  IOPOLICY
--------  ---------  -----------  --------
rama_01   arrayname  disk         balanced
rama_01   enclosure  disk         minimumq
arrayname array_name
Specifies the name of the array.

arraytype array_type
Specifies the type of the array.

iopolicy
Specifies the I/O policy, which is one of the following:

adaptive - In storage area network (SAN) environments, this option determines the paths that have the least delays, and schedules the I/O on paths that are expected to carry a higher load. Priorities are assigned to the paths in proportion to the delay.

adaptiveminq - The I/O is scheduled according to the length of the I/O queue on each path. The path with the shortest queue is assigned the highest priority.

balanced - Takes into consideration the track cache when balancing the I/O across paths.

minimumq - Uses a minimum I/O queue policy. The I/O is sent on paths that have the minimum number of I/O requests in the queue. This policy is suitable for low-end disks or JBODs where a significant track cache does not exist. This is the default policy for Active/Active (A/A) arrays.

priority - Assigns the path with the highest load carrying capacity as the priority path. This policy is useful when the paths in a SAN have unequal performance, and you want to enforce load balancing manually.

round-robin - Sets a simple round-robin policy for the I/O. This is the default policy for Active/Passive (A/P) and Asynchronous Active/Active (A/A-A) arrays.

singleactive - The I/O is channeled through the single active path. The optional attribute use_all_paths controls whether the secondary paths in an Asymmetric Active/Active (A/A-A) array are used for scheduling I/O requests in addition to the primary paths. The default setting is no, which disallows the use of the secondary paths.
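The path-selection behavior of the minimumq and round-robin policies can be sketched as follows. This is illustrative only; DMP's real scheduler runs in the kernel and tracks far more state.

```python
def pick_path_minimumq(queue_depths):
    """minimumq: send the I/O down the path with the fewest queued requests.

    queue_depths maps a path name to its outstanding I/O count.
    """
    return min(queue_depths, key=queue_depths.get)

def pick_path_round_robin(paths, io_number):
    """round-robin: cycle through the paths in order, one I/O at a time."""
    return paths[io_number % len(paths)]
```

For example, with queue depths {path1: 3, path2: 1, path3: 2}, minimumq chooses path2.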
For example:
System> option show ninodes
INODE_CACHE_SIZE
----------------
2000343
For example:
System> option modify ninodes 2000343
SFS option WARNING V-288-0 This will require cluster wide reboot.
Do you want to continue (y/n)?
For example:
System> option show tunefstab
NODENAME  ATTRIBUTE
--------  ---------
sfs_01    write_throttle
sfs_02    write_throttle
where value is the number you are assigning to the write_throttle parameter. For example:
System> option modify tunefstab write_throttle 20003
System> option show tunefstab
NODENAME  ATTRIBUTE       VALUE
--------  ---------       -----
sfs_01    write_throttle  20003
sfs_02    write_throttle  20003
System>
Chapter 15: Upgrading Storage Foundation Scalable File Server

This chapter includes the following topics:

About upgrading drivers
Displaying the current version of SFS
About installing patches
patch install
Downloads the patch from the specified URL and installs it on all of the nodes. See About installing patches on page 308.
patch uninstall-upto
Uninstalls the software upgrade from all of the nodes up to the specified version. See About installing patches on page 308.
patch sync
Synchronizes the specified node. See About installing patches on page 308.
patch duduninstall
Removes all of the driver updates previously added to the cluster and reverts to the original driver update image. See About installing patches on page 308.
Upgrading Storage Foundation Scalable File Server Displaying the current version of SFS
To display the current version of SFS and the patch level, enter the following:
Upgrade> show
For example:
Upgrade> show
5.5 (Tue Aug 11 08:40:23 2009), Installed on Tue Aug 11 17:21:18 EDT 2009
To display the current version of SFS, the DUD upgrades, the patch level, and major upgrades, enter the following:
Upgrade> show detail
For example:
Upgrade> show detail
5.5SP1RP1 (Tue Dec 15 08:40:23 2009)
5.5SP1 (Tue Aug 11 08:40:23 2009), Installed on Tue Aug 11 17:21:18 EDT 2009
5.5SP1RP1 (Tue Dec 15 08:40:23 2009), Installed on Tue Dec 15 19:19:54 EDT 2009
Major Upgrade(s)
================
Upgraded from 5.5 to 5.5SP1 (Tue Aug 11 08:40:23 2009) on Tue Aug 11 17:21:18 EDT 2009
patch duduninstall
Removes all of the driver updates previously added to the cluster and reverts to the original driver update image. This process does not remove the drivers that were added during the installation of the first node. The DUD uninstall process is not incremental, unlike the DUD upgrade process, where you can add different drivers by using the patch install command multiple times. See To uninstall driver updates on page 312.
Installing patches
To install the latest patches on your system
For example, you can download a DUD ISO from an HTTP server with authentication and install it. The following output shows the update of the driver update image (on all of the nodes present in the cluster) with the tg3 driver of version 3.71b and the megaraid_sas.ko driver of version 00.00.03.16.

Upgrade> patch install http://admin@docserver.symantec.com/DRIVER_UPDATES/SFS_DUD.iso tg3.ko:3.71b,megaraid_sas.ko:00.00.03.16
Enter password for user 'admin': **********
Please wait. Upgrade is in progress...
Patch upgraded on all nodes of cluster.

URL
The URL of the location from which you download the software patch. The URL supports the HTTP, FTP, and SCP protocols for download. A username and password are supported for the HTTP and FTP protocols.

driver_list
An optional variable that you can use for DUD upgrades. Enter a list of comma-separated drivername:versionnumber pairs when you want to apply the DUD upgrade. You can exit the patch DUD upgrade process by entering "no" at the prompt. For example:

Upgrade> patch install scp://support@10.209.106.101:/home/support/SFS.iso
Enter password for user 'support':********
No input driver given...
List of drivers present in DUD::
Drivername:Versionnumber
**************************
e1000.ko:7.6.9.1
tg3.ko:3.71b
megaraid_sas.ko:00.00.03.16
Please enter driver list you want to add [Enter "No" to exit from here]:: no
Sorry...Patch driverupgrade process is terminated by you.
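Because driver_list is a plain comma-separated list of drivername:versionnumber pairs, client-side validation before running patch install is straightforward. A hypothetical helper (not part of the SFS CLI):

```python
def parse_driver_list(driver_list):
    """Parse 'name:version,name:version,...' into a dict, rejecting
    malformed entries that are missing the name or the version."""
    pairs = {}
    for item in driver_list.split(","):
        name, _, version = item.strip().partition(":")
        if not name or not version:
            raise ValueError(f"expected drivername:versionnumber, got {item!r}")
        pairs[name] = version
    return pairs
```

For example, "tg3.ko:3.71b,megaraid_sas.ko:00.00.03.16" parses into two driver/version pairs.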
To uninstall patches
where version specifies the versions of software up to the version that you want to uninstall. For example:
Upgrade> patch uninstall-upto 5.5RP1
OK Completed
where nodename specifies the node that needs to be synchronized to the same software version as the one currently installed in the cluster. For example:
Upgrade> patch sync node2
...............
Syncing software upgrades on node2...
SFS patch SUCCESS V-288-122 Patch sync completed.
This command lists all of the drivers updated on the cluster and asks you to confirm the uninstall on each one by entering y or yes. If you decide not to uninstall the drivers, press any key other than y or yes to exit the uninstall process.
You will be asked to confirm the uninstallation of the drivers. For example:
Upgrade> patch duduninstall
DUD updated with following drivers ::
===================================
tg3.ko:3.71b
megaraid_sas.ko:00.00.03.16
Do you really want to continue with uninstallation [Enter "y/yes" to continue]:: y
Uninstalling DUD...
DUD uninstall completed successfully.
Chapter 16: Troubleshooting

This chapter includes the following topics:

About troubleshooting commands
Retrieving and sending debugging information
About the iostat command
About excluding the PCI ID prior to the SFS installation
Testing network connectivity
About the services command
Using the support login
About network traffic details
Accessing processor activity
Using the traceroute command
iostat
Generates CPU statistical information. Generates the device utilization report. See About the iostat command on page 315.
pciexclusion
Excludes the Peripheral Component Interconnect (PCI) IDs from the nodes in a cluster prior to installing the SFS software. The PCI IDs must be excluded prior to the PXE boot. See About excluding the PCI ID prior to the SFS installation on page 317.
network> ping
Tests whether a particular host or gateway is reachable across an IP network. See Testing network connectivity on page 321.
services
Brings services that are OFFLINE or FAULTED back into the ONLINE state. See Using the services command on page 323.
support login
Reports SFS technical support issues. See Using the support login on page 325.
tethereal
Exports the network traffic details to the specified location. Displays captured packet data from a live network. See About network traffic details on page 325.
top
Displays the dynamic real-time view of currently running tasks. See Accessing processor activity on page 327.
traceroute
Displays all of the intermediate nodes on a route between two nodes. See Using the traceroute command on page 328.
To upload debugging information from a specified node to an external server, enter the following:
Support> debuginfo nodename debug-url
For example:
Support> debuginfo sfsnode scp://john@abc.com:/tmp

nodename
Specifies the node from which to collect the debugging information.

debug-url
Specifies the URL where you want to upload the debugging information. Depending on the type of server to which you are uploading debugging information, use one of the following example URL formats:

ftp://admin@ftp.docserver.company.com/patches/
scp://root@server.company.com:/tmp/

If debug-url specifies a remote directory, the default filename is sfsfs_debuginfo.tar.gz.
iostat device
Generates the device utilization report. This information can be used to balance the load among the physical disks by modifying the system configuration. When this command is executed for the first time, it contains information since the system was booted. Each subsequent report shows the details since the last report. There are two options for this command. See To use the iostat device command on page 317.
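The "since boot first, then since the last report" behavior described above is the usual pattern for tools built on cumulative kernel counters. A small model of that logic (hypothetical names, not the iostat implementation):

```python
class DeltaReporter:
    """First report covers everything since boot; each subsequent report
    shows only the delta since the previous report."""

    def __init__(self):
        self._last = None  # None until the first report is taken

    def report(self, cumulative):
        """cumulative maps a device name to its total count since boot."""
        if self._last is None:
            delta = dict(cumulative)  # first call: everything since boot
        else:
            delta = {dev: cumulative[dev] - self._last.get(dev, 0)
                     for dev in cumulative}
        self._last = dict(cumulative)
        return delta
```

For example, if a device's cumulative block count goes from 100 to 130 between reports, the second report shows 30.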
where the nodename option asks for the name of the node from where the report will be generated. The default is console for the Management Console. For example, to generate the CPU utilization report of the console node, enter the following:
Support> iostat cpu sfs_01
Linux 2.6.16.60-0.21-smp (sfs_01)
avg-cpu:  %user  %nice  %system
           1.86   0.07     4.53
For example, to generate a device utilization report of a node, enter the following:
Support> iostat device sfs_01 Blk
Linux 2.6.16.60-0.21-smp (sfs_01)
Device:  tps   Blk_read/s
hda      4.82  97.81
sda      1.95  16.83
hdc      0.00  0.01
Note: If you decide to include the PCI IDs you previously excluded, you need to reinstall SFS on your cluster.

Table 16-3 lists the PCI exclusion commands.

pciexclusion show
Displays the list of PCI IDs that have been excluded during the initial SFS installation. The status of the PCI IDs is designated by a y (yes) or n (no). The yes option means they have been excluded. The no option means they have not yet been excluded. See To display the list of excluded PCI IDs on page 319.

pciexclusion add
Allows you to add specific PCI IDs for exclusion. You must enter the values in this command before the PXE boot installation for the PCI IDs to be excluded from the second node installation. See To add a PCI ID for exclusion on page 320.

pciexclusion delete
Deletes a specified PCI ID from being excluded. If you do not want the same PCI ID excluded on additional nodes, you must delete it here. You must perform this command before doing the PXE boot installation. See To delete a PCI ID on page 320.
To display the list of PCI IDs that you excluded during the SFS installation, enter the following:
Support> pciexclusion show
PCI ID        EXCLUDED  NODENAME/UUID
------        --------  -------------
0000:0e:00.0  y         sfs_1
0000:0e:00.0  y         a79a7f43-9fe2-4eeb-aa1f-27a70e7a0820
0000:04:00:1  n

PCI ID
The PCI IDs you entered to be excluded during the initial SFS installation. The PCI ID is made up of the following: [[<domain>]:][[<bus>]:][<slot>][.[<func>]]

EXCLUDED
(y) means the PCI ID has been excluded. (n) means the PCI ID has not been excluded.

NODENAME
The node names corresponding to the PCI IDs.

UUID
The ID of a node that is in the installed state but not yet added into the cluster.
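Because a malformed ID cannot be excluded, it can be worth validating the XXXX:XX:XX.X hexadecimal format before running pciexclusion add. A hypothetical helper, not part of the SFS CLI:

```python
import re

# Matches the documented form: 4 hex digits (domain), 2 (bus), 2 (slot),
# a dot, then 1 hex digit (function), e.g. 0000:0e:00.0.
PCI_ID = re.compile(r"^[0-9a-fA-F]{4}:[0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.[0-9a-fA-F]$")

def is_valid_pci_id(pci_id):
    """Return True when pci_id matches the XXXX:XX:XX.X format."""
    return bool(PCI_ID.match(pci_id))
```

For example, 0000:0e:00.0 is valid, while an ID missing the function digit is not.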
where pci_list is a comma-separated list of PCI IDs. The format of the PCI ID is in hexadecimal digits (XXXX:XX:XX.X). For example:
Support> pciexclusion add 0000:00:09.0
Support> pciexclusion show
PCI ID        EXCLUDED  NODENAME/UUID
------        --------  -------------
0000:0e:00.0  y         sfs_1
0000:0e:00.0  y         a79a7f43-9fe2-4eeb-aa1f-27a70e7a0820
0000:04:00:1  n
0000:00:09.0  n
To delete a PCI ID
To delete a PCI ID that you excluded during the SFS installation so that the PCI ID is now available for use, enter the following:
Support> pciexclusion delete pci
where pci is the PCI ID in hexadecimal digits (for example, XXXX:XX:XX.X). You can only delete a PCI ID exclusion that was not already used on any of the nodes in the cluster. In the following example, you cannot delete PCI IDs with the NODENAME/UUID sfs_1 or a79a7f43-9fe2-4eeb-aa1f-27a70e7a0820. For example:
Support> pciexclusion delete 0000:04:00:1
Support> pciexclusion show
PCI ID        EXCLUDED  NODENAME/UUID
------        --------  -------------
0000:0e:00.0  y         sfs_1
0000:0e:00.0  y         a79a7f43-9fe2-4eeb-aa1f-27a70e7a0820
0000:00:09.0  n
The services command monitors the following: the NFS server, the CIFS server, the console service, backup, NIC information, the FS manager, and IP addresses. The services commands are described below.

Attempts to fix any service that is offline or faulted, on all of the nodes in the cluster. See To fix any service fault on page 324.
services online
Fixes a specific service. Enter the servicename and this option attempts to bring the service back online. If the servicename is already online, no action is taken. If the servicename is a parallel service, an attempt is made to online the service on all nodes. If the servicename is a failover service, an attempt is made to online the service on any of the running nodes of the cluster. See To bring a service online on page 324.
services show
Lists the state of important services. The state of the IPs and file systems are only shown if they are not online. When the show option is used, the program attempts to online any services that are offline or faulted. There is a timeout of 15 minutes. If you run a services show command and then run the command again before 15 minutes has elapsed, the command does not attempt to online any services. See To display the state of the services on page 323.
services showall
Lists the state of all of the services. When the show option is used, the program attempts to online any services that are offline or faulted. There is a timeout of 15 minutes. If you run a services show command and then run the command again before 15 minutes has elapsed, the command does not attempt to online any services. See To display the state of all of the services on page 324.
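The 15-minute behavior described above amounts to a cooldown on the auto-online attempt. A toy model of that logic (hypothetical; the SFS implementation is not published):

```python
import time

COOLDOWN_SECONDS = 15 * 60  # the documented 15-minute window

class ShowCooldown:
    """Track whether `services show` should attempt to online services."""

    def __init__(self, now=time.time):
        self._now = now           # injectable clock, for testing
        self._last_attempt = None

    def should_attempt_online(self):
        """True at most once per cooldown window; records the attempt."""
        t = self._now()
        if self._last_attempt is None or t - self._last_attempt >= COOLDOWN_SECONDS:
            self._last_attempt = t
            return True
        return False
```

A second invocation within the window reports state but skips the online attempt, matching the documented behavior.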
To display the important services running on the nodes, enter the following:
Support> services show
Verifying cluster state...........done

Service      sfs_1    sfs_2
-------      -------  -------
nfs          ONLINE   ONLINE
cifs         OFFLINE  OFFLINE
ftp          OFFLINE  OFFLINE
backup       ONLINE   OFFLINE
console      ONLINE   OFFLINE
nic_pubeth0  ONLINE   ONLINE
nic_pubeth1  ONLINE   ONLINE
fs_manager   ONLINE   ONLINE
To display all of the services running on the nodes, enter the following:

Support> services showall
Service         sfs_1    sfs_2
-------         -------  -------
nfs             ONLINE   ONLINE
cifs            OFFLINE  OFFLINE
ftp             OFFLINE  OFFLINE
backup          ONLINE   OFFLINE
console         ONLINE   OFFLINE
nic_pubeth0     ONLINE   ONLINE
nic_pubeth1     ONLINE   ONLINE
fs_manager      ONLINE   ONLINE
10.182.107.201  OFFLINE  ONLINE
10.182.107.202  ONLINE   OFFLINE
10.182.107.203  OFFLINE  ONLINE
10.182.107.204  ONLINE   OFFLINE
/vx/fs1         ONLINE   ONLINE
/vx/fs2         ONLINE   ONLINE
/vx/fs3         ONLINE   ONLINE
where servicename is the name of the service you want to bring online. For example:
Support> services online 10.182.107.203
Support>
For example,
login as: support
Password:
Last login: Fri Dec 14 12:09:49 2007 from 172.16.113.118
sfs_1:~ #
After you log in with the support account, it is recommended that you change your password. See To change a user's password on page 34.

To use the support user commands, see About the support user on page 35.
tethereal show
Displays captured packet data from a live network. See To use the tethereal show command on page 327.
For example, to export the network traffic details, enter the following:
Support> tethereal export scp://user1@172.31.168.140:/
Password: *******
Capturing on pubeth0 ...
Uploading network traffic details to scp://user1@172.31.168.140:/ is completed.
For example, the traffic details for five packets, for the Management Console on the pubeth0 interface are:
Support> tethereal show sfs_01 pubeth0 5
0.000000 172.31.168.140 -> 10.209.105.147 ICMP Echo (ping) request
0.000276 10.209.105.147 -> 172.31.168.140 ICMP Echo (ping) reply
0.000473 10.209.105.147 -> 172.31.168.140 SSH Encrypted response packet len=112
0.000492 10.209.105.147 -> 172.31.168.140 SSH Encrypted response packet len=112
For example, to show the dynamic real-time view of tasks running on the node sfs_01, enter the following:
Support> top sfs_01 1 1
top - 16:28:27 up 1 day, 3:32, 4 users, load average: 1.00, 1.00, 1.00
Tasks: 336 total, 1 running, 335 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1% us, 0.1% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si
Mem: 16405964k total, 1110288k used, 15295676k free, 183908k buffers
Swap: 1052248k total, 0k used, 1052248k free, 344468k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
 6314 root  15   0  5340 1296  792 R  3.9  0.0 0:00.02 top
    1 root  16   0   640  260  216 S  0.0  0.0 0:04.86 init
For example, to trace the route to the network host, enter the following:
Support> traceroute www.symantec.com sfs_01 10
traceroute to www.symantec.com (8.14.104.56), 10 hops max, 40 byte packets
 1  10.209.104.2  0.337 ms  0.263 ms  0.252 ms
 2  10.209.186.14  0.370 ms  0.340 ms  0.326 ms
 3  puna-spi-core-b02-vlan105hsrp.net.symantec.com (143.127.185.130)  0.713 ms  0.525 ms  0.533 ms
 4  143.127.185.197  0.712 ms  0.550 ms  0.564 ms
 5  10.212.252.50  0.696 ms  0.600 ms  78.719 ms
Glossary
CFS (cluster file system)
A file system that can be simultaneously mounted on multiple nodes.

CIFS (Common Internet File System)
A file-sharing protocol used with Windows and other network utilities. The Scalable File Server supports CIFS file sharing.

console IP address
A virtual IP address that is configured for administrative access to the Scalable File Server cluster management console.

coordinator disks
Three or more LUNs designated to function as part of the I/O fencing mechanism of the Scalable File Server. Coordinator disks cannot be used to store user data.

DAR (direct access recovery)
An optional capability of NDMP Data and Tape Services where only relevant portions of the secondary media are accessed during Recovery Operations.

data connection (NDMP)
The connection between the two NDMP servers that carry the data stream. The data connection in NDMP is either an NDMP interprocess communication mechanism (for local operations) or a TCP/IP connection (for 3-way operations).

data service (NDMP)
An NDMP service that transfers data between primary storage and the data connection.

data stream (NDMP)
A unidirectional byte stream of data that flows over a data connection between two peer NDMP services in an NDMP session. For example, in a backup, the data stream is generated by the data service and consumed by the tape service. The data stream can be backup data, recovered data, and so on.

data management application (NDMP)
An application that controls the NDMP session. In NDMP there is a master-slave relationship; the data management application is the session master, and the NDMP services are the slaves. In NDMP versions 1, 2, and 3 the term "NDMP client" is used instead of data management application.

DMP (Dynamic Multipathing)
An enhancement technique that provides load balancing and path failover for disks that are connected to the Scalable File Server cluster nodes.

DST (Dynamic Storage Tiering)
A feature that allows files and directories to be automatically and seamlessly transferred to different types of storage technology that may originate from different hardware vendors.

driver update disk (DUD)
An ISO image or media that contains one or more additional drivers that are needed to install the Scalable File Server on specific hardware, if the base Scalable File Server installer did not include the necessary drivers.
failover
failover
The capability to have the service of a failed computer resource made available automatically with little or no interruption. With the Scalable File Server configured as a cluster, the services provided by any failed node are automatically provided by the remaining functioning nodes.

hard limit
A file system quota for file and block consumption that can be established for individual users or groups. When the hard limit is reached, no further files or blocks can be allocated.

I/O fencing
An optional Scalable File Server feature that configures a specific group of LUNs to have an additional layer of data protection. This extra protection prevents data loss from occurring in the rare case that the redundant cluster interconnect and the public low-priority interconnect both fail.

media server
A NetBackup server that provides storage within a master and a media server cluster. See also NetBackup.

mirrored file system
A file system that is constructed and managed by a technique for automatically maintaining one or more copies of the file system, using separate underlying storage for each copy. If a storage failure occurs, access is maintained through the remaining accessible mirrors.

NAS (Network Attached Storage)
A file-level computer data storage that is connected to a network and provides data access to network-capable clients.

NDMP (Network Data Management Protocol)
An open standard protocol that is used to control the data backup and recovery communications between primary and secondary storage in a heterogeneous network environment. NDMP specifies a common architecture for the backup of network file servers. It enables the creation of a common agent that a centralized program can use to back up the data on file servers running on different platforms.

NDMP client
An application that controls the NDMP session. See also data management application.

NDMP host
The host computer system that executes the NDMP server application. Data is backed up from the NDMP host to either a local tape drive or to a backup device on a remote NDMP host.

NDMP server
An instance of one or more distinct NDMP services controlled by a single NDMP control connection. Thus a data/tape/SCSI server is an NDMP server providing data, tape, or SCSI services.

NDMP service
The state machine on the NDMP host that is accessed with the Internet protocol and controlled using the NDMP protocol. This term is used independently of implementation. The three types of NDMP services are: data service, tape service, and SCSI service.

NDMP session
The configuration of one data management application and two NDMP services to perform a data management operation such as a backup or a recovery.
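The hard-limit behavior defined above (no grace period; allocation simply fails once the limit is reached) can be sketched in a few lines. This is an illustrative model only — the class name and block counts are invented, not part of the Scalable File Server CLI:

```python
class HardQuota:
    """Illustrative model of a hard limit: allocation beyond it simply fails."""

    def __init__(self, hard_blocks):
        self.hard_blocks = hard_blocks  # maximum blocks a user may consume
        self.used = 0

    def allocate(self, blocks):
        # A hard limit is absolute: the request that would cross it is refused.
        if self.used + blocks > self.hard_blocks:
            raise OSError("EDQUOT: hard limit reached, no further blocks allocated")
        self.used += blocks


q = HardQuota(hard_blocks=100)
q.allocate(90)        # fine: 90 of 100 blocks used
try:
    q.allocate(20)    # would exceed the hard limit, so it is refused
except OSError as e:
    print(e)
```

Contrast this with the soft limit defined later in this glossary, which tolerates the overrun for a grace period before refusing allocations.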
NetBackup
A Veritas software product that backs up, archives, and restores files, directories, or raw partitions that reside on a client system.

NFS (Network File System)
A protocol that lets a user on a client computer access files over a network. To the client's applications, the files appear as if they resided on one of the local devices.

NFS lock management
A feature that lets a customer use the Network File System (NFS) advisory client locking feature in parallel with core Cluster File System (CFS) global lock management.

no root_squash
An NFS sharing option. Does not map requests from UID 0. This option is on by default.

NTP (Network Time Protocol)
A protocol for synchronizing computer system clocks over packet-switched, variable-latency data networks.

oplocks (opportunistic locks)
A file-locking mechanism that is designed to improve performance by controlling the caching of files on the client.

private interconnect
An internal IP network that is used by the Scalable File Server to facilitate communications between the Scalable File Server nodes.

PXE (Pre-boot eXecution Environment)
An environment to boot computers using a network interface, independent of available data storage devices (such as hard disks) or installed operating systems.

round robin DNS
A technique in which a DNS server, not a dedicated computer, performs the load balancing.

Samba
An open-source implementation of the SMB file sharing protocol. It provides file and print services to SMB/CIFS clients.

share
A specification of a file system, or a proper subset of a file system, that supports shared access through an NFS or CIFS server. The specification defines the folder or directory that represents the file system, along with access characteristics and limitations.

snapshot
A point-in-time image or replica of a file system that looks identical to the file system from which the snapshot was taken.

soft limit
A file system quota for file and block consumption that can be established for individual users or groups. If a user exceeds the soft limit, there is a grace period during which the quota can be exceeded. After the grace period has expired, no more files or data blocks can be allocated.

storage pool
A logical construct that contains one or more LUNs from which file systems can be created.

stripe unit
The granularity at which data is stored on one drive of the array before subsequent data is stored on the next drive of the array.
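The stripe-unit definition implies simple placement arithmetic: with a stripe unit of S bytes across N drives, byte offset B falls on drive (B // S) % N. A minimal sketch, with a made-up drive count and stripe-unit size:

```python
def drive_for_offset(offset, stripe_unit, num_drives):
    """Return the array drive index that holds the given byte offset."""
    return (offset // stripe_unit) % num_drives


# Example: a 64 KiB stripe unit laid across 4 drives (hypothetical values).
STRIPE_UNIT = 64 * 1024
DRIVES = 4

assert drive_for_offset(0, STRIPE_UNIT, DRIVES) == 0              # first stripe unit
assert drive_for_offset(64 * 1024, STRIPE_UNIT, DRIVES) == 1      # next unit, next drive
assert drive_for_offset(4 * 64 * 1024, STRIPE_UNIT, DRIVES) == 0  # wraps back to drive 0
```

A smaller stripe unit spreads a single large I/O across more drives; a larger one keeps sequential data together on one drive.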
syslog
A standard for forwarding log messages in an IP network. The term refers to both the syslog protocol and the application sending the syslog messages.

tape service
An NDMP service that transfers data between secondary storage and the data connection, and allows the data management application to manipulate and access the secondary storage.

WWN (World Wide Name)
A 64-bit identifier that is used in Fibre Channel networks to uniquely identify each element in the network (that is, nodes and ports).
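As an illustration of the syslog forwarding described above, Python's standard library can send log records to a syslog collector over UDP; the collector address here is an assumption — substitute your own:

```python
import logging
import logging.handlers

logger = logging.getLogger("sfs-example")
logger.setLevel(logging.INFO)

# Forward log records to a syslog daemon over UDP port 514
# (replace 'localhost' with the address of your collector).
handler = logging.handlers.SysLogHandler(address=("localhost", 514))
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("example message forwarded via the syslog protocol")
```

Because syslog over UDP is fire-and-forget, the sender gets no acknowledgment; a collector must be listening on the target port for the messages to be recorded.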
Index
A
about
  backup configurations 259
  changing share properties 184
  configuring CIFS for AD domain mode 165
  configuring disks 101
  configuring locally saved configuration files 288
  configuring SFS for CIFS 154
  configuring storage pools 96
  creating and maintaining file systems 117
  creating file systems 120
  disk lists 105
  DNS 54
  FTP 207
  FTP server 208
  FTP session 216
  FTP set 210
  I/O fencing 111
  installing patches 308
  iostat 315
  leaving AD domain 170
  leaving NT domain 163
  managing CIFS shares 183
  managing home directories 194
  NDMP policies 249
  NDMP supported configurations 247
  Network Data Management Protocol 246
  network services 50
  network traffic details 325
  NFS file sharing 143
  NIS 81
  option commands 296
  reconfiguring CIFS service 180
  retrieving the NDMP data 255
  services command 321
  setting NTLM 173
  setting trusted domains 176
  SFS cluster and load balancing 191
  snapshot schedules 138
  snapshots 133
  storage provisioning and management 95
  storing account information 177
  support user 35
  troubleshooting 313
about bonding
  Ethernet interfaces 52
accessing
  man pages 30
  processor activity 327
Active Directory
  setting the trusted domains for 176
AD domain mode
  changing domain settings 171
  configuring CIFS 165
  security settings 171
    CIFS server stopped 171
  setting domain 167
  setting domain controller 167
  setting domain user 167
  setting security 167
  starting CIFS server 167
AD interface
  using 173
AD trusted domains
  disabling 176
adding
  a severity level to an email group 225
  a syslog server 230
  an email address to a group 225
  an email group 225
  CIFS share 184
  disks 103
  external NetBackup master server to work with SFS 243
  filter to a group 225
  IP address to a cluster 60
  mirror to a file system 124
  mirror to a tier of a file system 271
  mirrored tier to a file system 268
  mirrored-striped tier to a file system 268
  NetBackup Enterprise Media Manager (EMM) server 243
  NetBackup media server 243
  new nodes to the cluster 43
  NFS share 145
  second tier to a file system 268
  SNMP management server 233
  striped tier to a file system 268
  striped-mirror tier to a file system 268
  users
    naming requirements for 24
  vlan 86
B
backup configurations
  about 259
backup services
  displaying the status of 260
  starting 260
  stopping 260
bind distinguished name
  setting for LDAP server 75
C
change
  security settings 165
    after CIFS server stopped 165
changing
  an IP address to online on any running node 60
  configuration of an Ethernet interface 65
  DMP I/O policy 299
  domain settings 163
  local CIFS user password 202
  NFS daemons 299
  ninodes cache size 299
  status of a file system 131
  support user password 36
changing domain settings
  AD domain mode 171
changing share properties
  about 184
checking
  and repairing a file system 130
  I/O fencing status 113
  on the status of the NFS server 90
  support user status 36
CIFS
  standalone mode 155
CIFS and NFS protocols
  share file systems 148, 188
CIFS server
  starting 181
CIFS server status
  standalone mode 156
CIFS server stopped
  change security settings 165
CIFS service
  standalone mode 156
CIFS share
  adding 184
  deleting 184
clearing
  DNS domain names 56
  DNS name servers 56
  LDAP configured settings 75
CLI
  logging in to 25
client configurations
  displaying 80
  LDAP server 80
cluster
  adding an IP address to 60
  adding new nodes 43
  adding the new node to 44
  changing an IP address to online for any running node 60
  deleting a node from 45
  displaying a list of nodes 40
  displaying all the IP addresses for 60
  rebooting a node or all nodes 47
  shutting down a node or all nodes in a cluster 47
command history
  displaying 37
Command-Line Interface (CLI)
  how to use 25
configuration of an Ethernet interface
  changing 65
configuration files
  deleting the locally saved 289
  viewing locally saved 289
configuration settings
  exporting either locally or remotely 289
  importing either locally or remotely 289
configuring
  backup using NetBackup 240
  CIFS for standalone mode 155
  IP routing 69
  masquerade as EMC policy 250
  NDMP backup method policy 250
  NDMP failure resilient policy 250
  NDMP overwrite policy 250
  NDMP recursive restore policy 250
  NDMP restore DST policy 250
  NDMP send history policy 250
  NDMP update dumpdates policy 250
  NDMP use snapshot policy 250
  NetBackup virtual IP address 244
  NetBackup virtual name 245
  NSS 84
  NSS lookup order 84
  SFS for CIFS 154
  vlan 86
configuring CIFS
  NT domain mode 159
configuring disks
  about 101
configuring locally saved configuration files
  about 288
configuring storage pools
  about 96
coordinating
  cluster nodes to work with NTP servers 293
coordinator disks
  replacing 113
CPU utilization report
  generating 316
create
  snapshot schedule 140
creating
  local CIFS group 205
  local CIFS user 202
  Master, System Administrator, and Storage Administrator users 33
  mirrored file systems 121
  mirrored-stripe file systems 121
  simple file systems 121
  storage pools 99
  striped file systems 121
  striped-mirror file systems 121
  users 33
creating and maintaining file systems
  about 117
creating file systems
  about 120
creating snapshots 134
current Ethernet interfaces and states
  displaying 65
current users
  displaying list 33
D
debugging information
  retrieving and sending 314
decreasing
  size of a file system 129
default passwords
  resetting Master, System Administrator, and Storage Administrator users 33
delete
  snapshot schedule 140
deleting
  a node from the cluster 45
  already configured SNMP management server 233
  CIFS share 184
  configured email server 225
  configured NetBackup media server 243
  email address from a specified group 225
  email group 225
  filter from a specified group 225
  home directories 200
  home directory of given user 200
  local CIFS group 205
  local CIFS user 202
  locally saved configuration file 289
  NFS options 151
  route entries from routing tables of nodes in cluster 69
  severity from a specified group 225
  syslog server 230
  users 33
  vlan 86
destroy
  I/O fencing 113
destroying
  a file system 133
  storage pools 99
destroying snapshots 134
device utilization report
  generating 316
disabling
  AD trusted domains 176
  creation of home directories 200
  DNS settings 56
  FastResync option 127
  I/O fencing 113
  LDAP clients configurations 80
  NIS clients 82
  NTLM 175
  NTP server 293
  quota limits used by snapshots 134
  support user account 36
disk lists
  about 105
disks
  adding 103
  removing 103
display
  FTP server 208
displaying
  all the IP addresses for cluster 60
  command history 37
  current Ethernet interfaces and states 65
  current list of SNMP management servers 233
  current version 307
  DMP I/O policy 299
  DNS settings 56
  events 231
  existing email groups or details 225
  exported file systems 144
  file systems that can be exported 93
  files moved by running a policy 280
  home directory usage information 199
  information for all disk devices for nodes in a cluster 106
  LDAP client configurations 80
  LDAP configured settings 75
  list of current users 33
  list of DST file systems 274
  list of nodes in a cluster 40
  list of syslog servers 230
  local CIFS group 205
  local CIFS user 202
  NDMP backup method 257
  NDMP failure resilient data 257
  NDMP masquerade as EMC 257
  NDMP overwrite data 257
  NDMP recursive restore data 257
  NDMP restore DST data 257
  NDMP send history data 257
  NDMP update dumpdates data 257
  NDMP use snapshot data 257
  NDMP variables 255
  NetBackup configurations 260
  network configuration and statistics 51
  NFS daemons 299
  NFS statistics 92
  ninodes cache size 299
  NIS-related commands 82
  node-specific network traffic details 326
  NSS configuration 84
  option tunefstab 299
  policy of each tiered file system 275
  routing tables of the nodes in the cluster 69
  schedules for all tiered file systems 279
  share properties 184
  snapshot quotas 134
  snapshots that can be exported 93
  status of backup services 260
  status of the NTP server 293
  system date and time 285
  system statistics 294
  tier location of a specified file 274
  time interval or number of duplicate events for notifications 236
  values of the configured SNMP notifications 233
  values of the configured syslog server 230
  vlan 86
DMP I/O policy
  changing 299
  displaying 299
  resetting 299
DNS
  about 54
  domain names
    clearing 56
  name servers
    clearing 56
    specifying 56
  settings
    disabling 56
    displaying 56
    enabling 56
domain
  setting 181
  setting user name 181
domain controller
  setting 181
domain name for the DNS server
  setting 56
domain settings
  changing 163
domain user
  NT domain mode 160
DUD driver updates
  uninstalling 310

E

email address
  adding to a group 225
  deleting from a specified group 225
email group
  adding 225
  deleting 225
  displaying existing and details 225
email server
  deleting the configured email server 225
  obtaining details for 225
  setting the details of external 225
enabling
  DNS settings 56
  FastResync for a file system 126
  I/O fencing 113
  LDAP client configurations 80
  NIS clients 82
  NTLM 175
  NTP server 293
  quota limits used by snapshots 134
  support user account 36
enabling quotas
  home directory file systems 196
Ethernet interface
  changing configuration of 65
Ethernet interfaces
  bonding 52
event notifications
  displaying time interval for 236
event reporting
  setting events for 236
events
  displaying 231
excluding
  PCI IDs 319
exclusion
  PCI 317
exporting
  audit events in syslog format to a given URL 237
  configuration settings 289
  events in syslog format to a given URL 237
  network traffic details 326
  SNMP MIB file to a given URL 233

F

file systems
  adding a mirror to 124
  changing the status of 131
  checking and repairing 130
  creating 121
  decreasing the size of 129
  destroying 133
  disabling FastResync option 127
  displaying exported 144
  DST
    displaying 274
  enabling FastResync 126
  increasing the size of 127
  listing with associated information 120
  removing a mirror from 124
  that can be exported displayed 93
  unexporting 151
filter
  about 222
  adding to a group 225
  deleting from a specified group 225
FTP
  about 207
  logupload 219
  server start 209
  server status 209
  server stop 209
  session show 217
  session showdetail 217
  session terminate 217
  set anonymous login 213
  set anonymous logon 213
  set anonymous write 213
  set non-secure logins 213
FTP server
  about 208
  display 208
G

generating
  CPU utilization report 316
  device utilization report 316
group membership
  managing 202

H

history command
  using 37
home directories and use quotas
  setting up 197
home directory file systems
  enabling quotas 196
  setting 195
home directory of given user
  deleting 200
home directory usage information
  displaying 199
hostname or IP address
  setting for LDAP server 75
how to use
  Command-Line Interface (CLI) 25

I

I/O fencing
  about 111
  checking status 113
  destroy 113
  disabling 113
  enabling 113
importing
  configuration settings 289
increase
  LUN storage capacity 108
increasing
  size of a file system 127
initiating
  host discovery of LUNs 110
installing patches 310
  about 308
iostat
  about 315
IP addresses
  adding to a cluster 60
  displaying for the cluster 60
  modifying 60
  removing from the cluster 60
IP routing
  configuring 69

L

LDAP
  before configuring settings 72
  configuring server settings 73
LDAP password hash algorithm
  setting password for 75
LDAP server
  clearing configured settings 75
  disabling client configurations 80
  displaying client configurations 80
  displaying configured settings 75
  enabling client configurations 80
  setting over SSL 75
  setting port number 75
  setting the base distinguished name 75
  setting the bind distinguished name 75
  setting the hostname or IP address 75
  setting the password hash algorithm 75
  setting the root bind DN 75
  setting the users, groups, and netgroups base DN 75
leaving AD domain
  about 170
leaving NT domain
  about 163
list of DST file systems
  displaying 274
list of nodes
  displaying in a cluster 40
listing
  all file systems and associated information 120
  all of the files on the specified tier 273
  free space for storage pools 99
  storage pools 99
listing snapshots 134
local CIFS group
  creating 205
  deleting 205
  displaying 205
local CIFS groups
  managing 204
local CIFS user
  creating 202
  deleting 202
  displaying 202
local CIFS user password
  changing 202
local user and groups
  managing 201
logging in
  to CLI 25
login
  Technical Support 325
logupload
  FTP 219
LUN storage capacity
  increase 108
LUNs
  initiating host discovery 110

M

man pages
  how to access 30
managing
  group membership 202
  local CIFS groups 204
  local users and groups 201
managing CIFS shares
  about 183
managing home directories
  about 194
masquerade as EMC policy
  configuring 250
Master, System Administrator, and Storage Administrator users
  creating 33
mirrored file systems
  creating 121
mirrored tier
  adding to a file system 268
mirrored-stripe file systems
  creating 121
mirrored-striped tier
  adding to a file system 268
modify
  snapshot schedule 140
modifying
  an IP address 60
  option tunefstab 299
  policy of a tiered file system 275
  schedule of a tiered file system 279
more command
  using 292
mounting
  snapshots 134
moving
  disks from one storage pool to another 103

N

naming requirements
  for adding users 24
NDMP backup method
  displaying 257
NDMP backup method policy
  configuring 250
NDMP failure resilient data
  displaying 257
NDMP failure resilient policy
  configuring 250
NDMP masquerade as EMC
  displaying 257
NDMP overwrite data
  displaying 257
NDMP overwrite policy
  configuring 250
NDMP policies
  about 249
  restoring 259
NDMP recursive restore data
  displaying 257
NDMP recursive restore policy
  configuring 250
NDMP restore DST data
  displaying 257
NDMP restore DST policy
  configuring 250
NDMP send history data
  displaying 257
NDMP send history policy
  configuring 250
NDMP supported configurations
  about 247
NDMP update dumpdates data
  displaying 257
NDMP update dumpdates policy
  configuring 250
NDMP use snapshot data
  displaying 257
NDMP use snapshot policy
  configuring 250
NDMP variables
  displaying 255
NetBackup
  configuring NetBackup virtual IP address 244
  configuring virtual name 245
  displaying configurations 260
NetBackup EMM server. See NetBackup Enterprise Media Manager (EMM) server
NetBackup Enterprise Media Manager (EMM) server
  adding to work with SFS 243
NetBackup master server
  configuring to work with SFS 243
NetBackup media server
  adding 243
  deleting 243
network
  configuration and statistics 51
  testing connectivity 321
Network Data Management Protocol
  about 246
network services
  about 50
network traffic details
  about 325
  exporting 326
NFS daemons
  changing 299
  displaying 299
NFS file sharing
  about 143
NFS options
  deleting 151
NFS server
  checking on the status 90
  starting 90
  stopping 90
NFS share
  adding 145
NFS statistics
  displaying 92
ninodes cache size
  changing 299
  displaying 299
NIS
  about 81
  clients
    disabling 82
    enabling 82
  domain name
    setting on all the nodes of cluster 82
  related commands
    displaying 82
  server name
    setting on all the nodes of cluster 82
node
  adding to the cluster 43, 44
  in a cluster
    displaying information for all disk devices 106
  installing SFS software onto 43
node-specific network traffic details
  displaying 326
NSS
  configuring 84
  displaying configuration 84
  lookup order
    configuring 84
NT domain mode
  configuring CIFS 159
  domain user 160
  setting domain 160
  setting domain controller 160
  setting security 160
  starting CIFS server 160
NTLM
  disabling 175
  enabling 175
NTP server
  coordinating cluster nodes to work with 293
  disabling 293
  displaying the status of 293
  enabling 293
O
obtaining
  details of the configured email server 225
option commands
  about 296
option tunefstab
  displaying 299
  modifying 299
P

password
  changing a user's password 33
patch level
  displaying current versions of 307
patches
  installing 310
  synchronizing 310
  uninstalling 310
PCI
  exclusion 317
PCI IDs
  excluding 319
policies
  about 267
policy
  displaying files moved by running 280
  displaying for each tiered file system 275
  modifying for a tiered file system 275
  relocating from a tiered file system 277
  removing from a tiered file system 275
  running for a tiered file system 275
preserve
  snapshot schedule 140
printing
  WWN information 109
privileges
  about 23
processor activity
  accessing 327

Q

quota limits
  enabling or disabling snapshot 134

R

rebooting
  a node or all nodes in cluster 47
reconfiguring CIFS service
  about 180
regions and time zones
  setting 285
relocating
  policy of a tiered file system 277
remove
  snapshot schedule 140
removing
  disks 103
  IP address from the cluster 60
  mirror from a file system 124
  mirror from a tier spanning a specified disk 271
  mirror from a tier spanning a specified pool 271
  mirror from a tiered file system 271
  policy of a tiered file system 275
  schedule of a tiered file system 279
  tier from a file system 270
renaming
  storage pools 99
replacing
  coordinator disks 113
resetting
  default passwords
    Master, System Administrator, and Storage Administrator users 33
  DMP I/O policy 299
restoring
  NDMP policies 259
retrieving
  debugging information 314
retrieving the NDMP data
  about 255
roles
  about 23
route entries
  deleting from routing tables 69
routing tables of the nodes in the cluster
  displaying 69
running
  policy of a tiered file system 275

S

schedule
  displaying for all tiered file systems 279
  modifying for a tiered file system 279
  removing from a tiered file system 279
second tier
  adding to a file system 268
security
  standalone mode 156
security settings
  AD domain mode 171
    CIFS server stopped 171
  change 165
sending
  debugging information 314
server start
  FTP 209
server status
  FTP 209
server stop
  FTP 209
services command
  about 321
  using 323
session show
  FTP 217
session showdetail
  FTP 217
session terminate
  FTP 217
set anonymous login
  FTP 213
set anonymous logon
  FTP 213
set anonymous write
  FTP 213
set non-secure logins
  FTP 213
setting
  base distinguished name for the LDAP server 75
  bind distinguished name for LDAP server 75
  details of the external email server 225
  domain 181
  domain controller 181
  domain name for the DNS server 56
  domain user name 181
  events for event reporting 236
  filter of the syslog server 230
  home directory file systems 195
  LDAP password hash algorithm 75
  LDAP server hostname or IP address 75
  LDAP server over SSL 75
  LDAP server port number 75
  LDAP users, groups, and netgroups base DN 75
  NIS domain name on all the nodes of cluster 82
  regions and time zones 285
  root bind DN for the LDAP server 75
  severity of the syslog server 230
  SNMP filter notifications 233
  SNMP severity notifications 233
  system date and time 285
  the NIS server name on all the nodes of cluster 82
  trusted domains for the Active Directory 176
setting domain
  AD domain mode 167
  NT domain mode 160
setting domain controller
  AD domain mode 167
  NT domain mode 160
setting domain user
  AD domain mode 167
setting NTLM
  about 173
setting security
  AD domain mode 167
  NT domain mode 160
setting trusted domains
  about 176
setting up
  home directories and use quotas 197
severity levels
  about 222
  adding to an email group 225
severity notifications
  setting 233
SFS cluster and load balancing
  about 191
SFS Dynamic Storage Tiering (DST)
  about 263
SFS software
  installing onto a new node 43
share
  splitting 192
share file systems
  CIFS and NFS protocols 148, 188
share properties
  displaying 184
show
  snapshot schedule 140
shutting down
  node or all nodes in a cluster 47
snapshot schedule
  create 140
  delete 140
  modify 140
  preserve 140
  remove 140
  show 140
snapshot schedules
  about 138
snapshots
  about 133
  creating 134
  destroying 134
  displaying quotas 134
  enabling or disabling quota limits 134
  listing 134
  mounting 134
  that can be exported displayed 93
  unmounting 134
SNMP
  filter notifications
    setting 233
  management server
    adding 233
    deleting configured 233
    displaying current list of 233
  MIB file
    exporting to a given URL 233
  notifications
    displaying the values of 233
  server
    setting severity notifications 233
specified group
  deleting a severity from 225
specifying
  DNS name servers 56
splitting
  a share 192
SSL
  setting the LDAP server for 75
standalone mode
  CIFS server status 156
  CIFS service 156
  security 156
starting
  backup services 260
  CIFS server 181
  NFS server 90
starting CIFS server
  AD domain mode 167
  NT domain mode 160
stopping
  backup services 260
  NFS server 90
storage pools
  creating 99
  destroying 99
  listing 99
  listing free space 99
  moving disks from one to another 103
  renaming 99
storage provisioning and management
  about 95
storing
  user and group accounts in LDAP 179
  user and group accounts locally 179
storing account information
  about 177
striped file systems
  creating 121
striped tier
  adding to a file system 268
striped-mirror file systems
  creating 121
striped-mirror tier
  adding to a file system 268
support user
  about 35
support user account
  disabling 36
  enabling 36
support user password
  changing 36
support user status
  checking 36
swap command
  using 295
synchronizing
  patches 310
syslog event logging
  about 229
syslog format
  exporting audit events to a given URL 237
  exporting events to a given URL 237
syslog server
  adding 230
  deleting 230
  displaying the list of 230
  displaying the values of 230
  setting the filter of 230
  setting the severity of 230
system date and time
  displaying 285
  setting 285
system statistics
  displaying 294

T

technical support
  login 325
testing
  network connectivity 321
tier
  adding a tier to a file system 271
  displaying location of a specified file 274
  listing all of the specified files on 273
  removing a mirror from 271
  removing a mirror spanning a specified pool 271
  removing from a file system 270
  removing from a tier spanning a specified disk 271
traceroute command
  using 328
troubleshooting
  about 313

U

unexporting
  file systems 151
uninstalling
  DUD driver updates 310
  patches 310
unmounting
  snapshots 134
user and group accounts in LDAP
  storing 179
user and group accounts locally
  storing 179
user roles and privileges
  about 23
users
  adding new 24
  changing passwords 33
  creating 33
  deleting 33
using
  AD interface 173
  history command 37
  more command 292
  services command 323
  swap command 295
  traceroute command 328

V

viewing
  list of locally saved configuration files 289
virtual IP address
  configuring or changing for NetBackup 244
virtual name
  configuring for NetBackup 245
vlan
  adding 86
  configuring 86
  deleting 86
  displaying 86

W

WWN information
  printing 109