Objective:
Install the Centralized Management Console and locate the storage modules on the network.
Installation
This exercise provides experience with installing the Centralized Management Console.
1. Find the CMC installation file provided by your instructor and double-click on the file
to start the installation of the Centralized Management Console.
2. Accept the license conditions.
3. Examine the Installation options that are listed on the left side of the installer. Click
Centralized Management Console.
4. Click Install CMC.
5. Notice the install steps listed on the left side of the Introduction screen. Click Next.
6. Select the radio button to accept the License Agreement and click Next.
7. Select Typical install and click Next.
8. Accept the default installation path and click Next.
9. Make sure that you leave “Create a shortcut to your desktop” selected and click Next.
10. Make sure that you leave the default “Start the Centralized Management Console
after installation” selected and click Next.
11. Examine your selections on the Installation Summary and click Install. The
installation proceeds.
12. After the installation completes, click Done.
IP Address: ___________________________
Model: _________________________
17. Verify that RAID is normal by examining the RAID field in the Content Pane.
Objective:
This exercise provides experience with backing up the configuration, verifying storage, and viewing reports.
Backup Configuration
1. Open the Centralized Management Console.
18. Locate and select your storage module in the Navigation Pane.
19. In the Content Pane click the Storage Module Tasks button to display all your options
for the storage modules.
20. Click Backup or Restore... Backup and restore provides the capability to save the
configuration file for use in case of a failure. When you back up a configuration, all of
the configuration information is stored in a file on the computer where the Centralized
Management Console is installed.
21. On the Backup or Restore window, click Back Up... The Save Backup File window
opens.
22. Navigate to a folder on the Centralized Management Console computer to contain the
configuration backup file.
23. Accept the default name or enter a new name for the backup file and click Save.
Verify Storage
24. Under your Storage Module in the Navigation Pane, select Storage.
a. What is the RAID configuration? ______________
When running on real P4000 hardware, you could click the RAID Setup Tasks
button to set a different RAID level. The virtual SAN appliance used in the labs
does not support this.
25. Click the Disk Setup tab. All disks in the Storage Module should be Active and
participating in RAID. The drives are labeled in the Disk Setup window; the number
of drives depends on the model. (There is only one drive in the virtual SAN
appliance.)
a. How many drives are listed in the table? ________
b. What is the capacity of each listed drive? _________________
You can specify static routes and/or a default route. If you specify a default route here,
it will not survive a reboot or shut down of the storage module. To create a route that
will survive a storage module reboot or shut down, you must enter a default gateway.
Reports
30. Click Hardware in the Navigation Pane under your storage module.
31. Click Hardware Information tab in the content pane.
32. Click Click to Refresh to update the information.
33. Using the scroll bars, examine the statistics on your storage module in the passive
report.
What version of software is loaded? _________________
34. Click Hardware Information Tasks to display a drop-down list of options for the
report. Select the Save to File option. The Save Report File dialog opens.
35. Choose the location and name for the report and click Save. The report is saved in
HTML format and can be opened with any web browser. Open the report file you
saved using Internet Explorer.
36. Select Alerts under your storage module in the Navigation Pane.
37. In the list of monitored variables, the frequency, thresholds, and notification settings
display to the right of the variable names. Select the Network Interface Status
variable.
How frequently is the variable monitored? __________
38. Repeat step 37 for a number of the variables. You may double-click the variable and
view/change the parameters allowed for a particular monitored variable. For some
variables, only the notification method can be changed. For example, the frequency
for the motherboard temperature variable is set to 15 seconds and cannot be changed.
39. If you have changed any configuration settings during this exercise, save the
configuration file(s).
Objective:
Understand the process of creating management groups, managers, user groups, users, clusters,
ordering modules, and volumes.
You must configure volumes and associate them with authentication groups in the
Centralized Management Console before connecting the volume to a server.
To create a volume, at least one management group and cluster must exist. This
exercise illustrates how you can perform these prerequisite tasks.
Create Volume
A volume is analogous to a LUN. It can be used as raw data storage or it can be formatted with a file
system and used by an application server or file server.
49. Right-click your Cluster in the Navigation Pane and select New Volume.
50. On the Basic tab of the New Volume dialog, supply the following information:
a. Meaningful Volume name
b. Description if desired
c. Volume Size: 1 GB (this is the volume size the application server will see)
51. On the Advanced tab of the New Volume dialog, supply the following
information:
a. Cluster
b. Replication Level – Network RAID-10 (2-way mirror) will do for now
c. Type – leave at Primary
d. Provisioning -- select Thin
52. Click OK.
53. Repeat Steps 49 through 52 to create multiple volumes with sizes ranging from 1 GB
to 5 GB.
You can find additional details on this process in the HP StorageWorks Application Note Best
Practices for Enabling Microsoft Windows with SAN/iQ.
66. In the CMC, expand the navigation tree for your Management Group. Right-click on
Servers and select New Server…
67. Enter a name for the Server.
68. You need to specify the Initiator Node Name in the Authentication section. Return to
the initiator UI and find the Initiator Node Name in the General tab. Highlight and
copy the node name, then paste it into the Initiator Node Name field in the New
Server dialog. Click OK.
69. The CMC displays the Details tab for the new Server. Click on the Volumes and
Snapshots tab.
70. Click Tasks > Assign and Unassign Volumes and Snapshots. Select the volume(s)
you wish to attach to this Server and click OK.
This command forces the lanmanserver service (Server service, which provides file
sharing) to wait until the MSiSCSI service (Microsoft iSCSI Initiator Service) is
completely initialized. You should add sc config commands for any other services
(such as SQL or Exchange) that require access to the iSCSI volumes.
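The dependency change described above is made with the sc.exe service-configuration utility. A minimal sketch, assuming the default service names on Windows Server 2003; the MSSQLSERVER name is only an illustrative example of a dependent service:

```shell
:: Make the Server service (lanmanserver) depend on the Microsoft iSCSI
:: Initiator Service (MSiSCSI). Note the mandatory space after "depend=".
:: Caution: depend= REPLACES the existing dependency list, so check the
:: current list first with "sc qc lanmanserver" and include any existing
:: dependencies, separated by forward slashes.
sc config lanmanserver depend= MSiSCSI

:: Illustrative example for another service that needs the iSCSI volumes
:: (the service name MSSQLSERVER is an assumption; verify with "sc query"):
sc config MSSQLSERVER depend= MSiSCSI
```

Run these commands from a command prompt as an administrator, then verify the result on the Dependencies tab of the service's Properties dialog.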
80. Verify the dependency setting. Open the Services applet, right-click on the Server
service, and select Properties. Select the Dependencies tab and verify that the
“Microsoft iSCSI Initiator Service” is listed.
81. After you have made all other settings, bind the volumes. This forces the iSCSI
service to wait for the bound volumes to become available before it signals its
completion and releases the dependent services.
The volumes can now be accessed from the Windows 2003 system.
Lab 8:
Install the Windows Solutions Pack (DSM for MPIO)
Objective:
Understand how to verify you have multiple sessions to your iSCSI volumes set between server and
storage.
1. Run the Solutions Pack for Windows installer (provided by your instructor). Accept
the license terms and select the SAN/iQ DSM for MPIO package. Click the Install
DSM for MPIO button.
82. Reboot the lab machine when prompted. Once the machine comes back up, log in to
the Centralized Management Console and highlight a mounted volume.
83. Go to the iSCSI sessions tab and note how many sessions are currently logged onto
that volume.
Lab 9: Snapshots
Objective:
Create and mount multiple snapshots. Compare and contrast the snapshots.
Snapshots provide a fixed version of a volume for read only access. Unlike backups,
which are typically stored on different physical devices or tapes, snapshots are
stored on the same cluster as the volume. Therefore, snapshots protect against data
corruption, but not device or storage media failure.
Before you create a snapshot, you must have created a management group, a
cluster, and a volume.
1. On the server system, copy some data onto the drive letter representing your volume.
2. Open the Centralized Management Console.
3. Open your Management Group and right-click on your volume. Select New
Snapshot… Specify a snapshot name. Click Assign and Unassign Servers… and
check the box next to your Server. Click OK, then OK again.
4. Notice the new icon next to the snapshot. Select Help > Graphical Legend. Note the
icon associated with a snapshot.
5. Right-click the volume on which you want to create a Snapshot Schedule. Select New
Schedule to Snapshot a Volume.
6. Provide the following information for the Snapshot Schedule:
a. A snapshot name, if you do not want to use the default SAN/iQ name
b. Enter a description if desired
c. Click the Edit button to set the start date and time for the first snapshot
d. Select the recurrence period – every day, every week, etc.
e. Specify the retention policy for your snapshot
84. Click OK.
85. On the server system, copy some data onto the drive letter representing the volume.
86. If you are mounting the snapshot to the same server, you need to add the snapshot to
the list of volumes associated with that server.
From the iSCSI initiator, click the Refresh button on the Targets tab. The snapshot
volume should appear in the list.
87. Select the snapshot volume and click the Log On… button.
a. NOTE: You do not need to select automatically reconnect at reboot for a
snapshot. Click OK.
88. The snapshot is now mounted on the server. Go to Disk Management and provide the
snapshot a volume letter or mount point.
NOTE: You can now write to the snapshot, but any changes made will not be
committed to the volume.
Objective
Understand the process of creating SmartClone volumes and how they work.
Objective
Understand the process of creating a remote snapshot, how remote snapshots are used and
mounted, and how they can be applied to different application environments.
The remote snapshot feature enables copying data to a remote site or across management groups
and keeping that data available for business continuance, backup and recovery, or data migration.
3. Select a primary volume that you have previously created. For simplicity, select a
volume that does not have snapshots or a snapshot schedule.
90. Right-click your volume and select New Schedule to Remote Snapshot a Volume…
91. Provide your New Scheduled Remote Snapshot the following information:
a. Name, if you do not want to use the default SAN/iQ name
b. Description if desired
c. Start at: Click Edit… to give your remote copy a start date and time
d. Specify the recurrence period for your remote copy
e. Specify the retention policy at your primary site
f. Select the Remote Management group from the drop-down list
g. Click New Remote Volume to create a new volume at the remote site, and specify
the information to create the volume
h. Select your retention policy at your remote site
92. Click OK.
93. By referring to the Snapshots Lab, add the primary snapshot as a volume on your
application server.
If you are using a different Management Group you will need to create a Server for
your Application Server and add the VIP address to the Targets tab.
94. Using Windows Explorer, compare the contents of the drives associated with the
primary snapshot and the remote snapshot.
Lab 12: Grow a volume on the SAN and extend the file
system in Windows Server
Objective:
Learn how to grow a volume “on the fly” in Windows Server 2003.
4. Edit the volume in the console, increasing its size to reflect the new size the server
will be using.
95. Using Disk Manager, verify that there is “unallocated space” behind the original
volume letter. (Rescan disks if necessary.)
96. From a command prompt, type the command diskpart.
97. From within Diskpart, type the following commands:
a. Type list disk to display the disks seen by Disk Management.
b. Select the disk number in question with select disk <disk number>
c. Type list vol to display the volumes seen by Disk Management.
d. Select the volume number in question by typing select vol <vol number>
e. Now that you have selected both the disk and the volume, type extend.
98. Note the previously unallocated space within the volume is now gone. The volume
has grown to fill the previously available space.
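The Diskpart sequence above can also be run non-interactively using diskpart's /s switch. A hedged sketch; the disk number 1, volume number 2, and script name extend.txt are assumptions, so substitute the values reported by list disk and list vol on your system:

```shell
:: Build a diskpart script that selects the disk and volume, then extends
:: the volume into the unallocated space (disk/volume numbers are placeholders).
(
  echo select disk 1
  echo select vol 2
  echo extend
) > extend.txt

:: Run the script without entering the interactive Diskpart prompt.
diskpart /s extend.txt
```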
Objective:
To become comfortable creating your volume partitions using diskpart with a 64 KB offset value
for optimized performance.
5. Create a new volume, assign it to your server and mount it with the MS iSCSI
initiator.
6. Open a command prompt and type the command diskpart.
7. At the Diskpart prompt type list disk to display the disks currently attached to the
server.
8. Note the disk on which you are going to create the partition. You can also
determine this using Disk Manager.
9. Type select disk <disk number>
10. Type create partition primary align=64
11. Format and assign a drive letter to your optimized volume.
12. Type list partition and note the offset value.
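Steps 6 through 12 can be collected into a single diskpart script. This is a sketch under stated assumptions, not the lab's prescribed method: disk number 1 and drive letter S: are placeholders you must adjust:

```shell
:: Create a primary partition aligned at a 64 KB offset, assign a drive
:: letter, and display the resulting offset (placeholders: disk 1, letter S).
(
  echo select disk 1
  echo create partition primary align=64
  echo assign letter=S
  echo list partition
) > align64.txt
diskpart /s align64.txt

:: Quick-format the aligned partition with NTFS.
format S: /fs:ntfs /q /y
```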
Objective:
To become familiar with how to use the sc.exe utility to make the Server service (lanmanserver) wait
for the iSCSI initiator to complete its connection to volumes before allowing the Server service to
start.
Objective
To understand how to make sure volumes are online and ready to use after a server reboots.
Volume Binding
15. Select the Bound Volumes/Devices tab or the Initiator Settings tab, depending on the
Microsoft iSCSI initiator version.
104. Click Bind All or Bind Volumes.
105. Confirm that the Volume/Mount Point/Device window has an entry for each volume. Each
entry should have a drive letter for the volume. If other characters are present,
troubleshoot all volume connectivity steps and try again.
106. Click OK to close the iSCSI initiator.
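As a hedged alternative to the GUI binding steps above, the iscsicli tool that ships with the Microsoft iSCSI initiator can bind volumes from the command line:

```shell
:: Bind all currently connected iSCSI volumes so the initiator service waits
:: for them at boot (equivalent in effect to the Bind All button).
iscsicli BindPersistentVolumes

:: List the persistently bound volumes/devices to confirm each volume has
:: an entry, mirroring the check in step 105.
iscsicli ReportPersistentDevices
```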
Objective
To understand how and where to make this change in the server's registry settings.
16. On the host server, select Start > Run and enter regedit.
17. In the registry tree, highlight My Computer, then click Edit > Find…
18. In the Find dialog enter maxrequestholdtime and click Find Next.
19. After a moment the registry editor should find the default value. Double-click the
highlighted value.
a. Change the base from Hexadecimal to Decimal.
b. Change the 60 to 600 and click OK.
c. Press F3 to find the next occurrence of the maxrequestholdtime value.
d. Repeat steps a and b and exit regedit.
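The same registry change can be scripted with reg.exe. A sketch, assuming the iSCSI initiator driver instance is 0000; the instance number varies per system, which is why the lab uses Find in regedit:

```shell
:: Locate every MaxRequestHoldTime value under the SCSI adapter class key.
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}" /s /f MaxRequestHoldTime

:: Set the hold time to 600 seconds (decimal) for the assumed instance 0000;
:: repeat for each instance the query above reports.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters" /v MaxRequestHoldTime /t REG_DWORD /d 600 /f
```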
Objective
To become familiar with the Performance Monitoring capabilities in SAN/iQ version 8.1.
The Performance Monitor provides performance statistics for iSCSI and storage node I/Os to help
determine the load the SAN is servicing.
118. If you selected the Selected Statistics from List option, select the statistics you want to
monitor.
119. Click Add Statistics.
The statistics you selected are listed in the Added Statistics list.
NOTE: If you selected a statistic that is already being monitored, a message displays
letting you know that the statistic will not be added again.
Instructor-Led Demonstrations
The following exercises demonstrate several capabilities of the SAN/iQ environment. You may find
them useful in managing your own installation.
Due to specific limitations of the virtual-lab environment used in this class, it is not practical to have
all students execute these labs simultaneously. Your instructor will run through them to
demonstrate them to the entire class.
1. Create a management group with a cluster containing 3 storage nodes. These storage
nodes are your current (existing) nodes that you want to replace. The management
group should have 3 Managers.
2. Optional: from a Windows server, mount a volume on the cluster, and start a long file
copy to the volume. This copy will continue throughout the operation, demonstrating
that the volume stays “live” and accessible throughout the update process.
3. Add 3 new storage nodes to the management group. These are the new replacement
nodes.
4. The management group’s icon flashes to indicate an error condition. Select the
management group and notice the “Details” tab is red; select it and you will see the
management group now contains 6 nodes with only 3 Managers. It should have 5
Managers.
5. Manually start Managers on 2 of the new nodes. (This isn’t required, but the restripe
goes much faster with additional Managers.) The management group’s error indicator
stops flashing.
6. Add the 3 new nodes to the cluster. DO NOT add the nodes one at a time, since each
addition would force a re-stripe. Instead, right-click on the cluster and select Edit
Cluster. Click Add Nodes… to specify the new nodes. Use the up/down arrows to
move the new nodes into the desired order.
7. Now, still on the Edit Cluster dialog, select your 3 old nodes and click Remove Nodes.
127. Ensure you have the proper new nodes listed, then click OK. SAN/iQ simultaneously
adds the new nodes and removes the 3 old nodes with only one re-stripe. (You can see
the restriping progress by selecting the volume.)
130. Right-click on the old nodes and select Remove from Management Group…
Now you can remove or decommission the old storage nodes. The new storage nodes have stepped
in to replace them, and the data volumes were fully available during the entire operation.
1. Create a management group with 2 clusters. Assign storage nodes to each cluster.
2. Create a volume on one cluster.
3. Mount the volume from Windows.
4. Edit the volume (right-click and select Edit Volume…), select the Advanced tab, and
change the cluster. Click OK.
5. A dialog warns you that the Windows mount is still referring to the VIP in the first
cluster. Click OK. SAN/iQ moves the volume.
6. When the move is finished, you need to change the Windows mount to use the VIP in
the new cluster:
a. In the iSCSI initiator UI, select the Discovery tab. Add the VIP of the new
cluster.
b. Log off the volume: select the Targets tab and click the Details button. On the
Sessions tab, select the volume and click Log off… (You can check which
partition is mounted on that volume by selecting the Devices tab > Advanced
button.) Click OK to close the Target Properties dialog.
7. Remove the old VIP: select the Discovery tab, select the VIP of the old cluster, and
click Remove. If you get a warning that there are persistent targets and bound
volumes on that VIP, go to the Persistent Targets tab and the Bound Volumes/Devices
tab and remove them.
8. Select the Targets tab. Click Refresh. Log back on to the volume if it is not
connected.
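The portal and target steps above can also be sketched with iscsicli; the VIP address 10.0.0.50 is an assumption standing in for your new cluster's VIP:

```shell
:: Add the new cluster's VIP as a discovery portal (quick-add form).
iscsicli QAddTargetPortal 10.0.0.50

:: List the targets visible through the configured portals, mirroring the
:: Refresh step on the Targets tab.
iscsicli ListTargets
```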
This exercise should be done last, since it is difficult to re-start the lab’s storage node Virtual
Machines after they are shut down.
1. Shut down one of the nodes holding your volume. Right-click on the node and select
Power Off or Reboot… Select Power Off, enter 0 in the “in __ minutes” field, and
click OK. Confirm by clicking Power Off, then OK, then OK again.
(It might work to reboot the node, and that would allow you to re-discover the node
after it came up again. But the virtual machine often reboots so fast that the CMC
doesn’t notice it went down. Shutting down the node is more reliable.)
2. If the powered-down node was running a Manager, the management group notices
that it has lost a Manager. If it is still at or above quorum, it does not issue an error.
3. Check the mounted volume from Windows. Notice that the volume is still online and
fully available.
4. Add a new storage node (from the pool of available nodes) into the management
group to replace the failed node. The new node tries to re-log into all storage nodes in
the management group. The login fails on the “dead” node, and after a few minutes
the new-node login times out.
5. Start up a Manager on the new storage node.
6. Add the new storage node to the cluster, using the Edit Cluster technique shown above
to swap out the dead node.
7. Remove the dead node from the management group.