
EXPANDING THE CAPACITY OF NON-RAIN AVAMAR SERVERS

TECHNICAL NOTE
AVAMAR 3.7.X OR LATER
P/N 300-007-557 REV A02 FEBRUARY 11, 2009

Table of Contents
Overview ........................................... 2
Single-Node to Non-RAIN 1x2 Expansion .............. 3
1x2 to Larger Capacity RAIN Server Expansion ....... 7

Overview
This document describes how to expand the storage capacity of a non-RAIN Avamar server in the following scenarios:
EXPAND FROM: Single-node server
TO: Non-RAIN 1x2
PROCEDURE: System migration, which includes:
  Installing and configuring a new 1x2 non-RAIN server (page 3)
  Migrating customer data to the new system (page 3)

EXPAND FROM: 1x2 server
TO: RAIN 1x4 server (or larger)
PROCEDURE: Parity conversion, which includes:
  Preparing the server (page 7)
  Adding two new storage nodes (page 12)
  Changing parity to RAIN (page 15)
  Rebalancing the server (page 16)
  Returning the server to normal operating configuration (page 17)

IMPORTANT: The largest supported non-RAIN Avamar server configuration is a 1x2. The smallest RAIN server is a 1x4. A 1x3 configuration is not supported.

NOTE: All non-RAIN Avamar servers require a replication target for protecting customer data.


Single-Node to Non-RAIN 1x2 Expansion


This topic describes how to expand the storage capacity of a single-node Avamar server to a non-RAIN 1x2 server. To do this, you must migrate all existing customer data (using root-to-root replication) to a new 1x2 server.

Task List. Expanding the storage capacity of a single-node Avamar server to a non-RAIN 1x2 server comprises the following tasks:
  Install and Configure a New 1x2 Non-RAIN Server (page 3)
  Migrate Customer Data to the New System (page 3)

The preferred deployment architecture for single-node servers is to always deploy them with replication. Therefore, after you expand the storage capacity of the old single-node source server, you might also need to expand the storage capacity of the old replication target server. To do so, repeat this procedure, or build a new 1x2 non-RAIN server from scratch and migrate the data to it using root-to-root replication (that is, the last task of this procedure).

Install and Configure a New 1x2 Non-RAIN Server


Install and configure a new 1x2 Avamar server according to the instructions in your Server Software Installation Manual.

IMPORTANT: Ensure that you configure the server as a non-RAIN server.

Migrate Customer Data to the New System


This task describes how to migrate customer data from the old single-node Avamar server to the new non-RAIN 1x2 server using root-to-root replication.

A Word About Root-to-Root Replication. System migration (using root-to-root replication) is a special replication mode that creates an entire logical copy of a source server on a target server. Replicated data is not copied from the root domain (/) to the /REPLICATE/REPLICATION_SOURCENAME domain as it is during standard replication. Instead, the data in the root domain is copied directly to the root domain of the target server as though the source clients were actually registered with the target server. Source server data replicated in this manner is fully modifiable on the target server.

NOTE: Refer to your Avamar System Administration Manual for additional information about system migration using root-to-root replication.
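For example (hypothetical server and client names), standard replication of a client named accounting-01 registered in the /clients domain of a source server named avamar-1 places its data under /REPLICATE/avamar-1/clients/accounting-01 on the target. With root-to-root replication, the same client appears directly under /clients/accounting-01 on the target, and its data is fully modifiable, exactly as if the client had originally been registered with the target server.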


1. Ensure the replication target system is absolutely clean (that is, it has no customer data on it).

2. Open a command shell and log into the target server as user admin.
3. Load the admin OpenSSH key by typing:
   ssh-agent bash
   ssh-add ~admin/.ssh/admin_key
   You are prompted to type a passphrase.
4. Type the admin user account passphrase and press ENTER.
5. On the target server, suspend all possible cron activity as follows:
   (a) Switch user to root by typing: su
   (b) When prompted, type the root password and press ENTER.
   (c) Change directories to /usr/local/avamar/bin by typing: cd /usr/local/avamar/bin
   (d) Make backup copies of /usr/local/avamar/bin/morning_cron_run and /usr/local/avamar/bin/evening_cron_run; place those copies in a safe location.
   (e) Open /usr/local/avamar/bin/morning_cron_run in a Unix text editor.
   (f) Comment out all activities (see the example following step 6).
   (g) Save your changes.
   (h) Repeat steps e thru g for /usr/local/avamar/bin/evening_cron_run.
   (i) Switch back to the admin user account by typing: exit
6. On the target server, stop EMS and MCS server processes by typing:
   dpnctl stop ems
   dpnctl stop mcs
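The exact contents of the cron run files vary by server release; the entries below are illustrative only, and your files will contain different commands and options. Before editing, morning_cron_run might contain entries such as:

   gc_cron
   cp_cron

After editing, every entry should be commented out:

   #gc_cron
   #cp_cron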




7. Open a command shell and log into the source server as user admin.
8. Load the admin OpenSSH key by typing:
   ssh-agent bash
   ssh-add ~admin/.ssh/admin_key
   You are prompted to type a passphrase.
9. Type the admin user account passphrase and press ENTER.


10. Initiate the root-to-root replication operation by typing the following on a single command line:
    nohup replicate --fullcopy --dstaddr=DEST_SERVER --dstid=root --dstpassword=PASSWORD --srcpath=/ --dstpath=/ --failover --timeout=0 >> replicate_migrate_DATE.log 2>&1 &
    Where DEST_SERVER is the fully-qualified domain name of the target server, PASSWORD is the root password of the target server, and DATE is the current date in YYYYMMDD format.
    NOTE: For Avamar servers v3.7 or later, the --failover, --srcpath, and --dstpath options are not required because the replicate command default settings already set them correctly. However, it does no harm to include them.
11. Wait for the root-to-root replication operation to successfully complete.
    TIP: You can monitor the replicate_migrate_DATE log file using the tail command in another command shell to determine when the root-to-root replication operation successfully completes.
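For example (the host name is hypothetical; PASSWORD remains a placeholder), a migration started on February 11, 2009 to a target server named avamar-2.example.com could be launched, and then monitored from a second command shell, as follows:

    nohup replicate --fullcopy --dstaddr=avamar-2.example.com --dstid=root --dstpassword=PASSWORD --srcpath=/ --dstpath=/ --failover --timeout=0 >> replicate_migrate_20090211.log 2>&1 &

    tail -f replicate_migrate_20090211.log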

IMPORTANT: Proceed to step 12 only after the root-to-root replication operation successfully completes.

12. Switch to the target server command shell.
13. Ensure that you are still logged into the utility node as user admin and that the admin OpenSSH keys are loaded.
14. Perform a new-system restore by typing:
    mcserver.sh --restore --restoretype=new-system
    The following appears in your command shell:
=== BEGIN === check.mcs (prerestore)
check.mcs passed
=== PASS === check.mcs PASSED OVERALL
(preavsetup) --restore will modify your Administrator Server database and preferences.
Do you want to proceed with the restore Y/N? [Y]:

15. Type y and press ENTER. The following appears in the command shell:
Database server stopped.
removing data dir /usr/local/avamar/var/mc/server_data
creating /usr/local/avamar/var/mc/server_data with permissions 0755
creating /usr/local/avamar/var/mc/server_data/prefs with permissions 0755
creating /usr/local/avamar/var/mc/server_data/postgres with permissions 0700
creating /usr/local/avamar/var/mc/server_data/postgres/data with permissions 0755
Enter the Avamar Server IP address or fully-qualified domain name to restore from (i.e. dpn.your_company.com):

16. Respond correctly to the remaining prompts.

17. Start all Avamar server services by typing:
    mcserver.sh --start
    dpnctl start
18. Verify successful replication as follows:
    (a) Start Avamar Administrator.
    (b) Log into the target server.
    (c) Ensure that all clients and backups were successfully replicated. Refer to your Avamar System Administration Manual for additional information.


1x2 to Larger Capacity RAIN Server Expansion


This topic describes how to expand the storage capacity of a 1x2 non-RAIN server to a 1x4 RAIN server by adding new storage nodes, connecting them to the network, changing the parity of the server, and then rebalancing the data across all nodes.

IMPORTANT: This procedure requires a minimum of 48 hours to complete and can take as long as 72 hours. During this time, the Avamar server must be completely idle. The system cannot be used for backups or restores, and all maintenance cron jobs must be disabled.

IMPORTANT: If any errors occur during this procedure, the server must be rolled back to a previous validated checkpoint and the entire procedure must be performed from the beginning.

Larger RAIN Servers. This topic describes how to convert a 1x2 non-RAIN server to a 1x4 RAIN server. To expand to a server larger than a 1x4, first perform the following procedure, then add nodes to create a larger RAIN server.

Task List. Expanding the storage capacity of a 1x2 non-RAIN server to a 1x4 RAIN server comprises the following tasks:
  Server Preparation (page 7)
  Add Two New Storage Nodes to Existing Server (page 12)
  Change Parity From Non-RAIN to RAIN (page 15)
  Rebalance the Server (page 16)
  Return Server to Normal Operational Configuration (page 17)

Server Preparation
1. Start Avamar Administrator.
2. Log into the existing 1x2 non-RAIN server.
3. Ensure that no backups are in progress as follows:
   (a) Select Navigation > Activity or click the Activity launcher button. The Activity window appears.
   (b) Click the Activity Monitor tab. Examine the list of activities. If any activities are in progress, either cancel them or wait for them to complete.


4. Ensure that a recent validated checkpoint exists as follows:
   (a) Select Navigation > Avamar Server or click the Avamar Server launcher button. The Avamar Server window appears.
   (b) Click the Checkpoint Management tab.
   (c) Ensure that a recent validated checkpoint exists. Validated checkpoints are designated by a blue check mark.
5. Ensure that no cron jobs are in progress as follows:
   (a) Select Navigation > Administration or click the Administration launcher button. The Administration window appears.
   (b) Click the Services Administration tab. Examine the list of cron jobs. If any cron jobs are running, either cancel them or wait for them to complete. Refer to your Avamar System Administration Manual for additional information.

6. Open a command shell and log into the utility node as user admin.
7. Load the admin OpenSSH key by typing:
   ssh-agent bash
   ssh-add ~admin/.ssh/admin_key
   You are prompted to type a passphrase.
8. Type the admin user account passphrase and press ENTER.
9. Ensure all nodes are online by typing: status.dpn
10. Disable replication. Regardless of whether the server being expanded is the replication source or target, disable replication in the Avamar Administrator on the replication source.
11. Shut down all Avamar server processes except for the GSAN by typing:
    dpnctl stop ems
    dpnctl stop mcs


12. Disable all cron jobs as follows:
    (a) Type: suspend_crons

    (b) Switch user to root by typing: su
    (c) When prompted, type the root password and press ENTER.
    (d) Verify cron jobs are not running by typing:
        ps -elf --forest | grep -C 2 "_cron"
    (e) Do one of the following:
IF: No cron jobs are running.
DO THIS: Proceed to step 13.

IF: The gc_cron job is running.
DO THIS: Kill the gc_cron job by typing: avmaint gckill --avamaronly

IF: The hfscheck cron job is running.
DO THIS: Kill the hfscheck cron job by typing: avmaint hfscheckstop --avamaronly

13. Prevent any possible future cron activity as follows:
    (a) Change directories to /usr/local/avamar/bin by typing: cd /usr/local/avamar/bin
    (b) Make backup copies of /usr/local/avamar/bin/morning_cron_run and /usr/local/avamar/bin/evening_cron_run; place those copies in a safe location.
    (c) Open /usr/local/avamar/bin/morning_cron_run in a Unix text editor.
    (d) Comment out all activities.
    (e) Save your changes.
    (f) Repeat steps c thru e for /usr/local/avamar/bin/evening_cron_run.

14. Switch back to the admin user account by typing: exit
15. Ensure backups cannot run during this process as follows:
    (a) Type: avmaint suspend --avamaronly
    (b) Ensure backups are not in progress by typing:
        avmaint sessions | grep path
        If backups are in progress, either cancel them or wait for completion.
    (c) To cancel active backups, first list active backups by typing:
        avmaint sessions | grep "sessionid\|path"


    (d) Then, use the session IDs to cancel each one by typing:
        avmaint kill SESSION_ID
16. Ensure index stripes are not splitting as follows:
    (a) Type the following on a single command line:
        avmaint stats --extended --kind=INDX | grep -v "splitstate=NORMAL"
        If an index stripe returns any condition except splitstate=NORMAL, it is splitting.
    (b) Do one of the following:
IF: Splitting is occurring.
DO THIS: Wait until the splitting operation completes.

IF: Splitting is not occurring.
DO THIS: Proceed to step 17.

17. Prevent the server from performing crunching as follows:
    (a) Type: avmaint config asynccrunching=false --avamaronly
    (b) Ensure that the change took effect by typing: avmaint config --avamaronly | grep "async"
18. Ensure balancing has been disabled as follows:
    (a) Type: avmaint config balancemin=0 --avamaronly
    (b) Ensure the change took effect by typing: avmaint config --avamaronly | grep balancemin
19. Ensure the number of retained checkpoints is correct as follows:
    This step ensures that critical checkpoints do not roll off during the rebalancing operation.
    (a) Type:
        avmaint config cpmostrecent=10 --avamaronly
        avmaint config cphfschecked=5 --avamaronly
    (b) Ensure the changes took effect by typing: avmaint config --avamaronly | grep "cp"
20. Check the load on all nodes to determine how busy they are as follows:


    (a) Type: mapall 'uptime'
        Information similar to the following might appear in your command shell:
(0.0) ssh -x admin@10.0.9.253 'uptime'
22:45:23 up 134 days, 17:33, 0 users, load average: 0.00, 0.00, 0.00
(0.1) ssh -x admin@10.0.9.252 'uptime'
22:45:23 up 134 days, 17:38, 0 users, load average: 0.00, 0.00, 0.00

    (b) Do one of the following:

IF: The value in the first column for load average (the 1-minute load average) is less than 1.
DO THIS: Proceed to step 21.

OTHERWISE: Identify the running process and wait for it to complete.

21. Create a new checkpoint by typing: cp_cron --duplog --override
    This checkpoint will be used if a rollback is required.



Add Two New Storage Nodes to Existing Server


1. Physically add the two new storage nodes to the existing server.
2. Ensure that they are turned on and connected to the network.

3. Switch to the utility node command shell.
4. Ensure that you are still logged into the utility node as user admin and that the admin OpenSSH keys are loaded.
5. Ensure the IP addresses for the new storage nodes are correct as follows:
   This step assumes that these IP addresses were previously obtained from the customer and that the new storage nodes have been configured accordingly.
   (a) Type:
       ifconfig
       ping IP_ADDRESS
       Where IP_ADDRESS is the IP address of one new node.
   (b) Repeat step a to confirm network connectivity to the second new node.
6. Verify the new storage nodes as follows:
   (a) Open a command shell and log into the first new storage node as user admin.
   (b) Ensure that four data partitions exist and that they are empty.
   (c) Examine the following network configuration files by comparing them to those on the existing storage nodes:
       /etc/hosts
       /etc/resolv.conf
       /etc/sysconfig/network
       /etc/sysconfig/network-scripts/ifcfg-eth0
   (d) Ensure that all files contain entries for all other Avamar server nodes.
   (e) Repeat steps a thru d for the other new storage node.
7. Prepare a new probe.out file as follows (steps 8 and 9).
8. Switch to the utility node command shell.
9. Ensure that you are still logged into the utility node as user admin and that the admin OpenSSH keys are loaded. Then:
   (a) Type: cd /usr/local/avamar/var
   (b) Copy the existing probe.out file to a file named x-probe_DATE.out by typing:
       cp probe.out x-probe_DATE.out
       Where DATE is the current date. The probe.out file is located in /usr/local/avamar/var.
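For example, if the copy were made on February 11, 2009 and you chose a YYYYMMDD date format (the format is your choice), you would type: cp probe.out x-probe_20090211.out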




   (c) Open probe.out in a Unix text editor.
       The probe.out file is a text file containing two lines in the following format:

NONAT
MODULE-ID:UTILITY-NODE-ADDRESS,UTILITY-NODE-ADDRESS:STORAGE-NODE-ADDRESS

For example:

NONAT
dpn05:10.0.5.5,10.0.5.5:10.0.5.6,10.0.5.7
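After step (d), below, is performed (using hypothetical addresses 10.0.5.8 and 10.0.5.9 for the two new storage nodes), the second line of this example would read:

dpn05:10.0.5.5,10.0.5.5:10.0.5.6,10.0.5.7,10.0.5.8,10.0.5.9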

   (d) Append the IP addresses of the two new storage nodes to the end of the second line, separating each address with a comma.
   (e) Save your changes.
10. Verify the user admin and dpn OpenSSH keys as follows:
   (a) Type: mapall --all date
   (b) Ensure that all nodes respond to the date command.

   (c) Switch to user dpn by typing: su - dpn
   (d) When prompted for a password, type the dpn password and press ENTER.
   (e) Load the dpnid OpenSSH key by typing:
       ssh-agent bash
       ssh-add ~dpn/.ssh/dpnid
   (f) Type: mapall --all date
   (g) Ensure that all nodes respond to the date command.
   (h) Do one of the following:
IF: Both mapall commands return valid responses from all four storage nodes and the utility node.
DO THIS: Proceed to step 11.

IF: Either mapall command fails because passwords were changed.
DO THIS: Manually copy the correct SSH files from the existing storage node ~dpn/.ssh/ directories to the same locations on the new storage nodes.


11. Set the correct time zone on all nodes (both new and existing nodes) as follows:
    (a) Type: asktime
    (b) Respond to the prompts.
    (c) Repeat steps a thru b on all nodes.

12. Switch back to the admin user account by typing:
    exit
    exit
13. Add the new storage nodes to the storage server (also known as the GSAN) as follows:
    (a) Type: start.nodes --clean --nodes=0.n
        Where 0.n is the physical number of the first new storage node.
    (b) Repeat step a for the second new storage node (see the example following step 14).
    (c) Ensure the new storage nodes are not set to accept backups by typing: avmaint suspend
    (d) Ensure that all nodes are in the proper states by typing: status.dpn
    (e) Verify the following:
        New storage nodes are added to the GSAN.
        All nodes are in the ONLINE, fullaccess, and Suspend=True states.
        Access mode for all nodes should be mhpu+ohpu+ohpu.
14. Create a new checkpoint by typing: cp_cron --duplog --override
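For example (node numbers are illustrative and depend on your configuration), if the existing storage nodes are 0.0 and 0.1, the two new storage nodes would typically be added as 0.2 and 0.3:

    start.nodes --clean --nodes=0.2
    start.nodes --clean --nodes=0.3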



Change Parity From Non-RAIN to RAIN


This task describes how to create the parity stripes for a RAIN server.

1. Ensure that you are still logged into the utility node as user admin and that the admin OpenSSH keys are loaded.
2. Type: avmaint config paritygroups=N8 --avamaronly
3. Wait for this command to complete. Creating the parity stripes can take between 24 and 72 hours.
4. Determine that the system is continuing to make progress by occasionally typing: status.dpn
5. Continue to occasionally check for completion. If there is a period of at least 5 minutes with all stripes in the ONLINE state, the operation is complete.
6. Create a checkpoint and validate it as follows:
   (a) Switch user to root by typing: su
   (b) When prompted, type the root password and press ENTER.
   (c) Copy the /usr/local/avamar/bin/morning_cron_run file to special_cron_run by typing:
       cd /usr/local/avamar/bin
       cp morning_cron_run special_cron_run
   (d) Open special_cron_run in a Unix text editor.
   (e) Remove all existing entries.
   (f) Add the following new entries:
       cp_cron --waittime=1200 --override
       hfscheck_cron --throttlelevel=1 --override
       IMPORTANT: Only run the previous commands from within the special_cron_run file.
   (g) Save your changes.



   (h) Switch back to the admin user account by typing: exit
   (i) Switch to user dpn by typing: su - dpn
   (j) When prompted for a password, type the dpn password and press ENTER.


   (k) Load the dpnid OpenSSH key by typing:
       ssh-agent bash
       ssh-add ~dpn/.ssh/dpnid
   (l) Initiate the special checkpoint by typing:
       echo '/usr/local/avamar/bin/cron_env_wrapper special_cron_run' | at now
   (m) Wait for the checkpoint and validation operations to complete.

7. Switch back to the admin user account by typing: exit

Rebalance the Server


This task describes how to rebalance the nodes on the Avamar server.

1. Ensure that you are still logged into the utility node as user admin and that the admin OpenSSH keys are loaded.
2. Type: avmaint config balancemin=2 --avamaronly
3. Wait for balancing to complete as follows:
   (a) Type: status.dpn
   (b) Ensure that all stripes are ONLINE.
   (c) Wait 5 minutes, then repeat steps a thru b until all stripes are continuously ONLINE for at least 5 minutes.
4. When balancing is complete, disable balancing by typing: avmaint config balancemin=0 --avamaronly
5. Create a checkpoint and validate it as follows:


   (a) Switch to user dpn by typing: su - dpn
   (b) When prompted for a password, type the dpn password and press ENTER.
   (c) Load the dpnid OpenSSH key by typing:
       ssh-agent bash
       ssh-add ~dpn/.ssh/dpnid
   (d) Type: echo '/usr/local/avamar/bin/cron_env_wrapper special_cron_run' | at now
6. Wait for the checkpoint validation (the hfscheck process) to successfully complete.





7. Switch back to the admin user account by typing:
   exit
   exit

Return Server to Normal Operational Configuration


This task describes how to reverse the special server preparation (page 7) and return the server to a normal operating configuration.

1. Reset the policy for retaining checkpoints to the default as follows:

   (a) Ensure that you are still logged into the utility node as user admin and that the admin OpenSSH keys are loaded.
   (b) Type:
       avmaint config cpmostrecent=2 --avamaronly
       avmaint config cphfschecked=1 --avamaronly
   (c) Ensure the changes took effect by typing: avmaint config --avamaronly | grep "cp"
2. Take all nodes out of the suspend state as follows:
   (a) Type: avmaint resume --avamaronly
   (b) Ensure that all nodes are in the proper states by typing: status.dpn
   (c) Verify the following:
       All nodes are in the ONLINE and Suspend=False states.
       Access mode for all nodes should be mhpu+ohpu+ohpu.
       If any nodes still report Suspend=True, continue issuing avmaint resume --avamaronly commands until all nodes report Suspend=False.
3. Set asynccrunching back to the default (true) setting on all nodes as follows:
   (a) Type: avmaint config asynccrunching=true --avamaronly
   (b) Ensure the change took effect by typing: avmaint config --avamaronly | grep "async"
4. Revert to the original morning_cron_run and evening_cron_run files as follows:


   (a) Switch user to root by typing: su
   (b) When prompted, type the root password and press ENTER.
   (c) Change directories to /usr/local/avamar/bin by typing: cd /usr/local/avamar/bin
   (d) Open /usr/local/avamar/bin/morning_cron_run in a Unix text editor.


   (e) Uncomment all activities.
   (f) Save your changes.
   (g) Repeat steps d thru f for /usr/local/avamar/bin/evening_cron_run.

5. Switch back to the admin user account by typing: exit
6. Restart the other Avamar server processes and resume cron jobs by typing:
   dpnctl start
   resume_crons
7. Enable the replication crons on both the replication source and replication target servers.

Additional Server Balancing


After the server has been returned to service, periodically recheck balancing. If the server is not optimally balanced, rebalance it as follows:

1. Open a command shell and log into the utility node as user admin.
2. Load the admin OpenSSH key by typing:
   ssh-agent bash
   ssh-add ~admin/.ssh/admin_key
   You are prompted to type a passphrase.
3. Type the admin user account passphrase and press ENTER.
4. Check server balancing by typing: status.dpn
5. Compare the %Full values of all server nodes.
6. Do one of the following:
IF: Newer node %Full values are within 2% of existing node values.
DO THIS: The server is optimally balanced. No further action is required at this time.

IF: Newer node %Full values are not within 2% of existing node values.
DO THIS: The server requires additional balancing. Proceed to step 7.

7. Use cp_cron to roll off any old, unneeded checkpoints by typing: cp_cron
8. Place the Avamar server in suspend mode by typing: avmaint suspend --avamaronly


9. Suspend all the crons during rebalancing by typing: suspend_crons
10. Balance the system by typing: avmaint config balancemin=2 --avamaronly
11. Wait for balancing to complete as follows:
    (a) Type: status.dpn
    (b) Ensure that all stripes are ONLINE.
    (c) Wait 5 minutes, then repeat steps a thru b until all stripes are continuously ONLINE for at least 5 minutes.
12. When balancing is complete, disable balancing by typing: avmaint config balancemin=0 --avamaronly
13. Create a checkpoint and validate it as follows:

    (a) Switch to user dpn by typing: su - dpn
    (b) When prompted for a password, type the dpn password and press ENTER.
    (c) Load the dpnid OpenSSH key by typing:
        ssh-agent bash
        ssh-add ~dpn/.ssh/dpnid
    (d) Type: echo '/usr/local/avamar/bin/cron_env_wrapper special_cron_run' | at now


14. Switch back to the admin user account by typing:
    exit
    exit
15. Return the server to normal operational configuration by typing:
    avmaint resume --avamaronly
    resume_crons


Copyright 2009 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

