
FalconStor NSS Virtual Appliance Setup Guide v1.0
Cormac Hogan, Product Support Engineering, January 2009

FalconStor provides a very easy method for deploying its NSS Virtual Appliance (VA). The download is an ISO image which contains an installation script. When run, the script prompts you for a datastore on which to place the VA. Once the script completes, you will have a virtual machine called FalconStor-NSSVA. More information and evaluation copies of the NSS VA can be found at http://www.falconstor.com.

Requirements:

- 2 IP addresses (static or DHCP allocated)
- 2 vCPUs
- 1 GB memory
- 3 GB of disk space for the OS
- Additional disk space for LUN presentation and snapshots
- FalconStor IPStor Management Console

This guide is split into 5 parts:
Part 1: Configuring the FalconStor NSS Virtual Appliance
Part 2: Configuring the ESX to use FalconStor NSS VA Storage
Part 3: Configure Replication on the NSS VAs
Part 4: Install the FalconStor Storage Replication Adapter
Part 5: Configure the SRA & Test SRM Functionality

Part 1: Configuring the NSS VA


Section 1: Configure the NSS VA network
Once the NSS VA is deployed, open a console session. Log in as root (the default password is IPStor101). This should automatically launch the setup menu. If it does not, run the command vaconfig at the command line to launch it.

Here you can set up the hostname, time zone, root password, network configuration, gateway, DNS and NTP. Once complete, tab to Finish and hit Enter.

Section 2: Configure the NSS VA storage


Our next step is to configure the NSS VA with additional storage that can be used for LUN presentation by the appliance. This is quite straightforward, as all we have to do is assign additional virtual disks to the appliance. In this example I have created two additional virtual disks. Later on, we will see that one of these is used by the NSS VA to present a LUN and the other is used for holding my snapshots. Of course, I could have added a single virtual disk and used it to manage both the LUN presentation and the snapshot holding area.

One thing that you will notice here is that there is a new SCSI controller. The virtual disks that we wish the appliance to use as storage entities must be placed on a different controller. This is easily achieved by giving the new virtual disks a target id of 1:x rather than 0:x, which automatically creates a new (LSI Logic) SCSI controller. Note that the appliance must be powered off in order to make this change. Do the same for both the primary and secondary NSS VAs. The same reconfiguration can also be scripted, as shown below.
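For reference, here is a minimal sketch of that disk-and-controller change driven through the VI API, using the modern pyVmomi Python bindings (which post-date this guide; at the time you would have used the VI Perl Toolkit against the same managed objects). The vCenter address, credentials and disk size are placeholder assumptions for your own environment.

```python
# Sketch: add a second LSI Logic controller (SCSI bus 1) and an 8 GB virtual
# disk to the powered-off NSS VA. All names/credentials are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='secret')  # certificate handling omitted for brevity
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'FalconStor-NSSVA')

# New LSI Logic controller on SCSI bus 1 -- this is what giving the new disks
# a target id of 1:x does in the VI Client. The temporary key -1 lets the
# disk spec below reference the controller before it exists.
ctrl_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualLsiLogicController(
        key=-1, busNumber=1,
        sharedBus=vim.vm.device.VirtualSCSIController.Sharing.noSharing))

# 8 GB disk at SCSI 1:0, created in the VM's own directory.
disk_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=vim.vm.device.VirtualDisk(
        key=-2, controllerKey=-1, unitNumber=0,
        capacityInKB=8 * 1024 * 1024,
        backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            fileName='', diskMode='persistent')))

task = vm.ReconfigVM_Task(
    spec=vim.vm.ConfigSpec(deviceChange=[ctrl_spec, disk_spec]))
Disconnect(si)
```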

Section 3: Configure the NSS VA iSCSI target


In order to configure the iSCSI target on the NSS VA, we need to use the FalconStor IPStor Console. Once installed, this can be used to manage multiple virtual appliances. Here is the main screen of the console after installation:

Step 1: Right click on the IPStor Servers icon and select Add to add individual IPStor Servers. Alternatively, right click on the IPStor Servers icon and select Discover to discover all IPStor servers on a particular subnet.

Add:

Discover:

Step 2: In this example, I have successfully discovered two NSS VAs. The first is on my primary/protected site; the second is on my target/recovery site. You will need to authenticate against each NSS VA in order to expand its properties, as shown below:

Step 3: Now I need to ensure that one of the devices I added as an additional virtual disk to this appliance is configured for use as a LUN that can be presented to the outside world. Typically, once you authenticate against the NSS VA via the IPStor Console, you will receive a pop-up window stating that a LUN is ready for use. Note that these storage configuration steps are only necessary on the primary/protected site; they are not necessary on the target/recovery site.

Notice that the category is currently Unassigned. The first thing we must do is click on the Prepare Disk button. If you do not see the disk that you just added to the NSS VA, you should be able to rescan the adapter to pick it up: in the IPStor Console, under Physical Adapters, right click on SCSI Adapter.0 and perform a rescan. This rescans the SCSI bus of the VM and picks up the new storage you have added.

Step 4: Change Device Category from Unassigned to Reserved for Virtual Device.

Click OK, and at the next prompt type the word YES in upper case to confirm the action.

Click OK. The device is now assigned to the category Reserved for Virtual Device.

Step 5: Next, click on the Create SAN Resource button. Alternatively, for future reference, navigate to Logical Resources, then SAN Resources, right click and select New. Either of these steps will launch the following wizard.

Click Next:

Ensure that the SAN Resource Type is set to Virtual Device. Click Next.

Select the disk that you wish to use as a SAN resource. The disk that I just added was id 2 and was 8 GB in size, so I will select the second disk. As an FYI, the other disk shown here is another 8 GB virtual disk, of which 20% is used for holding my snapshots. We will see more about this later. Click Next.

Leave this at Express (the default). Also leave the size to allocate at the maximum value, as we will use the whole disk. Click Next.

Select a name for the SAN Resource, or leave it at the default name allocated by the IPStor Console. In this case the name is SANDisk-0003. Click Next. The final window displays the details of the SAN Resource. Simply click Finish.

Step 6: After you have successfully created a SAN Resource, you are prompted to add clients to it. Alternatively, for future reference, navigate to Logical Resources, then SAN Resources, select the disk, right click and select Assign. For SRM, our client will of course be our ESX server. The pop-up looks like this:

Click No, as we have not yet created an iSCSI target on the NSS VA. If you had already created an iSCSI target, you could click Yes.

Step 7: To create an iSCSI target, navigate to SAN Clients, right click and select Add. This launches the following wizard:

Step 8: Click Next:

Step 9: Check the iSCSI tick box as shown above and click Next.

Step 10: Select which IP address you are going to use for iSCSI connectivity. Choose whichever one is not used for management purposes. This will be the IP address that you add to the targets of the ESX server's iSCSI initiator. Click Next. The next window lists the initiators that the NSS VA already knows about. Here we can see an initiator from an ESX server already listed. If the iSCSI initiator from your ESX server does not appear in the list, you will have to add it manually by clicking the Add button. I just check the tick box against the initiator that I want to present this LUN (SAN Resource) to. If you are unsure of your ESX server's initiator name, see the sketch below.
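If you need to confirm the initiator name (IQN) of your ESX server before ticking it here, it can be read from the software iSCSI HBA via the VI API. A minimal sketch using the modern pyVmomi bindings (not part of the original toolchain for this guide; host and credentials are placeholders):

```python
# Sketch: print the iSCSI initiator name (IQN) of each ESX host.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.InternetScsiHba):
            print(host.name, hba.device, hba.iScsiName)
Disconnect(si)
```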

Step 11: Click Next.

Step 12: By default, this window will appear with the option to set CHAP authentication. Click the radio button which states Allow unauthenticated access as shown above. We are not going to worry about CHAP here. Click Next. The next screen is QoS (Quality of Service).

Step 13: This can be left at the default value of Medium. Click Next. The next screen allows us to configure client information.

Step 14: Note that an IP address is not required if the client name is resolvable. Also note that this is the IP address of the ESX host's VMkernel iSCSI interface, not its Service Console interface (a sketch for checking which is which follows). Finally, note that I chose Linux as the client type. Click Next.
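To double-check which IP belongs to a VMkernel interface and which to the Service Console, the host's network configuration can be queried. A hedged pyVmomi sketch (host name and credentials are placeholders; consoleVnic only exists on classic ESX hosts of this era):

```python
# Sketch: list VMkernel vs Service Console interfaces on an ESX host.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esx01.example.com')

for vnic in host.config.network.vnic:          # VMkernel (iSCSI) interfaces
    print('vmkernel:', vnic.device, vnic.spec.ip.ipAddress)
for cnic in host.config.network.consoleVnic:   # Service Console interfaces
    print('console: ', cnic.device, cnic.spec.ip.ipAddress)
Disconnect(si)
```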

Step 15: Uncheck Enable Persistent Reservation & click Next. This final screen displays an overview of the configuration of the client.

Step 16: Click Finish. You should now observe a new SAN Client.

Step 17: We saw earlier that when we successfully created a SAN Resource, we were prompted to add clients to it. At that time we did not have a client defined, but now we do. To assign a client to a SAN resource now, navigate to Logical Resources, then SAN Resources, select the disk, right click and select Assign. This will launch a new wizard:

Step 18: Click Next.

Step 19: Select the check box against the iSCSI Target Name and click Next.

Step 20: This screen allows you to change the presentation LUN id. We can leave this at the default of LUN id 0. Click Next.

Step 21: Click Finish. This assigns the LUN to the client.

Step 22: Right click on the disk (SAN Resource), go to TimeMark/CDP and select Enable. TimeMarks are point-in-time images of any SAN or NAS virtual drive.

This launches the following wizard.

Step 23: Many of the screens that follow are related to how often you should take snapshots. The right settings depend on a customer's own environment. In most of the following screens I have chosen the defaults. However, these values may not be ideal for your customer's environment, and ideally the customer should discuss these settings with FalconStor to make sure that they meet their needs.

Step 24: For the purposes of my demo configuration, I left Enable CDP unchecked.

Step 25: In the schedule I set the initial snapshot to be at 09:30am on Jan 6th, and then every hour after that, keeping a maximum of 8 snapshots (timemarks).

Step 26: Click Finish to complete setting up the timemark. The NSS VA is now configured and we should be in a position to discover this LUN from the ESX server.

Part 2: Configuring the ESX to use storage from the NSS VA


Step 1: In the dynamic discovery section of the iSCSI initiator on the ESX, add the IP address of the NSS VA iSCSI interface.

Step 2: Rescan the SAN and discover the FalconStor NSS VA LUN.

Step 3: Build a VMFS on it.

Step 4: Place a virtual machine on the VMFS.
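Steps 1 and 2 can also be driven through the VI API rather than the VI Client. A minimal pyVmomi sketch follows; the HBA device name vmhba32, the target address and the credentials are assumptions for illustration, and building the VMFS and deploying the VM (steps 3 and 4) are easiest left to the client:

```python
# Sketch: add the NSS VA's iSCSI interface as a dynamic-discovery (send
# targets) address on the ESX software iSCSI HBA, then rescan.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esx01.example.com')
ss = host.configManager.storageSystem

# Step 1: dynamic discovery target = the NSS VA's iSCSI IP (placeholder).
target = vim.host.InternetScsiHba.SendTarget(address='10.0.0.50', port=3260)
ss.AddInternetScsiSendTargets(iScsiHbaDevice='vmhba32', targets=[target])

# Step 2: rescan the HBA for the new LUN, then rescan for VMFS volumes.
ss.RescanHba('vmhba32')
ss.RescanVmfs()
Disconnect(si)
```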

Now that we have a running VM on this volume, we can replicate it to the other NSS VA.

Part 3: Configure Replication on the NSS VAs


Step 1: Return to the FalconStor IPStor Console. Connect to both NSS virtual appliances.

Step 2: On the primary/protected NSS VA, navigate to Replication, then Outgoing; right click on Outgoing and select Enable SAN Resources to enable replication on SAN Resources. This opens the following wizard. Click Next.

Step 3: Select the disk to replicate.

Notice that I already have a predefined snapshot resource for SANDisk-0002 but none for SANDisk-0003. This means that I have a location to store my snapshots. The next screen will prompt me for a location, as I have chosen SANDisk-0003, which has no snapshot resource created.

Step 4: This disk is already used for storing snapshot resources for SANDisk-0002. I will now use it for storing snapshot resources for SANDisk-0003 as well. If you do not have any free space, you will need to allocate a new virtual disk from the ESX to the NSS VA and use that as a location for storing snapshots.

Step 5: This screen asks how much space to dedicate on this disk for snapshots. The default is 20%. Click Next. As noted earlier, many of the screens that follow are related to how often you should take snapshots, and the right settings depend on the customer's own environment; in most of them I have chosen the defaults, but the customer should ideally discuss these settings with FalconStor to make sure they meet their needs.

Step 6: This screen allows the predefined space allocated to snapshots to grow. The default is to let it grow in 20% increments, up to the entire disk allocated to snapshot resources. Click Next to accept the defaults.

Step 7: This screen defines what to do with snapshots when no space is available on the snapshot resource or the snapshot resource is no longer available. Again, I have accepted the default, which deletes older snapshots if no space is available and, in the case of a disk failure, stops using the snapshot resource entirely. However, other options may be more suitable for some customers.

Step 8: This option can pause I/Os for certain operations on the NSS. By default it is enabled, but can be disabled if your customer wishes. Click Next.

Step 9: This screen asks you to select a remote NSS for replication. In this example I only have two appliances so my remote NSS is chosen automatically. Click Next.

Step 10: Populate this with the IP address or hostname that will be used for replication traffic.

Step 11: Decide whether the replication should be done in Continuous Mode or Delta Mode. Again, this is a call that the customer will have to make for themselves. FalconStor supports both methods of replication for SRM and does not make a recommendation either way, so as I said, it is up to the customer. In this case I left it in Delta Mode. Click Next.

Step 12: The recommended timemark schedule is up to the customer and depends on what is being snapshotted. If there are a lot of database applications on the VMFS volumes, a customer will notice if you quiesce applications every 10 minutes. According to FalconStor, hourly is the most common timemark deployment schedule. Click Next.

Step 13: This screen allows you to control resources during synchronization. Leave this at the default which is unchecked. Click Next for Replication Protocol.

Step 14: Replication Protocol is TCP. Click Next for Data Transmission Options.

Step 15: Click Next unless you wish to compress or encrypt your replication data.

Step 16: Select a disk on the remote/target NSS to replicate the data (resource) to:

Step 17: Select a disk on the remote/target NSS to store snapshots (timemarks) to:

Step 18: This screen asks how much space to dedicate on this disk for snapshots. We've seen this before when setting up timemarks earlier. The default is 20%. Click Next.

Step 19: This screen allows the predefined space allocated to snapshots to grow. The default is to let it grow in 20% increments, up to the entire disk allocated to snapshot resources. Click Next to accept the defaults.

Step 20: As before, this screen defines what to do with snapshots when no space is available on the snapshot resource or the snapshot resource is no longer available. Again, I have accepted the default, which deletes older snapshots if no space is available and, in the case of a disk failure, stops using the snapshot resource entirely. However, other options may be more suitable for some customers.

Step 21: Click Finish to complete the Replication configuration.

Step 22: Click OK after verifying clocks are in sync on both the primary and target replication servers.

Step 23: You can observe the synchronization by viewing the Replication -> Outgoing -> SAN Resource (disk) -> Replication tab:

Step 24: Enable TimeMark on the replicated LUN. You can configure TimeMark on both the primary and secondary sides. On the primary side you can use it for local recovery and business continuity; at the DR site you can configure it to give you more than one recovery point. You need to configure it at the DR site to be able to use the test failover feature of SRM. This is a handy feature because it is possible to validate a DR copy from a timemark without having to break the replication relationship between the primary and secondary sites. To set it up, select the replicated device on the secondary/recovery site under Replication -> Incoming -> hostname -> device name, right click and select TimeMark/CDP -> Enable.

This launches the following wizard.

Step 25: Click Next:

Step 26: Click Next:

Step 27: Click Next.

Step 28: Click Finish. A view of the replicated SAN Resource from the primary/protected site:

A view of the replicated SAN Resource from the secondary/recovery site:

Now we can install the SRA and start using SRM.

Part 4: Install the FalconStor Storage Replication Adapter

Step 1: Click Next:

Step 2: Select the radio button to accept the EULA, then click Next:

Step 3: Click Next:

Step 4: Enter the license key from FalconStor for the SRA & click Next:

Step 5: Click Install.

Step 6: Click Finish.

Part 5: Configure the SRA & Test SRM Functionality


The following tasks are carried out on the primary/protected site SRM.

Section 1: Add the primary side NSS-VA to the array managers.

Step 1: Click Add:

Step 2: Populate the details for the primary/protected side NSS VA and click Connect. Notice that the NSS VA from FalconStor uses manager type NSS-S12.

Step 3: Click Next.

Section 2: Add the secondary/recovery side NSS-VA to the array managers.

Step 1: Click Add:

Step 2: Populate the details for the secondary/recovery side NSS VA and click Connect.

Step 3: Click Next.

Step 4: Verify that you have discovered the replicated LUN. If you do not have a replicated LUN discovered, ensure that:
- The LUN has a valid VMFS-3 filesystem on it.
- The VMFS-3 volume contains a virtual machine.
- The LUN is indeed replicated.
The first two conditions can also be checked programmatically, as in the sketch below.
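A minimal pyVmomi sketch for listing VMFS datastores and the virtual machines registered on them (connection details are placeholders; pyVmomi itself post-dates this guide):

```python
# Sketch: show each VMFS datastore and the VMs it contains.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if ds.summary.type == 'VMFS':
        print(ds.name, '->', [vm.name for vm in ds.vm])
Disconnect(si)
```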

Section 3: Configure the Protection Group


The following tasks are carried out on the primary/protected site SRM.

Step 1: Give the Protection Group a name, then click Next.

Step 2: Select a datastore group (containing the replicated LUN) and assign it to the Protection Group.

Step 3: Select a datastore on the recovery site to use as a placeholder for the virtual machine configuration files. Then click Finish.

Section 4: Create the Recovery Plan


The following tasks are carried out on the secondary/recovery site SRM.

Step 1: Give the Recovery Plan a name, then click Next.

Step 2: Select one or more Protection Groups to associate with the Recovery Plan. In this case I have chosen only a single PG, which is the FalconStor-specific one.

Step 3: Do you wish to change the response times? If not, click Next.

Step 4: Decide which networks to use when testing failover. Leave it at Auto to bring the VM up in a bubble network.

Step 5: Do you want to suspend any virtual machines during a recovery plan run? If not, click Finish.

Section 5: Test Failover


These steps are carried out on the Recovery side SRM. Click on Test to do a test failover using a particular Recovery Plan.

Step 1: The test failover start completed successfully.

Step 2: The replicated LUN is now visible to the recovery-side ESX. Remember that this is a timemark taken from the replicated SAN Resource; it is not the actual replicated SAN Resource on the NSS VA, which is only made visible during an actual failover. This is why we need the TimeMark functionality enabled on the replicated SAN Resource.

Step 3: VMFS-3 volume on the LUN is visible, but we can see it is treated as a snapshot.
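On ESX 3.x, whether a volume with a mismatched signature is resignatured or left alone is governed by the LVM.EnableResignature and LVM.DisallowSnapshotLUN advanced settings, which SRM manages for you during the test, so no manual change should be needed. If you want to inspect them, a minimal pyVmomi sketch (host name and credentials are placeholders; these particular options only exist on ESX 3.x-era hosts):

```python
# Sketch: read the LVM.* advanced settings on a recovery-side ESX 3.x host.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esx-recovery.example.com')

# The trailing dot asks the OptionManager for the whole LVM.* subtree.
for opt in host.configManager.advancedOption.QueryOptions('LVM.'):
    print(opt.key, '=', opt.value)
Disconnect(si)
```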

Step 4: The virtual machine is up and running. That completes the configuration of Site Recovery Manager with the FalconStor NSS-VA. Click Continue on the Recovery Steps screen to end the test failover process and return the storage to its original configuration, i.e. unpresent the timemark which was promoted to a real LUN.

Misc
1. There is a log-gathering utility that can be run on the NSS VA to capture configuration information, which can be very useful to support folks. X-Ray is the configuration and log-gathering utility that FalconStor support people use. You can run it from the IPStor Management Console by right clicking on the NSS-VA and selecting X-Ray.

2. I got this message occasionally when trying to do tasks in the console:

It seems to occur because the server disconnects from the console while I'm in the middle of setting up a task, e.g. adding a SAN resource. To increase the timeout value between the appliance and the console, go to Tools -> Console Options -> Console Timeout setting.
