
Citrix Virtual Classroom

Leveraging NetScaler clusters to improve application performance

Self-paced exercise guide

Table of Contents


Lab Overview
Exercise 1: Migrating an HA pair to a 3-node cluster
Exercise 2: Initial cluster setup
Exercise 3: Cluster load balancing configuration
Exercise 4 (Optional): Packet capture analysis in a cluster system
Exercise 5 (Optional): ECMP configuration


Lab Overview
Hands-on Training Module


This training module has the following details:

Objective: This lab manual provides hands-on training on the new clustering feature included in NetScaler 10. It covers the most common clustering deployment types as well as step-by-step configuration of a NetScaler cluster. Upon completion of this lab document, a 3-node cluster environment should be fully operational.

Audience: Primary: Event attendees

Lab Environment Details


Machine and details:
XenServer: Hosts virtual machines
AD.training.lab: Domain Controller / DNS server
Apache_MySQL_1 (GENTOO_1): Apache, PHP, MySQL server
Apache_MySQL_2 (GENTOO_2): Apache, PHP, MySQL server
Apache_MySQL_3 (GENTOO_3): Apache, PHP, MySQL server
EXT-Win_7: Remote network Windows 7 workstation
Win7Client: Internal network Windows 7 workstation
NS10_node1: Virtual instance of a NetScaler appliance
NS10_node2: Virtual instance of a NetScaler appliance
NS10_node3: Virtual instance of a NetScaler appliance
vyatta-RTR: Provides connectivity between internal and remote networks


Lab Topology Diagram

[Lab topology diagram: the internal network 192.168.10.X/24 connects Win7Client (.200), the DNS server, the Apache servers Apache1 (.13), Apache2 (.14), and Apache3 (.15), and the NetScaler nodes NS10_node1 (.110), NS10_node2 (.120), and NS10_node3 (.130). An internal router joins this network, via the 172.16.1.0/24 segment, to the remote network 172.16.2.0/24, where the EXT-Win_7 workstation resides.]

Required Lab Credentials


Below are the login credentials required to connect to the workshop system and complete the lab exercises.

Machine, IP Address, Username, Password:
AD.training.lab, 192.168.10.11, TRAINING\Administrator, Citrix123
Apache_MySQL_1 (GENTOO_1), 192.168.10.13, root, Citrix123
Apache_MySQL_2 (GENTOO_2), 192.168.10.14, root, Citrix123
Apache_MySQL_3 (GENTOO_3), 192.168.10.15, root, Citrix123
EXT-Win_7, 172.16.2.2, TRAINING\Administrator, Citrix123
Win7Client, 192.168.10.200, TRAINING\Administrator, Citrix123


NS10_node1, NSIP: 192.168.10.110 / SNIP: 192.168.10.91, nsroot, nsroot
NS10_node2, NSIP: 192.168.10.120 / SNIP: 192.168.10.92, nsroot, nsroot
NS10_node3, NSIP: 192.168.10.130 / SNIP: 192.168.10.93, nsroot, nsroot
vyatta-RTR, eth0: 172.16.2.1 / eth1: 172.16.1.1, vyatta, vyatta

How to log into the lab environment


The self-paced lab environment is hosted on a cloud-based Citrix XenServer. Connecting to your server from the portal page is as easy as 1-2-3.

Step-by-step login instructions


Step 1. Once logged in at the self-paced portal, click the Start lab button to launch a connection to the published XenCenter.

Step 2. When XenCenter loads, right-click the XenCenter node and select Add.

Step 3. On the Add New Server screen, enter the XenServer IP address provided on the portal and, in the Password field, enter the password provided on the portal. The user name will always be root.

Exercise 1: Migrating an HA pair to a 3-node cluster


Overview
Clustering is the centerpiece of NetScaler 10. It is part of the TriScale architecture, delivering a service delivery fabric that fundamentally changes the ADC space. By allowing all nodes in the cluster to process traffic efficiently while also providing high availability, clustering provides the foundation to deploy the infrastructure needed to meet any performance and scalability requirement.

With the existing infrastructure, active-active or HA deployments have shortcomings that are directly addressed by clustering:

HA:
- Scalability (only one appliance processes traffic)
- Resource utilization (one appliance is always idle)

Active/Active:
- Configuration management (independent configuration on each appliance)
- Heavy dependency on upstream device configuration
- Not all L4-L7 features work across all nodes (max clients, persistence, session reuse, etc.)

With the new architecture offered by NetScaler 10, requirements for dynamic capacity, fault tolerance, high scalability, and performance are met. The first step is to plan the migration of existing HA deployments. In this exercise, we will prepare an existing HA pair for a 3-node cluster deployment.


Step by step guidance


The lab environment required for this exercise is as follows:
1. NetScaler VPX appliance: NS10_node1 (Existing HA node)
2. NetScaler VPX appliance: NS10_node2 (Existing HA node)
3. NetScaler VPX appliance: NS10_node3 (Independent node)

An HA pair has been pre-configured with a simple load balancing configuration:

enable ns feature WL SP LB
enable ns mode FR L3 Edge USNIP PMTUD
add ns ip 192.168.10.91 255.255.255.0 -vServer DISABLED
add HA node 1 192.168.10.110 / add HA node 1 192.168.10.120
add server GENTOO_1 192.168.10.13
add server GENTOO_2 192.168.10.14
add service GENTOO_1_HTTP_TCP_80 GENTOO_1 HTTP 80
add service GENTOO_2_HTTP_TCP_80 GENTOO_2 HTTP 80
add lb vserver HTTP_vserver_cluster HTTP 192.168.10.18 80
bind lb vserver HTTP_vserver_cluster GENTOO_1_HTTP_TCP_80
bind lb vserver HTTP_vserver_cluster GENTOO_2_HTTP_TCP_80


Our goal is to migrate to a cluster setup without losing any configuration.


Estimated time to complete this lab: 15 minutes.

Step 1. After XenCenter loads the available VMs, make sure only the following VMs are powered on, then proceed with the next step:
AD.training.lab
Apache_MySQL_1
Apache_MySQL_2
Apache_MySQL_3
EXT-WIN_7
NS10_node1
NS10_node2
NS10_node3
Vyatta-RTR
Win7Client

Step 2. In order to migrate to a cluster configuration, we need to break the existing HA pair. However, before removing the HA configuration, a few precautions need to be taken. Log in to the primary node (NS10_node1) and verify the HA status:

NS10_node1> show ha node
1) Node ID: 0   IP: 192.168.10.110 (NS10_node1)   Node State: UP   Master State: Primary ...
2) Node ID: 1   IP: 192.168.10.120   Node State: UP   Master State: Secondary ...
Done
NS10_node1>

Step 3. While on the primary node, back up the configuration:

> shell
# cp /nsconfig/ns.conf /nsconfig/ns.conf_beforeCluster


Step 4. As part of this exercise, we will verify that the HA disassociation does not disrupt traffic. For this purpose we will use a simple utility to constantly send HTTP requests to the virtual server already configured. On Apache_MySQL_3 (GENTOO_3), use wbox to send 5 GET requests per second to the load balancing virtual server. Leave this utility running.

gentoo_3 ~ # wbox 192.168.10.18 clients 5
WBOX 192.168.10.18 (192.168.10.18) port 80
0. 200 OK   319 bytes   3 ms
0. 200 OK   319 bytes   3 ms
0. 200 OK   319 bytes   4 ms
0. 200 OK   319 bytes   3 ms
0. 200 OK   319 bytes   4 ms
1. 200 OK   (318) bytes   2 ms
1. 200 OK   (318) bytes   2 ms
1. 200 OK   (318) bytes   2 ms
1. 200 OK   (318) bytes   2 ms
1. 200 OK   (318) bytes   2 ms

Step 5. Next, set the primary node to STAYPRIMARY and disable command propagation and synchronization in order to avoid any configuration loss or IP conflicts:

# exit
NS10_node1> set ha node -haStatus STAYPRIMARY -haProp DISABLED -haSync DISABLED
Done
NS10_node1>

Step 6. Log on to the secondary unit and clear the configuration to avoid IP conflicts when we break the HA pair:

NS10_node2> clear ns config full
Are you sure you want to clear the configuration(Y/N)? [N]:y
Done
NS10_node2>

Step 7. Log on to the primary and remove the secondary node from the HA configuration. On Apache_MySQL_3, inspect the output of the probes. Was any request lost? Verify the HA status on both units.

NS10_node1> rm ha node 1
Done
NS10_node1> show ha node
1) Node ID: 0   IP: 192.168.10.110 (NS10_node1)   Node State: STAYPRIMARY   Master State: Primary ...
NS10_node1>

NS10_node2> show ha node
1) Node ID: 0   IP: 192.168.10.120 (NS10_node2)   Node State: UP   Master State: Primary ...
NS10_node2>

Step 8. As a precaution, restore the original HA settings. (Since clustering does not use the HA configuration, this step might not be required; however, for completeness, we are reverting these settings to their defaults.)

NS10_node1> set ha node -haStatus ENABLED -haProp ENABLED -haSync ENABLED
Done
NS10_node1>

Step 9. At this stage, node 1 should have continued to serve traffic. We are ready to begin the initial cluster configuration. Continue with Exercise 2.

Summary
Key Takeaways

The key takeaways for this exercise are:

A high availability pair can be migrated to a cluster setup. This involves breaking the HA pair and clearing the configuration on the secondary.

NOTES

No loss of connectivity should occur during the HA dissociation. Unfortunately, upon creating the cluster instance, the configuration will be cleared. We must back up the configuration before migrating to a cluster setup to avoid any loss. A cluster setup is only functional if at minimum (n/2)+1 nodes, where n is the number of cluster nodes, are able to serve traffic. If that formula were applied to a two-node cluster, the cluster would fail if one node went down (n/2 + 1 = 2). A two-node cluster is an exception to the rule, and it is fully functional even if only one node is able to serve traffic. Creating a two-node cluster is the same as creating any other cluster: you add one node as the configuration coordinator and the other node as an additional cluster node.
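As a quick arithmetic illustration of the (n/2)+1 rule above (using integer division, and keeping the two-node exception in mind):
3-node cluster: (3/2)+1 = 2 nodes must be able to serve traffic
4-node cluster: (4/2)+1 = 3 nodes must be able to serve traffic
5-node cluster: (5/2)+1 = 3 nodes must be able to serve traffic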


Exercise 2: Initial cluster setup

Overview


A NetScaler cluster uses multiple mechanisms to efficiently distribute traffic across all nodes in the cluster. This can be achieved by using the following available methods:
ECMP: Equal Cost Multipath Routing
CLAG: Cluster Link Aggregation Channels
Linksets

Since we have a virtual lab infrastructure, in this exercise, we will perform the necessary configuration to bring up a cluster using a link set deployment which does not require switch or router configuration for the cluster to be fully operational.

Step by step guidance


The lab environment required for this exercise is as follows:
1. NetScaler VPX appliance: NS10_node1
2. NetScaler VPX appliance: NS10_node2
3. NetScaler VPX appliance: NS10_node3
4. Internal Windows 7 Workstation: Win7Client

Estimated time to complete this lab: 15 minutes.

Step 4. All three VPX appliances are already configured with an NSIP and a default route. Additionally, node1 will contain existing configuration from the previous HA setup. First, log in to each appliance and verify IP connectivity to the other appliances:

> ping 192.168.10.110
> ping 192.168.10.120
> ping 192.168.10.130

Step 5. A cluster topology introduces the concept of communication planes. There are four logical traffic flows in a cluster system that identify the relevant information passed between clients, servers, and nodes in the cluster:
1. Client data plane
2. Control plane
3. Server data plane
4. Back plane

Our topology (linkset) does not require separation of these planes, as they can all co-exist on the shared medium. However, for illustration purposes, a new virtual interface was configured in our environment to simulate the backplane. In XenCenter, verify that all 3 VPX appliances are connected to this logical segment. Continue with the next step.

Step 6. The next step is to add the first node to the cluster. This node will act as the configuration coordinator (CCO). Go back to node 1 (NS10_node1) and define the cluster instance:

NS10_node1> add cluster instance 1

Step 7. Add node 1 to the cluster instance. In order to define spotted SNIPs for each node later in this exercise, set the state to PASSIVE. Make sure you select interface 1/2 as the backplane interface:

NS10_node1> add cluster node 1 192.168.10.110 -state PASSIVE -backplane 1/2

Step 8. Enable the cluster instance. After executing this command, the configuration needs to be saved and the appliance rebooted.

Note: After enabling the cluster instance, the configuration is cleared internally using the clear ns config command. We will use the backup to restore the entities that are lost.

NS10_node1> enable cluster instance 1
Warning: The configuration must be saved and the system rebooted for these settings to take effect
NS10_node1> save ns config
NS10_node1> reboot -warm

Step 9. When node 1 comes back online, add the cluster IP:

NS10_node1> add ns ip 192.168.10.140 255.255.255.255 -type CLIP

Step 10. Verify the cluster instance and node:

NS10_node1> show cluster instance
1) Cluster ID: 1
   Dead Interval: 3 secs
   Hello Interval: 200 msecs
   Preemption: DISABLED
   Propagation: ENABLED
   Cluster Status: ENABLED(admin), ENABLED(operational), UP
   Member Nodes:
   Node ID   Node IP            Health   Admin State   Operational State
   1         192.168.10.110*    UP       PASSIVE       INACTIVE(CCO)
Done

NS10_node1> show cluster node
1) Node ID: 1   IP: 192.168.10.110*   Backplane: 1/1/2   Health: UP   Admin State: PASSIVE   Operational State: INACTIVE(CCO)   Sync State: ENABLED
Done
NS10_node1>

Step 11. From the internal Windows 7 workstation (Win7Client), SSH into the cluster IP (CLIP) using PuTTY (in our example, 192.168.10.140, currently owned by node1) and add node 2 to the cluster. Specify the state as PASSIVE and the backplane interface for node 2.

IMPORTANT: Make sure the cluster commands are performed from the cluster IP; otherwise, they will not be replicated to the other nodes and issues will arise. Even when logging in to the NSIP of the configuration coordinator (CCO), the commands will not replicate correctly.

NOTE: Cluster interface names are identified by a 3-tuple: NodeID / ControlledID / InterfaceID. For example, 2/1/2 refers to interface 1/2 on node 2.

NS10_node1> add cluster node 2 192.168.10.120 -state PASSIVE -backplane 2/1/2
Done
NS10_node1>

Step 12. While logged in via SSH to the cluster IP (CLIP), perform the same step for node 3:

NS10_node1> add cluster node 3 192.168.10.130 -state PASSIVE -backplane 3/1/2
Done
NS10_node1>

Step 13. On the same SSH session, verify the cluster node status using the show cluster node command. Note that node 2 and node 3 should report their status as UNKNOWN, since we have not completed the required configuration on these nodes.

NS10_node1> show cluster node
1) Node ID: 1   IP: 192.168.10.110*   Backplane: 1/1/2   Health: UP   Admin State: PASSIVE   Operational State: INACTIVE(CCO)   Sync State: ENABLED
2) Node ID: 2(Adding)   IP: 192.168.10.120   Backplane: 2/1/2   Health: UNKNOWN   Admin State: PASSIVE   Operational State: UNKNOWN   Sync State: UNKNOWN
3) Node ID: 3(Adding)   IP: 192.168.10.130   Backplane: 3/1/2   Health: UNKNOWN   Admin State: PASSIVE   Operational State: UNKNOWN   Sync State: UNKNOWN
Done
NS10_node1>

Step 14. While logged in to the cluster IP (CLIP), save the configuration:

NS10_node1> save ns config

Step 15. Next, log in to node 2 and join the cluster. Save the configuration and reboot:

NS10_node2> join cluster -clip 192.168.10.140 -password nsroot
Warning: The configuration must be saved and the system rebooted for these settings to take effect
NS10_node2> save ns config
Done
NS10_node2> reboot -warm

Step 16. Perform the same step on node 3:

NS10_node3> join cluster -clip 192.168.10.140 -password nsroot
Warning: The configuration must be saved and the system rebooted for these settings to take effect
NS10_node3> save ns config
Done
NS10_node3> reboot -warm

Step 17. After node 2 and node 3 are back online, log in to the cluster IP (CLIP) via SSH from the Windows 7 workstation (Win7Client). Display the cluster status by issuing the show cluster node command from the CCO. Since all the nodes joined the cluster in a PASSIVE state, the Operational State for all nodes should be INACTIVE.

NS10_node1> show cluster node
1) Node ID: 1   IP: 192.168.10.110*   Backplane: 1/1/2   Health: UP   Admin State: PASSIVE   Operational State: INACTIVE(CCO)   Sync State: ENABLED
2) Node ID: 2   IP: 192.168.10.120   Backplane: 2/1/2   Health: UP   Admin State: PASSIVE   Operational State: INACTIVE   Sync State: ENABLED
3) Node ID: 3   IP: 192.168.10.130   Backplane: 3/1/2   Health: UP   Admin State: PASSIVE   Operational State: INACTIVE   Sync State: SUCCESS
Done
NS10_node1>

Step 18. Go back to the cluster IP (CLIP) session and add spotted SNIPs for each of the nodes. Make sure you specify the -ownerNode parameter:

NS10_node1> add ns ip 192.168.10.91 255.255.255.0 -type SNIP -ownerNode 1
Done
NS10_node1> add ns ip 192.168.10.92 255.255.255.0 -type SNIP -ownerNode 2
Done
NS10_node1> add ns ip 192.168.10.93 255.255.255.0 -type SNIP -ownerNode 3
Done
NS10_node1>

Step 19. From the cluster IP (CLIP) SSH session, display the cluster system IPs:

NS10_node1> show ip

NOTE: Every node in the cluster should show all the IPs in the cluster system. However, all configuration should be performed using the cluster IP.

Step 20. From the same SSH session, set the node state on all nodes to ACTIVE:

NS10_node1> set cluster node 1 -state ACTIVE
Done
NS10_node1> set cluster node 2 -state ACTIVE
Done
NS10_node1> set cluster node 3 -state ACTIVE
Done
NS10_node1>

Step 21. Verify that all nodes are ACTIVE:

NS10_node1> show cluster node
1) Node ID: 1   IP: 192.168.10.110*   Backplane: 1/1/2   Health: UP   Admin State: ACTIVE   Operational State: ACTIVE(CCO)   Sync State: ENABLED
2) Node ID: 2   IP: 192.168.10.120   Backplane: 2/1/2   Health: UP   Admin State: ACTIVE   Operational State: ACTIVE   Sync State: ENABLED
3) Node ID: 3   IP: 192.168.10.130   Backplane: 3/1/2   Health: UP   Admin State: ACTIVE   Operational State: ACTIVE   Sync State: SUCCESS
Done
NS10_node1>

Step 22. OPTIONAL: If any of the nodes appear as NOT UP and are currently out of sync, remove and re-add them to the cluster. Log in to the problematic node's NSIP (in our example, node 3) and remove the cluster instance. Then re-join the node to the cluster to force a full synchronization.

NS10_node1> show cluster node
1) Node ID: 1   IP: 192.168.10.110*   State: ACTIVE   Backplane: 1/1/2   Health: UP   Master State: ACTIVE(CCO)   Sync State: ENABLED
2) Node ID: 2   IP: 192.168.10.120*   State: ACTIVE   Backplane: 2/1/2   Health: UP   Master State: ACTIVE   Sync State: SUCCESS
3) Node ID: 3   IP: 192.168.10.130   State: ACTIVE   Backplane: 3/1/2   Health: NOT UP
   Reason(s): Service state is being synchronized with the cluster
   Master State: INACTIVE   Sync State: ENABLED
Done
NS10_node1>

NS10_node3> rm cluster instance 1
Done
NS10_node3> join cluster -clip 192.168.10.140 -password nsroot
Warning: The configuration must be saved and the system rebooted for these settings to take effect
NS10_node3> save ns config
Done
NS10_node3> reboot -warm
Are you sure you want to restart NetScaler (Y/N)? [N]:y
NS10_node3>

Step 23. From the cluster IP (CLIP) SSH session, verify that all IPs defined so far are ACTIVE.

Step 24. Next, we need to define the distribution method for the cluster system. Since this lab environment is completely virtualized, we will use the linkset deployment type, as this does not require any router or switch configuration. First, while logged on to the cluster IP (CLIP), create the linkset definition. Then bind all the interfaces connected to the linkset. In our example, the VPX appliances have interface n/1/1 connected to the upstream virtual switch (where n is the node ID); we will select these interfaces in our linkset configuration.

NS10_node1> add linkset LS/1
Done
NS10_node1> bind linkset LS/1 -ifnum 1/1/1
Done
NS10_node1> bind linkset LS/1 -ifnum 2/1/1
Done
NS10_node1> bind linkset LS/1 -ifnum 3/1/1
Done
NS10_node1>

Step 25. Verify that the linkset binding is correct:

NS10_node1> show linkset LS/1
1) Id: LS/1
   Interfaces: 1/1/1 2/1/1 3/1/1
Done
NS10_node1>

Step 26. While logged on to the cluster IP (CLIP), save the cluster configuration:

NS10_node1> save ns config

Step 27. The cluster is now fully operational.

Summary
Key Takeaways

The key takeaways for this exercise are:

New logical and physical design requirements must be considered before deploying a cluster system. Communication traffic flows between the nodes in the cluster, the clients, and the servers have to be carefully planned in order to avoid issues during implementation.

New entity types are available in a cluster system. Spotted entities are only active on a single node. Striped entities are available on all nodes participating in the cluster.

NOTES

One example of a spotted entity is the NSIP of each system. Additionally, in our previous configuration, we defined SNIPs as spotted entities, and each node in the cluster will use these IPs to contact backend servers. Spotted SNIPs are recommended by Citrix in order to avoid ARP flux issues. After enabling the cluster instance and joining a node to the cluster setup, the configuration is lost and all entities have to be redefined. It is very important that the configuration is backed up before adding a node to the cluster.


Exercise 3: Cluster load balancing configuration


Overview
In a cluster system, most of the core NetScaler features are supported. Virtual server entities are configured from the configuration coordinator which is the system that owns the cluster IP. The election of the configuration coordinator is done automatically by the Quorum Service Provider (cluster) protocol upon cluster setup. If the configuration coordinator suffers a failure, the QSP protocol will elect a new CCO. In this exercise we will configure a load balancing virtual server striped entity. We will explore traffic flow between members of the cluster and force a CCO failure while serving traffic to illustrate how the system behaves.

Step by step guidance


The lab environment required for this exercise is as follows:
1. NetScaler VPX appliance: NS10_node1
2. NetScaler VPX appliance: NS10_node2
3. NetScaler VPX appliance: NS10_node3
4. Apache_MySQL_1 (GENTOO_1)
5. Apache_MySQL_2 (GENTOO_2)
6. Internal Windows 7 Workstation: Win7Client

Estimated time to complete this lab: 30 minutes.

Step 1. From the Windows 7 workstation (Win7Client), log in to the cluster IP (CLIP) and enable the load balancing feature and USNIP mode:

NS10_node1> enable ns feature LB
NS10_node1> enable ns mode USNIP

Step 2. While connected to the cluster IP (CLIP), configure HTTP and MySQL servers and services for servers 1 and 2 (Apache_MySQL_1 / Apache_MySQL_2).

Note: The HTTP virtual servers and services need to be redefined, as they were lost after joining the initial node to the cluster. Use the initial configuration backup to restore these settings.

NS10_node1> add server GENTOO_1 192.168.10.13
Done
NS10_node1> add server GENTOO_2 192.168.10.14
Done
NS10_node1> add service GENTOO_1_HTTP_TCP_80 GENTOO_1 HTTP 80
Done
NS10_node1> add service GENTOO_2_HTTP_TCP_80 GENTOO_2 HTTP 80
Done
NS10_node1> add service GENTOO_1_MYSQL_TCP_3306 GENTOO_1 MYSQL 3306
Done
NS10_node1> add service GENTOO_2_MYSQL_TCP_3306 GENTOO_2 MYSQL 3306
Done

Step 3. From the same SSH session to the cluster IP (CLIP), create a new virtual server for HTTP and one for MySQL.

Note: Make sure the configuration commands are executed while connected to the cluster IP (CLIP). Even when connecting to the CCO NSIP, the commands are not replicated to all members of the cluster. If entities are created locally by mistake, remove them and re-add them using the cluster IP (CLIP).

NS10_node1> add lb vserver HTTP_vserver_cluster HTTP 192.168.10.18 80
Done
NS10_node1> bind lb vserver HTTP_vserver_cluster GENTOO_1_HTTP_TCP_80
Done
NS10_node1> bind lb vserver HTTP_vserver_cluster GENTOO_2_HTTP_TCP_80
Done
NS10_node1> add lb vserver MYSQL_vserver_cluster MYSQL 192.168.10.18 3306
Done
NS10_node1> bind lb vserver MYSQL_vserver_cluster GENTOO_1_MYSQL_TCP_3306
Done
NS10_node1> bind lb vserver MYSQL_vserver_cluster GENTOO_2_MYSQL_TCP_3306
Done
NS10_node1>

Step 4. Continue on the SSH session to the cluster IP (CLIP). Verify that the virtual server definitions and bindings were replicated to the other nodes: log in to each node's NSIP and execute a show lb vserver command, as shown below.
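For example, the check on each node can be as simple as the command below, run against the HTTP virtual server created in step 3 (the exact output layout may vary between builds):

> show lb vserver HTTP_vserver_cluster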


Step 5. The cluster system uses a distributed monitoring infrastructure. One node in the cluster is elected as the monitoring owner and is responsible for notifying the CCO of the service health status. In turn, the CCO notifies the rest of the nodes in the cluster. To verify which node is the monitoring owner, inspect the output of the show service <service_name> command from the SSH session to the cluster IP (CLIP). The output in your environment could be different, as a different monitoring owner may have been selected. An example command is shown below.
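A minimal example, using one of the service names defined in step 2 (treat the exact field that names the owning node as build-dependent):

> show service GENTOO_1_HTTP_TCP_80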


Step 6. Now we will trace the traffic path when generating a GET request from the Windows 7 workstation (Win7Client). For simplicity, bind a simple PING monitor to the services; the purpose is to avoid cluttering the TCPDUMP output with probe traffic.

Important: Make sure you perform all configuration operations from the SSH session to the cluster IP (CLIP).

NS10_node1> bind service GENTOO_1_HTTP_TCP_80 -monitorName PING
Done
NS10_node1> bind service GENTOO_2_HTTP_TCP_80 -monitorName PING
Done

Step 7. Go back to each node's SSH session and exit to the shell. Then start a new TCPDUMP on each node using the capture filter provided:

> shell
# nstcpdump.sh -ne arp | grep "192.168.10.18"

Step 8. From the Windows workstation (Win7Client), open the browser and navigate to the VIP previously configured (192.168.10.18). Observe the TCPDUMP output on each node.

Step 9. When configuring a linkset, the cluster defines an ARP owner for every striped IP configured. One of the connected nodes is responsible for replying to ARP requests for a VIP. Upon receiving a request, the ARP owner returns its own MAC address and receives the initial traffic flow. Inspect the TCPDUMP running on each of the nodes to find out which node is the ARP owner. In our example, NS10_node1 (CCO) is the ARP owner.

Step 10. Verify which interface owns the MAC address given in the ARP reply. From the cluster IP (CLIP) SSH session, look at the output of the show interface command. Which interface owns it?

Step 11. Next, we will verify the traffic flow between the client, the server, and the nodes in the cluster. From the SSH session to each node's NSIP, stop the previous capture and start a new TCPDUMP. This time, we will use a more specific filter to capture packets destined to the VIP or any of the backend servers on port 80, as well as ARP traffic. Additionally, with the -e flag, the output will display the corresponding source and destination MAC addresses.

> shell
# nstcpdump.sh -ne "((host 192.168.10.18 or host 192.168.10.13 or host 192.168.10.14) and tcp port 80) or arp"

Step 12. From the Win7Client workstation, refresh the page.

Step 13. Observe the TCPDUMP output on each node. Which node receives the initial flow? Which node contacts the backend server?

In a linkset configuration, traffic flows from the client IP to the VIP through the MAC owner. The cluster then selects the node that will process the request (the flow processor) and forwards the request to this node over the backplane interface. Next, the flow processor contacts the backend server and fetches the response. Over the same backplane interface, the response is returned to the ARP owner, who in turn sends it back to the client IP.

Note: During your testing, the traffic flow might be different depending on which node is selected as the flow receiver/processor. Inspect the source/destination MAC addresses to understand how traffic is flowing in your lab environment.

[Screenshots: TCPDUMP output captured on Node1, Node2, and Node3.]

Step 14. For node-to-node communication, the backplane interface is utilized and a shared MAC address is used to identify steered traffic. This MAC has the following format:

02:00:6F:<cluster_id>:<node_id>:<flags>

In the TCPDUMP output from step 11, inspect the source and destination MAC addresses used between nodes.
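As a hedged illustration of that format: with the cluster instance ID of 1 used in this lab and a frame steered to node 2, the backplane MAC would take a form like the one below, so the fourth and fifth octets identify the cluster instance and the destination node (the flags octet depends on the traffic type):

02:00:6F:01:02:<flags>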

Step 15. OPTIONAL: To further illustrate the traffic flow, we will send sequential GET requests to the VIP and observe the behavior. From Apache_MySQL_3 (GENTOO_3), start the wbox utility to send 1 request per second to the VIP:

gentoo_3 ~ # wbox 192.168.10.18
WBOX 192.168.10.18 (192.168.10.18) port 80
0. 200 OK   318 bytes   2 ms
1. 200 OK   318 bytes   2 ms

From the Windows 7 workstation (Win7Client), open an SSH session to each node's NSIP and start a TCPDUMP using the same capture filter as in step 11:

# nstcpdump.sh -ne "((host 192.168.10.18 or host 192.168.10.13 or host 192.168.10.14) and tcp port 80) or arp"

Observe the traffic flow for the incoming requests.
1. Is the ARP owner always the same? Why?
2. Is the flow processor always the same?

Step 16. Now we will simulate a node failure by rebooting one of the nodes in the cluster. While leaving the wbox utility running to generate constant HTTP probes (refer to the previous step), reboot one of the nodes in the cluster:

NS10_node3> reboot -warm
Are you sure you want to restart NetScaler (Y/N)? [N]:y
Done
NS10_node3>

Step 17. When the node selected for reboot goes offline, a small delay might be perceived by the client while connections are redirected to the other nodes. Inspect the probe output. Was any HTTP request lost during your testing?

gentoo_3 ~ # wbox 192.168.10.18
...
65. 200 OK   318 bytes   2 ms
66. 200 OK   318 bytes   3 ms
67. 200 OK   318 bytes   3 ms
68. 200 OK   318 bytes   2 ms
69. 200 OK   318 bytes   3 ms
70. 200 OK   318 bytes   3 ms
71. 200 OK   318 bytes   3295 ms
72. 200 OK   318 bytes   2 ms
73. 200 OK   318 bytes   2 ms
74. 200 OK   318 bytes   1 ms
75. 200 OK   318 bytes   1 ms

Step 18. After node 3 comes back online, it rejoins the cluster automatically and connections start to flow through this node once again. From the Windows 7 workstation (Win7Client), go back to the SSH session to the cluster IP (CLIP) and verify the output of show cluster node. What is the status of the rebooted node? Did it re-join the cluster?

Step 19. From the same workstation, go back to the SSH session to node3. Stop any running trace and start a new TCPDUMP to inspect the traffic flow, using the command provided below. Is there any traffic flowing through this node?

# nstcpdump.sh -ne "((host 192.168.10.18 or host 192.168.10.13 or host 192.168.10.14) and tcp port 80) or arp"

Step 20. Now, to simulate a configuration coordinator failure, we will reboot the CCO node while sending HTTP probes. From the SSH session to the cluster IP (CLIP), reboot the CCO while running the wbox utility from server 3 (GENTOO_3). From the CCO node, execute:

NS10_node1> reboot -warm
Are you sure you want to restart NetScaler (Y/N)? [N]:y
Done
NS10_node1>

From server 3 (GENTOO_3), observe the wbox output:

395. 200 OK   318 bytes   2 ms
396. 200 OK   318 bytes   2 ms
397. 200 OK   318 bytes   3 ms
398. 200 OK   318 bytes   3 ms
399. 200 OK   318 bytes   2 ms
400. 200 OK   318 bytes   7012 ms
401. 200 OK   318 bytes   2 ms
402. 200 OK   (319) bytes   4 ms
403. 200 OK   (318) bytes   2 ms
404. 200 OK   318 bytes   3 ms
405. 200 OK   318 bytes   2 ms
406. 200 OK   318 bytes   3 ms
407. 200 OK   318 bytes   2 ms
408. 200 OK   318 bytes   2 ms

The cluster system will immediately start the CCO election process and select a new configuration coordinator (CCO). A small delay might be perceived by the client while this is in progress.

Note: Since our cluster deployment uses a linkset, one of the nodes is the ARP owner. In step 9 of this example, we found that node1 (the rebooted node) was the ARP owner. Therefore, a new node is selected for this function as well.

Step 21. From the Windows 7 workstation (Win7Client), reconnect to the cluster IP while the previous node is rebooting. Execute the show cluster node command. Was the connection successful? Was a new CCO elected?

Step 22. So far, our HTTP vserver had no persistence settings, and the least-connections method was used by default. Now we will explore other load balancing algorithms and how persistence state is replicated in a cluster system. From the SSH session to the cluster IP (CLIP), change the LB method to round robin and select source IP persistence:

NS10_node2> set lb vserver HTTP_vserver_cluster -lbMethod ROUNDROBIN -persistenceType SOURCEIP
Done
NS10_node2>

Step 23. Go back to the SSH session to Apache_MySQL_3 (GENTOO_3). Using our probing tool, send HTTP requests to the VIP in order to generate constant traffic.

Note: If you followed the previous steps, the wbox utility should still be running. Verify that this is the case.

gentoo_3 ~ # wbox 192.168.10.18
WBOX 192.168.10.18 (192.168.10.18) port 80
0. 200 OK   318 bytes   1 ms
1. 200 OK   318 bytes   1 ms

Step 24. Go back to the SSH session to the cluster IP (CLIP) on the Windows 7 workstation (Win7Client). Inspect the persistence table by executing the show persistentSessions command. Perform the same command on the other two nodes. Are the persistence entries the same on all nodes in the cluster?

> show persistentSessions

Step 25. From the SSH session to the cluster IP (CLIP), look at the timeout for the persistence entry created. Is the timeout value the same across all nodes? (An optional example of adjusting this timeout follows.)
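Optionally (not required for the lab): the persistence timeout is a property of the virtual server, so it could be adjusted from the cluster IP with something like the command below. The -timeout value is in minutes; verify the exact syntax with help set lb vserver on your build:

> set lb vserver HTTP_vserver_cluster -timeout 5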

Step 26. This concludes our load balancing configuration. Next, we will look at the new packet capture options available in a cluster system.


Summary
Key Takeaways

The key takeaways for this exercise are:

A NetScaler cluster system provides a scalable and fault-tolerant solution. If any of the components fail, another member of the cluster will handle the load or the roles assigned.

NOTES

Load balancing persistence is accomplished with hash tables shared across all cluster nodes. Even though traffic does not flow through the same node for every incoming flow, the same destination server is selected. NetScaler clustering uses an extension of the existing persistence algorithms available in nCore technology; the extension applies them across independent systems that process traffic in the same fashion as packet engines.


Exercise 4: Packet capture analysis in a cluster system


Overview
In a cluster system, traffic flows can be complex depending on the deployment type and environment. In order to help troubleshoot networking issues, additional markings are included in captured packets when traces are taken in native format. This feature was already available in previous releases; however, with the introduction of clustering, additional markings are needed to aid packet capture analysis. In this exercise, we will capture a trace in native format and examine these markings. Keep in mind that a special version of a protocol analyzer is required, as it needs to understand the native NetScaler format; the latest Wireshark version already includes these changes. For this exercise, the required application version has already been preloaded on the workstation, so no additional download is required.

Step by step guidance


The lab environment required for this exercise is as follows:
1. NetScaler VPX appliance: NS10_node1
2. NetScaler VPX appliance: NS10_node2
3. NetScaler VPX appliance: NS10_node3
4. Apache_MySQL_1 (GENTOO_1)
5. Apache_MySQL_2 (GENTOO_2)
6. Apache_MySQL_3 (GENTOO_3)
7. Windows 7 Workstation: Win7Client

Estimated time to complete this lab: 20 minutes.


Step 1. From the Windows 7 workstation (Win7Client), log in to the cluster IP (CLIP) GUI using any browser installed on the workstation.

Step 2. Navigate to Configuration -> System -> Diagnostics -> Technical Support Tools -> Start new trace.

Step 3. Verify that no TCPDUMP sessions are still running individually on each node; stop them if necessary. Then proceed to start a new trace with the following settings:
Packet size = 0
Trace file format = nstrace
Leave the other settings at their defaults and click Start.
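As an aside, the equivalent trace can typically also be started and stopped from the cluster IP command line; a minimal sketch (flags assumed from the standard nstrace CLI, verify with help start nstrace on your build):

> start nstrace -size 0
> show nstrace
> stop nstrace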


Step 4. From the same workstation (Win7Client), open a different browser and navigate to the HTTP virtual server in order to generate some traffic to be captured. Since a web application is pre-installed on all servers bound to the VIP (Apache_MySQL_1 and Apache_MySQL_2), use the URL below to access it:

http://192.168.10.18/phpmyadmin
Login as: U: root  P: Citrix123

Step 5. After logging in to the web application, go back to the NetScaler trace utility and stop the trace. After stopping the running trace, click the download button to open the file browser.

Step 6. Whenever a trace is captured, a new directory is created containing the trace files for each node. Select the folder with the correct timestamp, click the zip utility to compress the archive, and then hit Select.

Step 7. Download the compressed archive to the administrator account's desktop.

Step 8. Close the trace utility. Extract the contents of the compressed archive to the workstation's desktop.

Step 9. Inspect the contents of the new directory. You should see a subfolder for every node in the cluster. Open any of the trace files and filter by HTTP.

Step 10. Expand any of the frames and inspect the NetScaler Packet Trace layer. Note the additional markings included in each packet. Some of these are:
Source Node
Destination Node
Cluster Flags

Step 11. In our example, frame 3267 was received by node 1 and steered to node 2 for further processing. Look at the source and destination MAC addresses to verify this. From the same workstation (Win7Client), open an SSH session, log in to the cluster IP (CLIP), and compare against the output of the show interface command.

Step 12. As you can see, the source MAC address is the virtual MAC used by all nodes in the cluster. In order for the correct node to process the traffic, the packet must be steered to the corresponding owner node (in this case node2). The receiving node (Flow Receiver) forwards the packet to the destination node (Flow Processor) to serve the request.

Step 13. The elegance of the cluster architecture lies in its active/active configuration: any node in the traffic path can receive traffic. This complicates things a bit, as each trace will contain only chunks of the traffic relevant to a particular request, and it is hard to troubleshoot networking issues across multiple trace files. To solve this problem, NetScaler 10 offers a new command line utility, nstracemerge.sh, available from the shell, to merge all trace files into a single capture.

Step 14. Let's merge the previously captured trace files. From the Windows 7 workstation (Win7Client), log in to the cluster IP (CLIP) using SSH. Access the shell prompt and navigate to the directory where the trace was captured:

/var/nstrace/DDMonthYYYY_hh_mm_ss

NOTE: The folder is labeled using a timestamp format. See the example below.

> shell
# cd /var/nstrace/19Sep2012_03_16_59/

Step 15. From the SSH session, invoke the nstracemerge.sh utility with the default parameters. This will use the current directory as both source and destination for the input/output files. The result will be a single trace file named nstrace0.cap containing the contents of all individual traces:

root@NS10_node2# pwd
/var/nstrace/17Apr2012_21_55_47
root@NS10_node2# ls -alth
drwxr-xr-x  5 root  wheel  512B Apr 18 01:38 .
drwxr-xr-x  4 root  wheel  512B Apr 17 22:42 ..
drwxr-xr-x  2 root  wheel  512B Apr 17 21:56 node2
drwxr-xr-x  2 root  wheel  512B Apr 17 21:55 node3
drwxr-xr-x  2 root  wheel  512B Apr 17 21:55 node1
root@NS10_node2# nstracemerge.sh
nstrace merge
source directory : .
destination directory : .
output file name : nstrace
current working directory [.] :/var/nstrace/17Apr2012_21_55_47
Files in source directory:
./node1/nstrace1_node1.cap
./node2/nstrace1_node2.cap
./node3/nstrace1_node3.cap
root@NS10_node2# ls -alth
total 2378
drwxr-xr-x  5 root  wheel  512B Apr 18 01:46 .
-rwx------  1 root  wheel  2.3M Apr 18 01:46 nstrace0.cap
drwxr-xr-x  4 root  wheel  512B Apr 17 22:42 ..
drwxr-xr-x  2 root  wheel  512B Apr 17 21:56 node2
drwxr-xr-x  2 root  wheel  512B Apr 17 21:55 node3
drwxr-xr-x  2 root  wheel  512B Apr 17 21:55 node1
root@NS10_node2#

Step 16. From the Windows 7 workstation (Win7Client), use WinSCP to copy the merged trace file to the desktop.

Step 17. From the Windows 7 workstation (Win7Client), open the file. Verify the size. Is the trace larger than the individual pieces? Does it contain more packets?

Step 18. In Wireshark, specify a display filter for the VIP and HTTP traffic. Use the filter provided below:

ip.addr==192.168.10.18 && http

Step 19. Continue in Wireshark and select the first packet. In the packet details, expand the NetScaler Packet Trace sub-node. Starting from the first packet displayed, continue highlighting the following packets. Observe the Source / Destination Node fields as well as the Flow Processor / Flow Receiver flags. Are all packets processed by the same node?

Step 20. Additional display filters have been introduced in the new Wireshark release. We now have the ability to filter packets based on cluster flags, source/destination node, etc. In order to find out which packets were received, processed, or steered by a certain cluster node, we can leverage some of the following filters:

nstrace.snode
nstrace.dnode
nstrace.flags
nstrace.flags.rssh
nstrace.flags.srss
nstrace.flags.dfd
nstrace.flags.fr
nstrace.flags.fp

For example, to find the packets sent by the client and received by node1, we can use the following filter:

ip.addr==192.168.10.18 && ip.addr==192.168.10.233 && nstrace.snode==-1 && nstrace.dnode==1

In Wireshark, apply the filter provided to the merged trace. Were there any packets received by node1?

Step 21. Change the destination node filter to either node 2 or node 3. Was any packet received by either of these nodes? Why or why not?

NOTE: Since we are using a linkset configuration, one node is selected to be the ARP owner. This means that all incoming packets for a single VIP will be received by that node. If you select a different destination node in the filter, you should see that only 1 of the 3 nodes in the cluster receives incoming packets.

Step 22. This concludes this exercise. Next, we will explore other traffic distribution methods by extending our lab environment to a remote network and configuring ECMP.

Summary
Key Takeaways

The key takeaways for this exercise are:

Traffic flowing through a cluster system can add complexity when troubleshooting a network issue. NetScaler 10 introduces new options when capturing network traces in a cluster system to identify the nodes processing traffic, thus simplifying the analysis process. These additional packet markings are only available when capturing a trace in native format, and a special version of a protocol analyzer is required to analyze such captures. Citrix has submitted the changes to the Wireshark organization, and the latest Wireshark build contains the necessary changes to parse such captures.

NOTES

An additional utility is available in NetScaler 10 to merge all trace files generated by each node. This simplifies the analysis process. Administrators familiar with NetScaler traces should find the new options familiar as they extend the existing tracing capabilities of a NetScaler appliance.


Exercise 5: ECMP configuration

Overview


In a previous exercise, we used a linkset configuration as the traffic distribution mechanism. A linkset configuration is easily implemented, as it does not require any changes on the upstream device, making it suitable for deployment environments where administrative control of all network devices is not possible. This distribution method has one caveat: traffic destined to a single VIP will only pass through the single node selected as the MAC address owner, potentially creating a bottleneck if a large amount of traffic surges toward that IP. In addition to linksets, other traffic distribution methods are available to mitigate this limitation:
ECMP: Equal Cost Multipath Routing
CLAG: Cluster Link Aggregation

In this lab, we will implement an ECMP configuration. For this we will use OSPF as the dynamic routing protocol to advertise the VIP host routes to an intermediate router, giving the remote network access to the virtual servers hosted by the cluster. The following diagram will be used during this setup:

[Topology diagram: the public network 192.168.10.X/24 hosts the DNS server (.11), INT-Win_7, the Apache servers (.13, .14, .15), and the cluster nodes NS10_node1 (.110), NS10_node2 (.120), and NS10_node3 (.130). The internal router joins the 172.16.1.0/24 segment to the remote network 172.16.2.0/24, where EXT-Win_7 (.2) resides.]

Step by step guidance


The lab environment required for this exercise is as follows:
1. NetScaler VPX appliance: NS10_node1
2. NetScaler VPX appliance: NS10_node2
3. NetScaler VPX appliance: NS10_node3
4. Apache_MySQL_1 (GENTOO_1)
5. Apache_MySQL_2 (GENTOO_2)
6. Internal Windows 7 workstation: Win7Client
7. Remote network Windows 7 workstation: EXT-Win_7
8. Vyatta Internal Router: vyatta-RTR

Estimated time to complete this lab: 30 minutes.

Step 1. The 3-node cluster created in a previous exercise should be fully operational. However, it still uses a linkset configuration. Since we are going to configure a fully connected ECMP setup, from the Windows 7 workstation (Win7Client) establish an SSH session to the cluster IP (CLIP) and remove the linkset definition:

NS10_node2> rm linkset LS/1
Done

Step 2. To start with our ECMP configuration, we must establish connectivity from the NetScaler cluster to the remote network through the internal Vyatta router. In the next step, we will create the necessary interfaces to establish this connectivity. Check the diagram provided to get familiar with the logical topology.

[Topology diagram: NS10_node1, NS10_node2, and NS10_node3 connect to the 172.16.1.0/24 segment, which the internal router (172.16.1.1) joins to the remote network 172.16.2.0/24, where EXT-Win_7 (172.16.2.2) resides.]

After you have reviewed the topology diagram, continue to the next step.

Step 3. Using XenCenter, add a new interface to nodes 1, 2, and 3. Bind this interface to the To_172-16-1-0 network on each node.

Step 4. All three NetScaler cluster nodes should now have the following networks attached. The newly bound network will provide the connectivity needed to reach the remote 172.16.2.0/24 subnet.

Step 5. From the Windows 7 workstation (Win7Client), SSH into the cluster IP (CLIP) and enable OSPF:

NS10_node2> enable ns feature OSPFRouting
Done
NS10_node2>

Step 6. Next, we will create new subnet IPs (SNIPs) on the new network space (172.16.1.0/24). We will turn on dynamic routing on these IP addresses in order to peer with the Vyatta router and form an OSPF adjacency. From the SSH session to the cluster IP (CLIP), create 3 different SNIPs, one assigned to each node. Make sure you turn on the dynamic routing option. Use the following IPs:
172.16.1.11 - node1
172.16.1.12 - node2
172.16.1.13 - node3

NS10_node2> add ns ip 172.16.1.11 255.255.255.0 -ownerNode 1 -dynamicRouting ENABLED
NS10_node2> add ns ip 172.16.1.12 255.255.255.0 -ownerNode 2 -dynamicRouting ENABLED
NS10_node2> add ns ip 172.16.1.13 255.255.255.0 -ownerNode 3 -dynamicRouting ENABLED

Step 7. From each node's SSH session (to the NSIP), return to the NetScaler CLI and verify connectivity to the Vyatta router's eth1 interface (172.16.1.1) using PING; an example follows the note below.

NOTE: Since we manipulated the network interfaces on the NetScaler VPX instances while they were running, it is possible that ICMP replies are not received. If this is the case, reboot each of the cluster nodes as well as the Vyatta router VM and retry the connectivity test.
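For example, mirroring the ping commands used in Exercise 2:

> ping 172.16.1.1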

Step 8. From the Windows 7 workstation (Win7Client), return to the SSH session to the cluster IP (CLIP) and enable the dynamic routing options on the VIP (created in a previous exercise) to inject the host route into OSPF. Additionally, set the vserver Route Health Injection level to ONE_VSERVER, so that the route is withdrawn from the routing table when the virtual servers on this VIP go down:

NS10_node2> set ns ip 192.168.10.18 -hostRoute ENABLED -vserverRHILevel ONE_VSERVER
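To confirm the change, the IP can be inspected from the CLIP; a simple check (output layout varies by build):

> show ns ip 192.168.10.18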


Step 9. Next, we will configure the intermediate router to run OSPF and set up an adjacency with the NetScalers in the cluster. Log in to the Vyatta router console (vyatta-RTR) using the default username and password (vyatta/vyatta). Switch to configuration mode and start the OSPF instance on area 0. Then add all the new subnet IPs (created in the previous step) as OSPF neighbors with the same priority and polling interval.

Enter configuration mode:

vyatta@vyatta:~$ configure

Once in configuration mode, enable OSPF and configure the NetScaler nodes as neighbors:

vyatta@vyatta# set protocols ospf area 0 network 172.16.1.0/24
vyatta@vyatta# set protocols ospf neighbor 172.16.1.11 poll-interval 60
vyatta@vyatta# set protocols ospf neighbor 172.16.1.11 priority 0
vyatta@vyatta# set protocols ospf neighbor 172.16.1.12 poll-interval 60
vyatta@vyatta# set protocols ospf neighbor 172.16.1.12 priority 0
vyatta@vyatta# set protocols ospf neighbor 172.16.1.13 poll-interval 60
vyatta@vyatta# set protocols ospf neighbor 172.16.1.13 priority 0

Step 10. Commit and save the configuration for the changes to take effect:

vyatta@vyatta# commit
vyatta@vyatta# save

Step 11. The next step is to configure the OSPF instance on the NetScaler appliances. NetScaler uses the ZebOS protocol suite to manage the dynamic routing protocol configuration. From the Windows 7 workstation (Win7Client), go back to the SSH session to the cluster IP (CLIP) and access the ZebOS shell:

NS10_node2> vtysh

Step 12. Access the ZebOS configuration mode and start the OSPF instance. Add the network adjacent to the Vyatta router on area 0:

NS10_node2# conf t
NS10_node2(config)# router ospf
NS10_node2(config-router)# network 172.16.1.0 0.0.0.255 area 0

Step 13. From the ZebOS shell, display the routing table to ensure the host route is marked as a kernel route. The VIP/32 route should be displayed:

NS10_node2(config-router)# exit
NS10_node2(config)# exit
NS10_node2# show ip route

Step 14. Next, show the OSPF neighbors to ensure the proper adjacencies were established. The Vyatta router as well as the other two NetScalers in the cluster should show up as neighbors:

NS10_node2# show ip ospf neighbor

Step 15. Verify the same adjacencies on the Vyatta router. From the Vyatta router console, display the OSPF neighbors.

NOTE: In order for the show commands to work correctly, make sure you exit out of configuration mode.

vyatta@vyatta# exit
exit
vyatta@vyatta:~$ show ip ospf neighbor

Step 16. We are now ready to redistribute the host routes into OSPF. The idea is for the NetScaler to advertise these routes so that the Vyatta router includes them in its routing table. This way, hosts on the remote network will be able to access the striped VIP through the advertised route. From the Windows 7 workstation (Win7Client), go back to the cluster IP (CLIP) SSH session; the ZebOS shell should still be available. Redistribute the kernel routes into the OSPF instance:

node0# conf t
node0(config)# router ospf
node0(config-router)# redistribute kernel
node0(config-router)# exit
node0(config)# exit
node0# exit

Step 17. Go back to the Vyatta router console and display the routing table. Was the host route injected? Are there multiple routes? Do they have the same cost? Do they show the proper next hop IPs?

vyatta@vyatta:~$ show ip route

Step 18. From the Windows 7 workstation on the remote network (EXT-Win_7), attempt to PING or access the virtual server (192.168.10.18) via HTTP. Does the VIP respond with ICMP replies? Why not?

Step 19. At this point, the Vyatta router has specific routes to reach the virtual server and traffic is being forwarded to one of the nodes in the cluster; however, the NetScaler cluster does not know how to route back to the remote network.

[Diagram: EXT-Win_7 on 172.16.2.0/24 sends an ICMP echo request toward the HTTP/MySQL vservers at 192.168.10.18; the internal router forwards it over 172.16.1.0/24 to one of the cluster nodes, but the reply fails with "destination unreachable" because the cluster has no return route.]

Step 20. From the internal network Windows workstation (Win7Client), go back to the cluster IP (CLIP) SSH session and display the routing table. Is there a route to the remote network (172.16.2.x/24)? Why not?

NS10_node2> show route

Step 21. To get the correct return route into the NetScaler cluster routing table, we have two options: install a static route, or use OSPF and have the router redistribute its routes. (A sketch of the static-route alternative is shown at the end of this step.) Since there is an OSPF adjacency already established, we will configure the Vyatta router to redistribute its connected routes. Log in to the Vyatta router console and execute the commands provided below for the router to advertise its routes:

vyatta@vyatta$ configure
vyatta@vyatta# set protocols ospf redistribute connected
vyatta@vyatta# commit
vyatta@vyatta# save
vyatta@vyatta# exit
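For reference only, the static-route alternative mentioned in this step would look roughly like the command below on the cluster, using the router's 172.16.1.1 interface as the gateway (a sketch; the lab continues with the OSPF approach instead):

> add route 172.16.2.0 255.255.255.0 172.16.1.1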

Step 22. Go back to the cluster IP (CLIP) SSH session and inspect the NetScaler cluster routing table. Why is the route still NOT available?

Step 23. In order to include routes advertised via OSPF in the NetScaler routing table, we need to instruct the underlying routing layer (ZebOS) to make these routes available to the NetScaler kernel. Review the three routing tables below and then continue with the next step.

[Diagram: the Vyatta router advertises routes to the cluster nodes; ZebOS learns them and must then install them into the NetScaler kernel routing table.]

NetScaler routing table:

NS10_node2> show route
   Network        Netmask          Gateway/OwnedIP   State   Type
1) 192.168.10.0   255.255.255.0    192.168.10.120    UP      DIRECT
2) 172.16.1.0     255.255.255.0    172.16.1.11       UP      DIRECT
3) 0.0.0.0        0.0.0.0          192.168.10.1      UP      STATIC
4) 127.0.0.0      255.0.0.0        127.0.0.1         UP      PERMANENT

Vyatta router routing table:

vyatta@vyatta:~$ show ip route
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
       I - ISIS, B - BGP, > - selected route, * - FIB route
C>* 127.0.0.0/8 is directly connected, lo
O   172.16.1.0/24 [110/10] is directly connected, eth1, 02:44:30
C>* 172.16.1.0/24 is directly connected, eth1
C>* 172.16.2.0/24 is directly connected, eth0
O>* 192.168.10.18/32 [110/20] via 172.16.1.11, eth1, 02:32:04
  *                           via 172.16.1.13, eth1, 02:32:04
  *                           via 172.16.1.12, eth1, 02:32:04

ZebOS routing table:

NS10_node2# show ip route
Codes: K - kernel, C - connected, S - static, R - RIP, B - BGP
       O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, I - Intranet
       * - candidate default
C     127.0.0.0/8 is directly connected, lo0
C     172.16.1.0/24 is directly connected, vlan0
O E2  172.16.2.0/24 [110/20] via 172.16.1.1, vlan0, 00:05:32
C     192.168.10.0/24 is directly connected, vlan0
K     192.168.10.18/32 via 0.0.0.0
C     192.168.10.140/32 is directly connected, vlan0

Step 24. From the Windows 7 workstation (Win7Client), go back to the cluster IP (CLIP) SSH session and access the ZebOS shell. Enter configuration mode and execute the command provided:

NS10_node1> vtysh
node0# conf t
node0(config)# ns route-install ospf
node0(config)# exit
node0# exit

Step 25. From the same SSH session, back on the NetScaler CLI, display the routing table. The OSPF route should now be installed:

NS10_node2> show route

Step 26. From the remote network Windows 7 workstation (EXT-Win_7), attempt to PING the VIP. Did you receive an ECHO reply?

Step 27. Attempt to access the VIP over HTTP. On the same workstation, open a browser and navigate to the VIP. Can you access the default site?

Step 28. Next, we will verify that all nodes in the cluster are receiving traffic for the striped VIP. From the Windows 7 workstation (Win7Client), SSH into the NetScaler IP of each node and start a TCPDUMP to capture packets coming from the remote workstation's IP and destined to TCP port 80. Use the filter provided:

> shell
# nstcpdump.sh host 172.16.2.2 and tcp port 80

Step 29. From the remote Windows 7 workstation (EXT-Win_7), open a browser and connect to the VIP.

Step 30. Go back to the TCPDUMP sessions started in step 28. Are multiple nodes receiving traffic?

Step 31. From the remote network Windows 7 workstation (EXT-Win_7), log in to the phpMyAdmin web application hosted through the VIP. Was the login successful?

http://192.168.10.18/phpmyadmin
U: root  P: Citrix123

Step 32. Go back to the SSH sessions running on the internal Windows 7 workstation (Win7Client). Are multiple nodes receiving traffic?

Step 33. Since we have defined source IP persistence on the HTTP virtual server, all requests from the same workstation should be directed to the same backend server. From Win7Client, go back to the cluster IP (CLIP) SSH session and display the statistics for the HTTP virtual server with the command provided below. Record the number of hits to each of the bound services.

NS10_node2> stat lb vserver HTTP_vserver_cluster

Step 34. On the remote network Windows 7 workstation (EXT-Win_7), navigate through the web app to generate some traffic.

Step 35. Go back to the internal Windows 7 workstation (Win7Client). From the cluster IP (CLIP) SSH session, display the virtual server statistics again and compare the number of hits to each of the bound services. Are both services receiving traffic?

Step 36. To simulate a node failure, force shutdown one of the nodes in the cluster from XenCenter. In our example, we will use node 3.

Step 37. Go back to the Vyatta router console. Display the routing table using the show ip route command. Was the host route via the selected node removed?

NOTE: Remember to exit out of configuration mode, as the show ip route command will not work there. (The prompt should display a $ sign, indicating we are in operational mode and not in configuration mode.)

vyatta@vyatta:~$ show ip route

Step 38. Go back to the remote network Windows 7 workstation (EXT-Win_7) and attempt to access the HTTP VIP. Can you still browse to it?

http://192.168.10.18/phpmyadmin

Step 39. Since the Vyatta router has two more host routes to the VIP, connectivity through the other two nodes is still possible and the VIP should continue serving traffic. In this deployment, the upstream device is responsible for distributing traffic to each node in the cluster based on the L3 information contained in its routing table. This is the end of the exercise.

Step 40. IMPORTANT! This is the last exercise for this lab. Please go back to the Self-paced Learning Lab site and do the following before you go: provide us feedback by selecting the Take web survey link. After completing the survey, click the Logout link towards the bottom right to close out your lab session.

Summary
Key Takeaways

The key takeaways for this exercise are:

A NetScaler cluster can use multiple mechanisms to distribute traffic across all active nodes in the cluster. ECMP provides the best distribution, as each node receives traffic directly from the upstream device. State sharing between all nodes in the cluster is crucial for the cluster to support additional features (persistence, session reuse, etc.).

NOTES

Load balancing persistence is accomplished by shared hashed tables across all cluster nodes. Even though traffic does not flow through the same node for every incoming flow, the same destination server is selected. An Equal Cost Multipath Routing configuration is dependent upon the upstream device configuration. It relies on dynamic routing protocols and support on other network devices to distribute traffic to all cluster nodes homogeneously.

Please complete this survey

We value your feedback! Please take a moment to let us know about your self-paced lab experience by completing a brief survey on the web portal before you logout.


Revision History

Revision: 1.0
Change Description: Original Version
Updated By: WW Technical Readiness
Date: October 2012

About Citrix Citrix Systems, Inc. designs, develops and markets technology solutions that enable information technology (IT) services. The Enterprise division and the Online Services division constitute its two segments. Its revenues are derived from sales of Enterprise division products, which include its Desktop Solutions, Datacenter and Cloud Solutions, Cloud-based Data Solutions and related technical services, and from its Online Services division's Web collaboration, remote access and support services. It markets and licenses its products directly to enterprise customers, over the Web, and through systems integrators (SIs), in addition to indirectly through value-added resellers (VARs), value-added distributors (VADs) and original equipment manufacturers (OEMs). In July 2012, the Company acquired Bytemobile, a provider of data and video optimization solutions for mobile network operators. http://www.citrix.com

© 2012 Citrix Systems, Inc. All rights reserved.

